When your AI-generated invention disclosure feels vague or legally useless, you’re most likely using a generic LLM like ChatGPT, Copilot, or Gemini. These models aren’t trained to recognize novelty or apply legal structure, so even well-written outputs can miss what matters.
But the problem isn’t just the model; it’s also how you’re prompting it.
If you ask a generic AI to “describe your invention,” don’t be surprised when you get fluff or hallucinations. Without domain-specific cues and legal guardrails, the result is usually more noise than signal.
The goal is to eliminate the guesswork and gaps the AI might otherwise hallucinate to fill. The trick is to prompt the AI as if nothing is obvious, an approach grounded in Theory of Mind (ToM). It’s not quite as easy as saying “act like a patent attorney,” but that’s a start.
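To make that concrete, here is a minimal sketch of a “nothing is obvious” prompt. The role text, wording, and helper name are illustrative assumptions, not an actual patent-grade template:

```python
# Hypothetical "nothing is obvious" prompt skeleton (illustrative only).
SYSTEM_PROMPT = (
    "You are acting as a patent attorney reviewing an invention "
    "disclosure. Assume the reader knows nothing about the field. "
    "For every claim the inventor makes, identify the unstated "
    "assumptions it relies on, and flag anything asserted but not "
    "explained."
)

def build_disclosure_prompt(invention_notes: str) -> str:
    """Wrap raw inventor notes in the role-setting instructions."""
    return f"{SYSTEM_PROMPT}\n\nInventor notes:\n{invention_notes}"
```

Sent as the system message, a framing like this pushes the model to surface gaps instead of papering over them.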
Theory of Mind (ToM) is the cognitive ability to track what someone else knows, believes, and intends. It’s how humans anticipate confusion, fill in gaps, and tailor their communication to the listener.
In patent work, that means knowing what the examiner, the attorney, and the inventor each already understand, and what still has to be spelled out.
LLMs don’t come with real ToM. But newer models (like GPT-4) can approximate it if you design prompts that force them to simulate audience-awareness and reasoning.
Performance on false-belief tasks, a key measure of ToM, has improved with each GPT release: GPT-3.5 performed at the level of a typical 3-year-old, while GPT-4 reached roughly the ToM reasoning of a 6-year-old. ToM-like reasoning is slowly improving in generic LLMs, but these measures should still be taken with a grain of salt. What a 6-year-old assumes and infers is very different from what a patent attorney or IP specialist does.
Too often, people treat LLMs like text generators, not reasoning engines. They ask for “a summary” or “a draft disclosure” with zero context on who will read the output, what it will be used for, or what actually distinguishes the invention.
The result? Disclosures that are long, off-point, and missing the technical differentiation that matters.
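The contrast can be sketched in code; the field names below are illustrative assumptions, not a prescribed schema:

```python
# A context-free prompt vs. one that states audience, purpose, and novelty.
VAGUE_PROMPT = "Write a draft invention disclosure for my invention."

def context_rich_prompt(audience: str, purpose: str,
                        prior_art: str, novelty: str) -> str:
    """Assemble a prompt that gives the model something to reason with."""
    return (
        f"Audience: {audience}\n"
        f"Purpose: {purpose}\n"
        f"Closest prior art: {prior_art}\n"
        f"Claimed novelty: {novelty}\n"
        "Draft a disclosure that emphasizes only what distinguishes "
        "this invention from the prior art above."
    )
```

The vague version leaves every decision to the model; the rich version tells it who is reading, why, and where the technical differentiation lies.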
At Tangify, we force the AI to “think about thinking” at every step. This isn’t just chain-of-thought prompting. It’s role-specific, audience-specific, context-aware reasoning.
In practice, each prompt assigns the model a role, names its audience, and supplies the context it needs before it writes a word.
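As a hedged sketch of what role-, audience-, and context-aware prompting can look like (the roles and step wording here are hypothetical, not Tangify’s actual workflow):

```python
# Illustrative multi-step workflow: each step names whose perspective
# the model should simulate before it writes anything.
STEPS = [
    ("inventor", "Restate the invention in plain language and list every "
                 "assumption an outsider would not share."),
    ("examiner", "Acting as a patent examiner, list the objections this "
                 "description would raise."),
    ("attorney", "Acting as a patent attorney, rewrite the disclosure to "
                 "answer those objections with concrete technical detail."),
]

def build_workflow(invention_notes: str) -> list[str]:
    """Turn one set of notes into a sequence of role-specific prompts."""
    return [
        f"Role: {role}\nTask: {task}\n\nNotes:\n{invention_notes}"
        for role, task in STEPS
    ]
```

Running the steps in sequence, with each step’s output feeding the next, is what makes this more than single-shot chain-of-thought prompting.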
Theory of Mind simulation in AI isn’t just an academic curiosity. It’s the difference between an invention disclosure that triggers more follow-up meetings... and one that gets filed faster with fewer billable hours.
Tangify doesn’t rely on the AI to “just know” what’s important. We build structured prompts and workflows that guide the AI to surface the right details with legal-grade precision.
If you’re not engineering your prompts with ToM principles in mind, you’re leaving clarity (and money) on the table.
Generic AI can’t read minds, but with the right prompts, it can move invention disclosures from vague to viable. When you prompt with ToM, you stop relying on the model to guess. You guide it to ask better questions, surface unstated assumptions, and reframe the invention in terms your legal team and the patent office actually care about.
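One simple way to apply this is a “questions first” prompt that makes the model interrogate the input before drafting anything; the wording below is an illustrative assumption:

```python
# Hypothetical "questions before drafting" prompt: the model must
# interrogate the notes before producing any disclosure text.
QUESTION_FIRST = (
    "Before drafting anything, list the questions a patent examiner "
    "would need answered to judge novelty, and name the assumptions "
    "in the notes below that an examiner would not take for granted."
)

def question_first_prompt(invention_notes: str) -> str:
    """Prepend the interrogation instruction to the inventor's notes."""
    return f"{QUESTION_FIRST}\n\nNotes:\n{invention_notes}"
```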
You need smarter prompts and LLMs. Tangify’s approach makes LLMs behave less like autocomplete machines and more like IP-aware collaborators. That shift turns your technical documentation into patent-ready insights, fast. And in an environment where speed, clarity, and cost control matter, that’s the edge IP teams should be looking for.