
Theory of Mind Isn’t Just for Humans Anymore: How to Stop AI from Writing Useless IDFs

How Theory of Mind prompting turns generic AI into an IP-aware collaborator, helping legal teams create clear, patent-ready invention disclosures faster.


AI Can’t Read Minds, But It Can Simulate Them (If You Prompt It Right)

When your AI-generated invention disclosure feels vague or legally useless, you're most likely using a generic LLM like ChatGPT, Copilot, or Gemini. These models aren't trained to recognize novelty or apply legal structure, so even well-written outputs can miss what matters.

But the problem isn't just the model; it's also how you're prompting it.

If you ask a generic AI to “describe your invention,” don’t be surprised when you get fluff or hallucinations. Without domain-specific cues and legal guardrails, the result is usually more noise than signal.

The goal is to eliminate the guesswork and gaps that the AI might hallucinate to fill. The trick is to prompt the AI as if nothing is obvious, an approach rooted in Theory of Mind (ToM). It's not quite as easy as saying "act like a patent attorney," but that's a start.
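To make this concrete, here is a minimal sketch of what a ToM-style prompt might look like in code. The wrapper function, field names, and exact wording are hypothetical illustrations of the "assume nothing is obvious" principle, not Tangify's actual prompts:

```python
# Hypothetical sketch: wrapping raw inventor notes in role- and
# audience-aware framing, so the model can't fall back on guesswork.

def build_disclosure_prompt(invention_notes: str) -> str:
    """Frame inventor notes for an audience that assumes nothing."""
    return (
        "You are a patent attorney preparing an invention disclosure.\n"
        "Your reader is a patent examiner with NO background in this "
        "domain: assume nothing is obvious, spell out every acronym, "
        "and flag any step the inventor treats as common knowledge.\n\n"
        "Inventor's notes:\n"
        f"{invention_notes}\n\n"
        "Before drafting, list the assumptions and gaps you detect."
    )

prompt = build_disclosure_prompt(
    "Our sensor fuses lidar and radar returns with a custom EKF."
)
print(prompt)
```

Compare this to a bare "describe your invention" prompt: the framing alone forces the model to surface assumptions instead of papering over them.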

What is Theory of Mind in AI and Why Should Legal Teams Care?

Theory of Mind (ToM) is the cognitive ability to track what someone else knows, believes, and intends. It’s how humans anticipate confusion, fill in gaps, and tailor their communication to the listener.

In patent work, that means knowing:

  • What the inventor assumes is obvious
  • What legal counsel needs spelled out
  • What the patent office examiner won’t infer


LLMs don't come with real ToM. But newer models (like GPT-4) can approximate it if you design prompts that force them to simulate audience awareness and reasoning.

Performance on false-belief tasks, a key measure of ToM, has improved with each GPT release: GPT-3.5 performed at the level of a typical 3-year-old, while GPT-4 reached the ToM reasoning of a 6-year-old. ToM-like reasoning is slowly improving in generic LLMs, but these measures should still be taken with a grain of salt. What a 6-year-old assumes and infers is very different from what a patent attorney or IP specialist does.

Where Most IP Prompts Go Wrong

Too often, people treat LLMs like text generators, not reasoning engines. They ask for “a summary” or “a draft disclosure” with zero context on:

  • The target reader (IP counsel, examiner, etc.)
  • The knowledge gaps to fill
  • The risk of assumed knowledge in technical documents


The result? Disclosures that are long, off-point, and missing the technical differentiation that matters.

How Tangify Builds ToM Into Prompt Engineering

At Tangify, we force the AI to “think about thinking” at every step. This isn’t just chain-of-thought prompting. It’s role-specific, audience-specific, context-aware reasoning.

Here’s how:

  1. Perspective Shifting: The AI is prompted to reframe the invention from multiple viewpoints, first as the engineer, then as the IP specialist, then as the examiner with no background knowledge.
  2. Gap Identification: The system proactively flags assumptions the inventor might be making (e.g., missing prior art comparisons, unexplained acronyms).
  3. Audience Simulation: Prompts explicitly ask the AI to write for someone unfamiliar with the invention domain, mirroring how attorneys and examiners actually engage with disclosures.
  4. Multi-turn Reasoning: Tangify doesn’t let the AI stop after one pass. It pushes the model to analyze its own output, check for legal adequacy, and suggest follow-up clarifications for the inventor.
  5. Source Identification: Tangify avoids hallucinations by citing the exact documents and locations its outputs are based on, so legal teams can trace every suggestion back to its source.
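The multi-pass flow described above can be sketched as a simple pipeline, where each stage's output feeds the next. The stage names, instructions, and `call_llm` placeholder below are illustrative assumptions, not Tangify's actual implementation:

```python
# Illustrative skeleton of a multi-pass ToM prompting workflow.
# `call_llm` stands in for whatever model client you use; all stage
# wording here is hypothetical.
from typing import Callable

STAGES = [
    ("perspective_shift",
     "Reframe the disclosure from three viewpoints: the engineer, the "
     "IP specialist, and an examiner with no background knowledge."),
    ("gap_identification",
     "List every assumption, unexplained acronym, and missing "
     "prior-art comparison in the draft."),
    ("audience_simulation",
     "Revise the draft for a reader unfamiliar with the domain."),
    ("self_review",
     "Critique the draft for legal adequacy and list follow-up "
     "questions for the inventor."),
]

def run_pipeline(draft: str, call_llm: Callable[[str], str]) -> str:
    """Push a draft through each reasoning stage, feeding output forward."""
    for name, instruction in STAGES:
        draft = call_llm(f"[{name}] {instruction}\n\nDRAFT:\n{draft}")
    return draft

# Demo with a stub model (echoes its prompt) just to show the flow:
result = run_pipeline("Initial inventor notes.", lambda p: p)
```

The key design choice is that the model never stops after one pass: each stage's output becomes the next stage's input, mirroring step 4 above.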


Why This Matters for IP Teams

Theory of Mind simulation in AI isn’t just an academic curiosity. It’s the difference between an invention disclosure that triggers more follow-up meetings... and one that gets filed faster with fewer billable hours.

Tangify doesn’t rely on the AI to “just know” what’s important. We build structured prompts and workflows that guide the AI to surface the right details with legal-grade precision.

If you're not engineering your prompts with ToM principles in mind, you're leaving both clarity and money on the table.

From Vague to Visible Invention Disclosures

Generic AI can’t read minds, but with the right prompts, it can move invention disclosures from vague to viable. When you prompt with ToM, you stop relying on the model to guess. You guide it to ask better questions, surface unstated assumptions, and reframe the invention in terms your legal team and the patent office actually care about.

You need smarter prompts, not just smarter LLMs. Tangify's approach makes LLMs behave less like autocomplete machines and more like IP-aware collaborators. That shift turns your technical documentation into patent-ready insights, fast. And in an environment where speed, clarity, and cost control matter, that's the edge IP teams should be looking for.


Give Tangify a Try

Draft a disclosure you can take action on now, with a limited free trial from Tangify.