Imagination in AI Summit Highlights - April 15 at MIT
How technologists and futurists are applying AI on the frontlines of business, science, and society.
Juan Enriquez opens with a philosophical and historical perspective on AI, arguing that today's developments are evolutionary rather than revolutionary. He traces AI's origins to industrial-era punch cards and highlights how society has long grappled with machines as "intelligent" beings. He frames our current AI moment as part of a long arc, starting from Babbage and Turing, only now turbocharged by computation and compounding data. Modern LLMs and chatbots feel new but are built on decades-old concepts. Yet their capabilities, like empathetic conversations or passing medical exams, are reshaping human expectations and interactions.
He emphasizes how AI is now ubiquitous but invisible: embedded in search engines, phones, and daily tools. The key shift is how AI systems are learning in real-time and beginning to evolve autonomously. He compares this to biological evolution, describing today’s AI landscape as an evolutionary tree with divergent, unpredictable branches. Enriquez warns of the challenge in understanding or benchmarking these systems, especially as they begin to outperform humans across disciplines.
David Kenny follows by emphasizing how society, especially Gen Z, is already trusting AI more than humans in medicine, finance, and relationships. He argues that while technological advancements have progressed rapidly, human readiness has lagged. His core message is that society must develop better critical thinking and interpersonal skills to coexist with and leverage AI meaningfully.
Explain it like I'm 5:
AI used to be slow and simple, but now it’s like a super-fast student that learns from everything and helps people, sometimes better than people do, but we still need to know how to ask it smart questions.
This session centered on building “scientific superintelligence”: AI that can surpass humans in scientific reasoning, hypothesis generation, and experimentation. Geoff von Maltzahn defines it as pushing science beyond human limits by automating every step of the scientific method, from modeling and hypothesis to experiment design and real-world testing. His company, Lila.AI, is building autonomous science systems to unlock rapid discoveries across materials science, life sciences, and chemistry.
Carina Hong explains Axiom’s focus on “quantitative superintelligence” for mathematics. Today’s LLMs perform well on numeric answers but fail at step-by-step reasoning and proofs. Axiom is designing AI systems that use formal programming languages to simulate the logic of mathematical reasoning, enabling trust, verification, and use in high-stakes domains.
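To make the idea of formal languages for mathematical reasoning concrete, here is a minimal sketch in Lean 4, the kind of proof assistant such systems target (the example is ours, not Axiom's): a proof only compiles if every step is justified, so the result is machine-verifiable rather than taken on faith.

```lean
-- Each step below is machine-checked: if the reasoning were wrong,
-- this file simply would not compile.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```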
Riccardo Sabatini adds that LLMs are biased toward consensus and averages (what he calls the “Gaussian trap”), which risks suppressing novelty and real insight. True scientific discovery often lies in the statistical outliers, not the middle. His goal is to teach machines not just to regurgitate known information but to evaluate and generate surprising, creative, and useful results.
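A toy numerical sketch (ours, not Sabatini's) of why consensus-seeking hides discovery: score a thousand "consensus" hypotheses plus one outlier, and the mean erases exactly the point that matters.

```python
import numpy as np

rng = np.random.default_rng(0)
# 1,000 consensus hypotheses cluster near zero; one genuinely novel
# result sits far out in the tail of the distribution.
scores = np.append(rng.normal(0.0, 1.0, size=1000), 6.0)

# A consensus-seeking summary makes the outlier invisible...
print(f"mean score: {scores.mean():.2f}")  # ~0.0, the novelty is erased

# ...while an outlier-aware criterion is what surfaces it.
z = np.abs((scores - scores.mean()) / scores.std())
print(f"most surprising score: {scores[z.argmax()]:.1f}")  # 6.0
```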
They discuss how this shift may allow scientific discovery to happen 24/7, across disciplines, using robotic labs and automated experiments. They believe this will radically accelerate innovation in areas like drug design, climate materials, and cryptographic security.
Explain it like I'm 5:
They're building really smart robot scientists that can do math, test ideas, and make discoveries faster than people, even while we sleep.
Open-source generative AI is challenging the dominance of closed, proprietary systems and enabling broader participation in the AI revolution. Moderated by John Werner, the session began with remarks from Yvonne Hao, who framed AI as a pivotal tool in solving social and economic challenges - emphasizing Massachusetts’ commitment to open, values-driven AI development.
Alvin Graylin and Bob Young reflected on how AI parallels past tech revolutions. Alvin stressed that AI systems, while not human, may evolve to become better-informed leaders if aligned with broad societal data. He argued that what matters is not AI’s sentience but its usefulness and alignment with human values. Bob, invoking Red Hat’s open-source legacy, emphasized that proprietary models restrict innovation and transparency, whereas open-source AI democratizes knowledge and security.
Karl Zhao provided evidence that open-source models can compete with, and sometimes outperform, closed models. DeepSeek’s R1 and V3 models match proprietary systems at a fraction of the cost - by some estimates up to 500x more cost-efficient - enabling anyone, even universities or individuals, to run powerful LLMs on local machines. This signals a shift from elite labs to a globally distributed innovation model.
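As a sense of what "running a powerful LLM on a local machine" looks like in practice, here is a minimal sketch using the Hugging Face transformers library with a small distilled open-weight model (the model ID and prompt are illustrative, not from the panel):

```python
# Requires: pip install transformers torch
from transformers import pipeline

# A distilled 1.5B-parameter variant is small enough for a single
# consumer GPU or even a CPU; larger open models scale the same way.
pipe = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
)
out = pipe("Why does open-sourcing model weights matter?", max_new_tokens=64)
print(out[0]["generated_text"])
```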
The panel debated AI’s ethical implications, job displacement, and whether AI should use personal pronouns like “I.” They concluded that transparency and pluralism, not control by a few labs, are essential for aligning AI systems with global cultural and moral variation. The panel agreed that while AI will displace jobs, it will also create new ones, if supported by thoughtful governance and open participation.
Explain it like I'm 5:
Smart computer brains used to be locked in big secret labs, but now people around the world can build and share their own smart helpers to solve problems together.
This panel discussed how major financial firms are adopting AI agents and integrating them into highly regulated, high-stakes environments. The discussion focused on practical use cases, challenges, governance, vendor navigation, and how firms are balancing experimentation with compliance.
Prudential’s Robert Sala described live deployments of GenAI for document summarization in legal and asset management, content co-generation for marketing, and chatbots in customer service. Regulatory constraints, especially around explainability, remain the main limit on further expansion.
Fidelity’s Lisa Huang explained how GenAI has shifted AI from prediction-only tools to a new computing paradigm that augments human alpha rather than replacing it. Fidelity’s innovation efforts focus on finding product-market fit using scientific, iterative experimentation. She emphasized that data ownership and use-case alignment are more important than owning model architecture.
WEX’s Karen Stroup discussed a structured approach: aligning with executive teams around bold multi-year goals, balancing short- and long-term bets, and measuring both customer impact and P&L. Small-scale experiments, such as using AI to triage email, generate insight and momentum for scaled adoption.
John Hancock’s Jieyu Fan stressed educating leadership, ensuring decisions are joint and grounded in business priorities, and being tech-agnostic. The emphasis is on minimizing lock-in by designing systems to be swappable every 24 months.
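A minimal sketch of that "swappable by design" idea (our illustration; the class and method names are hypothetical): product code depends on one narrow interface, so the vendor behind it can be replaced without touching any callers.

```python
from typing import Protocol


class TextModel(Protocol):
    """The only surface product code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...


class VendorAModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor A] response to: {prompt}"


class VendorBModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor B] response to: {prompt}"


def summarize(document: str, model: TextModel) -> str:
    # Business logic never imports a vendor SDK directly.
    return model.complete(f"Summarize: {document}")


print(summarize("Q3 filing...", VendorAModel()))
print(summarize("Q3 filing...", VendorBModel()))  # swapped with no caller changes
```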
The panel agreed that while vendor overload is real, the risk lies in not experimenting. Startups must demonstrate ROI, minimize switching risk, and prioritize data privacy. On-prem or secure deployments are increasingly expected.
Explain it like I'm 5:
Big companies are teaching smart robots to help people do their jobs better, but they’re being super careful to make sure the robots are safe, fair, and don’t mess things up.
In this short but high-energy session, Professor Ramesh Raskar introduced Nanda, an open-source architecture for what he calls the "Internet of AI Agents." Drawing parallels to the shift from personal computing to the internet and then to the World Wide Web, Raskar argued that we're entering a new phase - not just about AI models, but about AI agents that interact with each other on our behalf.
Unlike today’s isolated AI tools (like individual chatbots or voice assistants), future decentralized agents will discover, negotiate, transact, and coordinate tasks across domains - not in silos, but as a federated network. Whether it’s planning a birthday party or managing complex health decisions, these agents will interact autonomously, leveraging a system of privacy-respecting protocols, reputation mechanisms, and real-time pricing models for services.
Nanda (short for Network Agents in a Decentralized Architecture) enables this ecosystem, offering primitives similar to the original internet: registries, authentication, agent identity (KYA: Know Your Agent), and pricing/logging via cryptographic mechanisms. It is already being piloted by 15 top AI universities and multiple startups.
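To give a flavor of those primitives, here is a purely illustrative registry record (our sketch; the field names are hypothetical, not Nanda's actual schema) combining agent identity, capabilities, reputation, and pricing:

```python
import hashlib
import json

# Hypothetical registry entry for one agent on an "Internet of AI Agents".
agent_record = {
    "agent_id": hashlib.sha256(b"party-planner-v1").hexdigest()[:16],
    "owner": "did:example:alice",             # identity anchor for KYA
    "capabilities": ["plan_event", "book_venue", "negotiate_price"],
    "price_per_call_usd": 0.002,              # real-time service pricing
    "reputation": 4.8,                        # aggregated peer ratings
}

print(json.dumps(agent_record, indent=2))
```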
Raskar emphasized user agency, open architecture, and the importance of building outside the walled gardens controlled by tech giants. He predicted a near future in which each individual has a team of AI agents - as ubiquitous as apps today, but smarter and more collaborative - that can be trained, upgraded, and set to work together in service of our goals.
Explain it like I'm 5:
Smart robots on the internet will talk to each other to do things for you, like plan your party or help with homework, and they’ll work together like a team.
In this fireside-style session, Chase Lochmiller and Nadav Eiron of Crusoe unpacked the convergence of energy infrastructure and artificial intelligence. Crusoe is building massive, vertically integrated AI data centers, placing compute resources near abundant, low-cost clean energy sources like West Texas wind and solar. Their Abilene facility alone will consume 1.2 GW of power, nearly a quarter of the total data center capacity of Northern Virginia.
Chase emphasized that AI’s recent breakthroughs aren't due to radically new algorithms but to the availability of massive data and compute. Crusoe's bet is on energy-first AI infrastructure, placing compute where clean energy is plentiful rather than trying to retrofit energy solutions in traditional locations. The result: cheaper, greener computing, potentially removing the “green premium” from sustainable tech.
On adoption, Chase believes we are in the early innings: businesses are experimenting with pilots and beginning to scale use cases. He expects AI to be a net job creator, boosting productivity and generating new categories of employment, even as older roles shift.
Nadav focused on accessibility. Crusoe’s vision extends beyond model performance to usability for non-technical business users. He compared Crusoe’s approach to Amazon’s use of the internet - applying tech to transform a traditional domain. He stressed making AI deployment practical, not just pushing bleeding-edge research.
The session ended with advice for founders: Chase highlighted the value of vision, humility, and building a strong support network, while Nadav contrasted Google’s long-cycle big problem solving with startups’ agility in solving fast, focused challenges.
Explain it like I'm 5:
Smart computers need a lot of electricity to work, so these people are building giant computer homes where the wind and sun make power, and they're helping more people use those smart tools.
Sridhar Ramaswamy reflects on his career from building large-scale machine learning systems at Google to founding Neeva, a private search engine, and now leading Snowflake, a cloud data company rapidly expanding into AI. Ramaswamy discusses how Snowflake’s early innovation of decoupling storage and compute created a highly scalable and user-friendly platform that is now essential for AI-driven enterprises.
He explains that AI has transformed Snowflake from a data warehouse into a launchpad for intelligent, real-time applications, such as chatbots, document analysis, and agent-based workflows. Snowflake’s key AI principles are: make it easy to use, efficient, and trustworthy. Rather than chasing hype or locking customers into rigid commitments, Snowflake focuses on low-risk, high-impact deployments, often starting with simple but valuable applications like converting PDFs into structured data.
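As a sketch of that "simple but valuable" starting point, here is a minimal PDF-to-rows pipeline (our example using the pypdf library, not Snowflake's actual implementation; in production the stubbed step would be an LLM call mapping the text onto a fixed schema):

```python
# Requires: pip install pypdf
from pypdf import PdfReader


def pdf_to_rows(path: str) -> list[dict]:
    """Extract text from a PDF and emit structured rows."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    # Stub: a real pipeline would ask an LLM to map `text` onto a fixed
    # schema (e.g. invoice number, date, line items).
    return [
        {"line_no": i, "content": line}
        for i, line in enumerate(text.splitlines())
        if line.strip()
    ]
```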
On the AI model ecosystem, Ramaswamy favors a partnership-first approach with Anthropic, OpenAI, and Mistral. He notes the open-source movement’s impact on democratizing model access, but acknowledges the immense costs of training frontier models.
He warns against “checkbox AI”, or corporate initiatives driven by budget, not value. Instead, Snowflake encourages iterative experimentation and organic growth. On the product side, he highlights the company’s native app sandboxing, built-in marketplace, and secure collaboration tools that let startups thrive on its platform.
Personally, Ramaswamy is a heavy AI user, applying ChatGPT and Anthropic’s Claude for creative writing, data cleanup, itinerary parsing, memory aids, and even home construction research. He describes this era as “magical,” full of tools that supercharge human creativity and productivity.
Explain it like I'm 5:
A really smart person from a company that stores lots of important data said we can now use super-smart robots to help us understand, search, and use that data better - and even help plan things like vacations or do homework faster!
The panel on AI safety explored the urgent need to embed ethical, legal, and civic safeguards into AI development and deployment. Moderator Alison Sander introduced a high-profile panel comprising experts from policy, civil rights, technology, and academia.
Jamie Metzl emphasized that all AI development occurs within a political context. Without global cooperation and governance, AI safety risks becoming siloed and unsolvable, like other global threats such as climate change. He argued that safety must be addressed systemically and globally, but warned that a pause in one country would not stop progress elsewhere.
Albert Cahn took a more localized view, warning that AI is already harming people today. He shared real-world examples: facial recognition leading to false arrests and welfare fraud detection systems wrongly prosecuting thousands. He stressed that existing laws, though under-enforced, already provide tools for accountability and that local and state-level action is where real change can start.
Noelle Russell shared her journey from Amazon to Microsoft and described the risks of deploying “baby tiger” models, or powerful but poorly understood AI systems. She urged builders to think long-term and advocated for making responsible AI a core mindset, not a compliance checklist.
Cam Kerry highlighted the need for rigorous, ongoing measurement throughout AI lifecycles. He supported a “network of networks” model for AI governance, where localized efforts and international cooperation build mutual resilience. He also criticized the politicization of safety conversations and emphasized the importance of research, transparency, and international engagement.
Overall, the panel agreed that AI safety is not about slowing innovation; it’s about aligning technological power with human values and legal safeguards now, before harm scales further.
Explain it like I'm 5:
AI is like a really smart robot, and we need to teach it good manners now so it doesn't accidentally hurt anyone later.
This panel explores the evolving relationship between artificial intelligence and human creativity across domains like education, XR (extended reality), game design, sound, product development, and cognitive health. The discussion embraces AI not as a standalone creator but as a co-creative agent that expands human imagination and redefines how we interact with technology.
Joanna Peña-Bickley highlights the underexplored frontier of sound and voice as creative interfaces, proposing audio as a tool for social bonding, cognitive diagnostics, and even treating dementia. Rus Gant introduces the concept of "positive hallucinations" - AI's ability to produce novel, often unexplainable content - as a useful method of breaking cultural and cognitive boundaries. His work with XR and generative 3D content creation demonstrates how AI can now produce immersive experiences in real-time that once took years.
Konstantina Yaneva’s company creates games that are both fun and diagnostic, enabling players to learn about themselves while engaging with adaptive AI systems. Armen Mkrtchyan adds a systems perspective, likening AI innovation to nature’s evolutionary processes, emphasizing a future shaped by “co-intelligence” among humans, machines, and biological systems.
The panel critiques outdated modes of learning (e.g., essays as benchmarks of intelligence) and advocates for more human-centered, sensory-rich, and adaptive tools, especially for younger generations. Across the board, speakers argue that AI should not mimic humans but collaborate with them in previously unimaginable ways, opening new doors for creativity, empathy, and health.
Explain it like I'm 5:
AI is like a super helpful imaginary friend that can draw, sing, talk, and build worlds with you so that you both make cool things together.
In this fast-paced panel, executives from Deloitte, WEX, Nissan, Tomorrow.io, and AlgoVerde.ai discussed how AI is tangibly reshaping modern businesses beyond just hype. Karen Stroup (WEX) emphasized the importance of grassroots AI adoption through "AI Accelerators," a company-wide program that equips employees with tools and coaching to redesign their workflows. Dan Slagen (Tomorrow.io) shared how his weather tech company evolved from SaaS to launching satellites to power AI-driven forecasts, with marketing becoming the internal evangelist for AI use across departments.
Robert Farmer (Nissan) and Vladimir Jacimovic (AlgoVerde.ai) described how synthetic personas are being used to test customer reactions at scale, creating faster insights. But the real challenge lies in “synchronizing clocks”: AI delivers in seconds, teams move in days, and enterprise systems still think in quarters and years. The hardest part isn’t AI's capabilities, it’s getting people to trust and act on it.
Panelists agreed that "trust" is the new battleground. That includes trust in AI agents, synthetic customers, and AI-generated creative output. Human-in-the-loop models are the current safety net, helping AI earn credibility before it can act fully independently.
The conversation closed with a call to action: don’t just optimize workflows; disrupt your own business before someone else does.
Explain it like I'm 5:
Companies are using robot brains to help with work, but people are still figuring out how to trust them and work together like a team.
This panel explores how physical and agentic AI will reshape work, warfare, and corporate operations. Daniela Rus begins by distinguishing digital AI from physical AI (machines that interact with the real world), and defines agentic AI as systems capable of understanding goals and autonomously executing multi-step actions.
Col. Tucker Hamilton shares that in 2023, an AI agent flew a high-performance uncrewed military aircraft for the first time, highlighting progress in physical AI. However, he cautions that military systems remain years away from safe, autonomous deployment due to gaps in edge computing, sensor fusion, and sim-to-real translation.
Jonas Diezun describes Beam AI’s work automating white-collar tasks like KYC (know-your-customer) processing, arguing that many routine digital jobs are already automatable today. His team blends symbolic reasoning with LLMs to improve reliability.
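A hedged sketch of what blending symbolic rules with an LLM can look like for a KYC-style task (our illustration, not Beam AI's code; the names are hypothetical): deterministic, auditable checks run first, and the model only judges what survives them.

```python
from typing import Callable


def kyc_review(applicant: dict, llm_assess: Callable[[str], str]) -> str:
    # Symbolic layer: hard rules that are cheap, deterministic, auditable.
    if applicant["age"] < 18:
        return "reject: underage"
    if not applicant.get("id_document"):
        return "reject: missing identity document"
    # LLM layer: fuzzy judgment only where the rules cannot decide.
    return llm_assess(
        f"Flag inconsistencies in this applicant profile: {applicant['notes']}"
    )
```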
Emrecan Dogan emphasizes that true productivity comes from AI agents that personalize to individuals’ workflows. Glean’s product learns from internal communications to act like a chief-of-staff, responding to emails or prioritizing tasks in a user-specific way, not just with statistical “averages.”
Rich Nanda notes that Fortune 1000 companies are much slower than the tech itself, but predicts faster AI adoption compared to prior innovations like cloud computing. He stresses that humans must stay in the loop, especially in complex decision-making.
Consensus emerges that agentic digital AI is close, with 70–90% of routine office work soon to be automatable. Physical AI is further behind due to challenges in autonomy, safety, and real-world unpredictability. Still, timelines are compressing: what once seemed 5 years off may be 6 months away.
Explain it like I'm 5:
We’re teaching smart robots to do people’s jobs and even fly planes, but they still need help understanding the world and doing tricky stuff without messing up.
This panel, featuring Dave Blundin, Mark Gorenberg, James Currier, Rudina Seseri, and Mark Machin, offered a candid and multifaceted conversation among venture capitalists and technologists about the current state of AI, focusing on open-source models, investment implications, and societal impacts. The session opened with reflections on AI as a transformative moment, comparing it to historical inflection points like the American Revolution. Massachusetts’ proactive state-level AI task force was highlighted as a model of values-driven innovation rather than fear-based regulation.
The core discussion revolved around AI as both a powerful tool and a cultural shift, with opinions diverging on whether models should be personified ("AI says 'I'") or kept strictly mechanical. Panelists debated the philosophical dimensions of AI consciousness, referencing moral relativism and the technical unknowability of whether machines might one day be sentient.
The rise of open-source AI was praised as a democratizing force, especially as models like DeepSeek’s can rival proprietary models at a fraction of the cost, putting powerful tools in the hands of universities and small businesses. Participants expressed concern that the centralization of AI in the hands of a few labs would be a cultural and strategic mistake, both ethically and practically.
The discussion closed by grappling with AI’s economic consequences. While some argued that automation boosts productivity and wealth, others urged thoughtful policy intervention and warned of job displacement at unprecedented scales, particularly in white-collar work. Despite philosophical concerns, panelists were optimistic about AI’s potential if guided with transparency, collaboration, and moral awareness.
Explain it like I'm 5:
AI is like a super-smart robot that helps people do things faster, but we have to share it nicely and make sure it plays fair.
Anshul Ramachandran of Windsurf discusses how AI is transforming software development, both in terms of who builds software and how it's built. Windsurf positions itself as an infrastructure-first company using AI to supercharge every part of the software development lifecycle. Their goal is not to eliminate developers but to multiply their impact, aspiring to a 99% reduction in time-to-software. The company was co-founded by a tight-knit team with engineering backgrounds and long personal histories, some going back to kindergarten.
Anshul critiques the notion that AI will kill coding. Instead, he argues that AI will shift the developer's role toward problem-solving, architecture, and business logic, while automating rote coding tasks. This shift could democratize development: PMs and non-engineers can build prototypes, reducing friction between vision and implementation.
Windsurf embraces the term "vibe coding," meaning rapid, fluid development with low activation energy - akin to going from idea to prototype almost instantly. The company has seen massive adoption in just four months, gaining unique behavioral data on how developers interact with AI-driven coding tools.
In terms of industry disruption, Windsurf sees IT services firms and simple SaaS platforms as most vulnerable. Their strategy focuses on building tools with real production trustworthiness, not just code generation demos. They've built proprietary infrastructure and training pipelines to handle enterprise-scale deployments and maintain control over context-aware execution.
The rebrand from Codeium to Windsurf reflects a shift toward harmonized human-AI collaboration that feels effortless but is deeply powerful under the surface.
Explain it like I'm 5:
AI is helping more people make computer programs faster and easier, kind of like using Legos with instructions that build themselves.
Francis Pedraza’s keynote tackles a modern adaptation of The Innovator’s Dilemma in the AI era. Drawing from his experience founding Invisible Technologies, Pedraza explains how his startup evolved from barely surviving to scaling past $100M in revenue, only to face a deeper existential issue: growth was narrowing their original vision. By focusing solely on large enterprise clients, Invisible was abandoning smaller organizations and individuals, the very audience the company was founded to empower.
This challenge led Pedraza to restructure Invisible into a holding company and spin out Infinity Constellation, a new “AI process platform studio” capable of launching zero-to-one ventures with aligned incentives. He contrasts this structure with Elon Musk’s companies, which he argues are separately funded and therefore misaligned, and compares it favorably to the PayPal Mafia's post-exit collaboration.
The talk is deeply philosophical, invoking Nietzsche, Da Vinci, and even the burning of the Library of Alexandria, but it’s grounded in a practical insight: large-scale innovation requires not only great technology, but new organizational design and incentive models. Pedraza positions Infinity as the scalable solution to build many impactful AI-native companies without losing sight of individual empowerment or systemic reinvention.
The ultimate goal? Build a suite of interoperable AI companies that can empower a one-person billion-dollar business and use this architecture to unlock a new renaissance of human creativity.
Explain it like I'm 5:
He built a robot helper, but it only helped big companies. So he made a team of robot builders to help everyone, even small kids with big ideas.
This panel explores the seismic shifts coming to the labor economy as AI agents begin to permeate the workplace. Kicking off the discussion, Ayush Chopra introduces Project Iceberg, which uses simulations of millions of AI agents to understand how labor automation will affect 163 million jobs across 923 U.S. occupations. The tool tracks how AI spreads through industries, not just obvious sectors like software or content creation, but also less expected roles like legal assistants and logistics - identifying both risk and opportunity.
The panel explores concerns around power concentration. Michael Casey raises red flags about the centralization of AI agent ownership by a handful of companies, warning that without decentralized infrastructure and human-centered data models, society may slip into digital feudalism. He advocates for a future where people negotiate with AI systems on their own terms, emphasizing data sovereignty.
Dave Blundin provides a venture capital lens, pointing out how fast change is arriving: white-collar jobs are being transformed, yet institutions (governments, schools) remain largely unprepared. He highlights the absurd contrast between new 21-year-old billionaires and the vast number of professionals whose jobs are at risk within 18–24 months.
The conversation turns to governance and orchestration. The goal, panelists argue, is to design a consensus-based digital labor economy where humans and AI agents co-evolve, with transparent systems to guide outcomes. Governments and enterprises alike must recognize the inevitability of AI agents and proactively shape their integration.
Ultimately, the future of work demands tools like Iceberg not only to measure disruption, but to steer it toward equity, decentralization, and strategic upskilling.
Explain it like I'm 5:
Lots of robot helpers are coming to work with people, and if we don’t plan carefully, only a few people might get to be the boss of all the robot helpers.
This panel reviewed cyber resilience in an AI-powered world. Moderator Alison Sander framed the session with statistics and quick audience polls, underscoring how ubiquitous cyber risk has become, even among those who don’t work in the field.
FBI agent Doug Domin confirmed the staggering scope of cybercrime, reporting $12.5 billion in self-reported losses to the FBI’s IC3 system in the past year. Karen Nershi highlighted the increasing professionalization of cybercriminals, citing ransomware gangs like Conti that operate like structured corporations, complete with HR functions and recruiting tactics. Kip Boyle emphasized that even companies that don't think of themselves as tech companies - fruit distributors, for example - are now wholly reliant on their digital infrastructure and often unknowingly vulnerable.
Keri Pearlson and others emphasized how the adoption of AI by criminals has raised the stakes. Cyber attackers are using AI to scale phishing, automate deepfakes, and accelerate attack cycles. The panel agreed: AI has dramatically increased the speed and sophistication of cyber threats, while defenders struggle to keep up.
On the solution side, the discussion leaned heavily into the concept of resilience: not just being hard to hack, but fast to recover. That means embracing a culture of cybersecurity, user awareness, collaboration with law enforcement, and embedding security into AI systems from the start.
The panel ended with war stories and “WTF moments” that ranged from North Korea stealing $1.5B in crypto to elderly victims nearly losing their savings. The closing message: every organization is a potential target, and every person has a role in resilience.
Explain it like I'm 5:
Bad guys on computers are getting super smart with robot helpers, so we all have to learn to lock our digital doors fast and help each other if someone gets hacked.
In this fireside conversation at the IIA AI Summit, MIT CSAIL Director Daniela Rus and SandboxAQ CEO Jack Hidary discuss the next wave of AI, one that moves beyond language models and into quantitative, physics-based applications. Hidary outlines SandboxAQ’s journey from an Alphabet moonshot to a nearly $1B-funded enterprise with a focus on AI for scientific and medical breakthroughs.
Their thesis: while large language models (LLMs) have become commoditized, the real frontier lies in combining neural networks with fundamental physics. SandboxAQ is leading this charge, applying AI to model molecules, simulate chemical interactions, and develop new materials - all at the atomic and electron level. This enables faster drug discovery, better materials for aerospace, and innovations like quantum sensors that can diagnose cardiac issues via magnetic fields rather than EKGs.
Hidary emphasizes the contrarian approach of not training AI on internet data but instead on physics equations. This methodology has expanded from biopharma to domains like navigation, where SandboxAQ uses the Earth’s magnetic field as an un-jammable GPS alternative. Rus emphasizes the importance of academic collaboration, highlighting SandboxAQ’s residency program that integrates PhDs into real-world engineering work.
Together, Rus and Hidary envision a synergistic future where LLMs and physics-based large quantitative models (LQMs) coexist to transform how we understand matter, health, and even how we navigate the world.
Explain it like I'm 5:
Smart people are using super-powered math and science to help computers find new medicines, sense your heartbeat without touching you, and figure out where you are without using Google Maps.