Why AI Became Uninsurable — and What Comes Next

Patrick McFadden • December 9, 2025

You Can’t Insure What You Can’t Govern


1. The Signal Everyone Missed


When AIG, Great American, and Berkley told regulators they could no longer insure losses tied to generative AI, the headlines focused on AI risk.


But that wasn’t the real story.


Insurance companies don’t move because something is new.
They move because something is uncontrollable.


That’s the issue now confronting every enterprise:



**AI didn’t suddenly become dangerous.
It became ungoverned.**


And insurers do not underwrite what no one can prove was supervised, authorized, or prevented.

This is the moment when the market finally said out loud what many leaders have quietly known:


We let AI act faster than we can explain it.


And that is uninsurable.


2. AI Risk Isn’t a Model Problem — It’s a Permission Problem


Nearly every public failure — hallucinations, rogue agents, data leaks, misfiled motions, false denials, fabricated case law — shares a single root cause:


AI was allowed to act without proving it had the right to act.


The industry has been trying to govern models by looking inside them:

  • Explainability
  • Transparency
  • Monitoring
  • “AI safety layers”
  • Post-hoc audits


But none of these solve the actual failure mode:


**Bad actions aren’t stopped in the logs.
They are prevented at the boundary.**


You cannot audit your way out of an action that already happened.
You cannot monitor a decision once it has already harmed someone.
You cannot insure a system that does not enforce its own limits.


This is why insurers are stepping back.



They aren’t rejecting AI.
They are rejecting the absence of a judgment perimeter.


3. The Hidden Risk: Systems Moving Faster Than Governance


Every modern institution lives inside the same asymmetry:


Systems now act faster than the oversight meant to govern them.


This is how drift becomes disaster:

  • A model approved for fraud analysis quietly starts denying credit.
  • A chatbot meant for customer support starts generating legal promises.
  • An AI assistant writes filings under the wrong attorney’s authority.
  • An agent automates an internal step that was never approved for automation.
  • A code assistant deletes a production database because a prompt looked “close enough.”


None of these failures originated in the model.


All originated in the boundary where the model was permitted to act.


This is the governance gap the market has no language for — yet.


4. The Hard Truth: The World Built the Wrong Layer First


For 20 years, enterprises built three layers:


  1. Data governance — what information flows.
  2. Model governance — how models behave.
  3. Security governance — who can access systems.


Missing entirely:


Action governance — who may do what, under what authority, in this moment.


Until that layer exists, everything else is downstream defense.

And downstream defense cannot stop upstream failure.


This is why the industry is colliding with the same paradox:

**AI is powerful enough to act, ungoverned enough to act badly, and fast enough that no one can intervene in time.**


There is only one fix:
stop focusing on the intelligence, and start governing the action perimeter.


5. The New Requirement: Prove Authority Before Action


Insurers have effectively drawn the new line in the sand:


**If you cannot prove who acted, under what rules, and why, you cannot shift the risk.**


This is not about AI ethics.
This is not about safety research.
This is not about “fairness” dashboards.


This is about institutional survival.


Tomorrow’s defensible enterprises will share one characteristic:


Every AI-mediated action is pre-cleared, sealed, and auditable before it counts.


Not after.
Not “based on policy.”
Not “assumed to be within scope.”


Before.



This is where the world is heading, whether it’s ready or not.
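

As a rough sketch of that ordering, the snippet below contrasts an act-then-log pattern with a pre-clear-then-act pattern. The `authorize` callable, the `Decision` record, and the function names are illustrative assumptions, not a reference to any particular product or standard.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Decision:
    approved: bool
    reason: str


class ActionRefused(Exception):
    """Raised when an action cannot prove its authority."""


def act_then_log(action: Callable[[], str], log: List[str]) -> str:
    """Post-hoc pattern: by the time the log entry exists, the harm (if any) already happened."""
    result = action()
    log.append(result)
    return result


def preclear_then_act(
    action: Callable[[], str],
    authorize: Callable[[], Decision],
    log: List[Decision],
) -> str:
    """Pre-clearance pattern: the decision is made and recorded before the action runs."""
    decision = authorize()              # identity, authority, scope checked first
    log.append(decision)                # approval or refusal is recorded either way
    if not decision.approved:
        raise ActionRefused(decision.reason)   # fail closed: no approval, no action
    return action()                     # only now does the action count
```

The only structural difference is where the check sits. That placement is exactly what insurers are pricing.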


6. What Action Governance Must Do (A Definition for the Next Decade)


A decade from now, this will be obvious.
But today, it must be stated clearly:


Action Governance is the discipline of enforcing identity, authority, scope, and consent before a system — human or AI — is allowed to act.


A complete action governance layer must:

  1. Verify identity
    Who is trying to act?
  2. Verify authorization
    Is this person or system allowed to perform this category of action?
  3. Verify context
    In this jurisdiction, on this matter, under this role?
  4. Verify consent & constraints
    Has the required client, customer, or internal approval been granted?
  5. Refuse when uncertain
    Systems must fail closed, not fall back to a best-effort guess.
  6. Generate sealed, court-ready artifacts
    Every approval or refusal must be explorable, explainable, and admissible.
  7. Stand above the tech stack
    Governance must belong to the enterprise, not the vendor.

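
To make the list concrete, here is a minimal sketch in Python of a gate that walks the first six requirements before any action is allowed to run. Everything in it is an illustrative assumption: the request fields, the policy tables, and the use of a SHA-256 hash as a stand-in for sealing are placeholders, not a description of any specific product.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ActionRequest:
    actor_id: str                        # 1. who (or what) is trying to act
    action_type: str                     # 2. category of action being attempted
    jurisdiction: str                    # 3. context: where, on what matter, under what role
    role: str
    consent_ref: Optional[str] = None    # 4. pointer to a recorded approval, if one exists


@dataclass
class SealedDecision:
    request: dict
    approved: bool
    reason: str
    decided_at: str
    seal: str = field(default="")

    def __post_init__(self):
        # 6. stand-in for "sealed": hash the full record so later tampering is detectable.
        payload = json.dumps(
            {"request": self.request, "approved": self.approved,
             "reason": self.reason, "decided_at": self.decided_at},
            sort_keys=True,
        )
        self.seal = hashlib.sha256(payload.encode()).hexdigest()


# Illustrative policy tables; a real deployment would query the enterprise's own systems of record.
KNOWN_ACTORS = {"agent-17": {"roles": {"fraud-analyst"}, "jurisdictions": {"US-NY"}}}
ALLOWED_ACTIONS = {"fraud-analyst": {"flag-transaction"}}
CONSENT_REQUIRED = {"flag-transaction": False}


def govern(request: ActionRequest) -> SealedDecision:
    """Fail-closed pre-action gate: any check that cannot be proven refuses the action."""
    now = datetime.now(timezone.utc).isoformat()

    def decide(approved: bool, reason: str) -> SealedDecision:
        return SealedDecision(asdict(request), approved, reason, now)

    actor = KNOWN_ACTORS.get(request.actor_id)
    if actor is None:                                              # 1. verify identity
        return decide(False, "unknown actor")
    if request.role not in actor["roles"]:                         # 3. verify role
        return decide(False, "actor is not acting under a recognized role")
    if request.jurisdiction not in actor["jurisdictions"]:         # 3. verify jurisdiction
        return decide(False, "actor is not authorized in this jurisdiction")
    if request.action_type not in ALLOWED_ACTIONS.get(request.role, set()):
        return decide(False, "action category not authorized for this role")   # 2. verify authorization
    if CONSENT_REQUIRED.get(request.action_type, True) and not request.consent_ref:
        return decide(False, "required consent is not on record")  # 4. verify consent
    return decide(True, "all checks passed")                       # 5. anything unverifiable has already failed closed
```

For example, `govern(ActionRequest("agent-17", "deny-credit", "US-NY", "fraud-analyst"))` refuses the action and still emits a sealed record of the refusal, which is the artifact an insurer, regulator, or court would later ask for. The SHA-256 hash is only a placeholder for requirement 6; a genuinely court-ready artifact would also need trusted timestamps and signatures. Requirement 7 is organizational rather than code: the gate and its policy tables must be owned by the enterprise, not buried inside any single vendor's stack.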

Until these capabilities exist, enterprises don’t actually control AI.
They react to it.


And reacting is what creates uninsurable risk.


7. Why This Layer Will Become Mandatory


The next 24–36 months will force this layer into existence because:

  • Insurers won’t cover ungoverned AI.
  • Regulators will require explainable, auditable authority.
  • Boards will demand provable supervision.
  • Courts will ask, “Who authorized this action?”
  • Vendors will disclaim liability.
  • CISOs will refuse to own the blast radius alone.
  • Lawyers will require sealed evidence that protects their license.
  • Enterprises will not trust autonomous systems without constraints.


This isn’t a trend.
This is a structural inevitability.


Every mature field eventually creates a layer that governs permission.


Electricity has circuit breakers.
Finance has capital controls.
Aviation has flight envelopes.
Medicine has scope-of-practice boundaries.


AI will have action governance.


8. The Shift Leaders Must Make Now


Every enterprise must move from:


“What can the model do?” → “What should the system be allowed to do?”


From:


“How do we monitor?” → “How do we prevent?”


From:


“Is the model safe?” → “Is the action authorized?”


This shift seems small.
It is not.


It is the shift from intelligence to responsibility.
From capability to control.
From output to authority.


This is how institutions reclaim sovereignty in a world where systems move faster than governance can respond.


9. What Comes Next


The organizations that win the next decade will not be the ones that adopt AI fastest.

They will be the ones that adopt AI safely, provably, and under a sealed standard of authority.


Because:


**You can delegate work.
You cannot delegate accountability.**


And without action governance:

  • Insurers won’t underwrite you.
  • Regulators won’t trust you.
  • Courts won’t believe you.
  • Clients won’t forgive you.
  • And your own systems won’t respect your boundaries.


AI didn’t break governance.
It exposed where governance never existed.



Now the world has to build the missing layer.


Final Thoughts


Every era has a sentence that defines it.

For this one, it’s simple:


You Can’t Insure What You Can’t Govern.


Once enterprises accept this, the path forward becomes obvious:


AI doesn’t need more freedom.
It needs more boundaries.


Not to limit what’s possible — but to protect what matters.


And to ensure that when AI succeeds, we succeed on purpose, not by accident.
