Why Ungoverned AI Became Harder to Insure — and What Comes Next

Patrick McFadden • December 9, 2025

You Can’t Insure What You Can’t Govern


1. The Signal Everyone Missed


When insurers started signaling a tighter posture around generative-AI risk, the headlines focused on the AI itself.


But that wasn’t the real story.


Insurance companies don’t move because something is new.
They move because something is uncontrollable.


That’s the issue now confronting every enterprise:


"AI didn’t suddenly become dangerous. It became powerful enough to act without execution-time authority control."


And insurers do not underwrite what no one can prove was supervised, authorized, or prevented.

This is the moment where the market finally said out loud what many leaders have quietly known:


"We let AI act faster than we can explain it."


And that is uninsurable.


2. AI Risk Isn’t a Model Problem — It’s a Permission Problem


Nearly every public failure — hallucinations, rogue agents, data leaks, misfiled motions, false denials, fabricated case law — shares a single root cause:


"AI was allowed to act without proving it had the right to act at the moment of execution."


The industry has been trying to govern models by looking inside them:



  • Explainability
  • Transparency
  • Monitoring
  • “AI safety layers”
  • Post-hoc audits


But none of these solve the actual failure mode:


"Bad actions aren’t discovered in the logs. They are prevented at the boundary."


You cannot audit your way out of an action that already happened.
You cannot monitor a decision once it has already harmed someone.
You cannot insure a system that does not enforce its own limits.


This is why insurers are stepping back.


They aren’t rejecting AI.
They are rejecting the absence of a judgment perimeter.


3. The Hidden Risk: Systems Moving Faster Than Governance


Every modern institution lives inside the same asymmetry:


"Systems now act faster than the oversight meant to govern them."


This is how drift becomes disaster:


  • A model approved for fraud analysis quietly starts denying credit.
  • A chatbot meant for customer support starts generating legal promises.
  • An AI assistant writes filings under the wrong attorney’s authority.
  • An agent automates an internal step that was never approved for automation.
  • A code assistant deletes a production database because a prompt looked “close enough.”


None of these failures originated in the model.


All originated in the boundary where the model was permitted to act.


This is the governance gap the market has been missing language for: Action Governance at the Commit Layer.
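
To make that boundary concrete, here is a minimal sketch of the check that was absent in each scenario above. All actor names and action categories are hypothetical illustrations, not a real product API: the gate compares the action a system is attempting against the categories it was actually approved for, and fails closed on any mismatch.

# Minimal sketch of a boundary check that fails closed on scope drift.
# Actor names and action categories below are illustrative assumptions.

APPROVED_SCOPES = {
    "fraud-model-7": {"fraud_analysis"},       # approved for analysis only
    "support-bot": {"customer_support_reply"},
}

def may_execute(actor: str, action_category: str) -> bool:
    """Return True only if the actor is explicitly approved for this category."""
    allowed = APPROVED_SCOPES.get(actor)
    if allowed is None:
        return False                           # unknown actor: fail closed
    return action_category in allowed

# The drift scenario above: a fraud-analysis model attempting a credit denial.
assert may_execute("fraud-model-7", "fraud_analysis")
assert not may_execute("fraud-model-7", "credit_denial")   # refused at the boundary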


4. The Hard Truth: The World Built the Wrong Discipline First


For 20 years, enterprises built three disciplines:


  1. Data governance — what information flows.
  2. Model governance — how models behave.
  3. Identity and access governance — who can reach what.


Missing entirely:


Action Governance — the discipline of governing what may execute in the real world, under authority, in context.

And the missing layer where it lives:


The Commit Layer — the pre-execution authority gate before an irreversible step.


Until that discipline exists, everything else is downstream defense.

And downstream defense cannot stop upstream failure.


This is why the industry is colliding with the same paradox:


"AI is powerful enough to act,  and ungoverned enough to act badly, and fast enough that no one can intervene in time."


There is only one fix:


stop focusing on the intelligence, and start governing the action perimeter.


5. The New Requirement: Prove Authority Before Action


Insurers have effectively drawn the new line in the sand:


"If you cannot prove who acted, under what rules, and why,  you cannot shift the risk."


This is not about AI ethics.
This is not about safety research.
This is not about “fairness” dashboards.


This is about institutional survival.


Tomorrow’s defensible enterprises will share one characteristic:


"Every high-risk AI-mediated action is evaluated at a pre-execution authority gate and produces a sealed, integrity-verifiable decision artifact before it counts."


Not after.
Not “based on policy.”
Not “assumed to be within scope.”


Before.


This is where the world is heading, whether it’s ready or not.
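
One way to read "sealed, integrity-verifiable decision artifact" in engineering terms is a hash chain over decision records. This is a minimal sketch under that assumption; a real deployment would add cryptographic signatures, timestamps, and durable storage.

import hashlib
import json

def seal(decision: dict, prev_hash: str) -> dict:
    """Seal a decision by binding its content to the artifact before it."""
    body = {"decision": decision, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Any edit to any earlier artifact breaks every hash after it."""
    prev = "genesis"
    for artifact in chain:
        body = {"decision": artifact["decision"], "prev_hash": artifact["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if artifact["prev_hash"] != prev or artifact["hash"] != expected:
            return False
        prev = artifact["hash"]
    return True

chain = [seal({"action": "file_motion", "verdict": "REFUSE"}, "genesis")]
chain.append(seal({"action": "send_wire", "verdict": "APPROVE"}, chain[-1]["hash"]))
assert verify(chain)
chain[0]["decision"]["verdict"] = "APPROVE"   # tamper with history...
assert not verify(chain)                      # ...and the seal breaks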


6. What Action Governance Must Do (A Definition for the Next Decade)


A decade from now, this will be obvious.
But today, it must be stated clearly:


"Action Governance is the discipline of governing what may execute in the real world — by a human, AI system, or automation — under authority, in context, before a high-risk action is allowed to run."


"The missing layer where it lives is the Commit Layer: a pre-execution authority gate that returns Approve, Refuse, or Supervised Override before an irreversible step."



A complete action governance discipline must do all of the following (sketched in code after the list):


  1. Verify identity
    Who is trying to act?
  2. Verify authorization
    Is this person or system allowed to perform this category of action?
  3. Verify context
    In this jurisdiction, on this matter, under this role?
  4. Verify consent & constraints
    Has the required client, customer, or internal approval been granted?
  5. Refuse when uncertain
    Systems must fail closed, not “best-effort guess.”
  6. Generate sealed, integrity-verifiable decision artifacts
    Every approval, refusal, or supervised override should produce a reviewable decision artifact designed to support insurer, regulator, audit, and court review.
  7. Operate at the Commit Layer
    Governance should belong to the enterprise through its own identity, policy, and authority sources of truth — not only to the vendor.
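
Put together, a minimal sketch of such a gate might look like the following. Everything here is an illustrative assumption: the identity, grant, context, and consent lookups are stubbed as in-memory sets standing in for the enterprise’s own sources of truth. The shape is the point: every check must pass, uncertainty resolves to refusal, and every outcome produces an integrity-checkable decision artifact (see the hash-chain sketch above).

from dataclasses import dataclass
from enum import Enum
from typing import Optional
import hashlib
import json

class Verdict(Enum):
    APPROVE = "approve"
    REFUSE = "refuse"
    SUPERVISED_OVERRIDE = "supervised_override"

@dataclass
class ActionRequest:
    actor: str                   # who is trying to act
    category: str                # what category of action
    jurisdiction: str            # in what context
    consent_ref: Optional[str]   # which recorded approval covers it

# Stubbed enterprise sources of truth (hypothetical data, not a real directory).
KNOWN_ACTORS = {"agent-42"}
GRANTS = {("agent-42", "file_motion")}             # authorization
CONTEXTS = {("agent-42", "file_motion", "NY")}     # jurisdiction / role scoping
CONSENTS = {"consent-001"}                         # recorded client approvals
OVERRIDE_ELIGIBLE = {"file_motion"}                # categories a supervisor may force

def commit_gate(req: ActionRequest, supervisor_override: bool = False):
    """Decide, before execution, whether the action may run. Fails closed."""
    checks = {
        "identity": req.actor in KNOWN_ACTORS,
        "authorization": (req.actor, req.category) in GRANTS,
        "context": (req.actor, req.category, req.jurisdiction) in CONTEXTS,
        "consent": req.consent_ref in CONSENTS,
    }
    if all(checks.values()):
        verdict = Verdict.APPROVE
    elif supervisor_override and req.category in OVERRIDE_ELIGIBLE:
        verdict = Verdict.SUPERVISED_OVERRIDE
    else:
        verdict = Verdict.REFUSE   # anything uncertain or failing: refuse, never guess

    # Every outcome yields a decision artifact, sealed with a content hash.
    record = {"request": vars(req), "checks": checks, "verdict": verdict.value}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return verdict, record

verdict, artifact = commit_gate(ActionRequest("agent-42", "file_motion", "NY", "consent-001"))
assert verdict is Verdict.APPROVE
verdict, _ = commit_gate(ActionRequest("agent-42", "file_motion", "CA", "consent-001"))
assert verdict is Verdict.REFUSE   # wrong jurisdiction: fail closed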


Until these capabilities exist, enterprises don’t actually control AI.
They react to it.


And reacting is what creates uninsurable risk.


7. Why This Discipline Will Become Mandatory


The next 24–36 months will force this discipline into existence because:


  • Insurers will increasingly narrow, price, or refuse exposure to ungoverned high-risk AI actions.
  • Regulators will require explainable, auditable authority.
  • Boards will demand provable supervision.
  • Courts will ask, “Who authorized this action?”
  • Vendors will disclaim liability.
  • CISOs will refuse to own the blast radius alone.
  • Lawyers will require sealed evidence that protects their license.
  • Enterprises will not trust autonomous systems without constraints.


This isn’t a trend.
This is a structural inevitability.


Every mature field eventually creates a discipline that governs permission.


Electricity has circuit breakers.
Finance has capital controls.
Aviation has flight envelopes.
Medicine has scope-of-practice boundaries.


AI will have Action Governance.


8. The Shift Leaders Must Make Now


Every enterprise must move from:


“What can the model do?” → “What should the system be allowed to do?”


From:


“How do we monitor?” → “How do we prevent?”


From:


“Is the model safe?” → “Is the action authorized?”


This shift seems small.
It is not.


It is the shift from intelligence to responsibility.
From capability to control.
From output to authority.


This is how institutions reclaim sovereignty in a world where systems move faster than the governance reflex.


9. What Comes Next


The organizations that win the next decade will not be the ones that adopt AI fastest.

They will be the ones that adopt AI safely, provably, and under a sealed standard of authority.


Because:


"You can delegate work. You cannot delegate accountability."


And without action governance:


  • Insurers won’t underwrite you.
  • Regulators won’t trust you.
  • Courts won’t believe you.
  • Clients won’t forgive you.
  • And your own systems won’t respect your boundaries.


AI didn’t break governance.
It exposed where governance never existed.


Now the world has to build the missing discipline.


Here is the clean model:


  • Action Governance is the discipline.
  • The Commit Layer is the missing layer where it lives.
  • Refusal Infrastructure is the architecture that makes it real.
  • SEAL Legal Runtime is the product that applies it to high-risk legal actions.

Final Thoughts


Every era has a sentence that defines it.

For this one, it’s simple:


"You Can’t Insure What You Can’t Govern."


Once enterprises accept this, the path forward becomes obvious:


"AI doesn’t need more freedom.
It needs pre-execution boundaries."


Not to limit what’s possible — but to protect what matters.


And to ensure that when AI succeeds, we succeed on purpose, not by accident.
