How to Govern AI Decisions at Runtime (By Governing Actions at the Execution Gate)

Patrick McFadden • March 1, 2026

Everyone’s asking how to govern AI decisions at runtime.
The catch is: you can’t govern “thinking” directly – you can only govern which actions are allowed to execute.


Serious runtime governance means putting a pre-execution authority gate in front of file / send / approve / move and deciding, for each attempt: may this action run at all – yes, no, or escalate?


Most conversations about AI governance still orbit three questions:


  • Do we have an AI governance platform?
  • Are we mapped to EU AI Act / NIST / ISO 42001?
  • Do we have model guardrails and TRiSM in place?


All necessary.


But when AI systems start taking real actions—filing, sending, approving, moving money—boards, regulators, and insurers eventually ask something much sharper:


“Who was allowed to let this happen, under what rules, and where is your record that proves it?”


That’s not a prompt-engineering problem.
That’s a runtime governance problem.


This piece is a practical answer to one specific question:


How do I govern AI decisions at runtime, not just on paper?


1. Runtime is where AI governance actually lives


Most stacks today concentrate controls in two places:


1. Formation (data & model layer)

  • DLP and data classification
  • “No PII in public LLMs” rules
  • Approved model endpoints and gateways
  • Guardrails and safety filters

2. Forensics (after-the-fact layer)

  • Logs, traces, dashboards
  • Incident response and investigations
  • Audit reports and post-mortems


Those are important. But they don’t answer the runtime question:


“At the moment this action tried to execute, who had the authority to say YES or NO?”


Formation explains what the model saw and how it reasoned.
Forensics explains what already happened.


Runtime governance decides what is allowed to happen at all.


That’s the missing piece.


2. Capability isn’t the risk. Authority drift is.


Most real failures won’t come from “hallucinations.”
They’ll come from authority gaps.



In those moments, two very different questions get quietly conflated:


  • “Is the model confident this is the right thing to do?”
  • “Is the system authorized to do this at all?”


Confidence is statistical.
Authority is structural.


Governing AI decisions at runtime means separating those two and giving authority its own, enforceable layer.


3. The control stack you actually need at runtime


A useful way to structure this is a five-layer AI Governance Control Stack:


  1. Data / Formation Governance
    What may the system know and learn from?
  2. Model / Agent Behavior Controls
    How is the system allowed to behave?
  3. Pre-Execution Authority Gate (Commit Layer)
    For this actor, this action, right now – may it run at all?
  4. In-Execution Constraints
    Given it may start, how far may it go while running?
  5. Post-Execution Monitoring & Reconciliation
    What actually happened, and did it match our intent?


Most AI governance platforms live mainly in 1, 2, and 5.
Runtime decision governance is 3 + 4.


If you want to govern AI decisions at runtime, you need to make Layer 3 explicit and non-optional.


4. The heart of runtime governance: a pre-execution authority gate


At runtime, you need something brutally simple:


A pre-execution authority gate that sits in front of file / send / approve / move and answers one question per attempt:


“Is this specific person or system allowed to take this specific action, in this context, under this authority, right now – yes, no, or supervised?”


Concretely, that means:


4.1 What the pre-execution authority gate sees


For each intent to act, the gate receives a small structured payload – not full document content:


  • Who is acting?
    (human, agent, service account, role)
  • Where are they acting?
    (matter / client / account / venue / environment)
  • What are they trying to do?
    (file, send, approve, transfer, modify record, delete, etc.)
  • How fast / exposed is it?
    (standard, expedited, emergency)
  • Under which authority / consent?
    (client consent, license, policy, regulatory regime)
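That intent-to-act payload can be sketched as a small structured record. A minimal sketch in Python; the field names and example values below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentToAct:
    """Minimal intent-to-act payload the gate evaluates: metadata only, no document content."""
    actor: str      # who: human, agent, or service-account identifier
    role: str       # acting role, e.g. "filing-agent"
    context: str    # where: matter / client / account / venue / environment
    action: str     # what: "file", "send", "approve", "transfer", ...
    urgency: str    # how fast / exposed: "standard", "expedited", "emergency"
    authority: str  # under which authority: consent, license, policy reference

# Hypothetical example payload for a court-filing attempt.
payload = IntentToAct(
    actor="svc-filing-agent-07",
    role="filing-agent",
    context="matter:2026-184/venue:SDNY",
    action="file",
    urgency="standard",
    authority="client-consent:2026-01-12",
)
```

Freezing the dataclass keeps the payload immutable once it is handed to the gate, so the record evaluated is the record sealed.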


4.2 What the pre-execution authority gate returns


Exactly one of three outcomes:


  • Approve – action may proceed
  • Refuse – action is blocked; nothing executes
  • Supervised override – action may proceed only with a named human decision-maker attached
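Those three outcomes can be modeled as a closed verdict type, so downstream code cannot invent a fourth state. A sketch in Python; the names are illustrative, and the named-human requirement on overrides is enforced explicitly:

```python
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"                 # action may proceed
    REFUSE = "refuse"                   # action is blocked; nothing executes
    SUPERVISED = "supervised_override"  # proceeds only with a named human attached

def supervised(decision_maker: str) -> tuple:
    """A supervised override is invalid without a named human decision-maker."""
    if not decision_maker:
        raise ValueError("supervised override requires a named human decision-maker")
    return (Verdict.SUPERVISED, decision_maker)

verdict, approver = supervised("jane.doe@firm.example")
```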


4.3 What the pre-execution authority gate produces


For every decision, the gate emits a sealed artifact you own:


  • Who tried to act
  • On what, where, and under which authority envelope
  • Verdict: approve / refuse / supervised
  • Reason codes and timestamps
  • Policy / rule version in force at that moment


That artifact lives in client-controlled, append-only audit storage – not just in a vendor’s log table.
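One way to make such artifacts tamper-evident in append-only storage is hash chaining, where each sealed artifact commits to the one before it. A minimal sketch, assuming illustrative field names rather than any particular product's schema:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained decision artifacts: each entry seals its predecessor."""
    def __init__(self):
        self.entries = []

    def seal(self, actor, action, context, authority, verdict, reasons, policy_version):
        prev = self.entries[-1]["seal"] if self.entries else "genesis"
        record = {
            "actor": actor, "action": action, "context": context,
            "authority": authority, "verdict": verdict, "reasons": reasons,
            "policy_version": policy_version, "ts": time.time(), "prev": prev,
        }
        # Seal = hash over the canonicalized record, including the previous seal.
        record["seal"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

log = AuditLog()
log.seal("svc-filing-agent-07", "file", "matter:2026-184",
         "client-consent:2026-01-12", "refuse", ["expired_consent"], "policy-v14")
```

Because each seal covers the previous one, rewriting any historical artifact breaks every seal after it, which is what makes the store decision-grade rather than just a log table.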


At that point, you’re no longer “hoping governance happens.”
You’ve installed a decision kernel in front of real-world actions.



5. Decision & evidence sovereignty: the two questions that change everything


Runtime governance collapses into two forms of sovereignty:


5.1 Decision sovereignty – whose rules run?


“When an AI-assisted action tries to execute, whose rules decide what happens?”


You own decision sovereignty if:


  • Authority rules are authored and versioned in your GRC / policy / identity stack
  • The gate enforces those rules as-is, rather than replacing them with vendor-designed logic
  • A vendor cannot silently change who may act, on what, under which authority


If your authority model effectively lives inside a vendor’s admin console, your liability is yours, but your NO isn’t.


5.2 Evidence sovereignty – who owns the proof?


“Who owns the artifacts that prove what your system allowed, refused, or escalated?”


You own evidence sovereignty if:


  • Every governed attempt to act yields a decision-grade artifact, not just telemetry
  • Those artifacts are stored under your retention, access, and jurisdiction rules
  • You can answer a regulator with:

“Here is our artifact. Here are the rules in force. Here is the decision.”
not:
“We’ll ask the platform vendor what happened.”


Most AI governance platforms help with visibility.
Very few answer sovereignty.


Runtime governance requires both.


6. How to actually implement runtime decision governance


Here’s a practical sequence you can use as a checklist.


Step 1 – Identify high-risk actions


Across your AI and automation landscape, list where systems can:


  • File with courts or regulators
  • Send binding communications to clients / counterparties
  • Approve / sign decisions under your seal
  • Move money, change limits, or alter critical records
  • Issue orders / prescriptions / commands


These are governed actions.
Everything else can be “monitored.” These must be gated.


Step 2 – Wire a pre-execution gate in front of those actions


For each governed workflow:


  • Ensure the final “execute” call (file / send / approve / transfer) is routed through a pre-execution authority gate
  • Remove side paths that bypass the gate “just for this one integration”
  • Standardize the minimal intent-to-act payload the gate sees


If nothing ever calls the gate, you have a concept, not a control.
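In code, “routed through the gate” means the execute call is reachable only via a gate check. A sketch with a stand-in gate function; in practice the gate would evaluate your own authority rules, and the names here are hypothetical:

```python
def gate_decision(payload: dict) -> str:
    """Stand-in for the real pre-execution authority gate (illustrative logic only)."""
    allowed = payload.get("action") in {"file", "send"} and payload.get("authority")
    return "approve" if allowed else "refuse"

def execute_governed(payload: dict, do_execute) -> str:
    """The only path to execution: every attempt passes the gate first."""
    verdict = gate_decision(payload)
    if verdict == "approve":
        do_execute(payload)          # side effect runs only on approval
    return verdict                   # on refuse, nothing executed

sent = []
verdict = execute_governed(
    {"action": "file", "authority": "client-consent:2026-01-12"},
    sent.append,
)
```

The point of the wrapper is structural: if `do_execute` is never exposed directly, there is no side path for an integration to bypass the gate.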


Step 3 – Bind the gate to your own sources of truth


Connect the gate to:


  • Your identity and role systems (who may act)
  • Your matter / client / account records (where they may act)
  • Your policy, consent, and authority rules in your GRC stack (under which authority)

Authority rules stay client-owned.
The gate is runtime enforcement, not a substitute policy engine.


Step 4 – Fail closed, on purpose


For governed actions, ambiguity should mean:


No execution – with a refusal artifact.


That includes:


  • Unknown or mismatched identity / role
  • Missing or expired consent
  • Action type outside declared scope
  • Inconsistent jurisdiction / venue
  • Policy gaps for that action class


If the system can’t tell whether it’s allowed, it isn’t.
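“Fail closed” means the gate’s default branch is refusal: any condition it cannot positively verify falls through to a refusal with reason codes for the artifact. A sketch with illustrative checks and hypothetical names:

```python
def decide(payload: dict, known_actors: set, consents: dict) -> tuple:
    """Fail-closed evaluation: approve only when every check positively passes."""
    reasons = []
    if payload.get("actor") not in known_actors:
        reasons.append("unknown_identity")
    if not consents.get(payload.get("authority"), False):
        reasons.append("missing_or_expired_consent")
    if payload.get("action") not in {"file", "send", "approve", "transfer"}:
        reasons.append("action_outside_declared_scope")
    # Any ambiguity accumulates a reason; no reasons means positively allowed.
    return ("approve", []) if not reasons else ("refuse", reasons)

verdict, reasons = decide(
    {"actor": "unknown-bot", "action": "delete", "authority": "c-1"},
    known_actors={"svc-filing-agent-07"},
    consents={"c-1": False},
)
```

Note the shape of the default: the approve branch is the one that must be earned, which is the opposite of most permissive middleware.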


Step 5 – Own your evidence surface


Decide where sealed artifacts live:


  • Tenant-controlled, append-only audit store
  • Proper retention and legal hold policies
  • Accessible to legal, risk, audit, and insurers – without logging into a vendor dashboard


Then standardize what those artifacts contain and how they’re used:


  • Internal incident review
  • Regulator / supervisory responses
  • Malpractice / E&O defense
  • Board and risk-committee reporting


At that point, you’re not just governing AI decisions at runtime.
You’re building a defensible narrative of authority over time.


7. Where Thinking OS™ / SEAL Legal Runtime fits


Thinking OS™ was built specifically for this runtime job in law and adjacent regulated domains.


In wired workflows, SEAL:


  • Receives intent-to-act payloads from your systems
  • Evaluates them against your own identity, matter, and policy stack
  • Returns approve / refuse / supervised override
  • Emits sealed, tenant-owned artifacts for every decision


It doesn’t draft, reason, or replace lawyers.
It decides what may execute and proves it.


AI governance platforms can inventory, map, and monitor around that gate.
They just don’t replace the gate itself.

8. The one-line test you can steal


If you want something simple to keep on the wall, use this:


If we can’t point to where “NO” lives at runtime – and show the artifacts that prove it – we’re not governing AI decisions. We’re just watching them.


Runtime AI governance isn’t another feature.


It’s the line between AI that acts under your authority and AI that drags your authority along for the ride.
