What Is a Pre-Execution AI Governance Runtime?

Patrick McFadden • February 23, 2026

Short version:


A pre-execution AI governance runtime is a gate that sits in front of high-risk actions (file, submit, approve, move money, change records) and decides:

“Is this specific person or system allowed to take this specific action, in this matter, under this authority, right now?”

It doesn’t write content. It doesn’t run the model.
It governs what actually executes in the real world — and it leaves behind evidence you can audit.


For the full spec and copy-pasteable clauses, see:
“Sealed AI Governance Runtime: Reference Architecture & Requirements”


1. Plain-Language Definition


A pre-execution AI governance runtime (often shortened to “pre-execution governance runtime” or “sealed AI governance runtime”) is:

A dedicated system that evaluates a requested action before it runs and returns approve / refuse / supervised override, based on the organization’s own policies, identity, and context — while producing a sealed record of that decision.

In practical terms:


  • It sits between AI tools / applications and the systems that do things:
      • court / regulator portals
      • payment rails
      • core banking / claims / policy admin
      • internal record systems
  • It receives a request to act, with metadata like:
      • who is acting (user, role, group, service account)
      • what they’re trying to do (file, approve, transfer, update)
      • where (matter/case/account, jurisdiction, environment)
      • how urgent (normal vs emergency)
      • under which constraints (client instructions, risk posture, supervision requirements)
  • It applies the organization’s governance rules and returns one of three outcomes:
      • Approve – action may proceed
      • Refuse – action is blocked under current rules
      • Supervised Override – action may proceed only if a named human decision-maker accepts responsibility


For each decision, it produces a sealed artifact — a tamper-evident, tenant-controlled record that shows what was decided and why, without exposing full client content or internal logic.
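The request metadata and three-outcome contract described above can be sketched as a small interface. This is a minimal illustration; the names (`ActionRequest`, `Decision`, `Outcome`) are my own, not part of any published spec:

```python
# Illustrative sketch of the decision contract: a request carrying
# actor/action/context metadata, and a decision with one of three outcomes.
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    APPROVE = "approve"                          # action may proceed
    REFUSE = "refuse"                            # action is blocked
    SUPERVISED_OVERRIDE = "supervised_override"  # needs a named human approver


@dataclass
class ActionRequest:
    actor: str               # who is acting (user, role, service account)
    action: str              # what they're trying to do (file, approve, transfer)
    scope: str               # where (matter/case/account, jurisdiction)
    urgency: str = "normal"  # normal vs emergency
    constraints: dict = field(default_factory=dict)  # client instructions, risk posture


@dataclass
class Decision:
    outcome: Outcome
    reason: str              # why, suitable for a sealed audit record
```

The key design point is that the decision object carries only metadata and a reason, never the underlying content, which is what makes it safe to seal and later show to an auditor.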


2. Why Pre-Execution Governance Exists


Most AI governance today is focused on:


  • Data – who can access what, which datasets models see
  • Models – which models are allowed, what guardrails they run with
  • Monitoring – what the model is saying, drift, bias, etc.


All of that matters. But there’s a missing layer at the top:

The execution gate — the moment where an AI-assisted action actually becomes real.

Without a pre-execution runtime:



  • An AI can generate a plausible filing and submit it to the wrong venue.
  • A workflow can use AI to pre-approve a payment and push it straight through.
  • After an incident, the organization may have no clean record of:
      • who triggered the action
      • what rules applied
      • whether anyone explicitly approved it
      • whether any system tried to refuse it


A pre-execution governance runtime exists to fill that gap. It doesn’t try to solve “Is this output good?” It solves:

“Should this action be allowed to execute, given who is asking and what our rules say?”

3. Core Properties of a Pre-Execution Governance Runtime


A system that claims to be a pre-execution AI governance runtime should have a few non-negotiable behaviors.


3.1 It runs before actions, not after


  • It evaluates the final execute step:
      • file / submit
      • approve / commit
      • transfer / move
      • update / delete
  • It does not just log what happened after the fact.
  • It is in the critical path: no runtime decision → no action.
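"In the critical path" can be made concrete with a small sketch: the only route to the executor runs through the gate, so a non-approve decision means the action simply never runs. The function names here are illustrative, not a real product's API:

```python
# Minimal sketch of a gate in the critical path: `execute` is only
# reachable through `gated_execute`, so no decision means no action.
def gated_execute(request, evaluate, execute):
    """Run `execute` only if `evaluate` returns "approve"."""
    decision = evaluate(request)
    if decision != "approve":
        # Refuse / supervised override: the action does not execute here.
        return {"executed": False, "decision": decision}
    return {"executed": True, "decision": decision,
            "result": execute(request)}
```

Logging-only designs invert this ordering: execute first, record afterwards. Here the decision gates the call itself.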


3.2 It uses your policies and identity, not its own


  • Governance rules come from entity-owned sources:
      • GRC / policy systems
      • identity providers (IdP, roles, org structure)
      • matter / case / account systems
      • DLP / classification labels
  • The runtime enforces your rules; it does not invent policy or give advice.


3.3 It fails closed by default


  • If identity is unclear, policy is missing, or context conflicts, it returns Refuse, not “do your best.”
  • This is a core difference from many application-level checks, which often fail open in edge cases.
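A fail-closed evaluator can be sketched in a few lines: every unclear, missing, or error path resolves to refuse, and only an explicit allow produces approve. This is a hypothetical illustration, with my own field names:

```python
# Hypothetical fail-closed evaluator: ambiguity, missing policy, and
# internal errors all resolve to "refuse"; only an explicit allow approves.
def evaluate(request: dict, policies: dict) -> str:
    try:
        actor = request.get("actor")
        action = request.get("action")
        if not actor or not action:
            return "refuse"            # identity or intent unclear
        rule = policies.get((actor, action))
        if rule is None:
            return "refuse"            # no applicable policy
        return "approve" if rule == "allow" else "refuse"
    except Exception:
        return "refuse"                # on any unexpected error, fail closed
```

Contrast this with a typical application-level check, where an unhandled case often falls through to the happy path.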


3.4 It is non-bypassable for governed workflows


  • For workflows that are declared governed, there should be no side door that lets actions execute without hitting the gate.
  • Any exceptions (disaster recovery, manual emergency paths) should be:
      • rare
      • documented
      • auditable



3.5 It emits sealed, tenant-owned artifacts


  • Every approve / refuse / override yields a sealed artifact that:
      • is tamper-evident
      • is scoped to the tenant / entity
      • does not expose model prompts or raw client content
  • These artifacts are designed to be:
      • usable in internal audit and ethics review
      • shown to courts, regulators, and insurers as evidence of control
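One common way to make decision records tamper-evident is a hash chain, where each sealed artifact commits to the previous one. The sketch below is an assumption about how such sealing could work, not the scheme of any specific product; note that the sealed body contains only decision metadata, never client content:

```python
# Illustrative hash-chained sealing: editing any record, anywhere in
# the chain, breaks verification for everything after it.
import hashlib
import json


def seal(decision: dict, prev_hash: str) -> dict:
    """Return a sealed artifact: decision metadata plus a chained digest."""
    body = {"decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "seal": digest}


def verify_chain(artifacts: list) -> bool:
    """Re-derive every seal; any edit to any record fails verification."""
    prev = "genesis"
    for a in artifacts:
        body = {"decision": a["decision"], "prev": a["prev"]}
        expect = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if a["prev"] != prev or a["seal"] != expect:
            return False
        prev = a["seal"]
    return True
```

A production scheme would add signatures and tenant-held keys; the point of the sketch is only that tamper-evidence is a structural property of the record, not a policy promise.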


For a detailed list of MUST/SHOULD requirements, see:
“Sealed AI Governance Runtime: Reference Architecture & Requirements” → Sections 4 & 6.


4. How It Differs from Guardrails, Logging, and Normal Access Control


It’s easy to confuse a pre-execution governance runtime with other controls. The distinctions matter.


4.1 Not just guardrails


  • Guardrails:
      • govern content (what a model can say / generate)
      • operate at the model / API level
  • Pre-execution runtime:
      • governs actions (what can actually be filed, approved, or executed)
      • operates at the execution / workflow level


You can have perfect guardrails and still approve the wrong action.


4.2 More than logging


  • Logs tell you what happened.
  • A pre-execution runtime actively decides what is allowed to happen, then logs that decision as a sealed artifact.
  • Logging is about observation; the runtime is about control + evidence.


4.3 Distinct from IAM alone


  • IAM (identity & access management) answers:
“Can this identity access this system or resource?”
  • The runtime answers:
“Given this identity, context, and policy, can this specific high-risk action execute right now?”

IAM controls entry to systems.
The runtime controls the commit step in the workflow.


5. Mini-Checklist: Does This System Actually Qualify?


If you’re evaluating a vendor or internal build, you can quickly test whether it’s really a pre-execution governance runtime, or just “guardrails plus logging.”


Ask:


1. Placement

  • Does this system sit in front of high-risk actions so that nothing executes without a decision?

2. Decision Types

  • Does it return approve / refuse / supervised override, or just log and alert?

3. Default Behavior

  • When policy, identity, or context is ambiguous, does it fail closed (refuse), or allow?

4. Bypass

  • For workflows that claim to be governed by it, are there any execution paths that bypass this layer?

5. Evidence

  • Does it generate sealed, tamper-evident artifacts for each decision, stored under the organization’s control?

6. Policy & Identity Origin

  • Are policy and identity data owned by the entity (GRC, IdP, matter systems), or are they re-implemented in a vendor-specific way?


If the answer to several of these is “no,” you’re likely dealing with something else (guardrails, logging, or a simple policy engine), not a full pre-execution governance runtime.
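As a quick worked example, the six questions above can be reduced to a simple screen. The field names are my own shorthand, not terminology from the spec:

```python
# The six checklist questions, as a boolean qualification screen.
# Field names are illustrative shorthand, not from any published spec.
CRITERIA = [
    "placement",            # 1. sits in front of high-risk actions
    "decision_types",       # 2. returns approve / refuse / supervised override
    "fail_closed",          # 3. refuses when policy/identity/context is ambiguous
    "no_bypass",            # 4. no execution path around the gate
    "sealed_evidence",      # 5. tamper-evident artifacts under the org's control
    "entity_owned_policy",  # 6. policy & identity from GRC / IdP / matter systems
]


def failed_criteria(answers: dict) -> list:
    """Return the criteria answered 'no'; an empty list means it qualifies."""
    return [c for c in CRITERIA if not answers.get(c, False)]
```

An unanswered question counts as a “no,” which mirrors the fail-closed posture the checklist itself is testing for.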


For a more complete evaluation checklist, see:
“Sealed AI Governance Runtime: Reference Architecture & Requirements” → Integration Requirements & Evaluation Checklist.


6. How It Fits Existing Governance Frameworks


A pre-execution AI governance runtime doesn’t replace your current frameworks; it completes them:


  • For model risk management: it connects model outputs to real-world actions under explicit control.
  • For operational risk & resilience: it creates a clear, testable control point and a sealed trail for investigations.
  • For conduct & accountability regimes: it ensures senior management can prove that rules were enforced at the moment of action, not just written down.
  • For data protection & confidentiality: it can operate primarily on metadata and labels, preserving data minimization while still enforcing rules.


You can think of it as:

The missing top layer in the AI governance stack — the execution-time authority gate that ties your policies and identity to what AI-assisted systems are actually allowed to do.

7. Where to Go Next


If you’re:


  • drafting guidance or rules,
  • designing underwriting questionnaires, or
  • writing procurement requirements,


you don’t need to invent this from scratch.


Use the public spec:

Sealed AI Governance Runtime: Reference Architecture & Requirements

It includes:
  • a concise definition of the pattern
  • non-negotiable guarantees (MUST/SHOULD)
  • an evaluation checklist for deployments
  • copy-pasteable RFP and policy clauses
  • FAQ language you can adapt


A pre-execution AI governance runtime is not just another feature.
It’s the system that answers — in a way you can prove later:

“Who allowed this AI-assisted action to go through, under which rules, and where is the record?”