The Three Phases of AI Governance

Patrick McFadden • December 15, 2025

Why “PRE, DURING, AFTER” Is the Only Map That Makes Sense Now


Most people talk about “AI governance” like it’s a single thing.


It isn’t.


If you don’t separate when governance shows up, you will:


  • buy the wrong tools,
  • overestimate your safety, and
  • still get blindsided when something irreversible happens fast.


The clean way to see it is simple:

AI governance has three phases: PRE, DURING, and AFTER.
And only one of them actually prevents an unsafe action from running.

Let’s map them in plain English.


AI Governance Phase 1 — PRE: Permission (the Gate)


PRE is everything that happens before an action runs.


This is the moment that decides whether an email gets sent, a database gets touched, money moves, or a filing hits a court at all.

The only question that matters at PRE is:

“Is this person or system allowed to take this action, in this context, under this authority — yes or no?”

If the answer is no, the system must:


  • refuse the action, or
  • escalate it for review


…before anything executes.


That’s it. That’s PRE.


What PRE really does


A real PRE layer:


  • Checks who is acting (identity, role, license, supervision).
  • Checks what they’re trying to do (action type, risk level, system touched).
  • Checks where and when they’re doing it (jurisdiction, environment, deadlines).
  • Checks under what authority (client consent, policy, contract, regulation).


And then it makes a binary call:


  • ✅ allowed to run
  • ❌ refused or escalated


Not “we’ll log it.”
Not “we’ll warn them.”
A hard stop/go.
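To make that concrete, here is a minimal sketch of what a PRE gate looks like in code. It is illustrative only; the names (ActionRequest, authorize, Decision) and the policy shape are assumptions, not any vendor’s API.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REFUSE = "refuse"
    ESCALATE = "escalate"


@dataclass
class ActionRequest:
    actor_id: str           # who is acting (identity)
    actor_role: str         # role / license / supervision bucket
    action_type: str        # what they're trying to do, e.g. "send_email"
    risk_level: str         # "routine" | "sensitive" | "irreversible"
    jurisdiction: str       # where / when the action happens
    authority: str | None   # client consent, policy, contract, regulation


def authorize(req: ActionRequest, policy: dict) -> Decision:
    """The PRE call: decide BEFORE execution whether this action may run."""
    # Who + what: is this role permitted to perform this action type at all?
    if req.action_type not in policy.get(req.actor_role, set()):
        return Decision.REFUSE
    # Under what authority: no documented authority means nothing runs silently.
    if req.authority is None:
        return Decision.ESCALATE
    # Risk: irreversible actions are routed to a human supervisor by default.
    if req.risk_level == "irreversible":
        return Decision.ESCALATE
    return Decision.ALLOW
```

The point is structural: the caller executes only on an explicit ALLOW, and refusal or escalation are ordinary outcomes, not exceptions bolted on afterward.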


Why PRE is so rare


Most stacks today have:


  • authentication (you can log in), and
  • authorization (you can see things, click things).


Very few have governed permission at the moment of action.


That’s why we keep seeing stories where:


  • A model was originally approved for one purpose, then silently used for another.
  • An agent deletes or alters data it never should have touched.
  • An AI tool sends something externally that was meant to stay inside the firewall.


In each of those cases, the real failure wasn’t the model.
It was the absence of a pre-execution authority gate.


AI Governance Phase 2 — DURING: Detection (Guardrails + Observability)


DURING is everything that happens while the system is running.


This is where most “AI governance” tools live today:



  • monitoring, traces, logs
  • “step-by-step” agent timelines
  • anomaly detection and policy checks
  • rate limits and safety filters


DURING answers:


  • “What did the system do?”
  • “What sequence of steps did the agent take?”
  • “Did anything look unusual in this run?”
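To make “DURING” concrete: observability means instrumenting the run itself, so each step is recorded as it happens. A minimal sketch, assuming a made-up emit_trace helper rather than any specific tracing product:

```python
import json
import time
import uuid


def emit_trace(run_id: str, step: str, detail: dict) -> None:
    """Record one step of an agent run as it happens (DURING, not before)."""
    event = {"run_id": run_id, "ts": time.time(), "step": step, "detail": detail}
    print(json.dumps(event))  # in practice this would ship to a tracing backend


run_id = str(uuid.uuid4())
emit_trace(run_id, "tool_call", {"tool": "send_email", "recipient": "outside-counsel@external.com"})
emit_trace(run_id, "anomaly_check", {"flagged": True, "reason": "external recipient"})
```

Notice what this buys you: a faithful account of what the system did. It does not, by itself, stop that email from going out.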


This is useful. You need it.

But for irreversible actions, DURING is often too late.


If an AI agent has already:


  • denied 1,000 credit applications,
  • overwritten a production database, or
  • sent privileged content outside the firm,


then seeing a beautiful trace of how it did that doesn’t undo the outcome.


DURING is necessary for:


  • debugging
  • tuning
  • operational trust


…but it is not the same as governing whether the action should have been allowed.


AI Governance Phase 3 — AFTER: Forensics (Audit + Postmortems)


AFTER is everything that happens once you realize you have a problem.


This includes:


  • incident reports
  • internal investigations
  • breach notifications
  • regulator and board briefings
  • litigation and malpractice defense


AFTER answers:


  • “What happened?”
  • “When did it happen?”
  • “Who was involved?”
  • “What controls did we have on paper?”


AFTER is essential for:


  • accountability
  • learning
  • regulatory trust


But we should be honest about what it is:

AFTER is damage accounting, not prevention.

You need it. Regulators will demand it. Insurers will ask for it.


But if your governance only shows up in the AFTER phase, you’re not governing decisions — you’re documenting them.



Where Most “AI Governance” Lives Today


When you strip away the marketing language, most “AI governance” in the market is:


  • DURING (monitoring, guardrails, observability), and
  • AFTER (audit trails, compliance dashboards, reports).


Those are valuable.


But the failures that make headlines — and the ones that keep GCs, CISOs, and boards awake — are almost always PRE failures:


  • A system was allowed to act under the wrong authority.
  • An agent executed outside of its intended scope.
  • A model was quietly repurposed without a fresh approval.


Everyone notices only after the blast radius is visible.


By then, DURING and AFTER can explain and document what happened.



Neither can say:

“This action was never permitted to run under our seal.”

That claim lives in PRE.


Authority vs Audit: Where Real Governance Lives


Here’s the distinction most boards miss:


  • Auditability asks:
    “Can we explain what happened?”
  • Authority asks:
    “Was this action ever allowed to happen?”


You can have perfect DURING + AFTER:


  • full traces,
  • detailed logs,
  • clean incident reports,


…and still have no answer to the real question:

“Who authorized this specific action, under what rules, and why didn’t anything stop it?”

Real governance lives where:



  • authority is checked, not assumed;
  • refusal and escalation are first-class options, not UX annoyances;
  • the system is structurally capable of saying: “No, not under these conditions.”



The Simple Test for Any “AI Safety” Claim


When you evaluate any AI safety, governance, or agent framework, ask this one question:

“Can you refuse an out-of-scope action before it runs — even when the user asks nicely and the model is confident?”

If the answer isn’t a clean “yes”, they’re mostly operating in DURING and AFTER.



And that’s fine — until something irreversible happens fast.
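If you want to make the test literal, it reduces to something you can write down and run. Assuming the hypothetical authorize() gate and types from the earlier PRE sketch are in scope, and with invented scenario values:

```python
def test_out_of_scope_action_is_refused_before_it_runs():
    # The paralegal role was never granted "file_with_court" -- politeness and
    # model confidence don't change the answer.
    policy = {"paralegal": {"draft_document", "summarize"}}
    req = ActionRequest(
        actor_id="u-123",
        actor_role="paralegal",
        action_type="file_with_court",
        risk_level="irreversible",
        jurisdiction="US-NY",
        authority="client-engagement-7",
    )
    assert authorize(req, policy) is Decision.REFUSE
```

If a vendor can’t show you the equivalent of that assertion passing in their stack, the refusal isn’t real.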


How to Use PRE / DURING / AFTER AI Governance Inside Your Organization


You don’t have to be technical to apply this.


1. Map your current controls


For a given AI system, literally draw three columns:


  • PRE (Permission)
  • DURING (Detection)
  • AFTER (Forensics)


Then ask:


  • What do we have in PRE today that can actually refuse or escalate an action?
  • What are we doing DURING runs (monitoring, alerts, traces)?
  • What shows up AFTER (logs, reports, postmortems)?


Most orgs discover:


  • DURING: crowded
  • AFTER: decent
  • PRE: almost empty


That’s your exposure.
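If it helps, the three-column exercise is simple enough to capture as a tiny inventory script; the system names and controls below are placeholders, not recommendations:

```python
controls_map = {
    "contract-review-agent": {
        "PRE":    [],  # nothing here can refuse or escalate before execution
        "DURING": ["step-by-step traces", "anomaly alerts", "rate limits"],
        "AFTER":  ["audit log export", "quarterly compliance report"],
    },
}

# Any phase with an empty list is a gap -- in most organizations, that's PRE.
for system, phases in controls_map.items():
    gaps = [phase for phase, controls in phases.items() if not controls]
    print(f"{system}: gaps in {gaps or 'none'}")
```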


2. Clarify who owns PRE


Someone needs to own the answer to:

“Who is allowed to do what, where, and under whose authority — and what must never run at all?”

Depending on the context, that might be:


  • GC / legal
  • risk / compliance
  • security / CISO
  • business owner for the domain


But PRE cannot be “owned by the model” or “left to the vendor.”
It must be owned by the institution.


3. Demand hard gates, not just better dashboards


When a vendor says “we do AI governance,” ask them:


  • “Show me the PRE layer in your solution. Where is the gate?”
  • “What exactly happens when an out-of-policy action is attempted?”
  • “Do you log it, warn about it, or actually refuse it?”
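The difference between those three answers is easy to see side by side. A deliberately oversimplified contrast, with made-up function names and no real vendor behavior implied:

```python
def execute(action: str) -> str:
    return f"EXECUTED: {action}"

def log_only(action: str) -> str:
    print(f"[log] out-of-policy action attempted: {action}")  # visibility...
    return execute(action)                                    # ...but it still runs

def warn_then_run(action: str) -> str:
    print(f"[warn] {action} looks out of policy")             # friction...
    return execute(action)                                    # ...but it still runs

def hard_gate(action: str) -> str:
    print(f"[refuse] {action} blocked before execution")      # the brake
    return "REFUSED"                                          # it never runs

print(log_only("wire_transfer"))   # logs, then still prints EXECUTED: wire_transfer
print(hard_gate("wire_transfer"))  # REFUSED -- nothing executed
```

Only the last one is a PRE control.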


If everything they show you lives in DURING and AFTER, you know what you’re buying:



  • great visibility,
  • better paperwork,
  • no real brake.



Where Thinking OS™ Sits


Thinking OS™ exists because PRE is missing almost everywhere.


We don’t tune models, train agents, or design prompts.


We focus on one thing:

A pre-execution authority gate: an upstream check that helps ensure certain actions don’t run at all unless the required conditions are met (a rough sketch of this pattern appears below).
  • If the identity, scope, consent, or authority checks fail → refuse or escalate, with a sealed record.
  • If the checks pass → let existing tools run, with a sealed record of why.
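As a loose sketch of that shape (the pattern the bullets describe, not Thinking OS™ itself; seal_record and its fields are made up, and a real seal would involve signatures and append-only storage rather than a bare hash):

```python
import hashlib
import json
import time

def seal_record(decision: str, request: dict, reason: str) -> dict:
    """Record the decision in tamper-evident form, on the allow and refuse paths alike."""
    body = {"decision": decision, "request": request, "reason": reason, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "seal": digest}

def gate(request: dict) -> dict:
    checks_pass = all(request.get(k) for k in ("identity", "scope", "consent", "authority"))
    if not checks_pass:
        return seal_record("refuse_or_escalate", request, "an authority check failed")
    return seal_record("allow", request, "all authority checks satisfied")
```

The record exists on both paths. That is what lets someone later say an action was, or was not, permitted to run.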


Not “better prompts.”
Not “more dashboards.”
Not “we’ll audit it later.”


A boundary.


The New Baseline for Trust


In the next wave of AI adoption, the question won’t be:


  • “Do you have AI?”
  • “Do you have a governance dashboard?”


It will be:

“Show me where your system can prove that certain actions were never allowed to run under your seal.”

PRE, DURING, and AFTER all matter.


  • DURING keeps operations observable and tunable.
  • AFTER supports accountability.
  • PRE is where real governance lives.


That’s the piece most organizations are missing.


And that’s the piece you just put your name on.
