Why You Need an AI Governance Control Stack (and Where a Pre-Execution Authority Gate Fits)
Most of what’s being sold today as “AI governance” is a category error.
- Vendors pitch one product as the governance solution.
- Buyers hope for “one throat to choke.”
- Regulators and boards ask, “Do we have AI governance?” like it’s a yes/no checkbox.
Reality: AI governance is not a product. It’s a control stack.
If you try to buy it from one box, you usually end up with false confidence and gaps in exactly the places that matter most.
This is the framing I’ve found most useful when evaluating serious AI governance claims.
The One Line to Remember
“Authorized” is not the same as “executed correctly.”
“Executed correctly” is not the same as “governed.”
Most confusion comes from mixing those three jobs.
Let’s unbundle them.
The AI Governance Control Stack (Five Layers)
In any serious environment — law, finance, healthcare, critical infrastructure — you need at least five layers of control:
- Data / Formation Governance – what the system is allowed to see and learn from.
- Model / Agent Behavior Controls – what the system is allowed to say and attempt.
- Pre-Execution Authority Gate (Commit Layer) – who is allowed to let an action start at all.
- In-Execution Constraints – how far the action is allowed to go while it’s running.
- Post-Execution Monitoring & Reconciliation – what actually happened, and whether it matched your intent.
If someone tells you they “do AI governance” and can’t tell you which of these they cover, you don’t have a governance solution.
You have a feature.
Note: policy ownership sits above this stack. Boards, GRC, risk committees, and business leadership decide the rules; these five layers are the operational control stack that enforces and evidences them.
Layer 1 – Data / Formation Governance
“What do we let this system know?”
This is the part most organizations already recognize:
- Data classification (public / internal / confidential / restricted).
- “No customer data in public LLMs” rules.
- DLP, access controls, and encryption.
- Decisions about which datasets can train which models.
Risk it controls:
- Data leakage
- Privacy violations
- Illegitimate training / use of sensitive data
What it does not control:
- Whether the system’s decisions are authorized.
- Whether a specific action may run under your name.
You can have perfect data governance and still wire a system that moves money, files court documents, or changes records under the wrong authority.
Layer 2 – Model / Agent Behavior Controls
“How is this system allowed to behave?”
This is the “AI safety” layer people talk about:
- Guardrails and safety filters.
- Red teaming and jailbreak resistance.
- Prompt / response filtering.
- Agent policies (“don’t call this tool”, “stay within this scope”).
Risk it controls:
- Harmful content
- Obvious misuse of tools by the model
- Some classes of biased or non-compliant behavior
What it does not control:
- Whether the resulting action is allowed to execute.
- Whether the decision is under the right license, consent, or authority.
A model can be beautifully “safe” and still send something to the wrong regulator, move the wrong funds, or file under the wrong attorney — simply because no one ever asked “may this run at all?”
Layer 3 – Pre-Execution Authority Gate (Commit Layer)
“For this actor, this action, right now — may it run at all?”
This is the layer that’s been missing in most stacks, and the one we specialize in.
A pre-execution authority gate sits in front of high-risk actions:
- file / send / serve
- sign / approve / commit
- move money / change limits / alter critical records
For each attempted action, it evaluates a small, structured payload:
- Who is acting?
- Where are they acting?
- What are they trying to do?
- How fast / how exposed is it?
- Under which authority / consent?
Then it returns exactly one of three outcomes:
- ✅ Approve – the action may proceed.
- ❌ Refuse – the action is blocked; nothing executes.
- ⚖️ Supervised override – the action may proceed only with a named human decision-maker attached.
And for each verdict, it produces a sealed, tenant-owned artifact:
- Who tried to do what.
- Under which identity / role / matter / venue.
- Which policy / rule set was applied.
- What the verdict was (approve / refuse / supervised).
- Reason codes and timestamps.
The gate’s job is not correctness.
Its job is authority and accountability.
It answers:
“Was this action allowed to start under our rules, and who owned that call?”
It does not guarantee:
- that the model’s reasoning inside the action was correct,
- that every downstream system behaved, or
- that the overall outcome was “good.”
That’s not a flaw — it’s separation of concerns.
In our work in legal environments, SEAL Legal Runtime is built as this layer: a sealed pre-execution authority gate in front of high-risk legal actions, returning approve, refuse, or supervised override and emitting sealed artifacts per governed action.
Layer 4 – In-Execution Constraints
“Given that it’s allowed to start, how far may it go?”
Once an action is authorized to start, you still need controls while it’s running:
- Transaction limits and exposure caps.
- Dual control / four-eyes for large or unusual actions.
- Step-up authentication when risk spikes.
- Escrow / staged commits (plan, preview, then apply).
- Circuit breakers and anomaly detection mid-flight.
- Bounded tool permissions for agents (“this tool only on these accounts”).
- Idempotency / rollback guards.
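Several of the constraints above can be sketched as a small guard object wrapped around an already-authorized action. This is a hedged illustration, not a production design: the class name, thresholds, and exception types are invented here, and a real system would back the state with durable storage and real transaction semantics.

```python
class CircuitOpenError(Exception):
    """Raised when too many recent failures have tripped the breaker."""


class LimitExceededError(Exception):
    """Raised when an action exceeds its per-action exposure cap."""


class ExecutionGuard:
    """Bounds an action while it runs: cap, circuit breaker, idempotency."""

    def __init__(self, per_action_cap: float, failure_threshold: int = 3):
        self.per_action_cap = per_action_cap
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.seen_ids: set[str] = set()

    def execute(self, action_id: str, amount: float, do_action):
        if action_id in self.seen_ids:
            # Idempotency guard: the same action never runs twice.
            return "duplicate_ignored"
        if self.failures >= self.failure_threshold:
            # Circuit breaker: stop mid-flight cascades.
            raise CircuitOpenError("too many recent failures")
        if amount > self.per_action_cap:
            # Exposure cap: "approved but dangerous" still gets bounded.
            raise LimitExceededError(f"{amount} exceeds cap {self.per_action_cap}")
        try:
            result = do_action(amount)
        except Exception:
            self.failures += 1
            raise
        self.seen_ids.add(action_id)
        self.failures = 0  # success resets the breaker
        return result
```

Note that nothing here asks whether the action was authorized — that already happened at the commit layer. This guard only limits how far an authorized action can go.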
Risk it controls:
- “Approved but dangerous” actions.
- Partial failures and cascading problems while the system is mid-flight.
What it does not control:
- Whether the action should have been allowed to start in the first place.
- Whether the action was under the right authority or consent.
This is the layer many “AI ops” / “agentic” frameworks talk about (and where people like Drew / Vadym are focusing on invariants and state).
Layer 5 – Post-Execution Monitoring & Reconciliation
“What actually happened, and did it match our intent?”
After the action is done, you still need:
- Reconciliation against books and records.
- Exception reports and flags.
- Supervisory sampling and QA.
- Incident detection and response.
- Forensic reconstruction when something goes wrong.
- Model / policy updates based on what you learn.
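At its simplest, reconciliation is a diff between what the gate approved and what the books say actually happened. The sketch below assumes both sides can be keyed by an action ID and reduced to an amount; the function name and issue codes are illustrative.

```python
def reconcile(
    intended: dict[str, float],   # action_id -> approved amount
    actual: dict[str, float],     # action_id -> amount per books and records
    tolerance: float = 0.01,
) -> list[dict]:
    """Flag every action whose recorded outcome does not match intent."""
    exceptions = []
    for action_id in intended.keys() | actual.keys():
        want = intended.get(action_id)
        got = actual.get(action_id)
        if want is None:
            # Executed but never approved: the most serious finding.
            exceptions.append({"id": action_id, "issue": "UNAUTHORIZED_EXECUTION", "got": got})
        elif got is None:
            # Approved but never landed: partial failure or lost action.
            exceptions.append({"id": action_id, "issue": "APPROVED_BUT_MISSING", "want": want})
        elif abs(want - got) > tolerance:
            exceptions.append({"id": action_id, "issue": "AMOUNT_MISMATCH", "want": want, "got": got})
    return exceptions
```

The exception list is what feeds supervisory sampling, incident response, and eventually the policy updates at the top of the loop.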
Risk it controls:
- Drift over time (models, policies, behavior).
- Hidden failure modes that only show up in aggregate.
- Regulatory and litigation exposure when you need to show your work.
What it does not control:
- Whether the action was authorized in the first place.
- Whether the action was bounded while it ran.
This layer is where a lot of “AI audit” and “AI observability” lives today.
Why One Vendor Can’t (and Shouldn’t) Own All Five
Think about security:
- IAM
- Endpoint protection
- Network / zero trust
- SIEM
- SOAR / incident response
No one serious buys all of that from one product.
You expect multiple layers, multiple vendors, with clear interfaces.
AI will go the same way.
If a vendor claims to be your data layer, model safety layer, pre-execution authority gate, runtime constraint system, and audit platform all at once, you should scrutinize that claim very carefully. Serious governance usually depends on separation of duties, clear interfaces, and independent lines of defense.
How to Use This Control Stack in Practice
Here’s a way to turn this into action inside your org.
Step 0 — Find the execution paths
Before assessing the five layers, inventory where high-risk actions actually execute today. In many environments, important workflows sit outside formal governance paths. If an action never reaches the execution gate, it cannot be refused before it runs.
Step 1 — Pick one high-risk class of actions:
- filing with a regulator or court,
- moving funds above a threshold,
- changing prices/limits in production,
- issuing orders / prescriptions / commands.
Step 2 — Then ask, in plain language:
1. Data / Formation
- What data is this action allowed to see and learn from?
- Who owns those classifications and access rules?
2. Model / Agent Behavior
- What are models/agents allowed to say or try?
- What’s explicitly out of scope?
3. Pre-Execution Authority Gate
- Who (human or system) is allowed to say “yes, this action may start under our name”?
- Where does that decision live — in a policy PDF, or in a gate in front of the “execute” button?
- Can we prove that decision six months from now?
4. In-Execution Constraints
- Once started, what caps / limits / circuit breakers apply?
- What happens if the action goes out of bounds while it’s running?
5. Monitoring & Reconciliation
- How do we spot and investigate odd patterns after the fact?
- What record would we show a regulator or insurer?
If any layer is answered with “we don’t really have that” or “we’re hoping the vendor’s magic takes care of it,” that’s your gap.
Where We Focus
In our own work, we have deliberately not tried to be a single, all-in-one “AI governance platform.”
We chose to specialize in the pre-execution authority layer:
Pre-execution authority gates for high-risk legal actions,
with sealed, client-owned artifacts for every approve / refuse / supervised override.
- We don’t do DLP.
- We don’t tune models.
- We don’t manage every runtime limit or observability feed.
We integrate with the systems that already own those pieces (IdP, GRC, matter systems, DLP), and we answer one question ruthlessly:
“For this actor, this action, in this context, under this authority — may it run at all, and where is the record that proves the decision?”
That’s the Commit job.
Your stack may choose different vendors or build some layers in-house.
But the structure doesn’t really change.
The Practical Takeaway
If you want a quotable, referenceable version of all this, it’s this:
AI governance is not one product. It’s a control stack.
- Data governance decides what the system may know.
- Model controls decide how it may behave.
- The pre-execution gate decides what may start.
- In-execution constraints decide how far it may go.
- Monitoring and reconciliation decide what we learn and how we prove it.
And:
If your stack cannot clearly show where “may this run at all?” is decided for each high-risk action, then your governance program still has a critical gap.