Why You Need an AI Governance Control Stack (and Where a Pre-Execution Gate Fits)
Most of what’s being sold today as “AI governance” is a category error.
- Vendors pitch one product as the governance solution.
- Buyers hope for “one throat to choke.”
- Regulators and boards ask, “Do we have AI governance?” like it’s a yes/no checkbox.
Reality: AI governance is not a product. It’s a control stack.
If you try to buy it from one box, you end up with:
- false confidence,
- vendor-management hell, and
- gaps in exactly the places that matter (money, filings, records, safety).
This is the post I’d stamp as the reference point.
The One Line to Remember
“Authorized” is not the same as “executed correctly.”
“Executed correctly” is not the same as “governed.”
Most confusion comes from mixing those three jobs.
Let’s unbundle them.
The AI Governance Control Stack (Five Layers)
In any serious environment — law, finance, healthcare, critical infrastructure — you need at least five layers of control:
- Data / Formation Governance – what the system is allowed to see and learn from.
- Model / Agent Behavior Controls – what the system is allowed to say and attempt.
- Pre-Execution Authority Gate (Commit Layer) – who is allowed to let an action start at all.
- In-Execution Constraints – how far the action is allowed to go while it’s running.
- Post-Execution Monitoring & Reconciliation – what actually happened, and whether it matched your intent.
If someone tells you they “do AI governance” and can’t tell you which of these they cover, you don’t have a governance solution.
You have a feature.
Note: Above this stack sits Policy & Ownership (boards, GRC, risk appetite). These five layers are the runtime control stack that enforces and evidences those policies.
Layer 1 – Data / Formation Governance
“What do we let this system know?”
This is the part most organizations already recognize:
- Data classification (public / internal / confidential / restricted).
- “No customer data in public LLMs” rules.
- DLP, access controls, and encryption.
- Decisions about which datasets can train which models.
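Rules like "no customer data in public LLMs" only hold if they're enforced in code, not just in a policy PDF. A minimal sketch of a classification-based flow check — the tier names and destination table are illustrative assumptions, not from any standard:

```python
# Hypothetical data-flow check: may data of a given classification reach a
# given destination? Tier names and the destination table are illustrative.
TIERS = ["public", "internal", "confidential", "restricted"]

# Highest classification each destination is allowed to receive.
MAX_TIER_FOR = {
    "public_llm": "public",
    "internal_model": "confidential",
    "restricted_enclave": "restricted",
}

def may_send(classification: str, destination: str) -> bool:
    """True if data of this classification may flow to this destination."""
    ceiling = MAX_TIER_FOR.get(destination)
    if ceiling is None:
        return False  # unknown destinations are denied by default
    return TIERS.index(classification) <= TIERS.index(ceiling)
```

Note that this only governs what the system may *see* — it says nothing about whether any resulting action is authorized, which is the point of the layers below.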
Risk it controls:
- Data leakage
- Privacy violations
- Illegitimate training / use of sensitive data
What it does not control:
- Whether the system’s decisions are authorized.
- Whether a specific action may run under your name.
You can have perfect data governance and still wire a system that moves money, files court documents, or changes records under the wrong authority.
Layer 2 – Model / Agent Behavior Controls
“How is this system allowed to behave?”
This is the “AI safety” layer people talk about:
- Guardrails and safety filters.
- Red teaming and jailbreak resistance.
- Prompt / response filtering.
- Agent policies (“don’t call this tool”, “stay within this scope”).
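An agent policy like "don't call this tool" can be as simple as an allowlist consulted before every tool invocation. A hypothetical sketch, with made-up agent and tool names:

```python
# Hypothetical behavior-layer policy: which tools each agent role may even
# attempt, with a per-tool scope. Agent/tool names are illustrative.
AGENT_POLICY = {
    "research_agent": {"search_docs": {"scope": "matter_only"}},
    "drafting_agent": {"search_docs": {"scope": "matter_only"},
                       "draft_email": {"scope": "internal_recipients"}},
}

def tool_allowed(agent: str, tool: str) -> bool:
    """Behavior-layer check: may this agent even attempt this tool?
    (Whether the resulting ACTION may execute is a different layer.)"""
    return tool in AGENT_POLICY.get(agent, {})
```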
Risk it controls:
- Harmful content
- Obvious misuse of tools by the model
- Some classes of biased or non-compliant behavior
What it does not control:
- Whether the resulting action is allowed to execute.
- Whether the decision is under the right license, consent, or authority.
A model can be beautifully “safe” and still send something to the wrong regulator, move the wrong funds, or file under the wrong attorney — simply because no one ever asked “may this run at all?”
Layer 3 – Pre-Execution Authority Gate (Commit Layer)
“For this actor, this action, right now — may it run at all?”
This is the layer that’s been missing in most stacks, and the one I specialize in.
A pre-execution authority gate sits in front of high-risk actions:
- file / send / serve
- sign / approve / commit
- move money / change limits / alter critical records
For each attempted action, it evaluates a small, structured payload:
- Who is acting? (human, agent, service account, role)
- Where are they acting? (matter / client / account / venue / environment)
- What are they trying to do? (file, send, approve, move)
- How fast / how exposed is it? (standard, expedited, emergency)
- Under which authority / consent? (client, contract, policy, regulation)
Then it returns exactly one of three outcomes:
- ✅ Approve – the action may proceed.
- ❌ Refuse – the action is blocked; nothing executes.
- ⚖️ Supervised override – the action may proceed only with a named human decision-maker attached.
And for each verdict, it produces a sealed, tenant-owned artifact:
- Who tried to do what.
- Under which identity / role / matter / venue.
- Which policy / rule set was applied.
- What the verdict was (approve / refuse / supervised).
- Reason codes and timestamps.
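Put together, the structured payload, the three verdicts, and the sealed artifact can be sketched in a few lines. Everything here — field names, decision rules, the "policy-v1" ruleset label, the hash-based "seal" — is a hypothetical illustration of the pattern, not any product's actual implementation:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ActionRequest:
    actor: str      # who: human, agent, service account, role
    context: str    # where: matter / client / account / venue
    action: str     # what: file, send, approve, move
    urgency: str    # how fast: standard, expedited, emergency
    authority: str  # under which consent / policy / regulation

HIGH_RISK = {"file", "send", "approve", "move"}

def gate(req: ActionRequest) -> dict:
    """Return exactly one verdict (approve / refuse / supervised) plus a
    sealed artifact recording who tried what, under which rules."""
    if req.action not in HIGH_RISK:
        verdict, reason = "approve", "not_a_gated_action"
    elif not req.authority:
        verdict, reason = "refuse", "no_authority_on_record"
    elif req.urgency == "emergency":
        verdict, reason = "supervised", "emergency_requires_named_human"
    else:
        verdict, reason = "approve", "authority_verified"

    record = {**asdict(req), "verdict": verdict, "reason_code": reason,
              "timestamp": time.time(), "ruleset": "policy-v1"}
    # "Sealed" here means a tamper-evident hash over the record; a real
    # system would sign it and write it to tenant-owned storage.
    record["seal"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

Notice what the sketch does *not* do: it never inspects the content of the filing or the reasoning behind it. It only decides whether the action may start, and leaves a record of that decision.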
The gate’s job is not correctness.
Its job is authority and accountability.
It answers:
“Was this action allowed to start under our rules, and who owned that call?”
It does not guarantee:
- that the model’s reasoning inside the action was correct,
- that every downstream system behaved, or
- that the overall outcome was “good.”
That’s not a flaw — it’s separation of concerns.
In our world (law), SEAL Legal Runtime is exactly this: a sealed, pre-execution authority gate in front of file / send / approve / move for legal workflows, returning approve / refuse / supervised and emitting sealed artifacts per action.
Layer 4 – In-Execution Constraints
“Given that it’s allowed to start, how far may it go?”
Once an action is authorized to start, you still need controls while it’s running:
- Transaction limits and exposure caps.
- Dual control / four-eyes for large or unusual actions.
- Step-up authentication when risk spikes.
- Escrow / staged commits (plan, preview, then apply).
- Circuit breakers and anomaly detection mid-flight.
- Bounded tool permissions for agents (“this tool only on these accounts”).
- Idempotency / rollback guards.
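Controls like transaction caps and circuit breakers are checks the executing system makes at every step, after the gate has already said "yes, start." A hypothetical sketch with illustrative thresholds:

```python
# Hypothetical in-execution guard: a per-action cap plus a cumulative
# circuit breaker. Thresholds and semantics are illustrative assumptions.
class ExecutionGuard:
    def __init__(self, per_action_cap: float, daily_cap: float):
        self.per_action_cap = per_action_cap
        self.daily_cap = daily_cap
        self.spent_today = 0.0
        self.tripped = False

    def check(self, amount: float) -> bool:
        """Called mid-flight, per step, AFTER the action was authorized
        to start: may this particular step proceed?"""
        if self.tripped or amount > self.per_action_cap:
            return False
        if self.spent_today + amount > self.daily_cap:
            self.tripped = True  # circuit breaker: halt everything mid-flight
            return False
        self.spent_today += amount
        return True
```

The design point: the guard never asks whether the action should have started — that question belongs to Layer 3 — it only bounds how far the action goes once running.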
Risk it controls:
- “Approved but dangerous” actions.
- Partial failures and cascading problems while the system is mid-flight.
What it does not control:
- Whether the action should have been allowed to start in the first place.
- Whether the action was under the right authority or consent.
This is the layer many “AI ops” / “agentic” frameworks talk about (and where people like Drew / Vadym are focusing on invariants and state).
Layer 5 – Post-Execution Monitoring & Reconciliation
“What actually happened, and did it match our intent?”
After the action is done, you still need:
- Reconciliation against books and records.
- Exception reports and flags.
- Supervisory sampling and QA.
- Incident detection and response.
- Forensic reconstruction when something goes wrong.
- Model / policy updates based on what you learn.
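Reconciliation, at its core, is a diff between what the gate approved (intent) and what the downstream system recorded (actuals). A hypothetical sketch — the record shapes and field names are assumptions:

```python
# Hypothetical post-execution reconciliation: compare approved intents
# against executed actuals and surface exceptions for investigation.
def reconcile(approved: list[dict], executed: list[dict]) -> dict:
    """Return exceptions: approved-but-never-executed, executed-without-
    approval, and amount mismatches between the two records."""
    by_id = lambda rows: {r["action_id"]: r for r in rows}
    a, e = by_id(approved), by_id(executed)
    return {
        "missing":    sorted(a.keys() - e.keys()),  # approved, never ran
        "unapproved": sorted(e.keys() - a.keys()),  # ran without approval
        "mismatched": sorted(k for k in a.keys() & e.keys()
                             if a[k]["amount"] != e[k]["amount"]),
    }
```

Note the dependency between layers: "unapproved" exceptions are only detectable because Layer 3 left an artifact to reconcile against.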
Risk it controls:
- Drift over time (models, policies, behavior).
- Hidden failure modes that only show up in aggregate.
- Regulatory and litigation exposure when you need to show your work.
What it does not control:
- Whether the action was authorized in the first place.
- Whether the action was bounded while it ran.
This layer is where a lot of “AI audit” and “AI observability” lives today.
Why One Vendor Can’t (and Shouldn’t) Own All Five
Think about security:
- IAM
- Endpoint protection
- Network / zero trust
- SIEM
- SOAR / incident response
No one serious buys all of that from one product.
You expect multiple layers, multiple vendors, with clear interfaces.
AI will go the same way.
If a vendor claims:
“We are your data layer, model safety layer, pre-execution gate, runtime constraints, and audit platform”
…you should treat that like a bank that says:
“We are your core ledger, auditor, regulator, and fraud team.”
Regulators don’t like that.
Boards shouldn’t like it either.
You want:
- Clear separation of duties.
- Multiple lines of defense.
- Vendors that know exactly which layer they own.
How to Use This Control Stack in Practice
Here’s a way to turn this into action inside your org.
Step 0 — Find the ungoverned paths (Discovery before governance)
Before you assess the five layers, inventory where the action actually executes today. In most estates, the dominant risk surface is undocumented integrations and shadow workflows that bypass formal controls. If an action doesn’t reach Layer 3, it can’t be refused — it can only be investigated later.
Step 1 — Pick one high-risk class of actions:
- filing with a regulator or court,
- moving funds above a threshold,
- changing prices/limits in production,
- issuing orders / prescriptions / commands.
Step 2 — Then ask, in plain language:
1. Data / Formation
- What data is this action allowed to see and learn from?
- Who owns those classifications and access rules?
2. Model / Agent Behavior
- What are models/agents allowed to say or try?
- What’s explicitly out of scope?
3. Pre-Execution Authority Gate
- Who (human or system) is allowed to say “yes, this action may start under our name”?
- Where does that decision live — in a policy PDF, or in a gate in front of the “execute” button?
- Can we prove that decision six months from now?
4. In-Execution Constraints
- Once started, what caps / limits / circuit breakers apply?
- What happens if the action goes out of bounds while it’s running?
5. Monitoring & Reconciliation
- How do we spot and investigate odd patterns after the fact?
- What record would we show a regulator or insurer?
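One way to make Step 2 concrete is to record, per action class, who owns each layer and where the evidence lives — any layer you can't fill in is your gap. A hypothetical sketch with made-up owners and evidence locations:

```python
# Hypothetical Step 2 worksheet as data, for one action class. Owners and
# evidence locations are illustrative; None marks an unanswered question.
REGULATORY_FILING = {
    "data_formation":     {"owner": "records team",  "evidence": "classification policy"},
    "model_behavior":     {"owner": "platform team", "evidence": "guardrail configs"},
    "pre_execution_gate": {"owner": None,            "evidence": None},  # the usual gap
    "in_execution":       {"owner": "ops",           "evidence": "transaction limits"},
    "monitoring":         {"owner": "compliance",    "evidence": "exception reports"},
}

def gaps(stack: dict) -> list[str]:
    """Layers with no named owner or no provable evidence."""
    return [layer for layer, cell in stack.items()
            if not cell["owner"] or not cell["evidence"]]
```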
If any layer is answered with “we don’t really have that” or “we’re hoping the vendor’s magic takes care of it,” that’s your gap.
Where We’ve Placed Our Bet
In our own work, we deliberately did not try to be “the AI governance platform.”
We chose to specialize in the Commit layer: pre-execution authority gates for high-risk legal actions, with sealed, client-owned artifacts for every approve / refuse / supervised override.
- We don't do DLP.
- We don't tune models.
- We don't manage every runtime limit or observability feed.
We integrate with the systems that already own those pieces (IdP, GRC, matter systems, DLP), and we answer one question ruthlessly:
“For this actor, this action, in this context, under this authority —
may it run at all, yes / no / supervised, and where is the record that proves it?”
That’s the Commit job.
Your stack may choose different vendors or build some layers in-house.
But the structure doesn’t really change.
The Takeaway You Can Steal
If you want a quotable, referenceable version of all this, it’s this:
AI governance is not one product. It’s a control stack.
- Data governance decides what the system may know.
- Model controls decide how it may behave.
- The pre-execution gate decides what may start.
- In-execution constraints decide how far it may go.
- Monitoring and reconciliation decide what we learn and how we prove it.
And:
If your stack can’t clearly show where “may this run at all?” lives for each high-risk action,
you don’t have AI governance. You have AI hope.









