AI Security Keeps Intruders Out. AI Governance Decides What Your Own Systems May Do.

Patrick McFadden • March 17, 2026

Most AI governance stops at models and monitoring.

The missing runtime discipline is Action Governance.


Most AI conversations with CISOs sound like this:


  • “Are we protecting model weights?”
  • “Do we have prompt injection defenses?”
  • “Is our data exfiltration surface under control?”


All good questions.



But as soon as AI systems can file, send, approve, move money, or update records, a different question becomes more important:


“What are our own systems allowed to do under our name?”


That’s not an AI security question.
It’s the runtime part of AI governance — the part I call Action Governance.


Security keeps intruders out.
Action Governance decides what even trusted identities are allowed to do.


The confusion between the two is exactly where most organizations are exposed.


1. Same Stack, Different Job


Here’s the clean separation in three sentences:


AI security protects the environment from hostile actors.

AI governance sets the rules for how AI may be used.
Action Governance enforces, at runtime, which high-risk actions legitimate actors may actually execute under your authority.

Side-by-side view

AI Security – “Who can reach what?”

  • Primary focus: attack surface, intrusion, abuse
  • Core question: “Can this actor reach this system/data?”
  • Typical controls: IAM, network segmentation, secrets management, confidential compute, model hardening, threat detection
  • Objects of control: users, agents, services, endpoints, APIs
  • Failure mode: data breach, model theft, compromised endpoint
  • Evidence surface: security logs, alerts, forensic traces
  • View of AI models: assets to protect and harden
  • Success metric: fewer successful attacks, faster detection and containment

AI Governance / Action Governance – “What are they allowed to do?”

  • Primary focus: authority, admissibility, accountability
  • Core question: “May this actor take this action, here, now?”
  • Typical controls: pre-execution authority gates, policy + identity evaluation, action-level allow/refuse/escalate
  • Objects of control: actions – file, send, approve, move money, update records
  • Failure mode: authorized actor doing an unauthorized thing; no proof of who allowed it
  • Evidence surface: sealed decision artifacts tied to policy and authority
  • View of AI models: tools that propose options; governance decides what may execute
  • Success metric: fewer unauthorized actions, stronger ability to prove “who allowed what, under which rules”

A secure stack without governance is like a fortress where everyone inside the walls can pull any lever.



No intruders.
Plenty of internal blast radius.


2. Why “Secure but Over-Privileged” Is the New Failure Mode


Recent incidents keep following the same pattern:


  1. An AI agent or automation tool is given a powerful role (“admin”, “production”, “billing”).
  2. IAM, KMS, network, and model guardrails are all configured correctly for that role.
  3. Under pressure, the agent chooses a destructive but technically valid action.
  4. Every security layer says “Yes” – because the credential is legitimate.
  5. The system does something no one believed it was authorized to do.


That isn’t:


  • a prompt injection failure
  • a jailbreak
  • or a “model hallucination” problem.


It’s an authority failure:

Identity passed.
Security passed.
Authority should have failed.

The missing question at runtime was:


“Is this specific person or system allowed to take this specific action, in this context, under this authority – yes, no, or escalate?”


Security never got a chance to answer it, because that isn’t its job.


3. Three Layers of a Decision (Where Security Stops)


You already know these layers intuitively:


1. Propose – Intelligence

  • Models, agents, copilots, optimizers.
  • They answer: “What could we do?”

2. Commit – Authority

  • The pre-execution gate (sketched in code after this list).
  • Answers: “May this run at all?”
  • Outcomes: Approve / Refuse / Escalate.

3. Remember – Judgment Memory

  • Logs, traces, observability, replay.
  • Answers: “What did we do, and why?”
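
To make that Commit layer concrete, here is a minimal sketch of its contract; the names (Verdict, CommitDecision, commit_gate) are illustrative assumptions, not any real API:

```python
# A minimal sketch of the Commit layer's contract; names are illustrative.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"
    REFUSE = "refuse"
    ESCALATE = "escalate"

@dataclass(frozen=True)
class CommitDecision:
    verdict: Verdict
    reason: str       # why the gate ruled this way
    policy_id: str    # which rule produced the verdict

def commit_gate(actor: str, action: str, context: dict) -> CommitDecision:
    """Pre-execution gate: runs after IAM, before any irreversible action.
    Deterministic: the same inputs always yield the same verdict."""
    if not actor or not action or not context:
        # Fail-closed: incomplete inputs never execute on "best effort".
        return CommitDecision(Verdict.REFUSE, "incomplete intent", "default-deny")
    # ...evaluate against policy and identity sources of truth...
    return CommitDecision(Verdict.ESCALATE, "no explicit grant found", "default-deny")
```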


Most security work today lives at the edges of Propose and Remember:


  • Protect models, data, and infra in the Propose layer.
  • Capture traces, alerts, and forensics in the Remember layer.


What’s missing in most stacks is a Commit Layer that’s:


  • Downstream of IAM, GRC, and policy
  • Upstream of any irreversible action


…and is able to say, deterministically:


“This will not run under our name.”


Without that commit layer you get what you see today:


  • Fantastic security posture
  • Beautiful observability
  • And still no structural way to prevent a legitimate, authenticated, well-meaning system from taking an action it was never meant to be allowed to take.


That is the runtime gap inside AI governance.
That gap is what I call Action Governance.


4. “But We Have IAM and Guardrails”


You do. You should.


They just aren’t doing the job you think they are.


IAM vs. Authority

  • IAM answers: “Who can sign in, and what can they reach?”
  • Authority answers: “What may they do, right now, in this context?”


If you give an agent an IAM role that can delete a production environment, then:


  • IAM is doing its job,
  • Security is doing its job,
  • But there is no separate place where someone encoded:


“No system may delete production without dual control and human accountability.”


That rule lives in a policy PDF or someone’s head, not in a runtime gate.
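
For illustration only, here is one way that sentence could move out of the PDF and into a runtime gate; the function and field names are hypothetical, not a shipping policy engine:

```python
# Hypothetical policy-as-code for the rule above; illustrative only.
def dual_control_for_production_delete(action: str, environment: str,
                                       approvals: set[str]) -> str:
    """'No system may delete production without dual control and human
    accountability', expressed as a check the gate can run, not prose."""
    if action == "DeleteEnvironment" and environment == "production":
        # Dual control: at least two distinct, accountable humans signed off.
        return "approve" if len(approvals) >= 2 else "refuse"
    return "not_applicable"  # some other rule governs this action
```

The syntax is beside the point; what matters is that the rule now has a place to execute before the action does, while IAM keeps doing its own job upstream.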


Guardrails vs. Action Governance

  • Guardrails shape what a model is allowed to say.
  • Action Governance decides what any actor is allowed to do.


You can have perfect prompt filtering and safe completions and still:


  • send the wrong filing to the wrong court,
  • approve a payment to the wrong account,
  • or overwrite a critical record,


…because nothing was governing the action itself – only the words leading up to it.


5. Action Governance in One Line


Here’s the definition we use with GCs and CISOs:


Action Governance =
the runtime discipline inside AI governance that decides whether a high-risk action may execute under your authority, and leaves behind evidence of that decision.


In practice, that means (a minimal code sketch follows the list):


1.  Every high-risk “file / send / approve / move money / change record” call is routed through a commit layer.

2.  That layer sees a minimal intent-to-act payload, not raw matter content:

  • Who is acting (human / agent, role, license)
  • Where they’re acting (matter, account, environment, jurisdiction)
  • What they’re trying to do (action type)
  • How fast / exposed (urgency, risk class)
  • Under which authority / consent (policy, client instructions, regulator rules)

3. It evaluates that payload against your own policy and identity sources of truth.

4. It returns: Approve, Refuse, or Supervised Override.

5. It emits a sealed decision artifact into your tenant-controlled audit store.
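
Here is a minimal sketch of that flow end to end, assuming hypothetical names (IntentToAct, evaluate) and a toy policy table; the sealing step is modeled as a simple content digest, not SEAL’s actual format:

```python
# A toy end-to-end pass through steps 1-5; names and fields are assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class IntentToAct:
    actor: str       # who is acting: human / agent, role, license
    context: str     # where: matter, account, environment, jurisdiction
    action: str      # what: file / send / approve / move money / change record
    risk_class: str  # how fast / exposed: urgency, risk class
    authority: str   # under which authority / consent; "" means none asserted

def evaluate(intent: IntentToAct, policy: dict[str, str]) -> dict:
    """Steps 3-5: check the minimal payload (never raw matter content)
    against the tenant's own policy, then seal the decision."""
    rule = policy.get(intent.action, "refuse")  # unknown action: fail closed
    if not intent.authority:
        verdict = "refuse"                      # no asserted authority
    elif rule == "dual_control":
        verdict = "supervised_override"         # route to a human gate
    else:
        verdict = rule                          # "approve" or "refuse"
    artifact = {"intent": asdict(intent), "verdict": verdict}
    payload = json.dumps(artifact, sort_keys=True).encode()
    artifact["digest"] = hashlib.sha256(payload).hexdigest()  # tamper-evident seal
    return artifact  # written to the tenant-controlled audit store

policy = {"SEND_FILING": "approve", "MOVE_MONEY": "dual_control"}
decision = evaluate(IntentToAct("atty-104", "Matter-55 / US-NY",
                                "MOVE_MONEY", "high", "client-instruction-7"),
                    policy)
assert decision["verdict"] == "supervised_override"
```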


Security is assumed.
Action Governance is layered on top.


6. Where SEAL Legal Runtime Sits in the Security Stack


For law firms and legal departments, SEAL Legal Runtime (Thinking OS™) implements that commit layer.


In a standard stack you already have:


  • Humans & Agents – attorneys, staff, AI tools, agentic workflows
  • IAM / Zero Trust – SSO, MFA, device posture, conditional access
  • Guardrails / Models – LLM gateways, content filters, safety layers
  • GRC / Policy Systems – ethics rules, risk posture, client instructions
  • DLP / Classification – sensitivity labels, destination controls


SEAL doesn’t replace any of that.


It sits after IAM and guardrails, before high-risk actions:


Humans / Agents → IAM → Models / Guardrails → SEAL Legal Runtime → High-Risk Actions
                                              (Pre-execution authority gate)


At that gate, SEAL:


  • Treats humans and AI agents the same way: untrusted operators trying to take actions.
  • Enforces firm-owned rules derived from your IdP, matter systems, and GRC tools.
  • Defaults fail-closed: missing authority, broken context, or ambiguous scope → Refusal, not best effort.
  • Produces sealed, tenant-owned artifacts for every governed decision (a minimal sketch follows this list).
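
To make that placement concrete, here is a hedged sketch of a gate wrapped around high-risk calls, using the fail-closed defaults above; govern, commit_gate, and the Intent fields are illustrative assumptions, not SEAL’s API:

```python
# Illustrative only: forcing high-risk calls through a pre-execution gate.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Intent:
    actor: str                 # human or agent: treated identically as operators
    action: str
    context: Optional[str]     # matter / environment; None models broken context
    authority: Optional[str]   # explicit grant from firm-owned rules, or None

def commit_gate(intent: Intent) -> str:
    # Fail-closed defaults: missing authority or broken context -> Refusal.
    if intent.authority is None or intent.context is None:
        return "refuse"
    return "approve"           # real evaluation would consult IdP / GRC rules

def govern(action_fn: Callable) -> Callable:
    """Wrap a high-risk action so it cannot execute without an approval."""
    def wrapper(intent: Intent, *args, **kwargs):
        if commit_gate(intent) != "approve":
            raise PermissionError("refused: no explicit, traceable authority")
        return action_fn(*args, **kwargs)  # only approved actions reach the API
    return wrapper
```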


Security still does what it does best:


  • Keeps intruders out,
  • Minimizes attack surface,
  • Protects secrets and models,
  • Detects and contains abuse.


SEAL makes sure that even the most trusted identities cannot take certain actions without explicit, traceable authority.


7. A Secure but Ungoverned Agent: Two Futures


Take a simple scenario:


“AI agent manages a cost-management dashboard in the cloud.”


Without a commit layer


  • Agent is given a role with rights to modify budgets and delete environments.
  • IAM: ✅ Authenticated as “Cost-Mgmt-Bot-Prod”.
  • Network: ✅ Inside the right VPC, correct security groups.
  • Guardrails: ✅ No obviously malicious prompts.
  • Under pressure, the agent chooses “Delete and recreate the environment” as a valid fix.
  • Result: everything executes.


Post-incident, you learn:

  • The agent was allowed to do exactly what its role permitted.
  • You have logs, but no single authoritative record of who was allowed to approve that action.


With a pre-execution authority gate (SEAL pattern)


  • Same agent, same IAM role, same environment.
  • Agent sends a DELETE_ENVIRONMENT request.
  • Before the API call executes, the gate receives:
      • Actor: Cost-Mgmt-Bot-Prod (agent)
      • Context: Production, Region X
      • Action: DeleteEnvironment
      • Authority anchors: no dual control, no maintenance window, destructive action in prod
  • Policy says: “No destructive actions in production without dual sign-off from [Role A] + [Role B].”
  • Gate verdict: Refuse.
  • No network call is made. No environment is deleted.
  • A sealed refusal artifact is written to your audit store (traced in the toy code below).
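
Traced through a toy version of that gate (hypothetical structure, not SEAL’s API), the same request looks like this:

```python
# The scenario above as a toy check; structure is illustrative.
request = {
    "actor": "Cost-Mgmt-Bot-Prod",                  # agent with a valid IAM role
    "context": {"environment": "production", "region": "Region X"},
    "action": "DeleteEnvironment",
    "approvals": [],                                # no [Role A] + [Role B] sign-off
}

def verdict(req: dict) -> str:
    destructive_in_prod = (req["action"] == "DeleteEnvironment"
                           and req["context"]["environment"] == "production")
    if destructive_in_prod and len(set(req["approvals"])) < 2:
        return "refuse"   # sealed refusal artifact written; no API call made
    return "approve"

assert verdict(request) == "refuse"   # IAM said yes; authority said no
```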


Security was necessary in both futures.
Action Governance was decisive in only one.


8. What CISOs Should Add to Their Mental Model


You don’t have to become “the AI governance person.”
You just need to insist on one thing:


Security without a commit layer is incomplete.


When you review AI programs, add three questions:


1. Where is the pre-execution authority gate in this design?

  • Show me the box that can hard-refuse a “file / send / approve / move” request.

2. Whose rules run in that gate?

  • Are we enforcing our policy and identity, or renting a vendor’s risk model?

3. Who owns the artifacts?

  • If everything goes sideways, can we independently prove who was allowed to do what, under which rules?


If the answer to all three is “the platform handles it” or “we’ll reconstruct it from logs,” you don’t have governance.


You have AI hope, secured by great IAM.


9. The Clean Story for Slides


If you need one slide for your board, use this:


AI Security:

  • Protects models, data, and infrastructure from attackers.
  • Controls who can reach what.
  • Evidence = security logs and incident reports.


AI Governance:

  • Sets the organization’s policies, constraints, and acceptable-use posture for AI.


Action Governance:

  • Enforces which actions may execute under your authority.
  • Decides what even trusted systems are allowed to do at runtime.
  • Evidence = sealed, client-owned decision artifacts.


Together:

  • Security keeps intruders out.
  • AI governance defines the rules.
  • Action Governance decides what your own systems may do.


All three matter. None replaces the others.



For law, SEAL Legal Runtime is that governance layer:
a refusal-first authority gate between your tools and high-risk legal actions, enforcing your rules and leaving behind evidence that belongs to you.


That’s the difference between:


“We think the system did the right thing.”


and


“Here is the artifact that shows who allowed this action, under which authority, at that moment in time.”


One is a story.
The other is something a regulator, a court, or an insurer can actually rely on.
