Is This Really AI Governance? 7 Questions Boards Can Ask in One Meeting

Patrick McFadden • March 10, 2026

Most “AI governance” decks sound impressive but share one blind spot:
Who is actually allowed to do what, where, under which authority, before anything executes?


These seven questions let a board test, in one meeting, whether the organization has real governance or just model settings and policies on paper.



1. Where is the pre-execution gate in our AI stack?


“Show me, on one diagram, where we decide whether an AI-driven action may run at all.”


  • Look for a clearly defined authority gate in front of high-risk actions (file, send, approve, move money, change records).
  • Good answer: “Here is the runtime that says approve / refuse / supervised before anything leaves the building.” (A sketch of such a gate follows this list.)
  • Red flag: “The system logs everything and we can always audit it later.”
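
To make this concrete, here is a minimal sketch of a pre-execution gate in Python. Everything in it is illustrative rather than any particular product’s API: `Verdict` mirrors the approve / refuse / supervised vocabulary above, and the policy checks stand in for real firm rules.

```python
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"
    REFUSE = "refuse"
    SUPERVISED = "supervised"  # may run, but only with a human in the loop

# Illustrative set of high-risk actions; a real gate loads these from policy.
HIGH_RISK_ACTIONS = {"send_funds", "file_document", "change_records"}

def authorize(actor: str, action: str, amount: float = 0.0) -> Verdict:
    """Decide whether an AI-driven action may run at all, before it executes."""
    if action not in HIGH_RISK_ACTIONS:
        return Verdict.APPROVE
    if action == "send_funds" and amount > 10_000:
        return Verdict.REFUSE  # over the firm's (illustrative) limit
    return Verdict.SUPERVISED  # high-risk but in-policy: human approval first

# The gate sits in front of execution: no verdict, no action.
verdict = authorize("agent-7", "send_funds", amount=25_000)
assert verdict is Verdict.REFUSE  # the transfer never leaves the building
```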



2. Whose rules run there? (Decision sovereignty)


“At that gate, whose rules are we enforcing – ours, or the vendor’s?”


  • Good answer: firm-owned policies (roles, risk posture, matter rules) are loaded into the gate; vendor logic is subordinate.
  • You want to hear: “If we change our policy tomorrow, the gate changes tomorrow.” (See the sketch after this list.)
  • Red flag: “The vendor’s generic ‘safety layer’ decides what’s allowed.”
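
One way to picture decision sovereignty: the gate evaluates a policy document the firm owns and can edit at will, and vendor logic never gets the final word. A minimal sketch, assuming a hypothetical JSON policy shape:

```python
import json

# Firm-owned policy, stored and versioned in the firm's environment.
# The shape is illustrative; a real policy set would cover roles,
# matters, and risk posture in far more detail.
FIRM_POLICY = json.loads("""
{
  "send_funds":    {"roles": ["partner", "controller"], "max_amount": 10000},
  "file_document": {"roles": ["attorney"]}
}
""")

def gate(role: str, action: str, amount: float = 0.0) -> str:
    rule = FIRM_POLICY.get(action)
    if rule is None:
        return "refuse"  # action not in our policy: fail closed
    if role not in rule["roles"]:
        return "refuse"
    if amount > rule.get("max_amount", float("inf")):
        return "refuse"
    return "approve"

# Edit FIRM_POLICY tomorrow and the gate's behavior changes tomorrow,
# with no vendor release cycle in between.
print(gate("controller", "send_funds", amount=500))  # approve
```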



3. Who owns the artifacts? (Evidence sovereignty)


“When the gate allows, refuses, or escalates, who owns the record of that decision?”


  • Good answer: client-owned, tamper-evident artifacts that record who tried to do what, under which policy set, and why it was approved or blocked.
  • Those artifacts should live in your environment and be exportable for regulators, courts, and insurers. (An illustrative record format follows this list.)
  • Red flag: “The logs live in the vendor’s SaaS; we can request reports if needed.”
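
As an illustration of what such an artifact could look like, here is a sketch of a tamper-evident decision record. The schema and field names are assumptions for the example; a production system would also cryptographically sign these records, not just hash them.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_decision(actor, action, policy_set, verdict, reason, prev_hash=""):
    """Build a tamper-evident decision artifact (illustrative schema).

    Chaining each record to the previous record's hash makes
    after-the-fact edits detectable anywhere in the sequence.
    """
    record = {
        "actor": actor,            # who tried to act
        "action": action,          # what they tried to do
        "policy_set": policy_set,  # which rules were in force
        "verdict": verdict,        # approve / refuse / supervised
        "reason": reason,          # why the gate decided as it did
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,         # link to the previous artifact
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

artifact = seal_decision(
    actor="agent-7", action="send_funds",
    policy_set="firm-policy-v42", verdict="refuse",
    reason="amount exceeds role limit",
)
```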



4. What is the worst-case failure mode: bypass or documented refusal?


“If something goes wrong, is it because the gate was bypassed, or because it refused and we ignored it?”



  • Healthy design fails closed: the default is “no,” with a documented refusal or supervised override.
  • You want to hear: “If the gate is down or uncertain, the action does not execute.” (The sketch below shows this fail-closed pattern.)
  • Red flag: “If the AI is confident, it just proceeds; we review afterwards.”
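
The fail-closed pattern is simple to express in code. In this sketch (hypothetical names, not a specific product), any error, timeout, or unrecognized output from the gate is treated as a refusal:

```python
def fail_closed(gate, actor: str, action: str, **kwargs) -> str:
    """Wrap a gate so that uncertainty or failure means 'no'."""
    try:
        verdict = gate(actor, action, **kwargs)
    except Exception:
        return "refuse"  # gate down or erroring: the action does not execute
    if verdict not in ("approve", "refuse", "supervised"):
        return "refuse"  # anything we don't recognize counts as uncertainty
    return verdict

def unreachable_gate(actor, action, **kwargs):
    raise TimeoutError("policy service unreachable")

# With the policy service down, the default answer is still "no".
assert fail_closed(unreachable_gate, "agent-7", "send_funds") == "refuse"
```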

5. Can we swap vendors and keep our authority model and evidence?


“If we change AI or workflow vendors, what survives?”


  • Good answer: your authority model (who may do what, where) and your decision artifacts are portable and remain intact if you change model providers or UI layers.
  • This is the test for true control-plane vs. vendor-plane separation. (See the sketch after this list.)
  • Red flag: “If we move off this platform, we lose the policies, logs, and approvals history.”
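
Portability follows naturally when the authority model is plain data the firm owns rather than configuration buried inside a vendor platform. A sketch, with an assumed (illustrative) shape:

```python
import json

# The authority model: who may do what, where. Expressed as vendor-neutral
# data, it can be exported from one platform and loaded into the next.
AUTHORITY_MODEL = {
    "version": "2026-03-01",
    "grants": [
        {"role": "partner", "action": "file_document", "systems": ["dms"]},
        {"role": "controller", "action": "send_funds", "systems": ["erp"],
         "max_amount": 10000},
    ],
}

# Swapping vendors then means re-pointing the new gate at the same file,
# while the approvals history (the sealed artifacts) stays in your archive.
with open("authority_model.json", "w") as f:
    json.dump(AUTHORITY_MODEL, f, indent=2)
```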



6. Who defines “high-risk actions,” and how often is that list reviewed?


“Show me the current list of actions that cannot run without explicit authorization.”


  • Good answer: a board-visible catalog of high-risk actions (by system, data class, and destination) tied to the gate – updated as the business changes.
  • You want clear ownership: “This committee updates the list; the gate enforces it.” (A sketch of such a catalog follows.)
  • Red flag: “Everything is treated the same; the model just has general guardrails.”
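
In practice the catalog can be a structured document that the committee owns and the gate reads, so review and enforcement share one source of truth. An illustrative sketch:

```python
# Board-visible catalog of high-risk actions, keyed by system, data class,
# and destination. Ownership and review cadence live with the entries.
HIGH_RISK_CATALOG = {
    "owner": "AI Risk Committee",
    "last_reviewed": "2026-03-01",
    "entries": [
        {"action": "send_funds", "system": "erp",
         "data_class": "financial", "destination": "external"},
        {"action": "export_matter", "system": "dms",
         "data_class": "client", "destination": "external"},
    ],
}

def requires_authorization(action: str) -> bool:
    """The gate consults the same catalog the committee maintains."""
    return any(e["action"] == action for e in HIGH_RISK_CATALOG["entries"])
```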



7. If a regulator or opposing counsel calls tomorrow, what single artifact do we send first?


“Walk me through the evidence we would rely on to prove that an AI-assisted action was authorized and in-policy.”


  • Good answer: a sealed decision artifact showing actor, action, matter, policy set, verdict (approve/refuse/supervised), and reasons.
  • The board should see that this artifact is standardized and repeatable, not manually assembled after the fact. (The sketch below shows how such a seal can be checked.)
  • Red flag: “We’d pull logs from several systems and reconstruct what happened.”
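
If artifacts are sealed the way question 3 sketches, proving integrity is mechanical rather than forensic: recompute the hash and compare. Again illustrative, reusing the assumed schema from that sketch:

```python
import hashlib
import json

def verify_artifact(record: dict) -> bool:
    """Recompute a sealed record's hash to show it has not been altered."""
    claimed = record.get("hash")
    body = {k: v for k, v in record.items() if k != "hash"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == claimed

# One standardized, self-verifying record is what you hand over first;
# reconstruction from scattered logs is exactly the red flag above.
```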

What “Yes” Looks Like


If your team can answer these seven questions crisply, with one diagram of the pre-execution authority gate and examples of client-owned decision artifacts, you’re in real AI governance territory.



If they can’t, you don’t have an AI problem. You have an action governance gap – and that’s where the real legal and fiduciary risk lives.


If you can’t point to a pre-execution authority gate and show who owns the artifacts it emits, you don’t have AI governance – you just have AI hope.
