For Regulators & Policymakers


Supervising AI with a Pre-Execution Governance Runtime

1. What You Are Worried About

If you supervise organizations that use AI to file, approve, or move things in the real world, you are likely concerned with:


  • Unauthorized or opaque decisions
    AI or automation taking actions that no one clearly authorized.
  • Lack of reconstructable evidence
    After an incident, there is no reliable record of who approved what, under which rules, and when.
  • Fragmented governance
    Policies exist on paper, but there is no proof that they were applied at the moment of action.
  • Cross-border and cross-vendor risk
    Sensitive data, models, and workflows span multiple vendors and jurisdictions.


A sealed pre-execution governance runtime gives you a concrete pattern to point to — one that lets firms automate safely while preserving authority, accountability, and evidence.

2. What “Good” Looks Like

Supervisory Checklist for High-Risk AI Workflows


For AI systems that can submit filings, approve decisions, or move assets, regulators can reasonably expect the following architecture (a minimal wiring sketch follows the layer list):


A. Clear Separation of Layers


1. Model & Data Layer (Formation Stack)

  • Data controls, DLP, classification
  • Approved AI endpoints and LLM gateways
  • Model risk management and monitoring

2. Execution Governance Layer (Runtime Gate)

  • A dedicated pre-execution governance runtime in front of high-risk actions
  • Approve / refuse / supervised-override decisions
  • Sealed, tamper-evident artifacts for every governed attempt

3. Audit & Oversight Layer

  • Tenant-owned artifact store
  • Internal risk, compliance, and audit review
  • Reporting to supervisors and, where appropriate, to courts and counterparties
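
To make the layer separation concrete, the sketch below shows one way a firm might declare, in a single reviewable place, which tenant-owned systems feed the runtime gate and where its evidence lands. Every name and key here is a hypothetical placeholder, not a reference to any product or a required schema.

```python
# Hypothetical wiring for the three layers above; all names are
# illustrative placeholders, not product references.
RUNTIME_INTEGRATIONS = {
    # Formation stack inputs: tenant-owned policy, identity, and context
    "policy_source":    "grc.firm.internal",      # risk policies and versions
    "identity_source":  "idp.firm.internal",      # who is acting, under what role
    "context_source":   "matters.firm.internal",  # matter/case context
    # Runtime gate: action types that must pass the gate before execution
    "governed_actions": ["file", "submit", "approve", "move_funds"],
    # Audit layer: tenant-owned, tamper-evident evidence store
    "artifact_store":   "audit-vault.firm.internal",
}
```

A single declaration like this aids supervision: an examiner can see at a glance which systems supply policy, identity, and context, and which action types are gated.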


B. Runtime Behaviors You Should See


A compliant governance runtime for high-risk AI SHOULD (a minimal decision sketch follows this list):


  • Run before irreversible actions (file, submit, approve, execute, move funds).
  • Decide based on who is acting, on what, under which authority, at what urgency, with what consent.
  • Fail closed by default when policies, identity, or context are missing or inconsistent.
  • Return one of three explicit outcomes for each request:
      • ✅ Approve
      • ❌ Refuse
      • 🟧 Supervised Override (with named human decision-maker)
  • Generate a sealed, tenant-controlled artifact for each outcome.
  • Be non-bypassable for workflows that claim to be governed by it.
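
As an illustration only, here is a minimal Python sketch of that decision shape. Everything in it (ActionRequest, govern, policy_allows) is a hypothetical name, not a reference to any specific runtime; the point is the fail-closed ordering: missing context refuses first, policy approves, and only a named human can override.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Outcome(Enum):
    APPROVE = "approve"
    REFUSE = "refuse"
    SUPERVISED_OVERRIDE = "supervised_override"

@dataclass
class ActionRequest:
    actor_id: Optional[str]        # who is acting
    action_type: Optional[str]     # e.g., "file", "approve", "move_funds"
    target: Optional[str]          # what the action operates on
    authority: Optional[str]       # mandate or role invoked
    policy_version: Optional[str]  # policy snapshot in force

def govern(request: ActionRequest,
           policy_allows: bool,
           override_approver: Optional[str] = None) -> Outcome:
    """Evaluate one high-risk action BEFORE it executes."""
    # Fail closed: any missing identity, authority, or policy context
    # results in refusal, never best-effort execution.
    required = (request.actor_id, request.action_type, request.target,
                request.authority, request.policy_version)
    if any(field is None for field in required):
        return Outcome.REFUSE

    if policy_allows:
        return Outcome.APPROVE

    # Overrides must name a human decision-maker, never "system".
    if override_approver:
        return Outcome.SUPERVISED_OVERRIDE

    return Outcome.REFUSE
```

In an observe-only rollout, the caller would log the returned outcome without blocking the action; under full enforcement, anything other than APPROVE (or a properly attributed override) stops execution.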


C. Evidence You Should Be Able to Request


From a regulated firm using this pattern, you should be able to obtain:


  • A description of the runtime architecture and its integration points (GRC, IdP, matter/case, payments, etc.).
  • Samples of sealed artifacts (appropriately anonymized) showing approve, refuse, and override decisions.
  • Policy mappings: which risk policies, roles, and classifications are enforced by the runtime.
  • Metrics and summaries: volumes of approvals, refusals, and overrides over time; trends and outliers.

3. Supervisory Expectations Checklist

You can adapt this as an internal or public checklist.


Regulators SHOULD EXPECT that regulated entities:


  1. Identify high-risk AI workflows where AI or automation can submit filings, approve decisions, or move client or customer assets.
  2. Front those workflows with a pre-execution governance runtime that is separate from the AI model and from vendor business logic.
  3. Use client-owned policy, identity, and matter data as inputs to the runtime, rather than bespoke configurations hard-coded into applications.
  4. Operate the runtime on a fail-closed basis, so that ambiguity in rules or identity results in refusal, not best-effort execution.
  5. Record every approve, refuse, and override as a sealed, tamper-evident artifact under the firm’s control (see the sealing sketch after this list).
  6. Store artifacts in a tenant-owned audit environment, under the firm’s retention, access, and jurisdiction rules.
  7. Run new AI workflows initially in “observe-only” mode, to identify and address governance gaps before full enforcement.
  8. Periodically review artifacts and metrics through internal audit, risk, or ethics functions, and make those reviews available to supervisors on request.
  9. Ensure runtime decision data is not used for cross-tenant model training without explicit consent and appropriate safeguards.
  10. Maintain clear accountability for policy content: business leaders, not vendors, own the rules that the runtime enforces.
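
To illustrate item 5, the sketch below shows one minimal way to make an artifact tamper-evident: hash each decision record and chain it to the previous record’s seal, so that altering any past record breaks every later hash. The field names and the use of a bare SHA-256 chain are illustrative assumptions; a production system would typically add digital signatures and external anchoring.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_artifact(decision: dict, prev_seal: str) -> dict:
    """Seal one governed decision into a tamper-evident, chained record.

    `decision` is assumed to carry at least: actor, action_type,
    policy_version, outcome, and (for overrides) override_approver.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": decision["actor"],
        "action_type": decision["action_type"],
        "policy_version": decision["policy_version"],
        "outcome": decision["outcome"],                   # approve / refuse / override
        "override_approver": decision.get("override_approver"),
        "prev_seal": prev_seal,                           # link to prior artifact
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hashlib.sha256(payload).hexdigest()  # tamper-evident hash
    return record
```

The first artifact in a tenant’s store would be sealed against a fixed genesis value (e.g., "GENESIS"); each subsequent call passes the previous record’s seal.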

4. Questions to Ask Firms and Vendors

You can use or adapt the following yes/no and open-ended questions in supervisory reviews, examinations, and RFPs.


A. Architecture & Governance

1. Do you front all high-risk AI workflows with a pre-execution governance runtime that evaluates actions before they are executed?

2. Is this runtime architecturally separate from the AI models and from application business logic?

3. Which systems supply policy, identity, and matter context to the runtime (e.g., GRC, IdP, case/matter management, payments core)?

4. Can high-risk actions be executed through any path that bypasses this runtime? If yes, describe those paths.


B. Behavior & Controls

5. Does the runtime make explicit approve / refuse / supervised-override decisions for each high-risk action?

6. What is the default behavior when policy information is missing, identity is ambiguous, or context is incomplete?

7. How do you ensure that supervised overrides are tied to a named decision-maker, not an anonymous “system”?

8. Do you support an observe-only mode, and have you used it to test policies before full enforcement?


C. Evidence & Artifacts

9. Do you generate sealed, tamper-evident artifacts for every governed decision?

10. Where are these artifacts stored, and who controls access (tenant vs vendor)?

11. Can you provide example artifacts (appropriately anonymized) showing approvals, refusals, and overrides for recent periods?

12. How long are artifacts retained, and how does this align with legal, regulatory, and internal retention requirements?


D. Data Use & Cross-Tenant Risk

13. Is runtime decision data (including artifacts and logs) used for training or tuning any models?

14. Is runtime decision data combined across tenants for analytics? If so, under what legal basis and with what safeguards?

15. If a third-party vendor operates the runtime, what contractual limits exist on their use of artifact data?

5. Model Language You Can Reuse

Use these as templates for discussion papers, supervisory statements, or rules.
They are intentionally neutral and architecture-focused.


A. Supervisory Statement (Example)


High-Risk AI Governance – Pre-Execution Runtime Expectation


Where AI systems or automated workflows are capable of initiating, approving, or executing high-risk actions (including but not limited to regulatory filings, client instructions, payments, and record changes), supervised entities are expected to:


  1. Front such workflows with a pre-execution governance runtime that:
      • evaluates who is acting, on what, under which authority and risk policies, prior to execution;
      • issues explicit approve, refuse, or supervised-override decisions; and
      • records each decision as a sealed, tamper-evident artifact under the entity’s control; and

  2. Demonstrate, upon request, that this runtime operates on a fail-closed basis and is not bypassed for governed workflows.


B. Rule / Guideline Text (Example)


[X]. Pre-Execution Governance Runtime for AI-Enabled Actions


(1) A regulated entity that deploys AI or automated systems capable of initiating or executing material actions affecting clients, counterparties, or markets SHALL implement a pre-execution governance runtime for those actions.


(2) The runtime SHALL:


  • (a) rely on entity-owned policy, identity, and contextual information, including applicable laws, mandates, and risk limits;
  • (b) provide explicit approval, refusal, or supervised-override outcomes for each governed action;
  • (c) default to refusal where policy, identity, or context is incomplete or inconsistent; and
  • (d) generate sealed, tamper-evident decision artifacts that record, at a minimum, the time, actor, action type, applicable policy version, and outcome.


(3) Regulated entities SHALL ensure that high-risk actions cannot be executed through channels that bypass the governance runtime, except under documented contingency procedures approved by the entity’s governing body.


(4) Decision artifacts produced by the runtime SHALL be stored in an entity-controlled environment, under retention and access rules that support regulatory investigations, audits, and enforcement.


C. Examination / On-Site Procedure (Example)


Objective: Assess whether the firm’s use of AI in high-risk workflows is governed by an effective pre-execution runtime and whether decision evidence is available and reliable.


Steps:


  1. Identify at least three AI-assisted workflows that can submit filings, approve transactions, or change critical records.
  2. For each workflow, determine whether a pre-execution runtime gate is present and describe how it is invoked.
  3. Obtain and review a sample of sealed decision artifacts for a recent period, including approvals, refusals, and supervised overrides.
  4. Verify that artifacts contain sufficient metadata (actor, action, matter/context, policy version, outcome, timestamps) to support supervisory review (a verification sketch follows these steps).
  5. Assess whether alternate technical paths exist that would allow actions to execute without passing through the runtime.


  6. Evaluate the firm’s process for reviewing artifacts and metrics (e.g., by risk, compliance, audit, or ethics committees) and how identified issues are remediated.
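
For step 4, an examiner (or the firm’s own audit function) can mechanically check a sample of artifacts. Assuming the hash-chained format sketched in Section 3, a verification routine might look like the following; it is a sketch under those assumptions, not a prescribed tool.

```python
import hashlib
import json

def verify_chain(artifacts: list[dict], genesis: str = "GENESIS") -> bool:
    """Return True if every seal recomputes and every chain link holds."""
    prev_seal = genesis
    for record in artifacts:                  # artifacts in original order
        body = {k: v for k, v in record.items() if k != "seal"}
        if body.get("prev_seal") != prev_seal:
            return False                      # chain link broken
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["seal"]:
            return False                      # record altered after sealing
        prev_seal = record["seal"]
    return True
```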

6. How This Pattern Fits Existing Regulatory Frameworks

This architecture is designed to work with, not replace, existing expectations around:



  • Model risk management – the runtime does not judge model quality; it governs what actions may execute despite model outputs.
  • Operational resilience – fail-closed behavior and sealed artifacts support incident response, root-cause analysis, and lessons learned.
  • Accountability and senior management responsibility – policies enforced by the runtime remain authored and owned by the firm’s leadership.
  • Data protection and confidentiality – the runtime can operate primarily on labels and metadata, minimizing exposure of full client content.


In other words: the pre-execution governance runtime is the missing layer that connects model governance to real-world consequences.

7. FAQ – For Regulators & Policymakers

Q1.  Does this force all firms to use the same vendor or platform?

A. No. The pattern is architectural. Firms may implement it with internal systems, external vendors, or hybrids, provided the behaviors and guarantees described here are met.

Q2.  Are we prescribing technology, or just outcomes?

A. The emphasis is on outcomes: pre-execution decisions, fail-closed behavior, non-bypassability, and sealed evidence. Technology choices remain with firms, subject to these outcomes.

Q3.  How does this help with cross-border supervision?

A. Sealed artifacts provide a portable, interpretable record of decisions that can be shared with multiple authorities, without exposing full client content or proprietary models.

Q4.  Can smaller firms implement this, or is it only for large institutions?

A. The pattern scales down. Smaller firms may start with a simpler runtime and limited scope (e.g., only filings to courts or regulators) and expand as AI use grows.

Q5.  What if the policies encoded in the runtime are themselves flawed?

A. The runtime does not remove the need for sound policies and professional judgment. It makes those policies explicit and enforceable, and it records how they were applied, which improves supervisory visibility when policies need to change.

This page is intended as open governance reference material. You may quote or adapt it; attribution to “Thinking OS™” is appreciated but not required.

Version 1.0 – This reference will be updated as supervisory expectations and industry practice evolve.