1. What You Are Worried About
If you insure organizations that are deploying AI into core operations, you are likely thinking in terms of:
- Frequency risk – how often AI will cause bad events: mis-filings, unauthorized changes, bad approvals, data exposures.
- Severity risk – when something goes wrong, how big is the claim? Regulatory investigations, class actions, systemic outages, reputational damage.
- Attribution and recoverability – when a loss occurs, can you actually reconstruct who did what, under whose authority, and whether controls were followed?
- Systemic and correlated exposure – many insureds may rely on the same models, platforms, or vendors, creating clustered loss scenarios and uncertainty in capital models.
Most insureds show you policies, model inventories, and security controls. Very few can show you evidence that those rules were actually enforced at the moment of action.
A sealed pre-execution governance runtime gives you exactly that: a control that reduces loss frequency and a data source that improves claims handling, reserving, and pricing.
2. What “Good” Looks Like From an Underwriting Perspective
When an insured is using AI to file, approve, or move things in the real world, you should look for:
A. A Gate in Front of High-Severity Actions
For workflows that can:
- submit filings to courts or regulators
- approve payments, credits, or changes in limits
- amend records that affect coverage or customer balances
…there is a pre-execution governance runtime that:
- runs before the action is executed
- checks who is acting, on what, under which authority and policy
- returns one of three decisions:
- ✅ Approve
- ❌ Refuse
- 🟧 Supervised Override (named human decision-maker)
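As a shape for this gate, consider the following minimal sketch. It is illustrative only: the names (`GateDecision`, `ActionRequest`, `evaluate_action`) are hypothetical and do not describe any particular vendor's API.

```python
from dataclasses import dataclass
from enum import Enum

class GateDecision(Enum):
    APPROVE = "approve"
    REFUSE = "refuse"
    SUPERVISED_OVERRIDE = "supervised_override"  # named human decision-maker required

@dataclass
class ActionRequest:
    actor: str           # who is acting: human, service, or AI agent
    action: str          # e.g. "submit_filing", "approve_payment"
    target: str          # the record, account, or destination acted on
    authority: str       # the role / matter / account scope being claimed
    policy_version: str  # policy set in force at evaluation time

def evaluate_action(request: ActionRequest) -> GateDecision:
    """Runs BEFORE execution; the underlying action proceeds only on APPROVE."""
    ...
```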
B. Explicit Controls to Contain Frequency
Controls that meaningfully reduce incident count:
- Fail-closed behavior – ambiguous rules or identities result in refusal, not “best-effort”.
- Role and matter scoping – AI and automation cannot act outside specific roles, matters, or accounts.
- Destination controls – data- and classification-aware: e.g., confidential data cannot be sent to public channels.
- Supervision triggers – certain actions always require senior or specialist sign-off.
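Continuing the hypothetical sketch above, these controls might combine in the gate's decision logic roughly as follows; `lookup_policy`, `identity_verified`, and the `policy` methods are placeholder names, not a real library:

```python
def evaluate_action(request: ActionRequest) -> GateDecision:
    policy = lookup_policy(request.policy_version)      # hypothetical helper
    if policy is None or not identity_verified(request.actor):
        return GateDecision.REFUSE                      # fail closed, never best-effort

    if request.action not in policy.allowed_actions(request.authority):
        return GateDecision.REFUSE                      # role and matter scoping

    if policy.is_confidential(request) and policy.is_public_destination(request.target):
        return GateDecision.REFUSE                      # destination control

    if policy.requires_signoff(request.action):
        return GateDecision.SUPERVISED_OVERRIDE         # supervision trigger

    return GateDecision.APPROVE
```

The ordering matters: identity and policy resolution come first, so any gap falls through to refusal rather than to a permissive default.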
C. Evidence You Can Rely On After a Loss
For every governed attempt (including those that were blocked), the runtime creates a sealed artifact:
- append-only, tamper-evident
- under the insured’s control
- with: actor, action, context, policy version, decision, timestamps, refusal / override codes
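One common way to realize "append-only, tamper-evident", though by no means the only one, is a hash chain over the decision records. A minimal sketch, assuming SHA-256 chaining and the hypothetical field names used earlier:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class SealedArtifact:
    actor: str
    action: str
    context: str
    policy_version: str
    decision: str          # approve / refuse / supervised_override
    timestamp: str         # ISO 8601
    code: str              # refusal or override code; empty for plain approvals
    prev_hash: str         # seal of the previous artifact -> append-only chain
    seal: str = ""

    def compute_seal(self) -> str:
        body = {k: v for k, v in asdict(self).items() if k != "seal"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_artifact(chain: list[SealedArtifact], artifact: SealedArtifact) -> None:
    artifact.prev_hash = chain[-1].seal if chain else "genesis"
    artifact.seal = artifact.compute_seal()
    chain.append(artifact)
```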
Those artifacts mean that after a claim you can answer:
- What exactly was the AI / system trying to do?
- Was this within the insured’s stated policies and risk appetite?
- Did they operate the runtime as warranted in the proposal / application?
This directly affects coverage analysis, subrogation potential, and future pricing.
3. How a Sealed Runtime Shows Up in Your Loss Ratio
Think about the runtime in terms of the classic frequency × severity decomposition.
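As a toy illustration of that decomposition (all figures hypothetical, and assuming, simplistically, that every refused attempt would otherwise have become an incident):

```python
# Illustrative only: expected loss as frequency x severity.
baseline_frequency = 12      # governed high-risk incidents per year, without a gate
refusal_rate = 0.25          # share of attempts the gate would have refused
avg_severity = 80_000        # average cost per incident (currency units)

expected_loss_without_gate = baseline_frequency * avg_severity
expected_loss_with_gate = baseline_frequency * (1 - refusal_rate) * avg_severity

print(expected_loss_without_gate)  # 960000
print(expected_loss_with_gate)     # 720000
```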
Frequency
The runtime reduces event frequency by:
- Blocking clearly out-of-scope actions (wrong role, wrong matter, wrong destination).
- Preventing obvious policy breaches (e.g., “no public LLM for client data”, “no approvals above £X without officer sign-off”).
- Forcing supervision where the risk curve is steep, turning potential losses into managed decisions.
You should expect to see refusal and override statistics that show these controls working.
Severity
When a loss happens anyway, the runtime:
- Provides clear evidence of whether controls were followed.
- Shortens claims investigation time and reduces disputes about facts.
- Supports regulatory cooperation, which can reduce fines, penalties, and reputational damage.
- Enables better root-cause analysis, improving controls and pricing on renewal.
Sealed artifacts convert “we think we had controls” into “here is the signed, time-stamped evidence of what the gate decided.”
4. Underwriting & Risk Engineering Checklist
You can adapt this as an internal checklist or underwriting guideline.
When AI systems can initiate or execute high-risk actions, underwriters SHOULD:
- Identify those workflows clearly (filings, approvals, transfers, limit changes, core record updates).
- Verify the presence of a pre-execution governance runtime in front of these workflows.
- Confirm fail-closed behavior and non-bypassability for governed actions.
- Review sample sealed artifacts for approvals, refusals, and overrides (a verification sketch follows this list).
- Assess metrics: volumes, refusal rates, override patterns, and review processes.
- Document these controls in the risk file and consider explicit wording in binders, endorsements, or conditions.
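For the artifact-review step, the check a risk engineer might run looks roughly like this, assuming the hypothetical hash-chained `SealedArtifact` format sketched in section 2:

```python
def verify_chain(chain: list[SealedArtifact]) -> bool:
    """True if every artifact's seal and back-link are intact."""
    prev = "genesis"
    for artifact in chain:
        if artifact.prev_hash != prev or artifact.seal != artifact.compute_seal():
            return False  # edits, insertions, or reordering break the chain
        prev = artifact.seal
    return True
```

A chain check of this kind evidences the integrity of what is present; the retention requirements in section 6 separately guard against truncation of the record.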
5. Due-Diligence Questions for Proposals, Renewals, and Risk Surveys
Use or adapt these for cyber, tech E&O, MPL, D&O supplements, or bespoke AI questionnaires.
A. Governance Architecture
1. Which systems in your environment can use AI or automation to submit filings, approve transactions, or change critical records?
2. Do you front these workflows with a pre-execution governance runtime that explicitly approves, refuses, or escalates each action?
3. Is that runtime technically separate from your AI models and application business logic?
4. Can any of those high-risk actions be executed via a path that bypasses the runtime? If yes, please describe.
B. Controls and Behavior
5. What is the default behavior when the runtime lacks clear policy or identity information?
6. How do you determine when supervised overrides are allowed, and who may authorize them?
7. Do you operate the runtime in observe-only mode prior to enforcing new policies or workflows?
8. How quickly can runtime policies be updated in response to losses, near misses, or regulatory changes?
C. Evidence, Metrics, and Oversight
9. Do you generate sealed, tamper-evident artifacts for every approve, refuse, and override decision?
10. Where are these artifacts stored, and who controls access?
11. Can you provide anonymized samples of these artifacts for underwriting review?
12. Which internal functions (risk, compliance, internal audit, ethics) review runtime metrics and artifacts, and how often?
D. Data Use and Vendor Risk
13. Is runtime decision data used for model training or cross-tenant analytics by any vendor?
14. If a third party operates the runtime, what contractual restrictions apply to its use of artifacts and metadata?
15. How do you monitor vendor changes that might affect the behavior of the runtime or its guarantees?
6. Example Policy & Endorsement Language (Copyable)
These are illustrative samples you can adapt with your own legal, product, and regulatory teams.
A. Proposal / Warranty Language (Underwriting)
AI Governance Runtime Warranty (Example)
The Applicant warrants that, for AI-enabled workflows capable of submitting filings, approving payments or credits, or amending customer or client records, the Applicant:
- operates a pre-execution governance runtime that evaluates who is acting, on what, under which authority, prior to execution;
- ensures that such runtime operates on a fail-closed basis, such that missing or ambiguous policy or identity information results in refusal; and
- records each approve, refuse, and supervised-override decision as a sealed, tamper-evident artifact retained in an Applicant-controlled environment.
B. Condition / Risk Management Clause
Pre-Execution Governance Runtime – Condition Precedent (Example)
It is a condition precedent to the Insurer’s liability in respect of any Claim arising from AI-enabled execution of High-Risk Actions that the Insured:
- maintains and uses a pre-execution governance runtime in front of such actions;
- does not knowingly permit the execution of High-Risk Actions through channels that bypass the runtime, except under documented contingency procedures; and
- retains sealed decision artifacts for such actions for no less than [X] years.
(Whether to use a strict “condition precedent” or a softer “warranty” / “risk management clause” is a product decision; the tone here is intentionally strong so you can weaken it as needed.)
C. Coverage Clarification / Positive Story
Recognized Control – Sealed AI Governance Runtime (Example)
In assessing this risk, the Insurer has taken into account the Insured’s deployment of a sealed AI governance runtime generating tamper-evident decision artifacts for AI-enabled High-Risk Actions.
Subject to all other terms and conditions, the presence of this control may be considered in:
- the assessment of any alleged failure of AI governance controls, and
- the evaluation of cooperation and remediation efforts following a covered event.
7. Claims & Actuarial Use of Runtime Artifacts
A sealed runtime is not only a front-end control; it is a back-end data source.
For claims teams, artifacts can:
- accelerate coverage investigations (what happened, when, and under which policy rule);
- clarify allocation of responsibility between insured, vendors, and third parties;
- support negotiations with regulators, potentially reducing penalties and remediation scope.
For actuarial and portfolio management teams, artifacts can:
- improve exposure modelling by showing actual frequencies of governed attempts, refusals, and overrides;
- highlight systemic risk where many insureds rely on similar architectures or vendors;
- support experience rating and risk segmentation based on real control performance, not just self-attested policies.
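Assuming the same hypothetical `SealedArtifact` format, the metrics above reduce to straightforward aggregation over artifact samples:

```python
from collections import Counter

def runtime_metrics(chain: list[SealedArtifact]) -> dict[str, float]:
    """Summarize governed attempts from a sample of sealed artifacts."""
    counts = Counter(artifact.decision for artifact in chain)
    total = len(chain)
    return {
        "governed_attempts": float(total),
        "refusal_rate": counts["refuse"] / total if total else 0.0,
        "override_rate": counts["supervised_override"] / total if total else 0.0,
    }
```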
Over time, a portfolio of insureds running sealed runtimes can provide a richer, more structured loss and near-miss dataset than traditional narrative incident reports.
8. FAQ – For Insurers & Reinsurers
Q1. Are we effectively endorsing a particular vendor if we reference this pattern?
A. No. The pattern is architectural. Any implementation that meets the guarantees (pre-execution decisions, fail-closed behavior, sealed artifacts, non-bypassability) can qualify, whether built in-house or procured.
Q2. Can smaller insureds realistically implement this?
A. Yes. They can start with a narrow scope (e.g., only regulatory filings or legal actions) and a simpler runtime, then expand that scope as AI use grows. The key is that critical actions pass through a gate that records decisions.
Q3. Does this remove moral hazard or guarantee good judgment?
A. No. The runtime enforces the policies the insured chooses and records what happened. It does not guarantee those policies are appropriate. But it significantly improves control execution and post-event evidence, which reduces uncertainty in both pricing and claims.
Q4. How does this interact with existing cyber, tech E&O, or MPL wordings?
A. The runtime is a control that cuts across lines. It can be referenced in endorsements or risk management clauses in cyber, tech E&O, MPL, and even D&O, wherever AI-driven decision-making could lead to claims.
Q5. Could sealed artifacts be discoverable in litigation?
A. Possibly, depending on jurisdiction and context. From a carrier’s perspective, having accurate, tamper-evident records is generally preferable to reconstructing events from incomplete logs and emails.
Q6. Does this change how we set reserves?
A. Over time, yes. Better evidence and faster investigation reduce claims handling uncertainty, which can justify more confident reserving and potentially lower capital charges for portfolios with strong runtime adoption.
This page is intended as open governance reference material for the insurance market.
You may quote or adapt this language in underwriting guidelines, broker questionnaires, policies, and endorsements, with attribution to “Thinking OS™” if helpful (but attribution is not required).
Version 1.0 – 2026.
This reference will be updated as claims experience, supervisory expectations, and industry practice evolve.