
Your AI Policy Isn't Covered. And Your SOC 2 Won't Save You.

AI AND THE COVERAGE GAP SERIES  |  Part 1  |  By Holly Hartman, Fractional CAIO


 


The insurance market changed. Most organizations haven't noticed yet. Here's what's actually happening — and why the compliance you already paid for doesn't cover the risk you're already carrying.

 

A Lesson From the Recent Past


Imagine a mid-market company. Solid organization. Well-run IT department. Cyber insurance policy in place — they had gone through the underwriting process, answered the questions, attested to their controls. They felt protected.

Then came the ransomware attack. Their systems were locked. Their data was compromised. And they filed their claim expecting the policy they had paid for to respond.


It didn't. Not fully. The carrier invoked a misrepresentation clause. The organization had attested to 24/7 monitoring capabilities they could not actually prove when the claim arrived. No documentation. No audit trail. No evidence the control existed as described.

 

This is not a hypothetical. In 2025, 34% of cyber insurance claim denials were for "failure to maintain stated security controls" — carriers auditing logs post-claim and comparing them to underwriting attestations. [1] And 82% of denied cyber claims had one thing in common: missing or partial MFA implementation that had been attested to at underwriting. [2]

 

The coverage gap wasn't created at claim time. It was created at underwriting — the moment the organization attested to controls it could not document.

That was cyber insurance. Now pay attention, because AI insurance is following the identical path, and your organization is currently sitting exactly where that company was before the breach.


 

The Cyber-to-AI Underwriting Parallel

The evolution of cyber insurance underwriting is the clearest preview available of where AI insurance is heading. Here is the documented timeline:

 

CYBER INSURANCE TIMELINE: FROM CHECKBOX TO GOVERNANCE

 

2017–2020: "Do you have cybersecurity insurance elsewhere?" Basic yes/no questions on data encryption and incident history.

2021–2023: After Colonial Pipeline and SolarWinds, carriers shifted to technical supplements requiring MFA proof, EDR deployment, patch management evidence, and incident response plans.

2023–2025: Full governance documentation as baseline. SBOMs, 24/7 SOC logs, phishing simulation results, NIST CSF alignment. By 2025, 60% of underwriters mandated pre-coverage cybersecurity assessments. [3]

2025 (AI): "Do you use AI?" The checkbox era. Simple yes/no on applications. Most organizations check yes and move on.

2026 (AI, now): ISO exclusions CG 40 47 and CG 40 48 effective January 2026. 70%+ of renewals now receiving AI supplements. [4] Carriers asking about governance documentation, vendor AI contracts, human oversight. 20–50% surcharges for AI-exposed risks. [5]

2027–2028 (AI): Where cyber was in 2023. The denial wave. Organizations that attested to AI governance they cannot document will discover the gap at claim time, when it is too late. [6]

 

The industry confirmation of this trajectory is direct. As one broker guidance publication put it in April 2026: "Cyber went from checkbox to NIST CSF 2.0 governance demands by 2024. AI follows identically — expect AI supplements demanding policies and audits by 2026." [7]


Insurance Business Magazine's cyber risk team was equally direct: "AI underwriting mirrors cyber's shift: simple questions to continuous monitoring and governance evidence." [8]


 

Why AI Governance Is Not Just a Bigger Version of Cyber

Here is where most commentary on this topic stops — and where your organization needs to go further.


The cyber parallel is instructive but incomplete. AI governance is not a larger version of the same problem. It is a categorically different problem. Understanding that difference is what determines whether your governance response will actually protect you.

 

Cybersecurity was a perimeter problem.

The threat model was external. Bad actors trying to get in. Data coming through vulnerable systems. The governance response was logical: protect the perimeter, document your controls, prove you locked the doors. MFA. EDR. Patch management. 24/7 monitoring. All of it oriented around keeping threats out and detecting intrusion.


The organizational footprint was relatively contained. IT owned it. The CISO was the accountable seat. HR, Finance, Legal, and Marketing were stakeholders. They received reports. They did not run the firewall.

 

AI governance is a whole-organization problem.

The threat model is simultaneously internal and external — and the harm can originate from inside your own approved tools, your own employees, your own decisions, and your own vendor relationships. Nobody is breaking in. The risk is already inside. It came in through the software you bought and the tools your employees are using right now.


And unlike cyber, AI touches every function simultaneously:

 

  •  HR: AI-assisted hiring decisions — discrimination liability, EEOC exposure, active litigation (Mobley v. Workday)

  •  Finance: AI-generated forecasting and reporting — SOX implications, D&O personal liability exposure

  •  Legal: AI-assisted contract review and legal research — malpractice risk, privilege issues, confidentiality exposure

  •  Marketing: AI-generated content — IP liability, defamation exposure, brand risk from hallucinated outputs

  •  Operations: agentic AI making autonomous decisions — vendor liability, accountability gaps, cascading failure risk

  •  The Board: AI oversight obligations — fiduciary duty exposure, SEC disclosure requirements, personal D&O liability

 

ISACA confirmed this directly in 2025: "AI governance demands that privacy, cybersecurity, and legal work in lockstep" across functions — not inside a single technical owner. [9] The EU AI Act and NIST AI RMF both mandate cross-functional accountability explicitly.


This is why cyber governance needed one strong perimeter. AI governance needs accountability woven into every seat simultaneously.


 

The SOC 2 Conversation You Need to Have

This week, two separate organizations told me they felt protected because they had SOC 2 certification. Both were surprised by what followed.


SOC 2 is a legitimate and valuable compliance framework. A SOC 2 Type II certification means your data security controls have been independently audited and verified over time. It is a meaningful signal of operational maturity and it matters to your customers and partners.


It does not cover your AI governance risk. Those are different things, and conflating them is one of the most common — and most expensive — assumptions organizations are making right now.

 

What a named auditor confirmed in March 2026:

"AI Oversight Missing in SOC 2 Reports... Model governance isn't evaluated. Data processing through AI isn't assessed. No AI-specific controls are tested."

— Sanjana K., Auditor (LinkedIn, March 26, 2026) [10]

 

The Replicant AI security team was equally direct: "A company could be SOC 2 compliant yet remain open to prompt injection... SOC 2 predates generative AI." [11]


A publication from audit firm The Mavericks confirmed the gap explicitly: "Organizations can be fully SOC 2 Type II certified yet have no AI acceptable use policy or human-in-the-loop process." [12]


This means an organization can have SOC 2 Type II certification and still be completely unable to answer the AI underwriting questions carriers are now asking at renewal. The two frameworks govern different things.


 

Here is what SOC 2 does not cover:

 

SOC 2 covers:

  ✓  Data security and access controls
  ✓  System availability and uptime
  ✓  Processing integrity for data systems
  ✓  Confidentiality of stored data
  ✓  Privacy controls for personal data
  ✓  Incident response for systems
  ✓  Change management processes
  ✓  Identity and access management

SOC 2 does not cover:

  ✗  AI bias detection and fairness testing
  ✗  Model drift monitoring and performance degradation
  ✗  AI explainability and output traceability
  ✗  Human-in-the-loop oversight requirements
  ✗  AI acceptable use policy
  ✗  AI governance committee and RACI
  ✗  AI vendor contract liability clauses
  ✗  AI systems inventory across all business functions

 

ISO 42001 — the AI Management System standard — begins to fill this gap. It explicitly addresses algorithmic fairness, model lifecycle ethics, explainability requirements, and internal AI harms that SOC 2 was never designed to govern. [13]

But as auditing firm LBMC confirmed: "Including AI in SOC 2 decreases ISO 42001 burden, but unique risks demand both." [14] The FWS AI Governance Framework is cross-validated against ISO 42001, NIST AI RMF 1.0, the EU AI Act, and the NAIC AI Model Bulletin precisely because no single existing framework covers the full scope of the problem.


 

Why This Required a New Seat at the Table

There is structural evidence that AI governance is categorically different from what came before it — and that evidence is the creation of the Chief AI Officer role.


Cybersecurity needed a CISO. That role was sufficient because the threat was contained enough that one technical owner could govern it across the organization. The CISO built the perimeter. Everyone else received the reports.

AI required a new role to be invented. As of 2025, 26% of organizations have appointed a CAIO, up from 11% in 2023. [15] That adoption trajectory mirrors the earlier rise of Chief Digital Officers — except the problem being solved is more complex and more enduring.


The CAIO exists because no existing seat had the cross-functional fluency to govern AI across strategy, workforce, technology, ethics, and risk simultaneously. The CTO is too technical. The CHRO is too people-focused. The CIO is too infrastructure-focused. The CEO does not have the bandwidth. AI governance needed a new kind of owner — one that sits at the intersection of every function that AI now touches.


And it does not stop at one seat. Governing AI across an organization requires accountability at every function simultaneously. That is why the FWS AI Governance Committee structure seats nine roles at the table — CEO, CFO, COO, CTO/CISO, CHRO, General Counsel, a business unit owner, a frontline representative, and the CAIO as committee chair. Each seat owns a dimension of AI risk that no other seat can cover.

 

Cybersecurity governance asked: Is the organization protected from external threats?

AI governance asks: Is every decision the organization makes — and every output it produces — being made responsibly, documented, and defensible?

That is not a larger version of the same question. It is a categorically different question. And the insurance market is about to price the difference.

 

The Numbers Behind the Risk

 

  •  Cyber claims denied — missing MFA: 82% (IntelTech, Jan 2026)

  •  2026 renewals with AI supplements: 70%+ (MoneyGeek, 2026)

  •  Orgs lacking AI governance policies: 63% (The Next Web, Mar 2026)

  •  Orgs with mature AI governance committees: 12% (Trussed, Apr 2026)

  •  Premium surcharges for AI-exposed risks: 20–50% (MoneyGeek, 2026)

  •  Cyber denials — failure to maintain controls: 34% (MedhaCloud, Mar 2026)

 

 

Three Questions. Three Audiences.

Before your next renewal, every seat at your leadership table should be able to answer these:

 

For the CEO/CFO: When your carrier asks for AI governance documentation at your next renewal, what will you hand them?

For the CTO/CISO/General Counsel: Does your SOC 2 certification cover AI bias, hallucinations, or model drift — and have you asked your auditor that question directly?

For the CHRO/COO: Which AI tools are your employees using right now, and could you prove to a carrier that human oversight exists for every one of them?

 

 

Three Things to Do This Week

  •  Ask your insurance broker one question at your next renewal conversation: "What AI governance documentation does our carrier require — and does our current SOC 2 certification satisfy it?" The answer will tell you exactly where you stand.

 

  •  Pull your last renewal application and find the AI questions. If your carrier sent an AI supplement, read every question and ask honestly: could we produce documentation for every attestation we made? If the answer is no for any question, you have a gap.

 

  •  Map your AI use against your current governance documentation. Which functions in your organization are using AI tools right now? HR? Finance? Marketing? For each one, ask: do we have a policy, a named owner, and a response plan? If not, that function is ungoverned — and ungoverned AI is exactly what carriers are writing exclusions to avoid covering.
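The third step above — mapping AI use against policy, owner, and response plan — can be sketched as a simple inventory check. This is an illustrative sketch only, not an FWS tool or a carrier requirement; every function name, field, and tool below is a hypothetical example.

```python
# Illustrative sketch: check each function's AI use against three governance
# questions (policy? named owner? response plan?) and flag ungoverned use.
# All tool names and data are hypothetical.

def find_governance_gaps(inventory):
    """Return (function, tool, missing items) for every ungoverned AI use."""
    required = ("policy", "owner", "response_plan")
    gaps = []
    for entry in inventory:
        missing = [field for field in required if not entry.get(field)]
        if missing:
            gaps.append((entry["function"], entry["tool"], missing))
    return gaps

inventory = [
    {"function": "HR", "tool": "resume screener",
     "policy": True, "owner": "CHRO", "response_plan": False},
    {"function": "Marketing", "tool": "content generator",
     "policy": False, "owner": None, "response_plan": False},
    {"function": "Finance", "tool": "forecasting model",
     "policy": True, "owner": "CFO", "response_plan": True},
]

for function, tool, missing in find_governance_gaps(inventory):
    print(f"{function} / {tool}: missing {', '.join(missing)}")
```

Any row the check flags is, in the language of this series, an ungoverned function — exactly what carriers are writing exclusions to avoid covering.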


 

Governance Is the Organization's Job

The insurance market is not waiting for organizations to get ready. Carriers are filing exclusions, attaching supplements, and in some cases simply declining to cover AI-related claims altogether. The organizations that will fare best in the next wave of AI claim disputes are the ones that treated governance documentation as an operational requirement — not a compliance afterthought.


SOC 2 is not enough. A CISO is not enough. Good intentions are not enough. What carriers are now asking for — and what courts and regulators will eventually require — is documented evidence that your organization knows what AI it is running, who is accountable for it, how outputs are reviewed, and what happens when something goes wrong.


That documentation does not happen on its own. It requires a structure, a committee, a set of owned documents, and someone at the table whose job it is to make sure every seat is doing its part.


Governance is the organization's job. The coverage gap is the cost of skipping it.

 

WHAT'S YOUR NEXT STEP?

 

If you want to see where your organization stands before your next renewal, the FWS AI Governance Readiness Assessment takes 4 minutes and shows you exactly which gaps to prioritize first.


Or join Holly live for the free AI Governance Webinar: Does Your Organization Have the Right Seats at the AI Governance Table? 60 minutes. No filler. A complete framework — plus a free 7-page Governance Committee Charter — yours to keep whether or not you ever work with FWS.


This is Part 1 of the AI and the Coverage Gap series. Every post goes deeper on one layer of the AI insurance and governance gap your organization is carrying right now.

Read the full series at futureworkforcesystems.com/blog


Ready to go deeper now? A 30-minute conversation with Holly won't cost you anything — and it might save you the thing that costs everything.



 

About the Author

Holly Hartman is the founder of Future Workforce Systems (FWS) and serves as a Fractional Chief AI Officer for mid-to-large organizations building AI governance infrastructure. FWS helps organizations move from AI-anxious to AI-ready — and from guardrails to governance — always through an Ethical AI lens. Because how we adopt AI matters as much as whether we adopt it. Learn more at futureworkforcesystems.com.

 

AI and the Coverage Gap Series  |  Part 1 of 6    Next: Part 2 — The New Exclusions Your Policy Probably Already Has

 

 

FREQUENTLY ASKED QUESTIONS

 

Q1:  What is the AI insurance coverage gap and why does it matter in 2026?

The AI insurance coverage gap refers to the growing disconnect between what organizations assume their existing insurance policies cover and what carriers will actually pay when an AI-related claim occurs. In 2026, commercial carriers including AIG, Great American, and W.R. Berkley introduced new ISO exclusions (CG 40 47 and CG 40 48) that explicitly remove coverage for generative AI-related harms from standard commercial general liability policies. More than 70% of 2026 renewals now include AI-specific underwriting supplements. Organizations that cannot produce documented AI governance programs — including an acceptable use policy, a governance committee, and evidence of human oversight — face higher premiums, limited coverage, or outright exclusions. The gap matters because most organizations do not know it exists until after a claim is filed.

 

Q2:  Does SOC 2 Type II certification cover AI governance and AI insurance requirements?

No. SOC 2 Type II certification covers data security, system availability, processing integrity, confidentiality, and privacy controls. It was designed for data and systems governance — not AI behavior, AI outputs, or AI decision-making. A named auditor confirmed in March 2026: "AI Oversight Missing in SOC 2 Reports... Model governance isn't evaluated. Data processing through AI isn't assessed. No AI-specific controls are tested." An organization can hold full SOC 2 Type II certification and still have no AI acceptable use policy, no AI governance committee, no human-in-the-loop oversight process, and no AI incident response protocol. Insurance carriers are now asking for documentation that SOC 2 does not produce. ISO 42001 — the AI Management System standard — addresses the governance gaps SOC 2 leaves uncovered.

 

Q3:  How is AI insurance underwriting different from cybersecurity insurance underwriting?

Cyber insurance underwriting evolved from simple yes/no questions (pre-2020) to multi-page technical supplements requiring MFA proof, EDR deployment, and governance documentation (2023–2025). AI insurance underwriting is following the identical path — currently in the early supplement stage — with carriers asking about AI tools used, human oversight processes, vendor AI contracts, and governance documentation. The critical difference is scope. Cyber insurance addressed external threats to data and systems, owned primarily by the CISO. AI governance covers internally generated risk across every business function simultaneously — HR (bias in hiring), Finance (AI in reporting), Legal (contract review), Marketing (generated content), Operations (agentic decisions), and the Board (oversight obligations). Cyber needed one owner. AI requires nine seats at the governance table.

 

Q4:  What AI governance documentation do insurance carriers require at renewal in 2026?

Carriers are now requesting documentation across five categories during AI underwriting: (1) AI usage and scope — which tools are deployed, which functions are affected, what percentage of operations involve AI; (2) human oversight — who reviews AI outputs before they are acted on, what the sign-off process is; (3) governance documentation — whether the organization has an AI acceptable use policy, a governance committee, a charter, and a RACI matrix; (4) third-party and vendor AI — which external AI providers are used, what contracts and indemnity clauses exist; and (5) ongoing disclosure obligations — what the organization will report to the carrier if AI use materially changes during the policy period. Organizations without these documents face higher premiums, sublimits, or exclusions. The FWS 9-document AI governance suite is designed specifically to satisfy carrier documentation requirements.
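The five categories above amount to a documentation checklist, and some teams find it easier to track as structured data than as prose. A minimal sketch follows; the category keys, document names, and helper function are illustrative assumptions, not an actual carrier form or the FWS document suite.

```python
# Illustrative renewal-packet checklist mirroring the five categories carriers
# are asking about at AI underwriting. All names are hypothetical examples.
RENEWAL_PACKET = {
    "usage_and_scope":    ["AI systems inventory", "affected functions list"],
    "human_oversight":    ["output review roster", "sign-off procedure"],
    "governance_docs":    ["acceptable use policy", "committee charter", "RACI matrix"],
    "vendor_ai":          ["vendor list", "contract indemnity clauses"],
    "ongoing_disclosure": ["material-change notification procedure"],
}

def missing_categories(documents_on_hand):
    """Return the categories with no supporting document produced yet."""
    return [category for category, needed in RENEWAL_PACKET.items()
            if not any(doc in documents_on_hand for doc in needed)]

# An org with only an inventory and an acceptable use policy still has
# three uncovered categories at renewal time.
print(missing_categories({"AI systems inventory", "acceptable use policy"}))
```

The point of the sketch is the shape, not the field names: a category counts as covered only when at least one document actually exists to hand the carrier.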

 

Q5:  What is ISO 42001 and how does it relate to AI governance and insurance coverage?

ISO 42001 is the international AI Management System standard. Where SOC 2 governs data security and system controls, ISO 42001 explicitly covers the AI governance areas SOC 2 leaves unaddressed — including algorithmic fairness and bias detection, model drift monitoring, explainability requirements, human-in-the-loop oversight, AI lifecycle management, and the organizational accountability structures required to govern AI responsibly. Insurance carriers and regulators are beginning to reference ISO 42001 as a positive governance signal at underwriting. The FWS AI Governance Framework is cross-validated against ISO 42001, NIST AI RMF 1.0, the EU AI Act, and the NAIC AI Model Bulletin. Organizations that can demonstrate ISO 42001 alignment — through documented policies, a governance committee, and a complete AI systems inventory — are better positioned to maintain coverage, negotiate favorable terms, and defend claims if they occur.

 

 

DISCLOSURE AND SOURCES

 

Published April 2026  ·  Last Reviewed April 2026

 

AI-Assisted Disclosure

This post was developed with the assistance of AI research and drafting tools, reviewed and shaped by Holly Hartman's professional expertise and human editorial oversight. FWS uses AI tools consistent with the Ethical AI practices we teach — transparently, intentionally, and with human judgment at the center.

Rapid Change Notice

The insurance market, carrier requirements, and AI governance frameworks are evolving rapidly. Statistics, citations, and carrier information in this post reflect what was publicly available at the time of research and publication. We recommend verifying specific policy terms and governance requirements directly with your broker and legal counsel before making coverage decisions.

Not Legal or Insurance Advice

This content is for educational and informational purposes only and does not constitute legal, compliance, insurance, or professional advice. Consult qualified legal, technology, and insurance professionals for guidance specific to your organization.


Spotted something? AI moves fast. If you find a stat, carrier requirement, or framework detail that has changed, let us know: contact@futureworkforcesystems.com

 

Sources

 

[1]  MedhaCloud, 'Cyber Insurance Statistics 2026' (March 13, 2026) — medhacloud.com/blog/cyber-insurance-statistics-2026

[2]  IntelTech, '82% of Cyber Insurance Denied Claims Had One Thing in Common' (January 20, 2026) — inteltech.com

[3]  SQ Magazine, 'Cyber Insurance Statistics' (June 26, 2025) — sqmagazine.co.uk/cyber-insurance-statistics

[4]  MoneyGeek, 'How AI Is Changing Insurance' (2026) — moneygeek.com/insurance/how-ai-is-changing-insurance

[5]  MoneyGeek, 'How AI Is Changing Insurance' (2026) — moneygeek.com/insurance/how-ai-is-changing-insurance

[6]  CRC Group, AI Underwriting Parallel Analysis (2026) — crcgroup.com

[7]  CyberAdvisors, 'What's New in Cyber Insurance 2026' (April 13, 2026) — blog.cyberadvisors.com/whats-new-in-cyber-insurance-2026

[8]  Insurance Business Magazine, 'Cyber Insurance Enters the AI Risk Era' (February 12, 2026) — insurancebusinessmag.com

[9]  ISACA, 'Collaboration and the New Triad of AI Governance' (September 18, 2025) — isaca.org

[10]  Sanjana K., LinkedIn (March 26, 2026) — linkedin.com

[11]  Replicant, 'AI SOC ISO 27001 SOC 2 and the Security Stack Real AI Teams Need in 2026' (March 10, 2026) — replicant.com

[12]  The Mavericks, 'SOC 2 AI Compliance News: Security Audit Trends' (January 4, 2026) — themavericksco.com

[13]  Penligent, 'AI SOC ISO 27001 SOC 2 and the Security Stack Real AI Teams Need in 2026' (March 19, 2026) — penligent.ai

[14]  LBMC, 'Generative AI and SOC 2' — lbmc.com/blog/generative-ai-soc-2

[15]  Vantedge Search, 'The CAIO Emergence: Why the Chief AI Officer Is Today's Critical C-Suite Role' (September 9, 2025) — vantedgesearch.com

[16]  PHL Firm, 'New Generative AI Insurance Exclusions: What Businesses Need to Know in 2026' (February 9, 2026) — phl-firm.com/generative-ai-insurance-exclusions-2026

[17]  The Next Web, 'Why 2026 Will Be the Year of Governed Cybersecurity AI' (March 9, 2026) — thenextweb.com

[18]  NAIC, Official AI Insurance Topics Page (2026) — content.naic.org/insurance-topics/artificial-intelligence

 

FWS Enterprise LLC  ·  futureworkforcesystems.com  ·  Holly Hartman, Fractional CAIO

From guardrails to governance — because how we adopt AI matters as much as whether we adopt it.

© 2026 Future Workforce Systems · Holly Hartman. All rights reserved. 
These tools are for personal use and professional development only.
Reproduction, redistribution, or use in paid offerings without written consent is not permitted.


To license or adapt tools for your team or program, contact us: contact@futureworkforcesystems.com
