
Your Business Insurance May Not Cover Your AI Failures — Here's What You Need to Know in 2026

By Holly Hartman | Founder, Future Workforce Systems



Your Last Renewal. Did You Read It?


Here's a scenario I want you to sit with for a moment.


Your company deployed an AI-powered hiring tool six months ago. It was fast, the vendor demo looked great, and your CHRO signed off. Last week, a candidate filed a discrimination complaint — the algorithm, it turns out, systematically downgraded resumes from certain zip codes.


Your legal team files a claim under your Employment Practices Liability policy. Your broker calls back three days later with news you weren't prepared for: the policy was renewed in early 2026 with a new AI-related questionnaire your team filled out quickly.


Because you couldn't document bias testing or human-in-the-loop review of the tool, the carrier is invoking a sublimit. The claim that should have been covered at $5 million is now capped at $500,000.


This is not a hypothetical from 2030. This is the insurance market you are operating in right now.


 

The Assumption That's Leaving You Exposed


Most C-suite leaders I work with make a reasonable assumption: their existing insurance portfolio covers their AI-related risks. They have Tech E&O. They have cyber. They have D&O and EPLI. They're protected.


That assumption is becoming dangerously incorrect, and the gap is widening every quarter.


The insurance industry is in the early stages of a fundamental repricing of AI risk. Carriers who spent 2023 and 2024 figuring out what their exposure looked like are now acting. They are filing exclusions, adding questionnaires, introducing sublimits, and in some cases, simply declining to cover AI-related claims altogether.


The challenge is that most businesses won't discover this until they file a claim. By then, it's too late to close the gap.


I want to walk you through what's actually happening across six major policy types and then explain why this is, at its core, a governance problem.

 


Six Policies and What They Actually Cover (Right Now)


1. Technology Errors & Omissions (Tech E&O)


Tech E&O remains your best chance at coverage for AI performance failures: an AI system that produces incorrect outputs, generates flawed code, or causes professional harm. But carriers like W.R. Berkley have introduced what legal commentators are calling "absolute" AI exclusions into E&O policies, with language so broad it can bar coverage for any claim "arising out of" AI use or development, including chatbot outputs and AI-generated content. If your policy was renewed recently without a close read of the endorsements, you may have signed away coverage you assumed you had.


2. Cyber Liability


Cyber remains the policy type most protective of AI-adjacent risk for now. Network breaches, ransomware, and data exfiltration are still covered under most cyber forms. But as of January 1, 2026, a critical shift occurred: many carriers began explicitly excluding AI-generated deepfake fraud from standard social engineering coverage. Standard cyber policies renewed after that date may provide no coverage for deepfake fraud. If your CFO approves a wire transfer after being deceived by a deepfake of your CEO, your cyber policy may not respond.


3. Directors & Officers (D&O)


D&O is where the exclusion language is most aggressive right now. Major carriers, including AIG, W.R. Berkley, and Great American, have sought regulatory clearance for new AI exclusions affecting management liability lines. Berkley's exclusion, which is already in use on some policies, purports to exclude coverage for any claim based upon or arising out of "the actual or alleged use, deployment, or development of artificial intelligence." That language is broad enough to eliminate coverage for securities claims, regulatory actions, and governance failures whenever AI played any role in the underlying decision.


4. Employment Practices Liability (EPLI)


EPLI is rapidly becoming an underwriting flashpoint. AI-driven hiring, promotion, and termination decisions create discrimination, disparate impact, and wrongful termination exposure that underwriters are watching closely. In response, many EPLI markets are now adding AI-specific questionnaires at renewal, asking whether you use automated tools in employment decisions, whether you've conducted bias testing, and whether human review exists in the process. If you can't answer those questions with documentation, expect sublimits, exclusions, or pressure on your renewal terms.


5. Commercial General Liability (CGL)


The Insurance Services Office introduced two new CGL endorsements in 2026: CG 40 47 and CG 40 48. These endorsements allow carriers to explicitly exclude generative AI-related claims from standard CGL coverage, including defamation, privacy violations, copyright infringement, and bodily injury or property damage arising from AI outputs. Historically, some AI-generated harm would have been picked up incidentally by CGL. That era is ending. This is the formal close of what the industry calls "silent AI" coverage, the period when AI risk slipped into your policy simply because nothing excluded it.


6. Standalone AI Insurance Products (Emerging)


The market is beginning to respond with purpose-built products. Testudo launched specialty coverage for generative AI liability risks in January 2026, explicitly designed to fill the gaps created by the new CGL exclusions — covering defamation, copyright, and privacy claims arising from AI outputs. Armilla Assurance offers AI performance warranties backed by Lloyd's with capacity up to $25 million. Relm Insurance offers programs like NOVAAI and PONTAAI for enterprises with material algorithmic risk in fintech and healthtech.



The catch: most of these products are surplus lines, not admitted in all states, and must be placed through specialty brokers. And because underwriters are pricing these products with limited loss history, they are doing deep diligence on buyers, which brings me to the part of this conversation that matters most.


 

The New Reality: AI Exclusions Are Being Filed Now


I want to be direct about the trajectory here, because I think the urgency is being underestimated in most boardrooms.


Carriers are not waiting for a wave of AI-related court verdicts to act. They are filing exclusions proactively: getting regulatory clearance, building the language, and rolling changes into renewal policies now, before claims volume spikes. AIG has told regulators it had "no plans to implement" its proposed exclusions immediately but wanted the option as AI claim volume grows. That is a carrier telling you exactly what is coming.


What does this mean in practice? It means your AI exposure and your insurance coverage are moving in opposite directions. Your AI use is expanding: more tools, more decisions, more enterprise surface area. Your coverage, at renewal, may be contracting. The gap between those two lines is your uninsured risk.


The companies that will be hurt most are not the ones knowingly taking AI risks. They're the ones that haven't looked closely enough at what changed in their policies at last renewal.


 

The Governance-Insurability Connection


Here is where I need to say something that may feel uncomfortable: this is not just an insurance problem. It's a governance problem. And the two are now formally linked.


Insurers are increasingly using AI governance documentation as an underwriting criterion. This is not informal. Carriers are introducing affirmative AI security riders that require clients to demonstrate governance controls before granting expanded coverage.


The documentation they're evaluating includes:

• A formal AI governance policy with defined roles and board-level oversight

• An inventory of AI systems in use, their purposes, and associated risk assessments

• Evidence of model testing, including bias and robustness testing, plus change control and incident response plans for AI failures

• Vendor management procedures for third-party AI tools used in material business decisions

 

For EPLI specifically, underwriters are asking whether bias testing has been conducted on hiring and promotion tools, whether human review exists for AI-assisted employment decisions, and whether vendor-provided platforms have been evaluated for disparate impact.


Think about what this means at the board level. Twenty-three states and Washington, D.C. have now adopted the NAIC AI Model Bulletin, which requires insurers to establish governance, documentation, and audit procedures for their own AI systems. That standard is becoming the lens through which insurers evaluate their policyholders' AI programs as well. If your own AI governance would not satisfy NAIC-style expectations, expect more friction at renewal, tighter terms, higher retentions, or exclusions you cannot negotiate around.


One 2026 market analysis captures the direction of travel clearly: the insurance market is bifurcating. Companies that can demonstrate robust AI governance frameworks will increasingly access affirmative coverage, narrower exclusions, and more stable premiums. Companies that cannot will find themselves relegated to broad exclusions, lower limits, or no coverage at all.


This is not a prediction. This is a description of a market that is already sorting.

 


What Executives Should Do Right Now


I am not an attorney or an insurance broker, and nothing here should substitute for legal or insurance counsel. What I can offer is a governance practitioner's perspective on where to start.


Pull your actual policy forms. Ask your broker to provide every endorsement effective at your last renewal, especially anything amending coverage for AI, algorithmic decisions, synthetic media, or automated systems. Don't rely on your summary of coverage.


Map your AI use against your coverage. Create a simple inventory of every AI tool your organization uses in a material decision-making context: hiring, lending, pricing, customer communications, content generation. Then ask your broker explicitly: is each of these uses covered? Under which policy?
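For teams that prefer something more structured than a spreadsheet, the inventory-and-question step above can be sketched in a few lines of code. This is a minimal illustration only; the tool names, decision contexts, and policy labels below are hypothetical placeholders, and the real coverage answers must come from your broker and your actual policy forms.

```python
# Illustrative AI-use-to-coverage inventory. Every entry here is a
# made-up example; replace with your organization's actual tools and
# the policy your broker says should respond.
AI_INVENTORY = [
    {"tool": "Resume screening platform", "decision": "hiring",
     "policy": "EPLI", "coverage_confirmed": False},
    {"tool": "Customer service chatbot", "decision": "customer communications",
     "policy": "Tech E&O", "coverage_confirmed": False},
    {"tool": "Marketing content generator", "decision": "content generation",
     "policy": "CGL", "coverage_confirmed": False},
]

def open_questions(inventory):
    """Generate one explicit broker question per tool whose coverage
    has not yet been confirmed in writing."""
    return [
        f"Is our use of '{item['tool']}' for {item['decision']} "
        f"covered under our {item['policy']} policy, post-renewal?"
        for item in inventory
        if not item["coverage_confirmed"]
    ]

for question in open_questions(AI_INVENTORY):
    print(question)
```

The point of the exercise is the unanswered-question list: any tool still flagged as unconfirmed at renewal time is, by definition, part of your potential uninsured gap.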


Ask the renewal questionnaire questions now. Don't wait for renewal to discover you can't answer the governance questions underwriters are asking. Run through them with your risk and legal teams today. The gaps you find are your AI governance work plan.


Explore the standalone market. If your industry has elevated AI exposure (financial services, healthcare, technology, professional services), ask your broker whether specialty AI products are appropriate for your risk profile. Understand that these are surplus lines products requiring specialty placement.


Document what you're doing. Insurance coverage is increasingly conditioned on your ability to demonstrate governance, not just have it. Written policies, documented testing, named oversight roles, and vendor review procedures all matter at renewal. Governance that isn't documented didn't happen, in the eyes of an underwriter.


 

The Governance Gap Is Closeable


The executives I most often see surprised by this conversation are not careless people. They're busy people who assumed that because their AI tools came from reputable vendors, and their insurance program was professionally managed, the two were aligned.


They weren't. And the gap is growing.


The good news is that AI governance is not a compliance exercise that takes eighteen months to begin. It starts with clarity: knowing what AI you're using, where it touches decisions that carry liability, and whether the structures around those tools meet the bar that underwriters, regulators, and courts are beginning to set.


At Future Workforce Systems, we help organizations build AI Governance Committees and frameworks that treat governance as a business capability, not a checkbox. If you're heading into a renewal, preparing for board-level AI oversight conversations, or simply trying to understand where your organization actually stands, that's where we start.


Ready to assess your AI governance readiness?

 

 


Holly Hartman is the founder of Future Workforce Systems, an AI governance, ethical AI, and workforce readiness consulting firm serving mid-sized enterprises. She advises C-suite leaders and builds AI Governance Committees grounded in collaborative intelligence.


