
The 7 Questions Your AI Governance Committee Should Be Able to Answer — But Can't

By Holly Hartman  |  Future Workforce Systems

 

 



Imagine you're sitting in an executive briefing — not a planning session, but a crisis call. A decision your organization made, or allowed to be made, was influenced by an AI system. Something went wrong. Someone was harmed. And now your General Counsel is asking a very simple question: "Who owned that decision?"


The room goes quiet.

Not because nobody cares. Because nobody actually knows.

 

This is the governance gap that keeps me up at night — and it should keep you up too. Not because AI is dangerous in some abstract, futuristic way, but because the accountability vacuum it creates is already producing real consequences: class action lawsuits, insurance claim denials, regulatory enforcement, and write-downs that wipe out hundreds of millions in shareholder value.


The pattern I see across organizations is consistent: a committee gets formed, a policy gets written, a framework gets cited. Gartner research found that only 35% of corporate boards have formally integrated AI into their oversight responsibilities. That means nearly two-thirds are approving AI investments and deployments without a functioning oversight structure in place. The committee exists. The governance does not.


The real test of any AI governance structure is not whether it exists on paper. It is whether the people in the room can answer the questions that matter when something goes wrong. Below are seven of those questions. Most governance committees cannot answer them. The ones that can are building competitive advantage. The ones that cannot are accumulating risk they may not see until it surfaces in a headline, a lawsuit, or a coverage denial.

 

 

 

Question 1:

When our AI makes a consequential mistake, who in this organization owns that decision, and do they have the authority to act in the first hour?


This is not a hypothetical. In 2023, a class action was filed against UnitedHealth Group's NaviHealth subsidiary alleging that its nH Predict algorithm systematically denied post-acute care coverage to Medicare Advantage patients, often within minutes of admission. The lawsuit alleged that the denial rate was driven by an AI tool, not by individual clinical judgment, and that no single person with authority was meaningfully in the loop when those decisions were made. The case proceeded into 2025 with significant legal and reputational consequences.


This is what the "accountability vacuum" looks like in practice. When AI makes a decision, or heavily shapes one, and something goes wrong, every person adjacent to that decision is exposed if nobody was explicitly designated as owning it. Legal analyses of the EU AI Act consistently emphasize that designated human oversight is not optional for high-risk systems; it is a core requirement. But designation on paper is not the same as a person who has the authority, the training, and the mandate to act within the first hour of an incident.


Ask your governance committee who that person is for each of your consequential AI systems. Then ask whether that person knows they own it. The answer to the second question is usually more revealing than the first.



 

Question 2:

Do the people on this committee have enough AI fluency to evaluate what they are approving, or are they signing off on things they do not fully understand?


A governance committee that cannot evaluate what it is governing is not a governance committee. It is a rubber stamp with liability exposure. And yet, this is precisely what is happening inside many organizations. Research from Deloitte found that enterprises where senior leadership actively shapes AI governance, rather than delegating it entirely to technical teams, achieve significantly greater business value. The implication is both encouraging and sobering: most are still delegating it.


The most common version of this gap sounds like: "We have technical experts on the committee who handle the AI questions." That structure creates a two-tier system where the people with authority don't understand the technology, and the people who understand the technology don't have authority. Neither group is fully accountable. ISO/IEC 42001, the international AI management system standard, requires that organizations ensure "competence" for roles involved in AI governance — but leaves it to the organization to define what that means. Most organizations have not defined it.


The question is not whether everyone on your committee needs to become a data scientist. The question is whether they can read a model card, ask meaningful questions about training data, identify when a vendor is overselling capability, and know enough to push back when something doesn't pass the smell test. That is AI literacy at the governance level. It is not widely present, and it is not optional.


 

Question 3:

Can we explain, in plain language, what each of our AI systems does, what data it uses, and what it is optimizing for?


If you cannot explain what your AI is optimizing for, you cannot govern it. This sounds obvious. It is not widely practiced. NIST's AI Risk Management Framework, the de facto standard for U.S. organizations, includes "Map" as one of its four core functions, specifically requiring that organizations build a contextual understanding of their AI systems: what they do, what data they rely on, who is affected, and what the failure modes look like. Most organizations that claim to follow the AI RMF have not completed a meaningful Map function for their full AI portfolio.


The Zillow implosion of 2021 is the cleanest illustration of what happens when an organization cannot answer this question. Zillow's iBuying algorithm worked exactly as designed. The problem was that leadership did not fully understand what it was optimizing for or how the model would behave in volatile market conditions. The result was a $500M+ write-down and the complete wind-down of a major business unit. The algorithm did not fail. The governance did.


Plain-language AI inventories are not glamorous. They are also not optional if you want to have a defensible answer when a regulator, a plaintiff's attorney, or your own board asks what your AI is doing and why.
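
What a usable inventory entry looks like is easier to show than to describe. Below is a minimal sketch in Python, with field names I chose for illustration rather than anything the AI RMF prescribes; the point is that every field should be answerable in plain language by the system's business owner.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One plain-language inventory entry per AI system (illustrative fields)."""
    name: str                      # what the organization calls it
    business_owner: str            # a named person, not a team
    purpose: str                   # what the system does, in one sentence
    optimizes_for: str             # the actual objective, stated plainly
    data_sources: list[str]        # what data it ingests
    affected_parties: list[str]    # who is impacted by its outputs
    known_failure_modes: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"  # e.g. "high" / "medium" / "low"

# An entry a non-technical committee member could actually read:
record = AISystemRecord(
    name="Post-acute care triage model",
    business_owner="VP, Clinical Operations",
    purpose="Recommends coverage length for post-acute care claims",
    optimizes_for="Predicted discharge date, not patient outcomes",
    data_sources=["historical claims", "admission records"],
    affected_parties=["patients", "clinicians", "claims staff"],
    known_failure_modes=["underestimates stays for complex cases"],
    risk_tier="high",
)
```

If the "optimizes_for" field cannot be filled in without a vendor on the phone, that is the governance gap this question is designed to expose.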



 

Question 4:

Do we know which of our AI tools are being used by employees without formal approval, and what data those tools have access to?


Shadow AI is not a future problem. It is a present one. Employees across virtually every function are using AI tools: writing assistants, summarization tools, code generators, and research platforms. These tools were never reviewed, never approved, and never assessed for what data they ingest or where that data goes. Governance implementation guides consistently flag poor AI asset inventory as one of the primary failure points in real-world governance programs, specifically because it leads to missed high-risk use cases.
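
Getting a first read on shadow AI usage does not require exotic tooling. The sketch below is purely illustrative: assuming outbound proxy logs in a simple "timestamp host method path" format, a few lines of Python can flag traffic to known AI endpoints that never went through review. Both domain lists here are placeholders of my own choosing, not a vetted inventory.

```python
APPROVED_AI_TOOLS = {"copilot.internal.example.com"}          # hypothetical allowlist
KNOWN_AI_ENDPOINTS = {"api.openai.com", "api.anthropic.com",  # illustrative, not
                      "generativelanguage.googleapis.com"}    # a complete list

def flag_shadow_ai(proxy_log_lines):
    """Return AI service hosts seen in outbound traffic but never approved."""
    hits = set()
    for line in proxy_log_lines:
        host = line.split()[1]  # assumes "timestamp host method path" records
        if host in KNOWN_AI_ENDPOINTS and host not in APPROVED_AI_TOOLS:
            hits.add(host)
    return hits

logs = [
    "2026-02-03T10:14:55Z api.openai.com POST /v1/chat/completions",
    "2026-02-03T10:15:02Z copilot.internal.example.com GET /health",
]
print(flag_shadow_ai(logs))  # {'api.openai.com'}
```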


The risk profile of shadow AI is not theoretical. When an employee pastes customer data, financial projections, or personnel information into an unapproved tool, that data may be used to train third-party models, stored in jurisdictions your contracts don't account for, or subject to breach without your knowledge. Your governance committee cannot manage exposure it doesn't know exists.


This question also has a cultural dimension that governance frameworks rarely address directly. Employees are using unsanctioned tools because the approved tools don't meet their needs, or because nobody communicated what the rules are, or because the approval process is so slow that working around it feels rational. Shadow AI is a governance problem and a workforce readiness problem simultaneously.


 

Question 5:

If a regulator, plaintiff's attorney, or insurer asked us to produce documentation of our AI governance decisions from the past 12 months, what could we hand them today?


This question is the one that changes the temperature in a room. Because most governance committees, when they actually think about it, realize the answer is: not much. Meeting minutes, if those exist. Maybe an approved use case form or two. Possibly a policy document that was signed but never operationalized. Berkley and AIG have both been reported to be adding absolute AI exclusions to D&O and E&O policies. The implication is direct: companies without documented governance programs may find that their coverage does not extend to AI-related claims the way they assumed it did.


This is not a hypothetical insurance scenario. The EU AI Act requires detailed technical documentation, logging, and conformity assessments for high-risk AI systems. The NAIC AI Model Bulletin, now adopted by at least 24 states, explicitly requires insurers to maintain documentation demonstrating responsible AI use and governance oversight. SR 11-7 in financial services has long required that model risk decisions be documented and subject to independent validation. In each of these contexts, governance documentation is not a nice-to-have. It is the evidence.


The organizations that are building documentation practices now, from decision logs and committee charters to risk assessment records and audit trails, are the ones that will be able to respond defensibly when the question is no longer hypothetical.
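
As a thought experiment, here is the smallest viable version of a decision log, sketched in Python. This is one illustrative design, not a compliance artifact: an append-only JSON log where every entry names an approver, states the rationale in plain language, and chains a hash of everything logged before it so retroactive edits are detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "governance_decision_log.jsonl"  # append-only: one JSON record per line

def log_decision(system, decision, rationale, approver, dissent=None):
    """Append a governance decision, chained to a hash of the prior log state."""
    try:
        with open(LOG_PATH, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,        # which AI system the decision concerns
        "decision": decision,    # e.g. "approved", "rejected", "conditional"
        "rationale": rationale,  # the committee's reasoning, in plain language
        "approver": approver,    # a named person with authority, not a team
        "dissent": dissent,      # recorded objections, if any
        "prev_hash": prev_hash,  # makes retroactive edits detectable
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(
    system="Resume screening assistant",
    decision="conditional",
    rationale="Approved pending an annual bias audit and vendor data review.",
    approver="J. Rivera, Chief Risk Officer",
)
```

A spreadsheet with the same fields would serve the same purpose. The mechanism matters less than the habit: every consequential AI decision leaves a dated, attributable record.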

 


Question 6:

Has anyone audited our AI-assisted hiring, performance, or promotion tools for demographic bias in the past year, and do we have documentation of that audit?


In 2023, the EEOC reached its first AI hiring discrimination settlement — $365,000 after alleging that a company's AI-powered recruiting software was systematically rejecting older applicants, screening out women over 55 and men over 60. Nobody had audited it. Nobody had even flagged that it might be worth looking at. The tool worked exactly as built, and it applied its discriminatory logic faithfully to every applicant who passed through it.


This is not an isolated case. NYC Local Law 144 now requires employers using automated employment decision tools for hiring or promotion in New York City to conduct and publish an annual bias audit. Colorado's AI Act (SB 24-205) imposes risk management and impact assessment obligations on deployers of high-risk AI systems affecting employment decisions. The Illinois AI Video Interview Act requires notification, consent, and usage disclosures for AI-analyzed video interviews. The regulatory landscape around AI in employment is moving fast, and most governance committees are not moving with it.
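
The arithmetic at the core of these audits is not complicated, which makes the absence of audits harder to defend. The sketch below is a simplified illustration of the impact-ratio calculation that LL144-style audits are built around; a real audit must follow the law's own definitions of categories, intersectional groups, and exclusions.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """
    outcomes: iterable of (group, selected) pairs, e.g. ("40_plus", True).
    Returns each group's selection rate divided by the highest group's rate.
    Ratios below ~0.8 (the EEOC four-fifths rule of thumb) warrant scrutiny.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes by age band:
data = ([("under_40", True)] * 62 + [("under_40", False)] * 38
        + [("40_plus", True)] * 31 + [("40_plus", False)] * 69)
print(impact_ratios(data))  # {'under_40': 1.0, '40_plus': 0.5} -> flag for review
```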


If your organization uses AI in any part of the talent lifecycle, from sourcing and screening to performance management and promotion decisions, and you cannot produce a bias audit with documentation, you are carrying legal exposure that most of your leaders don't know exists. And unlike some forms of legal risk, this one tends to become public.


 

Question 7:

Are we ready for agentic AI? Do we have a human oversight framework for when AI is not just generating text but taking actions on our behalf?


Every framework your organization is currently working with — NIST AI RMF, ISO 42001, even the EU AI Act — was designed primarily around AI systems that assist human decision-making. They assume a human in the loop who can evaluate, approve, or reject what the AI produces. Agentic AI breaks that assumption. A 2026 governance analysis describes this as the emerging "governance gap": existing policies and frameworks do not translate into effective control over AI that can plan, execute multi-step tasks, and interact with external systems autonomously.


Agentic AI is not a future scenario. It is already being deployed inside enterprise environments via workflow automation, autonomous research agents, AI-driven customer interactions, and tools that can write and execute code, send communications, and take actions inside your systems, often with minimal human review of individual steps. Singapore's Model AI Governance Framework for Agentic AI, launched in January 2026 and described as the first global standard specifically for autonomous agents, emphasizes that human accountability must be explicitly designated for agent behavior, not assumed. It requires escalation playbooks, continuous monitoring, and clear kill-switch mechanisms for high-risk deployments.
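
What does "explicitly designated oversight" look like in code? The sketch below is a deliberately simplified illustration, not a reference implementation of Singapore's framework: every agent action passes through a gate that honors a global kill switch and escalates a hypothetical set of high-risk action types to a named human before anything executes.

```python
import threading

KILL_SWITCH = threading.Event()  # set() to halt all agent actions immediately

# Hypothetical action types this organization treats as high-risk:
HIGH_RISK_ACTIONS = {"send_external_email", "execute_payment", "modify_records"}

def execute_with_oversight(action, payload, request_human_approval):
    """Gate every agent action; escalate high-risk ones to a named human."""
    if KILL_SWITCH.is_set():
        return {"status": "halted", "reason": "kill switch engaged"}
    if action in HIGH_RISK_ACTIONS and not request_human_approval(action, payload):
        return {"status": "rejected", "reason": "human reviewer declined"}
    # ...dispatch the approved action to the agent runtime here...
    return {"status": "executed", "action": action}

def page_owner(action, payload):
    """Stand-in for an escalation playbook: page the designated owner."""
    print(f"ESCALATION: approve '{action}'? payload={payload}")
    return False  # default-deny until a human actually responds

result = execute_with_oversight("execute_payment", {"amount_usd": 12000}, page_owner)
print(result)  # {'status': 'rejected', 'reason': 'human reviewer declined'}
```

The design choice worth noticing is the default-deny posture: when no human responds, the agent stops, rather than proceeding on an assumed approval.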


The question for your governance committee is not whether your organization will use agentic AI. The question is whether you will have a human oversight framework in place before something goes wrong, or after. The organizations that answer it proactively are the ones that will be able to move fast without losing control.

 

 

 


What These Questions Actually Tell You


The inability to answer these questions is not a sign of negligence. It is a sign of where most organizations actually are in their AI governance journey, and it is more common than any executive wants to admit publicly. The committee was formed in good faith. The policy was written with good intentions. But somewhere between the structure and the substance, the capability to actually govern didn't get built.


The organizations that are getting this right share a few things in common. They have executive owners who understand what they are overseeing, not just who they are overseeing it with. They have documentation practices that would hold up in a legal proceeding. They have audited their high-risk use cases, including the HR tools that rarely get scrutinized, and they have records to prove it. And they have begun to build a human oversight framework for agentic systems before the regulation catches up. Research consistently shows that enterprises where senior leadership actively shapes AI governance achieve measurably greater business value. Governance is not a cost center. It is a competitive advantage, if it is real.


The organizations that cannot answer these questions are not necessarily behind. They are at a decision point. The risk they are carrying is real, but it is also addressable. The window to get ahead of it is still open.

 

 

 

Let's Talk About Your Governance Readiness


If reading these questions surfaced more uncertainty than confidence, that is exactly where the conversation with FWS starts. We help organizations build the governance structure and develop the people inside it, so that these questions have clear, defensible answers. We work with C-suite leaders, General Counsel, Chief Risk Officers, CHROs, and CFOs who understand that AI governance is not a technical problem. It is a leadership problem.


Learn more at futureworkforcesystems.com, or reach out directly to start the conversation.

 

 

 

Sources & Further Reading


Regulatory Frameworks & Standards

[1] NIST AI Risk Management Framework (AI RMF 1.0) — https://www.nist.gov/itl/ai-risk-management-framework

[2] EU AI Act — High-Level Summary — https://artificialintelligenceact.eu/high-level-summary/

[3] ISO/IEC 42001 AI Management Systems Standard — https://learn.microsoft.com/en-us/compliance/regulatory/offering-iso-42001

[4] OECD AI Principles (updated 2024) — https://www.oecd.org/en/topics/ai-principles.html

[5] Federal Reserve SR 11-7: Supervisory Guidance on Model Risk Management — https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm


Case Studies & Legal Developments

[6] UnitedHealth / NaviHealth nH Predict Algorithm — Class Action Coverage (Courthouse News) — https://www.courthousenews.com

[7] EEOC First AI Hiring Discrimination Settlement — https://www.eeoc.gov/newsroom/eeoc-settles-first-ai-hiring-discrimination-lawsuit

[8] Zillow iBuying Wind-Down and Write-Down (Wall Street Journal) — https://www.wsj.com/articles/zillow-offers-real-estate-algorithm-home-buying-11635883117

[9] Insurance AI Exclusions — D&O and E&O Coverage Developments (Insurance Journal) — https://www.insurancejournal.com/news/national/2023/01/17/704611.htm


Employment & HR AI Regulations

[10] NYC Local Law 144 — Automated Employment Decision Tools — https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page

[11] Colorado AI Act SB 24-205 — High-Risk AI System Obligations — https://leg.colorado.gov/bills/sb24-205

[12] Illinois AI Video Interview Act (AIVIA) — https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015


Research, Governance Analysis & Implementation Guides

[13] AI Governance Regulatory Landscape 2026 (Hung Yi Chen) — https://www.hungyichen.com/en/insights/ai-governance-regulatory-landscape-2026

[14] Singapore Model AI Governance Framework for Agentic AI, Jan 2026 — https://business20channel.tv/agentic-ai-governance-guide-frameworks-certifications-strategy-31-january-2026

[15] NIST AI RMF Implementation Best Practices (Lumenova AI) — https://www.lumenova.ai/blog/ai-risk-assessment-best-practices-nist-ai-rmf/

[16] NIST AI RMF — Govern, Map, Measure, Manage in Plain English (Dawgen Global) — https://www.dawgen.global/nist-ai-rmf-in-plain-english-govern-map-measure-manage-done-right/

[17] ISO 42001 Certification — Lessons Learned (Schellman) — https://www.schellman.com/blog/iso-certifications/iso-42001-lessons-learned

[18] NIST AI RMF — Board & Executive Implementation (Diligent) — https://www.diligent.com/resources/blog/nist-ai-risk-management-framework

[19] Responsible AI in 2026: A 3-Step Governance Guide (OneTrust) — https://www.onetrust.com/blog/responsible-ai-in-2026-a-3-step-guide-for-governance-that-scales/

[20] SR 11-7 + CFPB AI Compliance in Financial Services (Team Innovatics) — https://teaminnovatics.com/blogs/ai-compliance-platform-surviving-sr-11-7-and-cfpb-enforcement/

 


Holly Hartman is the founder of Future Workforce Systems, an AI governance, ethical AI, and workforce readiness consulting firm serving mid-sized enterprises. She advises C-suite leaders and builds AI Governance Committees grounded in collaborative intelligence.



© Future Workforce Systems  |  futureworkforcesystems.com


