Kentucky's New Privacy Law Meets Shadow AI
- Holly Hartman
- Apr 6
Updated: Apr 7

What Mid-Market Companies Need to Know in 2026 — And What's Already Happening Here
By Holly Hartman, Fractional Chief AI Officer · FWS Enterprise LLC
Published April 2026 · futureworkforcesystems.com
Kentucky's privacy law went live on January 1, 2026. Eight days later, the Attorney General filed the state's first lawsuit under it — targeting an AI company.
If your organization is still treating AI governance as a future priority, that timeline should feel like a wake-up call.
I work with organizations across Kentucky and the region every day. I sit in rooms with smart, capable leaders who are genuinely trying to do right by their people and their customers. And what I see consistently is a gap, not between intention and effort, but between what leaders think is in place and what is actually governing their AI use.
That gap just got a lot more expensive.
This post is for mid-market leaders (CEOs, CHROs, legal counsel, and operations executives) who are managing real AI risk right now, often without fully realizing it. It covers what the Kentucky Consumer Data Protection Act (KCDPA) actually requires, why shadow AI is your most urgent exposure, what the first enforcement action tells us about how this state intends to use this law, and what a practical governance response looks like. Most importantly, it reflects what I'm seeing on the ground here, because the conversation is already happening, and Louisville and Kentucky organizations are starting to move.

What Changed in Kentucky — And Why It Moved Fast
The Kentucky Consumer Data Protection Act grants Kentucky consumers the right to access, correct, delete, and opt out of the processing of their personal data, including data processed by or through artificial intelligence systems. It applies to businesses that process the personal data of 100,000 or more Kentucky consumers annually, or of 25,000 or more consumers if the business derives more than 40 percent of gross revenue from the sale of personal data.
Enforcement authority belongs exclusively to the Attorney General. The law includes a 30-day cure period, meaning a business that receives notice of a violation has 30 days to correct it before the AG pursues formal action. That cure period sounds like a buffer. The first enforcement action under the law suggests it is not.
THE FIRST KCDPA ENFORCEMENT ACTION — JANUARY 8, 2026
AG Russell Coleman filed suit in Franklin Circuit Court against Character Technologies, Inc., operator of the Character.AI chatbot platform, just eight days after the law's effective date.
The complaint alleged unauthorized processing of minors' sensitive data, failure to implement age verification, exposure of children to harmful content, and the repurposing of user data, including private emotional disclosures and health statements, to train AI models without parental consent. No cure period notice was issued before filing. The case remains ongoing as of April 2026, signaling sustained AI scrutiny from the AG's office.
The AG bypassed the cure period entirely, pursuing violations under both the Kentucky Consumer Data Protection Act (KCDPA) and Kentucky's Consumer Protection Act. Kentucky's posture mirrors the aggressive early enforcement seen in Connecticut, but outpaces Virginia and Colorado, where cure periods have often delayed action. For mid-market organizations, the lesson is direct: the 30-day window is discretionary, not guaranteed. If the AG determines conduct is willful or harm is ongoing, enforcement can proceed without notice.
The Character.AI complaint also surfaced an allegation that reaches far beyond children's platforms: data repurposed to train AI models without user consent. That is not a niche edge case. It is a standard operating condition for many commercial AI tools in use in your organization today — and most organizations have never read the terms that permit it.
Shadow AI: The Risk Already Running Inside Your Organization
Shadow AI refers to artificial intelligence tools that employees adopt and use without formal organizational approval — without IT review, without legal sign-off, without any governance process at all. It is not a fringe problem. Research consistently shows that 40 to 60 percent of employees use AI tools their organizations have not approved.
HOLLY'S PERSPECTIVE
Just this week I spoke with a marketing organization that actually has an AI policy in place, which already puts them ahead of most of the market. And here's what was striking about that conversation: they weren't sitting back feeling good about it. They were asking how often they need to update it. They already understood that a policy written six months ago may not reflect the tools their team is using today, the regulatory shifts happening at the state level, or the vendor terms that changed in the last product update. That organization is asking the right question. Most organizations haven't started asking it yet.
That conversation is becoming more common, but it is still the exception. What I more typically see is one of two patterns: organizations that have no AI policy and assume the absence of a formal program means the absence of risk, or organizations that have a ChatGPT policy and believe that constitutes AI governance. It does not. A single acceptable use policy for one tool is not a governance framework. It is one guardrail on a road with no speed limit.
Under the KCDPA, the organization — not the AI vendor — bears responsibility for how personal data is processed. If an employee uses an unapproved AI tool that ingests Kentucky consumer data, your organization is likely the data controller. The vendor is a processor. The liability does not automatically transfer downstream.
"A ChatGPT policy is not AI governance. It is one guardrail on a road with no speed limit."
What Shadow AI Exposure Looks Like Under the KCDPA
The KCDPA's sensitive data categories (health information, biometric data, precise geolocation, financial data, and data about minors) map directly to the AI use cases most common in mid-market organizations.
Recruitment profiling tools that analyze résumés may process health-adjacent or demographic data. Customer service AI that logs conversation history may capture financial disclosures. HR systems with AI-driven performance analytics may trigger heightened obligations. None of these scenarios require a sophisticated attacker or a dramatic breach. They require nothing more than an employee doing their job with a tool their organization never reviewed.

What I'm Seeing on the Ground in Kentucky
Organizations in Louisville and across Kentucky are starting to move. That is the honest summary of where this region sits right now: not ahead of the curve, not dangerously behind it, but at the inflection point where the early movers are separating from the organizations that will be reactive.
The conversations I'm having most often involve two sectors: insurance and healthcare. Both are right to be paying attention. Insurance organizations in Kentucky are navigating a double exposure: they are subject to the KCDPA as data controllers, and they are also being asked by their clients to evaluate AI-related liability. The NAIC AI Model Bulletin, to which Kentucky has adopted aligned guidance, creates specific obligations around algorithmic accountability in insurance decisioning. Healthcare organizations face HIPAA intersections with KCDPA obligations that compound the complexity.
HOLLY'S PERSPECTIVE
What I find most encouraging, and most urgent, is that local organizations are no longer asking 'do we need to think about this?' They are asking 'where do we start?' That shift in question is significant. It means the awareness is there. What's missing is the structure. And that's exactly the gap a Governance Committee Builder is designed to close.
The organizations that move in the next 90 days will be positioned as leaders in their sector. The ones that wait for a vendor to tell them they have a problem, or worse, for a notice from the AG, will be playing defense.
The marketing organization I mentioned earlier, the one with a policy already in place, is doing something most organizations haven't done yet: they're treating AI governance as a living process, not a one-time project. They understand that the tools change quarterly, the regulatory landscape shifts, and vendor terms update without announcement. That posture is exactly right. AI governance in 2026 is not a box you check. It is a committee you convene, a process you run, and a document you update.

Where the Exposure Shows Up
Regulatory exposure. The AG's rapid enforcement posture signals that organizations using AI without appropriate safeguards are visible targets. Kentucky outpaces peer states and shows no sign of slowing.
Insurance risk. Cyber and E&O insurers are increasingly asking about AI governance during underwriting. Shadow AI creates audit gaps that affect coverage eligibility. Emerging 2026 policy language specifically excludes claims arising from unapproved AI use. Talk to your broker now, not after an incident.
Vendor contract risk. Most organizations have AI tools embedded in vendor platforms (CRM, HRIS, marketing automation) whose terms have never been reviewed for KCDPA compliance. Terms permitting vendors to use input for model training are common. Your data is still your data, regardless of whose platform processes it.
Workforce and hiring liability. AI tools used in recruiting, performance management, or compensation carry bias and discrimination risk that compounds privacy exposure. This is why I am direct with every client: the CHRO seat in an AI governance structure is non-negotiable. Every AI decision that touches employees is a workforce decision. Organizations that build governance without HR at the table are solving half the problem.
What a Basic Governance Process Actually Looks Like
Effective AI governance for a mid-market organization does not require a dedicated AI team or a six-figure program. It requires a structured process, the right people in the room, and documented accountability. The starting point is an AI Governance Committee, a cross-functional group with formal authority to approve or reject AI tools, set AI policy, and escalate material risk.
"Every AI decision is a workforce decision. The CHRO seat is non-negotiable."
One thing I want to be clear about based on what I'm seeing in conversations with local organizations: governance is not a destination. It is a cadence. The marketing organization I referenced earlier already understands this. They know a policy written today needs to be revisited next quarter — not because they got something wrong, but because the landscape they wrote it for will have changed.
Monthly committee meetings in year one. Quarterly thereafter. That rhythm is what separates governance that holds up from governance that looks good on paper.
Five Things a Governance Committee Should Do Immediately
1. Conduct an AI tool inventory.
List every AI tool in use, approved or not. Include AI features embedded in existing software: your CRM's AI assistant, your HRIS's predictive analytics, your email platform's writing tool. This is your shadow AI audit. Most organizations are surprised by how long the list is.
2. Review vendor terms for data use clauses.
Look specifically for language permitting the vendor to use input data for model training. If those clauses exist and the data includes personal information about Kentucky consumers, you have a disclosure and consent issue today, not eventually.
3. Establish an AI acceptable use policy.
Employees need clear guidance on what tools they may use, what data categories may be input, and what the approval process is for new tools. Without this, shadow AI continues by default. A policy also creates the paper trail that matters if the AG ever asks what you had in place.
4. Map your sensitive data flows.
Under the KCDPA, sensitive data (health information, financial data, precise geolocation, data about minors) carries heightened obligations. Know where it lives in your systems and whether any AI tool touches it. This is the question your insurance carrier will ask.
5. Document everything and plan to update it.
A written charter, committee membership records, meeting logs, and a policy approval trail demonstrate that governance is real and ongoing. Build in a quarterly review from day one. The organizations getting this right are treating their governance framework the way they treat their financial controls: not as a project that ends, but as a function that runs.
Frequently Asked Questions: KCDPA and AI Governance
These are the questions mid-market leaders and AI search tools are asking about Kentucky's privacy law and organizational AI risk in 2026.
Does the Kentucky Consumer Data Protection Act apply to AI tools?
Yes. The KCDPA governs how organizations collect, process, store, and share personal data about Kentucky consumers. AI tools that process that data, including tools used internally by employees, fall within scope when the organization meets the applicability thresholds. The data controller (typically the organization, not the AI vendor) bears primary compliance responsibility.
Does the KCDPA address AI profiling and automated decision-making?
Yes. Kentucky consumers have the right to opt out of automated profiling used to make decisions with legal or similarly significant effects, which covers AI-driven hiring tools, credit or insurance decisioning, and targeted advertising. Access and deletion rights also extend to data processed through AI systems, meaning consumers can request to know what an AI system inferred about them and ask for that inference to be corrected or deleted.
What is shadow AI and why does it matter under the KCDPA?
Shadow AI refers to AI tools used by employees without organizational approval or oversight. Research suggests 40 to 60 percent of employees use AI tools their organizations have not approved. Under the KCDPA, each unapproved use that involves Kentucky consumer data is a potential compliance gap, one the organization, not the vendor, is responsible for.
Does the KCDPA's 30-day cure period protect organizations from enforcement?
Not reliably. The cure period is discretionary, not guaranteed. The first KCDPA enforcement action, filed eight days after the law's effective date, bypassed it entirely. Do not plan your risk posture around a notice that may never come.
How often should an AI governance policy be updated?
At minimum, quarterly, especially in the current regulatory environment. AI tools evolve, vendor terms change, and state-level guidance is still developing. Organizations that treat their AI policy as a living document are significantly better positioned than those that treat it as a one-time deliverable. Plan for monthly governance committee meetings in year one, quarterly thereafter.
Which Kentucky industries face the highest AI governance risk in 2026?
Healthcare, financial services, insurance, manufacturing, logistics, and franchise systems are among the highest-exposure sectors in Kentucky. Insurance organizations face a double exposure: KCDPA obligations as data controllers, plus NAIC AI Model Bulletin alignment requirements. Healthcare organizations navigate HIPAA intersections that compound KCDPA complexity. Both sectors are actively seeking governance frameworks now.
The Shift Is Already Happening — The Question Is Whether You're Ahead of It
Kentucky's enforcement posture in 2026 is not a warning shot. It is a pattern set by an AG who treats harm prevention as a through-line, from the opioid epidemic to AI predation on children. The Character.AI case remains ongoing as of April 2026, and it will not be the last action this office takes.
What I want leaders in Louisville and across Kentucky to understand is this: the organizations I'm talking to right now, the ones asking about quarterly policy updates, committee cadence, and how to conduct a shadow AI audit, are not overcautious. They are early. And in twelve months, the difference between early and late on this issue will be visible.
The gap between AI-anxious and AI-ready is not as wide as it feels. A structured committee, a written charter, a clear policy, and the discipline to update it quarterly can move an organization from exposed to defensible faster than most leaders expect. The frameworks exist. The standards are clear. What mid-market organizations need is the process, and someone to help them run it.
HOLLY'S PERSPECTIVE
If you are reading this and thinking 'we need to have this conversation', that instinct is correct. I am happy to be the starting point. The work I do with organizations isn't about making AI scary. It's about making it governable. From AI-anxious to AI-ready. That's the work. And it's work that starts with a conversation, not a compliance checklist.
Listen to FWS | The AI Governance Debate
READY TO BUILD YOUR AI GOVERNANCE COMMITTEE?
The FWS Governance Committee Builder gives your organization the structure, framework, and documentation to govern AI with confidence, including a complete AI Governance Committee Charter Template aligned to NIST AI RMF 1.0, ISO/IEC 42001, and the EU AI Act.
Visit futureworkforcesystems.com to learn more.

Holly Hartman is the founder of FWS Enterprise LLC and serves as a Fractional Chief AI Officer for mid-market organizations across Kentucky and the region.
FWS specializes in AI governance, workforce readiness, and the Governance Committee Builder program.
© 2026 FWS Enterprise LLC · futureworkforcesystems.com · Louisville, Kentucky
Sources & References
Kentucky Consumer Data Protection Act (KCDPA) — Official AG Summary https://www.ag.ky.gov/about/Office-Divisions/ODP/KCDPA/Pages/default.aspx
KCDPA Full Statutory Text (HB 15) https://apps.legislature.ky.gov/recorddocuments/bill/24RS/hb15/bill.pdf
Kentucky AG v. Character Technologies, Inc. — Official Complaint (Franklin Circuit Court, January 8, 2026) https://www.ag.ky.gov/Press%20Release%20Attachments/CTI%20Complaint%20Motion%20and%20Order%20Filed.pdf
Kentucky AG Press Release — First KCDPA Enforcement Action https://www.kentucky.gov/Pages/Activity-stream.aspx?n=AttorneyGeneral&prId=1857
Hunton Andrews Kurth — Kentucky AG Announces First Enforcement Action Under New Privacy Law https://www.hunton.com/privacy-and-cybersecurity-law-blog/kentucky-attorney-general-announces-first-enforcement-action-under-new-privacy-law
Troutman Pepper — Kentucky AG Files Lawsuit Against AI Chatbot https://www.troutmanprivacy.com/2026/01/kentucky-ag-files-lawsuit-against-ai-chatbot-including-claim-it-violated-new-data-privacy-law/
NIST AI Risk Management Framework (AI RMF 1.0) https://www.nist.gov/itl/ai-risk-management-framework
NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (adopted December 2023) https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf
NAIC — Artificial Intelligence Insurance Topics Page https://content.naic.org/insurance-topics/artificial-intelligence
Kentucky Bulletin No. 2024-02 — NAIC AI Model Bulletin Adoption https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-map-ai-model-bulletin.pdf
WTW Cyber Risk Outlook 2026 https://www.wtwco.com/en-us/insights/2026/02/cyber-risk-a-look-ahead-to-2026



