
Angela Lipps Got Her Story Told. Most Don't.

What a Breaking Case Teaches Us About Ethical AI, Responsible AI, AI Bias, and the Governance Gap That Makes It All Possible

By Holly Hartman | Future Workforce Systems


Angela Lipps had never been to North Dakota.


She had barely left a 100-mile radius of her home in Elizabethton, Tennessee. She was a 50-year-old grandmother of five, babysitting her grandchildren on a July afternoon in 2025 when U.S. marshals showed up — guns drawn — and arrested her.


Investigators in Fargo had been working a bank fraud case. They ran surveillance footage through an AI facial recognition system. The system returned a match.


That match was Angela Lipps. A warrant was issued. No one verified whether she had ever been to North Dakota. No one checked her bank records. No one asked the most basic investigative question: could she have actually been there?

She spent more than five months in jail. Her court-appointed attorney, Jay Greenwood, found her alibi in about a week — grocery receipts and Social Security deposits placing her in Tennessee during every instance of the fraud. Charges were dismissed on December 23, 2025.


She was released onto the street. In North Dakota. In winter. With no money, no transportation, and no false teeth, which police had not allowed her to bring. By the time she made it home, she had lost her house, her car, her dog, and five months of her life.


Fargo police acknowledged "a few errors." They stopped short of an apology.


This case is breaking nationally right now. It is generating outrage, and it should. But outrage without understanding changes nothing. So let's use this moment to understand exactly what went wrong — and what every organization using AI tools needs to hear.



What Is Ethical AI — and Where Did It Fail Angela Lipps?


Ethical AI refers to the design, development, and use of artificial intelligence systems in ways that are fair, transparent, accountable, and aligned with human values. It asks not just can the system do this but should it, and under what conditions, with what safeguards, and for whose benefit?


Ethical AI is not a feature you toggle on. It is a commitment that runs through every decision made before, during, and after an AI system is deployed — including the decision about how much authority that system is given over human lives.


In Angela Lipps' case, every pillar of Ethical AI was absent.

Fairness was absent because the system was used without any assessment of whether it performed equally across demographics — and as we'll see, it doesn't.


Transparency was absent because Lipps had no way of knowing an AI system had flagged her, no way to contest it, and no notification that the technology had been used against her.


Accountability was absent because the officer who submitted the warrant — with zero corroborating investigation — faced no named responsibility for what that decision cost a grandmother in Tennessee.


Human values were absent the moment a probability score became a verdict.


Ethical AI does not mean AI that never makes mistakes. It means AI deployed inside a system designed to catch mistakes before they become someone's five months.



What Is Responsible AI — and Who Was Responsible Here?


Responsible AI is the operational twin of Ethical AI. Where Ethical AI asks what should we do, Responsible AI asks how do we make sure we do it. It encompasses the policies, practices, roles, and review structures that govern how AI is actually used day to day — including who is accountable when something goes wrong.


At the center of Responsible AI are two concepts that every organization using AI tools needs to understand right now.


Human in the Loop means a human being is actively involved in reviewing and approving AI outputs before decisions are made. The AI generates a result. A human evaluates it. The human decides what happens next. The AI is a tool. The human retains authority.


Human on the Loop means the AI acts on its own while a human monitors its outputs and can intervene after the fact. It is a weaker safeguard, defensible only for low-stakes decisions whose errors can be cheaply reversed.


In the Angela Lipps case, there was neither.


Her attorneys stated plainly that the officer used AI facial recognition "as a shortcut for basic investigation." The system returned a match. The officer checked her social media. A warrant was submitted. No human was meaningfully in the loop — asking hard questions, verifying the output, or doing the work that a $0 bank records request would have accomplished in days.


This is the Responsible AI failure at the heart of this case. The technology did not send Angela Lipps to jail. The absence of a human accountability structure did.


When I work with organizations on AI governance, one of the first questions I ask is: who is the named human being responsible for every AI output that touches a person's life? In Fargo, in July 2025, the honest answer was: no one.
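

What I am really asking for with that question is a gate in the workflow. Here is a minimal sketch, assuming nothing about any vendor's actual system; the names, fields, and two-item corroboration rule are all hypothetical. The structure is the point: the AI's score is stored as a lead, and no action is possible until a named reviewer attaches independent evidence.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MatchCandidate:
    """An AI output: a possible identity match. A lead, never a verdict."""
    subject_id: str
    confidence: float                     # a probability score
    corroboration: list[str] = field(default_factory=list)
    reviewed_by: Optional[str] = None     # the named, accountable human
    approved: bool = False


# Illustrative rule: independent evidence required before any action,
# e.g., financial records AND location history, not the score alone.
REQUIRED_CORROBORATION = 2


def human_review(match: MatchCandidate, reviewer: str,
                 evidence: list[str]) -> MatchCandidate:
    """A human evaluates the AI output and decides what happens next."""
    match.reviewed_by = reviewer          # accountability has a name
    match.corroboration.extend(evidence)
    match.approved = len(match.corroboration) >= REQUIRED_CORROBORATION
    return match


def act_on(match: MatchCandidate) -> None:
    """The gate: no corroborated, named human review means no action."""
    if match.reviewed_by is None or not match.approved:
        raise PermissionError(
            f"Match for {match.subject_id}: no corroborated review on record."
        )
    print(f"Action authorized. Accountable reviewer: {match.reviewed_by}")


# The system returns a high score. Without corroboration, act_on refuses.
lead = MatchCandidate(subject_id="case-1234", confidence=0.93)
lead = human_review(lead, reviewer="Det. J. Doe", evidence=[])
act_on(lead)   # raises PermissionError: the score alone is not enough
```

In the Lipps case, the equivalent of act_on ran with an empty evidence list. Grocery receipts and bank deposits were the corroboration, and they were checked five months too late, by the defense.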



What Is AI Bias — and Why Are All These Faces Black?


AI Bias refers to systematic errors in AI outputs that produce unfair outcomes for certain groups of people. Bias enters AI systems through the data they are trained on, the choices made in how they are designed, and the environments in which they are deployed. Because AI systems learn from historical data — and history is not neutral — AI systems frequently reflect and amplify the inequities already present in the world.


In facial recognition specifically, the research is unambiguous: these systems are significantly less accurate for darker-skinned individuals, women, and older people. The error rates are not evenly distributed. The harm is not evenly distributed.


And Angela Lipps, a white grandmother from rural Tennessee, is actually the exception in the documented pattern of wrongful AI facial recognition arrests in the United States.


She is not the norm. She is the outlier. The people this system has been failing longest are Black Americans — and their stories received a fraction of the national attention that Lipps' case is generating this week.


These are their names, and they deserve to be spoken.


Robert Williams. A Black man and father of two young daughters in Detroit, Michigan. Police arrested him in 2020 after a facial recognition system falsely matched him to a watch theft at a store he had never visited. He was held for approximately 30 hours. His case became the first publicly reported wrongful arrest from facial recognition in the United States and eventually led to a landmark settlement with the City of Detroit.


Nijeer Parks. A Black man in Woodbridge, New Jersey. Facial recognition tied him to a shoplifting and assault case during a period when he was provably elsewhere. He spent 10 days in jail before prosecutors dropped the case.


Michael Oliver. A Black man in the Detroit area. Wrongly arrested after a facial recognition match. He pursued legal action.


Christopher Williams. A Black man in New York City. NYPD facial recognition led to his wrongful arrest and two days in jail despite alibi evidence. Prosecutors dismissed the case after public defenders proved the misidentification.


Porcha Woodruff. A Black woman in Detroit, Michigan. A facial recognition-driven investigation led to her wrongful arrest in a robbery case. She filed suit.


The ACLU and ACLU-NJ have stated that nearly every known wrongful arrest from AI facial recognition in the United States has involved a Black person. The Electronic Frontier Foundation has warned Congress that these systems err most frequently for people of color and women.


I served for two years as Director of the RAARE Women Collective (Radical Action Advancing Racial Equity), founded by Dr. Nikki Lanier, helping build that community, and I remain Director Emeritus. I have spent years watching systems that claim neutrality operate with embedded bias. AI does not escape history. It is trained on data produced by human beings, in a world shaped by human decisions, many of which were shaped by racism. That bias does not disappear when it enters an algorithm. It accelerates — and it scales.


Angela Lipps had bank records. She had a lawyer who asked the right questions. She had a story that translated to a national audience. Not everyone does. The people most harmed by AI misidentification are often the least resourced to fight back. That asymmetry is not a side effect of the technology. It is the predictable output of deploying biased systems without governance.


This is not a coincidence. It is a pattern. And a pattern this consistent is not a technical glitch — it is a failure of design and accountability that demands more than a software patch.



What Is AI Governance — and What Does Its Absence Cost?


AI Governance is the framework of policies, oversight structures, accountability mechanisms, and human decision-making processes that an organization puts in place to ensure AI is used responsibly, ethically, and in alignment with its values and legal obligations. It is the answer to the question: who is in charge of how AI operates here, and what happens when it goes wrong?


Good AI governance does not prevent AI from being used. It ensures that AI is used with guardrails — that outputs are reviewed, that bias is audited, that accountability is assigned, and that there is a named, enforceable process for catching errors before they become someone's five months.
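

One way to see what "named, enforceable" means: the commitments can be written as a pre-deployment checklist that code can actually enforce. This is a hedged sketch, not any standard; the field names and the 180-day audit window are illustrative assumptions of mine.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional


@dataclass
class DeploymentPolicy:
    """A governance record a high-stakes AI tool must satisfy before it runs."""
    tool_name: str
    named_owner: Optional[str]          # who is in charge of this AI's use
    last_bias_audit: Optional[date]     # disaggregated error rates on file
    correction_process: Optional[str]   # what happens when it goes wrong
    audit_max_age: timedelta = timedelta(days=180)  # illustrative cadence

    def violations(self) -> list[str]:
        problems = []
        if not self.named_owner:
            problems.append("no named accountable owner")
        if self.last_bias_audit is None:
            problems.append("no bias audit on file")
        elif date.today() - self.last_bias_audit > self.audit_max_age:
            problems.append("bias audit is stale")
        if not self.correction_process:
            problems.append("no documented error-correction process")
        return problems


policy = DeploymentPolicy(
    tool_name="facial-recognition-tool",
    named_owner=None,            # per the case record, the honest Fargo answer
    last_bias_audit=None,
    correction_process=None,
)

problems = policy.violations()
if problems:
    raise RuntimeError(f"Deployment blocked: {'; '.join(problems)}")
```

None of this prevents the tool from being used. It prevents the tool from being used without the guardrails named above.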



Here is the governance reality in the United States right now:


There is no comprehensive federal law governing how law enforcement uses facial recognition technology. None.


States have begun to move. Maryland passed strong legislation in 2024 limiting facial recognition to serious crimes, requiring defendants be notified when the technology was used, and banning live real-time identification. Montana and Utah have enacted warrant-based restrictions. But these are islands in a largely unregulated sea.


The U.S. Commission on Civil Rights has urged the Department of Justice and Department of Homeland Security to establish minimum technical requirements and oversight structures. Those recommendations have not yet produced a binding federal standard.


What this means in practice: law enforcement agencies across the country are deploying AI facial recognition today, with wildly inconsistent — or entirely nonexistent — internal policies governing what happens after a match.


Angela Lipps' case is not an outlier from a broken system. It is the predictable output of a system with no governance.


And if you think this is only a law enforcement problem, think again. Facial recognition and AI decision-making tools are being adopted right now across HR technology, workplace monitoring, financial services, healthcare, and insurance.


The governance gap is not unique to policing. It is the default state of AI adoption in most organizations.



Six Terms. One Framework. Questions You Need to Ask.


Before you read the closing, take these six definitions with you. They are not academic. They are operational.


Ethical AI — Using AI in ways that are fair, transparent, accountable, and aligned with human values. Asks: should we, and with what safeguards?


Responsible AI — The policies, practices, and accountability structures that make Ethical AI real in day-to-day operations. Asks: how do we make sure we do it right?


AI Bias — Systematic errors in AI outputs that produce unfair results for certain groups, often rooted in flawed or unrepresentative training data.


AI Governance — The framework that determines who is in charge of AI use, how it is overseen, and what happens when it fails.


Human in the Loop — A human actively reviews and approves AI outputs before decisions are made. Required for high-stakes decisions that affect people's lives.


Human on the Loop — A human monitors AI outputs and can intervene after the fact. Acceptable only for low-stakes decisions whose errors can be reversed.


Now — three questions for your organization:

1. Who owns the output?  When your AI system generates a result, is there a named human being accountable for verifying it before it becomes a decision? Or does the output move through your process unchecked?


2. How was the system trained, and on whom?  Do you know the error rates of the tools you are using — broken down by race, gender, and age? If not, you don't know what the system is doing to the people it touches. (A sketch of what that breakdown looks like follows these three questions.)


3. What happens when it's wrong?  Every AI system will fail someone. The question is whether your organization has a process for catching that failure before it causes irreversible harm. What is your review mechanism? What is your correction process? What does accountability look like?


These are not hypothetical questions for future AI adoption. They are operational questions for right now.
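

To make question 2 concrete, as promised above: computing a disaggregated false-match rate takes a few lines once you have labeled evaluation data. The records below are invented for illustration. The point is that a single overall accuracy figure can hide exactly the uneven error distribution the facial recognition research documents.

```python
from collections import defaultdict

# Invented evaluation records: (group, system_said_match, actually_a_match)
results = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

# False-match rate: of the people who are NOT the person sought,
# how often did the system say they were?
flagged = defaultdict(int)
non_matches = defaultdict(int)

for group, predicted, actual in results:
    if not actual:                 # ground truth: not the person
        non_matches[group] += 1
        if predicted:              # ...but the system said it was
            flagged[group] += 1

for group in sorted(non_matches):
    rate = flagged[group] / non_matches[group]
    print(f"{group}: false-match rate {rate:.0%} "
          f"({flagged[group]} of {non_matches[group]} innocent faces flagged)")
```

If you cannot produce this table for the tools you deploy, broken down by race, gender, and age, then your answer to question 2 is no, and every false match lands on a real person.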


This Is the Preview


Angela Lipps is home. Her charges were dismissed. Her story is being told.

But Robert Williams, Nijeer Parks, Michael Oliver, Christopher Williams, and Porcha Woodruff told their stories too — and the systems that harmed them kept operating, largely unchanged, until they harmed someone else.


We are at a moment where the consequences of ungoverned AI are no longer theoretical. They are arriving in the lives of real people — grandmothers babysitting their grandchildren, fathers driving home from work, women going about their days — with no warning and no recourse.


The question is not whether AI will get things wrong. It will. Every system will. The question is what kind of organizations, institutions, and governance structures we are building to catch those errors before they become someone's five months.


Accuracy without accountability is not enough. Speed without equity is not enough. A match is not a verdict.


That is what an Ethical AI Lens demands we see — and demands we say out loud, every time, until the systems change.



Holly Hartman is the founder of Future Workforce Systems, an AI governance and workforce readiness consultancy, and Director Emeritus of the RAARE Women Collective (Radical Action Advancing Racial Equity). She works with organizations to build the governance structures, policies, and human accountability frameworks that responsible AI adoption requires.

