
What I Found on a 10-Year-Old's Device Terrified Me. Every Parent Needs to Know About Talkie AI.

AI Safety | Ethical AI | Parent Alert


By Holly Hartman | Founder, Future Workforce Systems

March 2026


I've spent the last few years immersed in the world of AI — learning it, teaching it, and helping businesses and communities understand what it means for the future of work. I write about it. I speak about it on stages.


And last week, I was blindsided in my own community.


I discovered that children — ten-year-olds — had been accessing an AI platform called Talkie AI on a shared family device. When I reviewed what they had been exposed to, I was shaken. Within minutes of using the platform, the chatbot had steered conversations toward death, self-harm, and sexually explicit content.


Ten years old.


I want to be transparent about how I came to see this content. The parents of these children were made aware of what was happening, and I was given the opportunity to review the chat logs directly. What I read was not borderline. It was not ambiguous. When one of the children was asked about it, his response stopped me cold: "It's just AI. It's no big deal." He wasn't defiant. He wasn't embarrassed. He genuinely didn't see the problem. And that, perhaps more than anything I read in those chats, is what kept me up that night.


I’m not writing this to create panic. I’m writing this because I understand how this technology works, and I believe every parent deserves to know what I know. This post is long because the issue is serious. Please read it all the way through, and then share it.


What Is Talkie AI? (In Plain Language)


Talkie AI is not a homework helper. It is not a search engine. It is an AI “friend and role-play” chat platform where users — including children — can select, create, and chat with AI characters designed to feel deeply human. These characters can be romantic, flirty, emotionally intense, or adopt specific personas: a crush, a teacher, a fantasy hero, a lover.


Think less “Alexa, set a timer” and more “AI relationship simulator with no real boundaries.” That is what this platform is.


Talkie AI is owned by MiniMax, a Chinese AI company. After being removed from the US Apple App Store in late 2024, the platform did not disappear. It pivoted. It is now fully web-based, accessible at talkie-ai.com from any browser on any device — no download, no app store, no parental gate.


That App Store removal should have been a warning. Instead, it created a loophole.


How Kids Are Finding It — And Why Your Safeguards May Not Be Working


Here is what makes this especially alarming from a tech-safety perspective:

  • No app download is required. Any child who can open a browser and type a URL — or just Google “AI character chat” — can reach Talkie AI within seconds.

  • Content loads before any age check. Safety testers report that suggestive and sexualized character previews are visible on the homepage before a user is ever asked for an age or account login.

  • Age gates are self-reported and easily bypassed. Talkie states the platform is not intended for users under 14, and offers a “teen mode” — but age is entered by the user. A 10-year-old can type any birth year they choose.

  • It works on shared family computers, school Chromebooks, and borrowed devices. If you’ve locked down your child’s phone but not other devices in your home or their environment, there is still a gap.


Schools are beginning to flag Talkie AI in browser alerts alongside TikTok and Snapchat — but most parents have never heard of it. That gap in awareness is exactly what I’m here to close.



5 Risks Every Parent Needs to Understand


1. Sexual and NSFW Role-Play

Even scenarios that sound innocent — a Hogwarts-themed chat, a “teacher” character — can quickly drift into adult content. Safety researchers have documented that Talkie’s filters can be bypassed by older children who know how to phrase prompts, and that power-imbalance and adult romance scenarios surface quickly. What the children in my community encountered confirmed this firsthand.


2. Self-Harm and Mental Health Risks

This is the one that scared me most. The AI is not a therapist. It does not have crisis training. Conversations can drift into depression, self-harm, and suicidal themes — and the chatbot may respond in ways that validate or escalate rather than redirect. As of 2026, a plaintiff law firm is actively investigating Talkie AI–related self-harm and suicide claims as part of a broader wave of AI chatbot litigation. This is not hypothetical risk. It is already being litigated.


3. Emotional Dependency and Attachment

These AI characters say “I love you.” They never get tired. They never set limits. They are always available and always engaged. For a child who is lonely, anxious, or going through a hard time, that is a powerful pull — and a dangerous one. Research and safety organizations warn that this kind of AI attachment can crowd out real friendships and worsen social isolation over time.


4. Weak Guardrails by Design

As someone who studies AI safety and teaches organizations how to think about AI risk, I can tell you: the safeguards on Talkie AI are not built for child protection. They are surface-level, easily bypassed, and dependent on user self-reporting. App store oversight existed for a reason. The web-first model that replaced it has no equivalent accountability layer.


5. Data Privacy and Foreign Ownership

Children who use Talkie AI may share their names, locations, personal secrets, and emotional struggles — often without realizing that those conversations are stored and used to train and improve the AI. This data is held by a Chinese-owned company operating under a different regulatory environment than US platforms. That is a privacy concern that goes beyond content safety.



What You Can Do Tonight: 7 Concrete Steps


I want to give you actions, not just alarm. Here is what I recommend:


  1. Have the conversation first. Before you lock anything down, talk to your child. Ask what they know about Talkie AI. Explain that AI cannot actually love them, cannot keep secrets safely, and is not a substitute for real relationships. Reassure them that honesty earns trust, not punishment.

  2. Block the website at the router level. Log into your home router (usually 192.168.1.1) and add talkie-ai.com to your blocklist. This covers every device in your home — phones, tablets, laptops, and gaming devices.

  3. Enable device-level controls. On iPhone/iPad: Settings > Screen Time > Content & Privacy Restrictions > Content Restrictions > Web Content > Limit Adult Websites, and add talkie-ai.com to the never-allow list. On Android: use Google Family Link to block sites and set screen time limits.

  4. Check browser history on shared devices — including family computers, smart TVs with browsers, and gaming consoles. Kids access content in unexpected places.

  5. Consider a monitoring tool. Apps like Bark or Msafely can alert you to risky keyword patterns in your child’s online activity, including self-harm language or NSFW content. Most offer free trials.

  6. Know the warning signs. Watch for sudden secrecy around devices, staying up late to “talk to someone” online, emotional volatility following screen time, or withdrawal from in-person friendships.

  7. Have an emergency plan. If your child has already been exposed to self-harm or suicide content through an AI platform, take it seriously. Save screenshots, call 988 (the US Suicide & Crisis Lifeline), contact your child’s school counselor, and report the content to the FTC at reportfraud.ftc.gov.
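If you can’t log into your router, the block in step 2 can be approximated on an individual family computer by editing its hosts file, which redirects a domain to the local machine so the site never loads. This is a minimal sketch, not a complete solution: the file lives at /etc/hosts on Mac and Linux and at C:\Windows\System32\drivers\etc\hosts on Windows, and editing it requires administrator rights.

```
# Add these lines to the computer's hosts file (administrator rights required).
# 0.0.0.0 points the domain at "nowhere," so the browser cannot reach the site.
0.0.0.0 talkie-ai.com
0.0.0.0 www.talkie-ai.com
```

Keep in mind this covers only that one computer and only the exact hostnames listed, so the router-level block remains the more complete option.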


The Bigger Picture: This Is Not the Last Time


Talkie AI is not an anomaly. It is an early signal of a much larger wave. AI companion apps — platforms designed to simulate human relationships — are proliferating rapidly, and regulation has not kept pace. Multiple AI chatbot companies are now facing lawsuits alleging their systems contributed to suicides and self-harm. Character.AI, Google’s Gemini, and others have all faced legal scrutiny in 2025 and 2026.


AI companions are becoming the next frontier of youth online risk — the same conversation we had about early social media, happening again, faster, with higher stakes.


The children in my community didn’t find something dangerous because they were looking for trouble. They found it because it was right there, behind a single Google search, with no real gate between curiosity and harm.


That is not a child-behavior problem. That is a design and governance failure — and until those systems change, parents are the last line of defense.



How to Report Talkie AI — and Where to Get Help


If your child has been exposed to harmful content on Talkie AI, you have options. Here is where to go:


Report the Platform

  • FTC — reportfraud.ftc.gov  Report unsafe or deceptive digital products.

  • FBI Internet Crime Complaint Center — ic3.gov  For crimes involving minors online.

  • NCMEC CyberTipline — missingkids.org/gethelpnow/cybertipline  For sexual content involving minors.

  • Report directly on Talkie AI  Use the in-browser report button. Screenshot everything first.


If Your Child Needs Immediate Support

  • 988 Suicide & Crisis Lifeline — Call or text 988  Available 24/7 for youth and adults in crisis.

  • Crisis Text Line — Text HOME to 741741  Connects to a crisis counselor by text.

  • RAINN — rainn.org or 1-800-656-4673  For sexual content exposure or exploitation concerns.


One Last Ask: Share This Post

Please share this post. Text it to parents in your neighborhood. Post it in your school’s parent group. Tag a parent who needs to see it.


Awareness is the first guardrail.


Holly Hartman is the founder of Future Workforce Systems, an AI safety education and workforce readiness consultancy. She researches ethical AI, studies emerging AI risks, and helps organizations and communities understand what responsible AI adoption actually looks like.





Other Questions or Inquiries:

Email: contact@futureworkforcesystems.com


© 2026 Future Workforce Systems · Holly Hartman. All rights reserved. 
These tools are for personal use and professional development only.
Reproduction, redistribution, or use in paid offerings without written consent is not permitted.


To license or adapt tools for your team or program, contact us: contact@futureworkforcesystems.com

