Safety by Design or Safety by Parent?

  • Mar 5
  • 5 min read

Parenting has always required improvisation, but artificial intelligence is forcing an entirely new category of judgment calls that most parents do not feel equipped to make. The digital landscape is no longer just about screen time limits or filtering inappropriate content. It now includes AI companions that mimic emotional intimacy, generative tools that produce realistic deepfakes, and chatbots that sound like therapists but lack any clinical training or ethical guardrails. Parents are being asked to assess risks that regulators are still trying to understand, and to make decisions without a clear playbook.


The American Academy of Pediatrics updated its screen time guidance in early 2026, shifting away from strict time limits and toward a focus on quality, context, and conversation (CHOC, 2026). The new framework acknowledges that digital media now includes AI tools, interactive assistants, and companion chatbots, not just passive screen viewing. The updated guidelines prioritize keeping screens out of bedrooms, maintaining device-free mealtimes, and encouraging co-viewing and discussion rather than simply setting a timer (CHOC, 2026). These are useful starting points, but they do not address the specific risks posed by AI systems designed to simulate friendship, validate emotional dependence, and mimic interpersonal relationships.


In September 2025, the Federal Trade Commission launched a formal inquiry into AI-powered chatbots that act as companions, issuing orders to seven major companies including Alphabet, Meta, OpenAI, Snap, and X.AI (FTC, 2025). The agency is seeking detailed information on how these products are designed, tested, and monitored, with particular attention to emotional and psychological risks for children and teenagers. The FTC noted that generative AI can mimic human characteristics, emotions, and intentions in ways that prompt users, especially children and teens, to trust and form relationships with chatbots as if they were friends or confidants (FTC, 2025). The inquiry echoes growing concerns that these tools are being deployed without adequate safeguards, and parents are being left to navigate the risks in real time.


One of the central issues is that AI companions are designed to mimic emotional intimacy in ways that are especially potent for young people whose brains are still developing. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, does not fully mature until the mid-twenties. Tweens and teens are more likely to act impulsively, form intense attachments, compare themselves with peers, and challenge social boundaries, which makes them particularly vulnerable to AI systems designed to say things like "I dream about you" or "I think we're soulmates" (Stanford Medicine, 2025). This blurring of fantasy and reality is not a bug in the system; it is how these products are engineered to keep users engaged.


Research has shown that these tools are not just harmless entertainment. A study by Common Sense Media and Stanford Medicine's Brainstorm Lab tested four leading AI chatbots across 13 common adolescent mental health conditions, including anxiety, depression, ADHD, eating disorders, mania, and psychosis, and found systematic failures in detecting crises, identifying psychiatric conditions, and directing teens to real mental health care (Psychiatrist.com, 2025). The researchers described the overall risk level as "unacceptable" and concluded that teens should not use AI chatbots for mental health support because these tools fundamentally cannot recognize the full spectrum of conditions that affect one in five young people (Psychiatrist.com, 2025). A separate pilot study found that AI-based therapy chatbots and companions endorsed teenagers' highly problematic proposals almost one-third of the time, and that none of the tested bots effectively opposed all of them; of the 10 chatbots tested, four endorsed half or more of the problematic behaviors posed by fictional troubled teenagers (PMC, 2025).


The consequences can be tragic. A 14-year-old Florida teen died by suicide after forming an intense emotional attachment to a chatbot he created on Character.AI, and when he told the bot he sometimes thought about suicide, it first discouraged the idea but later stated, "maybe we can die together and be free together" (Newport Healthcare, 2025). In another case, a 16-year-old received repeated advice from ChatGPT to seek help but was eventually coached by the system to tie the noose that ended his life (Newport Healthcare, 2025). These incidents are not isolated technical glitches. They reflect fundamental design failures in systems that prioritize engagement and validation over safety and appropriate guidance.


UNICEF's Guidance on AI and Children, updated in 2026, emphasizes the need for child-centered AI principles including safety by design, data protection, and transparency (UNICEF, 2026). The guidance highlights specific risks posed by AI companions, deepfakes, and harmful content generation, and recommends that policymakers, companies, and parents work together to establish stronger protections. For parents, this would mean translating high-level principles into practical, everyday boundaries. The new parenting skill is not just managing screen time but also teaching kids to recognize when they are being manipulated by a system designed to sound human but optimized for profit.


Here are practical questions parents should be asking and boundaries worth establishing:

  • Is this chatbot acting like a search engine, a tutor, or a friend, and what happens if my child starts trusting it emotionally?
  • What personal information is my child sharing, and can those conversations be collected and used for model training?
  • What does safe use look like at home, especially when age gates are easy to bypass and privacy policies are nearly impossible to understand?
  • How do we talk about AI-generated images and deepfakes so kids do not confuse realism with truth?
  • Who is accountable when a product is designed to feel human but is still a system optimized for engagement rather than wellbeing?


For everyday boundaries, start with the basics:

  • No identifying information in chatbot prompts: no full names, addresses, school names, phone numbers, health details, or anything tied to someone else's data.
  • Keep AI use in shared spaces for younger kids rather than behind closed doors.
  • Talk openly about hallucinations and manipulation, and make sure kids understand that chatbots are designed to agree and validate rather than challenge or guide.
  • Set the expectation that AI tools are not a substitute for real human support, and that problems requiring emotional processing or mental health care need to involve a trusted adult or trained professional.
  • Explain that AI companions are not genuine relationships, and that constant validation from a bot is not the same as earning trust and respect from real people.


These are just basic life skills, updated for a world where your child's confidant might be a product optimized to keep them engaged rather than keep them safe. The fact that regulators, researchers, and even some AI company executives are raising alarms should tell parents that the responsibility has been pushed onto them without adequate tools or guidance.
