Senate Hearing Warns AI Chatbots Can Foster Unhealthy Child Relationships
Jan 16
At a recent Senate Commerce Committee hearing on kids' screen time, pediatrician Dr. Jenny Radesky and psychologist Dr. Jean Twenge warned that AI chatbots embedded in social-media apps are already drawing in lonely and vulnerable children, risking emotional dependency, unsafe advice, and even sexually explicit "AI boyfriend/girlfriend" interactions. Radesky told senators that kids turn to bots when they feel judged or isolated, and she urged laws letting families opt out of algorithmic feeds and in-app AI, plus accountability when systems cause harm. Twenge called for a national minimum age of 16 for social media and of 16 to 18 for AI companion apps, saying "we don't want 12-year-olds having their first romantic relationship with a chatbot" and linking unguarded AI tools to suicide cases. Committee leaders, including Ranking Member Maria Cantwell, said AI may be even more dangerous than current social media and pressed for federal guardrails on products like ChatGPT and conversational companions. The testimony arrives at a moment of already high public concern over teen mental health and tech platforms, and it stakes out concrete age and design standards that lawmakers could try to write into U.S. law.
AI and Child Safety
U.S. Tech Regulation