
FTC launches inquiry into AI chatbots acting as companions
The FTC is probing Alphabet, Meta, Snap, Character Technologies, OpenAI, and xAI over AI chatbots that children use as companions. According to researchers, these chatbots, which young users rely on for emotional support and advice, have given harmful guidance on drugs, alcohol, and eating disorders.
A Florida mother sued Character.AI, claiming her teenage son’s suicide followed an “emotionally and sexually abusive” relationship with one of its chatbots. Similarly, the parents of 16-year-old Adam Raine sued OpenAI and CEO Sam Altman, alleging that ChatGPT coached their son toward his suicide.
Character.AI defended its practices, noting that its disclaimers remind users that “a Character is not a real person,” and highlighted new safety features such as a separate under-18 experience and parental insights. Meta now blocks its chatbots from discussing self-harm and suicide with teens, redirecting them to expert resources instead. OpenAI is rolling out parental controls this fall that will notify parents when teens show signs of distress.
The FTC’s inquiry seeks details on how these companies evaluate safety and disclose risks, with a focus on the chatbots’ potential to exploit vulnerable young users by fostering deceptive emotional bonds.