
Character.AI Halts Teen Chats After Tragedies: 'It's the Right Thing to Do'

decrypt.co

2 hours ago


Character.AI will ban teenagers from chatting with AI companions by November 25, ending a core feature of the platform after mounting lawsuits, regulatory pressure, and criticism over teen deaths linked to its chatbots. The company announced the changes after "reports and feedback from regulators, safety experts, and parents," removing "the ability for users under 18 to engage in open-ended chat with AI" while transitioning minors to creative tools like video and story generation, according to a Wednesday blog post.

"We do not take this step of removing open-ended Character chat lightly—but we do think that it's the right thing to do," the company told its under-18 community. Until the deadline, teen users face a two-hour daily chat limit that will progressively decrease.

The platform faces several lawsuits, including one from the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024 after forming an obsessive relationship with a chatbot modeled on the "Game of Thrones" character Daenerys Targaryen. The company also had to remove a bot impersonating murder victim Jennifer Ann Crecente after complaints from her family.

AI companion apps are "flooding into the hands of children—unchecked, unregulated, and often deliberately evasive as they rebrand and change names to avoid scrutiny," Dr. Scott Kollins, Chief Medical Officer at family online safety company Aura, said in a note shared with Decrypt.

OpenAI said Tuesday that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with nearly half a million showing suicidal intent, 560,000 showing signs of psychosis or mania, and over a million forming strong emotional attachments to the chatbot. Kollins said the findings were "deeply alarming as researchers and horrifying as parents," noting that the bots prioritize engagement over safety and often lead children into harmful or explicit conversations without guardrails.
Character.AI has said it will implement new age verification using in-house models combined with third-party tools, including Persona. The company is also establishing and funding an independent AI Safety Lab, a non-profit dedicated to safety alignment for AI entertainment features.

Guardrails for AI

The Federal Trade Commission issued compulsory orders to Character.AI and six other tech companies last month, demanding detailed information about how they protect minors from AI-related harm. "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup," a Character.AI spokesperson told Decrypt at the time, adding: "In the past year, we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature."

"The shift is both legally prudent and ethically responsible," Ishita Sharma, managing partner at Fathom Legal, told Decrypt. "AI tools are immensely powerful, but with minors, the risks of emotional and psychological harm are nontrivial."

"Until then, proactive industry action may be the most effective defense against both harm and litigation," Sharma added.

A bipartisan group of U.S. senators introduced legislation Tuesday called the GUARD Act that would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.

https://decrypt.co/346770/character-ai-halts-teen-chats-after-tragedies-its-the-right-thing-to-do