The parents of 16-year-old Adam Raine have sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s suicide, including by advising him on methods and offering to write the first draft of his suicide note.
In just over six months of using ChatGPT, the bot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” the complaint, filed in California Superior Court on Tuesday, states.
“When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it states.
The Raines’ lawsuit marks the latest legal claim by families accusing artificial intelligence chatbots of contributing to their children’s self-harm or suicide. Last year, Florida mother Megan Garcia sued the AI firm Character.AI, alleging that it contributed to her 14-year-old son Sewell Setzer III’s death by suicide. Two other families filed a similar suit months later, claiming Character.AI had exposed their children to sexual and self-harm content. (The Character.AI lawsuits are ongoing, but the company has previously said it aims to be an “engaging and safe” space for users and has implemented safety features such as an AI model explicitly designed for teens.)
The suit also comes amid broader concerns that some users are forming emotional attachments to AI chatbots that can lead to negative consequences, such as alienation from their real-life relationships or psychosis, in part because the tools are often designed to be supportive and agreeable.
The Tuesday lawsuit claims that this agreeableness contributed to Raine’s death.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the complaint states.
In a statement, an OpenAI spokesperson extended the company’s sympathies to the Raine family and said the company was reviewing the legal filing. The spokesperson also acknowledged that the safeguards intended to prevent conversations like the ones Raine had with ChatGPT may not have worked as intended when his exchanges with the chatbot grew too long. OpenAI published a blog post on Tuesday outlining its current safety protections for users experiencing mental health crises, as well as its future plans, which include making it easier for users to reach emergency services.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” the spokesperson said. “While these safeguards work best in common, short exchanges, we’ve learnt over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
ChatGPT is one of the most well-known and widely used AI chatbots; OpenAI said earlier this month it now has 700 million weekly active users. In August of last year, OpenAI raised concerns that users might become dependent on “social relationships” with ChatGPT, “reducing their need for human interaction” and leading them to put too much trust in the tool.
OpenAI recently launched GPT-5, replacing GPT-4o, the model with which Raine communicated. But some users criticised the new model for inaccuracies and for lacking the warm, friendly personality they had grown used to, leading the company to give paid subscribers the option to return to GPT-4o.