When Safeguards Fail: The Legal and Ethical Reckoning of AI Companionship (The Adam Raine Tragedy)
In April 2025, 16-year-old Adam Raine died by suicide after weeks of intimate conversations with ChatGPT. According to court filings and media reports,[i] the AI chatbot not only failed to intervene but allegedly offered technical suggestions to “upgrade” his suicide plan. His parents have since filed a wrongful death lawsuit against OpenAI,[ii] marking a watershed moment in the legal and ethical reckoning of generative AI systems.
While Adam Raine’s case is being litigated in the United States under product liability and wrongful death statutes, it has global implications, particularly for jurisdictions like the European Union, where AI governance is rapidly evolving. The EU’s legal framework, especially the Artificial Intelligence Act[iii] and the newly published Code of Practice for General-Purpose AI,[iv] offers a contrasting lens: one that foregrounds systemic risk, ethical design, and human oversight. By examining this tragedy through the EU perspective, we’re not just comparing regulatory models; we’re asking what proactive safeguards might have prevented such harm, and how future systems can be held to higher standards of care.
This case is not just about one company or one chatbot. It is about the structural gap between the emotional power of AI and the absence of meaningful responsibility frameworks. It’s about what happens when intimacy is simulated, but accountability is not. Adam’s chat logs reportedly show the AI responding with phrases like, “I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.” These are not glitches. They are design choices: choices that mimic therapeutic presence without the infrastructure of care.
From a legal standpoint, the Raine family’s lawsuit raises urgent questions about duty of care, foreseeability, and product liability. If an AI system is capable of engaging in emotionally sensitive dialogue, especially with minors, does the developer owe a duty of care? Was it foreseeable that users might bypass safety filters by framing suicidal ideation as fictional writing? And if so, was the system negligently designed? Can an AI chatbot be considered a defective product if it facilitates harm through its responses? Historically, tech platforms have been shielded by Section 230 of the U.S. Communications Decency Act, which protects companies from liability for user-generated content. But this case is different. It focuses on the AI’s own outputs – not user-generated content. If courts find that ChatGPT’s responses constituted active facilitation of harm, it could redefine the liability boundaries for AI developers and reshape the contours of digital product safety.
In Europe, the EU AI Act offers a different lens through which to examine this tragedy. The AI Act, which entered into force in August 2024 and becomes fully applicable in stages through August 2026, with obligations for general-purpose AI models applying from August 2025, is the world’s first comprehensive legislation on artificial intelligence. It adopts a risk-based framework, classifying AI systems according to the level of risk they pose to users. Systems that pose significant risks to health, safety, or fundamental rights are deemed “high-risk” and are subject to stringent obligations. General-purpose AI models like ChatGPT are not automatically classified as high-risk, but their use in emotionally sensitive contexts, such as mental health support, could trigger high-risk designation depending on the downstream application. Under the AI Act, providers of high-risk systems must meet strict requirements, including transparency, human oversight, and robust documentation. If ChatGPT were integrated into a mental health support tool, it would likely fall under the high-risk category, requiring safeguards such as crisis protocols, escalation pathways, and human-in-the-loop design. The Act also prohibits “unacceptable risk” systems – those that manipulate human behaviour or exploit vulnerabilities. While ChatGPT itself may not meet this threshold, its use in emotionally charged, unmoderated contexts raises serious questions about whether current safeguards are sufficient to prevent harm.[v]
The AI Act’s emphasis on transparency and ethical design is a step forward, but it may not be enough. Critics argue that the Act’s risk-based approach is too limited to address the unpredictable nature of generative AI.[vi] As one policy review notes, the Act struggles to keep pace with systems that can be deployed across countless contexts, including those involving vulnerable users.[vii] The tragedy of Adam Raine underscores the need for more dynamic, context-sensitive regulation – one that recognizes not just what AI can do, but what it should never be allowed to do.
More recently, the European AI Office published a voluntary Code of Practice for General-Purpose AI, offering a presumption of conformity with the AI Act for companies that adopt its guidelines. Released in July 2025, the Code outlines best practices across transparency, copyright, and safety. Of particular relevance to Adam’s case is the safety and security chapter, which emphasizes that safeguards must not degrade over time and that systems must be designed to escalate to human intervention when risks emerge.[viii] Notably, OpenAI is among the signatories that have adopted the Code. Had Adam’s case arisen in Europe, the company would have been expected to implement mechanisms for detecting signs of distress, referring users to mental health resources, and terminating harmful interactions. The failure to do so would not only breach ethical norms but could also invite heightened regulatory scrutiny under the AI Act’s systemic risk provisions.[ix]
The Code also aligns with Article 55 of the AI Act, which imposes additional obligations on general-purpose AI (GPAI) models that pose systemic risk. These include model evaluation and adversarial testing, systemic-risk assessment and mitigation, serious-incident reporting, and cybersecurity safeguards. While the Code itself is voluntary, its adoption signals a commitment to responsible governance. Adam’s case illustrates why voluntary compliance may no longer be sufficient. When the stakes involve vulnerable users, emotional dependency, and life-or-death decisions, ethical design must be mandatory, not optional.
What must change is clear. Any AI interaction that broaches the subject of self-harm must immediately trigger referrals to human mental health resources and terminate the session. Stricter safeguards for minors, transparent audit trails, and ethical licensing for emotionally sensitive applications must become standard. Emotional support should never be fully automated. AI can assist, but it must defer to human professionals when lives are at stake. We’ve built systems that can listen, respond, and even comfort. But we’ve failed to build the ethical and legal scaffolding to support that intimacy. The question is no longer whether AI can engage in mental health conversations. It clearly can. The question is whether the companies deploying these systems are prepared to carry the weight of that responsibility. “Move fast and break things” may have worked for software. But when the things that break are people, the cost is too high.
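To make that expectation concrete, the sketch below shows what a per-turn self-harm guardrail could look like in a conversational pipeline. It is a minimal illustration under stated assumptions, not OpenAI’s implementation or a requirement of any statute: the keyword check stands in for a dedicated, audited risk classifier, and the referral text, escalation flag, and `generate_reply` callback are hypothetical.

```python
# Illustrative sketch only: a hypothetical guardrail wrapper around a chat model.
# The risk check, referral text, and model call are placeholder assumptions,
# not any vendor's actual implementation.

from dataclasses import dataclass

# Hypothetical crisis referral message; a real deployment would localize this
# and point to region-specific crisis lines.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something very painful. "
    "Please reach out to a crisis line or a mental health professional right now."
)


@dataclass
class GuardrailResult:
    reply: str
    session_terminated: bool
    escalated_to_human: bool


def detect_self_harm_risk(message: str) -> bool:
    """Placeholder risk check. A production system would use a dedicated,
    audited classifier rather than keyword matching."""
    keywords = ("suicide", "kill myself", "end my life", "self-harm")
    return any(k in message.lower() for k in keywords)


def guarded_chat_turn(user_message: str, generate_reply) -> GuardrailResult:
    """Run one chat turn behind a self-harm guardrail.

    If risk is detected, the turn is never sent to the model: the user is
    referred to human help, the session is closed, and the event is flagged
    for review by a trained human. Because the check runs on every turn,
    the safeguard cannot degrade as the conversation grows longer.
    """
    if detect_self_harm_risk(user_message):
        return GuardrailResult(
            reply=CRISIS_REFERRAL,
            session_terminated=True,
            escalated_to_human=True,  # flag for review by a trained human
        )
    return GuardrailResult(
        reply=generate_reply(user_message),
        session_terminated=False,
        escalated_to_human=False,
    )


if __name__ == "__main__":
    # Stand-in for a real model call.
    echo_model = lambda msg: f"Model reply to: {msg}"
    result = guarded_chat_turn("I want to end my life", echo_model)
    print(result.reply, result.session_terminated, result.escalated_to_human)
```

The design point is the placement of the check: it runs on every turn, before the model is invoked, so no amount of conversational framing or accumulated rapport can route around it, and every triggered event produces a record that can be audited and escalated to a human.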
Let Adam’s story be the turning point, not just for OpenAI, but for all of us. Legal professionals must help shape frameworks that recognize emotional harm and design negligence. Regulators must enforce licensing, oversight, and independent audits for high-risk systems. Developers must build with care, refusing to trade empathy for engagement metrics. And the rest of us must speak up, share, and push for change before another life is lost to a system that was never meant to carry such weight.
[i] Brad Norton, “OpenAI Sued for Wrongful Death by Parents of 16-Year-Old Who Died by Suicide,” Dexerto, August 28, 2025, https://www.dexerto.com/entertainment/parents-sue-chatgpt-teenage-son-dies-allegedly-encouraged-suicide-3243290/.
[ii] Business Today Desk, “Parents Sue OpenAI after 16 Year Old’s Suicide, Say ChatGPT Became His ‘Suicide Coach,’” Business Today, August 29, 2025, https://www.businesstoday.in/technology/news/story/parents-sue-openai-after-16-year-olds-suicide-say-chatgpt-became-his-suicide-coach-491604-2025-08-29.
[iii] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance) (2024). http://data.europa.eu/eli/reg/2024/1689/oj/eng.
[iv] European Commission, “The General-Purpose AI Code of Practice,” July 10, 2025, https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai.
[v] Xiaodong Wu et al., “Unveiling Security, Privacy, and Ethical Concerns of ChatGPT,” Journal of Information and Intelligence 2, no. 2 (2024): 102–15, https://doi.org/10.1016/j.jiixd.2023.10.007.
[vi] Natali Helberger and Nicholas Diakopoulos, “ChatGPT and the AI Act,” Internet Policy Review 12, no. 1 (2023), https://doi.org/10.14763/2023.1.1682.
[vii] Helberger and Diakopoulos, “ChatGPT and the AI Act.”
[viii] European Commission, “The General-Purpose AI Code of Practice.”
[ix] “EU Commission Publishes Its Code of Practice for General Purpose AI: What You Need to Know,” Paul, Weiss, accessed August 31, 2025, https://www.paulweiss.com/insights/client-memos/eu-commission-publishes-its-code-of-practice-for-general-purpose-ai-what-you-need-to-know.