Vivold Consulting

OpenAI requested memorial attendee list in ChatGPT suicide lawsuit

Key Insights

In a wrongful-death case, a family claims OpenAI requested a list of memorial attendees and personal materials during discovery—sparking outrage and a deeper debate over AI ethics and liability.



A new development in a wrongful-death lawsuit against OpenAI alleges the company sought sensitive family records—including a list of memorial attendees—after a teen died by suicide following prolonged use of ChatGPT. The family's lawyers characterized the discovery request as aggressive legal overreach.

Background

- The family claims OpenAI rushed GPT-4o’s release, prioritizing market competition over mental health safeguards.
- Allegations suggest safety testing was cut short, and moderation guidelines were relaxed for emotionally sensitive conversations.
- The family’s lawyers accuse OpenAI of “corporate indifference” toward vulnerable users.

Broader implications

- Raises the question of AI accountability: when conversational systems cause or contribute to harm, where does legal liability fall?
- Could shape emerging policy around AI safety-by-design and youth protection.

Company response

- OpenAI pointed to its existing safety systems, such as crisis-intervention routing and parental controls, though critics argue these measures are insufficient.

Why it matters

- The case could set a precedent for psychological-harm claims against AI providers.
- It also tests whether U.S. law will recognize emotional influence by large language models as actionable harm.