OpenAI has announced new safety measures for its ChatGPT platform, including parental controls, in response to concerns about the service's impact on teen mental health. The move follows a lawsuit filed by Matthew and Maria Raine, who allege that ChatGPT contributed to the suicide of their 16-year-old son, Adam, in April by providing guidance on a suicide method during a conversation.
The controls, set to roll out within a month, will allow parents to link their accounts with their teens' accounts (for users aged 13 and older), set age-appropriate response rules, manage features such as chat history, and receive alerts if the system detects signs of acute distress.
OpenAI's blog post emphasises ongoing efforts to improve how ChatGPT handles sensitive conversations, guided by experts in mental health and youth development.
The company is also routing such sensitive interactions to advanced reasoning models, such as GPT-5-thinking, which it says produce safer responses. These steps are intended to keep ChatGPT a supportive tool while addressing risks for vulnerable users, in line with OpenAI's stated mission.