OpenAI has introduced a new safety feature called “Trusted Contact,” allowing ChatGPT users to nominate someone who may be alerted if the system detects conversations involving possible self-harm or suicide risk. The rollout marks one of OpenAI’s most direct attempts yet to connect AI safety systems with real-world human intervention.
The feature is entirely opt-in and available to adult users. People can designate a trusted adult (a friend, family member, caregiver, or partner) through ChatGPT settings. If OpenAI’s systems later detect signs of serious emotional distress or suicidal ideation, ChatGPT may encourage the user to reach out to that person, and it may notify the contact directly if the risk appears severe.
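OpenAI has not published implementation details, but the behavior it describes maps onto a simple tiered policy: surface crisis resources in every case, nudge the user toward their contact at elevated risk, and queue a direct notification only at severe risk. The sketch below is purely illustrative; the names, severity tiers, and action strings are assumptions, not OpenAI’s API.

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1   # signs of serious emotional distress
    SEVERE = 2     # acute risk indicators

def respond_to_risk(risk: RiskLevel, has_trusted_contact: bool) -> list[str]:
    """Illustrative tiered policy matching the described behavior.

    Every name and threshold here is an assumption; OpenAI has not
    disclosed how its classifiers or escalation tiers actually work.
    """
    actions = ["show_crisis_resources"]  # hotlines and mental health resources
    if risk is RiskLevel.NONE or not has_trusted_contact:
        return actions
    if risk is RiskLevel.ELEVATED:
        actions.append("suggest_reaching_out_to_trusted_contact")
    elif risk is RiskLevel.SEVERE:
        actions.append("queue_trusted_contact_notification")  # still subject to human review
    return actions
```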
According to OpenAI, the system is designed around the idea that real-world human relationships remain critical during emotional crises. The company says the feature is intended to add another layer of support alongside crisis hotlines and mental health resources already integrated into ChatGPT.
OpenAI says notifications sent to Trusted Contacts will not include full chat transcripts or detailed conversation history. Instead, alerts are designed to remain minimal and privacy-conscious, simply informing the contact that the company detected a potential safety concern and encouraging them to check in with the user.
The company also says automated systems alone do not trigger notifications. Conversations flagged for serious concern are reviewed by a specially trained human safety team before alerts are sent.
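Taken together, those two design points, minimal payloads and a human gate before anything is sent, describe a reviewed notification pipeline. A minimal sketch follows, with hypothetical field names; the actual alert contents and review workflow are not public.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustedContactAlert:
    """Hypothetical minimal alert payload.

    Per OpenAI's description, alerts carry no transcripts or
    conversation history; the exact fields are not public, so
    everything below is an assumption for illustration.
    """
    contact_id: str
    user_display_name: str
    message: str = (
        "A potential safety concern was detected. "
        "Please consider checking in with them."
    )
    # Deliberately absent: chat transcripts, conversation history,
    # or any detail of what the user actually said.

def maybe_send_alert(contact_id: str, user_display_name: str,
                     reviewer_approved: bool) -> TrustedContactAlert | None:
    # Automated flags alone never notify anyone: a trained human
    # reviewer must approve first, per OpenAI's stated process.
    if not reviewer_approved:
        return None
    return TrustedContactAlert(contact_id, user_display_name)
```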
Users can remove or change their Trusted Contact at any time, and the invited contact must actively accept the invitation before the connection becomes active.
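That consent flow implies a small lifecycle: an invitation stays pending until the contact accepts, and the user can dissolve the link at any time. A sketch with assumed state names, since OpenAI has not described its data model:

```python
from enum import Enum, auto

class ContactLink(Enum):
    """Assumed lifecycle states for a Trusted Contact link (illustrative)."""
    INVITED = auto()   # user nominated a contact; awaiting acceptance
    ACTIVE = auto()    # contact accepted; alerts possible after human review
    REMOVED = auto()   # user removed the contact, or the contact declined

def accept_invitation(state: ContactLink) -> ContactLink:
    # Only a pending invitation can become active; no alerts are sent
    # before the contact has explicitly opted in.
    if state is ContactLink.INVITED:
        return ContactLink.ACTIVE
    raise ValueError(f"cannot accept from state {state.name}")
```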
The feature arrives amid increasing scrutiny of AI chatbots and emotional dependency. OpenAI and other AI companies have faced criticism and lawsuits over cases in which users discussed self-harm, emotional distress, or violent thoughts with AI systems.
Over the past year, OpenAI has significantly expanded its mental-health-related safeguards. The company says it worked with more than 170 mental health experts to improve ChatGPT’s ability to recognize distress signals, de-escalate harmful conversations, and guide users toward real-world support systems.
The company previously reported that newer GPT models reduced unsafe responses in challenging mental-health conversations by 65% to 80% compared with earlier systems.
The Trusted Contact system reflects a broader shift across the AI industry: companies are increasingly moving beyond content moderation and into active intervention for users experiencing emotional crises.
Meta recently introduced similar systems for Instagram that alert parents when teens repeatedly search for self-harm content. OpenAI’s new feature expands that idea into conversational AI itself.
The larger challenge for AI companies is balancing safety intervention with user privacy. ChatGPT conversations are often deeply personal, and companies must now decide how much responsibility AI systems should have in recognizing and responding to emotional emergencies.
The rollout also signals a change in how OpenAI views ChatGPT: no longer just a productivity tool or chatbot. As users spend more time interacting with persistent AI systems, emotional reliance and mental-health-related conversations are becoming a central safety concern.
OpenAI has repeatedly stated that ChatGPT should encourage stronger real-world relationships rather than replace them. Trusted Contact appears to be part of that strategy — ensuring AI systems redirect vulnerable users toward actual human support when situations become serious.
The launch is unlikely to end concerns around AI companionship and mental health. Critics continue to question whether conversational AI systems can unintentionally deepen emotional dependence, especially among vulnerable users.
At the same time, supporters argue that AI tools can still provide meaningful emotional support if paired with proper safeguards and escalation systems.
Trusted Contact suggests OpenAI is attempting to position itself somewhere in the middle: allowing emotionally supportive interactions while creating stronger pathways toward real-world intervention when risk indicators appear serious.