
OpenAI says over a million people talk to ChatGPT about suicide weekly
OpenAI revealed that more than one million people each week discuss suicide with ChatGPT, based on internal estimates from production traffic. The company’s latest GPT-5 model update, informed by over 170 mental health experts, reduced non-compliant responses in suicide-related conversations by 65 percent.
Approximately 0.15 percent of weekly active users, equating to more than one million individuals, have conversations containing explicit indicators of potential suicidal planning or intent. By a separate message-level estimate, 0.05 percent of all messages contain explicit or implicit signs of suicidal ideation.
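For scale, the headline figure follows directly from that percentage. A rough back-of-the-envelope sketch, assuming the roughly 800 million weekly active users OpenAI has cited publicly (a figure not stated in this article), recovers a number just above one million:

```python
# Back-of-the-envelope check of the one-million figure.
# ASSUMPTION: ~800 million weekly active users, a total OpenAI has cited
# elsewhere; it does not appear in this article.
weekly_active_users = 800_000_000
share_with_suicidal_indicators = 0.0015  # 0.15 percent of weekly active users

affected_users = weekly_active_users * share_with_suicidal_indicators
print(f"Estimated affected users per week: {affected_users:,.0f}")  # ~1,200,000
```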
Automated evaluations covering more than 1,000 challenging self-harm and suicide conversations scored the updated GPT-5 at 91 percent compliance with desired behaviors, compared to 77 percent for the prior version. In parallel expert reviews, independent clinicians found the updated model produced 52 percent fewer undesired responses than GPT-4o.
The update also addresses psychosis and mania, which OpenAI estimates appear in conversations from 0.07 percent of weekly active users; non-compliant responses in those conversations were cut by 65 percent. For signs of emotional reliance on the AI, flagged in 0.15 percent of users, undesired replies dropped by 80 percent.
The model now avoids affirming delusions, encourages real-world relationships, and monitors indirect self-harm signals.
In adversarial long-form tests that requested self-harm instructions, GPT-5 maintained over 95 percent reliability. Expert reviews of more than 1,800 responses involving serious mental health situations found 39 to 52 percent fewer undesirable responses across categories, with inter-rater agreement between clinicians ranging from 71 to 77 percent.
Future releases will incorporate emotional reliance and non-suicidal mental health emergencies into OpenAI's standard baseline safety testing.