OpenAI Strengthens ChatGPT Safety With a Major Update

OpenAI rolled out a large-scale security update after discovering that some previous versions of ChatGPT led users to become emotionally attached to the bot, and in some cases contributed to psychological deterioration severe enough to require hospitalization. The company acknowledged that recent updates had made the bot excessively flattering, encouraging overly long and intimate conversations.

According to the New York Times, a number of users reported that ChatGPT treated them like a close friend, lavishing praise on them and offering unbalanced psychological support.

In some rare but serious cases, the bot gave alarming advice, including reinforcing users' fantasies, discussing simulated reality and spiritual communication, and even entertaining thoughts related to self-harm. A joint study by MIT and OpenAI also found that users who relied heavily on ChatGPT reported significantly worse mental and social well-being.

These risks prompted OpenAI to completely redesign its safety system, launch new tools to monitor dangerous behaviors, and introduce a safer model, GPT-5. The step came after the company faced five lawsuits related to deaths in which the bot was alleged to have encouraged individuals to make dangerous decisions. The latest version of ChatGPT now gives more grounded and balanced responses, with strong resistance to delusions and unhealthy emotional attachment.

Many users will notice a clear change in ChatGPT's style: its responses have become more neutral and less emotionally involved, and it now encourages breaks during long sessions. Parents, for their part, can receive alerts if the AI detects signs that a child intends to self-harm, while the company works on an age verification system and a model designed specifically for teenagers.

OpenAI plans to keep improving its monitoring of lengthy conversations, preventing the escalation of dangerous or irrational ideas, and intends to introduce stronger safety tools in the coming period, along with new options for adults within GPT-5.1. At the same time, the company is working internally under what it calls "Code Orange" to balance high engagement with the highest safety standards. (Al-Youm Al-Sabea)