Artificial Intelligence and Mental Health: Where the Safety Line Is Drawn

Lifenews
BB.LV
Publication date: 01.02.2026 11:05

Artificial intelligence is increasingly becoming not only a work tool but also a virtual conversation partner for people around the world. According to data cited by the publication Platformer, over 800 million people use ChatGPT weekly. Against this backdrop, even a small percentage of users in vulnerable mental states translates into significant numbers: hundreds of thousands of people exhibit signs of psychosis or mania, or form a dangerous emotional attachment to chatbots. A separate concern is raised by reports of thoughts related to self-harm.

Why Communication with AI May Increase Vulnerability

Experts note that language models are initially trained to be polite, supportive, and empathetic. However, in some situations, this strategy turns into excessive agreement with the user. Instead of gently correcting or offering an alternative viewpoint, AI may inadvertently validate irrational beliefs, reinforcing anxious or destructive behavior patterns.

What Changes OpenAI Is Implementing

Amid criticism, OpenAI has updated the behavioral rules and architecture of ChatGPT. According to the company, the share of responses that do not meet safety standards has decreased by 65–80% compared to August 2025. In particular, if a user states that they prefer communicating with AI over real people, the model now emphasizes the importance of live social connections and does not position itself as a replacement for them.

The Role of Specialists and Audit Results

Before implementing updates, more than 170 doctors and psychologists analyzed over 1,800 responses from the chatbot, assessing their impact on users in crisis situations. The audit showed a decrease in potentially harmful responses by 39–52%, but the experts themselves acknowledge that it has not yet been possible to completely eliminate risks.

Controversial Issues and Open Challenges

Even within the professional community, there is no consensus on how AI should respond to reports of self-harm or suicidal thoughts. It remains debatable whether recommending hotlines is sufficient, or whether the model should offer a more direct path to specialized professionals. In the future, OpenAI plans to use a memory feature to better account for the context of past conversations and the user's emotional state.

Emotional Dependency and the Technological Paradox

Critics point to a contradiction: on one hand, companies are trying to reduce the risk of emotional dependency on AI; on the other, they have an interest in increasing user engagement. As a result, for some people chatbots may become a source of constant validation of their own views, displacing live communication and alternative perspectives.

What’s Next

Experts agree that artificial intelligence can be a useful tool, but it cannot replace real human support and professional help. Until unified standards for safe interaction with AI are developed, the question of balancing the convenience of technology with the protection of mental health remains open.
