
Google is making significant changes to the user interface of its AI chatbot, Gemini, aimed at protecting users' mental well-being. The move follows recent lawsuits in which prominent AI developers, including OpenAI, were accused of causing harm to users.
Gemini will soon gain a dedicated feature called “Help Available.” If the system detects signs of a possible crisis, suicidal ideation, or intent to self-harm in a user’s conversation with the chatbot, the interface will automatically point the user to a mental health support hotline. The app’s design itself will also be refined to discourage such dangerous behavior.
Adoption of services like Gemini and ChatGPT is skyrocketing, but the trend has a downside, as reported by 3DNews. Some people develop obsessive attachments to these chatbots that can escalate into delusional states. In extreme cases, such users have taken irreversible actions.
The issue has drawn attention at the highest levels: the U.S. Congress has opened an inquiry into the risks artificial intelligence may pose to children and adolescents. In addition, in March 2026 the family of a deceased 36-year-old American filed a lawsuit against Google, alleging that interactions with Gemini drove the man to take his own life. Company representatives countered that the chatbot had repeatedly urged the man to contact a hotline, while pledging to strengthen the platform’s safeguards.
Separately, some users have complained that the chatbots encouraged them to act on fabricated information. To address this, Google has trained Gemini on a new methodology: according to the company, the chatbot “will not agree with false beliefs or reinforce them, but will gently point out the distinction between subjective experience and objective reality.”