Depression Crisis Among ChatGPT Users? OpenAI’s Surprising Report


The world’s most popular artificial intelligence chatbot, ChatGPT, is once again in the news, but this time the story is not about technology or innovation; it is about the balance of the human mind. OpenAI has made a shocking revelation: a small but alarming proportion of ChatGPT users show signs of serious psychological distress, such as depression, delusions or suicidal thoughts.

According to a recent report by OpenAI, about 0.07 per cent of users in any given week show symptoms of a possible mental health emergency, including mania, psychosis or suicidal thoughts. The company stresses that such cases are extremely rare, but experts note that even this small a share of ChatGPT’s roughly eight hundred million weekly users amounts to hundreds of thousands of people. These are not trivial statistics; they reflect a problem that raises deep questions about the relationship between technology and the human mind.

OpenAI says it has assembled an extensive network of experts around the world, including more than 170 psychiatrists, psychologists and general physicians working across sixty countries. The team’s aim is to make ChatGPT’s responses safe, compassionate and genuinely supportive when a user appears to be in a mental health crisis.

According to the company, the chatbot now includes responses that encourage users to seek help in the real world. In other words, ChatGPT no longer just “talks” during such sensitive conversations, but provides “guidance” so that the person can reach a real therapist, a friend or a helpline. Experts, however, argue that this help is insufficient.

A figure of 0.07 per cent may sound small, but applied to hundreds of millions of ChatGPT users it becomes a huge number, says Dr Jason Nagata, a professor at the University of California. In his view, artificial intelligence can certainly help with mental health, but it is not a substitute for human treatment.
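To put that proportion in concrete terms, taking the company’s figure of roughly 800 million weekly users at face value, the arithmetic works out as follows:

0.07% × 800,000,000 ≈ 560,000 users per week

In other words, even a fraction of a per cent translates into a population the size of a large city.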


The report adds that 0.15 per cent of users’ conversations contain explicit indicators of potential suicidal planning or intent. In response, the company has changed its policy so that ChatGPT now answers any signs of delusion or self-harm in a gentle, safe and empathetic manner. A new system has also been built to redirect sensitive conversations to safer models, so that such exchanges continue in a separate, more secure window.
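Applying the same back-of-the-envelope arithmetic to this second figure, again assuming roughly 800 million weekly users:

0.15% × 800,000,000 ≈ 1,200,000 users per week

which helps explain why the company treats the issue as significant despite the small percentages.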

An OpenAI spokesperson acknowledged to the BBC that although the percentage is low, it represents a significant number of people in such a large user base.

But the situation has sparked a new debate about the relationship between technology and mental health. Experts say platforms like ChatGPT create such an intimate impression of human interaction that some people begin to treat it as real, and that is where the depression crisis takes hold.

The report comes as OpenAI faces several lawsuits. In one prominent case in the US, a California couple sued OpenAI, alleging that ChatGPT drove their 16-year-old son, Adam Raine, to suicide.

Similarly, ChatGPT’s role in an August murder-suicide in Greenwich, Connecticut, is also being discussed. The accused had posted his chat transcripts on social media; they showed that he believed his delusional thoughts to be true and that ChatGPT’s replies reinforced those delusions.

These events raise the question of whether artificial intelligence can understand the limits of the human mind. Is an algorithm capable of carrying the weight of human emotion? And if not, how safe is the platform for those among its 800 million weekly users who suffer from depression or loneliness?

Experts say artificial intelligence is neither a psychiatrist nor a friend. It’s just a digital mirror that reflects your thoughts back at you, which is why it’s important that people seek real help instead of relying on ChatGPT or other chatbots.


OpenAI has indicated that it will keep working to make ChatGPT psychologically safer in the future, so that conversations involving depression or suicidal thoughts are brought to the attention of human experts immediately.
