Artificial intelligence developer OpenAI has released estimates of the number of users who show signs of possible mental health problems, including mania, psychosis or suicidal thoughts, the BBC reports.
The company said that about 0.07% of weekly users show such signs, and that ChatGPT is designed to recognize these conversations and respond appropriately. Although OpenAI stresses that such cases are very rare, critics point out that even this small percentage represents a large group: with about 800 million people using the chatbot every week, 0.07% works out to roughly 560,000 people.
As chatbots come under increasing scrutiny, the company has announced that it has built a network of specialists to advise it on such situations, including psychiatrists, psychologists and family doctors working in 60 countries. With their input, it has crafted a series of responses for the chatbot that encourage users to seek help in the real world.
However, some mental health professionals are troubled by the company's figures. Jason Nagata, a professor at the University of California, said that 0.07% may sound small, but at the scale of hundreds of millions of users it amounts to a significant number of people. He added that artificial intelligence can broaden access to mental health support, but that it is important to be aware of its limitations.
OpenAI also estimates that about 0.15% of chatbot users indicate possible suicidal planning or intent in their conversations.
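For a sense of scale, here is a rough back-of-envelope sketch (in Python) of the weekly headcounts these two percentages imply; the only inputs are the figures quoted in this article, and the roughly 800 million weekly users is the approximate number cited above rather than an exact count.

```python
# Rough back-of-envelope check: the weekly headcounts implied by
# OpenAI's percentages, assuming the ~800 million weekly users
# figure cited in this article.
weekly_users = 800_000_000

rates = {
    "signs of possible mania, psychosis or suicidal thoughts (0.07%)": 0.0007,
    "possible suicidal planning or intent (0.15%)": 0.0015,
}

for label, rate in rates.items():
    print(f"{label}: ~{weekly_users * rate:,.0f} people per week")
# -> roughly 560,000 and 1,200,000 people respectively
```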
The latest updates to the chatbot are designed to respond safely and sensitively to possible signs of mania or delusions, and to pick up on indirect indications of a possible risk of self-harm or suicide.
When asked by the BBC about the number of users showing suicidal tendencies, the company acknowledged that even a small percentage amounts to a large number of people and said it is taking such situations seriously.
The changes come as the artificial intelligence developer faces growing legal scrutiny over its chatbot’s interactions with users. One of the largest lawsuits has been filed in California, accusing OpenAI’s chatbot of encouraging a 16-year-old to take his own life. Another case involves a murder-suicide in Connecticut, where the chatbot allegedly fueled the killer’s delusions.
Robin Feldman, a professor at the University of California, said chatbots can create an illusion of reality. She said OpenAI deserves credit for publishing the statistics and trying to address the problem, but cautioned that a company can display every possible warning on the screen and a person in mental distress may still be unable to heed them.
