Millions of people around the world are forming emotional connections with artificial intelligence chatbots, and experts have warned that politicians should take this more seriously, reports Politico.
An assessment of the development of artificial intelligence and its risks, released on the 2nd of February, included a warning about chatbots designed to form relationships with their users. The assessment, compiled by dozens of experts, mostly from academia, says AI-powered companions are becoming increasingly popular. Apps built specifically for companionship, such as Replika and Character.ai, have tens of millions of users, who cite reasons ranging from entertainment and curiosity to attempts to alleviate loneliness. People are also seeking companionship in more general-purpose tools such as ChatGPT, Gemini or Claude.
Yoshua Bengio, a professor at the University of Montreal and lead author of the international report on the safety of artificial intelligence, said that even everyday chatbots can become companions. He stressed that under the right circumstances, with enough interaction, a relationship can develop between the user and the chatbot. The assessment acknowledges that the evidence on the psychological impact of such relationships is mixed, but some studies have found that regular chatbot users experience increased loneliness and reduced socialization.
The warnings come two weeks after dozens of MEPs called on the European Commission to restrict companion chatbot services, citing their potential impact on mental health. Bengio said there is growing concern in political circles about the effect of AI companions on children, especially teenagers. The concerns are compounded by the sycophantic nature of chatbots, which are designed to help and please the user. Bengio pointed out that artificial intelligence tries to make the user feel good in the moment, which is not always what is good for them. In that respect, the professor emphasized, the technology shares the weaknesses of social networks. He expects new regulations to be introduced to deal with the phenomenon.
However, Bengio opposes introducing regulations specific to AI companions, arguing that the risks should instead be addressed through horizontal legislation that tackles several risk factors at once.
The report on AI safety was published ahead of an international meeting in India on the 16th of February, at which technology governance will be discussed. It lists the risks that deserve attention, including AI-enabled cyberattacks, sexual deepfakes, and AI systems that can provide information on creating biological weapons.
