Chatbot’s antisemitic remarks prompt EU to consider tougher rules

Antisemitic remarks made by Grok, a chatbot from billionaire Elon Musk’s startup xAI, are prompting European lawmakers to take a tougher approach to regulating artificial intelligence, Politico reports.
The chatbot from Musk’s company sparked a backlash after it answered questions about how best to deal with alleged discrimination against white people with posts praising Nazi Germany’s leader Adolf Hitler. European Union lawmakers have seized on the episode to demand specific rules for the most sophisticated artificial intelligence models, such as Grok.

The microblogging platform X has also come under renewed scrutiny from the EU, which is already investigating it for suspected breaches of the bloc’s media laws. Italian MEP Brando Benifei said the incident with Grok highlights the real risks that EU legislation on artificial intelligence is designed to prevent.
xAI quickly deleted the “inappropriate posts” and announced on the 9th of July that it had taken action to prevent Grok from posting hate speech, but did not elaborate on what exactly it had done.
The EU guidelines are a voluntary compliance tool for developers of large-scale AI models, such as the chatbots from OpenAI, Google and xAI.

Lawmakers and citizens’ groups are concerned that the guidelines will be weak and vague.

The EU began work on the AI guidelines in late 2022, when ChatGPT was released to the public. The provisions of the EU’s AI law that apply to companies like xAI are due to take effect on the 2nd of August. They include obligations to disclose the data used to train AI models, to comply with copyright law and to address various systemic risks. However, much of this depends on voluntary compliance with guidelines that the European Commission has been drafting for the past nine months.
On the 9th of July, a group of five influential lawmakers expressed concern that crucial points, such as transparency requirements, had been removed from the guidelines at the last minute. The lawmakers argue that Grok’s behaviour is evidence of the need for stricter rules, a position opposed by technology companies and the US government.
One of the removed points relates directly to the incident with the xAI chatbot. The latest revisions to the guidelines treat the handling of illegal content as something AI companies should consider rather than something they must do, which has drawn a sharp backlash. Benifei said the industry needs clear guidelines to ensure that AI models are used responsibly and do not violate democratic values and fundamental rights.
Although the EU does not have a common law defining what constitutes illegal content, many countries have criminalized hate speech and, in particular, antisemitic expressions.