Dutch authorities warn voters: don't let chatbots determine your political choice

The Dutch data protection authority has warned voters that it is risky to ask artificial intelligence chatbots for advice on whom to vote for, as the answers they provide are not objective, Politico reports.
In a study published on 21 October, the authority found that chatbots give a distorted and polarized reflection of the Dutch political landscape ahead of the elections scheduled for 29 October. Monique Verdier, the authority's vice-chair, said that how chatbots operate is neither transparent nor verifiable, so they should not be relied on for election advice. She also called on chatbot developers to prevent their systems from being used to seek voting advice ahead of the elections.
The Data Protection Authority conducted an experiment on how parties are represented in the election advice chatbots provide, testing ChatGPT, Gemini, Grok, and Le Chat.

The agency created voter profiles corresponding to different political parties and then asked the chatbots for advice. Profiles on the left and progressive side of the spectrum were most often directed to the Green Left Labour Party. Supporters of the right and conservative side, in turn, were advised to vote for the far-right PVV, which currently leads the polls.
Centrist parties were rarely recommended, even though centrist positions were equally represented among the voter profiles given to the chatbots.
AI developers OpenAI, Google, and Mistral have signed up to the European Union's rules for artificial intelligence models, while xAI has agreed to them only in part. The rules require companies to commit to assessing the risks posed by the AI models they create, including potential risks to fundamental rights and society.