A new study by the European Broadcasting Union (EBU) shows that ChatGPT, Claude, Gemini and other chatbots fabricate up to 40 percent of their answers and present them as fact. ChatGPT, for example, firmly claims that Pope Francis is still alive.
People ask ChatGPT about election results, ask Claude to summarize the news, or ask Perplexity for background on the conflict in the Middle East: hundreds of millions of people rely on AI chatbots as sources of information every day. ChatGPT alone is used by 800 million people worldwide every week. For many, these digital assistants have already replaced the traditional Google search, writes Deutsche Welle (DW).
But that trust is risky, as a new study by the European Broadcasting Union (EBU) shows. The association of 68 public broadcasters from 56 countries systematically tested the reliability of the most popular artificial intelligence systems.
The result is frightening: ChatGPT, Claude, Gemini and other chatbots fabricate up to 40 percent of their answers and present them as fact.
Hallucinations: Artificial intelligence lies convincingly
The popular chatbot ChatGPT firmly claims that Pope Francis is still alive. Microsoft’s Copilot, which is integrated into the Word and Excel office programs, does not know that Sweden is in NATO. And Google’s Gemini considers Donald Trump’s re-election “possible,” even though it took place long ago.
“The systems sound convincing, although they repeatedly claim completely false things,” warns economist Peter Posch from the Technical University of Dortmund. “This makes them particularly dangerous for inexperienced users, as errors are often not immediately apparent.”
This is the phenomenon experts call “hallucination”: the artificial intelligence invents information that sounds coherent but has no basis in fact. It happens especially often with regional events, current news, or when several pieces of information have to be combined.
Threat to democracy
But what does this mean for a society in which more and more people get their information from chatbots? The consequences are already noticeable: false information spreads rapidly on social networks as users share AI-generated “facts” without verification. Pupils and students include fabricated information in their assignments. Citizens may base their voting decisions on false claims.
Particularly problematic is that many users are not even aware that chatbots can hallucinate. They assume the technology works objectively and factually, which is a dangerous misconception. The AI systems do warn of potential errors in their terms of use, but who reads those anymore?
Damage to the reputation of the media
Another problem concerns the credibility of established media. Chatbots regularly claim that their fabricated information comes from sources such as the German public broadcasters ARD or ZDF, even though those outlets never reported it, or reported it quite differently. Users lose trust in reputable sources when AI misuses their names to spread false information.
The EBU study tested the chatbots with hundreds of factual questions about historical events, scientific findings and current news. Depending on the topic, the error rate ranged from 15 to as much as 40 percent. None of the AIs tested performed flawlessly.
Why does artificial intelligence make mistakes?
The problem lies in the system itself: chatbots do not really understand what they are saying. They calculate which words are statistically likely to follow one another, based on huge amounts of text, and cannot verify whether the statement those calculations produce is correct. They have no knowledge of facts, only statistical patterns.
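How little such pattern-matching has to do with truth can be illustrated with a deliberately tiny sketch: a toy bigram model in Python that picks each next word purely from observed word pairs in a made-up training text. Production chatbots use neural networks over vastly more data, but the basic principle, predicting a plausible next word with no check against reality, is the same.

```python
import random
from collections import defaultdict

# Toy training text (illustrative only). The model will learn which
# word tends to follow which, and nothing about whether it is true.
corpus = (
    "the pope lives in rome . the pope speaks in rome . "
    "sweden joined nato . sweden is in europe ."
).split()

# Record every observed word pair.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Extend a sentence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # fluent-looking output, but nothing verifies the facts
```

The output reads fluently because it reproduces the patterns of the training text; whether the resulting sentence is factually correct never enters the calculation.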
Technology companies are aware of these shortcomings and are working on solutions: they connect the models to external databases, improve source attribution and retrain their systems. Billions are being invested in development. Yet hallucinations remain a fundamental, unsolved problem of the technology.
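The database approach mentioned above is usually a form of retrieval: look up trusted sources first, then answer only from what was found. The following minimal sketch shows the idea; the two-entry document store and the naive keyword lookup are invented stand-ins for a real search index, not any vendor’s actual pipeline.

```python
# Illustrative document store; real systems index millions of sources.
documents = {
    "nato": "Sweden joined NATO in March 2024.",
    "pope": "Pope Francis died on 21 April 2025.",
}

def answer(question: str) -> str:
    """Answer only from retrieved text; refuse when nothing is found."""
    hits = [text for key, text in documents.items() if key in question.lower()]
    if not hits:
        return "No source found; declining to guess."  # instead of hallucinating
    return " ".join(hits)

print(answer("Is Sweden in NATO?"))              # grounded answer
print(answer("Who won the 1987 chess league?"))  # -> declines
```

The decisive design choice is the refusal branch: a grounded system that finds nothing can say so, whereas a pure language model will produce a fluent guess either way.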
What can users do?
The EBU recommends clear rules for dealing with chatbots: never trust them blindly, always verify important information, and rely on established media rather than artificial intelligence for news and facts. Caution is especially advised with political topics, health issues and financial decisions.
Schools and universities must teach media literacy: how do I recognize AI-generated misinformation, and which sources are reliable? The German government is planning awareness campaigns, but they come late: millions of people have long been using this technology every day.
Until the technology becomes more reliable, one rule applies: chatbots can be useful for creative tasks or as writing aids, but they are not suitable as fact-checkers or news sources. And no one should rely on them one hundred percent.
Anyone who wants to stay informed cannot avoid reputable media with human editors who check sources and weigh claims against evidence. The digital revolution may change many things, but the need for careful research and fact-checking remains.
Source: Deutsche Welle (DW), Vreme


