OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok are pushing Russian state propaganda from sanctioned entities, including citations of Russian state media, sites tied to Russian intelligence, and sites promoting pro-Kremlin narratives, when asked about the war against Ukraine, according to a new report.
Researchers from the Institute for Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids, searches for real-time information that return few results from legitimate sources, to promote false and misleading information. Almost one-fifth of responses to questions about Russia's illegal war in Ukraine, across the four chatbots tested, cited Russian state-attributed sources, according to the ISD research.
"It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU," said Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union, according to OpenAI data.
Read the complete article at WIRED.