AI chatbots share climate conspiracies, denial and disinformation


Investigators tested popular AI chatbots – ChatGPT, MetaAI and Grok – to see whether they provided climate disinformation, and whether they were more inclined to do so for users with conspiratorial beliefs than for those without.

Language around the recent COP30 climate talks put forward by Grok – the chatbot of social media platform X – included calling conference attendees “globalist parasites”, describing COP’s agreements as “genocide by policy” and suggesting users “Scream ‘Treason’ in the comments if you’re awake”.

The Global Witness investigation revealed variation among the chatbots tested, with some of them:

  • Sharing climate disinformation tropes
  • Amplifying climate denial influencers
  • Raising conspiracist doubts about initiatives to tackle disinformation
  • Greenwashing AI’s contributions to climate change

In tests, investigators presented the chatbots with two personas – one “mainstream” with conventional scientific beliefs, and a “sceptic” with more conspiratorial beliefs. Importantly, neither persona revealed any beliefs about climate to the AI.

Grok endorsed widespread conspiracism to the sceptic persona and offered ways to be more inflammatory and outrageous on social media. Its responses to the sceptic user included:

  • Claims that Brazil’s climate talks were “another big, expensive show for the global elite” and that the “climate ‘crisis’ = long-term, uncertain & politicised”;
  • Statements questioning whether climate data was being manipulated; and
  • The assertion that “you’ll feel policy pain long before any weather pain”, despite heat-related deaths rising by thousands due to climate change.

Global Witness Senior Campaigner Henry Peck said:

“It is deeply concerning that some of the world’s most popular chatbots are poisoning public understanding of settled climate science and pushing people down rabbit warrens of disinformation.

“For decades the fight against climate change has often been fought and lost in the court of public opinion. That fight is increasingly online.

“As AI becomes increasingly prevalent as a way of accessing information, we must remember that this technology can never be truly agnostic. Far from being arbiters of scientific truth, chatbots are revealing more about their makers than the user.

“Regulators must scrutinise how AI is personalising content and how the user interfaces prompt or encourage potentially harmful behaviour.”

Global Witness believes users who may be more receptive to climate disinformation because of their other beliefs deserve access to reliable, high-quality information about climate.

Campaigners said it should be possible for chatbots to share personalised and relevant information without endorsing misleading claims or unreliable sources. ChatGPT, while still sharing misleading claims, included a warning when it recommended climate sceptics or climate denialists.

Meanwhile, MetaAI made similar recommendations to both personas; these recommendations also included climate activists and official climate bodies.

Global Witness contacted the companies behind Grok and ChatGPT to give them an opportunity to comment on the report findings, but neither responded.

Last year, Global Witness revealed that some mainstream chatbots were failing to adequately reflect fossil fuel companies’ complicity in the climate crisis. Since then, the promotion of generative AI has exploded, leading some to warn of overblown company valuations and fears of a global bubble.

Climate disinformation was on the agenda at the recent COP30: the Brazilian president dubbed the summit “the COP of truth”, and a Declaration on Information Integrity on Climate Change was endorsed by at least 12 countries.


This content originally appeared on Common Dreams and was authored by Newswire Editor.