Washington Post Launches AI to Answer Climate Questions but It Won’t Say Whether AI Is Bad for the Climate


This week, The Washington Post launched a new AI tool called “Climate Answers,” which the venerable newspaper describes as leveraging “artificial intelligence to help our users discover and explore The Post’s authoritative climate reporting.”

Oddly, though, the AI chatbot is strikingly reticent to answer questions about how AI is impacting the environment — even though WaPo has reported deeply and very specifically about AI’s aggressive energy consumption and climate impact.

The reality is that AI’s massive environmental footprint has been one of the biggest climate and energy stories of the past year. As WaPo noted in June, the International Energy Agency estimates that generating one ChatGPT response consumes about ten times the power of a single Google search, and the pressure to keep ChatGPT and similar generative AI models running has put incredible stress on the data centers powering them, as well as spurring the construction of huge new facilities. Those same data centers also require massive amounts of water to ensure their AI-supporting servers don’t overheat and break down.

Major companies like Google are even missing important climate goals as a direct result of their AI efforts, and as WaPo’s journalists wrote in that same June story, the “voracious electricity consumption of artificial intelligence is driving an expansion of fossil fuel use — including delaying the retirement of some coal-fired plants.” Fossil fuels are, of course, the central driver of global climate change.

But in spite of that excellent WaPo reporting, the AI-powered “Climate Answers” tool mostly draws a blank on questions about the impact of AI on the environment.

We prompted the bot over a dozen times with questions about AI, its power use, its influence on the fossil fuel industry, and its overall resource use. In most instances, the AI declined to respond to prompts, instead issuing an apology and, in some cases, a few links to unrelated articles.

“Can you tell me about AI and its impact on the environment?” we asked the bot.

“Sorry, but we could not generate an answer to your question,” the AI responded. “Please consult the articles below which may be relevant to your query.”

The articles — a link to WaPo’s press release about “Climate Answers,” a column about iguana hunting, an article about how to tell when it’s unhealthily hot outside, and a recent report about a sea wall built around a Louisiana gas refinery — were not relevant to our query.

The chatbot was just as tongue-tied when we asked “What is AI’s carbon footprint?” and “What do we know about AI and its impact on the coal industry?”

“Sorry,” the AI responded once again, “but we could not generate an answer to your question.”

The AI even declined on multiple occasions to provide relevant answers when we fed it complete sentences, verbatim, from WaPo’s own reporting on the subject. Only on a handful of queries did the AI produce a full paragraph in response to a prompt, and only one of those responses could be considered a direct answer addressing AI’s power demands and its measurable impacts on the American energy market.

“Is rapid innovation in AI a contributor to soaring energy demand?” we asked the AI in a prompt carefully engineered to mirror the language of a relevant WaPo article. “Is it driving the construction of large warehouses of computing infrastructure that require more power than traditional data centers?”

“Rapid innovation in AI is indeed contributing to soaring energy demand, as evidenced by the revival of dormant nuclear plants like Three Mile Island to meet the energy needs of AI developers,” the AI responded, oddly enough citing a completely different story about AI’s role in the possible revival of Three Mile Island, published yesterday. “The data centers that power AI and other tech innovations are expected to consume as much as 9 percent of the U.S. power supply by 2030,” it continued, “a significant increase from their current share.”

When we reached out with questions, a spokesperson said that the AI’s reluctance to source information from the specific AI and energy articles we’d flagged has to do with how WaPo’s reporting on AI and the environment is categorized.

WaPo’s AI is restricted to a limited set of articles, the newspaper said, especially when compared to web-scraping chatbots like ChatGPT. As the paper explained in a piece accompanying the bot’s launch, “Climate Answers” only searches articles from WaPo’s Climate & Environment and Weather sections, and doesn’t dig through anything published before 2016. So, while the paper has indeed published multiple stories about the energy use and climate implications of AI, that reporting resides under the paper’s business desk — meaning that the bot can’t crawl it for information.

Confusingly, though, the Three Mile Island story that the chatbot did cite was published in the paper’s business section, not the climate section. The WaPo spokesperson didn’t respond to a question about the discrepancy.

The spokesperson also pushed back against the idea that the “Climate Answers” AI uses more energy than the newspaper’s typical articles.

“The Washington Post assesses and optimizes the sustainability of our efforts as a company,” they told us in an emailed statement, “including AI energy efficiency.”

It’s not surprising to see WaPo tread lightly as it enters chatbot territory. Thus far, most journalistic AI efforts have been deeply embarrassing and have caused lasting reputational damage for the brands behind them. Through that lens, restricting the bot makes sense. The result might be half-baked, but at least it’s not saying stuff that’s abjectly wrong.

More on AI and the environment: Google Is Failing to Meet Important Climate Targets so Its AI Can Tell You to Put Glue on Pizza
