
The Wall Street Journal Is Testing AI-Generated Summaries of Its Articles


Who asked for this?

Wrong Turn

Top newspaper The Wall Street Journal is testing AI-generated summaries of its articles, The Verge reports, in the latest sign of the technology’s inroads into the media industry.

Strikingly, the way the feature is implemented means that most readers of the century-plus-old paper may not even realize they're viewing the product of a large language model.

The summaries appear as a “Key Points” box above the body of the article, featuring several bullet points that provide the main takeaways — but these aren’t labeled as AI-generated up front. Instead, that disclaimer is tucked away in a dropdown menu accessed by clicking an information icon in the box’s top right corner.

“An artificial intelligence tool created this summary, which was based on the text of the article and checked by an editor,” the disclaimer reads. It’s unclear what AI model the organization is using.

Little Experiment

The disclaimer also links to a page detailing how the WSJ and Dow Jones Newswires employ AI models. Along with article summaries, the news organizations use AI in "complex data-driven investigations," for translation, and for creating audio versions of articles.

“We are always assessing new technologies and methods of storytelling to provide more value to our subscribers,” Taneth Evans, head of digital at the WSJ, said in a statement to The Verge. “To that end, we are currently running a series of A/B tests to understand our users’ needs with regards to summarization.” (An A/B test randomly assigns users to one of two versions of a webpage to evaluate which design performs better.)

“The newsroom does this hand-in-hand with colleagues in technology and while speaking with readers at every step of the way,” Evans added. “We also disclose how we leverage artificial intelligence tools to support our journalism whenever it’s used.”

Bot News

Despite massive question marks around the tech's factual reliability, news organizations have continued to adopt AI tools. Some, like CNET, have used AI to write entire error-ridden articles.

Cannier organizations, however, have deployed the tech to serve more ancillary functions.

AI-generated summaries are one such form. USA Today, for example, uses them in a similar “Key Points” format to the WSJ, but does the due diligence of labeling the box as an “AI-assisted summary.”

Meanwhile, The Washington Post hosts its own AI-powered tool to answer questions about climate change, which has shown glaring flaws. Even newspaper of record The New York Times is experimenting with using AI to draft headlines and article summaries.

However limited in scope, these cases should be alarming, because generative AI models chronically hallucinate, or make up facts, posing a threat to the rigor of the editorial process.

More on AI: Journalist Alarmed When Newspaper Replaces Humans With Glitchy AI Bot

Character.AI Promises Changes After Revelations of Pedophile and Suicide Bots on Its Service

Character.AI Promises Changes After Revelations of Pedophile and Suicide Bots on Its Service