
Former Google News Director Admits Big Tech Is Killing Journalism


Between many massive rounds of industry layoffs, a changing technological landscape, and increasingly eroding public trust in the news, the future of journalism is muddier than ever. And according to one former Big Tech executive? Big Tech’s current project, generative AI, might put a nail in the media’s coffin — or at the very least, fundamentally change what journalism is and how it’s consumed.

In an opinion column for The Washington Post, ex-Googler Jim Albrecht — who until 2023 was senior director of news ecosystem products at the search giant and major AI player — argues that AI is the real “wolf” that threatens the business of journalism. Ever since the internet rose to prominence, the media industry has bickered with tech giants over things like search results and compensation models; but with AI now able to both paraphrase and deliver news, Albrecht argues, those qualms may already be moot.

“To me, watching publishers bicker about payment for search results while [large language models (LLMs)] advanced at a silent, frenetic pace,” Albrecht recalled, “was like watching people squabble about the floral arrangements at an outdoor wedding while the largest storm cloud you can imagine moves silently closer.”

And now, due to this “new technology,” the ex-Googler continues, we’re entering a digital world in which platforms “might not need to link to news sites at all — they can just take the news, have a robot rewrite it and publish it in their own products.”

In other words, if chatbots become the consumer’s go-to means of finding and interpreting news, quarrels over compensation and algorithmic changes won’t matter, because readers may not feel inclined to visit a publisher’s website at all. The media industry’s current revenue model wouldn’t just be flipped on its head; it would be shattered. In an AI-powered news ecosystem, who becomes the arbiter of information?

Albrecht isn’t the first to seriously consider this future. In its recently filed lawsuit against OpenAI, The New York Times doesn’t just argue that OpenAI has infringed on its copyrighted work. It argues that OpenAI has used the NYT’s journalism to build a competing product — indeed, as Albrecht suggests, a product that can swallow up the work of the NYT’s human journalists and regurgitate it for readers.

Whatever the outcome, the lawsuit stands to be a landmark case. But it’ll be a while until we have a verdict, and in the meantime it’s worth reflecting on some of the ironies that Big Tech faces in this growing media dilemma.

For years, platforms like Google and Facebook have defended themselves against accusations of revenue theft on the grounds that they were only vehicles for news delivery — the humble envoy carrying useful information from the desks of journalists to the eyeballs of readers. Now, though, as apps like ChatGPT, Microsoft’s Bing Search, and Google’s Search Generative Experience enter the collective digital fold, Big Tech platforms are suddenly finding themselves both the creators of news content and the voice through which it’s heard.

That’s a very different role, and one with its own set of challenges, ethical questions, and consequences. And though AI, in its current form, is nowhere near advanced enough to reliably deliver accurate news and information, Big Tech companies have made it very clear that they’re determined to take on the role of news arbiter and creator nonetheless. (Of course, as this happens, many corners of the media industry are likely to be crushed or permanently altered under platforms’ giveth-and-taketh thumbs.)

In one particularly compelling example of what the news industry could soon look like, Albrecht, in his column, sketches an imagined information world in which various AI agents communicate with one another and, ultimately, with the consumer. In this imagined day-to-day digital life, individual newspapers and other news outlets have their own chatbots; these chatbots communicate with your personal AI assistant, which, per Albrecht, will “brief you on the news, your day, your emails; respond for you; answer your questions; help with your work.” This assistant will be hyper-personalized, as will the news you consume — an already tangible element of the digital news economy that has come with its own set of risks and dangers.

And yet, in this speculative future, the full role of journalists remains unclear.

Journalism and journalists are as necessary as they’ve ever been. Plus, though ChatGPT might be able to paraphrase news right now, it can’t actually do the work of journalism — like speaking to sources, for example. The real question at the heart of the media’s growing AI problem, then, is not whether journalists are necessary, but whether the generative AI-driven ambitions of Big Tech companies will allow for a functioning — not even asking for healthy here, just baseline functioning! — media economy. (Never mind that, in reality, Google’s news algorithms continue to crumble under the onslaught of often garbage-quality AI text and imagery online.)

Anyway. Could the proliferation of AI-generated content backfire spectacularly on the AI companies themselves? Maybe. Could ongoing and future legal battles carve out new revenue models for journalistic institutions in the AI age? Hopefully!

But as it stands, the media’s AI-entangled future remains as uncertain as its present. And though the industry has weathered several technological shifts in recent decades, Big Tech’s latest project will likely have a more direct impact on how news and information are created and shared than anything before it — and what remains of the media industry when the dust settles is the multibillion-dollar question.

More on AI and journalism: Google Secretly Showing Newspapers an AI-Powered News Generator
