
Cory Doctorow Blasts AI as a Fraud-Filled Bubble


Renowned journalist and science fiction author Cory Doctorow is convinced that the AI industry is doomed to drop off a cliff.

“Of course AI is a bubble,” he wrote in a recent piece for sci-fi magazine Locus. “It has all the hallmarks of a classic tech bubble.”

Doctorow likens the AI bubble to the dotcom crash of the early 2000s, when Silicon Valley firms started dropping like flies as venture capital dried up. It’s a compelling parallel to the current AI landscape, marked by sky-high expectations and even loftier promises that stand in stark contrast to reality.

But it’s not all doom and gloom. Doctorow believes the situation isn’t a total write-off, and that once the dust settles, some residual benefits could drive meaningful technological progress in the future.

“Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind,” Doctorow argued.

The dotcom bubble, for instance, lured millions of young people into the tech sector, leaving behind an “army of technologists” once it popped.

Similarly, the AI bubble may soon burst — but according to Doctorow, it may leave something behind.

“AI is a bubble, and it’s full of fraud, but that doesn’t automatically mean there’ll be nothing of value left behind when the bubble bursts,” he wrote.

That sets it apart from other recent bubbles, per the “Little Brother” author, such as the cryptocurrency and NFT craze, which “left behind very little reusable residue.”

“The fraud of the cryptocurrency bubble was far more pervasive than the fraud in the dotcom bubble, so much so that without the fraud, there’s almost nothing left,” Doctorow argued.

He also pointed out the immense costs involved in keeping large language models like OpenAI’s ChatGPT running, requiring — for now — an influx of capital under the assumption that the tech will eventually start to pay for itself.

Then there’s the pervasive tendency of AIs to hallucinate facts, which only further undermines their utility.

Nonetheless, Doctorow posits that AI could still prove useful in certain hands, such as a radiologist using it to scan an X-ray for cancerous growths or an accountant drafting a tax return, as long as those professionals are leveraging the tech to do what they already did, only better.

Betting on AI to replace human workers outright, though, is in his view either ghoulish or misplaced.

Case in point: Cruise, which recently pulled all of its self-driving cars from public streets across the country after an incident in San Francisco in which a woman was dragged along the ground. Even while its fleet was still operating, the company needed more supervisors, each more expensive, than a conventional taxi service would to keep the cars running.

“If Cruise is a bellwether for the future of the AI regulatory environment, then the pool of AI applications shrinks to a puddle,” Doctorow wrote.

Fortunately, he argues, the AI bubble has motivated people to learn about things like “statistical analysis at scale and how to wrangle large amounts of data,” which is technical know-how that could be used to “solve real problems.”

It’s a refreshing and unflinching take on one of the biggest debates in the tech sector right now. If Doctorow is to be believed, current conversations surrounding the safety and ethics of AI are a mere distraction.

Instead, what we really should be asking ourselves is what’ll be left behind after the bubble pops — a question that few are wrestling with as Silicon Valley beats the AI drum.

More on AI: Microsoft’s Stuffing Talking Generative AI Into Your Car
