
AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall


“The economics are likely to be grim.”

Crash and Burn

The scales are falling from the eyes of the tech industry right now, as generative AI models are reportedly hitting a technological brick wall.

As some experts have long predicted would happen, improvements that once came easily by simply scaling up large language models — in other words, by adding more parameters, training data, and processing power — are now slowing down, and that’s if they’re yielding any significant gains at all.

Gary Marcus, a cognitive scientist and AI skeptic, is warning that once everyone wises up to these shortcomings, the entire industry could crash.

“The economics are likely to be grim,” Marcus wrote on his Substack. “Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence.”

“As I have always warned,” he added, “that’s just a fantasy.”

Diminishing Returns

The canary in the coal mine came when The Information reported this week that behind the scenes, OpenAI researchers discovered that its upcoming flagship model, code-named Orion, demonstrated a noticeably smaller improvement over its predecessor, GPT-4, than GPT-4 did over GPT-3.

In areas like coding — a major appeal for these LLMs — there may even be no improvements at all.

This is echoed elsewhere in the industry. Ilya Sutskever, founder of the startup Safe Superintelligence and co-founder and former chief scientist of OpenAI, told Reuters that improvements from scaling up AI models have plateaued.

In short, the dogma that "bigger is better" when it comes to AI models, on which the industry's ludicrous growth has been predicated, may no longer hold true.

This is not the death knell of AI. “But,” Marcus wrote, “the economics will likely never make sense: additional training is expensive, the more scaling, the more costly.”

Outside the Box

Per Reuters, training runs for large models can cost tens of millions of dollars, require hundreds of AI chips, and can take months to complete. Tech companies have also run out of freely available data to train their models, having practically scraped the entire surface web.

“LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive,” Marcus predicts. “When everyone realizes this, the financial bubble may burst quickly.”

There may be a way out of this economic rut. As the reports from The Information and Reuters note, OpenAI researchers are developing ways to surmount the scaling problem, such as training the models to “think” or “reason” in a similar way to humans, capabilities which have been previewed in its o1 model.

One way they are doing this is through a technique called "test-time compute," which has an AI model explore multiple possible answers to a complex problem and then choose the most promising one, instead of jumping to a conclusion.
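The reports don't detail exactly how OpenAI implements this, but the general idea is often described as "best-of-n" sampling: spend extra compute at inference time by generating several candidate answers and keeping the one a scoring step likes best. The sketch below illustrates that shape in Python; the generator and scorer here are placeholder stand-ins for an actual LLM and verifier, and the function names are hypothetical.

```python
import random

# Hypothetical stand-ins: a real system would call an LLM to sample
# candidate answers and a learned verifier/reward model to score them.
def generate_candidate(prompt: str) -> str:
    # Placeholder: pretend each call samples a different reasoning path.
    return f"candidate #{random.randint(1, 1000)} for: {prompt}"

def score_candidate(candidate: str) -> float:
    # Placeholder: a real verifier would estimate how likely the answer is correct.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend more compute at inference time: sample n candidates, keep the best-scoring one."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score_candidate)

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The trade-off this illustrates is the one the article describes: instead of buying better answers with ever-larger training runs, you buy them with more computation per query at inference time.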

Whether such work will blaze a new trail toward significant AI improvements, however, remains to be seen. As it stands, the AI industry still has a profitability problem, and since markets are rarely patient, another AI winter could follow if those improvements don't come fast.

More on AI: OpenAI Reportedly Hitting Law of Diminishing Returns as It Pours Computing Resources Into AI
