OpenAI Alarmed When Its Shiny New AI Model Isn’t as Smart as It Was Supposed to Be


“The AGI bubble is bursting a little bit.”

Cooling Off

OpenAI’s next large language model may not be as powerful as many hoped.

Code-named Orion, the AI model is sorely underperforming behind the scenes, Bloomberg reports, showing less improvement over its predecessor than GPT-4 did over GPT-3. A similar report from The Information this week indicated that some OpenAI researchers believed that in certain areas like coding, there were no improvements at all.

And according to Bloomberg, OpenAI isn’t the only AI outfit struggling with diminishing returns. Google’s next iteration of its Gemini model is also falling short of internal expectations, while the timeline for Anthropic’s release of its much-hyped Claude 3.5 Opus is up in the air.

These industry-wide struggles may be a sign that the current paradigm of improving AI models through what’s known as “scaling” is hitting a brick wall. If models remain enormously costly to develop without delivering significant leaps in performance toward artificial general intelligence, economic trouble could follow.

“The AGI bubble is bursting a little bit,” Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, told Bloomberg, adding that “different training approaches” may be needed to approach anything like human levels of intelligence and versatility.

Gluttonous Tech

The approach that has yielded gains in generative AI so far is scaling: to make models more powerful, you make them bigger. That means adding more processing power — AI chips, like those made by Nvidia — and injecting more training data, which has largely been scraped from the web at little cost.
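For a sense of why returns diminish, the industry’s empirical “scaling laws” are instructive. One widely cited fit comes from DeepMind’s 2022 Chinchilla paper (Hoffmann et al.), not from this reporting, and models a language model’s training loss L in terms of parameter count N and training tokens D:

L(N, D) = E + A/N^α + B/D^β

with fitted constants E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, α ≈ 0.34, β ≈ 0.28. Because both improvement terms shrink polynomially, each additional increment of quality demands a disproportionately larger jump in parameters and data, which is exactly the cost dynamic the companies above are running into.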

But as these models get larger and more powerful, they also get hungrier. All that energy isn’t cheap — Microsoft is looking to reboot entire nuclear power plants to support its AI data centers, for example — and free training data is drying up. To find new brain food for their AIs, tech companies are turning to synthetic, computer-generated data. Yet they still “struggle to get unique, high-quality datasets without human guidance, especially when it comes to language,” Lila Tretikov, head of AI strategy at New Enterprise Associates, told Bloomberg.

To give an idea of those expenses: in a podcast episode quoted by Bloomberg, Anthropic CEO Dario Amodei said that a cutting-edge AI model currently costs around $100 million to build, and estimated that by 2027, such models could cost well over $10 billion apiece.

Best Days Behind

This year, Anthropic updated its Claude models but notably skipped Opus, and references to a near-future release date for it have been scrubbed from the company’s website. As with OpenAI, researchers at Anthropic reportedly observed only marginal improvements in Opus given its size and the cost of building and running it, according to a Bloomberg source.

Similarly, Google’s Gemini software is falling short of its goals, per Bloomberg, and the company has released few major improvements to its large language model in the meantime.

These aren’t insurmountable challenges, to be clear. But it’s increasingly sounding like the AI industry may not enjoy the same pace of advancements as it has in the past decade.

“We got very excited for a brief period of very fast progress,” Noah Giansiracusa, an associate professor of mathematics at Bentley University in Massachusetts, told Bloomberg. “That just wasn’t sustainable.”

More on AI: AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall
