
Google’s Gemini 1.5 Pro dethrones GPT-4o


Google’s experimental Gemini 1.5 Pro model has surpassed OpenAI’s GPT-4o in generative AI benchmarks.

Gemini 1.5 Pro quickly caught the attention of the AI community on social media as reports emerged of its superior performance compared to its competitors.

Until now, OpenAI’s GPT-4o and Anthropic’s Claude-3 have dominated the landscape. However, the latest version of Gemini 1.5 Pro appears to have taken the lead.

One of the most widely recognised benchmarks in the AI community is the LMSYS Chatbot Arena, which ranks models through crowdsourced head-to-head comparisons of their responses and assigns each an Elo-style rating. On this leaderboard, GPT-4o achieved a score of 1,286, while Claude-3 secured a commendable 1,271. A previous iteration of Gemini 1.5 Pro had scored 1,261.
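These scores are aggregated from large volumes of crowdsourced votes. As a rough illustration of how Elo-style ratings behave, the Python sketch below nudges two models’ ratings after each head-to-head result; the starting rating, K-factor, and matchups are illustrative assumptions, not LMSYS’s actual parameters or data (the Arena’s published methodology has also evolved beyond plain Elo).

```python
from collections import defaultdict

K = 32  # illustrative K-factor, not LMSYS's actual setting

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift ratings after one head-to-head vote: winner gains what the loser loses."""
    exp_win = expected_score(ratings[winner], ratings[loser])
    delta = K * (1.0 - exp_win)  # bigger gain for an upset win
    ratings[winner] += delta
    ratings[loser] -= delta

# Hypothetical votes; every model starts at a baseline rating of 1,000.
ratings = defaultdict(lambda: 1000.0)
votes = [
    ("gemini-1.5-pro", "gpt-4o"),
    ("gpt-4o", "claude-3"),
    ("gemini-1.5-pro", "claude-3"),
]
for winner, loser in votes:
    update(ratings, winner, loser)

print(dict(ratings))
```

With ratings built from hundreds of thousands of such votes, even a gap of a few points on the leaderboard reflects a consistent preference across many comparisons.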

However, the experimental version of Gemini 1.5 Pro (designated as Gemini 1.5 Pro 0801) has surpassed them all with an impressive score of 1,300. This significant improvement suggests that Google’s latest model may possess greater overall capabilities than its competitors.

It’s worth noting that while benchmarks provide valuable insights into an AI model’s performance, they may not always accurately represent the full spectrum of its abilities or limitations in real-world applications.

The release of this experimental version raises questions about Google’s plans for Gemini 1.5 Pro; it remains unclear whether this build will become the default.

Although Gemini 1.5 Pro is currently available to try, its experimental label suggests that Google may still make adjustments, or even withdraw the model, for safety or alignment reasons.

This development marks a significant milestone in the ongoing race for AI supremacy among tech giants. Google’s ability to surpass OpenAI and Anthropic in benchmark scores demonstrates the rapid pace of innovation in the field and the intense competition driving these advancements.

As the AI landscape continues to evolve, it will be interesting to see how OpenAI and Anthropic respond to this challenge from Google. Will they be able to reclaim their positions at the top of the leaderboard, or has Google established a new standard for generative AI performance?

(Photo by Yuliya Strizhkina)

See also: Meta’s AI strategy: Building for tomorrow, not immediate profits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


