Google has unveiled its newest artificial intelligence model, Gemini, boasting of its superior performance across various benchmarks compared to OpenAI’s GPT-4. While the initial announcement attracted widespread interest, closer examination reveals a complex landscape of technical capabilities, marketing strategies, and ongoing competition in the rapidly evolving AI field.
On standard benchmarks testing capabilities like high school physics and professional law, Gemini Ultra edges out GPT-4 by only a few percentage points. In other words, Google's top AI model has made only narrow improvements on something OpenAI completed work on at least a year ago.
Moreover, the video showcasing Gemini's seemingly impressive abilities, like tracking sleight-of-hand magic tricks or interpreting drawings as they were made, was heavily edited and did not reflect real-time performance. Access to the most powerful version, Gemini Ultra, also remains limited.
In additional testing, running similar prompts through ChatGPT's GPT-4 produced results very similar to those Google presented in the video. The results were subpar, however, when the same prompts were replicated in Bing Chat's Precise Mode, which also uses GPT-4.
It's still hard to say how much better Gemini Ultra will actually be. One thing is clear, though: Google's announcement emphasized its vast resources and deployment network, potentially overshadowing the focus on Gemini's capabilities.
Even if Gemini Ultra is released in early January, as Google has suggested, it might not stay the top model for long. In the time it has taken Google to catch up to OpenAI, the nimbler player has had almost a year to work on its next AI model, GPT-5.
Overall, we believe Gemini is a strong model, but so is GPT-4.