Gemini Advanced disappoints users, still lagging behind GPT-4

2 min. read



Google’s recently launched language model, Gemini Advanced, has drawn mixed reactions from early users: some praise its emotional reasoning and response style, but most express disappointment with its logical consistency and task execution compared to competitors like OpenAI’s GPT-4.

Here are the most commonly reported limitations and shortcomings:

  • Logical inconsistencies: Users reported encountering nonsensical statements and misinterpretations of information, raising concerns about the model’s logical reasoning capabilities.
  • Image interpretation issues: When presented with images, Gemini Advanced frequently provided inaccurate or irrelevant responses, indicating limitations in visual understanding.
  • Unfulfilled promises: Features like MIDI file creation, initially advertised as a differentiator, remain unavailable, leading to user dissatisfaction.
  • Complex task struggles: Users found the model to struggle with complex reasoning tasks and code generation, falling short of expectations.
  • Limited functionality: Unlike competitors, Gemini Advanced reportedly lacks features like image processing, restricting its applicability in certain domains.
(Reddit comment by u/BrightPLong in the r/Bard discussion)

These negatives don’t mean Gemini Advanced is a failed product; quite a few users loved it. Several users highlighted aspects of Gemini Advanced that resonated well with them:

  • Emotionally engaging responses: Some reviewers noted the model’s ability to generate human-like emotional responses, finding them more nuanced and natural than GPT-4’s outputs.
  • Thoroughness and style: In certain instances, users appreciated the model’s detailed and well-written responses, indicating potential for specific use cases.
  • Development potential: Acknowledging its nascent stage, some reviewers expressed hope for future improvements based on user feedback and ongoing development.

Adding to the mixed sentiment, users also raised concerns about the model’s pricing, with some suggesting its current performance doesn’t justify the cost.

We compared ChatGPT Plus, Copilot Pro, and Gemini Advanced paid subscription plans.

While direct comparisons between language models can be difficult given their differing strengths and weaknesses, user reviews suggest a gap between Gemini Advanced and established players like GPT-4 in overall performance and reliability. It’s worth remembering that both models are under continuous development, and future updates could address these shortcomings.

Here is the post.
