OpenAI announces GPT-4o, its newest flagship model with huge improvements


OpenAI introduces GPT-4o

OpenAI has announced GPT-4o, an update to its GPT-4 language model that brings new features to both the free and paid tiers of its ChatGPT platform. While the functionality shown in the live demo was impressive, the voice kept breaking up, and the presenters repeatedly had to talk over ChatGPT to cut it off because it rambled on too long.

GPT-4o gives free users access to capabilities previously limited to paid subscriptions, such as data and code analysis, image processing tools, and real-time language translation. OpenAI also revealed a ChatGPT desktop app.

While the free tier receives a significant upgrade, the paid subscription (ChatGPT Plus) continues to offer advantages. Paid users get a fivefold increase in daily GPT-4o requests, faster processing, and access to future advanced features.

A key feature of GPT-4o is its enhanced live speech functionality. Unlike earlier voice modes, which transcribed speech to text before generating a reply, GPT-4o can process speech input directly, allowing for more natural and interactive conversations with the AI.

When using GPT-4o, ChatGPT Free users will now have access to features such as:

  • Experience GPT-4 level intelligence
  • Get responses from both the model and the web
  • Analyze data and create charts
  • Chat about photos you take
  • Upload files for assistance summarizing, writing or analyzing
  • Discover and use GPTs and the GPT Store
  • Build a more helpful experience with Memory

GPT-4o goes beyond conversation, demonstrating capabilities in problem-solving and analysis. The model can solve math problems step by step, analyze code, interpret graphs, and translate languages in real time. It can also generate a range of vocal styles. In the end, OpenAI did not reveal a search engine. Perhaps in the future?
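For developers, the same model is also exposed through OpenAI's API under the name gpt-4o. As a rough sketch of the real-time translation use case, a minimal request with the official Python SDK might look like the following; the prompt, wording, and structure are our own example assumptions rather than anything shown in the announcement.

```python
# Illustrative sketch only: asks gpt-4o to translate a sentence via OpenAI's API.
# The prompt and example text are assumptions for demonstration purposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a translator. Reply only with the translation."},
        {"role": "user", "content": "Translate to Italian: The weather is lovely today."},
    ],
)

print(response.choices[0].message.content)
```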

Even though the product looked good, the presentation felt unfinished. Perhaps this was Microsoft's way of upstaging Google's I/O?

More here.
