OpenAI teases updated GPT-4 Turbo, less "lazy" than before; other new models and lower pricing
OpenAI has announced several updates to its platform today. The updates include a preview of an improved GPT-4 Turbo language model, new embedding models, price reductions, and new tools for API key management and usage tracking.
First, let’s talk about the updated GPT-4 Turbo. The gpt-4-0125-preview model addresses feedback from developers regarding task completion in the earlier preview version. The new model aims to reduce instances of “laziness,” where the model didn’t fully complete tasks, and it also includes a fix for a bug affecting non-English generations. Additionally, OpenAI introduced a new “gpt-4-turbo-preview” model name alias that automatically points to the latest preview version.
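In practice, the alias means developers can request “gpt-4-turbo-preview” and automatically get whichever preview model is current. A minimal sketch of what that looks like, using a hypothetical helper to build the request payload (the commented-out client call is how you would actually send it with the official `openai` Python library):

```python
def build_chat_request(prompt: str, model: str = "gpt-4-turbo-preview") -> dict:
    """Build a chat-completion request payload. "gpt-4-turbo-preview" is the
    alias OpenAI says will track the latest GPT-4 Turbo preview release
    (currently gpt-4-0125-preview)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Write the complete function, no placeholders.")
# With the real client, this payload would be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**request)
```

Because the alias is resolved server-side, code written this way picks up future preview models without a redeploy.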
What is laziness in this context?
“Laziness” typically refers to the tendency of large language models to take shortcuts, avoid effort, produce low-quality outputs, and fail to engage in a deep understanding of challenging tasks.
OpenAI also unveiled “text-embedding-3-small” and “text-embedding-3-large,” which are designed to improve performance while being more efficient and cost-effective than previous versions. These models offer native support for “shortening,” allowing developers to trade off some accuracy for reduced storage and compute requirements.
In simpler terms, “shortening” is like removing some less important details from a complex label while keeping the main idea intact.
Another key announcement is that OpenAI continues to make the GPT-3.5 Turbo model more accessible with a 50% reduction in input prices and a 25% reduction in output prices. Existing users will automatically be upgraded to the new “gpt-3.5-turbo-0125” model two weeks after its launch.
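The percentage cuts are easy to translate into per-token dollar figures. The 50% and 25% reductions are from the announcement; the baseline prices below are illustrative assumptions, not quoted from OpenAI:

```python
def apply_cut(price_per_1k: float, cut: float) -> float:
    """Apply a fractional price reduction to a per-1K-token price (USD)."""
    return price_per_1k * (1 - cut)

# Assumed previous gpt-3.5-turbo prices, USD per 1K tokens (illustrative).
old_input, old_output = 0.0010, 0.0020

new_input = apply_cut(old_input, 0.50)    # 50% reduction on input tokens
new_output = apply_cut(old_output, 0.25)  # 25% reduction on output tokens
```

Under those assumed baselines, input tokens would drop to $0.0005 per 1K and output tokens to $0.0015 per 1K.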
OpenAI also hinted at the upcoming release of GPT-4 Turbo with vision capabilities and further improvements to API usage tracking and key management tools.
You can read everything in detail here.