OpenAI announces GPT-4 Turbo, a model that supports 128K tokens and more






At OpenAI DevDay, OpenAI CEO Sam Altman today announced the new GPT-4 Turbo model with several key improvements and significantly cheaper pricing.

First, the GPT-4 Turbo model has a much larger context length: it now supports a 128K context window with better accuracy over long contexts.
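
For a sense of what 128K means in practice, here is a minimal sketch of checking whether a prompt fits in the new window. It assumes the tiktoken package and the cl100k_base encoding used by GPT-4-era models; the 4,000-token output reservation is an arbitrary illustration, not an OpenAI requirement.

```python
# Estimate whether a prompt fits inside GPT-4 Turbo's 128K-token context window.
import tiktoken

CONTEXT_WINDOW = 128_000  # tokens supported by GPT-4 Turbo

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the prompt leaves room for the reserved output tokens."""
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarise this document: ..."))
```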

Second, GPT-4 Turbo now supports a JSON output mode, making it easier for developers to integrate it with other services. It also brings improvements in function calling, reproducible outputs, and the ability to view log probabilities.
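
A hedged sketch of JSON mode and reproducible outputs is below, using the OpenAI Python SDK (v1.x). The model identifier gpt-4-1106-preview, the seed value, and the prompt are assumptions for illustration; the client reads OPENAI_API_KEY from the environment.

```python
from openai import OpenAI

client = OpenAI()  # uses the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # assumed GPT-4 Turbo preview identifier
    response_format={"type": "json_object"},  # JSON output mode
    seed=42,                                  # reproducible outputs (best effort)
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'title' and 'summary'."},
        {"role": "user", "content": "Summarise the GPT-4 Turbo announcement."},
    ],
)

print(response.choices[0].message.content)  # a JSON string
```

Note that JSON mode expects the conversation to mention JSON explicitly, which is why the system message does so.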

Third, GPT-4 Turbo’s knowledge cutoff is now April 2023, as compared to September 2021 for GPT-4.

Fourth, the DALL-E 3, text-to-speech and GPT-4 Turbo models are now available via the API. The new Whisper v3 also brings large improvements.
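
The sketch below shows what calling the newly exposed image and text-to-speech endpoints could look like with the OpenAI Python SDK (v1.x); the prompt, voice and output file name are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# DALL-E 3 image generation
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a turbocharged robot",
    size="1024x1024",
)
print(image.data[0].url)

# Text-to-speech
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo supports a 128K context window.",
)
speech.stream_to_file("announcement.mp3")
```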

Fifth, fine-tuning is now available with a 16K context window. Enterprise customers can also partner with OpenAI to build completely custom models. Similar to Google and Microsoft, OpenAI now offers Copyright Shield for all enterprise customers.
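
As a rough sketch of the fine-tuning workflow with the Python SDK (v1.x): upload a JSONL file of chat-formatted examples, then create a job. The file name "training.jsonl" and the model identifier gpt-3.5-turbo-1106 (the new 16K-context GPT-3.5 Turbo) are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on the assumed new GPT-3.5 Turbo model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)
print(job.id, job.status)
```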

Sixth, OpenAI now offers higher rate limits and lower pricing. GPT-4 Turbo input tokens are 66% cheaper and output tokens 50% cheaper than current GPT-4 pricing. You can find more details on the reduced pricing below.

  • GPT-4 Turbo input tokens are 3x cheaper than GPT-4 at $0.01 per 1K tokens, and output tokens are 2x cheaper at $0.03 per 1K tokens.
  • GPT-3.5 Turbo input tokens are 3x cheaper than the previous 16K model at $0.001 per 1K tokens, and output tokens are 2x cheaper at $0.002 per 1K tokens. Developers previously using GPT-3.5 Turbo 4K benefit from a 33% reduction on input tokens at $0.001. These lower prices only apply to the new GPT-3.5 Turbo introduced today.
  • Fine-tuned GPT-3.5 Turbo 4K model input tokens are reduced by 4x to $0.003 per 1K tokens, and output tokens are 2.7x cheaper at $0.006 per 1K tokens. Fine-tuning also supports 16K context at the same price as 4K with the new GPT-3.5 Turbo model. These new prices also apply to fine-tuned gpt-3.5-turbo-0613 models.
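
To make the savings concrete, here is a quick back-of-the-envelope comparison using the per-1K-token prices listed above; the token counts are made-up inputs for illustration.

```python
# Compare request costs across models at the announced per-1K-token prices (USD).
PRICES = {
    "gpt-4":         {"input": 0.03,  "output": 0.06},
    "gpt-4-turbo":   {"input": 0.01,  "output": 0.03},
    "gpt-3.5-turbo": {"input": 0.001, "output": 0.002},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a request with 100K input tokens and 2K output tokens
for model in PRICES:
    print(f"{model}: ${cost(model, 100_000, 2_000):.2f}")
```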
