Azure AI Search increases its storage capacity and vector index size. Here's what changed

The update significantly cuts vector and storage costs.


Key notes

  • RAG apps are moving into production in 2024 and need cost-effective retrieval.
  • Azure AI Search boosts storage capacity and vector index size while staying efficient.
  • Upgrades deliver better performance and scalability.

Microsoft has announced that it’s expanding the storage capacity and vector index size of Azure AI Search, its AI-powered search service for developers.

The announcement arrives as RAG applications move into production in 2024 and demand cost-effective retrieval. In select regions, newly created Basic and Standard tier services now offer more storage and processing power, particularly for searching over vectors, text, and metadata. The change significantly cuts costs, with the price per vector dropping by around 85% and overall storage costs falling by up to 75% or more.

These upgrades also mean you can store more data per partition, have bigger vector indexes, and enjoy faster performance for tasks like indexing and searching.

Microsoft has also improved how vector search works and how it uses storage. You can now apply techniques such as scalar quantization and oversampling, and tune settings to reduce vector storage use by up to 75%. In addition, setting the “stored” property on vector fields can further reduce storage overhead, as the sketch below illustrates.
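As a rough illustration, here is a minimal sketch of an index definition that enables scalar quantization with oversampling and marks a vector field as not stored. The JSON shape follows the 2024-03-01-preview REST API; the service URL, index name, admin key, dimensions, and field names are placeholder assumptions, and property names may differ in other API versions, so treat this as an example rather than a definitive recipe.

```python
import requests

# Assumed placeholders: replace with your own search service, index name, and admin key.
SERVICE = "https://<your-service>.search.windows.net"
INDEX_NAME = "docs-index"
API_VERSION = "2024-03-01-preview"  # preview version exposing vector compression settings
ADMIN_KEY = "<admin-key>"

index_definition = {
    "name": INDEX_NAME,
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        {
            # Vector field: "stored": False drops the retrievable copy of the raw
            # vectors to trim storage overhead (they remain searchable).
            "name": "contentVector",
            "type": "Collection(Edm.Single)",
            "searchable": True,
            "retrievable": False,
            "stored": False,
            "dimensions": 1536,
            "vectorSearchProfile": "vec-profile",
        },
    ],
    "vectorSearch": {
        "algorithms": [{"name": "hnsw-config", "kind": "hnsw"}],
        "compressions": [
            {
                # Scalar quantization compresses vectors (e.g. float32 -> int8);
                # oversampling plus reranking with the original vectors helps recall.
                "name": "sq-config",
                "kind": "scalarQuantization",
                "rerankWithOriginalVectors": True,
                "defaultOversampling": 4.0,
                "scalarQuantizationParameters": {"quantizedDataType": "int8"},
            }
        ],
        "profiles": [
            {"name": "vec-profile", "algorithm": "hnsw-config", "compression": "sq-config"}
        ],
    },
}

# Create or update the index over the REST API.
resp = requests.put(
    f"{SERVICE}/indexes/{INDEX_NAME}",
    params={"api-version": API_VERSION},
    headers={"Content-Type": "application/json", "api-key": ADMIN_KEY},
    json=index_definition,
)
resp.raise_for_status()
print("Index created or updated:", resp.status_code)
```

The design idea is simply that quantized vectors take far less space than full-precision ones, and oversampling with reranking against the originals compensates for the precision lost at query time.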

Azure AI Search is a tool that makes it easy to create advanced search features and AI-powered applications by combining language models with business data. It helps developers build search functions for mobile or web apps, whether for their company or for the software they’re offering as a service.

Not too long ago, Microsoft also announced that Cohere’s new Command R+ model is available as one of the hundreds of language models in Azure AI. The newly launched model has 104B parameters and is claimed to outperform GPT-4 Turbo at a lower cost.

You can find more details on the increased capacity here.