AWS CEO Attacks Microsoft’s Azure AI Strategy





Amazon Web Services, the dominant cloud computing provider, has been losing ground to rivals Microsoft and Google in the fast-growing field of generative artificial intelligence. AWS CEO Adam Selipsky tried to downplay the competition and took a swipe at Microsoft’s close partnership with OpenAI, a leading AI research organization, during the company’s annual re:Invent conference.

Generative AI is a branch of artificial intelligence that uses large language models (LLMs) to create new content such as text, images, audio and video. The technology has been advancing rapidly in recent years, with breakthroughs such as OpenAI’s GPT-3 and GPT-4, Google’s PaLM and PaLM 2, and Meta’s LLaMA. These models can generate realistic and coherent text on various topics, answer questions, write code, and even compose songs.

Microsoft, which invested $1 billion in OpenAI in 2019, has exclusive access to OpenAI’s models and offers them to its Azure cloud customers. Google, which developed its own state-of-the-art models in-house, also makes them available through its Google Cloud platform. Meanwhile, AWS, which entered the generative AI market later than its competitors, has partnered with Anthropic, an AI startup, to provide its Claude 2 model on AWS. AWS also announced its own Titan series of models, which are not as advanced as the models offered by Microsoft and Google.
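For readers curious what “providing Claude 2 on AWS” looks like in practice, a minimal sketch of calling the model through Amazon Bedrock with boto3 is shown below; the model identifier, region, and prompt are illustrative assumptions rather than details taken from AWS’s announcement.

```python
# Hypothetical sketch: calling Anthropic's Claude 2 via Amazon Bedrock with boto3.
# The model ID, region, and request shape below are assumptions for illustration.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request = json.dumps({
    "prompt": "\n\nHuman: Summarize the key cloud AI announcements from re:Invent.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # Claude 2 on Bedrock (assumed identifier)
    body=request,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; decode it and print the model's completion text.
print(json.loads(response["body"].read())["completion"])
```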

Selipsky, who took over as AWS CEO in 2021, tried to portray AWS as the best choice for customers who want to use generative AI in their businesses. He claimed that AWS provides more variety and flexibility than other cloud providers, and criticized Microsoft for being “beholden primarily to one model provider”. He also said that he had talked to at least 10 Fortune 500 CIOs who banned ChatGPT, a popular chatbot service powered by OpenAI’s models, from their enterprises due to privacy and security concerns.

However, Selipsky’s arguments are not entirely accurate or fair. First, Microsoft does not depend only on OpenAI’s models; it also offers models from a number of other sources, such as Falcon, Stable Diffusion, LLaMA 2, Cohere, Jais, Mistral, and NVIDIA Nemotron. This is much like how Amazon itself relies on Anthropic and models from elsewhere. Second, ChatGPT has a separate enterprise product, called ChatGPT Enterprise, which does not use customer data for training and complies with data protection regulations. Third, Selipsky claimed that a lot of other cloud providers “are still just talking” about their own AI chips, ignoring the fact that Google has been using its own custom-made AI chips, called Tensor Processing Units (TPUs), since 2016. Google’s TPUs are designed to accelerate the training and inference of large-scale models.

AWS, which still holds the largest share of the cloud computing market, is facing increasing pressure from Microsoft and Google, which are growing faster and gaining more customers. Generative AI is one of the key areas where the cloud providers are competing to offer the best and most innovative solutions. Having state-of-the-art AI models matters in generative AI, as it can enable new possibilities, improve performance, and enhance user satisfaction. AWS may have to do more than just attack its rivals to maintain its leadership position in the cloud.
