Microsoft CTO responds to MAI-1 LLM rumors
Key notes
- Microsoft clarifies collaboration with OpenAI for training powerful LLMs.
- Microsoft builds supercomputers, OpenAI uses them to train LLMs used in Microsoft products.
- Microsoft also builds its own LLMs like MAI-1 (500 billion parameters) to rival Google Gemini and Amazon Titan.
Microsoft has responded to recent news regarding its development of an LLM named MAI-1. In a post, the company’s CTO discussed its long-standing partnership with OpenAI for training LLMs.
He clarified that Microsoft builds powerful supercomputers that OpenAI uses to train these advanced models, which are then integrated into various Microsoft products and services. This effort, according to the company, has been ongoing for several years, with each iteration surpassing the previous one in capability. Microsoft assures that this collaboration will continue well into the future.
The post also acknowledges Microsoft’s independent research on AI models. The company has been creating smaller AI models as well as larger ones such as Turing and MAI, and it even open-sources some models, such as Phi.
Microsoft’s clarification comes after a report surfaced about their development of MAI-1, an LLM with a massive 500 billion parameters. This positions MAI-1 to compete directly with industry leaders like Google Gemini and Amazon Titan.
The report also mentions Microsoft’s recent acquisition of talent and technology from Inflection AI, which could be relevant to MAI-1’s development. While Microsoft has not explicitly confirmed this connection, the timing suggests a possible link.
Overall, Microsoft’s response appears to address the recent news by providing context for their LLM strategy. They highlight their existing, successful collaboration with OpenAI while acknowledging their in-house LLM efforts like MAI-1.