What is Orca 2? Microsoft's latest drop outperforms similar-sized models and rivals much larger ones

The AI race is heating up. Amid the leadership shake-up at OpenAI involving (now former) CEO Sam Altman, the company’s board, and Microsoft, the Redmond-based tech giant has “quietly” launched its latest small language model. It’s called Orca 2, and from the look of it, this could be Microsoft’s answer to the growing AI challenge.

Orca 2 isn’t just talking the talk – it’s walking the walk, outperforming models of similar size and going head-to-head with models almost ten times larger, especially on tricky tasks that test advanced reasoning.

Orca 2 comes in two sizes, 7 billion and 13 billion parameters, both fine-tuned on tailored synthetic data. Microsoft says it’s making the weights publicly available to “encourage research” on smaller language models.
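
For readers who want to try the public weights, here’s a minimal sketch of loading them with Hugging Face’s transformers library. Note that the model ID below is an assumption based on Microsoft’s usual naming, not something stated in the announcement:

```python
# Minimal sketch: loading the public Orca 2 weights with Hugging Face
# transformers. The model ID is an assumption; check Microsoft's release
# page for the exact identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-7b"  # assumed ID; the 13B variant would follow the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place layers on available GPUs/CPU automatically
)
```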

Check out the charts below to see how Orca 2 performs on a variety of benchmarks compared to other models of similar size and even models that are 5-10 times larger.

“The training data was generated such that it teaches Orca 2 various reasoning techniques, such as step-by-step processing, recall then generate, recall-reason-generate, extract-generate, and direct answer methods, while also teaching it to choose different solution strategies for different tasks,” says Microsoft in the official announcement.
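
To give a feel for what switching between those strategies might look like in practice, here’s a speculative prompting sketch, continuing from the loading snippet above. The chat-style template is an assumption for illustration, not Microsoft’s documented Orca 2 prompt format:

```python
# Illustrative only: nudging the model toward two of the reasoning styles
# Microsoft mentions (step-by-step vs. direct answer) via the system message.
# Assumes `model` and `tokenizer` from the loading sketch above; the prompt
# template itself is an assumed chat format, not Microsoft's documented one.
def build_prompt(system: str, question: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

question = "A train travels 120 km in 1.5 hours. What is its average speed?"
step_by_step = build_prompt("Reason step by step before giving your answer.", question)
direct = build_prompt("Answer with the final result only, no explanation.", question)

inputs = tokenizer(step_by_step, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```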

A few months back, Redmond’s researchers launched its predecessor, Orca 1, with 13 billion parameters. You can read Microsoft’s Orca 2 paper here.