Ministral 3B & 8B models to power smartphones with Snapdragon 8 Elite chip

Samsung's upcoming Galaxy S25 series will be powered by this SoC.

Key notes

  • Qualcomm and Mistral AI bring Ministral 3B and 8B to Snapdragon devices.
  • The models are optimized for smartphones, cars, and PCs.
  • They will be available soon on the Qualcomm AI Hub.

Qualcomm has now tapped Mistral AI’s new generative AI models, Ministral 3B and Ministral 8B, for Snapdragon 8 Elite-powered phones.

The San Diego-based tech giant recently adopted the “Elite” name for its latest high-end smartphone chip. The highly anticipated SoC will reportedly power Samsung’s upcoming Galaxy S25 series, set to launch in early 2025, as well as the Asus ROG Phone 9 Pro.

The company also says in the announcement that the models are optimized for its mobile platform as well as the Cockpit Elite, Ride Elite, and X Elite compute platforms. That means they will soon power on-device AI smarts for smartphones, vehicles, and even PCs built on those platforms.

“Running generative AI on devices offers numerous benefits, including enhanced privacy, lower latency, reliability, cost savings, and energy efficiency,” says the company.

The Snapdragon 8 Elite completely smoked Apple’s A18 Pro chip, according to recently leaked benchmark numbers. It drops the Kryo CPUs used in previous mobile chipsets in favor of the new Oryon CPU, delivering desktop-like performance from a total of eight cores, paired with a Hexagon NPU and a sliced-architecture Adreno GPU.

Besides Samsung and Asus, Qualcomm also listed Android brands including Honor, OnePlus, Oppo, and Xiaomi that will launch Snapdragon 8 Elite-powered phones in the coming weeks.

Mistral 7B v0.3 is currently available on the Qualcomm AI Hub, with Ministral 3B and 8B arriving soon. Designed for edge use cases with a 128k context length, these models excel in knowledge, reasoning, and efficiency within the sub-10B category. Mistral also says they outperform Gemma 2 and Llama models across several benchmarks.
