Microsoft partners with Intel to bring optimized deep learning frameworks to Azure




Default builds of popular deep learning frameworks such as TensorFlow are generally not fully optimized for training and inference on CPUs. To address this, Intel has open-sourced framework optimizations for Intel Xeon processors. Microsoft recently announced a partnership with Intel to bring these optimized deep learning frameworks to Azure. The optimizations are available on the Azure Marketplace under the name Intel Optimized Data Science VM for Linux (Ubuntu).

These optimizations leverage Intel Advanced Vector Extensions 512 (Intel AVX-512) and the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) to accelerate training and inference on Intel Xeon processors. Running on an Azure F72s_v2 VM instance, the optimizations yielded an average 7.7x speedup in training throughput across standard CNN topologies.
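As a minimal sketch of how such a VM is typically tuned in practice, the snippet below sets the OpenMP/MKL-DNN environment knobs Intel documents for Xeon CPUs and maps them onto TensorFlow 1.x thread pools. The thread counts are assumptions (36 physical cores, matching a 72-vCPU F72s_v2 with hyperthreading) and should be adjusted for the actual VM size and workload:

import os
import tensorflow as tf

# OpenMP / MKL-DNN tuning knobs commonly recommended for Xeon CPUs
# (values here are illustrative, not prescriptive).
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["OMP_NUM_THREADS"] = "36"   # assumed physical core count on an F72s_v2

# Map the same assumptions onto TensorFlow's thread pools (TF 1.x API).
config = tf.ConfigProto(
    intra_op_parallelism_threads=36,  # parallelism within a single op
    inter_op_parallelism_threads=2,   # parallelism across independent ops
)

with tf.Session(config=config) as sess:
    # Build and run the training graph here as usual; the MKL-DNN kernels
    # and AVX-512 instructions are picked up automatically by the optimized build.
    pass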

You can find the Intel Optimized Data Science VM on the Azure Marketplace here.

Source: Microsoft

