Microsoft partners with Intel to bring optimized deep learning frameworks to Azure
Generally, default builds of popular deep learning frameworks like TensorFlow are not fully optimized for CPU training and inference. To address this, Intel has open-sourced framework optimizations for Intel Xeon processors, and Microsoft recently announced a partnership with Intel to bring these optimized deep learning frameworks to Azure. The optimizations are available on the Azure Marketplace under the name Intel Optimized Data Science VM for Linux (Ubuntu).
These optimizations leverage Intel Advanced Vector Extensions 512 (Intel AVX-512) and the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) to accelerate training and inference on Intel Xeon processors. When running on an Azure F72s_v2 VM instance, the optimizations yielded an average speedup of 7.7x in training throughput across standard CNN topologies.
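For developers who want to confirm that the MKL-DNN-accelerated kernels are actually in use on one of these VMs, here is a minimal sketch. It assumes an Intel-optimized TensorFlow build (for example, the intel-tensorflow pip package) is installed; the MKLDNN_VERBOSE environment variable and the thread counts shown are illustrative assumptions, not a configuration documented by Microsoft or Intel for this image.

```python
# Minimal sketch: verify MKL-DNN dispatch and set common CPU threading knobs.
# Assumes an Intel-optimized TensorFlow build (e.g. the intel-tensorflow package).
import os

# MKL-DNN prints a log line per primitive when this is set, which makes it
# easy to see that the optimized kernels are being dispatched. Set it before
# importing TensorFlow so the library picks it up.
os.environ["MKLDNN_VERBOSE"] = "1"

import tensorflow as tf

# Intra-op parallelism handles threading inside heavy ops (e.g. convolutions);
# inter-op parallelism runs independent graph nodes concurrently. The values
# below are only examples (72 matches the vCPU count of an F72s_v2 instance).
tf.config.threading.set_intra_op_parallelism_threads(72)
tf.config.threading.set_inter_op_parallelism_threads(2)

# Quick sanity check: run a convolution and watch for MKL-DNN verbose output.
x = tf.random.normal([32, 224, 224, 3])
w = tf.random.normal([3, 3, 3, 64])
y = tf.nn.conv2d(x, w, strides=1, padding="SAME")
print(y.shape)
```

If the optimized build is active, the console should show MKL-DNN primitive logs alongside the printed output shape; a stock CPU build would run the same code without them.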
You can find the Intel Optimized Data Science VM on the Azure Marketplace here.
Source: Microsoft