Microsoft open sources high-performance inference engine for machine learning models
Microsoft yesterday announced that it is open sourcing ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, available on Linux, Windows, and Mac. ONNX Runtime lets developers train and tune models in any supported framework and then deploy those models with high performance in both the cloud and at the edge. Microsoft uses ONNX Runtime internally for Bing Search, Bing Ads, Office productivity services, and more.
ONNX brings interoperability to the AI framework ecosystem by providing a definition of an extensible computation graph model, along with definitions of built-in operators and standard data types. ONNX enables models to be trained in one framework and transferred to another for inference. ONNX models are currently supported in Caffe2, Cognitive Toolkit, and PyTorch.
Check out the Open Neural Network Exchange (ONNX) Runtime on GitHub here.
Source: Microsoft