Microsoft open sources high-performance inference engine for machine learning models

Microsoft yesterday announced that it is open sourcing ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, available on Linux, Windows, and macOS. ONNX Runtime lets developers train and tune models in any supported framework and then deploy them for high-performance inference in the cloud and on edge devices. Microsoft uses ONNX Runtime internally for Bing Search, Bing Ads, Office productivity services, and more.

ONNX brings interoperability to the AI framework ecosystem by defining an extensible computation graph model, along with built-in operators and standard data types. This lets a model be trained in one framework and transferred to another for inference. ONNX models are currently supported in Caffe2, Cognitive Toolkit, and PyTorch.

Check out the Open Neural Network Exchange (ONNX) Runtime on GitHub here.

Source: Microsoft

