Microsoft open sources high-performance inference engine for machine learning models


Microsoft yesterday announced that it is open sourcing ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, available on Linux, Windows, and Mac. ONNX Runtime allows developers to train and tune models in any supported framework and productionize them with high performance both in the cloud and at the edge. Microsoft uses ONNX Runtime internally for Bing Search, Bing Ads, Office productivity services, and more.

ONNX brings interoperability to the AI framework ecosystem by providing a definition of an extensible computation graph model, as well as definitions of built-in operators and standard data types. ONNX enables models to be trained in one framework and transferred to another for inference. ONNX models are currently supported in Caffe2, Cognitive Toolkit, and PyTorch.

Check out the Open Neural Network Exchange (ONNX) Runtime on GitHub here.

Source: Microsoft

More about the topics: microsoft, ONNX, Open Neural Network Exchange, open-source