Microsoft today announced a collaboration with Facebook on the Open Neural Network Exchange (ONNX) format, which brings interoperability to the AI framework ecosystem. ONNX provides the definition of an extensible computation graph model, as well as definitions of built-in operators and standard data types.
There are several AI frameworks on the market, including Microsoft’s own Cognitive Toolkit. Until today, there was no standard way to use an AI model created in one framework from another. ONNX solves this problem by serving as an open-source format for AI models. Microsoft’s Cognitive Toolkit, Caffe2, and PyTorch will support ONNX.
ONNX offers the following benefits:
- Framework interoperability: Developers can move between frameworks more easily and use the best tool for the task at hand. Each framework is optimized for specific characteristics, such as fast training, flexible network architectures, or inferencing on mobile devices. Often, the characteristics that matter most during research and development differ from those that matter most for shipping to production. This leads either to inefficiencies from not using the right framework, or to significant delays as developers convert models between frameworks. Frameworks that use the ONNX representation simplify this and let developers be more agile.
- Shared optimization: Hardware vendors and others with optimizations for improving the performance of neural networks can benefit multiple frameworks at once by targeting the ONNX representation. Today, optimizations frequently have to be integrated separately into each framework, which is time-consuming. The ONNX representation makes it easier for optimizations to reach more developers.
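To make the "extensible computation graph" idea concrete, here is an illustrative sketch of what such a shared model format enables. This is not the real ONNX API; the `Node` and `Graph` classes and the tiny scalar interpreter below are invented for illustration, showing how any tool that understands a common graph-of-operators structure can execute (or convert, or optimize) a model regardless of which framework produced it:

```python
# Toy computation-graph format in the spirit of ONNX: a list of nodes,
# each applying a named built-in operator to named values.
# NOTE: illustrative only -- not the actual ONNX schema or API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Node:
    op_type: str       # built-in operator name, e.g. "Relu" or "Add"
    inputs: List[str]  # names of the values this node consumes
    output: str        # name of the value this node produces


@dataclass
class Graph:
    nodes: List[Node] = field(default_factory=list)

    def run(self, feeds: Dict[str, float]) -> Dict[str, float]:
        # A minimal interpreter over scalar values. Any framework that can
        # walk this structure can run the model or translate it into its
        # own internal representation.
        ops: Dict[str, Callable[..., float]] = {
            "Add": lambda a, b: a + b,
            "Mul": lambda a, b: a * b,
            "Relu": lambda a: max(a, 0.0),
        }
        values = dict(feeds)
        for node in self.nodes:
            args = [values[name] for name in node.inputs]
            values[node.output] = ops[node.op_type](*args)
        return values


# A one-neuron model: y = relu(w * x + b), expressed as a framework-neutral graph.
graph = Graph(nodes=[
    Node("Mul", ["w", "x"], "wx"),
    Node("Add", ["wx", "b"], "z"),
    Node("Relu", ["z"], "y"),
])
result = graph.run({"w": 2.0, "x": 3.0, "b": -1.0})
print(result["y"])  # relu(2*3 - 1) = 5.0
```

Because the graph is just data (operator names plus value names), a hardware vendor could optimize the interpreter once, or a second framework could load the same node list, without either knowing which tool originally built the model.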