Custom Vision Service, part of Microsoft Cognitive Services, allows developers to easily train, deploy, and improve custom image classifiers. Developers can train their own image classifier in minutes with just a few images per category. Once the model is trained, they can use it in their apps through REST APIs.
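As a rough sketch of what calling the prediction REST API looks like, the snippet below builds the HTTP request for classifying an image by URL. The project ID, prediction key, region, and the `v1.0` endpoint path are illustrative assumptions; substitute the values shown in your own Custom Vision portal.

```python
import json
import urllib.request

# Hypothetical placeholders -- replace with the project ID, prediction key,
# and endpoint region from your Custom Vision project settings.
PROJECT_ID = "your-project-id"
PREDICTION_KEY = "your-prediction-key"
ENDPOINT = (
    "https://southcentralus.api.cognitive.microsoft.com"
    "/customvision/v1.0/Prediction/" + PROJECT_ID + "/url"
)

def build_prediction_request(image_url):
    """Build the POST request that asks the service to classify an image URL."""
    body = json.dumps({"Url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_prediction_request("https://example.com/cat.jpg")
# Sending it with urllib.request.urlopen(req) returns a JSON body whose
# predictions list each trained tag with a probability score.
```

The same endpoint family also accepts raw image bytes instead of a URL; only the path suffix and `Content-Type` differ.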
Yesterday, Microsoft announced mobile model support, which will allow developers to add real-time image classification to their mobile apps without the need for an internet connection. After developers train their model in the cloud using their images, they can now export the model to run offline. Microsoft Cognitive Services will export the model to the CoreML format for iOS 11.
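On device, the exported model is consumed through Core ML in Swift; a Core ML classifier exposes its per-class probabilities as a dictionary (its `classLabelProbs` output). The snippet below is a minimal language-neutral sketch, in Python, of the post-processing step an app would run on that dictionary; the tag names and threshold are illustrative assumptions, not part of the service.

```python
def top_predictions(class_probs, threshold=0.5):
    """Return (label, probability) pairs at or above `threshold`, best first.

    `class_probs` mirrors the label -> probability dictionary a Core ML
    classifier reports for a single image.
    """
    ranked = sorted(class_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [(label, p) for label, p in ranked if p >= threshold]

# Hypothetical output for one frame from an offline classifier.
probs = {"hemlock": 0.04, "japanese cherry": 0.93, "negative": 0.03}
print(top_predictions(probs))  # -> [('japanese cherry', 0.93)]
```

Filtering by a confidence threshold like this is a common way to suppress low-certainty labels when classifying live camera frames.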
Creating, updating, and exporting a compact model takes only minutes, making it easy to build and iteratively improve your application.
Microsoft is planning to add support for more export formats and devices in the future. Developers can check out Microsoft's sample app to learn how to build apps using this new feature.