Earlier this week, Microsoft Research unveiled Project Adam at the Microsoft Research Faculty Summit. It is a state-of-the-art machine learning and artificial intelligence program that enables software to visually recognize any object. Project Adam's object classification is built on a massive dataset of 14 million images from the Web and sites such as Flickr, spanning more than 22,000 categories drawn from user-generated tags. Microsoft claims the system is twice as accurate at object recognition and 50 times faster than other systems. To show the system in action, Microsoft brought a live dog on stage, and a phone powered by Project Adam recognized the dog's breed. You can watch it here.
Trishul Chilimbi, Partner Research Manager at Microsoft Research, discusses Project Adam and how deep neural networks have enabled large-scale computer image recognition with astounding accuracy.
“The one thing that’s interesting and fundamental to me is how [deep learning] changes how we think about computers and programming,” Chilimbi said. “Say I would program a system to do the ImageNet classification task. As a programmer, the way I might go about it would be, ‘OK, I’ll program something to recognize faces or eyes.’
“That’s traditionally how we write programs. People have written programs that sought to do image-classification tasks, and the accuracy of those programs is way below the automatically learned system that operates on this task.”
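The contrast Chilimbi describes can be illustrated with a deliberately tiny sketch: a hand-coded rule whose threshold the programmer guesses up front, versus the same rule with its threshold fit from labeled examples. This is purely illustrative — Project Adam uses deep neural networks trained at enormous scale, not the toy brightness classifier below, and all names and data here are invented for the example.

```python
# Toy contrast: a hand-coded classification rule vs. one learned from data.
# (Illustrative only; Project Adam trains deep neural networks, not this.)

def hand_coded(image):
    # The programmer guesses a rule up front: "bright if mean pixel > 0.5".
    return "bright" if sum(image) / len(image) > 0.5 else "dark"

def learn_threshold(examples):
    # Learn the rule from labeled data instead: place the threshold
    # midway between the average brightness of each class.
    bright = [sum(img) / len(img) for img, lbl in examples if lbl == "bright"]
    dark = [sum(img) / len(img) for img, lbl in examples if lbl == "dark"]
    return (sum(bright) / len(bright) + sum(dark) / len(dark)) / 2

# Hypothetical labeled training data: pixel lists with brightness labels.
training = [
    ([0.9, 0.8, 0.7], "bright"),
    ([0.2, 0.1, 0.3], "dark"),
    ([0.6, 0.7, 0.8], "bright"),
    ([0.1, 0.2, 0.2], "dark"),
]
threshold = learn_threshold(training)

def learned(image):
    # Same rule shape, but the decision boundary came from the data.
    return "bright" if sum(image) / len(image) > threshold else "dark"

print(hand_coded([0.55, 0.6, 0.5]))  # rule fixed by the programmer
print(learned([0.55, 0.6, 0.5]))     # rule fit from examples
```

The point of the sketch is the shift Chilimbi highlights: instead of the programmer encoding "recognize faces or eyes" by hand, the system derives its decision criteria from labeled examples — and at ImageNet scale, the learned system outperforms the hand-written ones.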
Watch the video below.