At the Microsoft Research Faculty Summit, Microsoft revealed its new project, called ‘Adam’. Project Adam is a deep-learning system modeled after the human brain that achieves greater image-classification accuracy and is 50 times faster than other systems in the industry. On the ImageNet 22K benchmark, the Adam neural network tops the performance numbers of Google Brain, the system behind many of Google’s services. The benchmark covers a database of 22,000 categories of images, and Adam and Google Brain are among the few artificial-intelligence models that can handle this massive amount of input. Microsoft is looking to use Adam to power a wide range of applications, such as letting users get information about anything simply by pointing their mobile camera at it. Microsoft even brought dogs on stage and demoed the system: the mobile camera recognized each dog’s breed when pointed at the animal.
Peter Lee, the head of Microsoft Research, believes Adam could be part of what he calls an “ultimate machine intelligence,” something that could function in ways closer to how we humans handle different modalities, like speech, vision, and text, all at once. The road to that kind of technology is long; people have been working toward it since the 1950s, but we are certainly getting closer.
Read more at the link below.