Kinect enables humans to interact naturally with computers. The latest sensor—whether it’s the Kinect for Xbox One sensor or the Kinect for Windows v2 sensor—and the free software development kit (SDK) 2.0 provide developers with the foundation needed to create and deploy interactive applications that respond to people’s natural movements, gestures, and voice commands. But it has some disadvantages: it needs a large room, the person interacting must stand meters away from the sensor, and more.
Microsoft Research is now developing a new real-time hand tracking system based on a single depth camera that can be embedded in mobile devices.
The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, the tracker is highly flexible, dramatically improving upon previous approaches, which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction, with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage.
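To make the pipeline description concrete, here is a minimal sketch of what a per-frame "discriminative reinitialization, then generative model fitting" loop could look like. All names, the 21-joint pose size, and the toy energy function are assumptions for illustration; this is not Microsoft Research's actual code.

```python
import random

NUM_JOINTS = 21  # a common hand-skeleton size; an assumption, not from the article

def discriminative_reinitialize(depth_frame, num_candidates=5):
    """Stand-in for the multi-layered discriminative stage: propose
    several coarse pose hypotheses directly from the depth frame."""
    random.seed(int(sum(depth_frame) * 1000))  # deterministic for the demo
    return [[random.uniform(-1.0, 1.0) for _ in range(NUM_JOINTS)]
            for _ in range(num_candidates)]

def model_fit_energy(pose, depth_frame):
    """Toy 'energy': how poorly a pose explains the frame (lower is better)."""
    target = sum(depth_frame) / len(depth_frame)
    return sum((p - target) ** 2 for p in pose)

def generative_refine(pose, depth_frame, steps=50, step_size=0.1):
    """Stand-in for the generative model-fitting stage: local search that
    only accepts pose perturbations which lower the energy."""
    best, best_e = pose, model_fit_energy(pose, depth_frame)
    for _ in range(steps):
        candidate = [p + random.uniform(-step_size, step_size) for p in best]
        e = model_fit_energy(candidate, depth_frame)
        if e < best_e:
            best, best_e = candidate, e
    return best, best_e

def track_frame(depth_frame):
    """Per-frame pipeline: reinitialize, pick the best hypothesis, refine it."""
    candidates = discriminative_reinitialize(depth_frame)
    best = min(candidates, key=lambda p: model_fit_energy(p, depth_frame))
    return generative_refine(best, depth_frame)

frame = [0.4, 0.5, 0.45, 0.55]  # fake normalized depth values
pose, energy = track_frame(frame)
print(len(pose), round(energy, 3))
```

Because the discriminative stage runs every frame, the tracker can recover after a total loss of the hand (the "rapid recovery" the article mentions), while the generative stage keeps the final pose consistent with a hand model.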
Based on a detailed qualitative and quantitative analysis, Microsoft Research believes their system is more accurate than Leap Motion. Read more about it at the link below.
Thanks to Walking Cat for the heads up.