Microsoft started the gesture recognition revolution in the technology world with the introduction of the Kinect sensor. It began as a gaming accessory and later moved to the Windows platform, where there are now thousands of apps built on the Kinect sensor and its SDK. It enables humans to interact naturally with computers: the Kinect for Windows v2 sensor and the free software development kit (SDK) 2.0 give developers the foundation needed to create and deploy interactive applications that respond to people's natural movements, gestures, and voice commands. But the main disadvantage of the Kinect sensor is that it requires a lot of space: consumers should be at least 2 meters away from the sensor for better recognition.
Intel came up with its new RealSense technology, which includes not one but three cameras that enable new ways to interact in gaming, entertainment, photography, and content creation. This front-facing camera allows users to interact more naturally with their computers through gestures and even facial and voice recognition. The main advantage is that Intel RealSense is small and can be integrated into laptops and PCs. At CES, a few PC OEMs announced devices with an integrated Intel RealSense 3D Camera. A good example of the Intel RealSense camera in a PC is the HP Sprout, where it is used for instant capture of 2D and 3D objects.
The question here is whether Microsoft has tried to shrink the Kinect sensor so that it can be integrated into PCs, tablets, etc. According to a recent patent from Microsoft, the company is working on a 3D camera. The patent is titled “Touch sensitive user interface with three dimensional input sensor”.
Disclosed herein are systems and methods for allowing touch user input to an electronic device associated with a display. The touch interface can be any surface. As one example, a table top can be used as a touch sensitive interface. The system may have a 3D camera that identifies the relative position of a user’s hands to the touch interface to allow for user input. Thus, the user’s hands do not occlude the display. The system is also able to determine whether the user’s hands are hovering over the touch interface. Therefore, the system allows for hover events, as one type of user input.
A system and method are disclosed for providing a touch interface for electronic devices. The touch interface can be any surface. As one example, a table top can be used as a touch sensitive interface. In one embodiment, the system determines a touch region of the surface, and correlates that touch region to a display of an electronic device for which input is provided. The system may have a 3D camera that identifies the relative position of a user’s hands to the touch region to allow for user input. Note that the user’s hands do not occlude the display. The system may render a representation of the user’s hand on the display in order for the user to interact with elements on the display screen.
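The idea in the patent text above can be pictured as two small steps: correlate a point in the touch region (say, an area of a table top) to display pixel coordinates, and classify the hand's height above the surface as a touch or a hover. Below is a minimal, hypothetical sketch of that logic; the thresholds, class names, and dimensions are assumptions for illustration, not anything taken from the patent.

```python
# Hypothetical sketch of the behavior described in the patent text:
# a 3D camera reports a hand position over a touch region, the system
# maps it to display coordinates and decides touch vs. hover from the
# hand's height above the surface. All names and thresholds are assumed.
from dataclasses import dataclass

TOUCH_MM = 5   # within 5 mm of the surface counts as a touch (assumed)
HOVER_MM = 60  # within 60 mm counts as a hover (assumed)

@dataclass
class TouchRegion:
    x0: float       # region origin on the surface, in mm
    y0: float
    width: float    # region size, in mm
    height: float

@dataclass
class Display:
    width_px: int
    height_px: int

def map_to_display(hand_x: float, hand_y: float,
                   region: TouchRegion, display: Display) -> tuple[int, int]:
    """Correlate a point in the touch region to display pixel coordinates."""
    u = (hand_x - region.x0) / region.width
    v = (hand_y - region.y0) / region.height
    return round(u * display.width_px), round(v * display.height_px)

def classify(hand_z_mm: float) -> str:
    """Classify the hand's height above the surface as touch, hover, or none."""
    if hand_z_mm <= TOUCH_MM:
        return "touch"
    if hand_z_mm <= HOVER_MM:
        return "hover"
    return "none"

region = TouchRegion(x0=0, y0=0, width=400, height=300)
display = Display(width_px=1920, height_px=1080)

print(map_to_display(200, 150, region, display))  # center of region -> (960, 540)
print(classify(3), classify(30), classify(120))   # touch hover none
```

Because the camera looks at the surface rather than the screen, the hand never occludes the display; rendering a cursor or hand representation at the mapped coordinates is what lets the user see where they are pointing.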
Do you think Microsoft should release a competitor to Intel RealSense based on Kinect or adopt RealSense technology? Let us know in the comments.