Microsoft Kinect is creating quite a revolution in the field of Natural User Interfaces. A low-cost 3D depth camera like the Kinect opens up endless possibilities in our computing world. Sign language is the primary language for many deaf and hard-of-hearing people, but currently these people cannot interact with computers using their native language. Microsoft Research Asia has developed a Kinect-based system that recognizes the sign language used by these people, and it performs well.
Here is some info on the project:
Kinect, with its ability to provide depth information and color data simultaneously, makes it easier to track hand and body actions more accurately—and quickly.
In this project—which is being shown during the DemoFest portion of Faculty Summit 2013, which brings more than 400 academic researchers to Microsoft headquarters to share insight into impactful research—the hand tracking leads to a process of 3-D motion-trajectory alignment and matching for individual words in sign language. The words are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.
The algorithm for this 3-D trajectory matching, in turn, has enabled the construction of a system for sign-language recognition and translation, consisting of two modes. The first, Translation Mode, translates sign language into text or speech. The technology currently supports American Sign Language but has potential for all varieties of sign language.
The second, Communications Mode, enables communications between a hearing person and a deaf or hard-of-hearing person by use of an avatar. Guided by text input from a keyboard, the avatar can display the corresponding sign-language sentence. The deaf or hard-of-hearing person responds using sign language, and the system converts that answer into text.
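The announcement does not publish the exact matching algorithm, but the pipeline it describes (track hand positions, normalize the 3-D trajectories, then score them against word templates) can be sketched with a standard technique such as dynamic time warping. The function names, the normalization choice, and the use of DTW below are illustrative assumptions, not the team's actual implementation:

```python
import numpy as np

def normalize(traj):
    # Assumed normalization: center the trajectory on its centroid
    # and scale to unit RMS radius, so position and size don't matter.
    traj = np.asarray(traj, dtype=float)
    centered = traj - traj.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale if scale > 0 else centered

def dtw_distance(a, b):
    # Dynamic time warping aligns two 3-D point sequences that may be
    # signed at different speeds, accumulating pointwise distances.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def best_match(query, vocabulary):
    # Score the signed query trajectory against every word template
    # and return the most relevant candidate (lowest distance).
    q = normalize(query)
    scores = {word: dtw_distance(q, normalize(t))
              for word, t in vocabulary.items()}
    return min(scores, key=scores.get), scores
```

For example, a noisy repetition of a "wave"-shaped hand path should score closer to the stored "wave" template than to a straight-line template, which is the kind of candidate ranking the system uses to pick the recognized word.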
Read more from the link below.