Over 360 million people with severe hearing loss use sign language every day. Since most hearing individuals do not understand sign language, they cannot engage in real-time, unwritten communication with people who have hearing loss. To address this, Microsoft Research launched the Kinect Sign Language project in February 2012 in collaboration with the Chinese Academy of Sciences (CAS) and Beijing Union University. The project developed the Kinect Sign Language Translator, a tool that enables real-time conversations between signing and non-signing participants by translating sign language into computer-spoken words and, in the reverse direction, rendering spoken words as sign language performed by an avatar.
They have now launched the Kinect Sign Language Working Group, a community intended to advance research in sign-language recognition.
As a first step, the group is opening the DEVISIGN Chinese Sign Language Database to academia. Compiled by the Visual Information Processing and Learning (VIPL) group of the Institute of Computing Technology, under the sponsorship of Microsoft Research Asia, DEVISIGN covers about 4,400 standard Chinese Sign Language words, with 331,050 vocabulary samples recorded from 30 signers (13 male and 17 female). Each sample comprises RGB video (in AVI format) along with depth and skeleton information (in BIN format). DEVISIGN thus gives sign-language researchers a rich store of data for training and evaluating their algorithms and for building state-of-the-art practical applications, such as adapting a recognition system to an unknown signer.
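For tasks like signer adaptation, researchers typically index the clips by signer so they can hold out unseen signers for evaluation. As a rough illustration only (the actual DEVISIGN directory layout and file-naming scheme are not described here, so the `P<signer>_W<word>` convention below is purely hypothetical), a signer-level index might be built like this:

```python
import re
from collections import defaultdict

# Hypothetical naming convention: P<signer>_W<word>.<ext>
# (the real DEVISIGN layout may differ)
CLIP_RE = re.compile(r"P(?P<signer>\d+)_W(?P<word>\d+)\.(avi|bin)$")

def index_clips(filenames):
    """Group clip filenames by signer ID, so a signer-independent
    split (e.g. train on some signers, test on the rest) is easy."""
    by_signer = defaultdict(list)
    for name in filenames:
        m = CLIP_RE.search(name)
        if m:
            by_signer[int(m.group("signer"))].append(name)
    return dict(by_signer)

clips = ["P01_W0001.avi", "P01_W0002.avi", "P02_W0001.avi"]
index = index_clips(clips)
# index[1] holds both clips recorded by signer 1
```

Splitting by signer rather than by clip is what makes an evaluation "signer-independent": every test signer is entirely unseen during training.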
Read more about it at the link below.