Microsoft Research Shows More Details On Smart Interactive Displays (Video)


Just a few days back, we posted news on how Microsoft is ready to blow your mind with the computers of the future. Among the many projects Microsoft Research is working on, the Applied Sciences Group plays an important role, building cool prototypes that may appear in future products. In the video above, a Microsoft researcher demonstrates the smart interactive displays the group is working on. Kinect is truly making waves, both within Microsoft and outside it. Here is the description from MSR of the technologies explained in the video above.

Our research shows:
Steerable AutoStereo 3-D Display: We use a special, flat optical lens (Wedge) behind an LCD monitor to direct a narrow beam of light into each of a viewer’s eyes. A Kinect head tracker follows the user’s position relative to the display, allowing the prototype to steer that narrow beam to the user. The combination creates a 3-D image that follows the viewer, without the need for glasses or for holding one’s head in place.
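The per-eye steering step above can be sketched in a few lines. This is a toy illustration, not MSR's implementation: the coordinate convention, the nominal interpupillary distance, and the assumption that the viewer faces the display squarely are all ours.

```python
# Toy sketch: from a tracked head position (as a Kinect head tracker might
# report it) derive two beam targets, one per eye, using a nominal
# interpupillary distance. All numbers and conventions are illustrative.

IPD = 0.063  # nominal interpupillary distance in metres (assumption)

def eye_targets(head_xyz, ipd=IPD):
    """Return (left_eye, right_eye) positions to steer the two narrow
    beams toward, assuming the viewer faces the display squarely."""
    x, y, z = head_xyz
    return (x - ipd / 2, y, z), (x + ipd / 2, y, z)

# Head centred, 1.6 m high, 0.8 m from the screen:
left, right = eye_targets((0.0, 1.6, 0.8))
```

As the tracker reports a new head position each frame, the two targets move with it, which is what lets the display hold the stereo image without the viewer keeping still.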
Steerable Multiview Display: The same optical system used in the 3-D system, Wedge behind an LCD, is used to steer two separate images to two separate people rather than two separate eyes, as in the 3-D case. Using a Kinect head tracker, we find and track multiple viewers and send each viewer his or her own unique image. Therefore, two people can be looking at the same display but see two completely different images. If the two users switch positions, each viewer’s image is continuously steered toward him or her.
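The key property described above is that the viewer-to-image assignment is independent of where each viewer stands. A minimal sketch of that bookkeeping, with hypothetical viewer IDs and image names standing in for the tracker output:

```python
# Minimal sketch of the multiview steering idea: each tracked viewer keeps
# their own image no matter where they move. Viewer IDs, positions, and
# image names are hypothetical placeholders for the Kinect tracker output.

def steer_images(viewers, assignments):
    """viewers: dict of viewer_id -> (x, z) head position from the tracker
    assignments: dict of viewer_id -> image name
    Returns (position, image) pairs to hand to the display optics."""
    return [(pos, assignments[vid]) for vid, pos in viewers.items()]

# Two viewers swap places; each still receives their own image.
before = steer_images({"alice": (-0.5, 1.6), "bob": (0.5, 1.6)},
                      {"alice": "img_A", "bob": "img_B"})
after = steer_images({"alice": (0.5, 1.6), "bob": (-0.5, 1.6)},
                     {"alice": "img_A", "bob": "img_B"})
```

Because the image follows the tracked identity rather than a fixed viewing zone, swapping positions does not swap the images.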
Retro-Reflective Air-Gesture Display: Sometimes, it’s better to control with gestures than buttons. Using a retro-reflective screen and a camera close to the projector makes all objects cast a shadow, regardless of their color. This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.
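The reason this setup makes vision easy is that any hand above the retro-reflective surface appears as a dark region against a bright screen, regardless of skin or object colour, so simple thresholding finds it. A toy sketch of that idea, using a made-up 2D grid of brightness values in place of a real camera frame:

```python
# Toy sketch of shadow-based gesture sensing: threshold the frame to find
# dark (shadowed) pixels and report their centroid as the hand position.
# The frame is a made-up brightness grid (0 = dark shadow, 255 = bright).

def find_shadow_centroid(frame, threshold=64):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no shadow is present."""
    dark = [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v < threshold]
    if not dark:
        return None
    n = len(dark)
    return (sum(r for r, _ in dark) / n, sum(c for _, c in dark) / n)

# A 4x4 frame with a 2x2 shadow in the lower-right corner:
frame = [
    [255, 255, 255, 255],
    [255, 255, 255, 255],
    [255, 255,  10,  10],
    [255, 255,  10,  10],
]
centroid = find_shadow_centroid(frame)  # → (2.5, 2.5)
```

Tracking that centroid over successive frames is enough to drive simple navigation gestures such as swipes.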
A display that can see: Using the flat Wedge optic in camera mode behind a special, transparent organic-light-emitting-diode display, we can capture images that are both on and above the display. This enables touch and above-screen gesture interfaces, as well as telepresence applications.
Kinect-Based Virtual Window
Using Kinect, we track a user’s position relative to a 3-D display to create the illusion of looking through a window. This view-dependent rendering technique is used in both the Wedge 3-D and multiview demos, but the effect is much more apparent in this demo. The user should quickly realize the need for a multiview display, as the illusion holds for only one user on a conventional display. This technique, along with the Wedge 3-D output and 3-D input techniques we are developing at Microsoft, provides the basic building blocks for the ultimate telepresence display. This Magic Window is a bi-directional light-field interactive display that gives multiple users in a telepresence session the illusion that they are interacting and talking with each other through a simple glass window.
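The standard way to get this "window" effect is an asymmetric (off-axis) view frustum recomputed from the tracked head position each frame. A minimal sketch under our own assumptions, not MSR's code: the screen lies in the z = 0 plane, centred at the origin, and the eye sits at z > 0.

```python
# Minimal sketch of view-dependent rendering: derive asymmetric frustum
# bounds from the tracked head position so the screen behaves like a
# physical window. Screen assumed in the z = 0 plane, centred at origin.

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Return (left, right, bottom, top) frustum bounds at the near plane,
    in the form a call like glFrustum expects, for an eye at (ex, ey, ez)
    with ez > 0 (metres)."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen edges onto the near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Head centred in front of a 0.4 m x 0.3 m display, 0.6 m away:
bounds = off_axis_frustum((0.0, 0.0, 0.6), 0.4, 0.3, 0.3)
# As the head moves right (+x), the frustum shifts left, shearing the
# view exactly as a real window would.
```

Because the frustum depends on one head position, only one viewer sees a correct image on a conventional display, which is why the text above notes that this demo motivates the multiview display.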
