Microsoft is almost certain to announce the long-awaited HoloLens 2 at Mobile World Congress 2019 on 24th February, and the big question on everyone’s mind will be how the company has solved the field of view issue which has dogged the headset’s reputation since its 2016 launch.
The field of view of the HoloLens V1 is often described as looking at the virtual world through a letterbox slot, and Microsoft is expected to improve this dramatically in version 2.
In a lengthy talk at the ZEISS Forum in Oberkochen in December 2018, Bernard Kress, Partner Optical Architect at Microsoft / HoloLens, may have given the game away.
He revealed that the key to a good Mixed Reality headset is fast, accurate eye tracking, as illustrated in his diagram below.
This idea is particularly significant since HoloLens V1 does not have any form of eye tracking, seemingly making it the number one priority for Microsoft to solve in the next generation.
Kress goes on to explain that eye tracking significantly reduces the cost of generating high-resolution, large-field-of-view holograms: no matter how large the virtual field of view, if full detail is only rendered where your fovea is looking, the amount of computation and rendering required is capped at a very manageable level.
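A rough back-of-the-envelope calculation shows why this matters. The sketch below is illustrative only (the function name, field of view, pixels-per-degree and resolution-scale figures are assumptions, not HoloLens specifications): it compares the pixels shaded per frame when everything is rendered at full resolution against a foveated scheme where only a small disc around the gaze point gets full detail.

```python
import math

def foveated_pixel_budget(fov_deg, foveal_deg=5.0, display_ppd=60,
                          peripheral_scale=0.25):
    """Estimate pixels shaded per frame, naive vs. foveated.

    fov_deg          -- field of view in degrees (square display, for simplicity)
    foveal_deg       -- angular diameter of the full-resolution foveal disc
    display_ppd      -- pixels per degree at full resolution
    peripheral_scale -- linear resolution scale outside the fovea
    All values are illustrative assumptions, not HoloLens specifications.
    """
    naive = (fov_deg * display_ppd) ** 2                      # everything at full res
    foveal = math.pi * (foveal_deg / 2 * display_ppd) ** 2    # full-res foveal disc
    peripheral = (naive - foveal) * peripheral_scale ** 2     # the rest, downscaled
    return naive, foveal + peripheral

naive, foveated = foveated_pixel_budget(fov_deg=100)
print(f"naive: {naive/1e6:.1f} MP, foveated: {foveated/1e6:.1f} MP")
# naive: 36.0 MP, foveated: 2.3 MP
```

With these assumed numbers, a 100-degree field of view needs roughly 36 megapixels per frame rendered naively, but only about 2.3 megapixels when foveated; crucially, the expensive full-resolution region stays the same size however wide the display gets, which is the capping effect Kress describes.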
Kress does not explain what system of eye tracking Microsoft will be using. The industry standard is small eye-facing cameras, but Microsoft has patented such exotic ideas as inferring gaze direction from the capacitive field of the cornea, among other approaches, so it remains to be seen what Microsoft will surprise us with.
The talk also reveals some other details, such as that Microsoft sees the HoloLens as an enterprise-only device and does not expect to enter the consumer market (likely leaving this to Apple), and that Microsoft expects to make its money renting cloud rendering to companies rather than on the hardware itself.
The 90-minute talk can be seen embedded below: