Facebook takes on Microsoft Mesh with the new Presence Platform



Microsoft Mesh app for HoloLens 2

Facebook (now known as Meta) today announced the new Presence Platform, which will allow developers to create mixed reality experiences. Presence Platform will compete with Microsoft’s Mesh platform, which was announced earlier this year. Presence Platform capabilities include Passthrough, Spatial Anchors, and Scene Understanding.

Insight SDK

Today, we’re announcing Insight SDK, enabling you to build mixed reality experiences that create a realistic sense of presence.

Earlier this year, we introduced Passthrough API Experimental, enabling you to build experiences that blend virtual content with the physical world. Today, we’re announcing general availability of Passthrough in our next release, which means you’ll be able to build, test, and ship experiences with Passthrough capabilities.

We’re also announcing Spatial Anchors, world-locked frames of reference that will enable you to place virtual content in a physical space that can be persisted across sessions. With Spatial Anchors Experimental, available soon, you will be able to create Spatial Anchors at specific 6DoF poses, track the 6DoF pose relative to the headset, persist Spatial Anchors on-device, and retrieve a list of currently tracked Spatial Anchors.
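The Spatial Anchors workflow described above — creating an anchor at a 6DoF pose, persisting it on-device, and retrieving the currently tracked anchors — can be sketched in plain Python. This is a conceptual illustration only; the class and method names (`Pose6DoF`, `SpatialAnchorStore`, `tracked_anchors`) are invented for this sketch and are not the actual Oculus/Meta SDK API.

```python
from dataclasses import dataclass
import uuid

@dataclass(frozen=True)
class Pose6DoF:
    """A 6DoF pose: position (x, y, z) plus orientation as a quaternion (w, x, y, z)."""
    position: tuple
    rotation: tuple

@dataclass
class SpatialAnchor:
    uuid: str
    pose: Pose6DoF           # world-locked frame of reference
    persisted: bool = False  # whether it survives across sessions

class SpatialAnchorStore:
    """In-memory stand-in for on-device anchor storage (hypothetical)."""
    def __init__(self):
        self._anchors = {}

    def create_anchor(self, pose: Pose6DoF) -> SpatialAnchor:
        """Create an anchor at a specific 6DoF pose."""
        anchor = SpatialAnchor(uuid=str(uuid.uuid4()), pose=pose)
        self._anchors[anchor.uuid] = anchor
        return anchor

    def persist(self, anchor_uuid: str) -> None:
        """Mark an anchor as persisted on-device."""
        self._anchors[anchor_uuid].persisted = True

    def tracked_anchors(self) -> list:
        """Return the list of currently tracked anchors."""
        return list(self._anchors.values())
```

An app would place virtual content relative to an anchor's pose each frame, so the content stays world-locked even as the headset moves.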

We’re also announcing a new Scene Understanding capability. Together with Passthrough and Spatial Anchors, Scene Understanding allows you to quickly build complex and scene-aware experiences that have rich interactions with the user’s environment. As part of Scene Understanding, Scene Model provides a geometric and semantic representation of the user’s space, so you can build room-scale mixed reality experiences. Scene Model is a single, comprehensive, up-to-date representation of the physical world that is indexable and queryable. For example, you can attach a virtual screen to the user’s wall or have a virtual character navigate on the floor with realistic occlusion. Further, you can bring real-life, physical objects into VR. To create this Scene Model, we provide a system-guided Scene Capture flow that lets users walk around and capture their scene. We’re excited to make Scene Understanding capabilities available as an experimental capability early next year.
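The "indexable and queryable" Scene Model described above can be pictured as a set of semantically labeled surfaces that an app queries when deciding where to place content — for example, finding the largest wall to host a virtual screen. The classes and methods below (`SceneEntity`, `SceneModel.query`) are invented for illustration and are not the actual SDK.

```python
from dataclasses import dataclass

@dataclass
class SceneEntity:
    """One captured surface with a semantic label, e.g. "wall" or "floor"."""
    label: str
    bounds: tuple  # (width, height) of the surface in meters

class SceneModel:
    """Hypothetical queryable representation of a captured room."""
    def __init__(self, entities):
        self._entities = list(entities)

    def query(self, label: str) -> list:
        """Index the scene by semantic label."""
        return [e for e in self._entities if e.label == label]

    def largest(self, label: str):
        """Pick the biggest matching surface, e.g. to attach a virtual screen."""
        return max(self.query(label),
                   key=lambda e: e.bounds[0] * e.bounds[1],
                   default=None)
```

In the real flow, the entities would come from the system-guided Scene Capture walk-through rather than being constructed by hand.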

With the new Passthrough, Spatial Anchors, and Scene Understanding capabilities in Insight SDK, you’ll be able to build mixed reality experiences that blend virtual content with the physical world, creating new possibilities for social connection, entertainment, productivity, and more.

Protecting the privacy of people’s physical space is important to us. We designed Passthrough, Spatial Anchors, and Scene Understanding so that developers can create experiences that blend the physical and virtual surroundings without needing access to the raw images or videos from your Quest sensors.

Interaction SDK

With Interaction SDK, we’re making it easier for you to integrate hand- and controller-centric interactions. The Unity library, available early next year, will come with a set of ready-to-use, robust interaction components like grab, poke, target, and select. All components can be used together, independently, or even integrated into other interaction frameworks. Interaction SDK solves many of the tough interaction challenges linked to computer-vision-based Hand Tracking, offers standardized interaction patterns, and prevents regressions as the technology evolves. Last but not least, it provides tooling to help you build your own custom gestures as well.
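The composability described above — independent interaction components that can all target the same object — can be sketched as follows. The real library is a Unity/C# package; this Python sketch and its names (`Interactable`, `GrabInteractor`, `PokeInteractor`) are invented to illustrate the pattern only.

```python
class Interactable:
    """A virtual object that records interaction events from any component."""
    def __init__(self, name: str):
        self.name = name
        self.events = []

    def handle(self, event: str) -> None:
        self.events.append(event)

class GrabInteractor:
    """Grab component: usable alone or alongside other interactors."""
    def grab(self, target: Interactable) -> None:
        target.handle("grab")

class PokeInteractor:
    """Poke component: same target, independent implementation."""
    def poke(self, target: Interactable) -> None:
        target.handle("poke")
```

Because each component only talks to the shared `Interactable` surface, components can be mixed, used independently, or swapped into another interaction framework — the decoupling the article attributes to the SDK.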

The data protections we’ve always offered around Hand Tracking apply here. The images and estimated points specific to your hands are deleted after processing and are not stored on our servers.

Tracked Keyboard SDK

Last year, we announced a Tracked Keyboard capability for developers. We’re hard at work to launch a Tracked Keyboard SDK, and we’re on track to release it early next year as part of Presence Platform.

Voice SDK Experimental

We’re also announcing Voice SDK Experimental, available in our next release so you can start to build and experiment. Voice SDK is a set of natural language capabilities that let you create hands-free navigation and new voice-driven gameplay. With Voice SDK, you can create Voice Navigation & Search or enable Voice FAQ to allow users to ask for help or a reminder. We’re also enabling new Voice-Driven Gameplay—like winning a battle with a voice-activated magic spell or talking with a character or avatar. Voice SDK is powered by Facebook’s Wit.ai natural language platform, and it is free to sign up and get started.
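The voice-driven gameplay idea above boils down to mapping a spoken utterance to an intent (cast a spell, navigate, ask for help). A real app would send the transcript to Wit.ai and get a structured intent back; the toy keyword matcher and intent names below are invented stand-ins for that step.

```python
# Hypothetical intent table; a real Wit.ai app trains these server-side.
INTENTS = {
    "cast_spell": ["fireball", "abracadabra"],
    "navigate":   ["go to", "open", "take me to"],
    "help":       ["how do i", "what is", "help"],
}

def match_intent(utterance: str):
    """Return the first intent whose keyword appears in the utterance, else None."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None
```

So "Cast fireball!" resolves to the spell intent, and the game layer decides what winning the battle looks like.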

Source: Facebook
