How we're building a common coordinate system for mixed reality

What if you could actually touch a hologram? And what if you didn’t even need a HoloLens to interact with one? Our team has been thinking about how we could extend the capabilities of the HoloLens and further break down the barriers of mixed reality. We want to recreate, inside the HoloLens, that magical Kinect moment when you realize the digital world sees and responds to you.

We paired a Kinect with a HoloLens and mapped the Kinect’s understanding of people and space onto the HoloLens anchoring system. The key was creating a common coordinate space within a physical room. We transform the Kinect’s output into our common coordinate system and broadcast that data on the network so that multiple devices can subscribe to it. Each subscribed HoloLens shares a set of anchors tied to the room’s coordinate system, so by using the anchor-sharing service, multiple HoloLens devices can share our common coordinate system.
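As a rough sketch of the transform step, the core idea is applying a calibrated rigid-body transform that takes a point from Kinect camera space into the shared room space. The matrix values, point, and function name below are hypothetical illustrations, not our production code; in practice the transform comes from calibrating the Kinect against the room’s anchors.

```python
import numpy as np

# Hypothetical calibration result: a 4x4 rigid-body transform (rotation +
# translation) taking Kinect camera space into the shared room coordinate
# system. The values below are illustrative only.
KINECT_TO_ROOM = np.array([
    [0.0,  0.0, 1.0, 1.5],   # room x
    [0.0,  1.0, 0.0, 0.0],   # room y (up)
    [-1.0, 0.0, 0.0, 2.0],   # room z
    [0.0,  0.0, 0.0, 1.0],
])

def kinect_to_room(point_xyz):
    """Transform a 3D point from Kinect camera space to room space."""
    # Promote to homogeneous coordinates so the translation applies too.
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)
    return (KINECT_TO_ROOM @ p)[:3]

# A tracked joint 2 m in front of the Kinect, 0.5 m up, maps into room space:
room_point = kinect_to_room([0.0, 0.5, 2.0])  # → [3.5, 0.5, 2.0]
```

Once every device agrees on transforms like this one, a skeleton joint the Kinect sees can be published in room coordinates and rendered in place by any subscribed HoloLens.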

When we first saw the Kinect’s data within the HoloLens, we were pleasantly surprised by the dimensionality the HoloLens is able to add and the sense of presence the Kinect’s data has inside it. Though Microsoft demoed Holoportation, this capability hasn’t been available as part of the initial SDK. Connecting the Kinect to the HoloLens lets us go beyond the SDK and explore greater freedom of interaction and gesture within the digital world the HoloLens shares.

With our common coordinate system, we look forward to anchoring other digital systems beyond the Kinect to the HoloLens. In the world of mixed reality, we don’t see a distinction between physical objects and the digital data augmented onto them. All systems can share an awareness of physical objects and their digital components by agreeing on a common coordinate system and the paired data.

Alan Shimoide is a Technology Director for the Razorfish Emerging Experiences team, based out of our San Francisco office.