Creating happy accidents
The Microsoft HoloLens hardware
This week I’ve been experimenting with the capabilities of the graphics processing unit (GPU) on the HoloLens. The GPU is integrated into the device’s Intel Atom processor, which works alongside the custom holographic processing unit (HPU). It has 114MB of dedicated video memory and 980MB that can be shared between the GPU and CPU. The HoloLens supports DirectX 11, which provides useful features like compute shaders, geometry instancing, and vertex ID semantics.
I found some examples online of drawing procedural meshes with Unity, and set out to have some fun. I started with an abstract looping mesh created by our motion designer, Fyn Ng. On its own it looked like this. By the way, all the coloring is done in the vertex shader.
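To give a feel for what "coloring in the vertex shader" means: rather than storing colors in the mesh, each vertex's color is computed from its position. Here's a rough CPU-side sketch in Python (purely illustrative — the real thing is HLSL running per-vertex on the GPU, and the colors and mapping are made up for the example):

```python
import math

def vertex_color(position):
    """Map a vertex position to an RGB color the way a vertex
    shader might: here, height (y) drives a simple two-color blend.
    Position is an (x, y, z) tuple with y assumed in [0, 1]."""
    x, y, z = position
    t = max(0.0, min(1.0, y))          # clamp, like saturate() in HLSL
    warm = (1.0, 0.4, 0.1)             # color at the top (arbitrary choice)
    cool = (0.1, 0.3, 1.0)             # color at the bottom
    # Linear interpolation per channel, like lerp() in HLSL.
    return tuple(c0 + (c1 - c0) * t for c0, c1 in zip(cool, warm))

# A vertex at the bottom gets the cool color, one at the top the warm one.
bottom = vertex_color((0.0, 0.0, 0.0))
top = vertex_color((0.0, 1.0, 0.0))
```

Because the color is derived on the fly, the mesh itself never needs color data uploaded at all.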
I started out with simple rotations and growing the mesh.
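Rotating and growing a mesh in a vertex shader boils down to a small per-vertex transform driven by time. A CPU-side Python sketch of the idea (the constants and function name are my own, for illustration — on the device this runs per-vertex in HLSL from a single time uniform):

```python
import math

def animate_vertex(position, time):
    """Rotate a vertex about the Y axis and 'grow' it over time —
    the kind of per-vertex animation a vertex shader can compute
    each frame from nothing but a time value."""
    x, y, z = position
    angle = time * 0.5                      # radians; spin speed is arbitrary
    c, s = math.cos(angle), math.sin(angle)
    rx = c * x + s * z                      # standard Y-axis rotation
    rz = -s * x + c * z
    scale = 1.0 + 0.25 * math.sin(time)     # pulsing growth factor
    return (rx * scale, y * scale, rz * scale)
```

At `time == 0` this is the identity; as time advances, every vertex spins and pulses without the mesh data ever changing.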
At this point, I switched up my rendering technique. At first I couldn’t get the mesh to render properly (I was indexing my vertex buffer wrong) but as I’ve learned, interesting stuff comes out when you mess up graphics code.
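The indexing bug mentioned above is easy to hit: a mesh's triangle list is just a flat array of vertex indices, three per triangle, and if the offsets or winding are wrong the GPU happily draws garbage. A minimal Python sketch (illustrative — in Unity this would be the `int[]` assigned to `Mesh.triangles`) of building correct indices for a grid of quads:

```python
def grid_triangle_indices(cols, rows):
    """Build a triangle index list for a (cols x rows) grid of quads
    whose vertices are laid out row-major, (cols + 1) per row.
    Each quad becomes two triangles with consistent winding."""
    indices = []
    stride = cols + 1                     # vertices per row
    for r in range(rows):
        for c in range(cols):
            i = r * stride + c            # the quad's lower-left vertex
            # triangle 1: lower-left, upper-left, upper-right
            indices += [i, i + stride, i + stride + 1]
            # triangle 2: lower-left, upper-right, lower-right
            indices += [i, i + stride + 1, i + 1]
    return indices
```

An off-by-one in `stride` or a swapped pair here is exactly the kind of mistake that produces those accidental, tangled forms.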
Here I’m using geometry instancing to create even more complex forms. Some of these were pretty overwhelming to have in view while wearing the device.
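Geometry instancing draws the same mesh many times in one draw call, with each copy placed by a transform derived from its instance index (`SV_InstanceID` in HLSL). A hypothetical Python sketch of that derivation (the ring layout and constants are invented for illustration):

```python
import math

def instance_transform(instance_id, count, radius=2.0):
    """Derive a per-instance placement from nothing but the instance
    index — the same trick a shader can pull with SV_InstanceID,
    so no per-instance data needs uploading. Here: a ring layout."""
    angle = 2.0 * math.pi * instance_id / count
    offset = (radius * math.cos(angle), 0.0, radius * math.sin(angle))
    spin = angle * 3.0          # each copy also gets its own rotation
    return offset, spin

# Place 8 copies of the same mesh around a circle.
placements = [instance_transform(i, 8) for i in range(8)]
```

Because the placement is a pure function of the index, adding hundreds of copies costs almost no extra bandwidth — which is how these dense, overlapping forms stay cheap to render.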
Testing out some feedback ideas with geometry instancing. Reminds me of screensavers from the 90s. Retro is in, right?
Finally got the mesh rendering properly and honed my vertex transformations a bit further.
It’s great that HoloLens has such good support for DirectX shaders. Everything you saw here was rendered entirely on the graphics card; the animations are all done in a vertex shader pass. All the mesh data is copied to the GPU when the app starts, and only a small amount of data is sent each frame, so these animations run quite quickly. These videos were captured with Unity’s quality settings on ‘Fantastic’, but on ‘Fastest’ I can easily achieve 60 fps on the device.
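To make the bandwidth point concrete, here's a back-of-the-envelope sketch with illustrative numbers (the vertex layout and uniform-block size are assumptions, not measurements from the app):

```python
def per_frame_bytes(vertex_count, floats_per_vertex=8, uniform_floats=20):
    """Rough cost comparison, assuming 4-byte floats: re-uploading the
    whole mesh every frame (position + normal + UV, 8 floats per vertex)
    vs. sending only a small uniform block (say, a 4x4 matrix plus a few
    animation parameters) and letting the vertex shader do the work."""
    mesh_bytes = vertex_count * floats_per_vertex * 4
    uniform_bytes = uniform_floats * 4
    return mesh_bytes, uniform_bytes

mesh, uniforms = per_frame_bytes(100_000)   # a hypothetical 100k-vertex mesh
```

Under these assumptions, animating on the CPU and re-uploading would push megabytes per frame, while the shader approach sends well under a hundred bytes — which is why the animations stay fast.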
It’s absolutely incredible to view these abstract forms on the device. I get a bit of cognitive dissonance because the forms are so different from anything real, yet I see them as real as the tables and chairs in front of me. I think this medium will make for some very surreal art experiences. Vision is our primary sense for understanding the world around us, and when what we see doesn’t match up to what we know, we very quickly adjust our concept of reality.
Nate Turley (@turleyn) is a Senior Software Engineer for the Razorfish Emerging Experiences team, based out of our Atlanta office.