DIY Interactive Holographic Device

Realtime, Real Life, Visual Effects

Digital hallucinations will soon be on sale to consumers. Some pretty nerdy headsets, like the HoloLens, will start to blur the line between real reality and virtual reality. Users will see computer-generated imagery (CGI) interacting with their real world as if those graphics were the real world. It is difficult to think of a more significant recent change to how users interact with computers.

As a design and development company, we at HTMLFusion are excited to have begun our own explorations into how users might interact with augmented reality (AR) interfaces. However, quality hardware of this type is not yet available on the general market, so we built our own platform to experiment in this space. We call it the Holo UI.

This is not a new idea. Before hardware was small enough, Apple experimented with their own fake iPhone – a touch screen plugged into an ugly computer. Similarly, the Holo UI provides us with a software test bed for hardware not yet built.

AR Test Bed

The Holo UI has three components: a semi-transparent screen, a stereo projector, and a high-speed infrared camera for tracking the position of the viewer's head. The goal of the Holo UI is to have computer graphics integrate seamlessly with the real world. When this works well, we can make virtual objects appear to hit real objects, fall behind real objects, or rest on real surfaces. These interactions look natural when they work, but achieving the effect is not simple. Our goal is to make it convincing enough that we can move on to developing applications.

Holo UI Setup Design

Head Tracking

In a previous career I worked in the "Data Integration" department of Digital Domain, where I spent my days "match moving." Match moving is the painstaking process of animating a virtual camera to match the movements of a live-action camera shot. In doing so, I learned that the human eye is so sensitive that even a pixel of drift or wobble makes the CG appear to float and can ruin the illusion. Just as visual effects technicians work to track the camera, the Holo UI must track the user's head. This is done with a high-speed infrared camera called the TrackIR and some great software called linuxtrack.
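
That sensitivity to wobble means the raw tracker samples usually want a little smoothing before they drive the render. The helper below is a generic sketch of one common approach (exponential smoothing), not the linuxtrack API or our actual filter; the smoothing constant is made up.

    import numpy as np

    def smooth_head_position(samples, alpha=0.3):
        """Exponentially smooth raw head-position samples (a sequence of 3-vectors)
        from the tracker to damp frame-to-frame wobble. alpha is a placeholder:
        too much smoothing introduces visible lag, too little leaves jitter."""
        smoothed = np.asarray(samples[0], dtype=float)
        for s in samples[1:]:
            smoothed = alpha * np.asarray(s, dtype=float) + (1 - alpha) * smoothed
        return smoothed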

As the viewer moves, the graphics are re-rendered from the user's perspective. The same off-axis rendering technique is used in a number of other projects, such as the CAVE, and in this awesome WiiMote hack by Johnny Chung Lee from a few years back. The technique traces a line from the viewer's eyes to the virtual object and paints a pixel where that line intersects the screen. Done well, the effect makes it appear as if you are looking through a window.
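
For the curious, here is a minimal sketch of that off-axis (generalized) perspective projection, in the spirit of Robert Kooima's well-known formulation. The screen corners, eye position, and clipping distances are made-up placeholders rather than our actual measurements, and this is not a drop-in piece of the Holo UI code.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def off_axis_projection(pa, pb, pc, pe, near, far):
        """Generalized perspective projection (after Kooima).

        pa, pb, pc -- screen corners: lower-left, lower-right, upper-left
        pe         -- tracked eye position, in the same coordinates as the screen
        Returns a 4x4 matrix mapping world space to clip space so the screen
        behaves like a window onto the virtual scene.
        """
        # Orthonormal basis of the screen plane.
        vr = normalize(pb - pa)             # right
        vu = normalize(pc - pa)             # up
        vn = normalize(np.cross(vr, vu))    # normal, pointing toward the viewer

        # Vectors from the eye to the screen corners.
        va, vb, vc = pa - pe, pb - pe, pc - pe

        d = -np.dot(va, vn)                 # eye-to-screen distance
        l = np.dot(vr, va) * near / d       # asymmetric frustum extents
        r = np.dot(vr, vb) * near / d
        b = np.dot(vu, va) * near / d
        t = np.dot(vu, vc) * near / d

        # Standard OpenGL-style frustum matrix.
        P = np.array([
            [2*near/(r-l), 0,            (r+l)/(r-l),            0],
            [0,            2*near/(t-b), (t+b)/(t-b),            0],
            [0,            0,            -(far+near)/(far-near), -2*far*near/(far-near)],
            [0,            0,            -1,                     0],
        ])

        # Rotate the screen basis onto the xy plane, then move the eye to the origin.
        M = np.identity(4)
        M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
        T = np.identity(4)
        T[:3, 3] = -pe

        return P @ M @ T

    # Example with made-up numbers: a 1 m x 0.6 m screen, eye about 0.7 m in front.
    pa = np.array([-0.5, -0.3, 0.0])
    pb = np.array([ 0.5, -0.3, 0.0])
    pc = np.array([-0.5,  0.3, 0.0])
    eye = np.array([0.1, 0.05, 0.7])   # updated every frame from head tracking
    mvp = off_axis_projection(pa, pb, pc, eye, near=0.1, far=100.0)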

Pixel Imperfect

A major challenge for us has been calibration. The size of our screen, its orientation, and the accuracy with which we measure the viewer's position relative to the screen are all critical to drawing virtual objects realistically. Our tracking camera sits on top of the screen, centered horizontally. The head-tracking data is relative to the camera, so the offset between the camera and the center of the screen needs to be accounted for. It is not just a positional offset: the tracking camera is leveled by hand, and any tilt angles the tracking data up or down, or to the left or right. I tried a number of calibration techniques but settled on a fairly manual approach: fudge the offsets until the effect feels good enough.
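
To illustrate what that fudging looks like in code, here is a rough sketch of turning a camera-relative head position into screen-relative coordinates. The offset, tilt, and pan values are hypothetical stand-ins for whatever hand calibration converges on, not our real numbers.

    import numpy as np

    # Hypothetical hand-tuned calibration values (placeholders):
    # where the tracking camera sits relative to the screen center, and how far
    # off level it ended up after being eyeballed into place.
    CAMERA_OFFSET = np.array([0.0, 0.35, 0.02])   # metres: centered, above, slightly forward
    TILT_DEG = -4.0                               # camera pitched down a few degrees
    PAN_DEG = 1.5                                 # and twisted slightly to one side

    def rot_x(deg):
        a = np.radians(deg)
        return np.array([[1, 0, 0],
                         [0, np.cos(a), -np.sin(a)],
                         [0, np.sin(a),  np.cos(a)]])

    def rot_y(deg):
        a = np.radians(deg)
        return np.array([[ np.cos(a), 0, np.sin(a)],
                         [ 0,         1, 0        ],
                         [-np.sin(a), 0, np.cos(a)]])

    def head_in_screen_space(head_in_camera_space):
        """Undo the camera's tilt and pan, then shift from the camera origin to
        the screen center. The result feeds straight into the off-axis projection."""
        corrected = rot_y(PAN_DEG) @ rot_x(TILT_DEG) @ head_in_camera_space
        return corrected + CAMERA_OFFSET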

Boom Boom Bam

Once the rig was up and running we started to build some demos. First, we made some colorful falling shapes: we created a rough model of the room and let the shapes fall, bounce, and roll behind things. Then we played with occlusion, using black proxy geometry to stand in for a cup, and built a game that let users toss a virtual ball into the cup.
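
The black proxy geometry works because the projector can only add light: anything rendered pure black simply disappears from the screen, while its depth values still hide virtual objects that pass behind the real cup. The sketch below fakes that per-pixel logic with plain depth buffers rather than a real renderer, just to show the idea.

    import numpy as np

    def composite(proxy_depth, virtual_depth, virtual_color):
        """Per-pixel occlusion test between a black proxy and a virtual object.

        proxy_depth   -- HxW depth of the black stand-in for the real object (inf where absent)
        virtual_depth -- HxW depth of the virtual object (inf where absent)
        virtual_color -- HxWx3 color of the virtual object

        On a projected, additive display, black pixels emit nothing, so wherever
        the proxy wins the depth test the real object shows through untouched.
        """
        out = np.zeros_like(virtual_color)
        virtual_visible = virtual_depth < proxy_depth      # virtual object is in front
        out[virtual_visible] = virtual_color[virtual_visible]
        return out                                         # everything else stays black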

Next, we turned our focus to some basic object augmentation. The user is presented with a small pointer driven by the orientation of their head. When they point at an object, some metadata about it is displayed.
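
Under the hood the pointer is just a ray cast from the tracked head. Below is a sketch of the kind of selection test involved; the object list, angular threshold, and metadata are invented for illustration and are not part of the Holo UI code.

    import numpy as np

    # Hypothetical annotated objects: position in screen space plus the metadata
    # we pop up when the pointer lands on them.
    OBJECTS = [
        {"name": "plant",  "position": np.array([ 0.30, -0.10, -0.40]), "info": "Ficus, watered Tuesday"},
        {"name": "camera", "position": np.array([-0.25,  0.05, -0.60]), "info": "TrackIR, high-speed IR"},
    ]

    def picked_object(head_pos, head_dir, max_angle_deg=5.0):
        """Return the object closest to the head-orientation ray, if any falls
        within a small angular threshold. head_dir is assumed to be a unit vector."""
        best, best_angle = None, np.radians(max_angle_deg)
        for obj in OBJECTS:
            to_obj = obj["position"] - head_pos
            to_obj /= np.linalg.norm(to_obj)
            angle = np.arccos(np.clip(np.dot(head_dir, to_obj), -1.0, 1.0))
            if angle < best_angle:
                best, best_angle = obj, angle
        return best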

Conclusion

These early experiments, while modest, have already raised exciting technical and user-oriented questions:

  1. When 'selecting' an object, how large a click area should the user have? What happens if two objects' click areas intersect? Should the click area adapt as the user's position changes?

  2. Similarly, how can we build UIs that are responsive to physical real estate? For example, if an object is in a cluttered environment, can we responsively shrink or remove elements? What is the language and toolset that lets us do this easily?

  3. Can we use natural or realistic-looking materials to blend the CG UI with the real world, or will it just look gaudy?

It is early days, and we are learning what questions to ask and making primitive attempts at answering them. We look forward to upgrading our hardware. Microsoft, send us a HoloLens! Magic Leap, send us whatever you have! We have worked with rougher hardware. We are ready to get started.