Augmented Reality: Leap Motion and gesture-tracking design


(…)

For VR experiences, we recommend a 1:1 scale so that virtual hands and objects look as realistic and natural as possible. With augmented reality, however, the scale must be adjusted so that the images from the controller line up with what human eyes would see. Appropriate scaling can be accomplished by moving the virtual cameras in the scene to their correct positions, which effectively increases the scale of all virtual objects.

This issue arises because the Leap Motion Controller's sensors are 40 mm apart, while average human eyes are 64 mm apart. Future Leap Motion modules designed for virtual reality will have an interpupillary distance of 64 mm, eliminating the scaling problem.
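
As a back-of-the-envelope check, the compensation is simply the ratio of the two baselines: 64 mm / 40 mm = 1.6. The plain C++ sketch below shows only this arithmetic; the variable names are hypothetical and it is not Leap SDK code.

```cpp
#include <cstdio>

int main() {
    // Leap Motion Controller camera separation vs. average human IPD.
    const float sensorBaselineMm = 40.0f;
    const float humanIpdMm       = 64.0f;

    // Scaling all virtual content by this factor (equivalently, widening
    // the virtual eye cameras to a 64 mm baseline) lines virtual objects
    // up with the passthrough imagery.
    const float worldScale = humanIpdMm / sensorBaselineMm;

    std::printf("Scale virtual content by %.1fx\n", worldScale);  // 1.6x
    return 0;
}
```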

Positioning the Video Passthrough

When using the Image API, it’s important to remember that the video data doesn’t “occupy” the 3D scene, but represents a stream from outside it. Since the images represent the entire view from a particular vantage point, rather than any particular object, they should not undergo the world transformations that other 3D objects do. Instead, the image must stay locked to your point of view, moving with your head rather than with the world, even as its contents change to mirror what you’d see in real life.

To implement video passthrough, skip the modelview matrix transform (by setting the modelview to the identity matrix) and use the projection transform only. Under this projection matrix, it’s sufficient to define a rectangle with the coordinates (-4, -4, -1), (4, -4, -1), (-4, 4, -1), and (4, 4, -1), which can be in any units. Then, texture it with the fragment shader provided in our Images documentation.
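
As a concrete sketch of that setup, the fixed-function OpenGL snippet below draws the head-locked quad with the modelview set to identity. It assumes a current GL context and a passthroughTexture already filled with the latest camera frame; both are placeholders, not Leap SDK calls, and the fragment shader from the Images documentation (which also corrects lens distortion) is omitted here.

```cpp
#include <GL/gl.h>

// Draw the passthrough quad described above. The modelview matrix is
// set to identity so the quad ignores head and world motion, leaving
// only the projection transform applied.
void drawPassthroughQuad(GLuint passthroughTexture) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, passthroughTexture);

    // The rectangle from the text, reordered into quad winding order;
    // the units are arbitrary because only the projection applies.
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-4.0f, -4.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 4.0f, -4.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 4.0f,  4.0f, -1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-4.0f,  4.0f, -1.0f);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glPopMatrix();
}
```
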
Depth Cues

Whether you’re using a standard monitor or a VR headset, the depth of nearby objects can be difficult to judge. This is because, in the real world, your eyes dynamically assess the depth of nearby objects through accommodation, flexing to change the shape of their lenses depending on how near or far the object is in space. With headsets like the Oculus Rift, however, the user’s eye lenses remain focused at infinity.

When designing VR experiences, you can use 3D cinematic tricks to create and reinforce a sense of depth:
...
- objects in the distance lose contrast
- distant objects appear fuzzy and blue/gray, or even transparent (see the sketch after this list)
- nearby objects appear sharp, with full color and contrast
- shadows cast by the hand onto objects, especially drop-shadows
- reflections of nearby objects on the hand
- sound can create a sense of depth
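
As one concrete illustration of the haze cues above, the C++ sketch below blends a surface color toward a desaturated blue-gray as distance grows. The function name, falloff range, and linear blend are illustrative assumptions, not a prescribed implementation.

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Blend a surface color toward a blue-gray haze as distance grows, so
// distant objects lose contrast and shift toward blue/gray while nearby
// objects keep full color. Distances are in arbitrary scene units.
Color applyDepthHaze(Color surface, float distance,
                     float hazeStart = 2.0f, float hazeEnd = 20.0f) {
    const Color haze = {0.60f, 0.65f, 0.70f};  // desaturated blue-gray

    // t is 0 at hazeStart (full color) and 1 at hazeEnd (fully hazed).
    float t = (distance - hazeStart) / (hazeEnd - hazeStart);
    t = std::max(0.0f, std::min(1.0f, t));

    return { surface.r + (haze.r - surface.r) * t,
             surface.g + (haze.g - surface.g) * t,
             surface.b + (haze.b - surface.b) * t };
}
```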

Rendering Distance

The distance at which objects can be rendered will depend on the optics of the VR headset being used….