According to reports, the MagicLeap team seems to use either optical waveguides at the nano scale or dense microlens arrays to create a light field at the pupil. Here is a great introduction to what a light field is:
The light field is defined as all the light rays at every point in space travelling in every direction. Since every point in three-dimensional space can be paired with a direction, the full function is higher-dimensional, but in free space it reduces to the 4D representation commonly used in graphics (for example, the two-plane parameterization). The concept of the light field was formalized for computer graphics and vision in the 1990s to solve common problems in rendering and reconstruction.
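To make the 4D idea concrete, here is a minimal sketch (in Python with NumPy; all names and sizes are illustrative, not from any actual device) using the common two-plane parameterization L(u, v, s, t), where (u, v) indexes the viewpoint and (s, t) the pixel:

```python
import numpy as np

# Two-plane parameterization: a ray is indexed by where it crosses
# the (u, v) camera plane and the (s, t) image plane.
U, V, S, T = 8, 8, 64, 64            # angular and spatial resolution (toy sizes)
lightfield = np.zeros((U, V, S, T))  # radiance L(u, v, s, t)

# Fill with a toy pattern; a real capture would come from a camera
# array or a plenoptic camera.
for u in range(U):
    for v in range(V):
        s, t = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
        lightfield[u, v] = np.sin(0.1 * (s + u)) * np.cos(0.1 * (t + v))

# Fixing (u, v) gives a sub-aperture image: the scene as seen from
# one viewpoint on the camera plane.
sub_aperture = lightfield[U // 2, V // 2]
print(sub_aperture.shape)  # (64, 64)
```

Each fixed-(u, v) slice is one 2D view; sweeping over (u, v) is what lets a display (or camera) reproduce parallax and focus cues.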
Here is a great, though faked, demo video from MagicLeap:
A hologram is an object like any other object in the real world, with only one difference: instead of being made of physical matter, a hologram is made entirely of light. Holographic objects can be viewed from different angles and distances, just like physical objects, but they do not offer any physical resistance when touched or pushed because they don’t have any mass. Holograms can be two-dimensional, like a piece of paper or a TV screen, or they can be three-dimensional, just like other physical objects in your real world. The holograms you’ll see with Microsoft HoloLens can appear life-like, and can move, be shaped, and change according to interaction with users or the physical environment in which they are visible.
However, it’s hard for me to figure out the difference between a light field and a hologram right now. If you have an answer, please leave a comment. Thanks!
Very few people have witnessed the MagicLeap, but Microsoft has officially presented the HoloLens to the public. Here is the video:
In contrast to these two Augmented Reality (AR) devices, Oculus provides the consumer-level Rift, which creates Virtual Reality (VR). The key secret of the Oculus is the same as for most VR:
trick the user into believing they’re actually there — wherever it’s bringing you.
It does this by using stereo video (stereoscopic video is the technique of producing the illusion of a 3D image in moving form; it is usually captured by two cameras in parallel, but can also be rendered using interactive 3D technologies).
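As a toy illustration of how stereo video encodes depth, here is a hedged sketch (all constants are assumptions for illustration, not Oculus’s actual parameters): projecting a single 3D point through two pinhole cameras separated by the interpupillary distance yields a horizontal disparity that grows as the point gets closer.

```python
import numpy as np

# Illustrative constants, not real headset parameters.
IPD = 0.064     # typical human interpupillary distance, meters
FOCAL = 800.0   # pinhole focal length, in pixels

def project(point, eye_x):
    """Pinhole projection of a 3D point for a camera at (eye_x, 0, 0)
    looking down +z. Returns (x, y) image coordinates in pixels."""
    x, y, z = point
    return (FOCAL * (x - eye_x) / z, FOCAL * y / z)

point = np.array([0.1, 0.0, 2.0])   # a point 2 m in front of the viewer
left = project(point, -IPD / 2)
right = project(point, +IPD / 2)
disparity = left[0] - right[0]      # horizontal disparity in pixels
print(disparity)                    # FOCAL * IPD / z ≈ 25.6 pixels
```

The brain fuses the two offset images and reads the disparity as depth, which is exactly the cue a stereoscopic headset manufactures with its two rendered views.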
Finally, to be fair in the comparison, here is an Oculus Rift video, though I have watched several of them already 🙂
- Real: The users would like models to “be as real as possible”.
- Temporal: Users would like to see “the sides of buildings overlaid with images that reveal how the structures looked in the past“.
MagicLeap (as of 3/20/2015)
According to reports: “As I see crisply rendered images of monsters, robots, and cadaver heads”, “3-D monsters and robots looked amazingly detailed and crisp, fitting in well with the surrounding world, though they were visible only with lenses attached to bulky hardware sitting on a cart.”
In their patent, MagicLeap talks about 12 depth layers to correctly drive focal cues, with the display having a 720Hz refresh rate: 60Hz for each layer. It was mentioned in the literature a number of years ago that scanning retinal displays (a.k.a. virtual retinal displays, VRDs) produce superior image contrast, even for AR in daylight conditions. MagicLeap has one of the pioneering VRD researchers from the HIT Lab on their team. The 12 depth layers are more likely than not made using multiple freeform prisms in a complex off-axis optical system. With freeform prisms, additional prisms are required to “undistort” the incoming light. The biggest challenge I’ve heard people mention is making all of this very compact.
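The refresh-rate arithmetic is worth making explicit. A small sketch of the time-multiplexing budget (illustrative only; the numbers come from the patent figures quoted above, the variable names are mine):

```python
# If 12 depth layers are time-multiplexed and each layer must refresh
# at 60 Hz, the display as a whole must run at 12 * 60 = 720 Hz.
LAYERS = 12
PER_LAYER_HZ = 60

display_hz = LAYERS * PER_LAYER_HZ   # overall refresh rate
slot_ms = 1000.0 / display_hz        # time budget to draw one layer frame

print(display_hz)            # 720
print(round(slot_ms, 3))     # 1.389 ms per layer frame
```

That roughly 1.4 ms per layer is what makes the optics and driving electronics hard to keep compact.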
According to one report: “Compared with MagicLeap, images appeared distractingly transparent and not nearly as crisp as the creatures Magic Leap showed me some months before.”
Wearing / Form Factor
Both users and companies want to “fit its technology into a chunky pair of sports sunglasses wired to a square pack that fits into your pocket”.
“I think that for the near future (next 5 years) an Oculus Rift-like device with 2 cameras holds a lot more promise for quality AR”
Metz said it uses a projector smaller than a grain of rice built into a black wire, but I cannot find any pictures of it.
As one reviewer stated, putting on the HoloLens is easy: twist a wheel on the back of the device to adjust how tight the headband is, and plop it onto your head. The headband and the visor can move independently of each other, so you’ll be able to get things situated just right. You can also slide the visor in and out, so it’s closer to or further away from your face — the HoloLens doesn’t rest on your nose, and folks who wear glasses (including prescription lenses) have said they felt fine using it.
There are at least four cameras or sensors on the front of the HoloLens prototype.
Even for the DK2, the Oculus Rift is comfortable to wear in my experience, but not perfect for people wearing glasses. Oculus also provides various lenses for near-sighted people, though I haven’t had a chance to try them.
According to one report: “A key difference compared to Magic Leap was that I was able to walk around some 3-D objects, such as an X-Wing fighter sitting in front of me; it looked fairly solid up close, though not intricately detailed. I was also able to modify 3-D objects, which was pretty cool. Using my gaze, gestures, and voice commands, I enlarged, copied, colored, and changed the angular position of a fish that was part of the ocean scene, for instance. And I could move objects from one spot to another, like a cartoonish pony I seated on a couch between the two HoloLens team members who were in the room with me for a mixed-reality photo.”
As I wrote in yesterday’s post, the Oculus Touch is really well-designed. People would prefer purely virtual interaction if the tracking algorithm were perfect. For now, though, tangible interaction still comes first, so people would prefer the Oculus Touch.
Stereoscopic 3D, which the Oculus Rift uses, can make you dizzy and lead to headaches and nausea.
As one reviewer stated, the HoloLens “did not make me feel nauseous, which bodes well.”
I found a great tutorial from SIGGRAPH Asia ’09, “Theory and Methods of Lightfield Photography”: http://www.tgeorgiev.net/Asia2009/ There are many cool slides in the above link; for example, one superresolution light-field result is shown below:
There are two recent short papers / posters regarding light fields:
- Dense lightfield reconstruction from multi aperture cameras. (ICIP ’14 short paper)
- Lightfield media production system using sparse angular sampling. (SIGGRAPH Poster ’13)
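As one small example of what the 4D representation in these works enables, here is a sketch of synthetic refocusing by shift-and-add, the classic lightfield-photography trick (the function name is mine; integer pixel shifts are used for simplicity, real pipelines interpolate):

```python
import numpy as np

def refocus(lightfield, shift_per_view):
    """Shift-and-add refocusing: translate each sub-aperture image in
    proportion to its angular position, then average. Objects whose
    parallax matches shift_per_view come into focus; others blur."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(shift_per_view * (u - U // 2)))
            dv = int(round(shift_per_view * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy usage: refocus an 8x8-view lightfield of 32x32-pixel images.
lf = np.random.rand(8, 8, 32, 32)
image = refocus(lf, 1.0)
print(image.shape)  # (32, 32)
```

Varying `shift_per_view` sweeps the synthetic focal plane through the scene after capture, which is the capability that distinguishes lightfield data from an ordinary photo.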
Numerous papers using the Oculus Rift have been published in the past two years, such as:
- 3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint (VR 2015 journal paper)
- WAVE: Interactive Wave-based Sound Propagation for Virtual Environments (VR 2015 TVCG journal paper)
- Virtual Training: Learning Transfer of Assembly Tasks (VR 2015 TVCG journal paper)
- WoBo: Multisensorial travels through Oculus Rift. (CHI ’15 short interactivity)
Finally, the Magic Leap sounds more promising to me. However, I don’t think even a technical product will arrive for another 1~2 years, let alone a consumer product. All this information is intriguing, but it’s really hard to tell until I get my hands on a real prototype. Let’s conclude with a quote from venture capitalist Chris Dixon:
“I’ve seen a handful of technology demos in my life that made me feel like I was glimpsing into the future,” said Dixon, who helped lead investment firm Andreessen Horowitz’s funding in Oculus VR. “The best ones were: the Apple II, the Macintosh, Netscape, Google, the iPhone, and — most recently — the Oculus Rift.”
But who knows who the winner will be? Let’s look forward to the future, and participate in inventing it.
Thanks to Gregory Kramida for the discussion.
Some other non-mainstream headsets: