Magic Leap Finally Announced; Remains Mysterious

Yesterday Magic Leap announced that it will ship developer edition hardware in 2018. The company is best known for raising a lot of money. That’s only partially a joke, since the teased hardware has remained very mysterious and never been revealed, yet they have managed to raise nearly $2 billion through four rounds of funding (three of them raising more than $500 million each).

The announcement launched Magic Leap One — subtitled the Creator Edition — with a mailing-list sign-up for “designers, developers and creatives”. The gist is that the first round of hardware will be offered for sale to people who will write applications and create uses for the Magic Leap One.

We’ve gathered some info about the hardware, and we’ll begin the guessing game on the specifics below. The one mystery that has been solved is how this technology is delivered: as a pair of goggles attached to a dedicated processing unit. How does it stack up to current offerings?

What is Magic Leap?

Keeping the technology a mystery for years was a pretty good move. It let everyone imagine something much more advanced than is possible right now. You likely remember the marketing image of an elephant in the palms of a user’s hands that accompanied the initial announcement back in 2014. The Wired article summarized Rony Abovitz’s boast of a new technology:

Magic Leap founder and CEO Rony Abovitz insists what he’s building isn’t mobile computing, Oculus-style virtual reality, or even augmented reality.

With the cat out of the bag, we think those claims overstep. This is augmented reality as far as the broader term is concerned, but it goes beyond that into Mixed Reality (MR). MR is a type of AR that includes awareness of surroundings. So Magic Leap promises to understand where tables, chairs, windows, doors, and hopefully cats and dogs are located when adding virtual items to your surroundings.

Magic Leap v. Hololens v. CastAR

CastAR with projector exposed

These wearable goggles have the most in common with Microsoft’s Hololens and CastAR because all three feature see-through lenses that do not block out all of the world around you in the way that HTC Vive or Oculus Rift (both VR technologies) do. What’s interesting is the key differences between the three.

Unfortunately, CastAR closed its doors earlier this year, but we still loved the technology. It was built on a set of wearable glasses that included two 720p projectors, one above each eye. The augmented reality experience depended on a retro-reflective material used as a projection surface. That surface reflected the projected images back at the exact same angle they were received, so several people could use the same surface at the same time without interfering with each other. One of the keys to success here was that what you saw actually came from the physical location where you expected to see it.

Microsoft’s Hololens is a curved visor which you wear like a facemask, spanning from temple to temple and from the tip of your nose to above your brow. That visor is the projection surface: the user sees the real world, but projecting on the visor overlays images on what the user sees. The benefit of this is that you don’t need retro-reflective material or any other gear separate from the head-mounted unit itself to make the experience happen.

Magic Leap appears to use lenses as the projection surface, like the Hololens, but uses two separate lenses instead of a single large visor. We’re keen to try this out because it may solve a problem we noticed when testing the Hololens: field of view (FOV).

The Hololens can only project onto an area smaller than the total viewable part of the visor itself. If your eye catches the “edge” of where the virtual items can be projected, it breaks the experience. Looking at the smaller lenses of the Magic Leap, it would be fantastic if the projection area filled the lenses completely, synchronizing the FOV for both the real and the virtual world and leaving no way to interrupt the perceived experience. Alas, reporters who received demos of the current prototypes say the Magic Leap does have a rectangular projection area inside of the round lenses. We hope the production hardware can improve on this.
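For a sense of scale, the apparent FOV is just the angle the projected image subtends at the eye. A quick back-of-the-envelope sketch (the image width and distance are our own illustrative numbers, not published specs):

```python
import math

def horizontal_fov_deg(image_width_m, distance_m):
    """Angle subtended at the eye by a flat virtual image of the
    given width floating at the given distance."""
    return math.degrees(2 * math.atan(image_width_m / (2 * distance_m)))

# A virtual screen 0.53 m wide placed 1 m away subtends about 30 degrees,
# in the ballpark of what reviewers reported for the first Hololens.
print(round(horizontal_fov_deg(0.53, 1.0), 1))  # 29.7
```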

I was fortunate enough to try out the CastAR while it was still in development, and it had a great FOV. But that FOV was limited to the size of the retro-reflective material, so you could say it had a similar experience-breaking issue.

The Secret Sauce… or Not

So, it can track head movement, it has a wearable transparent screen in front of each eye, and it has cameras and sensors to pick up the surroundings. How do you make this stand out from existing offerings?

The most detailed source of information we’ve found is the Rolling Stone article that ran yesterday. There is a lengthy section on light fields and how a breakthrough in handling them is the revolutionary technology Magic Leap is built upon. The assertion is that your eye only needs specific important parts of the light field and your brain builds the rest. If you feed in those magic parts, your brain perceives that something real is in front of your eyeballs.
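For the unfamiliar: a light field is usually modeled as a 4D function giving the radiance of every light ray, parameterized by where the ray crosses two parallel planes. Here is a toy sketch of the classic shift-and-add refocusing trick on synthetic data (our illustration of the concept, not anything from Magic Leap):

```python
import numpy as np

# Toy two-plane light field: L[u, v, s, t] is the radiance of the ray
# through (u, v) on the aperture plane and (s, t) on the image plane.
rng = np.random.default_rng(0)
L = rng.random((8, 8, 64, 64))

def refocus(L, alpha):
    """Average the sub-aperture images, sheared in proportion to the
    aperture coordinate; alpha selects the virtual focal plane."""
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = round((u - U / 2) * (1 - 1 / alpha))
            dv = round((v - V / 2) * (1 - 1 / alpha))
            # np.roll wraps at the edges; fine for a toy example.
            out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

near = refocus(L, alpha=0.8)  # focus nearer
far = refocus(L, alpha=1.2)   # focus farther
```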

The secret sauce is a chip — basically a custom sensor/DSP that knows how to take in a virtual image, reconcile it with the light field sensed by the cameras, and spit out an artificial light field sophisticated enough to fool your brain into filling in the gaps. The article makes it sound like they built a fab in the basement of their Florida-based operations. That’s rather incredible; the stuff you expect from Tony Stark (or some evil villain’s lair). It’s more likely they’re getting wafers fabbed and shipped bare for processing during in-house assembly of prototypes.

This light field breakthrough is a mighty claim, and we hope it’s true, because such an advancement would indeed be revolutionary. It seems we’ll have to wait and see. We want to read the white paper!

One interesting tidbit the Rolling Stone article does include is an image of the first proof-of-concept prototype. If you look closely you can see two bits of glass held at 45 degrees over a chin rest, like at the eye doctor’s. Despite hints of more, the system has been fundamentally glasses-based since the beginning.

Guess the Hardware

Time for everyone’s favorite game! How did they do it?

There’s almost no hardware information available in this reveal, but guessing is more fun anyway. Here’s what we can gather from the photos. The head-mounted goggles have two wires coming off of them; these merge into one and snake around to the Discman-shaped, belt-mounted computing device that drives the goggles. The handheld controller is wireless, and there is no separate power supply in sight, so the disc-shaped unit presumably incorporates a battery.

To be honest, the controller and CPU aren’t all that interesting. These are solved problems as far as hardware goes. The magic of Magic Leap is in the headset itself.

Position tracking is perhaps the most important part of this entire system. Sure, you could argue that optics make or break it, and we’ll get to that in a minute. But if your perfect graphics are out of sync with the head movement of the user, motion sickness will ensure that the headgear goes unused.
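The usual trick for hiding tracking latency is to predict where the head will be when the frame actually reaches the display. A minimal sketch (the constant-rate model and the numbers are our assumptions, not anything Magic Leap has disclosed):

```python
def predict_yaw_deg(yaw_deg, gyro_dps, latency_s):
    """Extrapolate head yaw through the motion-to-photon latency using
    the gyro's angular rate. Real trackers predict full 3D orientation
    with quaternions and fuse gyro, accelerometer, and camera data;
    a single axis keeps the idea clear."""
    return yaw_deg + gyro_dps * latency_s

# 20 ms of latency during a brisk 200 deg/s head turn is a 4 degree
# error: easily enough to unstick a "world-locked" virtual object.
print(predict_yaw_deg(0.0, 200.0, 0.020))  # 4.0
```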

Having seen the Intel Euclid demoed at Maker Faire last May, we’re convinced that camera tech for making sense of movement and the environment has come of age. The small Euclid packs an entire computer and battery inside, which the Magic Leap instead moves to your belt, so fitting the rest into these goggles is not beyond belief. The guessing game becomes: what are the obvious lenses and sensors shown above used for?

There is a peripheral-vision lens on either “corner” of the Magic Leap, two front-mounted cameras also on the corners, and two windows on the bridge above the nose that look like they might be for IR. The green circle shown on the left is a mystery (power button and indicator?), and so is the corresponding camera lens on the right. Motion tracking likely uses at least two depth cameras: an IR laser projector blasts out a pattern that an IR camera can interpret. This is how Intel’s Realsense does it, and Hololens takes a similar approach. The other cameras are likely used for object recognition and lighting-condition processing.
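The depth math behind that kind of camera is plain triangulation: the apparent shift (disparity) of a projected dot between the projector and the camera encodes distance. A sketch with made-up optics:

```python
def depth_m(focal_px, baseline_m, disparity_px):
    """Depth from disparity for a projector/camera (or stereo) pair:
    Z = f * b / d. Larger shifts mean closer surfaces."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 600 px focal length, 50 mm baseline.
for d_px in (10, 20, 40):
    print(d_px, "px ->", depth_m(600, 0.05, d_px), "m")  # 3.0, 1.5, 0.75
```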

The small holes at the bottom of the lenses and on the side show locations for three of the four microphones used to sense sound.

What Do You Think of Magic Leap?

We want to hear from you in the comments below. Did you spot any hints in what has been shown off of the hardware? Put on your reverse engineering hat and tell us what you can glean from this very limited info. Is this revolutionary or merely another incremental development in the VR/AR/MR scene?

Of course there are a lot of other questions left hanging out there. What will the Magic Leap One hardware cost? How do you interface with the controller, and what frameworks can be used for getting your code onto the machine? We’re looking forward to hearing what you think.

74 thoughts on “Magic Leap Finally Announced; Remains Mysterious”

    1. If you ever get a chance to actually use a Hololens, you will be completely and totally underwhelmed. It is garbage compared to Oculus or Vive. It cannot be overstated how important Field of View is to the immersion experience, and Hololens absolutely fails when it comes to field of view. Headtracking is similarly subpar. There is still value in industrial applications, but you’re better off with full vision coverage and a pass-through camera feed. M$ is selling a Minimum Viable Product that is only viable for developers who want to develop user experiences for the real, actually useful product that will (hopefully) come later.

      1. I don’t completely disagree. In my opinion the breakdown of the Hololens is in the software, and the software tools. If you do the Pompeii world tour you will find it immersive even with the FOV constraints. The issue to me is basically that they just show you smartphone applications as windows for the OS, the headset is too heavy, and there is REALLY POOR implementation of gesture recognition – no touch controllers.

        I just don’t think we need the full FOV. Try the Pompeii world tour and let me know if you agree.

      2. Whilst I completely agree about small FOV significantly detracting from immersion, the big difference here is that Magic Leap’s device is providing for the accommodation depth cue (depth of field) and the convergence depth cue, which no one else is doing. The potential impact of that difference cannot be overstated – all the flowery talk of the brain ‘magically filling in the rest’ is a reference to this feature, because all of a sudden there is no disparity between what the brain expects from an object in the environment and what it’s getting, so the level of believability suddenly shoots through the roof.

        I doubt that they have any kind of occlusion of real-world objects in place (I’m not even sure how that would be technically achievable, to be honest, in a lightfield display) but it won’t matter, because the display is completely fooling your brain into thinking that the light you’re seeing is coming from an object, so that if that object appears to be in front of a real-world object, your brain will simply ignore the one that’s behind it even if the light from it isn’t being blocked by the display.

        1. I own a Hololens and a Rift. Comparing a VR with an AR device is wrong, and the HL build is superior. I’m glad I didn’t let the FoV stop me from getting the HL because I would have missed out on my favorite computing device. I use the mobile HL much more than the Rift. The HL has been out for some time now, and going forward Microsoft has announced a 70° FoV.

        1. No, it’s not, and nobody sensible considers processors to be made of nanomaterials. Nanomaterials usually have a regular repeating structure, and this structure interacts with light, or some other natural thing, to produce effects you wouldn’t normally expect.

          CPUs, RAM, and other chips built on a nano-scale are just like microchips. But smaller.

        2. It’s more than likely that they’re referring to a next generation metamaterial based optic to replace the holographic optical element they’re undoubtedly using for this first model (just like pretty much everyone else doing AR, Hololens included). Current HOEs have a limited field of view which will be why this model has a reasonably small field of view.

      1. An iPhone is an iPhone because of software and some very closed-design custom chips.
        So it’s true they can’t do iPhones as such.

        It’s also true that there are plenty of items the Chinese do not make at home as copies; even if they sell them, they have to get originals and attempt to price them down a bit.
        But sometimes they take a product like that and make a cheap alternative that’s in some ways more attractive than the original designs coming from the big boys.

        1. I’d argue when speaking of fake phones, what makes an iPhone is actually the packaging. Fake phones are usually designed to mimic older-generation phones and are often sold in the third world. If you have no idea what the OS/UI should look like you will not know the difference. On the best fake iPhones I’ve seen, the packaging was a perfect copy of a genuine iPhone, the shell of the phone was very close, and the OS/UI resembled an older version of Apple’s iOS. The real giveaway is that they only wanted $100 for it.

          Fake phones are often used as part of a scam. I’d assume that at less-than-honest establishments they charge the full price of a real iPhone, use a real iPhone as a display model, and sell you a fake in a very convincing copy of an iPhone box. If someone brings one of the fakes back, the store owner points at the sign that says “NO REFUND”, or if appropriate to the situation, suddenly forgets how to speak English.

          Fake Samsungs can be much harder to identify, as those actually run Android, and can fool the buyer until the screen breaks and they go to have it replaced, and find out that none of the parts inside are right.

    1. Magic Leap sunk genuine competition by raising expectations beyond what anyone, including them, can possibly deliver for years, perhaps a decade or more. What bugs me most about them is that despite there being nothing but rumors of their tech, and pre-composited promises, they still have this vaporware-y vibe about them.

      People’s response is always the same… THEY have BILLIONS of $$$ in Google money!!! MUST BE REAL!!!
      Today crypto coins collectively have a market cap of $597,538,000,000 USD and people are still skeptical of them in various ways.

      I’m sure Google’s VC investors are intelligent people, but I’m also certain that they are not infallible Future-Knowing Technology Gods. They have made some questionable purchases in the past cough*Dwave-pseudo-quantum-computer*cough.

      I hope it’s real, as AR tech is neat. My gut says that Magic Leap is going to struggle to meet the expectations they set for themselves though.

          1. Since my post I learned it tracks your eyes (And who knows, maybe also gets iris and even retina scans) and makes a volumetric map of your house, including defining all objects in it, then sends that to the infamous ‘cloud’.
            I stand confirmed.

    2. I agree that the hype on this is huge and continues to be. I was initially disappointed when I saw the announcement because I had built this up as having the potential to be so much more. But getting over that initial feeling, I’m still really excited to see someone be widely successful in the AR realm. I wear glasses on a daily basis and long for the day when they have useful digital features built into them. If something like this can find success, that future gets a bit closer.

      A $25 clone? Nope. Despite my feeling that Magic Leap is overhyped I don’t think this, the Hololens, Vive, or Oculus are delivering poor value. Getting this hardware right is very difficult, and if there were big corners to cut on the hardware these companies would have tried them. High-performance camera and display hardware is hard.

      1. I have a Hololens. It has some significant issues. It is also pretty amazing and, I think, points the way to AR being an important technology. We’re just in the early days. The Apple II had significant issues too in its day but also pointed the way. We’re early in this technology’s evolution. I have no doubt that in 20 years we will compare the AR solutions of the time to these Magic Leap glasses and devices like the Hololens in the same way we look back at early personal computing.

  1. *So glad* HaD picked this one up. This is technology I’d love to be ‘proved wrong’ about, and also not from a ‘snake oil seller’. She also may not be ‘up’ for it, but I think many in the community would appreciate Jeri’s direct comment on the tech, as she has had much more time than all of us to ‘think’ about it. Not to be a ‘Luddite’, but here I kind of see a half-product. Really excited about the tech; not ready, until details, to be its cheerleader.

  2. I have very low expectations here; any outfit as coy about their specs as these folks are is likely in the business of marketing the “welcome to the future” image rather than selling any gear actual tech geeks would be keen to play with. Assuming their target market is Joe Sixpack, I’m guessing their groundbreaking new tech is mostly system integration.
    That said, if they do have two depth cameras rigidly mounted relative to one another it wouldn’t surprise me to see them used for head tracking with no external beacons required (sort of like SLAM). Even if they did a really good job at that I don’t know if it could justify their hype.

    1. I believe this does head tracking with no external beacons necessary. This has become a solved problem and we see incremental improvements in how the depth cameras work (and how fast they can operate).

      The real question is delivery on the hype of the “light field” claims. Hololens does a good job of overlaying digital creations into your environment, but from what I’ve seen they’re still very obviously digital creations. A real step forward would be creating objects that make you question your reality.

      I think this is challenging for a number of reasons, the biggest being that no matter what, you’re likely to be able to see through any of the digital objects… how could a system like this make any object appear opaque? DLP projectors were a breakthrough because unlike LCD, they can truly “shut off” the light source to a pixel because it’s a set of MEMS mirrors. When a pixel should be black the mirror simply doesn’t reflect light toward that part of the screen. Something like that in AR would be rather incredible.

      1. For AR, you could perhaps have an LCD grid of pure on/off pixels to produce black blocking or clear transmitting. Then in front of that, a screen of teeny LEDs, each one occupying just a few percent of the space around it, leaving a nice clear gap around it for reality to leak in through.

        Seems like that’d answer the problem, with contemporary mass-produced technology.
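        Something like this, in toy form (made-up numbers, with each layer modeled as a per-pixel array):

        ```python
        import numpy as np

        # Toy per-pixel compositing for the LCD-shutter-plus-LED idea:
        # what reaches the eye = real world * transmittance + emitted light.
        real = np.full((4, 4), 0.8)   # bright real-world background
        virtual = np.zeros((4, 4))    # light added by the LED layer
        mask = np.ones((4, 4))        # 1 = clear LCD pixel, 0 = blocked

        virtual[1:3, 1:3] = 0.6       # a virtual object...
        mask[1:3, 1:3] = 0.0          # ...with the world behind it blocked

        eye = real * mask + virtual   # opaque where masked, clear elsewhere
        ```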

        I don’t think that’s the answer though. Cameras only get better and better. They can have crappy speeds but that’s just a matter of re-designing them with more output ports onboard, so you can pull several pixels at once down, rather than just one.

        But even that isn’t the answer. Because the question is surely “why would you want AR anyway?”. I can’t see the attraction. Just have games based around a sitting position, flying your spaceship etc. Or just using a controller to walk forward, or to twist your waist. Less potential for confusion, for people who can’t afford the rubber wallpaper and a special bespoke VR room to be able to do foot-based VR.

        I played Doom on a gen-1 PC VR system way back. You don’t miss the ability to walk around on your real feet. And it saves the fairly real danger of people falling flat on their faces with a mask on. Since you’re blind, of course. You could use AR to compensate, have the real world still visible so I don’t bump into it, but there’s only so many demons I can see running round my house before I start to wonder about Indian burial grounds.

        Ditch the legs. Make VR games where we float around, or “run” without using our own actual flesh legs. Too much trouble, nobody cares that much, and you don’t want actual *exercise* while you’re trying to play games.

      2. For a truly believable insertion it would also have to mimic the lighting of the surroundings and its color temperature.
        That’s theoretically doable, but would require some strong coding and some additional processing.

  3. The cool thing about castAR is that it actually, really, worked. It was portable and ready to turn into a product. Jeri promised a product and would have been able to deliver (this year, maybe); too bad investors want to be promised the moon and receive nothing.

    1. All an investor needs is another mug to sell his shares onto at a profit. An actual product has very little to do with anything. Hype is way more important. Were you not aware that basically the entire world’s economy runs on hype and not much else? It’s where all the boom / bust cycles come from. Confidence is actually literally the most valuable thing in the world.

  4. Okay, let’s assume that everything they show works at the current state of the art.
    So they have left/right/center tracking; no top, bottom, or back.
    That means that you will have to look down or hold your hand up to use wand gestures.

    We can assume, since they ship with a wand, that they don’t track hand gestures.
    This would be functionally like the Daydream headsets.

    Hololens required FULL Point Clouds on each temple to get the room mapping that they can do, and you still have to wiggle your head around for it to get its bearings.

    We could assume there would be bluetooth support for things like keyboards?

    They are rumored to be developing their own Operating System, which sounds like a complete disaster. Not sure why they wouldn’t just use Android and build on top.

    At the end of the day the best you could do with this is showing holograms in the real world interacting with the real world. A whale splashing in the stadium is MUCH less useful/productive than a big monitor.

    You have to ask: in what setting, in what context, would something on your head be helpful?
    I would say… IF it could simulate a couple of HUGE monitors, I might use it to write software. If it could simulate a huge theater I might watch a movie.

    As we move into this world of AR/VR/MR we need to remember content is king. Today the number one use case for an “R” is games. We already have full 3D worlds created; we just need to render them in 3D in front of our eyes, and EVEN that is a hard sell due to motion sickness.

    The next most useful would be showing content overlaid while doing something. For this you need INTEROP, you need cooperation. Making your own OS really severely hinders this in my opinion. Also… all you really need for this is Google Glass. If you’re doing something, it can be dangerous to obscure your front-center FOV. Think dashboard while driving, or medical vitals while operating. Obscuring your FOV could be life threatening.

    Next could be to “simulate” expensive real-world things, like TVs and huge monitors. IMO.

    Beyond that… Social? Show me people in my room? Virtual voice call?
    Maybe… fringe Tourism? Demo Pitches?

    Just not sure this is worth billions/company or thousands/product. If it’s thousands then its more expensive than the real world things it simulates. If it’s billions show me what industry it creates/disrupts…

    Personally I like the Hololens and the Oculus Rift / HTC Vive. But it’s important to remember that they have their limits. Until the content is created for a 3D world rather than a flat screen it will always seem out of place. I have yet to see (beyond Tilt Brush) content creation take advantage of a 3D environment. Games on the other hand are great, but then it’s not worth billions.

    1. And now for the skeptic: when you only see marketing, and you don’t hear about the technical side, you always have to be a bit suspicious. Even the people they invite are social leaders like sports figures, but NOT technology leaders. I’m sure that Google investing could be seen as technology input, but then again Google might have wanted to snipe whatever good ideas they had back into Daydream and Google Glass as a way of hedging bets.

      If they actually ship anything in 2018, I’ll buy it. But my expectation is it won’t be more useful than the Hololens; it will actually be much less useful because they chose to create their own OS. So out of the box you don’t get Android applications. Even the Windows Store has more applications than the Magic Leap store…

      If Windows has trouble getting people to develop applications for hololens, I doubt magic leap could do better. Their hardware would HAVE to be an instant success like the original iPhone BEFORE the app store to get developers to buy into the gold rush.

      Just my 2 cents of skepticism.

      1. My prediction is that they’ll ship a developer edition that functions on the level of hololens and tell everyone the full edition with true depth of field, light occlusion, etc, is just around the corner.

    2. To be fair… I don’t want to be negative. I want the future that Magic Leap has Hyped. I want this to be as successful as the iPhone in bringing a new form-factor into the mainstream.

      I REALLY like the idea of the on-hip compute and battery. This is something that really could have helped the Hololens. The Hololens is heavy, too heavy for my head. It becomes uncomfortable after a short time. Maybe my neck is weak, but it seems like it limits adoptability. Magic Leap putting compute and battery on your hip is a REALLY good paradigm till we can miniaturize batteries. #MicroFusionReactor where is Tony Stark when we need him?

      I don’t want to sandbag; I choose to remain optimistic.

    3. I would guess the “whole new OS” thing is to do with realtime responses. Critically important when you’re simulating reality.

      If the hardware’s proprietary, and the same in every unit, you don’t need the HAL that’s a useful part of having an OS. Similarly you don’t want so many layers of abstraction in general, functions calling functions, because that all takes time too.

      Sure you can do VR using traditional OS’s, but it’s not what they’re designed for. They each have opposing needs and principles.

  5. $2 Billion in funding may be real for marketing-hype purposes, but in fact there are MANY strings attached – like meeting development and/or commercial milestones. Typically what happens with big-pocket funders like Google is the target company meets certain milestones up to the point where results matter to the funder. Then Google either bails (defunds) or buys out the start-up, picking up the (often patentable) intellectual property (IP) for its own use and/or to lock up the IP so competitors can’t use it – a simple thing to do given the corrupt and inefficient USPTO fed by greedy trial lawyers.

    In the end we’ll probably see NOTHING we can actually purchase that’s affordable from this venture, but Google will pay far less than $2 Billion to capture and lock up all the tasty bits that get developed. Is there anything wrong with this? Arguably NO. Aside from the misleading $2 Billion marketing hype (which most of us see through), this is just another big-pocket company using its money to collect and own what may be the best parts of new ideas.

    Remember, there is RISK involved in funding new stuff at any level, and RISK ultimately equates to MONEY. Micro$oft, Apple, etc., can do it, but Google got there first in this case. Without this type of competition of ideas and backers to take the ideas forward, major leaps in technology would be crippled. It’s called Capitalism. Which for the most part still works.

  6. My guess is that they don’t have breakthrough hardware but are using AI techniques on state-of-the-art GPU and ASIC chips optimized for AI. Probably the reason that investors are willing to put up so much money is because of the people involved (I haven’t checked out their background) and because of their business model. It looks like they are deliberately building their own OS, probably one optimized for real-time AI, with the intention of monopolizing the VR/AR space like Windows/iOS/Android have been able to do in the desktop/laptop/tablet/phone areas. They may be able to deliver, and the idea of an AI-optimized real-time OS would be very interesting. Are they recruiting engineers from the AI world or the real-time OS world? That would give some indication of where they are going.

  7. I’m guessing the tech they’re using is based on DLP, probably with a cool new unreleased chip from TI and some magic with short-throw optics and lens coatings.
    It seems to fit with many of their mysterious buzzwords around what tech they use for the display.

    1. I believe they are using virtual retinal display technology which produces a light field that is projected on to the retina, rather than a screen. This is not new technology. It has been used before. The hard part is creating the light field. That requires a lot of computing power. I expect that computing the light field is their magic sauce. Precisely why investors are willing to put so much into a company that, on the face of it, has no previous special expertise in this area is not clear. Maybe that is the secret that they are hiding or maybe it is marketing hype. We will find out in a year or so.

      1. I tend to concur – previous patent filings associated with them have referred to a scanned vibrating fibre optic display, which is essentially what you’re describing. The trick here is that if you’re careful about how you couple light into a fibre, you can dictate the colour and intensity of light that comes out the other end at different angles, which is a single lightfield pixel in essence. Vibrate that fibre horizontally and vertically, and you have yourself a lightfield display.

      2. I don’t think they are using virtual retinal display technology. While it has existed for a long time, it still suffers from major problems as others (like John Carmack) have found out when experimenting. If your eye moves by even a millimeter you no longer see anything. To counter this you either need very high precision/low latency optical tracking (doesn’t exist yet) with moving parts (not really desirable near the eye) or arrays of lasers, which defeats the simplicity of the concept in the first place and would be much more expensive.

        They have multiple patents that tend to indicate that their technology is neither a virtual retinal display nor a light field display, despite what their marketing says. Light field displays are nice in principle, but they take a 9x hit on resolution, which their device doesn’t seem to suffer from.

        Their photonic chip seems to allow a multi-focal display with several discrete levels of focus that more or less cover the human range of vision. They don’t need multiple renders for each eye for that, only a method to direct the pixels for each depth at the right focal distance. That’s what their chip does.

          1. Similar in principle, but with virtual planes instead of physical ones. Multi-plane displays already exist, but they would need to be very big to cover the range of human accommodation (from 25 cm to 6-8 m). Here they’re virtual, as in virtual images in optics.
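            Putting numbers on that in diopters (1/distance) shows why a handful of virtual planes could be enough; the plane spacing and focus tolerance below are illustrative guesses, not Magic Leap specs:

            ```python
            # 25 cm (4.0 D) out to 8 m (0.125 D) spans under 4 diopters.
            # Eyes tolerate very roughly +/-0.3 D of focus error, so planes
            # spaced ~0.6 D apart in diopter space can tile the whole range.
            near_d, far_d = 1 / 0.25, 1 / 8.0
            step_d = 0.6
            planes_m = []
            d = near_d
            while d > far_d:
                planes_m.append(round(1 / d, 2))  # back to meters
                d -= step_d
            print(len(planes_m), planes_m)
            # 7 [0.25, 0.29, 0.36, 0.45, 0.62, 1.0, 2.5]
            ```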

    1. There are telescopes that can detect which direction they’re pointing, in both axes, and I think digital compasses and GPS. In any case, there are telescopes that can tell you what you’re pointing at, on an accompanying device, but not in actual AR. Is the AR important? A star to the naked eye is just a white dot, the sky’s just black nothing. You could use just ordinary VR, or even better just a monitor.

    2. My cousin had an Android phone with an app that could do this, maybe 5 years ago. I think it just used the inertial measurement unit, the GPS in the phone, and the time to display the stars and planets with names etc. It matched really well with what you saw with your own eyes when holding the phone up at the starry sky, and, as if the Earth were transparent, you could see the stars below the horizon. Maybe not AR, but if it works well enough, who cares.

  8. personally, I need “eyeball tracking” most. I do too much peripheral check with eye motion instead of gross head turning. individual lenses at least have the potential of showing images at different perspective based on actual eyeball focus within the limits of the lens surface. the dome look might suggest this allows equidistant view of any point of the lens from the wearer’s eyeball. If there’s eyeball tracking, and independent focus, and the ability to make the images focus where eyeball focus is and not everywhere at once, this might be the first VR system I can actually use without discomfort. Might. Maybe. Possibly. Depends on what real hardware actually comes out.

    1. With it being a light field display, gaze tracking is unnecessary as the image encodes information about both colour/intensity and direction of light coming into the eye. That means that an object at 2m from you will require your eyes to focus at 2m and converge at 2m, and that is true for every single part of the image. Vendors like Oculus are trying to patch on functionality like this by tracking the movement of the eye and adjust the focal length of the entire image based on what you are looking at. That system is a poor approximation to this one, and they’re not even all the way there with it yet.

  9. I have a cell phone app that comes close to that. It’s aware of the orientation of the phone and with GPS it will show the names of expected objects. But that’s all part of an overlay, so even objects obscured by light pollution will still show up on the cell display. It’s called “Star Chart” on Google Play.

  10. Good use of the light field effect, yes. Images can be focused upon at different depths. Apparently opaque objects, yes. Objects look solid and block background luminance.
    Field of view, quote: “a VHS tape held in front of you with your arms half extended.”
    Some wins, but clearly some more work to be done to make it viable. FOV is a huge portion of the immersive experience.

  11. I thought this was the company that has never had an actual demo, only effects they paid a movie special-effects company to make, showing what their product will be able to do, if they ever manage to make anything beyond mock-ups, like software, hardware, a prototype, or a product able to actually do something.

  12. “improved depth of field”. That’s what it does. The articles are focusing so much on how it works that they skimmed over what it does or how much it impacts AR/VR.

    While it’s very cool it’s only one piece of the puzzle. Field of view and image contrast are the biggest road blocks in augmented reality. I don’t think we’ll have viable AR products until those two pieces are solved. FOV and contrast make a product usable, depth of field will make it immersive.

  13. > Keeping the technology a mystery for years was a pretty good move

    It was not really a mystery; they have dozens of patents about what they were working on and several publications from their researchers before that (from the HITLab), specifically about fiber scanned displays and optical waveguides/diffractive optical elements.

    > MR is a type of AR that includes awareness of surroundings.

    It’s a marketing term; before that they used “Cinematic Reality” but they backpedaled. AR includes awareness of the surroundings, else it’s simply called a head-up display.

    > These wearable goggles have the most in common with Microsoft’s Hololens and CastAR

    CastAR was not AR, it was just a stereoscopic display; the only interaction with the real world was through a white surface. With head-tracking and a 360° retro-reflective surface it could have been VR though, like CAVE systems.

    > but uses two separate lenses instead of a single large visor.

    The visor in HoloLens is just the plastic enclosure; there are two separate displays + lenses, one for each eye, else it wouldn’t be stereoscopic. There are patents from Microsoft explaining the technology in detail as well.

    > Magic Leap does have a rectangular projection area inside of the round lenses. We hope the production hardware can improve on this.

    It can’t be improved by production hardware; it’s a physical limit due to the laws of optics. There is a ceiling on the refractive index of transparent materials, and thus on the angles that can be used for the total internal reflection that optical waveguides rely on, calculated at 47° by Microsoft with Schott glass, which has the highest refractive index of optical-grade transparent materials.

    The only way to get past that is to use bulkier optical elements (wedge-shaped free-form prisms) or to combine multiple displays together, but then it’s a compromise on weight, comfort and price.
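    For reference, that angular limit falls straight out of Snell’s law. A quick sketch (the index values are illustrative, not the exact Schott glass):

    ```python
    import math

    def critical_angle_deg(n_guide, n_outside=1.0):
        """Smallest angle from the surface normal at which light totally
        internally reflects inside a waveguide of index n_guide in air."""
        return math.degrees(math.asin(n_outside / n_guide))

    # A higher-index guide lowers the critical angle, widening the range
    # of ray angles a flat waveguide can carry, and hence the possible FOV.
    for n in (1.5, 1.8, 2.0):
        print(n, round(critical_angle_deg(n), 1))  # 41.8, 33.7, 30.0
    ```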

    > The assertion is that your eye only needs specific important parts of the light field and your brain builds the rest.

    That’s not how it works. The key improvement over previous headsets is the support of near-correct accommodation, to avoid the vergence-accommodation conflict that plagues stereoscopic displays (including VR/AR headsets) which provokes eye fatigue and discomfort, and to provide better depth cues (accommodation) and more life-like imagery (defocus blur) at short distances (less than 10 m). To do that they built a display with multiple focal planes instead of a single focal plane.

    > The secret sauce is a chip — basically a custom sensor/DSP

    The secret sauce is their photonic chip, but it’s not a sensor/DSP, it’s a piece of plastic. It’s composed of a planar optical waveguide and optical diffraction element(s). The second key element is the display itself, but at this stage it’s not known if they are using the fiber scanning displays they’ve been working on for years or a more standard display technology like DLP, OLED or LCoS.


      1. The interesting thing to me is why Google should invest in a company founded by somebody who appears to have no background in this area, given that Google probably has a lot of internal expertise after working on Google Glass. Maybe the company got lucky and was able to file for some key patents before others. Does anyone know who are the technical experts behind the company and where they come from? Clearly, if they can deliver, they have something very cool.

        1. > The interesting thing to me is why Google should invest in a company founded by somebody who appears to have no background in this area

          He has no background in this area but he got the right connections with research labs and he’s also successfully founded a very high-tech company (MAKO Surgical Corp.) which he sold for a lot of money. It has probably instilled enough confidence in investors for Magic Leap.

          > given that Google probably has a lot of internal expertise after working on Google Glass

          Google Glass was not a breakthrough, all the technologies they’ve used have been known for quite some time and it has mostly failed as a product for various reasons, including technical ones. I guess they now prefer to invest in people who know where they are going and have the know-how.

          > Maybe the company got lucky and was able to file for some key patents before others

          It’s not a matter of patents, it’s a matter of bleeding-edge hard research. Nobody is doing what they are doing; you can’t do that with just a bunch of high-skill engineers, you need top-of-the-crop research scientists specialized in very specific fields. That’s what they have.

          > Does anyone know who are the technical experts behind the company and where they come from?

          Brian Schowengerdt, who is a co-founder of Magic Leap, worked with Eric Seibel on fiber scanning displays at HITLab. But they probably have other experts in high-end optics too, I guess.

  14. Why is no one developing an AR brain implant? It would solve all the problems currently associated with VR/AR. Field of view, depth of vision, and light field would all be non-issues because you could just feed the images directly into the optic nerve, and it would not require controllers; if tapped into the appropriate place in the nervous system, it should be able to read the position of every part of the body, create false tactile sensations, and even read your thoughts.

    It would also be far more profitable. The ad revenue would be staggering, and instead of creating maps of everywhere you go and cataloging all the items it sees for upload to the cloud, an AR brain implant, with an appropriate license agreement, would allow all of your thoughts to be sent directly to the cloud so the manufacturer can then claim any useful ideas you may have as their own intellectual property and sell the less useful thoughts to advertisers. When you fail to make your monthly payment for use of the AR implant chip, the manufacturer can then literally claim their pound of flesh in the process of reclaiming the implant.
