Augmented Reality Becomes Useful, Real

The state of augmented reality is terrible. Despite everyone having handheld, portable computers with high-resolution cameras, no one has yet built ‘Minecraft with digital blocks in real life’, and the most exciting upcoming use for augmented reality is 3D Dungeons and Dragons. There are plenty of interesting things that can be done with augmented reality; the problem is that someone needs to figure out what those things are. Lucky for us, the MIT Media Lab has knocked it out of the park with the ability to program anything through augmented reality.

The Reality Editor is a simple idea, but one that is extraordinarily interesting. Objects all around you are marked with a design that can be easily read by a smartphone running a computer vision application. In augmented reality, these objects have buttons and dials that can be used to turn on a lamp, open a car’s window, or perform any other function that can be controlled over the Internet. It’s augmented reality buttons for everything.

The basic idea is simple, but by combining it with another oft-forgotten technology from the 90s, we get something really, really cool. The buttons on each of the objects can be connected together with a sort of graphical programming language. Scan a button, connect the button to a lamp, and you’re able to program the lamp with augmented reality.
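To make that concrete, here is a minimal sketch of what one such button-to-lamp link could look like behind the scenes. It assumes, purely for illustration, that both devices talk MQTT; the broker address and topic names are invented, and the Reality Editor’s actual wire protocol may well differ.

```python
# Hypothetical sketch of a single AR-authored "link": toggle a lamp
# whenever a button publishes a press event. Assumes both devices
# speak MQTT; broker address and topic names are invented.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"            # assumed local MQTT broker
BUTTON_TOPIC = "home/desk/button"  # hypothetical button event topic
LAMP_TOPIC = "home/desk/lamp/set"  # hypothetical lamp command topic

lamp_on = False

def on_message(client, userdata, msg):
    """Flip the lamp state on every button press."""
    global lamp_on
    if msg.payload == b"pressed":
        lamp_on = not lamp_on
        client.publish(LAMP_TOPIC, b"on" if lamp_on else b"off")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER)
client.subscribe(BUTTON_TOPIC)
client.loop_forever()  # the "link" lives as long as this loop does
```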

The Reality Editor is already available on the Apple App Store, and there are a number of examples available for people to start tinkering with this weird yet interesting means of interacting with the world. If you’ve ever wondered how we’re going to interact with the Internet of Things, there you have it. Video below.

Thanks [Milosh] for the tip.

38 thoughts on “Augmented Reality Becomes Useful, Real”

  1. The way it can link up actions reminds me of IFTTT, but with a control interface based in augmented reality. I can see that it could make some things easier (if you couldn’t remember what you’d called the items), but it does restrict you to only being able to configure the sequences in person.

  2. Pretty cool. But what if I don’t want everything around me marked with the special design? And just looking at them, I can already tell that I don’t. Rather limits aesthetic appeal…

      1. That’s a rather fundamental issue, because without those markers the phone won’t be able to display the graphic overlays or identify what it is looking at. The pattern identifies and localizes the objects; it won’t work without it. You could use an NFC tag for identification, but in order to display those overlays you must be able to register them with the camera image – that is where the pattern is needed. (A rough sketch of that kind of marker registration follows this comment.)

        It is a neat demo, but hardly something new or groundbreaking – similar things have been done before with ARToolKit tags, various types of spatial codes, etc. Even Microsoft was showing off similar ideas with their Surface table – the only difference is that this uses objects which aren’t physically collocated (on a table) and is buzzword-compliant (IoT …).

        And, of course, they completely gloss over details like the fact that, for this to be possible, there would have to be an enormous database of which code corresponds to which object, what the “abilities” of each object are (so that you don’t connect an on/off button to something needing an analog input, or so that the data can be adapted), what is connected where … And every single widget would need to be connected to the network and actually able to talk to every other one.

        These things make it more a utopia than something realistically useful – heck, even Microsoft is not capable of making their applications work together in the manner demoed in that old Surface demo, and that is an order of magnitude simpler problem than making both the application and the hardware work together.
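        For the curious, the marker half of this is standard computer vision. A rough sketch using OpenCV’s ArUco module (the API shown is for OpenCV 4.7+, and the camera intrinsics here are placeholder values, not a real calibration):

        ```python
        # Sketch of marker-based registration: detect a fiducial and
        # recover its pose relative to the camera, which is what lets
        # an overlay stick to the object. Intrinsics are placeholders.
        import cv2
        import numpy as np

        K = np.array([[800.0, 0, 320],
                      [0, 800.0, 240],
                      [0, 0, 1]])        # assumed camera matrix
        dist = np.zeros(5)               # assume negligible distortion
        MARKER_SIZE = 0.05               # edge length in metres (assumed)

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary,
                                           cv2.aruco.DetectorParameters())

        frame = cv2.imread("frame.jpg")          # one camera frame
        corners, ids, _ = detector.detectMarkers(frame)

        if ids is not None:
            s = MARKER_SIZE / 2
            # 3D corners of the square marker, centred on its own origin
            obj = np.array([[-s, s, 0], [s, s, 0],
                            [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
            for marker_id, c in zip(ids.flatten(), corners):
                ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), K, dist)
                if ok:
                    print(f"marker {marker_id}: translation {tvec.ravel()}")
        ```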

        1. The marks could perhaps be left off, using location instead to determine what you’re looking at. GPS, WiFi, whatever else, even a bit of dead reckoning, to tell the phone where you are in the house. Then use image recognition to spot the lamp etc. yourself. Sure it’s a bit more difficult, but it gets past the tags.

          NFC would help with locating things, and again, tie it in with image recognition. Maybe connected by wifi to the phone, have a computer using CUDA or whatever doing the heavy lifting.

          Theoretically you can encode information into anything. Nice squares and straight lines are easiest to code for, but designing an attractive visual tag isn’t impossible. You could perhaps use infra-red paint that only a camera can see.

          Yes, the properties of individual devices will need programming in, but the user can do that. It only needs doing once. Have a palette of widgets – brightness, temperature, off/on, timer – and the user can drag those onto each device. He can even do that with AR. (A sketch of such a palette follows this comment.)

          As a finished product this has a fair few flaws, but as a proof of concept it’s not bad. That said, it’s a pointless concept. If I’m gonna go to the trouble of wandering round the house, I may as well use the actual switches. Quicker, less hassle, much less to go wrong. So AR not quite useful yet.
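          A minimal sketch of that widget palette, which also covers the type-check concern raised earlier in the thread (device names and capability labels are invented for illustration):

          ```python
          # Hypothetical capability registry: each device declares what
          # it emits and accepts, so a binary button can't be wired to
          # an analog input by mistake.
          DEVICES = {
              "wall_button": {"emits": {"binary"}, "accepts": set()},
              "desk_lamp":   {"emits": set(),      "accepts": {"binary"}},
              "dimmer":      {"emits": set(),      "accepts": {"analog"}},
          }

          def can_link(source: str, target: str) -> bool:
              """Allow a link only if the source emits a signal type
              the target accepts."""
              return bool(DEVICES[source]["emits"] & DEVICES[target]["accepts"])

          assert can_link("wall_button", "desk_lamp")   # binary -> binary: OK
          assert not can_link("wall_button", "dimmer")  # binary -> analog: no
          ```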

          1. Or, even cooler and surely more reliable, have stickers which are transparent in the visible spectrum but have a pattern in, for example, UV. You’d just need cameras able to sense UV.

    1. With enough compute, a phone could totally recognise and track a lamp that is just a lamp, no tracking markers or anything. The tags are basically just QR-code-esque geometric patterns that make it easy for the phone to identify and track objects.

      They could also in theory be IR constellations (and thus invisible to the naked eye), which is exactly what the Oculus Rift uses for its high resolution tracking.

    1. No no no no no. Here’s something I’ve wanted to build for *six years* now:

      Map the entire earth to one-meter cubes, up to an elevation of about 1 km above ground level. Allow people to place digital blocks in *real places* with a phone. You use a phone to view all the blocks overlaid on IRL places.

      It’s a surprisingly feasible idea; Minecraft worlds are already multiple times the size of the earth (rough numbers sketched after this comment). One cheap server could easily handle a few hundred people viewing blocks if they’re not updating. After that you could start charging $0.01 to place a block. You could easily make a few million off this idea.

      Want a dragon sitting on the St. Louis arch? Awesome. Statue of Liberty battling the Stay Puft Marshmallow Man? Done. Gigantic digital sphinx in Giza? Easy.

      The fact that this *hasn’t* been done is evidence of two things: either it’s impossible, or no one has any imagination. I saw a project like this as an ‘art installation’ in Philly several years ago, so I know it’s not impossible. That means no one has any imagination.
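      The back-of-envelope numbers behind the ‘surprisingly feasible’ claim, taking Earth’s surface area as about 510 million km² and a classic Minecraft world as roughly 60 million × 60 million blocks, 256 tall:

      ```python
      # Rough feasibility arithmetic for "one-meter cubes up to 1 km up".
      earth_surface_m2 = 510e6 * 1e6          # ~510 million km^2 in m^2
      shell_cubes = earth_surface_m2 * 1000   # 1 km of altitude, 1 m^3 cubes
      minecraft_blocks = 60e6 * 60e6 * 256    # classic world-border volume

      print(f"Earth shell:     {shell_cubes:.2e} cubes")        # ~5.1e17
      print(f"Minecraft world: {minecraft_blocks:.2e} blocks")  # ~9.2e17
      print(f"ratio: {minecraft_blocks / shell_cubes:.1f}x")    # ~1.8x bigger
      ```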

      1. There’s art installations, there’s phone apps, and there’s gigantic public reality-augmenting Minecraft servers. There’s more of a demand for the first two, particularly compared to the money it’d cost. You’re free to do it yourself, it doesn’t need any technology we don’t already have plenty of. Maybe some of the MIT Media Lab nerds could be corralled into doing it with you.

        The main issue would be keeping it consistent, although since the real world is quite closely localised, it wouldn’t be too hard to set up servers per geographical area and estimate the number of users likely to be in each area. Or you could new-fangle some “cloud” solution into it, I suppose.

        Make money by sticking ads on people’s blocks, or better, giving away ads for free, then charging people not to have their stuff defaced by them. Or perhaps by selling blocks as you suggest, but the first thousand blocks are free. Some weird degenerates managed to make millions selling “land” to other degenerates in Second Life. Whatever happened to that?

        You’ve got the imagination, and I’d guess you might know a few people who could do it. People set up Minecraft servers at home all the time. The AR, I dunno, isn’t there some free toolkit? You could set it up with no budget for private use, then get some tech company to pretend it all relies on their technology, and pay you a lot of money.

        AR + Minecraft != rocket science.

        The other big business in Second Life was selling add-on avatar penises.

      2. I think the biggest issue is clipping the blocks against real-life objects. The phone would need a 3D/depth camera such that blocks placed on the other side of buildings/signs/cars/people don’t show up in your view.
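        That clipping problem reduces to a per-pixel depth test. A sketch, assuming the phone can supply a depth map aligned with the camera image:

        ```python
        import numpy as np

        def block_visible(depth_map, px, py, block_distance):
            """Draw a virtual block at pixel (px, py) only if nothing
            real sits closer to the camera along that ray."""
            return block_distance < depth_map[py, px]

        # Fake 640x480 depth map with a wall 8 m away; a block 12 m
        # away behind it should be culled.
        depth = np.full((480, 640), 8.0)
        print(block_visible(depth, 320, 240, 12.0))  # False -> don't render
        ```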

  3. Someone has already used a reality editor to convince people this sort of stuff is somehow the next big thing rather than a recipe for vast overcomplexity, massive security risks, and huge waste of resources.

  4. Neat for playing around and making videos, useless for real life. Grabbing my phone, opening the app, pointing it at the lamp and then touching the icon takes 900X more time than simply touching the button on the lamp, or better yet using forgotten 1970s technology and making it a touch lamp. The car window example is 100% useless if I have to stand outside the car to do it. Do it from 500 feet away as I am walking up? Sure, but I already get that by holding down the unlock button on my car fob.

    What I see this being useful for is reading a ton of information. Their example in the car: the phone shows full details like the whole OBD-II data set and any errors read. Using that interface for normal functions in the car would be insane.

    It has uses, just not any of the examples they show.

    1. On the other hand, having the lamp turn on if it’s after sundown and before sunup and your phone is within x meters of it (ie, you’ve walked into the room) and turn off otherwise might be good.

      Hence the part about plugging bits together and making little programs (a rough sketch of such a rule follows this comment).

      But, yes, your last line was dead on target:
      “It has uses, just not any of the examples they show.”
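      A rough sketch of that sundown-and-proximity rule; the sunset/sunrise times and the phone’s distance are stubbed in here, where a real version would pull them from an almanac API and indoor positioning:

      ```python
      from datetime import datetime, time

      SUNSET = time(18, 30)   # stand-ins; compute from date/location in reality
      SUNRISE = time(6, 45)
      RADIUS_M = 5.0          # "phone within x meters" threshold

      def lamp_should_be_on(now: datetime, phone_distance_m: float) -> bool:
          after_dark = now.time() >= SUNSET or now.time() <= SUNRISE
          return after_dark and phone_distance_m <= RADIUS_M

      print(lamp_should_be_on(datetime(2015, 12, 1, 21, 0), 2.0))  # True
      print(lamp_should_be_on(datetime(2015, 12, 1, 13, 0), 2.0))  # False: day
      ```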

    2. What they were showing was a programming interface for connecting any compatible physical interface to any desired function of a compatible device. Once this is set up through the app it should remain that way until you decide to change it. They also showed that the functions each interface is tied to can be context sensitive based on location or time or additional physical interfaces.

  5. While definitely a neat trick for controlling stuff, the examples in the video were pretty useless, and I can’t think of a good practical everyday use.

    A desk lamp should have physical controls on it that do what the lamp does: turn on and off. It doesn’t need to connect to a network, communicate with my phone to be configured, and talk with a separate network-connected control device with a QR code on it. That’s assuming there isn’t a server or hub device included in all that. The wireless TV remote control was invented in 1955; if people wanted remote control desk lamps, we’d have them already.

    My chair adjusts without the need for electricity, let alone network access.

    An app for customizing your car’s buttons? Neat, but you probably wouldn’t use it very regularly, and it could easily be done with a dedicated app, provided the car actually supports customizing buttons.

    Maybe if you have a bunch of these controllers and items you wanted to put together for an art exhibit or special effects show, and the interacting configuration was really complicated and needed to be changeable and adaptable. Maybe for a flexible manufacturing setting where control sequences need to be altered and reconfigured regularly.

    Audio/Visual Application? Configuring controls for camera and audio streams easily maybe? Seems like a lot of equipment to replace to avoid having to plug the right device into the right port.

    Again, it’s a fun and neat system, but it’s not useful, yet.

  6. Creating and triggering automated “scenes” of devices can be useful, but I agree with most here that it is currently not something we see the value in yet. Even with home automation, you really need to get more things controllable before you can have any really useful automation. I find I struggle with reasons to create scenes (“is this really useful?”), or I stretch HA to include things like a closet light just to pad out my options for scenes I may never use. But I think we will get there one day; as more and more things are made to be actuated and connected, it makes sense that we would be looking for a way to simplify programming automation.

    The best use so far of augmented reality is the Google Translate app with the translation occurring live via the camera/screen.

  7. This idea is stupid. These examples are stupid. Anything not useful to me is stupid. Anything I don’t like is stupid. Anything more complicated than my arbitrary specification is stupid. People who make any such things are also stupid.

  8. I thought it was quite a neat concept. I read the comments before I watched the video, and based on the comments I thought it would be a dumb idea, but after seeing what they are ACTUALLY doing it makes a lot more sense.

    For example, being able to reassign light switches to different functions is a great idea, but there is some work to do in the implementation. Yeah, having codes printed on everything isn’t my cup of tea, but there are lots of design ideas that I like but others don’t.

    A more mature product would be able to just identify a light switch, for example – just like I can identify a light switch without a special marker.

    The underlying infrastructure though would be rather involved to pull it all off.

  9. IFTTT is already close to useless. This is the same. Neat idea for some niche apps, but mostly overcomplicated and not intuitive. Maybe I can connect my coffeemaker to my bed and get coffee after waking up, but then the app doesn’t allow it to take time into account, or the shift I’m working, and boom, chaos.
    Switching a light takes a fraction of a second. I’m not going to pull out my smartphone, fiddle around and wait just to switch a light.

  10. I half like this. It could be a very handy way to do the initial configuration of IoT connected devices. But it’s too impractical for everyday on/off switching of a lamp.

    For stationary IoT devices at home I’d like something like a Myo armband plus precise indoor positioning of the Myo (if it doesn’t have that already). Log the 3D location of each IoT device in the home. When the user makes, e.g., a gun gesture with the hand, trigger the action linked to that gesture for the first IoT device in the “line of pointing”, e.g. toggle a lamp on/off. Walls and other large obstacles can be mapped to avoid accidentally turning on the kitchen oven from the living room.
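    A sketch of that “line of pointing” selection: pick the nearest registered device inside a small cone around the arm’s pointing ray. The device positions and the cone angle are invented, and the wall-occlusion check is left out:

    ```python
    import numpy as np

    # Hypothetical logged 3D positions of IoT devices in the home.
    DEVICES = {
        "desk_lamp":    np.array([2.0, 0.5, 1.0]),
        "kitchen_oven": np.array([8.0, 3.0, 0.9]),
    }

    def pick_device(origin, direction, max_angle_deg=10.0):
        """Return the nearest device within a cone around the pointing ray."""
        direction = direction / np.linalg.norm(direction)
        best, best_dist = None, np.inf
        for name, pos in DEVICES.items():
            to_dev = pos - origin
            dist = np.linalg.norm(to_dev)
            cos_angle = np.dot(to_dev, direction) / dist
            if cos_angle >= np.cos(np.radians(max_angle_deg)) and dist < best_dist:
                best, best_dist = name, dist
        return best

    # Gun gesture from the room origin, pointing roughly at the lamp:
    print(pick_device(np.zeros(3), np.array([2.0, 0.5, 1.0])))  # desk_lamp
    ```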

    1. Home Automation has been around since at least the 1940s, probably earlier. And still nobody bothers, because it’s solving a problem that doesn’t exist. A poster further up mentioned he has to make an effort to find uses for the “scenes” he creates with his own automation stuff.

      Augmented reality might take off in the future, or it might not. The need for goggles means it’s not likely to be an all-day thing. Things like Google using a phone to translate menus and signs in foreign languages is a good idea, but it’s a niche. Probably AR will only ever end up solving niche problems, although it will be quite handy in those niches, and there might be many of them.

      It’s “exciting” but that’s novelty value.
