Open Hybrid Gives You The Knobs And Buttons To Your Digital Kingdom

With the sweeping wave of complexity that comes with new appliance tech, it’s easy to start grumbling about having to pull your phone out every time you want to turn on the kitchen lights. [Valentin] realized that our new interfaces aren’t making our lives much simpler, and he and the folks at the MIT Media Lab have developed a solution.

Open Hybrid takes the interface out of the phone app and superimposes it directly onto the items we want to operate in real life. The Open Hybrid interface is viewed through the lens of a tablet or smart mobile device. Over a real-time video stream, an interactive set of knobs and buttons superimpose themselves on the objects they control. In one example, holding a tablet up to a light brings up a color palette for color control. In another, sliders superimposed on a Mindstorms tank-drive toy become the control panel for driving the vehicle around the floor. Object behaviors can even be tied together so that an action applied to one object, such as turning off one light, carries over to other objects, in this case putting all the other lights out.

Beneath the surface, Open Hybrid is built on OpenFrameworks, with the hardware interface handled by an Arduino Yún running custom firmware. Creating a new application, though, has been simplified to the point of being achievable with web-friendly languages (HTML, JavaScript, and CSS). The net result is that the toolchain removes the need for extensive graphics programming knowledge when developing a new control panel.
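
For a sense of what the Yún side of a setup like this involves, here is a minimal sketch that publishes a knob position and accepts a brightness value using the stock Arduino Bridge library. To be clear, this is not the actual Open Hybrid firmware, and the key names (“knob”, “light”) and pin wiring are made up for illustration; it only shows how the Yún sketch and its Linux side can trade values that a web interface could then pick up.

```cpp
// Illustrative Arduino Yún sketch (not the actual Open Hybrid firmware).
// The Bridge library shares key/value pairs with the Yún's Linux processor,
// where a web-facing process could read the knob value and set the light.
#include <Bridge.h>

const int KNOB_PIN  = A0;  // potentiometer on analog input 0 (assumed wiring)
const int LIGHT_PIN = 9;   // LED/light driver on a PWM-capable pin (assumed wiring)

void setup() {
  pinMode(LIGHT_PIN, OUTPUT);
  Bridge.begin();          // start the link to the Linux side
}

void loop() {
  // Publish the current knob position so the Linux side can read it.
  int knob = analogRead(KNOB_PIN);   // 0..1023
  Bridge.put("knob", String(knob));

  // Fetch a brightness value (0..255) that the Linux side may have written.
  char buf[8] = {0};
  Bridge.get("light", buf, sizeof(buf));
  analogWrite(LIGHT_PIN, constrain(atoi(buf), 0, 255));

  delay(50);
}
```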

If you can spare a few minutes, check out [Valentin’s] SolidCon talk on the drive to design new digital interfaces that echo those we’ve already been using for hundreds of years.

Last but not least, Open Hybrid may have been born in the Lab, but its evolution is up to the community, as the entire project is both platform-independent and open source.

Sure, it’s not mustaches, but it’s definitely more user-friendly.

20 thoughts on “Open Hybrid Gives You The Knobs And Buttons To Your Digital Kingdom”

  1. I’m sorry, but they haven’t developed a solution to the problem of “I need to pull my phone out to turn on the lights”. They’ve developed another skin over the problem.

    1. exactly.

      let me say this for the 100th time: ‘pressing on glass’ is entirely unfulfilling. not fun, not something I enjoy, there’s no tactile feedback, you can’t find ‘buttons’ by feel on glass. it’s really a solution trying to justify itself, over and over again.

      a knob faked with bitmap or even vector graphics is still NOT A BUTTON.

      1. It is not the intention of this work to replace a physical button. Instead, it should support you with functionality that you cannot represent with a button. Your phone should be nothing more than a special kind of multitool. Hybrid objects combine the advantage of a physical button with the flexibility of a GUI. You can read about it here: http://openhybrid.org/learn%2c-setup%2c-operate.html

        The Solid talk might give you more answers to your doubts.

        1. Lemme give you a use case. It’s 06:00, and I’m stumbling out to the kitchen to load the coffee pot and scratch myself while it brews. Which is easier? Slap a switch as I go by? Or pull out my phone, launch the app, point it at the light, and find a hue and brightness that doesn’t make me want to hurl?

          It’s a pretty skin, but it doesn’t solve the problem.

          1. This is a point I’ve argued for years regarding touch screen interfaces. Stop trying to use them to emulate physical interfaces and figure out how to use them for what they are: touch screens.

            I’ve seen far too much software go to crap when the moron programmer puts too much effort into emulating a crappy hardware UI. It’s not a damn keyboard, joystick, or button. Think about that paradigm.

            To that end, Star Trek figured that shit out and that’s what I want in my house. A computer smart enough to adapt to *me* and how I function, not the other way around.

          2. You are totally right! And what you just explained is exactly what Open Hybrid has been created for. You have to understand that you use AR only once to set up your coffee machine to the right setting, and you use AR once to program the light switch to behave the way you like. Then your phone stays in your pocket and you use physical interfaces from that point on. What you are looking at here is not a replacement for your physical knobs. Think about it more as a virtual screwdriver that allows you to fix how your physical space behaves. Day-to-day operations should always stay physical, never on a touch screen.

      2. This is what happens when the tech people design interfaces. If touch screens are so great, why does even the iPhone have buttons? Tactile feedback is important.

        I have an older Behringer guitar effects processor that uses rotary encoders as digital potentiometers. The problem is that they freewheel and don’t “feel” right. It doesn’t, unfortunately, cost $$ to do things right: for example, a proper turntable has a perfectly weighted needle arm so you can easily pick it up, not a floppy bit of plastic with a needle on the end of it.

        With the explosion of microcontrollers, why not build some wireless modules with real knobs/sliders/etc?

        1. “when the tech people design interfaces. ”

          Pretty sure it was Steve Jobs that killed off buttons.
          I hate touchscreen keyboards, but I think you’re targeting the wrong group to blame. It’s designers that love their smooth bevel-less surfaces. ;)

  2. The augmented reality solution to home automation controls (or music selection) and menus is pretty slick. Who wants all that clutter and the ridiculous menu diving you see in LCD-based controllers? Pip-Boys and AR are the way of the future.

  3. I see a lot of useful applications for this tech, not just the simple turn on/off but more advanced ones. Although what I would like to know is how precise the object recognition is. Being able to point at an object and configure/control it.

    As for it being on a tablet, that is just the medium that delivers the data/information. I do not see why this could not be a contact lens at a later time, with interaction through hand gestures or even EEG when that advances enough.

    C’mon guys, see the possibilities for this tech instead of focusing on its limits.

    1. Being platform independent and built on web tech makes this platform ready to wander into your contact lenses, if you want to do that. I see the main use for AR as remixing, adjusting, and learning about the physical space whenever physical knobs reach their limits. These interactions are only a small fraction of a product’s lifetime. Day-to-day operations stay physical. These contact lenses need to be very comfortable, because 99% of the time you will not need to use them if you follow the Open Hybrid paradigms.

  4. Hmm, they claim platform independence. In practice, however, they only have an iOS app, and the visualization software they use as the basis for the entire project currently has no way to be compiled for Android or other systems… so it’s dependent. They do link to a guy that’s trying to make a wrapper for the Qualcomm software they are depending on; until then, however, it’s dependent.

  5. the other issue I have with using ‘phones’ for things other than phone things is that we now know how little we can truly trust our ‘phones’. they ARE tracking devices, bugging devices, highly insecure devices at their core.

    I don’t want any of that in the middle of my IoT world. I really don’t. it’s not wise to mix the domain of NSA spying (etc etc) with my home automation and such. if our phones were truly ours, that MIGHT be something to consider doing, but they are not ours and it’s foolish to continue thinking that phones are good for ‘everything’ that used to be done by physical buttons and dedicated NON-INTERNET systems.

    ob disc: I have designed and built my own non-internet, non-touchscreen remote and have been enjoying a return to the old ways that worked and worked well. the touch screen and ‘phone’ are not really the blessings the new generation thinks they are.

  6. What I think would be cool is if you could implement this in some kind of glasses and then, instead of using a touchscreen, implement something along the lines of those laser keyboards… not sure if this is practical, but I think that would be cooler, and it could also take care of the issue of getting your cell phone out, opening the app, and then adjusting it to your heart’s content.

  7. When we are all using AR glasses anyway, this will make more sense.
    In-context virtual sliders by devices, and specific gestures.

    Specifically being able to make a gun shape with my hand and go “pow!” at the light to turn it off.

  8. Not sure if I am missing a point here or you all are. This technology allows you to create objects that can be interconnected with each other. The Reality Editor is the iOS app used to establish this connection, but it is just an editor; I am sure others can be built.

    As far as I understand, you can have a hardware knob and connect it to a light bulb (both being Open Hybrid objects), use the editor to establish the connection, and then, in everyday life, use the knob to operate the light.
    So you do not have to use the Reality Editor and Augmented Reality to operate the objects; that is just one option.

    I agree with you about the importance of the haptic feedback of a physical user interface. In the realm of professional and luxury “tools”, a hardware user interface should always be used, but I do not think this technology is against that concept.
