Objectifier: Director Of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by allowing them to respond to unique actions or behaviour, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift — the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion, getting them to learn from our actions rather than us learning how to program them.

The device uses computer vision and a neural network to learn complex behaviours associated with your trigger commands. A training mode, driven by a phone app, lets you teach it the On and Off actions. Some actions, such as telling an open book from a closed one, take more human effort to train, but eventually the neural network does a fairly good job.
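To make the idea concrete, here is a minimal sketch of example-based On/Off training: a nearest-neighbour classifier over an 8×8 brightness grid. None of this is [Bjørn]’s actual code; the feature choice and the classifier are our illustration of the general technique, not the project’s:

```python
# A toy stand-in for the Objectifier's training mode: store a few labelled
# frames, then label new frames by nearest neighbour. The 8x8 brightness
# grid is an assumed feature, chosen only to keep the sketch tiny.
import numpy as np

class OnOffClassifier:
    def __init__(self):
        self.examples = []  # (feature_vector, "on"/"off") pairs

    @staticmethod
    def features(frame):
        # Downsample a grayscale frame to an 8x8 grid of mean brightness.
        h, w = frame.shape
        h8, w8 = h - h % 8, w - w % 8  # crop to a multiple of 8
        grid = frame[:h8, :w8].reshape(8, h8 // 8, 8, w8 // 8)
        return grid.mean(axis=(1, 3)).ravel()

    def train(self, frame, label):
        self.examples.append((self.features(frame), label))

    def predict(self, frame):
        f = self.features(frame)
        dists = [np.linalg.norm(f - ex) for ex, _ in self.examples]
        return self.examples[int(np.argmin(dists))][1]

# Synthetic demo: a bright scene stands in for "book open", dark for closed.
rng = np.random.default_rng(0)
clf = OnOffClassifier()
clf.train(rng.uniform(150, 255, (64, 64)), "on")
clf.train(rng.uniform(0, 100, (64, 64)), "off")
print(clf.predict(rng.uniform(160, 250, (64, 64))))  # -> on
```

A real system would use richer features and many more examples, but the workflow is the same: show it a few labelled situations, then let it classify live frames.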

The current version is the sixth prototype in the series, and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power, and a 230 V socket for the switched output. All the parts are put together rather neatly using laser-cut acrylic support pieces, and then further enclosed in a nice wooden enclosure.
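On the output side, driving a relay block from a Pi Zero is only a few lines. The sketch below is hypothetical: the GPIO pin, the choice of the RPi.GPIO library, and the relay polarity are all assumptions, since the build’s wiring isn’t documented:

```python
# Hypothetical relay control for the 230 V socket. BCM pin 17 is an
# assumption; check your relay board's polarity (many are active-low).
import RPi.GPIO as GPIO

RELAY_PIN = 17  # assumed pin driving the relay block

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def set_output(on):
    """Switch the socket: called with the classifier's on/off decision."""
    GPIO.output(RELAY_PIN, GPIO.HIGH if on else GPIO.LOW)

set_output(True)  # e.g. the model just recognised the "book open" pose
```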

On the software side, all of the machine learning is taken care of by “Wekinator” — a free, open-source tool for building musical instruments, gestural game controllers, and computer-vision or computer-listening systems using machine learning. The computer vision is handled via Processing. All the code is wrapped using openFrameworks, with ml4a providing apps for working with machine learning.
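Wekinator receives its inputs over OSC; by default it listens on port 6448 for messages addressed to /wek/inputs. As a sketch of how a camera process could feed it, assuming the same 8×8 brightness features as above (the real project’s feature format isn’t documented):

```python
# Send one 64-value feature vector per camera frame to Wekinator's
# default OSC input (port 6448, address /wek/inputs). Uses python-osc.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

def send_frame(frame):
    h, w = frame.shape
    h8, w8 = h - h % 8, w - w % 8
    grid = frame[:h8, :w8].reshape(8, h8 // 8, 8, w8 // 8).mean(axis=(1, 3))
    client.send_message("/wek/inputs", [float(v) for v in grid.ravel()])

send_frame(np.zeros((64, 64)))  # a dark test frame
```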

All of the above is what we could deduce from the pictures and information in his blog post. There isn’t much detail about the hardware, but the pictures tell most of the story. The software isn’t made available, but maybe this could spur some of you hackers into action to build another version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.

37 thoughts on “Objectifier: Director Of Domestic Technology”

  1. I dream of a world where objects can be controlled not by a computer neural network guessing what to do based on previous knowledge and assumptions but instead by the human directly, allowing a conscious choice of action.
    Perhaps a button or switch dedicated to the sole function of turning a lamp on or off should someone decide they want the lamp on or off, not hindered by the light turning on unnecessarily when you leave the room while opening a book or plunging you into darkness when you close it. Imagine a switch dedicated to turning your coffee machine on when you actually require it on, perhaps to be ready for when you have rinsed out your cup at the sink rather than waiting, and turning it off when you want it off, after everyone has had their coffee, instead of turning on and off constantly and wearing itself out and being really inefficient and stupid and crap. Contemplate a belt sander that you could consciously control with the flick of a switch, instead of having to deal with constant injuries when someone else puts on safety glasses while you are setting up, or turn it off when you want rather than having to take off your glasses and shield your eyes while it spins down, because it doesn’t immediately become safe just because you turn the power off. Being able to charge a phone by simply plugging it into some electricity instead of fannying around with fruit or something.
    I know it sounds insane but we must dare to dream.

    1. In two minds about this. On one side, you’re right; that element of making mistakes can be both dangerous and frustrating. But on the other, the technology can’t possibly be improved without stepping stones along the way.

    2. That sounds similar to my opinion of IoT coffee pots: to get coffee, some physical interaction is necessary. At least checking that water and coffee beans are in the machine and putting a cup under the outlet. In some machines it is even necessary to put some special overpriced single-use pre-portioned coffee capsule or pad into the machine. So I see absolutely no use in starting the brewing process via WiFi. When I put the cup under it I can hit the button.
      Similarly with charging the phone. As long as there is no robot which checks the battery state of the phone on the table and plugs it into the charger if necessary (and it is not in my pocket), I have to plug it in myself. And if I plug it in, I want it to charge.

  2. Well, that is an interesting technical feat, but I would probably lose my temper as soon as it does something based on learned expectations that isn’t aligned with my intentions. E.g., if I close the book to contemplate for a moment something I have just read, or to look again at the cover, I don’t want the lamp to go off. Humans are predictable most of the time, but in those few corner cases when they act in a novel way (from the computer observer’s standpoint), they will not be amused by the automatic reaction of “the assistant”. For AI to work with humans, it requires a much deeper understanding of the latter, or additional human awareness of the AI’s most probable actions and a facility to mitigate them if they are undesirable.

    Finally, we should choose our automation targets wisely. Some of the possible helper actions are really not necessary. What we need is an extension and expansion of our abilities and a surpassing of our shortcomings, not … artificial sycophancy.

    The other idea: that we could interact with an intelligent environment through some sort of symbolic ritual manipulation of physical objects … well, to me that is more attractive. It resonates with the long tradition of fantastic lore and “magic objects” (rubbing a lamp, waving a wand, etc.), so it could be easily understood and accepted by users.

  3. It can be annoying when people second-guess my intentions. Having a machine do it?
    Maybe it will have the ability to learn from its mistakes whilst the humans act with propensity.

    BTW, cool project. Very thought provoking and philosophical.

  4. I dream of a world where all the devices get their (0 W quiescent power consumption) electromechanical ON/OFF switch.

    Jeez.

    I mean: from a geeky POV this project is really amazing, but my best idea of dystopia is when I don’t know whether my TV serves Samsung or me (or something part-way in between), or whether my vacuum cleaner belongs more to Hoover or to me (or is striking a deal with Amazon behind my back). You get the idea.

    Kinda like this “trusted computing” euphemism. Trusted by whom? Not by me, definitely.

  5. I could see this having some great possibilities; sounding a warning when someone removes their safety glasses while power tools are running in my makerspace would be neat. We have a lot of kids there that don’t always take safety as seriously as I would like.

    1. I would suggest you need to be stricter; safety is no joke. If people are a danger to themselves and others, they need to be removed for their own sake and to show others how seriously safety needs to be taken.
      Boy I wouldn’t like to be around for training that one, sounds dangerous!

          1. But this is no American plug :-) It resembles either a very old European one (from the time before sockets had PE connectors) or something from India, as the pins are a little thicker than in Europe. I am not sure whether there were 3-pin sockets in India when I was there last year.

          2. It’s either the German/European Schuko or the Danish equivalent of it. But yeah, the lack of a ground/earth pin on the plug end is disconcerting. It’s definitely not an Indian plug/socket, although it will work here, which is a huge problem. Appliances with Schuko plugs fit Indian sockets but remain ungrounded. I’ve seen enough of them to conclude that no one bothers to change the cable or plug. It ought to be an easy fix to snip off the offending plug and fit a rewirable Indian one.

    2. I’m sorry, but trying to replace education with technology isn’t the right solution. If somebody doesn’t care about safety, he needs a dressing-down (copied from the dictionary, not sure if it’s the right word) or his permission to use $tool revoked (or some other consequence if we are talking about his job).

      1. The other side of this coin is too much trust in the tech to keep one safe. “The system won’t let me open the door if the hazard is present” – until it does, and you haven’t checked. Expecting the system to turn the stove off if it doesn’t detect cooking taking place, and so on. That doesn’t mean that the way we interact with things shouldn’t be optimized, only that we shouldn’t shift too much responsibility to autonomous systems.

        That’s the bottom line here: yes we can develop the systems to do these things, but that in and of itself is not reason enough to implement them.

  6. I think commenters have missed the forest for the trees. Yes, hooking it to a belt sander is probably a terrible idea. Yes, turning off the light when you close the book is not great. But I think people are picking apart these specific examples. I’m sure there are dozens of other things it would be great at. Do I know what they are? No, not yet. But that’s often how cool new stuff works. I started working with NFC sensors because I just felt like there was *something* cool in there, but I couldn’t figure out what it was. After a couple years of work, I think I’ve finally found a sweet spot for it, and it’s very exciting. The same goes here. The examples aren’t great, but somebody will come up with a great use case. Then another and another. That’s how something like this goes in my experience.

    1. Having it dim or turn off the lights when you sit on the couch with the TV on and no book in your hand would be handy, and turn them back on if you turn the TV off or pick up something to read.

      Having it learn to recognise complex patterns as a gesture is a cool thing. With self-learning it could potentially beat you to what you were going to do next: it learns that when you sit on the couch alone with only your phone in your hand, you’re going to want the lights off and the TV on.

  7. I can’t see the use-cases in the article being all that practical, but I think that a device like this could be very useful for enforcing safety protocols. One good example is given above in the comments already, but what about sounding an alarm if someone defeats a lockout/tagout? Preventing a machine from starting if safety equipment isn’t in place? Turning off a laser if someone enters the area without proper eyewear? I know these all sound similar, but I just can’t think of anything better right now. That’s not to say that other good uses don’t exist.
    There are probably a number of potential uses outside of safety as well.

  8. This is an interesting experiment. It shows convincingly that the concept doesn’t work, and also that it is really hard to come up with user interaction tasks to automate beyond the sensibility level of Monty Python.

    Book-operated reading light? Banana-activated phone charging? Machine-vision-controlled belt sander? I checked the date; nope, two months too early for April 1st.

    OK, maybe this is unfair feedback to a hobby project… but there are three aspects: the tech, the use cases studied and the method of concept evaluation. The tech is really nice, simple and well designed for the purpose. The use cases studied so far are all nonsense, and that is to be expected because coming up with sensible use cases is really, really difficult. Asking family members or random people for use cases is a highly inefficient process, and short of launching an App Store, the absolute yield of good ideas is just too low to register. Take a peek at the SNR (= Sense-to-Nonsense-Ratio) of the mother of all App Stores to calibrate your scale.

    Now for the method of concept evaluation. In the (pro) world of user interaction design, one would explore a concept like this through “Wizard of Oz experiments”, a.k.a. FIOYMI (“Fake it, or you make it!”). In this case, the box would contain just a webcam, and the awesome learning algorithm is called George, who sits out of view in a lab with buttons for remote-controlling the environment of the user in front of the camera. George, being human, gets it: George sees the user’s intention. But George was asked to do only ‘the automatic’ thing — banana on, apple off. Once the researchers (who may or may not include George) have identified a useful task that is worth automating, they can try to come up with a mechanism for it. This is also hard, precisely because it must be sufficiently robust and reliable, and most serious of all: the blame must go to someone else when it drives your car under the truck.

    (Spoiler: fail ahead.) I have built and installed a sound-operated light switch in my bedroom. You finish reading, clap your hands, and the light goes out. You clap again, the light goes on. It worked like a charm for a week. (And, btw, you do not need light to see that the user wants to switch on the light.) Then something outside made a loud sudden sound at 4 AM, which dimmed my youthful (age 10) enthusiasm for clap switches by a notch.
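    The clap switch’s failure mode shows up even in a toy model. Here is a minimal sketch (mine, not the commenter’s code) of the usual threshold-plus-refractory toggle logic, run on a synthetic loudness stream:

    ```python
    # Naive clap switch: toggle on any loud transient, then ignore input
    # briefly so one clap doesn't double-toggle. Any loud noise qualifies,
    # which is exactly the 4 AM failure described above.
    def clap_toggle(rms_stream, threshold=0.5, refractory=10):
        light, cooldown = False, 0
        for rms in rms_stream:
            if cooldown:
                cooldown -= 1
            elif rms > threshold:      # a clap... or a door slamming
                light = not light
                cooldown = refractory  # skip the clap's own echo/tail
            yield light

    quiet, clap, bang = [0.05] * 20, [0.9], [0.8]  # bang = the 4 AM noise
    stream = quiet + clap + quiet + bang + quiet
    print(list(clap_toggle(stream))[-1])  # the bang toggled the light again
    ```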

    So the bottom line: ask around, far and wide, for better ideas of use cases. My intuition is saying that there will be some good ones if you can find them. Maybe not in the home environment but in professional, highly specialized or repetitive environments. Think of Baxter: http://www.rethinkrobotics.com/baxter/

    1. I already have a use case. I have arthritis in my shoulders. On a bad day I can’t lift my arms above shoulder height, so if I need to turn on my reading light, I have to stand up and lean over the back of my chair. A device such as this, which I could train to recognise some hand gesture, would be of real use. I can see several others. The fact that some commenters here have no imagination is not a reason to criticise this project. I think the most significant part of it is the fact that it can learn. The use cases given may be a joke, but the device is not.

      1. You have a use case? Great! The next step is to try it out in practice, in order to develop realistic expectations of what this kind of learning can and cannot do, how many supervised training samples are required for acceptable performance in the given application, and possibly even ideas on how to improve it.

        My suggestion would be that the project initiator lends you a prototype for some weeks, if he wants, and you report back your experience, if you want. The “light switch for people with arthritis on a bad day” could be just a starting point; other use cases might present themselves while using it.

        It is just a suggestion, but I am serious. The fact that some commenters here have no imagination is also no reason not to take this thing a step closer to actual usefulness—which, as I wrote before, might be there, just hard to discover.

        Here are some suggestions of my own, not all of them strictly serious:
        – “De koffie is klaar” (“The coffee is ready”): an email is sent when the coffee maker has filled the pot after being started.
        – Control of hospital bed attitude by the presence of certain things around it. (Warning: medical device, seek professional advice before head explodes on regulatory requirements)
        – Better mouse trap: when the mouse is recognized, a door is closed. Potential issues with providing positive examples to the learning algorithm; mice are shy, fast, and generally have a distaste for mouse traps, secretly reading up on them in the USPTO’s publications.
        – Monitor that certain things (decoration objects, plants, the like) are in their usual place. Alert if not. (A rough sketch of this one follows the list.)
        – “You’ve got mail”: Monitor the mat at the doorstep. Alert when obscured by rectangular thingies.
        – Roomba watchdog: Monitor the base station and alert if Roomba went AWOL again.
        – Moon detector: recognize the moon in a certain position and play Moonlight Shadow without warning.
        – Spy game: alert when The Secret Pebble is placed in The Secret Place to pick up The Secret Message.
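        As a rough sketch of the “usual place” monitor above (my construction; the region and threshold are guesses), naive frame differencing against a stored reference already gives you an alert signal:

        ```python
        # Alert when a watched region of the camera frame diverges from a
        # stored reference image, e.g. the plant is no longer on its shelf.
        import numpy as np

        def region_changed(reference, frame, box, threshold=25.0):
            t, b, l, r = box  # (top, bottom, left, right) around the object
            diff = np.abs(frame[t:b, l:r].astype(float)
                          - reference[t:b, l:r].astype(float))
            return diff.mean() > threshold  # True -> raise the alert

        ref = np.full((120, 160), 80.0)             # plant in place
        cur = ref.copy()
        cur[40:80, 60:100] = 200.0                  # plant gone, wall showing
        print(region_changed(ref, cur, (30, 90, 50, 110)))  # -> True
        ```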

        Unless of course the true purpose of this project is some sort of “IoT performance art” or practical joke, which was the first thing that crossed my mind when I saw people play with their Objectifier like a cat. (As you might have guessed, I have worked on concepts like this for a living for some years. And you cannot, er, imagine the amount of even “nearly-good” ideas that went into the bin in those days…) Art is fine with me, fine art is even finer.

  9. First non-frightening “smart” IoT thing I’ve seen: no network connectivity (apart from Bluetooth to the app), so it’s not a botnet disaster waiting to happen, there is no big data going on, no nothing. Plus, it seems really user-friendly and would probably be cheap; that could be something great if it hits the market (and works reliably).

    I’m keeping my dumb and reliable crontab-controlled automation, though…

  10. Cool examples! But for everything on/off I would much rather have it detect one single hand gesture to toggle whichever device the arm is pointing at. These variations with cups, books, and so on are like memorizing a lot of keyboard shortcuts. The hand gesture plus arm pointing is like a simple mouse click.

  11. Given that Wekinator is free and open source, it would be nice and reasonable if the details of projects based heavily on it were documented for others (HW & SW), to provide some ground for further development.
