Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that isn’t actually there – or, in other words, one possible interpretation of the training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to produce tiny videos at random. The hallucination-like, 64×64-pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
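To give a rough idea of how such a two-stream generator can be structured, here is a minimal sketch in PyTorch (the original project is written in Torch7/LuaJIT, so this is not the authors’ code): one stream generates a moving foreground plus a soft mask, another generates a single static background frame, and the two are composited into a clip. Layer sizes and the latent dimension are purely illustrative and much smaller than the real 64×64, 32-frame output.

```python
# Minimal sketch of the foreground/background idea, not the authors' Torch7 code.
# Assumes PyTorch; layer sizes and the latent dimension are illustrative only.
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # Foreground stream: spatio-temporal (3D) deconvolutions produce a moving
        # foreground plus a mask saying where the foreground lives.
        self.fg = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 64, kernel_size=(2, 4, 4)), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.fg_rgb = nn.ConvTranspose3d(32, 3, kernel_size=4, stride=2, padding=1)
        self.fg_mask = nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1)
        # Background stream: plain 2D deconvolutions produce one static frame.
        self.bg = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 64, kernel_size=4), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=4),
        )

    def forward(self, z):
        h = self.fg(z.view(z.size(0), -1, 1, 1, 1))
        fg = torch.tanh(self.fg_rgb(h))                 # moving foreground video
        mask = torch.sigmoid(self.fg_mask(h))           # where the foreground is
        bg = torch.tanh(self.bg(z.view(z.size(0), -1, 1, 1)))
        bg = bg.unsqueeze(2).expand_as(fg)              # tile static frame over time
        return mask * fg + (1 - mask) * bg              # composite clip

clips = TwoStreamGenerator()(torch.randn(4, 100))       # four random tiny clips
```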

Currently, the team limits the clips to a resolution of 64×64 pixels and 32 frames in duration in order to decrease the amount of required training data, which is still at 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips have been judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

My Take on Assistive Tech for the Hackaday Prize

We’re in the last few weeks for entries in the 2016 Hackaday Prize, and specifically the challenge is to show off your take on assistive technology. This is a hugely broad category and I’ve been thinking about it for a while. I’m sure there’s a ton of low-hanging fruit that’s not obvious to everyone. This would be a great time to hit up the comments below and leave your “hey, I always thought someone should make…” ideas. I’m looking forward to reading them, and one might just inspire someone to spend the next couple of weeks hammering out a prototype to enter.

For me, it’s medication. I knew this could be a challenging problem, having gone through a few cycles of prescription medicines in my life. But recently I helped out a family member who was suddenly on many medications taken at eight different times a day, with individual prescriptions dosed once, twice, three, and six times per day. This was further compounded by sleep deprivation (having to set alarms at night to take the medicine) and the drowsy, woozy effects of the medicine itself. I can tell you firsthand that this is really tough for anyone to deal with, and it’s incredibly easy to make a mistake or to forget whether you took a dose.

Pill Organizers Do No More or Less

We’ve seen a number of pill organizers before, and that’s what I reached for in this case. However, that organizer only had four slots for each day. I didn’t hack it (other than writing on the doors with a Sharpie to note when to take each), but even if I had added buttons or LEDs, I’m not convinced it would have been a marked improvement.

What you see above is my proposal for the medicine problem. Smartphones have become ubiquitous, and the processing power and cameras of even budget phones are mind-blowing. I think it is entirely possible to write an app that uses computer vision to recognize pills and check them against the dosing schedule. This may mean whipping the phone out of your pocket, or designing a pill box that has a phone stand next to it (saying that makes me think of using a Raspberry Pi and a Pi camera). Grab your pills and validate them under the camera.
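As a back-of-the-envelope sketch of the “validate pills under the camera” step, something like the following OpenCV snippet could pick out pill-shaped blobs on a plain tray and describe them by size and color. The thresholds and the idea of matching by area and hue are my own assumptions for illustration, not a tested pipeline.

```python
# Rough sketch, assuming OpenCV on a Raspberry Pi or a phone frame grab.
# Thresholds are made up for illustration; a real app would need calibration
# and a proper pill classifier.
import cv2
import numpy as np

def detect_pills(frame_bgr):
    """Return (area_px, mean_hue) for each pill-like blob on a plain dark tray."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Anything reasonably bright against the dark tray counts as a candidate pill.
    mask = cv2.inRange(hsv, (0, 0, 80), (179, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pills = []
    for c in contours:
        if cv2.contourArea(c) < 200:             # ignore specks
            continue
        blob = np.zeros(mask.shape, np.uint8)
        cv2.drawContours(blob, [c], -1, 255, -1)
        mean_hue = cv2.mean(hsv, mask=blob)[0]   # crude color signature
        pills.append((cv2.contourArea(c), mean_hue))
    return pills
```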

Useful Augmented Reality

The screen of the phone would use augmented reality to overlay information about the pills it sees: you know, like Pokemon Go but in a way that enriches your life. ‘Pills, catch ’em all!’ New pills can be learned on the fly, taking the user to a screen to identify the pill and the dosing schedule. Taking the validation picture would record when the medicine was taken, and the natural extension of this system is a pharmacy’s ability to push your dose schedule to your account when you pick up the prescription. A stretch goal would be keeping an eye out for drug interactions.
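The bookkeeping side of this is simple enough to sketch in a few lines; the field names and times below are invented purely to illustrate the idea of logging a dose at validation time and checking it against a pushed schedule.

```python
# Toy sketch of the dose log; not a real pharmacy data format.
from datetime import datetime, timedelta

schedule = {                      # imagined as pushed by the pharmacy
    "metformin": [8, 20],         # hours of the day a dose is due
    "antibiotic": [6, 12, 18],
}
taken_log = []                    # (pill, timestamp) appended when the photo validates

def record_dose(pill_name):
    taken_log.append((pill_name, datetime.now()))

def recently_taken(pill_name, window_hours=2):
    """Answers 'did I already take this?' without relying on memory."""
    cutoff = datetime.now() - timedelta(hours=window_hours)
    return any(p == pill_name and t >= cutoff for p, t in taken_log)
```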

This is all very much like how hospitals do it: they scan bar codes on the packaging and on the patient’s bracelet and record the dose. This would be an easier user experience, and quite frankly I think companies already working in computer vision and augmented reality (like Snapchat and Niantic) could whip this up in a single-day hackathon, no problem.

Is it the perfect system? Maybe not. But there is no perfect system or we’d be using it by now. We need you, the world’s talent pool, to step up and make life a little better. Do it in prototype form by October 3rd and you’ll be eligible for one of twenty $1000 cash prizes and a chance at winning the Hackaday Prize. But even if you don’t build a single thing, one idea could be the spark that lets others change the world for the better. So let’s hear it!

Add Robotic Farming to Your Backyard with Farmbot Genesis

Growing your own food is a fun hobby and generally as rewarding as people say it is. However, it does have its quirks, and it definitely requires quite the time investment. That’s why it was so satisfying to watch Farmbot push a weed underground. Take that!

Farmbot is a project that has been going on for a few years now (it was a semifinalist in the 2014 Hackaday Prize), and that development time shows in the work documented on their website. The robot can plant, water, analyze, and weed a garden filled with arbitrarily chosen plant life. It’s low power and low maintenance. On top of that, every single bit is documented, and the documentation is really well done and thorough. They are gearing up to sell kits, but if you want it now, just do it yourself.

The bot itself is exactly what you’d expect if you were to pick out the cheapest, most accessible way to build a robot: aluminum extrusions, plate metal, and 3D-printed parts make up the frame. The brain is a Raspberry Pi hooked to its regular companion, an Arduino. On top of all this is a fairly comprehensive software stack.

The user can lay out the garden graphically, and they can get as macro or micro as they’d like about the routines the robot uses. The robot will happily come to life at intervals and manage the garden. They hope that by selling kits they’ll interest a whole slew of hackers who can contribute back to the problem of small-scale robotic farming.
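To make the “come to life at intervals” idea concrete, here is a toy routine of the kind such a bot might run; the coordinates and the move/dispense calls are placeholders, not Farmbot’s actual API.

```python
# A back-of-the-envelope sketch of an interval watering routine; not Farmbot code.
import time

garden = [                      # laid out graphically in the real web interface
    {"name": "basil",   "x": 100, "y": 200, "ml": 50},
    {"name": "lettuce", "x": 300, "y": 200, "ml": 80},
]

def water_round(move_to, dispense):
    for plant in garden:
        move_to(plant["x"], plant["y"])     # gantry moves over the plant
        dispense(plant["ml"])               # open the water valve briefly

def run_forever(move_to, dispense, interval_s=6 * 3600):
    while True:
        water_round(move_to, dispense)
        time.sleep(interval_s)              # come back to life a few times a day
```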

Hackaday Prize Entry: Harmonicas, Candy, And Van Halen

Watch enough How It’s Made, and you’ll soon become very enthusiastic about computer vision and compressed air. In factories all around the world, production lines automatically sort the wheat from the chaff by running a product underneath a camera and blowing defective product off the line.

For his Hackaday Prize entry, [Fabien] is attempting this same task. He’s building a machine that will rapidly sort candy with computer vision and precisely controlled jets of air. He’s also planning for the Van Halen reunion and building a CNC harmonica.

Right now, the design has a hopper full of M&Ms dropping through a channel where a camera looks at each individual piece of candy. A Raspberry Pi, a camera, and OpenMV detect all the red, yellow, brown, and blue M&Ms and send that information to a computer controlling a suite of pneumatic valves. When these valves open, candy of each color is shuffled off into its own bin. It’s the perfect device for someone responsible for reading Van Halen’s rider.
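The sorting logic itself boils down to “classify the color, open the matching valve.” A sketch of that step might look like the snippet below; the HSV ranges and valve pin numbers are guesses for illustration, not [Fabien]’s actual values.

```python
# Sketch of the color-sort step; pin mapping and hue ranges are hypothetical.
import cv2

VALVE_PIN = {"red": 17, "yellow": 27, "brown": 22, "blue": 23}
HUE_RANGE = {"red": (0, 10), "brown": (10, 20), "yellow": (20, 35), "blue": (100, 130)}

def classify(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h = int(hsv[..., 0].mean())            # crude: average hue of the whole ROI
    for color, (lo, hi) in HUE_RANGE.items():
        if lo <= h <= hi:
            return color
    return None

def sort_piece(frame_bgr, pulse_valve):
    color = classify(frame_bgr)
    if color is not None:
        pulse_valve(VALVE_PIN[color])      # puff of air pushes it into its bin
```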

In an interesting little side project, [Fabien] needed a way to test the pneumatic valves before building the color sensor and candy chute. He had a harmonica lying around, and built something we’re surprised we’ve never seen before. It’s a CNC harmonica, capable of belting out a few tunes. You can check out that testing video after the break.

Continue reading “Hackaday Prize Entry: Harmonicas, Candy, And Van Halen”

Eye Tracking Makes the Musical Eye Conductor for Everyone!

For his final project at the Copenhagen Institute of Interaction Design, [Andreas Refsgaard] decided to make something that matters: a system that allows anyone to control a musical instrument using only their eyes and facial expressions. Someone should enter this into a certain contest that’s running…

Dubbed the Eye Conductor, [Andreas]’s creation is a highly customizable system whose control interface can be operated using only your eyes and some facial expressions. It was designed with the intent of letting everyone enjoy playing music, and [Andreas] user-tested it at schools, at housing communities for people with physical disabilities, and with anyone he could find in a wheelchair. He intends to continue the project so that all people can enjoy playing music.

The system is open, designed for inclusion, and can be customized to fit the physical abilities of whoever is using it.
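This is not [Andreas]’s code, but the core mapping an eye-controlled instrument needs is small enough to sketch: gaze height picks a note and a facial gesture triggers it. The tracker and MIDI interfaces here are placeholders.

```python
# Toy sketch of gaze-to-note mapping; tracker and MIDI hookups are placeholders.
NOTES = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale, MIDI note numbers

def gaze_to_note(gaze_y, screen_height):
    """Higher on screen = higher pitch; snap the gaze row to one of eight notes."""
    row = min(int(gaze_y / screen_height * len(NOTES)), len(NOTES) - 1)
    return NOTES[len(NOTES) - 1 - row]

def on_frame(gaze_y, blink_detected, screen_height, send_midi_note):
    if blink_detected:                       # facial gesture acts as the trigger
        send_midi_note(gaze_to_note(gaze_y, screen_height))
```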

Continue reading “Eye Tracking Makes the Musical Eye Conductor for Everyone!”

Robot Cheats at Rock Paper Scissors

It is hard enough to beat computers at games like chess. Now robotics engineers at the Ishikawa Watanabe Laboratory in Japan have created a janken robot that wins every time (if you didn’t know, janken is the Japanese name for rock-paper-scissors). How can it win every time? Easy. It cheats.

The janken robot evolved through three different versions. In the first version, the robot would recognize the human player’s hand with a high-speed camera and then move its own hand to a winning counter play with about a 20 millisecond delay. In the second version, the delay was greatly reduced.

However, in the third version, the robot uses a scanning technique to capture an entire field of view and determines what play the human is making. Again, a winning counter play is instantly produced by the robotic hand.
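Once the camera has classified the human’s hand, picking the winning response is trivial; the hard part is doing the recognition within a few milliseconds. A toy version of the counter-play lookup:

```python
# The "cheat" in a nutshell: the gesture recognition (the hard, fast part) is
# stubbed out; choosing the winning play is a one-line lookup.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_play(detected_gesture):
    return COUNTER[detected_gesture]

assert counter_play("rock") == "paper"
```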

Continue reading “Robot Cheats at Rock Paper Scissors”

Hackaday Prize Semifinalist: Picking Up Litter With Robots

On beaches, in parks, and in [BDM]’s back yard, there’s a lot of litter everywhere. The normal solution to this problem is to hire someone or find some volunteers to pick up all this trash. We’re living in the future, though, and that means robots. For his Hackaday Prize entry, [BDM] is building a robot that picks up trash.

A robot that picks up litter is a very, very interesting problem. It can’t be controlled by a person, or else it would be more efficient to just get out there and kill your back picking up bottles. This means it must work autonomously, and that means identifying litter, picking it up, and disposing of it.

For the identification part of the problem, [BDM] is using computer vision that captures an RGB image and distinguishes litter from natural objects. Right now the computer vision is far from perfect, but it does a very good job, all things considered.
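One simple way to distinguish litter from natural clutter in an RGB frame is to flag saturated, non-green blobs, since vegetation and soil are mostly greens and browns. The sketch below takes that approach; the thresholds are illustrative and not [BDM]’s actual pipeline.

```python
# Illustrative litter-candidate finder; thresholds are guesses, not [BDM]'s values.
import cv2
import numpy as np

def find_litter_candidates(frame_bgr, min_area=300):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturated = cv2.inRange(hsv, (0, 80, 80), (179, 255, 255))   # colorful stuff
    green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))       # grass, leaves
    mask = cv2.subtract(saturated, green)                        # colorful but not green
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```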

The next biggest problem is picking the trash up and disposing of it. For this, [BDM] has repurposed a Power Wheels and attached a DIY robot arm. It’s not a very powerful arm, and a children’s toy probably isn’t the best platform, but it is the start of something very, very cool.

You can check out [BDM]’s video for the project below.


Continue reading “Hackaday Prize Semifinalist: Picking Up Litter With Robots”