Researchers at the University of Liège have developed an algorithm to separate movement from background. They call it ViBe, and this patented piece of code comes in at under 100 lines of C. Above you can see the proof of concept, created by hacking the code into CHDK, an alternative firmware for Canon PowerShot cameras. The package is available for non-commercial use and might be just the thing you need to get your project to recognize where it needs to serve the beer.
[Thanks Juan via Slashdot]
28 thoughts on “Motion Sensing Camera Hack”
“100 lines of C”
OMG, why do they need 100 lines of code?
It’s just comparing pixels in a loop, isn’t it?! :|
I can’t speak to these people’s patented algorithm. But motion (http://www.lavrsen.dk/foswiki/bin/view/Motion/MotionTechnology) (apt-get install motion) has been around for more than 5 years and produces the same results.
Quick, someone shove that camera a bit so we can see if there is camera movement compensation. If not, there is nothing to see here. Move on.
Looking at the results on some video feeds, it clearly does not account for any camera movement, as it’s simply processing video and comparing changes in pixels. That being said, I have a feeling their algorithm is more complex than just looping through the pixels and checking each one for a change, as that technique has been around for years. It is probably doing something based on regions and then drilling down to each pixel change.
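For reference, the years-old naive technique the comment describes (comparing each pixel against the previous frame) can be sketched in a few lines of C. The grayscale frame layout and the threshold value here are illustrative assumptions, not anything taken from the ViBe paper:

```c
#include <stdlib.h>

/* Naive frame differencing: flag a pixel as "motion" when its
 * grayscale value differs from the previous frame by more than
 * a threshold. Returns the number of pixels flagged. */
static int detect_motion(const unsigned char *prev,
                         const unsigned char *cur,
                         unsigned char *mask,
                         size_t npixels,
                         int threshold)
{
    int changed = 0;
    for (size_t i = 0; i < npixels; i++) {
        int diff = abs((int)cur[i] - (int)prev[i]);
        mask[i] = (diff > threshold) ? 255 : 0;
        if (mask[i]) changed++;
    }
    return changed;
}
```

This is exactly why the naive approach falls over on a moving camera: every pixel changes between frames, so the whole mask lights up.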
Cool, also see MJPEG security freeware for PC based projects: http://brooksyounce.tripod.com/
Patented algorithm? Since when can an algorithm be patented? This is retarded …
I agree, I don’t see how great this is unless it is able to actually detect movement while the camera is moving. They show no videos (that I found anyway) on their site of anything other than a static camera.
This is still damn cool. I will probably incorporate this in the security system I’m developing. One thing at a time though, need the rest working first.
Back in ’00 in graduate school, we used Bayesian algorithms to analyze aerial photos to distinguish between cars, carrion, etc., with the hope of measuring traffic flows. Seriously, there’s nothing new here. In fact, if you wanted to do this with moving cameras, I bet it would work there as well, as long as the camera wasn’t moving too fast.
PS: don’t ask me to try to remember the actual algorithm or code, it’s been way, way too long.
Mike, this is just a crude attempt for this patent-trolling prof to get more publicity for his invention. It goes against the spirit of academic openness *and* against the spirit of hacking.
This isn’t worthy of hackaday.
It saddens me that Sony won’t make a version of CHDK. Had I known about CHDK I would have bought a Canon; my friends with Canons are too yellow to put CHDK on their cameras… I’m sure if you could find a serial connection on a Sony you could get at the OS and tweak it… there’s a mission…
CHDK doesn’t modify the camera firmware (as it runs entirely in memory), why would they be ‘too yellow’?
The very last video is a car sequence (camera in a car going down highway)
This CAN’T be patented; there is prior art:
The EffecTV program for Linux features an effect called “hologram” or something like that, which does exactly the same thing, but instead of a butt-ugly green overlay, it overlays a cool retro sci-fi hologram effect.
So don’t worry too much about breaking “their” “patent”.
Patents must die.
hmm 100 lines of C code, .o file…cue dun dun duhh music
You know nearly every video codec of the last few decades has used motion detection to compress video.
And this is in fact just comparing pixels. In short, it’s backwards, and I’m going to say retarded, to release this now as something so novel and clever. The one claim to fame they can make is putting it in the Canon, but seeing as Canon cameras also have video compressors on-chip, it could probably be done much better using that hardware to assist.
We live in a day and age where cheap consumer cameras can freaking detect when you are smiling, and this should impress? Come on now.
Claiming there is nothing novel here is like claiming that there was nothing novel about quicksort because bubble sort already existed. It’s naive and foolish. I skimmed through the paper. The algorithm is new, and they are just showing off the efficiency by running it on CHDK.
I will agree though that it goes against the spirit of academic research to patent algorithms.
I quickly skimmed over the patent papers, and it seems the algorithm uses an interesting adaptive background subtraction technique. Putting the whole issue of software patents aside, I’m not sure whether this patent is justifiable at all in terms of originality.
It IS interesting (I skimmed the algorithm part of the paper) but it seems more like an incremental improvement. It seems marginally better than naive background subtraction if you forget about the ‘history’ aspect of what this algorithm is doing. However, I cannot speak to whether or not it is a huge improvement in efficiency.
Yes, it goes against the spirit of academia. But Page and Brin patented their algorithms. GIF was patented. RSA has patents on encryption algorithms. I think even the SUSAN corner detection algorithm is patented. I’m for openness, but I’m also for people being able to profit from their hard work.
Sorry, this is patented? Do they realise that Apple has had essentially this idea/software built into Photo Booth for many years?
I’ve been looking for a background differencing for a webcam. Does anyone know of a simple API (Java, C# or C++) that I could use? My aim would be to pump the result to an MCU controlled LED matrix.
Prior art doesn’t automatically invalidate patents. Patents like these don’t protect the outcome, but the way to get there, and there are many ways of doing that.
This algorithm seems to work well. It’s short, math-light, and integer-based, so it’s good for a lot of basic hardware; that’s one of the key differences from other algorithms.
You can implement just about any image processing algorithm using integer arithmetic.
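To illustrate that point, even a smoothed running-average background model can be kept in pure integer arithmetic using 8.8 fixed-point values. The scale factor and update rate below are arbitrary choices for the sketch, not parameters from any particular algorithm:

```c
/* One background pixel tracked as an 8.8 fixed-point value
 * (the true value times 256). The running-average update
 * b += (x - b) / 16 needs no floating point at all. */
typedef struct { int acc; } bg_pixel;

static void bg_init(bg_pixel *p, unsigned char x)
{
    p->acc = x << 8; /* store with 8 fractional bits */
}

static void bg_update(bg_pixel *p, unsigned char x)
{
    /* integer exponential moving average, alpha = 1/16 */
    p->acc += ((x << 8) - p->acc) / 16;
}

static unsigned char bg_value(const bg_pixel *p)
{
    return (unsigned char)(p->acc >> 8); /* back to 0..255 */
}
```

The extra fractional bits are what let a slow 1/16 update rate actually converge instead of rounding away to nothing.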
Am I the only person who recognizes that Photo Booth (in Mac OS) has been able to do this for years? It’s not that hard. So congrats to you U of L people for re-inventing something that didn’t need reinventing. This would be acceptable to me if they said that they “ported the ability to filter motion from the background” instead of “developed an algorithm”.
You really bastardized the hell out of that write-up. The Slashdot article was much more informative. You made it sound like motion detection and background subtraction had never been done before. The cool thing here is that it’s being done directly on the camera.
The principle of “background subtraction” has existed for about 30 years now. Television too…
Why would this mean that you cannot innovate on it anymore?
On the author’s site, there is a sequence for a version of the algorithm that only requires 1 comparison per pixel and 1 byte of memory. To me, this seems to be the absolute bottom line in terms of computational resources. Not surprising then that you can embed it in a digital camera. Nice demo anyway.
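For comparison, the classic sigma-delta background estimator (a well-known technique, not the patented ViBe method itself) hits exactly that footprint: one byte of background state per pixel, nudged by ±1 toward each new frame, plus a single thresholded comparison. A minimal sketch, with an illustrative threshold:

```c
/* Sigma-delta background estimation: the per-pixel background
 * estimate is a single byte, moved one step toward each new
 * frame, so slow lighting drift is absorbed while fast changes
 * stand out. Returns 255 for foreground, 0 for background. */
static unsigned char sd_update(unsigned char *bg, unsigned char cur,
                               int threshold)
{
    if (*bg < cur) (*bg)++;
    else if (*bg > cur) (*bg)--;

    /* foreground if the frame still differs strongly from the model */
    int diff = (cur > *bg) ? cur - *bg : *bg - cur;
    return (diff > threshold) ? 255 : 0;
}
```

One nice side effect of the ±1 update is visible in the demos: an object that stops moving stays highlighted for a while, then slowly melts into the background.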
Should also be noted that by releasing it in binary-only form they’re (quite significantly, in my opinion) limiting it to x86-only applications; you can’t use it on one of the many embedded ARM systems with a camera.
I quite like the way this algorithm keeps objects highlighted even after they have stopped moving. Does anyone know of a more open-source algorithm that does that? I’m using AForge at the moment and I’m looking for a faster/better alternative.