[youtube=http://www.youtube.com/watch?v=iKYCob7getU]
In this writeup, you can see how to build a cheap compound eye system for your robot. Using 4 IR LEDs and 4 phototransistors, [oddbot] gave “Mr General” the ability to track moving objects fairly well, provided they stay within 200 mm. Being IR-based, it has the usual drawbacks, such as sensitivity to ambient light and overly reflective surfaces, but we like the idea. It is perfect for a nocturnal or low-light robot.
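For flavor, here is a minimal Arduino-style sketch of how a four-LED, four-phototransistor eye like this could be read and used to steer a pan/tilt head. The pin assignments, thresholds, and servo ranges below are our own assumptions for illustration, not [oddbot]'s actual code:

```cpp
// Minimal sketch of the 4-LED / 4-phototransistor eye idea.
// Pins, thresholds, and servo limits are assumptions, not Mr General's wiring.
#include <Servo.h>

const int IR_LED_PIN = 2;                  // all four IR LEDs pulsed together
const int SENSE_PIN[4] = {A0, A1, A2, A3}; // up, down, left, right phototransistors

Servo pan, tilt;
int panPos = 90, tiltPos = 90;

void setup() {
  pinMode(IR_LED_PIN, OUTPUT);
  pan.attach(9);
  tilt.attach(10);
}

// Read one sensor with the LEDs on, then off; the difference rejects
// ambient light so only IR reflected from a nearby object remains.
int readReflection(int pin) {
  digitalWrite(IR_LED_PIN, HIGH);
  delayMicroseconds(500);
  int lit = analogRead(pin);
  digitalWrite(IR_LED_PIN, LOW);
  delayMicroseconds(500);
  int dark = analogRead(pin);
  return lit - dark;
}

void loop() {
  int up    = readReflection(SENSE_PIN[0]);
  int down  = readReflection(SENSE_PIN[1]);
  int left  = readReflection(SENSE_PIN[2]);
  int right = readReflection(SENSE_PIN[3]);

  // Steer toward whichever side sees the stronger reflection;
  // the dead band of 20 counts keeps the head from jittering.
  if (up - down > 20)    tiltPos = constrain(tiltPos + 1, 30, 150);
  if (down - up > 20)    tiltPos = constrain(tiltPos - 1, 30, 150);
  if (left - right > 20) panPos  = constrain(panPos + 1, 30, 150);
  if (right - left > 20) panPos  = constrain(panPos - 1, 30, 150);

  pan.write(panPos);
  tilt.write(tiltPos);
  delay(20);
}
```

Pulsing the LEDs and subtracting the dark reading is what keeps ambient light from swamping the reflection signal; a real build would also want a minimum-reflection gate so the head doesn't hunt in an empty room.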
[via Hacked Gadgets]
Weird how such a thing can make it seem alive.
it would be cool to see it interacting with a dog or a cat
I like it, looks remarkably effective for such small outlay in sensing. Very good.
CuteBot!
If anyone is interested in taking a look at more advanced implementations of optical flow sensors, have a look at http://www.diydrones.com and search, but for real state of the (civilian) art, take a look at http://centeye.com/pages/techres/flightvideogallery.html. They have video of an autonomous model helicopter using arrays of optical flow sensors, amongst others, and actually sell experimental sensors to the public. They have the sensors manufactured in batches, so they don’t always have them in stock. Finally, their explanation of how it all works is very clear.
It’s fascinating to watch it move. Very good!
how cute :D good job
A good rule of thumb is that if a new-millennium company invents or markets some radical new technology, you will almost always discover that someone was already using it 20 years before, and published. More often than not, someone else did it 20 years before that.
In 1976, I avidly followed published research which consisted of doing just this with simple logic, but minus all the proprietary razzle-dazzle.
They were attempting to model fly and frog eyes using photo transistors, and were able to follow a variety of moving objects fairly well.
Circa 1973, someone discovered that you could pry the metal caps off early dynamic RAM chips and that the exposed dies were affected by light. These eventually became the “CMOS imagers” and CCDs we use now (well, technically a CCD is different, but close enough for discussion, since the original CCDs were linear arrays rather than square matrices), and everyone knows how that turned out.
Don’t get me wrong – I applaud ingenuity and research, and these guys obviously have figured out how to mate a CCD with a cheap processor and create an integrated subsystem. We need more and better building blocks to accomplish cool things with, and they’re doing it.
Having said that, the formulas for tracking edges and calculating speed are pretty simple, especially at the relatively low resolution and accuracy provided by the Centeye chip.
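To make that concrete, here's a toy example of the kind of arithmetic I mean: sum-of-absolute-differences matching over an 8-pixel line to find how far a bright edge moved between frames. The pixel values and frame rate are made up for illustration:

```cpp
// Toy 1-D optical-flow estimate over an 8-pixel line, the kind of math a
// 1970s phototransistor array (or the Centeye parts) would need.
#include <cstdio>
#include <cstdlib>
#include <climits>

// Find the shift (in pixels) that best aligns frame B with frame A by
// minimizing the mean absolute difference over candidate shifts.
int bestShift(const int a[8], const int b[8], int maxShift) {
  int best = 0;
  long bestErr = LONG_MAX;
  for (int s = -maxShift; s <= maxShift; ++s) {
    long err = 0;
    int n = 0;
    for (int i = 0; i < 8; ++i) {
      int j = i + s;
      if (j < 0 || j >= 8) continue;   // skip pixels that shifted off the array
      err += labs((long)a[i] - b[j]);
      ++n;
    }
    if (n > 0 && err / n < bestErr) { bestErr = err / n; best = s; }
  }
  return best;
}

int main() {
  int frame1[8] = {10, 10, 200, 200, 10, 10, 10, 10}; // bright edge at pixel 2
  int frame2[8] = {10, 10, 10, 200, 200, 10, 10, 10}; // same edge, one pixel right
  double dt = 0.02;                                   // 50 frames per second
  int shift = bestShift(frame1, frame2, 3);
  printf("edge moved %d px -> speed %.0f px/s\n", shift, shift / dt);
  return 0;
}
```

Divide the shift by the frame time and you have speed; at 4×4 or 8×8 resolution that would fit comfortably on a baby AVR, which is rather my point.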
The 1976 researchers were using a 4×4 phototransistor array and were able to find and issue… wait a minute! Maybe I should get into this business… a baby AVR and a pinhole camera chip… hmmmmmmm. :)
Nah, just an old guy trollin’. Carry on Centeye, and more power to you. Please stay out of helicopters as passengers. They’re dangerous.
That is all.
@Andrei
Depressingly true.
To be fair to Centeye, to whom I have no connection whatsoever, their market is UAVs and similar, and given the difficulty of building autopilots for helicopters, their solution seems impressive, especially given its simplicity.
That said, insects with compound eyes are neither new nor rare, and their brains are obviously very small, so perhaps the most surprising thing here is that it has taken this long for the approach to make even the minor progress it has. Even more so given that nature seems to much prefer dedicated hardware to computational throughput, so, as you say, the clues were there.
Still, they say we only use one third of our brains. I wonder what the other third is for?
the tracking reminds me of the fully autonomous sentry guns they sell at http://www.paintballsentry.com but theirs use a camera for image processing.