Robot Vision: Detecting Obstacles with FPGAs and line lasers

Somewhere down the road, you’ll find that your almighty autonomous robot chassis is going to need some sensor feedback. Otherwise, that next small step may end with a blind leap off the coffee table. The first low-cost sensors we might throw at this problem are sonars or IR rangefinders, but there’s a catch: they only return a distance reading for the single narrow spot directly ahead of them.

Rest assured, [Jonathan] wrote in to let us know that he’s got you covered. Combining a line laser, camera, and an FPGA, he’s able to detect obstacles that fall within the field of view of the camera and laser.

If you thought writing algorithms in software was tricky, wait until you try it in hardware! (We know: division sucks!) [Jonathan] knows no fear though; he’s performing gradient computation directly on the FPGA to detect the laser in the camera image at a wicked 30 frames per second. Why roll up your sleeves and take the hardware route, you might ask? [Jonathan] estimates that a CPU-based approach at this tiny embedded-robot scale would manage a mere 10 frames per second. With an FPGA, we’re able to process images about as fast as they’re received.
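The heavy lifting happens in raw logic, but the idea itself is simple enough to sketch in software. Here is a rough Python/NumPy illustration of gradient-based laser detection: column by column, find the sharpest jump in "redness" and call that the laser line. The function and thresholds below are our own sketch, not [Jonathan]'s actual pipeline.

```python
import numpy as np

def find_laser_rows(frame_rgb, min_strength=30):
    """Locate a red line laser in each image column by its vertical gradient.

    frame_rgb: HxWx3 uint8 image (RGB order assumed). The laser shows up as
    a sharp bright red band, so the strongest vertical gradient in each
    column is taken as its position. Returns a length-W array of row
    indices, with -1 where no convincing laser edge was found.
    """
    # Emphasize the laser: red channel minus the average of green and blue.
    r = frame_rgb[:, :, 0].astype(np.int16)
    gb = (frame_rgb[:, :, 1].astype(np.int16) + frame_rgb[:, :, 2]) // 2
    redness = np.clip(r - gb, 0, 255)

    # Vertical gradient: difference between neighbouring rows, per column.
    grad = np.abs(np.diff(redness, axis=0))

    rows = grad.argmax(axis=0)            # strongest edge in every column
    strength = grad.max(axis=0)
    rows[strength < min_strength] = -1    # reject columns with no clear hit
    return rows
```

In [Jonathan]'s design the equivalent differencing and peak-finding happen in logic as pixels stream in from the sensor, which is why it keeps pace with the camera at its full frame rate. Once you know which row the laser lands on in each column, simple triangulation between the laser and camera turns that offset into an obstacle distance.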

[Jonathan] is using the Logi Board, a Kickstarter success we’ve visited in the past, and all of his code is up on the Githubs. If you crack it open, you’ll also find that many of his modules are Wishbone compliant, so pulling just the pieces you need into your own projects is much easier than trying to rip useful features out of a sea of hairy logic.

With computer-vision hardware keeping such a low profile in the hobbyist community, we’re excited to hear more about [Jonathan’s] FPGA-based robotics endeavors.

Continue reading “Robot Vision: Detecting Obstacles with FPGAs and line lasers”

DIY Pick and Place just getting under way


It’s not totally fair to say that this project is just getting under way. But the truth is it neither picks nor places, so there’s a long road still to travel. Still, we’re impressed with the demonstrations of what [Daniel Amesberger] has achieved thus far. Using the simplest of CNC mills he’s finished the frame and gantry for the device. You can see some of the parts on the left after going through an anodizing process that leaves them with that slick black finish.

The demo video shows off the device by driving it with a joystick. It’s fast, which gives us hope that this will rival some of the low-end commercial pick and place machines. He’s already been working on the software, which runs on a mini-ITX computer and includes a Gerber file interpreter plus some computer vision for a visual check on part placement. He hasn’t gotten around to building the parts feeders, but we’ll keep you updated as we hear back from him.
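We haven't seen [Daniel]'s source, but the Gerber half of that software is less intimidating than it sounds. As a toy illustration (not his code), pulling pad flash locations out of an RS-274X file can be done with a single regular expression, ignoring the format's modal coordinates and other corner cases:

```python
import re

def pad_flashes(gerber_text, decimals=4):
    """Pull pad centers out of the flash (D03) commands in a Gerber file.

    Toy parser: assumes every flash spells out both X and Y, and that the
    coordinate format uses `decimals` fractional digits (e.g. %FSLAX24Y24*%).
    Real Gerber has modal coordinates and plenty of other cases; this is
    just the core idea, not a full interpreter.
    """
    scale = 10 ** decimals
    pads = []
    for m in re.finditer(r"X(-?\d+)Y(-?\d+)D0?3\*", gerber_text):
        pads.append((int(m.group(1)) / scale, int(m.group(2)) / scale))
    return pads
```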

Continue reading “DIY Pick and Place just getting under way”

Flocking behavior using Mindstorm robots


Do you ever wonder why geese always fly together in a V-shape? We’re not asking why it works (flying in formation lightens the workload for every bird but the leader), but how do all the geese know to form up like this? The answer is flocking behavior, and it has long been a subject of fascination when it comes to robotics. [Scott Snowden] researched the topic while working on his degree a few years ago. Above you can see the demonstration of the behavior using LEGO Mindstorm robots. That’s certainly interesting and you’ll want to check out the video after the break. But his offering doesn’t end with the demo. He also posted a huge article about his work that will provide days of fascinating reading.

We can’t begin to scratch the surface of all that he covers, but we can give you a quick primer on his Mindstorm (NXT) setup. He uses three bots along with a central brick (the computer part of the NXT hardware) that communicates with them. That link lets him bring in a wide range of powerful tools like MATLAB and Processing: a top-down camera recognizes each robot, and position data harvested with computer vision is passed back to the bots. From there it’s a wild ride of modeling the behavior as a set of algorithms.
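His write-up digs into those algorithms properly; the classic starting point for this sort of thing is Reynolds' three boids rules: separation, alignment, and cohesion. As a minimal illustration (our sketch, not [Scott]'s code), one update step might look like this:

```python
import numpy as np

def flock_step(pos, vel, dt=0.1, r_neigh=1.0, r_sep=0.3,
               w_sep=1.5, w_align=1.0, w_coh=1.0, v_max=0.5):
    """One update of Reynolds-style flocking: separation, alignment, cohesion.

    pos, vel: (N, 2) float arrays of robot positions and velocities.
    Each robot only reacts to neighbours closer than r_neigh.
    """
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neigh = (dist < r_neigh) & (dist > 0)      # everyone nearby except me
        if not neigh.any():
            continue
        # Separation: steer away from neighbours that are uncomfortably close.
        close = neigh & (dist < r_sep)
        sep = -offsets[close].sum(axis=0) if close.any() else np.zeros(2)
        # Alignment: match the average heading of the neighbourhood.
        align = vel[neigh].mean(axis=0) - vel[i]
        # Cohesion: drift toward the neighbourhood's centre of mass.
        coh = offsets[neigh].mean(axis=0)
        new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:                          # clamp to a top speed
            new_vel[i] *= v_max / speed
    return pos + dt * new_vel, new_vel
```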

Continue reading “Flocking behavior using Mindstorm robots”

Using OpenCV with the Raspberry Pi

When we first heard of the Raspberry Pi we were elated that projects that once required a full-blown computer could now be done on a tiny, cheap board running Linux. Unfortunately, we haven’t seen much in the way of computer vision algorithms running on the Raspi, but thanks to [Lentin] the world of OpenCV is now accessible to Raspberry Pi users everywhere.

[Lentin] didn’t feel like installing OpenCV from source, a process that takes the better part of a day. Instead, he installed it using the synaptic package manager. After connecting a webcam, [Lentin] ssh’d into his Raspi and ran a face detection example script that comes with OpenCV.
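The face detection samples that ship with OpenCV are built around Haar cascades, and the core of such a script boils down to just a few calls. Here is a rough modern-Python equivalent; the cascade path and parameters are typical defaults, not necessarily what [Lentin] used:

```python
import cv2

# Haar cascade XML shipped with OpenCV; the exact path depends on the install.
cascade = cv2.CascadeClassifier(
    "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # the USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Downscaling first is the usual trick to claw back frame rate on the Pi.
    small = cv2.resize(gray, (320, 240))
    faces = cascade.detectMultiScale(small, scaleFactor=1.2, minNeighbors=4)
    print(len(faces), "face(s) in frame")  # no X display needed over SSH
cap.release()
```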

It should be noted that [Lentin]’s install of OpenCV isn’t exactly fast, but for a lot of projects being able to update a face tracker five times a second is more than enough. Once the Raspberry Pi camera module is released the speed of face detection on a Raspi should increase dramatically, though, leading to even more useful computer vision builds with the Raspberry Pi.

Pick and place that can’t pick or place… but it looks very promising

This sexy piece of CNC can really fly. It’s a pick and place machine which [Danh Trinh] has been working on. The thing is, so far it lacks the ability to move components at all. But the good news is the rest of the system seems to be there.

He posted a demo video of his progress so far which you can see embedded after the break. He starts off by showing off his computer vision software, which he wrote in C#. The demonstration includes the view from the gantry-mounted camera, as well as the computer filtering, which seems to accurately locate the solder pads and silkscreen on the PCB. The second half of the video looks at the hardware seen above. It’s just executing some static code, but the whine of those stepper motors is music to our ears. [Danh] reports that the movements of the needle that will eventually serve as the tip of the vacuum tweezer seem to be very accurate.
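We haven't picked through [Danh]'s C# vision code, but the usual recipe for spotting bright tinned pads against a darker solder mask is a threshold followed by blob extraction. Purely as an illustration, in OpenCV terms that looks something like:

```python
import cv2

def find_pads(image_bgr, min_area=20):
    """Rough pad finder: bright tinned regions against a darker solder mask."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks a threshold separating shiny pads from dark mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pads = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                      # ignore specks of noise
        m = cv2.moments(c)
        pads.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # centroid
    return pads
```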

These home-built pick and place projects are quite a challenge, but we’ve seen a lot of really awesome work on them lately.

Continue reading “Pick and place that can’t pick or place… but it looks very promising”

Turning video game sprites into 3D objects

Anyone who has played Minecraft for a good amount of time should have a good grasp on making 3D objects by placing voxels block by block. A giant voxel art dragon behind your base is cool, but what about the math behind your block-based artwork? [mikolalysenko] put together a tutorial for making 3D objects out of video game sprites and covers a lot of the math involved in turning pixels into voxels.

The process of modeling a 3D object from a series of 2D images is a very well-studied computer vision problem called multiview stereo reconstruction. This process has been used to build 3D models of random objects with devices such as the Stanford spherical gantry. Unfortunately the math for this algorithm is a mess, but there is another way: using photo hulls (PDF warning) to find the largest possible object from a series of images showing the top, bottom, left, right, front, and back views.

[mikolalysenko] put together an algorithm to produce 3D models from a series of images and even went so far as to build a web-based shape carving editor. With this web app, it’s possible to make 3D objects simply by inputting a bunch of colored pixels onto six 2D grids.
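The carving rule behind that editor is easy to picture: a voxel survives only if every one of the six views agrees there should be something there (and, for the full photo hull, that the colors agree too). A stripped-down, silhouette-only version of the test (our sketch, not [mikolalysenko]'s code) might look like:

```python
import numpy as np

def carve(front, back, top, bottom, left, right):
    """Carve an n*n*n voxel grid from six n*n boolean silhouette images.

    Each image marks which cells of that view belong to the sprite. A voxel
    is kept only if all six orthographic projections of it land on filled
    pixels: the "largest object consistent with every view" idea behind
    photo hulls, minus the colour-agreement step. How the opposing views
    (back, bottom, right) are mirrored depends on the orientation you draw
    them in; here they are assumed to line up with their partners.
    """
    n = front.shape[0]
    vox = np.zeros((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                vox[x, y, z] = (front[y, x] and back[y, x] and
                                top[z, x] and bottom[z, x] and
                                left[y, z] and right[y, z])
    return vox
```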

Once the models were complete, [mikolalysenko] sent some of the 3D models off to Shapeways for 3D printing. He’s completed Meat Boy, Mario, and Link 3D sprites, all available for sale.

Now the only thing left to do is build a script to turn these objects into Minecraft object schematics.

Help computer vision researchers, get a 3D model of your living room

Robots can easily make their way across a factory floor; with painted guide lines everywhere, a factory makes for an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table without knocking over a lamp presents a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.

[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.

To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.
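Under the hood, turning those depth frames into geometry starts with back-projecting every pixel through the pinhole camera model; Kinect@Home handles the tracking and fusion on its servers, but the basic step looks roughly like this (the intrinsics below are typical Kinect values we are assuming, not calibrated ones):

```python
import numpy as np

# Rough intrinsics for the Kinect's 640x480 depth camera (assumed, not calibrated).
FX = FY = 575.0          # focal length in pixels
CX, CY = 319.5, 239.5    # principal point

def depth_to_points(depth_mm):
    """Back-project a depth image (millimetres) to a 3D point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]      # drop pixels with no depth reading
```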