DIY Pick and Place just getting under way


It’s not totally fair to say that this project is just getting under way. But the truth is it neither picks nor places, so there’s a long road still to travel. Still, we’re impressed with the demonstrations of what [Daniel Amesberger] has achieved thus far. Using the simplest of CNC mills he’s finished the frame and gantry for the device. You can see some of the parts on the left after going through an anodizing process that leaves them with that slick black finish.

The demo video shows off the device by driving it with a joystick. It’s fast, which gives us hope that this will rival some of the low-end commercial pick and place machines. He’s already been working on the software, which runs on a Mini-ITX form factor computer. This includes a Gerber file interpreter and some computer vision for a visual check on part placement. He hasn’t gotten around to building the parts feeders, but we’ll keep you updated as we hear back from him.
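The write-up doesn’t say how the visual placement check works, but one minimal way to do it is template matching: grab the board camera image and look for a reference image of the part near its expected location. The sketch below is our own illustration of that idea, not [Daniel Amesberger]’s code; the file names and match threshold are placeholders.

```python
# Minimal sketch of a vision-based placement check (an assumed approach, not
# [Daniel Amesberger]'s actual software). Requires OpenCV (pip install opencv-python).
import cv2

def part_is_placed(board_img_path, part_template_path, threshold=0.8):
    """Return True if the part template is found in the board image."""
    board = cv2.imread(board_img_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(part_template_path, cv2.IMREAD_GRAYSCALE)

    # Slide the template over the board image and record normalized correlation.
    result = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    print("best match %.2f at %s" % (max_val, max_loc))
    return max_val >= threshold

if __name__ == "__main__":
    # 'board.png' and 'resistor_0805.png' are hypothetical example images.
    print(part_is_placed("board.png", "resistor_0805.png"))
```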


Flocking behavior using Mindstorms robots


Do you ever wonder why geese always fly together in a V-shape? We’re not asking about the fact that it makes the workload much lighter for all but the lead goose. We mean: how do all the geese know to form up like this? This is the act of flocking, and it has long been a subject of fascination when it comes to robotics. [Scott Snowden] researched the topic while working on his degree a few years ago. Above you can see a demonstration of the behavior using LEGO Mindstorms robots. That’s certainly interesting, and you’ll want to check out the video after the break. But his offering doesn’t end with the demo. He also posted a huge article about his work that will provide days of fascinating reading.

We can’t begin to scratch the surface of all that he covers, but we can give you a quick primer on his Mindstorms (NXT) setup. He uses these three bots along with a central brick (the computer part of the NXT hardware) which communicates with them. This lets him use a wide range of powerful tools like MATLAB and Processing: a top-down camera recognizes each robot, and each bot is passed data based on what the computer vision harvests from that overhead view. From there it’s a wild ride of modeling the behavior as a set of algorithms.
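His article covers the models in depth; as a rough point of reference, the classic boids approach reduces flocking to three steering rules evaluated per robot: separation (don’t crowd neighbors), alignment (match their heading), and cohesion (drift toward their center). The toy sketch below is a generic illustration of those rules, not [Scott Snowden]’s algorithm, and all of the weights are made up.

```python
# Toy 2D boids update: separation, alignment, cohesion.
# A generic illustration only -- not [Scott Snowden]'s code; weights are arbitrary.
import numpy as np

def boids_step(pos, vel, dt=0.1, r_neigh=2.0,
               w_sep=1.5, w_align=1.0, w_coh=1.0, v_max=1.0):
    """pos, vel: (N, 2) arrays of positions and velocities. Returns updated copies."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < r_neigh)               # neighbours of boid i
        if not mask.any():
            continue
        sep = -(offsets[mask] / dist[mask][:, None] ** 2).sum(axis=0)  # steer away from close neighbours
        align = vel[mask].mean(axis=0) - vel[i]                        # match average heading
        coh = pos[mask].mean(axis=0) - pos[i]                          # drift toward local centre
        new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:                                   # clamp to a maximum speed
            new_vel[i] *= v_max / speed
    return pos + dt * new_vel, new_vel

# Example: 20 boids starting from random positions and headings.
rng = np.random.default_rng(0)
p, v = rng.uniform(0, 10, (20, 2)), rng.uniform(-1, 1, (20, 2))
for _ in range(100):
    p, v = boids_step(p, v)
```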


Using OpenCV with the Raspberry Pi


When we first heard of the Raspberry Pi we were elated that projects that once required a full-blown computer could now be done on a tiny, cheap board running Linux. Unfortunately, we haven’t seen much in the way of computer vision algorithms running on the Raspi, but thanks to [Lentin] the world of OpenCV is now accessible to Raspberry Pi users everywhere.

[Lentin] didn’t feel like compiling OpenCV from source, a process that takes the better part of a day. Instead, he installed it using the Synaptic package manager. After connecting a webcam, [Lentin] ssh’d into his Raspi and ran a face detection example script that comes with OpenCV.
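For a sense of what that example boils down to, here is a rough Python approximation of a stock OpenCV face detection loop. This is our sketch, not [Lentin]’s exact script, and the cascade path is a guess at a typical Debian-style install location.

```python
# Rough approximation of OpenCV's bundled face detection demo, reading frames
# from a USB webcam. Not [Lentin]'s exact script; the cascade path may differ
# depending on where your package manager puts OpenCV's data files.
import cv2

CASCADE = "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(CASCADE)

cap = cv2.VideoCapture(0)                          # first webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Over a bare SSH session there’s no display for imshow to draw to, so you’d either enable X forwarding or write the annotated frames to disk instead.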

It should be noted that [Lentin]’s install of OpenCV isn’t exactly fast, but for a lot of projects being able to update a face tracker five times a second is more than enough. Once the Raspberry Pi camera module is released, the speed of face detection on a Raspi should increase dramatically, leading to even more useful computer vision builds with the Raspberry Pi.

Pick and place that can’t pick or place… but it looks very promising

This sexy piece of CNC can really fly. It’s a pick and place machine which [Danh Trinh] has been working on. The thing is, so far it lacks the ability to move components at all. But the good news is the rest of the system seems to be there.

He posted a demo video of his progress so far, which you can see embedded after the break. He starts off by showing off the computer vision software he wrote in C#. The demonstration includes the view from the gantry-mounted camera, as well as the computer filtering, which seems to accurately locate the solder pads and silkscreen on the PCB. The second half of the video looks at the hardware seen above. It’s just executing some static code, but the whine of those stepper motors is music to our ears. [Danh] reports that the movements of the needle that will eventually serve as the tip of the vacuum tweezer seem to be very accurate.
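[Danh]’s pipeline is written in C# and we don’t know its internals, but the general idea of picking out bright pads and silkscreen from a board image can be shown in a few lines of OpenCV: threshold the image and keep the contours that fall in a plausible pad-size range. The sketch below is only an illustration of that approach; the file names and area limits are placeholders.

```python
# Illustrative pad-finding pass (threshold + contours). Not [Danh Trinh]'s
# actual C# pipeline; the file names and area limits are placeholders.
import cv2

img = cv2.imread("pcb.png")                        # hypothetical board photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Shiny pads and white silkscreen are much brighter than the solder mask,
# so Otsu thresholding separates them from the background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OpenCV 4.x return signature: (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pads = [c for c in contours if 20 < cv2.contourArea(c) < 2000]

for c in pads:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 1)
cv2.imwrite("pads_marked.png", img)
print("found %d pad-sized blobs" % len(pads))
```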

These home-built pick and place projects are quite a challenge, but we’ve seen a lot of really awesome work on them lately.


Turning video game sprites into 3D objects

Anyone who has played Minecraft for a good amount of time should have a good grasp of making 3D objects by placing voxels block by block. A giant voxel-art dragon behind your base is cool, but what about the math behind your block-based artwork? [mikolalysenko] put together a tutorial for making 3D objects out of video game sprites and covers a lot of the math involved in turning pixels into voxels.

The process of modeling a 3D object from a series of 2D images is a very well-studied computer vision problem called multiview stereo reconstruction. This process has been used to build 3D models of random objects with devices such as the Stanford spherical gantry. Unfortunately the math for this algorithm is a mess, but there is another way: using photo hulls (PDF warning) to find the largest possible object from a series of images showing the top, bottom, left, right, front, and back views.

[mikolalysenko] put together an algorithm to produce 3D models from a series of images and even went so far as to build a web-based shape carving editor. With this web app, it’s possible to make 3D objects simply by inputting a bunch of colored pixels onto six 2D grids.
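The heart of carving a shape from six orthographic views is simple: a voxel survives only if its projection lands on a filled pixel in every view. Here’s a minimal sketch of that idea with plain binary silhouettes; it skips the coloring step, and the axis conventions are our own assumption rather than necessarily those of the editor.

```python
# Minimal shape-carving sketch: a voxel is kept only if all six axis-aligned
# silhouettes are filled at its projection. Binary masks only; the coloring
# step and [mikolalysenko]'s exact axis conventions are not reproduced here.
import numpy as np

def carve(front, back, left, right, top, bottom):
    """Each argument is an (n, n) boolean silhouette; returns an (n, n, n) voxel grid.

    Assumed axes: x = left/right, y = up/down, z = front/back.
    """
    n = front.shape[0]
    vox = np.zeros((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                vox[x, y, z] = (front[y, x] and back[y, n - 1 - x] and
                                left[y, z] and right[y, n - 1 - z] and
                                top[z, x] and bottom[n - 1 - z, x])
    return vox

# Example: with every view fully filled, the result is a solid cube.
full = np.ones((4, 4), dtype=bool)
print(carve(full, full, full, full, full, full).sum())   # 64 voxels survive
```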

Once the models were complete, [mikolalysenko] sent some of the 3D models off to Shapeways for 3D printing. He’s completed Meat Boy, Mario, and Link 3D sprites, all available for sale.

Now the only thing left to do is build a script to turn these objects into Minecraft object schematics.

Help computer vision researchers, get a 3D model of your living room

Robots can easily make their way across a factory floor; with painted lines to follow, a factory is an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table without knocking over a lamp presents a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.

[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.

To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.

Adding new features and controlling a Kinect from a couch

Upon the release of the Kinect, Microsoft showed off its golden child as the beginnings of a revolution in user interface technology. The skeleton and motion detection promised a futuristic, hand-waving “Minority Report-style” interface where your entire body controls a computer. Reality hasn’t exactly lived up to those expectations, but [Steve], along with his coworkers at Amulet Devices, has vastly improved the Kinect’s skeleton recognition so people can use a Kinect sitting down.

One huge drawback to using the Kinect for a Minority Report UI in a home theater is the fact that the Microsoft skeleton recognition doesn’t work well when you’re sitting down. Instead of relying on the built-in skeleton recognition that comes with the Kinect, [Steve] rolled his own skeleton detection using Haar classifiers.

Detecting Haar-like features has been used in many applications of computer vision; it’s a great, not-very-computationally-intensive way to detect faces and body positions with a simple camera. Training is required for the software, and [Steve]’s classifier spent several days training itself. The results were worth it, though: the Kinect now recognizes [Steve] waving his arm while he is lying down on the couch.
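The reason Haar-like features are so cheap to evaluate is the integral image: once it’s computed, the sum of any rectangle costs four lookups, so thousands of rectangle-difference features can be tested per detection window. The toy sketch below illustrates that trick; it’s a generic example, not Amulet’s detector.

```python
# Toy illustration of why Haar-like features are cheap: with an integral image,
# any rectangle sum costs four lookups. A generic example, not Amulet's detector.
import numpy as np

def integral_image(img):
    """Summed-area table with an extra zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """Left-half minus right-half feature; responds to vertical edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

window = np.random.randint(0, 256, (24, 24))   # fake 24x24 detection window
ii = integral_image(window)
print(two_rect_feature(ii, 0, 0, 24, 24))
```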

Not content to stop there, [Steve] also added voice recognition to his Kinect home theater controller; a fitting addition, as his employer makes a voice recognition remote control. The recognition software seems to work very well, even with the wistful Scottish accent [Steve] has honed over a lifetime.

[Steve]’s employer is giving away their improved Kinect software that works for both the Xbox and Windows Kinects. If you’re ever going to do something with a Kinect that isn’t provided with the SDKs and APIs we covered earlier today, this will surely be an invaluable resource.

You can check out [Steve]’s demo of the new Kinect software after the break.

