Open Source Marker Recognition for Augmented Reality


[Bharath] recently uploaded the source code for an OpenCV-based pattern recognition platform that can be used for Augmented Reality, or even robots. It's written in C++ and uses the OpenCV library to recognize marker patterns within a single frame.

The program focuses on one object at a time. This approach was chosen to avoid building extra arrays holding information about every blob in the image, which could cause problems.

Although this implementation doesn't track marker information across multiple frames, it provides a nice foundation for integrating pattern recognition into computer systems. The tutorial is straightforward and easy to read. The entire program and source code can be found on Github under a zero license, so anyone can use it. A video of the program is after the break:
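The one-object-at-a-time trick is easy to see in code: instead of enumerating every blob, you can threshold the frame and take the bounding box of all foreground pixels in one shot. Here's a minimal NumPy sketch of that idea — not [Bharath]'s actual code, which uses OpenCV's contour machinery; the threshold value is an assumption:

```python
import numpy as np

def marker_bbox(gray, thresh=128):
    """Bounding box (x0, y0, x1, y1) of all pixels darker than thresh.

    Assumes a single dark marker on a light background, so we can skip
    per-blob bookkeeping entirely and just take the pixel extremes.
    """
    ys, xs = np.nonzero(gray < thresh)
    if len(xs) == 0:
        return None  # no marker in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic 100x100 frame with one dark square marker
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[30:50, 40:70] = 0
print(marker_bbox(frame))  # (40, 30, 69, 49)
```

With more than one marker in view this collapses everything into one box, which is exactly why the program restricts itself to a single object per frame.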

[Read more...]

A DIY Geomagnetic Observatory

Magnetometer observatory

[Dr. Fortin] teaches physics at a French high school, and to get his students interested in the natural world around them, he built a geomagnetic observatory able to tell his students if they have a chance at seeing an aurora, or if a large truck just drove by.

We’ve seen this sort of device before, and the basic construction is extremely similar – a laser shines on a mirror attached to magnets. When a change occurs in the local magnetic field, the mirror rotates slightly and the laser beam is deflected. Older versions have used photoresistors, but [the doctor] is shining his laser on a piece of paper and logging everything with a webcam and a bit of OpenCV.

The design is a huge improvement over earlier DIY attempts at measuring the local magnetic field, if only because the baseline between the webcam and mirror is so long. When set up in his house, the magnetometer can detect cars parked in front of his building, but the data he's collecting (in French, but it's just a bunch of graphs) is comparable to the official Russian magnetic field data.
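The long baseline matters because the webcam only has to find where the laser spot lands on the paper; the further the paper is from the mirror, the more a tiny rotation moves the spot. A rough sketch of the math (the pixel scale and baseline here are made-up numbers, not [Dr. Fortin]'s calibration):

```python
import math
import numpy as np

def spot_x(gray):
    """Column of the brightest pixel -- the laser spot on the paper."""
    flat = int(np.argmax(gray))
    return flat % gray.shape[1]

def mirror_angle_deg(dx_pixels, pixels_per_m, baseline_m):
    """Mirror rotation inferred from the spot's displacement.

    A mirror turning by theta deflects the reflected beam by 2*theta,
    so over a baseline L a spot displacement d gives
    theta = atan(d / L) / 2.
    """
    d = dx_pixels / pixels_per_m
    return math.degrees(math.atan2(d, baseline_m) / 2)

frame = np.zeros((4, 640), dtype=np.uint8)
frame[2, 400] = 255                      # laser spot at column 400
dx = spot_x(frame) - 320                 # displacement from the rest position
print(round(mirror_angle_deg(dx, pixels_per_m=10000, baseline_m=2.0), 3))
# -> about 0.115 degrees of mirror rotation
```

Log that angle over time and you have a magnetogram: slow drifts are the geomagnetic field, sharp spikes are the trucks.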

The Crane Game, Oculus Style

We’re pretty sure the Hackaday demographic is the sort of person who sees a giant tower crane lifting beams and girders above a skyline and says, “that would be fun, at least until I have to go to the bathroom.” Realizing the people who own these cranes probably won’t let any regular Joe off the street into the cabin, [Thomas] and [Screen Name] (see, this is why we have brackets, kids) built their own miniature version with an Oculus Rift.

Instead of a crane that is hundreds of feet tall, the guys are using a much smaller version, just over a meter tall, that is remotely controlled through a computer via a serial connection. Just below the small plastic cab is a board with two wide-angle webcams. The video from these cameras is sent to the Oculus so the operator can see the boom swinging around and the winch unwinding to pick up small objects.

The guys have also added a little bit of OpenCV for color-based object detection. This is somewhat useful, but there’s also an approximation of the distance to an object, something that would be very useful if you don’t have a three-inch tall spotter on the ground.
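The distance approximation most likely falls out of the pinhole camera model: an object of known real-world size shrinks in the image in proportion to its distance. A minimal sketch, assuming the focal length (in pixels) is known from a one-time calibration — the numbers are illustrative, not from [Thomas] and [Screen Name]'s build:

```python
def distance_m(real_width_m, pixel_width, focal_px):
    """Pinhole-camera range estimate: Z = f * W / w.

    real_width_m : known width of the target (e.g. a colored block)
    pixel_width  : its measured width in the image, in pixels
    focal_px     : camera focal length expressed in pixels
    """
    return focal_px * real_width_m / pixel_width

# A 5 cm block seen 100 px wide by a webcam with f of roughly 800 px:
print(distance_m(0.05, 100, 800))  # 0.4 -> the block is about 40 cm away
```

The estimate degrades quickly once the detected blob is only a handful of pixels wide, which is why it's an approximation rather than a rangefinder.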

Video below.

[Read more...]

Autonomous Balloon Popping

Quadcopter drone for popping balloons

Taking on an autonomous vehicle challenge, [Randy] put together this drone which can locate and pop balloons. It’s been assembled for this year’s Sparkfun Autonomous Vehicle Competition, which will challenge entrants to locate and pop 99 luftballons red balloons without human intervention.

The main controller for this robot is the Pixhawk, which runs a modified version of the ArduCopter firmware. These modifications enable the Pixhawk to receive commands from an Odroid U3 computer module. The Odroid uses a webcam to take images, and then processes them using OpenCV. It tries to locate large red objects and fly towards them.

The vision processing and control code on the Odroid was developed using MAVProxy and Drone API, which allows all of the custom code to be written in Python.
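The vision-to-flight loop presumably boils down to: find the red blob's centroid, compare it to the image center, and nudge the copter toward it. Here's a rough sketch of that loop in plain Python/NumPy — the actual vehicle commands go through Drone API and aren't reproduced here, and the gain `kp` and the crude RGB mask are assumptions (real code would more likely threshold in HSV with `cv2.inRange`):

```python
import numpy as np

def red_mask(rgb):
    """Crude 'large red object' mask on an RGB frame."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 150) & (r > g + 50) & (r > b + 50)

def yaw_command(rgb, kp=1.0):
    """Proportional yaw rate in [-kp, kp]; positive means turn right."""
    mask = red_mask(rgb)
    if not mask.any():
        return 0.0                      # no balloon in sight, hold heading
    cx = np.nonzero(mask)[1].mean()     # centroid column of the red pixels
    half = rgb.shape[1] / 2
    return kp * (cx - half) / half

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 120:140] = (255, 0, 0)     # red balloon right of center
print(yaw_command(frame))               # positive -> yaw right, toward it
```

Feed that command to the flight controller each frame and the copter steers itself onto the balloon, which is more or less what the Pixhawk firmware modifications make possible.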

The Sparkfun AVC takes place tomorrow, June 21st, in Boulder, Colorado. You can still register to spectate for free. We’re hoping [Randy]’s drone is up to the task, and based on the video after the break, it should be able to complete this challenge.

[Read more...]

Turning Lego Into A Groove Machine


Last weekend wasn’t just about Maker Faire; in Stockholm there was another DIY festival celebrating the protocols that make electronic music possible. It’s MIDI Hack 2014, and [Kristian], [Michael], [Bram], and [Tobias] put together something really cool: a Lego sequencer.

The system is set up on a translucent Lego base plate, suspended above a webcam that feeds into some OpenCV and Python goodness. From there, data is sent to Native Instruments Maschine. There’s a step sequencer using normal Lego bricks, a fader controlling beat delay, and a rotary encoder for reverb.

Despite being limited to studs and pegs, the short demo in the video below actually sounds good, with a lot of precision found in the faders and block-based rotary encoder. [Kristian] will be putting up the code and a few more details shortly. Hopefully there will be enough information to use different colored blocks in the step sequencer part of the build for different notes.
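With a translucent base plate lit from one side, a brick in a grid cell blocks the backlight, so a step reads "on" when its patch of the webcam image goes dark. A minimal sketch of that detection, assuming a 16-step by 4-track grid — the dimensions and threshold are our guesses, not details from the MIDI Hack build:

```python
import numpy as np

def read_steps(gray, steps=16, tracks=4, dark_thresh=80):
    """Map a webcam view of the backlit base plate to an on/off grid.

    Splits the frame into tracks x steps cells; a cell counts as 'on'
    when its mean brightness drops below dark_thresh, i.e. a brick is
    blocking the backlight there.
    """
    h, w = gray.shape
    grid = np.zeros((tracks, steps), dtype=bool)
    for t in range(tracks):
        for s in range(steps):
            cell = gray[t * h // tracks:(t + 1) * h // tracks,
                        s * w // steps:(s + 1) * w // steps]
            grid[t, s] = cell.mean() < dark_thresh
    return grid

plate = np.full((64, 256), 200, dtype=np.uint8)   # evenly backlit plate
plate[0:16, 0:16] = 10                            # brick: track 0, step 0
plate[16:32, 32:48] = 10                          # brick: track 1, step 2
pattern = read_steps(plate)
print(pattern[0, 0], pattern[1, 2], pattern[0, 1])  # True True False
```

From a grid like this, each column becomes one step of the sequence sent on to Maschine; swapping the brightness test for a color test is what would let different colored bricks map to different notes.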

[Read more...]

A Webcam Based Posture Sensor

Webcam based posture sensor

Even for hobby projects, iteration is very important. It allows us to improve upon and fine-tune our existing designs, making them even better. [Max] wrote in to tell us about his latest posture sensor, this time built around a webcam.

We covered [Max's] first posture sensor back in February; it used an ultrasonic distance sensor to determine whether you had correct posture (or not). Having spent time with this sensor and received lots of feedback, he decided to scrap the ultrasonic distance sensor altogether. It simply had too many problems: mounting the sensor on different chairs was awkward, the sensor clicked constantly, and more. After being inspired by a blog post very similar to his original that mounted the sensor on a computer monitor, [Max] was back to work. This time, rather than an ultrasonic distance sensor, he decided to use a webcam. Armed with Processing and OpenCV, he greatly improved upon the first version of his posture sensor. All of his code is provided on his website, so be sure to check it out and give it a whirl!
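A webcam posture sensor of this kind typically detects the face (OpenCV's Haar cascades are the usual tool) and compares its apparent size and position to a baseline captured while sitting up straight: slouching toward the screen makes the face larger, and sinking into the chair makes it sit lower in the frame. A hedged sketch of just the comparison step — the tolerances are illustrative, not values from [Max]'s code:

```python
def posture_ok(face_h, face_y, ref_h, ref_y, grow_tol=0.15, drop_tol=0.10):
    """Compare the current face box against a calibrated baseline.

    face_h, face_y : detected face height and top edge, in pixels
                     (e.g. from cv2.CascadeClassifier.detectMultiScale)
    ref_h, ref_y   : the same values captured with good posture
    """
    leaned_in = face_h > ref_h * (1 + grow_tol)   # face grew: too close
    sunk_down = face_y > ref_y + drop_tol * ref_h  # face dropped: slumped
    return not (leaned_in or sunk_down)

print(posture_ok(face_h=100, face_y=80, ref_h=100, ref_y=80))  # True
print(posture_ok(face_h=130, face_y=80, ref_h=100, ref_y=80))  # False
```

Unlike the ultrasonic version, this needs no hardware mounted on the chair at all, which neatly sidesteps the mounting and clicking complaints.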

Iteration leads to many improvements, and it is an integral part of both hacking and engineering. What projects have you redesigned or rebuilt? Let us know!

Never Lose Your Pencil With OSkAR on Patrol

OSkAR

[Courtney] has been hard at work on OSkAR, an OpenCV-based speaking robot. OSkAR is [Courtney's] capstone project (pdf link) at Shepherd University in West Virginia, USA. The goal is for OSkAR to be an assistive robot: OSkAR will navigate a typical home environment, reporting objects it finds through speech synthesis software.

To accomplish this, [Courtney] started with a BeagleBone Black and a Logitech C920 webcam. The robot’s body was built using LEGO Mindstorms NXT parts, which means that when not operating autonomously, OSkAR can be controlled via Bluetooth from an Android phone. On the software side, [Courtney] began with the stock Angstrom Linux distribution for the BBB. After running into video problems, she switched her desktop environment to Xfce. OpenCV provides the machine vision system, and [Courtney] created models for several objects for OSkAR to recognize.

Right now, OSkAR’s life consists of wandering around the room looking for pencils and door frames. When a pencil or door is found, OSkAR announces the object, and whether it is to his left or his right. It may sound like a rather boring life for a robot, but the semester isn’t over yet. [Courtney] is still hard at work creating more object models, which will expand OSkAR’s interests into new areas.
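Announcing "left" or "right" is really just a question of which half of the frame the detected object's centroid lands in — for a forward-facing camera, image-left is the robot's left. A small sketch of that decision; the deadband idea and the function name are our additions, not from [Courtney]'s code:

```python
def side_of(cx, frame_width, deadband=0.1):
    """Which side to announce for an object centered at column cx.

    A deadband around the image center keeps the speech output from
    flip-flopping between 'left' and 'right' for objects dead ahead.
    """
    half = frame_width / 2
    offset = (cx - half) / half          # -1.0 (far left) .. +1.0 (far right)
    if abs(offset) < deadband:
        return "ahead"
    return "right" if offset > 0 else "left"

print(side_of(520, 640))  # right
print(side_of(100, 640))  # left
print(side_of(330, 640))  # ahead
```

Pipe the result into the speech synthesizer along with the object's name, and you have OSkAR's running commentary on pencils and door frames.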

[Read more...]
