OpenCV Brings Pinch To Zoom Into The Real World

Gesture controls arrived in the public consciousness a little over a decade ago as touchpads and touchscreens became more popular. The main limitation of gesture controls, at least as far as [Norbert] is concerned, is that they can only control objects in a virtual space. He was hoping to use gestures to control a real-world object instead, and created this device, which uses hand gestures to manipulate an actual picture.

In this unique augmented reality device, not only is the object being controlled in the real world, but the gestures are monitored there as well, thanks to an OpenCV-based computer vision system watching his hand. The position data is fed into an algorithm which controls a physical picture mounted on a slender robotic arm. Now, when [Norbert] “pinches to zoom”, the servo attached to the picture physically moves it closer to or further from his field of view. He can also use other gestures to move the picture around.
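[Norbert]'s exact code isn't shown, but the pinch gesture itself is easy to replicate. Below is a minimal sketch of the idea, assuming MediaPipe's hand tracker for the landmarks; the angle mapping range is a guess, and the serial link to the servo is left as a print statement.

```python
import cv2
import mediapipe as mp

# Minimal pinch-to-zoom sketch: track one hand, measure the thumb-to-index
# distance, and map it to a servo angle. The mapping range here is a guess.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # Landmark 4 is the thumb tip, landmark 8 is the index fingertip
        pinch = ((lm[4].x - lm[8].x) ** 2 + (lm[4].y - lm[8].y) ** 2) ** 0.5
        # Clamp the normalized distance and scale it to a 0-180 degree angle
        angle = int(min(max(pinch, 0.02), 0.40) / 0.40 * 180)
        print(angle)  # in the real build this would go out to the servo
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```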

While this gesture-controlled machine is certainly a proof-of-concept, there are plenty of other uses for gesture control of real-world objects. Any robotics platform could benefit from an interface like this, or even something slightly more mundane like an office PowerPoint presentation. Opportunity abounds, but if you need a primer on OpenCV, take a look at this build which tracks a hand in minute detail.

Continue reading “OpenCV Brings Pinch To Zoom Into The Real World”

ElectronBot: A Sweet Mini Desktop Robot That Ticks All The Boxes

[Peng Zhihui] seems to have found some spare time and energy to crack out another sweet robot build, this time a much smaller and cuter emoji-bot (original GitHub link), with the usual production-ready levels of attention to detail. With a lot of fine detail in the 3D-printed models, this is one for SLS printing in nylon, but that can be done for a reasonable outlay, in China at least. The electronics package consists of a few tiny, fully custom PCBs designed with Altium Designer, plus off-the-shelf modules for the circular LCD and camera. The main board hosts an STM32F405 and deals with the display and SD card; this particular STM32 was chosen because of the requirement to connect to an external USB3300 high-speed USB PHY. A separate sensor PCB handles the gesture sensor, a USB hub, an MPU6050 6-axis IMU, and the USB camera module. This board attaches to the USB-C connector in the base via an FFC cable, allowing the robot to rotate on its base.

Cunning two-servo shoulder mechanism

[Peng] clearly has exacting standards as to how things should work, and we guess he wanted the arms to be back-driveable in a way that enabled the host computer to track and record the motor positions for replaying later on. The connection back to the controller is via I2C, allowing all five servos to hang on the same bus, saving precious resources. Smart! Getting a processor and motor driver into such a tiny space was a bit of a challenge, but a walk in the park for [Peng], as he demonstrates in the video embedded below (we believe English subtitles are pending!) The arm mechanism is particularly interesting and rather elegantly executed, and he does seem rather proud of this part of the design, as well he should be! As with [Peng]'s other projects, there is a lot to see, and plenty of scope for feature explosion. It was nice to see the ‘bot being used as an input device, not only with gesture sensing via the dedicated sensor, but also using the camera with OpenCV to track user posture and react accordingly. This thing could be a genuinely useful AI device, while being darn cute at the same time!
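That record-and-replay scheme is worth a sketch of its own. Here's roughly how the host side might look in Python with smbus2, under loud assumptions: the servo addresses and the two-register protocol are invented stand-ins for whatever [Peng]'s custom servo firmware actually speaks.

```python
import time
from smbus2 import SMBus

# Hypothetical bus layout: five servos sharing one I2C bus, each exposing a
# current-position register and a target-position register. These values are
# placeholders, not [Peng]'s real protocol.
SERVO_ADDRS = [0x10, 0x11, 0x12, 0x13, 0x14]
POS_REG, TARGET_REG = 0x00, 0x01

def record(bus, seconds=5.0, rate_hz=50):
    """Sample every joint while the back-driveable arms are moved by hand."""
    frames = []
    for _ in range(int(seconds * rate_hz)):
        frames.append([bus.read_byte_data(a, POS_REG) for a in SERVO_ADDRS])
        time.sleep(1.0 / rate_hz)
    return frames

def replay(bus, frames, rate_hz=50):
    """Drive the joints back through a recorded trajectory."""
    for frame in frames:
        for addr, pos in zip(SERVO_ADDRS, frame):
            bus.write_byte_data(addr, TARGET_REG, pos)
        time.sleep(1.0 / rate_hz)

with SMBus(1) as bus:
    motion = record(bus)   # pose the arms by hand for five seconds
    replay(bus, motion)    # then watch the robot repeat the motion
```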

We know you come to Hackaday for your cute robot fix, and we're not going to disappoint. Here's a cute robot lamp, an obligatory Spot (robot dog) type project, and if you're more of a cat person, we've got that base covered as well.

Continue reading “ElectronBot: A Sweet Mini Desktop Robot That Ticks All The Boxes”

Forget Sudoku, Build Yourself A Minimalist Rubik’s Solver Robot

Some people like crossword puzzles, some are serious sudoku ninjas, but [Andrea Favero] likes to keep himself sharp by learning coding and solving control problems, and that is something we can definitely relate to. When learning a new platform, it's a very good idea to have a substantial project or goal in mind and learn what is needed on the way there. [Andrea] chose to build an autonomous Rubik's cube solver, and was kind enough to document exactly how to do it, and we're glad of it!

The result of the OpenCV processing chain

Working in Python with OpenCV, [Andrea] uses the methodology of [Oussama Barkouki] to process each face image and convert it into a table of the colours of individual facelets. The basics are: first convert the image to grayscale, then use a Gaussian blur to denoise it. Edges are identified using the Canny algorithm, the result of which is dilated and passed into a contour detector. The contours are sent into a cunning filter that identifies square contours, and those of the wrong size are filtered out. What you're left with are the outlines of the actual coloured facelets. Once you have a list of squares, these can be used to form image masks, and thence to select the average colour of each square. The colour is then quantised and stored as a labelled colour from the standard Western Rubik's cube colour scheme. Finally, once all face images are captured and the facelet colours identified, the data are passed into a Rubik's cube solving algorithm developed by [Herbert Kociemba], a guide to which is available on the speedsolving site. The result of the solving step is a sequence of descrambling moves in the move notation developed by [David Singmaster]. Fascinating stuff, if you ask us!
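For a feel of that pipeline, here's a condensed sketch of the steps in Python with OpenCV. The Canny thresholds and facelet area limits are guesses that would need tuning for a real camera and cube.

```python
import cv2
import numpy as np

def find_facelets(image):
    """Locate candidate facelet squares on one cube face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # denoise
    edges = cv2.Canny(blurred, 50, 150)                # edge detection
    dilated = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(dilated, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
        # Keep four-sided contours within a plausible facelet size range
        if len(approx) == 4 and 1000 < cv2.contourArea(c) < 10000:
            squares.append(approx)
    return squares

def facelet_colours(image, squares):
    """Mask each square and take its average BGR colour."""
    colours = []
    for sq in squares:
        mask = np.zeros(image.shape[:2], np.uint8)
        cv2.drawContours(mask, [sq], -1, 255, -1)      # fill the square
        colours.append(cv2.mean(image, mask=mask)[:3])
    return colours
```

Continue reading “Forget Sudoku, Build Yourself A Minimalist Rubik’s Solver Robot”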

OpenCV Knows Where Your Hand Is

We have to say, [Murtaza]'s example game in his latest video isn't very exciting. However, the OpenCV technique he uses to track a hand and determine its distance from a single camera is pretty interesting. The demo shows a random button on the screen, and you have to use your hand to press the button, which then moves so you can try again. The hand measurement seems accurate to a few centimeters, which is good enough for many applications.

The Python code is actually quite straightforward. Essentially, the software tracks your hand and, by estimating its relative size, determines how far away it is. Of course, your hand might also rotate, and [Murtaza] works through all the cases step-by-step. If we wanted to know a distance, we'd probably turn to ultrasonics or a time-of-flight sensor. The problem is, those sensors can't tell your hand from anything else that happens to be in front of them. The use of a single camera to track and locate is pretty impressive.
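The trick hinges on the fact that the palm's real width is roughly constant, so its apparent pixel size shrinks with distance. Below is a stripped-down sketch of that idea using MediaPipe landmarks; the calibration constant is made up and would need measuring once against a ruler, and [Murtaza]'s actual code handles hand rotation more carefully than this does.

```python
import cv2
import mediapipe as mp

# pixel_width * distance_cm, measured once at a known distance (made up here)
CALIB = 6000.0

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # Pixel distance between the index (5) and pinky (17) knuckles
        dx = (lm[5].x - lm[17].x) * w
        dy = (lm[5].y - lm[17].y) * h
        width = (dx * dx + dy * dy) ** 0.5
        if width > 1:
            print(f"hand is roughly {CALIB / width:.0f} cm away")
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```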

If you haven’t used OpenCV before, the channel has a lot of tutorials and they are all worth watching. Computer vision is a great technique and can replace a lot of things in some applications. GPS, for example. Or, try this creepier tracking application next Halloween.

Continue reading “OpenCV Knows Where Your Hand Is”

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house, and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn't seem to do that very well, and [8BitsAndAByte] is left with labels that don't match any of the objects she took pictures of. In some cases it didn't even get close: the model thought an apple was a person's head and a rotary dial phone was a cup. Go figure. It didn't really seem to bother her though, and she got a pretty good laugh from the whole thing.
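On the software side, the whole AI step boils down to one HTTP request to DeepAI. Something along these lines should be close, though the response fields shown are assumptions based on DeepAI's usual output format, so check the API docs before building around them.

```python
import requests

# Send the snapshot off to the DenseCap model
resp = requests.post(
    "https://api.deepai.org/api/densecap",
    files={"image": open("snapshot.jpg", "rb")},
    headers={"api-key": "YOUR_API_KEY"},  # placeholder key
)

# Assumed response shape: a list of captions, each with a confidence score
captions = resp.json().get("output", {}).get("captions", [])
if captions:
    best = max(captions, key=lambda c: c.get("confidence", 0))
    print("Label to print:", best["caption"])
```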

It appears the model detects all objects in the image but only prints the label for the object it was most certain about. So maybe part of her problem is that there were just too many objects in the background? If that were the case, you could probably improve the accuracy of the model by placing the object against a neutral background. That may confuse the AI a lot less and possibly give you better results. Or maybe try a different classifier altogether? Or don't. Then you could just use it as a fun gag project at your next get-together. That works too.

Cool project [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?

Continue reading “A Label Maker That Uses AI Really Poorly”

A robot that detects whether you are awake and gently taps you if not.

Wake-Up Robot Does It Gently

For hundreds of years, people have fallen asleep while reading in bed late at night. These days it’s worse, what with us taking phones to the face instead when we start to nod off. At least they don’t have pointy corners like books. While you may not want to share your bedroom with a robot, this wake-up robot by [Norbert Zare] may be just the thing to keep you awake.

Here’s how it works: a Raspberry Pi camera on a servo wanders around at eye level, and the Pi it’s attached to uses OpenCV to determine whether those eyes are open or starting to get heavy. The bot can also speak — it uses eSpeak to introduce itself as a bot designed not to let you sleep. Then when it catches you snoozing, it repeatedly intones ‘wake up’ in a bored British accent.
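[Norbert] doesn't spell out his detection code, but a crude version of the eyes-open check can be thrown together from OpenCV's stock Haar cascades. The sketch below is a stand-in for his actual approach, with made-up frame thresholds, right down to the eSpeak nag.

```python
import subprocess
import cv2

# Stock OpenCV Haar cascades; [Norbert]'s real detector may differ
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_open = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]       # look for eyes within the face
        if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) >= 1:
            eyes_open = True
    closed_frames = 0 if eyes_open else closed_frames + 1
    if closed_frames > 50:                 # ~2 s of closed eyes at 25 fps
        subprocess.run(["espeak", "wake up"])  # the bored British voice
        closed_frames = 0
```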

We were sure that the thing was designed to slap [Norbert] in the face a la [Simone Giertz]’s robot alarm clock, but no, that long-fingered hand just slowly swings down and gently taps [Norbert] on the arm (or whatever is in the path of the slappy hand). Check out the short demo and build video after the break.

Do you want to be awoken even more gently? Try a sunlight lamp. We’ve got dozens in stock, but this one gradually gets about as bright as the sun.

Continue reading “Wake-Up Robot Does It Gently”

Robot with glowing eyes

Spatial AI And CV Hack Chat

Join us on Wednesday, December 1 at noon Pacific for the Spatial AI and CV Hack Chat with Erik Kokalj!

A lot of what we take for granted these days existed only in the realm of science fiction not all that long ago. And perhaps nowhere is this more true than in the field of machine vision. The little bounding box that pops up around everyone’s face when you go to take a picture with your cell phone is a perfect example; it seems so trivial now, but just think about what’s involved in putting that little yellow box on the screen, and how it would not have been plausible just 20 years ago.

Erik Kokalj

Perhaps even more exciting than the development of computer vision systems is their accessibility to anyone, as well as their move into the third dimension. No longer confined to flat images, spatial AI and CV systems seek to extract information from the position of objects relative to others in the scene. It’s a huge leap forward in making machines see like we see and make decisions based on that information.

To help us along the road to incorporating spatial AI into our projects, Erik Kokalj will stop by the Hack Chat. Erik does technical documentation and support at Luxonis, a company working on the edge of spatial AI and computer vision. Join us as we explore the depths of spatial AI.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we'll be sitting down on Wednesday, December 1st at 12:00 PM Pacific time. If time zones have you tied up, we have a handy time zone converter.