Forget Sudoku, Build Yourself A Minimalist Rubik’s Solver Robot

Some people like crossword puzzles, some are serious sudoku ninjas, but [Andrea Favero] likes to keep himself sharp by learning coding and solving control problems, and that is something we can definitely relate to. When learning a new platform, it’s a very good idea to have a substantial project or goal in mind, and learn what is needed on the way there. [Andrea] chose to build an autonomous Rubik’s cube solver, and was kind enough to document exactly how to do it, and we’re glad of it!

The result of the OpenCV processing chain

Working in Python with OpenCV, [Andrea] uses the methodology by [Oussama Barkouki] to process each face image and convert it into a table of the colours of individual facelets. The basics of that are first to convert the image to grayscale, then use a Gaussian blur to denoise it. Edges are identified using the Canny algorithm, the result of which is then dilated and passed into a contour detector. The contours are sent into a cunning filter that identifies square contours, and those of the wrong size are filtered off. What you’re left with are the outlines of the actual coloured facelets. Once you have a list of squares, these can be used to form image masks, and thence select the average colour from each square. The colour is then quantised and stored as a labelled colour from the standard Western Rubik’s cube colour scheme. Finally, once all face images are captured and facelet colours identified, the data are passed into a Rubik’s cube solving algorithm developed by [Herbert Kociemba], a guide to which is available on the speedsolving site. The result of the solving step is a sequence of descrambling moves, in the move notation developed by [David Singmaster]. Fascinating stuff, if you ask us! Continue reading “Forget Sudoku, Build Yourself A Minimalist Rubik’s Solver Robot”
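If you want to play along at home, the whole facelet-detection chain fits in a few lines of OpenCV. Here’s a minimal sketch of the pipeline described above, not [Andrea]’s exact code; the blur kernel, Canny thresholds, and size limits are assumptions you’d tune to your own camera:

```python
# Sketch of the facelet pipeline: grayscale -> Gaussian blur -> Canny ->
# dilate -> contours -> keep only roughly square contours of plausible size.
import cv2
import numpy as np

def find_facelets(image, min_area=1000, max_area=10000):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # denoise
    edges = cv2.Canny(blurred, 50, 150)                   # edge detection
    dilated = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # keep contours that are roughly square and of a sensible size
        if min_area < cv2.contourArea(c) < max_area and 0.8 < w / h < 1.2:
            squares.append((x, y, w, h))
    return squares

def average_colours(image, squares):
    colours = []
    for x, y, w, h in squares:
        mask = np.zeros(image.shape[:2], np.uint8)
        cv2.rectangle(mask, (x, y), (x + w, y + h), 255, -1)
        colours.append(cv2.mean(image, mask=mask)[:3])    # BGR average
    return colours
```

From there, quantising each average against the six reference colours of the Western scheme is a nearest-neighbour lookup.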

OpenCV Knows Where Your Hand Is

We have to say, [Murtaza]’s example game in his latest video isn’t very exciting. However, the OpenCV technique he uses to track a hand and determine its distance from a single camera is pretty interesting. The demo shows a random button on the screen, and you have to use your hand to press the button, which then moves so you can try again. The hand measurement seems accurate to a few centimeters, which is good enough for many applications.

The Python code is actually quite straightforward. Essentially, the software tracks your hand and estimates its relative size to determine how far away it is. Of course, your hand might also rotate, and [Murtaza] works through all the cases step-by-step. If we wanted to know a distance, we’d probably turn to ultrasonics or a time of flight sensor. The problem is, those sensors can’t tell your hand from anything else that happens to be in front of it. The use of a single camera to track and locate is pretty impressive.
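The trick is that the pixel distance between two fixed hand landmarks shrinks predictably as the hand moves away, so a polynomial fitted to a handful of calibration measurements maps pixels back to centimeters. Here’s a rough sketch of the idea using MediaPipe’s hand tracker directly, rather than [Murtaza]’s exact code; the calibration coefficients below are placeholders you’d fit yourself with np.polyfit:

```python
import cv2
import numpy as np
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
# Placeholder quadratic mapping pixel width -> cm; fit your own with
# np.polyfit(measured_pixel_widths, known_distances_cm, 2).
coeff = np.array([0.001, -1.0, 250.0])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # pixel distance between index and pinky knuckles (landmarks 5, 17)
        px = np.hypot((lm[5].x - lm[17].x) * w, (lm[5].y - lm[17].y) * h)
        distance_cm = np.polyval(coeff, px)
        cv2.putText(frame, f"{distance_cm:.0f} cm", (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    cv2.imshow("hand distance", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```

Using the distance across the knuckles, rather than fingertip-to-wrist, helps because it changes less as the hand rotates.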

If you haven’t used OpenCV before, the channel has a lot of tutorials and they are all worth watching. Computer vision is a great technique and can replace a lot of things in some applications. GPS, for example. Or, try this creepier tracking application next Halloween.

Continue reading “OpenCV Knows Where Your Hand Is”

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords, and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn’t seem to do that very well, and [8BitsAndAByte] is left with labels that don’t match any of the objects she took pictures of. In some cases it didn’t even get close; for example, the model thought an apple was a person’s head and a rotary dial phone was a cup. Go figure. It didn’t really seem to bother her though, and she got a pretty good laugh from the whole thing.
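The software side is a short round trip through DeepAI’s REST API. Something like this sketch, which is not [8BitsAndAByte]’s exact code, captures the flow: upload the snapshot, take the caption the model is most confident about, and hand that string to the thermal printer. The API key and filename are placeholders, and the exact response layout is our reading of DeepAI’s documentation:

```python
import requests

API_KEY = "your-deepai-api-key"   # placeholder

# POST the snapshot to the DenseCap endpoint
with open("snapshot.jpg", "rb") as image:
    response = requests.post(
        "https://api.deepai.org/api/densecap",
        files={"image": image},
        headers={"api-key": API_KEY},
    )

# DenseCap returns a list of captions with confidence scores;
# keep only the one the model is most certain about.
captions = response.json()["output"]["captions"]
best = max(captions, key=lambda c: c["confidence"])
print(best["caption"])            # send this string to the printer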

It appears the model detects all objects in the image, but only prints the label for the object it was most certain about. So maybe part of her problem is that there were just too many objects in the background? If that were the case, you could probably improve the accuracy of the model by placing the object against a neutral background. That may confuse the AI a lot less and possibly give you better results. Or maybe try a different classifier altogether? Or don’t. Then you could just use it as a fun gag project at your next get-together. That works too.

Cool project [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?

Continue reading “A Label Maker That Uses AI Really Poorly”

A robot that detects whether you are awake and gently taps you if not.

Wake-Up Robot Does It Gently

For hundreds of years, people have fallen asleep while reading in bed late at night. These days it’s worse, what with us taking phones to the face instead when we start to nod off. At least they don’t have pointy corners like books. While you may not want to share your bedroom with a robot, this wake-up robot by [Norbert Zare] may be just the thing to keep you awake.

Here’s how it works: a Raspberry Pi camera on a servo wanders around at eye level, and the Pi it’s attached to uses OpenCV to determine whether those eyes are open or starting to get heavy. The bot can also speak — it uses eSpeak to introduce itself as a bot designed not to let you sleep. Then when it catches you snoozing, it repeatedly intones ‘wake up’ in a bored British accent.
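[Norbert]’s exact code isn’t reproduced here, but the eyes-open check can be approximated with OpenCV’s stock Haar cascades: find a face, look for eyes in its upper half, and nag via eSpeak when they vanish for too long. A hedged sketch, with the frame count and cascade parameters as assumptions:

```python
import subprocess
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_open = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]     # eyes live in the upper half
        if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) >= 1:
            eyes_open = True
    closed_frames = 0 if eyes_open else closed_frames + 1
    if closed_frames > 30:                    # roughly a second of shut-eye
        subprocess.run(["espeak", "wake up"])  # the bored British accent
        closed_frames = 0
```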

We were sure that the thing was designed to slap [Norbert] in the face à la [Simone Giertz]’s robot alarm clock, but no, that long-fingered hand just slowly swings down and gently taps [Norbert] on the arm (or whatever is in the path of the slappy hand). Check out the short demo and build video after the break.

Do you want to be awoken even more gently? Try a sunlight lamp. We’ve covered dozens of those, but this one gradually gets about as bright as the sun.

Continue reading “Wake-Up Robot Does It Gently”

Robot with glowing eyes

Spatial AI And CV Hack Chat

Join us on Wednesday, December 1 at noon Pacific for the Spatial AI and CV Hack Chat with Erik Kokalj!

A lot of what we take for granted these days existed only in the realm of science fiction not all that long ago. And perhaps nowhere is this more true than in the field of machine vision. The little bounding box that pops up around everyone’s face when you go to take a picture with your cell phone is a perfect example; it seems so trivial now, but just think about what’s involved in putting that little yellow box on the screen, and how it would not have been plausible just 20 years ago.

Erik Kokalj

Perhaps even more exciting than the development of computer vision systems is their accessibility to anyone, as well as their move into the third dimension. No longer confined to flat images, spatial AI and CV systems seek to extract information from the position of objects relative to others in the scene. It’s a huge leap forward in making machines see like we see and make decisions based on that information.

To help us along the road to incorporating spatial AI into our projects, Erik Kokalj will stop by the Hack Chat. Erik does technical documentation and support at Luxonis, a company working on the edge of spatial AI and computer vision. Join us as we explore the depths of spatial AI.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, December 1st at 12:00 PM Pacific time. If time zones have you tied up, we have a handy time zone converter.

A robot that uses CV to detect villagers in Stardew Valley and display their gift preferences on a screen.

Stardew Valley Preferences Bot Is A Gift To The Player

It seems like most narrative games have some kind of drudgery built in. You know, some tedious and repetitious task that you absolutely must do if you want to succeed. In Stardew Valley, that thing is gift giving, which earns you friendship points just like in real life. More important than the giving itself is that each villager has preferences — things they love, like, and hate to receive as gifts. It’s a lot to remember, and most people don’t bother trying and just look it up in the wiki. Well, except for Abigail, who seems to like certain gemstones so much that she must be eating them. She’s hard to forget.

[kutluhan_aktar]’s villager gift preferences bot is a fun and fantastic use of OpenCV. This bot uses a LattePanda Alpha 864s, which is a single-board computer with an Arduino Leonardo built in. It works using template matching, which is basically a game of Where’s Waldo? for computers.

Given screenshots of each villager in various poses, the LattePanda recognizes them within a given game scene, then looks up their birthday and preferences, which the Leonardo displays on a 3.5″ LCD screen. At the same time, it alerts the player with a buzz and a big green LED. Be sure to check it out in action after the break.
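Template matching itself is nearly a one-liner in OpenCV. Here’s a bare-bones sketch of the technique, not [kutluhan_aktar]’s exact code; the filenames and match threshold are stand-ins:

```python
import cv2

scene = cv2.imread("game_scene.png")       # screenshot of the current scene
template = cv2.imread("abigail.png")       # cropped villager sprite
h, w = template.shape[:2]

# slide the template over the scene and score every position
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                          # threshold is an assumption
    top_left = max_loc
    cv2.rectangle(scene, top_left, (top_left[0] + w, top_left[1] + h),
                  (0, 255, 0), 2)
    print("Found villager at", top_left, "score", round(max_val, 2))
```

Matching against several poses per villager, as the project does, simply means repeating the search with each template and keeping the best score.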

In Animal Crossing, the drudgery amounts to pressing the A button while catching shooting stars. That’s not a huge problem for a Teensy.

Continue reading “Stardew Valley Preferences Bot Is A Gift To The Player”

This Robot Can’t Keep Its Eyes Off The Money

Some say there’s no treasure quite as valuable as the almighty dollar. [Norbert Zare] likes alt-rock soundtracks on YouTube videos and robots obsessed with money, so he set about building the latter.

The project is fundamentally a simple one. A Raspberry Pi 3B+ is outfitted with a Pi Camera and set up to control twin servo motors attached to a simple pan/tilt assembly. The Pi runs OpenCV in a face-tracking mode, which allows the robot to readily track money in its field of view, as the vast majority of money out there has someone’s face on it. OpenCV works out where the money sits in the frame and steers the camera towards the cash.
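A loop like the following captures the gist, though it’s only a sketch under assumptions and not [Norbert]’s actual code; the GPIO pins and nudge gain are guesses. Find the largest face (or portrait), measure how far it sits from the centre of the frame, and nudge the servos proportionally:

```python
import cv2
from gpiozero import AngularServo

pan = AngularServo(17, min_angle=-90, max_angle=90)   # GPIO pins assumed
tilt = AngularServo(18, min_angle=-90, max_angle=90)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
pan.angle = tilt.angle = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # proportional nudge toward centring the face in the frame
        err_x = (x + w / 2) - frame.shape[1] / 2
        err_y = (y + h / 2) - frame.shape[0] / 2
        pan.angle = max(-90, min(90, pan.angle - err_x * 0.02))
        tilt.angle = max(-90, min(90, tilt.angle + err_y * 0.02))
```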

It’s a neat repurposing of OpenCV’s face detection algorithm, and it’s much faster than training your own money-tracking system. However, it seems like the robot would track regular human faces, too. Perhaps it could be optimised with a colour check, such that only greyscale or green faces were followed by the robot.

Does the project do anything useful or important? Arguably no, but if a robot can be this obsessed with money, perhaps we all can learn something. Alternatively, it might just have served as a useful project for [Norbert] to learn about programming and mechatronics. Either way, we dig it. Code is on GitHub for the curious.

Using OpenCV in this way has become common over the years. If you want to detect cats, however, maybe consider giving TensorFlow a try. Video after the break.

Continue reading “This Robot Can’t Keep Its Eyes Off The Money”