Anything Becomes A Clock

Clocks are a popular project around here, and with good reason. There are a ton of options, and there’s always a new take on ways to tell time. Clocks using lasers, words, or even ball bearings are all atypical ways of displaying time, but like a mathematician looking for a general proof of a long-understood idea, this clock from [Julldozer] shows us a way to turn any object into a clock.

His build uses the AA-powered clock movements that you would find in any typical wall clock, rather than reaching for his go-to solution of an Arduino and a stepper motor. The motors that drive the hands in these movements are extremely low-torque and low-power, which is what allows them to run for so long on such a small power source. He uses two of them, one for hours and one for minutes, to each of which he attaches a custom-built lazy Susan. The turntables need to be extremely low-friction to avoid a situation where he has to change batteries every day, so after some 3D printing he has two rotating plates, each of which can hold any object in order to tell him the current time.
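To get a feel for the geometry, here’s a minimal Python sketch (ours, not [Julldozer]’s code) that maps the current time to the angle of each platter, given that a standard quartz movement turns its minute shaft once per hour and its hour shaft once per twelve hours:

```python
# Toy sketch of the turntable geometry, not code from the project itself:
# a quartz clock movement rotates the minute shaft 360 degrees per hour
# and the hour shaft 360 degrees per 12 hours.
from datetime import datetime

def turntable_angles(now=None):
    """Return (hour_angle, minute_angle) in degrees, clockwise from 12."""
    now = now or datetime.now()
    minute_angle = (now.minute + now.second / 60) * 6    # 6 deg per minute
    hour_angle = (now.hour % 12 + now.minute / 60) * 30  # 30 deg per hour
    return hour_angle, minute_angle

if __name__ == "__main__":
    h, m = turntable_angles()
    print(f"hour platter at {h:.1f} deg, minute platter at {m:.1f} deg")
```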

While he didn’t design a clock from scratch or reinvent any other wheels, the part of this project that shines is the way he was able to get such a low-power motor to turn something so much heavier than a clock hand. This could have uses well outside the realm of timekeeping, and reminds us of this 3D-printed gear set from last year’s Hackaday Prize.

Continue reading “Anything Becomes A Clock”

Object Detection, With TensorFlow

Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open-source library called TensorFlow, which [Evan], aka [Edje Electronics], has put to work for exactly this purpose.

His object recognition software runs on a Raspberry Pi equipped with a webcam, and also makes use of OpenCV. [Evan] notes that this opens up a lot of creative low-cost detection applications for the Pi, such as setting up a camera that detects when a pet is waiting at the door to be let in or out, counting the number of bees entering and exiting a beehive, or monitoring parking spaces at an office.
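If you want a feel for what such a detection loop looks like, here’s a hedged sketch (not [Evan]’s actual code) that grabs webcam frames with OpenCV and runs them through a TensorFlow Object Detection API SavedModel; the model path and the 0.6 score threshold are assumptions for illustration:

```python
# Sketch of a Pi-style detection loop: OpenCV grabs frames, a TensorFlow
# Object Detection API SavedModel finds objects, and boxes are drawn live.
# "saved_model/" is a hypothetical path to an exported detection model.
import cv2
import numpy as np
import tensorflow as tf

detect = tf.saved_model.load("saved_model").signatures["serving_default"]
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    out = detect(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    boxes = out["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
    scores = out["detection_scores"][0].numpy()
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score < 0.6:                         # arbitrary confidence cutoff
            continue
        y1, x1, y2, x2 = box
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```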

This project uses a number of other toolkits as well, including Protobuf. It also makes extensive use of Python scripts, but if you’re comfortable with that and you have an application for computer vision, [Evan]’s tutorial will get you started.

Continue reading “Object Detection, With TensorFlow”

Ping Pong Ball-Juggling Robot

There aren’t too many sports named for the sound produced during the game; even though it’s properly referred to as “table tennis” by serious practitioners, ping pong is probably the most obvious. Fittingly, [Nekojiru] built a ping pong ball juggling robot that used those very acoustics to pinpoint the location of the ball relative to the robot. Not satisfied with his efforts there, he moved on to a visual solution and built a new juggling rig that uses computer vision instead of sound to keep a ping pong ball aloft.

The main controller is a Raspberry Pi 2 with a Pi camera module attached. After some mishaps with the planned IR vision system, [Nekojiru] decided to use green light to illuminate the ball. He notes that OpenCV probably wouldn’t have worked for him because it’s not fast enough to process frames at the 90 fps required to keep the ping pong ball bouncing. From the incoming camera data, an algorithm extracts the 3D position of the ball and directs the paddle to strike it in a particular way.
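As a rough illustration of why a stripped-down pipeline can outrun a general-purpose one, here’s a minimal sketch (our assumption of the approach, not [Nekojiru]’s code) that finds a brightly green-lit ball by thresholding a single color channel and taking the centroid of the bright pixels:

```python
# Minimal ball-finding sketch: threshold the green channel of an RGB frame
# and return the centroid of the bright pixels. At 90 fps there is little
# time per frame, so the work is kept to a few vectorized numpy operations.
import numpy as np

def ball_centroid(frame_rgb, thresh=200):
    """frame_rgb: H x W x 3 uint8 array. Returns (x, y) in pixels, or None."""
    mask = frame_rgb[:, :, 1] > thresh   # keep only bright green pixels
    if not mask.any():
        return None                      # ball not in view this frame
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```

With centroids from successive frames, the ball’s position and velocity can be estimated and fed to the paddle controller.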

If you’ve ever wanted to get into real-time object tracking, this is a great project to look over. The control system is well polished and the robot itself looks almost professionally made. Maybe it’s possible to build something similar to test [Nekojiru]’s hypothesis that OpenCV isn’t fast enough for this. If you want to get started in that realm of object tracking, there are some great projects that make use of that piece of software as well.

RadarCat Gives Computers A Sense Of Touch

So far, humans have had the edge in the ability to identify objects by touch, but not for long. Using Google’s Project Soli, a miniature radar that detects the subtlest of gesture inputs, the [St. Andrews Computer Human Interaction group (SACHI)] at the University of St. Andrews has developed a new platform, named RadarCat, that uses the chip to identify materials, as if by touch.

Realizing that different materials return unique radar signatures to the chip, the [SACHI] team combined it with recognition software and machine learning processes that enable RadarCat to identify a range of materials accurately and in real time! It can also display additional information about the object, such as nutritional information in the case of food, or product information for consumer electronics. The video shows how RadarCat has already learned to recognize an impressive range of materials, and even specific body parts. Can Skynet be far behind?
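The writeup doesn’t spell out RadarCat’s actual pipeline, but conceptually it amounts to supervised classification of radar returns. Here’s an illustrative sketch with synthetic stand-in data, where every name and number is our assumption:

```python
# Conceptual sketch only, with synthetic data standing in for real radar
# returns: each signature is a feature vector, and a supervised classifier
# learns to map signatures to material labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
materials = ["glass", "wood", "steel", "skin"]

# 50 noisy 64-dimensional "signatures" per material, clustered by class.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(50, 64))
               for i in range(len(materials))])
y = np.repeat(materials, 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new, unseen signature (this one is built to look like "steel").
sample = rng.normal(loc=2, scale=0.3, size=(1, 64))
print(clf.predict(sample)[0])
```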

Continue reading “RadarCat Gives Computers A Sense Of Touch”