Multi-Year Doorbell Project

Camera modules for the Raspberry Pi became available shortly after its release in the early ’10s. Since then there has been about a decade of projects eschewing traditional USB webcams in favor of this more affordable, versatile option. Even after all that time there are still some hurdles to overcome, and [Esser50k] has written some supporting software for a smart doorbell that helps solve a few of them.

One of the major obstacles to using the Pi camera module is that it can only be used by one process at a time. The PiChameleon software that [Esser50k] built is a clever workaround: it runs the camera as a service, so everything else gets far more flexibility in how it uses the camera. He uses it in the latest iteration of his smart doorbell and intercom system, with a Pi Zero in the outdoor unit armed with motion detection to alert him to visitors, and another Raspberry Pi inside with a touch screen that serves as the interface for the whole system.
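PiChameleon's actual interface isn't shown here, but the underlying pattern is easy to sketch: a single long-lived process owns the camera and republishes frames, so motion detection, streaming, and the intercom UI can all act as clients instead of fighting over the hardware. Below is a minimal, hypothetical sketch of that idea; the port and wire format are invented for illustration.

```python
# Minimal sketch of the "camera as a service" idea: one process owns the
# Pi camera and hands the latest JPEG frame to any client that connects.
# The port and wire format here are illustrative, not PiChameleon's API.
import io
import socket
import struct
import threading

import picamera  # legacy Pi camera library; newer OS images would use picamera2

latest_frame = b""
frame_lock = threading.Lock()

def capture_loop():
    """Continuously capture JPEG frames from the one-and-only camera handle."""
    global latest_frame
    with picamera.PiCamera(resolution=(640, 480), framerate=15) as camera:
        stream = io.BytesIO()
        for _ in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
            with frame_lock:
                latest_frame = stream.getvalue()
            stream.seek(0)
            stream.truncate()

def serve_clients(port=8765):
    """Any number of clients can ask for the current frame without touching the camera."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", port))
    server.listen()
    while True:
        conn, _ = server.accept()
        with conn, frame_lock:
            conn.sendall(struct.pack(">I", len(latest_frame)) + latest_frame)

threading.Thread(target=capture_loop, daemon=True).start()
serve_clients()
```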

The entire build process over the past few years was rife with learning opportunities, from technical design problems to plenty of user errors that caused failures. Some extra features have been added along the way to enhance the experience, such as automatically talking to strangers passing by. There are other unique ways of using machine learning on doorbells too, like this one that listens for a traditional doorbell sound and then alerts its user.

Continue reading “Multi-Year Doorbell Project”

An automatic laser turret playing with a cat.

Entertain Your Cats Automatically With LazerPaw

Most of us would agree that kittens are very cute, but require lots of attention in return. What would you do if you adopted three abandoned cats but didn’t have all day to play with them? [Hoani Bryson] solved his problem by building LazerPaw — an autonomous, safe way to let your cats chase lasers.

Having recently tinkered with computer vision in the form of OpenCV, [Hoani] decided he would make a laser turret for his cats to play with. An infrared camera, used so that the LazerPaw works in the dark, is mounted to the laser and the Raspberry Pi. These electronics are then mounted on a servo-based pan/tilt module, which is in turn mounted with two smartphone clamps to the ceiling. That way, when the cats chase the laser, they will be looking away from the beam source. Additionally, if the device is aiming directly at a cat, the laser is turned off. Finally, [Hoani] added some NeoPixels with an Arduino-based controller for extra hacker vibes.

The LazerPaw’s software takes in a 30 FPS stream from the camera, scales it down for performance, and applies a threshold filter to it. When dark pixels, assumed to be a cat, are detected, the software “pushes” the turret’s aim away from them, more forcefully the closer the cat is to the laser. The effect is that every time a cat catches up to the laser, it moves away again. The processed images are also sent to an interactive website for remote cat playtime. Finally, there is a physical start button so you don’t need WiFi to use it.
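In rough terms, the loop described above could look something like the following sketch. The threshold value, push gain, and servo interface are assumptions rather than LazerPaw's actual code, but the shape of the algorithm is the same: dark pixels are treated as the cat, and the aim gets nudged away from them, harder the closer they are to the laser.

```python
# Rough sketch of the frame-processing loop: downscale, threshold, treat dark
# pixels as "cat", and nudge the aim away from them. The gain, threshold, and
# servo interface are assumptions, not LazerPaw's code.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # ~30 FPS infrared camera
pan, tilt = 90.0, 90.0             # hypothetical servo angles in degrees
GAIN = 0.05                        # how hard a nearby cat "pushes" the aim

while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, (160, 120))                          # scale down for performance
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark pixels = cat

    ys, xs = np.nonzero(dark)
    if len(xs):
        cat = np.array([xs.mean(), ys.mean()])
        laser = np.array([80.0, 60.0])      # laser sits at frame centre (camera is co-mounted)
        offset = laser - cat
        dist = np.linalg.norm(offset) + 1e-6
        # Push harder the closer the cat is to the laser dot.
        push = (offset / dist) * (GAIN * (200.0 / dist))
        pan = float(np.clip(pan + push[0], 0, 180))
        tilt = float(np.clip(tilt + push[1], 0, 180))
        # The real build also switches the laser off if it's pointing right at the cat.
        # set_servo_angles(pan, tilt)  # placeholder for the pan/tilt driver
```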

Is your cat more of a sunbather than a deadly murder beast? Maybe it’ll like this cat chair that follows the sun.

Continue reading “Entertain Your Cats Automatically With LazerPaw”

Smart Garbage Trucks Help With Street Maintenance

If you’ve ever had trouble with a footpath, bus stop, or other piece of urban infrastructure, you probably know the hassles of dealing with a local council. It can be incredibly difficult just to track down the right avenue to report issues, let alone get them sorted in a timely fashion.

In the suburban streets of one Australian city, though, that’s changing somewhat. New smart garbage trucks are becoming instruments of infrastructure surveillance, serving a dual purpose that could reshape urban management. Naturally, though, this new technology raises issues around ethics and privacy.

Continue reading “Smart Garbage Trucks Help With Street Maintenance”

Several video clips of a robot arm manipulating objects in a kitchen environment, demonstrating some of the 12 generalized skills

RoboAgent Gets Its MT-ACT Together

Researchers at Carnegie Mellon University have shared a pre-print paper on generalized robot training within a small “practical data budget.” The team developed a system called MT-ACT, the Multi-Task Action Chunking Transformer, which breaks movement tasks into 12 “skills” (e.g., pick, place, slide, wipe) that can be combined to create new and complex trajectories in at least somewhat novel scenarios. The authors write:

Trained merely on 7500 trajectories, we are demonstrating a universal RoboAgent that can exhibit a diverse set of 12 non-trivial manipulation skills (beyond picking/pushing, including articulated object manipulation and object re-orientation) across 38 tasks and can generalize them to 100s of diverse unseen scenarios (involving unseen objects, unseen tasks, and to completely unseen kitchens). RoboAgent can also evolve its capabilities with new experiences.

Continue reading “RoboAgent Gets Its MT-ACT Together”

Spy Tech: Unshredding Documents

Bureaucracies generate paper, usually lots of paper. Anything you consider private — especially anything that could get you in trouble — should go in a “burn box,” which is usually a locked trash can that is periodically emptied into an incinerator. However, what about a paper shredder? Who hasn’t seen a movie or TV show where office staff furiously shred papers as the FBI, SEC, or some other three-letter agency is trying to break the door down?

That might have been the scene around 1989 and 1990, as East Germany collapsed and Germany reunified. The East German Ministry of State Security — known as the Stasi — had records of unlawful activity and, probably, information about people of interest. The staff made a best effort to destroy these records, but they did not quite complete their task.

The collapsing East German government ordered documents destroyed, and many were pulped or burned. However, many others were shredded by hand, stuffed into bags, and left awaiting final destruction. There were also some documents destroyed by the interim government in 1990. Today about 16,000 of these bags remain, each with 2,500 to 3,000 page fragments inside.

The fragments from machine-shredded documents are too small to recover, but the hand-shredded pages should be possible to reconstruct. After all, they do it all the time in spy movies, right? With modern computers and vision systems, it should be a snap.

You’d think so, anyway.
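To see why it isn't a snap, consider the most naive approach: scan every fragment and score how well each pair of torn edges lines up. A toy, hypothetical version might look like the sketch below; the filenames and scoring function are invented, and the real reconstruction projects use far more sophisticated matching.

```python
# Toy illustration of why naive matching doesn't scale: score how well the
# right edge of one scanned fragment abuts the left edge of another, for
# every ordered pair in a single bag. Purely hypothetical, not the software
# actually used on the Stasi files.
import itertools

import cv2
import numpy as np

def edge_score(left_piece, right_piece, width=3):
    """Lower is better: mean absolute difference between the abutting edge strips."""
    a = left_piece[:, -width:].astype(np.float32)
    b = right_piece[:, :width].astype(np.float32)
    h = min(a.shape[0], b.shape[0])   # hand-torn fragments rarely share a height
    return float(np.mean(np.abs(a[:h] - b[:h])))

# Hypothetical scans of the fragments from one bag.
pieces = [cv2.imread(f"fragment_{i:04d}.png", cv2.IMREAD_GRAYSCALE) for i in range(3000)]
pieces = [p for p in pieces if p is not None]

# One bag of ~3,000 fragments already means roughly nine million ordered pairs
# to score, before accounting for rotation, skew, faded ink, or pieces that
# belong to entirely different pages. Sixteen thousand bags compound the problem.
scores = {(i, j): edge_score(pieces[i], pieces[j])
          for i, j in itertools.permutations(range(len(pieces)), 2)}
```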

Continue reading “Spy Tech: Unshredding Documents”

Self-Driving Library For Python

Fully autonomous vehicles seem to perennially be just a few years away, sort of like the automotive equivalent of fusion power. But just because robotic vehicles haven’t made much progress on our roadways doesn’t mean we can’t play with the technology at the hobbyist level. You can embark on your own experimentation right now with this open source self-driving Python library.

Granted, this is a library built for much smaller vehicles, but it’s still quite full-featured. Known as Donkey Car, it’s mostly intended for what would otherwise be remote-controlled cars or robotics platforms. The library is built to be as minimalist as possible with modularity as a design principle, and includes the ability to self-drive with computer vision using machine-learning algorithms. It is capable of logging sensor data and interfacing with various controllers as well, either physical devices or through something like a browser.
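Donkey Car's modularity comes from its "parts" pattern: the vehicle loop ticks at a fixed rate, calls each part in turn, and passes data between them over named channels. The sketch below follows that pattern with stand-in parts; the library ships real camera, controller, and ML pilot parts, and import paths vary between releases, so treat this as the shape of the API rather than copy-paste code.

```python
# Minimal sketch of Donkey Car's part/vehicle pattern: each part exposes run(),
# and the Vehicle loop wires named channels between parts. The two parts below
# are illustrative stand-ins, not the library's own camera or pilot classes.
import donkeycar as dk
import numpy as np

class FakeCamera:
    """Stand-in for a camera part: outputs an image array each loop tick."""
    def run(self):
        return np.zeros((120, 160, 3), dtype=np.uint8)

class CenterlinePilot:
    """Toy autopilot: steers toward the brightest pixel on the bottom row of
    the image. A real build would drop in a trained ML pilot part here."""
    def run(self, image):
        row = image[-1].mean(axis=1)                  # brightness across the bottom row
        target = int(np.argmax(row))
        steering = (target - image.shape[1] / 2) / (image.shape[1] / 2)
        throttle = 0.2
        return float(steering), throttle

V = dk.vehicle.Vehicle()
V.add(FakeCamera(), outputs=['cam/image_array'])
V.add(CenterlinePilot(), inputs=['cam/image_array'], outputs=['steering', 'throttle'])
V.start(rate_hz=20)                                   # run the loop at 20 Hz
```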

A complete platform costs around $250 in parts, but most of what’s needed for a Donkey Car-compatible build is easily sourced. Before long your own RC vehicle could have more “full self-driving” capability than a Tesla, and potentially less risk of harboring a major security vulnerability to boot.

Modern Dance Or Full-Body Keyboard? Why Not Both!

If you felt in your heart that Hackaday was a place that would forever be free from projects that require extensive choreography to pull off, we’re sorry to disappoint you. Because you’re going to need a level of coordination and gross motor skills that most of us probably lack if you’re going to type with this full-body, semaphore-powered keyboard.

This is another one of [Fletcher Heisler]’s alternative inputs projects, in the vein of his face-operated coding keyboard. The idea there was to be able to code with facial gestures while cradling a sleeping baby; this project is quite a bit more expressive. Pretty much all you need to know about the technical side of the project can be gleaned from the brilliant “Hello world!” segment at the start of the video below. [Fletcher] uses OpenCV and MediaPipe’s Pose library for pose estimation to decode the classic flag semaphore alphabet, which encodes characters in the angle of the signaler’s extended arms relative to their body. To extend the character set, [Fletcher] added a squat gesture for numbers, and a shift function controlled by opening and closing the hands. The jazz-hands thing is just a bonus.
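A stripped-down version of that pipeline might look like the sketch below: grab pose landmarks with MediaPipe, reduce each arm to an angle snapped to 45-degree increments, and look the pair up in a semaphore table. The few table entries shown are placeholders rather than the real semaphore alphabet, and this is an illustration of the approach, not [Fletcher]'s code.

```python
# Sketch of semaphore decoding from pose landmarks. The lookup table entries
# are placeholders; a real decoder needs the full A-Z table plus debouncing
# (only accept a letter once the pose has been held for a moment).
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def arm_angle(shoulder, wrist):
    """Angle of the arm in degrees, measured from straight down."""
    dx = wrist.x - shoulder.x
    dy = wrist.y - shoulder.y            # image y grows downward
    return float(np.degrees(np.arctan2(dx, dy)) % 360)

SEMAPHORE = {(225, 180): 'A', (270, 180): 'B', (315, 180): 'C'}  # hypothetical subset

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            continue
        lm = results.pose_landmarks.landmark
        left = arm_angle(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                         lm[mp_pose.PoseLandmark.LEFT_WRIST])
        right = arm_angle(lm[mp_pose.PoseLandmark.RIGHT_SHOULDER],
                          lm[mp_pose.PoseLandmark.RIGHT_WRIST])
        # Snap each arm to the nearest 45-degree position and look the pair up.
        key = (round(right / 45) * 45 % 360, round(left / 45) * 45 % 360)
        char = SEMAPHORE.get(key)
        if char:
            print(char, end='', flush=True)
```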

Honestly, the hack here is mostly a brain hack — learning a complex series of gestures and stringing them together fluidly isn’t easy. [Fletcher] used a few earworms to help him master the character set and tune his code; the inevitable Rickroll was quite artistic, and watching him nail the [Johnny Cash] song was strangely satisfying. We also thoroughly enjoyed the group number at the end. Ooga chaka FTW.

Continue reading “Modern Dance Or Full-Body Keyboard? Why Not Both!”