Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and we saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun onto the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which it opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34 which reminded us of the factory scene from RoboCop.
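For the curious, here is a rough Python sketch of what a detect-and-slew loop like this can look like: an off-the-shelf TensorFlow person detector finds a target, and the offset between the bounding box center and the frame center becomes a proportional pan/tilt correction. The detector model, the gains, and the send_pan_tilt() stub are our own placeholders, not [Leo]'s code.

```python
# Hedged sketch of a detect-and-slew loop, not the actual build's software.
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")
PERSON_CLASS = 1            # COCO label id for "person"
K_PAN, K_TILT = 0.05, 0.05  # proportional gains (made up for illustration)

def send_pan_tilt(d_pan, d_tilt):
    """Placeholder for whatever drives the turret's pan/tilt motors."""
    print(f"pan {d_pan:+.2f}  tilt {d_tilt:+.2f}")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    boxes = result["detection_boxes"][0].numpy()
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()

    people = [b for b, c, s in zip(boxes, classes, scores)
              if c == PERSON_CLASS and s > 0.5]
    if people:
        ymin, xmin, ymax, xmax = people[0]  # detections come sorted by score
        cx, cy = (xmin + xmax) / 2 * w, (ymin + ymax) / 2 * h
        # The error between the target center and the frame center drives the turret.
        send_pan_tilt(K_PAN * (cx - w / 2), K_TILT * (cy - h / 2))
```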

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.

Continue reading “Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips”

Using Smartphone Cameras To Make Sure Drivers Are Looking At The Road

Most of us are probably quite aware of the damage that a car can inflict when driven by a distracted driver. In an ideal world, people driving a car would not allow something like their phone to distract them from their primary task of safely piloting the 1+ metric ton vehicle they are controlling.

Many smartphone apps as well as in-car infotainment systems have added features over the years that try to prevent a driver from using them, but they run into the issue that it’s hard to distinguish between passenger and driver. As it turns out, simply asking the human whether they are the driver doesn’t always get the expected result. This is where [Rushil Khurana] and his team at Carnegie Mellon University (CMU) have come up with a more foolproof approach.

In their paper (PDF), they cover the algorithm and software implementation that uses the smartphone’s own front (selfie) and back cameras to determine, from views of the car’s interior, which side of the car the user is sitting on, and to deduce from that whether the user is in the driver’s seat or not. From there it is a fairly safe assumption that if the user is sitting in the driver’s seat and the car is moving, they should not be looking at the phone’s screen.
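We haven't reimplemented the paper, but the decision an app would make on top of such a classifier is simple enough to sketch. In the snippet below, classify_seat_side() stands in for whatever per-camera model the researchers trained; everything else is just fusing the two views and checking that the car is moving. This is our illustration, not the CMU algorithm.

```python
# Hypothetical decision logic built on top of a camera-based seat classifier.
def classify_seat_side(image) -> float:
    """Placeholder model: returns the probability the phone is on the left side."""
    raise NotImplementedError

def should_lock_ui(front_img, rear_img, speed_mps, left_hand_drive=True):
    # Fuse the front- and rear-camera views into one left-side probability.
    p_left = 0.5 * (classify_seat_side(front_img) + classify_seat_side(rear_img))
    in_driver_seat = p_left > 0.5 if left_hand_drive else p_left < 0.5
    # Only restrict the interface when the car is actually moving.
    return in_driver_seat and speed_mps > 2.0
```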

In a test involving 16 different cars and 33 users, they achieved an overall accuracy of 94% with the phone held in the hand, and 92.2% while docked. This is more reliable than the other approaches covered in the paper, and as a benefit does not require any extra hardware. Who knows, upcoming smartphones may include a feature like this, so that apps can easily determine what feature set should be made available to a driver, if any.

Continue reading “Using Smartphone Cameras To Make Sure Drivers Are Looking At The Road”

Machine Learning Algorithm Runs On A Breadboard 6502

When it comes to machine learning algorithms, one’s thoughts do not naturally flow to the 6502, the processor that powered some of the machines in the first wave of the PC revolution. And one definitely does not think of gesture recognition running on a homebrew breadboard version of a 6502 machine, and yet that’s exactly what [Nick Bild] has accomplished.

Before anyone gets too worked up in the comments, we realize that [Nick]’s Vectron breadboard computer is getting a lot of help from other, more modern machines. He’s got a pair of Raspberry Pi 3s in the mix, one to capture and downscale images from a Pi cam, and one that interfaces to an Atari 2600 emulator and sends keypresses to control games based on the gestures seen by the camera. But the logic to convert gesture to control signals is all Vectron, and uses a k-nearest neighbor algorithm executed in 6502 assembly. Fifty gesture images are stored in ROM and act as references for the four known gesture classes: up, down, left, and right. When a match between the camera image and a gesture class is found, the corresponding keypress is sent to the game. The video below shows that the whole thing is pretty responsive.
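The 6502 assembly is the real star here, but the k-nearest neighbor idea itself is compact enough to sketch in a few lines of Python. The 8×8 reference size and the L1 distance below are our assumptions for illustration; the principle of voting among the closest stored gestures is the same.

```python
# Minimal k-NN gesture classifier sketch, mirroring the idea rather than the
# 6502 implementation.
import numpy as np

LABELS = ["up", "down", "left", "right"]

def classify(frame, refs, ref_labels, k=3):
    """frame: flattened grayscale image; refs: (50, N) array of reference images;
    ref_labels: integer label per reference (indices into LABELS)."""
    dists = np.sum(np.abs(refs - frame), axis=1)   # L1 distance is cheap, even on a 6502
    nearest = np.argsort(dists)[:k]
    votes = np.bincount(ref_labels[nearest], minlength=len(LABELS))
    return LABELS[int(np.argmax(votes))]

# Usage: refs and ref_labels would hold the same data burned into the Vectron's ROM.
# gesture = classify(camera_frame.flatten(), refs, ref_labels)
```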

In our original article on [Nick]’s Vectron breadboard computer, [Tom Nardi] said that “You won’t be playing Prince of Persia on it.” That may be true, but a machine learning system running on the Vectron is not too shabby either.

Continue reading “Machine Learning Algorithm Runs On A Breadboard 6502”

Machine Vision Keeps Track Of Grubby Hands

Can you remember everything you’ve touched in a given day? If you’re being honest, the answer is, “Probably not.” We humans are a tactile species, with an outsized proportion of both our motor and sensory nerves sent directly to our hands. We interact with the world through our hands, and unfortunately that may mean inadvertently spreading disease.

[Nick Bild] has a potential solution: a machine-vision system called Deep Clean, which monitors a scene and records anything in it that has been touched. [Nick]’s system uses a Jetson Xavier and a stereo camera to detect depth in a scene; he built his camera from a pair of Raspberry Pi cams and a Pi 3B+, but other depth cameras like a Kinect could probably do the job. The idea is to watch the scene for human hands (OpenPose is the tool he chose for that job) and correlate their depth in the scene with the depth of objects. Touch a doorknob or a light switch, and a marker is left on the scene. A cleaning crew could then look at the marked-up scene to see which areas need extra attention. We can think of plenty of applications that extend beyond the current crisis, as the ability to map areas that have been touched seems generally useful.
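The gist of the touch-marking step is easy to sketch, even if [Nick]'s pipeline is more involved. In the hedged example below, hand_keypoints stands in for whatever fingertip coordinates the OpenPose hand detector reports, and a touch is registered whenever a fingertip's depth comes within a few centimeters of the depth of the empty scene behind it.

```python
# Hedged sketch of the touch-marking idea; the depth-of-empty-scene comparison
# is our assumption, not necessarily how Deep Clean does it.
import cv2

TOUCH_TOLERANCE_MM = 30   # a hand within 3 cm of the surface counts as a touch
MARK_RADIUS_PX = 25

def update_touch_mask(touch_mask, depth_map, background_depth, hand_keypoints):
    """hand_keypoints: iterable of (x, y) pixel coordinates of fingertips."""
    for x, y in hand_keypoints:
        hand_d = int(depth_map[y, x])
        surface_d = int(background_depth[y, x])   # depth of the scene with no hands in it
        if abs(hand_d - surface_d) < TOUCH_TOLERANCE_MM:
            cv2.circle(touch_mask, (x, y), MARK_RADIUS_PX, 255, thickness=-1)
    return touch_mask

# The mask can then be overlaid on the live image so a cleaning crew sees
# exactly which doorknobs and switches have been handled.
```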

[Nick] has been getting some mileage out of that Xavier lately — he’s used it to build an AI umpire and shades that help you find lost stuff. Who knows what else he’ll find to do with them during this time of confinement?

Continue reading “Machine Vision Keeps Track Of Grubby Hands”

Robotic Ball Bouncer Uses Machine Vision To Stay On Target

When we first caught a glimpse of this ball juggling platform, we were instantly hooked by its appearance. With its machined metal linkages and clear polycarbonate platform, it’s got an irresistibly industrial look. But as fetching as it may appear, it’s even cooler in action.

You may recognize the name [T-Kuhn] as well as sense the roots of the “Octo-Bouncer” from his previous juggling robot. That earlier version was especially impressive because it used microphones to listen to the pings and pongs of the ball bouncing off the platform and determine its location. This version went the optical feedback route, using a camera mounted under the platform to track the ball using OpenCV on a Windows machine. The platform linkages are made from 150 pieces of CNC’d aluminum, with each arm powered by a NEMA 17 stepper with a planetary gearbox. Motion control is via a Teensy, chosen for its blazing-fast clock speed which makes for smoother acceleration and deceleration profiles. Watch it in action from multiple angles in the video below.
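This isn't [T-Kuhn]'s code, but the per-frame ball tracking the under-platform camera has to do can be sketched with a handful of OpenCV calls: blur the image, find a circle, and report its offset from the platform center so the motion controller knows which way to tilt the plate. The Hough parameters below are placeholders.

```python
# Rough per-frame ball tracker sketch, assuming a camera looking up at the
# clear platform; parameters are illustrative only.
import cv2

def find_ball(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    return int(x), int(y)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ball = find_ball(frame)
    if ball:
        h, w = frame.shape[:2]
        err_x, err_y = ball[0] - w // 2, ball[1] - h // 2
        # err_x / err_y would feed the platform tilt controller
        # (over serial to the Teensy in the real build).
        print(err_x, err_y)
```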

Hats off to [T-Kuhn] for an excellent build and a mesmerizing device to watch. Both his jugglers do an excellent job of keeping the ball under control; his robotic ball-flinger is designed to throw the ball to the same spot every time.

Continue reading “Robotic Ball Bouncer Uses Machine Vision To Stay On Target”

Assistive Specs Help Jog Your Memory

It’s something that happens to all of us: we forget things. Young and old, we know things are on our to-do list, but in the heat of the moment they disappear from our minds and we miss them. There are a myriad of technological answers to this in the form of reminders and calendars, but [Nick Bild] has come up with possibly the most inventive yet. His Newrons project is a pair of glasses with a machine vision camera that flashes a light when it detects an object in its field of view associated with a calendar entry.

At its heart is a JeVois A33 Smart Machine Vision Camera, which runs a neural network trained on an image dataset. It passes its sightings to an Arduino Nano IoT fitted with a real-time clock, which pulls appointment information from Google Calendar and flashes the LED when it detects a match between object and event. His example, which we’ve placed below the break, is a pill bottle triggering a reminder to take the pills.
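Collapsed into one routine for clarity (in the real build the JeVois does the detecting and the Arduino does the matching), the match logic might look something like the Python below. The twelve-hour window and the keyword matching are our assumptions, not necessarily how [Nick] wired it up.

```python
# Hedged sketch of the object-to-calendar matching step.
from datetime import datetime, timedelta

REMINDER_WINDOW = timedelta(hours=12)   # how far ahead to look (assumption)

def should_flash(detected_label, events, now=None):
    """events: list of (start_time, title) pulled from Google Calendar."""
    now = now or datetime.now()
    for start, title in events:
        upcoming = now <= start <= now + REMINDER_WINDOW
        # Fire if any word of the detected object's label shows up in the event title.
        if upcoming and any(w in title.lower() for w in detected_label.lower().split()):
            return True
    return False

# e.g. should_flash("pill bottle", [(datetime(2020, 5, 4, 9, 0), "Take pills")])
```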

We like this idea, but can’t help thinking that it has a flaw: the reminder relies on the object coming into view. A version that tied this in with more conventional calendar-based reminders would address this, and perhaps save the forgetful a few problems.

Continue reading “Assistive Specs Help Jog Your Memory”

Lane Keeping RC Car Uses OpenCV

Automakers continue to promise that fully autonomous cars are around the corner, but we’re still not quite there yet. However, there are a broad range of driver assist technologies that have come to market in recent years, with lane keeping assist being one of them. [raja_961] decided to implement this technology on an RC car, using a Raspberry Pi.

A regular off-the-shelf RC car is used as the base of the platform, outfitted with two drive motors and a third motor used for the steering. Unfortunately, the car can only turn full-left or full-right, limiting the finesse of the steering. Despite this, the work continued. A Raspberry Pi 3 was fitted out with a motor controller and camera, and hooked up to the chassis. With everything laced up, a Python script is used along with OpenCV to run the lane-keeping algorithm.

[raja_961] does a great job of explaining the lane keeping methodology. Rather than simply invoking a library and calling it good, the Instructable breaks down each stage of how the algorithm works. Incoming images are converted to the HSL color system before a series of operations is used to pick out the apparent slope of the lane lines. This is then used with a PID algorithm to guide the steering of the car.
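Condensed into a single Python sketch, the pipeline looks roughly like the snippet below. The color thresholds, Hough parameters, and PID gains are arbitrary illustration values; the structure (HSL conversion, edge and line detection, slope averaging, then PID) follows the stages described in the Instructable.

```python
# Condensed lane-keeping sketch under stated assumptions; not the original code.
import cv2
import numpy as np

KP, KI, KD = 0.4, 0.0, 0.1        # PID gains, chosen arbitrarily
integral, prev_err = 0.0, 0.0

def lane_error(frame):
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, (0, 120, 0), (255, 255, 255))   # bright lane markings
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 30,
                            minLineLength=20, maxLineGap=10)
    if lines is None:
        return 0.0
    # Average slope of the detected segments stands in for the steering error.
    slopes = [(y2 - y1) / (x2 - x1) for [[x1, y1, x2, y2]] in lines if x2 != x1]
    return float(np.mean(slopes)) if slopes else 0.0

def pid_steering(err, dt=0.05):
    global integral, prev_err
    integral += err * dt
    derivative = (err - prev_err) / dt
    prev_err = err
    return KP * err + KI * integral + KD * derivative   # maps to a steering command
```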

It’s a comprehensive explanation of a basic lane-keeping algorithm, and a great place to start if you’re interested in learning about the technology. There’s plenty going on in the world of self-driving RC cars; you just need to know where to look! Video after the break.

Continue reading “Lane Keeping RC Car Uses OpenCV”