Adding A Laser Blaster To Classic Atari 2600 Games With Machine Vision

Remember the pistol controller for the original Atari 2600? No? Perhaps that’s because it never existed. But now that we’re living in the future, adding a pistol to the classic games of the 2600 is actually possible.

Possible, but not exactly easy. [Nick Bild]’s approach to the problem is based on machine vision, using an NVIDIA Xavier NX to run an Atari 2600 emulator. The game is projected on a wall, while a camera watches the game field. A toy pistol with a laser pointer attached to it blasts away at targets, and OpenCV is used to find the spots that have been hit by the laser. A Python program matches the coordinates of the laser blasts with coordinates within the game, then sends the keyboard commands that fire the in-game blasters. Basically, the game plays itself based on where it sees the laser shots. You can check out the system in the video below.
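
To get a feel for the laser-spotting half of the pipeline, here’s a minimal OpenCV sketch in the same spirit: it hunts for a saturated bright blob in each camera frame and reports its centroid. The camera index and thresholds are our own guesses, not values from [Nick Bild]’s code, and the coordinate mapping and keypress injection are left as comments.

```python
import cv2

cap = cv2.VideoCapture(0)          # camera watching the projected game

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # A laser dot saturates the sensor, so look for very bright pixels.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Take the largest bright blob as the laser hit and find its centroid.
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            # Map camera pixel (x, y) into game coordinates here, then
            # send the matching keypress to the emulator.
            print("laser hit at", x, y)
```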

[Nick Bild] had a busy weekend of hacking. This was the third project write-up he sent us, after his big-screen Arduboy build and his C64 smartwatch.


Teaching A Machine To Be Worse At A Video Game Than You Are

Is it really cheating if the aimbot you’ve built plays the game worse than you do?

We vote no, and while we take a dim view of cheating in general, there are still some interesting hacks in this AI-powered bot for Valorant. It’s a team-based first-person shooter with a lot of action and a Counter-Strike vibe. As [River] points out, most cheat-bots have direct access to the memory of the computer that’s playing the game, which gives them an unfair advantage over human players, who have to visually process the game field and make their moves in meatspace. To make the Valorant-bot more of a challenge, he decided to feed video of the game from one computer to another over an HDMI-to-USB capture device.
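
The capture dongle shows up as an ordinary webcam on the second machine, so grabbing the game feed is only a few lines of OpenCV. The device index and resolution below are assumptions, not details from [River]’s setup.

```python
import cv2

# The HDMI-to-USB dongle enumerates as a plain video device; index 1 is a guess.
cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()          # one full game screen per iteration
    if not ok:
        break
    cv2.imshow("game feed", frame)  # hand `frame` to the detector instead
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
cap.release()
```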

The second machine runs a YOLOv5 model that was trained on two hours of gameplay, enough to identify friend from foe — most of the time. Navigation around the map was done by analyzing the game’s on-screen minimap with OpenCV and doing some rudimentary path-finding. Actually controlling the player on the game machine was particularly hacky; rather than rely on an API to send keyboard sequences, [River] used a wireless mouse dongle on the game machine and a USB transmitter on the second machine.
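
For the detection step, something like the stock Ultralytics hub interface would do the job; the weights file name and class labels below are placeholders, not [River]’s actual model.

```python
import cv2
import torch

# Load a custom-trained YOLOv5 model; "valorant.pt" is a hypothetical file name.
model = torch.hub.load("ultralytics/yolov5", "custom", path="valorant.pt")

frame = cv2.imread("game_frame.png")      # one captured frame, BGR order
results = model(frame[:, :, ::-1])        # YOLOv5 expects RGB

# Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index.
for *box, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]         # e.g. "enemy" vs. "teammate"
    if label == "enemy" and conf > 0.5:
        x1, y1, x2, y2 = map(int, box)
        print("enemy centered at", ((x1 + x2) // 2, (y1 + y2) // 2))
```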

The results are iffy, to say the least. The system tends to get the player stuck in corners and doesn’t recognize enemies that pop up at close range. The former is a function of the low-res minimap, while the latter has to do with the training data set — most human players engage enemies at a distance, so there’s a dearth of “bad breath range” encounters to train on. Still, we’re impressed that it’s possible to train a machine to play a complex FPS game at all, let alone this well.

Guitar Hero Robot Actually Shreds

Now that the craze has passed, most of the public has sold or stashed away their plastic video game instruments and forgotten the likes of Guitar Hero and Rock Band. Having never been quite satisfied with his scores, [Nick O’Hara] set out to create a robot that could play a Guitar Hero controller. It would be easy enough to use transistors to actuate the buttons, or even just a Teensy to emulate a controller and have it play the perfect game, but [Nick] wanted to replicate what it was really like to play. So after burning out a fair number of solenoids (driving them over spec) and learning as he went, [Nick] slowly began to dial in his robot, Jon Bot Jovi.

The brains of the bot are a Raspberry Pi running some OpenCV-based code that identifies blobs of different colors. The video feed comes from a PS2 via an HDMI capture card. The solenoids are driven by an 8-channel driver board controlled by the Pi. While it missed a few notes here and there, we loved watching the strumming solenoid hammer away at the strum bar. All in all, it’s a great project, and we love the design of the robot. Whether played by a robot, turned into a synthesizer, or recreated from toy pianos and mechanical keyboards, Guitar Hero controllers offer many hacking opportunities.
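
The color-spotting part is classic OpenCV territory: mask each fret color in HSV and check whether a note blob has reached the strum line. The ranges and hit-zone rows below are illustrative guesses rather than [Nick O’Hara]’s actual values.

```python
import cv2
import numpy as np

# Rough HSV ranges for two of the note-gem colors; the rest follow the same pattern.
COLORS = {
    "green": ((40, 100, 100), (80, 255, 255)),
    "red":   ((0, 120, 120), (10, 255, 255)),
}

def notes_to_hit(frame, hit_rows=(400, 440)):
    """Return the fret colors whose note blobs are inside the hit zone."""
    roi = frame[hit_rows[0]:hit_rows[1], :]      # strip of pixels at the strum line
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hits = []
    for name, (lo, hi) in COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        if cv2.countNonZero(mask) > 200:          # enough pixels means a note is there
            hits.append(name)                     # fire that fret's solenoid
    return hits
```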


Putting Perseverance Rover’s View Into Satellite View Context

It’s always fun to look over aerial and satellite maps of places we know, seeing a perspective different from our usual ground-level view. We lose that context when it’s a place we don’t know by heart. Such as, say, Mars. So [Matthew Earl] sought to give Perseverance rover’s landing video some context by projecting it onto orbital imagery from ESA’s Mars Express. The resulting video (embedded below the break) is a fun watch alongside the technical writeup, Reprojecting the Perseverance landing footage onto satellite imagery.

Some telemetry of rover position and orientation was transmitted live during the landing process, with the rest recorded and downloaded later. Surprisingly, none of that information was used for this project, which was based entirely on video pixels. This makes the results even more impressive and the techniques more widely applicable to other projects. The foundational piece is SIFT (Scale-Invariant Feature Transform), one of the many tools in the OpenCV toolbox. SIFT found correspondences between Perseverance’s video frames and the Mars Express orbital image, feeding into a processing pipeline written in Python, with the results rendered in Blender.
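
In OpenCV terms, the core of such a pipeline looks roughly like the sketch below: detect SIFT keypoints in a video frame and in the orbital image, match them, and solve for the homography that drapes the frame over the map. The file names are placeholders, and this is our condensed take rather than [Matthew Earl]’s actual code.

```python
import cv2
import numpy as np

frame = cv2.imread("landing_frame.png", cv2.IMREAD_GRAYSCALE)
orbit = cv2.imread("mars_express_map.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe features in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(frame, None)
kp2, des2 = sift.detectAndCompute(orbit, None)

# Ratio-test matching, then a RANSAC homography from frame to map coordinates.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the rover's view into the orbital image's frame of reference.
warped = cv2.warpPerspective(frame, H, (orbit.shape[1], orbit.shape[0]))
```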

While many elements of this project sound enticing for applications in robot vision, there are a few challenges touched upon in the “Final Touches” section of the writeup. The falling heatshield interfered with automated tracking, implying this process will need help to properly understand dynamically changing environments. Furthermore, it does not seem to run fast enough for a robot’s real-time needs. But at first glance, these problems are not fundamental. They merely await some motivated people to tackle in the future.

This process bears some superficial similarities to projection mapping, a category of projects we’ve featured on these pages before. Except everything is reversed (camera instead of video projector, etc.), making the math an entirely different can of worms. But if projection mapping sounds more up your alley, here is a starting point.

[via Dr. Tanya Harrison @TanyaOfMars]


An AI-Free Way To Catch Wildlife On Camera

Judging by the over-representation of the term “AI” in our news feeds these days, we’re clearly in the exponential phase of the artificial intelligence hype cycle, and very nearly at the dreaded “Peak of Inflated Expectations.” It seems like there’s nothing that AI can’t do, and nowhere that its principles can’t be applied to virtuous — and profitable — effect.

We don’t deny that AI has massive potential, but we strongly suspect that there will soon come a day when eyes will roll and stomachs will turn at yet another AI application that could have been addressed with something simpler. An example of the simpler approach can be seen in this non-AI wildlife photo trap, cobbled together by [Sebastian] to capture pictures of some camera-shy squirrels. Rather than train an AI with gigabytes of squirrel images, he instead relies on his old Sony Alpha camera, which has built-in WiFi. A Python script connects to the camera, which is trained on a feeder box and set to a very narrow depth of field. That leaves a good percentage of the scene out of focus until a squirrel or other animal comes along looking for treats. The script detects the increased area of the scene that is now in focus using a Laplace operator in OpenCV, and triggers the camera shutter. [Sebastian] ended up with some wonderful shots of the shy squirrels using this scheme; the video below describes the setup in more detail.
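
The trick is that the variance of the Laplacian jumps when something sharp enters the frame, so the focus trigger can be just a few lines. This is a minimal sketch under our own assumptions (a local webcam standing in for the Alpha’s WiFi live view, and a made-up threshold), not [Sebastian]’s script.

```python
import cv2

def sharpness(frame):
    """Variance of the Laplacian: higher means more of the scene is in focus."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture(0)            # stand-in for the camera's live view
baseline = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    s = sharpness(frame)
    # Keep a slowly moving average of "empty scene" sharpness.
    baseline = s if baseline is None else 0.95 * baseline + 0.05 * s
    if s > 1.5 * baseline:           # the scene suddenly got much sharper
        print("squirrel in focus, trigger the shutter")
```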

It’s not the first time we’ve seen the Laplace operator used to gauge image sharpness, of course, but we really like the approach [Sebastian] took here for its simplicity. The squirrels are cute too.


Real Time Object Detection For $59

There was a time when making a machine that could identify objects in a camera feed was difficult, even without trying to do it in real time. But now you can do it with a Jetson Nano board for under $60. How well does it work? Watch [Murtaza]’s video below and see what you think.

The first few minutes of the video piqued our interest, and a good thing, too, because those 50 lines of code get a 50-plus-minute video! It is worth watching, though, because there’s a lot of good information about how to apply this technique in your own projects.
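
If you’d rather skim code than watch, one common way to get real-time detection going on a Jetson Nano is OpenCV’s DNN module with a pre-trained SSD-MobileNet model, sketched below. This isn’t necessarily the approach from [Murtaza]’s video, and the model, config, and label files are placeholders you’d download separately.

```python
import cv2

# Pre-trained SSD-MobileNet v3 (COCO); file names are placeholders.
net = cv2.dnn_DetectionModel("frozen_inference_graph.pb",
                             "ssd_mobilenet_v3_large_coco.pbtxt")
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))
net.setInputSwapRB(True)

class_names = open("coco.names").read().strip().split("\n")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, confs, boxes = net.detect(frame, confThreshold=0.5)
    if len(class_ids):
        for cid, conf, box in zip(class_ids.flatten(), confs.flatten(), boxes):
            x, y, w, h = box
            label = class_names[int(cid) - 1]   # COCO class ids start at 1
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
```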
