Adding A Laser Blaster To Classic Atari 2600 Games With Machine Vision

Remember the pistol controller for the original Atari 2600? No? Perhaps that’s because it never existed. But now that we’re living in the future, adding a pistol to the classic games of the 2600 is actually possible.

Possible, but not exactly easy. [Nick Bild]’s approach to the problem is based on machine vision, using an NVIDIA Xavier NX to run an Atari 2600 emulator. The game is projected on a wall, while a camera watches the game field. A toy pistol with a laser pointer attached to it blasts away at targets, while OpenCV is used to find the spots that have been hit by the laser. A Python program matches up the coordinates of the laser blasts with coordinates within the game, then sends a sequence of keyboard commands to fire the in-game blasters. Basically, the game plays itself based on where it sees the laser shots. You can check out the system in the video below.
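The detection step doesn’t need anything exotic. The sketch below is our own back-of-the-napkin take on the idea, not [Nick Bild]’s actual code: threshold the frame for the saturated red dot, take its centroid, and warp that pixel position into game coordinates with a perspective transform. The corner coordinates here are made-up placeholders you’d measure for your own camera and projector setup.

```python
import cv2
import numpy as np

# Hypothetical calibration: pixel corners of the projected game area as seen
# by the camera, and the corresponding corners in game coordinates.
SCREEN_CORNERS = np.float32([[120, 80], [1180, 90], [1170, 650], [130, 660]])
GAME_CORNERS = np.float32([[0, 0], [160, 0], [160, 192], [0, 192]])
H = cv2.getPerspectiveTransform(SCREEN_CORNERS, GAME_CORNERS)

def find_laser_hit(frame_bgr):
    """Return the laser dot's game coordinates, or None if no hit is seen."""
    b, g, r = cv2.split(frame_bgr)
    # A red laser dot saturates the sensor: very bright red, not-too-bright green.
    mask = cv2.inRange(r, 240, 255) & cv2.inRange(g, 0, 160)
    moments = cv2.moments(mask)
    if moments["m00"] < 1:  # no bright spot in this frame
        return None
    cx = moments["m10"] / moments["m00"]
    cy = moments["m01"] / moments["m00"]
    # Warp the camera-space centroid into game-space coordinates.
    pt = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)
    return tuple(pt[0, 0])
```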

[Nick Bild] had a busy weekend of hacking. This was the third project write-up he sent us, after his big-screen Arduboy build and his C64 smartwatch.

Continue reading “Adding A Laser Blaster To Classic Atari 2600 Games With Machine Vision”

An AI-Free Way To Catch Wildlife On Camera

Judging by the over-representation of the term “AI” in our news feeds these days, we’re clearly in the exponential phase of the artificial intelligence hype cycle, and very nearly at the dreaded “Peak of Inflated Expectations.” It seems like there’s nothing that AI can’t do, and nowhere that its principles can’t be applied to virtuous — and profitable — effect.

We don’t deny that AI has massive potential, but we strongly suspect that there will soon come a day when eyes will roll and stomachs will turn at yet another AI application that could have been addressed with something easier. An example of the simpler approach can be seen in this non-AI wildlife photo trap, cobbled together by [Sebastian] to capture pictures of some camera-shy squirrels. Rather than train an AI on gigabytes of squirrel images, he instead relies on his old Sony Alpha camera, which has built-in WiFi. A Python script connects to the camera, which is trained on a feeder box and set to a very narrow depth of field. That leaves a good percentage of the scene out of focus until a squirrel or other animal comes along looking for treats. The script uses a Laplace operator in OpenCV to detect the increased area of the scene that is now in focus, and triggers the camera shutter. [Sebastian] ended up with some wonderful shots of the shy squirrels using this scheme; the video below describes the setup in more detail.
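If you want to try the trick yourself, the core of it fits in a few lines. This is a minimal sketch of the focus-variance approach, not [Sebastian]’s script: his talks to the Sony camera over WiFi, while this stand-in reads a local webcam, and the threshold is a made-up number you’d tune for your lens and scene.

```python
import cv2

FOCUS_THRESHOLD = 150.0  # hypothetical; tune for your lens and scene

def focus_measure(frame_bgr):
    """Variance of the Laplacian: higher means more of the scene is in focus."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture(0)  # stand-in for the camera's WiFi preview stream
baseline = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    sharpness = focus_measure(frame)
    if baseline is None:
        baseline = sharpness  # an empty scene establishes the baseline
    elif sharpness - baseline > FOCUS_THRESHOLD:
        print("Visitor in the focal plane -- trigger the shutter here")
```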

It’s not the first time we’ve seen the Laplace operator used to gauge image sharpness, of course, but we really like the approach [Sebastian] took here for its simplicity. The squirrels are cute too.

Continue reading “An AI-Free Way To Catch Wildlife On Camera”

OpenCV Spreads Smart Camera Joy To See Ideas Come To Life

Do you have a great application for computer vision, but couldn’t spare the cost of hardware needed to build it? Or perhaps you just need a deadline to pull you away from endless doom scrolling? Either way, the OpenCV team wants you to enter their OpenCV AI Competition 2021 and they’re willing to pitch in hardware to make it happen.

This competition is part of OpenCV’s 20th anniversary celebration, and the field of machine vision has changed a lot in those two decades. OpenCV started within Intel, harnessing the power of their high-end CPUs, but today the excitement is around specialized acceleration hardware for vision processing. Which is why OpenCV put their support and lent their name to the OpenCV AI Kit (OAK) Kickstarter we covered a few months ago. Since then, the hardware has been produced and is starting to arrive in project backers’ hands. (Barring pandemic-related shipping restrictions…)

This shiny new hardware is the competition’s focus. Phase one solicits team proposals for putting an OAK-D’s power to novel use. University teams may have up to ten members, while general teams are limited to four. Each team’s geographic home will put them in one of six global regions. Proposals must be submitted by January 27th, 2021. By February 11th, judges will select the best twenty-five general and ten university team proposals from each region, and every member of the team gets an OAK-D unit to turn their idea into reality by the phase two deadline of June 27th. That’s up to 1,200 OAK-D modules available to anyone who can convince the judges they have a great idea and are capable of bringing it to fruition. Is that you? Of course it is!
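(Checking the math: six regions times twenty-five general teams of up to four members plus ten university teams of up to ten members each works out to 6 × (25 × 4 + 10 × 10) = 1,200 units, assuming every winning team shows up at full strength.)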

Teams will also receive additional resources, such as an allotment of cloud compute credits to train their models, and naturally all the tutorials and sample code released as part of the OAK Kickstarter. No explicit resource for project team organization is mentioned, but of course our own Hackaday.io is available to support you. Best of luck to everyone who enters, and we look forward to seeing all the projects this contest will bring to life.

Vizy “AI Camera” Wants To Make Machine Vision Less Complex

Vizy, a new machine vision camera from Charmed Labs, has blown through their crowdfunding goal on the promise of making machine vision projects both easier and simpler to deploy. The camera, which starts around $250, integrates a Raspberry Pi 4 with built-in power and shutdown management, and comes with a variety of pre-installed applications so one can dive right in.

The Sony IMX477 camera sensor is the same one found in the Raspberry Pi high quality camera, and supports capture rates of up to 300 frames per second (under the right conditions, anyway.) Unlike the usual situation faced by most people when a Raspberry Pi is involved, there’s no need to worry about adding a real-time clock, enclosure, or ensuring shutdowns happen properly; it’s all taken care of.

The ‘Birdfeeder’ application can automatically identify and upload images of visitors.

Charmed Labs are the same folks behind the Pixy and Pixy 2 cameras, and Vizy goes further in the sense that everything required for a machine vision project has been put onboard and made easy to use and deploy. Even the vision processing functions work locally, with no need for a wireless data connection (though one is needed for things like automatic uploading or sharing). For outdoor or remote applications, there’s a weatherproof enclosure option, and wireless connectivity in areas with no WiFi can be obtained by plugging in a USB cellular modem.

A few of the more hacker-friendly hardware features are things like a high-current I/O header and support for both C/CS and M12 lenses for maximum flexibility. The IR filter can also be enabled or disabled via software, so no more swapping camera modules for ones with the IR filter removed. On the software side, applications are all written in Python and use open software like TensorFlow and OpenCV for processing.

The feature list looks good, but Vizy also seems to have a clear focus. It looks best aimed at enabling projects with the following structure:

Detect Things (people, animals, cars, text, insects, and more) and/or Measure Things (size, speed, duration, color, count, angle, brightness, etc.)

Perform an Action (for example, push a notification or enable a high-current I/O) and/or Record (save images, video, or other data locally or remotely.)
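In code terms, that structure is just a detect-then-act loop. Here’s a generic sketch of the shape of such an application; the detector stub and the action are hypothetical stand-ins, not Vizy’s actual API.

```python
import cv2

def detect_things(frame):
    """Return a list of (label, bounding_box) detections; stub for a real model.

    Plug in TensorFlow or OpenCV DNN inference here.
    """
    return []

def perform_action(label):
    # Push a notification, toggle a high-current I/O pin, etc.
    print(f"Detected {label}: perform an action here")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for label, box in detect_things(frame):
        perform_action(label)                 # Act
        cv2.imwrite(f"{label}.jpg", frame)    # Record: save the evidence locally
```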

The Motionscope application tracking balls on a pool table.

A good example of this structure is the Birdfeeder application which comes pre-installed. With the camera pointed toward a birdfeeder, animals coming for a snack are detected. If the visitor is a bird, Vizy identifies the species and uploads an image. If the animal is not a bird (for example, a squirrel) then Vizy can detect that as well and, using the I/O header, could briefly turn on a sprinkler to repel the hungry party-crasher. A sample Birdfeeder photo stream is here on Google Photos.

Motionscope is a more unusual but very interesting-looking application; its purpose is to capture moving objects and measure the position, velocity, and acceleration of each. A picture does a far better job of explaining what Motionscope does, so here is a screenshot of the results of it watching some billiard balls.
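Under the hood, those measurements boil down to finite differences over tracked positions. A back-of-the-envelope version, assuming you already have per-frame centroids and using made-up calibration numbers, looks like this:

```python
import numpy as np

FPS = 60.0                 # hypothetical capture rate
PIXELS_PER_METER = 900.0   # hypothetical calibration for the table

# Tracked centroids of one ball, one (x, y) pixel pair per frame.
positions_px = np.array([[100, 200], [112, 204], [124, 208], [136, 212]], float)

positions_m = positions_px / PIXELS_PER_METER
dt = 1.0 / FPS
velocity = np.diff(positions_m, axis=0) / dt      # m/s between successive frames
acceleration = np.diff(velocity, axis=0) / dt     # m/s^2
speed = np.linalg.norm(velocity, axis=1)
print(f"speeds (m/s): {speed}")
```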

Vizy The AI Camera Aims To Ease Machine Vision

Cameras are getting smarter and more capable than ever, able to run embedded machine vision algorithms and pull off tricks far beyond what something like a serial camera and microcontroller board could manage. The upcoming Vizy aims to be smarter and easier to use yet. Vizy is the work of Charmed Labs, and this isn’t their first foray into accessible machine vision; they are the same folks behind the Pixy and Pixy 2 cameras. Vizy’s main goal is to make object detection and classification easy, with thoughtful hardware features and a browser-based interface.

Vizy can identify common birds with “Birdfeeder”, one of several built-in applications that use local processing only.

The usual way to do machine vision is to get a USB camera and run something like OpenCV on a desktop machine to handle the processing. But Vizy leverages a Raspberry Pi 4 to provide a tightly-integrated unit in a small package with a variety of ready-to-run applications. For example, the “Birdfeeder” application comes ready to take snapshots of and identify common species of bird, while also identifying party-crashers like squirrels.

The demonstration video on their page shows off using the built-in high-current I/O header to control a sprinkler, repelling non-bird intruders with a splash of water while uploading pictures and video clips. The hardware design also looks well thought out; not only is there a safe shutdown and low-power mode for the Raspberry Pi-based hardware, but the lens can be swapped and the camera unit itself even contains an electrically-switched IR filter.

Vizy has a Kickstarter campaign planned, but like many others, Charmed Labs is still adjusting to the changes the COVID-19 pandemic has brought. You can sign up to be notified when Vizy launches; we know we’ll be keen for a closer look once it does. Easier machine vision is always a good thing, because it helps free people to focus on clever ideas like machine vision-based tool alignment.

OAK Vision Modules Help You See The Forest And The Trees

OpenCV is an open source library of computer vision algorithms whose power and flexibility have made many machine vision projects possible. But even with code highly optimized for maximum performance, we always wish for more. Which is why our ears perk up whenever we hear about a hardware-accelerated vision module, and the latest buzz is coming out of the OpenCV AI Kit (OAK) Kickstarter campaign.

There are two vision modules launched with this campaign: the OAK-1, with a single color camera for two-dimensional vision applications, and the OAK-D, which adds stereo cameras for that third dimension. The onboard brain is a Movidius Myriad X processor which, according to team members who have dug through its datasheet, has been massively underutilized in other products. They believe OAK modules will help the chip fulfill its potential for vision applications, delivering high performance while consuming low power in a small form factor. Reading over the spec sheet, we think it’s fair to call these “Ultimate Myriad X Dev Boards”, but we must concede “OpenCV AI Kit” sounds better. It does not provide hardware acceleration for the entire OpenCV library (likely an impossible task), but it does cover the highly demanding subset suitable for Myriad X acceleration.
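To give a feel for the programming model, here’s a minimal pipeline using the depthai Python library that grew out of this effort. Fair warning: this is the later, post-campaign flavor of the API, so treat it as illustrative rather than what backers ran on day one. The pipeline graph executes on the Myriad X while the host just drains a queue.

```python
import cv2
import depthai as dai  # the Luxonis/OAK host library

# Build a pipeline: a color camera node streaming preview frames to the host.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()  # BGR frame, ready for OpenCV
        cv2.imshow("OAK preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```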

Since the campaign launched a few weeks ago, some additional information has been released to help assure backers that this project has real substance. It turns out OAK is an evolution of a project we covered almost exactly one year ago that became a real product, DepthAI, so at least this is not their first rodeo. It is also encouraging that their invitation to the open hardware community has already borne fruit. Check out this thread discussing OAK for robot vision, where a question was met with an honest “we don’t have expertise there” from the OAK team, but ArduCam then pitched in with their camera module experience to help.

We wish them success for their planned December 2020 delivery. They have already far surpassed their funding goals, they’ve shipped hardware before, and we see a good start to a development community. We look forward to the OAK-1 and OAK-D joining the ranks of other hacking friendly vision modules like OpenMV, JeVois, StereoPi, and AIY Vision.

Dial In Your Multi-Headed 3D Printer With 2020 Machine Vision

Most folks who have been poking around at multi-tool 3D printing know that lining up nozzles can be a gnarly but necessary pain point. Existing methods have us measure offsets either with a vernier scale or with a series of pictures taken by an upwards-facing camera. And this step is not to be ignored! Any mismatch between nozzles, and your multicolor prints end up looking like Scotty really screwed up those sliders on that transporter beam console. Fear not, however! [Danal] took this problem as an opportunity to write something that’s completely automated, brought to you by machine vision.

Dubbed TAMV, for Tool Align Machine Vision, [Danal]’s setup adds a Raspberry Pi and an upwards-facing camera alongside his existing 3D printer motion controller. A few lines of code (and a few hours of compiling OpenCV) later, and he had himself a circle-detecting script that automatically cycles through each tool, detects the nozzle center, and calculates an offset for each tool that’s stored into the machine’s configuration file. If that’s not nifty enough, he’s made the entire setup open source, and he included both an installation script for compiling OpenCV and a well-written set of step-by-step instructions.
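The circle-detection core of such a script is classic OpenCV. The sketch below is our own illustration of the technique rather than TAMV’s actual code, and every tuning parameter is a placeholder for your particular optics.

```python
import cv2

def find_nozzle_center(frame_bgr):
    """Locate the nozzle bore in an upward-facing camera frame; (x, y) or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # knock down glints on the shiny nozzle tip
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
        param1=100, param2=30, minRadius=10, maxRadius=60,  # tune for your optics
    )
    if circles is None:
        return None
    x, y, _r = circles[0][0]  # strongest circle is taken as the nozzle bore
    return float(x), float(y)

# Offsets fall out by differencing each tool's detected center against tool 0,
# then converting from pixels to millimeters with a known camera scale.
```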

In a world where most hobbyists still solve this problem manually, this is leaps and bounds ahead of what we know, and it’s a great application of machine vision built on top of a stack of recognizable hardware and software. While this project was outfitted for a Jubilee running a Duet3 controller with a Raspberry Pi connected in “single-board computer” mode, the core features are readily adaptable to any other multi-tool machine with a similar control board stack. And for folks willing to poke under the hood, the project could even be extended into a standalone script that you run locally on your PC to simply print out the tool offsets.

Beyond TAMV, it’s refreshing that even a decade after 3D printers arrived in our lives, we’re still finding ways to make these machines more capable. For more fresh hacks in this category, check out a new spin on using Sharpie ink as a support material release agent.

Sadly, [Danal] passed away this past week, but we are grateful to capture this snapshot in the history of his life.

Continue reading “Dial In Your Multi-Headed 3D Printer With 2020 Machine Vision”