OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.
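To make the trick a bit more concrete, here’s a tiny CPU-side sketch (ours, not code from the project) of the kind of work a GLES 2.0 fragment shader can do when tensors are stored in textures: each output pixel of a convolutional layer is computed independently from a small neighbourhood of input samples, and that independence is exactly what lets the GPU run every pixel in parallel.

```python
# Our illustration, not code from Beatmup: a CPU stand-in for what each
# GLES 2.0 fragment shader invocation would compute if the input tensor
# lives in a texture and a full-screen quad is drawn to produce the output.
import numpy as np

def fragment(texture, x, y, weights, bias):
    """One 'fragment': sample the 3x3 neighbourhood around (x, y), apply the
    layer weights, and return a single output value (with ReLU)."""
    patch = texture[y - 1:y + 2, x - 1:x + 2]
    return max(0.0, float(np.sum(patch * weights) + bias))

def run_layer(texture, weights, bias):
    """Every output pixel is produced by an independent fragment() call,
    which is exactly the parallelism the GPU exploits."""
    h, w = texture.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = fragment(texture, x, y, weights, bias)
    return out

# Toy usage: a 3x3 edge-detection kernel over random "texture" data.
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
print(run_layer(np.random.rand(8, 8), kernel, 0.0))
```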

Constructing the neural net so it would run on these more constrained platforms was considerably more difficult, but the end result has far more interesting applications to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment: [lnstadrum] has released an image processing framework called Beatmup using these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station

Back in 2015, European Space Agency (ESA) astronaut Tim Peake brought a pair of specially equipped Raspberry Pi computers, nicknamed Izzy and Ed, onto the International Space Station and invited students back on Earth to develop software for them as part of the Astro Pi Challenge. To date, more than 50,000 young people have had their code run on one of the single-board computers, making them arguably the most popular, and surely the most traveled, Raspberry Pis in the solar system.

While Izzy and Ed are still going strong, the ESA has decided it’s about time these veteran Raspberries finally get the retirement they’re due. Set to make the journey to the ISS in December aboard a SpaceX Cargo Dragon, the new Astro Pi Mk II hardware looks quite similar to the original 2015 version at first glance. But a peek inside its 6063-grade aluminium flight case reveals plenty of new and improved gear, including a Raspberry Pi 4 Model B with 8 GB RAM.

The beefier hardware will no doubt be appreciated by students looking to push the envelope. While the majority of Python programs submitted to the Astro Pi program did little more than poll the current reading from the unit’s temperature or humidity sensors and scroll messages for the astronauts on the Astro Pi’s LED matrix, some of the more advanced projects were aimed at performing legitimate space research. From using the onboard camera to image the Earth and make weather predictions to attempting to map the planet’s magnetic field, code submitted by teams of older students will certainly benefit from the improved computational performance and expanded RAM of the newest Pi.
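For a flavour of what those simpler submissions look like, a generic Astro Pi style script built on the Sense HAT’s Python library (our illustration, not any particular team’s entry) is only a few lines:

```python
# A generic sketch in the spirit of the simpler Astro Pi entries, not any
# particular team's code: poll the environmental sensors and scroll the
# readings across the LED matrix once a minute.
from time import sleep
from sense_hat import SenseHat

sense = SenseHat()

for _ in range(10):
    temp = sense.get_temperature()    # degrees Celsius
    humidity = sense.get_humidity()   # percent relative humidity
    sense.show_message(f"{temp:.1f}C {humidity:.0f}%", scroll_speed=0.05)
    sleep(60)
```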

As with the original Astro Pi, the ESA and the Raspberry Pi Foundation have shared plenty of technical details about these space-rated Linux boxes. After all, students are expected to develop and test their code on essentially the same hardware down here on Earth before it gets beamed up to the orbiting computers. So let’s take a quick look at the new hardware inside Astro Pi Mk II, and what sort of research it should enable for students in 2022 and beyond.

Continue reading “Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station”

Mastering Stop Motion Through Machine Learning

Stop motion animation is notoriously difficult to pull off well, in large part because it’s a mind-numbingly slow process. Each frame in the final video is a separate photograph, and for each one of those, the characters and props need to be moved the appropriate amount so that the final result looks smooth. You don’t even want to know how long Ben Wyatt spent working on Requiem for a Tuesday, though to be fair, it might still get done before the next Avatar.

But [Nick Bild] thinks his latest project might be able to improve on the classic technique with a dash of artificial intelligence provided by a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera and, using a hand pose detection model, waits until there’s no human hand in the frame. Once the coast is clear, it takes a shot and then goes back to waiting for the next hands-free opportunity. With the photographs being taken automatically, you’re free to focus on getting your characters moving around in a convincing way.
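[Nick]’s code is his own, but the general idea is easy to sketch. Something along these lines, using OpenCV for capture and a hand-detection model like MediaPipe (our stand-in for illustration, not necessarily what he used), would watch the feed and only save a frame once the hands have cleared out:

```python
# Illustrative sketch, not [Nick]'s actual code: save a frame only when the
# hand-pose model stops seeing a hand, then wait for hands to reappear
# before arming the next shot.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)
frame_count = 0
hand_was_present = True  # treat startup as "hand just left" so the first clear frame is captured

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    hand_present = results.multi_hand_landmarks is not None
    if hand_was_present and not hand_present:
        cv2.imwrite(f"frame_{frame_count:05d}.png", frame)  # one animation frame
        frame_count += 1
    hand_was_present = hand_present
```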

If it’s still not clicking for you, check out the video below. [Nick] first shows the raw unedited video, which primarily consists of him moving three LEGO figures around, and then the final product produced by his system. All the images of him fiddling with the scene have been automatically trimmed, leaving behind a short animated clip of the characters moving on their own.

Now don’t be fooled, it’s still going to take a while. By our count, it took two solid minutes of moving around Minifigs to produce just a few seconds of animation. So while we can say it’s a quicker pace than with traditional stop motion production, it certainly isn’t fast.

Machine learning isn’t the only modern technology that can simplify stop motion production. We’ve seen a few examples of using 3D printed objects instead of manually-adjusted figures. It still takes a long time to print, and of course it eats up a ton of filament, but the mechanical precision of the printed scenes makes for a very clean final result.

Continue reading “Mastering Stop Motion Through Machine Learning”

Flamethrower weedkiller mounted on a robot arm riding a tank-tracked base

Don’t Sleep On The Lawn, There’s An AI-Powered, Flamethrower-Wielding Robot About

You know how it goes, you’re just hanging out in the yard, there aren’t enough hours in the day, and weeding the lawn is just such a drag. Then an idea just pops into your head. How about we attach a gas-powered flamethrower to a robot arm, drive it around on a tank-tracked robotic base, and have it operate autonomously with an AI brain? Yes, that sounds like a good idea. Let’s do that. And so, [Dave Niewinski] did exactly that with his Ultimate Weed Killing Robot.

And you thought the robot overlords might take a more subtle approach and take over the world one coffee machine at a time? No, straight for the fully-autonomous flamethrower it is then.

This build uses a Kinova Robots Gen 3 six-axis arm, mounted to an Agile-X Robotics Bunker base. Control is via a Connect Tech Rudi-NX box, which contains an Nvidia Jetson Xavier NX edge AI computing engine. Wow, that was a mouthful!

Connectivity from the controller to the base is via CAN bus, but sadly there’s no mention of how the robot arm controller is hooked up. At least this particular model sports an effector-mounted camera system, which can feed straight into the Jetson, simplifying the build somewhat.

To start the software side of things, [Dave] took a video using his mobile phone while walking his lawn. Next, he used Roboflow to annotate image stills containing weeds, which were in turn used to help train a vision AI system. The actual AI training was written in Python using Google Colaboratory, which is itself built around the awesome Jupyter Notebook (see also JupyterLab; if you haven’t tried it yet and you do any data science at all, you’ll kick yourself for not doing so!). Colaboratory wouldn’t be all that useful for this by itself, except that it gives you direct, free GPU access via the cloud, so you can run AI workloads without needing fancy (and currently hard to get) GPU hardware on your desk.
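We don’t know exactly which network [Dave] ended up training, but to give a flavour of the inference end of such a pipeline, here’s a hedged sketch using a stock YOLOv5 model from PyTorch Hub as a stand-in for a custom weed-trained detector:

```python
# Hypothetical sketch of the detection side, not [Dave]'s actual pipeline:
# load a trained object detector and list anything it spots in one frame.
import cv2
import torch

# 'yolov5s' is a stock pretrained model standing in for a custom
# weed-trained network here.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.pandas().xyxy[0]   # one row per detected object
    for _, det in detections.iterrows():
        print(f"{det['name']}: {det['confidence']:.2f} at "
              f"({det['xmin']:.0f}, {det['ymin']:.0f})")
```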

Details of the hardware may be a little sparse, but at least the software required can be found on the WeedBot GitHub. It’s not like most of us will have this exact hardware lying around anyway. For a more complete description of this terrifying contraption, check out the video after the break.

Continue reading “Don’t Sleep On The Lawn, There’s An AI-Powered, Flamethrower-Wielding Robot About”

Espresso maker with added nixie flair

AI Powered Coffee Maker Knows A Bit Too Much About You

People keep warning that Skynet and the great robot uprising are not that far away, what with all this AI and machine-learning malarkey getting so much attention lately. But we think going straight for a terminator robot army is not a very smart approach, not least due to a lack of subtlety. We think it’s a much better bet to take over the world one home appliance at a time, and this AI-powered coffee maker might well be part of that master plan.

PCB stackup with Pi Zero sat atop the driver / PSU PCBs

[Mark Smith] has taken a standard semi-auto espresso maker and jazzed it up a bit, with a sweet bar graph nixie tube the only obvious addition, at least from the front of the unit. Inside, a Raspberry Pi Zero sits atop his own nixie tube hat and associated power supply. The whole assembly is dropped into a 3D printed case and lives snuggled up to the water pump.

The Pi runs a web application written with the excellent Flask framework, plus an additional control application written in Python. This lets the user connect to the machine via Ethernet and see its status. The smarts come in the form of a simple self-grading machine learning algorithm that takes a time series as input (in this case, the times at which you pull your espresso shots) and, after a few weeks of data, can make a reasonable prediction of when you’ll want coffee next. It then automatically heats up in time for your usual brew, and cools back down afterwards to save energy. No more pointless wandering over to see if the machine is hot enough yet, as you can just check the web page from the comfort of your desk.
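[Mark]’s exact algorithm is his own, but the gist of learning “when do I usually brew?” from a log of shot timestamps can be sketched in a few lines of Python (purely illustrative, with made-up thresholds):

```python
# Purely illustrative, not [Mark]'s code: learn which weekday/hour buckets
# see regular espresso shots, and pre-heat a little ahead of them.
from collections import Counter
from datetime import datetime, timedelta

def likely_brew_slots(shot_times, threshold=0.3):
    """Return (weekday, hour) buckets used in at least `threshold` of the
    weeks covered by the shot history."""
    counts = Counter((t.weekday(), t.hour) for t in shot_times)
    weeks = max(1.0, (max(shot_times) - min(shot_times)).days / 7)
    return {slot for slot, n in counts.items() if n / weeks >= threshold}

def should_preheat(now, shot_times, lead=timedelta(minutes=15)):
    """True if a usual brew slot is coming up within the lead time."""
    soon = now + lead
    return (soon.weekday(), soon.hour) in likely_brew_slots(shot_times)

# Example: a history of morning shots means the boiler comes on before 8 AM.
history = [datetime(2021, 9, d, 8, 5) for d in range(1, 15)]
print(should_preheat(datetime(2021, 9, 15, 7, 50), history))
```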

But that’s not all [Mark] has done. He also improved the temperature control of the water boiler, and added an interlock that prevents the machine from producing a shot until the water temperature is just so. Water level is indicated by the glorious bar graph nixie tube, which also serves a few other user indication duties when appropriate. All in all a pretty sweet build, but we do add a word of caution: If your toaster starts making an unreasonable number of offers of toasted teacakes, give it a wide berth.

Spaghetti Detective Users Boiled By Security Gaffe

For readers who might not spend their free time watching spools of PLA slowly unwind, The Spaghetti Detective (TSD) is an open source project that aims to use computer vision and machine learning to identify when a 3D print has failed and resulted in a pile of plastic “spaghetti” on the build plate. Once users have installed the OctoPrint plugin, they need to point it to either a self-hosted server running on a relatively powerful machine, or TSD’s paid cloud service that handles all the AI heavy lifting for a monthly fee.

Unfortunately, 73 of those cloud customers ended up getting a bit more than they bargained for when a configuration flub allowed strangers to take control of their printers. In a frank blog post, TSD founder Kenneth Jiang owns up to the August 19th mistake and explains exactly what happened, who was impacted, and how changes to the server-side code should prevent similar issues going forward.

TSD allows users to remotely manage and monitor their printers.

For the record, it appears no permanent damage was done, and everyone who was potentially impacted by this issue has been notified. There was a fairly narrow window of opportunity for anyone to stumble upon the issue in the first place, meaning any bad actors would have had to be particularly quick on their keyboards to come up with some nefarious plot to sabotage any printers connected to TSD. That said, one user took to Reddit to show off the physical warning their printer spit out: the apparent handiwork of a fellow customer who discovered the glitch on their own.

According to Jiang, the issue stemmed from how TSD associates printers and users. When the server sees multiple connections coming from the same public IP, it’s assumed they’re physically connected to the same local network. This allows the server to link the OctoPrint plugin running on a Raspberry Pi to the user’s phone or computer. But on the night in question, an incorrectly configured load-balancing system stopped passing the source IP addresses to the server. This made TSD believe all of the printers and users who connected during this time period were on the same LAN, allowing anyone to connect with whatever machine they wished.
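We haven’t dug into TSD’s server internals ourselves, but the general pattern behind this kind of failure is familiar to anyone who has run a web app behind a proxy: the application trusts a forwarded header for the real client address, and when the proxy stops sending it, everyone collapses into one “network”. A hedged, Flask-flavoured sketch (not TSD’s actual code) of that pattern:

```python
# Hedged illustration, not TSD's actual code: a web app behind a load
# balancer typically recovers the real client IP from a forwarded header.
# If the proxy stops sending it, every request appears to share one address
# and the per-LAN grouping falls apart.
from flask import Flask, request

app = Flask(__name__)

def client_ip():
    forwarded = request.headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()  # original client, by convention
    return request.remote_addr  # falls back to the proxy's own address

@app.route("/link")
def link_printer():
    # Printers and browsers that report the same value get grouped together,
    # which is why a missing header wiped out the isolation between users.
    return {"grouped_by": client_ip()}
```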

New code pushed to the TSD repository limits how many devices can be associated with a single IP.

The mix-up only lasted about six hours, and so far, only the one user has actually reported their printer being remotely controlled by an outside party. After fixing the load-balancing configuration, the team also pushed an update to the TSD code which puts a cap on how many printers the server will associate with a given IP address. This seems like a reasonable enough precaution, though it’s not immediately obvious how this change would impact users who wish to add multiple printers to their account at the same time, such as in the case of a print farm.
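Again as a purely hypothetical sketch rather than the actual patch, a cap like the one described might look something like this on the server side:

```python
# Hypothetical sketch of the mitigation described above, not the actual
# TSD patch: refuse to link more than a handful of devices per source IP.
from collections import defaultdict

MAX_DEVICES_PER_IP = 10  # made-up limit, purely for illustration

devices_by_ip = defaultdict(set)

def register_device(ip, device_id):
    group = devices_by_ip[ip]
    if device_id not in group and len(group) >= MAX_DEVICES_PER_IP:
        raise RuntimeError("Too many devices behind this IP; linking refused")
    group.add(device_id)
```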

While no doubt an embarrassing misstep for the team at The Spaghetti Detective, we can at least appreciate how swiftly they dealt with the issue and their transparency in bringing the flaw to light. This is also an excellent example of how open source allows the community to independently evaluate the fixes applied by the developer in response to a discovered flaw. Jiang says the team will be launching a full security audit of their own as well, so expect more changes getting pushed to the repository in the near future.

We were impressed with TSD when we first covered it back in 2019, and glad to see the project has flourished since we last checked in. Trust is difficult to gain and easy to lose, but we hope the team’s handling of this issue shows they’re on top of things and willing to do right by their community even if it means getting some egg on their face from time to time.

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement varied from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom that before long companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes which GPT-3 has. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?

Continue reading “GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future”