
AI Generating Paintings Off To A Flying Art

The philosophical question of “What is art?” has an ethereal, transient quality to it. A definition seems to slip away as you get close to an answer. Embracing that quality, [Max Fischer] has created an AI-powered picture frame that paints a new piece of art at the push of a button. When the button below the screen is pushed, a new image is generated and the old one is lost forever, which, in a way, makes the frame itself a piece of art.

What really makes this project stand out is the sheer quality of the documentation in the GitHub repo. The instructions are incredibly detailed: everything from setting up the Jetson to building the control box out of half-inch MDF (12 mm for the sane part of the world) is laid out with copious pictures. Despite how easy it would have been to generate images ahead of time, [Max] took the hard, Hackaday-approved route and runs all inference locally and in real time. To handle the processing requirements, he used an NVIDIA Jetson Xavier NX single-board computer running StyleGAN, trained on high-resolution abstract art; a new image is generated whenever the button below the screen is pushed. To prevent screen burn-in, a PIR sensor turns the screen off when no one is around.
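Since the generation code isn't in the repo (more on that below), here is a minimal, hypothetical Python sketch of the button-to-new-image loop on a Jetson. This is not [Max]'s code: it assumes a trained generator saved in the stylegan2-ada-pytorch snapshot format, uses `feh` as a stand-in fullscreen viewer, and leaves the PIR screen-blanking out for brevity. Pin numbers and file paths are placeholders.

```python
# Hypothetical sketch, not [Max]'s code: press a GPIO button, get a new StyleGAN image.
import pickle
import subprocess
import torch
from PIL import Image
import Jetson.GPIO as GPIO

BUTTON_PIN = 18                        # placeholder BOARD pin; external pull resistor assumed
MODEL_PKL = "network-snapshot.pkl"     # trained generator; stylegan2-ada-pytorch format assumed

device = torch.device("cuda")
with open(MODEL_PKL, "rb") as f:
    G = pickle.load(f)["G_ema"].to(device).eval()   # assumes the pickle holds a 'G_ema' generator

GPIO.setmode(GPIO.BOARD)
GPIO.setup(BUTTON_PIN, GPIO.IN)

viewer = None

def generate_and_show():
    """Sample a random latent, render one image, and put it on screen."""
    global viewer
    z = torch.randn(1, G.z_dim, device=device)       # random latent vector
    with torch.no_grad():
        img = G(z, None)                             # NCHW float tensor, roughly [-1, 1]
    img = (img.clamp(-1, 1) + 1) * 127.5             # rescale to 0..255
    arr = img[0].permute(1, 2, 0).contiguous().byte().cpu().numpy()
    Image.fromarray(arr, "RGB").save("/tmp/current.png")
    if viewer:                                       # replace the previous fullscreen viewer
        viewer.terminate()
    viewer = subprocess.Popen(["feh", "--fullscreen", "/tmp/current.png"])

try:
    while True:
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.RISING)  # block until the button is pressed
        generate_and_show()
finally:
    GPIO.cleanup()
```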

Here at Hackaday, we’ve seen several projects putting old laptop screens or monitors into a nice wooden case and mounting them to the wall. Since 32″ laptops are rather hard to find, [Max] took a different approach and picked up a 32″ Samsung Frame for relatively cheap.

For all its detail, [Max] did leave one thing out of the readme: the AI model that generates the art. [Max] hints that he wants others to create their own picture frames, but with their own art generation. So what are you waiting for? Go make some art.

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement varied from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom as before long companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes that GPT-3 has. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?

Continue reading “GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future”

Ostrich Robot Machine-Learns Itself To 5K

Ever since humanity first grasped the idea of a robot, we’ve wanted to imagine them in walking humanoid form. But making a robot walk like a human is not an easy task, and even the best of them end up with the somewhat shuffling gait of a Honda Asimo rather than the graceful poise of a ballerina. Only in recent years have walking robots appeared to come of age, and then not by mimicking the human gait but something more akin to a bird’s.

We’ve seen it in the Boston Dynamics models, and also now in a self-balancing two-legged robot developed at Oregon State University that has demonstrated its abilities by completing an unaided 5 km run, having used machine learning to teach itself to run from scratch. It’s believed to be the first time a robot has achieved such a feat without first being programmed for the specific task.

The university’s PR piece envisages a time in which walking robots of this type have become commonplace, and when humans interact with them on a daily basis. We can certainly see that they could perform a huge number of autonomous outdoor tasks that perhaps a wheeled robot might find to be difficult, so maybe they have a bright future. Decide for yourself, after watching the video below the break.

Continue reading “Ostrich Robot Machine-Learns Itself To 5K”

Smart Camera Based On Google Coral

As machine learning and artificial intelligence become more widespread, so does the number of platforms available for anyone looking to experiment with the technology. Much like the single-board computer revolution of the last ten years, we’re currently seeing a similar revolution in the number of platforms available for machine learning. One of those is Google Coral, a set of hardware specifically designed to take advantage of this new technology. It doesn’t support every host out of the box, though, so [Ricardo] set out to get one working with a Raspberry Pi Zero in this smart camera build based around Google Coral.

The project uses a Google Coral Edge TPU in its USB Accelerator form as the basis for the machine learning. A complete image for the Pi Zero is available which sets most of the system up right away, including headless operation, and comes with a host of machine learning software such as OpenCV and pytesseract. By pairing a camera with the Edge TPU and the Raspberry Pi, [Ricardo] demonstrates many of its machine learning capabilities with several example projects, such as an automatic license plate detector and a mode which can recognize whether or not a face mask is being worn, and even how correctly it is being worn.
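To give a feel for what inference on the Coral looks like from Python, here is a generic sketch (not [Ricardo]'s code) that runs a detection model on one captured frame using the pycoral library. The model filename and camera index are placeholders.

```python
# Generic sketch of Edge TPU inference on a camera frame, not [Ricardo]'s code.
import cv2
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

MODEL = "ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"  # example Edge TPU model

interpreter = make_interpreter(MODEL)   # opens the model on the USB-attached Edge TPU
interpreter.allocate_tensors()

cap = cv2.VideoCapture(0)               # camera device index is an assumption
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, common.input_size(interpreter))
    common.set_input(interpreter, resized)
    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.5):
        print(obj.id, obj.score, obj.bbox)   # class id, confidence, bounding box
cap.release()
```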

For those who want to get into machine learning and artificial intelligence, this is a great introductory project since the cost of entry is so low using these pieces of hardware. All of the project code and examples are available on [Ricardo]’s GitHub page too. We could even imagine his license plate recognition software being used to augment this license plate reader, which uses a much more powerful camera.

Teaching A Machine To Be Worse At A Video Game Than You Are

Is it really cheating if the aimbot you’ve built plays the game worse than you do?

We vote no, and while we take a dim view of cheating in general, there are still some interesting hacks in this AI-powered bot for Valorant. This is a team-based first-person shooter with a lot of action and a Counter-Strike vibe. As [River] points out, most cheat-bots have direct access to the memory of the computer which is playing the game, which gives them an unfair advantage over human players, who have to visually process the game field and make their moves in meatspace. To make the Valorant bot more of a challenge, he decided to feed video of the game from one computer to another over an HDMI-to-USB capture device.

The second machine runs a YOLOv5 model trained on two hours of gameplay, enough to identify friend from foe — most of the time. Navigation around the map was done by analyzing the game’s on-screen minimap with OpenCV and doing some rudimentary path-finding. Actually controlling the player on the game machine was particularly hacky; rather than rely on an API to send keyboard sequences, [River] used a wireless mouse dongle on the game machine and a USB transmitter on the second machine.
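The frames-in, detections-out part of that pipeline is easy to sketch. Below is a hypothetical loop (not [River]'s code) that reads frames from a capture device with OpenCV and runs them through a custom-trained YOLOv5 model loaded from Torch Hub; the device index and weights filename are placeholders.

```python
# Hypothetical capture-and-detect loop, not [River]'s code.
import cv2
import torch

# Weights filename is a placeholder for whatever the gameplay-trained model is called.
model = torch.hub.load("ultralytics/yolov5", "custom", path="valorant_best.pt")
cap = cv2.VideoCapture(1)   # the HDMI-to-USB capture card; index is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])   # BGR to RGB before inference
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        label = model.names[int(cls)]   # e.g. friend vs. foe classes
        print(label, round(conf, 2), [int(v) for v in xyxy])
```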

The results are — iffy, to say the least. The system tends to get the player stuck in corners, and doesn’t recognize enemies that pop up at close range. The former is a function of the low-res minimap, while the latter has to do with the training data set — most human players engage enemies at distance, so there’s a dearth of “bad breath range” encounters to train on. Still, we’re impressed that it’s possible to train a machine to play a complex FPS game at all, let alone this well.

Neural Networks Emulate Any Guitar Pedal For $120

It’s a well-established fact that a guitarist’s acumen can be accurately gauged by the size of their pedal board: the more stompboxes, the better the player. Why have one box that can do everything when you can have many that do just a few things?

Jokes aside, the idea of replacing an entire pedal collection with a single box is nothing new. Your standard, old-school stompbox is an analog affair, using a combination of filters and amplifiers to achieve a certain sound. Some modern multi-effects processors use software models of older pedals to replicate their sound. These digital pedals have been around since the 90s, but none have been quite like the NeuralPi project. Just released by [GuitarML], the NeuralPi takes about $120 of hardware (including — you guessed it — a Raspberry Pi) and transforms it into the perfect pedal.

The key here, of course, is neural networks. The LSTM at the core of NeuralPi can be trained on any pedal you’ve got lying around to accurately reproduce its sound, and it can even do so with incredibly low latency thanks to Elk Audio OS (which even powers Matt Bellamy’s synth guitar, as used in Muse‘s Simulation Theory World Tour). The trained model ends up packaged as a VST3 plugin, a popular format for audio effects.
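The broad idea is straightforward to sketch, even if the real training pipeline has more to it. Here is a toy PyTorch sketch (not [GuitarML]'s actual model or code) of an LSTM that learns to map a dry guitar signal to a pedal's wet output; the layer sizes and the stand-in data are invented for illustration.

```python
# Toy sketch of LSTM-based pedal modeling, not [GuitarML]'s code.
import torch
import torch.nn as nn

class PedalLSTM(nn.Module):
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, samples, 1) dry signal; returns the predicted wet signal
        y, state = self.lstm(x, state)
        return self.out(y), state

model = PedalLSTM()
dry = torch.randn(1, 4410, 1)          # 0.1 s of audio at 44.1 kHz, stand-in data
wet_target = torch.tanh(2.0 * dry)     # pretend "pedal output" for illustration only
wet_pred, _ = model(dry)
loss = nn.functional.mse_loss(wet_pred, wet_target)
loss.backward()                        # one step of the usual training loop
```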

This isn’t the first time we’ve seen some seriously cool stuff from [GuitarML], and it also hearkens back a bit to some sweet pedal simulation in LTSpice we saw last year. We can’t wait to see this project continue to develop — over time, it would be awesome to see a slick UI, or maybe somebody will design a cool enclosure with some knobs and an honest-to-god pedal for user input!

Thanks to [Mish] for the tip!

Continue reading “Neural Networks Emulate Any Guitar Pedal For $120”

Deep Learning Enables Intuitive Prosthetic Control

Prosthetic limbs have been slow to evolve from simple motionless replicas of human body parts into moving, active devices. A major reason is that controlling the many joints of a prosthetic is no easy task. However, researchers have worked to simplify this task by capturing nerve signals and letting deep learning routines figure out the rest.

The prosthetic arm under test actually carries an NVIDIA Jetson Nano onboard to run the AI nerve signal decoder algorithm.

Reported in a preprint paper, researchers used implanted electrodes to capture signals from the median and ulnar nerves in the forearm of Shawn Findley, who had lost a hand in a machine shop accident 17 years prior. An AI decoder was then trained to decipher signals from the electrodes using an NVIDIA Titan X GPU.

With this done, the decoder model could then be run on a significantly more lightweight system consisting of an NVIDIA Jetson Nano, which is small enough to mount on a prosthetic itself. This allowed Findley to control a prosthetic hand by thought, without needing to be attached to any external equipment. The system also allowed for intuitive control of Far Cry 5, which sounds like a fun time as well.
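That train-on-a-workstation, run-on-a-Jetson split is a common pattern, and a hedged sketch of what it can look like in PyTorch appears below. The decoder architecture and feature sizes here are invented stand-ins, not the researchers' actual model.

```python
# Invented stand-in for the nerve-signal decoder, illustrating the
# train-on-a-workstation, deploy-on-a-Jetson pattern described above.
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # 64 nerve-signal features per window (assumption)
    nn.Linear(128, 5),               # e.g. five grip/finger outputs (assumption)
)

# ... training on the workstation GPU would happen here ...

example = torch.randn(1, 64)                   # one window of features
scripted = torch.jit.trace(decoder, example)   # freeze the model for deployment
scripted.save("decoder_jetson.pt")             # copy this file to the Jetson Nano

# On the Nano, the exact same file is loaded and run on its much smaller GPU:
#   model = torch.jit.load("decoder_jetson.pt").cuda().eval()
```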

The research is exciting, and yet another step towards full-function prosthetics becoming a reality. The key to the technology is that models can be trained on powerful hardware, but run on much lower-end single-board computers, avoiding the need for prosthetic users to carry around bulky hardware to make the nerve interface work. If it can be combined with a non-invasive nerve interface, expect this technology to explode in use around the world.

[Thanks to Brian Caulfield for the tip!]