Making Minty Fresh Music With Markov Chains: The After Eight Step Sequencer

Step sequencers are fantastic instruments, but they can be a little, well, repetitive. At its core, the step sequencer is a pretty simple device: it loops through a series of notes or phrases that are sequentially ordered into steps. The operator can change the steps while the sequencer is looping, but it generally has a repetitive feel, as the musician isn’t likely to erase all of the steps and enter an entirely new set between phrases.

Enter our old friend machine learning. If we introduce a certain variability into each step of the loop, the instrument can help the musician out a bit, making the final product more interesting. Such an instrument is exactly what [Charis Cat] set out to make when she created the After Eight Step Sequencer.

The After Eight is an eight-step sequencer that allows the artist to set each note with a series of potentiometers (which are, of course, housed in an After Eight mint tin). The potentiometers are read by an Arduino, which passes MIDI information to a computer running the popular music-oriented visual programming language Max/MSP. The software uses Markov chains to augment the notes the musician enters, effectively working with the artist to create music. The result is a fantastic piece of music that’s different every time it’s performed. Make sure to check out the video at the end for a great overview of the project (and to hear the After Eight in action, of course)!
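The original patch lives in Max/MSP and we haven’t seen its internals, but the core idea is compact enough to sketch in a few lines of Python. Here, a first-order Markov chain is built from the eight pot-set notes, and each step either follows the learned transitions or occasionally jumps somewhere new. The seed notes and the deviation probability are our own illustrative choices, not values from the project:

```python
import random

# The eight notes the artist dials in on the pots (MIDI note numbers).
# These values are made up for illustration.
seed_sequence = [60, 62, 64, 65, 67, 65, 64, 62]

def build_transitions(notes):
    """Count how often each note is followed by each other note."""
    transitions = {}
    for current, nxt in zip(notes, notes[1:] + notes[:1]):  # wrap around the loop
        transitions.setdefault(current, []).append(nxt)
    return transitions

def next_note(transitions, current, deviation=0.2):
    """Usually follow the learned chain; occasionally jump anywhere."""
    if random.random() < deviation:
        return random.choice(list(transitions))    # inject some surprise
    return random.choice(transitions[current])     # weighted by observed counts

transitions = build_transitions(seed_sequence)
note = seed_sequence[0]
for step in range(16):
    print(f"step {step:2}: MIDI note {note}")
    note = next_note(transitions, note, deviation=0.2)
```

Because the transition probabilities come from choices the artist actually made, the output wanders without ever leaving the character of the original phrase.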

[Charis Cat]’s wonderful creation reminds us of some of the work [Sara Adkins] has done, blending human performance with complex algorithms. It’s exactly the kind of thing we love to see at Hackaday: the fusion of a musician’s artistic intent with the stochastic unpredictability of a machine learning system to produce something unique.

Thanks to [Chris] for the tip!

Continue reading “Making Minty Fresh Music With Markov Chains: The After Eight Step Sequencer”

Thought Control Via Handwriting

Computers haven’t done much for the quality of our already poor handwriting. However, a man paralyzed by an accident can now feed input into a computer by simply thinking about handwriting, thanks to work by Stanford University researchers. Compared to more cumbersome systems based on eye motion or breath, the handwriting technique enables entry at up to 90 characters a minute.

Currently, the feat requires a lab’s worth of equipment, but it could be made practical for everyday use with some additional work and — hopefully — less invasive sensors. In particular, the system relies on two microelectrode arrays implanted in the precentral gyrus of the brain. When the subject thinks about writing, recognizable patterns appear in the collected data. The rest is just math and classification using a neural network.
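The researchers’ actual decoder is considerably more sophisticated, but the shape of the classification problem is easy to sketch in Python with Keras: windows of multichannel electrode data go in, character labels come out. The channel count reflects the two 96-electrode arrays; the window length, class count, and architecture here are our guesses, and the random arrays merely stand in for real recordings:

```python
import numpy as np
import tensorflow as tf

# Two 96-channel arrays -> 192 channels; window length and class count
# are illustrative guesses, not the paper's actual parameters.
CHANNELS, WINDOW, CLASSES = 192, 100, 31   # 26 letters plus some punctuation

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # local temporal features
    tf.keras.layers.GRU(128),                          # summarize the sequence
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in random data; real recordings are available for download.
x = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, CLASSES, size=256)
model.fit(x, y, epochs=1, batch_size=32)
```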

If you want to try your hand at processing this kind of data and don’t have a set of electrodes to implant, you can download nearly eleven hours of data already recorded. The code is out there, too. What we’d really like to see is some easier way to grab the data to start with. That could be a real game-changer.

More traditional input methods using your mouth have been around for a long time. We’ve also looked at work that involves moving your head.

Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it’s a technology that has actually made it into the real world. While these devices often require advanced and expensive equipment to interpret brain waves properly, with the right machine learning system it’s possible to build something like this mind-controlled flamethrower on a much smaller budget. (Video, embedded below.)

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical combining the two technologies. Instead of turning to an EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don’t forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
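We haven’t seen [Nathaniel F]’s code, but the plumbing on the Pi side is easy to imagine. A minimal Python sketch might look like the following, assuming the hacked Mindflex streams comma-separated band-power values over serial and a hypothetical pre-trained Keras model (fire_classifier.h5) outputs the probability that the “fire” thought pattern is present. The port names and feature layout are all assumptions for illustration:

```python
import numpy as np
import serial                  # pyserial
import tensorflow as tf

# Port names, model file, and feature layout are assumptions, not the
# project's actual code.
eeg = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)     # hacked Mindflex
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
model = tf.keras.models.load_model("fire_classifier.h5")  # trained offline

def read_eeg_frame():
    """Parse one comma-separated line of band-power values from the headset."""
    line = eeg.readline().decode(errors="ignore").strip()
    values = [float(v) for v in line.split(",") if v]
    return np.array(values, dtype="float32") if len(values) == 8 else None

while True:
    frame = read_eeg_frame()
    if frame is None:
        continue
    # Single sigmoid output: probability of the "fire" thought pattern.
    p_fire = float(model.predict(frame[np.newaxis, :], verbose=0)[0, 0])
    if p_fire > 0.9:                       # set a high bar before making fire
        arduino.write(b"F")                # Arduino opens the valve and arcs
```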

The build took many hours of training to gather enough data for the neural network, and it works as the proof of concept he was hoping for, though [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now though, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one which converts a prosthetic arm into a mind-controlled music synthesizer.

Continue reading “Mind-Controlled Flamethrower”

Visual Raspberry Pi With Node-RED And TensorFlow

If you prefer to draw boxes instead of writing code, you may have tried IBM’s Node-RED to create logic with drag-and-drop flows. A recent [TensorFlow] video shows an interview between [Jason Mayes] and [Paul Van Eck] about using TensorFlow.js with Node-RED to create machine learning applications for Raspberry Pi visually. You can see the video, below.

The video doesn’t go into much detail since it is only ten minutes long. But it does show how easy it is to do things like identify images using an existing TensorFlow model. There is a more detailed tutorial available, as well as a corresponding video, which you can see below.

Continue reading “Visual Raspberry Pi With Node-RED And TensorFlow”

AI Makes Linux Do What You Mean, Not What You Say

We are always envious of the Star Trek Enterprise computers. You can just sort of ask them a hazy question and they will — usually — figure out what you want. Even the automatic doors seemed to know the difference between someone walking into a turbolift versus someone being thrown into the door during a fight. [River] decided to try his new API keys for the private beta of an AI service to generate Linux commands based on a description. How does it work? Watch the video below and find out.

Some examples work fairly well. In response to “email the Rickroll video to Jeff Bezos,” the system produced a curl command and an e-mail to what we assume is the right place. “Find all files in the current directory bigger than 1 GB” works, too.
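The service in the video is a private beta, so we won’t guess at its exact API, but the general pattern is simple: wrap the user’s description in a prompt and ask a text-completion endpoint for the command. Here’s a hedged Python sketch in which the URL, the request fields, and the response field name are all hypothetical stand-ins:

```python
import requests

API_URL = "https://api.example.com/v1/complete"   # hypothetical endpoint
API_KEY = "sk-..."                                # your private-beta key

PROMPT = """Translate the description into a single Linux shell command.
Description: {task}
Command:"""

def suggest_command(task):
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": PROMPT.format(task=task), "max_tokens": 60},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["completion"].strip()      # field name is a guess

# Always eyeball the suggestion before running it!
print(suggest_command("find all files in the current directory bigger than 1 GB"))
```

Whatever the backend, the one design rule we’d insist on: print the suggestion and let the human run it, rather than piping model output straight into a shell.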

Continue reading “AI Makes Linux Do What You Mean, Not What You Say”

Imaging The Past With Time-Travel Rephotography

Have you ever noticed that people in old photographs look a bit weird? Deep wrinkles, sunken cheeks, and exaggerated blemishes are commonplace in photos taken up to the early 20th century. Surely not everybody looked like this, right? Maybe it was an odd makeup trend, or just a fashionable look back then?

Not quite — it turns out that the culprit here is the film itself. The earliest glass-plate emulsions used in photography were only sensitive to the highest-frequency light, that which fell in the blue to ultraviolet range. Perhaps unsurprisingly, when combined with the fact that humans have red blood, this posed a real problem. While some of the historical figures we see in old photos may have benefited from an improved skincare regimen, the primary source of their haunting visage was that the photographic techniques available at the time were simply incapable of capturing skin properly. This led to the sharp creases and dark lips we’re so used to seeing.

Of course, primitive film isn’t the only thing separating antique photos from the 42-megapixel behemoths that your camera can produce nowadays. Film processing steps had the potential to introduce dust and other blemishes to the image, and over time the prints can fade and age in a variety of ways that depend upon the chemicals they were processed in. When rolled together, all of these factors make it difficult to paint an accurate portrait of some of history’s famous faces. Before you start to worry that you’ll never know just what Abraham Lincoln looked like, you might consider taking a stab at Time-Travel Rephotography.

Amazingly, Time-Travel Rephotography is a technique that actually lives up to how cool its name is. It uses a neural network (specifically, the StyleGAN2 framework) to take an old photo and project it into the space of high-res modern photos the network was trained on. This allows it to perform colorization, skin correction, upscaling, and various noise reduction and filtering operations in a single step, with remarkable results. Make sure you check out the project’s website to see some of the outputs at full resolution.
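The GAN-inversion step at the heart of the technique is conceptually simple: freeze a pretrained generator, then optimize a latent code until the generated image matches the input photo. Here’s a deliberately toy Python sketch of that loop. The stand-in generator, tiny image size, and plain MSE loss are ours; the real project uses a high-resolution StyleGAN2 with perceptual losses and careful face alignment:

```python
import torch
import torch.nn as nn

# Stand-in generator: a tiny MLP that emits a 32x32 RGB "image". It only
# exists so the inversion loop below actually runs end to end.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
)
for p in generator.parameters():
    p.requires_grad_(False)                    # freeze the generator

target = torch.rand(3 * 32 * 32) * 2 - 1       # pretend this is the old photo
z = torch.randn(64, requires_grad=True)        # latent code to optimize
opt = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    opt.zero_grad()
    image = generator(z)
    loss = torch.mean((image - target) ** 2)   # the real work uses perceptual losses
    loss.backward()
    opt.step()

print(f"final reconstruction error: {loss.item():.4f}")
```

Because only latent codes the generator has learned to decode are reachable, the optimized result lands on the manifold of clean modern photos, which is what drives the restoration.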

We’ve seen AI upscaling before, but this project takes it to the next level by completely restoring antique photographs. We’re left wondering what techniques will be available 100 years from now to restore JPEGs stored way back in 2021, bringing them up to “modern” viewing standards.

Thanks to [Gus] for the tip!

Continue reading “Imaging The Past With Time-Travel Rephotography”

Is That A Cat Or Not?

Pandemic-induced boredom takes people in many different ways. Some of us go for long walks, others learn to speak a new language, while yet more unleash their inner gaming streamer. [Niklas Fauth] has taken a break from his other projects by creating a very special project indeed. A cat detector! No longer shall you ponder whether or not the object or creature before you is a cat; now that existential question can be answered by a gadget.

This is more of a novelty project than a piece of special new tech: he’s taken what looks to be the shell from a cheap infra-red thermometer and put a Raspberry Pi Zero with a camera and a small screen into it. This in turn runs TensorFlow with the COCO-SSD object identification model. The device has a trigger, and when it’s pressed to photograph an image, the model determines whether the subject is a cat or not. The video posted to Twitter is below the break, and we can’t dispute its usefulness in the feline-spotting department.
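We haven’t seen [Niklas]’s exact code, but the detection loop for this sort of build is short. A hedged Python sketch using the TensorFlow Lite runtime with a generic COCO-trained SSD detector might look like this; the model and label file names are our guesses, and the output ordering follows the typical SSD layout of boxes, classes, scores:

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

# Model and label file names are assumptions; any SSD-style detector trained
# on COCO (which includes a "cat" class) will do.
interpreter = Interpreter(model_path="ssd_coco.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
labels = [line.strip() for line in open("coco_labels.txt")]

def is_cat(path, threshold=0.5):
    width, height = inp["shape"][2], inp["shape"][1]     # e.g. 300x300
    image = np.array(Image.open(path).resize((width, height)), dtype=np.uint8)
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...])
    interpreter.invoke()
    classes = interpreter.get_tensor(outs[1]["index"])[0]  # typical SSD layout
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    return any(labels[int(c)] == "cat" and s > threshold
               for c, s in zip(classes, scores))

print("cat!" if is_cat("snapshot.jpg") else "not a cat")
```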

[Niklas] has featured here more than once in the past. This is not his only pandemic project, either.

Continue reading “Is That A Cat Or Not?”