Voice Without Sound

Voice recognition is becoming more and more common, but anyone who’s ever used a smart device can attest that they aren’t exactly foolproof. They can activate seemingly at random, fail to wake when called, or, most annoyingly, completely misunderstand your commands. Thankfully, researchers from the University of Tokyo are looking to improve the performance of devices like these by attempting to use them without any spoken voice at all.

The project is called SottoVoce and uses an ultrasound imaging probe placed under the user’s jaw to detect internal movements in the speaker’s larynx. The imaging generated from the probe is fed into a series of neural networks, trained with hundreds of speech patterns from the researchers themselves. The neural networks then piece together the likely sounds being made and generate an audio waveform which is played to an unmodified Alexa device. Obviously a few improvements would need to be made to the ultrasonic imaging device to make this usable in real-world situations, but it is interesting from a research perspective nonetheless.
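
For the curious, the core idea is easy to sketch: a network that maps a sequence of ultrasound frames to audio features (say, mel-spectrogram frames) that a vocoder could then render as a waveform. To be clear, the layer sizes, the 64×64 input, and the 80-bin mel output below are illustrative assumptions, not the architecture from the paper.

```python
# Loose sketch of the SottoVoce idea: ultrasound frames of the larynx in,
# audio features out. NOT the paper's architecture -- shapes are assumptions.
import torch
import torch.nn as nn

class UltrasoundToMel(nn.Module):
    def __init__(self, n_mels: int = 80):
        super().__init__()
        # Convolutional encoder for a single 64x64 ultrasound frame
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),                          # -> 64 * 8 * 8 features
        )
        self.head = nn.Linear(64 * 8 * 8, n_mels)  # one mel frame per image

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, 64, 64) -> (batch, time, n_mels)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.reshape(b * t, 1, 64, 64))
        return self.head(feats).reshape(b, t, -1)

model = UltrasoundToMel()
probe_clip = torch.randn(1, 20, 1, 64, 64)  # 20 frames from the probe
mel = model(probe_clip)                     # hand this to a vocoder for audio
print(mel.shape)                            # torch.Size([1, 20, 80])
```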

The research paper with all the details is also available (PDF warning). It’s an intriguing approach to improving the performance of voice recognition, especially in situations where the voice may be muffled, non-existent, or buried under a lot of background noise. Machine learning like this seems to be one of the more powerful tools for improving speech recognition, as we saw with this robot that can walk across town and order food for you using voice commands only.

Continue reading “Voice Without Sound”

Let Machine Learning Code An Infinite Variety Of Pong Games

In a very real way, Pong started the video game revolution. You wouldn’t have thought so at the time, with its simple gameplay, rudimentary controls, some very low-end sounds, and a cannibalized TV for a display, but the legendarily stuffed coinboxes tell the tale. Fast forward 50 years or so, and Pong has been largely reduced to a programmer’s exercise to see how few lines of code can stand in for what [Ted Dabney] and [Allan Alcorn] accomplished. But now even that’s too much, as OpenAI Codex can generate a playable Pong from just a few prompts, at least most of the time. Continue reading “Let Machine Learning Code An Infinite Variety Of Pong Games”
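
The machine side of that interaction boils down to a single completion call. Here’s a minimal sketch against the (since-retired) OpenAI completions API; the code-davinci-002 model name and the sampling settings are assumptions about a typical Codex setup of the era, not the exact prompts from the demo.

```python
# Sketch of asking Codex for Pong via OpenAI's legacy completions API.
# Model name and settings are assumptions, not the demo's exact setup.
import openai

openai.api_key = "sk-..."  # your API key

prompt = (
    "# Python 3, pygame\n"
    "# Write a complete, playable Pong game: two paddles, a ball,\n"
    "# scoring, and a 60 FPS game loop.\n"
)

response = openai.Completion.create(
    model="code-davinci-002",   # the Codex model of the era
    prompt=prompt,
    max_tokens=1500,
    temperature=0.2,            # low temperature: we want code that runs
)

with open("pong.py", "w") as f:
    f.write(response["choices"][0]["text"])
```

Whether the result actually plays on the first try is, as the demo shows, another matter.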

Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns

The first lesson a new parent learns is that the second you think you’ve finally figured out your kid’s patterns — sleeping, eating, pooping, crying endlessly in the middle of the night for no apparent reason, whatever — the kid will change it. It’s the Uncertainty Principle of kids — the mere act of observing the pattern changes it, and you’re back at square one.

As immutable as this rule seems, [Caleb Olson] is convinced he can work around it with this over-engineered sleep pattern tracker. You may recall [Caleb]’s earlier attempts to automate certain aspects of parenthood, like this machine learning system to predict when baby is hungry, and yes, he’s also strangely obsessed with automating his dog’s bathroom habits. All that preliminary work put [Caleb] in a good position to analyze his son’s sleep patterns, which he did with the feed from their baby monitor camera and Google’s MediaPipe library.

This lets him measure how far the baby’s eyes are open, calculate a wakefulness probability, and record the time he wakes up. This worked great right up until the wave function collapsed and the baby suddenly started sleeping on his side, requiring the addition of a general motion detection function to compensate for the missing eyeball data. Check out the video below for more details, although the less said about the screaming, demon-possessed owl, the better.
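
If you want to play along at home, the eyes-open measurement is straightforward with MediaPipe’s Face Mesh, the same library [Caleb] used. Here’s a minimal sketch; the landmark indices (33/133/159/145 for one eye) and the wakefulness threshold are common choices, not necessarily his.

```python
# Sketch of eye-openness detection with MediaPipe Face Mesh.
# Landmark indices and the 0.2 threshold are assumptions, not [Caleb]'s.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

def eye_openness(landmarks) -> float:
    # Ratio of lid gap to eye width: near 0 when closed, ~0.3+ when open
    top, bottom = landmarks[159], landmarks[145]
    left, right = landmarks[33], landmarks[133]
    gap = abs(top.y - bottom.y)
    width = abs(left.x - right.x)
    return gap / width if width else 0.0

cap = cv2.VideoCapture(0)  # stand-in for the baby monitor feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        ratio = eye_openness(results.multi_face_landmarks[0].landmark)
        state = "awake" if ratio > 0.2 else "asleep"
        print(f"eye openness {ratio:.2f} -> {state}")
```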

The data [Caleb] has collected has helped him and his wife understand the little fellow’s sleep needs and fine-tune his cycles. There’s a web app, of course, and a really nice graphical representation of total time asleep and awake. No word on naps not taken in view of the camera, though — naps in the car are an absolute godsend for many parents. We suppose that could be curated manually, but we wouldn’t be surprised if [Caleb] has a plan to cover that too.

Continue reading “Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns”

Detecting Machine-Generated Content: An Easier Task For Machine Or Human?

In today’s world we are surrounded by written information, which we generally assume to have been written by other humans. Whether this is in the form of books, blogs, news articles, forum posts, feedback on a product page, or the discussions on social media and in comment sections, the assumption is that the text we’re reading was written by another person. However, over the years this assumption has become ever more likely to be false, most recently due to large language models (LLMs) such as GPT-2 and GPT-3, which can churn out plausible paragraphs on just about any topic when requested.

This raises the question of whether we are about to reach a point where we can no longer be reasonably certain that an online comment, a news article, or even entire books and film scripts weren’t churned out by an algorithm. Perhaps an online chat with a sizzling new match will turn out to be just you getting it on with an unfeeling collection of code, trained and tweaked for maximum customer engagement. (Editor’s note: no, we’re not playing that game here.)

As such machine-generated content and interactions begin to play an ever bigger role, two questions arise: how can you detect such generated content, and does it matter that the content was generated by an algorithm instead of by a human being?
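
On the detection side, one classic heuristic is that LLM output tends to look “too predictable” to another language model. Here’s a minimal sketch of perplexity scoring with GPT-2 and Hugging Face’s transformers library; the threshold is purely illustrative, and this is one possible approach rather than a method the article endorses.

```python
# Perplexity-based detection sketch: suspiciously low perplexity under a
# language model hints at machine generation. Threshold is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token cross-entropy
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
print(f"perplexity: {ppl:.1f} -> {'suspect' if ppl < 20 else 'probably human'}")
```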

Continue reading “Detecting Machine-Generated Content: An Easier Task For Machine Or Human?”

A montage of a "death stranding" lamp in two different color modes, purple on the left and blue on the right

Illuminate Your Benched Things With This Death Stranding Lamp

[Pinkman] created a smart RGB table lamp based on the “Odradek device” robot arm from the video game “Death Stranding”.

The lamp is built around a XIAO BLE nRF52840 Sense, which brings Bluetooth support, a microphone, and TinyML capability. The nRF52840 pushes data to five WS2812 strips, one for each “blade” of the lamp, and also connects to a TTP223 capacitive touch controller for touch input. The TinyML side allows for custom keyword training, so the lamp can be switched on with a voice command ([Pinkman] uses “Bling Bling”). [Pinkman] has also provided Bluetooth control, allowing the color and pattern to be changed from a phone application.
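
To give a rough idea of the I/O involved, here’s a minimal CircuitPython sketch covering the five blade strips and the TTP223 touch toggle. The pin assignments, strip length, and color are assumptions, and the “Bling Bling” keyword spotting lives in a separate TinyML model that isn’t reproduced here.

```python
# CircuitPython sketch of the lamp's I/O: five WS2812 "blade" strips plus
# a TTP223 touch toggle. Pins, strip length, and color are assumptions.
import time
import board
import digitalio
import neopixel

PIXELS_PER_BLADE = 8  # illustrative
blades = [
    neopixel.NeoPixel(pin, PIXELS_PER_BLADE, auto_write=False)
    for pin in (board.D0, board.D1, board.D2, board.D3, board.D6)
]

touch = digitalio.DigitalInOut(board.D7)  # TTP223 output pin (assumed)
touch.direction = digitalio.Direction.INPUT

lamp_on = False
was_touched = False
while True:
    touched = touch.value
    if touched and not was_touched:       # rising edge toggles the lamp
        lamp_on = not lamp_on
    was_touched = touched
    color = (80, 0, 120) if lamp_on else (0, 0, 0)  # Odradek-ish purple
    for strip in blades:
        strip.fill(color)
        strip.show()
    time.sleep(0.02)
```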

The lamp is 3D printed, with the build based on [Nils Kal]’s Printables files. Each of the five blades has a white 3D-printed diffuser plate to smooth out hot spots from the LED strip. The lamp is fully adjustable, and has cavities, channels, and access points for “invisible” wiring. [Pinkman] has also upgraded the original 3D files to allow for the three wires needed to drive the WS2812s, instead of the two wires that [Nils] had allotted in the original.

[Pinkman] has all of the code, STL files and training data available for download, so be sure to check it out. Lamps are a favorite of ours and we’ve featured our fair share, including 3D printed Shoji lamps and RGB wall lamps.

Video after the break!

Continue reading “Illuminate Your Benched Things With This Death Stranding Lamp”

Smart Bike Suspension Tunes Your Ride On The Fly

Riding a bike is a pretty simple affair, but like with many things, technology marches on and adds complications. Where once all you had to worry about was pumping the cranks and shifting the gears, now a lot of bikes have front suspensions that need to be adjusted for different riding conditions. Great for efficiency and ride comfort, but a little tough to accomplish while you’re underway.

Luckily, there’s a solution to that, in the form of this active suspension system by [Jallson S]. The active bit is a servo, which is attached to the adjustment valve on the top of the front fork of the bike. The servo moves the valve between fully locked, for smooth surfaces, and wide open, for rough terrain. There’s also a stop in between, which partially softens the suspension for moderate terrain. The 9-gram hobby servo rotates the valve with the help of a 3D printed gear train.

But that’s not all. Rather than just letting the rider control the ride stiffness from a handlebar-mounted switch, [Jallson S] added a little intelligence into the mix. Ride data from the accelerometer on an Arduino Nano 33 BLE Sense was captured on a smartphone via Arduino Science Journal. The data was processed through Edge Impulse Studio to create models for five different ride surfaces and rider styles. This allows the stiffness to be optimized for current ride conditions — check it out in action in the video below.
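
The deployed model comes out of Edge Impulse as a C++ library, but the control idea is easy to sketch in Python. The stand-in below classifies a window of accelerometer data and maps the result onto the three valve positions; the RMS feature, thresholds, and servo angles are placeholders for the trained model, and we’ve collapsed the five surface classes down to three to match the valve stops.

```python
# Standalone sketch of the control loop's logic: accelerometer window in,
# valve servo angle out. The RMS feature and thresholds stand in for the
# real Edge Impulse model; angles and classes are assumptions.
import numpy as np

TERRAIN_ANGLE = {      # servo angle for the fork's adjustment valve
    "smooth":   0,     # fully locked
    "moderate": 45,    # the stop in between
    "rough":    90,    # wide open
}

def classify(window: np.ndarray) -> str:
    # Crude stand-in for the trained classifier: vibration energy thresholds
    rms = float(np.sqrt(np.mean(window ** 2)))
    if rms < 0.3:
        return "smooth"
    return "moderate" if rms < 1.0 else "rough"

# Two seconds of fake 100 Hz accelerometer data, then pick a servo angle
window = np.random.normal(0.0, 0.8, size=200)
terrain = classify(window)
print(f"{terrain} terrain -> servo to {TERRAIN_ANGLE[terrain]} degrees")
```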

[Jallson S] is quick to point out that this is a prototype, and that niceties like weatherproofing still have to be addressed. But it seems like a solid start — now let’s see it teamed up with an Arduino shifter.

Continue reading “Smart Bike Suspension Tunes Your Ride On The Fly”

Giving An Old Typewriter A Mind Of Its Own With GPT-3

There was an all-too-brief period in history where typewriters went from clunky, purely mechanical beasts to streamlined, portable electromechanical devices. But when the 80s came around and the PC revolution started, the typewriting was on the wall for these machines, and by the 90s everyone had a PC, a printer, and Microsoft Word. And thus the little daisy-wheel typewriters began to populate thrift shops all over the world.

That’s fine with us, because it gave [Arvind Sanjeev] a chance to build “Ghostwriter”, an AI-powered automatic typewriter. The donor machine was a clapped-out Brother electronic typewriter, which needed a bit of TLC to bring it back to working condition. From there, [Arvind] worked out the keyboard matrix and programmed an Arduino to drive the typewriter, both reading keystrokes and typing output. A Raspberry Pi running the OpenAI Python API for GPT-3 talks to the Arduino over serial, which basically means you can enter a GPT writing prompt with the keyboard and have the machine spit out a dead-tree version of the results.
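
The Pi side of that serial link can be remarkably small. Here’s a sketch of the bridge using the (now-legacy) OpenAI Python library and pyserial: read a prompt from the Arduino, fetch a GPT-3 completion, and send the text back to be typed. The serial port, baud rate, and text-davinci-003 model name are assumptions, not details from [Arvind]’s build.

```python
# Sketch of the Pi-to-Arduino bridge: prompt in over serial, GPT-3
# completion out. Port, baud rate, and model name are assumptions.
import openai
import serial

openai.api_key = "sk-..."
port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

while True:
    prompt = port.readline().decode("utf-8", errors="ignore").strip()
    if not prompt:
        continue
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.9,    # where the "creativity" pot would map
        max_tokens=200,     # ...and the "length" pot
    )
    text = response["choices"][0]["text"].strip()
    port.write((text + "\n").encode("utf-8"))  # Arduino types this out
```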

To mix things up a bit, [Arvind] added a pair of pots to control the creativity and length of the response, plus an OLED screen that seems to exist only to show some cute animations, which we don’t hate. We also don’t hate the new paint job the typewriter got, but the jury is still out on the “poetry” that it typed up. Eye of the beholder, we suppose.

Whatever you think of GPT’s capabilities, this is still a neat build and a nice reuse of otherwise dead-end electronics. Need a bit more help building natural language AI into your next project? Our own [Donald Papp] will get you up to speed on that.

Continue reading “Giving An Old Typewriter A Mind Of Its Own With GPT-3”