Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria

Rapidly analyzing samples for the presence of bacteria and similar organisms is generally a time-intensive process, often requiring that a cell culture be grown first. In Nano Letters, Fareeha Safir and colleagues propose a method that combines an acoustic droplet printer with Raman spectroscopy. The main advantage of this method is its high throughput, which could make analysis of samples at sewage plants, hospitals, and laboratories significantly faster.

Raman spectroscopy works on the principle of Raman scattering: the inelastic scattering of photons by matter, which leaves a distinct pattern in the scattered light. By starting with a pure light source (that is, a laser), the laser light can be filtered out and the relatively weak Raman signal captured. The captured spectrum can then be analyzed and matched against the spectra of known pathogens.
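
To get a feel for that last matching step, here is a minimal sketch that compares a measured spectrum against a small reference library using cosine similarity. The library and spectra are made-up placeholders, and a real pipeline would add baseline correction and noise handling.

```python
import numpy as np

def identify(measured, library):
    """Match a measured Raman spectrum against reference spectra by
    cosine similarity; all spectra share the same wavenumber binning."""
    best_name, best_score = None, -1.0
    m = measured / np.linalg.norm(measured)
    for name, reference in library.items():
        r = reference / np.linalg.norm(reference)
        score = float(np.dot(m, r))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical reference library: intensity per wavenumber bin, per species
library = {
    "E. coli": np.random.rand(1024),
    "S. aureus": np.random.rand(1024),
}
print(identify(np.random.rand(1024), library))
```

Continue reading “Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria”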

Modifying Artwork With Glaze To Interfere With Art Generating Algorithms

With the rise of machine-generated art, a major discussion has begun about the ethics of using existing, human-made art to train these art models. Their defenders will often claim that the original art cannot be reproduced by the generator, but this is belied by the fact that one possible query to these generators is to produce art in the style of a specific artist. This is where feature extraction comes into play, and where the Glaze tool enters the picture as a potential countermeasure.

Developed by researchers at the University of Chicago, the theory behind the tool is covered in their preprint paper. The essential concept is that an artist picks a target ‘cloak style’, from which Glaze calculates specific perturbations that are added to the original image. These perturbations are not easily detected by the human eye, but will be picked up by the feature extraction algorithms of current machine-generated art models.
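
In spirit this is an adversarial perturbation: optimize a small, budgeted change to the image so that its features, as seen by some extractor, drift toward the target style. The toy PyTorch sketch below uses an ImageNet ResNet as a stand-in extractor and a simple pixel budget; Glaze’s actual extractor, optimization, and perceptual constraints differ.

```python
import torch
import torchvision.models as models

# Stand-in feature extractor; Glaze targets the extractors used by
# text-to-image models, not an ImageNet classifier.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()
extractor.eval()

def cloak(image, target_style, budget=0.03, steps=100, lr=0.01):
    """Find a small perturbation pushing the image's features toward
    those of a target-style image, keeping |delta| <= budget."""
    with torch.no_grad():
        target_feat = extractor(target_style)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change visually subtle
    return (image + delta).clamp(0, 1).detach()

# image and style_image are 1x3xHxW tensors in [0, 1]
image = torch.rand(1, 3, 224, 224)
style_image = torch.rand(1, 3, 224, 224)
cloaked = cloak(image, style_image)
```

Continue reading “Modifying Artwork With Glaze To Interfere With Art Generating Algorithms”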

Voice Without Sound

Voice recognition is becoming more and more common, but anyone who’s ever used one of these smart devices can attest that they aren’t exactly fool-proof. They can activate seemingly at random, fail to activate when called, or, most annoyingly, completely misunderstand voice commands. Thankfully, researchers from the University of Tokyo are looking to improve the performance of devices like these by attempting to use them without any spoken voice at all.

The project is called SottoVoce and uses an ultrasound imaging probe placed under the user’s jaw to detect internal movements in the speaker’s larynx. The images generated by the probe are fed into a series of neural networks trained on hundreds of speech patterns from the researchers themselves. The neural networks then piece together the likely sounds being made and generate an audio waveform, which is played to an unmodified Alexa device. Obviously a few improvements would need to be made to the ultrasonic imaging device to make this usable in real-world situations, but it is interesting from a research perspective nonetheless.
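
As a rough illustration of the shape of such a pipeline (ultrasound frames in, speech audio out), here is a toy PyTorch sketch; the architecture and layer sizes are our stand-ins, not the actual SottoVoce networks.

```python
import torch
import torch.nn as nn

class UltrasoundToSpectrogram(nn.Module):
    """Toy stand-in for an image-to-speech network: a small CNN encodes
    each ultrasound frame, and a linear head predicts one
    mel-spectrogram frame per image."""
    def __init__(self, n_mels=80):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_mels)

    def forward(self, frames):        # frames: (batch, 1, H, W)
        return self.head(self.encoder(frames))

model = UltrasoundToSpectrogram()
frames = torch.rand(100, 1, 128, 128)  # 100 ultrasound frames
mel = model(frames)                    # (100, 80) spectrogram frames
# A vocoder (Griffin-Lim or a neural one) would then turn the
# spectrogram into the waveform played at the Alexa device.
```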

The research paper with all the details is also available (PDF warning). It’s an intriguing approach to improving voice input, especially in situations where the voice may be muffled, non-existent, or buried in background noise. Machine learning like this seems to be one of the more powerful tools for improving speech recognition, as we saw with this robot that can walk across town and order food for you using voice commands only.

Continue reading “Voice Without Sound”

Let Machine Learning Code An Infinite Variety Of Pong Games

In a very real way, Pong started the video game revolution. You wouldn’t have thought so at the time, with its simple gameplay, rudimentary controls, some very low-end sounds, and a cannibalized TV for a display, but the legendarily stuffed coinboxes tell the tale. Fast forward 50 years or so, and Pong has been largely reduced to a programmer’s exercise to see how few lines of code can stand in for what [Ted Dabney] and [Allan Alcorn] accomplished. But now even that’s too much, as OpenAI Codex can generate a playable Pong from just a few prompts, at least most of the time.
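
For the curious, driving Codex came down to a plain-language prompt and a completion call. A minimal sketch using the pre-1.0 OpenAI Python library and the since-retired code-davinci-002 Codex model; the prompt and parameters are our own illustration, not the ones used here.

```python
import openai  # pre-1.0 OpenAI Python library, as used in the Codex era

openai.api_key = "sk-..."  # your API key

prompt = (
    "# Write a complete, playable Pong game in Python using pygame.\n"
    "# Two paddles, a ball, a score display, W/S and arrow-key controls.\n"
)

response = openai.Completion.create(
    model="code-davinci-002",  # the Codex model, since retired
    prompt=prompt,
    max_tokens=2048,
    temperature=0.2,  # a little randomness, for "infinite variety"
)
print(response.choices[0].text)
```

Continue reading “Let Machine Learning Code An Infinite Variety Of Pong Games”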

Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns

The first lesson a new parent learns is that the second you think you’ve finally figured out your kid’s patterns — sleeping, eating, pooping, crying endlessly in the middle of the night for no apparent reason, whatever — the kid will change it. It’s the Uncertainty Principle of kids — the mere act of observing the pattern changes it, and you’re back at square one.

As immutable as this rule seems, [Caleb Olson] is convinced he can work around it with this over-engineered sleep pattern tracker. You may recall [Caleb]’s earlier attempts to automate certain aspects of parenthood, like this machine learning system to predict when baby is hungry; and yes, he’s also strangely obsessed with automating his dog’s bathroom habits. All that preliminary work put [Caleb] in a good position to analyze his son’s sleep patterns, which he did with the feed from their baby monitor camera and Google’s MediaPipe library.

This lets him measure how far the baby’s eyes are open, calculate a wakefulness probability, and record the time he wakes up. This worked great right up until the wave function collapsed and the baby suddenly started sleeping on his side, requiring the addition of a general motion detection function to compensate for the missing eyeball data. Check out the video below for more details, although the less said about the screaming, demon-possessed owl, the better.
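
Measuring eye openness with MediaPipe boils down to comparing distances between eyelid landmarks from the face mesh. A minimal sketch along those lines follows; the normalization constant and fallback logic are our illustrative guesses, not [Caleb]’s actual code.

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1, refine_landmarks=True)

# Face-mesh landmark indices for the upper/lower eyelids
LEFT_EYE = (159, 145)
RIGHT_EYE = (386, 374)

def eye_openness(frame_bgr):
    """Return a rough 0..1 eye-openness score, or None if no face found."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None  # side sleeper: fall back to motion detection
    lm = results.multi_face_landmarks[0].landmark
    def gap(top, bottom):
        return abs(lm[top].y - lm[bottom].y)
    # Normalize against an assumed fully-open gap of ~0.03 (tune per camera)
    return min(1.0, (gap(*LEFT_EYE) + gap(*RIGHT_EYE)) / 2 / 0.03)

cap = cv2.VideoCapture(0)  # baby monitor feed
ok, frame = cap.read()
if ok:
    print("openness:", eye_openness(frame))
cap.release()
```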

The data [Caleb] has collected has helped him and his wife understand the little fellow’s sleep needs and fine-tune his cycles. There’s a web app, of course, and a really nice graphical representation of total time asleep and awake. No word on naps not taken in view of the camera, though — naps in the car are an absolute godsend for many parents. We suppose that could be curated manually, but wouldn’t doubt it if [Caleb] had a plan to cover that too.

Continue reading “Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns”

Detecting Machine-Generated Content: An Easier Task For Machine Or Human?

In today’s world we are surrounded by various sources of written information, information which we generally assume to have been written by other humans. Whether this is in the form of books, blogs, news articles, forum posts, feedback on a product page or the discussions on social media and in comment sections, the assumption is that the text we’re reading has been written by another person. However, over the years this assumption has become ever more likely to be false, most recently due to large language models (LLMs) such as GPT-2 and GPT-3 that can churn out plausible paragraphs on just about any topic when requested.

This raises the question of whether we are about to reach a point where we can no longer be reasonably certain that an online comment, a news article, or even entire books and film scripts weren’t churned out by an algorithm, or perhaps even where an online chat with a sizzling new match turns out to be just you getting it on with an unfeeling collection of code that was trained and tweaked for maximum engagement with customers. (Editor’s note: no, we’re not playing that game here.)

As machine-generated content and interactions begin to play an ever bigger role, two questions arise: how can such generated content be detected, and does it matter that the content was generated by an algorithm instead of by a human being?
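
One classic machine-side approach turns a language model on itself: text the model finds suspiciously predictable (low perplexity) is more likely to have come from a model. A rough sketch using GPT-2 through Hugging Face Transformers; the threshold is purely illustrative, and real detectors calibrate on known human and machine text.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Perplexity of the text under GPT-2: lower means the model
    finds the text more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
print("perplexity:", ppl, "(machine-like)" if ppl < 20 else "(human-like)")
```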

Continue reading “Detecting Machine-Generated Content: An Easier Task For Machine Or Human?”


Illuminate Your Benched Things With This Death Stranding Lamp

[Pinkman] creates a smart RGB table lamp based on the “Odradek device” robot arm from the video game “Death Stranding”.

At the heart of the lamp is a XIAO BLE nRF52840 Sense, which brings Bluetooth support, a microphone, and TinyML capability. The nRF52840 pushes data to the five WS2812 strips, one for each “blade” of the lamp, and also connects to a TTP223 capacitive touch controller for touch input. The TinyML capability allows for custom keyword training to turn on the lamp with voice commands ([Pinkman] uses “Bling Bling”). [Pinkman] has also provided Bluetooth control, allowing the color and pattern to be changed from a phone application.
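
The touch-toggles-LEDs part of such a build is compact; here is a minimal CircuitPython-flavored sketch with placeholder pin assignments and colors. The actual firmware also layers on BLE control and TinyML keyword spotting.

```python
import time
import board
import digitalio
import neopixel

# One WS2812 strip per blade; pins and LED counts here are placeholders
BLADE_PINS = (board.D0, board.D1, board.D2, board.D3, board.D6)
blades = [neopixel.NeoPixel(pin, 8, brightness=0.4) for pin in BLADE_PINS]

# The TTP223 outputs a digital high while touched
touch = digitalio.DigitalInOut(board.D7)
touch.direction = digitalio.Direction.INPUT

lamp_on = False
was_touched = False
while True:
    touched = touch.value
    if touched and not was_touched:  # rising edge = one tap
        lamp_on = not lamp_on
        color = (80, 0, 120) if lamp_on else (0, 0, 0)
        for blade in blades:
            blade.fill(color)
    was_touched = touched
    time.sleep(0.05)  # simple debounce
```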

The lamp is 3D printed, with the build based on [Nils Kal]’s Printables files. Each of the five blades has a white 3D-printed diffuser plate to smooth out the hot spots from the LED strip. The lamp is fully adjustable, in addition to having cavities, channels, and access points for “invisible” wiring. [Pinkman] has also upgraded the original 3D files to allow for the three wires needed to drive the WS2812 strips, instead of the two wires that [Nils] had allotted in the original.

[Pinkman] has all of the code, STL files and training data available for download, so be sure to check it out. Lamps are a favorite of ours and we’ve featured our fair share, including 3D printed Shoji lamps and RGB wall lamps.

Video after the break!

Continue reading “Illuminate Your Benched Things With This Death Stranding Lamp”