Simulating Temperature In VR Apps With Trigeminal Nerve Stimulation

Virtual reality systems are getting better and better all the time, but they remain largely ocular and auditory devices, with perhaps a little haptic feedback added in for good measure. That still leaves 40% of the five canonical senses out of the mix, unless of course this trigeminal nerve-stimulating VR accessory catches on.

While you may be tempted to write this off as a simple “Smellovision”-style olfactory feedback system, the work by [Jas Brooks], [Steven Nagels], and [Pedro Lopes] at the University of Chicago’s Human-Computer Integration Lab is intended to simulate the different thermal regimes a VR user might encounter in a virtual world. True, the addition to an off-the-shelf Vive headset does waft chemicals into the wearer’s nose using three microfluidic pumps with vibrating mesh atomizers, but it’s the choice of chemicals and their target that makes this work. The stimulants used are odorless, so instead of triggering the olfactory receptors in the nose, they target the trigeminal nerve, which also innervates the nasal lining and produces more systemic sensations, like the generalized heat of chili peppers and the cooling power of mint. The headset leverages these sensations to change the perceived temperature within a simulation.

The video below shows the custom simulation developed for this experiment. In addition to capsaicin’s heat and eucalyptol’s cooling, the team added a third channel with 8-mercapto-p-menthan-3-one, an organic compound that’s intended to simulate the smoke from a generator that gets started in-game. The paper goes into great detail on the various receptors that can be stimulated and the different concoctions needed, and full build information is available in the GitHub repo. We’ll be watching this one with interest.

Continue reading “Simulating Temperature In VR Apps With Trigeminal Nerve Stimulation”

Bare-Metal STM32: Setting Up And Using SPI

The Serial Peripheral Interface (SPI) was initially standardized by Motorola in 1979 for short-distance communication in embedded systems. In its most common four-wire configuration, full-duplex data transfer is possible over the two data lines (MOSI, MISO), with data rates well exceeding 10 Mb/s. This makes SPI suitable for high-bandwidth, full-duplex applications like SD storage cards and high-resolution, high-refresh-rate displays.

STM32 devices come with a varying number of SPI peripherals: two in the F042 (running at up to 18 Mb/s) and five in the F411. Across the STM32 families, the SPI peripheral is quite similar, with fairly minor differences in the register layout. In this article we’ll look at configuring an SPI peripheral in master mode.
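To make that concrete, here is a minimal register-level sketch of what master-mode setup can look like on an F0-class part such as the F042, written against ST’s CMSIS device header. The pin choice (PA5/PA6/PA7 on AF0), the clock divider, and the byte-transfer helper are our own assumptions for illustration rather than anything prescribed here:

// Minimal SPI1 master setup sketch for an STM32F0-class part (e.g. the F042),
// using the vendor CMSIS device header. Pins and clocking are assumptions;
// adjust for your board.
#include "stm32f0xx.h"

static void spi1_init_master(void) {
    // Enable clocks for GPIOA and the SPI1 peripheral
    RCC->AHBENR  |= RCC_AHBENR_GPIOAEN;
    RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;

    // PA5 = SCK, PA6 = MISO, PA7 = MOSI, all in alternate function mode
    GPIOA->MODER &= ~(GPIO_MODER_MODER5 | GPIO_MODER_MODER6 | GPIO_MODER_MODER7);
    GPIOA->MODER |=  (GPIO_MODER_MODER5_1 | GPIO_MODER_MODER6_1 | GPIO_MODER_MODER7_1);
    // SPI1 lives on AF0 for these pins, which is the reset value of AFR

    // CR1: master mode, software NSS (SSM + SSI), clock = f_PCLK / 8 (BR = 0b010)
    SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_SSM | SPI_CR1_SSI | SPI_CR1_BR_1;

    // CR2: 8-bit data size (DS = 0b0111), RXNE threshold at one byte
    SPI1->CR2 = SPI_CR2_DS_0 | SPI_CR2_DS_1 | SPI_CR2_DS_2 | SPI_CR2_FRXTH;

    // Finally, enable the peripheral
    SPI1->CR1 |= SPI_CR1_SPE;
}

static uint8_t spi1_transfer(uint8_t out) {
    while (!(SPI1->SR & SPI_SR_TXE)) { }       // wait for room in the TX FIFO
    *(volatile uint8_t *)&SPI1->DR = out;      // 8-bit write avoids data packing
    while (!(SPI1->SR & SPI_SR_RXNE)) { }      // wait for the clocked-in byte
    return *(volatile uint8_t *)&SPI1->DR;
}

With that in place, talking to an SPI device is just a matter of pulling its chip-select line low and pushing bytes through spi1_transfer().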

Continue reading “Bare-Metal STM32: Setting Up And Using SPI”

How The Image-Generating AI Of Stable Diffusion Works

[Jay Alammar] has put up an illustrated guide to how Stable Diffusion works, and the principles in it are perfectly applicable to understanding how similar systems like OpenAI’s Dall-E or Google’s Imagen work under the hood as well. These systems are probably best known for their amazing ability to turn text prompts (e.g. “paradise cosmic beach”) into a matching image. Sometimes. Well, usually, anyway.

‘System’ is an apt term, because Stable Diffusion (like similar tools) is actually made up of many separate components working together to make the magic happen. [Jay]’s illustrated guide really shines here, because it starts at a very high level with only three components (each with its own neural network) and drills down as needed to explain what’s going on at a deeper level, and how it fits into the whole.

Spot any similar shapes and contours between the image and the noise that preceded it? That’s because the image is a result of removing noise from a random visual mess, not building it up from scratch like a human artist would do.

It may surprise some to discover that the image creation part doesn’t work the way a human does. That is to say, it doesn’t begin with a blank canvas and build an image bit by bit from the ground up. It begins with a seed: a bunch of random noise. Noise gets subtracted in a series of steps that leave the result looking less like noise and more like an aesthetically pleasing and (ideally) coherent image. Combine that with the ability to guide noise removal in a way that favors conforming to a text prompt, and one has the bones of a text-to-image generator. There’s a lot more to it of course, and [Jay] goes into considerable detail for those who are interested.
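For the algorithmically inclined, that loop boils down to something like the toy sketch below. To be clear, this is our own illustrative C++ pseudocode, not Stable Diffusion’s actual implementation: the Latent type, the placeholder predict_noise() standing in for the text-conditioned U-Net, and the linear step schedule are all simplifying assumptions.

// Conceptual sketch of a denoising loop; everything here is illustrative.
#include <cstddef>
#include <random>
#include <vector>

using Latent = std::vector<float>;  // stand-in for the (latent) image tensor

// Placeholder for the neural network that estimates the noise still present
// in the latent, conditioned on the text embedding and the current step.
Latent predict_noise(const Latent &x, const Latent &text_embedding, int step) {
    (void)text_embedding;
    (void)step;
    return x;  // trivial stub so the sketch compiles; the real model is a U-Net
}

Latent generate(const Latent &text_embedding, std::size_t latent_size, int steps) {
    // Start from pure random noise, the "seed", rather than a blank canvas.
    std::mt19937 rng{42};
    std::normal_distribution<float> gauss{0.0f, 1.0f};
    Latent x(latent_size);
    for (auto &v : x) v = gauss(rng);

    // Repeatedly estimate the remaining noise and subtract a fraction of it,
    // with the text embedding steering each estimate toward the prompt.
    for (int t = steps; t > 0; --t) {
        Latent noise = predict_noise(x, text_embedding, t);
        const float step_size = 1.0f / static_cast<float>(steps);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] -= step_size * noise[i];
        }
    }
    return x;  // a separate decoder network turns this latent into pixels
}

In the real system the loop runs in a compressed latent space, and a separate decoder network turns the final result into the image you actually see.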

If you’re unfamiliar with Stable Diffusion or art-creating AI in general, it’s one of those fields that is changing so fast that it sometimes feels impossible to keep up. Luckily, our own [Matthew Carlson] explains all about what it is, and why it matters.

Stable Diffusion can be run locally. There is a fantastic open-source web UI, so there’s no better time to get up to speed and start experimenting!


DIY Arduino Hearing Test Device

Hearing loss is a common problem for many – especially those who may have attended too many loud concerts in their youth. [mircemk] had recently been for a hearing test, and noticed that the procedure was actually quite straightforward. Armed with this knowledge, he decided to build his own test system and document it for others to use.

Resultant audiogram from the device showing each ear in a different color

By using an Arduino to produce tones at a series of stepped frequencies, and gradually increasing the volume until the test subject can detect the tone, it is possible to plot an audiogram of hearing threshold sensitivity. Testing each ear individually allows a comparison between one side and the other.
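In broad strokes, the firmware only needs to step through a handful of standard audiometric frequencies and ramp the level at each one until the subject presses a button. The rough Arduino-style sketch below shows the idea; the pin assignments, the PWM-driven attenuator, and the level step size are our own illustrative assumptions rather than details of [mircemk]’s build.

// Illustrative test loop, not [mircemk]'s actual firmware. Assumes an earpiece
// driver on TONE_PIN, a PWM-controlled attenuator on LEVEL_PIN, and a
// push-button to ground on BUTTON_PIN (all hypothetical wiring).
#include <Arduino.h>

const uint8_t TONE_PIN   = 9;
const uint8_t LEVEL_PIN  = 10;
const uint8_t BUTTON_PIN = 2;

// Standard audiometric test frequencies, low to high
const uint16_t freqs[] = {250, 500, 1000, 2000, 4000, 8000};
const uint8_t NUM_FREQS = sizeof(freqs) / sizeof(freqs[0]);
uint8_t thresholds[NUM_FREQS];             // recorded threshold level per frequency

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  Serial.begin(115200);

  for (uint8_t i = 0; i < NUM_FREQS; i++) {
    thresholds[i] = 255;                     // "not heard" by default
    tone(TONE_PIN, freqs[i]);                // start the test tone

    // Step the level up from quiet until the subject presses the button
    for (uint8_t level = 5; level <= 100; level += 5) {
      analogWrite(LEVEL_PIN, level);         // crude volume control via PWM duty
      delay(1000);                           // give the subject a moment to respond
      if (digitalRead(BUTTON_PIN) == LOW) {  // button pressed: tone detected
        thresholds[i] = level;
        break;
      }
    }
    noTone(TONE_PIN);
    analogWrite(LEVEL_PIN, 0);
    delay(500);                              // short pause between frequencies
  }

  // Dump the raw audiogram data over serial for plotting
  for (uint8_t i = 0; i < NUM_FREQS; i++) {
    Serial.print(freqs[i]);
    Serial.print(" Hz -> level ");
    Serial.println(thresholds[i]);
  }
}

void loop() {}

Plot the recorded level against frequency for each ear, and you have your audiogram.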

[mircemk] has built a nice miniature cabinet that holds an 8×8 matrix of WS2812 addressable RGB LEDs.  A 128×64 pixel OLED display provides user instructions, and a rotary encoder with push-button serves as the user input.

Of course, this is not a calibrated professional piece of test equipment, and a lot will depend on the quality of the earpiece used.  However, as a way to check for gross hearing issues, and as an interesting experiment, it holds a lot of promise.

There is even an extension, including a Class D audio amplifier, that allows the use of bone-conduction earpieces to help narrow down the cause of hearing loss further.

There’s some more information on bone conduction here, and we’ve covered an intriguing optical stimulation cochlear implant, too.

Continue reading “DIY Arduino Hearing Test Device”

Sight And Sound Combine In This Engaging Synthesizer Sculpture

We’ll always have a soft spot for circuit sculpture projects; anything with components supported on nice tidy rows of brass wires always captures our imagination. But add to that a little bit of light and a lot of sound, and you get something like this hybrid synthesizer sculpture that really commands attention.

[Eirik Brandal] calls his creation “corwin point,” and describes it as “a generative dual voice analog synthesizer.” It’s built with a wide-open architecture that invites exploration and serves to pull the eyes — and ears — into the piece. The lowest level of the sculpture has all the “boring” digital stuff — an ESP32, the LED drivers, and the digital-to-analog converters. The next level up has the more visually interesting analog circuits, built mainly “dead-bug” style on a framework of brass wires. The user interface, mainly a series of pots and switches, lives on this level, as does a Seeed Studio Wio Terminal, which displays a spectrum-analyzer view of the generated sounds.

Moving up a bit, there’s a seemingly incongruous vacuum tube overdrive along with a power amp and speaker in an acrylic enclosure. A vertical element of thick acrylic towers over all and houses the synth’s delay line, and the light pipes that snake through the sculpture pulse in time with sequencer events. The video below shows the synth in action — the music it generates never really sounds the same twice, and it’s unlike anything we’ve heard before, except for a brief moment that reminded us of the background music from Logan’s Run.

Hats off to [Eirik] for another great-looking and great-sounding build; you may remember that his “cwymriad” caught our attention earlier this year.

Continue reading “Sight And Sound Combine In This Engaging Synthesizer Sculpture”


Hackaday Links: October 23, 2022

There were strange doings this week as Dallas-Fort Worth Airport in Texas experienced two consecutive days of GPS outages. The problem first cropped up on the 17th, as the Federal Aviation Administration sent out an automated notice that GPS reception was “unreliable” within 40 nautical miles of DFW, an area that includes at least ten other airports. One runway at DFW, runway 35R, was actually closed for a while because of the anomaly. According to GPSjam.org — because of course someone built a global mapping app to track GPS coverage — the outage only got worse the next day, both spreading geographically and deepening in some areas. Some have noted that the area of the outage abuts Fort Hood, one of the largest military installations in the country, but there doesn’t appear to be any connection to military operations. The outage ended abruptly at around 11:00 PM local time on the 19th, and there’s still no word about what caused it. Loss of GPS isn’t exactly a “game over” problem for modern aviation, but it certainly is a problem, and at the very least it points out how easy the system is to break, either accidentally or intentionally.

In other air travel news, almost as quickly as Lufthansa appeared to ban the use of Apple AirTags in checked baggage, the airline reversed course on the decision. The original decision was supposed to have been based on “an abundance of caution” regarding the potential for disaster from the trackers’ low-power transmitters, or from a stowed AirTag’s CR2032 battery exploding. But as it turns out, the Luftfahrt-Bundesamt, the German civil aviation authority, agreed with the company’s further assessment that the tags pose little risk, green-lighting their return to the cargo compartment. What luck! The original ban totally didn’t have anything to do with the fact that passengers were shaming Lufthansa online by tracking their bags with AirTags while the company claimed they couldn’t locate them, and the sudden reversal is unrelated to the bad taste this left in passengers’ mouths. Of course, the reversal only opened the door to more adventures in AirTag luggage tracking, so that’s fun.

Energy prices are much on everyone’s mind these days, but the scale of the problem is somewhat a matter of perspective. Take, for instance, the European Organization for Nuclear Research (CERN), which runs a little thing known as the Large Hadron Collider, a 27-kilometer-long machine that smashes particles together to delve into the mysteries of physics. In an average year, CERN uses 1.3 terawatt-hours of electricity to run the LHC and its associated equipment. Technically, this is what’s known as a hell of a lot of electricity, and given the current energy issues in Europe, CERN has agreed to wind down the LHC a bit early this year, halting in late November instead of the usual mid-December. What’s more, CERN has agreed to reduce usage by 20% next year, which will increase scientific competition for beamtime on the LHC. There’s only so much CERN can do to reduce the LHC’s usage, though — the cryogenic plant to cool the superconducting magnets draws a whopping 27 megawatts, and has to be kept going to prevent the magnets from quenching.

And finally, as if the COVID-19 pandemic hasn’t been weird enough, it’s alarming how many survivors it has left in its wake with a compromised sense of smell. Our daily ritual during the height of the pandemic was to open up a jar of peanut butter and take a whiff, figuring that even the slightest attenuation of the smell would serve as an early warning system for symptom onset. Thankfully, the alarm hasn’t been tripped, but we know more than a few people who now suffer from what appears to be permanent anosmia. It’s no joke — losing one’s sense of smell can be downright dangerous; think “gas leak” or “spoiled food.” So it was with interest that we spied an article about a neuroprosthetic nose that might one day let the nasally challenged smell again. The idea is to use an array of chemical sensors to drive an array of electrodes implanted near the olfactory bulb. It’s an interesting idea, and the article provides a lot of fascinating details on how the olfactory sense actually works.

The Tiny Tapeout flow: from a Wokwi (Fritzing-like) logic-gate diagram, to a chip shot, to a KiCad-rendered breakout PCB with DIP switches and LEDs. “Tiny Tapeout! Demystifying microchip design and manufacture”

Design Your Own Chip With TinyTapeout

When hackers found and developed ways to order PCBs on the cheap, it revolutionized the way we create. Accessible 3D printing opened up entire new areas for making things. [Matt Venn] is one of the people at the forefront of hackers designing our own silicon, and we’ve covered plenty of his research over the years. His latest effort to involve the hacker community, TinyTapeout, makes chip design accessible to newcomers – the bar is as low as arranging logic gates in a web browser.

Just six of the designs submitted, with varying complexity: some use only a little of the allotted area, while others fill it entirely

For this, [Matt] worked with people like [Uri Shaked] of Wokwi fame, [Sylvain “tnt” Munaut], [jix], and a few others. Together, they created all the tooling necessary and, most importantly, a pipeline where your logic-gate design in Wokwi gets compiled into a block ready to be put into silicon, complete with simulation and compile-time checks for common mistakes. As a result, the design process is remarkably straightforward, to the point where a 9-year-old kid can do it. If you wanted, you could submit your Verilog, too!

The first round of TinyTapeout closed in the first days of September and brought 152 entries together – just in time for an Efabless shuttle submission. All of these designs were put onto a single chip that will be fabbed in quantity, tested, soldered onto breakouts, and mailed out to individual participants. In this way, everyone gets everyone else’s design, and thanks to the on-chip muxing hardware, each participant can switch between designs using the DIP switches on the breakout.

More after the break…

Continue reading “Design Your Own Chip With TinyTapeout”