Flight Simulator Focuses On The Other Side Of The Cockpit Door

When one thinks of getting into a flight simulator, one assumes that it’ll be from the pilot’s point of view. But this alternative flight simulator takes a different tack, letting you live out your air travel fantasies from the passenger’s seat instead.

Those of you looking for a full-motion simulation of the passenger cabin experience will be disappointed, as [Alex Shakespeare] — we assume no relation — has built a minimal airliner cabin for this simulator. That makes sense, though; ideally, an airline pilot aims to provide passengers with as dull a ride as possible. Where a flight is at its most exciting, and what [Alex] captures nicely here, is the final approach to your destination, when the airport and its surrounding environs finally come into view after a long time staring at clouds. This is done by mounting an LCD monitor outside the window of a reasonable facsimile of an airliner cabin, complete with a row of seats. A control panel that originally lived in an airliner cockpit serves to select video of approaches to airports in various exotic destinations, like Las Vegas. The video is played by a Pi Zero, while an ESP32 takes care of controlling the lights, fans, and attendant call buttons in the quite realistic-looking overhead panel. Extra points for the button that plays the Ryanair arrival jingle.
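
[Alex] doesn’t seem to have published firmware, so the details are anyone’s guess, but the call-button logic is the sort of thing that fits in a few lines of MicroPython on the ESP32. Here’s a minimal sketch of the idea; the pin numbers, the active-low button, and the chime trigger line are all our assumptions, not details from the actual build:

```python
# Hypothetical overhead-panel logic -- pin assignments are made up
from machine import Pin
import time

call_button = Pin(4, Pin.IN, Pin.PULL_UP)  # momentary switch, active low
call_light = Pin(5, Pin.OUT)               # call light in the overhead panel
chime = Pin(18, Pin.OUT)                   # trigger line to a sound module

while True:
    if call_button.value() == 0:           # button pressed
        call_light.value(not call_light.value())
        chime.on()
        time.sleep_ms(150)                 # short pulse to fire the chime
        chime.off()
        while call_button.value() == 0:    # wait for release (crude debounce)
            time.sleep_ms(20)
    time.sleep_ms(20)
```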

[Alex]’s simulator is impressively complete, if somewhat puzzling in conception. We don’t judge, though, and it looks like it might be fun for visitors, especially when the drinks cart comes by.

Continue reading “Flight Simulator Focuses On The Other Side Of The Cockpit Door”

Simulating Temperature In VR Apps With Trigeminal Nerve Stimulation

Virtual reality systems are getting better and better all the time, but they remain largely ocular and auditory devices, with perhaps a little haptic feedback added in for good measure. That still leaves 40% of the five canonical senses out of the mix, unless of course this trigeminal nerve-stimulating VR accessory catches on.

While you may be tempted to write this off as simple “Smellovision”-style olfactory feedback, the work by [Jas Brooks], [Steven Nagels], and [Pedro Lopes] at the University of Chicago’s Human-Computer Integration Lab is intended to simulate the different thermal regimes a VR user might encounter in a simulation. True, the addition to an off-the-shelf Vive headset does waft chemicals into the wearer’s nose using three microfluidics pumps with vibrating mesh atomizers, but it’s the choice of chemicals and their target that makes this work. The stimulants used are odorless, so instead of triggering the olfactory bulb in the nose, they target the trigeminal nerve, which also innervates the lining of the nose and causes more systemic sensations, like the generalized hot feeling of chili peppers and the cooling power of mint. The headset leverages these sensations to change the perceived thermal regime of a simulation.

The video below shows the custom simulation developed for this experiment. In addition to capsaicin’s heat and eucalyptol’s cooling, the team added a third channel with 8-mercapto-p-menthan-3-one, an organic compound that’s intended to simulate the smoke from a generator that gets started in-game. The paper goes into great detail on the various receptors that can be stimulated and the different concoctions needed, and full build information is available in the GitHub repo. We’ll be watching this one with interest.
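
As a rough illustration of the control problem, the mapping from a simulated temperature to pump drive might look something like the sketch below. This is a naive model of our own devising; the actual dosing, receptor response, and pump control details are in the paper:

```python
# Illustrative only: map a simulated temperature to atomizer duty cycles
def thermal_to_duty(sim_temp_c, neutral_c=22.0, span_c=15.0):
    """Return (capsaicin, eucalyptol) duty cycles in 0..1."""
    delta = (sim_temp_c - neutral_c) / span_c
    delta = max(-1.0, min(1.0, delta))  # clamp to full scale
    if delta > 0:
        return (delta, 0.0)   # hotter than neutral: drive the capsaicin channel
    return (0.0, -delta)      # colder: drive the eucalyptol channel

print(thermal_to_duty(35.0))  # hot scene -> roughly (0.87, 0.0)
```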

Continue reading “Simulating Temperature In VR Apps With Trigeminal Nerve Stimulation”

Sight And Sound Combine In This Engaging Synthesizer Sculpture

We’ll always have a soft spot for circuit sculpture projects; anything with components supported on nice tidy rows of brass wires always captures our imagination. But add to that a little bit of light and a lot of sound, and you get something like this hybrid synthesizer sculpture that really commands attention.

[Eirik Brandal] calls his creation “corwin point,” and describes it as “a generative dual voice analog synthesizer.” It’s built with a wide-open architecture that invites exploration and serves to pull the eyes — and ears — into the piece. The lowest level of the sculpture has all the “boring” digital stuff — an ESP32, the LED drivers, and the digital-to-analog converters. The next level up has the more visually interesting analog circuits, built mainly “dead-bug” style on a framework of brass wires. The user interface, mainly a series of pots and switches, lives on this level, as does a Seeed Studio Wio Terminal, which displays a spectrum analyzer view of the generated sounds.
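
The Wio Terminal runs its own firmware, but the math behind any spectrum display is the same everywhere: window the samples, take an FFT, and bucket the bin magnitudes into bars. Here’s the gist in Python with NumPy; the sample rate, window size, and bar count are arbitrary choices for illustration:

```python
import numpy as np

SAMPLE_RATE = 22_050  # arbitrary; whatever the ADC delivers
N = 512               # analysis window length

def spectrum_bars(samples, n_bars=32):
    """Collapse an FFT magnitude spectrum into a few bars for a small LCD."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    mags = np.abs(np.fft.rfft(windowed))
    bands = np.array_split(mags, n_bars)           # group bins into bars
    return [float(b.mean()) for b in bands]

# Quick test with a synthetic 440 Hz tone
t = np.arange(N) / SAMPLE_RATE
print(spectrum_bars(np.sin(2 * np.pi * 440 * t)))
```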

Moving up a bit, there’s a seemingly incongruous vacuum tube overdrive along with a power amp and speaker in an acrylic enclosure. A vertical element of thick acrylic towers over all and houses the synth’s delay line, and the light pipes that snake through the sculpture pulse in time with sequencer events. The video below shows the synth in action — the music it generates never really sounds the same twice and is like nothing we’ve heard before, except for a brief passage that reminded us of the background music from Logan’s Run.

Hats off to [Eirik] for another great-looking and great-sounding build; you may remember that his “cwymriad” caught our attention earlier this year.

Continue reading “Sight And Sound Combine In This Engaging Synthesizer Sculpture”

Hackaday Links: October 23, 2022

There were strange doings this week as Dallas-Fort Worth Airport in Texas experienced two consecutive days of GPS outages. The problem first cropped up on the 17th, as the Federal Aviation Administration sent out an automated notice that GPS reception was “unreliable” within 40 nautical miles of DFW, an area that includes at least ten other airports. One runway at DFW, runway 35R, was actually closed for a while because of the anomaly. According to GPSjam.org — because of course someone built a global mapping app to track GPS coverage — the outage only got worse the next day, both spreading geographically and worsening in some areas. Some have noted that the area of the outage abuts Fort Hood, one of the largest military installations in the country, but there doesn’t appear to be any connection to military operations. The outage ended abruptly at around 11:00 PM local time on the 19th, and there’s still no word about what caused it. Loss of GPS isn’t exactly a “game over” problem for modern aviation, but it certainly is a problem, and at the very least it points out how easy the system is to break, either accidentally or intentionally.

In other air travel news, almost as quickly as Lufthansa appeared to ban the use of Apple AirTags in checked baggage, the airline reversed course on the decision. The original decision was supposedly based on “an abundance of caution” over the tags’ low-power transmitters and the chance that a stowed AirTag’s CR2032 battery could explode. But as it turns out, the Luftfahrt-Bundesamt, the German civil aviation authority, agreed with the company’s further assessment that the tags pose little risk, green-lighting their return to the cargo compartment. What luck! The original ban totally didn’t have anything to do with the fact that passengers were shaming Lufthansa online by tracking their bags with AirTags while the company claimed it couldn’t locate them, and the sudden reversal is unrelated to the bad taste this left in passengers’ mouths. Of course, the reversal only opened the door to more adventures in AirTag luggage tracking, so that’s fun.

Energy prices are much on everyone’s mind these days, but the scale of the problem is somewhat a matter of perspective. Take, for instance, the European Organization for Nuclear Research (CERN), which runs a little thing known as the Large Hadron Collider, a 27-kilometer-long machine that smashes atoms together to delve into the mysteries of physics. In an average year, CERN uses 1.3 terawatt-hours of electricity to run the LHC and its associated equipment. Technically, this is what’s known as a hell of a lot of electricity, and given the current energy issues in Europe, CERN has agreed to shut down the LHC a bit early this year, in late November instead of the usual mid-December halt. What’s more, CERN has agreed to reduce usage by 20% next year, which will increase scientific competition for beamtime on the LHC. There’s only so much CERN can do to reduce the LHC’s usage, though — the cryogenic plant that cools the superconducting magnets draws a whopping 27 megawatts and has to be kept running to prevent the magnets from quenching.
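
A quick back-of-the-envelope calculation shows why the cryo plant puts a floor under any savings:

```python
# The cryo plant alone, run around the clock for a year
cryo_mw = 27
hours_per_year = 365 * 24                        # 8,760 hours
cryo_twh = cryo_mw * hours_per_year / 1_000_000  # MWh -> TWh
print(f"{cryo_twh:.2f} TWh/year")  # ~0.24 TWh, nearly a fifth of the 1.3 TWh total
```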

And finally, as if the COVID-19 pandemic hasn’t been weird enough, it has also left in its wake survivors whose sense of smell is compromised. Our daily ritual during the height of the pandemic was to open up a jar of peanut butter and take a whiff, figuring that even the slightest attenuation of the smell would serve as an early warning system for symptom onset. Thankfully, the alarm hasn’t been tripped, but we know more than a few people who now suffer from what appears to be permanent anosmia. It’s no joke — losing one’s sense of smell can be downright dangerous; think “gas leak” or “spoiled food.” So it was with interest that we spied an article about a neuroprosthetic nose that might one day let the nasally challenged smell again. The idea is to use an array of chemical sensors to stimulate an array of electrodes implanted near the olfactory bulb. It’s an interesting idea, and the article provides a lot of fascinating details on how the olfactory sense actually works.

When [Elon] Says No, Just Reverse Engineer The Starlink Signal

We all know that it’s sometimes better to beg forgiveness than ask permission to do something, and we’ll venture a guess that more than a few of us have taken that advice to heart on occasion. But [Todd Humphreys] got the order of operations a bit mixed up with his attempt to leverage the Starlink network as a backup to the Global Positioning System, and ended up doing some interesting reverse engineering work as a result.

The story goes that [Todd] and his team at the University of Texas at Austin’s Radionavigation Lab, on behalf of their sponsors in the US Army, approached Starlink about cooperating on a project to make the company’s low-Earth orbit constellation provide position, navigation, and timing capabilities. Although initially interested in the project, Starlink honcho [Elon Musk] put the brakes on things, leaving [Todd]’s team high and dry. Not to be dissuaded, they bought a Starlink user terminal, built what amounts to a small radiotelescope — although we’ve seen something similar done with just an RTL-SDR — and proceeded to reverse-engineer the structure of Starlink’s Ku-band downlink signal. The paper (PDF link) on their findings is densely packed with details, such as the fact that Starlink uses an orthogonal frequency-division multiplexing (OFDM) scheme.
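
To give a flavor of what that reverse engineering involves: each OFDM symbol carries a cyclic prefix, a copy of its own tail prepended to its head, and correlating the received stream against itself at a lag of one FFT length makes the symbol boundaries pop out even in an unknown signal. Here’s a toy version of that trick; the FFT size and prefix length below are illustrative stand-ins, and the actual parameters the team recovered are in the paper:

```python
import numpy as np

N, CP = 1024, 32  # FFT size and cyclic-prefix length -- illustrative values

def coarse_symbol_sync(rx, n=N, cp=CP):
    """Find an OFDM symbol boundary by cyclic-prefix autocorrelation."""
    scores = []
    for offset in range(n + cp):
        head = rx[offset : offset + cp]          # candidate cyclic prefix
        tail = rx[offset + n : offset + n + cp]  # same samples, n later
        scores.append(abs(np.vdot(head, tail)))  # peaks where they match
    return int(np.argmax(scores))

# Build a fake OFDM symbol: random subcarriers, IFFT, prepend the prefix
data = np.fft.ifft(np.exp(2j * np.pi * np.random.rand(N)))
symbol = np.concatenate([data[-CP:], data])
rx = np.concatenate([0.01 * np.random.randn(100), symbol, symbol])
print(coarse_symbol_sync(rx))  # should land at (or very near) 100
```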

It’s important to note that their goal was not to break encryption or sniff user data; rather, they wanted access to the synchronization and timing signals embedded in the Starlink data structures. By using this data along with the publicly available ephemerides for each satellite, it’s possible to quickly calculate the distance to multiple satellites and determine the receiver’s location to within 30 meters. It’s not as good as some GPS-Starlink hacks we’ve seen, but it’s still pretty good in a pinch. Besides, the reverse engineering work here is well worth a read.
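
Conceptually, turning those timing measurements into a fix is the classic multilateration problem: given satellite positions and measured ranges, find the point that best explains them. Here’s a minimal Gauss-Newton sketch of that step; note that a real pseudorange solution must also estimate the receiver’s clock bias, which we ignore here:

```python
import numpy as np

def trilaterate(sat_positions, ranges, iters=10):
    """Least-squares position fix from satellite positions and ranges."""
    x = np.zeros(3)                      # initial guess at the origin
    for _ in range(iters):
        diffs = x - sat_positions        # vectors from satellites to guess
        dists = np.linalg.norm(diffs, axis=1)
        residuals = ranges - dists
        J = diffs / dists[:, None]       # Jacobian of range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += dx                          # Gauss-Newton update
    return x

# Toy example: four "satellites" and a receiver near the origin
sats = np.array([[7000e3, 0, 0], [0, 7000e3, 0],
                 [0, 0, 7000e3], [4000e3, 4000e3, 4000e3]], dtype=float)
truth = np.array([100.0, -50.0, 10.0])
measured = np.linalg.norm(sats - truth, axis=1)
print(trilaterate(sats, measured))       # converges to ~[100, -50, 10]
```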

Thanks to [Adrian] for the tip!

This Snappy 8-Bit Microcomputer Brings The Speed To Retrocomputing

When the need for speed overcomes you, thoughts generally don’t turn to 8-bit computers. Sure, an 8-bit machine is fun for retro gameplay and reliving the glory days, and there certainly were some old machines that were notably faster than the others. But raw computing power isn’t really the point of retrocomputing.

Or is it? [Bernardo Kastrup] over at The Byte Attic has introduced an interesting machine called the Agon Light, an 8-bit SBC that’s also a bit like a microcontroller. The machine has a single PCB that looks about half as big as an Arduino Uno, and sports some of the same connectors and terminals around its periphery. The heart of the Agon Light is an eZ80 CPU, an 8-bit, 18.432 MHz, three-stage pipelined part that’s binary compatible with the Z80. It also has an audio-video coprocessor, in the form of an ESP32-Pico-D4, which supports a 640×480 64-color display and two mono audio channels. There’s no word we could find on whether the ESP32’s RF systems are accessible; it would be nice, but perhaps unnecessary since there are both USB ports and a PS/2 keyboard jack. There’s also a pin header with 20 GPIOs as well as I2C, SPI, and UART for serial communication.

The lengthy video below goes into all the details on the Agon Light, including the results of benchmark testing, all of which soundly thrash the usual 8-bit suspects. The project is open source and all the design files are available, or you can get a PCB populated with all the SMD components and just put the through-hole parts on. [Bernardo] is also encouraging people to build and sell their own Agon Lights, which seems pretty cool too. It honestly looks like a lot of fun, and we’re looking forward to seeing what people do with this.

Continue reading “This Snappy 8-Bit Microcomputer Brings The Speed To Retrocomputing”

Render Yourself Invisible To AI With This Adversarial Sweater Of Doom

Ugly sweater season is rapidly approaching, at least here in the Northern Hemisphere. We’ve always been a bit baffled by the tradition of paying top dollar for a loud, obnoxious sweater that gets worn to exactly one social event a year. We don’t judge, of course, but that’s not to say we wouldn’t look a little more favorably on someone’s fashion choice if it were more like this AI-defeating adversarial ugly sweater.

The idea behind this research from the University of Maryland is not, of course, to inform fashion trends, nor is it to create a practical invisibility cloak. It’s really to probe machine learning systems for vulnerabilities by making small changes to the input while watching for changes in the output. In this case, the ML system was a YOLO-based vision system, which has little trouble finding humans in an arbitrary image. The adversarial pattern was generated using a large set of training images, some of which contain the objects of interest — in this case, humans. Each time a human is detected, a random pattern is rendered over the image, and the result is reassessed to see how much the pattern lowers the detection score. The adversarial pattern eventually improves to the point where it mostly prevents humans from being recognized. Much more detail is available in the research paper (PDF) if you want to dig into the guts of this.
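
The render-and-reassess loop described above amounts to a random search over the patch contents. Here’s a toy version of that loop; the scoring function is a stand-in, since a real implementation would query an actual person detector like YOLO:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_score(image):
    """Stand-in for a YOLO 'person' confidence score."""
    return float(image.mean())  # placeholder so the sketch runs end to end

def apply_patch(image, patch, y=8, x=8):
    """Paste the patch over a fixed, illustrative region of the image."""
    out = image.copy()
    ph, pw = patch.shape[:2]
    out[y:y + ph, x:x + pw] = patch
    return out

def evolve_patch(images, patch, steps=200, sigma=0.05):
    """Keep random perturbations that lower the average detection score."""
    best = np.mean([detector_score(apply_patch(im, patch)) for im in images])
    for _ in range(steps):
        candidate = np.clip(patch + rng.normal(0, sigma, patch.shape), 0, 1)
        score = np.mean([detector_score(apply_patch(im, candidate))
                         for im in images])
        if score < best:                 # the pattern hides the person better
            patch, best = candidate, score
    return patch

# Toy run on random "images"
imgs = [rng.random((64, 64, 3)) for _ in range(4)]
patch = evolve_patch(imgs, rng.random((16, 16, 3)))
```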

The pattern, which looks a little like a bad impressionist painting of people buying pumpkins at a market and bears some resemblance to one we’ve seen before in similar work, is said to work better across different viewing angles. It also makes a spiffy pullover, especially if you’d rather blend in at that Christmas party.