Art Exhibit Lets You Hide From Self-Driving Cars

In the discussions about how dangerous self-driving cars are – or aren’t – one thing is sorely missing, and that is an interactive game in which you do your best not to be recognized as a pedestrian and subsequently run over. Even if this is a somewhat questionable take, there’s something to be said for the interactive display over at the Asian Art Museum in San Francisco which has you try to escape the tyranny of machine vision and get recognized as a crab, traffic cone, or something else that’s not pedestrian-shaped.

Daniel Coppen, one of the artists behind “How (not) to get hit by a self-driving car,” sets up a cone at the exhibit at the Asian Art Museum in San Francisco on March 22, 2024. (Credit: Stephen Council, SFGate)

The display ran from March 21st to March 23rd, with [Stephen Council] of SFGate having a swing at the challenge. As can be seen in the above image, he managed to get labelled as ‘fire’ during one attempt while hiding behind a stop sign as he walked the crossing. Other methods include crawling and (ab)using a traffic cone.

Created by [Tomo Kihara] and [Daniel Coppen], it’s intended to be a ‘playful, engaging game installation’. Both creators make it clear that self-driving vehicles which use LIDAR and other advanced detection methods are much harder to fool, but given how many Teslas are on the road using camera-based systems, it’s still worth demonstrating the shortcomings of the technology.

There’s no shortage of debate about whether or not autonomous vehicles are ready to share the roads with human drivers, especially when they exhibit unusual behavior. We’ve already seen protesters attempt to confuse self-driving systems with methods that aren’t far removed from what [Kihara] and [Coppen] have demonstrated here, and it seems likely such antics will only become more common with time.

LoRa, With No Radio

A LoRa project has traditionally required a dedicated radio module, because it’s a commercially licensed protocol. But as the way it works has been progressively reverse engineered, it’s become ever more possible to produce a LoRa radio for yourself. But what about a LoRa radio without a radio at all? [CNLohr] has managed just that, by driving a microcontroller pin and relying on one of its harmonics to provide enough RF to be received by a LoRa gateway.

The video below the break goes into the process in great detail, revealing some of the tricks. Undersampling to create intentional aliasing, for example, allows subharmonic peaks to be produced in unexpected places. Most of the development is performed on Espressif microcontrollers, but as the code is optimised it becomes possible to use it on much more modest silicon. The dirt-cheap CH32V003 RISC-V microcontroller, for example, can be a LoRa transmitter able to talk to a gateway at a range of hundreds of metres, rising to 2.5 km with the ESP32. The code can be found in this GitHub repository.
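To get a feel for the numbers involved, remember that a square wave contains energy at odd multiples of its fundamental. The quick C sketch below is our own back-of-the-envelope illustration rather than anything from the project’s repository, and it assumes a 915 MHz LoRa channel as the target:

#include <stdio.h>

/* Our own illustration, not project code: a square wave on a GPIO pin
 * carries energy at odd harmonics of its fundamental, so a fast enough
 * pin toggle can land a harmonic inside the LoRa band. The 915 MHz
 * target is an assumption. */
int main(void)
{
    const double target_hz = 915e6; /* assumed US-band LoRa channel */
    for (int n = 3; n <= 31; n += 2) {
        /* A fundamental of target/n puts the nth harmonic on target. */
        printf("harmonic %2d -> toggle the pin at %8.3f MHz\n",
               n, target_hz / n / 1e6);
    }
    return 0;
}

Each successive harmonic carries less energy than the one before it, which goes some way towards explaining the tiny radiated power involved.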

The CH32 can’t receive of course, and it relies on barfing harmonics all over the spectrum to work. But on the other hand its total RF output is so tiny that we’re guessing a filter for the LoRa band might even make it almost legal. He’s got a little way to go before beating the record though.

Continue reading “LoRa, With No Radio”

The Intel 8088 And 8086 Processor’s Instruction Prefetch Circuitry

The 8088 die under a microscope, with main functional blocks labeled. This photo shows the chip’s single metal layer; the polysilicon and silicon are underneath. (Credit: Ken Shirriff)

Cache prefetching is what allows processors to have data and/or instructions ready for use in a fast local cache rather than having to wait for a fetch request to trickle through to system RAM and back again. The Intel 8088 (and its big brother 8086) processor was among the first microprocessors to implement (instruction) prefetching in hardware, which [Ken Shirriff] has analyzed based on die images of this famous processor. This follows last year’s deep-dive into the 8086’s prefetching hardware, with (unsurprisingly) many similarities between these two microprocessors, as well as a few differences that are mostly due to the 8088’s cut-down 8-bit data bus.

While the 8086 has three 16-bit slots in its instruction prefetch queue, the 8088 gets four slots of 8 bits each. The prefetching hardware is part of the Bus Interface Unit (BIU), which effectively decouples the actual processor (the Execution Unit, or EU) from the system RAM. While previous MPUs were fully deterministic, with instructions being loaded from RAM and subsequently executed, the 8086 and 8088’s prefetching meant that such assumptions no longer held true. The added features in the BIU also meant that the instruction pointer (IP) and related registers moved into the BIU, while the ring buffer logic around the queue had to keep the queue state and its pointer offsets into RAM consistent.
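Conceptually the queue behaves much like a software ring buffer, with the BIU filling free slots from memory while the EU drains them one byte at a time. The toy C model below is purely our own illustration of that idea, not a description of the actual BIU circuitry:

#include <stdint.h>
#include <stdio.h>

#define QUEUE_SIZE 4 /* the 8088 has four 8-bit slots */

typedef struct {
    uint8_t slots[QUEUE_SIZE];
    unsigned read_pos, write_pos, count;
} prefetch_queue;

/* BIU side: stash the next byte fetched from memory, if a slot is free. */
static int biu_fill(prefetch_queue *q, uint8_t byte)
{
    if (q->count == QUEUE_SIZE)
        return 0; /* queue full, the bus sits idle */
    q->slots[q->write_pos] = byte;
    q->write_pos = (q->write_pos + 1) % QUEUE_SIZE;
    q->count++;
    return 1;
}

/* EU side: consume one instruction byte, stalling if the queue is empty. */
static int eu_drain(prefetch_queue *q, uint8_t *byte)
{
    if (q->count == 0)
        return 0; /* EU stalls until the BIU catches up */
    *byte = q->slots[q->read_pos];
    q->read_pos = (q->read_pos + 1) % QUEUE_SIZE;
    q->count--;
    return 1;
}

int main(void)
{
    prefetch_queue q = {0};
    const uint8_t program[] = {0x90, 0x90, 0xF4}; /* NOP, NOP, HLT */
    for (unsigned i = 0; i < sizeof program; i++)
        biu_fill(&q, program[i]);
    uint8_t op;
    while (eu_drain(&q, &op))
        printf("EU executes opcode 0x%02X\n", op);
    return 0;
}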

Even though these days CPUs have much more complicated, multi-level caches that are measured in kilobytes and megabytes, it’s fascinating to see where it all began, with just a few bytes and relatively straightforward hardware logic that you can easily follow under a microscope.

Is Your Mental Model Of Bash Pipelines Wrong?

[Michael Lynch] encountered a strange situation. Why was compiling and then running his program nearly 10x faster than just running the program by itself? [Michael] ran into this issue while benchmarking a programming project, pared it down to its essentials for repeatability and analysis, and discovered that it highlighted an incorrect mental model of how bash pipelines work.

Here’s the situation. The first thing [Michael]’s pared-down program does is start a timer. It then simply reads and counts some bytes from stdin, and finally prints out how long that took. When running the test program in the following way, it takes about 13 microseconds.

$ echo '00010203040506070809' | xxd -r -p | zig build run -Doptimize=ReleaseFast
bytes: 10
execution time: 13.549µs

When running the (already-compiled) program directly, execution time swells to 162 microseconds.

$ echo '00010203040506070809' | xxd -r -p | ./zig-out/bin/count-bytes
bytes: 10
execution time: 162.195µs

Again, the only difference between zig build run and ./zig-out/bin/count-bytes is that the first compiles the code, then immediately runs it. The second simply runs the compiled program. Continue reading “Is Your Mental Model Of Bash Pipelines Wrong?”
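For anyone who wants to poke at this without a Zig toolchain, here’s a minimal C analogue of the test program as described above; it’s our own reconstruction, not [Michael]’s actual source:

#include <stdio.h>
#include <time.h>

/* Our own C reconstruction of the test program described above: start
 * a timer, count bytes from stdin, report the elapsed time. */
int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    long bytes = 0;
    while (getchar() != EOF)
        bytes++;

    clock_gettime(CLOCK_MONOTONIC, &end);
    double us = (end.tv_sec - start.tv_sec) * 1e6 +
                (end.tv_nsec - start.tv_nsec) / 1e3;
    printf("bytes: %ld\nexecution time: %.3fµs\n", bytes, us);
    return 0;
}

Note that the timer starts before the first read, so any time spent waiting for bytes to arrive through the pipe ends up in the measurement.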

Saving A Clock Radio With An LM8562

Smartphones have taken the place of a lot of different devices, especially as they get more and more powerful. GPS, music and video player, email, and of course a phone are all functions tied up in these general-purpose devices. Another casualty of the smartphone revolution is the humble bedside alarm clock, as its radio, alarm, and timekeeping functions are also provided by modern devices. [zst123] has a sentimental attachment to the one he used in the 00s, though, and set about restoring it to its former glory.

Most of the trouble with the clock came down to drift in the timekeeping circuitry: it was no longer keeping time accurately, losing around ten minutes a day. The plan to save it was to use NTP to get the current time and a microcontroller to make the correction automatically. Rather than replace everything in the clock except the display, [zst123] is using the existing circuit board and adding an ESP8266 to grab the time from the Internet. A custom driver board reads the time currently shown on the display itself, and the ESP8266 can then adjust it by actuating the existing buttons through a relay wired in parallel.
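The correction logic itself can be pleasantly simple. The C sketch below is a hypothetical illustration of the idea rather than [zst123]’s firmware, with the display-reading, NTP, and relay helpers replaced by made-up stand-ins:

#include <stdio.h>

/* Hypothetical stand-ins, not [zst123]'s code: pretend the display
 * shows 07:50 while NTP says 08:00 (roughly the drift described). */
static int read_display_minutes(void) { return 7 * 60 + 50; }
static int ntp_minutes(void)          { return 8 * 60; }
static void pulse_minute_button(void) { puts("click: relay pulses the minute button"); }

int main(void)
{
    int shown  = read_display_minutes();
    int actual = ntp_minutes();
    /* Only step forwards: the minute button can't turn time back, so a
     * fast clock has to be pushed all the way around to the next day. */
    int delta = ((actual - shown) % 1440 + 1440) % 1440;
    printf("display lags by %d minute(s), correcting\n", delta);
    while (delta--)
        pulse_minute_button();
    return 0;
}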

Using the existing circuitry was certainly a challenge, especially since the display was multiplexed, but the LM8562 that came with these clock radios is a common and well-documented chip for driving displays like this, giving [zst123] a leg up over something unlabeled or proprietary. Using NTP is certainly a reliable and straightforward way of getting the current time, too, but there are a few other options for projects like these, such as GPS or even a radio time signal.

Exploring The Sega Saturn’s Wacky Architecture

Sega Saturn mainboard with main components labelled. More RAM is found on the bottom, as well. (Credit: Rodrigo Copetti)

In the annals of game console history, the Sega Saturn is probably the most convoluted system of all time, even giving the PlayStation 3 a run for its rings. Also known as the system on which Sega beached itself before its Dreamcast swansong, it featured an incredible four CPUs, two video processors, and multiple levels and types of RAM, all pushed onto game studios with virtually no software tools or any plan for how to use the thing. An introduction to this console’s architecture is provided by [Rodrigo Copetti], which gives a good idea of the harrowing task of developing for this system.

Launched in Japan in 1994 and in North America and Europe in 1995, it featured a double-speed CD-ROM drive, Hitachi’s zippy new SH-2 CPU (times two), and some 3D processing grunt that was intended to let it compete with Sony’s PlayStation. The video and sound solutions were all proprietary to Sega, with the two video processors (VDP1 & VDP2) each handling part of the rendering process, which complicated 3D work, as did the Saturn’s use of quadrilaterals instead of the triangles favoured by the PlayStation and Nintendo 64.

Although a lot of performance could be extracted from the Saturn’s idiosyncratic architecture, its high price and ultimately the competition from the Sony PlayStation and the 1996 release of the Nintendo 64 would spell the end for the Saturn. Although the Dreamcast did not repeat the Saturn’s mistakes, it seems one commercial failure was enough to ruin Sega’s chances as a hardware developer.

Retrogadgets: Butler In A Box

You walk into your house and issue a voice command to bring up the lights and start a cup of coffee. No big deal, right? Siri, Google, and Alexa can do all that. Did we mention it is 1985? And, apparently, you were one of the people who put out about $1,500 for a Mastervoice “Butler in a Box,” the subject of a Popular Science video you can see below.

If you think the box is interesting, the inventor’s story is even stranger. [Kevin] got a mint-condition Butler in a Box from eBay. How did it work, given that in 1983 there was no AI voice recognition and no public Internet? We did note that the “appliance module” was a standard X10 interface.

Continue reading “Retrogadgets: Butler In A Box”