On Getting A Computer’s Attention And Striking Up A Conversation

With the rise of voice-driven virtual assistants over the years, the sight of people talking to various electronic devices in public and in private has become rather commonplace. While such voice-driven interfaces are decidedly useful for a range of situations, they also come with complications. One of these is the trigger phrase, or wake word, that a voice assistant listens for while in standby. Much like in Star Trek, where uttering ‘Computer’ would get the computer’s attention, so we have our ‘Siri’, ‘Cortana’, and a range of custom trigger phrases that enable the voice interface.

Unlike in Star Trek, however, our virtual assistants do not know when we really desire to interact. Unable to distinguish context, they’ll happily respond to someone on TV mentioning their trigger phrase, possibly following it up with a ludicrous purchase order or other mischief. The takeaway here is that for all their complexity, voice-based interfaces still lack any sense of self-awareness or intelligence.

Another issue is that the process of voice recognition itself is very resource-intensive, which limits the amount of processing that can be performed on the local device. This usually leads to voice assistants like Siri, Alexa, and Cortana processing recorded voices in a data center, with obvious privacy implications.
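
To make that division of labour concrete, here is a minimal Python sketch of the standby loop: a cheap, always-on wake word check running locally, with the recorded utterance only shipped off to a remote recognizer once the trigger phrase is heard. The function detect_wake_word() and the ASR_ENDPOINT URL are hypothetical stand-ins for this sketch, not any particular vendor's actual API.

```python
# Minimal sketch of the local/remote split described above. detect_wake_word()
# and ASR_ENDPOINT are hypothetical placeholders, not a real assistant's API.
import urllib.request

ASR_ENDPOINT = "https://example.com/speech-to-text"  # stand-in for a cloud recognizer
FRAME_BYTES = 3200                                   # 100 ms of 16 kHz, 16-bit mono audio

def detect_wake_word(frame: bytes) -> bool:
    """Cheap on-device check; a real assistant runs a small keyword-spotting model here."""
    return False  # placeholder: no actual detection logic in this sketch

def send_to_cloud(audio: bytes) -> str:
    """Ship the captured utterance to the heavyweight recognizer in the data center."""
    req = urllib.request.Request(ASR_ENDPOINT, data=audio,
                                 headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def assistant_loop(mic_frames):
    """Standby loop: listen locally, escalate to the data center on a wake word."""
    frames = iter(mic_frames)
    for frame in frames:
        if detect_wake_word(frame):
            # Naively grab the next ~3 s as the command; real assistants use
            # endpointing (silence detection) rather than a fixed window.
            utterance = b"".join(next(frames, b"") for _ in range(30))
            print(send_to_cloud(utterance))
```

The structure is the point being made above: everything up to the wake word can stay on the device, but anything said after it ends up in someone else's data center.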

Continue reading “On Getting A Computer’s Attention And Striking Up A Conversation”

The Importance Of Physical Models: How Not To Shoot Yourself In The Foot Or Anywhere Else

We take shortcuts all the time with our physical models. We rarely consider that wire has any resistance, for example, or that batteries have a source impedance. That’s fine up until the point that it isn’t. Take the case of the Navy’s Grumman F11F Tiger. The supersonic aircraft was impressive, although it suffered from some fatal flaws. But it also has the distinction of being the first plane ever to shoot itself down.

So here’s the simple math. A plane traveling Mach 1 is moving about 1,200 km/h — the exact number depends on a few things like your altitude and the humidity. Let’s say about 333 m/s. Bullets from a 20 mm gun, on the other hand, leave the muzzle at more than 1,000 m/s. So from the moment a bullet leaves the plane, it is pulling away at roughly three times the plane’s own speed, and with every passing second the gap between them only grows. The plane could never catch up with its own bullets, right?
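
To put some numbers on that intuition, here is a small Python sketch comparing the constant-speed picture above with a toy model in which the shell sheds speed to air drag. The drag constant is made up and the plane is assumed to simply hold Mach 1 (in the real incident the Tiger also nosed down and accelerated), so this only shows the trend, not the actual F11F numbers.

```python
# Gap between a cannon shell and the plane that fired it, with and without a
# crude drag model. All constants are illustrative assumptions, not F11F data.

PLANE_V = 333.0    # m/s, roughly Mach 1, per the estimate above
MUZZLE_V = 1000.0  # m/s, shell speed relative to the plane at the muzzle
DRAG_K = 0.4       # 1/s, made-up exponential drag constant for the shell
DT = 0.01          # s, simulation time step

def gap_after(t_end: float, with_drag: bool) -> float:
    """Distance by which the shell leads the plane after t_end seconds."""
    shell_x, shell_v, plane_x, t = 0.0, PLANE_V + MUZZLE_V, 0.0, 0.0
    while t < t_end:
        if with_drag:
            shell_v -= DRAG_K * shell_v * DT  # shell slows down; plane holds speed
        shell_x += shell_v * DT
        plane_x += PLANE_V * DT
        t += DT
    return shell_x - plane_x

for t in (1, 3, 5, 10):
    print(f"t = {t:2d} s   no drag: {gap_after(t, False):8.0f} m   "
          f"with drag: {gap_after(t, True):8.0f} m")
```

Without drag the gap grows by about a kilometre every second, exactly as the simple math says. With even this crude drag term, the shell slows to below the plane’s speed after a few seconds and the gap starts closing again, which is the scenario the Tiger’s pilot discovered the hard way.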

Continue reading “The Importance Of Physical Models: How Not To Shoot Yourself In The Foot Or Anywhere Else”

Chinese Chips Are Being Artificially Slowed To Dodge US Export Regulations

Once upon a time, countries protected their domestic industries with tariffs on imports. This gave the home side a price advantage over companies operating overseas, but the practice has somewhat fallen out of fashion in the past few decades.

These days, governments are altogether more creative, using fancy export controls to protect their interests. To that end, the United States enacted an export restriction on high-powered computing devices. In response, Chinese designers are attempting to artificially slow their hardware to dodge these rules.

Continue reading “Chinese Chips Are Being Artificially Slowed To Dodge US Export Regulations”

The Wow! Signal Revisited: Citizen Science Informs SETI Effort

As far as interesting problems go, few can really compete with the perennial question: “Are we alone?” The need to know if there are other forms of intelligent life out there in the galaxy is deeply rooted, and knowing for sure either way would have massive implications.

But it’s a big galaxy, and knowing where to look for signals that might mean we’re not alone is a tough task. Devoting limited and expensive resources to randomly listen to chunks of the sky in the hopes of hearing something that’s obviously made by a technical civilization is unlikely to bear fruit. Much better would be to have something to base sensible observations on — some kind of target that has a better chance of paying off.

Luckily, a chance observation nearly 50 years ago has provided just that. The so-called Wow! Signal, much discussed but only occasionally and somewhat informally studied, has provided a guidepost in the sky, thanks in part to a citizen scientist with a passion for finding exoplanets.

Continue reading “The Wow! Signal Revisited: Citizen Science Informs SETI Effort”

Dosimetry: Measuring Radiation

Thanks to stints as an X-ray technician in my early 20s followed by work in various biology labs into my early 40s, I’ve been classified as an “occupationally exposed worker” with regard to ionizing radiation for a lot of my life. And while the jobs I’ve done under that umbrella have been vastly different, they’ve all had some common ground. One was the requirement for annual radiation safety training classes. Since the physics never changed and the regulations rarely did, these sessions would inevitably bore everyone to tears, which was a pity because it always felt like something I should be paying very close attention to, like the safety briefings flight attendants give but everyone ignores.

The other thing in common was the need to keep track of how much radiation my colleagues and I were exposed to. Aside from the obvious health and safety implications for us personally, there were legal and regulatory considerations for the various institutions involved, which explained the ritual of finding your name on a printout and signing off on the dose measured by your dosimeter for the month.

Dosimetry has come a long way since I was actively considered occupationally exposed, and even further from the times when very little was known about the effects of radiation on living tissue. What the early pioneers of radiochemistry learned about the dangers of exposure was hard-won indeed, but gave us the insights needed to develop dosimetric methods and tools that make working with radiation far safer than it ever was.

Continue reading “Dosimetry: Measuring Radiation”

M.2 For Hackers – Connectors

In the first M.2 article, I described the real-world types and use cases of M.2 devices, so that you don’t get confused when dealing with the various cards and ports available out there. I’ve also designed quite a few M.2 cards and card-accepting adapters myself. And today, I’d like to tell you everything you need to know in order to build M.2 tech on your own.

There are two sides to building with M.2 – adding M.2 sockets onto your PCBs, and building PCBs that are themselves M.2 cards. I’ll cover both of these, starting with the former, and knowing how to deal with M.2 sockets might be the only thing you ever need. Apart from what I’ll be describing, there are some decent guides you can learn bits and pieces from, like the Sparkfun MicroMod design guide, most of which is MicroMod-specific but includes quite a few M.2 tips and tricks too.

First, Let’s Talk About The Y-Key

What could you do with an M.2 socket on your PCB? For a start, many tasty hobbyist-friendly SoMs and CPUs now have an accessible PCIe interface, and if you’re building a development board or a simple breakout, an M.2 socket will let you connect an NVMe SSD for all your high-speed, low-power storage needs – many Raspberry Pi Compute Module mainboards have M.2 M-key sockets specifically for that, and there’s NVMe support in the RPi firmware to boot. Plus, you can always plug a full-sized PCIe adapter or an extender into such a socket and connect a PCIe network card or other much-needed device – perhaps even an external GPU! However, as tasty as PCIe-equipped SoMs are, they’re far from the only reason to use M.2 sockets.

Continue reading “M.2 For Hackers – Connectors”

Bye Bye Linux On The 486. Will We Miss You?

A footnote in the week’s technology news came from Linus Torvalds, as he floated the idea of abandoning support for the Intel 80486 architecture in a Linux kernel mailing list post. That an old and little-used architecture might be abandoned should come as no surprise; it’s a decade since the same fate was meted out to Linux’s first platform, the 80386. The 486 line may be long dead on the desktop, but since these chips are not entirely gone from the embedded space and remain a favourite among the retrocomputer crowd, it’s worth taking a minute to examine what consequences, if any, there might be from this move.

Is A 486 Even Still A Thing?

[Image: Block diagram of the ZFx86 SoC. An entire 486 PC in a chip that uses only 1 W would have been amazing in 1994!]

The Intel 80486 was released in 1989, and was substantially an improved version of the previous 80386 line of 32-bit microprocessors, with an on-chip cache, more efficient pipelining, and a built-in mathematical co-processor. It had a 32-bit address space, though in practice the RAM and motherboard constraints of the 1990s meant that a typical 486 system would have RAM in megabyte quantities. There was a range of versions with clock speeds from 16 MHz to 100 MHz over its lifetime, as well as a low-end “SX” range with the co-processor disabled. It would have been the object of desire as a processor on which to run Windows 3.1, and it remained a competent platform for Windows 95, but by the end of the ’90s its days on the desktop were over. Intel continued the line as an embedded processor range into the 2000s, finally pulling the plug in 2007.

The 486 story was by no means over though, as a range of competitors had produced their own takes on the 486 throughout its active lifetime. The non-Intel 486 chips have outlived the originals, and even today in 2022 there is more than one company making 486-compatible devices. RDC produces a range of RISC SoCs that run 486 code, and ZF Micro Solutions still boasts on its website of an SoC that is a descendant of the Cyrix 486 range. There is some confusion online as to whether DM&P’s Vortex86 line are also 486 derivatives, however we understand them to be descendants of Rise Technology’s Pentium clone.

Continue reading “Bye Bye Linux On The 486. Will We Miss You?”