Inside Starlink’s User Terminal

If you talk about Starlink, you are usually talking about the satellites that orbit the Earth carrying data to and from ground stations. Why not? Space is cool. But there’s another important part of the system: the terminals themselves. Thanks to [DarkNavy], you don’t have to tear one open yourself to see what’s inside.

The terminal consists of two parts: the router and the antenna. In this context, “antenna” is something of a misnomer, since the unit is really the RF transceiver and antenna all in one. The post looks only at the “antenna” part of the terminal.

Continue reading “Inside Starlink’s User Terminal”

Your Own Core Rope Memory

If you want read-only memory today, you might be tempted to use flash memory or, if you want old-school, maybe an EPROM. But there was a time when that wasn’t feasible. [Igor Brichkov] shows us how to make a core rope memory using a set of ferrite cores and wire. This was famously used in early UNIVAC computers and the Apollo guidance computer. You can see how it works in the video below.

While rope memory superficially resembles core memory, the principle of operation is different. In core memory, the core’s magnetization is what determines any given bit. For rope memory, the cores are more like a sensing element. A set wire tries to flip the polarity of all cores. An inhibit signal stops that from happening except on the cores you want to read. Finally, a sense wire weaves through the cores and detects a blip when a core changes polarity. The second video, below, is an old MIT video that explains how it works (about 20 minutes in).
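If the weaving scheme sounds abstract, a toy simulation may help. The sketch below is our own illustration, not the Apollo computer’s real geometry: it assumes one core per stored word and one sense wire per output bit, with a bit reading as 1 only when its sense wire threads the selected core.

```python
# Toy core rope model (our simplification: one core per word, one sense
# wire per bit; real ropes shared cores among several words).
WORDS = [0b1011, 0b0110, 0b1111, 0b0001]  # hypothetical 4-bit contents

# "Weaving": the sense wire for bit b threads core w only if bit b of
# word w is a 1; otherwise it bypasses that core.
threaded = {(w, b): bool(word >> b & 1)
            for w, word in enumerate(WORDS)
            for b in range(4)}

def read(selected):
    """Pulse the set wire while inhibiting every core except `selected`."""
    word = 0
    for w in range(len(WORDS)):
        if w != selected:
            continue  # the inhibit current cancels the set pulse here
        # Only the selected core flips polarity, so only the sense wires
        # threaded through it pick up a blip.
        for b in range(4):
            if threaded[(w, b)]:
                word |= 1 << b
    return word

for addr in range(len(WORDS)):
    print(f"address {addr}: {read(addr):04b}")  # matches WORDS[addr]
```

The data lives entirely in the wiring, which is why the memory is read-only: changing a bit means re-threading a wire.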

Why not just use core memory? Density. These memories could store much more data than a core memory system in the same volume. Of course, you could write to core memory, too, but that’s not always a requirement.

We’ve seen a resurgence of core rope projects lately. Regular old core is fun, too.

Continue reading “Your Own Core Rope Memory”

RADUGA: The Retro Computer From Behind The Curtain

When [Kasyan] was six years old, he saw a RADUGA computer, a Russian unit from the 1990s, and it sparked his imagination. He has one now that is a little beat up, but we feel like he sees it through his six-year-old eyes as a shiny new computer. The computer, which you can see in the video below, was a clone of the Spectrum 48K.

The box is somewhat clunky-looking, and the inside is also a bit strange. The power supply was, for the time, a state-of-the-art switching supply. Since it wasn’t in good shape, he decided to replace it with a more modern one.

The main board was also not in good shape. A Zilog CPU sits on a large PCB with suspicious-looking capacitors. The mechanical keyboard is nothing more than an array of buttons and wouldn’t excite today’s mechanical keyboard enthusiasts.

Continue reading “RADUGA: The Retro Computer From Behind The Curtain”

Version Control To The Max

There was a time when version control was an exotic idea. Today, things like Git and a handful of other tools allow developers to easily rewind the clock or work on different versions of the same thing with very little effort. I’m here to encourage you not only to use version control but also to go a step further, at least for important projects.

My First Job

The QDP-100 with — count ’em — two 8″ floppies (from an ad in Byte magazine)

I remember my first real job back in the early 1980s. We made a particular type of sensor that had a 6805 CPU onboard and, of course, had firmware. We did all the development on physically big CP/M machines with the improbable name of Quasar QDP-100s. No, not that Quasar. We’d generate a hex file, burn an EPROM, test, and eventually, the code would make it out in the field.

Of course, you always have to make changes. We might send a technician out with a tube full of EPROMs or, in an emergency, buy the EPROMs space on a Greyhound bus. Nothing like today.

I was just getting started, and the guy who wrote the code for those sensors wasn’t much older than me. One day, we got a report that something was misbehaving out in the field. I asked him how we knew what version of the code was on the sensor. The blank look I got back worried me. Continue reading “Version Control To The Max”
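The obvious modern fix is to make the build stamp itself. Here is a minimal sketch, assuming a git repository and a C firmware build; the file name and the FIRMWARE_VERSION macro are our inventions, not anything from that 6805 toolchain:

```python
# Generate a version.h carrying the exact git commit the firmware was
# built from, so a unit in the field can report what it is running.
import subprocess

def git_describe():
    """Return something like 'v1.4-12-g3f9c2ab-dirty' for this checkout."""
    return subprocess.check_output(
        ["git", "describe", "--tags", "--always", "--dirty"],
        text=True).strip()

def write_version_header(path="version.h"):
    version = git_describe()
    with open(path, "w") as f:
        f.write(f'#define FIRMWARE_VERSION "{version}"\n')
    return version

if __name__ == "__main__":
    print("embedded", write_version_header())
```

Run something like this as a pre-build step and have the firmware report its version string on request, and “what code is on the sensor?” stops being a guessing game.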

What’s An LCR Databridge?

[Thomas Scherrer] has an odd piece of vintage test equipment in his most recent video: an AIM LCR Databridge 401. What’s a databridge? We assume the name was a play on an LCR bridge with a digital output. Maybe. You can see a teardown in the video below.

Inside the box is a vintage 1983 Z80 CPU with all the supporting pieces. The device autoranges, or at least appears to. The unit locks up when you use the Bias button, but it isn’t clear if that’s a fault or if it is just waiting for something to happen.

The teardown starts at about six minutes in. Inside is a very large PCB. The board is soldermasked and looks good, but the traces were clearly laid out by a not-so-steady hand. In addition to AIM, Racal-Dana sold this device as the model 9341. The service manual for that unit is floating around, although we weren’t able to download it due to a server issue. A search could probably turn up copies.

Continue reading “What’s An LCR Databridge?”

Thermal Monocular Brings The Heat At 10X

[Project 326] is following up on his thermal microscope with a thermal telescope or, more precisely, a thermal monocular. In fact, many of the components and lenses in this project are the same as those in the microscope, so you could cannibalize that project for this one, if you wanted.

During the microscope project, [Project 326] noted that first-surface mirrors reflect IR as well as visible light. The plan was to make a Newtonian telescope for IR instead of visible light. While the resulting telescope worked with visible light, the diffraction limit prevented it from working for its intended purpose.
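A quick Rayleigh-criterion estimate shows why: a mirror’s resolving power scales inversely with wavelength, and thermal IR wavelengths are roughly twenty times longer than visible ones. The numbers below are our own assumed figures for illustration, not measurements from the project:

```python
# Rayleigh criterion: theta ~ 1.22 * lambda / D (smaller is sharper).
# Assumed 114 mm aperture; 550 nm visible vs ~10 um long-wave IR.
aperture = 0.114  # mirror diameter in meters (assumed)
for name, wavelength in [("visible, 550 nm", 550e-9),
                         ("LWIR, 10 um", 10e-6)]:
    theta = 1.22 * wavelength / aperture  # radians
    print(f"{name}: {theta * 1e6:.1f} microradians")
# LWIR comes out about 18x coarser, so a mirror that is sharp in
# visible light can be hopelessly diffraction-limited at 10 um.
```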

Continue reading “Thermal Monocular Brings The Heat At 10X”

Remembering Memory: EMS, And TSRs

You often hear that Bill Gates once proclaimed, “640 kB is enough for anyone,” but, apparently, that’s a myth — he never said it. On the other hand, early PCs did have that limit, and, at first, that limit was mostly theoretical.

After all, earlier computers often topped out at 64 kB or less, or maybe 128 kB with some fancy bank switching, and it was hard to justify the cost of more. Before long, though, 640 kB became a real limit, and the industry found workarounds. Mercifully, the need for these eventually evaporated, but for a number of years, they were part of configuring and using a PC.

Why 640 kB?

The original IBM PC sported an Intel 8088 processor. This was essentially an 8086 16-bit processor with an 8-bit external data bus. This allowed for cheaper computers, but both chips had a strange memory addressing scheme and could access up to 1 MB of memory.

In fact, 8088 instructions could only address 64 kB at a time, very much like the old 8080 and Z80 computers. What made things different was that these chips included a number of 16-bit segment registers. This was almost like bank switching: the 1 MB space could be used 64 kB at a time, on 16-byte boundaries.

So a full address was a 16-bit segment and a 16-bit offset. Segment 0x600D, offset 0xF00D would be written as 600D:F00D. Because each segment starts 16 bytes after the previous one, addresses alias: 0000:0020, 0001:0010, and 0002:0000 all refer to the same memory location. Confused? Yeah, you aren’t the only one.
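Translated into arithmetic, the rule is just physical = segment * 16 + offset, wrapped to the 8088’s 20-bit address bus. A few lines of Python (our illustration, obviously not period-correct tooling) make the aliasing concrete:

```python
# Real-mode address arithmetic: physical = segment * 16 + offset,
# masked to the 20-bit (1 MB) address space of the 8086/8088.
def physical(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF

assert physical(0x600D, 0xF00D) == 0x6F0DD  # 600D:F00D from the text
# Aliasing: many segment:offset pairs name the same byte.
assert physical(0x0000, 0x0020) == 0x20
assert physical(0x0001, 0x0010) == 0x20
assert physical(0x0002, 0x0000) == 0x20
print("all examples check out")
```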

Continue reading “Remembering Memory: EMS, And TSRs”