Retrotechtacular: IBM’s The World Of OCR

Optical Character Recognition (OCR) forms the bridge between the analog world of paper and the world of machines. The modern-day expectation is that when we point a smartphone camera at some characters, it will flawlessly recognize and read them, but OCR technology predates such consumer technology by a considerable margin, with IBM producing OCR systems as early as the 1950s. In a 1960s promotional video on the always delightful Periscope Film channel on YouTube, we can get an idea of how this worked back then, in particular the challenge of variable-quality input.

What drove OCR was the need to process more paper-based data faster, as the amount of such data increased and computers got more capable. This led to the design of paper forms that made the recognition much easier, as can still be seen today on, for example, tax forms and archaic paper payment methods like checks in countries that still use them. This means a paper form optimized for reflectivity, with clearly designated sections and lines, thus limiting the variability of the forms to be OCR-ed. After that it’s just a matter of writing in clear block letters inside the marked boxes, or using a typewriter with a nice fresh ink ribbon.

These days optical scanners are a lot more capable, of course, making many of these considerations less relevant, even if human handwriting remains a challenge for OCR and human brains alike.

Continue reading “Retrotechtacular: IBM’s The World Of OCR”

Surviving The RAM Apocalypse With Software Optimizations

To the surprise of almost nobody, the unprecedented build-out of datacenters and the equipping of them with servers for so-called ‘AI’ has led to a massive shortage of certain components. With random access memory (RAM) so far the most heavily affected and storage in the form of HDDs and SSDs not far behind, many are asking how we will survive the coming months, years, decades, or however long the current AI bubble lasts.

One thing is already certain: we will have to make our current computer systems last longer, and forgo simply tossing in more sticks of RAM in favor of doing more with less. This is easy to imagine for those of us who remember running a full-blown Windows desktop on a sub-GHz x86 system with less than a GB of RAM, but it might require some adjustment for everyone else.

In short, what can we software developers do differently to make a hundred MB of RAM stretch further, and make a GB of storage space look positively spacious again?
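One familiar starting point, shown here as a minimal sketch rather than anything specific from the article (the struct and its fields are entirely hypothetical), is simply sizing data to what it actually needs to hold. On a typical 64-bit platform the two layouts below carry the same information, with the second taking roughly half the space:

```c
#include <stdint.h>
#include <stdio.h>

/* The "comfortable" layout: every field gets a wide type by default. */
struct log_entry_fat {
    long long timestamp;   /* 8 bytes */
    int       level;       /* 4 bytes, but only values 0-7 are ever used */
    int       source_id;   /* 4 bytes, but there are fewer than 65536 sources */
    double    value;       /* 8 bytes */
};                         /* typically 24 bytes per entry */

/* The same information, with each field sized to what it actually holds. */
struct log_entry_lean {
    uint32_t timestamp;    /* seconds since an epoch of our choosing */
    float    value;        /* if single precision is accurate enough */
    uint16_t source_id;
    uint8_t  level;
};                         /* typically 12 bytes per entry */

int main(void)
{
    printf("fat:  %zu bytes per entry\n", sizeof(struct log_entry_fat));
    printf("lean: %zu bytes per entry\n", sizeof(struct log_entry_lean));
    return 0;
}
```

Multiplied across a few million records, that is the difference between an in-memory index that fits comfortably and one that sends the machine into swap.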

Continue reading “Surviving The RAM Apocalypse With Software Optimizations”

Internet-Connected Consoles Are Retro Now, And That Means Problems

A long time ago, there was a big difference between PC and console gaming. The former often came with headaches. You’d fight with drivers, struggle with crashes, and grow ever more frustrated dealing with CD piracy checks and endless patches and updates. Meanwhile, consoles offered the exact opposite experience—just slam in a cartridge, and go!

That beautiful feature fell away when consoles joined the Internet. Suddenly there were servers to sign in to and updates to download and a whole bunch of hoops to jump through before you even got to play a game. Now, those early generations of Internet-connected consoles are becoming retro, and that’s introduced a whole new set of problems now that the infrastructure is dying or dead. Boot up and play? You must be joking!

Continue reading “Internet-Connected Consoles Are Retro Now, And That Means Problems”

Catching Those Old Busses

The PC has had its fair share of bus slots. What started with the ISA bus has culminated, so far, in PCI Express slots, M.2 slots, and a few other mechanisms to connect devices to your computer internally. But if the 8-bit ISA card is the first bus you can remember, you are missing out. There were practically as many bus slots in computers as there were computers. Perhaps the most famous bus in early home computers was the Altair 8800’s bus, retroactively termed the S-100 bus, but that wasn’t the oldest standard.

There are more buses than we can cover in a single post, but to narrow it down, we’ll assume a bus is a standard that allows uniform cards to plug into the system in some meaningful way. A typical bus will provide power and access to the computer’s data bus, or at least to its I/O system. Some bus connectors also allow access to the computer’s memory. In a way, the term is overloaded. Not all buses are created equal. Since we are talking about old bus connectors, we’ll exclude newfangled high-speed serial buses, for the most part.

Tradeoffs

There are several trade-offs to consider when designing a bus. For example, it is tempting to provide regulated power via the bus connector. However, that also may limit the amount of power-hungry electronics you can put on a card and — even worse — on all the cards at one time. That’s why the S-100 bus, for example, provided unregulated power and expected each card to regulate it.

On the other hand, later buses, such as VME, typically have regulated power supplies available. Switching power supplies were a big driver of this. Providing, for example, 100 W of 5 V power with a linear supply was a headache and wasteful: at 5 V that is 20 A, and a linear regulator dropping even 3 V at that current turns 60 W into heat. With a switching power supply, you can easily and efficiently deliver regulated power on demand.

Some bus standards provide access to just the CPU’s I/O space. Others allow adding memory, and, of course, some processors only allow memory-mapped I/O. Depending on the CPU and the complexity of the bus, cards may be able to interrupt the processor or engage in direct memory access independent of the CPU.
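As a rough illustration of that distinction (the address, port number, and function names below are invented for the sketch and don’t target any real hardware), memory-mapped I/O is just an ordinary load or store to an address the bus decodes as a device register, while port-mapped I/O on x86 goes through dedicated IN/OUT instructions into a separate, smaller address space:

```c
#include <stdint.h>

/* Memory-mapped I/O: the device register simply appears at a physical
 * address that the bus decodes. The address is made up for this sketch;
 * on a hosted OS you would first have to map it (e.g. via /dev/mem). */
#define STATUS_REG_ADDR 0xD0000000u

static inline uint8_t read_status_mmio(void)
{
    /* volatile keeps the compiler from caching or eliding the access */
    volatile uint8_t *reg = (volatile uint8_t *)STATUS_REG_ADDR;
    return *reg;
}

/* Port-mapped I/O (x86 only): a separate I/O address space reached with
 * the IN/OUT instructions rather than ordinary loads and stores. */
#if defined(__i386__) || defined(__x86_64__)
static inline uint8_t read_status_portio(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}
#endif
```

A 6502 or 68000 system has no choice in the matter, since those CPUs have no separate I/O address space; every peripheral is memory-mapped.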

Power aside, several other things tend to differentiate traditional parallel buses, most obviously the number of bits available for data and addresses. Many bus structures are synchronous: they operate at a fixed speed, and in general, devices need to keep up. This is simple, but it can impose tight requirements on devices.

Continue reading “Catching Those Old Busses”

Thorium-Metal Alloys And Radioactive Jet Engines

Although metal alloys are not among the most exciting topics for most people, the moment you add the word ‘radioactive’ they do tend to pay attention. So too with the once fairly common Mag-Thor alloys that combine magnesium with thorium, along with other elements including zinc and aluminium. Their primary use is in aerospace engineering, as these alloys provide useful properties such as heat resistance, high strength, and creep resistance that are very welcome in, for example, jet engines.

Thorium is most commonly found as the thorium-232 isotope; the element has no stable forms at all. That said, Th-232 has a half-life of about 14 billion years, making it only very weakly radioactive. Like uranium-238 and uranium-235, it shares the rare distinction of having no stable isotopes yet remaining abundant since the formation of the Earth. Thorium is about three times as abundant as uranium and thus rather hard to avoid contact with.
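To put a rough number on ‘very weakly’, activity scales inversely with half-life, and a back-of-the-envelope estimate for one gram of pure Th-232 (using just the half-life and Avogadro’s number) works out to only a few thousand decays per second:

```latex
A = \lambda N = \frac{\ln 2}{t_{1/2}} \cdot \frac{m\,N_A}{M}
  \approx \frac{0.693}{1.4\times10^{10}\,\mathrm{yr}\times 3.16\times10^{7}\,\mathrm{s/yr}}
  \cdot \frac{1\,\mathrm{g}\times 6.02\times10^{23}\,\mathrm{mol^{-1}}}{232\,\mathrm{g/mol}}
  \approx 4\,\mathrm{kBq}
```

The enormous half-life sits in the denominator, which is exactly why so little of the material decays at any given moment.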

This raises the question of whether thorium alloys are really such a big deal, and whether the radiation risk justifies removing historical artefacts from museums, as has happened on a few occasions.

Continue reading “Thorium-Metal Alloys And Radioactive Jet Engines”

A Brief History Of The Spreadsheet

We noted that Excel turned 40 this year. That makes it seem old, and today, if you say “spreadsheet,” there’s a good chance you are talking about an Excel spreadsheet, or at least a program that can read and produce Excel-compatible sheets. But we remember a time when there was no Excel, and yet there were still spreadsheets. How far back do they go? Continue reading “A Brief History Of The Spreadsheet”

Creating User-Friendly Installers Across Operating Systems

After you have written the code for some awesome application, you of course want other people to be able to use it. Although simply directing them to the source code on GitHub or similar is an option, not every project lends itself to the traditional configure && make && make install, with dependencies often being the sticking point.

Asking the user to install dependencies and set up any filesystem links is an option, but having an installer of some kind tackle all of this is of course significantly easier. Typically this would contain the precompiled binaries, along with any other required files, which the installer then copies to their final location before tackling any remaining tasks, like updating configuration files, tweaking a registry, setting up filesystem links, and so on.

As simple as this sounds, it comes with a lot of gotchas, with Linux distributions in particular being a tough nut to crack. Whereas on macOS, Windows, Haiku, and many other OSes you can provide a single installer file for the respective platform, for Linux things get interesting.

Continue reading “Creating User-Friendly Installers Across Operating Systems”