Retrotechtacular: The Computer Center Of 1973

You might expect Bell Labs would have state-of-the-art computers, and they did. But it is jarring to realize just how little that was in 1973, fifty years ago. If you started work at Bell’s Holmdel Computing Center back then, you might have watched one of the orientation videos below. Your first clue about how far things have come might be the reference to the IBM 370/165, which had “3 million bytes of core, 2 million of which are available for programmer use.” Even our laptops today have at least 8 gigabytes of RAM. There were at least two other smaller IBM 370s, too. Plenty of 029 card punches are visible.

If you were trying to run something between 8:00 AM and 5:30 PM, you had to limit your job run time to three minutes, 4,000 lines of output, and no more than 1,000 cards in and 5,000 cards out. Oh, and don’t use more than 384 kB of that core memory, either. If you fell within those limits, you could hand your card deck over at the express counter and get your results in only five or ten minutes. If you were not in the express line but still rated “premium” service, you could expect to wait a half hour.

Continue reading “Retrotechtacular: The Computer Center Of 1973”

Finding Undocumented 8086 Instructions Via Microcode

Video gamers know about cheat codes, but assembly language programmers are often in search of undocumented instructions. One way to find them is to map out all of a CPU’s opcodes and, wherever there are holes, try those values and see what happens. Not good enough for [Ken Shirriff]. He prefers examining the CPU’s microcode and deducing what each part of it does.

Microcode is a feature of many modern CPUs: instead of executing an opcode directly, the CPU runs several micro-instructions to process it. For the Intel 8086, there are 512 micro-instructions, each 21 bits wide. Each one has two parts: one that moves a source to a destination, and another that performs some other operation, such as an ALU operation. [Ken] explains it all in the post, including several hidden registers you can’t see, but the microcode can.
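To make that structure concrete, here is a toy decoder in Python. The 21-bit width and the move/action split come from [Ken]’s write-up, but the exact field positions and widths below are simplified stand-ins for illustration, not Intel’s actual encoding:

```python
# Toy decoder for an 8086-style 21-bit micro-instruction. The
# 5/5/11 field split below is a simplifying assumption for
# illustration; see [Ken]'s post for the real encoding.

def decode_microinstruction(word: int) -> dict:
    """Split a micro-instruction into its two conceptual halves:
    a source-to-destination move and an accompanying action."""
    assert 0 <= word < (1 << 21), "micro-instructions are 21 bits"
    return {
        "source":      (word >> 16) & 0x1F,  # which register to read
        "destination": (word >> 11) & 0x1F,  # which register to write
        "action":      word & 0x7FF,         # ALU op, jump, etc.
    }

print(decode_microinstruction(0b10001_00011_01010101010))
# {'source': 17, 'destination': 3, 'action': 682}
```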

[Image: Searching for holes in the opcode table.]

Some of the undocumented instructions are probably not useful. They are either impractical or duplicate a function you can already perform another way. Not all of the instructions are there for technical reasons, either. For example, opcode D6, commonly known as SALC for “Set AL to Carry”, seems to exist only as a trap for anyone making a carbon copy of Intel’s microcode. When other companies like NEC made 8086 clones, implementing an undocumented instruction like this would strongly suggest they had simply copied Intel’s intellectual property (in NEC’s case, they didn’t).

In other cases, an instruction just doesn’t make sense. For example, you can pop all of the other segment registers, so even though it is not documented, you can deduce that POP CS should be opcode 0F. The problem is that there is no sane reason to pop CS off the stack. The instruction works; it just isn’t useful. Similarly, the opcodes from 60-6F are conditional jumps, no different from the documented ones at 70-7F, because the decoder ignores the bit that distinguishes the two ranges. There is no reason to document two identical ranges of instructions.
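A quick sketch makes the aliasing easy to see: if the decoder never looks at the bit that separates the two ranges, masking it back in maps every 0x6x opcode onto its documented 0x7x twin. The modeling here is ours, purely for illustration:

```python
# The 8086 decoder ignores the bit that would distinguish opcodes
# 0x60-0x6F from the documented conditional jumps at 0x70-0x7F.
# Masking that bit back in maps each undocumented opcode onto its
# documented twin.

def effective_opcode(opcode: int) -> int:
    """Model the incomplete decode: 0x60-0x6F alias 0x70-0x7F."""
    if 0x60 <= opcode <= 0x6F:
        return opcode | 0x10   # same instruction as the 0x7x jump
    return opcode

assert effective_opcode(0x60) == 0x70  # undocumented alias of JO
assert effective_opcode(0x75) == 0x75  # documented JNZ, unchanged
```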

The plot thickens when you go to two-byte instructions, where you’ll find plenty of instructions of dubious value. You don’t hear much about undocumented instructions anymore. Why? Because modern CPUs have enough circuitry to dedicate some of it to detecting and trapping illegal instructions. But the 8086 was squeezed too tight to allow for such a luxury. Good thing for people like us who enjoy solving puzzles.

You can still get a modern CPU to tell you more about instructions even if it won’t run them. Even the 80286 had some secret opcodes.

Ask Hackaday: Learn Assembly First, Last, Or Never?

A few days ago, I ran into an online post where someone pointed out the book “Learn to Program with Assembly” and asked if anyone had ever learned assembly language as a first programming language. I had to smile because, if you are a certain age, your first language may well have been assembly, even if it was assembly for machines that never existed.

Of course, that was a long time ago. These days, if you are over 40, it is more likely that you learned BASIC first. Go younger, and you start skewing toward Java, JavaScript, or even C. It got me thinking, though: should people learn assembly, and if so, when?

Continue reading “Ask Hackaday: Learn Assembly First, Last, Or Never?”

Timekeeping For Distributed Computers

Ask any programmer who has ever had to deal with timekeeping on a computer, and they’re likely to go on at length about how surprisingly difficult it can be. Time zones, leap years, leap seconds, various timekeeping standards, clock drift, and even relativity are all problems that can creep into projects. Timekeeping issues are exacerbated in distributed systems, which add another layer of complexity when we need to reliably determine, with high precision, the order in which a series of actions occurred across a number of different computers. One solution to this problem is the implementation of a vector clock.

When using other systems, such as logical clocks, to keep track of the order of events on different computers, a problem arises: they don’t always track events with perfect reliability, thanks to issues like varying temperature, race conditions, and clock skew. The vector clock instead tracks causal relationships between events. Each process maintains its own vector clock, represented by a list of integers, one per process. When a process performs an event, it increments its own entry in the vector, and it attaches a copy of the whole vector to any message it sends; a receiving process merges the incoming vector with its own by taking the element-wise maximum. By keeping track of the clock as it is updated by the various processes, the distributed system can be much more confident about the order in which events took place.
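As a minimal sketch of the idea in Python: local events bump a process’s own slot, sends attach a copy of the vector, and receives merge by element-wise maximum before bumping again. Comparing two vectors then reveals whether one event causally preceded another:

```python
# Minimal vector clock: each process keeps one counter per process.

class VectorClock:
    def __init__(self, process_id: int, num_processes: int):
        self.pid = process_id
        self.clock = [0] * num_processes

    def local_event(self):
        self.clock[self.pid] += 1        # bump our own slot

    def send(self) -> list:
        self.local_event()
        return list(self.clock)          # attach a copy to the message

    def receive(self, incoming: list):
        # Merge by element-wise maximum, then count the receive event.
        self.clock = [max(a, b) for a, b in zip(self.clock, incoming)]
        self.clock[self.pid] += 1

def happened_before(a: list, b: list) -> bool:
    """True if the event stamped a is causally before the one stamped b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Two processes: P0 sends a message that P1 receives.
p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
msg = p0.send()                          # P0's clock: [1, 0]
p1.receive(msg)                          # P1's clock: [1, 1]
assert happened_before(msg, p1.clock)
```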

Of course, there are always downsides to elegant solutions like this. In the case of vector clocks, the downside is largely the increased overhead of keeping track of all of those sets of integers. But in systems where the ordering of events is of the utmost importance, that is a worthwhile trade-off for reliability. And unless we hook all of our computers up to atomic clocks, like they do for some computers at CERN, we will have to accept the increased overhead instead.

A Browser Approach To Parsing

There are few rites of programmer passage as iconic as writing your first parser. You might want to interpret or compile a scripting language, or you might want to accept natural-language-like commands. You need a parser. [Varunramesh] wants to show you parser combinators, a technique used to make practical parsers. But the demonstration using interactive code cells in the web page is nearly as interesting as the technique.

Historically, you parse tokens, and this technique can do that too, but it can also operate directly on character streams if you prefer. The idea is related to recursive descent parsing, where you attempt to parse certain things and, if they fail, backtrack and try an alternative.
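To give a flavor of the technique, here is one common formulation in Python (not necessarily the one [Varunramesh] uses): a parser is a function from an input string and position to a result plus new position, or None on failure, and combinators glue small parsers into bigger ones. The alt combinator is where the try-and-backtrack behavior lives:

```python
# Tiny parser combinators. A parser takes (string, position) and
# returns (value, new_position) on success, or None on failure.

def char(c):
    """Parser that matches exactly one given character."""
    def parse(s, i):
        return (c, i + 1) if i < len(s) and s[i] == c else None
    return parse

def seq(*parsers):
    """Run parsers in order; fail if any one of them fails."""
    def parse(s, i):
        values = []
        for p in parsers:
            result = p(s, i)
            if result is None:
                return None          # whole sequence fails
            value, i = result
            values.append(value)
        return values, i
    return parse

def alt(*parsers):
    """Try each alternative from the same position until one succeeds."""
    def parse(s, i):
        for p in parsers:
            result = p(s, i)         # backtracking: retry from i
            if result is not None:
                return result
        return None
    return parse

# Accept "ab" or "ac", starting at position 0.
parser = alt(seq(char('a'), char('b')), seq(char('a'), char('c')))
print(parser("ac", 0))  # (['a', 'c'], 2)
```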

Continue reading “A Browser Approach To Parsing”

This Week In Security: NOAuth, MiniDLNA, And Ticket To Ride

There’s a fun logic flaw in how multiple online services handle OAuth logins, one that abuses Microsoft’s Azure Active Directory service to allow account takeovers. The problem is how a site handles the “Sign In With Microsoft” option when there’s an existing account under the same email address. Multiple sign-in options are an irritating problem for end users: trying to remember which option was used to set up an account is a struggle, so many services automatically merge accounts that share an email address.

The problem is that the Microsoft Azure authentication information includes an email address, but Microsoft hasn’t done any verification that the account in question actually controls that address. In fact, it’s trivial for the Azure admin to change that address at whim. So if the service accepts that email address as authoritative and auto-merges the accounts, it’s a trivial account takeover. And it’s more than a theoretical problem: researchers at Descope were able to demonstrate the attack, and they found multiple medium and large services that were vulnerable, as well as at least two authentication providers that were themselves vulnerable to this attack.
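The flawed logic boils down to something like the following sketch. The function and account-store shapes here are hypothetical, though “tid” and “sub” are the standard Azure AD tenant and subject claims; the fix is to key accounts on those stable, provider-issued identifiers rather than the attacker-controlled email claim:

```python
# Hypothetical sketch of the vulnerable auto-merge logic. The
# 'email' claim from Azure AD is attacker-controlled and unverified.

def handle_microsoft_login(claims, accounts):
    email = claims["email"]          # tenant admin can set this at whim

    # VULNERABLE: merge into any existing account with that email.
    if email in accounts:
        return accounts[email]       # account takeover
    return accounts.setdefault(email, {"email": email})

def handle_microsoft_login_safe(claims, accounts):
    # Safer: key accounts on the stable, provider-issued IDs and
    # ignore the email claim entirely when matching accounts.
    key = (claims["tid"], claims["sub"])
    return accounts.setdefault(key, {"ids": key})

# An attacker sets their Azure AD email to the victim's address...
accounts = {"victim@example.com": {"email": "victim@example.com"}}
claims = {"email": "victim@example.com", "tid": "evil-tenant", "sub": "mallory"}
print(handle_microsoft_login(claims, accounts))   # hands over the account
```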

Microsoft has pushed updates to the Azure AD service to make the issue easier to avoid, though it seems that the unverified “email” field is still being sent on authentication transactions. There is a new flag, “RemoveUnverifiedEmailClaim”, that eliminates the issue, and it is enabled by default for new applications. Unfortunately, this means that existing vulnerable applications will continue to be vulnerable until fixed on the application side.

Continue reading “This Week In Security: NOAuth, MiniDLNA, And Ticket To Ride”

A Simple Guide To Bit Banged I2C On The 6502

We covered [Anders Nielsen]’s 65uino project a short while ago, and now he’s back with an update video showing some more details of bit-banging I2C using plain old 6502 assembly language.

Obviously, with such a simple system, there is no dedicated I2C interface hardware, so the programmer must take care of all the details of the I2C protocol in software, bit-banging it out to the peripheral and reading back the response one bit at a time.

The first detail to concern us is the I2C address of each device on the bus, and how low-level bit manipulation turns the 7-bit I2C address into the byte that actually gets bit-banged out. As [Anders] shows, setting a bit is simply a logical-OR operation, and clearing a bit is a logical-AND with the one’s complement of that bit as the bitmask. As many will already know, this matters because the least significant bit of the address byte selects between an I2C read and a write operation.

A further detail is that I2C uses an open-collector connection scheme, which means that no device may drive the bus to a logical high; instead, it must release the line to the high-impedance state and let an external pull-up resistor pull the bus high. The 6532 RIOT chip (used for I/O on the 65uino) does not have tristate control, but it does have a data direction register (DDR) that can turn a pin into an input. That does the job just fine, albeit with slightly odd-looking code, at least until you know what’s going on.
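Here is that arithmetic in Python for illustration. The SDA bit position and register values are made up, and 0x3C is a commonly used 7-bit address for SSD1306 modules:

```python
# The 7-bit I2C address is shifted left one place, and the R/W bit
# lands in bit 0: 0 for a write, 1 for a read.
SSD1306_ADDR = 0x3C                     # typical 7-bit OLED address
write_byte = (SSD1306_ADDR << 1) | 0    # 0x78: address + write
read_byte  = (SSD1306_ADDR << 1) | 1    # 0x79: address + read

# Setting and clearing single bits, as in the 6502 code:
SDA = 0b0000_0001        # hypothetical bit position for the SDA pin
port = 0b0000_0000
port |= SDA              # logical-OR sets the bit
port &= ~SDA & 0xFF      # AND with the one's complement clears it

# The open-collector trick with a data direction register: leave the
# output latch at 0, and "write" to the bus by flipping the pin
# between output (drives the line low) and input (releases the line,
# which the external pull-up then pulls high).
ddr = 0b0000_0000
ddr |= SDA               # pin is an output: bus driven low
ddr &= ~SDA & 0xFF       # pin is an input: bus floats high
```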

From there, it’s a straightforward matter to write subroutines that generate the I2C start, stop, and NACK conditions required to write to the SSD1306-based OLED and get it to do something we can observe; the sketch below shows the shape of those routines. From these basic roots, through higher-level subroutines, a complete OLED library in assembly can be constructed.
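In Python pseudocode, using the output/input DDR trick above as drive_low and release (the helper names and simulated bus are ours, purely for illustration), the bus conditions look something like this:

```python
class SimulatedBus:
    """Stand-in for the real port and DDR registers; just tracks lines."""
    def __init__(self):
        self.lines = {"SDA": 1, "SCL": 1}   # idle: pull-ups hold both high
    def drive_low(self, line):
        self.lines[line] = 0                # pin as output, latch low
    def release(self, line):
        self.lines[line] = 1                # pin as input, pull-up wins
    def read(self, line):
        return self.lines[line]

def i2c_start(bus):
    bus.release("SDA")
    bus.release("SCL")
    bus.drive_low("SDA")    # SDA falls while SCL is high: start condition
    bus.drive_low("SCL")

def i2c_stop(bus):
    bus.drive_low("SDA")
    bus.release("SCL")
    bus.release("SDA")      # SDA rises while SCL is high: stop condition

def i2c_write_byte(bus, byte):
    for bit in range(7, -1, -1):            # MSB first
        (bus.release if (byte >> bit) & 1 else bus.drive_low)("SDA")
        bus.release("SCL")                  # clock the bit out...
        bus.drive_low("SCL")                # ...and bring SCL back low
    bus.release("SDA")                      # let the device ACK/NACK
    bus.release("SCL")
    ack = bus.read("SDA") == 0              # low means the device ACKed
    bus.drive_low("SCL")
    return ack                              # False here: no device attached

bus = SimulatedBus()
i2c_start(bus)
i2c_write_byte(bus, 0x78)   # the SSD1306 write address byte from above
i2c_stop(bus)
```

We shall sit tight and wait to see where [Anders] goes next with this!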

We see I2C-connected things all the time, like this neat ATtiny85-based I2C peripheral, and whilst we’re talking about the SSD1306 OLED display controller, here’s a hack that shows just how much you can push your luck with the I2C spec and get some crazy frame rates.

Continue reading “A Simple Guide To Bit Banged I2C On The 6502”