The Apple II is one of the most iconic microcomputers, and [James Lewis] decided to use the Mega-II “Apple IIe on a chip” from an Apple IIgs to build a tiny Apple IIe.
While there was an Apple II compatibility card using the related Gemini chip, it was initially unclear whether the Mega-II could even work outside of an Apple IIgs given the lack of documentation for either Apple II SoC. [Lewis] did finally get the Mega-II to boot after a great deal of effort in debugging and design. The system is built with three boards: the Mega-II and RAM board, a CPU board with a 65C02, and a video out board.
To simplify routing, the boards are all four layer PCBs. Unfortunately, the chips needed to make this system, especially the Mega-II, aren’t available on their own and must be harvested from an existing IIgs. [Lewis] took care to make sure any desoldering or other part removal was done in a way that could be reversed. If you want to see all the nitty gritty details, check out his GitHub for the project.
If you want another 6502-based computer in a tiny package, why not try this one built on Perf+ boards?
Some of the most popular vintage computers are now more than forty years old, and their memory just ain’t what it used to be. Identifying bad memory chips can quickly become a chore, so [Jan Beta] spent some time putting together a cheap DRAM tester out of spare parts.
This little tester can be used with 4164 and 41256 DRAM memory chips. 4164 DRAM was used in several popular home computers throughout the 1980s, including the Apple ][ series, Commodore 64, ZX Spectrum and many more. Likewise, the 41256 was used in the Commodore Amiga. These computers are incredibly popular in the vintage computing community, and it’s not uncommon to find bad memory in any of them.
With an Arduino at its core, this DRAM tester uses the most basic of electronic components, and any modest tinkerer should have pretty much everything in stock. The original project can be found here, including the Arduino code. Just pop the suspect chip into the ZIF socket, hit the reset switch, and wait for the LED – green is good, and red means it’s toast.
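The basic technique is easy to picture. A 4164 is organised as 256 rows by 256 columns, multiplexed over eight address lines and strobed in with /RAS and /CAS, so a tester only needs to write a known bit to every cell and read it straight back. The sketch below is purely illustrative rather than [Jan Beta]’s firmware: the pin assignments are assumptions, and a more thorough test would write whole patterns before reading anything back so that refresh gets exercised too.

```cpp
// Illustrative 4164 DRAM test: write a bit to every cell, read it back.
// Pin numbers are assumptions, not the project's wiring. A 41256 would need
// a ninth address line and 512 x 512 loops.

const int ADDR_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};   // A0..A7 (multiplexed row/column)
const int PIN_RAS = 10, PIN_CAS = 11, PIN_WE = 12;   // active-low strobes
const int PIN_DIN = A0, PIN_DOUT = A1;               // data in / data out
const int LED_GREEN = A2, LED_RED = A3;

void setAddress(uint8_t a) {
  for (int i = 0; i < 8; i++) digitalWrite(ADDR_PINS[i], (a >> i) & 1);
}

void writeBit(uint8_t row, uint8_t col, bool value) {
  digitalWrite(PIN_WE, LOW);            // early-write cycle: WE low before CAS
  digitalWrite(PIN_DIN, value);
  setAddress(row);
  digitalWrite(PIN_RAS, LOW);           // falling RAS latches the row address
  setAddress(col);
  digitalWrite(PIN_CAS, LOW);           // falling CAS latches the column and writes DIN
  digitalWrite(PIN_CAS, HIGH);
  digitalWrite(PIN_RAS, HIGH);
  digitalWrite(PIN_WE, HIGH);
}

bool readBit(uint8_t row, uint8_t col) {
  setAddress(row);
  digitalWrite(PIN_RAS, LOW);
  setAddress(col);
  digitalWrite(PIN_CAS, LOW);           // read cycle: WE stays high
  bool value = digitalRead(PIN_DOUT);
  digitalWrite(PIN_CAS, HIGH);
  digitalWrite(PIN_RAS, HIGH);
  return value;
}

void setup() {
  for (int i = 0; i < 8; i++) pinMode(ADDR_PINS[i], OUTPUT);
  pinMode(PIN_RAS, OUTPUT); pinMode(PIN_CAS, OUTPUT); pinMode(PIN_WE, OUTPUT);
  pinMode(PIN_DIN, OUTPUT); pinMode(PIN_DOUT, INPUT);
  pinMode(LED_GREEN, OUTPUT); pinMode(LED_RED, OUTPUT);
  digitalWrite(PIN_RAS, HIGH); digitalWrite(PIN_CAS, HIGH); digitalWrite(PIN_WE, HIGH);

  bool ok = true;
  for (int row = 0; row < 256 && ok; row++) {
    for (int col = 0; col < 256 && ok; col++) {
      bool pattern = (row ^ col) & 1;   // simple alternating pattern
      writeBit(row, col, pattern);
      if (readBit(row, col) != pattern) ok = false;
    }
  }
  digitalWrite(ok ? LED_GREEN : LED_RED, HIGH);
}

void loop() {}
```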
It’s a great sanity check for when you’re neck deep in suspect DRAM. A failed test is a sure sign that the chip is bad; however, the tester does occasionally report a false pass. Not every issue can be identified with such a simple tester, but it’s great at weeding out the chips that are definitely dead.
If you’re not short on cash, then the Chip Tester Pro may be more to your liking, but it’s hard to beat the simplicity and thriftiness of building your own simple tester from spare parts. If you’re a little more adventurous, this in-circuit debugger could come in handy.
A major part of the retrocomputing scene for many of us lies in the world of chiptunes, music created either using original retrocomputing hardware or in the style of those early synthesiser chips. There’s one machine we don’t hear much about among all this though, and that’s the Apple II. Though probably one of the most expandable of all the 8-bit home computers, it lacked any sound hardware beyond a speaker toggled by accessing a memory-mapped location, so any complex sound work had to be done via an add-on card. It’s something [Nicole Branagan] has investigated in depth, as she demonstrates first the buzz from the speaker and then what must have been an object of extreme desire back in the day, a Mockingboard sound card.
Her card is not an original but a modern recreation using the same hardware, which is to say a pair of 6522 VIA port chips, each driving an AY-3-8910 audio chip. This is already a familiar device to those who have heard an Amstrad CPC, a later Sinclair Spectrum, or an MSX, and in the Apple it delivers impressive stereo sound thanks to the two AY chips each feeding its own channel. Interestingly though, it delivers a far smoother output than an MSX playing the same music, probably because of a superior filtering circuit.
She wraps up with a discussion of coding on the Apple for the AY, and how best to accommodate the card on the later Apple IIgs. If the AY chip catches your interest, it’s also easy to drive from a microcontroller.
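Easy, because the chip asks for very little: a register number latched onto its bus, then a data byte, with the BDIR and BC1 lines sequencing the two steps. The fragment below is a hypothetical Arduino Uno sketch of that idea, bit-banging the AY directly rather than going through the Mockingboard’s 6522s; the pin choices, the external 1 MHz clock, and the register values are all assumptions.

```cpp
// Illustrative AY-3-8910 register writes from an Arduino Uno. This is not the
// Mockingboard interface; it assumes the AY's data bus, BDIR and BC1 are wired
// straight to the Arduino, BC2 is tied high, and a 1 MHz clock feeds the chip.

const int DATA_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};  // DA0..DA7

// BC1 on pin 10 (PB2) and BDIR on pin 11 (PB3) share PORTB, so both control
// lines can change in one write and the bus never passes through the chip's
// read or write states by accident.
static inline void busInactive()  { PORTB &= ~(_BV(PB2) | _BV(PB3)); }  // BDIR=0, BC1=0
static inline void busLatchAddr() { PORTB |=  (_BV(PB2) | _BV(PB3)); }  // BDIR=1, BC1=1
static inline void busWriteData() { PORTB |=  _BV(PB3); }               // BDIR=1, BC1=0

void busPut(uint8_t value) {
  for (int i = 0; i < 8; i++) digitalWrite(DATA_PINS[i], (value >> i) & 1);
}

void ayWrite(uint8_t reg, uint8_t value) {
  busPut(reg);
  busLatchAddr();     // latch the register address
  busInactive();
  busPut(value);
  busWriteData();     // write the data byte to the latched register
  busInactive();
}

void setup() {
  for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], OUTPUT);
  pinMode(10, OUTPUT); pinMode(11, OUTPUT);
  busInactive();

  // Roughly 440 Hz on channel A with a 1 MHz clock:
  // tone period = 1000000 / (16 * 440) = ~142.
  ayWrite(0, 142);           // channel A tone period, fine byte
  ayWrite(1, 0);             // channel A tone period, coarse byte
  ayWrite(7, 0b00111110);    // mixer: tone on channel A only (bits are active low)
  ayWrite(8, 0x0F);          // channel A amplitude, maximum fixed level
}

void loop() {}
```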
If we cast our minds back a few decades, almost all computers were a beige colour. “Beige box” even became a phrase for a generic PC, such was their ubiquity. Long before PCs though there were other beige computers, and probably one of the first to land on the desks of enthusiasts rather than professionals was the Apple ][. But exactly what beige colour was it? It’s a question that interested [Ben Zotto], and his quest led him through a fascinating exploration of a colour most of us consider to be boring.
We’re used to older beige computers becoming yellow with time, as the effect of light and age causes the fire retardants in their plastic to release bromine. But the earlier Apple products haven’t done this, because their beige came not from the plastic but from a paint. [Ben] was lucky enough to find a small pot of touch-up paint from Apple that was made available to dealers, so notwithstanding any slight pigment changes from its age, he set off in pursuit of its origin.
Along the way to identifying a modern Pantone shade (Pantone 14-0105 TPG, for the curious) he treats us to a cross-section of Apple’s early colour history with reference to the memories of early Apple luminaries. He even suggests readily available shades that could suffice, pointing to Gloss Almond Rust-Oleum spray paint.
So should you wish to colour-match to an early Apple, now you can. If you have a Commodore or an Atari though, maybe your task is a little easier.
Ever get that funny feeling that things aren’t quite what they used to be? Not in the way that a new washing machine has more plastic parts than one 40 years its senior. More like “my laptop can churn through hundreds of gigaflops, but when I scroll it doesn’t feel great.” That perception of smoothness might be based on a couple factors, including system latency. A couple years ago [danluu] had that feeling too and measured the latency of “devices I’ve run into in the past few months” (based on this list, he lives a more interesting life than we do). It turns out his hunch was objectively correct. What he wrote was a wonderful deep dive into how and why a wide variety of devices work and the hardware and software contributors to latency.
Let’s be clear about what “latency” means in this context. [danluu] was checking the time between a user input and some response on screen. For desktop systems he measured a keystroke, for mobile devices scrolling a browser. If you’re here on Hackaday (or maybe at a Vintage Computer Festival) the cause of the apparent contradiction at the top of the charts might be obvious.
Q: Why are some older systems faster than devices built decades later? A: The older systems just didn’t do much! Instead of complex multi-tasking operating systems doing hundreds of things at once, the CPU’s entire attention was bent on whatever user process was running. There are obvious practical drawbacks here but it certainly reduces context switching!
In some sense this complexity that [danluu] describes is at the core of how we solve problems with programming. Writing code is all about abstraction. While it’s true that any program could be written directly in machine code and customized to an individual machine’s hardware configuration, it would be pretty inconvenient for both developer and user. So over time layers of sugar have been added on top to hide raw hardware behind nicer interfaces written in higher-level programming languages.
And instead of writing every program to target exact hardware configurations there is a kernel to handle the lowest layers, then layers adding hotplug systems, power management, pluggable module and driver infrastructure, and more. When considering solutions to a programming problem the approach is always recursive: you can solve the problem, or add a layer of abstraction and reframe it. Enough layers of the latter make the former trivial. But it’s abstractions all the way down.
[danluu]’s observation is that we’re just now starting to curve back around and hit low latency again, but this time by brute force! Modern solutions to latency largely look like increasingly exotic display technologies and complex optimizations reaching from UI draw functions all the way down to the silicon, rather than any removal of software and system infrastructure. It turns out the benefits of software complexity in terms of user experience and ease of development are worth it most of the time.
For a very tangible illustration of latency as applied to touchscreen devices, check out the Microsoft Research video after the break (linked to in [danluu]’s piece).
Our better-traveled colleagues having provided ample coverage of the 34C3 event in Leipzig just after Christmas, it is left to the rest of us to pick over the carcass as though it was the last remnant of a once-magnificent Christmas turkey. There are plenty of talks to sit and watch online, and of course the odd gem that passed the others by.
It probably doesn’t get much worse than nuclear conflagration, when it comes to risks facing the planet. Countries nervously peering at each other, each jealously guarding their stocks of warheads. It seems an unlikely place to find a 34C3 talk about 6502 microprocessors, but that’s what [Moritz Kütt] and [Alex Glaser] managed to deliver.
Policing any peace treaty is a tricky business, and one involving nuclear disarmament is especially so. There is a problem of trust: with so much at stake, no party is willing to reveal more than the most basic information about its arsenal, and neither does it trust verification instruments manufactured by another player’s state agency. Thus the instruments used by the inspectors must be unable to harvest too much information about what they are inspecting, can only store something analogous to a hash of the data they do acquire, and must be of a design open enough to be verified. This last point becomes especially difficult when the hardware in question is a modern high-performance microprocessor board, an object of such complexity that it could easily have been compromised by a nuclear player attempting to game the system.
We are taken through the design of a nuclear weapon verification instrument in detail, with some examples and the design problems they highlight. Something as innocuous as an ATtiny microcontroller seeing to the timing of an analogue board takes on a sinister possibility, as it becomes evident that with compromised code it could store unauthorised information or try to fool the inspectors. They show us their first model of detector using a Red Pitaya FPGA board, but make the point that this has a level of complexity that makes it unverifiable.
Then comes the radical idea: if the technology used in this field is too complex for its integrity to be verified, what technology exists at a level that can be verified? Their answer brings us to the 6502, a processor that has been in continuous production for over 40 years and whose internal structures are so well understood as to be de facto in the public domain. In particular they settle upon the Apple II home computer as a 6502 platform, because of its ready availability and the expandability of [Steve Wozniak]’s original design. All parties can both source and inspect the instruments involved.
If you’ve never examined a nuclear warhead verification device, the details of the system are fascinating. We’re shown the scintillation detector for measuring the energies present in the incident radiation, and the custom Apple II ADC board which uses only op-amps, an Analog Devices flash ADC chip, and easily verifiable 74-series logic. It’s unintentional but pleasing, from a retrocomputing perspective, that everything except perhaps the blue LED indicator could well have been bought for an Apple II peripheral back in the 1980s. They then wrap up the talk with an examination of ways a genuine 6502 system could be made verifiable through non-destructive means.
It is not likely that nuclear inspectors will turn up to the silos with an Apple II in hand, but this does show a solution to some of the problems facing them in their work and might provide pointers towards future instruments. You can read more about their work on their web site.
Emulators are a great way to reminisce about games and software from yesteryear. [Jorj Bauer] found himself doing just that back in 2002, when they decided to boot up Three Mile Island for the Apple II. It played well enough, but, for some reason, crashed instantly if you happened to press the ‘7’ key. This was a problem — the game takes hours to play, and ‘7’ is the key for saving and restoring your progress. In 2002, [Jorj] was content to put up with this. But finally, enough was enough – [Jorj] set out to fix the bug in Three Mile Island once and for all.
The project is written up in three parts — the history of how [Jorj] came to play Three Mile Island and learn about Apple IIs in the first place, the problem with the game, and finally the approach to finding a solution. After first discovering the problem, [Jorj] searched online to see if a bad disk image was to blame. But every copy they found was the same. There was nothing left for it to be but a problem in the binary.