The Z80 Is Dead. Long Live The Free Z80!

It’s with a tinge of sadness that we and many others reported on the recent move by Zilog to end-of-life the original Z80 8-bit microprocessor. This was the part that gave so many engineers and programmers their first introduction to a computer of their own. Even though it’s long since outdated, its presence has been a constant over the decades. Zilog will continue to sell a Z80 derivative in the form of their eZ80, but that’s not the only place the core can be found on silicon. [Rejunity] is bringing us an open-source Z80 core on real hardware, thanks of course to the TinyTapeout ASIC project. The classic core will occupy two tiles on the upcoming TinyTapeout 7. It’s perhaps not quite the same as holding a real 40-pin DIP in your hands, but like the rest of the open-source custom silicon world, these are still early days.

The core in question is derived from the open-source TV80 core, and we would be very interested to compare it, once fabricated on TinyTapeout’s 130 nm process, with an original chip made on the far larger process geometry of the 1970s. It’s true that this project is more an interesting demonstration of TinyTapeout than a practical everyday Z80, but it does at least serve as a reminder that a production run of open-source Z80s, or other classic chips, might one day become possible.

This isn’t the first time we’ve featured a TinyTapeout project.

Australian Library Uses Chatbot To Imitate Veteran With Predictable Results

The educational sector is usually the first to decry large language models and AI, due to worries about cheating. The State Library of Queensland, however, has embraced the technology in controversial fashion. In the lead-up to Anzac Day, the war remembrance holiday observed in Australia and New Zealand, the library released a chatbot intended to imitate a World War One veteran. It went as well as you’d expect.

The highlighted line was apparently added to the chatbot’s instructions later on to help shut down tomfoolery.

Twitter users immediately chimed in with dismay at the very concept. Others showed how easy it was to “jailbreak” the AI, convincing Charlie he was actually supposed to teach Python, imitate Frasier Crane, or explain laws like Elle from Legally Blonde. One person figured out how to get Charlie to spit out his initial instructions; these were patched later in the day to try and stop some of the shenanigans.

From those instructions, it’s clear that this was supposed to be educational, rather than some sort of macabre experiment. However, Charlie didn’t do a great job here, either. As with any large language model, Charlie had no sense of objective truth. He routinely spat out incorrect facts about the war and often contradicted himself.

Generally, any plan that includes the words “impersonate a veteran” is a foolhardy one at best. Throwing a machine-generated portrait and a largely uncontrolled AI into the mix didn’t help things. Regardless, the State Library has left the “Virtual Veterans” experience up at the time of writing.

The problem with AI is that it’s not a magic box that gets things right all the time. It never has been. As long as organizations keep putting AI to use in ways like this, the same story will keep playing out.

Hackaday Podcast Episode 268: RF Burns, Wireless Charging Sucks, And Barnacles Grow On Flaperons

Not necessarily the easy way to program an EPROM

Elliot and Dan got together to enshrine the week’s hacks in podcast form, and to commiserate about their respective moms, each of whom recently fell victim to phishing attacks. It’s not easy being ad hoc tech support sometimes, and as Elliot says, when someone is on the phone telling you that you’ve been hacked, he’s the hacker. Moving on to the hacks, we took a look at a hacking roadmap for a cheap ham radio, felt the burn of AM broadcasts, and learned how to program old-school EPROMs on the cheap.

We talked about why having a smart TV in your house might not be so smart, especially for Windows users, and were properly shocked by just how bad wireless charging really is. Also, cheap wind turbines turn out to be terrible, barnacles might give a clue to the whereabouts of MH370, and infosec can really make use of cheap microcontrollers.

Grab a copy for yourself if you want to listen offline.

Continue reading “Hackaday Podcast Episode 268: RF Burns, Wireless Charging Sucks, And Barnacles Grow On Flaperons”

This Week In Security: Cisco, Mitel, And AI False Flags

There’s been a recent trend of big-name security appliances being abused in state-sponsored attacks, and it looks like Cisco is the latest victim, based on a report by their own Talos Intelligence.

This particular attack has a couple of components and abuses a pair of vulnerabilities, though the odd thing about this one is that the initial access vector is still unknown. The first part of the infection is Line Dancer, a memory-only element that disables the system log, leaks the system config, captures packets, and more. Some of the more devious steps are taken to keep the in-memory malware secret, like replacing the crash dump process with a simple reboot. And finally, the resident malware installs a backdoor in the VPN service.

There is a second element, Line Runner, that uses a vulnerability to run arbitrary code from disk on startup, and then installs itself onto the device. That one is a long-term command-and-control element, and seems to only get installed on targeted devices. The Talos blog makes a rather vague mention of a 32-byte token that gets pattern-matched to determine an extra infection step. It may be that Line Runner only gets permanently installed on certain units, or that some other particularly fun action is taken.

Fixes for the vulnerabilities that allowed for persistence are available, but again, the initial access vector is still unknown. One vulnerability that just got fixed could have filled that role: CVE-2024-20295 allows an authenticated user with read-only privileges to perform a command injection as root. Proof-of-concept code is out in the wild for this one, but so far there’s no evidence it was used in any attacks, including the one above. If command injection is a new one for you, there’s a quick generic sketch of the bug class below. Continue reading “This Week In Security: Cisco, Mitel, And AI False Flags”
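That sketch, for the curious: this is a minimal, generic illustration of command injection, with nothing to do with Cisco’s actual code (which isn’t public). It just shows why pasting untrusted input onto a shell command line hands the caller arbitrary execution, and why that’s so much worse when the process runs as root.

```c
/* Generic command injection demo -- not Cisco's code, just the bug class.
 * Build: cc -o logcat logcat.c */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <logfile>\n", argv[0]);
        return 1;
    }

    char cmd[512];
    /* Vulnerable pattern: the caller's string lands directly on a shell
     * command line. Passing "messages; id" also runs `id` -- and if this
     * process runs as root, so does the injected command. */
    snprintf(cmd, sizeof cmd, "cat /var/log/%s", argv[1]);
    return system(cmd);
}
```

The fix is the usual one: don’t build shell strings from user input at all, or at minimum validate against an allow-list and use an exec-style API that takes arguments as discrete strings.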

Analyzing The Code From The Terminator’s HUD

The T-800, also known as the Terminator, was like some kind of non-giving-up robot guy. The robot assassin viewed the world through a tinted display, lines of code scrolling by all the while. It was cinematic shorthand to tell the audience they were looking through the eyes of a machine. Now, a YouTuber called [Open Source] has analyzed that code.

The video highlights some interesting finds concerning the graphics seen in the T-800’s vision. They appear to match the output of various code listings and articles in Nibble Magazine, specifically its September 1984 issue. One example spotted was a compass rose, spawned from an Apple BASIC listing. It was a simple quiz meant to help children learn to read a compass. Another graphic appears to be cribbed from the MacPaint Patterns section of the same issue.

The weird thing is that the original film came out in October 1984, just a month after that issue would have hit the newsstands. It suggests that someone involved with the movie was perhaps also involved with the magazine or had access to an early copy, or that the examples in Nibble were themselves rehashed from some earlier source.

The code that regularly flickers on the left of the T-800’s vision is just 6502 machine code, apparently a random hexdump from an Apple II’s memory. At other times there’s 6502 assembly source on screen, with various programmer comments still intact. There’s even some code cribbed from the Apple II DOS 3.3 RAM disk driver.

It’s neat to see someone actually track down the background of these classic graphics. Hacking and computers are usually portrayed in a fairly unrealistic way in movies, and it’s no different in The Terminator (1984). Still, that doesn’t mean the movies aren’t fun!

Continue reading “Analyzing The Code From The Terminator’s HUD”

This Week In Security: XZ, ATT, And Letters Of Marque

The xz backdoor is naturally still the top story of the week. If you need a refresher, see our previous coverage. As expected, some very talented reverse engineers have gone to work on the code, and we have a much better idea of what the injected payload does.

One of the first findings to note is that the backdoor doesn’t allow a user to log in over SSH. Instead, when an SSH request is signed with the right authentication key, one of the certificate fields is decoded and executed via a system() call. And this makes perfect sense. An SSH login leaves an audit trail, while this backdoor is obviously intended to be silent and secret.
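To make the shape of that concrete, here’s a deliberately simple, self-contained toy in C. It is emphatically not the xz payload (the real thing hides behind hooked RSA routines and reportedly a proper public-key signature check, not a magic string); it just shows how a command smuggled inside an otherwise ordinary data field can be run via system() without ever creating a login session.

```c
/* Toy illustration only -- NOT the xz backdoor. A command hidden in a
 * data blob gets executed via system(), leaving no login behind. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MARKER "EVIL"  /* stand-in for the real backdoor's signature check */

static void inspect_field(const unsigned char *blob, size_t len)
{
    if (len > strlen(MARKER) && memcmp(blob, MARKER, strlen(MARKER)) == 0) {
        /* "Signature" matched: run everything after the marker as a command. */
        char cmd[256];
        size_t n = len - strlen(MARKER);
        if (n > sizeof cmd - 1)
            n = sizeof cmd - 1;
        memcpy(cmd, blob + strlen(MARKER), n);
        cmd[n] = '\0';
        system(cmd);               /* silent: no SSH session, no audit trail */
    } else {
        puts("field looks ordinary, handle it normally");
    }
}

int main(void)
{
    const unsigned char benign[] = "just a boring certificate field";
    const unsigned char loaded[] = "EVILecho this ran as a shell command";

    inspect_field(benign, sizeof benign - 1);
    inspect_field(loaded, sizeof loaded - 1);
    return 0;
}
```

Because only the attacker’s key can produce a field that passes the check, everyone else’s connections sail through untouched, which is a big part of why the backdoor stayed quiet.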

It’s interesting to note that this code made use of both autotools macros and GNU ifunc, or Indirect FUNCtions. That’s the nifty feature where a binary can include several versions of a function, each optimized for a different processor instruction set, with the right one selected at runtime. Or, in this case, the malicious version of the function gets hooked into execution by a malicious library; a quick sketch of what a benign ifunc looks like follows below. Continue reading “This Week In Security: XZ, ATT, And Letters Of Marque”
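For anyone who hasn’t run into ifunc before, here’s a minimal GCC-on-x86 sketch of the feature used benignly: a resolver runs at load time and decides which implementation the symbol should point at. It’s exactly this early, pre-main resolution hook that the xz payload abused to splice in its own code.

```c
/* Minimal GNU ifunc example (GCC, x86-64 Linux).
 * Build: gcc -O2 -o ifunc_demo ifunc_demo.c */
#include <stdio.h>

/* Two implementations of the same routine, as you might ship for
 * different CPU feature levels. */
static int add_generic(int a, int b) { return a + b; }
static int add_tuned(int a, int b)   { return a + b; /* pretend this is the AVX2 path */ }

/* The resolver runs once, while the dynamic linker is still processing
 * relocations; whatever pointer it returns is what every later call to
 * add() actually jumps to. */
static int (*resolve_add(void))(int, int)
{
    __builtin_cpu_init();   /* must be called before __builtin_cpu_supports here */
    return __builtin_cpu_supports("avx2") ? add_tuned : add_generic;
}

/* GNU ifunc: bind the public symbol `add` to whatever the resolver picks. */
int add(int a, int b) __attribute__((ifunc("resolve_add")));

int main(void)
{
    printf("2 + 3 = %d\n", add(2, 3));
    return 0;
}
```

Glibc uses this same mechanism to pick optimized memcpy and string routines per CPU, which is part of what made the hook such an attractive hiding place.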

Generative AI Now Encroaching On Music

While it might not seem that way to a novice, music turns out to be a highly mathematical endeavor, with precise ratios between notes and chords as well as an overarching structure of rhythm and timing. This is especially true of popular music, which leans on even more recognizable repeating patterns and trends, unfortunately making it an easy target for modern generative AI, which can analyze huge amounts of data and churn out arguably original works. The latest such tool, called Suno, does just that, for better or worse.
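As a tiny taste of those “precise ratios”, here’s a sketch in C of the math behind twelve-tone equal temperament: every semitone is a factor of the twelfth root of two, so a whole chromatic scale falls out of a single formula.

```c
/* Twelve-tone equal temperament: each semitone is a ratio of 2^(1/12).
 * Build: cc -o scale scale.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double a4 = 440.0;   /* concert pitch A4 */
    const char *names[12] = { "A", "A#", "B", "C", "C#", "D",
                              "D#", "E", "F", "F#", "G", "G#" };

    for (int n = 0; n < 12; n++)
        printf("%-2s %7.2f Hz\n", names[n], a4 * pow(2.0, n / 12.0));

    return 0;
}
```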

Unlike other generative AI offerings currently available for making music, this one can generate not only the musical underpinnings of a song but also a layer of intelligible vocals on top. A deeper investigation by Rolling Stone found that the tool uses its own models to come up with the music, offloads the lyric writing to ChatGPT, and then renders those lyrics as fairly convincing vocals. Like the image and text generation models of the past few years, this has the potential to be significantly disruptive.

We’re not particularly excited about living in a world where humans toil while the machines make the art, rather than the other way around. At best, we can hope for one in which real musicians use these models as tools to enhance their creativity rather than as outright substitutes, much as programmers currently use ChatGPT. That might be an overly optimistic view, though, and only time will tell.