Computers For Fun

The last couple years have seen an incredible flourishing of the cyberdeck scene, and probably for about as many reasons as there are individual ’deck designs. Some people get really into the prop-making, some into scrapping old tech or reusing a particularly appealing case, and others simply into the customization possibilities. That’s awesome, and they’re all different motivations for making a computer that’s truly your own.

But I really like the motivation and sentiment behind [Andreas Eriksen]’s PotatoP. (Assuming that his real motivation isn’t all the bad potato puns.) This is a small microcomputer that’s built on a commonly available microcontroller, so it’s not a particularly powerful beast – hence the “potato”. But what makes up for that in my mind is that it’s running a rudimentary bare-metal OS of his own writing. It’s like he’s taken the cyberdeck’s DIY aesthetic into the software as well.

What I like most about the spirit of the project is the idea of a long-term project that’s also a constant companion. Once you get past a terminal and an interpreter – [Andreas] is using LISP for both – everything else consists of small projects that you can check off one by one, that maybe don’t take forever, and that are limited in complexity by the hardware you’re working on. A simple text editor, some graphics primitives, maybe a sound subsystem. A way to read and write files in flash. I don’t love LISP personally, but I love that it brings interactivity and independence from an external compiler, making the it possible to develop the system on the system, pulling itself up by its own bootstraps.

Pretty soon, you could have something capable, and completely DIY. But it doesn’t need to be done all at once either. With a light enough computer, and a good basic foundation, you could keep it in your backpack and play “OS development” whenever you’ve got the free time. A DIY play OS for a sandbox computing platform: what more could a nerd want?

Retrotechtacular: The Revolutionary Visual Effects Of King Kong

Today, it’s easy to take realistic visual effects in film and TV for granted. Computer-generated imagery (CGI) has all but done away with the traditional camera tricks and miniatures used in decades past, and has become so commonplace in modern productions that there’s a good chance you’ve watched scenes without even realizing they were created partially, or sometimes even entirely, using digital tools.

But things were quite different when King Kong was released in 1933. In her recently released short documentary King Kong: The Practical Effects Wonder, Katie Keenan explains some the groundbreaking techniques used in the legendary film. At a time when audiences were only just becoming accustomed to experiencing sound in theaters, King Kong employed stop-motion animation, matte painting, rear projection, and even primitive robotics to bring the titular character to life in a realistic way.

Continue reading “Retrotechtacular: The Revolutionary Visual Effects Of King Kong

The First Gui? Volscan Controls The Air

In the 1950s,  computers were, for the most part, ponderous machines. But one machine offered a glimpse of the future. The Volscan was probably the first real air traffic computer designed to handle high volumes of military aircraft operations. It used a light gun that looked more like a soldering gun than a computer input device. There isn’t much data about Volscan, but it appears to have been before its time, and had arguably the first GUI on a computer system ever.

The Air Force had a problem. The new — in the 1950s — jets needed long landing approaches and timely landings since they burned more fuel at lower altitudes. According to the Air Force, they could land 40 planes in an hour, but they needed to be able to do 120 planes an hour. The Whirlwind computer had proven that computers could process radar data — although Whirlwind was getting the data over phone lines from a distance. So the Air Force’s Cambridge Research Center started working on a computerized system to land planes called Volscan, later known as AN/GSN-3.

Continue reading “The First Gui? Volscan Controls The Air”

What’s Going To Happen To Legacy Broadcast Bands When The Lights Go Out?

Our smartphones have become our constant companions over the last decade, and it’s often said that they have been such a success because they’ve absorbed the features of so many of the other devices we used to carry. PDA? Check. Pager? Check. Flashlight? Check. Camera? Check. MP3 player? Of course, and the list goes on. But alongside all that portable tech there’s a wider effect on less portable technology, and it’s one that even has a social aspect to it as well. In simple terms, there’s a generational divide that the smartphone has brought into focus, between older people who consume media in ways born in the analogue age, and younger people for whom their media experience is customized and definitely non-linear.

The Kids Just Don’t Listen To The Radio Any More

A 1957 American family watching TV
We’re guessing this is no longer a scene played out in many homes. Evert F. Baumgardner, Public domain.

The effect of this has been to see a slow erosion of the once-mighty reach of radio and TV broadcasters, and with that loss of listenership has come less of a need for the older technologies they relied on. Which leaves a fascinating question here at Hackaday, what is going to happen to all that spectrum? Indeed, there’s a deeper question behind all that, is lower frequency spectrum even that valuable any more?

In the old days, we had analogue TV in several-MHz-wide channels spread across a large part of the UHF bands and some smaller chunks of VHF. Among that we had 20 MHz of FM broadcasting around the 100 MHz mark, and disregarding shortwave, then a MHz of AM down around 1 MHz. Europeans got a bonus band down there too: we’ve got Long Wave, over 100 kHz of AM goodness roughly centered around 200 kHz.

Continue reading “What’s Going To Happen To Legacy Broadcast Bands When The Lights Go Out?”

The Future Of RISC-V And The VisionFive 2 Single Board Computer

We’ve been following the open, royalty-free RISC-V ISA for a while. At first we read the specs, and then we saw RISC-V cores in microcontrollers, but now there’s a new board that offers enough processing power at a low enough price point to really be interesting in a single board computer. The VisionFive 2 ran a successful Kickstarter back in September 2022, and I’ve finally received a unit with 8 GB of ram. And it works! The JH7110 won’t outperform a modern desktop, or even a Raspberry Pi 4, but it’s good enough to run a desktop environment, browse the web, and test software.

And that’s sort of a big deal, because the RISC-V architecture is starting to show up in lots of places. The challenge has been getting real hardware that’s powerful enough to run Linux and compile software on, that doesn’t cost an arm and a leg. If ARM is an alternative architecture, then RISC-V is still an experimental one, and that is an issue when trying to use the VF2. That’s a theme we’ll repeat a few times, but the thing to remember here is that getting more devices in the wild is the first step to fixing things. Continue reading “The Future Of RISC-V And The VisionFive 2 Single Board Computer”

Hackaday Links Column Banner

Hackaday Links: March 5, 2023

Well, we guess it had to happen eventually — Ford is putting plans in place to make its vehicles capable of self-repossession. At least it seems so from a patent application that was published last week, which reads like something written by someone who fancies themselves an evil genius but is just really, really annoying. Like most patent applications, it covers a lot of ground; aside from the obvious capability of a self-driving car to drive itself back to the dealership, Ford lists a number of steps that its proposed system could take before or instead of driving the car away from someone who’s behind on payments.

Examples include selective disabling conveniences in the vehicle, like the HVAC or infotainment systems, or even locking the doors and effectively bricking the vehicle. Ford graciously makes allowance for using the repossessed vehicle in an emergency, and makes mention of using cameras in the vehicle and a “neural network” to verify that the locked-out user is indeed having, say, a medical emergency. What could possibly go wrong?

Continue reading “Hackaday Links: March 5, 2023”

ChatGPT, Bing, And The Upcoming Security Apocalypse

Most security professionals will tell you that it’s a lot easier to attack code systems than it is to defend them, and that this is especially true for large systems. The white hat’s job is to secure each and every point of contact, while the black hat’s goal is to find just one that’s insecure.

Whether black hat or white hat, it also helps a lot to know how the system works and exactly what it’s doing. When you’ve got the source code, either because it’s open-source, or because you’re working inside the company that makes the software, you’ve got a huge advantage both in finding bugs and in fixing them. In the case of closed-source software, the white hats arguably have the offsetting advantage that they at least can see the source code, and peek inside the black box, while the attackers cannot.

Still, if you look at the number of security issues raised weekly, it’s clear that even in the case of closed-source software, where the defenders should have the largest advantage, that offense is a lot easier than defense.

So now put yourself in the shoes of the poor folks who are going to try to secure large language models like ChatGPT, the new Bing, or Google’s soon-to-be-released Bard. They don’t understand their machines. Of course they know how the work inside, in the sense of cross multiplying tensors and updating weights based on training sets and so on. But because the billions of internal parameters interact in incomprehensible ways, almost all researchers refer to large language models’ inner workings as a black box.

And they haven’t even begun to consider security yet. They’re still worried about how to construct obscure background prompts that prevent their machines from spewing hate speech or pornographic novels. But as soon as the machines start doing something more interesting than just providing you plain text, the black hats will take notice, and someone will have to figure out defense.

Indeed, this week, we saw the first real shot across the bow: a hack to make Bing direct users to arbitrary (bad) webpages. The Bing hack requires the user to already be on a compromised website, so it’s maybe not very threatening, but it points out a possible real security difference between Bing and ChatGPT: Bing gives you links to follow, and that makes it a juicy target.

We’re right on the edge of a new security landscape, because even the white hats are facing a black box in the AI. So far, what ChatGPT and Codex and other large language models are doing is trivially secure – putting out plain text – but Bing is taking the first dangerous steps into doing something more useful, both for users and black hats. Given the ease with which people have undone OpenAI’s attempts to keep ChatGPT in its comfort zone, my guess is that the white hats will have their hands full, and the black-box nature of the model deprives them of their best hope. Buckle your seatbelts.