It’s DOOM, But In Teletext

We’ve seen the 1993 id Software classic DOOM running on so many pieces of unexpected hardware, as “Will it run DOOM?” has become something of a test for any new device. But will it run in the circuitry of a 1970s or 1980s TV set? Not quite, but as [lukneu] has demonstrated, it is possible to render the game using the set’s inbuilt Teletext decoder.

Teletext is a technology past its zenith and which is no longer broadcast in many countries, but for those unfamiliar it’s an information service broadcast in the unseen lines hidden in the frame blanking period of an analogue TV transmission. Its serial data packets can contain both pages of text and rudimentary block graphics, and we’re surprised to learn, can include continuous streams to a single page. It’s this feature that he’s used, piping the game’s graphics as a teletext stream which is decoded by the CRT TV and displayed as a playable if blocky game.

Delving further, we find that DOOM is running on a Linux machine on which the teletext stream is created, and the stream is then piped to a Raspberry Pi which does the encoding on to its composite video output. More powerful versions of the Pi can run both processes on the same machine. The result can be seen in the video below, and we can definitely say it would have been mind-blowing, back when DOOM was king. There are plans for further refinement, of which we’d say that color would be the most welcome.

Continue reading “It’s DOOM, But In Teletext”

How Far Can An EULA Go?

We read this news with mixed glee and horror: a company called Telly is giving TVs away, for the low price of having to live with an always-on advertisement bar and some pretty stringent terms and conditions. Break the terms, and they’ll repossess your TV. If you don’t give them the TV, they have your credit card on record and they think the set is worth $1,000.

The hacker in me sees free hardware, so I checked out the terms and conditions, and it doesn’t look good. They’ve explicitly ruled out opening up or physically modifying the device, and it has to continually have WiFi – for which you pay, naturally. It sounds like it could easily tell if you try to tamper with it. My next thought was, perhaps too cynically, to get one, put it in the closet, and wait for the company to go bankrupt. Because you know that business model isn’t going to last.

But it’s clear that they’ve seen through me. The most bizarre clause is that you have to “Use the Product as the primary television in Your household”. Now, we’re not lawyers, but it seems like an amazing stretch that they can tell you how intensively you are to use the product. Can you imagine a license with a keyboard that demanded that you only use it to write sci-fi novels, or that you have to use it more than any other keyboard?

Nope. Too many hoops to jump through for a silly free TV. You can keep your dystopian future.

Hexed Home Assistant Monitors 3D Printers

You can babysit your 3D printer 100% of the time, or you can cross your fingers and hope it all works. Some monitor their printers using webcams, but [Simit] has a more stylish method of keeping tabs on six 3D printers.

The idea is to use a 3D printed hex LED display found online. Adding an ESP32 and Home Assistant allows remote control of the display. The printers use Klipper and can report their status using an API called Moonraker. Each hexagon shows the status of one printer. You can tell if the printer is online, paused, printing, or in other states based on the color and amount of LEDs lit. For example, a hex turns totally green when printing is complete.

Once you have a web API and some network-controlled LEDs, it is relatively straightforward to link it together with Home Automation. Of course, you could do it other ways, too, but if you already have Home Automation running for other reasons, why not?

We have seen other ways to do this, of course. If you need an easy monitor, the eyes have it. If you don’t use Klipper, OctoPrint can pull a similar stunt.

Get That Dream Job, With A Bit Of Text Injection

Getting a job has always been a tedious and annoying process, as for all the care that has been put into a CV or resume, it can be still headed for the round file at the whim of some corporate apparatchik. At various times there have also been dubious psychometric tests and other horrors to contend with, and now we have the specter of AI before us. We can be tossed aside simply because some AI model has rejected our CV, no human involved. If this has made you angry, perhaps it’s time to look at [Kai Greshake]’s work. He’s fighting back, by injecting a PDF CV with extra text to fool the AI into seeing the perfect candidate, and even fooling AI-based summarizers.

Text injection into a PDF is a technique the same as used by the less salubrious end of the search engine marketing world, of placing text in a web page such that a human can’t read it but a machine can. The search engine marketeers put them in tiny white text or offset them far out of the viewport, and it seems the same is possible in a PDF. He’s put the injection in white and a tiny font, and interestingly, overlaid it several times.

Using the ChatGPT instance available in the Bing sidebar he’s then able to fool it into an affirmative replay to questions about whether he should be hired. But it’s not just ChatGPT he’s targeting, another use of AI in recruitment is via summarizing tools. By injecting a lot of text with phrases normally used in conclusion of a document, he’s able to make Quillbot talk about puppies. Fancy a go yourself? He’s put a summarizer online, in the link above.

So maybe the all-seeing AI isn’t as clever as we’ve been led to believe. Who’d have thought it!

Spy Transceiver Makes Two Tubes Do The Work Of Five

Here at Hackaday, we love following along with projects as they progress. That’s especially true when a project makes a considerable leap in terms of functionality from one version to another, or when the original design gets more elegant. And when you get both improved function and decreased complexity at the same time? That’s the good stuff.

Take the recent improvements to a vacuum tube “spy radio” as an example. Previously, [Helge (LA6NCA)] built both a two-tube transmitter and a three-tube receiver, either of which would fit in the palm of your hand. A little higher math seems to indicate that combining these two circuits into a transceiver would require five tubes, but that’s not how hams like [Helge] roll. His 80-m CW-only transceiver design uses only two tubes and a lot of tricks, which we admit we’re still wrapping our heads around. On the receive side, one tube serves as a mixer/oscillator, combining the received signal with a slightly offset crystal-controlled signal to provide the needed beat frequency. The second tube serves as the amplifier, both for the RF signal when transmitting, and for audio when receiving.

The really clever part of this build is that [Helge] somehow stuffed four separate relays into the tiny Altoids tin chassis. Three of them are used to switch between receive and transmit, while the fourth is set up as a simple electromagnetic buzzer. This provides the sidetone needed to effectively transmit Morse code, and is about the simplest way we’ve ever seen to address that need. Also impressive is how [Helge] went from a relatively expansive breadboard prototype to a much more compact final design, and how the solder was barely cooled before he managed to make a contact over 200 km. The video below has all the details.

Continue reading “Spy Transceiver Makes Two Tubes Do The Work Of Five”

Prompt Injection: An AI-Targeted Attack

For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isn’t quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online we’re seeing that joke taken to its logical conclusion: prompt-injection attacks. Continue reading “Prompt Injection: An AI-Targeted Attack”

Oscillon by Ben F. Laposky

Early Computer Art From The 1950s And 1960s

Modern day computer artist, [Amy Goodchild] surveys a history of Early Computer Art from the 1950s and 1960s. With so much attention presently focused on AI-generated artwork, we should remember that computers have been used to created art for many decades.

Our story begins in 1950 when Ben Laposky started using long exposure photography of cathode ray oscilloscopes to record moving signals generated by electronic circuits. In 1953, Gordon Pask developed the electromechanical MusiColor system. MusiColor empowered musicians to control visual elements including lights, patterns, and motorized color wheels using sound from their instruments. The musicians could interact with the system in real-time, audio-visual jam sessions.

In the early 1960s, BEFLIX (derived form Bell Flix) was developed by Ken Knowlton at Bell Labs as a programming language for generating video animations. The Graphic 1 computer featuring a light pen input device was also developed at Bell Labs. Around the same timeframe, IBM introduced novel visualization technology in the IBM 2250 graphics display for its System/360 computer. The 1967 IBM promotional film Frontiers in Computer Graphics demonstrates the capabilities of the system.

Continue reading “Early Computer Art From The 1950s And 1960s”