DOOM Played By Tweet

Getting DOOM to run on hardware it was never intended to run on is a tradition as old as time. Old cell phones, embedded systems, and ancient televisions have all been converted to play this classic first-person shooter. Playing the game on old hardware might be passé by now, though, as the new trend seems to be running it on more ethereal platforms instead. This project brings DOOM to Twitter.

The gameplay is a little nontraditional as well. To play the game, a tweet needs to be sent with specific instructions for the bot. The bot then plays the game according to those instructions and tweets a video of the result. By responding to this tweet with more instructions, the player can continue the game tweet-by-tweet. While slightly cumbersome, this does have the advantage of letting a player resume any game simply by responding to the tweet where they would like to start. What goes on behind the scenes of the DOOM-playing Twitter bot is interesting as well, and the code is available on the project’s GitHub page.

While we’ve seen plenty of DOOM instances on all kinds of hardware, it’s safe to say we’ve never really seen a gameplay experience quite like this one. It may stay a curiosity, but DOOM porters are always looking for something else to run this classic game on, so it may eventually branch out or develop into something more user-friendly, like this cloud-based Atari 2600.

Access An 8-bit Atari Through Twitter

Building a retro computer, or even restoring one, is a great way to understand a lot of the fundamentals of computing. That can take a long time and a lot of energy, though. Luckily, there is a Twitter bot out there that can let you experience an old 8-bit Atari without even needing to spin up an emulator. Just tweet your program to the bot, and it outputs the result.

The bot was built by [Kay Savetz] and accepts programs in five programming languages: Atari BASIC, Turbo-Basic XL, Atari Logo, Atari PILOT, and Atari Assembler/Editor, a low-level assembly-type language available on these machines. The bot itself runs on a Raspberry Pi with the Atari 800 emulator rather than original hardware, presumably because it’s much simpler to get a working network connection on a Pi than on a computer from the ’80s. The Pi runs a Python script that polls Twitter every two minutes and then hands the code off to the emulator.
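The project’s actual source lives on its own pages, but the overall loop is simple enough to sketch. Here is a rough Python outline of that poll-and-run cycle, with the Twitter call left as a placeholder and the emulator invocation being an illustrative guess rather than [Kay]’s real setup:

```python
import subprocess
import time

POLL_INTERVAL = 120  # the bot polls Twitter every two minutes


def fetch_new_mentions(since_id):
    """Placeholder for the Twitter API call (e.g. tweepy's mentions_timeline).
    Should return a list of (tweet_id, program_text) pairs newer than since_id."""
    raise NotImplementedError


def run_in_emulator(program_text):
    """Hand the program off to the Atari 800 emulator and capture its output.
    The command line and flags here are illustrative, not the project's actual invocation."""
    with open("/tmp/program.lst", "w") as listing:
        listing.write(program_text)
    result = subprocess.run(
        ["atari800", "-basic", "/tmp/program.lst"],
        capture_output=True,
        timeout=60,
    )
    return result.stdout


def main():
    since_id = None
    while True:
        for tweet_id, program in fetch_new_mentions(since_id):
            output = run_in_emulator(program)
            # ...post a screenshot or transcript of `output` back as a reply here...
            since_id = tweet_id
        time.sleep(POLL_INTERVAL)
```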

[Kay]’s work isn’t limited to just Ataris, though. There’s also an Apple II BASIC bot for all the Apple fans out there, which responds to programs written in AppleSoft BASIC. While building your own retro system or emulating one on other hardware is a great exercise, it’s also great that there are tools like these that let us play with retro computers without having to do any of the dirty work ourselves.

Twitter: It’s Not The Algorithm’s Fault. It’s Much Worse.

Maybe you heard about the anger surrounding Twitter’s automatic cropping of images. When users submit pictures that are too tall or too wide for the layout, Twitter automatically crops them to roughly a square. Instead of just picking, say, the largest square that’s closest to the center of the image, they use some “algorithm”, likely a neural network, trained to find people’s faces and make sure they’re cropped in.
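For comparison, the naive baseline mentioned above is only a few lines. Here’s a quick Pillow sketch, purely for illustration since Twitter’s cropper explicitly does not work this way:

```python
from PIL import Image


def naive_center_crop(img: Image.Image) -> Image.Image:
    """The 'dumb' baseline: the largest square closest to the center of the image.
    Twitter replaces this with a learned saliency model that hunts for faces."""
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    return img.crop((left, top, left + side, top + side))
```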

The problem is that when a too-tall or too-wide image includes two or more people with differently colored skin, the crop picks the lighter face. That’s really offensive, and something’s clearly wrong, but what?

A neural network is really just a mathematical equation, with the input variables in this case being convolutions over the pixels in the image, and training it essentially consists of picking the values for all the coefficients. You do this by applying inputs, seeing how wrong the outputs are, and updating the coefficients to make the answer a little more right. Do this a bazillion times, with a big enough model and dataset, and you can make a machine recognize different breeds of cat.
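Stripped of the convolutions and the bazillion repetitions, that update loop fits in a few lines. Here is a toy Python sketch, plain gradient descent on a made-up linear model, just to make the apply-inputs, measure-wrongness, nudge-coefficients cycle concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # inputs (stand-ins for pixel features)
true_w = np.array([1.5, -2.0, 0.5])                # the relationship we want to discover
y = X @ true_w + rng.normal(scale=0.1, size=100)   # observed outputs

w = np.zeros(3)   # the coefficients we are fitting
lr = 0.05         # how far to nudge them each round

for step in range(1000):
    pred = X @ w                 # apply inputs
    error = pred - y             # see how wrong the outputs are
    grad = X.T @ error / len(y)  # direction that reduces the wrongness
    w -= lr * grad               # update the coefficients to be a little more right

# after enough steps, w lands very close to true_w
```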

What went wrong at Twitter? Right now it’s speculation, but my money says it lies with either the training dataset or the coefficient-update step. The need to include people of all races in the training dataset is so blatantly obvious that we hope that’s not the problem; although getting a representative dataset is hard, it’s known to be hard, and they should be on top of that.

Which means that the issue might be coefficient fitting, and this is where math and culture collide. Imagine that your algorithm just misclassified a cat as an “airplane” or as a “lion”. You need to modify the coefficients so that they move the answer away from this result a bit, and more toward “cat”. Do you move them equally from “airplane” and “lion” or is “airplane” somehow more wrong? To capture this notion of different wrongnesses, you use a loss function that can numerically encapsulate just exactly what it is you want the network to learn, and then you take bigger or smaller steps in the right direction depending on how bad the result was.
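To make that notion of different wrongnesses concrete, here is a toy loss with invented penalty numbers; it is my own illustration, not anything Twitter or anyone else actually uses:

```python
import numpy as np

CLASSES = ["cat", "lion", "airplane"]

# Extra cost for each kind of mistake when the true label is "cat".
# "lion" is a near miss; "airplane" is badly wrong. The numbers are made up.
PENALTY = {"cat": {"lion": 1.0, "airplane": 3.0}}


def weighted_loss(probs, true_label):
    """Cross-entropy on the true class, plus an extra charge depending on
    where the misplaced probability mass ends up."""
    i = CLASSES.index(true_label)
    loss = -np.log(probs[i] + 1e-12)
    for j, cls in enumerate(CLASSES):
        if j != i:
            loss += PENALTY[true_label][cls] * probs[j]
    return loss


# A model that mistakes the cat for a lion is penalized less
# than one that mistakes it for an airplane.
print(weighted_loss(np.array([0.2, 0.7, 0.1]), "cat"))  # ~2.61
print(weighted_loss(np.array([0.2, 0.1, 0.7]), "cat"))  # ~3.81
```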

Let that sink in for a second. You need a mathematical equation that summarizes what you want the network to learn. (But not how you want it to learn it. That’s the revolutionary quality of applied neural networks.)

Now imagine, as happened to Google, your algorithm fits “gorilla” to the image of a black person. That’s wrong, but it’s categorically differently wrong from simply fitting “airplane” to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally, you would want them to never happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did — their “workaround” was to stop classifying “gorilla” entirely because the loss incurred by misclassifying a person as a gorilla was so large.

This is a fundamental problem with neural networks — they’re only as good as the data and the loss function. These days, the data has become less of a problem, but getting the loss right is a multi-level game, as these neural network trainwrecks demonstrate. And it’s not as easy as writing an equation that isn’t “racist”, whatever that would mean. The loss function is being asked to encapsulate human sensitivities, navigate around them and quantify them, and eventually weigh the slight risk of making a particularly offensive misclassification against not recognizing certain animals at all.

I’m not sure this problem is solvable, even with tremendously large datasets. (There are mathematical proofs that with infinitely large datasets the model will classify everything correctly, so you needn’t worry. But how close are we to infinity? Are asymptotic proofs relevant?)

Anyway, this problem is bigger than algorithms, or even their writers, being “racist”. It may be a fundamental problem of machine learning, and we’re definitely going to see further permutations of the Twitter fiasco in the future as machine classification is increasingly asked to respect human dignity.

Community Testing Suggests Bias In Twitter’s Cropping Algorithm

Social media and online services are now such huge parts of daily life that our entire world is being shaped by algorithms. Arcane in their workings, they are responsible for the content we see and the adverts we’re shown. Just as importantly, they decide what is hidden from view as well.

Important: Much of this post discusses the performance of a live website algorithm. Some of the links in this post may not perform as reported if viewed at a later date. 

The initial Zoom problem that brought Twitter’s issues to light.

Recently, [Colin Madland] posted some screenshots of a Zoom meeting to Twitter, pointing out how Zoom’s background detection algorithm had improperly erased the head of a colleague with darker skin. In doing so, [Colin] noticed a strange effect — although the screenshot he submitted shows both of their faces, Twitter would always crop the image to show just his light-skinned face, no matter the image orientation. The Twitter community raced to explore the problem, and the fallout was swift.
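The sort of test image people posted was trivial to construct: two portraits at opposite ends of a very tall canvas, so that the square crop can only keep one of them. Here’s a rough Pillow sketch of that kind of test image; the file names and spacing are made up for illustration:

```python
from PIL import Image


def make_crop_test_image(portrait_a_path, portrait_b_path, gap=2000):
    """Stack two portraits at opposite ends of a tall white canvas, forcing
    Twitter's square crop to pick one face or the other."""
    a = Image.open(portrait_a_path)
    b = Image.open(portrait_b_path)
    width = max(a.width, b.width)
    canvas = Image.new("RGB", (width, a.height + gap + b.height), "white")
    canvas.paste(a, ((width - a.width) // 2, 0))
    canvas.paste(b, ((width - b.width) // 2, a.height + gap))
    return canvas


# make_crop_test_image("portrait_one.jpg", "portrait_two.jpg").save("crop_test.png")
# Swapping the two portraits checks whether position, rather than the faces
# themselves, is what drives the crop.
```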


This Week In Security: Garmin Ransomware, KeePass, And Twitter Warnings

On July 23, multiple services related to Garmin were taken offline, including their call center and aviation-related services. Thanks to information leaked by Garmin employees, we know that this multi-day outage was caused by the WastedLocker ransomware campaign. After four days, Garmin was able to start the process of restoring the services.

It’s reported that the requested ransom was an eye-watering $10 million. It’s suspected that Garmin actually paid the ransom, as a leaked decryptor program confirms that they received the decryption key. The attack was apparently very widespread through Garmin’s network, as it seems that both workstations and public-facing servers were impacted. Let’s hope Garmin learned their lesson and are shoring up their security practices.

Today’s Twitter Hack Is A New Take On The “Nigerian Prince” Scam

Don’t send bitcoin to celebrities… or to random people for that matter. This afternoon a number of high-profile Twitter accounts were taken over, including Joe Biden, Bill Gates, Elon Musk, Apple, Jeff Bezos, and Kanye West, and the event appears to be ongoing. Each displayed a message saying they wanted to “give back” by doubling the bitcoin that they are sent. The messages all appear to use the same bitcoin wallet address.

This is reminiscent of the “Nigerian prince” scams, a form of advance-fee scam where an email asks for help with a small sum of money in order to obtain a larger sum. Those usually come in as spam emails which most people are wise to at this point. However, blindly following celebrities on Twitter may still deliver a good dose of naïveté when those platforms are misused.

Bitcoin transactions can be viewed publicly, and this wallet is showing 11.8 BTC in and 5.8 BTC out across a total of 288 transactions. The net is roughly 6 BTC, or about $55k at the time of writing. Twitter’s response appears to have been to lock all verified accounts out of publishing new tweets; they retain the ability to retweet and delete existing tweets.




Hackaday Links: April 5, 2020

Git is powerful, but with great power comes the ability to really bork things up. When you find yourself looking at an inscrutable error message after an ill-advised late-night commit, it can be a maximum pucker-factor moment, and keeping a clear enough head to fix the problem can be challenging. A little proactive social engineering may be in order, which is why [Jonathan Bisson] wrote git-undo, a simple shell script that displays the most common un-borking commands he’s likely to need. There are other ways to prompt yourself through Git emergencies, like Oh Shit, Git (or for the scatologically sensitive, Dangit Git), but git-undo has the advantage of working without an Internet connection.

Suddenly find yourself with a bunch of time on your hands and nothing to challenge your skills? Why not try to write a program in a single Tweet? The brainchild of [Dominic Pajak], the BBC Micro Bot Twitter account accepts tweets and attempts to run them as BASIC programs on a BBC Microcomputer emulator, replying with the results of the program. It would seem that 280 characters would make it difficult to do anything interesting, but check out some of the results. Most are graphic displays, some animated, and with an unsurprising number of nods to 1980s pop culture. Some are truly impressive, though, like Conway’s Game of Life written by none other than [Eben Upton].

The COVID-19 pandemic is causing all sorts of cultural shifts, but we didn’t expect to see much change in the culture of a community that’s been notoriously resistant to change for over a century: amateur radio. One of the most basic facts of life in the amateur radio world is that you need a license to participate, with governments regulating the process. But as a response to the pandemic, Spain has temporarily lifted licensing requirements for amateur radio operators. Normally, an unlicensed person is only allowed to operate on amateur bands under the direct supervision of a licensed amateur. The rules change allows unlicensed operators to use a station without supervision and is intended to give schoolchildren trapped at home an educational experience. In another change, some countries are allowing special callsign suffixes, like “STAYHOME,” to raise awareness during the pandemic. And the boom in interest in amateur radio since the pandemic started is remarkable; unfortunately, finding a way to take your test in a socially distant world is quite a trick. Our friend Josh Nass (KI6NAZ) has some thoughts about testing under these conditions that you might find interesting.

And finally, life goes on during all this societal disruption, and every new life deserves to be celebrated. And when Lauren Devinck made her appearance last month, her proud parents decided to send out unique birth announcement cards with a printed circuit board feature. The board is decorative, not functional, but adds a distinctive look to the card. The process of getting the boards printed was non-trivial; it turns out that free-form script won’t pass most design rule tests, and that panelizing them required making some compromises. We think the finished product is classy, but can’t help but think that a functional board would have really made a statement. Regardless, we welcome Lauren and congratulate her proud parents.