Sega’s AI Computer Embraces The Artificial Intelligence Revolution

Recently a little-known Sega computer system called the Sega AI Computer was discovered for sale in Japan, including a lot of the accompanying software. Although this may not really raise eyebrows, what’s interesting is that this was Sega’s 1986 attempt to cash in on Artificial Intelligence (AI) hype, with a home computer that could handle natural language. Based on the available software and documentation, it looked to be mostly targeted at younger children, with plans to launch it in the US later on, but ultimately it was quietly shelved by the end of the 1980s.

Part of the Sega AI Computer’s mainboard, with the V20 MPU and ROMs.

The computer system itself is based around the NEC V20, an 8088-compatible MPU, with 128 kB of RAM and a total of 512 kB of ROM spread across multiple chips. The latter contains not only the character set, but also a speech table for the text-to-speech functionality and the Prolog-based operating system. It is this Prolog-based environment which enables the ‘AI’ functionality. For example, the ‘diary’ application asks the user a few questions about their day and writes a grammatically correct diary entry for that day based on the responses.
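
Roughly speaking, the diary application boils down to slotting the user’s answers into canned grammatical templates. The original ran inside the Prolog environment; the sketch below is a minimal Python approximation with invented prompts, shown only to illustrate the idea:

```python
# Minimal sketch of a template-driven "diary" generator, loosely in the
# spirit of the Sega AI Computer's Prolog diary application. The prompts
# and templates here are invented for illustration.

def ask(prompt):
    return input(prompt + " ").strip()

def diary_entry():
    where = ask("Where did you go today?")
    who = ask("Who did you go with?")
    activity = ask("What did you do there?")
    feeling = ask("How did that make you feel?")

    # Slot the answers into a fixed grammatical template.
    return (f"Today I went to {where} with {who}. "
            f"We {activity}, and it made me feel {feeling}.")

if __name__ == "__main__":
    print(diary_entry())
```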

Touch panel overlays, supplied with the cartridge- or tape-based applications, make it easy for children to interact with the system, though a full-sized keyboard can be used instead. Altogether, 14 tapes and 26 cartridges (‘my cards’) had their contents dumped, along with the contents of every single ROM in the system. The manual and the further documentation and advertising material that came with the system were scanned in as well, which you can peruse while you boot up your very own Sega AI Computer in MAME. Mind that the MAME driver is still a work in progress, so bugs are to be expected. Even so, this is a rare glimpse at one of those aspirational systems that never made it out of the 1980s.

The NSA’s Furby Artificial Intelligence Scare: FOIA Documents Provide Insight

For those of us who were paying a modicum of attention to the part of the news around 1999 which did not involve the imminent demise of humanity due to the Y2K issue, a certain toy called a ‘Furby’ was making the headlines. In addition to driving parents batty, it gave everyone’s favorite US three-letter agency a scare, after being accused of being both a spying tool and the bearer of an advanced artificial intelligence chip. Courtesy of a recent Freedom of Information Act (FOIA) request we now have the low-down on what had the NSA all atwitter.

In a Twitter thread (Nitter) user [dakotathekat] announced the release, which finally answered many questions about the NSA’s on-premises ban of Furbys (or Furbees if you’re Swedish). The impression one gets is that this ‘Furby ban’ was primarily instituted out of an abundance of caution, as unauthorized recording devices of any kind are strictly forbidden on NSA premises. With nobody at the NSA apparently interested in doing a teardown of a Furby to ascertain its internals, and the need to balance allowing children’s toys on NSA grounds against the risk of a ‘Furbygate’, a ban seemed the easy way out. Similarly, the FAA saw fit to make people turn their Furbys off along with all other electronic devices.

The original Furby toys did not have anything more complex inside them than a 6502-derived MCU and a TI TSP50C04 IC for speech synthesis duties, with the supposed ‘learning’ process drawing on a hardcoded vocabulary that gradually replaced the default gibberish with English or another target language.
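
In other words, nothing is recorded or learned from the environment: the toy simply unlocks more of its canned vocabulary as a play counter ticks up. A hypothetical sketch of that scheme, with invented word pairs and thresholds, might look like this:

```python
# Illustrative sketch of the Furby-style "learning" trick: a fixed,
# hardcoded vocabulary in which gibberish words are progressively swapped
# for their English equivalents as a play counter increases. The word
# pairs and thresholds below are invented for illustration.
import random

VOCABULARY = [
    # (gibberish, English, plays needed before the word is "learned")
    ("dah-boh-bay", "big fun", 5),
    ("u-nye-loo-lay-doo", "where are you", 15),
    ("kah-may-may", "me love you", 30),
]

def babble(play_count):
    gibberish, english, threshold = random.choice(VOCABULARY)
    # Once the toy has been played with enough, the English phrase
    # replaces the gibberish -- no actual learning takes place.
    return english if play_count >= threshold else gibberish

if __name__ == "__main__":
    for plays in (1, 10, 50):
        print(plays, "plays:", babble(plays))
```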

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement varied from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom as before long companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes which GPT-3 has. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?

Continue reading “GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future”

Artificial Intelligence Powers A Wasp-Killing Machine

At the time of publication, Hackaday is of the understanding that there is no pro-wasp lobby active in the United States or abroad. Why? Well, the wasp is an insect that is considered incapable of any viable economic contribution to society, and thus has few to no adherents who would campaign in its favor. In fact, many actively seek to defeat the wasp, and [Tegwyn☠Twmffat] is one of them.

[Tegwyn]’s project is one that seeks to destroy wasps and Asian hornets in habitats where they are an invasive pest. To achieve this goal without harming other species, the aim is to train a neural network to detect the creatures and then use a laser to vaporize them.

Initial plans involved a gimballed, sentry-gun-style setup. However, safety concerns about firing lasers in the open, combined with the difficulty of imaging flying insects, conspired to put this idea to rest. The current system instead guides insects down a small tube at the entrance to a hive. Here they can be imaged easily at close range and in great detail, and, if they are identified as wasps or hornets, vaporized by a laser safely contained within the tube.
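
The resulting control flow is essentially a classify-then-fire loop. Below is a rough Python sketch of that idea, assuming a pre-trained classifier exported to ONNX, an OpenCV camera capture, and a GPIO-driven laser enable pin; the model file, pin number, and threshold are placeholders rather than details taken from the actual project:

```python
# Rough sketch of a classify-then-fire control loop for an in-tube insect
# zapper. The model file, camera index, and GPIO pin are all placeholders;
# the real project's implementation will differ.
import time
import cv2                                 # OpenCV for camera capture
from gpiozero import DigitalOutputDevice   # e.g. on a Raspberry Pi

LASER = DigitalOutputDevice(17)                        # hypothetical laser-enable pin
net = cv2.dnn.readNetFromONNX("wasp_classifier.onnx")  # placeholder model

def is_wasp(frame):
    # Resize/scale to the classifier's expected input and run inference.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(224, 224))
    net.setInput(blob)
    scores = net.forward().flatten()
    return scores[0] > 0.9                 # index 0 assumed to be "wasp"

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if ok and is_wasp(frame):
        LASER.on()                         # laser only fires inside the tube
        time.sleep(0.05)
        LASER.off()
```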

It’s an exciting project that could serve as a good model of how to deal with invasive insect species in the wild. We’ve seen insects grace our pages before, too. Video after the break. Continue reading “Artificial Intelligence Powers A Wasp-Killing Machine”

Forget Artificial Intelligence; Think Artificial Life

If you are a science fiction fan, you are probably aware of one of the genre’s oddest dichotomies. A lot of science fiction is concerned with whether a robot, alien, or whatever is a person. However — sometimes in the same story — finding life is as easy as asking the science officer with a fancy tricorder. If you go to Mars and meet Marvin, it is pretty clear he’s alive, but faced with a bunch of organic molecules, the task is a bit harder. Now it is going to get harder still, because Cornell scientists have created a material that has an artificial metabolism and checks quite a few of the boxes we associate with life. You can read the entire paper if you want more detail.

Three of the things people look for to classify something as alive are that it has a metabolism, self-arranges, and reproduces. There are other characteristics, depending on who you ask, but those three are pretty crucial.

Continue reading “Forget Artificial Intelligence; Think Artificial Life”

Stethoscopes, Electronics, And Artificial Intelligence

For all the advances in medical diagnostics made over the last two centuries of modern medicine, from the ability to peer deep inside the body with the help of superconducting magnets to harnessing the power of molecular biology, it seems strange that the enduring symbol of the medical profession is something as simple as the stethoscope. Hardly a medical examination goes by without the frigid kiss of a stethoscope against one’s chest, while we search the practitioner’s face for a telltale frown revealing something wrong from deep inside us.

The stethoscope has changed little since its invention and yet remains a valuable if problematic diagnostic tool. Efforts have been made to address its shortcomings over the years, but only with relatively recent advances in digital signal processing (DSP), microelectromechanical systems (MEMS), and artificial intelligence has any real progress been made. This leaves so-called smart stethoscopes poised to make a real difference in diagnostics, especially in the developing world and in austere or emergency situations.
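
To give a flavor of just the DSP building block (purely illustrative, not taken from any particular device), digitized chest audio is typically band-limited to the frequency range where heart sounds live before any further analysis takes place:

```python
# Simple illustration of one DSP step in a "smart" stethoscope: bandpass-
# filtering a digitized chest recording to the rough frequency range of
# heart sounds (~20-400 Hz). The sample rate and band edges are assumed
# values for illustration, not figures from any real device.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # sample rate in Hz (assumed)

def bandpass_heart_sounds(signal, low=20.0, high=400.0, order=4):
    sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, signal)

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1 / FS)
    # Fake test signal: a 50 Hz "heart" component buried in broadband noise.
    raw = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
    print(bandpass_heart_sounds(raw).shape)
```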

Continue reading “Stethoscopes, Electronics, And Artificial Intelligence”

Artificial Intelligence Composes New Christmas Songs

One of the most common uses of neural networks is the generation of new content, given certain constraints. A neural network is created, then trained on source content – ideally with as much reference material as possible. Then, the model is asked to generate original content in the same vein. This generally has mixed, but occasionally amusing, results. The team at [Made by AI] had a go at generating Christmas songs using this very technique.

The team decided that the easiest way to train their model would be to use note data from MIDI files. MIDI versions of Christmas songs are readily available and provide a broad base with which to train the model. For the neural network, the team chose a Long Short-Term Memory (LSTM) architecture. This is a model that retains context across a sequence, which is important when dealing with structured formats like music or language.
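
As a minimal sketch of that approach (not the team’s actual code), an LSTM can be trained to predict the next MIDI note number from a window of preceding notes, then sampled repeatedly to spit out a new tune. The note data below is a random placeholder standing in for notes extracted from Christmas-carol MIDI files:

```python
# Minimal PyTorch sketch of next-note prediction with an LSTM. The note
# sequence is a toy placeholder; in practice the notes would come from
# parsed Christmas-carol MIDI files.
import torch
import torch.nn as nn

NUM_NOTES = 128          # MIDI note numbers 0-127
SEQ_LEN = 16             # context window of preceding notes

class NoteLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(NUM_NOTES, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_NOTES)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out[:, -1])       # logits for the next note

# Toy training data: sliding windows over a (placeholder) note sequence.
notes = torch.randint(48, 84, (1000,))     # stand-in for parsed MIDI notes
X = torch.stack([notes[i:i + SEQ_LEN] for i in range(len(notes) - SEQ_LEN)])
y = notes[SEQ_LEN:]

model = NoteLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                     # tiny demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Generate a short tune by repeatedly sampling the predicted next note.
seq = notes[:SEQ_LEN].tolist()
for _ in range(32):
    context = torch.tensor(seq[-SEQ_LEN:]).unsqueeze(0)
    probs = torch.softmax(model(context), dim=-1)
    seq.append(torch.multinomial(probs, 1).item())
print(seq)
```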

The neural network generated five tunes which you can listen to on the Made by AI SoundCloud page. The team notes their time was limited, and we think that with some further work and more adherence to musical concepts such as structure and repetition, it might be possible to generate something a little more catchy.

There are other applications for AI in music, too – like these intelligent musical prostheses.