Sega’s AI Computer Embraces The Artificial Intelligence Revolution

Recently a little-known Sega computer system called the Sega AI Computer was discovered for sale in Japan, along with much of its accompanying software. On its own that may not raise many eyebrows, but what makes it interesting is that this was Sega’s 1986 attempt to cash in on Artificial Intelligence (AI) hype with a home computer that could handle natural language. Based on the available software and documentation, it was mostly targeted at younger children, with plans to launch it in the US later on, but it was ultimately quietly shelved by the end of the 1980s.

Part of the Sega AI Computer’s mainboard, with the V20 MPU and ROMs.

The computer system itself is based around the NEC V20, an 8088-compatible MPU, with 128 kB of RAM and a total of 512 kB of ROM spread across multiple chips. The ROMs contain not only the character set, but also a speech table for the text-to-speech functionality and the Prolog-based operating system. It is this Prolog environment that enables the ‘AI’ functionality. For example, the ‘diary’ application asks the user a few questions about their day and then writes a grammatically correct diary entry based on the responses.
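To get a feel for what that means in practice, here’s a toy Python sketch of the template-filling idea. The real machine ran a Prolog rule base; the questions and sentence template below are invented for illustration and are not taken from Sega’s software.

```python
# Toy sketch of the diary idea: ask a few questions, then fill a
# grammatical template. The real Sega AI Computer did this with a
# Prolog rule base; these questions and the template are made up.
QUESTIONS = {
    "activity": "What did you do today? ",
    "place": "Where did you do it? ",
    "feeling": "How did it make you feel? ",
}

def write_diary_entry(answers: dict) -> str:
    """Fill a fixed sentence template with the user's answers."""
    return (
        f"Today I {answers['activity']} at {answers['place']}. "
        f"It made me feel {answers['feeling']}."
    )

if __name__ == "__main__":
    answers = {key: input(prompt).strip() for key, prompt in QUESTIONS.items()}
    print(write_diary_entry(answers))
```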

Overlays on the system’s touch panel, supplied with the cartridge- or tape-based applications, make it easy for children to interact with the system, or a full-sized keyboard can be used instead. Altogether, 14 tapes and 26 cartridges (‘my cards’) had their contents dumped, along with the contents of every single ROM in the system. The manual and the other documentation and advertising material that came with the system were scanned in as well, which you can peruse while you boot up your very own Sega AI Computer in MAME. Mind that the MAME driver is still a work in progress, so bugs are to be expected. Even so, this is a rare glimpse at one of those aspirational systems that never made it out of the 1980s.

2023: As The Hardware World Turns

We’ve made it through another trip around the sun, and for the first time in what feels like far too long, it seems like things went pretty well for the hackers and makers of the world. Like so many, our community suffered through a rough couple of years: from the part shortages that made building even the simplest of devices more expensive and difficult than it should have been, to the COVID-mandated social distancing that robbed us of our favorite meetups. But when looking back on the last twelve months, most of the news was refreshingly positive.

Pepperoni costs ten bucks, but they can’t activate Windows on their registers…

Oh sure, a trip to the grocery store can lead to a minor existential crisis at the register, but there’s not much we at Hackaday can do about that other than recommend some good hydroponics projects to help get your own home farm up and running.

As has become our New Year tradition, we like to take this time to go over some of the biggest stories and trends that we picked up on from our unique vantage point. Some will be obvious, but there are always a few that sneak up on us. These posts tend to make for interesting reading in the future, and if you’ve got the time, we’d recommend going back and reading the previous entries in this series and reminiscing a bit.

It’s also a good time to reflect on Hackaday itself — how we’ve grown, the things that have changed, and perhaps what we can do better going forward. Believe it or not, we do read all of the feedback from the community, whether it’s in the comments of individual posts or sent to us directly. We couldn’t do this without readers like you, so please drop us a line and let us know what you’re thinking.

So before we get any further into 2024, let’s wind back the clock and revisit some of the highlights from the previous year.

Continue reading “2023: As The Hardware World Turns”

2023 Hackaday Prize: Two Bee Or More Bee Swarm Detection

In the bustling world of bees, swarming is the ultimate game of real estate shuffle. When a hive gets too crowded or craves a change of scenery, the colony sends out scouts to find a new home. [Captain Flatus O’Flaherty] is a beekeeper trying to capture more native honey bees, and a custom LoRa-enabled catch hive helps him do that.

A catch hive, perched high and mighty, lures scouting bees looking for a potential new home. If selected, a swarm of over a thousand bees can move in, which is where [Flatus]’s detector comes in. Many catch hives are scattered around, and checking them all manually is difficult. While the breath of one bee is hard to detect, a thousand bees produce enough CO2 to register on a sensor. A custom PCB with a solar-powered +30 dBm LoRa radio measures CO2 and reports back. The PCB contains an ESP32 D4 and a 1-watt Ebyte E22-400M30S LoRa module. If CO2 levels are still elevated at nightfall, [Flatus] can be pretty confident a swarm has moved in.
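As a rough illustration of that nightfall check (not [Flatus]’s actual firmware), here’s a small Python sketch that flags a catch hive as occupied when evening CO2 readings stay well above the daytime baseline; the 200 ppm margin is an assumption.

```python
from statistics import mean

def swarm_likely(daytime_ppm, evening_ppm, margin_ppm=200):
    """Flag a catch hive as probably occupied if every evening CO2 reading
    stays well above the daytime baseline. The margin is illustrative."""
    baseline = mean(daytime_ppm)
    return all(reading > baseline + margin_ppm for reading in evening_ppm)

# Example: readings reported over LoRa during the day vs. after nightfall
print(swarm_likely([420, 450, 430], [780, 810, 795]))  # True -> go check the hive
```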

He then massaged the collected data into a dataset suitable for training an XGBoost model. Combined with weather data and other conditions, the model tries to predict when a swarm is more or less likely to happen. Apis mellifera (the local honeybee around [Flatus]) loves sun-kissed, warm, humid afternoons with little wind.
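If you wanted to try something similar, the training step might look roughly like the sketch below. The CSV file, column names, and hyperparameters are all assumptions for illustration; the actual dataset isn’t published in the write-up.

```python
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature set and file name -- swap in your own logged data.
df = pd.read_csv("swarm_log.csv")
features = ["temp_c", "humidity", "wind_kph", "sunshine_hours"]
X, y = df[features], df["swarm_occurred"]  # 1 if a swarm moved in that day

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Probability that conditions favour a swarm for the first few test days
print("Swarm probability:", model.predict_proba(X_test)[:5, 1])
```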

We’ve seen beehive monitors before and love exploring what the data could be used for—video after the break.

Continue reading “2023 Hackaday Prize: Two Bee Or More Bee Swarm Detection”

Liquid Neural Networks Do More With Less

[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of Artificial Neural Network called Liquid Neural Networks, and presented some of the exciting results at a recent TEDxMIT.

Liquid neural networks take inspiration from biological neurons to implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that steers a car to perform lane keeping using a liquid neural network. The system performs quite well using only 19 neurons, profoundly fewer than the huge models we’ve come to expect. Furthermore, an attention map helps us visualize that the system attends to particular aspects of the visual field much as a human driver does.

[Mathias Lechner] and [Ramin Hasani]
The typical scaling law of neural networks suggests that accuracy improves with larger models, which is to say, more neurons. Liquid neural networks may break this law and show that scale is not the whole story. A smaller model can be computed more efficiently, and a compact model improves accountability, since decision activity is more readily located within the network. Perhaps surprisingly, liquid neural networks can also improve generalization, robustness, and fairness.

A liquid neural network can implement synaptic weights using nonlinear probabilities instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to more flexibly react to perturbations in the natural environment.
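To get a feel for that, here’s a minimal numpy sketch of a liquid time-constant (LTC) style update, loosely following the published formulation rather than the authors’ released code; the sizes and random weights are purely illustrative.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, A, tau, dt=0.05):
    """One explicit-Euler update of an LTC-style cell:
    dx/dt = -(1/tau + f) * x + f * A, where f is an input- and
    state-dependent gate. Simplified from the published formulation."""
    f = 1.0 / (1.0 + np.exp(-(W_in @ inputs + W_rec @ x)))  # sigmoid gate
    # Effective time constant is 1/(1/tau + f), so it shifts with the input.
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

rng = np.random.default_rng(0)
n, m = 19, 4                         # 19 neurons, 4 input features (illustrative)
x = np.zeros(n)
W_in, W_rec = rng.normal(size=(n, m)), rng.normal(size=(n, n))
A, tau = rng.normal(size=n), np.full(n, 1.0)

for _ in range(100):                 # feed random sensory input for 100 steps
    x = ltc_step(x, rng.normal(size=m), W_in, W_rec, A, tau)
print(x[:5])
```

Because the gate depends on the current input, each cell’s effective time constant changes from moment to moment, which is where the “liquid” in the name comes from.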

We should probably expect the operational gap between biological neural networks and artificial neural networks to continue to close and blur. We’ve previously covered wetware examples of building neural networks with actual neurons, as well as ever-advancing brain-computer interfaces.

Continue reading “Liquid Neural Networks Do More With Less”

Why LLaMa Is A Big Deal

You might have heard about LLaMa, or maybe you haven’t. Either way, what’s the big deal? It’s just some AI thing. In a nutshell, LLaMa is important because it allows you to run large language models (LLMs) like GPT-3 on commodity hardware. In many ways, this is a bit like Stable Diffusion, which similarly allowed normal folks to run image generation models on their own hardware, with access to the underlying source code. We’ve discussed why Stable Diffusion matters and even talked about how it works.

LLaMa is a family of transformer language models from Facebook/Meta research, ranging from 7 billion to 65 billion parameters and trained on publicly available datasets. Their research paper showed that the 13B version outperformed GPT-3 in most benchmarks, and LLaMa-65B is right up there with the best of them. LLaMa was unique in that inference could be run on a single GPU, thanks to some optimizations made to the transformer itself and the models being roughly 10x smaller. While Meta recommended at least 10 GB of VRAM to run inference on the larger models, that’s a huge step down from the 80 GB A100 cards that often run these models.
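A quick back-of-envelope calculation shows why the precision of the weights matters so much. This only counts the weights themselves and ignores activations, the KV cache, and runtime overhead, so treat it as a rule of thumb rather than a spec.

```python
# Memory needed just to hold the weights, at different precisions.
def weight_gib(params_billions, bytes_per_param):
    return params_billions * 1e9 * bytes_per_param / 2**30

for name, params in [("LLaMa-7B", 7), ("LLaMa-13B", 13), ("LLaMa-65B", 65)]:
    print(f"{name}: fp16 ~ {weight_gib(params, 2):.1f} GiB, "
          f"4-bit ~ {weight_gib(params, 0.5):.1f} GiB")
```

Dropping from 16-bit to 4-bit weights brings LLaMa-7B from roughly 13 GiB down to a bit over 3 GiB, which is why modest hardware suddenly becomes viable.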

While this was an important step forward for the research community, it became a huge one for the hacker community when [Georgi Gerganov] rolled in. He released llama.cpp on GitHub, which runs inference on a LLaMa model with 4-bit quantization. His code was focused on running LLaMa-7B on your MacBook, but we’ve seen versions running on smartphones and Raspberry Pis. There’s even a version written in Rust! A rough rule of thumb is that anything with more than 4 GB of RAM can run LLaMa. Model weights are available through Meta under some rather strict terms, but they’ve been leaked online and can even be found in a pull request on the GitHub repo itself.
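To get a feel for what 4-bit quantization does, here’s a small numpy sketch loosely in the spirit of llama.cpp’s Q4 block formats, not its exact layout or code: each block of weights keeps one floating-point scale plus a small integer per weight, cutting storage roughly fourfold compared to fp16.

```python
import numpy as np

def quantize_q4(block):
    """Symmetric 4-bit quantization of one block of weights: one float
    scale per block plus an int4-range value per weight (illustrative)."""
    scale = float(np.abs(block).max()) / 7.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_q4(scale, q):
    return scale * q.astype(np.float32)

weights = np.random.default_rng(1).normal(size=32).astype(np.float32)
scale, q = quantize_q4(weights)
error = np.abs(weights - dequantize_q4(scale, q)).max()
print(f"max reconstruction error: {error:.4f} (scale {scale:.4f})")
```

Continue reading “Why LLaMa Is A Big Deal”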

Will A.I. Steal All The Code And Take All The Jobs?

New technology often brings with it a bit of controversy. When considering stem cell therapies, self-driving cars, genetically modified organisms, or nuclear power plants, fears and concerns come to mind as much as, if not more than, excitement and hope for a brighter tomorrow. New technologies force us to evolve perspectives and establish new policies in hopes that we can maximize the benefits and minimize the risks. Artificial Intelligence (AI) is certainly no exception. The stakes, including our very position as Earth’s apex intellect, seem exceedingly weighty. Mathematician Irving Good’s oft-quoted wisdom that the “first ultraintelligent machine is the last invention that man need make” describes a sword that cuts both ways. It is not entirely unreasonable to fear that the last invention we need to make might just be the last invention that we get to make.

Artificial Intelligence and Learning

Artificial intelligence is currently the hottest topic in technology. AI systems are being tasked to write prose, make art, chat, and generate code. Setting aside the horrifying notion of an AI programming or reprogramming itself, what does it mean for an AI to generate code? It should be obvious that an AI is not just a normal program whose code was written to spit out any and all other programs. Such a program would need to have all programs inside itself. Instead, an AI learns from being trained. How it is trained is raising some interesting questions.

Humans learn by reading, studying, and practicing. We learn by training our minds with collected input from the world around us. Similarly, AI and machine learning (ML) models learn through training. They must be provided with examples from which to learn. The examples that we provide to an AI are referred to as the data corpus of the training process. The robot Johnny 5 from “Short Circuit”, like any curious-minded student, needs input, more input, and more input.

Continue reading “Will A.I. Steal All The Code And Take All The Jobs?”

AI Dreaming Of Time Travel

We love the intersection between art and technology, and a video made by an AI (Stable Diffusion) imagining a journey through time (Nitter) is a lovely example. The project is relatively straightforward, but as with most art projects, there were endless hours of [Xander Steenbrugge] tweaking and playing with different parts of the process until it was just how he liked it. He mentions trying thousands of different prompts and seeds — an example of one of the prompts is “a small tribal village with huts.” In the video, each prompt got 72 frames, slowly increasing in strength and then decreasing as the following prompt came along.
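For a feel of how that blending schedule might work, here’s a toy Python sketch of a triangular crossfade between consecutive prompts; the ramp shape is a guess, not [Xander]’s actual code.

```python
# Each prompt's weight ramps up over its 72 frames and back down as the
# next prompt takes over. The triangular shape here is purely illustrative.
FRAMES_PER_PROMPT = 72

def prompt_weight(frame, prompt_index):
    """Crossfade weight for one prompt at a given frame (0.0 to 1.0)."""
    center = prompt_index * FRAMES_PER_PROMPT + FRAMES_PER_PROMPT / 2
    distance = abs(frame - center)
    return max(0.0, 1.0 - distance / FRAMES_PER_PROMPT)

# At any frame, at most two neighbouring prompts have nonzero weight,
# so the image drifts smoothly from one scene description to the next.
for frame in (0, 36, 72, 108):
    print(frame, [round(prompt_weight(frame, i), 2) for i in range(3)])
```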

There are other AI videos on YouTube, often putting the lyrics of a song into AI-generated form. But if you’ve worked with AI systems, you’ll notice that the background stays remarkably stable in [Xander]’s video as it goes through dozens of feedback loops. This is difficult to do, as you want to change the image’s content without changing its overall look. He had to write a decent amount of code to maintain temporal cohesion from frame to frame. Hopefully, we’ll see an open-source version of some of his improvements, as he mentioned on Twitter.

In the meantime, we get to sit back and enjoy something beautiful. If you still aren’t convinced that Stable Diffusion is a big deal, perhaps we can do a little more to change your mind.

Continue reading “AI Dreaming Of Time Travel”