All You Need For Artificial Intelligence Is A Commodore 64

Artificial intelligence has been around us far longer than the current hype suggests, with [Timothy J. O’Malley]’s 1985 book on AI projects for the Commodore 64 being one example of this. With AI defined as the theory and development of systems that can perform tasks normally requiring human intelligence (e.g. visual perception, speech recognition, decision-making), this book is a good introduction to the many ways in which computer systems have, for decades now, been able to learn, make decisions and in general become more human-like. Even if there’s no electronic personality behind the actions.

In the book’s first chapter, [Timothy] isn’t afraid to toss in some opinions about the true nature of intelligence and thinking. Starting from the notion that intelligence boils down to storing information and deriving meaning from the connections between stored pieces of information, he arrives at the idea of a basic AI such as one would use for the computer opponent in a game. A number of ways of implementing such an AI are explored in the first and subsequent chapters, using Towers of Hanoi, chess, Nim and other games.
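The book’s listings are in Commodore BASIC, but the flavor of this kind of game opponent translates readily. As a taste, here is a minimal Python sketch of a perfect Nim player using the classic binary (nim-sum) strategy; the pile sizes are our own example, not anything from the book:

```python
from functools import reduce
from operator import xor

def nim_move(piles):
    """Pick a move in Nim: return (pile_index, new_pile_size)."""
    nim_sum = reduce(xor, piles)
    if nim_sum == 0:
        # Already a lost position against perfect play: stall by
        # taking a single object from the largest pile.
        i = max(range(len(piles)), key=piles.__getitem__)
        return i, piles[i] - 1
    for i, count in enumerate(piles):
        target = count ^ nim_sum
        if target < count:
            return i, target  # this move makes the nim-sum zero
    raise AssertionError("unreachable: a nonzero nim-sum always has a winning move")

print(nim_move([3, 4, 5]))  # -> (0, 1): take two from the first pile
```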

After this we look at natural language processing – referencing ELIZA as an example – followed by heuristics, pattern recognition and AI for robotics. Although much of this may seem outdated in this modern age of LLMs and neural networks, it’s important to realize that much of what we consider ‘bleeding edge’ today has its roots in AI research performed in the 1950s and 1960s. As [Timothy] rightfully states in the final chapter, there is no real limit to how far you can push this type of AI as long as you have more hardware and storage to throw at the problem. This is how we ended up with datacenters full of GPU-equipped systems churning through vector-space calculations for the sake of today’s LLM and diffusion model take on ‘AI’.
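Speaking of ELIZA: its whole trick was keyword spotting plus canned response templates, and it still fits in a screenful of code. A toy Python sketch of the idea, with rules of our own invention rather than Weizenbaum’s original script:

```python
import re
import random

# Keyword rules: a regex and some response templates for the captured text.
RULES = [
    (r"I need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"I am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
]
# Pronoun reflection so "I" in the input becomes "you" in the reply.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    for pattern, responses in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(responses).format(*groups)
    return "Please go on."  # the classic fallback when no keyword matches

print(respond("I need a faster computer"))  # e.g. "Why do you need a faster computer?"
```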

Using a Commodore 64 to test the validity (or lack thereof) of modern claims is not a new idea, either; recently a group of researchers used one of these breadbin marvels to run an Ising model with a tensor network, outperforming IBM’s quantum processor. As they say, just because it’s new and shiny doesn’t necessarily mean that it’s actually better.

Mitre Wants The Feds To Play In Its Sandbox

If you haven’t worked with the US government, you might not know Mitre, a non-profit government research organization. Formed in 1958 at the U.S. Air Force’s direction to guide development of the SAGE computer system, its people are often the research experts who oversee government contracts or evaluate proposals. Now they are building a $20 million “AI Sandbox” for the Federal government to build AI prototypes.

Partnering with NVIDIA, Mitre will build the sandbox around an NVIDIA DGX SuperPOD system capable of an exaFLOP of 8-bit AI computation. Mitre reports this will increase their compute power for AI by two orders of magnitude.

Continue reading “Mitre Wants The Feds To Play In Its Sandbox”

AI Pet Door Rejects Dead Mice

If you have a pet with a little access door to the outside world, and that pet happens to be a cat, you’re likely on the receiving end of all kinds of lifeless little lagniappes. Don’t worry, it’s CES season out in Las Vegas and a company called Flappie has the solution — an AI-powered cat door that rejects dead mice and other would-be offerings.

Image by Nathan Ingraham via Engadget

It works about like you might expect — there’s a motion sensor and a night-vision camera on the exterior side of the door. Using Flappie’s “unique and proprietary” dataset, the door distinguishes between Tom and Jerry and keeps out unwanted guests with more than 90% accuracy. To do this, Flappie collected video of a lot of cats and prey in a variety of lighting conditions. There’s even a microchip detection system that will reject all cats but your own.

Thankfully, it’s not all automation. The prey detection system can be turned off entirely, and there are manual switches on the inside for locking and unlocking the door at will. You don’t even have to hook it up to the Internet, it seems.
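Flappie hasn’t published its firmware, so the details are anyone’s guess, but the decision logic as described amounts to a short loop. Here’s a purely hypothetical Python sketch; every sensor and classifier call here is a stub we made up, not anything from the product:

```python
import random

PREY_THRESHOLD = 0.5  # classifier score above this means "prey detected"

def read_chip():
    """Stub standing in for the door's pet microchip reader."""
    return "cat-1234"

def capture_frame():
    """Stub standing in for the exterior night-vision camera."""
    return b"\x00" * (640 * 480)

def prey_score(frame):
    """Stub standing in for Flappie's proprietary prey classifier."""
    return random.random()

def should_unlock(registered_ids, prey_detection_enabled=True):
    if read_chip() not in registered_ids:
        return False  # unknown cat: stay locked
    if prey_detection_enabled and prey_score(capture_frame()) > PREY_THRESHOLD:
        return False  # cat arrived with a "gift": stay locked
    return True       # registered cat, empty-mouthed: open up

print(should_unlock({"cat-1234"}))
```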

Americans will have to wait a while, as the company is rolling out the door in Switzerland and Germany first. No word on when the US launch will take place, but interested parties can expect to pay around $399.

Of course, this problem can be solved without AI as long as you’re willing to review the situation and unlock the door yourself.

Can Google’s New AI Read Your Datasheets For You?

We’ve seen a lot of AI tools lately, and, of course, we know they aren’t really smart, but they sure fool people into thinking they are actually intelligent. After all, these programs can only pick through their training data, and a lot depends on what they were trained on. When you use something like ChatGPT, for example, you assume it was trained on reasonable data. Sure, it might get things wrong anyway, but there’s also the danger that it simply doesn’t know what you are talking about. It would be like calling your company’s help desk and asking where you left your socks — they simply don’t know.

We’ve seen attempts to have AI “read” web pages or documents of your choice and then be able to answer questions about them. The latest is from Google with NotebookLM. It integrates a workspace where you can make notes, ask questions, and provide sources. The sources can be text snippets, documents from Google Drive, or PDF files you upload.
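Google hasn’t said exactly how NotebookLM grounds its answers, but the usual pattern behind tools like this is retrieval: find the passage in your uploaded sources most relevant to the question, and answer only from that. A toy Python sketch of the retrieval half, with made-up sources:

```python
def best_passage(question, passages):
    """Return the passage sharing the most words with the question, or None."""
    q_words = set(question.lower().split())
    def overlap(p):
        return len(q_words & set(p.lower().split()))
    best = max(passages, key=overlap)
    return best if overlap(best) > 0 else None

# Hypothetical "uploaded sources" standing in for your datasheets.
sources = [
    "The MAX232 converts RS-232 voltage levels to TTL levels.",
    "Decoupling capacitors should sit close to the supply pins.",
]
hit = best_passage("What does the MAX232 do?", sources)
print(hit or "Not covered by your sources.")  # refuses to guess beyond the sources
```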

You can’t ask questions until you upload something, and we presume the AI restricts its answers to what’s in the documents you provide. It still won’t be perfect, but at least it won’t just give you bad information from an unknown source. Continue reading “Can Google’s New AI Read Your Datasheets For You?”

AI Image Generation Sharpens Your Bad Photos And Kills Photography?

We don’t fully understand the appeal of asking an AI for a picture of a gorilla eating a waffle while wearing headphones. However, [Micael Widell] shows something in a recent video that might be the best use we’ve seen yet of DALL-E 2. Instead of concocting new photos, you can apparently use the same technology to clean up your own rotten pictures. You can see his video below; the part about DALL-E 2 editing starts at about the 4:45 mark.

[Nicholas Sherlock] fed the AI a picture of a fuzzy ladybug and asked it to bring the subject into focus. It did. He also fed in some other pictures and asked for subtle variations of them. It did a pretty good job of that, too.
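Variations are also exposed through OpenAI’s API, so you can try this on your own shots. A minimal sketch using the openai Python SDK (v1-style client; at the time of writing the variations endpoint is DALL-E 2 only, and “ladybug.png” is a placeholder for your own square PNG):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Ask DALL-E 2 for two variations of an existing photo.
with open("ladybug.png", "rb") as f:
    result = client.images.create_variation(image=f, n=2, size="1024x1024")

for item in result.data:
    print(item.url)  # links to the generated variations
```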

Continue reading “AI Image Generation Sharpens Your Bad Photos And Kills Photography?”

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house, and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn’t seem to do that very well, and [8BitsAndAByte] is left with labels that don’t match any of the objects she took pictures of. In some cases it didn’t even get close: the model thought an apple was a person’s head and a rotary dial phone was a cup. Go figure. It didn’t really seem to bother her though, and she got a pretty good laugh out of the whole thing.

It appears the model detects all objects in the image but only prints the label for the object it’s most certain about. So maybe part of her problem is that there were just too many objects in the background? If that’s the case, you could probably improve the results by placing each object against a neutral background, which should confuse the AI a lot less. Or try a different classifier altogether. Or don’t, and just use it as a fun gag project at your next get-together. That works too.
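If that’s how it behaves, the whole round trip is one HTTP call plus a max(). A sketch in Python following DeepAI’s published API pattern; double-check the response field names against their DenseCap docs before wiring up a thermal printer:

```python
import requests

resp = requests.post(
    "https://api.deepai.org/api/densecap",
    files={"image": open("snapshot.jpg", "rb")},  # the Pi camera's photo
    headers={"api-key": "YOUR_DEEPAI_KEY"},       # placeholder key
)
captions = resp.json()["output"]["captions"]

# Mimic the project's observed behavior: keep only the single
# caption the model is most confident about.
best = max(captions, key=lambda c: c["confidence"])
print(best["caption"])
```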

Cool project [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?

Continue reading “A Label Maker That Uses AI Really Poorly”

Robotic Biped Walks On Inverse Kinematics

Robotics projects are always a favorite for hackers. Being able to almost literally bring your project to life evokes a special kind of joy that really drives our wildest imaginations. We imagine this is one of the inspirations for the boom in interactive technologies that are flooding the market these days. Well, [Technovation] had the same thought and decided to build a fully articulated robotic biped.

Each leg has pivot points at the foot, knee, and hip, mimicking the articulation of the human leg. To control the robot’s movements, [Technovation] uses inverse kinematics, a method of calculating joint movements rather than explicitly programming them. The user inputs the end coordinates of each foot, as opposed to each individual joint angle, and a special function outputs the joint angles necessary to reach each end coordinate. This part of the software is well commented and worth your time to dig into.
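To make the idea concrete, here’s a minimal Python sketch of the planar two-link case: given a foot target relative to the hip, the law of cosines yields the knee and hip angles. The link lengths are hypothetical placeholders, not [Technovation]’s actual dimensions:

```python
from math import acos, atan2, sqrt

THIGH = 80.0  # mm, hypothetical hip-to-knee length
SHIN = 80.0   # mm, hypothetical knee-to-foot length

def leg_ik(x, y):
    """Return (hip, knee) angles in radians for a foot target (x, y)
    measured from the hip joint, in the plane of the leg."""
    d = sqrt(x * x + y * y)
    if d > THIGH + SHIN or d < abs(THIGH - SHIN) or d == 0:
        raise ValueError("target out of reach")
    # Law of cosines: interior knee angle of the hip-knee-foot triangle
    # (pi radians means the leg is fully straightened).
    knee = acos((THIGH**2 + SHIN**2 - d**2) / (2 * THIGH * SHIN))
    # Hip angle: direction to the target plus the triangle's angle at the hip.
    hip = atan2(y, x) + acos((THIGH**2 + d**2 - SHIN**2) / (2 * THIGH * d))
    return hip, knee

print(leg_ik(0.0, -140.0))  # foot 140 mm straight below the hip
```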

In case you want to change the height of the robot or its stride length, [Technovation] provides a few global constants in the firmware that will automatically adjust the calculations to fit the new robot’s dimensions. Of all the various aspects of this project, the detailed write-up impressed us the most. The robot was designed in Fusion 360 and the parts were 3D printed allowing for maximum design flexibility for the next hacker.

Maybe [Technovation’s] biped will help resurrect the social robot craze. Until then, happy hacking.

Continue reading “Robotic Biped Walks On Inverse Kinematics”