Digital Squid’s Behavior Shaped By Neural Network

In the 90s, a video game craze took over the youth of the world. Unlike today's games that rely on powerful PCs or consoles, these were simple, standalone devices with monochrome screens, each home to a digital pet. Often clipped to a keychain, they could travel everywhere with their owner, which was ideal from the pet's perspective since, like real animals, they needed attention around the clock. [ViciousSquid] is updating this 90s idea for the 2020s with a digital pet squid that uses a neural network to shape its behavior.

The neural network that controls the squid's behavior takes a large number of variables into account, including whether it's hungry or sleepy and whether it sees food. The network adapts as different conditions are encountered, allowing the squid to make decisions and strengthen its decision-making over time. [ViciousSquid] is using a Hebbian learning algorithm, which strengthens connections between neurons that frequently activate together. Additionally, the squid can form both short- and long-term memories, and the neural network can even form new neurons on its own as needed.
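For the curious, Hebbian learning is simple enough to sketch in a few lines. The snippet below is a minimal, hypothetical example rather than the project's actual code: the state neurons (hunger, sleepiness, seeing food) and action neurons (eat, sleep, wander) are our own stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(3, 3))    # state -> action connections
learning_rate = 0.05

def hebbian_step(state, weights):
    """Pick an action, then strengthen connections between co-active neurons."""
    drive = state @ weights                      # how strongly each action is driven
    action = np.zeros_like(drive)
    action[np.argmax(drive)] = 1.0               # winner-take-all action selection

    # Hebb's rule: neurons that fire together wire together.
    weights = weights + learning_rate * np.outer(state, action)
    return action, weights * 0.999               # mild decay keeps weights bounded

# A squid that is hungry and sees food should come to favour "eat" over time.
state = np.array([1.0, 0.0, 1.0])                # hungry=1, sleepy=0, sees_food=1
for _ in range(100):
    action, weights = hebbian_step(state, weights)
```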

[ViciousSquid] is still working on this project, and hopes to eventually implement a management system that lets the various behavior variables be tracked over time, allowing the squid to act in a way more reminiscent of the 90s digital pets it's modeled after. It's an interesting and fun take on those games, though, and much of the code is available on GitHub for others to experiment with as well. For those looking for the original 90s games, head over to this project where an emulator for Tamagotchis was created using modern microcontroller platforms.

An illustration of two translucent blue hands knitting a DNA double helix from three colors of yarn, captioned: "Evo 2 doesn't just copy existing DNA -- it creates truly new sequences not found in nature that scientists can test for useful properties."

LLMs Coming For A DNA Sequence Near You

While tools like CRISPR have blown the field of genome hacking wide open, being able to predict what will happen when you tinker with the code underlying the living things on our planet is still tricky. Researchers at Stanford hope their new Evo 2 DNA generative AI tool can help.

Trained on a dataset of over 100,000 organisms, from bacteria to humans, the system can quickly determine which mutations contribute to certain diseases and which mutations are mostly harmless. According to the researchers, one "area we are hopeful about is using Evo 2 for designing new genetic sequences with specific functions of interest."

To that end, the system can also generate gene sequences from a starting prompt, like any other LLM, and cross-reference the results to see whether the sequence already occurs in nature, which helps predict what it might do in real life. These synthetic sequences can then be made using CRISPR or similar techniques in the lab for testing. While the prospect of building our own Moya is exciting, we do wonder what possible negative consequences could come from this technology, despite the hand-wavy mention of not training the model on viruses "to prevent Evo 2 from being used to create new or more dangerous diseases."
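To get a feel for that generate-then-check workflow, here's a deliberately tiny stand-in: a first-order Markov chain plays the part of the generative model and a substring search plays the part of looking the result up in nature. None of this is Evo 2's real code, API, or data; it just illustrates the loop.

```python
import random

# Made-up reference "genome" standing in for a real sequence database.
reference_genomes = {
    "toy_bacterium": "ATGGCGTACGTTAGCATCGATCGATTACGGATCCGATAGCTAGGCTA",
}

# Transition choices a real model would learn from its training data.
transitions = {"A": "CGTA", "C": "GATC", "G": "ATCG", "T": "ACGT"}

def generate(prompt, length=20, seed=0):
    """Continue a DNA prompt one base at a time, like autoregressive sampling."""
    random.seed(seed)
    seq = prompt
    for _ in range(length):
        seq += random.choice(transitions[seq[-1]])
    return seq

def found_in_nature(seq):
    """Check whether the generated sequence already occurs in a reference."""
    return any(seq in genome for genome in reference_genomes.values())

candidate = generate("ATGGCG")
print(candidate, "known" if found_in_nature(candidate) else "novel")
```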

We’ve got you covered if you need to get your own biohacking space set up for DNA gels, or if you want to find out more about powering living computers using electricity. If you’re more curious about other interesting uses for machine learning, how about a dolphin translator or discovering better battery materials?

Software Project Pieces Broken Bits Back Together

With all the attention on LLMs (Large Language Models) and image generators lately, it’s nice to see some of the more niche and unusual applications of machine learning. GARF (Generalizable 3D reAssembly for Real-world Fractures) is one such project.

GARF may play fast and loose with acronym formation, but it certainly knows how to be picky when it counts. Its whole job is to look at the pieces of a broken object and accurately figure out how to fit the pieces back together, even if there are some missing bits or the edges aren’t clean.

Re-assembling an object from imperfect fragments is a nontrivial undertaking.

Efficiently and accurately figuring out how to reassemble different pieces into a whole is not a trivial task. One might think it could be brute-forced in theory, but the complexity of such a job rapidly becomes immense, which is where machine learning methods come in. The researchers' system addresses the challenge of generalizing from a synthetic dataset (in which computer-generated objects are broken and analyzed for training) and successfully applying what it learns to the kinds of highly complex breakage patterns seen in real-world objects like bones, recovered archaeological artifacts, and more.
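GARF's learned pipeline is far more involved, but the geometric heart of mating two fragments is the classic rigid-alignment problem. Below is a small sketch of the Kabsch algorithm that assumes the point correspondences between the two fracture surfaces are already known, which is precisely the part GARF has to learn from messy real-world scans.

```python
import numpy as np

def kabsch(source, target):
    """Return rotation R and translation t that best map source onto target."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Example: one fragment surface rotated and shifted away from its mating piece.
rng = np.random.default_rng(1)
piece_a = rng.normal(size=(50, 3))
angle = np.pi / 5
true_R = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
piece_b = piece_a @ true_R.T + np.array([0.3, -0.1, 0.7])

R, t = kabsch(piece_a, piece_b)
print(np.allclose(piece_a @ R.T + t, piece_b))                 # True
```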

The system is essentially a highly adept 3D puzzle solver, but an entirely different beast from something like this jigsaw puzzle solving pick-and-place robot. Instead of working on flat pieces with clean, predictable edges, it handles 3D-scanned fragments with complex break patterns, even if the edges are imperfect or some pieces are missing.

GARF is exactly the kind of software framework that is worth keeping in the back of one’s mind just in case it comes in handy some day. The GitHub repository contains the code (although at this moment the custom dataset is not yet uploaded) but there is also a demo available for the curious.

A blue-gloved hand holds a glass plate with a small off-white rectangular prism approximately one quarter the area of a fingernail in cross-section.

AI Helps Researchers Discover New Structural Materials

Nanostructured metamaterials have shown a lot of promise in what they can do in the lab, but often have fatal stress concentration factors that limit their applications. Researchers have now found a strong, lightweight nanostructured carbon. [via BGR]

Using a multi-objective Bayesian optimization (MBO) algorithm trained on finite element analysis (FEA) datasets to identify the best candidate nanostructures, the researchers then brought the theoretical material to life with two-photon polymerization (2PP) photolithography. The resulting “carbon nanolattices achieve the compressive strength of carbon steels (180–360 MPa) with the density of Styrofoam (125–215 kg m⁻³) which exceeds the specific strengths of equivalent low-density materials by over an order of magnitude.”
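For a flavor of how such a search works, here's a heavily simplified sketch. A toy analytic model stands in for the FEA of each candidate lattice, and the two objectives (high strength, low density) are collapsed into a single weighted score instead of being handled with a true multi-objective method. It assumes the scikit-optimize package is available, and the lattice parameters and scaling laws are invented for the demo.

```python
from skopt import gp_minimize

def fea_surrogate(strut_diameter_um, unit_cell_um):
    """Pretend FEA: returns (strength_mpa, density_kg_m3) for a candidate lattice."""
    relative_density = (strut_diameter_um / unit_cell_um) ** 2
    strength_mpa = 900.0 * relative_density ** 1.5      # toy scaling law
    density_kg_m3 = 1800.0 * relative_density            # toy solid-carbon density
    return strength_mpa, density_kg_m3

def objective(params):
    strut_diameter_um, unit_cell_um = params
    strength, density = fea_surrogate(strut_diameter_um, unit_cell_um)
    # Scalarized score to minimize: reward strength, penalize density.
    return -strength + 0.5 * density

result = gp_minimize(objective,
                     dimensions=[(0.1, 1.0), (2.0, 10.0)],   # strut, cell size (um)
                     n_calls=25, random_state=0)
print("best strut/cell:", result.x, "score:", result.fun)
```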

While you probably shouldn’t start getting investors for your space elevator startup just yet, lighter materials like this are promising for a lot of applications, most notably more conventional aviation, where fuel (or energy) prices are a big constraint on operations. As with any lab result, more work is needed before we see this in the real world, but it is nice to know that superalloys and composites aren’t the end of the road for strong and lightweight materials.

We’ve seen AI help identify battery materials already and this seems to be one avenue where generative AI isn’t just about making embarrassing photos or making us less intelligent.

Genetic Algorithm Runs On Atari 800 XL

For the last few years or so, the story in the artificial intelligence world that was accepted without question was that all of the big names in the field needed more compute, more resources, more energy, and more money to build better models. But simply throwing money and GPUs at these companies without question led to them getting complacent, and ripe to be upset by an underdog with a fraction of the computing resources and funding. Perhaps that should have been more obvious from the start, since people have been building various machine learning algorithms on extremely limited computing platforms, like this one built on the Atari 800 XL.

Unlike other models that use memory-intensive techniques like gradient descent to train their neural networks, [Jean Michel Sellier] is using a genetic algorithm to work within the confines of the platform. Genetic algorithms evaluate potential solutions by evolving them over many generations and keeping the ones that work best each time. The survivors can be modified in many ways before being put through the next round of evolution, but for a limited system like this, a quick approach is to make small random changes. [Jean]’s program, written in BASIC, performs 32 generations of evolution to predict the points that will lie on a simple mathematical function.
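The recipe is easy to sketch in Python, even if [Jean]'s version fits in Atari BASIC. In the snippet below the population size, mutation scale, and target function are our own assumptions rather than values from his program; only the idea of keeping the fittest candidates and applying small random mutations over 32 generations is carried over.

```python
import random

def target(x):
    return 0.5 * x * x - 2.0 * x + 1.0            # function whose points we predict

xs = [i / 4.0 for i in range(-8, 9)]              # sample points to fit

def fitness(genes):
    a, b, c = genes
    return -sum((a * x * x + b * x + c - target(x)) ** 2 for x in xs)

random.seed(0)
population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(20)]

for generation in range(32):                      # 32 generations, as in the article
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                    # keep the candidates that work best
    population = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                  for _ in range(20)]             # small random changes
    population[:len(survivors)] = survivors       # elitism: never lose the best

best = max(population, key=fitness)
print("evolved coefficients:", [round(g, 2) for g in best])
```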

While it is true that the BASIC program relies on stochastic methods to train, it does work, and it proves that certain machine learning models can effectively be created on limited hardware, in this case an 8-bit Atari running BASIC. In previous projects he’s also shown how similar computers can be used for other complex mathematical tasks. Of course an 8-bit machine like this won’t challenge OpenAI or Anthropic anytime soon, but looking for more efficient ways of running complex computations is always a more challenging and rewarding problem to solve than buying more computing resources.


Training A Self-Driving Kart

There are certain tasks that humans perform every day that are notoriously difficult for computers to figure out. Identifying objects in pictures, for example, is something that seems fairly straightforward but has only been done by computers with any semblance of accuracy in the last few years, and even then it can’t be done without huge amounts of computing resources. Similarly, driving a car is a surprisingly complex task that even companies promising full self-driving vehicles haven’t been able to deliver on, despite working on the problem for over a decade now. [Austin] demonstrates this difficulty in his latest project, which adds self-driving capabilities to a small go-kart.

[Austin] had been working on this project at the local park but grew tired of packing up all his gear when he wanted to work on his machine-learning algorithms. So he took all the self-driving equipment off of the first kart and incorporated it into a smaller kart with a very small turning radius so he could develop it in his shop.

He laid down some tape on the floor to create the track and then set up the vehicle to learn how to drive by watching and gathering data. A convolutional neural network is then trained on that data; the only inputs the model gets are images from cameras at the front of the kart. At first, it could only change the steering angle, with [Austin] controlling the throttle to prevent crashes. Eventually he gave it control of the throttle as well, which it handles well except at the fastest speeds.
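For a sense of what such a model looks like, here's a minimal PyTorch sketch of a convolutional network that maps a single camera frame to steering and throttle values. The layer sizes and the 120x160 input resolution are assumptions for illustration, not [Austin]'s actual architecture.

```python
import torch
import torch.nn as nn

class KartDriver(nn.Module):
    def __init__(self, outputs=2):                      # steering, throttle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # keeps the head input size fixed
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, outputs),
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# Training would regress recorded human steering/throttle against the frames.
model = KartDriver()
frame = torch.rand(1, 3, 120, 160)                      # one RGB camera frame
steering, throttle = model(frame)[0]
```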

There were plenty of challenges along the way, especially compared to the models trained at the park; [Austin] correctly theorized that the cause of the hardship in the park was a lack of contrast at the boundary between the track and any out-of-bounds areas. With a few tweaks to the track, as well as adding some wide-angle lenses to his cameras, he was able to get a model that works fairly well. Getting started on a project like this doesn’t have as high a barrier to entry as one might imagine, either. Take a look at this comprehensive open-source Python library for self-driving projects. If you want to start smaller, perhaps don’t start with a self-driving kart.


Artificial Intelligence Runs On Arduino

Fundamentally, an artificial intelligence (AI) is nothing more than a system that takes a series of inputs, makes some prediction, and then outputs that information. Of course, the types of AI in the news right now can handle a huge number of inputs and need server farms’ worth of compute to generate outputs of various forms, but at a basic level, there’s no reason a purpose-built AI can’t run on much less powerful hardware. As a demonstration, and to win a bet with a friend, [mondal3011] got an artificial intelligence up and running on an Arduino.

This AI isn’t going to do anything as complex as generate images or write clunky preambles to every recipe on the Internet, but it is still a functional and useful piece of software. It specifically handles the brightness of a single lamp, taking user input on acceptable brightness ranges in the room and outputting what it thinks the brightness of the lamp should be to match the user’s preferences. [mondal3011] also builds a set of training data for the AI to learn from, taking the lamp to various places around the house and letting it figure out where to set the brightness on its own. The training data is run through a linear regression model in Python, which generates the function the Arduino needs to automatically operate the lamp.
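The training step in a setup like this can be surprisingly small. Here's a hedged sketch of the idea: fit a linear regression in Python on some invented (ambient light, preferred lamp level) samples, then copy the resulting slope and intercept into the Arduino sketch. The data, pin, and ranges below are assumptions for illustration, not [mondal3011]'s actual values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# ambient light reading (0-1023 ADC counts) -> lamp PWM level the user liked (0-255)
ambient = np.array([[40], [120], [300], [520], [760], [980]])
lamp_pwm = np.array([230, 200, 150, 95, 40, 10])

model = LinearRegression().fit(ambient, lamp_pwm)
slope, intercept = model.coef_[0], model.intercept_
print(f"lamp = {slope:.4f} * ambient + {intercept:.1f}")

# On the Arduino, the learned "AI" then reduces to a single line, e.g.:
#   int lamp = constrain(slope * analogRead(A0) + intercept, 0, 255);
```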

Although this isn’t the most complex model, it does go a long way toward demonstrating the basic principles of using artificial intelligence to build a useful, working model and then taking that model into the real world. Note also that the model is generated on a more powerful computer before being ported over to the microcontroller platform, but that’s all par for the course in AI and machine learning. If you’re looking to take a step up from here, we’d recommend this robot that uses neural networks to learn how to walk.