Creating Video Games With AI: A Mario Example

Artificial intelligence (AI) seems to be doing everything these days: making images, making videos, and replacing most of us real human writers, if you believe the hype. Maybe it’s all over! And yet we persist, writing about yet another job being taken over by AI: creating video games.

The research paper is entitled “Video Game Generation: A Practical Study using Mario.” The basic idea is to see whether a generative AI model can create an interactive video game after being trained on an existing one.

MarioVGG, as it is called, is a “text-to-video model.” It hasn’t built the Mario game that you’re familiar with, though. It takes player commands as text inputs—such as “run” or “jump”—and then outputs video frames showing the result in the ‘game.’ The model was trained on a dataset of frame-by-frame Super Mario Bros. gameplay, combined with data on user inputs at the time. The model shows an ability to generate believable video output for given player inputs, including basic game physics, item interactions, and collisions. It’s able to do this in a chained way, so that it can reasonably simulate a player making multiple actions and moving through a level of the game.
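To make that chaining idea concrete, here is a minimal sketch of such a rollout loop. The class, tensor sizes, and method names below are placeholders of our own and not the paper’s actual API: each text action conditions a short clip on the final frame of the previous one.

```python
"""Sketch of a MarioVGG-style chained rollout: each action conditions a short
video clip on the last frame of the previous clip. The model here is a stub
placeholder; the real MarioVGG weights and interface are described in the paper."""
import numpy as np

H, W, FRAMES_PER_CLIP = 64, 64, 8  # illustrative sizes, not the paper's

class StubMarioVGG:
    """Placeholder standing in for the trained text+frame -> video model."""
    def generate(self, action_text: str, context_frame: np.ndarray) -> np.ndarray:
        # Real model: generate a clip conditioned on the action text and frame.
        # Stub: just repeat the context frame so the chaining logic is visible.
        return np.repeat(context_frame[None, ...], FRAMES_PER_CLIP, axis=0)

model = StubMarioVGG()
frame = np.zeros((H, W, 3), dtype=np.float32)   # initial game frame

clips = []
for action in ["run", "run", "jump", "run"]:
    clip = model.generate(action_text=action, context_frame=frame)
    clips.append(clip)
    frame = clip[-1]                            # chain on the clip's last frame

video = np.concatenate(clips, axis=0)           # one continuous "playthrough"
print(video.shape)                              # (32, 64, 64, 3)
```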

It’s not like playing a real Mario game yet, by any means. Regardless, the AI model has shown an ability to replicate the world of the game in a way that behaves relatively consistently with its established rules. If you’re in the field of video game development, though, you probably don’t have a lot to worry about just yet—you probably moved past making basic Mario clones years ago, so you’ve got quite an edge for now!

What’s The Deal With AI Art?

A couple of weeks ago, we had a kerfuffle here on Hackaday: a writer put out a piece with AI-generated headline art. It was, honestly, pretty good, but it was also subject to all of the usual horrors that get generated along the way. If you have played around with any of the image generators, you know the uncanny AI-art style: it looks good enough at first glance, but then you notice limbs in the wrong place if you look hard enough. We replaced it shortly after an editor noticed.

The story is that the writer couldn’t find any nice visuals to go with the blog post, which was about encoding data in QR codes and printing them out for storage. This is a problem we have frequently here, actually. When people write up a code hack, for instance, there’s usually just no good image to go along with it. Our writers have to get creative. In this case, he tossed it off to Stable Diffusion.

Some commenters were afraid that this meant we were outsourcing work from our fantastic, and very human, art director Joe Kim, whose trademark style you’ve seen on many of our longer-form original articles. Of course we’re not! He’s a genius, and when we tell him we need some art on topics ranging from refining cobalt to generating static electricity with Wimshurst machines, he comes through. I think all of us have probably wanted to make a poster out of one or more of his headline art pieces. Joe is a treasure.

But for our daily blog posts, which cover your works, we usually just use a picture of the project. We can’t ask Joe to make ten pieces of art per day, and we never have. At least as far as Hackaday is concerned, AI-generated art is just as good as finding some cleared-for-use clip art out there, right?

Except it’s not. There is a lot of uncertainty about the data that the algorithms are trained on, and whether the copyright of the original artists was respected or needed to be, ethically or legally. Some people even worry that the whole thing is going to bring about the end of Art. (They worried about this at the introduction of the camera as well.) But then there are also the extra limbs, and AI-generated art’s clichéd styles, which we fear will get old and boring once we’re all saturated with them.

So we’re not using AI-generated art as a policy for now, but that’s not to say that we don’t see both the benefits and the risks. We’re not Luddites, after all, but we are also in favor of artists getting paid for their work, and of respect for the commons when people copyleft license their images. We’re very interested to see how this all plays out in the future, but for now, we’re sitting on the sidelines. Sorry if that means more headlines with colorful code!

Creating A Twisted Grid Image Illusion With A Diffusion Model

Images that can be interpreted in a variety of ways have existed for many decades, with the classical example being Rubin’s vase — which some viewers see as a vase, and others a pair of human faces.

When the duck becomes a bunny, if you ignore the graphical glitches that used to be part of the duck. (Credit: Steve Mould, YouTube)

Where things get trickier is if you want to create an image that changes into something else that still looks realistic when you rotate each section of it within a 3×3 grid. In a video, [Steve Mould] explains how this can be accomplished by using a diffusion model to identify similar characteristics of two images and to create an output image that effectively contains essential features of both.

Naturally, this process can be done by hand too, with the goal always being to create a plausible image in either orientation that has enough detail to trick the brain into filling in the rest, sending the viewer down the path of interpreting what the eye sees as a duck, a bunny, a vase, or the outline of faces.

Using a diffusion model to create such illusions is quite a natural fit, as it works by filling in noise until a plausible enough image begins to appear. Of course, whether it is a viable image is ultimately determined not by the model but by the viewer, as humans are susceptible to such illusions while machine vision still struggles to distinguish a cat from a loaf and a raisin bun from a spotted dog. The imperfections of diffusion models would seem to be a benefit here, as the model will happily churn through abstractions and iterations with no understanding or interpretive bias, while the human can steer it towards a viable interpretation.
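The video explains the exact recipe; one common way to build this kind of two-view illusion with a diffusion model (this particular recipe is our assumption, not necessarily the one [Steve Mould] describes) is to run a single denoising loop while averaging the model’s noise estimates between the image as-is under one prompt and the grid-twisted image under the other. The sketch below shows that skeleton, with a stub standing in for a real text-conditioned diffusion model and a 180-degree rotation of each grid cell standing in for the video’s transform.

```python
"""Sketch of a two-view diffusion illusion: at every denoising step, average
the noise predictions seen from both orientations so the final image reads
plausibly either way. The noise predictor is a stub; a real build would call
an actual diffusion model."""
import numpy as np

rng = np.random.default_rng(0)
SIZE, STEPS = 96, 50

def rotate_grid(img: np.ndarray) -> np.ndarray:
    """Rotate each cell of a 3x3 grid by 180 degrees (self-inverse 'twist')."""
    cell = SIZE // 3
    out = img.copy()
    for r in range(3):
        for c in range(3):
            block = img[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            out[r*cell:(r+1)*cell, c*cell:(c+1)*cell] = block[::-1, ::-1]
    return out

def predict_noise(img: np.ndarray, prompt: str) -> np.ndarray:
    """Stub for a text-conditioned diffusion model's noise estimate."""
    return rng.normal(size=img.shape) * 0.1

x = rng.normal(size=(SIZE, SIZE))                   # start from pure noise
for _ in range(STEPS):
    eps_a = predict_noise(x, "a duck")                              # view 1: as-is
    eps_b = rotate_grid(predict_noise(rotate_grid(x), "a bunny"))   # view 2: twisted, mapped back
    x = x - (eps_a + eps_b) / 2 * (1.0 / STEPS)     # crude shared denoising step
print(x.shape)
```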

Continue reading “Creating A Twisted Grid Image Illusion With A Diffusion Model”

Large Language Models On Small Computers

As technology progresses, we generally expect processing capabilities to scale up. Every year, we get more processor power, faster speeds, greater memory, and lower cost. However, we can also use improvements in software to get things running on what might otherwise be considered inadequate hardware. Taking this to the extreme, while large language models (LLMs) like GPT are running out of data to train on and having difficulty scaling up, [DaveBben] is experimenting with scaling down instead, running an LLM on the smallest computer that could reasonably run one.

Of course, some concessions have to be made to get an LLM running on underpowered hardware. In this case, the computer of choice is an ESP32, so the model was shrunk from the trillions of parameters of something like GPT-4, or even the hundreds of billions of GPT-3, down to only 260,000. The weights come from the tinyllamas checkpoint, and llama2.c is the implementation that [DaveBben] chose for this setup, as it can be streamlined to run a bit better on something like the ESP32. The specific chip is the ESP32-S3FH4R2, which was chosen for its large amount of RAM compared to other versions, since even this small model needs a minimum of 1 MB to run. It also has two cores, which will both work as hard as possible under (relatively) heavy loads like these, and the clock speed of the CPU can be maxed out at around 240 MHz.
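As a quick sanity check on that 1 MB figure, 260,000 float32 weights already come out to roughly a megabyte before any activations or KV cache are accounted for. A minimal back-of-the-envelope sketch, assuming the stock float32 llama2.c path rather than a quantized build:

```python
# Rough memory footprint of a 260K-parameter model stored as 32-bit floats,
# which is what the stock llama2.c inference path assumes.
params = 260_000
bytes_per_param = 4                                      # float32
weights_mib = params * bytes_per_param / (1024 ** 2)
print(f"~{weights_mib:.2f} MiB just for the weights")    # ~0.99 MiB, before activations
```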

Admittedly, [DaveBben] is mostly doing this just to see if it can be done since even the most powerful of ESP32 processors won’t be able to do much useful work with a large language model. It does turn out to be possible, though, and somewhat impressive, considering the ESP32 has about as much processing capability as a 486 or maybe an early Pentium chip, to put things in perspective. If you’re willing to devote a few more resources to an LLM, though, you can self-host it and use it in much the same way as an online model such as ChatGPT.

DIY Rabbit R1 Clone Could Be Neat With More Hardware

The Teenage Engineering badging usually appears on some cool gear that almost always costs a great deal of money. One such example is the Rabbit R1, an AI-powered personal assistant that retails for $199. It was also revealed that it’s basically a small device running a simple Android app. That raises the question: could you build your own dupe for $20? That’s what [Thomas the Maker] did.

Meet Rappit. It’s basically [Thomas]’s take on an AI friend that doesn’t break the bank. It runs on a Raspberry Pi Zero 2W, which has the benefit of integrated wireless connectivity on board. It’s powered by rechargeable AA batteries or a USB power bank to keep things simple. [Thomas] then wrapped it all up in a cute 3D printed enclosure to give it some charm.

It’s the software that makes the Rappit what it is. Rather than including a screen, microphone, or speakers on the device itself, [Thomas] interacts with the Pi-based device via smartphone. This makes it a less convincing dupe of the self-contained Rabbit R1, but the basic concept is the same. [Thomas] can make queries of the Rappit via a simple Android or iOS app he created called “Comfyspace,” and the Rappit responds with the aid of Google’s Gemini AI.
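For a sense of how little glue code such a build needs, here is a minimal sketch of a Pi-side service that accepts a text query from a phone app and relays it to Gemini. This is not [Thomas]’s actual Rappit or Comfyspace code; the Flask route, model name, and environment variable are all our own assumptions.

```python
"""Minimal sketch of how a Pi Zero 2W could field a text query from a phone
app and answer with Gemini. Not the actual Rappit/Comfyspace implementation;
the route, model choice, and env var are assumptions."""
import os
from flask import Flask, request, jsonify
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])   # assumed env var
model = genai.GenerativeModel("gemini-1.5-flash")        # assumed model choice

app = Flask(__name__)

@app.route("/ask", methods=["POST"])
def ask():
    # Phone app POSTs {"query": "..."}; the Pi relays it to Gemini and returns text.
    query = request.get_json().get("query", "")
    reply = model.generate_content(query)
    return jsonify({"answer": reply.text})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # phone connects over the local network
```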

If you really want to duplicate the trend of standalone AI assistants, you need standalone hardware. To that end, the Rappit design could really benefit from a screen, microphone, speaker, and speech synth. Honestly, though, that would only take a few hours’ extra work compared to what [Thomas] has already done here. As it is, [Thomas] could simply throw away the Raspberry Pi and just use the smartphone with Gemini directly, right? But he chose this route of using the smartphone as an interface to keep costs down by minimizing hardware outlay.

If you want a real Rabbit R1, you can order one here. We’ve discussed controversy around the device before, too. Video after the break.

Continue reading “DIY Rabbit R1 Clone Could Be Neat With More Hardware”

Taco Bell To Bring Voice AI Ordering To Hundreds Of US Drive-Throughs

Drive-throughs are a popular feature at fast-food places, where you can get some fast grub without even leaving your car. For the fast-food companies running them, they are also a big focus of automation, the ideal being a voice assistant that can take orders and pass them on to the (still human) staff. This is presumably because drive-through customers can’t be pushed onto the touchscreen-equipped order kiosks that are common these days. Now pushing for this drive-through automation is Taco Bell, or more specifically its parent company, Yum Brands.

Interestingly enough, this comes shortly after McDonald’s deemed its own drive-through voice assistant a failure and removed it. Meanwhile, multiple Taco Bell locations across 13 US states and five KFC restaurants in Australia are trialing the system, with results apparently encouraging enough to start expanding it. Company officials are cited as saying it has ‘improved order accuracy’, ‘decreased wait times’ and ‘increased profits’. Considering that the McDonald’s experience was pretty much the exact opposite in all of these categories, we will wait with bated breath. Feel free to share your Taco Bell or other voice-AI-enabled drive-through experiences in the comments. Maybe whoever Yum Brands contracted for their voice assistant did a surprisingly decent job, which would be a pleasant change.

Top image: Taco Bell – Vadnais Heights, MN (Credit: Gabriel Vanslette, Wikimedia)

AI Image Generator Twists In Response To MIDI Dials, In Real-time

MIDI isn’t just about music, as [Johannes Stelzer] shows by using dials to adjust AI-generated imagery in real-time. The results are wild, with an interactivity to them that we don’t normally see in such things.

[Johannes] uses Stable Diffusion‘s SDXL Turbo to create a baseline image of “photo of a red brick house, blue sky”. The hardware dials act as manual controls for applying different embeddings to this baseline, such as “coral”, “moss”, “fire”, “ice”, “sand”, “rusty steel” and “cookie”.

By adjusting the dials, those embeddings are applied to the base image in varying strengths. The results are generated on the fly and are pretty neat to see, especially since there is no appreciable amount of processing time required.
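The core of the interaction is simply mapping each knob’s MIDI control-change value onto an embedding strength. Here is a minimal sketch of that mapping using the mido library; the project itself wires the controller up through a dedicated toolkit (see below), so the CC numbers here are assumed and the generation call is only a stub.

```python
"""Sketch of the dial-to-strength mapping: read MIDI control-change messages
with mido and turn each knob's 0-127 value into an embedding weight. The CC
number mapping is an assumption, and regenerate() is a stub for the real
SDXL Turbo call."""
import mido

# Assumed mapping of controller CC numbers to embedding concepts.
CC_TO_EMBEDDING = {20: "coral", 21: "moss", 22: "fire", 23: "ice",
                   24: "sand", 25: "rusty steel", 26: "cookie"}
weights = {name: 0.0 for name in CC_TO_EMBEDDING.values()}

def regenerate(base_prompt: str, weights: dict) -> None:
    """Stub: a real build would re-run SDXL Turbo with these embedding strengths."""
    active = {k: round(v, 2) for k, v in weights.items() if v > 0}
    print(f"{base_prompt} + {active}")

with mido.open_input() as port:           # first available MIDI input device
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_EMBEDDING:
            # Scale the 0-127 CC value into a 0.0-1.0 embedding strength.
            weights[CC_TO_EMBEDDING[msg.control]] = msg.value / 127.0
            regenerate("photo of a red brick house, blue sky", weights)
```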

The MIDI controller is integrated with the help of lunar_tools, a software toolkit on GitHub to facilitate creating interactive exhibits. As for the image end of things, we’ve previously covered how AI image generators work.