So far, food for astronauts hasn’t exactly been haute cuisine. Freeze-dried cereal cubes, squeezable tubes filled with what amounts to baby food, and meals reconstituted with water from a fuel cell hardly qualify as meals to write home about. And judging by recent research into turning asteroids into astronaut food, things aren’t going to get better anytime soon. The work comes from Western University in Canada and proposes that carbonaceous asteroids like the recently explored Bennu be converted into edible biomass by bacteria. The exact bugs go unmentioned, but when fed simulated asteroid material, they’re said to produce a substance similar in texture and appearance to a “caramel milkshake.” Having grown hundreds of liters of bacterial cultures in the lab, we agree that liquid cultures spun down in a centrifuge can look tasty, but if the smell is any indication, the taste probably won’t live up to expectations. Still, when a 500-meter-wide chunk of asteroid can produce enough nutritionally complete food to sustain between 600 and 17,000 astronauts for a year without having to ship it up the gravity well, concessions will likely be made. We expect this won’t apply to the nascent space tourism industry, which for the foreseeable future will probably build its customer base on deep-pocketed thrill-seekers, a group not known for its ability to compromise on creature comforts.
Creating Video Games With AI: A Mario Example
Artificial intelligence (AI) seems to be doing everything these days. Making images, making videos, and replacing most of us real human writers if you believe the hype. Maybe it’s all over! And yet, we persist, to write about yet another job taken over by AI: creating video games.
The research paper is entitled “Video Game Generation: A Practical Study using Mario.” The basic idea is to see whether a generative AI model can produce an interactive video game after first being trained on an existing one.
MarioVGG, as it is called, is a “text-to-video model.” It hasn’t rebuilt the Mario game you’re familiar with, though. It takes player commands as text inputs—such as “run” or “jump”—and then outputs video frames showing the result in the “game.” The model was trained on a dataset of frame-by-frame Super Mario Bros. gameplay, paired with the corresponding user inputs. The model shows an ability to generate believable video output for given player inputs, including basic game physics, item interactions, and collisions. It can do this in a chained way, so it can reasonably simulate a player taking multiple actions and moving through a level of the game.
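That chained generation boils down to an autoregressive loop: condition on the last few frames plus a text action, generate a short clip, append it to the context, and repeat. Here’s a minimal sketch of that control flow, with a stub standing in for the actual model; the context and clip lengths are our assumptions, not numbers from the paper.

```python
from collections import deque
import numpy as np

CONTEXT_FRAMES = 4    # assumed size of the frame context window
FRAMES_PER_STEP = 7   # assumed number of frames generated per action

def generate_frames(context, action, n=FRAMES_PER_STEP):
    # Stub standing in for the video model: the real system would run a
    # forward pass conditioned on `context` frames and the `action` text.
    return [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(n)]

# Seed the context with blank frames; a real run would start from game frames.
frames = deque([np.zeros((64, 64, 3), dtype=np.uint8)] * CONTEXT_FRAMES,
               maxlen=CONTEXT_FRAMES)

for action in ["run", "run", "jump", "run"]:
    new_frames = generate_frames(list(frames), action)
    frames.extend(new_frames)  # the next step continues from the latest frames
```

Because each step only ever sees the most recent frames, errors can compound over long action sequences, which is one reason the output still falls short of a real, playable game.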
It’s not like playing a real Mario game yet, by any means. Regardless, the AI model has shown an ability to replicate the world of the game in a way that behaves relatively consistently with its established rules. If you’re in the field of video game development, though, you probably don’t have a lot to worry about just yet—you probably moved past making basic Mario clones years ago, so you’ve got quite an edge for now!
Hackaday Links: September 29, 2024
There was movement on the “AM Radio for Every Vehicle Act” last week, with the bill advancing out of the US House of Representatives Energy and Commerce Committee and heading to a full floor vote. For those not playing along at home, auto manufacturers have been moving toward deleting AM radios from cars because they’re too susceptible to all the RF interference generated by modern vehicles. The trouble with that is that the government has spent a lot of effort making AM broadcasters the centerpiece of a robust and survivable emergency communications system, one that reaches 90% of the US population.
The bill would require cars and trucks manufactured or sold in the US to be equipped to receive AM broadcasts without additional fees or subscriptions, and it seems to enjoy bipartisan support in both the House and the Senate. Critics of the bill will likely point out that while the AM broadcast system is a fantastic resource for emergency communications, if nobody is listening when an event happens, what’s the point? That’s fair, but short-sighted; emergency communications isn’t just about warning people that something is about to happen, but also about coordinating the response after the fact. We imagine Hurricane Helene’s path of devastation from Florida to Pennsylvania this week, and the subsequent emergency response, might bring that fact into focus a bit.
What’s The Deal With AI Art?
A couple of weeks ago, we had a kerfuffle here on Hackaday: a writer put out a piece with AI-generated headline art. It was, honestly, pretty good, but it was also subject to all of the usual horrors that get generated along the way. If you’ve played around with any of the image generators, you know the uncanny AI-art style: it looks good enough at first glance, but look harder and you’ll notice limbs in the wrong places. We replaced the image shortly after an editor noticed.
The story is that the writer couldn’t find any nice visuals to go with the blog post, which was about encoding data in QR codes and printing them out for storage. This is actually a problem we have frequently here. When people write up a code hack, for instance, there’s usually just no good image to go along with it. Our writers have to get creative. In this case, he tossed it off to Stable Diffusion.
Some commenters were afraid that this meant we were outsourcing work from our fantastic, and very human, art director Joe Kim, whose trademark style you’ve seen on many of our longer-form original articles. Of course we’re not! He’s a genius, and when we tell him we need some art on topics ranging from refining cobalt to generating static electricity with Wimshurst machines, he comes through. I think all of us have probably wanted to make a poster out of one or more of his headline art pieces. Joe is a treasure.
But for our daily blog posts, which cover your works, we usually just use a picture of the project. We can’t ask Joe to make ten pieces of art per day, and we never have. At least as far as Hackaday is concerned, AI-generated art is just as good as finding some cleared-for-use clip art out there, right?
Except it’s not. There is a lot of uncertainty about the data the algorithms are trained on, and whether the copyright of the original artists was respected, or needed to be, either ethically or legally. Some people even worry that the whole thing is going to bring about the end of Art. (They worried about this at the introduction of the camera as well.) But then there are also the extra limbs, and AI-generated art’s clichéd styles, which we fear will get old and boring once we’re all saturated with them.
So we’re not using AI-generated art as a policy for now, but that’s not to say that we don’t see both the benefits and the risks. We’re not Luddites, after all, but we are also in favor of artists getting paid for their work, and of respect for the commons when people copyleft license their images. We’re very interested to see how this all plays out in the future, but for now, we’re sitting on the sidelines. Sorry if that means more headlines with colorful code!
Building AI Models To Diagnose HVAC Issues
HVAC – heating, ventilation, and air conditioning – can account for a huge share of a building’s energy usage, whether it’s residential or industrial. Often it’s the majority energy consumer, especially in places with extreme climates or in facilities like data centers where cooling is a major design consideration. When problems arise in these complex systems, they can go undiagnosed for a while and can be difficult to fix, leading to even more energy losses until repairs are complete. With the growing availability of platforms that can run capable AI models, [kutluhan_aktar] is working toward a system that can automatically diagnose potential issues and help humans get a handle on repairs faster.
The prototype system is designed for hydronic (water-based) systems and uses two separate AI models: one analyzes thermal imagery of the system, looking for problems like leaks, hot spots, or blockages, while the other listens for anomalous sounds, particularly from the cooling fans. For the first, a CNC-like machine was built to move a thermal camera around a custom-built model HVAC system and report its images back to a central system, where they can be analyzed for anomalies. The second system, which analyzes audio, runs its model on a XIAO ESP32C6 and listens to the cooling fans running in the model.
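To get a feel for the thermal side of the problem, here’s a minimal sketch of anomaly flagging on a single thermal frame. To be clear, this is our own illustration, not the project’s trained model: just a simple statistical z-score check, with the sensor resolution and temperatures made up for the demo.

```python
# Back-of-the-envelope thermal anomaly flagging: mark pixels running
# unusually hot or cold relative to the rest of the frame. A trained model,
# like the one in this project, would catch subtler spatial patterns.
import numpy as np

def flag_anomalies(frame: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Return a boolean mask of pixels more than z_thresh standard
    deviations from the frame's mean temperature."""
    z = (frame - frame.mean()) / frame.std()
    return np.abs(z) > z_thresh

# Simulated 24x32 thermal frame (a typical low-res sensor resolution)
# hovering around 40 degrees C, with one injected hot spot standing in
# for a blockage or failing component.
rng = np.random.default_rng(0)
frame = rng.normal(40.0, 0.5, size=(24, 32))
frame[10:12, 20:22] += 15.0  # fake hot spot

mask = flag_anomalies(frame)
print(f"{mask.sum()} anomalous pixels flagged")
```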
One problem that had to be tackled before any of this could be completed was building an open-source dataset to train the AI on in the first place. That’s part of the reason for the model HVAC system in this project: it makes it possible to create faults on demand and train the computer to detect them before rolling the system out to something larger. The project’s code and training models can be found on its GitHub page. It seems to be a fairly robust solution to the problem, though, and we’ll be looking forward to future versions running on larger systems. Not everyone has a hydronic HVAC system, of course; as heat pumps become more and more popular and capable, you’ll need systems to control those as well.
AI Image Generator Twists In Response To MIDI Dials, In Real-Time
MIDI isn’t just about music, as [Johannes Stelzer] shows by using dials to adjust AI-generated imagery in real-time. The results are wild, with an interactivity to them that we don’t normally see in such things.
[Johannes] uses Stable Diffusion’s SDXL Turbo to create a baseline image of “photo of a red brick house, blue sky”. The hardware dials act as manual controls for applying different embeddings to this baseline, such as “coral”, “moss”, “fire”, “ice”, “sand”, “rusty steel” and “cookie”.
Turning the dials applies those embeddings to the base image in varying strengths. The results are generated on the fly and are pretty neat to see, especially since there’s no appreciable processing delay.
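Here’s a rough sketch of how such a rig could be wired up, using mido for the MIDI side and diffusers for SDXL Turbo’s single-step generation. This is not [Johannes]’s actual code (his project uses lunar_tools), and both the CC-number mapping and the naive linear blending of prompt embeddings are our own assumptions for illustration.

```python
# Read MIDI control-change values and blend prompt embeddings by dial
# strength before each single-step SDXL Turbo render. Illustrative sketch,
# not the project's implementation.
import mido
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def embed(text):
    # encode_prompt returns (prompt_embeds, neg, pooled, neg_pooled);
    # we skip the negatives since Turbo runs without guidance.
    pe, _, pooled, _ = pipe.encode_prompt(
        prompt=text, device="cuda", num_images_per_prompt=1,
        do_classifier_free_guidance=False,
    )
    return pe, pooled

base_pe, base_pooled = embed("photo of a red brick house, blue sky")
mods = {21: embed("moss"), 22: embed("fire")}  # assumed CC-to-style mapping
levels = {cc: 0.0 for cc in mods}

with mido.open_input() as port:  # default MIDI input port
    for msg in port:
        if msg.type != "control_change" or msg.control not in mods:
            continue
        levels[msg.control] = msg.value / 127.0  # CC values span 0..127
        pe, pooled = base_pe.clone(), base_pooled.clone()
        for cc, (mpe, mpooled) in mods.items():
            # Naive linear interpolation toward each style embedding
            pe = torch.lerp(pe, mpe, levels[cc])
            pooled = torch.lerp(pooled, mpooled, levels[cc])
        img = pipe(prompt_embeds=pe, pooled_prompt_embeds=pooled,
                   num_inference_steps=1, guidance_scale=0.0).images[0]
        img.save("out.png")  # stand-in for a live display window
```

SDXL Turbo’s one-step sampling is what makes the real-time feel possible here; a conventional 30-step pipeline would lag far behind the dials.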
The MIDI controller is integrated with the help of lunar_tools, a software toolkit on GitHub to facilitate creating interactive exhibits. As for the image end of things, we’ve previously covered how AI image generators work.
Peering Into The Black Box Of Large Language Models
Large Language Models (LLMs) can produce extremely human-like communication, but their inner workings are something of a mystery. Not a mystery in the sense that we don’t know how an LLM works, but a mystery in the sense that the exact process of turning a particular input into a particular output is something of a black box.
This “black box” trait is common to neural networks in general, and LLMs are very deep neural networks. It is not really possible to explain precisely why a specific input produces a particular output, and not something else.
Why? Because neural networks are neither databases nor lookup tables. In a neural network, the discrete activation of neurons cannot be meaningfully mapped to specific concepts or words. The connections are so complex, numerous, and multidimensional that trying to tease out their relationships in any straightforward way simply does not make sense.
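As a toy illustration of the point (ours, not from the article): train even a tiny network and look at its hidden activations. No single neuron lines up with a single input pattern; each input excites several neurons to varying degrees, and each neuron participates in handling several inputs at once. Scale that up by billions of parameters and you have the black box.

```python
# Train a tiny network on XOR, then print its hidden activations.
# Even at this scale there's no neat one-neuron-per-concept mapping.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

hidden = nn.Sequential(nn.Linear(2, 8), nn.Tanh())
head = nn.Linear(8, 1)
opt = torch.optim.Adam(
    list(hidden.parameters()) + list(head.parameters()), lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        head(hidden(X)), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    for inp, act in zip(X, hidden(X)):
        # Each input lights up a smear of neurons, not one "XOR neuron".
        print(inp.tolist(), [f"{a.item():+.2f}" for a in act])
```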