Building AI Models To Diagnose HVAC Issues

HVAC – heating, ventilation, and air conditioning – can account for a huge share of a building’s energy usage, whether residential or industrial. Often it’s the single largest consumer, especially in extreme climates or in facilities like data centers where cooling is a major design consideration. When problems arise in these complex systems, they can go undiagnosed for some time and then prove difficult to fix, compounding the energy losses until repairs are complete. With the growing availability of platforms that can run capable artificial intelligence, [kutluhan_aktar] is working towards a system that automatically diagnoses potential issues and helps humans get a handle on repairs faster.

The prototype system is designed for hydronic (water-based) systems and uses two separate artificial intelligences: one analyzes thermal imagery of the system, looking for problems like leaks, hot spots, or blockages, while the other listens for anomalous sounds, especially those relating to the behavior of cooling fans. For the first, a CNC-like machine was built to move a thermal camera around a custom-built model HVAC system and report its images back to a central system, where they can be analyzed for anomalies. The second system, which handles the audio, runs its model on a XIAO ESP32C6 and listens to the cooling fans running in the model.
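The write-up doesn’t spell out the thermal model’s internals, so here’s a minimal Python sketch of the general idea: flag regions of a thermal frame that deviate sharply from the frame average. The filename, threshold, and simple statistics are our own placeholder assumptions, not [kutluhan_aktar]’s actual code:

```python
import numpy as np
import cv2  # OpenCV, used here for basic image handling

# Hypothetical threshold: how many standard deviations a pixel may
# stray from the frame mean before we call it anomalous.
ANOMALY_SIGMA = 3.0

def find_hot_spots(thermal_frame):
    """Flag regions of a single-channel thermal image that deviate
    strongly from the mean -- a crude stand-in for a trained model."""
    mean, std = thermal_frame.mean(), thermal_frame.std()
    mask = (np.abs(thermal_frame - mean) > ANOMALY_SIGMA * std).astype(np.uint8)
    # Group anomalous pixels into bounding boxes for reporting.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

# "thermal_frame.png" is a made-up filename for one captured frame.
frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
for x, y, w, h in find_hot_spots(frame):
    print(f"possible leak/hot spot near ({x}, {y}), size {w}x{h}")
```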

One problem that had to be tackled before any of this could be completed was building an open-source dataset to train the AI on. That’s part of the reason for the model HVAC system in this project: it allows faults to be created on demand, so the computer can learn to detect them before the system is rolled out to something larger. The project’s code and training models can be found on its GitHub page. It seems to be a fairly robust solution to this problem, and we’ll be looking forward to future versions running on larger systems. Not everyone has a hydronic HVAC system, though. As heat pumps become more and more popular and capable, you’ll need systems to control those as well.

Sealed Packs Of Pokémon Cards Give Up Their Secrets Without Opening Them

[Ahron Wayne] succeeded in something he’s been trying to accomplish for some time: figuring out what’s inside a sealed Pokémon card packet without opening it. There’s a catch, however. It took buying an X-ray CT scanner off eBay, refurbishing and calibrating it, putting a load of work into testing and scanning techniques, and finally combining the data with machine learning in order to make useful decisions. It was a lot of work, but [Ahron] succeeded by developing some genuinely novel techniques.

While using an X-ray machine to peek inside a sealed package seems conceptually straightforward, there are in fact all kinds of challenges in actually pulling it off. There’s loads of noise; so much, in fact, that the resulting images give a human eyeball very little to work with. Luckily, there are also some things that make the job a little easier.
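The write-up doesn’t detail exactly how [Ahron] fought that noise, but one classic trick, sketched below with made-up filenames, is to average many exposures of the same pack so that random noise cancels while faint real features reinforce:

```python
import numpy as np
import cv2

# Hypothetical filenames: 64 repeated X-ray exposures of one sealed pack.
frames = [cv2.imread(f"scan_{i:03d}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for i in range(64)]

# Random noise averages toward zero across exposures, while the faint
# card features add up coherently, boosting the signal-to-noise ratio.
averaged = np.mean(frames, axis=0)
cv2.imwrite("averaged.png", np.clip(averaged, 0, 255).astype(np.uint8))
```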

For example, it’s not actually necessary to image an entire card in order to positively identify it. Teasing out individual features, such as a fist, a tentacle, or a symbol, is enough to eliminate possibilities. Interestingly, as a side effect, the system can easily spot counterfeit cards: they show up completely differently in the scans.
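As a toy illustration of that elimination idea, the snippet below uses plain OpenCV template matching against hypothetical feature crops; [Ahron]’s real system pairs the scans with a trained machine learning model, so treat this purely as a sketch of the concept:

```python
import cv2

# Hypothetical templates: small crops of distinctive card features
# (a fist, a tentacle, a set symbol) taken from reference scans.
TEMPLATES = {"fist": "fist.png", "tentacle": "tentacle.png", "symbol": "symbol.png"}

scan = cv2.imread("averaged.png", cv2.IMREAD_GRAYSCALE)

for name, path in TEMPLATES.items():
    template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(scan, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)
    # A strong match narrows the candidate list; a weak one rules cards out.
    print(f"{name}: best match {score:.2f} at {location}")
```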

When we first covered [Ahron]’s fascinating journey of bringing CT scanners back to life, he was able to scan cards but made it clear he wasn’t able to scan sealed packages. We’re delighted that he ultimately succeeded, and also documented the process. Check it out in the video below.

CUDA, But Make It AMD

Compute Unified Device Architecture, or CUDA, is a software platform for doing big parallel calculation tasks on NVIDIA GPUs. It’s been a big part of the push to use GPUs for general-purpose computing, and in some ways, competitor AMD has been left out in the cold as a result. However, with more demand for GPU computation than ever, there’s been a breakthrough. SCALE from [Spectral Compute] will let you compile CUDA applications for AMD GPUs.

SCALE allows CUDA programs to run as-is on AMD GPUs, without modification. The SCALE compiler is also intended as a drop-in replacement for nvcc, right down to the command line options. For maximum ease of use, it acts as if you’ve installed the NVIDIA CUDA Toolkit, so you can build with CMake just as you would for a normal NVIDIA setup. Currently, Navi 21 and Navi 31 (RDNA 2.0 and RDNA 3.0) targets are supported, while a number of other GPUs are undergoing testing and development.

The basic aim is to allow developers to use AMD hardware without having to maintain an entirely separate codebase. It’s still a work in progress, but it’s a promising tool that could help break NVIDIA’s stranglehold on parts of the GPGPU market.

Tabletop Handybot Is Handy, And Powered By AI

Decently useful AI has been around for a little while now, and robotic arms have been around much longer. Yet somehow, we don’t have little robot helpers on our desks yet! Thankfully, [Yifei] is working towards that reality with Tabletop Handybot.

What [Yifei] has developed is a robotic arm that accepts voice commands. The robot relies on an Intel RealSense D435 RGB-D camera, which provides color vision along with depth information. Grounding DINO is used for object detection on the RGB images, while Segment Anything and Open3D handle further processing of the visual and depth data to help the robot understand what it’s looking at. Meanwhile, voice commands are interpreted via OpenAI Whisper, which can feed prompts to ChatGPT for further processing.
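To see how those pieces chain together, here’s a heavily hedged Python sketch. The Whisper calls follow that library’s actual API, but the detection, localization, and arm-control helpers are stubs we’ve invented to show the shape of the pipeline, not [Yifei]’s code:

```python
import whisper  # OpenAI's open-source speech-recognition library

# --- Hypothetical stand-ins for the rest of the stack. In the real
# project these would call Grounding DINO, Segment Anything, Open3D,
# and the arm's motion controller. ---
def extract_target(command: str) -> str:
    # The real build can hand the transcript to ChatGPT for parsing;
    # here we naively take the last word as the object name.
    return command.strip(" .!").split()[-1]

def locate_object(label: str) -> tuple:
    print(f"(detect '{label}', segment it, project mask + depth to 3D)")
    return (0.30, 0.10, 0.05)  # dummy grasp point, in meters

def move_arm_to(point: tuple) -> None:
    print(f"(move gripper to {point})")

# --- Real Whisper usage: transcribe a recorded voice command. ---
model = whisper.load_model("base")  # "base" is one of Whisper's sizes
text = model.transcribe("command.wav")["text"]  # e.g. "pick up the marker"
move_arm_to(locate_object(extract_target(text)))
```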

[Yifei] demonstrates his robot picking up markers on command, which makes for a pretty cool demo. With so many modern AI tools available, we’re getting closer to the ideal of robots that can understand and act on general spoken instructions, and this is a great example. We may not be all the way there yet, but perhaps soon. Video after the break.

Generative AI Hits The Commodore 64

Image-generating AIs are typically trained on huge arrays of GPUs and require great wads of processing power to run. Meanwhile, [Nick Bild] has managed to get something similar running on a Commodore 64 (via Tom’s Hardware).

(Image caption: A figure generated by [Nick]’s C64. We shall name him… “Sword Guy”!)

As you might imagine, [Nick]’s AI image generator isn’t churning out 4K cyberpunk stills dripping in neon. Instead, he aimed at a smaller target, one more befitting the Commodore 64 itself: 8×8 game sprites.

[Nick]’s model was trained on 100 retro-inspired sprites that he created himself. He did the training phase on a modern computer, so the Commodore 64 didn’t have to sweat that difficult task on its feeble 6502 CPU. The C64 is more than capable of generating sprites using the model, though, thanks to some BASIC code that works from the training data. Right now, it takes the C64 about 20 minutes to run through 94 iterations and generate a decent sprite.
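[Nick]’s exact algorithm isn’t published in this post, so the Python sketch below is only a guess at the flavor of technique involved: it learns per-pixel “on” probabilities from a (faked) training set, then iteratively nudges a random 8×8 sprite toward those statistics, loosely echoing the C64’s 94 iterations:

```python
import random

# Hypothetical training set: 8x8 sprites stored as lists of 64 bits.
# The real dataset would be [Nick]'s 100 hand-drawn sprites.
random.seed(1)
training = [[random.randint(0, 1) for _ in range(64)] for _ in range(100)]

# Learn a per-pixel "on" probability from the training sprites.
p_on = [sum(sprite[i] for sprite in training) / len(training) for i in range(64)]

# Generate: start from noise, then repeatedly resample pixels toward
# the learned statistics over 94 iterations.
sprite = [random.randint(0, 1) for _ in range(64)]
for _ in range(94):
    i = random.randrange(64)
    sprite[i] = 1 if random.random() < p_on[i] else 0

for row in range(8):
    print("".join("#" if sprite[row * 8 + col] else "." for col in range(8)))
```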

8×8 sprites are generally simple enough that you don’t need to be an artist to create them. Nonetheless, [Nick] has shown that modern machine learning techniques can run on slow, archaic hardware, even if there’s limited utility in doing so. Video after the break.

How AI Large Language Models Work, Explained Without Math

Large Language Models (LLMs) are everywhere, but how exactly do they work under the hood? [Miguel Grinberg] provides a great explanation of the inner workings of LLMs in simple (but not simplistic) terms, eschewing the low-level mathematics in favor of laying bare what it is they actually do.

At their heart, LLMs are prediction machines that work on tokens (small groups of letters and punctuation) and are, as a result, capable of great feats of human-seeming communication. Most technically minded people understand that LLMs have no idea what they are saying, and this peek at their inner workings will make that abundantly clear.
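To make the “prediction machine” idea concrete, here’s a deliberately tiny Python sketch: a hand-written next-token probability table and a sampling loop. A real LLM replaces the table with billions of learned parameters, but the fundamental operation (pick the next token according to a probability distribution) is the same:

```python
import random

# A toy next-token table -- a vanishingly small stand-in for the
# learned weights of a real model.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(start: str, length: int = 3) -> str:
    out = [start]
    for _ in range(length):
        choices = NEXT.get(out[-1])
        if not choices:
            break  # no known continuation for this token
        tokens, weights = zip(*choices.items())
        # Sample the next token in proportion to its probability --
        # all an LLM fundamentally does, just at enormous scale.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```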

Be sure to also review an illustrated guide to how image-generating AIs work. And if a peek under the hood of LLMs left you hungry for more low-level details, check out our coverage of training a GPT-2 LLM using pure C code.

The Perfect Desktop Kit For Experimenting With Self Driving Cars

When we think about self-driving cars, we normally think about big projects measured in billions of dollars, all funded by major automakers. But you can still dive into this world on a smaller scale, as [jmoreno555] demonstrates.

The build consists of a small RC car, an HSP 94123, in fact. It’s got a simple brushed motor inside, driven by a conventional speed controller, along with servo-driven steering. A Raspberry Pi 4 is charged with driving the car, but it’s not alone. It’s outfitted with a Google Coral USB stick, a machine learning accelerator capable of 4 trillion operations per second. The car also has a Wemos D1 onboard, tasked with interfacing the distance sensors that give the car a sense of its environment. Vision is courtesy of a 1.2-megapixel camera with a 160-degree lens and a stereoscopic camera with twin 75-degree lenses. Software-wise, it’s early days yet. [jmoreno555] is exploring the use of Python and OpenCV to implement basic lane detection and other self-driving routines, while using Blender as a simulator.
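The classic OpenCV recipe for basic lane detection, which may or may not match [jmoreno555]’s exact approach, is edge detection followed by a probabilistic Hough transform. Here’s a minimal sketch with placeholder thresholds and filenames:

```python
import cv2
import numpy as np

# Grayscale, edge-detect, then fit line segments. All thresholds are
# placeholders to tune against the real camera and treadmill lighting.
frame = cv2.imread("camera_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=10)

for line in (lines if lines is not None else []):
    x1, y1, x2, y2 = line[0]
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay lane guess

cv2.imwrite("lanes.jpg", frame)
```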

The real magic, though, is the treadmill. [jmoreno555] realized that one of the frustrations of working in this space is having to chase a car around a test track. Instead, a desktop treadmill allows the car to be programmed and debugged with less fuss in the early stages of development.

If you’re looking for a platform to experiment with AI and self-driving, this could be a project to dive into. We’ve covered some other great builds in this space, too. Meanwhile, if you’ve cracked driving autonomy and want to let us know, our tips line is always standing by!