Divining Air Quality With A Cheap Computer Vision Device

There are all kinds of air quality sensors on the market, relying on various electro-physical effects to detect gases or contaminants and report them back as a value. [lucascreator] has instead been investigating a method of determining air quality that is closer to divination than measurement: using computer vision and a trained AI model.

The system relies on a Unihiker K10, a microcontroller module with an ESP32-S3 at its heart. The chip runs a lightweight convolutional neural network (CNN) trained on 12,000 images of the sky. These images were sourced from a public dataset; they were taken in India and Nepal and tagged with the relevant Air Quality Index (AQI) at the time of capture. [lucascreator] used this data to train a model that takes an image from a camera attached to the ESP32 and estimates the AQI based on what it learned from that dataset.
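To get a feel for the approach, here’s a minimal, hypothetical sketch of what such a model could look like in Keras. The layer sizes, the AQI buckets, and the TensorFlow Lite conversion are all assumptions for illustration rather than [lucascreator]’s actual pipeline, but a network this small is the sort of thing that can be quantized to fit on an ESP32-S3.

```python
# Hypothetical sketch: a tiny CNN that maps sky photos to AQI buckets,
# sized so it can be quantized and run on a microcontroller-class chip.
import tensorflow as tf

NUM_CLASSES = 5  # assumed AQI buckets (e.g. good .. hazardous)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)

# Convert to a quantized TFLite flatbuffer small enough for the ESP32-S3.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("aqi_model.tflite", "wb").write(converter.convert())
```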

It might sound like a spurious concept, but it does have some value. [lucascreator] cites studies where video data was used for low-cost air quality estimation—not as a replacement for proper measurement, but as an additional data point that could be sourced from existing surveillance infrastructure. Performance of such models has, in some cases, been remarkably accurate.

[lucascreator] is pragmatic about the limitations of their implementation of this concept, noting that the very compact model was not always accurate at estimating the actual air quality. The concept may have some value, but implementing it on an ESP32 isn’t so easy if you’re looking for supreme accuracy. We’ve featured some other great air quality projects before, though, if you’re looking for other ways to capture this information. Video after the break.

Continue reading “Divining Air Quality With A Cheap Computer Vision Device”

Mesa Project Adds Code Comprehension Requirement After AI Slop Incident

Recently [Faith Ekstrand] announced on Mastodon that Mesa was updating its contributor guide. This follows a recent AI slop incident where someone submitted a massive patch to the Mesa project with the claim that this would improve performance ‘by a few percent’. The catch? The entire patch was generated by ChatGPT, with the submitter becoming somewhat irate when the very patient Mesa developers tried to explain that they’d happily look at the issue after the submitter had condensed the purported ‘improvement’ into a bite-sized patch.

The entire saga is summarized in a recent video by [Brodie Robertson] which highlights both how incredibly friendly the Mesa developers are, and how the use of ChatGPT and kin has made some people with zero programming skills apparently believe that they can now contribute code to OSS projects. Unsurprisingly, the Mesa developers were unable to disabuse this particular individual of that notion, but the diff to the Mesa contributor guide by [Timur Kristóf] should make it abundantly clear that someone playing Telephone between a chatbot and OSS project developers is neither desirable nor helpful.

That said, [Brodie] also highlights a recent post by [Daniel Stenberg] of Curl fame, who thanked [Joshua Rogers] for contributing a massive list of potential issues that were found using ‘AI-assisted tools’, as detailed in this blog post by [Joshua]. An important point here is that these ‘AI tools’ are not LLM-based chatbots, but rather tweaked existing tools like static code analyzers with more smarts bolted on. They’re purpose-made tools that still require you to know what you’re doing, but they can be a real asset to a developer, and a heck of a lot more useful to a project like Curl than getting sent fake bug reports by a confabulating chatbot as has happened previously.

Continue reading “Mesa Project Adds Code Comprehension Requirement After AI Slop Incident”

LLM Dialogue In Animal Crossing Actually Works Very Well

In the original Animal Crossing from 2001, players can interact with a huge cast of quirky characters, all with different interests and personalities. But after you’ve played the game for a while, the scripted interactions can become a bit monotonous. Seeing an opportunity to improve the experience, [josh] decided to put a Large Language Model (LLM) in charge of these interactions. Now when the player chats with other characters in the game, the dialogue is a lot more engaging, relevant, and sometimes just plain funny.

How does one go about hooking a modern LLM into a 24-year-old game built for an entirely offline console? [josh]’s clever approach required a lot of poking about, and did a good job of leveraging some of the game’s built-in features for a seamless result.

Continue reading “LLM Dialogue In Animal Crossing Actually Works Very Well”

A Trail Camera Built With Raspberry Pi

You can get all kinds of great wildlife footage if you trek out into the woods with a camera, but it can be tough to stay awake all night. However, this is a task you can readily automate, as [Luke] did with his DIY trail camera.

A Raspberry Pi Zero 2 W serves as the heart of the build. It’s compact and runs on very little power, but provides considerably more processing power than the original Raspberry Pi Zero. It’s kitted out with the Raspberry Pi AI Camera, which uses the Sony IMX500 Intelligent Vision Sensor, providing a great platform for neural networks doing image classification and similar machine learning tasks. A Witty Pi power management module is used both for its real-time clock and to schedule start-ups and shutdowns to best manage the power on offer from the batteries. All these components are wrapped up in a 3D printed housing to keep the Pi safe out in the wild.
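The capture side of such a build can be surprisingly small. Here’s a rough sketch (not [Luke]’s actual code), assuming the stock picamera2 library and the Witty Pi handling the wake-up scheduling; a script like this could grab a burst of timestamped stills each time the Pi powers on.

```python
# Hypothetical sketch: grab a few stills after the Witty Pi wakes the board,
# then let it shut back down to save battery. Uses the stock picamera2 library.
import time
from datetime import datetime
from pathlib import Path

from picamera2 import Picamera2

CAPTURE_DIR = Path("/home/pi/captures")   # assumed storage location
CAPTURE_DIR.mkdir(parents=True, exist_ok=True)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
time.sleep(2)  # let exposure and white balance settle

for i in range(5):
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    picam2.capture_file(str(CAPTURE_DIR / f"trail-{stamp}-{i}.jpg"))
    time.sleep(10)

picam2.stop()
# The Witty Pi's own schedule (or a systemd unit) then powers the Pi off again.
```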

We’ve seen some neat projects in this vein before.

Continue reading “A Trail Camera Built With Raspberry Pi”

Fully-Local AI Agent Runs On Raspberry Pi, With A Little Patience

[Simone]’s AI assistant, dubbed Max Headbox, is a wakeword-triggered local AI agent capable of following instructions and doing simple tasks. It’s an experiment in many ways, but it’s also a great demonstration of what’s possible with the kinds of open tools and hardware available to a modern hobbyist, and a reminder of just how far some of these software tools have come in only a few short years.

Max Headbox is not just a local large language model (LLM) running on Pi hardware; the model is able to make tool calls in a loop, chaining them together to complete tasks. This means the system can break down a spoken instruction (for example, “find the weather report for today and email it to me”) into a series of steps, using software tools as needed until the task is finished.
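The core of any such agent is a surprisingly small loop. The sketch below is purely illustrative; the query_llm callable, the JSON reply format, and the two toy tools are assumptions standing in for whatever local model and tooling [Simone] actually uses, but the structure is the same: the model either requests a tool or declares the task done, and each tool result is fed back in for the next round.

```python
# Hypothetical sketch of a tool-calling agent loop. query_llm() stands in for a
# local model serving completions; it is assumed to return a JSON string that is
# either {"tool": name, "args": {...}} or {"done": true, "answer": "..."}.
import json

def get_weather(city: str) -> str:
    return f"Sunny, 22 C in {city}"          # placeholder implementation

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"             # placeholder implementation

TOOLS = {"get_weather": get_weather, "send_email": send_email}

def run_agent(task: str, query_llm, max_steps: int = 8) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = json.loads(query_llm(history, tools=list(TOOLS)))
        if reply.get("done"):
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        history.append({"role": "tool", "content": result})
    return "gave up: too many steps"

# Example usage (my_local_llm is whatever wrapper talks to the local model):
# run_agent("find the weather report for today and email it to me", my_local_llm)
```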

Continue reading “Fully-Local AI Agent Runs On Raspberry Pi, With A Little Patience”

Using Moondream AI To Make Your Pi “See” Like A Human

[Jaryd] from Core Electronics shows us human-like computer vision with Moondream on the Pi 5.

Using the Moondream vision language model, which runs directly on your Raspberry Pi rather than in the cloud, you can answer questions such as “are the clothes on the line?”, “is there a package on the porch?”, “did I leave the fridge open?”, or “is the dog on the bed?” [Jaryd] compares Moondream to an alternative visual AI system, You Only Look Once (YOLO).

Processing a question with Moondream on your Pi can take anywhere from just a few moments to 90 seconds, depending on the model used and the nature of the question. Moondream comes in two sizes: one with two billion parameters and one with five hundred million. The larger model is more capable and more accurate, but it takes longer, with the fastest possible response time coming in at about 22 to 25 seconds. The smaller model is faster, at about 8 to 10 seconds, but as you might expect its results are not as good. Indeed, [Jaryd] says the answers can be infuriatingly bad.
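If you want a feel for what the code looks like, here’s a minimal sketch; it assumes the vikhyatk/moondream2 checkpoint on Hugging Face and the encode_image/answer_question helpers its model card has documented, which may differ from the exact setup [Jaryd] uses in the write-up.

```python
# Minimal sketch: ask Moondream a yes/no question about a single image.
# Assumes the vikhyatk/moondream2 checkpoint and its trust_remote_code helpers.
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model_id = "vikhyatk/moondream2"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("porch.jpg")            # assumed test image
encoded = model.encode_image(image)        # one-time vision encoding
answer = model.answer_question(encoded, "Is there a package on the porch?", tokenizer)
print(answer)
```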

In the write-up, [Jaryd] runs you through how to use Moondream on your Pi 5, and the video (embedded below) shows it in action. Fair warning though: Moondream is quite RAM-intensive, so you will need at least 8 GB of memory in your Pi if you want to play along.

If you’re interested in machine vision you might also like to check out Machine Vision Automates Trainspotting With Unique Full-Length Portraits.

Continue reading “Using Moondream AI To Make Your Pi “See” Like A Human”

Regretfully: $3,000 Worth Of Raspberry Pi Boards

We feel for [Jeff Geerling]. He spent a lot of effort building an AI cluster out of Raspberry Pi boards, and $3,000 later, he’s a bit regretful. As you can see in the video below, it is a neat build. As Jeff points out, it is relatively low power and dense. But dollar for dollar, it isn’t much of a supercomputer.

Of course, the most obvious thing is that there’s plenty of CPU, but no GPU. We can sympathize, too, with the fact that he had to strip the cluster down and rebuild it twice, for a total of three builds: once to homogenize the SSDs across the boards, and once more to affix the heatsinks. It is always something.

With ten “blades” (otherwise known as compute modules), the plucky little computer turned in about 325 gigaflops on tests. That sounds pretty good, but a Framework Desktop x4 manages 1,180 gigaflops. What’s more, the Framework turned out cheaper per gigaflop, too: each dollar bought about 110 megaflops for the Pis, but about 140 for the Framework.
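Those per-dollar figures are just throughput divided by price; a quick back-of-the-envelope check on the Pi side, using only the numbers quoted above, lands right on the “about 110 megaflops” per dollar figure.

```python
# Sanity check on the Pi cluster's price/performance, using the article's figures.
pi_gigaflops = 325      # ten-blade Pi cluster benchmark result
pi_cost_usd = 3000      # total spend quoted in the article

megaflops_per_dollar = pi_gigaflops * 1000 / pi_cost_usd
print(round(megaflops_per_dollar))   # ~108, i.e. "about 110" megaflops per dollar
```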

Continue reading “Regretfully: $3,000 Worth Of Raspberry Pi Boards”