Winners Of Hackaday’s Data Loggin’ Contest: Bluetooth Gardening, Counting Cups, And Predicting Rainfall

The votes for Hackaday’s Data Loggin’ Contest have been received, saved to SD, pushed out to MQTT, and graphed. Now it’s time to announce the three projects that made the most sense out of life’s random data and earned themselves a $100 gift certificate for Tindie, the Internet’s foremost purveyor of fine hand-crafted artisanal electronics.

First up, and winner of the Data Wizard category, is this whole-garden soil moisture monitor by [Joseph Eoff]. You might not realize it from the picture at the top of the page, but lurking underneath the mulch of that lovely garden are more than 20 Bluetooth soil sensors arranged in a grid pattern. All of the data is sucked up by a series of solar-powered ESP32 access points, and ultimately ends up on a Raspberry Pi by way of MQTT. Here, custom Python software generates a heatmap that indicates possible trouble spots in the garden. With its easy-to-understand visualization of what’s happening under the surface, this project perfectly captured the spirit of the category.
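To give a flavor of that last hop, a minimal Python sketch of the MQTT-to-heatmap step might look something like this. The topic layout, payload format, and grid size here are our own guesses for illustration, not [Joseph Eoff]’s actual scheme:

```python
# Minimal sketch: collect soil moisture readings over MQTT and render a heatmap.
# Topic and payload formats are assumptions, not the project's actual scheme.
import json

import numpy as np
import matplotlib.pyplot as plt
import paho.mqtt.client as mqtt

GRID_ROWS, GRID_COLS = 5, 5                  # hypothetical sensor grid
moisture = np.full((GRID_ROWS, GRID_COLS), np.nan)

def on_message(client, userdata, msg):
    # Assumed payload: {"row": 2, "col": 3, "moisture": 41.5}
    reading = json.loads(msg.payload)
    moisture[reading["row"], reading["col"]] = reading["moisture"]

client = mqtt.Client()                       # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("raspberrypi.local", 1883)
client.subscribe("garden/moisture/#")
client.loop_start()

# ...once readings have trickled in, plot the grid as a heatmap:
plt.imshow(moisture, cmap="RdYlGn", vmin=0, vmax=100)
plt.colorbar(label="Soil moisture (%)")
plt.title("Garden moisture heatmap")
plt.savefig("heatmap.png")
```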

Next up is the Nespresso Shield from [Steadman]. This clever gadget literally listens for the telltale sounds of the eponymous coffee maker doing its business to not only estimate your daily consumption, but warn you when the machine is running low on water. The non-invasive method of pulling data from a household appliance made this a strong entry for the Creative Genius category.
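A back-of-the-napkin version of that trick is just counting loud bursts in a microphone recording. Here’s a hedged Python sketch using an RMS threshold; [Steadman]’s actual detection is surely more refined, and the filename and threshold here are made up:

```python
# Count coffee-machine pump bursts in a (hypothetical) mono mic recording
# by thresholding the short-term RMS level. Numbers are purely illustrative.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("kitchen.wav")       # assumed 16-bit mono PCM
audio = audio.astype("float32") / 32768.0       # scale to [-1, 1]

FRAME = rate // 10                              # 100 ms analysis frames
rms = np.array([np.sqrt(np.mean(audio[i:i + FRAME] ** 2))
                for i in range(0, len(audio) - FRAME, FRAME)])

loud = rms > 0.2                                # threshold picked by eyeball
brews = int(np.sum(loud[1:] & ~loud[:-1]))      # count rising edges
print(f"Cups today: {brews}")
```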

Last but certainly not least is this comprehensive IoT weather station that uses machine learning to predict rainfall. With crops and livestock at risk from sudden intense storms, [kutluhan_aktar] envisions this device as an early warning for farmers. The documentation on this project, from setting up the GPRS-enabled ESP8266 weather station to creating the web interface and importing all the data into TensorFlow, is absolutely phenomenal. This project serves as an invaluable framework for similar DIY weather detection and prediction systems, which made it the perfect choice for our World Changer category.
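As a taste of the prediction side, a toy Keras model along the same general lines might look like the sketch below. The feature set and network shape are illustrative stand-ins, not [kutluhan_aktar]’s actual model:

```python
# Toy rainfall predictor: a few weather readings in, rain probability out.
# Features, sizes, and the fake training data are all stand-ins.
import numpy as np
import tensorflow as tf

# Assumed inputs: temperature, humidity, pressure, wind speed.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(rain)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for the logged station readings.
X = np.random.rand(500, 4).astype("float32")
y = (X[:, 1] > 0.7).astype("float32")   # pretend high humidity means rain
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

print("P(rain):", float(model.predict(X[:1], verbose=0)[0, 0]))
```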

There may have only been three winners this time around, but the legendary skill and creativity of the Hackaday community was on full display for this contest. A browse through the rest of the submissions is highly recommended, and we’re sure the creators would love to hear your feedback and suggestions in the comments.


An RP2040 Board Designed For Machine Learning

Machine learning (ML) typically conjures up ideas of fancy code requiring oodles of storage and tons of processing power. However, there are some ML models that, once trained, can readily be run on much more spartan hardware – even a microcontroller! The RP2040, star of the Raspberry Pi Pico, is one such chip up to the task, and [Arducam] have announced a board aiming to employ it to those ends – the Pico4ML.

The board goes heavy on the hardware, equipping the RP2040 with plenty of tools useful for machine learning tasks. There’s a QVGA camera on board, as well as a tiny 0.96″ TFT display. The camera feed can even be streamed live to the screen if so desired. There’s also a microphone for capturing audio and an IMU, both baked right into the board. This puts object, speech, and gesture recognition well within the purview of the Pico4ML.
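Getting a model onto a chip like the RP2040 usually goes through TensorFlow Lite Micro, which means shrinking a trained network down on the desktop first. Here’s a sketch of that conversion step with a stand-in model; the real Pico4ML examples will have their own architectures:

```python
# Desktop-side sketch: squeeze a trained Keras model into a quantized .tflite
# blob, the usual input format for TensorFlow Lite Micro targets.
import tensorflow as tf

# Stand-in network -- imagine a small keyword-spotting or gesture classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Model size: {len(tflite_model)} bytes")
```

From there, the .tflite blob is typically dumped into a C array (xxd -i model.tflite does the trick) and compiled straight into the firmware.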

Running ML models on a board like the Pico4ML isn’t about raw performance. Instead, it’s intended for applications where low power and portability are key. If you’ve got some ideas on what the Pico4ML could do and do well, sound off in the comments. We’d probably hook it up to a network so we could have it automatically place an order when we yell out for pizza. We’ve covered machine learning on microcontrollers before, too – with a great Remoticon talk on how to get started!

AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature which is currently only available on high-end GeForce RTX 20 and GeForce RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real-time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
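DLSS itself is proprietary and leans on dedicated tensor hardware, but the core render-low, upscale-high idea can be sketched with a toy super-resolution network. This is purely illustrative and won’t be challenging NVIDIA any time soon:

```python
# Toy 2x upscaler: a 960x540 render in, a 1920x1080 frame out. A far cry
# from DLSS, but it shows the shape of the idea.
import tensorflow as tf

upscaler = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(540, 960, 3)),               # low-res render
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(size=2, interpolation="bilinear"),
    tf.keras.layers.Conv2D(3, 3, padding="same"),             # 1080p output
])
upscaler.compile(optimizer="adam", loss="mae")
# Training would pair low-res renders with native high-res frames, teaching
# the network to fill in the missing detail.
```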

The current model Nintendo Switch

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to the far larger and more expensive Xbox and PlayStation systems it’s in competition with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?


Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try and determine what a microcontroller is up to.

Energy consumption as the LED blinks.

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case, [Mark] knew what the target Arduino Nano was doing because he wrote the code that blinks its onboard LED.
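The capture side of an experiment like this can be as simple as sampling the sensor at a fixed rate and tagging each reading with what the target is doing. Here’s a rough sketch assuming the Adafruit CircuitPython INA219 driver; [Mark]’s actual capture code may look quite different:

```python
# Log labeled current samples from an INA219 for later model training.
# Assumes the Adafruit CircuitPython driver (adafruit_ina219).
import time

import board
import adafruit_ina219

i2c = board.I2C()
sensor = adafruit_ina219.INA219(i2c)

LABEL = "led_on"                 # whatever the target MCU is doing right now
with open("samples.csv", "a") as log:
    for _ in range(1000):
        log.write(f"{time.monotonic()},{sensor.current},{LABEL}\n")  # mA
        time.sleep(0.01)         # roughly 100 Hz sampling
```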

This allowed him to create training data for TensorFlow, which was ultimately optimized into a model that could fit onto the Arduino Nano 33 BLE Sense, which stands in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
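For a sense of what such a model might look like before it gets quantized down for the Nano, here’s a minimal classifier over fixed-length windows of current samples. All of the shapes and numbers are guesses for illustration:

```python
# Classify windows of current samples as "LED off" vs "LED on".
# Window length, layer sizes, and the fake data are illustrative only.
import numpy as np
import tensorflow as tf

WINDOW = 64                                   # samples per window, hypothetical

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # off / on
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Fake data in lieu of real INA219 logs: "on" windows draw a bit more current.
off = np.random.normal(20.0, 0.3, (200, WINDOW, 1)).astype("float32")
on = np.random.normal(22.5, 0.3, (200, WINDOW, 1)).astype("float32")
X = np.concatenate([off, on])
y = np.concatenate([np.zeros(200), np.ones(200)])
model.fit(X, y, epochs=5, verbose=0)
```

From there, the same TensorFlow Lite conversion dance would squeeze the trained model down to something the Nano 33 BLE Sense can chew on.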

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

Science Officer…Scan For Elephants!

If you watch many espionage or terrorism movies set in the present day, there’s usually a scene where some government employee enhances a satellite image to show a clear picture of the main villain’s face. Do modern spy satellites have that kind of resolution? We don’t know, and if we did we couldn’t tell you anyway. But we do know that even with unclassified resolution, scientists are using satellite imagery and machine learning to count things like elephant populations.

When you think about it, counting wildlife populations in their habitat is a hard problem. First, if you go in person, you disturb the target animals. Even a drone is probably going to upset timid wildlife. Then there is the problem of trying to cover a large area and figuring out whether the elephant you see today is the same one you saw yesterday. If you guess wrong, you will either undercount or overcount.

The Oxford scientists counting elephants used the Worldview-3 satellite, which can image up to 680,000 square kilometers every day. You aren’t disturbing any of the observed creatures, and since each shot covers a huge swath of territory, the problem of double counting all but vanishes.


Machine Learning In The Kitchen Makes For Tasty Mashup Desserts

What did you do during lockdown? A whole lot of people turned to baking in between trips to the store to search for toilet paper and hand sanitizer. Many of them baked bread for some reason, but like us, [Sara Robinson] turned to sweeter stuff to get through it.

The first Cakie ever made. Image via Google Cloud

Her pandemic ponderings wandered into the realm of existential baking questions, like what separates baked goods from each other, categorically speaking? What is the science behind the crunchiness of cookies, the sponginess of cake, and the fluffiness of bread?

As a developer advocate for Google Cloud, [Sara] turned to machine learning to figure out why the cookie crumbles. She collected 33 recipes each of cookies, cake, and bread and built a TensorFlow model to analyze them, which expressed each recipe’s cookie/cake/bread lineage as a set of percentages. Not only was the model able to accurately classify recipes by type, but [Sara] was also able to use it to come up with a 50/50 cookie-cake hybrid recipe. The AI delivered a list of ingredients to which she added vanilla extract and chocolate chips for flavor. From there, she had to wing it and come up with her own baking directions for the Cakie.
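In spirit, the classifier boils down to a softmax over ingredient proportions. Here’s a rough Keras sketch; the feature list and layer sizes are our own guesses, not [Sara]’s actual model:

```python
# Recipe classifier sketch: normalized ingredient amounts in, a
# cookie/cake/bread probability split out. Features are assumed.
import numpy as np
import tensorflow as tf

# Assumed features: flour, sugar, butter, eggs, liquid, yeast.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # cookie / cake / bread
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.random.rand(99, 6).astype("float32")   # 33 recipes each, per the post
y = np.repeat([0, 1, 2], 33)
model.fit(X, y, epochs=20, verbose=0)

# The softmax output reads directly as a "lineage": e.g. 50% cookie, 50% cake.
print(model.predict(X[:1], verbose=0))
```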


Hide And Seek AI Shows Emergent Tool Use

Machine learning has come a long way in the last decade, as it turns out that throwing huge wads of computing power at piles of linear algebra actually makes creating artificial intelligence relatively easy. OpenAI have been working in the field for a while now, and recently observed some exciting behaviour in a hide-and-seek game they built.

The game itself is simple; two teams of AI bots play a game of hide-and-seek, with the red bots being rewarded for spotting the blue ones, and the blue ones being rewarded for avoiding their gaze. Initially, nothing of note happens, but as the bots randomly run around, they slowly learn. Over millions of trials, the seekers first learn to find the hiders, while the hiders respond by building barriers to hide behind. The seekers then learn to use ramps to vault over them, while the blue bots learn to bend the game’s physics and throw the ramps out of the playfield. It ends with the seekers learning to skate around on blocks and the hiders building tight little barriers. It’s a continual arms race of techniques between the two sides, organically developed as the bots play against each other over time.

It’s a great study, and it’s particularly interesting to note how much longer it takes behaviours to develop when the team switches from a basic fixed scenario to a changeable world with more variables. We’ve seen other interesting gaming efforts with machine learning, too – like teaching an AI to play Trackmania. Video after the break.
