Comparing ‘AI’ For Basic Plant Care With Human Brown Thumbs

The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying despite guides always telling you how incredibly easy it is to keep them from wilting with a modicum of care each day, even without opting for succulents or cactuses. In a fit of despair [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but it’s definitely really smart and can even generate pretty pictures, which is something that the average human can not. Hence it’s time to let an LLM do all the smart plant caring stuff with ‘PlantMom’.

Since LLMs (so far) don’t come with physical appendages by default, some hardware had to be plugged together to measure parameters like light, temperature and soil moisture. Add to this a grow light & a water pump and all that remained was to tell the LMM using an extensive prompt (containing Python code) what it should do (keep the plant alive) and what responses (Python methods) are available. All that was left now was to let the ‘AI’ (Google’s Gemma 3) handle it.

To say that this resulted in a dramatic failure along with what reads like an emotional breakdown (on the side of the LLM) would be an understatement. The LLM insisted on turning the grow light on when it should be off and had the most erratic watering responses imaginable based on absolutely incorrect interpretations of the ADC data (flipping dry vs wet). After this episode the poor chili plant’s soil was absolutely saturated and is still trying to dry out, while the ongoing LLM experiment (with empty water tank) has the grow light blasting more often than a weed farm.

So far it seems like that the humble state machine’s job is still safe from being taken over by ‘AI’, and not even brown thumb folk can kill plants this efficiently.

A blue-gloved hand holds a glass plate with a small off-white rectangular prism approximately one quarter the area of a fingernail in cross-section.

AI Helps Researchers Discover New Structural Materials

Nanostructured metamaterials have shown a lot of promise in what they can do in the lab, but often have fatal stress concentration factors that limit their applications. Researchers have now found a strong, lightweight nanostructured carbon. [via BGR]

Using a multi-objective Bayesian optimization (MBO) algorithm trained on finite element analysis (FEA) datasets to identify the best candidate nanostructures, the researchers then brought the theoretical material to life with 2 photon polymerization (2PP) photolithography. The resulting “carbon nanolattices achieve the compressive strength of carbon steels (180–360 MPa) with the density of Styrofoam (125–215 kg m−3) which exceeds the specific strengths of equivalent low-density materials by over an order of magnitude.”

While you probably shouldn’t start getting investors for your space elevator startup just yet, lighter materials like this are promising for a lot of applications, most notably more conventional aviation where fuel (or energy) prices are a big constraint on operations. As with any lab results, more work is needed until we see this in the real world, but it is nice to know that superalloys and composites aren’t the end of the road for strong and lightweight materials.

We’ve seen AI help identify battery materials already and this seems to be one avenue where generative AI isn’t just about making embarrassing photos or making us less intelligent.

Will Embodied AI Make Prosthetics More Humane?

Building a robotic arm and hand that matches human dexterity is tougher than it looks. We can create aesthetically pleasing ones, very functional ones, but the perfect mix of both? Still a work in progress. Just ask [Sarah de Lagarde], who in 2022 literally lost an arm and a leg in a life-changing accident. In this BBC interview, she shares her experiences openly – highlighting both the promise and the limits of today’s prosthetics.

The problem is that our hands aren’t just grabby bits. They’re intricate systems of nerves, tendons, and ridiculously precise motor control. Even the best AI-powered prosthetics rely on crude muscle signals, while dexterous robots struggle with the simplest things — like tying shoelaces or flipping a pancake without launching it into orbit.

That doesn’t mean progress isn’t happening. Researchers are training robotic fingers with real-world data, moving from ‘oops’ to actual precision. Embodied AI, i.e. machines that learn by physically interacting with their environment, is bridging the gap. Soft robotics with AI-driven feedback loops mimic how our fingers instinctively adjust grip pressure. If haptics are your point of interest, we have posted about it before.

The future isn’t just robots copying our movements, it’s about them understanding touch. Instead of machine learning, we might want to shift focus to human learning. If AI cracks that, we’re one step closer.

 

Preventing AI Plagiarism With .ASS Subtitling

Around two years ago, the world was inundated with news about how generative AI or large language models would revolutionize the world. At the time it was easy to get caught up in the hype, but in the intervening months these tools have done little in the way of productive work outside of a few edge cases, and mostly serve to burn tons of cash while turning the Internet into even more of a desolate wasteland than it was before. They do this largely by regurgitating human creations like text, audio, and video into inferior simulacrums and, if you still want to exist on the Internet, there’s basically nothing you can do to prevent this sort of plagiarism. Except feed the AI models garbage data like this YouTuber has started doing.

At least as far as YouTube is concerned, the worst offenders of AI plagiarism work by downloading the video’s subtitles, passing them through some sort of AI model, and then generating another YouTube video based off of the original creator’s work. Most subtitle files are the fairly straightfoward .srt filetype which only allows for timing and text information. But a more obscure subtitle filetype known as Advanced SubStation Alpha, or .ass, allows for all kinds of subtitle customization like orientation, formatting, font types, colors, shadowing, and many others. YouTuber [f4mi] realized that using this subtitle system, extra garbage text could be placed in the subtitle filetype but set out of view of the video itself, either by placing the text outside the viewable area or increasing its transparency. So now when an AI crawler downloads the subtitle file it can’t distinguish real subtitles from the garbage placed into it.

[f4mi] created a few scripts to do this automatically so that it doesn’t have to be done by hand for each one. It also doesn’t impact the actual subtitles on the screen for people who need them for accessibility reasons. It’s a great way to “poison” AI models and make it at least harder for them to rip off the creations of original artists, and [f4mi]’s tests show that it does work. We’ve actually seen a similar method for poisoning data sets used for emails long ago, back when we were all collectively much more concerned about groups like the NSA using automated snooping tools in our emails than we were that machines were going to steal our creative endeavors.

Thanks to [www2] for the tip!

Continue reading “Preventing AI Plagiarism With .ASS Subtitling”

Render of life-size robot rat animatronic on blue plane

Robot Rodents: How AI Learned To Squeak And Play

In an astonishing blend of robotics and nature, SMEO—a robot rat designed by researchers in China and Germany — is fooling real rats into treating it like one of their own.

What sets SMEO apart is its rat-like adaptability. Equipped with a flexible spine, realistic forelimbs, and AI-driven behavior patterns, it doesn’t just mimic a rat — it learns and evolves through interaction. Researchers used video data to train SMEO to “think” like a rat, convincing its living counterparts to play, cower, or even engage in social nuzzling. This degree of mimicry could make SMEO a valuable tool for studying animal behavior ethically, minimizing stress on live animals by replacing some real-world interactions.

For builders and robotics enthusiasts, SMEO is a reminder that robotics can push boundaries while fostering a more compassionate future. Many have reservations about keeping intelligent creatures in confined cages or using them in experiments, so imagine applying this tech to non-invasive studies or even wildlife conservation. In a world where robotic dogs, bees, and even schools of fish have come to life, this animatronic rat sounds like an addition worth further exploring. SMEO’s development could, ironically, pave the way for reducing reliance on animal testing.

Continue reading “Robot Rodents: How AI Learned To Squeak And Play”

Artificial Intelligence Runs On Arduino

Fundamentally, an artificial intelligence (AI) is nothing more than a system that takes a series of inputs, makes some prediction, and then outputs that information. Of course, the types of AI in the news right now can handle a huge number of inputs and need server farms’ worth of compute to generate outputs of various forms, but at a basic level, there’s no reason a purpose-built AI can’t run on much less powerful hardware. As a demonstration, and to win a bet with a friend, [mondal3011] got an artificial intelligence up and running on an Arduino.

This AI isn’t going to do anything as complex as generate images or write clunky preambles to every recipe on the Internet, but it is still a functional and useful piece of software. This one specifically handles the brightness of a single lamp, taking user input on acceptable brightness ranges in the room and outputting what it thinks the brightness of the lamp should be to match the user’s preferences. [mondal3011] also builds a set of training data for the AI to learn from, taking the lamp to various places around the house and letting it figure out where to set the brightness on its own. The training data is run through a linear regression model in Python which generates the function that the Arduino needs to automatically operate the lamp.

Although this isn’t the most complex model, it does go a long way to demonstrating the basic principles of using artificial intelligence to build a useful and working model, and then taking that model into the real world. Note also that the model is generated on a more powerful computer before being ported over to the microcontroller platform. But that’s all par for the course in AI and machine learning. If you’re looking to take a step up from here, we’d recommend this robot that uses neural networks to learn how to walk.