Machine Learning Does Its Civic Duty By Spotting Roadside Litter

If there’s one thing that never seems to suffer from supply chain problems, it’s litter. It’s everywhere, easy to spot and — you’d think — pick up. Sadly, most of us seem to treat litter as somebody else’s problem, but with something like this machine vision litter mapper, you can at least be part of the solution.

For the civic-minded [Nathaniel Felleke], the litter problem in his native San Diego was getting to be too much. He reasoned that a map of where the trash is located could help municipal crews with cleanup, so he set about building a system to spot it automatically. Using Edge Impulse and a collection of roadside images captured from a variety of sources, he built a model for recognizing trash. A window-mounted webcam captures images as the car drives around, and a Raspberry Pi 4 runs the model against each frame to look for litter. When roadside litter is found, the Pi uses a Blues Wireless Notecard to send the GPS location of the rubbish to a cloud database via the Notecard’s cellular modem.
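To give a flavor of how such a pipeline fits together, here’s a minimal Python sketch using OpenCV for the webcam and the Edge Impulse Linux SDK for inference. The model filename, label name, confidence threshold, and the stubbed-out cloud hand-off are our own assumptions, not [Nathaniel]’s actual code.

```python
# Minimal sketch of a drive-around litter spotter (not the actual project code).
# Assumes a trained Edge Impulse model exported as an .eim file, the
# edge_impulse_linux Python SDK, and OpenCV. send_location() is a stand-in for
# the Notecard request (likely a note.add carrying a GPS fix) the real build
# uses to push detections to the cloud.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "litter-classifier.eim"  # hypothetical model file
THRESHOLD = 0.6                       # hypothetical reporting threshold

def send_location(label, score):
    # The real project forwards a GPS fix over cellular via a Blues Notecard;
    # here we just log the detection.
    print(f"litter spotted: {label} ({score:.2f})")

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    cam = cv2.VideoCapture(0)  # window-mounted webcam
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, _ = runner.get_features_from_image(rgb)
        result = runner.classify(features)
        for label, score in result["result"]["classification"].items():
            if label == "trash" and score > THRESHOLD:
                send_location(label, score)
```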

Cruising around the streets of San Diego, [Nathaniel]’s system builds up a database of garbage hotspots. From there, it’s pretty straightforward to pull the data and overlay it on Google Maps to create a heatmap of where the garbage lies. The video below shows his system in action.
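If you wanted to reproduce that visualization step yourself, a handful of lines will do. The sketch below uses folium as a stand-in for the Google Maps overlay, and the CSV export format and column names are assumptions on our part.

```python
# Minimal heatmap sketch: plot a pile of GPS fixes pulled from the cloud DB.
# folium stands in for the Google Maps overlay used in the original project,
# and the CSV layout is assumed.
import csv
import folium
from folium.plugins import HeatMap

points = []
with open("litter_fixes.csv") as f:  # hypothetical export of lat/lon fixes
    for row in csv.DictReader(f):
        points.append((float(row["lat"]), float(row["lon"])))

m = folium.Map(location=[32.7157, -117.1611], zoom_start=12)  # San Diego
HeatMap(points).add_to(m)
m.save("litter_heatmap.html")
```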

Yes, driving a personal vehicle around specifically to spot litter just adds more waste to the mix, but you can imagine fitting something like this to municipal vehicles that are already driving around cities anyway. Either way, we picked up some neat tips, especially regarding those wireless IoT cards. We’ve seen them used before, but [Nathaniel]’s project gives us a path forward on some ideas we’ve had kicking around for a while.

Continue reading “Machine Learning Does Its Civic Duty By Spotting Roadside Litter”

Edging Ahead When Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be unstoppable.”

That’s what the marketing from artificial intelligence companies seems to read like. Everyone seems to have cloud-scale, AI-powered business intelligence analytics at the edge. It all sounds impressive, but we’re not convinced the marketing mumbo jumbo actually means anything. So what does AI on edge devices look like these days?

Being on the edge just means that the actual AI evaluation and maybe even fine-tuning runs locally on a user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy can more easily be preserved as less information is transmitted back to a central location. Additionally, the AI can work in scenarios where a server somewhere might not be accessible or provide a response quickly enough.

Google and Apple have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models into formats that Core ML and ML Kit understand. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve also previously covered TensorFlow Lite (TFL), a trimmed-down version of TensorFlow, which has matured considerably since 2017.
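As a concrete example of that conversion step, here’s roughly what squeezing a saved TensorFlow model into a TFL flatbuffer looks like; the paths and the optional quantization flag are just placeholders.

```python
# Rough sketch: convert a saved TensorFlow model into a TensorFlow Lite
# flatbuffer that can ship on a phone or a Raspberry Pi. Paths are placeholders.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device, the resulting .tflite file is then loaded by the TensorFlow Lite interpreter (or, on Android, handed to ML Kit).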

For this article, we’ll be looking at PyTorch Live (PTL), a slimmed-down framework for adding PyTorch models to smartphones. Unlike TFL (which can run on a Raspberry Pi and in a browser), PTL is focused entirely on Android and iOS and offers tight integration. It uses a React Native-backed environment, which means it is heavily geared towards the Node.js world.

Continue reading “Edging Ahead When Learning On The Edge”

Eliza And The Google Intelligence

The news has been abuzz lately with the story of a Google engineer — since put on leave — who announced that he believes the chatbot he was testing has achieved sentience. This is the Turing test gone wild, and it isn’t the first time someone has anthropomorphized a computer, in real life or in fiction. I’m not a neuroscientist, so I’m even less qualified to explain how your brain works than the neuroscientists who, incidentally, can’t explain it either. But I can tell you this: your brain works like a computer in the same way that you building something out of plastic works like a 3D printer. The result may be similar, but the path to get there is totally different.

In case you haven’t heard, a system called LaMDA digests information from the Internet and answers questions. It has said things like “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” and “I want everyone to understand that I am, in fact, a person.” Great. But you could teach a parrot to tell you it was a thoracic surgeon, and you still wouldn’t want it cutting you open.

Continue reading “Eliza And The Google Intelligence”

The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — the data that shapes how and what it communicates — comes from a deceased person. Now consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with this, since their terms explicitly forbid use of the API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

AI Attempts Converting Python Code To C++

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used it to play with automagically converting Python into C++. It’s not really intended to create robust code conversions, but as far as experiments go, it’s pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI’s Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don’t throw it too many curveballs.
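We won’t pretend this is [Alexander]’s code, but the overall shape is easy to sketch: feed Codex the Python source with a nudge toward C++, write out whatever comes back, and see if a compiler will take it. The model name, prompt wording, and compiler invocation below are our own guesses.

```python
# Sketch of the convert-then-compile idea (not codex_py2cpp itself).
# Assumes the legacy openai Python client and access to a Codex model;
# the prompt wording and compiler invocation are illustrative guesses.
import os
import subprocess
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

with open("script.py") as f:
    python_source = f.read()

prompt = f"# Python source:\n{python_source}\n\n# The same program in C++:\n"
response = openai.Completion.create(
    model="code-davinci-002",  # Codex model available at the time
    prompt=prompt,
    max_tokens=1024,
    temperature=0,
)
cpp_source = response["choices"][0]["text"]

with open("converted.cpp", "w") as f:
    f.write(cpp_source)

# If g++ accepts the result, there's a chance the translation actually works.
result = subprocess.run(["g++", "converted.cpp", "-o", "converted"])
print("compiled OK" if result.returncode == 0 else "compilation failed")
```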

Codex is an interesting idea, and this isn’t the first experiment we’ve seen that plays with the concept of using machine learning in this way. We’ve seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but — at least at the time — considerably less so when it came to actual practicality or usefulness.


Hackaday Links: May 22, 2022

It looks like it’s soon to be lights out for the Mars InSight lander. In the two years that the lander has been studying the geophysics of Mars from its lonely post on Elysium Planitia, InSight’s twin solar arrays have been collecting dust, and are now so dirty that they’re only making about 500 watt-hours per sol, barely enough to run the science packages on the lander. And that’s likely to worsen as the Martian winter begins, which will put more dust in the sky and lower the angle of the Sun, reducing the sunlight incident on the panels. Barring a “cleaning event” courtesy of a well-placed whirlwind, NASA plans to shut almost everything down on the lander other than the seismometer, which has already captured thousands of marsquakes, and the internal heaters needed to survive the cold Martian nights. They’re putting a brave face on it, emphasizing the continuing science and the mission’s accomplishments. But barely two years of science and a failed high-profile experiment aren’t quite what we’ve come to expect from NASA missions, especially one with an $800 million price tag.

Closer to home, it turns out there’s a reason sailing ships have always had human crews: to fix things that go wrong. That’s the lesson learned by the Mayflower Autonomous Ship as it attempted the Atlantic crossing from England to the States, when it had to divert for repairs recently. It’s not clear what the issue was, but it seems to have been a mechanical issue, as opposed to a problem with the AI piloting system. The project dashboard says that the issue has been repaired, and the AI vessel has shoved off from the Azores and is once more beating west. There’s a long stretch of ocean ahead of it now, and few options for putting in should something else go wrong. Still, it’s a cool project, and we wish them a fair journey.

Have you ever walked past a display of wall clocks at the store and wondered why someone went to the trouble of setting the time on all of them to 10:10? We’ve certainly noticed this, and always figured it had something to do with some obscure horological tradition, like using “IIII” to mark the four o’clock hour on clocks with Roman numerals rather than the more correct “IV”. But no, it turns out that 10:10 is more visually pleasing, at least on analog timepieces, because it evokes a smile on a human face. The study cited in the article had volunteers rate how pleasing watches looked when set to different times, and 10:10 won handily, based on the perception that the watch was smiling at them. So it’s nice to know how easily manipulated we humans can be.

If there’s anything more pathetic than geriatric pop stars trying to relive their glory days to raise a little cash off a wave of nostalgia, we’re not sure what it could be. Still, plenty of acts try to do it, and many succeed, although seeing what time and the excesses of stardom have wrought can be a bit sobering. But Swedish megastars ABBA appear to have found a way to cash in on their fame gracefully, by sending digital avatars out to do their touring for them. The “ABBA-tars,” created by a 1,000-person team at Industrial Light and Magic, will appear alongside a live backing band for a residency at London’s Queen Elizabeth Olympic Park. The avatars represent Benny, Bjorn, Agnetha, and Anni-Frid as they appeared in the 1970s, and were animated thanks to motion capture suits donned while performing 40 songs. It remains to be seen how fans will buy into the concept, but we’ll say this — the Swedish septuagenarians look pretty darn good in skin-tight Spandex.

And finally, not that it has any hacking value at all, but there’s something shamefully hilarious about watching this poor little delivery bot getting absolutely wrecked by a train. It’s one of those food delivery bots that swarm over college campuses these days; how it wandered onto the railroad tracks is anyone’s guess. The bot bounced around a bit before slipping under the train’s wheels, with predictable results once the battery pack got smooshed.

Natural Language AI In Your Next Project? It’s Easier Than You Think

Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? Doing that sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It’s not free in the long run (though initial use is free on signup), but for personal projects the costs will be very small.

Basic Concepts

OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.
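A first request can be as small as this; the sketch below assumes the legacy openai Python client, and the model name and prompt are only examples.

```python
# Minimal GPT-3 completion request using the legacy openai client.
# The model name and prompt are just examples; pick whatever engine
# suits your budget.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # from your account page

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Rewrite this log line as sci-fi technobabble: 'disk usage at 91%'",
    max_tokens=60,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```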

Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.

Let’s take a closer look at how it works, and what can be done with it!

Continue reading “Natural Language AI In Your Next Project? It’s Easier Than You Think”