
Hackaday Links: August 14, 2022

What’s this? News about robot dogs comes out, and there’s no video of the bots busting a move on the dance floor? Nope — it looks like quadruped robots are finally going to work for real, as “ground drones” are being deployed to patrol Cape Canaveral. Rather than the familiar and friendly Boston Dynamics “Big Dog” robot, the US Space Force went with Ghost Robotics Vision 60 Q-UGVs, or “quadruped unmanned ground vehicles.” The bots share the same basic layout as Big Dog but have a decidedly more robust, and somehow more sinister, appearance. The dogs are IP67-rated for all-weather use and will be deployed for “damage assessments and patrols,” whatever that means. Since this is the same dog that has had a gun mounted to it, though, we’d be careful not to stray too far from the tours at Kennedy Space Center.

Continue reading “Hackaday Links: August 14, 2022”

Microsoft’s New Simulator Helps Train Drone AIs

Testing any kind of project in the real world is expensive. You have to haul people and equipment around, which costs money, and if you break anything, you have to pay for that too! Simulation tends to come first. Making mistakes in a simulation is much cheaper, and the lessons learned can later be verified in the real world. If you want to learn to fly a quadcopter, the best thing to do is get some time behind the sticks of a simulator before you even purchase anything with physical whirly blades.

Oddly enough, the same goes for AI. Microsoft has built a simulation product by the name of Project AirSim to aid the development of artificial intelligence systems for drones. It aims to provide a comprehensive environment for testing drone AI systems, making development faster, cheaper, and more practical.

Continue reading “Microsoft’s New Simulator Helps Train Drone AIs”

AI Creates Your Spreadsheets, Sometimes

We’ve been interested in looking at how AI can process things other than silly images. That’s why the “Free AI Bot that Generates the Excel Formula for Any Problem” caught our eye. Based on GPT-3, it supposedly transforms your problem description into a formula suitable for Excel or Google Sheets.

Our first prompt didn’t work out very well. But that was sort of our fault. When they say “Excel formula” they mean that quite literally. So trying to describe the actual result you want in terms of columns or rows seems to be beyond it. Not realizing that, we asked:

If the sum of column H is greater than 50, multiply column A by 0.33

And got:

=IF(SUM(H:H)>50,A*0.33,0)


That’s close, but not really how anyone even mildly proficient with Excel would interpret that request. Then again, that’s not quite fair; it really needs to be a y=f(x) sort of problem, we suppose.
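
For the record, someone fluent in spreadsheets would more likely read that as a row-by-row operation, with something like this in a helper column and filled down (our interpretation of the intent, not the bot’s output):

=IF(SUM($H:$H)>50, A1*0.33, A1)

That leaves each value from column A untouched when the condition fails, rather than zeroing it out.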

Continue reading “AI Creates Your Spreadsheets, Sometimes”

AI Image Generation Sharpens Your Bad Photos And Kills Photography?

We don’t fully understand the appeal of asking an AI for a picture of a gorilla eating a waffle while wearing headphones. However, [Micael Widell] shows something in a recent video that might be the best use we’ve seen yet of DALL-E 2. Instead of concocting new photos, you can apparently use the same technology to clean up your own rotten pictures. You can see his video below; the part about DALL-E 2 editing starts at about the 4:45 mark.

[Nicholas Sherlock] fed the AI a fuzzy picture of a ladybug and asked it to bring the subject into focus. It did. He also fed it some other pictures and asked it to make subtle variations of them. It did a pretty good job of that, too.
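
If you’d like to play along, the variations feature is also exposed through OpenAI’s API. Here’s a minimal sketch using the circa-2022 openai Python package (version 0.x; the filename and key are placeholders, and the newer 1.x client has since moved these calls around):

    import openai

    openai.api_key = "sk-..."  # placeholder; substitute your own API key

    # Ask DALL-E 2 for subtle variations on an existing photo
    with open("ladybug.png", "rb") as image_file:
        response = openai.Image.create_variation(
            image=image_file,
            n=2,              # number of variations to generate
            size="1024x1024",
        )

    for item in response["data"]:
        print(item["url"])  # each variation comes back as a hosted URL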

Continue reading “AI Image Generation Sharpens Your Bad Photos And Kills Photography?”

A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot

While it is hard to tell with a photo, this robot looks more like a model of an old-fashioned clock than anything resembling a Nixie tube. It’s the kind of project that could have been created by anyone with a little bit of Arduino tinkering experience. In this case, the 3D printer used by the Nixie clock project is a Prusa i3 (which is the same printer used to make the original Nixie tubes).

The Nixie clock project was started by a couple of students from the University of Washington who were bored one day and decided to have a go at creating their own timepiece. After a few prototypes and tinkering around with the code, they came up with a design for the clock that was more functional than ornate.

The result is a great example of how one can create a functional and aesthetically pleasing project with a little bit of free time.

Confused yet? You should be.

If you’ve read this far then you’re probably scratching your head and wondering what has come over Hackaday. Should you not have already guessed, the paragraphs above were generated by an AI — in this case Transformer — while the header image came from the popular DALL-E Mini, now rebranded as Craiyon. Both of them were given the most Hackaday title we could think of, “A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot”, and told to get on with it. The exercise was sparked by curiosity following the viral success of AI generators, and it posed the question of whether an AI could make a passable stab at a Hackaday piece. Transformer works on a prompt model in which the operator is offered a choice of several sentence fragments at each step, so the text reflects those choices, but any of the other options could equally well have been followed.
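
If you fancy trying the experiment yourself, the Hugging Face transformers library makes the prompt-completion part easy. Here’s a minimal sketch using the freely available GPT-2 model, a stand-in on our part since we can’t say exactly which model powers the hosted demo:

    from transformers import pipeline

    # GPT-2 is our stand-in; the hosted Transformer demo may use other models
    generator = pipeline("text-generation", model="gpt2")

    prompt = "A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot"
    result = generator(prompt, max_length=120, num_return_sequences=1)
    print(result[0]["generated_text"])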

As a Hackaday writer, it’s reassuring that the text doesn’t manage to convey anything useful, but also slightly shocking that from just that single prompt it’s created meaningful and clear sentences which on another day might have flowed from a Hackaday keyboard as part of a real article. It’s likely that we’ve found our way into whatever corpus trained its model, and likely too that subject matter so Hackaday-targeted would cause it to zero in on that part of its source material, but despite that it’s unnerving to realise that a computer somewhere might just have your number. For now though, Hackaday remains safe at the keyboards of a group of meatbags.

We’ve considered the potential for AI garbage before, when we looked at GitHub Copilot.

Machine Learning Does Its Civic Duty By Spotting Roadside Litter

If there’s one thing that never seems to suffer from supply chain problems, it’s litter. It’s everywhere, easy to spot and — you’d think — pick up. Sadly, most of us seem to treat litter as somebody else’s problem, but with something like this machine vision litter mapper, you can at least be part of the solution.

For the civic-minded [Nathaniel Felleke], the litter problem in his native San Diego was getting to be too much. He reasoned that a map of where the trash is located could help municipal crews with cleanup, so he set about building a system to search for trash automatically. Using Edge Impulse and a collection of roadside images captured from a variety of sources, he built a model for recognizing trash. To find the garbage, a webcam mounted in a car window captures images as he drives, while a Raspberry Pi 4 runs the model and looks for garbage. When roadside litter is found, the Pi uses a Blues Wireless Notecard to send the GPS location of the rubbish to a cloud database via the card’s cellular modem.
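
As a rough sketch of that pipeline, and emphatically not [Nathaniel]’s actual code, the Pi-side loop might look something like this, with the Edge Impulse inference stubbed out and the Notecard driven over serial by the note-python library:

    import cv2       # webcam capture
    import serial    # pyserial, for the Notecard's serial port
    import notecard  # Blues Wireless note-python library

    def detect_trash(frame) -> bool:
        """Stand-in for the Edge Impulse model's inference call."""
        raise NotImplementedError

    port = serial.Serial("/dev/ttyACM0", 9600)  # port name is a guess
    card = notecard.OpenSerial(port)
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if ok and detect_trash(frame):
            # Ask the Notecard for its current GPS fix...
            location = card.Transaction({"req": "card.location"})
            # ...and queue a note to sync to the cloud over cellular
            card.Transaction({
                "req": "note.add",
                "file": "litter.qo",
                "body": {"lat": location.get("lat"), "lon": location.get("lon")},
                "sync": True,
            })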

Cruising around the streets of San Diego, [Nathaniel]’s system builds up a database of garbage hotspots. From there, it’s pretty straightforward to pull the data and overlay it on Google Maps to create a heatmap of where the garbage lies. The video below shows his system in action.
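
Plotting those points takes only a few lines, too. Here’s a quick sketch using folium, which draws on OpenStreetMap tiles rather than Google Maps, with made-up coordinates standing in for the real database:

    import folium
    from folium.plugins import HeatMap

    # (lat, lon) pairs; these sample points are invented, not real data
    litter_points = [
        (32.7157, -117.1611),
        (32.7180, -117.1655),
        (32.7210, -117.1702),
    ]

    sd_map = folium.Map(location=[32.7157, -117.1611], zoom_start=13)
    HeatMap(litter_points).add_to(sd_map)
    sd_map.save("litter_heatmap.html")  # open the HTML file in a browser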

Yes, driving a personal vehicle around specifically to spot litter just adds more waste to the mix, but you could imagine putting something like this on municipal vehicles that are already driving around cities anyway. Either way, we picked up some neat tips, especially those wireless IoT cards. We’ve seen them used before, but [Nathaniel]’s project gives us a path forward on some ideas we’ve had kicking around for a while.

Continue reading “Machine Learning Does Its Civic Duty By Spotting Roadside Litter”

Edging Ahead When Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be unstoppable.”

That’s what the marketing for artificial intelligence companies seems to read like. Everyone seems to have cloud-scale, AI-powered business intelligence analytics at the edge. While it all sounds impressive, we’re not convinced the marketing mumbo jumbo means anything. So what does AI on edge devices actually look like these days?

Being on the edge just means that the actual AI evaluation, and maybe even fine-tuning, runs locally on a user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy is more easily preserved, since less information is transmitted back to a central location, and the AI can work in scenarios where a server might not be accessible or might not respond quickly enough.

Google and Apple have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models into formats that Core ML and ML Kit understand. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve also previously covered TensorFlow Lite (TFL), a trimmed-down version of TensorFlow, which has matured considerably since 2017.
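
As a taste of what those converters look like, here’s a minimal sketch that traces a stock torchvision model and converts it to Core ML with Apple’s coremltools; the model choice is just an example:

    import torch
    import torchvision
    import coremltools as ct

    # Trace an off-the-shelf image classifier as an example
    model = torchvision.models.mobilenet_v2(pretrained=True).eval()
    example_input = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example_input)

    # Convert the traced model to Core ML and save it for iOS use
    mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example_input.shape)])
    mlmodel.save("MobileNetV2.mlmodel")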

For this article, we’ll be looking at PyTorch Live (PTL), a slimmed-down framework for adding PyTorch models to smartphones. Unlike TFL (which can also run on a Raspberry Pi and in a browser), PTL is focused entirely on Android and iOS and offers tight integration. It uses a React Native-backed environment, which means it’s heavily geared toward the Node.js world.

Continue reading “Edging Ahead When Learning On The Edge”