Exploring Animal Intelligence Hack Chat

Join us on Wednesday, October 21st at noon Pacific for the Exploring Animal Intelligence Hack Chat with Hans Forsberg!

From our lofty perch atop the food chain, it’s easy to assume that we humans are the last word in intelligence. A quick glance at social media or a chat with a random stranger at the store should be enough to convince you that human intelligence isn’t all it’s cracked up to be, or at least that it’s not evenly distributed. But regardless, we are pretty smart, thanks to those big, powerful brains stuffed into our skulls.

We’re far from the only smart species on the planet, though. Fellow primates and other mammals clearly have intelligence, and we’ve seen amazingly complex behaviors from animals in just about every taxonomic rank. But it’s probably the birds that stuff the most functionality into their limited neural hardware: tool use, including the ability to make new tools, is common among them, as are long-distance navigation, superb binocular vision, and of course the ability to rapidly maneuver in three dimensions while flying.

Hans Forsberg has taken an interest in avian intelligence lately, and to explore just what’s possible he devised a fiendishly clever system he calls “BirdBox” to train his local magpie flock to clean up his yard. We recently wrote up his initial training attempts, which bear a strong resemblance to training a machine learning algorithm; that’s probably no small coincidence, since his professional background is in neural networks. He has several years of work invested in his birds, and he’ll stop by the Hack Chat to talk about what goes into leveraging animal intelligence, what we can learn about our own systems from it, and where BirdBox goes next.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, October 21 at 12:00 PM Pacific time. If time zones baffle you as much as they do us, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.



Hackaday Links: June 14, 2020

You say you want to go to Mars, but the vanishingly thin atmosphere, the toxic and corrosive soil, the bitter cold, the deadly radiation that sleets down constantly, and the long, perilous journey that you probably won’t return from have turned you off a little. Fear not, because there’s still a way to get at least part of you to Mars: your intelligence. Curiosity, the Mars rover now in the eighth year of its original two-year mission, is completely remote-controlled, and NASA would like to add some self-driving capabilities to it, which is why they’re asking for human help in classifying thousands of images of the Martian surface. By annotating images and pointing out what looks like soil and what looks like rock, you’ll be training an algorithm that might one day be sent up to the rover. If you’ve got the time, give it a shot — it seems a better use of time than training our eventual AI overlords.

We got a tip this week that ASTM, the international standards organization, has made its collection of standards for testing PPE available to the public. With titles like “Standard Test Method for Resistance of Medical Face Masks to Penetration by Synthetic Blood (Horizontal Projection of Fixed Volume at a Known Velocity)”, it seems the standards body wants to make sure that homebrew PPE gets tested properly before being put into service. The timing of this release is fortuitous, since this week’s Hack Chat features Hiram Gay and Lex Kravitz, colleagues from the Washington University School of Medicine, who will talk about what they did to test a respirator made from a full-face snorkel mask.

There’s little doubt that Lego played a huge part in the development of many engineers, and many of us never really put them away for good. We still pull them out occasionally, for fun or even for work, especially the Technic parts, which make a great prototyping system. But what if you need a Technic piece that you don’t have, or one that never existed in the first place? Easy — design and print your own custom Technic pieces. Lego Part Designer is a web app that breaks Technic parts down into five possible blocks, and lets you combine them as you see fit. We doubt that most FDM printers can deal with the fine tolerances needed for that satisfying Lego fit, but good enough might be all you need to get a design working.

Chances are pretty good that you’ve participated in more than a few video conferencing sessions lately, and if you’re anything like us you’ve found the experience somewhat lacking. The standard UI, with everyone in the conference organized in orderly rows and columns, reminds us of either a police line-up or the opening of The Brady Bunch, neither of which is particularly appealing. The paradigm could use a little rethinking, which is what Laptops in Space aims to do. By putting each participant’s video feed in a virtual laptop and letting them float in space, you’re supposed to have a more organic meeting experience. There’s a tweet with a short clip, or you can try it yourself. We’re not sure how we feel about it yet, but we’re glad someone is at least trying something new in this space.

And finally, if you’re in need of a primer on charlieplexing, or perhaps just need to brush up on the topic, [pileofstuff] has released a video that might be exactly what you need. He explains the tri-state logic behind the LED multiplexing method in detail, and even goes into some alternate uses, like using optocouplers to drive higher loads. We like his style — informal, but with a good level of detail that serves as a jumping-off point for further exploration.
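If you want to play along at home, the core trick is simple enough to sketch: N pins can drive N×(N−1) LEDs by driving one pin high, one pin low, and leaving every other pin as a high-impedance input. Here’s a minimal MicroPython sketch of that idea; the GPIO numbers are placeholders, and it assumes one LED wired between each ordered pair of pins.

```python
from machine import Pin
import time

PINS = (2, 3, 4)  # placeholder GPIO numbers; adjust for your board

# One LED between each ordered pair of pins: 3 pins -> 6 LEDs.
LEDS = [(a, c) for a in range(len(PINS)) for c in range(len(PINS)) if a != c]

def light(n):
    """Light LED n; every pin not involved goes high-impedance."""
    anode, cathode = LEDS[n]
    for i, gpio in enumerate(PINS):
        if i == anode:
            Pin(gpio, Pin.OUT, value=1)   # source current
        elif i == cathode:
            Pin(gpio, Pin.OUT, value=0)   # sink current
        else:
            Pin(gpio, Pin.IN)             # tri-stated: effectively disconnected

# Scan quickly and persistence of vision makes all six appear lit at once.
while True:
    for n in range(len(LEDS)):
        light(n)
        time.sleep_ms(2)
```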

Lego Machine Uses Machine Learning To Sort Itself Out

In our opinion, the primary evidence of a properly lived childhood is an enormous box of every conceivable Lego piece, from simple bricks to girders and gears, all with a small town’s worth of minifigs swimming through it. It takes years of birthdays and Christmases to accumulate a Lego collection best measured by the pound, but like anything worth doing, it’s worth overdoing.

But what to do with such a collection? Digging through it to find Just the Right Piece™ can be frustrating, and bringing order to the chaos with manual sorting is just so impractical. How about putting some of those bricks to work with a machine-vision Lego sorter built from Lego?

[Daniel West]’s approach is hardly new – we’ve even featured brick-built Lego sorters before – but we’re impressed by its architecture. First, the mechanical system is amazing. It uses a series of conveyors to transport bricks from a hopper, winnowing the stream down as it goes. The final step is a vibratory feeder that places one piece on a conveyor at a time. Those pass under a camera attached to a Raspberry Pi, where OpenCV does background subtraction from the video stream, applies bounding boxes to the parts, and runs the images through a convolutional neural network (CNN) that’s been trained on a database of every Lego part. Servo-controlled gates then direct the parts into one of 18 bins. See it in action in the video below.
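We don’t have [Daniel]’s code in front of us, but the vision front end he describes is classic OpenCV territory. Here’s a rough Python sketch of the background-subtraction and bounding-box stage, with the CNN classification and gate control left out; the camera index and the noise thresholds are guesses you’d tune on real hardware.

```python
import cv2

cap = cv2.VideoCapture(0)                        # camera index is an assumption
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground = brick on the belt
    mask = cv2.medianBlur(mask, 5)               # knock down sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:       # ignore specks
            continue
        x, y, w, h = cv2.boundingRect(contour)
        crop = frame[y:y + h, x:x + w]           # this crop would feed the CNN
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("belt", frame)
    if cv2.waitKey(1) == 27:                     # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```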

We must admit that we’re not sure what the sorting criteria are, as some bins seem nearly as chaotic as the input mix. Still, we appreciate the fine engineering, and award extra style points for all the Lego goodness.


Rock, Paper, Neural Net

You might think the game of Rock Paper Scissors is just random chance, but that’s not true. There is a strategy to Rock Paper Scissors, multiple strategies in fact, and the best human players can consistently beat any Joe Schmoe off the street. But what about computers? [Paul] answered that question with a tiny little keychain dongle that can beat you at Rock Paper Scissors.

This is a neural network, and a neural network needs training data, so where did [Paul] get it? From roshambo.me: the site collects thousands of Rock Paper Scissors games, and [Paul] trained the network on more than 85,000 human games, along with about 10,000 simulated ones. Rock Paper Scissors isn’t a complicated game at all, and the entire neural network is stored on an ATtiny1614 microcontroller. The calculations are even done as floats; that’s how light the computational load is.
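The write-up doesn’t spell out [Paul]’s exact architecture, but the shape of the idea is easy to sketch. The toy Python below, with a made-up layer layout and random weights standing in for trained ones, shows the kind of float-based forward pass that’s small enough to port to a microcontroller: encode the opponent’s recent throws, predict the next one, and counter it.

```python
import math
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

IN, HIDDEN = 9, 8   # 3 one-hot moves in, 8 hidden units: MCU-sized

# Random weights stand in for values learned from the ~95,000 games.
W1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HIDDEN)]
W2 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(3)]

def one_hot(history):
    """Encode the opponent's last three moves as a 9-element vector."""
    x = [0.0] * IN
    for i, move in enumerate(history[-3:]):
        x[3 * i + MOVES.index(move)] = 1.0
    return x

def counter_move(history):
    x = one_hot(history)
    hidden = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W1]
    logits = [sum(w * v for w, v in zip(row, hidden)) for row in W2]
    guess = MOVES[logits.index(max(logits))]   # predicted next human throw
    return BEATS[guess]                        # play whatever beats it

print(counter_move(["rock", "rock", "paper"]))
```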

Building a neural network is one thing, but putting it in a handy keychain enclosure is something else. This handsome device fits on a PCB just larger than a 2032 coin cell and is enclosed in a 3D-printed case. The buttons are 3D printed as well, with some clever application of fiber optics as light pipes for the LEDs. The end result is a device that plays slightly better than random chance at Rock Paper Scissors and shows off some matrix programming skills. Check out the video below.


But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug-busting system. The squeamish, or those challenged by the ethics of slug eradication, can relax: no slugs have been harmed yet. So far, [Tegwyn] has concentrated on detecting the slugs, a decidedly non-trivial problem, since there are few AI models already trained on slugs.

So far, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat, as they only come out at night, they blend into their background, and their slimy surfaces make for challenging reflections. The video below shows the trained model having moderate success on a static image of a slug; it also gives a glimpse of the hardware, which includes an Nvidia Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it from its current 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He’s willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
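For a sense of how that confidence gate might work in code, here’s a hedged Python sketch using OpenCV’s DNN module. The model filename, input size, and output layout are all stand-ins, since we don’t know the details of [Tegwyn]’s network; the point is simply that nothing downstream fires until a detection clears the confidence floor.

```python
import cv2

CONFIDENCE_FLOOR = 0.6                 # roughly where the project sits today

net = cv2.dnn.readNet("slugnet.onnx")  # hypothetical trained model file
img = cv2.imread("garden.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)
detections = net.forward()

# Assume each detection row is (x, y, w, h, confidence), normalized.
for x, y, w, h, conf in detections.reshape(-1, 5):
    if conf < CONFIDENCE_FLOOR:
        continue                        # too unsure; hold the laser fire
    print("slug at (%.2f, %.2f), confidence %.0f%%" % (x, y, conf * 100))
```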

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.


The Tiniest Computer Vision Platform Just Got Better

The future, if you believe the ad copy, is a world filled with cameras backed by intelligence, neural nets, and computer vision. Despite the hype, this may actually turn out to be true: drones are getting intelligent cameras, self-driving cars are loaded with them, and in any event it makes a great toy.

That’s what makes this Kickstarter so exciting. It’s a camera module, yes, but there are also some smarts behind it. The OpenMV is a MicroPython-powered machine vision camera that gives your project the power of computer vision without the need to haul a laptop or GPU along for the ride.

The OpenMV actually got its start as a Hackaday Prize entry focused on one simple idea: there are cheap camera modules everywhere, so why not attach a processor that allows for on-board image processing? The first version of the OpenMV could do face detection at 25 fps and color detection at more than 30 fps, and it became the basis for hundreds of different robots loaded up with computer vision.

This crowdfunding campaign is financing the latest version of the OpenMV camera, and there are a lot of changes. The camera module is now removable, meaning the OpenMV now supports global shutter and thermal vision in addition to the usual color/rolling shutter sensor. Since this camera has a faster microcontroller, this latest version can support multi-blob color tracking at 80 fps. With the addition of a FLIR Lepton sensor, this camera does thermal sensing, and thanks to a new library, the OpenMV also does number detection with the help of neural networks.
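To give a flavor of what programming the OpenMV feels like, here’s a minimal MicroPython blob-tracking sketch in the style of the official examples; the LAB color thresholds are placeholders you’d tune for your own targets.

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)          # let auto-exposure settle

# (L_min, L_max, A_min, A_max, B_min, B_max) thresholds in LAB space
RED   = (30, 100, 15, 127, 15, 127)
GREEN = (30, 100, -64, -8, -32, 32)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([RED, GREEN], pixels_threshold=100,
                               area_threshold=100, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())                  # track frame rate while tuning
```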

We’ve seen a lot of builds using the OpenMV camera, and it’s getting to the point where you can’t compete in an autonomous car race without this hardware. This new version has all the bells and whistles, making it one of the best ways we’ve seen to add computer vision to any hardware project.

Turn Yourself Into A Cyborg With Neural Nets

If smartwatches and tiny Bluetooth earbuds are any indication, the future is wearable electronics. This brings up a problem: developing wearable electronics isn’t as simple as building a device that’s meant to sit on a shelf. No, wearables move and stretch, and the people wearing them jump, kick, punch, and sweat. If you’re prototyping wearable electronics, it might be a good idea to build a Smart Internet of Things Wearable development board. That’s exactly what [Dave] did for his Hackaday Prize entry, and it’s really, really fantastic.

[Dave]’s BodiHub is an outgrowth of his entry into last year’s Hackaday Prize. While the project might not look like much, that’s kind of the point; [Dave]’s previous projects involved shrinking thousands of dollars’ worth of equipment down to a tiny board that can read muscle signals. This project takes that idea a bit further by creating a board that’s wearable, has support for battery charging, and makes prototyping with wearable electronics easy.

You might be asking what you can do with a board like this. For that, [Dave] suggests a few projects, like boxing gloves that talk to each other, or that tell you how much force you’re punching with. Alternatively, you could read body movements and synchronize an LED light show to a dance performance. It can go further than that, though, because [Dave] built a mesh network logistics tracking system that uses an augmented reality interface. This was actually demoed at TechCrunch Disrupt NY, and the audience was wowed. You can check out the video of that demo here.