Counting Bees With A Raspberry Pi

Even if keeping bees sounds about as wise to you as keeping velociraptors (we all know how that movie went), we have to acknowledge that they are a worthwhile thing to have around. We don’t personally want them around us of course, but we respect those who are willing to keep a hive on their property for the good of the environment. But as it turns out, there are more challenges to keeping bees than not getting stung: you’ve got to keep track of the things too.

Keeping an accurate record of how many bees are coming and going, and when, is a rather tricky problem. Apparently bees don’t like electromagnetic fields, and will flee if they detect them. So putting electronic measuring devices inside the hive can be an issue. [Mat Kelcey] decided to try counting his bees with computer vision, and so far the results are very promising.

After some training, a Raspberry Pi with a camera can count how many bees are in a given image to within a few percent of the actual number. Getting an accurate count of his bees allows [Mat] to generate fascinating visualizations about his hive’s activity and health. With real-world threats such as colony collapse disorder, this type of hard data can be crucial.

This is a perfect example of a hack which might not pertain to many of us as-is, but still contains a wealth of information which could be applicable to other projects. [Mat] goes into a fantastic amount of detail about the different approaches he tried, what worked, what didn’t, and where he goes from here. So far the only problem he’s having is with the Raspberry Pi: it’s only able to run at one frame per second due to the computational requirements of identifying the bees. But he’s got some ideas to improve the situation.
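If you want to play along before training a proper model, the core idea of counting critters in a frame can be roughed out with nothing more than OpenCV blob detection. To be clear, this is not [Mat]'s network, just a deliberately simplified sketch of the concept; the file name and thresholds below are made up.

```python
# Not [Mat]'s trained network, just a simplified stand-in: count dark blobs of
# roughly bee size in a single frame. File name and thresholds are illustrative.
import cv2

frame = cv2.imread("hive_entrance.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Bees show up darker than the hive board; threshold and clean up the mask.
_, mask = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Each sufficiently large connected blob counts as one bee.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bees = [c for c in contours if cv2.contourArea(c) > 40]
print(f"Estimated bee count: {len(bees)}")
```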

As it so happens, we’ve covered a few other methods of counting bees in the past, though this is the first one to be entirely vision based. Interestingly, this method is similar to the project to track squirrels in the garden. Albeit without the automatic gun turret part.

Training The Squirrel Terminator

Depending on which hemisphere of the Earth you’re currently reading this from, summer is finally starting to fight its way to the surface. For the more “green” of our readers, that can mean it’s time to start making plans for summer gardening. But as anyone who’s ever planted something edible can tell you, garden pests such as squirrels are fantastically effective at turning all your hard work into a wasteland. Finding ways to keep them away from your crops can be a full-time job, but luckily it’s a job nobody will mind if automation steals from humans.


[Peter Quinn] writes in to tell us about the elaborate lengths he is going to in order to keep bushy-tailed marauders away from his tomatoes this year. Long term, he plans on setting up a non-lethal sentry gun to scare them away, but before he can get to that point he needs to perfect the science of automatically targeting his prey. At the same time, he wants to train the system well enough that it won’t fire on humans or other animals such as cats and birds which might visit his garden.

A Raspberry Pi 3 with a cheap webcam is used to surveil the garden and detect motion. When frames containing motion are detected, they are forwarded to a laptop which has enough horsepower to handle the squirrel detection through Darknet YOLO. [Peter] recognizes this isn’t an ideal architecture for real-time targeting of a sentry turret, but it’s good enough for training the system.
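That division of labor is easy to prototype: the Pi only has to notice that something changed and hand the frame off. Here is a rough sketch of that motion-gating step, assuming OpenCV and a USB webcam; the change threshold and output path are made up, and the real project may gate frames differently.

```python
# A rough sketch of the "Pi flags motion, laptop runs YOLO" split, assuming
# OpenCV and a USB webcam. The pixel-change threshold and output path are
# made up; the real project may gate frames differently.
import cv2

cap = cv2.VideoCapture(0)
prev = None
saved = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is None:
        prev = gray
        continue
    # Count pixels that changed noticeably since the previous frame.
    diff = cv2.absdiff(prev, gray)
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
    if changed > 5000:  # arbitrary "something is moving" threshold
        cv2.imwrite(f"motion_{saved:05d}.jpg", frame)  # hand this off to the laptop
        saved += 1
    prev = gray
```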

Training the detector, incidentally, is what [Peter] spends the most time explaining on the project’s Hackaday.io page. From the saga of getting the software environment up and running to figuring out how many pictures of squirrels in his yard he needed to feed the software for training, it’s an excellent case study in rolling your own image recognition system. After approximately 18 hours of training, he now has a system which is able to pick squirrels out from the foliage. The next step is hooking up the turret.

We’ve covered other automated turrets here on Hackaday, and we’ve seen automated devices for terrifying squirrels before, but this is the first time we’ve seen the concepts mixed.

Raspberry Pi Offers Soulless Work Oversight

If you’re like us, you spend more time than you care to admit staring at a computer screen. Whether it’s trying to find the right words for a blog post or troubleshooting some code, the end result is the same: an otherwise normally functioning human being is reduced to a slack-jawed zombie. Wouldn’t it be nice to be able to quantify just how much of your life is being wasted basking in the flickering glow of your monitor? Surely that wouldn’t be a crushingly depressing piece of information to have at the end of the week.

With the magic of modern technology, you need wonder no longer. Prolific hacker [dekuNukem] has created the aptly named “facepunch”, which allows you to “punch in” with nothing more than your face. Just sit down in front of your Raspberry Pi’s camera, and the numbers start ticking away. It’s like the little clock in the front of a taxi, except at the end you don’t have to pay anyone; you just have to come to terms with what your life has become. So that’s cool.

It doesn’t take much hardware to play along at home. All you need is a Raspberry Pi and the official camera accessory. Though for the full effect you should add one of the displays supported by the Luma.OLED driver so you can see the minutes and hours ticking away in real-time.
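For reference, driving one of those displays from Python takes only a few lines with the luma.oled library. Below is a minimal sketch, assuming an I2C SSD1306 at its default address; the text shown is just an example rather than facepunch's actual output.

```python
# A minimal sketch of driving an I2C SSD1306 with luma.oled; the displayed
# text is just an example, not facepunch's actual output.
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

device = ssd1306(i2c(port=1, address=0x3C))
with canvas(device) as draw:
    draw.text((0, 0), "Screen time today:", fill="white")
    draw.text((0, 20), "3 h 42 min", fill="white")
```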

To get the facial recognition going, all you need to do is take a well-lit picture of your face and save it as a 400×400 JPEG. The Python 3 script will take care of the rest: checking the frames from the camera every few seconds to see if your beautiful mug is in the frame, and incrementing the counters accordingly.
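That loop is simple enough to sketch out. The snippet below is our own minimal take using the face_recognition and opencv-python packages, not necessarily how facepunch is structured internally; the reference photo path and the five-second polling interval are illustrative.

```python
# A minimal sketch of the idea using the face_recognition and opencv-python
# packages; not necessarily how facepunch is structured. Paths and the
# polling interval are illustrative.
import time
import cv2
import face_recognition

# The well-lit reference photo described above (assumes it contains one face).
known = face_recognition.face_encodings(face_recognition.load_image_file("me.jpg"))[0]

cap = cv2.VideoCapture(0)
seconds_at_desk = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # If any face in the frame matches the reference photo, count this interval.
    if any(face_recognition.compare_faces([known], enc)[0]
           for enc in face_recognition.face_encodings(rgb)):
        seconds_at_desk += 5
        print(f"Screen time so far: {seconds_at_desk // 60} min")
    time.sleep(5)
```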

Even if you’re not in the market for an Orwellian electronic supervisor, this project is a great example to get you started in the world of facial recognition. With a little luck, you’ll be weaponizing it in no time.

High-Speed Drones Use AI To Spoil The Fun

Some people look forward to the day when robots have taken over all our jobs and given us an economy where we can while our days away on leisure activities. But if your idea of play is drone racing, you may be out of luck if this AI pilot for high-speed racing drones has anything to say about it.

NASA’s Jet Propulsion Lab has been working for the past two years to develop the algorithms needed to let high-performance UAVs navigate typical drone racing obstacles, and from the look of the tests in the video below, they’ve made a lot of progress. The system is vision-based, with the AI drones equipped with wide-field cameras looking both forward and down. The indoor test course has seemingly random floor tiles scattered around, which we guess provide some kind of waypoints for the drones. A previous video reveals a little about the architecture, and it seems the drones are doing the computer vision on-board, which we find pretty impressive.

Despite the program being bankrolled by Google, we’re sure no evil will come of this, and that we’ll be in no danger of being chased down by swarms of high-speed flying killbots anytime soon. For now we can take solace in the fact that JPL’s algorithms still can’t beat an elite human pilot like [Ken Loo], who bested the bots overall. But alarmingly, the human did no better than the bots on his first lap, which suggests that once the AI gets a little creativity and intuition like that needed to best a Go champion, [Ken] might need to find another line of work.


Google’s Inception Sees This Turtle As A Gun; Image Recognition Camouflage

The good people at MIT’s Computer Science and Artificial Intelligence Laboratory [CSAIL] have found a way of tricking Google’s InceptionV3 image classifier into seeing a rifle where there actually is a turtle. This is achieved by presenting the classifier with what are called ‘adversarial examples’.

Adversarial examples are a proven concept for 2D stills. In 2014, [Goodfellow], [Shlens] and [Szegedy] added imperceptible noise to the image of a panda that from then on was classified as a gibbon. This method relies on the image being undisturbed and can be overcome by zooming, blurring or rotating the image.

The applicability to real-world shenanigans has therefore been seriously limited, but this changes everything. This weaponized turtle is a color 3D print that is reliably misclassified by the algorithm from any point of view. Achieving this requires some knowledge about the classifier in order to generate misleading input. Image transformations such as rotation, scaling and skewing, as well as color corrections and even print errors, are applied to the input, and the result is then optimized to reliably mislead the algorithm. The whole process is documented in [CSAIL]’s paper on the method.
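The trick the paper calls Expectation Over Transformation is easier to grasp with a toy example. The sketch below is our own PyTorch illustration, not [CSAIL]'s code: it optimizes a perturbation so the image keeps getting the target label under random rotations, scaling and brightness changes. The file name, target class index and hyperparameters are invented, and proper ImageNet normalization is skipped for brevity.

```python
# A toy Expectation Over Transformation (EOT) sketch, not [CSAIL]'s code.
# File name, class index and hyperparameters are illustrative.
import random
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Pretrained InceptionV3; ImageNet normalization is omitted for brevity.
model = models.inception_v3(pretrained=True).eval()

def random_view(img):
    # Simulate viewpoint and print variation: random rotation, scale, brightness.
    angle = random.uniform(-30.0, 30.0)
    scale = random.uniform(0.9, 1.1)
    img = TF.affine(img, angle=angle, translate=[0, 0], scale=scale, shear=0.0)
    return (img * random.uniform(0.8, 1.2)).clamp(0.0, 1.0)

image = TF.to_tensor(Image.open("turtle.jpg").convert("RGB").resize((299, 299))).unsqueeze(0)
delta = torch.zeros_like(image, requires_grad=True)  # the adversarial perturbation
target = torch.tensor([413])  # an illustrative ImageNet class index for the target label
optimizer = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    # Average the loss over several random transformations so the perturbation
    # keeps fooling the classifier from different viewpoints; that averaging is
    # the core of the EOT idea.
    loss = 0.0
    for _ in range(8):
        logits = model(random_view((image + delta).clamp(0.0, 1.0)))
        loss = loss + F.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-0.1, 0.1)  # keep the change subtle
```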

What this amounts to is camouflage from machine vision. Assuming that the method also works the other way around, the possibility of disguising guns (or anything else) as turtles has serious implications for automated security systems.

As this turtle targets the Inception algorithm, it should be able to fool the DIY image recognition talkbox that Hackaday’s own [Steven Dufresne] built.

Thanks to [Adam] for the tip.

Hackaday Prize Entry: Elephant AI

[Neil K. Sheridan]’s Automated Elephant Detection System was a semi-finalist in last year’s Hackaday Prize. Encouraged by his close finish, [Neil] is back at it with a refreshed and updated Elephant AI project.

The purpose of Elephant AI is to help humans and elephants coexist by eliminating contact between the two species. What this amounts to is an AI that can herd elephants. For this year’s project, [Neil] did away with the RF communications and village base stations in favor of 4G/3G-connected autonomous sentries built around Raspberry Pi computers and GoPro cameras.

The main initiative of the project involves developing a system able to classify wild elephants visually, by automatically capturing images and then attempting to determine the elephant’s gender and age. Of particular importance is the challenge of detecting and controlling bull elephants during musth, a state of heightened aggressiveness that causes bulls to charge anyone who comes near. Musth can be detected visually, thanks to secretions called temporin that appear on the sides of the head. If cameras could identify bull elephants in musth and somehow guide them away from humans, everyone benefits.
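To get a feel for the vision side, here is a rough transfer-learning sketch in Keras for one narrow sub-task: telling a bull in musth from one that isn't, given a folder of labelled photos. It is emphatically not [Neil]'s code, and every path and parameter in it is illustrative.

```python
# Emphatically not [Neil]'s code: a rough Keras transfer-learning sketch for one
# narrow sub-task (musth vs. not), assuming a folder of labelled photos in
# "elephant_images/musth" and "elephant_images/normal". All names are illustrative.
import tensorflow as tf

data = tf.keras.utils.image_dataset_from_directory(
    "elephant_images", label_mode="binary", image_size=(224, 224), batch_size=16)

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),      # probability of musth
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(data, epochs=5)
```

Spotting a musth bull is only half the battle, of course.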

This brings up another challenge: [Neil] is researching ways to actually get elephants to move away if they’re approaching humans. He’s looking into nonlethal techniques like audio files of bees or lions, as well as ping-pong balls containing chili pepper.

Got some ideas? Follow the Elephant AI project on Hackaday.io.

Reading Bingo Balls With Microcontrollers

Every once in a while a project comes along with that magical power to consume your time and attention for many months. When you finally complete it, you almost feel sorry that there’s nothing left to do.

What is so special about this Bingo ball reader? It may seem like an ordinary OCR project at first glance; a camera captures the image and OCR software recognizes the number. Simple as that. And it works without problems, like every simple gadget should.

But then again, maybe it’s not that simple. The numbers are scattered all over the ball, so they have to be located first, and the best candidate for reading must be selected. Then, the numbers are painted onto a sphere rather than a flat surface, sometimes deforming them to the point where their shape has to be recovered before recognition. Also, the reading angle is not fixed and can be anywhere in a full 360° range. And then we have the glare problem to boot, as Bingo balls are so shiny that every light source reflects as a saturated bright spot.
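To make a couple of those steps concrete, here is a small desktop-side OpenCV sketch of glare masking and digit-candidate location. It only illustrates the idea and is nothing like the actual microcontroller code; the file name and thresholds are invented.

```python
# A desktop-side OpenCV sketch (not the microcontroller code) of two of the
# steps described above: finding glare spots and locating candidate digit blobs.
# The file name and thresholds are illustrative.
import cv2

img = cv2.imread("bingo_ball.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Glare shows up as near-saturated pixels; knowing where it is lets later
# stages reject digit candidates that a reflection has cut in half.
glare = cv2.inRange(gray, 240, 255)

# The printed digits are much darker than the ball, so a plain threshold plus
# connected-component analysis yields candidate regions to feed the OCR.
_, dark = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
count, labels, stats, _ = cv2.connectedComponentsWithStats(dark)
candidates = [stats[i] for i in range(1, count)
              if 100 < stats[i][cv2.CC_STAT_AREA] < 5000]
print(f"{len(candidates)} candidate digit blobs, "
      f"{cv2.countNonZero(glare)} glare pixels to avoid")
```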

So, is that all of it? Well, almost. The task is supposed to be performed by an embedded microcontroller, with limited speed and memory, yet the recognition process for one ball has to be fast — 500 ms at worst. But that’s just one part of the process. The project includes the pipelined mechanism which accepts the ball, transports it to be scanned by the OCR and then shot by the public broadcast camera before it gets dumped. And finally, if the reading was not reliable enough, the ball has to be subtly rotated so that the numbers would be repositioned for another reading attempt.

Despite these challenges I did manage to build this system. It’s fast and reliable, and I discovered some very interesting tricks along the way. Take a look at the quick demo video below to get a feel for the speed, and what the system “sees”. Then join me after the break to dive into the details of this interesting embedded build.
