Machine learning is an incredible tool for conservation research, especially for scenarios like long-term observation and sifting through massive amounts of data. While the average Hackaday reader might not be able to take part in data gathering in an isolated wilderness somewhere, we are all surrounded by bird life. Using an Arduino Nano 33 BLE Sense and an online machine learning tool, a team made up of [Errol Joshua], [Ajith KJ], [Mahesh Nayak], and [Supriya Nickam] demonstrates how to set up an automated bird call classifier.
The Arduino Nano 33 BLE Sense is a fully featured little dev board built around the very capable nRF52840 microcontroller with Bluetooth Low Energy, plus a variety of onboard sensors, including a microphone.

Training a machine learning model might seem daunting to many people, but online services like Edge Impulse make the process very beginner-friendly. Once you start training your own models for specific applications, you quickly learn that building and maintaining a high-quality dataset is often the most time-consuming part of machine learning. Fortunately for this use case, a massive online library of bird calls from all over the world is available on Xeno-Canto. This can be augmented with background noise from the area where the device will be deployed to reduce false positives. Edge Impulse will train the model using the provided dataset and generate a library that can be used on the Arduino with one of the provided sample sketches to log and send the collected data to a server. Then comes the never-ending process of iteratively testing and improving the recognition model. Edge Impulse is also compatible with more powerful devices such as the Raspberry Pi and Jetson Nano if you want to run more intensive machine learning models.
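As a rough illustration of the noise-augmentation step, here is a minimal host-side sketch of how a bird-call clip might be mixed with local background noise at a chosen signal-to-noise ratio before uploading to a training service. The function name `mix_at_snr` and the SNR-based approach are our own illustration, not part of the project or of Edge Impulse's tooling.

```python
import numpy as np

def mix_at_snr(call, noise, snr_db):
    """Mix a bird-call clip with background noise at a target SNR in dB.

    call, noise: 1-D float arrays of audio samples in [-1, 1].
    The noise is scaled so that 10*log10(P_call / P_noise) == snr_db.
    """
    noise = noise[:len(call)]                      # trim noise to clip length
    p_call = np.mean(call ** 2)                    # signal power
    p_noise = np.mean(noise ** 2) + 1e-12          # noise power (avoid /0)
    scale = np.sqrt(p_call / (p_noise * 10 ** (snr_db / 10)))
    mixed = call + scale * noise
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed   # avoid clipping
```

Generating several copies of each clip at different SNRs (say, 20 dB down to 0 dB) is a common way to make a classifier more robust to the deployment site's soundscape.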
We’ve also seen the exact same setup get used for a smart baby monitor. If you want to learn more, be sure to watch [Shawn Hymel]’s talk from the 2020 Remoticon about machine learning on microcontrollers.
Header photo by Joey Smith on Unsplash
Spurtleete ceoweet, robin. A spelt vocabulary is needed, perhaps. Peterson’s has voice graphs. I’ve never heard of the library above, but have heard of the Cornell Lab of Ornithology, which is the standard for bird sounds.
This is a worthwhile project to document the decline of songbirds. Baseline data to start. They are hurting while the starlings proliferate.
Very grateful to Shawn Hymel for his talk from the 2020 Remoticon about machine learning on microcontrollers. Sorry I missed it at inception.
I’d been poring over research papers on sound detection and classification, none of which could adequately explain to me what Mel-frequency cepstral coefficients were.
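For anyone else puzzled by them: Mel-frequency cepstral coefficients are, in essence, a DCT of the log energies of a mel-spaced filterbank applied to a frame's power spectrum. Here is a minimal NumPy sketch of that pipeline for a single audio frame; the function names and parameter defaults (26 filters, 13 coefficients) are illustrative conventions, not code from Edge Impulse or the project.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale, 0 Hz to sr/2."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):               # rising edge
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):              # falling edge
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one frame: power spectrum -> mel filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft
    log_energies = np.log(mel_filterbank(n_filters, n_fft, sr) @ power + 1e-10)
    # DCT-II decorrelates the log filterbank energies into the cepstrum
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_energies
```

In a full front end you would window each frame (e.g. with a Hamming window) and stack the coefficients of consecutive frames into the feature matrix fed to the classifier.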
Although Shawn gave an inspired introduction to the subject, amongst others, it appears that the encouraging and very thoughtful people at Edge Impulse have done all the coding for us.
Their superb UX and Edge Optimized Neural compiler promise to make running ML applications on a wide variety of resource-constrained CPUs and microcontrollers veritable child’s play.
Link typo – this: “http://urlm.co.uk/www.xeno-canto.org” should be this: “https://www.xeno-canto.org/”.
ML ultrasonic bat counter :-) ML hydrophonic whale-song tracker :-) ML seismic land mine :-(