People in meeting, with highlights of detected phones and identities

Machine Learning Detects Distracted Politicians

[Dries Depoorter] has a knack for highly technical projects with a solid artistic bent to them, and this piece is no exception. The Flemish Scrollers is a software system that watches live streamed sessions of the Flemish government, and uses Python and machine learning to identify and highlight politicians who pull out phones and start scrolling. The results? Pushed out live on Twitter and Instagram, naturally. The project started back in July 2021, and has been dutifully running ever since, so by now we expect that holding one’s phone where the camera can see it is probably considered a rookie mistake.

This project is also a good example of how to handle confidence in results appropriately for the application. In this case, false negatives (a politician is using a phone, but the software doesn’t detect it) are much more acceptable than false positives (a member is incorrectly identified, or is wrongly called out for using a mobile device when they are not).
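In code, that bias usually comes down to a deliberately strict confidence threshold. Here is a minimal sketch of the idea in Python; the detection format and threshold value are illustrative assumptions, not taken from [Dries]’s actual code:

```python
# Hypothetical detector output: (label, confidence) pairs for one video frame.
detections = [("phone", 0.97), ("phone", 0.62), ("laptop", 0.88)]

# A deliberately strict threshold: better to miss a phone (false negative)
# than to wrongly call out a politician (false positive).
CONFIDENCE_THRESHOLD = 0.90

confident_phones = [
    (label, score)
    for label, score in detections
    if label == "phone" and score >= CONFIDENCE_THRESHOLD
]

if confident_phones:
    print(f"Flagging {len(confident_phones)} high-confidence phone detection(s)")
else:
    print("Not confident enough -- better to stay silent than risk a false call-out")
```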

Keras, an open-source software library, is used for the object detection and facial recognition (the GitHub repository for Keras is here). We’ve seen it used in everything from bat detection to automatic trash sorting, so if you’re interested in machine learning applications, give it a peek.
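To give a flavour of what the face-recognition half of such a pipeline can look like, here is a generic sketch, not the project’s code, of matching a face crop against stored embeddings of known members using a Keras embedding network; the model file, member database, and similarity threshold are all hypothetical:

```python
import numpy as np
from tensorflow import keras

# Hypothetical pre-trained face-embedding network (FaceNet-style).
embedding_model = keras.models.load_model("face_embeddings.h5")

# Hypothetical database: one reference embedding per known politician.
known_members = {name: np.load(f"{name}.npy") for name in ["member_a", "member_b"]}

def identify(face_crop, threshold=0.7):
    """Return the best-matching member for a 160x160 RGB face crop, or None."""
    x = face_crop.astype("float32")[np.newaxis] / 255.0        # add batch dim, scale to [0, 1]
    emb = embedding_model.predict(x, verbose=0)[0]
    emb /= np.linalg.norm(emb)                                  # unit-normalise for cosine similarity

    best_name, best_score = None, threshold
    for name, ref in known_members.items():
        score = float(np.dot(emb, ref / np.linalg.norm(ref)))   # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```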

Open-Source GUI Tool For OpenCV And Deep Learning

AI and Deep Learning for computer vision projects have come to the masses. This can be attributed partly to the community projects that help ease the pain for newbies. [Abhishek] contributes one such project called Monk AI, which comes with a GUI for transfer learning.

Monk AI is essentially a wrapper for computer vision and deep learning experiments. Written in Python, it lets users fine-tune deep neural networks using transfer learning. Out of the box it supports Keras and PyTorch, and with just a few lines of code you can get started with your very first AI experiment.
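For the curious, the recipe Monk wraps up looks roughly like the plain Keras sketch below (generic Keras, not Monk’s own API): take a backbone pre-trained on ImageNet, freeze it, and train a small new classification head on your own images. The dataset folder and the number of classes are assumptions for illustration.

```python
import tensorflow as tf

# Pre-trained backbone, with its ImageNet classification head removed.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; only the new head will learn

# New classification head for our own classes (here: an assumed 4-class problem).
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "dataset/train" is a hypothetical folder of images sorted into one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory("dataset/train",
                                                       image_size=(224, 224))
model.fit(train_ds, epochs=5)
```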

[Abhishek] also has an Object Detection wrapper (GitHub) with some useful examples, as well as a Monk GUI (GitHub) tool that looks similar to the tools available in commercial packages for running training and inference experiments.

The documentation is a work in progress, though it seems like an excellent concept to build on. We need more tools like these to help more people get started with Deep Learning. Hardware such as the Nvidia Jetson Nano and Google Coral is affordable and facilitates learning and experimentation.

How To Run ML Applications On Particle Hardware

With the release of TensorFlow Lite for Microcontrollers at Google I/O 2019, the accessible machine learning library is no longer limited to hardware with GPUs or beefy processors. You can now run machine learning algorithms on microcontrollers much more easily, improving on-board inference and computation.

[Brandon Satrom] published a demo on how to run TFLite on Particle devices (tested on the Photon, Argon, Boron, and Xenon), making it possible to run predictions on live data with pre-trained models. While some of the simpler computation that occurs on MCUs only requires manipulating data with existing equations (mapping analog inputs to a percentage range, for instance), many applications require making sense of large, complex sets of sensor data gathered in real time, and it’s often difficult to get accurate results there from a simple equation.

The current method is to train ML models on specialty hardware, deploy the models on cloud infrastructure, and backhaul sensor data to the cloud for inference. By running the inference and decision-making on-board, MCUs can simply take action without backhauling any data.

He starts off by constructing a simple TFLite model for MCU execution, using mean squared error for the loss and stochastic gradient descent for the optimization. After training the model on sample data, you save it and convert it to a C array for the MCU. On the MCU, you load the model, the TFLite libraries, and an operations resolver, and instantiate an interpreter and the tensors. From there you invoke the model on the MCU and see your results!
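Condensed down, the desktop side of that pipeline looks roughly like the sketch below. The single-neuron model and the toy training data (learning y = 2x - 1) are stand-ins for illustration; only the loss, the optimizer, and the save, convert, and dump-to-C-array flow follow the write-up.

```python
import numpy as np
import tensorflow as tf

# Toy training data (placeholder): learn the linear relationship y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)

# One-neuron model trained with SGD and mean-squared-error loss, as in the write-up.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=200, verbose=0)

# Convert the trained model to a TensorFlow Lite flatbuffer...
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# ...and emit it as a C array the MCU firmware can #include.
hex_bytes = ", ".join(f"0x{b:02x}" for b in tflite_model)
with open("model_data.h", "w") as f:
    f.write(f"const unsigned char g_model[] = {{ {hex_bytes} }};\n")
    f.write(f"const int g_model_len = {len(tflite_model)};\n")
```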

[Thanks dcschelt for the tip!]

Automate Sorting Your Trash With Some Healthy Machine Learning

Sorting trash into the right categories is pretty much a daily bother. Who hasn’t stood there in front of the two, three, five or more bins (depending on your area and country), pondering which bin an item should go into? [Alvaro Ferrán Cifuentes]’s SeparAItor project is a proof-of-concept robot that uses a robotic sorting tray and a camera setup to identify and sort whatever trash is placed in the tray.

The hardware consists of a sorting tray mounted to the top of a Bluetooth-connected pan and tilt platform. The platform communicates with the rest of the system, which uses a camera and OpenCV to obtain the image data, and a Keras-based back-end which implements a deep learning neural network in Python.

The system was trained on self-made photos of the items that would need to be sorted, as these most closely match real-life conditions. After getting good enough recognition results, the system was put together, with a motion detection feature added so it responds when a new item is tossed into the tray. The system will then attempt to identify the item, categorize it, and instruct the platform to rotate to the correct orientation before tilting and dropping it into the appropriate bin. See the embedded video after the break for the system in action.
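Stripped of the robotics, the core of such a loop can be sketched in a few lines of Python: OpenCV frame differencing as the motion trigger, a Keras classifier for the sort decision, and a command sent off to the platform. The model file, class list, motion threshold, and serial port below are all assumptions, not [Alvaro]’s actual code.

```python
import cv2
import numpy as np
import serial
from tensorflow import keras

model = keras.models.load_model("separaitor.h5")      # hypothetical trained classifier
classes = ["paper", "plastic", "glass", "organic"]     # assumed bin categories
platform = serial.Serial("/dev/rfcomm0", 9600)         # assumed Bluetooth serial link to the tray

cap = cv2.VideoCapture(0)
_, previous = cap.read()

while True:
    _, frame = cap.read()
    # Simple motion detection: count pixels that changed a lot between consecutive frames.
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY))
    previous = frame
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])

    if changed > 5000:  # assumed threshold: something was tossed into the tray
        # Classify the item and tell the platform which bin to dump it into.
        x = cv2.resize(frame, (224, 224)).astype("float32")[np.newaxis] / 255.0
        bin_index = int(np.argmax(model.predict(x, verbose=0)))
        platform.write(f"{bin_index}\n".encode())
        print("Detected item, sorting into:", classes[bin_index])
```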

Believe it or not, this isn’t the first trash-sorting robot to grace the pages of Hackaday. Concepts like these, which rely on automation and machine vision, could potentially one day be deployed on a large scale to help reduce how much recyclable material ends up in landfills. Continue reading “Automate Sorting Your Trash With Some Healthy Machine Learning”

Leigh Johnson’s Guide To Machine Vision On Raspberry Pi

We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)

The talk is a very concise 13 minutes, so Leigh flies through definitions of basic terms before quickly naming TensorFlow and Keras as the tools she used. The time she saved there was spent explaining what convolutional neural networks are and how they work, just enough to prepare the audience. But all of that is really just background; the meat of the talk is the self-contained examples that Leigh has put together and made available online. I love to see that, since it means you can go beyond just watching and try it out for yourself. Continue reading “Leigh Johnson’s Guide To Machine Vision On Raspberry Pi”

We Should Stop Here, It’s Bat Country!

[Roland Meertens] has a bat detector, or rather, he has a device that can record ultrasound – the type of sound that bats use to echolocate. What he wants is a bat detector. When he discovered bats living behind his house, he set to work creating a program that would use his recorder to detect when bats were around.

[Roland]’s workflow consists of breaking up a recording from his backyard into one-second clips, loading them into a Python program, and running some machine learning code to determine whether each clip contains a bat call or not, which in turn gives him a count of the bats flying around. He uses several Python libraries to do this, including TensorFlow and LibROSA.

The Python code breaks each one-second clip into twenty-two parts. For each part, he determines the max, min, mean, standard deviation, and max-min of the sample; if multiple parts of the signal have certain features (such as a high standard deviation), then the software has detected a bat call. Armed with this, [Roland] moved on to machine learning so that he could offload the work of detecting the bats. Again, he turned to Python and the Keras library.
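That feature-extraction step can be sketched as below; the file handling and segment count follow the description above, but the exact details of [Roland]’s code may differ.

```python
import numpy as np
import librosa

def clip_features(path, n_parts=22):
    """Per-segment statistics for a one-second ultrasound clip."""
    samples, _sr = librosa.load(path, sr=None)   # keep the recorder's native sample rate
    features = []
    for part in np.array_split(samples, n_parts):
        features.extend([part.max(), part.min(), part.mean(),
                         part.std(), part.max() - part.min()])
    return np.array(features)                     # 22 parts x 5 statistics = 110 features

# Hypothetical usage: build a feature matrix for a folder of clips, then train a
# small Keras dense network on it (labels would come from hand-annotated clips).
# X = np.stack([clip_features(f) for f in clip_paths])
```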

With a 95% success rate, [Roland] now has a bat detector! One that works pretty well, too. For more on detecting bats and machine learning, check out the bat detector in this list of ultrasonic projects, as well as this IDE for working with TensorFlow and machine learning.