Author with book

Learn All About Writing A Published Technical Book, From Idea To Print

Ever wondered what, exactly, goes into creating a technical book? If you’d like to know the steps that bring a book from idea to publication, [Sara Robinson] tells all about it as she explains what went into co-authoring O’Reilly’s Machine Learning Design Patterns.

Her post was written in 2020, but don’t let that worry you, because her writeup isn’t about the book itself so much as it is about the whole book-writing process, and her experiences in going through it. (By the way, every O’Reilly book has a distinctive animal on the cover, and we learned from [Sara] that choosing the cover animal is a slightly mysterious process, and is not done by the authors.)

It turns out that there are quite a few steps that need to happen — like proposals and approvals — before the real writing even starts. The book writing itself is a process, and like most processes to which one is new, things start out slow and inefficient before they improve.

[Sara] also talks a bit about burnout, and her advice on dealing with it is as insightful as it is practical: begin by communicating honestly how you are feeling to the people involved.

Over the years I’ve learned that people will very rarely guess how you’re feeling and it’s almost always better to tell them […] I decided to tell my co-authors and my manager that I was burnt out. This went better than expected.

There is a lot of code in the book, and it has its own associated GitHub repository should you wish to check some of it out.

By the way, [Sara] celebrated publication by making a custom cake, which you can see near the bottom of her blog post. This comes as no surprise seeing as she has previously managed to combine machine learning with her love of making cakes!

A Soft Thumb-Sized Vision-Based Touch Sensor

A team from the Max Planck Institute for Intelligent Systems in Germany has developed a novel thumb-shaped touch sensor capable of resolving the force of a contact, as well as its direction, over the whole surface of the structure. Intended for dexterous manipulation systems, it is constructed from easily sourced components, so it should scale up to larger assemblies without breaking the bank. The first step is to place a soft and compliant outer skin over a rigid metallic skeleton, which is then illuminated internally using structured light techniques. From there, machine learning can be used to estimate the shear and normal force components of the contact with the skin, over the entire surface, by observing how the internal envelope distorts the structured illumination.

The novelty here is the way they combine photometric stereo processing with other structured light techniques, using only a single camera. The camera image is fed straight into a pre-trained machine learning system (details on this part of the system are unfortunately a bit scarce) which directly outputs an estimate of the contact shape and force distribution, with spatial accuracy reported to be better than 1 mm and force resolution down to 30 millinewtons. By directly estimating the normal and shear force components, the direction of the contact could be resolved to within 5 degrees. The system is so sensitive that it can reportedly detect its own posture by observing the deformation of the skin due to its own weight alone!
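Details of the learning stage are scarce, but the overall shape of such a pipeline is straightforward to sketch. Below is a minimal, hypothetical TensorFlow/Keras illustration of the idea: a small fully-convolutional network mapping a single internal camera frame to a per-pixel force map. The architecture, input size, and three-channel output layout are our assumptions, not details from the paper.

```python
# Minimal, hypothetical sketch of a vision-to-force network (architecture,
# input size, and output layout are assumptions, not the paper's design).
import tensorflow as tf
from tensorflow.keras import layers

def build_force_net(h=240, w=320):
    # One grayscale frame from the internal camera
    inp = layers.Input(shape=(h, w, 1))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Three output channels per pixel: two shear components plus normal force
    out = layers.Conv2D(3, 1, activation=None)(x)
    return tf.keras.Model(inp, out)

model = build_force_net()
# Trained by regression against ground-truth force maps from a calibration rig
model.compile(optimizer="adam", loss="mse")
```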

We’ve not covered all that many optical sensing projects, but here’s one using a linear CIS sensor to turn any TV into a touch screen. And whilst we’re talking about using cameras as sensors, here’s a neat way to use optical fibers to read multiple light-gates with a single camera and OpenCV.


Weather Station Predicts Air Quality

Measuring air quality at any particular location isn’t too complicated. Just a sensor or two and a small microcontroller are generally all that’s needed. Predicting the upcoming air quality is a little more complicated, though, since so many factors determine how safe it will be to breathe the air outside. Luckily, we don’t need to know all of these factors and their complex interactions in order to predict air quality. We can train a computer to do that for us, as [kutluhan_aktar] demonstrates with a machine learning-capable air quality meter.

The build is based around an Arduino Nano 33 BLE which is connected to a small weather station outside. It specifically monitors ozone concentration as a benchmark for overall air quality, but also uses an anemometer and a BMP180 precision pressure and temperature sensor to assist in training the algorithm. The weather data is sent over Bluetooth to a Raspberry Pi which is running TensorFlow. Once the neural network was trained, the model was sent back to the Arduino, which is now capable of using it to make much more accurate predictions of future air quality.
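As a rough illustration of that train-on-the-Pi, run-on-the-Arduino workflow, here is a hedged TensorFlow sketch of the training and conversion steps. The features and labels below are synthetic stand-ins, not the project’s logged weather data.

```python
# Sketch of the train-then-convert workflow; the features and labels below
# are synthetic stand-ins, not the project's logged weather data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Stand-in columns: ozone (ppb), wind speed (m/s), pressure (hPa), temp (C)
X = rng.normal([30.0, 3.0, 1013.0, 20.0], [10.0, 1.5, 5.0, 5.0], (500, 4))
y = rng.integers(0, 3, size=500)  # stand-in classes: good/moderate/unhealthy

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, validation_split=0.2, verbose=0)

# Convert to a TensorFlow Lite flatbuffer that a microcontroller can run
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("air_quality.tflite", "wb").write(converter.convert())
```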

The build goes into quite a bit of detail on setting up the models, training them, and then using them on the Arduino. It’s an impressive build capped off with a fun 3D-printed case that resembles an old windmill. Using machine learning to help predict the weather is starting to become more commonplace as well, as we have seen before with this weather station that can predict rainfall intensity.

People in meeting, with highlights of detected phones and identities

Machine Learning Detects Distracted Politicians

[Dries Depoorter] has a knack for highly technical projects with a solid artistic bent to them, and this piece is no exception. The Flemish Scrollers is a software system that watches live streamed sessions of the Flemish government, and uses Python and machine learning to identify and highlight politicians who pull out phones and start scrolling. The results? Pushed out live on Twitter and Instagram, naturally. The project started back in July 2021, and has been dutifully running ever since, so by now we expect that holding one’s phone where the camera can see it is probably considered a rookie mistake.

This project can also be considered a good example of how to properly handle confidence in results depending on the application. In this case, false negatives (a politician is using a phone, but the software doesn’t detect it) are much more acceptable than false positives (a member gets incorrectly identified, or is wrongly called out for using a mobile device when they are not).
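In practice, that trade-off often boils down to a single confidence threshold on the detector’s output: set it high, and ambiguous detections are silently dropped rather than risked as public accusations. A toy sketch of the idea, with made-up detection values rather than anything from the project:

```python
# Toy sketch: a high confidence threshold trades false positives for
# false negatives (made-up detection values, not the project's code).
CONF_THRESHOLD = 0.90  # only call someone out when the model is very sure

detections = [
    {"member": "A", "label": "phone", "confidence": 0.95},  # flagged
    {"member": "B", "label": "phone", "confidence": 0.55},  # dropped
]

flagged = [d for d in detections
           if d["label"] == "phone" and d["confidence"] >= CONF_THRESHOLD]
print(flagged)  # only the unambiguous detection survives
```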

Keras, an open-source machine learning library, is used for the object detection and facial recognition (the GitHub repository for Keras is here). We’ve seen it used in everything from bat detection to automatic trash sorting, so if you’re interested in machine learning applications, give it a peek.

Flow chart from the paper “Assessment of the Feasibility of Using Noninvasive Wearable Biometric Monitoring Sensors to Detect Influenza and the Common Cold Before Symptom Onset”

Wearables Can Detect The Flu? Well…Maybe…

Surprisingly, there are no pre-symptomatic screening methods for the common cold or the flu, allowing these viruses to spread unbeknownst to the infected. However, if we could detect when infected people will get sick before they even show symptoms, we could do a lot more to contain the flu or common cold and possibly save lives. That’s what the researchers behind this highly collaborative study set out to accomplish using data from wearable devices.

Participants of the study were given an E4 wristband, a research-grade wearable that measures heart rate, skin temperature, electrodermal activity, and movement. They then wore the E4 before and after inoculation with either influenza or rhinovirus. The researchers used 25 binary random forest classification models to predict whether or not participants were infected based on the physiological data reported by the E4 sensor. Their results are pretty lengthy, so I’ll only highlight a few major discussion points. In one particular analysis, they found that at 36 hours after inoculation their model had an accuracy of 89%, with 100% sensitivity and 67% specificity. Those aren’t exactly world-shaking numbers, but the researchers thought them pretty promising nonetheless.
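For readers who want to poke at that style of evaluation themselves, here is a minimal scikit-learn sketch of a binary random forest scored with sensitivity and specificity. The data is synthetic stand-in data, not the study’s recordings:

```python
# Sketch of the evaluation idea with synthetic stand-in data; the study's
# actual features were heart rate, skin temperature, EDA, and movement.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # stand-in physiological features
y = rng.integers(0, 2, size=200)  # 1 = infected, 0 = not infected

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)  # fraction of infected participants caught
specificity = tn / (tn + fp)  # fraction of healthy participants cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```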

One major consideration for the accuracy of their model is the quality of the data reported by the wearable. Namely, if the data reported by the wearable isn’t reliable, no model derived from it can be trustworthy either. We’ve discussed those points here at Hackaday before. Another major consideration is the lack of a control group. You definitely need to know if the model is simply tagging everyone as “infected” (which specificity does give us an idea of, to be fair), and a control group of participants who had not been inoculated with either virus would be one possible way to answer that question. Fortunately, the researchers admit this limitation of their work, and we hope they will remedy it in future studies.

Studies like this are becoming increasingly common and the ongoing pandemic has motivated these physiological monitoring studies even further. It seems like wearables are here to stay as the academic research involving these devices seems to intensify each day. We’d love to see what kind of data could be obtained by a community-developed device, as we’ve seen some pretty impressive DIY biosensor projects over the years.

E4 Empatica device, worn on the arm, for measuring location, temperature, skin conductance, sleep, etc.

Wearable Sensor For Detecting Substance Use Disorder

Oftentimes, the feature set for our typical fitness-focused wearables feels a bit empty. Push notifications on your wrist? OK, fine. Counting your steps? Sure, why not. But how useful are those capabilities anyway? Well, what if wearables could be used for a more dignified purpose like helping people in recovery from substance use disorder (SUD)? That’s what the researchers at the University of Massachusetts Medical School aimed to find out.

In their paper, they used a wrist-worn wearable to measure locomotion, heart rate, skin temperature, and electrodermal activity of 38 SUD patients during their everyday lives. They wanted to detect periods of stress and craving, as these parameters are possible triggers of substance use. Furthermore, they had patients self-report times during the day when they felt stressed or had cravings, and used those reports to calibrate their model.

They tried a number of classification models such as decision trees, discriminant analysis, and logistic regression, but found the most success using support vector machines, though they didn’t discuss why they thought that was the case. In the end, they could detect stress vs. non-stress with an accuracy of 81.3% and craving vs. no-craving with an accuracy of 82.1%. Not amazing accuracy, but given the dire need for medical advancements for SUD, it’s something to keep an eye on. Interestingly enough, they found that locomotion data alone had an accuracy of approximately 75% when it came to indicating stress and cravings.
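As an illustration of what that kind of model bake-off looks like, here’s a scikit-learn sketch comparing the classifier families the paper mentions on the same cross-validation folds, using synthetic stand-in data rather than the patients’ recordings:

```python
# Sketch comparing the classifier families the paper mentions on identical
# folds; the data here is synthetic, not the patients' recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))     # stand-ins: locomotion, HR, temp, EDA
y = rng.integers(0, 2, size=300)  # 1 = stressed, 0 = not stressed

for name, clf in [("SVM", SVC()),
                  ("decision tree", DecisionTreeClassifier()),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("logistic regression", LogisticRegression())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```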

Much ado has been made about the insufficient accuracy of wearable devices for medical diagnoses, particularly those that measure activity and heart rate. Maybe their model would perform better if it were trained on real-time measurements of cortisol, a more accurate physiological measure of stress.

Finally, what really stood out to us about this study was how willing patients were to use a wearable in their treatment strategy. It’s sad that society oftentimes has a very negative perception of SUD patients, leading to fewer treatment options for patients. But hopefully, with technological advancements such as this, we’re one step closer to a more equitable future of healthcare.

Accelerometer, OLED, and PocketBeagle create a gesture-controlled calculator

The Calculator Charm: Calculatorium Leviosa!

Have you ever tried waving your hand around like a magic wand and summoning a calculator? We would guess not, since you’d probably look a little silly doing so. That is, unless you had [Andrei’s] cool gesture-controlled calculator. [Andrei] thought it would be helpful to use a calculator in his research lab without having to take his gloves off, and the results are pretty cool.

His hardware consists of a PocketBeagle, an OLED, and an MPU6050 inertial measurement unit for capturing his hand motions using an accelerometer and gyroscope. The hardware is pretty straightforward, so the beauty of this project lies in its machine learning implementation.

[Andrei] first captured a few example datasets to train his algorithm by recreating the hand gestures for each number, 0-9, and recording the resulting accelerometer and gyroscope outputs. He then processed the data with a wavelet transform. The intent of the transform was twofold. First, it allowed him to reduce the number of samples in his datasets while preserving the shape of the accelerometer and gyroscope signals, the key features in the machine learning classification. Second, the wavelet transform produced both approximation and detail coefficients, which could both be fed into the algorithm, increasing the number of features available for classification.
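A single level of the discrete wavelet transform shows both effects at once: the output is roughly half the length of the input, and it arrives in two pieces, approximation and detail coefficients. A minimal sketch using PyWavelets, which is our library choice for illustration since the write-up doesn’t name the one [Andrei] used:

```python
# Sketch with PyWavelets (our library choice for illustration): one DWT
# level roughly halves the sample count and yields two coefficient sets.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 256))  # stand-in accelerometer axis
approx, detail = pywt.dwt(signal, "db4")         # Daubechies-4 wavelet

print(len(signal), len(approx), len(detail))     # 256 in, ~131 per output
features = np.concatenate([approx, detail])      # both sets become features
```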

Because he had a small dataset, he used the Stratified Shuffle Split technique instead of the more common train-test split, which is generally better suited to larger datasets. The Stratified Shuffle Split ensured that each gesture was represented in roughly equal proportion in both the training and test sets. He was also very conscious of optimizing his model to run on a portable processing unit like the PocketBeagle. He spent some time optimizing the parameters of his algorithm, and ultimately converted his model to a TensorFlow Lite model using the built-in “TFLiteConverter” function within TensorFlow.
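Both steps are only a few lines each. Here’s a hedged sketch of the stratified split and the TensorFlow Lite conversion, with stand-in gesture features and a placeholder network rather than [Andrei]’s actual model:

```python
# Sketch of both steps with stand-in data and a placeholder network,
# not [Andrei]'s actual model.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedShuffleSplit

X = np.random.rand(100, 64)       # stand-in wavelet features per sample
y = np.repeat(np.arange(10), 10)  # ten gestures, digits 0-9

# Each split keeps all ten gesture classes in roughly equal proportion
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(sss.split(X, y))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X[train_idx], y[train_idx], epochs=20, verbose=0)

# The conversion step mentioned above
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("gesture_model.tflite", "wb").write(converter.convert())
```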

Finally, in true open-source fashion, all his code is available on GitHub, so feel free to give it a go yourself. Calculatorium Leviosa!
