A putter with an Arduino attached to its shaft

This Golf Club Uses Machine Learning To Perfect Your Swing

Golf can be a frustrating game to learn: it takes countless hours of practice to get anywhere near the perfect swing. While some might be lucky enough to have a pro handy every time they’re on the driving range or putting green, most of us will have to get by with watching the ball’s motion and using that to figure out what we’re doing wrong.

Luckily, technology is here to help: [Nick Bild]’s Golf Ace is a putter that uses machine learning to analyze your swing. An accelerometer mounted on the shaft captures the exact motion of the club, and a machine learning algorithm checks how closely that motion matches a professional’s swing. An LED mounted on the club’s head turns green if your stroke was good, and red if it wasn’t. All of this is driven by an Arduino Nano 33 IoT and powered by a lithium-ion battery.
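
While the actual project runs its model on the Arduino Nano 33 IoT, the classification idea is easy to sketch off-device. Here's a minimal, hedged Python illustration of training a good/bad-stroke classifier on windows of accelerometer samples; the data files, their layout, and the choice of logistic regression are assumptions made for illustration, not details from [Nick]'s build.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one stroke, a flattened window of
# (ax, ay, az) accelerometer samples; labels mark strokes that match the pro's.
X = np.load("strokes.npy")   # shape: (n_strokes, window_len * 3)
y = np.load("labels.npy")    # 1 = pro-like stroke, 0 = not

clf = LogisticRegression(max_iter=1000).fit(X, y)

def judge_stroke(window: np.ndarray) -> bool:
    """Return True (green LED) if the stroke looks like the pro's."""
    return bool(clf.predict(window.reshape(1, -1))[0])
```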

The Golf Ace doesn’t tell you which part of your swing to improve, so you’d still need some external instruction to get closer to the ideal form; [Nick]’s suggestion is to bundle an instructor’s swing data with a book or video that explains the important points. That certainly looks like a reasonable approach to us, and we can also imagine a similar setup being used on woods and irons, although that would require a more robust mounting system.

In any case, the Golf Ace could very well be a useful addition to the many gadgets that try to improve your game. But in case you still end up frustrated, you might want to try this automated robotic golf club.


A man performing push-ups in front of a PC

Machine Learning Helps You Get In Shape While Working A Desk Job

Humans weren’t made to sit in front of a computer all day, yet for many of us that’s how we spend a large part of our lives. Of course we all know that it’s important to get up and move around every now and then to stretch our muscles and get our blood flowing, but it’s easy to forget if you’re working towards a deadline. [Victor Sonck] thought he needed some reminders — as well as some not-so-gentle nudging — to get into the habit of doing a quick workout a few times a day.

To this end, he designed a piece of software that would lock his computer’s screen and only unlock it after he performed five push-ups. Locking the screen on his Linux box was as easy as sending a command through the network, but recognizing push-ups was a harder task, for which [Victor] decided to employ machine learning. A Raspberry Pi with a webcam attached could do the trick, but the Pi’s limited CPU power might prove insufficient for processing lots of raw image data.
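
[Victor]'s exact lock command isn't spelled out here, but on a systemd-based distribution `loginctl` can lock and unlock a session, and that's easy to trigger over SSH. A minimal sketch, with the hostname and session ID as placeholders:

```python
import subprocess

def set_locked(locked: bool, host: str = "workstation", session: str = "2") -> None:
    """Lock or unlock a desktop session on a remote Linux machine via SSH."""
    action = "lock-session" if locked else "unlock-session"
    # The session ID is machine-specific; `loginctl list-sessions` shows it.
    subprocess.run(["ssh", host, "loginctl", action, session], check=True)

set_locked(True)    # stays locked until five push-ups are counted...
# set_locked(False)
```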

[Victor] therefore decided on using a Luxonis OAK-1, which is a 4K camera with a built-in machine-learning processor. It can run various kinds of image recognition systems, including BlazePose, a pre-trained model that can recognize a person’s pose from an image. The OAK-1 runs this model and sends the resulting set of coordinates, which describe the position of a person’s head, torso, and limbs, to the Raspberry Pi over a USB interface. A second machine-learning model running on the Pi then analyzes this dataset to recognize push-ups.
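
That second model isn't published in the excerpt here, so as a hedged stand-in, here's the simplest version of the idea: with shoulder, elbow, and wrist coordinates from BlazePose, you can track the elbow angle and count a push-up each time it dips below a "down" threshold and returns above an "up" threshold. The thresholds are illustrative guesses.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist) -> float:
    """Angle at the elbow, in degrees, from three 2-D keypoints."""
    a = np.asarray(shoulder, float) - np.asarray(elbow, float)
    b = np.asarray(wrist, float) - np.asarray(elbow, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def count_pushups(frames, down_deg=90.0, up_deg=160.0) -> int:
    """frames: iterable of (shoulder, elbow, wrist) keypoints per video frame."""
    count, went_down = 0, False
    for shoulder, elbow, wrist in frames:
        angle = elbow_angle(shoulder, elbow, wrist)
        if angle < down_deg:
            went_down = True
        elif went_down and angle > up_deg:   # back up: that's one full rep
            count, went_down = count + 1, False
    return count
```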

[Victor]’s video (embedded below) is an entertaining introduction to the world of machine-learning systems for video processing, as well as a good hands-on example of a project that results in a useful tool. If you’re interested in learning more about machine learning on small platforms, check out this 2020 Remoticon talk on machine learning on microcontrollers, or this 2019 Supercon talk about implementing machine vision on a Raspberry Pi.


Camera held in hand

Review: Vizy Linux-Powered AI Camera

Vizy is a Linux-based “AI camera” based on the Raspberry Pi 4 that uses machine learning and machine vision to pull off some neat tricks, and has a design centered around hackability. I found it ridiculously simple to get up and running, and it was just as easy to make changes of my own and start getting ideas.

Person and cat with machine-generated tags identifying them
Out of the box, Vizy is only a couple lines of Python away from being a functional Cat Detector project.

I was running pre-installed examples written in Python within minutes, and editing that very same code in about 30 seconds more. Even better, I did it all without installing a development environment, or even leaving my web browser, for that matter. I have to say, it made for a very hacker-friendly experience.
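
To give a feel for why the caption above says “a couple lines of Python”: the examples hand you a stream of labeled detections, and a cat detector is just a filter on them. This sketch is deliberately generic; `detections_from` is a hypothetical stand-in for whatever call Vizy’s example code actually makes, which I won’t misquote from memory.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple    # (x, y, w, h)
    score: float

def detections_from(frame) -> list[Detection]:
    """Hypothetical stand-in for Vizy's example detection pipeline."""
    raise NotImplementedError

def cats_in(frame) -> list[Detection]:
    # The whole "Cat Detector" is a one-line filter on the detections.
    return [d for d in detections_from(frame) if d.label == "cat"]
```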

Vizy comes from the folks at Charmed Labs; this isn’t their first stab at smart cameras, and it shows. They also created the Pixy and Pixy 2 cameras, of which I happen to own several. I have always devoured anything that makes machine vision more accessible and easier to integrate into projects, so when Charmed Labs kindly offered to send me one of their newest devices, I was eager to see what was new.

I found Vizy to be a highly polished platform with a number of truly useful hardware and software features, and a focus on accessibility and ease of use that I really hope to see more of in future embedded products. Let’s take a closer look.


Using Statistics Instead Of Sensors

Statistics often gets a bad rap in mathematics circles for being less than concrete at best and downright misleading at worst. While these sentiments might ring true for things like political polling, they hide the fact that statistical methods can be put to good use in engineering systems with fantastic results. [Mark Smith], for example, has been working on an espresso machine which can make the perfect shot of coffee, and turned to one of the tools in the statistics toolbox to solve a problem rather than adding another sensor to his complex coffee-brewing machine.

To make espresso, hot water is forced under pressure through finely ground coffee. [Mark] found that his espresso machine was often pouring too much or too little coffee, and to improve his machine’s accuracy in this area he turned to the linear regression parameter R², also known as the coefficient of determination, which measures how well a fitted model explains the variation in a data set. By using a machine learning algorithm tuned to this value, a computer can more easily tell when the coffee begins pouring out of the portafilter and into the espresso cup based on the pressure and water flow in the machine itself, rather than relying on some other input such as the weight of the cup.
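
To make the idea concrete, here is one hedged way such a detector could look: fit a line to a sliding window of flow samples and watch the window’s R², which climbs toward 1 once the flow settles into a steady, predictable pour. The window size and threshold below are invented for illustration and aren’t from [Mark]’s code.

```python
import numpy as np

def rolling_r2(t, flow, window=20):
    """R^2 of a straight-line fit over a sliding window of flow samples."""
    t, flow = np.asarray(t, float), np.asarray(flow, float)
    scores = np.full(len(flow), np.nan)
    for i in range(window, len(flow) + 1):
        x, y = t[i - window:i], flow[i - window:i]
        slope, intercept = np.polyfit(x, y, 1)
        ss_res = np.sum((y - (slope * x + intercept)) ** 2)   # residual variation
        ss_tot = np.sum((y - y.mean()) ** 2)                  # total variation
        scores[i - 1] = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    return scores

# Declare "pour started" once the fit becomes highly predictable, e.g.:
# started_idx = np.argmax(rolling_r2(t, flow) > 0.98)
```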

We have seen in the past how seriously [Mark] takes his coffee-making, and this is another step in a series of improvements he has made to his equipment. In this iteration, he has additionally produced a simulation in JupyterLab to better assist him in modeling the system and making even more accurate predictions. It’s quite a bit more effort than adding sensors, but since his espresso machine already included quite a bit of computing power it’s not too big a leap for him to make.

AI-Generated Sleep Podcast Urges You To Imagine Pleasant Nonsense

[Stavros Korokithakis] finds the experience of falling asleep to fairy tales soothing, and this has resulted in a fascinating project that indulges this desire by using machine learning to generate mildly incoherent fairy tales and read them aloud. The result is a fantastic sort of automated, machine-generated audible sleep aid. Even the logo is machine-generated!

The Deep Dreams Podcast is entirely machine-generated, including the logo.

The project leverages the natural language generation abilities of OpenAI’s GPT-3 to create fairytale-style content that is just coherent enough to sound natural, but not quite coherent enough to make a sensible plotline. The quasi-lucid, dreamlike result is perfect for urging listeners to imagine pleasant nonsense (thanks to Nathan W Pyle for that term) as they drift off to sleep.
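
The project’s actual prompts and parameters aren’t given here, but a completions-style GPT-3 call of that era looked roughly like the sketch below; current OpenAI SDKs use a different interface, and a high temperature is one plausible knob for that “just incoherent enough” quality.

```python
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.Completion.create(
    engine="davinci",      # a base GPT-3 model of the era
    prompt="Tell me a gentle, meandering fairy tale about a sleepy forest.\n\n",
    max_tokens=600,
    temperature=0.9,       # higher temperature => dreamier, less coherent text
)
print(response.choices[0].text)
```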

We especially loved reading about the methods and challenges [Stavros] encountered while creating this project. For example, he talks about how there is more to a good-sounding narration than just pointing a text-to-speech engine at a wall of text and mashing “GO”. A good episode has things like strategic pauses, background music, and audio fades. That’s where pydub — a Python library for manipulating audio — came in handy. As for the speech itself, text-to-speech quality is far beyond what it was even just a few years ago (and certainly leaps beyond the machine-generated speech of the ’80s), but it still took some work to settle on a voice that best suited the content, and the project gradually saw improvement.
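
As a taste of what that assembly looks like, here’s a minimal pydub sketch: a strategic pause before the narration, a music bed ducked under the voice, and fades at both ends. The file names and levels are placeholders, not details from the podcast’s actual pipeline.

```python
from pydub import AudioSegment

story = AudioSegment.from_file("narration.wav")
music = AudioSegment.from_file("music.mp3") - 18      # duck the bed 18 dB below the voice

episode = AudioSegment.silent(duration=1500) + story  # a strategic opening pause

# Repeat the music to cover the episode, trim it, then mix and fade.
bed = (music * (len(episode) // len(music) + 1))[: len(episode)]
episode = episode.overlay(bed).fade_in(2000).fade_out(4000)

episode.export("episode.mp3", format="mp3", bitrate="128k")
```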

Deep Dreams Podcast has a GitLab repository if you want to see the code that drives it all, and you can go to the podcast itself to give it a listen.

Author with book

Learn All About Writing A Published Technical Book, From Idea To Print

Ever wondered what, exactly, goes into creating a technical book? If you’d like to know the steps that bring a book from idea to publication, [Sara Robinson] tells all about it as she explains what went into co-authoring O’Reilly’s Machine Learning Design Patterns.

Her post was written in 2020, but don’t let that worry you, because her writeup isn’t about the book itself so much as it is about the whole book-writing process, and her experiences in going through it. (By the way, every O’Reilly book has a distinctive animal on the cover, and we learned from [Sara] that choosing the cover animal is a slightly mysterious process, and is not done by the authors.)

It turns out that there are quite a few steps that need to happen — like proposals and approvals — before the real writing even starts. The writing itself is also a process, and like most processes one is new to, it starts out slow and inefficient before improving.

[Sara] also talks a bit about burnout, and her advice on dealing with it is as insightful as it is practical: begin by communicating honestly how you are feeling to the people involved.

Over the years I’ve learned that people will very rarely guess how you’re feeling and it’s almost always better to tell them […] I decided to tell my co-authors and my manager that I was burnt out. This went better than expected.

There is a lot of code in the book, and it has its own associated GitHub repository should you wish to check some of it out.

By the way, [Sara] celebrated publication by making a custom cake, which you can see near the bottom of her blog post. This comes as no surprise seeing as she has previously managed to combine machine learning with her love of making cakes!

A Soft Thumb-Sized Vision-Based Touch Sensor

A team from the Max Planck Institute for Intelligent Systems in Germany has developed a novel thumb-shaped touch sensor capable of resolving the force of a contact, as well as its direction, over the whole surface of the structure. Intended for dexterous manipulation systems, it is constructed from easily sourced components, so it should scale up to larger assemblies without breaking the bank. The first step is to place a soft and compliant outer skin over a rigid metallic skeleton, which is then illuminated internally using structured light techniques. From there, machine learning can be used to estimate the shear and normal force components of the contact with the skin, over the entire surface, by observing how the internal envelope distorts the structured illumination.

The novelty here is the way they combine photometric stereo processing with other structured light techniques using only a single camera. The camera image is fed straight into a pre-trained machine learning system (details on this part of the system are unfortunately a bit scarce) which directly outputs an estimate of the contact shape and force distribution, with spatial accuracy reported to be better than 1 mm and force resolution down to 30 millinewtons. By directly estimating the normal and shear force components, the direction of the contact could be resolved to within 5 degrees. The system is so sensitive that it can reportedly detect its own posture by observing the deformation of the skin due to its own weight alone!
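
Since the learning details are scarce, as noted above, here is only a generic, hedged sketch of the image-to-force-map regression idea in PyTorch: a small encoder-decoder that maps a camera frame of the illuminated skin to a three-channel (shear-x, shear-y, normal) force map. The architecture and sizes are illustrative, not the team’s network.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Toy encoder-decoder: RGB frame in, 3-channel force map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # shear-x, shear-y, normal
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training would regress against ground-truth force maps, e.g.:
# loss = nn.functional.mse_loss(ForceMapNet()(frames), force_maps)
```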

We’ve not covered all that many optical sensing projects, but here’s one using a linear CIS sensor to turn any TV into a touch screen. And whilst we’re talking about using cameras as sensors, here’s a neat way to use optical fibers to read multiple light-gates with a single camera and OpenCV.
