Gesture-Detecting Macro Keyboard Knows What You Want

[jakkra] bought a couple of capacitive touchpads from a Kickstarter a few years ago and recently got around to using them in a project. And what a project it is: this super macro pad combines two touchpads with a 6-pack of regular switches for a deluxe gesture-sensing input device.

Inside is an ESP32 running TensorFlow Lite to read in the gestures from the two touchpads. The pad at the top is a volume slider, and the square touchpad is the main input, used in conjunction with the buttons to run AutoHotKey scripts within certain programs. [jakkra] can easily run git commands and more with a handful of simple gestures, and they all seem like natural choices to us: > for the next media track, one stroke to push the current branch and another to fetch and pull it, s for git status, and l for git log. The one that sounds really useful to us: draw a C to get a notification that lists all the COM ports. One of the switches is dedicated to Bluetooth pairing and navigating menus on the OLED screen.
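The firmware itself runs TensorFlow Lite for Microcontrollers in C++ on the ESP32, but the idea is easy to sketch on a desktop: resample a recorded touchpad stroke to a fixed number of points and hand it to a trained TFLite gesture model. The model file, label order, and input shape below are assumptions for illustration, not taken from [jakkra]’s code.

# Host-side sketch only: resample a variable-length (x, y) stroke and classify
# it with a TensorFlow Lite gesture model. Model path, label order, and the
# (1, 32, 2) input shape are hypothetical.
import numpy as np
import tensorflow as tf

LABELS = [">", "push", "pull", "s", "l", "C"]        # hypothetical label order

def resample_stroke(points, n=32):
    """Linearly resample a stroke to n evenly spaced (x, y) points."""
    points = np.asarray(points, dtype=np.float32)
    t = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n)
    stroke = np.stack([np.interp(t_new, t, points[:, 0]),
                       np.interp(t_new, t, points[:, 1])], axis=1)
    return stroke.astype(np.float32)

def classify(points, model_path="gesture_model.tflite"):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    stroke = resample_stroke(points)[np.newaxis, ...]  # shape (1, 32, 2)
    interpreter.set_tensor(inp["index"], stroke)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))], float(scores.max())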

We love the combination of inputs here and think this looks great, especially with the double touchpad design. Be sure to check out the gesture demo gif after the break.

Gesture input seems well-suited to those who compute on the go, and a gesture glove feels like the perfect fit.

Continue reading “Gesture-Detecting Macro Keyboard Knows What You Want”

Remoticon Video: How To Use Machine Learning With Microcontrollers

Going from a microcontroller blinking an LED to one that blinks the LED using voice commands, based on a data set you trained on a neural network, is a “now draw the rest of the owl” problem. Lucky for us, Shawn Hymel walks us through the entire process during his TinyML workshop from the 2020 Hackaday Remoticon. The video has just now been published and can be viewed below.

This is truly an end-to-end Hello World for getting machine learning up and running on a microcontroller. Shawn covers the process of collecting and preparing the audio samples, training the data set, and getting it all onto the microcontroller. At the end of two hours, he’s able to show the STM32 recognizing and responding to two different spoken words. Along the way he pauses to discuss the context of what’s happening in every step, which will help you go back and expand in those areas later to suit your own project needs.
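If you want a feel for the middle of that pipeline before watching, here is a rough sketch of the train-and-convert step for a two-word keyword spotter in plain TensorFlow/Keras. It is not necessarily the toolchain used in the workshop, and the sample format, model size, and file names are assumptions.

# Rough sketch, not the workshop's exact toolchain: train a tiny two-word
# keyword spotter and convert it to a .tflite flatbuffer. Assumes `samples`
# is an array of 1-second, 16 kHz clips and `labels` says which word is in
# each clip.
import numpy as np
import tensorflow as tf

def to_spectrograms(audio):                       # audio: (batch, 16000) float32
    stft = tf.signal.stft(audio, frame_length=256, frame_step=128)
    return tf.abs(stft)[..., tf.newaxis].numpy()  # (batch, frames, bins, 1)

def build_model(input_shape, n_words=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_words, activation="softmax"),
    ])

def train_and_convert(samples, labels):
    specs = to_spectrograms(tf.constant(samples, dtype=tf.float32))
    model = build_model(specs.shape[1:])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(specs, np.asarray(labels), epochs=20, validation_split=0.2)
    # Convert to a flatbuffer; on the STM32 this gets dumped to a C array for
    # TensorFlow Lite for Microcontrollers.
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    open("kws_model.tflite", "wb").write(tflite_model)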

Continue reading “Remoticon Video: How To Use Machine Learning With Microcontrollers”

Into The Belly Of The Beast With Placemon

No, no, at first we thought it was a Pokemon too, but Placemon monitors your place, your home, your domicile. Instead of a purpose-built device, like a CO detector or a burglar alarm, this is a generalized monitor that streams data to a central processor where machine learning algorithms notify you if something is awry. In a way, it is like a guard dog who texts you if your place is unusually cold, on fire, unlawfully occupied, or underwater.

[anfractuosity] is trying to make a hacker-friendly version inspired by a scientific paper about general-purpose sensing; it will use less expensive components at the cost of some accuracy. For example, the paper suggests thermopile arrays, which are essentially low-resolution heat vision, but Placemon will have a plain thermometer, which seems like a prudent starting place.

The PCB is ready to start collecting sound, temperature, humidity, barometric pressure, illumination, and passive IR, then report that telemetry via an onboard ESP32 over WiFi. A box running TensorFlow receives the data from any number of locations and is being trained to recognize the sensor signatures of a few everyday household events. Training starts with events that are easy to repeat, like kitchen sounds and appliance operations. From there, [anfractuosity] hopes to be versed enough to teach it new sounds, so if a pet gets added to the mix, it doesn’t assume there is an avalanche every time Fluffy needs to go to the bathroom.
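As a minimal sketch of what the receiving box could look like, assuming JSON telemetry over WiFi and a small Keras classifier: the field names, window length, event labels, and model path below are all illustrative, not taken from the actual Placemon design.

# Minimal sketch of the receiving side, under assumed names: the ESP32 posts
# JSON telemetry, the box buffers a window of readings and asks a trained
# model which household event (if any) it looks like.
import collections
import json
import numpy as np
import tensorflow as tf

FIELDS = ["sound_rms", "temperature", "humidity", "pressure", "lux", "pir"]
EVENTS = ["quiet", "kettle", "dishwasher", "door"]   # hypothetical classes
WINDOW = 30                                          # e.g. 30 samples ~ 30 s

window = collections.deque(maxlen=WINDOW)
model = tf.keras.models.load_model("placemon_events.keras")  # assumed path

def on_telemetry(payload: bytes):
    """Call this for every telemetry message received from the ESP32."""
    reading = json.loads(payload)
    window.append([float(reading[f]) for f in FIELDS])
    if len(window) < WINDOW:
        return None
    features = np.asarray(window, dtype=np.float32)[np.newaxis, ...]
    scores = model.predict(features, verbose=0)[0]
    return EVENTS[int(np.argmax(scores))]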

We have another outstanding example of sensing household events without directly interfacing with an appliance, and bringing a sensor suite to your car might be up your alley.

The Smallest Cell Phone Picture

Mobile phones are the photography tool for most of us, but they are a blunt tool. If you love astrophotography, you buy a DSLR and a lens adapter. Infrared photography needs camera surgery or a special unit. If you want to look closer to home, you may have a microscope with a CCD. Your pocket computer is not manufactured for microscopy, but that does not mean it cannot be convinced. Most of us have held our lens up to the eyepiece of some binoculars or a microscope, and it sort of works, but it is far from perfect. [Benedict Diederich] and a team are proving that we can get darn beautiful images with a microscope, a phone holder, and some purpose-built software on an Android phone with their cellSTORM.

The trick to getting useful images is to compare a series of pictures and figure out which pixels matter and which ones are noisy. Imagine someone shows you grainy nighttime footage from an outdoor security camera. When you pause, it looks like hot garbage, and you can’t tell the difference between a patio chair and a shrubbery. As it plays, the noisy pixels bounce around, and you figure out you’re looking at a spruce bush, and that is roughly how the software parses out a crisp image. At the cost of frame rate, you get clarity, which is why you need a phone holder. Some of their tests took minutes, so astrophotography might not fare as well.
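The real cellSTORM reconstruction is considerably more sophisticated, but the core intuition, that pixels which fluctuate randomly wash out across many frames while real structure persists, fits in a few lines of NumPy. The spot size and noise level here are made up purely for demonstration.

# Toy illustration of the stack-and-compare idea: averaging many noisy frames
# suppresses random pixel noise while real structure stays put. The standard
# deviation map shows which pixels were "jumpy".
import numpy as np

def stack_frames(frames):
    """frames: (n_frames, height, width) array from the phone camera."""
    stack = np.asarray(frames, dtype=np.float32)
    mean = stack.mean(axis=0)          # noise averages out across frames
    noise = stack.std(axis=0)          # per-pixel fluctuation
    return mean, noise

# Example: 200 noisy frames of a faint synthetic spot.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0
frames = truth + rng.normal(scale=2.0, size=(200, 64, 64))
clean, noise_map = stack_frames(frames)   # the spot is visible in `clean`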

We love high-resolution pictures of tiny things and that isn’t going to change anytime soon.

Thank you [Dr. Nicolás De Francesco] for the tip.


Hackaday Links: October 20, 2019

It’s Nobel season again, with announcements of the prizes in literature, economics, medicine, physics, and chemistry going to worthies the world over. The wording of a Nobel citation is usually a vast oversimplification of decades of research and ends up being a scientific word salad. But this year’s chemistry Nobel citation couldn’t be simpler: “For the development of lithium-ion batteries”. John Goodenough, Stanley Whittingham, and Akira Yoshino share the prize for separate work stretching back to the oil embargo of the early 1970s, when Whittingham built the first lithium battery cathode. Goodenough made the major discovery in 1980 that a cobalt oxide cathode improved the battery immensely, and Yoshino turned both discoveries into the world’s first practical lithium-ion battery in 1985. Normally, Nobel-worthy achievements are somewhat esoteric and cover a broad area of discovery that few ordinary people can relate to, but this is one that most of us literally carry around every day.

What’s going on with Lulzbot? Nothing good, if the reports of mass layoffs and employee lawsuits are to be believed. Aleph Objects, the Colorado company that manufactures the Lulzbot 3D printer, announced that they would be closing down the business and selling off the remaining inventory of products by the end of October. There was a reported mass layoff on October 11, with 90 of its 113 employees getting a pink slip. One of the employees filed a class-action suit in federal court, alleging that Aleph failed to give 60 days’ notice of terminations, which a company with more than 100 employees is required to do under federal law. As for the reason for the closure, nobody in the company’s leadership is commenting aside from the usual “streamlining operations” talk. Could it be that the flood of cheap 3D printers from China has commoditized the market, making it too hard for any manufacturer to stand out on features? If so, we may see other printer makers go under too.

For all the reported hardships of life aboard the International Space Station – the problems with zero-gravity personal hygiene, the lack of privacy, and an aroma that ranges from machine-shop to sweaty gym sock – the reward must be those few moments when an astronaut gets to go into the cupola at night and watch the Earth slide by. They all snap pictures, of course, but surprisingly few of them are cataloged or cross-referenced to the position of the ISS, so there’s a huge backlog of beautiful images of unidentified cities around the planet. Lost at Night aims to change that by enlisting the pattern-matching abilities of volunteers to compare problem images with known images of the night lights of cities around the world. If nothing else, it’s a good way to get a glimpse at what the astronauts get to see.

Which Pi is the best Pi when it comes to machine learning? That depends on a lot of things, and Evan at Edje Electronics has done some good work comparing the Pi 3 and Pi 4 in a machine vision application. The SSD-MobileNet model was compiled to run on TensorFlow, TF Lite, or the Coral USB accelerator, using both a Pi 3 and a Pi 4. Evan drove around with each rig as a dashcam, capturing typical street scenes and measuring the frame rate from each setup. It’s perhaps no surprise that the Pi 4 and Coral setup won the day, but the degree to which it won was unexpected. It blew everything else away with 34.4 fps; the other five setups ranged from 1.37 to 12.9 fps. Interesting results, and good to keep in mind for your next machine vision project.
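For a sense of how such a comparison gets measured, here is a sketch of the timing loop, assuming the standard tflite_runtime interpreter API and the Edge TPU delegate for the Coral case; the model paths and frame source are placeholders rather than Evan’s actual scripts.

# Sketch of the measurement itself: time N inference passes and report fps.
# Assumes a quantized uint8 detection model; pass use_coral=True to load the
# Edge TPU delegate for a Coral-compiled model.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def benchmark(model_path, n_frames=200, use_coral=False):
    delegates = [load_delegate("libedgetpu.so.1")] if use_coral else []
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # A random frame stands in for dashcam video; shape comes from the model.
    frame = np.random.randint(0, 255, size=inp["shape"], dtype=np.uint8)
    start = time.monotonic()
    for _ in range(n_frames):
        interpreter.set_tensor(inp["index"], frame)
        interpreter.invoke()
    fps = n_frames / (time.monotonic() - start)
    print(f"{model_path}: {fps:.1f} fps")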

Have you accounted for shrinkage? No, not that shrinkage – shrinkage in your 3D-printed parts. James Clough ran into shrinkage issues with a part that needed to match up to a PCB he made. It didn’t, and he shared a thorough analysis of the problem and its solution. While we haven’t run into this problem yet, we can see how it happened – pretty much everything, including PLA, shrinks as it cools. He simply scaled up the model slightly before printing, which is a good tip to keep in mind.
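The compensation itself is a single ratio: print a test part, measure it, and scale the model by designed over measured before slicing the real thing. The numbers here are made up for illustration, not from James’s write-up.

# Shrinkage compensation in one line: scale the model by designed/measured.
designed_mm = 100.0                      # hypothetical test-part dimension
measured_mm = 99.2                       # what the calipers actually say
scale = designed_mm / measured_mm        # ~1.008, i.e. scale up by ~0.8 %
print(f"Scale the model by {scale:.4f} ({(scale - 1) * 100:.2f} % larger)")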

And finally, if you’ve ever tried to break a bundle of spaghetti in half before dropping it in boiling water, you likely know the heartbreak of multiple breakage – many of the strands will fracture into three or more pieces, with the shorter bits shooting away like so much kitchen shrapnel. Because the world apparently has no big problems left to solve, a group of scientists has now figured out how to break spaghetti into only two pieces. Oh sure, they mask it in paper with the lofty title “Controlling fracture cascades through twisting and quenching”, but what it boils down to is applying an axial twist to the spaghetti before bending. That reduces the amount of bending needed to break the pasta, which reduces the shock that propagates along the strand and causes multiple breaks. They even built a machine to do just that, but since it only breaks a strand at a time, clearly there’s room for improvement. So get hacking!

Machine Learning With Microcontrollers Hack Chat

Join us on Wednesday, September 11 at noon Pacific for the Machine Learning with Microcontrollers Hack Chat with Limor “Ladyada” Fried and Phillip Torrone from Adafruit!

We’ve gotten to the point where a $35 Raspberry Pi can be a reasonable alternative to a traditional desktop or laptop, and microcontrollers in the Arduino ecosystem are getting powerful enough to handle some remarkably demanding computational jobs. But there’s still one area where microcontrollers seem to be lagging a bit: machine learning. Sure, there are purpose-built edge-computing SBCs, but wouldn’t it be great to be able to run AI models on versatile and ubiquitous MCUs that you can pick up for a couple of bucks?

We’re moving in that direction, and our friends at Adafruit Industries want to stop by the Hack Chat and tell us all about what they’re working on. In addition to Ladyada and PT, we’ll be joined by Meghna Natraj, Daniel Situnayake, and Pete Warden, all from the Google TensorFlow team. If you’ve got any interest in edge computing on small form-factor computers, you won’t want to miss this chat. Join us, ask your questions about TensorFlow Lite and TensorFlow Lite for Microcontrollers, and see what’s possible in machine learning way out on the edge.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, September 11 at 12:00 PM Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Bike-Mounted Synthetic-Aperture Radar Makes Detailed Images

Synthetic-aperture radar, in which a moving radar is used to simulate a very large antenna and obtain high-resolution images, is typically not the stuff of hobbyists. Nobody told that to [Henrik Forstén], though, and so we’ve got this bicycle-mounted synthetic-aperture radar project to marvel over as a result.

Neither the electronics nor the math involved in making SAR work is trivial, so [Henrik]’s comprehensive write-up is invaluable to understanding what’s going on. First step: build a 6 GHz frequency-modulated continuous-wave (FMCW) radar, a project that [Henrik] undertook some time back that really knocked our socks off. His FMCW set is good enough to resolve human-scale objects at about 100 meters.

Moving the radar and capturing data along a path are the next steps and are pretty simple, but figuring out what to do with the data is anything but. [Henrik] goes into great detail about the SAR algorithm he used, called Omega-K, a routine built around the Fast Fourier Transform, which he implemented for the GPU using TensorFlow. We usually see TensorFlow in neural network applications, but here it turned out remarkably detailed 2D scans of a parking lot he rode through with the bike-mounted radar. [Henrik] added an auto-focus routine as well, and you can clearly see each parked car, light pole, and distant building within range of the radar.
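This is not [Henrik]’s Omega-K implementation, but the building block it leans on, big 2D FFTs and frequency-domain phase multiplies, is easy to illustrate with TensorFlow, which will run it on a GPU without any extra work. The phase_filter term below stands in for a precomputed matched-filter phase and is assumed for the sketch.

# Core SAR building block only, not the full Omega-K algorithm: take the raw
# (slow-time x fast-time) complex data matrix into the 2D frequency domain,
# apply a precomputed phase filter, and transform back.
import tensorflow as tf

def focus_block(raw, phase_filter):
    raw = tf.convert_to_tensor(raw, dtype=tf.complex64)
    spectrum = tf.signal.fft2d(raw)                     # 2D frequency domain
    spectrum = spectrum * tf.cast(phase_filter, tf.complex64)
    return tf.signal.ifft2d(spectrum)                   # partially focused image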

We find it pretty amazing what [Henrik] was able to accomplish with relatively low-budget equipment. Synthetic-aperture radar has a lot of applications, and we’d love to see this refined and developed further.

[via r/electronics]