Join us on Wednesday, September 11 at noon Pacific for the Machine Learning with Microcontrollers Hack Chat with Limor “Ladyada” Fried and Phillip Torrone from Adafruit!
We’ve gotten to the point where a $35 Raspberry Pi can be a reasonable alternative to a traditional desktop or laptop, and microcontrollers in the Arduino ecosystem are getting powerful enough to handle some remarkably demanding computational jobs. But there’s still one area where microcontrollers seem to be lagging a bit: machine learning. Sure, there are purpose-built edge-computing SBCs, but wouldn’t it be great to be able to run AI models on versatile and ubiquitous MCUs that you can pick up for a couple of bucks?
We’re moving in that direction, and our friends at Adafruit Industries want to stop by the Hack Chat and tell us all about what they’re working on. In addition to Ladyada and PT, we’ll be joined by Meghna Natraj, Daniel Situnayake, and Pete Warden, all from the Google TensorFlow team. If you’ve got any interest in edge computing on small form-factor computers, you won’t want to miss this chat. Join us, ask your questions about TensorFlow Lite and TensorFlow Lite for Microcontrollers, and see what’s possible in machine learning way out on the edge.
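If you’d like to come primed with questions, the usual TensorFlow Lite workflow starts on a desktop: train a model in Keras, then shrink it into a .tflite flatbuffer that can eventually be baked into MCU firmware. Here’s a minimal sketch of that conversion step; the tiny sine-fitting model is just a placeholder for illustration, not anything Adafruit or the TensorFlow team have blessed.

```python
# Minimal sketch: convert a tiny Keras model to a TensorFlow Lite flatbuffer.
# The sine-approximation model here is a throwaway placeholder; any small
# Keras model works the same way.
import numpy as np
import tensorflow as tf

# Toy training data: approximate y = sin(x)
x = np.random.uniform(0, 2 * np.pi, (1000, 1)).astype(np.float32)
y = np.sin(x)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, verbose=0)

# Convert to a .tflite flatbuffer suitable for TensorFlow Lite (and, with
# quantization, for TensorFlow Lite for Microcontrollers).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sine_model.tflite", "wb") as f:
    f.write(tflite_model)

# On the MCU side the flatbuffer is typically embedded as a C array,
# e.g. with `xxd -i sine_model.tflite > sine_model.h`.
```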
Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, September 11 at 12:00 PM Pacific time. If time zones have got you down, we have a handy time zone converter.
Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.
Synthetic-aperture radar, in which a moving radar is used to simulate a very large antenna and obtain high-resolution images, is typically not the stuff of hobbyists. Nobody told that to [Henrik Forstén], though, and so we’ve got this bicycle-mounted synthetic-aperture radar project to marvel over as a result.
Neither the electronics nor the math involved in making SAR work is trivial, so [Henrik]’s comprehensive write-up is invaluable to understanding what’s going on. First step: build a 6-GHz frequency-modulated continuous-wave (FMCW) radar, a project [Henrik] undertook some time back that really knocked our socks off. His FMCW set is good enough to resolve human-scale objects at about 100 meters.
Moving the radar and capturing data along a path are the next steps and are pretty simple, but figuring out what to do with the data is anything but. [Henrik] goes into great detail about the SAR algorithm he used, called Omega-K, a routine that makes use of the Fast Fourier Transform and which he implemented for the GPU using TensorFlow. We usually see TensorFlow in neural net applications, but here the code turned out remarkably detailed 2D scans of a parking lot he rode through with the bike-mounted radar. [Henrik] added an auto-focus routine as well, and you can clearly see each parked car, light pole, and distant building within range of the radar.
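We haven’t picked through [Henrik]’s code line by line, but the core idea of borrowing TensorFlow as a GPU-accelerated FFT engine is easy to sketch. Something along these lines (the random array is a stand-in for the captured FMCW samples) pushes a 2D transform onto the GPU when one is available:

```python
# Hedged sketch: using TensorFlow purely as a GPU FFT engine, the way the
# Omega-K processing chain leans on it. The random array below is a
# stand-in for real captured FMCW radar samples.
import numpy as np
import tensorflow as tf

raw = np.random.randn(512, 2048).astype(np.float32)  # pulses x range samples
real = tf.constant(raw)
data = tf.complex(real, tf.zeros_like(real))

# 2D FFT across the pulse (azimuth) and range dimensions; TensorFlow
# dispatches this to the GPU automatically when one is present.
spectrum = tf.signal.fft2d(data)

# Omega-K proper then does a Stolt interpolation in the frequency domain
# before transforming back; the inverse FFT here is just a placeholder
# for that fuller chain.
image = tf.signal.ifft2d(spectrum)
print(image.shape)
```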
We find it pretty amazing what [Henrik] was able to accomplish with relatively low-budget equipment. Synthetic-aperture radar has a lot of applications, and we’d love to see this refined and developed further.
Neural networks are computer systems that are vaguely inspired by the construction of animal brains, and much like human brains, can be trained to obey the whims of the almighty domestic cat. [EdjeElectronics] has built just such a system, and his cat is better off for it.
The build uses a Raspberry Pi, fitted with the Pi Camera board, to image the area around the back door of the house. A Python script regularly captures images and passes them to a TensorFlow neural network for object recognition. The TensorFlow network returns object types and positions to the Python script, and this information is used to determine whether there is a cat in the frame and whether it is inside or outside. If the cat remains in position for ten consecutive frames, a text message is sent via Twilio, prompting the owner to let the cat in or out, as the case may be.
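For the curious, the overall loop is simple to sketch. In the hypothetical snippet below, detect_cat() and capture_frame() are stand-ins for the actual TensorFlow detection call and Pi Camera capture, and the Twilio credentials are placeholders:

```python
# Hedged sketch of the overall flow: watch frames, and if a cat shows up in
# ten consecutive frames, fire off a text via Twilio.
import time
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxx"   # placeholder
AUTH_TOKEN = "your_token"    # placeholder
client = Client(ACCOUNT_SID, AUTH_TOKEN)

def capture_frame():
    # Stand-in: replace with a Pi Camera capture.
    return None

def detect_cat(frame):
    # Stand-in: replace with your TensorFlow object detection call and
    # check whether any returned class is "cat".
    return False

consecutive = 0
while True:
    frame = capture_frame()
    if detect_cat(frame):
        consecutive += 1
    else:
        consecutive = 0

    if consecutive >= 10:
        client.messages.create(
            body="The cat is waiting at the door!",
            from_="+15550001111",   # your Twilio number (placeholder)
            to="+15552223333",      # your phone (placeholder)
        )
        consecutive = 0
    time.sleep(1)
```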
Thirty years ago, object classification was a pie-in-the-sky technology, but now you can run it on a $30 computer to figure out where your pets are. What a time we live in! A similar solution to this problem may be a cat door that unlocks via facial recognition. Video after the break.
[Thanks to Baldpower for the tip!]
Continue reading “Neural Network Knows When Cat Wants To Go Outside”
We’ve been talking a lot about machine learning lately. People are using it for speech generation and recognition, computer vision, and even classifying radio signals. If you’ve yet to climb the learning curve, you might be interested in a new free class from Google using TensorFlow.
We’ve covered TensorFlow tutorials before, of course, but this one is structured as a 15-hour class with 25 lessons and 40 exercises. It’s also straight from the horse’s mouth, so to speak. Google says the class will answer questions like:
- How does machine learning differ from traditional programming?
- What is loss, and how do I measure it?
- How does gradient descent work?
- How do I determine whether my model is effective?
- How do I represent my data so that a program can learn from it?
- How do I build a deep neural network?
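To give a taste of the loss and gradient descent material, here’s a minimal sketch in plain TensorFlow that fits a line to made-up noisy data by repeatedly nudging two parameters downhill along the gradient of a mean-squared-error loss:

```python
# Minimal sketch: gradient descent on a mean-squared-error loss,
# fitting y = w*x + b to noisy synthetic data.
import numpy as np
import tensorflow as tf

# Made-up training data: a noisy line with slope 3 and intercept 2.
x = np.linspace(-1, 1, 200).astype(np.float32)
y = 3.0 * x + 2.0 + np.random.normal(0, 0.1, size=x.shape).astype(np.float32)

w = tf.Variable(0.0)
b = tf.Variable(0.0)
learning_rate = 0.1

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = w * x + b
        loss = tf.reduce_mean(tf.square(y_pred - y))   # the "loss"
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dw)    # step downhill along the gradient
    b.assign_sub(learning_rate * db)

print(f"w={w.numpy():.2f} b={b.numpy():.2f} loss={loss.numpy():.4f}")
```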
Continue reading “Machine Learning Crash Course From Google”
Prostheses are a great help to those who have lost limbs, or who never had them in the first place. Over the past few decades there has been a great deal of research done to make these essential devices more useful, creating prostheses that are capable of movement and more accurately recreating the functions of human body parts. At Georgia Tech, they’re working on just that, with the help of AI.
[Jason Barnes] lost his arm in a work accident, which prevented him from playing the piano the way he used to. The researchers at Georgia Tech worked with him, eventually producing a prosthetic arm that, unlike most, actually has individual finger control. This is achieved with an ultrasound probe that picks up muscle movements elsewhere on his body in fine enough detail to tell the fingers apart. A TensorFlow-based neural network analyzes the ultrasound data to determine which finger the user is trying to move. The use of ultrasound was the major breakthrough that made this possible; previous projects have often relied on electromyogram sensors to read muscle impulses, but these lack the resolution required.
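Georgia Tech hasn’t published a cookbook version of the network as far as we know, so treat this as a back-of-the-napkin sketch: a small Keras classifier that maps a flattened ultrasound frame to one of five finger classes. The input size, layer widths, and random “training data” are all invented for illustration.

```python
# Hedged sketch (not the Georgia Tech model): a small classifier mapping a
# flattened ultrasound frame to one of five finger classes. Input size,
# layers, and the random "dataset" are invented for illustration.
import numpy as np
import tensorflow as tf

NUM_FINGERS = 5
FRAME_PIXELS = 64 * 64          # assumed downsampled ultrasound frame

# Fake stand-in data; real training would use labeled ultrasound captures.
frames = np.random.rand(500, FRAME_PIXELS).astype(np.float32)
labels = np.random.randint(0, NUM_FINGERS, size=500)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(FRAME_PIXELS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_FINGERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(frames, labels, epochs=5, batch_size=32, verbose=0)

# At runtime, each new frame gets classified and the winning class drives
# the corresponding prosthetic finger.
probs = model.predict(frames[:1])
print("Predicted finger:", int(np.argmax(probs)))
```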
The prosthesis is nicknamed the “Skywalker arm”, after its similarities to the prostheses seen in the Star Wars films. It’s not [Jason]’s first advanced prosthetic, either – Georgia Tech has also equipped him with an advanced drumming prosthesis. This allows him to use two sticks with a single arm, the second stick using advanced AI routines to drum along with the music in the room.
It’s great to see music being used as a driver to create high-performance prosthetics and push the state of the art forward. We’re sure [Jason] enjoys performing with the new hardware, too. But perhaps you’d like to try something similar, even though you’ve got two hands already? Try this on for size.
Continue reading “AI Prosthesis Is Music To Our Ears”