The Smallest Cell Phone Picture

Mobile phones are the go-to photography tool for most of us, but they are a blunt tool. If you love astrophotography, you buy a DSLR and a lens adapter. Infrared photography needs camera surgery or a special unit. If you want to look closer to home, you may have a microscope with a CCD. Your pocket computer is not manufactured for microscopy, but that does not mean it cannot be convinced. Most of us have held a phone lens up to the eyepiece of some binoculars or a microscope, and it sort of works, but it is far from perfect. [Benedict Diederich] and a team are proving that we can get darn beautiful images with a microscope, a phone holder, and some purpose-built Android software, a combination they call cellSTORM.

The trick to getting useful images is to compare a series of pictures and figure out which pixels matter and which ones are noisy. Imagine someone shows you grainy nighttime footage from an outdoor security camera. When you pause, it looks like hot garbage, and you can’t tell the difference between a patio chair and a shrubbery. As it plays, the noisy pixels bounce around, and you figure out you’re looking at a spruce bush, and that is roughly how the software parses out a crisp image. At the cost of frame rate, you get clarity, which is why you need a phone holder. Some of their tests took minutes, so astrophotography might not fare as well.
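
If you want to play with that frame-stacking idea yourself, the sketch below shows the simplest version of it: stack a series of frames of the same scene and let the stable pixels win. To be clear, this is only an illustration of the principle, not the project's actual processing pipeline, and the file paths are placeholders.

```python
# Rough illustration of the idea only -- not cellSTORM's actual pipeline.
# Averaging (or taking the median of) many frames of the same scene lets the
# stable pixels reinforce each other while the noisy ones wash out.
import glob
import numpy as np
from PIL import Image

# Placeholder path -- point this at your own stack of frames.
frames = [np.asarray(Image.open(f), dtype=np.float32)
          for f in sorted(glob.glob("frames/*.png"))]
stack = np.stack(frames)                 # shape: (num_frames, H, W[, channels])

mean_img = stack.mean(axis=0)            # averages away zero-mean noise
median_img = np.median(stack, axis=0)    # more robust to the occasional hot pixel

Image.fromarray(mean_img.astype(np.uint8)).save("denoised_mean.png")
Image.fromarray(median_img.astype(np.uint8)).save("denoised_median.png")
```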

We love high-resolution pictures of tiny things and that isn’t going to change anytime soon.

Thank you [Dr. Nicolás De Francesco] for the tip.

Hackaday Links: October 20, 2019

It’s Nobel season again, with announcements of the prizes in literature, economics, medicine, physics, and chemistry going to worthies the world over. The wording of a Nobel citation is usually a vast oversimplification of decades of research and ends up being a scientific word salad. But this year’s chemistry Nobel citation couldn’t be simpler: “For the development of lithium-ion batteries”. John Goodenough, Stanley Whittingham, and Akira Yoshino share the prize for separate work stretching back to the oil embargo of the early 1970s, when Whittingham invented the first lithium cathode. Goodenough made the major discovery in 1980 that adding cobalt improved the lithium cathode immensely, and Yoshino turned both discoveries into the world’s first practical lithium-ion battery in 1985. Normally, Nobel-worthy achievements are somewhat esoteric and cover an area of discovery that few ordinary people can relate to, but this is one that most of us literally carry around every day.

What’s going on with Lulzbot? Nothing good, if the reports of mass layoffs and employee lawsuits are to be believed. Aleph Objects, the Colorado company that manufactures the Lulzbot 3D printer, announced that they would be closing down the business and selling off the remaining inventory of products by the end of October. There was a reported mass layoff on October 11, with 90 of its 113 employees getting a pink slip. One of the employees filed a class-action suit in federal court, alleging that Aleph failed to give 60 days’ notice of terminations, which a company with more than 100 employees is required to do under federal law. As for the reason for the closure, nobody in the company’s leadership is commenting aside from the usual “streamlining operations” talk. Could it be that the flood of cheap 3D printers from China has commoditized the market, making it too hard for any manufacturer to stand out on features? If so, we may see other printer makers go under too.

For all the reported hardships of life aboard the International Space Station – the problems with zero-gravity personal hygiene, the lack of privacy, and an aroma that ranges from machine shop to sweaty gym sock – the reward must be those few moments when an astronaut gets to go into the cupola at night and watch the Earth slide by. They all snap pictures, of course, but surprisingly few of the images are cataloged or cross-referenced to the position of the ISS, so there’s a huge backlog of beautiful but unidentified cityscapes from around the planet. Lost at Night aims to change that by enlisting the pattern-matching abilities of volunteers to compare the problem images with known images of the night lights of cities around the world. If nothing else, it’s a good way to get a glimpse of what the astronauts get to see.

Which Pi is the best Pi when it comes to machine learning? That depends on a lot of things, and Evan at Edje Electronics has done some good work comparing the Pi 3 and Pi 4 in a machine vision application. The SSD-MobileNet model was compiled to run on TensorFlow, TF Lite, or the Coral USB accelerator, using both a Pi 3 and a Pi 4. Evan drove around with each rig as a dashcam, capturing typical street scenes and measuring the frame rate from each setup. It’s perhaps no surprise that the Pi 4 and Coral setup won the day, but the degree to which it won was unexpected. It blew everything else away with 34.4 fps; the other five setups ranged from 1.37 to 12.9 fps. Interesting results, and good to keep in mind for your next machine vision project.
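
For a feel of what such a benchmark loop looks like, here is a generic TensorFlow Lite detection loop in the same spirit. This is not Evan’s benchmark code; the model path, camera index, quantization, and output ordering are assumptions, and the Coral runs would additionally load the Edge TPU delegate.

```python
# Generic TensorFlow Lite object-detection FPS loop -- a sketch in the spirit of
# the benchmark, not Evan's actual code. Model path and camera index are placeholders.
import time
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="ssd_mobilenet.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

cap = cv2.VideoCapture(0)
frames, start = 0, time.time()
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    input_data = np.expand_dims(rgb, axis=0).astype(np.uint8)   # quantized model assumed
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]["index"])  # typical SSD output order
    frames += 1

print("Average FPS:", frames / (time.time() - start))
cap.release()
```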

Have you accounted for shrinkage? No, not that shrinkage – shrinkage in your 3D-printed parts. James Clough ran into shrinkage issues with a part that needed to match up to a PCB he made. It didn’t, and he shared a thorough analysis of the problem and its solution. While we haven’t run into this problem yet, we can see how it happened – pretty much everything, including PLA, shrinks as it cools. He simply scaled up the model slightly before printing, which is a good tip to keep in mind.
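
The compensation math is back-of-the-envelope stuff: measure a printed feature, compare it to the model, and scale by the ratio. The numbers below are made up for illustration, not taken from James’s write-up.

```python
# Back-of-the-envelope shrinkage compensation -- example numbers are invented,
# not from James's write-up.
designed_mm = 100.0   # dimension in the CAD model
printed_mm = 99.6     # what the calipers say after printing

scale = designed_mm / printed_mm
print(f"Scale the model by {scale:.4f} ({(scale - 1) * 100:.2f}% larger)")
# -> Scale the model by 1.0040 (0.40% larger)
```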

And finally, if you’ve ever tried to break a bundle of spaghetti in half before dropping it in boiling water, you likely know the heartbreak of multiple breakage – many of the strands will fracture into three or more pieces, with the shorter bits shooting away like so much kitchen shrapnel. Because the world apparently has no big problems left to solve, a group of scientists has now figured out how to break spaghetti into only two pieces. Oh sure, they dress it up in a paper with the lofty title “Controlling fracture cascades through twisting and quenching”, but what it boils down to is applying an axial twist to the spaghetti before bending it. That reduces the amount of bending needed to break the pasta, which reduces the shock that propagates along the strand and causes the multiple breaks. They even built a machine to do just that, but since it only breaks one strand at a time, clearly there’s room for improvement. So get hacking!

Machine Learning With Microcontrollers Hack Chat

Join us on Wednesday, September 11 at noon Pacific for the Machine Learning with Microcontrollers Hack Chat with Limor “Ladyada” Fried and Phillip Torrone from Adafruit!

We’ve gotten to the point where a $35 Raspberry Pi can be a reasonable alternative to a traditional desktop or laptop, and microcontrollers in the Arduino ecosystem are getting powerful enough to handle some remarkably demanding computational jobs. But there’s still one area where microcontrollers seem to be lagging a bit: machine learning. Sure, there are purpose-built edge-computing SBCs, but wouldn’t it be great to be able to run AI models on versatile and ubiquitous MCUs that you can pick up for a couple of bucks?

We’re moving in that direction, and our friends at Adafruit Industries want to stop by the Hack Chat and tell us all about what they’re working on. In addition to Ladyada and PT, we’ll be joined by Meghna Natraj, Daniel Situnayake, and Pete Warden, all from the Google TensorFlow team. If you’ve got any interest in edge computing on small form-factor computers, you won’t want to miss this chat. Join us, ask your questions about TensorFlow Lite and TensorFlow Lite for Microcontrollers, and see what’s possible in machine learning way out on the edge.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, September 11 at 12:00 PM Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Bike-Mounted Synthetic-Aperture Radar Makes Detailed Images

Synthetic-aperture radar, in which a moving radar is used to simulate a very large antenna and obtain high-resolution images, is typically not the stuff of hobbyists. Nobody told that to [Henrik Forstén], though, and so we’ve got this bicycle-mounted synthetic-aperture radar project to marvel over as a result.

Neither the electronics nor the math involved in making SAR work is trivial, so [Henrik]’s comprehensive write-up is invaluable for understanding what’s going on. First step: build a 6 GHz frequency-modulated continuous-wave (FMCW) radar, a project that [Henrik] undertook some time back that really knocked our socks off. His FMCW set is good enough to resolve human-scale objects at about 100 meters.

Moving the radar and capturing data along a path are the next steps and are pretty simple, but figuring out what to do with the data is anything but. [Henrik] goes into great detail about the SAR algorithm he used, called Omega-K, a routine that makes heavy use of the Fast Fourier Transform and which he implemented for a GPU using TensorFlow. We usually see TensorFlow in neural-network applications, but here the code turned out remarkably detailed 2D scans of a parking lot he rode through with the bike-mounted radar. [Henrik] added an autofocus routine as well, and you can clearly see each parked car, light pole, and distant building within range of the radar.
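
To give a flavor of where that processing starts, the sketch below shows the basic range-compression step that every FMCW radar (SAR or otherwise) begins with: FFT the de-ramped beat signal and map beat frequency to range. This is not [Henrik]’s Omega-K GPU code, and the chirp parameters are placeholders rather than his radar’s actual numbers.

```python
# Basic FMCW range compression on simulated data -- not [Henrik]'s Omega-K code.
import numpy as np

c = 3e8                     # speed of light, m/s
bandwidth = 240e6           # chirp bandwidth, Hz (placeholder value)
sweep_time = 1e-3           # chirp duration, s
slope = bandwidth / sweep_time
fs = 1e6                    # ADC sample rate, Hz
t = np.arange(0, sweep_time, 1 / fs)

target_range = 40.0         # metres
f_beat = 2 * target_range * slope / c          # de-ramped beat frequency
signal = np.cos(2 * np.pi * f_beat * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
ranges = freqs * c / (2 * slope)               # beat frequency maps linearly to range

print(f"Strongest return at ~{ranges[np.argmax(spectrum)]:.1f} m")
```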

We find it pretty amazing what [Henrik] was able to accomplish with relatively low-budget equipment. Synthetic-aperture radar has a lot of applications, and we’d love to see this refined and developed further.

[via r/electronics]

Neural Network Knows When Cat Wants To Go Outside

Neural networks are computer systems that are vaguely inspired by the construction of animal brains, and much like human brains, can be trained to obey the whims of the almighty domestic cat. [EdjeElectronics] has built just such a system, and his cat is better off for it.

The build uses a Raspberry Pi, fitted with the Pi Camera board, to image the area around the back door of the house. A Python script regularly captures images and passes them to a TensorFlow neural network for object recognition. The TensorFlow network returns object type and positions to the Python script. This information can be used to determine if there is a cat in the frame, and if it is inside or outside. If the cat remains in position for ten consecutive frames, a text message is sent via Twilio, indicating to the owner to let the cat in or out, as the case may be.
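
The control flow is simple enough to sketch. The snippet below captures the logic described above but is not [EdjeElectronics]’s actual script: detect_objects() is a stand-in for the TensorFlow detection call, the “inside/outside” dividing line is an assumption about the camera framing, and the Twilio credentials and phone numbers are placeholders.

```python
# Sketch of the described logic, not [EdjeElectronics]'s actual script.
import time
import cv2
from twilio.rest import Client

DOOR_X = 320              # pixel column dividing "inside" from "outside" (assumed framing)
CONSECUTIVE_FRAMES = 10   # how many frames in a row before texting the owner

def detect_objects(frame):
    """Stand-in for the TensorFlow detector.
    Replace with a real model; should return a list of (label, x_center) tuples."""
    return []

twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")   # placeholder credentials
cap = cv2.VideoCapture(0)
cat_outside_count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    cats = [x for label, x in detect_objects(frame) if label == "cat"]
    if cats and cats[0] > DOOR_X:                # cat spotted on the "outside" of the frame
        cat_outside_count += 1
    else:
        cat_outside_count = 0
    if cat_outside_count >= CONSECUTIVE_FRAMES:  # same verdict ten frames running
        twilio.messages.create(body="The cat wants in!",
                               from_="+15550001111", to="+15552223333")
        cat_outside_count = 0
        time.sleep(300)                          # don't nag every frame afterwards
    time.sleep(1)
```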

Thirty years ago, object classification was a pie-in-the-sky technology, but now you can run it on a $30 computer to figure out where your pets are. What a time we live in! A similar solution to this problem may be a cat door that unlocks via facial recognition. Video after the break.

[Thanks to Baldpower for the tip!]

Continue reading “Neural Network Knows When Cat Wants To Go Outside”

Machine Learning Crash Course From Google

We’ve been talking a lot about machine learning lately. People are using it for speech generation and recognition, computer vision, and even classifying radio signals. If you’ve yet to climb the learning curve, you might be interested in a new free class from Google using TensorFlow.

We’ve covered tutorials for TensorFlow before, of course, but this one is structured as a 15-hour class with 25 lessons and 40 exercises. It’s also straight from the horse’s mouth, so to speak. Google says the class will answer questions like:

  • How does machine learning differ from traditional programming?
  • What is loss, and how do I measure it?
  • How does gradient descent work?
  • How do I determine whether my model is effective?
  • How do I represent my data so that a program can learn from it?
  • How do I build a deep neural network?
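
For a taste of the first few of those questions, here’s a toy loss-and-gradient-descent example: fitting a line to a handful of points by minimizing mean-squared-error loss. It’s our own illustration, not material from the course.

```python
# Toy gradient descent: fit y = w*x + b by minimizing mean-squared-error loss.
# Our own illustration, not from Google's course materials.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])            # roughly y = 2x + 1 with noise

w, b = 0.0, 0.0
learning_rate = 0.05
for step in range(500):
    pred = w * x + b
    error = pred - y
    loss = np.mean(error ** 2)                # the "loss" the course asks about
    grad_w = 2 * np.mean(error * x)           # d(loss)/dw
    grad_b = 2 * np.mean(error)               # d(loss)/db
    w -= learning_rate * grad_w               # step downhill along the gradient
    b -= learning_rate * grad_b

print(f"w ~ {w:.2f}, b ~ {b:.2f}, final loss ~ {loss:.4f}")
```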

Continue reading “Machine Learning Crash Course From Google”

TensorFlow In Your Browser

If you want to explore machine learning, you can now write applications that train and deploy TensorFlow models in your browser using JavaScript. We know what you are thinking: that has to be slow. Surprisingly, it isn’t, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU. There are several demos available, including one where you train a Pac-Man game to respond to gestures captured by your webcam. If you try it and then disable accelerated graphics in your browser options, you’ll see just what a speed-up the GPU provides.

Continue reading “TensorFlow In Your Browser”