Hackaday Links: June 14, 2020

You say you want to go to Mars, but the vanishingly thin atmosphere, the toxic and corrosive soil, the bitter cold, the deadly radiation that sleets down constantly, and the long, perilous journey that you probably won’t return from have turned you off a little. Fear not, because there’s still a way for you to get at least part of you to Mars: your intelligence. Curiosity, the Mars rover now in the eighth year of what was originally planned as a roughly two-year mission, is completely remote-controlled, and NASA would like to add some self-driving capabilities to it. Which is why they’re asking for human help in classifying thousands of images of the Martian surface. By annotating images and pointing out what looks like soil and what looks like rock, you’ll be training an algorithm that one day might be sent up to the rover. If you’ve got the time, give it a shot — it seems a better use of it than training our eventual AI overlords.

We got a tip this week that ASTM, the international standards organization, has made its collection of standards for testing PPE available to the public. With titles like “Standard Test Method for Resistance of Medical Face Masks to Penetration by Synthetic Blood (Horizontal Projection of Fixed Volume at a Known Velocity)”, it seems like the standards body wants to make sure that homebrew PPE gets tested properly before being put into service. The timing of this release is fortuitous since this week’s Hack Chat features Hiram Gay and Lex Kravitz, colleagues from the Washington University School of Medicine who will talk about what they did to test a respirator made from a full-face snorkel mask.

There’s little doubt that Lego played a huge part in the development of many engineers, and many of us never really put them away for good. We still pull them out occasionally, for fun or even for work, especially the Technic parts, which make a great prototyping system. But what if you need a Technic piece that you don’t have, or one that never existed in the first place? Easy — design and print your own custom Technic pieces. Lego Part Designer is a web app that breaks Technic parts down into five possible blocks, and lets you combine them as you see fit. We doubt that most FDM printers can deal with the fine tolerances needed for that satisfying Lego fit, but good enough might be all you need to get a design working.

Chances are pretty good that you’ve participated in more than a few video conferencing sessions lately, and if you’re anything like us you’ve found the experience somewhat lacking. The standard UI, with everyone in the conference organized in orderly rows and columns, reminds us of either a police line-up or the opening of The Brady Bunch, neither of which is particularly appealing. The paradigm could use a little rethinking, which is what Laptops in Space aims to do. By putting each participant’s video feed in a virtual laptop and letting them float in space, you’re supposed to have a more organic meeting experience. There’s a tweet with a short clip, or you can try it yourself. We’re not sure how we feel about it yet, but we’re glad someone is at least trying something new in this space.

And finally, if you’re in need of a primer on charlieplexing, or perhaps just need to brush up on the topic, [pileofstuff] has just released a video that might be just what you need. He explains the tri-state logic LED multiplexing method in detail, and even goes into some alternate uses, like using optocouplers to drive higher loads. We like his style — informal, but with a good level of detail that serves as a jumping-off point for further exploration.
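
His explanation boils down to a neat bit of combinatorics: n tri-state pins can address n × (n − 1) LEDs, because each ordered pair of pins (anode, cathode) selects a different LED while every other pin floats. A quick Python sketch of that pin-state logic (our own illustration, not code from the video):

```python
# Charlieplexing pin logic: each ordered (anode, cathode) pin pair lights a
# different LED, with all remaining pins left floating (high-impedance, 'Z').
from itertools import permutations

def charlieplex_states(n_pins):
    """Return one pin-state tuple per addressable LED.

    Each tuple maps pin index -> 'HIGH', 'LOW', or 'Z' (high-impedance).
    """
    states = []
    for anode, cathode in permutations(range(n_pins), 2):
        pins = ['Z'] * n_pins      # tri-state everything by default
        pins[anode] = 'HIGH'       # source current into this LED's anode
        pins[cathode] = 'LOW'      # sink it through the cathode
        states.append(tuple(pins))
    return states

# Three pins are enough for six LEDs:
print(len(charlieplex_states(3)))  # 6
```

The tri-state ('Z') pins are the whole trick: only one LED in the matrix ever sees both a source and a sink at the same time.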

Train All The Things Contest Update

Back in January when we announced the Train All the Things contest, we weren’t sure what kind of entries we’d see. Machine learning is a huge and rapidly evolving field, after all, and the traditional barriers that computationally intensive processes face have been falling just as rapidly. Constraints are fading away, and we want you to explore this wild new world and show us what you come up with.

Where Do You Run Your Algorithms?

To give your effort a little structure, we’ve come up with four broad categories:

  • Machine Learning on the Edge
    • Edge computing, where systems reach out to cloud resources but run locally, is all the rage. It allows you to leverage the power of other people’s computers (better known as the cloud) for training a model, which is then executed locally. Edge computing is a great way to keep your data local.
  • Machine Learning on the Gateway
    • Pi’s, old routers, what-have-yous – we’ve all got a bunch of devices laying around that bridge space between your local world and the cloud. What can you come up with that takes advantage of this unique computing environment?
  • Machine Learning in the Cloud
    • Forget about subtle — this category unleashes the power of the cloud for your application. Whether it’s Google, Azure, or AWS, show us what you can do with all that raw horsepower at your disposal.
  • Artificial Intelligence Blinky
    • Everyone’s hardware “Hello, world” is blinking an LED, and this is the machine learning version of that. We want you to use a simple microprocessor to run a machine learning algorithm. Amaze us with what you can make an Arduino do.

These Hackers Trained Their Projects, You Should Too!

We’re a little more than a month into the contest. We’ve seen some interesting entries, but of course we’re hungry for more! Here are a few that have caught our eye so far:

  • Intelligent Bat Detector – [Tegwyn☠Twmffat] has bats in his… backyard, so he built this Jetson Nano-powered device to capture their calls and classify them by species. It’s a fascinating adventure at the intersection of biology and machine learning.
  • Blackjack Robot – RAIN MAN 2.0 is [Evan Juras]’ cure for the casino adage of “The house always wins.” We wouldn’t try taking the Raspberry Pi card counter to Vegas, but it’s a great example of what YOLO can do.
  • AI-enabled Glasses – AI meets AR in ShAIdes, [Nick Bild]’s sunglasses equipped with a camera and Nano to provide a user interface to the world. Wave your hand over a lamp and it turns off. Brilliant!
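
Incidentally, the counting side of a project like RAIN MAN is far simpler than the vision side. The classic Hi-Lo system, shown here purely as an illustration (we don’t know which scheme [Evan Juras] actually uses), is just a running sum over the ranks the camera has seen:

```python
# Hi-Lo card counting: low cards (2-6) are +1, 7-9 are neutral, and
# tens, face cards, and aces are -1. A positive running count means the
# remaining shoe is rich in high cards, which favors the player.
HILO_VALUES = {
    '2': 1, '3': 1, '4': 1, '5': 1, '6': 1,
    '7': 0, '8': 0, '9': 0,
    '10': -1, 'J': -1, 'Q': -1, 'K': -1, 'A': -1,
}

def running_count(seen_ranks):
    """Sum the Hi-Lo values of every card rank seen so far."""
    return sum(HILO_VALUES[r] for r in seen_ranks)

print(running_count(['2', '5', 'K', '9', 'A']))  # 0
```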

You’ve got till noon Pacific time on April 7, 2020 to get your entry in, and one winner from each of the four categories will be awarded a $100 Tindie gift card, courtesy of our sponsor Digi-Key. It’s time to ramp up your machine learning efforts and get a project entered! We’d love to see more examples of straight cloud AI applications, and the AI blinky category remains wide open at this point. Get in there and give machine learning a try!

Reaction Trainer Keeps You On Your Toes

In many sports, it’s important for competitors to be light on their feet and able to react quickly to external stimuli. It all helps with getting balls in goals, and many athletes undergo reaction drills as part of their training regimen. To help with this, [mblaz] set out to build a set of reaction trainers.

The training setup consists of a series of discs, each with glowing LEDs and a proximity sensor. The discs randomly light up, requiring a touch or wave to switch them off. At this point, another disc will light randomly, and so on.
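
The game loop behind that behavior is straightforward. Here’s a hypothetical Python sketch of the logic (the real firmware is AVR C on the ATmega328; `light` and `wait_for_touch` stand in for the LED and proximity-sensor I/O, which happens over the radio link):

```python
import random
import time

def reaction_drill(n_discs, rounds, light, wait_for_touch, rng=random):
    """Light a random disc, wait for the athlete to clear it, log the time."""
    times = []
    prev = None
    for _ in range(rounds):
        # never relight the disc that just went out
        disc = rng.choice([d for d in range(n_discs) if d != prev])
        light(disc)
        start = time.monotonic()
        wait_for_touch(disc)   # blocks until the proximity sensor trips
        times.append(time.monotonic() - start)
        prev = disc
    return times
```

Collecting the reaction times per round is a nice bonus: the same loop that runs the drill also scores it.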

The discs are built using an ATmega328 to run the show, with NRF24L01+ radios used to communicate between the modules. High brightness red LEDs are used for indication. An optical proximity sensor is used for its fast reaction time and low cost, while power comes via a small lithium polymer battery integrated into each disc.

We’re sure [mblaz] and his fellow athletes will find the rig to be useful in their training. There’s plenty of scope for electronics to help out with athletic training; this boxing trainer is a great example. If you’ve got a great sports engineering project of your own, don’t hesitate to send it in!

AI-Enabled Teletype Live Streams Nearly Coherent Conversations

If you’ve got a working Model 33 Teletype, every project starts to look like an excuse to use it. While the hammering, whirring symphony of a teleprinter going full tilt brings to mind a simpler time of room-sized computers and 300 baud connections, it turns out that a Teletype makes a decent AI conversationalist, within the limits of AI, of course.

The Teletype machine that [Hugh Pyle] used for this interesting project, a Model 33 ASR with the paper tape reader, is a nostalgia piece that figures prominently in many of his projects. As such, [Hugh] has access to tons of Teletype documentation, so when OpenAI released their GPT-2 text generation language model, he decided to use the docs as a training set for the model, and then use the Teletype to print out text generated by the model. Initial results were about as weird as you’d expect for something trained on technical docs from the 1960s. The next step was obvious: make a chat-bot out of it and stream the results live. The teletype can be seen clattering away in the recorded stream below, using the chat history as a prompt for generating text responses, sometimes coherent, sometimes disturbing, and sometimes just plain weird.
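
We don’t have [Hugh]’s source, but the “chat history as a prompt” part can be sketched in a few lines. GPT-2’s context window is finite, so something has to keep only the most recent lines; the `build_prompt` helper and its character budget below are our own invention, not his API:

```python
def build_prompt(history, max_chars=600):
    """Return the newest chat lines that fit in the budget, oldest first."""
    kept, total = [], 0
    for line in reversed(history):        # walk backwards from the newest line
        if total + len(line) + 1 > max_chars:
            break                          # older lines no longer fit
        kept.append(line)
        total += len(line) + 1             # +1 for the newline separator
    return "\n".join(reversed(kept))

history = ["user: hello", "bot: READY.", "user: what is a teletype?"]
print(build_prompt(history, max_chars=40))
```

The model’s generated reply would then be appended to `history`, printed on the Teletype, and the loop repeats.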

Alas, the chat-bot and stream are only active a couple of times a week, so you’ll have to wait a bit to try it out. But it looks like a fun project, and we appreciate the mash-up of retro tech and AI. We’ve seen teleprinters revived for modern use before, both for texting and Tweeting, but this one almost has a mind of its own.

Continue reading “AI-Enabled Teletype Live Streams Nearly Coherent Conversations”

Retrotechtacular: Balloons Go To War

To the average person, the application of balloon technology pretty much begins and ends with birthday parties. The Hackaday reader might be able to expand on that a bit, as we’ve covered several projects that have lofted various bits of equipment into the stratosphere courtesy of high-altitude balloons. But even that is a relatively minor distinction. They might be bigger than their multicolored brethren, but it’s still easy for a modern observer to write them off as trivial.

But during the 1940s, they were important pieces of wartime technology. While powered aircraft such as fighters and bombers were obviously more vital to the larger war effort, balloons still had numerous defensive and reconnaissance applications. They were useful enough that the United States Navy produced a training film entitled History of Balloons which takes viewers through the early days of manned ballooning. Examples of how the core technology developed and matured over time are intermixed with footage of balloons being used in both the First and Second World Wars, and parallels are drawn to show how those early pioneers influenced contemporary designs.

Even when the film was produced in 1944, balloons were an old technology. The timeline in the video starts all the way back in 1783 with the first piloted hot air balloon created by the Montgolfier brothers in Paris, and then quickly covers iterative advancements to ballooning made into the 1800s. As was common in training films from this era, the various “reenactments” are cartoons complete with comic narration in the style of W.C. Fields, designed to be entertaining and memorable to the target audience of young men.

While the style might seem a little strange to modern audiences, there’s plenty of fascinating information packed within the film’s half-hour run time. The rapid advancements to ballooning between 1800 and the First World War are detailed, including the various instruments developed for determining important information such as altitude and rate of climb. The film also explains how some of the core aspects of manned ballooning, like the gradual release of ballast or the fact that a deflated balloon doubles as a rudimentary parachute in an emergency, were discovered quite by accident.

When the film works its way to the contemporary era, we are shown the process of filling Naval balloons with hydrogen and preparing them for flight. The film also talks at length about the so-called “barrage balloons” used in both World Wars, including a rather dastardly advancement that added mines to the balloons’ tethers to destroy aircraft unlucky enough to get in their way.

This period in human history saw incredible technological advancements, and films such as these which were created during and immediately after the Second World War provide an invaluable look at cutting edge technology from a bygone era. One wonders what the alternative might be for future generations looking back on the technology of today.

Continue reading “Retrotechtacular: Balloons Go To War”

But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug busting system. The squeamish or those challenged by the ethics of slug eradication can relax: no slugs have been harmed yet. So far [Tegwyn] has concentrated on the detection of slugs, a considerably non-trivial problem since there are few AI models that are already trained for slugs.

So far, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat as they only come out at night, they blend into their background, and their slimy surface makes for challenging reflections. The video below shows moderate success of the trained model using a static image of a slug; it also gives a glimpse at the hardware used, which includes an Nvidia Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it up from the 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He’s willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
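
That 50 to 60% figure is the detector’s confidence score, and the usual pattern is to threshold on it before acting. A toy sketch of why the number matters before anyone hooks up a laser (our illustration, not [Tegwyn]’s code; the detection tuples and threshold are made up):

```python
def slugs_to_zap(detections, threshold=0.9):
    """Keep only detections the model is very sure are slugs.

    detections: list of (label, confidence, bounding_box) tuples, the
    typical shape of an object-detector's output. For anything involving
    a laser you want a high bar, not 0.5.
    """
    return [d for d in detections if d[0] == "slug" and d[1] >= threshold]

detections = [
    ("slug", 0.95, (10, 10, 40, 18)),
    ("slug", 0.55, (80, 22, 30, 12)),   # today's model lives around here
    ("leaf", 0.97, (50, 50, 60, 60)),   # high confidence, wrong class
]
print(len(slugs_to_zap(detections)))  # 1
```

At a 50 to 60% confidence level, most of the real slugs would fall below a safe threshold, which is exactly why more training images come before the lasers.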

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.

Continue reading “But Can Your AI Recognize Slugs?”

Using TensorFlow To Recognize Your Own Objects

When the time comes to add an object recognizer to your hack, all you need do is choose one of the many available models and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google’s Inception object recognizer. He does it for Windows 10 since there’s already plenty of documentation out there for Linux OSes.

You’re not limited to just Inception though. Inception is one of a few which are very accurate but it can take a few seconds to process each image and so is more suited to a fast laptop or desktop machine. MobileNet is an example of one which is less accurate but recognizes faster and so is better for a Raspberry Pi or mobile phone.

You’ll need a few hundred images of your objects. These can either be scraped from an online source like Google’s images or you can take your own photos. If you use the latter approach, make sure to shoot from various angles, rotations, and with different lighting conditions. Fill your background with various other things and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards so he first sprinkled them around his living room, added some clutter, and walked around, taking pictures using his phone. Once uploaded, some easy-to-use software helped him to label them all in around an hour. Note that he trained on 24 different objects, which is the number of different cards you get in a pinochle deck.
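
One chore in any retraining workflow is holding a fraction of the labeled images back for testing. A minimal stdlib sketch of that split (the 20% figure, the seed, and the function name are our assumptions; follow [Edje Electronics]’ guide for his exact directory layout):

```python
import random

def train_test_split(filenames, test_fraction=0.2, seed=42):
    """Shuffle deterministically and carve off a held-out test set."""
    shuffled = sorted(filenames)            # sort first so the split is reproducible
    random.Random(seed).shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

# e.g. 300 photos of pinochle cards -> 240 for training, 60 for testing
train, test = train_test_split([f"card_{i:03}.jpg" for i in range(300)])
print(len(train), len(test))  # 240 60
```

Keeping the test images out of training is what makes the final accuracy number honest.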

You’ll need to install a lot of software and do some configuration, but he walks you through that too. Ideally, you’d use a computer with a GPU but that’s optional, the difference being between three and twenty-four hours of training. Be sure to both watch his video below and follow the steps on his GitHub page. The GitHub page is kept most up-to-date but his video does a more thorough job of walking you through using the software, such as how to use the image labeling program.

Why is he training an object recognizer on playing cards? This is just one more step in making a blackjack playing robot. Previously he’d done an impressive job using OpenCV, even though the algorithm handled non-overlapping cards only. Google’s Inception, however, recognizes partially obscured cards. This is a very interesting project, one which we’ll be keeping an eye on. If you have any ideas for him, leave them in the comments below.

Continue reading “Using TensorFlow To Recognize Your Own Objects”