How Hertha Ayrton Enabled The Titanic To Call SOS

[Kathy] recently posted an interesting video about the connection between an electronics pioneer named [Hertha Ayrton] and the arc transmitter. The story starts with the observation of the arc lamp, a name which, we learned, began as a typo of “arch lamp”.

[Hertha] was born into poverty but, very unusually for the day, obtained a science education. That’s probably a whole story in and of itself. During her schooling she fell in love with her professor, [William Ayrton], and they wed.

Continue reading “How Hertha Ayrton Enabled The Titanic To Call SOS”

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit disguised in animal form. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically its vision processing unit, which tremendously accelerates image classification tasks running on an attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally; no connection to Google’s cloud is required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine whether each face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.
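
For a sense of how little code the kit needs, here is a trimmed-down sketch in the spirit of the bundled joy detector demo. It assumes the aiy.vision Python packages that ship on the kit’s SD card image; exact module and attribute names may differ between kit software releases.

```python
# A minimal joy-detector loop in the spirit of the kit's bundled demo.
# Assumes the aiy.vision packages from the kit's SD card image; exact
# names can vary between kit software releases.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, framerate=30) as camera:
    # The face detection model runs on the Vision Bonnet itself, so the
    # Pi Zero W only has to read back the results several times a second.
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            for face in face_detection.get_faces(result):
                print('Face found, joy score: %.2f' % face.joy_score)
```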

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head. One eye is the expected Pi camera; the other is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where visitors could assemble a blinky LED badge, and a second tent focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”

ESP32 Boards With Displays: An Overview

The ESP8266 has become practically the 555 chip of WiFi connected microcontrollers. Traditionally, you’d buy one on a little breakout board with some pins and a few connectors, and then wire up anything else you need. The ESP8266’s big brother, the ESP32, hasn’t quite taken over from the ESP8266, but it has a lot more power and many more options. [Andreas] has a new video that shows seven new ESP32 boards that have integral displays. These boards can simplify a lot of applications where you need both WiFi and a user interface.

Of the boards examined, six have OLED displays and one has an E-paper display. [Andreas] has collected his findings on these seven, along with other boards, in an online spreadsheet.
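
To give a feel for how little glue code one of these boards needs, here is a hypothetical MicroPython sketch for one of the SSD1306 OLED variants. The I2C pin numbers, the separately installed ssd1306 driver module, and the WiFi credentials are all assumptions that vary from board to board.

```python
# Hypothetical MicroPython sketch for an ESP32 board with an I2C SSD1306 OLED.
# Pin numbers, the ssd1306 driver (often uploaded separately), and the WiFi
# credentials are all assumptions; check your board's documentation.
import network
from machine import Pin, SoftI2C
import ssd1306

i2c = SoftI2C(scl=Pin(22), sda=Pin(21))    # common wiring, but board-specific
oled = ssd1306.SSD1306_I2C(128, 64, i2c)

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect('my-ssid', 'my-password')
while not wlan.isconnected():
    pass

oled.fill(0)
oled.text('WiFi connected', 0, 0)
oled.text(wlan.ifconfig()[0], 0, 10)       # show the assigned IP address
oled.show()
```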

Continue reading “ESP32 Boards With Displays: An Overview”

Using TensorFlow To Recognize Your Own Objects

When the time comes to add an object recognizer to your hack, all you need to do is choose one of the many available models and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google’s Inception object recognizer. He does it on Windows 10, since there’s already plenty of documentation out there for Linux OSes.

You’re not limited to just Inception, though. Inception is one of a few models that are very accurate, but it can take a few seconds to process each image, so it’s more suited to a fast laptop or desktop machine. MobileNet is an example of a model that is less accurate but recognizes faster, making it a better fit for a Raspberry Pi or mobile phone.

You’ll need a few hundred images of your objects. These can either be scraped from an online source like Google Images, or you can take your own photos. If you use the latter approach, make sure to shoot from various angles and rotations, and with different lighting conditions. Fill your background with various other things and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards, so he first sprinkled them around his living room, added some clutter, and walked around taking pictures with his phone. Once they were uploaded, some easy-to-use software helped him label them all in around an hour. Note that he trained on 24 different objects, which is the number of distinct cards in a pinochle deck.

You’ll need to install a lot of software and do some configuration, but he walks you through that too. Ideally you’d use a computer with a GPU, but that’s optional; the difference is roughly three hours of training versus twenty-four. Be sure to both watch his video below and follow the steps on his GitHub page. The GitHub page is kept the most up to date, but his video does a more thorough job of walking you through the software, such as how to use the image-labeling program.
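
Once training finishes and you’ve exported a frozen graph, running your retrained detector boils down to a few lines. The snippet below is a condensed sketch assuming the TensorFlow 1.x Object Detection API workflow the guide follows; the file paths, test image, and score threshold are placeholders, and [Edje Electronics]’ own scripts on GitHub are the version to actually follow.

```python
# Condensed TensorFlow 1.x sketch: load a retrained frozen graph exported by
# the Object Detection API and run it on one image. Paths, the test image,
# and the score threshold are placeholders.
import cv2
import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image = cv2.imread('test_card.jpg')
    image_expanded = np.expand_dims(image, axis=0)
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image_expanded})
    for box, score, cls in zip(boxes[0], scores[0], classes[0]):
        if score > 0.6:
            print('Class %d, confidence %.2f, box %s' % (cls, score, box))
```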

Why is he training an object recognizer on playing cards? It’s one more step toward a blackjack-playing robot. He’d previously done an impressive job using OpenCV, though that algorithm could only handle non-overlapping cards. Google’s Inception, however, recognizes partially obscured cards. This is a very interesting project, one which we’ll be keeping an eye on. If you have any ideas for him, leave them in the comments below.

Continue reading “Using TensorFlow To Recognize Your Own Objects”

Retro Rebuild Recreates SGI Workstation Demos On The Go

When [Lawrence] showed us the Alice4 after Maker Faire Bay Area last weekend it wasn’t apparent how special the system was. The case is clean and white, adorned only with a big red button below a 7″ screen with a power switch around the back. When the switch is flicked the system boots to display a familiar animation and drops you at a menu. Poking around from here elicits a variety of self-contained graphics demos, some interactive. So this is a Raspberry Pi in a box playing videos, right? Not even close.

Retro computing often focuses on personal computer systems. When they were new, their 8-bit graphics or intricate 2D sprites were state of the art; now their appeal tends towards learning opportunities and the thrill of nostalgia. This may still be true of Alice4, the system [Brad, Lawrence, Mike, and Chris] put together to run Silicon Graphics (SGI) demos from the mid-1980s, but it’s not the whole story. [Lawrence] and [Brad] had both worked at SGI during its heyday and had fond memories of the graphics demos that shipped with those mammoth workstations. So they built Alice4 from the FPGA up to run those very same demos in real time.

Thanks to Moore’s law, today’s embedded systems put yesterday’s powerhouses within reach. [Lawrence] and [Brad] found the old demo code on a ratty FTP server and tailored Alice4’s software and hardware to run it natively. [Brad] wrote a libgl which implements the subset of the IrisGL API needed to support their selected set of demos. The libgl emits sets of triangles to the SDRAM, where [Lawrence’s] HDL running on the onboard FPGA fetches them, interpolates color and depth, and draws the result on-screen. Together they allow the $99 Altera Cyclone V development board at Alice4’s heart to run these once state-of-the-art demos in the palm of your hand.
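
To picture what that last stage does, here is a toy Python illustration of barycentric interpolation over a single triangle, which is the same math the FPGA applies per pixel. It is not the project’s HDL or libgl, just a sketch of the idea.

```python
# Toy software version of the interpolation stage: given one triangle with
# per-vertex depth and color, visit the pixels it covers and blend both
# values with barycentric weights. The real project does this in HDL on the
# FPGA with a framebuffer in SDRAM; this is only an illustration of the math.
def edge(ax, ay, bx, by, px, py):
    """Signed-area edge function used for barycentric weights."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(tri, width, height):
    (x0, y0, z0, c0), (x1, y1, z1, c1), (x2, y2, z2, c2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:                       # degenerate triangle, nothing to draw
        return
    for y in range(height):
        for x in range(width):
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:          # pixel is inside
                z = w0 * z0 + w1 * z1 + w2 * z2          # interpolated depth
                color = tuple(w0 * c0[i] + w1 * c1[i] + w2 * c2[i]
                              for i in range(3))         # interpolated color
                yield x, y, z, color

# One red/green/blue triangle on a 64x64 "screen":
for pixel in rasterize(((10, 5, 0.1, (255, 0, 0)),
                        (60, 20, 0.5, (0, 255, 0)),
                        (30, 55, 0.9, (0, 0, 255))), 64, 64):
    pass  # a real renderer would depth-test and plot each pixel here
```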

Alice4 is open source and extensively documented. Peruse the archeology of reverse engineering the graphics API or the discussion of FIFO design in the FPGA. If those don’t sate your appetite, check out a video of Alice4 in action after the break.

Continue reading “Retro Rebuild Recreates SGI Workstation Demos On The Go”

Grawler: Painless Cleaning For Glass Roofs

Part of [Gelstronic]’s house has a glass roof. While he enjoys the natural light and warmth, he doesn’t like getting up on a ladder to clean it every time a bird makes a deposit or the rainwater stains build up. He’s tried to make a cleaning robot in the past, but the 25% slope of the roof complicates things a bit. Now, with the addition of stepper motors and grippy tank treads, [Gelstronic] can tell this version of GRawler exactly how far to go, or to stay in one place to clean a spot that’s extra dirty.

GRawler is designed to clean on its way up the roof and squeegee on the way back down. It’s driven by an Arduino Pro Micro and built from lightweight aluminium and many parts printed in PLA. GRawler also uses commonly available things, which is always a bonus: the brush is the kind used to clean behind appliances, and the squeegee blade is from a truck-sized wiper. Over Bluetooth, using an app called Joystick BT Commander, [Gelstronic] can drive GRawler’s motors, spin the brush, and raise or lower the wiper blade. Squeak past the break to see it in action.

As far as we can tell, [Gelstronic] will still have to break out the ladder to place GRawler and move him between panels. Maybe the next version could be tethered, like Scrobby the solar panel-cleaning robot.

Continue reading “Grawler: Painless Cleaning For Glass Roofs”

Open Source Underwater Distributed Sensor Network

One way to design an underwater monitoring device is to take inspiration from nature and emulate an underwater creature. [Michael Barton-Sweeney] is making devices in the shape of, and functioning somewhat like, clams for his open source underwater distributed sensor network.

The clams contain the electronics, sensors, and the means of descending and ascending within their shells. A bunch of them are dropped overboard at the surface. Their shells open, allowing the gas within to escape, and they sink. As they descend they sample the water. When they reach the bottom, gas fills a bladder and they ascend back to the surface with their data, where they’re collected in a net.

Thus far he’s made a few clams with acrylic shells that he blew himself. He soldered the electronics together free-form and gave them a conformal coating of epoxy. He’s used a thermistor as a stand-in for other sensors so far, and is already working on a saturometer for measuring the total dissolved gas (TDG) in the water. Knowing the TDG is useful for understanding and mitigating supersaturation of the water, which can lead to fish kills.
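
As a toy example of the kind of conversion the clam firmware has to do with that thermistor, here is a sketch using the common beta-parameter equation to turn an ADC reading from a voltage divider into a temperature. The component values and ADC resolution are assumptions, not details from the project.

```python
# Toy conversion of a thermistor reading in a voltage divider to temperature
# using the common beta-parameter equation. The component values and ADC
# resolution are assumptions, not details taken from the project.
import math

SERIES_R = 10000.0    # fixed divider resistor, ohms
NOMINAL_R = 10000.0   # thermistor resistance at 25 degC, ohms
NOMINAL_T = 298.15    # 25 degC in kelvin
BETA = 3950.0         # thermistor beta coefficient
ADC_MAX = 1023        # 10-bit ADC

def adc_to_celsius(reading):
    # Thermistor on the low side of the divider: reading/ADC_MAX = Rt/(Rt+Rs)
    r_thermistor = SERIES_R * reading / (ADC_MAX - reading)
    # Beta equation: 1/T = 1/T0 + ln(R/R0)/beta
    inv_t = 1.0 / NOMINAL_T + math.log(r_thermistor / NOMINAL_R) / BETA
    return 1.0 / inv_t - 273.15

print(adc_to_celsius(512))    # about 25 degC when Rt equals the series resistor
```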

He’s also given a lot of thought to the materials used, since some clams may not make it back up and would have to degrade or be benign where they rest. For example, he’s been using a lithium battery for now, but would like to use copper on one shell and zinc on the other to make a saltwater battery, if he can get it to produce enough power. He’s also considering 3D printing the shells, since PLA is biodegradable. However, straight PLA could be fouled by underwater organisms and would need time-consuming cleaning, and a trip through a dishwasher is out because PLA softens at dishwasher temperatures, so he’s been looking into a PLA and calcium carbonate filament instead.

Check out his hackaday.io page, where he talks about these and other issues, and feel free to make any suggestions.