How To Get Into Lost Wax Casting (with A Dash Of 3D Printing)

I’ve always thought that there are three things you can do with metal: cut it, bend it, and join it. Sure, I knew you could melt it, but that was always something that happened in big foundries: you design something and ship it off to be cast in some large angular building churning out smoke. After all, melting most metals is hard. Silver melts at 1,763 °F. Copper at 1,983 °F. Not only do you need to create an environment that can hit those temperatures, but you need to build it from materials that can withstand them.

Turns out, melting metal is not so bad. Surprisingly, I’ve found that the hardest part of the process, for an engineer like myself at least, is creating the pattern to be replicated in metal. That part is pure art, but thankfully I learned that we can use technology to cheat a bit.

When I decided to take up casting earlier this year, I knew pretty much nothing about it. Before we dive into the details here, let’s go through a quick rundown to save you the first day I spent researching the process. At its core, here are the steps involved in lost wax, or investment, casting:

  1. Make a pattern: a wax or plastic replica of the part you’d like to create in metal
  2. Make a mold: pour plaster around the pattern, then burn out the wax to leave a hollow cavity
  3. Pour the metal: melt some metal and pour it into the cavity

I had been kicking around the idea of trying this since last fall, but didn’t really know where to begin. There seemed to be a lot of equipment involved, and I’m no sculptor, so I knew that making patterns would be a challenge. I had heard that you could 3D-print wax patterns instead of carving them by hand, but the best machine for the job is an SLA printer, which is prohibitively expensive, or so I thought.

Continue reading “How To Get Into Lost Wax Casting (with A Dash Of 3D Printing)”

Vacuum Tube Logic Hack Chat

Join us on Wednesday, December 9th at noon Pacific for the Vacuum Tube Logic Hack Chat with David Lovett!

For most of us, circuits based on vacuum tubes are remnants of a technological history that is rapidly fading from our collective memory. To be sure, there are still applications for thermionic emission, especially in power electronics and specialized switching applications. But by and large, progress has left vacuum tubes in a cloud of silicon dust, leaving mainly audiophiles and antique radio enthusiasts to figure out the hows and whys of plates and grids and filaments.

But vacuum tubes aren’t just for the analog world. Some folks like making tubes do tricks they haven’t had to do in a long, long time, at least since the birth of the computer age. Vacuum tube digital electronics seems like a contradiction in terms, but David Lovett, aka Usagi Electric on YouTube, has fallen for it in a big way. His channel is dedicated to working through the analog building blocks of digital logic circuits using tubes almost exclusively. He has come up with unique circuits that don’t require the high bias voltages typically needed, making the circuits easy to work with using equipment likely to be found in any solid-state experimenter’s lab.

David will drop by the Hack Chat to share his enthusiasm for vacuum tube logic and his tips for exploring the sometimes strange world of flying electrons. Join us as we discuss how to set up your own vacuum tube experiments, learn what thermionic emission can teach us about solid-state electronics, and maybe even get a glimpse of what lies ahead in his lab.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, December 9 at 12:00 PM Pacific time. If time zones have you tied up, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Continue reading “Vacuum Tube Logic Hack Chat”


Hackaday Links: December 6, 2020

By now you’ve no doubt heard of the sudden but not unexpected demise of the iconic Arecibo radio telescope in Puerto Rico. We have been covering the agonizing end of Arecibo almost from the moment the first cable broke in August, through a eulogy for the observatory, and most recently its final catastrophic collapse this week. That last article contained amazing video of the final collapse, including up-close and personal drone shots of the cables breaking. For a more in-depth analysis of the collapse, it’s hard to beat Scott Manley’s frame-by-frame analysis, which really goes into detail about what happened. Seeing the paint spalling off the cables as they stretch and distort under loads far greater than they were designed for is both terrifying and fascinating.

Exciting news from Australia as the sample return capsule from JAXA’s Hayabusa2 asteroid explorer returned safely to Earth Saturday. We covered Hayabusa2 in our roundup of extraterrestrial excavations a while back, describing how it used both a tantalum bullet and a shaped-charge penetrator to blast regolith from the surface of asteroid 162173 Ryugu. Samples of the debris were hoovered up and hermetically sealed for the long ride back to Earth, which culminated in the fiery re-entry and safe landing in the midst of the Australian outback. Planetary scientists are no doubt eager to get a look inside the capsule and analyze the precious milligrams of space dust. In the meantime, Hayabusa2, with 66 kilograms of propellant remaining, is off on an extended mission to visit more asteroids for the next eleven years or so.

The 2020 Remoticon has been wrapped up for most of a month now, but one thing we noticed was how much everyone seemed to like the Friday evening Bring-a-Hack event that was hosted on Remo. To keep that meetup momentum going and to help everyone slide into the holiday season with a little more cheer, we’re putting together a “Holiday with Hackaday & Tindie” meetup on Tuesday, December 15 at noon Pacific time. The details haven’t been shared yet, but our guess is that it will be a “bring-a-hack friendly” event. We’ll share more details when we get them this week, but for now, hop over to the Remo event page and reserve your spot.

On the Buzzword Bingo scorecard, “Artificial Intelligence” is a square that can almost be checked off by default these days, as companies rush to stretch the definition of the term to fit almost every product in the never-ending search for market share. But even those products that actually have machine learning built into them are only as good as the data sets used to train them. That can be a problem for voice-recognition systems; while there are massive databases of utterances in just about every language, the likes of Amazon and Google aren’t too willing to share what they’ve leveraged from their smart speaker-using customer base. What’s the little person to do? Perhaps the People’s Speech database will help. Part of the MLCommons project, it has 86,000 hours of speech data, mostly derived from audiobooks, a clever source indeed since the speech and the text can be easily aligned. The database also pulls audio and the corresponding text from Wikipedia and other random sources around the web. It’s a small dataset compared to what the tech giants are sitting on, to be sure, but it’s a start.

And finally, divers in the Baltic Sea have dredged up a bit of treasure: a Nazi Enigma machine. Divers in Gelting Bay near the border of Germany and Denmark found what appeared to be an old typewriter caught in one of the abandoned fishing nets they were searching for. When they realized what it was (even crusted in decades’ worth of corrosion and muck, some keys still look like they’re brand new), they called in archaeologists to take over the recovery. Gelting Bay was the scene of a mass scuttling of U-boats in the final days of World War II, so this Enigma may have been pitched overboard by a Nazi commander before pulling the plug on his boat. It’ll take years to restore, but it’ll be quite a museum piece when it’s done.

Sufficiently Advanced Technology And Justice

Imagine that you’re serving on a jury, and you’re given an image taken from a surveillance camera. It looks pretty much like the suspect, but the image has been “enhanced” by an AI from the original. Do you convict? How does this weigh out on the scales of reasonable doubt? Should you demand to see the original?

AI-enhanced, upscaled, or otherwise modified images are tremendously realistic. But what they’re showing you isn’t reality. When we wrote about this last week, [Denis Shiryaev], one of the authors of one of the methods we highlighted, weighed in via the comments to point out that these modifications aren’t “restorations” of the original. While they might add incredibly fine detail, for instance, they don’t recreate or restore reality. The neural net creates its own reality, out of millions and millions of faces that it’s learned.

And for the purposes of identification, that’s exactly the problem: the facial features of millions of other people have been used to increase the resolution. Can you identify the person in the pixelized image? Can you identify that same person in the resulting up-sampling? If the question put before the jury was “is the defendant a former president of the USA?” you’d answer the question differently depending on which image you were presented. And you’d have a misleading level of confidence in your ability to judge the AI-retouched photo. Clearly, informed skepticism on the part of the jury is required.
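
If you want to convince yourself just how little of the original scene actually survives the trip down to a surveillance-camera thumbnail, here is a minimal Python sketch you can run at home. It is purely illustrative and has nothing to do with [Shiryaev]’s pipeline; “portrait.jpg” is a stand-in for any face photo you have lying around.

```python
# Minimal sketch: measure how much detail is destroyed by pixelization.
# Assumes Pillow and NumPy are installed; "portrait.jpg" is any local face photo.
import numpy as np
from PIL import Image

original = Image.open("portrait.jpg").convert("L").resize((256, 256))
tiny = original.resize((16, 16), Image.BICUBIC)       # the "surveillance still": 256 samples
naive_up = tiny.resize((256, 256), Image.BICUBIC)     # upscaled with no invented detail

a = np.asarray(original, dtype=float)
b = np.asarray(naive_up, dtype=float)
rmse = np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE between original and 16x16-then-upscaled: {rmse:.1f} (pixel values run 0-255)")
```

Every one of the 65,536 pixels in the reconstruction has to come from just 256 samples; whatever a neural upscaler paints into that gap is drawn from its training set, not from the scene the camera saw.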

Unfortunately, we’ve all seen countless examples of “zoom, enhance” in movies and TV shows being successfully used to nab the perps and nail their convictions. We haven’t seen nearly as much detailed analysis of how generative adversarial networks create faces out of a scant handful of pixels. This, combined with the almost magical resolution of the end product, would certainly sway a jury of normal folks. On the other hand, the popularity of intentionally misleading “deep fakes” might help educate the public to the dangers of believing what they see when AI is involved.

This is just one example, but it shows why keeping the public interested in, and educated on, the deep workings and limitations of the technology running our world is more important than ever. Some of the material is truly hard, though. How do we separate the science from the magic?

Remoticon Video: How To Use Machine Learning With Microcontrollers

Going from a microcontroller blinking an LED to one that blinks the LED in response to voice commands, based on a neural network you trained on your own data set, is a “now draw the rest of the owl” problem. Lucky for us, Shawn Hymel walks us through the entire process during his Tiny ML workshop from the 2020 Hackaday Remoticon. The video has just now been published and can be viewed below.

This is truly an end-to-end Hello World for getting machine learning up and running on a microcontroller. Shawn covers the process of collecting and preparing the audio samples, training the model on that data, and getting it all onto the microcontroller. At the end of two hours, he’s able to show the STM32 recognizing and responding to two different spoken words. Along the way he pauses to discuss the context of what’s happening in every step, which will help you go back and expand on those areas later to suit your own project needs.
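
If you’d like a feel for the training side before sitting down with the video, here is a minimal Python sketch of the same general idea: turn short audio clips into MFCC features, train a tiny Keras classifier on two keywords, and export a TensorFlow Lite model for the microcontroller. This is not Shawn’s workshop code; the folder layout (data/<keyword>/*.wav), the keyword names, and the layer sizes are all assumptions for illustration.

```python
# Minimal keyword-spotting training sketch (illustrative, not the workshop's exact code).
# Assumes librosa, tensorflow, and numpy are installed, with ~1-second WAV clips in data/<keyword>/.
import glob
import numpy as np
import librosa
import tensorflow as tf

KEYWORDS = ["stop", "go"]      # assumed: the two spoken words to recognize
SAMPLE_RATE = 16000

def clip_to_features(path):
    """Load a one-second clip and flatten its MFCCs into a single feature vector."""
    audio, _ = librosa.load(path, sr=SAMPLE_RATE, duration=1.0)
    audio = librosa.util.fix_length(audio, size=SAMPLE_RATE)        # pad/trim to exactly 1 s
    return librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=16).flatten()

features, labels = [], []
for idx, word in enumerate(KEYWORDS):
    for path in glob.glob(f"data/{word}/*.wav"):                    # assumed folder layout
        features.append(clip_to_features(path))
        labels.append(idx)
X, y = np.array(features), np.array(labels)

# A deliberately tiny model: small enough to fit in microcontroller flash and RAM.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(KEYWORDS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=30, validation_split=0.2)

# Convert to a .tflite flatbuffer suitable for an on-device interpreter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("keywords.tflite", "wb").write(converter.convert())
```

From there, the exported model gets baked into the firmware and run by an on-device interpreter, which is the deployment step the video walks through on the STM32.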

Continue reading “Remoticon Video: How To Use Machine Learning With Microcontrollers”

Hackaday Podcast 096: Diaphragm Engine, DIY Dish Washer, Forgotten Soviet Computers, And A Starlink Teardown

Hackaday editors Elliot Williams and Mike Szczys discuss the latest and greatest in geeky goodness. This week we saw a Soviet time capsule come to light with the discovery of a computer lab in a building abandoned since the 1990s. A two-cycle compressed air engine shatters our expectations of what is involved in RC aircraft design. There’s a new toolkit for wireless hacking on the scene in the form of a revitalized HackRF PortaPack firmware fork. And what goes into dishwasher design? Find out in this exciting episode.

Take a look at the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!

Direct download (55 MB)

Places to follow Hackaday podcasts:

Continue reading “Hackaday Podcast 096: Diaphragm Engine, DIY Dish Washer, Forgotten Soviet Computers, And A Starlink Teardown”

This Week In Security: IOS Wifi Incantations, Ghosts, And Bad Regex

I hope everyone had a wonderful Thanksgiving last week. My household celebrated by welcoming a 4th member to the family: my daughter was born on Wednesday morning, November 25th. That explains what I did last week instead of writing the normal Hackaday column. Never fear, we shall catch up today and cover the news that’s fit to be noticed.

iOS Zero-click Wifi Attack

[Ian Beer] of Google’s Project Zero brings us the fruit of his lockdown-induced labors: a spectacular iOS attack. The target of this attack is the kernel code that handles AWDL, an Apple WiFi protocol for ad hoc mesh networks between devices. The most notable feature that makes use of AWDL is AirDrop, Apple’s device-to-device file sharing system. Because AWDL is a proprietary protocol, the WiFi hardware can’t do any accelerated processing of packets. A few years back, there was an attack against Broadcom firmware that required a second vulnerability to jump from the WiFi chip to the device CPU. Here, because the protocol is all implemented in Apple’s code, no such pivot is necessary.

And as you’ve likely deduced, there was a vulnerability found. AWDL uses Type-Length-Value (TLV) messages for sending management data. For a security researcher, TLVs are particularly interesting because each data type represents a different code path to attack. One of those data types is a list of MAC addresses, with a maximum of 10 entries. The code that handles it allocates a 60-byte buffer based on that maximum. The problem is that there isn’t a code path to drop incoming TLVs of that type when they exceed 60 bytes, so the remainder is written right past the end of the allocated buffer.
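
To make the shape of the bug concrete, here is a toy model of that parsing logic. It’s Python rather than the kernel’s C, and the field layout and names are made up for illustration, but the missing check is the same: the handler trusts the attacker-supplied length and copies the value into a region sized for at most 60 bytes.

```python
# Toy model of the AWDL TLV bug (illustrative only; not Apple's code, and Python rather than C).
# The "memory" bytearray stands in for the heap: a 60-byte slot plus whatever lives next to it.
MAX_MACS = 10
MAC_LEN = 6
BUF_LEN = MAX_MACS * MAC_LEN                      # 60 bytes reserved for the MAC list

def handle_mac_list_tlv(tlv: bytes, memory: bytearray, buf_offset: int) -> None:
    """Copy a 'list of MAC addresses' TLV value into its fixed slot in memory."""
    length = tlv[1]                               # attacker-controlled length byte
    value = tlv[2:2 + length]
    # BUG: nothing here rejects length > BUF_LEN before the copy,
    # so bytes 61 and up land in the neighboring allocation.
    memory[buf_offset:buf_offset + length] = value

memory = bytearray(BUF_LEN) + bytearray(b"NEIGHBORING ALLOCATION")
evil_tlv = bytes([0x02, 80]) + b"A" * 80          # type 0x02, claimed length 80 > 60
handle_mac_list_tlv(evil_tlv, memory, 0)
print(memory[BUF_LEN:])                           # the neighbor's data is now mostly 'A's
```

In the real driver the copy lands in a kernel heap buffer, so the overflowing bytes corrupt whatever object the allocator happened to place next door, and that is the kind of foothold the full exploit builds from.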

There is more fun to be had getting to a full exploit, but the details are a bit too much to fully dive into here. It is interesting to note that [Ian] ran into a particular problem: his poking at the target code was triggering unexpected kernel panics. He discovered two separate vulnerabilities, both distinct from the vuln he was trying to exploit.

Finally, this exploit requires the target device to have AWDL enabled, and many won’t. But you can use Bluetooth Low Energy advertisements to trick the target device into believing an AirDrop is coming in from a trusted contact. Once the device enables AWDL to verify the request, the attack can proceed. [Ian] reported his findings to Apple way back in 2019, and this vulnerability was patched in March of 2020.

Via Ars Technica.
Continue reading “This Week In Security: IOS Wifi Incantations, Ghosts, And Bad Regex”