This week Hackaday Editor-in-Chief Elliot Williams and Managing Editor Tom Nardi take a close look at two pairs of projects that demonstrate the wildly different approaches that hackers can take while still arriving at the same conclusion. We’ll also examine the brilliant mechanism that the James Webb Space Telescope uses to adjust its mirrors, and marvel over a particularly well-developed bot that can do your handwriting for you. The finer points of living off home-grown algae will be discussed, and by the end of the show, you’ll learn the one weird trick to stopping chip fabs in their tracks.
Take a look at the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
Direct Download (~70 MB)
Episode 155 Show Notes:
What’s that Sound?
- This week’s sound was the CRY2001 voice scrambler. Congratulations to [Stella’s Dad]!
Interesting Hacks of the Week:
- Working Model Reveals Amazing Engineering Of Webb’s Mirror Actuators
- 3D Printed Maglev Switches Are So Hot Right Now
- PlottyBot: A DrawBot That Plots A Lot
- Invisible 3D Printed Codes Make Objects Interactive
- Cannonball Mold Makes A Dandy Integrating Sphere For Laser Measurements
- Is Your Flashlight A Lumen Liar? Build A DIY Integrating Sphere
- Move Aside Solar, We’re Installing An Algae Panel
Tom’s Olde Algae Reactor:
Quick Hacks:
- Elliot’s Picks:
- Tom’s Picks:
In the podcast it was speculated that the audio from the CRY2001's scrambled radio messages might be listened in on with an SDR, but a look at the circuit diagram shows digital processing inserted between the processed analogue input and the processed analogue output, so I think you would need to know the shared code the parties had agreed to use, as well as the DSP algorithm, to get any sense out of it. I think the reason you can tell the sounds are voices is the cadence of the input, and the fact that the output must still remain within the voice band.
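To make the point concrete, here's a minimal sketch of one classic analog-ish scrambling scheme, rolling-code band inversion, where the key schedule (which inversion carrier is used for each time hop) is the shared secret. This is not a model of the CRY2001's actual DSP, and the function names, sample rate, and key format are all made up for illustration; it only shows why raw SDR audio stays garbled if you don't know the agreed-upon code.

```python
# Sketch of a rolling-code frequency-inversion scrambler. Purely illustrative:
# NOT the CRY2001's algorithm, just a demonstration of why the shared key
# schedule is needed to recover intelligible audio.
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000  # assumed sample rate, Hz


def lowpass(x, cutoff_hz, fs=FS, order=6):
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return lfilter(b, a, x)


def invert_band(x, carrier_hz, fs=FS):
    """Spectrally invert a voice-band chunk by mixing with a carrier just
    above the voice band and keeping the (inverted) difference band."""
    n = np.arange(len(x))
    mixed = x * np.cos(2 * np.pi * carrier_hz * n / fs)
    return lowpass(mixed, carrier_hz, fs)


def rolling_scramble(x, key_carriers_hz, hop_samples, fs=FS):
    """Apply a different inversion carrier to each hop, per a shared key
    schedule. Descrambling is the same operation with the same schedule."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(0, len(x), hop_samples):
        carrier = key_carriers_hz[(i // hop_samples) % len(key_carriers_hz)]
        out[i:i + hop_samples] = invert_band(x[i:i + hop_samples], carrier, fs)
    return out
```

Note that even in this toy version the scrambled output stays inside the voice band and keeps the original speech cadence, which matches the "you can tell it's a voice, but not what it's saying" character of the sound.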
But human speech has _soooo little_ entropy. Vowels are just peaks in three formant frequencies, for instance, in a consistent ratio. (And artificial vowels made with only two formants sound fine.)
You “need” to know the shared code to do the demodulation right, but my guess was that you could back it out by trying whichever code gives you the most plausible ratios of vowel formant frequencies. Probably depends on how often they’re switching, etc.
But yeah — I was spitballing. Everything’s easy when you don’t have to do it. :)
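For anyone who wants to play with that "score candidate settings by formant plausibility" idea, here is a rough sketch. The function names (lpc_formants, formant_plausibility) and the F1/F2 ranges are assumptions, and the scorer is a toy, not a working descrambler; it just shows the standard LPC root-finding trick for pulling formant peaks out of a speech frame, which you could run on each candidate descramble and keep the one that looks most speech-like.

```python
# Toy "how speech-like is this audio?" scorer based on LPC formant estimates.
# Hypothetical helper names; rough typical vowel ranges for F1 and F2.
import numpy as np


def lpc_formants(frame, fs, order=10):
    """Estimate formant frequencies (Hz) of one voiced, non-silent frame
    using the autocorrelation LPC method and the roots of the LPC polynomial."""
    frame = np.asarray(frame, dtype=float)
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
    frame = frame * np.hamming(len(frame))
    # Autocorrelation method: solve the normal equations for the predictor.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.concatenate(([1.0], -np.linalg.solve(R, r[1:order + 1])))
    roots = [z for z in np.roots(a) if z.imag > 0]  # one root per pole pair
    freqs = sorted(np.angle(roots) * fs / (2 * np.pi))
    return [f for f in freqs if 90 < f < fs / 2 - 100]  # drop DC/Nyquist junk


def formant_plausibility(frames, fs):
    """Fraction of frames whose first two formants land in typical vowel
    ranges (F1 ~250-900 Hz, F2 ~800-2500 Hz). Higher = more speech-like."""
    hits = 0
    for frame in frames:
        f = lpc_formants(frame, fs)
        if len(f) >= 2 and 250 < f[0] < 900 and 800 < f[1] < 2500:
            hits += 1
    return hits / max(len(frames), 1)
```

The brute-force idea would then be: for each candidate code, descramble, chop the result into ~25 ms frames, and keep whichever candidate maximizes formant_plausibility. How well that works in practice would depend on how fast the code switches, exactly as speculated above.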