Electronic hackers and ham radio operators of a certain age have a soft spot for the Heathkit brand. Maybe that’s why we had a rush of nostalgia when we saw the Heathkit site had a new product. You may recall that Heathkit had gone the way of the dodo until a few years ago, when the brand started to resurface. Their latest kit is a precision RF meter, which is available for preorder.
Before there were websites and hacker spaces and all the modern push to “do it yourself,” Heathkit was teaching people electronics through kit building. Sure, they were known for ham radio and test equipment, but many people built stereos (hi-fi), TVs, radio control gear, computers, and even robots. All with manuals that are hard to imagine if you haven’t seen one. They were world-class.
Humanity is a planetwide force. We have the power to change our weather. We have the power to change the shape of the land. We have the power to selectively wipe a species from this earth if we choose. We’ve had this power for a while and we’re still coming to terms with it. Many of us even deny it.
With such power, what do we do? We have very few projects which are in line with our ability. Somewhere in the past few years, I feel like most of us have lost our audacity. We’ve culturally come to appreciate the safe bet too much. We pull the dreamers and doers down. We want to solve the small problems first, and see if we have time for the big problems later. We don’t dream big enough, and there is zero reason for this hesitation. We could leverage our planetwide power for planetwide improvements. Nothing is truly stopping us. No law, no government, nothing.
To put it simply, as far as technology goes, everything is still low-hanging fruit. We’ve barely done anything. Even some of our greatest accomplishments can happen randomly in nature. We’ve not left our planet in any numbers or for any length of time. Our cities are disorganized messes. In every single field today, the unexplored territory is orders of magnitude larger than the explored. Yet despite this vast territory, there are very few explorers. People want to optimize the minutiae of life. A slightly faster processor for a slightly smaller phone. It’s okay.
Yet that same small optimization applied to a larger effort could have vast positive impact. Those same microprocessors could catalog our planet or drive probes into space. The very same efforts we spend on micro upgrades could be leveraged if we just looked at the bigger picture and then got out of our own way. All that is lacking is ambition. Money, time, skill, industry, and people are all there, waiting. We have the need for, and the resources to support, ten thousand Elon Musks, not just the one.
Big projects make us bigger than our cellphones and Facebook. When you see a rocket launch into the sky, suddenly, “the world” becomes, simply, “a world.” Order of magnitude improvements reduce the order of our perception of previously complex problems. They should be our highest goal. Whatever field you’re in, you should be trying to be ten times better than the top competitor.
However, there are some societal changes that have to occur before we can.
Nothing makes us feel more like we’re on Star Trek than saying “Computer, turn on desk light,” and watching the light turn on. Of course, normal people would have left the wake word as “Alexa,” but we like “Computer” even if it does make it hard to watch Star Trek episodes without the home automation going crazy.
There’s a lot of hype right now about how voice recognition and artificial intelligence (AI) are transforming everything. We’ve even seen a few high-profile types warning that AI is going to come alive and put us in the matrix or something. That gets a lot of press, but we’re not sure we are even close to that, yet. Alexa and Google’s similar offerings are cool, there’s no doubt about it. The speech recognition is pretty good, although far from perfect. But the AI is really far off still.
Today’s devices utilize two rather rudimentary parts to provide an interaction with users. The first is how the devices pattern match language; it isn’t all that sophisticated. The other is the trivial nature of many of the apps, or — as Alexa calls them — skills. There are some good ones to be sure, but for every one useful application of the technology, there’s a dozen that are just text-to-speech of an RSS feed. Looking through the skills available we were amused at how many different offerings convert resistor color codes back and forth to values.
There was a time when building electronics meant learning the resistor color code. With today’s emphasis on surface mount components, though, it is less useful than it used to be. Still, like flossing, you really ought to do it. However, if you have an Amazon Alexa, it can learn the color code for you thanks to [Dennis Mantz].
Don’t have an Alexa? You can still try it in your browser, as we will show you shortly. There are at least eight similar skills out there like this one from [Steve Jernigan] or [Andrew Bergstrom’s] Resistor Reader.
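If you’d rather not ask a smart speaker at all, the underlying lookup is simple enough to sketch yourself. Here’s a minimal example of decoding a standard 4-band resistor from its first three bands (the function and names are ours, not from any of the skills mentioned above):

```python
# Order matters: a color's position in this list is its digit value.
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]

def decode_4band(band1, band2, multiplier):
    """Return resistance in ohms from the first three bands of a 4-band resistor."""
    value = COLORS.index(band1) * 10 + COLORS.index(band2)
    return value * 10 ** COLORS.index(multiplier)

print(decode_4band("yellow", "violet", "red"))  # 4700 (a 4.7 kΩ resistor)
```

The fourth band (tolerance) and the gold/silver fractional multipliers are left out for brevity, but the same table-lookup idea extends to them.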
Hardware hacking is a way of life here at Hackaday. We celebrate projects every day with hot glue, duct tape, upcycled parts, and everything in between. It’s open season to hack hardware. Out in the world, for some reason, software doesn’t receive the same laissez-faire treatment. “Too many lines in that file.” “Bad habits.” “Bad variable names.” The comments often rain down. Even the silliest, most unsafe of projects isn’t spared. Building a robot to shine lasers into a person’s eyes? Better make sure you have less than 500 lines of code per file!
Why is this? What makes readers and commenters hold software to a higher standard than the hardware it happens to be running on? The reasons are many and varied, and it’s a trend I’d like to see stopped.
Software engineering is a relatively young and fast-evolving science. Every few months there is a new hot language on the block, with forums, user groups, and articles galore. Even the way software engineers work is constantly changing: waterfall to agile, V-Model, Spiral model. Software design methodologies change too — from pseudocode to UML to test-driven development, the list goes on and on.
Terms like “clean code” get thrown around. It’s not good enough to have software that works. Software must be well commented, maintainable, elegant, and of course, follow the best coding practices. Most of these are good ideas… in the work environment. Work is what a lot of this boils down to. Software engineers have to stay up to date with new trends to be employable.
There is a certain amount of “born again” mentality among professional software developers. Coders generally hate having change forced upon them. But when they find a tool or system they like, they embrace it both professionally and in their personal projects. Then they’re out spreading the word of this new method or tool; on Reddit, in forums, to anyone who will listen. The classic example of this is, of course, the perennial vi vs. emacs editor debate.
These are the Golden Years of electronics hacking. The home DIY hacker can get their hands on virtually any part they could desire, and for not much money. Two economic factors underlie this Garden of Electronic Eden that we’re living in. Economies of scale make the parts cheap: when a factory turns out the same MEMS accelerometer chip for hundreds of millions of cell phones, their setup and other fixed costs are spread across all of these chips, and a $40 million factory ends up only costing $0.50 per unit sold.
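The amortization arithmetic behind that $0.50 figure is worth making explicit. Using the article’s numbers, a $40 million fixed cost works out to fifty cents per chip once roughly 80 million units are sold (the volume figure is implied by the article’s math, not stated in it):

```python
def per_unit_fixed_cost(fixed_cost, units):
    """Fixed cost amortized evenly across every unit produced."""
    return fixed_cost / units

# $40M factory spread across 80 million accelerometer chips.
print(per_unit_fixed_cost(40_000_000, 80_000_000))  # 0.5 dollars per chip
```

The same division explains why low-volume niche parts used to be expensive: shrink the denominator and the fixed cost dominates the price.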
But the unsung hero of the present DIY paradise is how so many different parts are available, and from so many different suppliers, many of them on the other side of the globe. “The Internet” you say, as if that explains it. Well, that’s not wrong, but it’s deeper than that. The reason that we have so much to choose from is that the marginal cost of variety has fallen, and with that many niche products and firms have become profitable where before they weren’t.
So let’s take a few minutes to sing the praises of the most important, and sometimes overlooked, facet of the DIY economy over the last twenty years: the falling marginal cost of variety.
Oh, for cryin’ out loud. That’s the last straw. We’ve seen one dangerous YouTube video too many. Are we honestly cursed with a false feedback system that unequitably rewards dangerous behavior in online videos? Obviously the answer is ‘yes’. Now the real question becomes, can we do anything about it?
Professional Driver on a Closed Course
Marketing is all about putting something in front of a consumer and getting their brain to go “awesome!”. The fast, loud, shiny, burny, and sharp things are all on the table for that task. It’s the primal part of your brain that gives you a jolt, as if your amygdala forgot how to run from sabertooths (saberteeth?) and learned how to like and subscribe.
Back in the day, people were hurt and even killed when replicating stunts they saw done on television. To protect themselves from litigation, companies started adding disclaimers — Don’t Try This at Home or my favorite: Professional Driver on a Closed Course.
But the thing is, commercials are big business. If someone gets hurt, there’s money to be had by assigning blame in a court of law. When the ability to produce and distribute video content was democratized by the coming of the Internet we lost those warnings and the common sense that went with them.
Going way back to this remote-control-a-real-car hack in 2009, I haven’t been able to shake the lack of consideration for danger in a project like this. I included it in the title, which ends with “(dangerously)”. While I wasn’t taken to task in the comments for that title, I have been chided for advocating for things as controversial as helmets when strapping your body to a moving object. Do a Ctrl-F on “helmet” in this article to see what I mean.
The people pulling off these hacks were doing it because it felt awesome and they wanted to document how that felt. They weren’t stars, they were hackers and the world mostly ignored them except in places like Hackaday. We might debate the lack of safety measures but most assumed anyone with skills to do this would take a beat to consider the risks. This was probably a false assumption.
It’s All About the Subs
Things have gotten worse since then. I can’t blame all of this on YouTube, but I’m going to try. One day, YouTube changed everything. They put together a perfect mix of easy uploading, great discoverability, and (most importantly) advertising revenue sharing. For some people, this became a business and not just a way to connect with the rest of the hacker community.
This is the rise of the subscriber base. It’s a vicious cycle — you need more people to like and subscribe so that their influence will push your channel to still more people who like and subscribe. The problem is, the fastest way to do this is through that tricky amygdala again. For some, this means being funny, but for others it means speed, fireballs, and loud bangs, with no regard for life, limb, or eyeball.
But even the more mainstream content appears to be getting more and more dangerous. Our beloved [Colin Furze] is guilty of dangerous behavior. Not only did he burn himself testing a jet engine without any safety gear, but he also turned the aftermath into another ad-supported video.
Which brings me to the straw that broke the camel’s back. Here’s a hack that’s based on the idea of hurting people. It’s what is (luckily) a crappy robot designed to recognize faces and shine lasers into any eyes it detects. It’s literally conceived to shoot your eyes out. It’s using a red laser that likely won’t cause eye damage unless you intentionally stare into it without blinking, but that’s not discussed in the video, and someone who doesn’t know better replicating this with a different laser could easily cause irreparable damage to their sight.
Rocket Scientists Use Common Sense and So Should You
I was going to use the heading “This Isn’t Rocket Science”, but you don’t see rocket scientists testing new engine designs by lighting a fuse as they run away giggling in short sleeves and flip-flops. Those brilliantly intelligent people are tucked safely in a bunker at a safe distance with their hands hovering over the emergency kill switch, firefighting equipment within arm’s reach. Rocket scientists know a lot about safety, and so should you.
This is simple. We don’t have to invent anything to add safety to our hacks. Use common sense. Dress appropriately for your demo — as the situation dictates, use reasonable fire-resistant clothing, a helmet, etc. Wear protective glasses, laser-rated goggles, and earplugs whenever called for. Take fumes and particulates seriously and wear respiratory gear. Keep a fire extinguisher around. And if you’re making a video or posting images about it — which you should definitely do — snap a picture or give us a quick video cut to the safety precautions you’ve chosen.
I still want to see awesome projects on YouTube. But I also want to see the trend towards danger for clicks stopped. Let’s do dangerous stuff safely. And let’s be conspicuous about those safety measures. That combination is truly awesome.
Now get off my lawn, and wear your seat belt while doing so.
We all know that what we mean by hacker around here and what the world at large thinks of as a hacker are often two different things. But as our systems get more and more connected to each other and the public Internet, you can’t afford to ignore the other hackers — the black-hats and the criminals. Even if you think your data isn’t valuable, sometimes your computing resources are, as evidenced by the recent attack launched from unprotected cameras connected to the Internet.
As [Elliot Williams] reported earlier, Trustwave (a cybersecurity company) recently announced they had found a backdoor in some Chinese voice over IP gateways. Apparently, the manufacturer left itself an undocumented root password on the device and — to make things worse — used a proprietary challenge/response system for passwords that is insufficiently secure. Our point isn’t really about this particular device, but if you are interested in the details of the algorithm, there is a tool on GitHub, created by [JacobMisirian] using the Trustwave data. Our interest is in the practice of leaving intentional backdoors in products. A backdoor like this — once discovered — could be used by anyone else, not just the company that put it there.
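For contrast, here is what a non-proprietary challenge/response looks like. This is emphatically not the vendor’s scheme (those details live in the GitHub tool mentioned above); it’s a generic sketch of the standard approach — an HMAC over a random nonce with a shared secret — which fails safely even when the algorithm itself is public:

```python
import hmac
import hashlib
import os

SECRET = b"shared-device-secret"  # hypothetical pre-shared key, for illustration

def make_challenge():
    """Authenticator issues a fresh random nonce for each login attempt."""
    return os.urandom(16)

def respond(challenge, secret=SECRET):
    """Client proves knowledge of the secret without transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SECRET):
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))  # True
```

The security rests entirely on the secret and the freshness of the nonce, not on keeping the algorithm hidden — which is exactly where a proprietary scheme, once reverse-engineered, falls apart.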