Impedance Mismatch

There are a few classic physics problems that it can really help to have a mental map of. One is, of course, wave propagation. From big-wave surfing, through loudspeaker positioning, to quantum mechanics, having an intuition for the basic dynamics of constructive and destructive interference is key. Total energy of a system, and how it splits and trades between kinetic and potential, is another.

We were talking on the Podcast last night about using a bike generator to recharge batteries, and we stumbled on a classic impedance mismatch situation. A pedaling person can put out 100 W, and a cell phone battery wants around 5 W to charge. You could pedal extremely lightly for nearly three hours, but I’d bet you’d rather hammer the bike for 10 minutes and get on with your life. The phone wants to be charged lightly — it’s high impedance — and you want to put out all your power at once — you’re a low impedance source.
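To put numbers on it, here’s a quick back-of-the-envelope sketch in Python. The roughly 15 Wh battery capacity is my assumption of a typical phone pack, not a figure from the podcast:

```python
# Back-of-the-envelope: how long does one phone charge take at a given power?
# The ~15 Wh battery capacity is an assumed typical figure, not from the podcast.
BATTERY_WH = 15.0

for watts in (5.0, 100.0):              # trickle charging vs. flat-out pedaling
    minutes = BATTERY_WH / watts * 60
    print(f"{watts:5.0f} W -> {minutes:5.1f} minutes of pedaling")

# Output:
#     5 W -> 180.0 minutes  (nearly three hours of very light pedaling)
#   100 W ->   9.0 minutes  (hammer it and get on with your life)
```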

The same phenomenon explains why you have to downshift your internal combustion automobile as you slow down. In high gear, it presents too high an impedance, and the motor can only turn so slowly before stalling. This is also why all vibrating string acoustic instruments have bridges that press down on big flat flexible surfaces, and why horns are horn-shaped. Air is easy to vibrate, but to be audible you want to move a lot of it, so you spread out the power. Lifting a heavy rock with human muscle power is another classic impedance mismatch.

If these are fundamentally all the same problem, then they should all have similar solutions. The gear on the bike or the car, the bridge on a cello, the flared horn on the trumpet, and the lever under the boulder all serve to convert a large force over a short distance, time, or area into a smaller force over a greater distance, time, or area.
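The bookkeeping behind all of these converters is the same: an ideal, lossless one keeps the total work (or power) constant while trading force for distance. A tiny sketch with made-up numbers:

```python
# An ideal (lossless) converter keeps the work the same:
#     force_in * distance_in == force_out * distance_out
# All numbers here are made up purely for illustration.
force_in_n    = 1000.0     # a big shove...
distance_in_m = 0.02       # ...over a short stroke
work_j = force_in_n * distance_in_m        # 20 J either way

ratio = 50.0                               # gear / lever / horn step-down ratio
force_out_n    = force_in_n / ratio        # 20 N
distance_out_m = distance_in_m * ratio     # 1 m

assert abs(force_out_n * distance_out_m - work_j) < 1e-9
print(f"{force_in_n:.0f} N over {distance_in_m} m  ->  "
      f"{force_out_n:.0f} N over {distance_out_m} m")
```

Swap force and distance for pressure and area, or torque and speed, and the same identity describes the horn, the bridge, and the gearbox.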

Pop quiz! What are the common impedance converters in the world of volts and amps? The two that come to my mind are the genafsbezre and the obbfg/ohpx pbairegre (rot13!). What am I missing?

The Open Source ASICs Hack Chat Redefines Possible

There was a time when all that was available to the electronics hobbyist were passive components and vacuum tubes. Then along came the integrated circuit, and it changed everything. Fast forward a bit, and affordable programmable microcontrollers arrived on the scene. Getting started in electronics became far easier, and the line between hardware and software started to blur. Much more recently, the hobbyist community was introduced to field programmable gate arrays (FPGAs) and the tools necessary to work with them. While not as widely applicable as the IC or MCU, the proliferation of FPGAs among hardware hackers once again opened doors that were previously locked tight.

We’re currently on the edge of another paradigm shift, but it’s no surprise if you haven’t heard of it. After all, the last couple of years have been a bit unusual, so the 2020 announcement that Google was teaming up with SkyWater and Efabless to enable the design and manufacture of open source application-specific integrated circuits (ASICs) flew under the radar for many people. But not Matt Venn, the host of this week’s Hack Chat. For him, it was the opportunity he’d been waiting for.

Matt started like many of us, building electronic kits and cobbling new gadgets together out of old, discarded hardware. He graduated to microcontrollers, and became particularly interested in FPGAs when the open source toolchains started hitting the scene. Of course, by this point it was much more than just a hobby for him. He was presenting a talk at the 2019 Week of Open Source Hardware in Switzerland when he saw Tim Edwards from Efabless demo a chip that had been made with open source tools. Unfortunately, the costs involved were still far too high for an individual to put their ideas into silicon.

So when, a few months later, Google and SkyWater announced they would be footing the bill to have selected open source ASIC designs manufactured, Matt says he was in a good position to jump in. He has since started running the Zero to ASIC Course, which aims to teach you how to produce your own chips using the open source Process Development Kit, and so far 160 people have taken him up on the offer.

As you might expect, many of the questions in the Chat had to do with what kind of designs you can actually produce using the 130 nm process, especially given the limits on the physical space each creator’s circuit can take up on each multi-project wafer (MPW). Others wanted to know how difficult it would be to port over existing FPGA designs, or how well the process worked for analog applications. With the number of designs Matt has seen go through his course, he could answer many of the questions just by pointing to a particular individual’s ASIC. For instance, he held up the digital-to-analog converter from Harald Pretl and Thomas Parry’s 5 GHz satellite transceiver as prime analog examples.

So let’s say you put the work in to design an ASIC and it gets approved to be produced on a future MPW. What then? Well, first you have to hope everything goes according to plan. Matt explains that the initial run was almost a total write-off due to timing problems in the toolchain, though in the end, he was largely able to recover his own chip. But they’ve done several runs since then, so let’s assume there are no production problems. What exactly ends up on your doorstep?

If you were expecting a handy DIP8, you might be disappointed. While some DIY-friendly packages would be nice, right now the ASICs ship as wafer-level chip scale packages (WLCSP) with an unforgiving 0.5 mm pitch. If you can believe it, that’s actually an improvement over the first run, which shipped out as a bare die. Of course, as Matt pointed out, anyone who’s gotten to the point of designing their own custom ASIC probably won’t be scared off by the prospect of some fine-pitch soldering. Some in the Chat wondered about the difficulty in getting compatible PCBs produced, but Matt said that in his experience OSH Park has been up to the challenge.

Like the Metal 3D Printing Hack Chat before it, this week’s session went over a topic that’s on the absolute cutting edge of what’s possible for hardware hackers and hobbyists. Truth be told, the vast majority of the people reading Hackaday are no more likely to send away for their own custom ASIC than they are to battle X-rays in an attempt to sinter metal with a homebrew electron gun. But that doesn’t make the fact that some folks out there are doing it any less important, or inspiring. That said, if you do end up being one of those select few who can boast they’ve designed a custom chip of their own — don’t forget to send one of them our way.

We’re grateful Matt Venn was able, once again, to share his valuable experience in the realm of open source application-specific integrated circuits with us. If you haven’t checked them out already, the Zero to ASIC workshop he ran for Remoticon 2020 and his talk Open Source ASICs – A Year in Perspective from Remoticon 2021 are required viewing if you want to learn more about this fascinating new frontier in hardware hacking.


The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal setting, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.

Hackaday Podcast 160: Pedal Power, OpenSCAD In The Browser, Tasmanian Tigers, And The Coolest Knob

Join Hackaday Editor-in-Chief Elliot Williams and Managing Editor Tom Nardi as they tackle all the hacks that were fit to print this last week. Things start off with some troubling news from Shenzhen (spoilers: those parts you ordered are going to be late), and lead into a What’s That Sound challenge that’s sure to split the community right down the center. From there we’ll talk about human powered machines, bringing OpenSCAD to as many devices as humanly possible, and the finer points of installing your own hardware into a Pelican case. There’s a quick detour to muse on laser-powered interstellar probes, a Pi-calculating Arduino, and a surprisingly relevant advertisement from Sony Pictures. Finally, stay tuned to hear the latest developments in de-extinction technology, and a seriously deep dive into the lowly nail.

Or Direct Download, like an old-school boss!

Take a look at the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!

Continue reading “Hackaday Podcast 160: Pedal Power, OpenSCAD In The Browser, Tasmanian Tigers, And The Coolest Knob”

This Week In Security: More Protestware, Another Linux Vuln, And TLStorm

It seems I have made my tiny, indelible mark on internet security history with the term “protestware”. As far as I can tell, I first coined the term for this particular flavor of malware while covering the Faker.js/Colors.js vandalism in January.

Yet another developer, [RIAEvangelist], has inserted some malicious code (Mirror, since the complaint has been deleted) into an existing project, in protest of something, in this case the war in Ukraine. The behavior here is to write a nice note on the desktop, preaching “peace not war”. However, a few versions of this package have a nasty surprise — it does a GeoIP lookup, and attempts to wipe the entire drive if it detects a Russian location. Yes, node-ipc versions 10.1.1 and 10.1.2 contain straight-up malware. It’s not clear how many users ran the potentially malicious code, as it was quickly reverted and 10.1.3 released. Up-to-date versions of node-ipc still create the desktop file, and Unity Hub has already confirmed they shipped the library in this state and have since issued a hotfix.
Continue reading “This Week In Security: More Protestware, Another Linux Vuln, And TLStorm”

Remoticon 2021 // Arsenijs Tears Apart Your Laptop

Hackaday’s own [Arsenijs Picugins] has been rather busy hacking old laptops apart and learning what can and cannot be easily reused, and for the 2021 Hackaday Remoticon he put together a heavily meme-loaded presentation with some very practical advice.

Full HD, IPS LCD display with touch support, reused with the help of a dedicated driver board

What parts inside a dead laptop are worth keeping? Aside from removable items like RAM sticks and hard drives, the most obvious first target is the LCD panel. These are surprisingly easy to use, with driver boards available on the usual marketplaces, so long as you make sure the exact model number of your panel is supported.

Many components inside laptops are actually USB devices: things like touch screen controllers and webcams are usually separate modules that simply take power and USB. This makes sense, since laptops already have a fair amount of external USB connectivity; why not use it internally too? Other items are a bit trickier: trackpads seem to be either PS/2 or I2C and need a bit more hardware support, while digital microphones mostly talk I2S, which means some microcontroller coding.

Some items need a little more care, however, so maybe avoid older Dell batteries, with their ‘spicy pillow’ tendencies. As [Arsenijs] says, take them when they are ripe for the picking, but not too ripe. Batteries need a little care and feeding: make sure you’ve got some cell protection if you pull raw cells! Charging electronics are always on the motherboard, so that’s something you’ll need to arrange yourself if you take a battery module, but it isn’t difficult, so long as you can find your way around the SMBus protocol.

These batteries are too ripe. Leave them alone.
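If you do want to poke at a pulled pack, most laptop batteries speak the Smart Battery Data Specification over SMBus, so a handful of register reads will tell you voltage and state of charge. Here’s a minimal sketch using the Python smbus2 library; the bus number and the assumption that your pack follows the standard register map at address 0x0B are mine, not anything from the talk:

```python
# Minimal sketch: query a salvaged laptop battery pack over SMBus.
# Assumes a Smart Battery Spec compliant pack at the usual 0x0B address,
# hanging off I2C bus 1 -- adjust for whatever adapter you're using.
from smbus2 import SMBus

BATTERY_ADDR = 0x0B  # standard Smart Battery device address

with SMBus(1) as bus:
    voltage_mv = bus.read_word_data(BATTERY_ADDR, 0x09)  # Voltage(), mV
    charge_pct = bus.read_word_data(BATTERY_ADDR, 0x0D)  # RelativeStateOfCharge(), %
    temp_dk    = bus.read_word_data(BATTERY_ADDR, 0x08)  # Temperature(), 0.1 K

print(f"Pack voltage: {voltage_mv / 1000:.2f} V")
print(f"Charge:       {charge_pct} %")
print(f"Temperature:  {temp_dk / 10 - 273.15:.1f} °C")
```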

Older laptops were much more modular, and some were even designed for upgrade or modification. But the miniaturization-driven trend of shrinking everything — where a laptop now needs to be thin enough to shave with — is pushing some manufacturers in a much more proprietary direction when it comes to hardware design.

This progression conflicts with our concerns about privacy, repairability, and waste, resulting in sealed machines full of unrepairable, non-reusable black boxes. We think it’s time to take back some of the hardware, so three cheers to those who take it upon themselves to reverse engineer and publish reusability information, and long may they continue.

Continue reading “Remoticon 2021 // Arsenijs Tears Apart Your Laptop”

Our Favorite Things: Binary Search

You might not think that it would be possible to have a favorite optimization algorithm, but I do. And if you’re well-versed in the mathematical art of hill climbing, you might be surprised that my choice doesn’t even involve taking any derivatives. That’s not to say that I don’t love Newton’s method, because I do, but it’s just not as widely applicable as the good old binary search. And this is definitely a tool you should have in your toolbox, too.

Those of you out there who slept through calculus class probably already have drooping eyelids, so I’ll give you a real-world binary search example. Suppose you’re cropping an image for publication on Hackaday. To find the best width for the particular image, you start off with a crop that’s too thin and one that’s too wide, then make an initial guess that’s halfway between the two. If this first guess is too wide, you split the difference between the current guess and the thinnest width. Update to this new guess and split the difference again.

But let’s make this even more concrete: an image that’s 1200 pixels wide. It can’t get wider than 1200 or thinner than 0. So our first guess is 600. That’s too thin, so we guess 900 — halfway between 600 and the upper limit of 1200. That ends up too wide, so we next guess 750, halfway between 600 and 900. A couple more iterations get us to 675, then 638, and then finally 619. In this case, we got down to the pixel level pretty darn fast, and we’re done. In general, you can stop when you’re happy, or have reached any precision goal.

[Ed note: I messed up the math when writing this, which is silly. But also brought out the point that I usually round the 50% mark when doing the math in my head, and as long as you’re close, it’s good enough.]
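If you’d rather let the computer do the halving, here’s the same hunt as a few lines of Python. The too_wide() check is a hypothetical stand-in for whatever judgment call you’re making while squinting at the crop:

```python
def binary_search_width(too_wide, lo=0, hi=1200, tolerance=1):
    """Find the widest crop that isn't 'too wide', to within `tolerance` pixels."""
    while hi - lo > tolerance:
        guess = (lo + hi) // 2        # split the difference
        if too_wide(guess):
            hi = guess                # too wide: new upper bound
        else:
            lo = guess                # too thin (or just right): new lower bound
    return lo

# Stand-in judgment call: pretend anything over 619 px looks too wide.
best = binary_search_width(lambda width: width > 619)
print(best)   # 619, after about ten guesses instead of trying all 1200 widths
```

Ten-ish guesses pin down one width out of 1200 possibilities, which is the log2(N) magic at work.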

What’s fantastic about binary search is how little it demands of you. Unlike fancier optimization methods, you don’t need any derivatives. Heck, you don’t even really need to evaluate the function any more precisely than “too little, too much”, and that’s really helpful for the kind of Goldilocks-y photograph cropping example above, but it’s extremely useful in the digital world as well. Comparators make exactly these kinds of decisions in the analog voltage world, and you’ve probably noticed the word “binary” in binary search. But binary search isn’t just useful inside silicon. Continue reading “Our Favorite Things: Binary Search”

Linux Fu: Simple Pipes

In the old days, you had a computer and it did one thing at a time. Literally. You would load your cards or punch tape or whatever and push a button. The computer would read your program, execute it, and spit out the results. Then it would go back to sleep until you fed it some more input.

The problem is computers — especially then — were expensive. And for a typical program, the computer spent a lot of time waiting for things like the next punched card to show up or the magnetic tape to get to the right position. In those cases, the computer was figuratively tapping its foot waiting for the next event.

Someone smart realized that the computer could be working on something else while it was waiting, so you should feed more than one program in at a time. When program A is waiting for some I/O operation, program B could make some progress. Of course, if program A didn’t do any I/O then program B starved, so we invented preemptive multitasking. In that scheme, program A runs until it can’t run anymore or until a preset time limit occurs, whichever comes first. If time expires, the program is forced to sleep a bit so program B (and other programs) get their turn. This is how virtually all modern computers outside of tiny embedded systems work.

But there is a difference. Most computers now have multiple CPUs and special ways to quickly switch tasks. The desktop I’m writing this on has 12 CPUs and each one can act like two CPUs. So the computer can run up to 12 programs at one time and have 12 more that can replace any of the active 12 very quickly. Of course, the operating system can also flip programs on and off that stack of 24, so you can run a lot more than that, but the switch between the main 12 and the backup 12 is extremely fast.

So the case is stronger than ever for writing your solution using more than one program. There are a lot of benefits. For example, I once took over a program that did a lot of calculations and then spent hours printing out results. I spun off the printing to separate jobs on different printers and cut like 80% of the run time — which was nearly a day when I got started. But even outside of performance, process isolation is like the ultimate encapsulation. Things you do in program A shouldn’t be able to affect program B. Just like we isolate code in modules and objects, we can go further and isolate them in processes.

Double-Edged Sword

But that’s also a problem. Presumably, if you want to have two programs cooperate, they need to affect each other in some way. You could just use a file to talk between them but that’s notoriously inefficient. So operating systems like Linux provide IPC — interprocess communications. Just like you make some parts of an object public, you can expose certain things in your program to other programs.
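As a taste of the simplest IPC mechanism of them all, here’s a bare-bones pipe between a parent and a child process. It’s just a minimal Python sketch of the idea, not one of the examples from the full article:

```python
# Bare-bones Linux IPC: a one-way pipe from a child process to its parent.
import os

read_fd, write_fd = os.pipe()          # kernel-managed byte channel

pid = os.fork()
if pid == 0:                           # child: write one message and exit
    os.close(read_fd)                  # child only writes
    os.write(write_fd, b"hello from the child\n")
    os.close(write_fd)
    os._exit(0)
else:                                  # parent: read whatever the child sent
    os.close(write_fd)                 # parent only reads
    with os.fdopen(read_fd) as pipe_in:
        print(pipe_in.read(), end="")
    os.wait()                          # reap the child so it doesn't linger
```

The shell’s “a | b” syntax is doing exactly this plumbing for you behind the scenes.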

Continue reading “Linux Fu: Simple Pipes”