Capacitor Decoupling Chaos, And Why You Should Abandon 100 nF

Everyone knows that the perfect capacitor to decouple the power rails around ICs is a 100 nF ceramic capacitor or equivalent, yet where does this ‘fact’ come from and is it even correct? These are the questions that [Graham] set out to answer once and for all. He starts with an in-depth exploration of the decoupling capacitor (and related) theory. [Graham] then dives into the way that power delivery is affected by the inherent resistance, capacitance, and inductance of traces. This is the problem that decoupling capacitors are supposed to solve.

Effectively, the decoupling capacitor provides a low-impedance path at high frequencies and a high-impedance path at low frequencies. Ideally a larger-value capacitor would simply be better, but since this is the real world and capacitors come with ESL and ESR parameters, we get to look at impedance graphs. This is the part where we can see exactly what decoupling effect everyone’s favorite 100 nF capacitors have, which as it turns out is pretty miserable.

Meanwhile, a 1 µF (ceramic) capacitor will have much better performance, as shown with impedance graphs for MLCC capacitors. As a rule of thumb, a single large decoupling capacitor is better, while two MLCCs side by side can worsen noise. Naturally, one has to keep in mind that although ‘more capacitance is better for decoupling’, there is still such a thing as ‘inrush current’, so don’t go too crazy with putting 1,000 µF decoupling capacitors everywhere.
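The effect is easy to see by modelling a real capacitor as a series R-L-C: the nominal capacitance plus its ESR and ESL. The sketch below uses ballpark MLCC figures we’ve assumed for illustration (roughly 1 nH ESL, 20 mΩ ESR), not values from [Graham]’s article:

```python
import math

def cap_impedance(f, c, esr, esl):
    """Impedance magnitude of a real capacitor modelled as series ESR + ESL + C."""
    w = 2 * math.pi * f
    reactance = w * esl - 1.0 / (w * c)
    return math.hypot(esr, reactance)

# Assumed ballpark MLCC parasitics: ~1 nH ESL, ~20 mOhm ESR
ESL, ESR = 1e-9, 0.02

for label, c in [("100 nF", 100e-9), ("1 uF", 1e-6)]:
    # Self-resonant frequency: where the L and C reactances cancel
    f_res = 1.0 / (2 * math.pi * math.sqrt(ESL * c))
    print(f"{label}: self-resonance ~{f_res / 1e6:.1f} MHz, "
          f"|Z| at 1 MHz = {cap_impedance(1e6, c, ESR, ESL):.3f} ohm")
```

Below self-resonance the 1 µF part wins handily; above it, both capacitors look like their ESL anyway, which is why loop inductance matters more than capacitance at the highest frequencies.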

Does A Radome Affect Radio?

Not too far away from where this is being written is one of Uncle Sam’s NATO outposts, a satellite earth station for their comms system. Its most prominent feature is a radome, a huge golf-ball-like structure visible for miles, that protects a large parabolic antenna from the British weather. It makes sense not just for a superpower to protect its antennas from the elements, and [saveitforparts] is doing the same with a geodesic dome for his radio telescope experiments. But what effect does it have on the received signal? He’s made a video to investigate.

The US military radome is likely constructed of special RF-transparent materials, but this smaller version has a fibreglass skin and an aluminium frame. When he compares internal and external sky scans made with a small motorised satellite TV antenna he finds that the TV satellites are just as strong, but that the noise floor is higher and the frame is visible in the scan. It’s particularly obvious with such a small dish, and his planned larger array should improve matters.

We would be curious to know whether an offset-fed dish, constructed to minimise ground noise reaching the LNB, would improve matters further. It’s no surprise that the frame doesn’t impede the TV satellites though, as the dish is many wavelengths wide at that frequency. The video is below the break, and meanwhile, we featured the antenna he’s using here in 2023.

Continue reading “Does A Radome Affect Radio?”

Software Lets You Paint Surface Patterns On 3D Prints

Just when you think you’ve learned all the latest 3D printing tricks, [TenTech] shows up with an update to their Fuzzyficator post-processing script. This time, the GPL v3 licensed program has gained early support for “paint-on” textures.

Fuzzyficator works as a plugin to OrcaSlicer, Bambu Studio, and PrusaSlicer. The process starts with an image that acts as a displacement map. Displacement map pixel colors represent how much each point on the print surface will be moved from its original position. Load the displacement map into Fuzzyficator, and you can paint the pattern on the surface right in the slicer.
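As a rough illustration of how a displacement map drives the effect (this is our own minimal sketch, not Fuzzyficator’s actual code), a grayscale pixel value can be scaled into an offset and applied along the local surface normal:

```python
# Minimal sketch of displacement mapping, with assumed conventions:
# pixel value 0 -> no displacement, 255 -> max_depth millimetres.

def sample_displacement(pixels, width, height, u, v, max_depth=0.4):
    """u, v in [0, 1] map onto the image; returns an offset in mm."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return pixels[y * width + x] / 255.0 * max_depth

def displace_point(point, normal, offset):
    """Move a 3D surface point along its (unit) surface normal."""
    return tuple(p + n * offset for p, n in zip(point, normal))

# Toy 2x2 grayscale map: black, mid-grey, white, black
pixels = [0, 128, 255, 0]
offset = sample_displacement(pixels, 2, 2, 0.9, 0.0)  # samples the mid-grey pixel
print(displace_point((10.0, 5.0, 2.0), (0.0, 0.0, 1.0), offset))
```

A slicer plugin would apply this per extrusion move in the G-code, which hints at why the plugin struggles to know whether a pattern should sink into or stand proud of the surface: the sign of the offset is a convention, not something the G-code itself records.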

This is just a proof of concept though, as [TenTech] is quick to point out. There are still some bugs to be worked out. Since the modifications are made to the G-code file rather than the model, the software has a hard time figuring out if the pattern should be pressed into the print, or lifted above the base surface. Rounded surfaces can cause the pattern to deform to fit the surface.

If you’d like to take the process into your own hands, we’ve previously shown how Blender can be used to add textures to your 3D prints.

Continue reading “Software Lets You Paint Surface Patterns On 3D Prints”

Soviet Wired Radio, How It Worked

At the height of the Cold War, those of us on the western side of the wall had plenty of choice over our radio listening, even if we stuck with our country’s monolithic broadcaster. On the other side in the Soviet Union, radio for many came without a choice of source, in the form of wired radio systems built into all apartments. [Railways | Retro Tech | DIY] grew up familiar with these wired radios, and treats us to a fascinating examination of their technology, programming, and ultimate decline.

In a Soviet apartment, usually in the kitchen, there would be a “Radio” socket on the wall. Confusingly of the same physical dimensions as a mains socket, it carried an audio signal rather than mains power. The box which plugged into it was referred to as a radio, but contained only a transformer, loudspeaker, and volume control. The socket carried the centralised radio station, piped from Moscow to the regions over a higher-voltage line, then successively stepped down at regional, local, and apartment-block level. A later refinement added a couple more stations on separate sub-carriers, but it was the single-channel speakers which provided the soundtrack to daily life.

The decline of the system came over the decades following the end of communism, and he describes its effect on the mostly older listenership. Now the speaker boxes survive as affectionate curios for those like him who grew up with them.

You probably won’t be surprised to find twisted-wire broadcasting in use in the West, too.

Continue reading “Soviet Wired Radio, How It Worked”

Preventing AI Plagiarism With .ASS Subtitling

Around two years ago, the world was inundated with news about how generative AI or large language models would revolutionize the world. At the time it was easy to get caught up in the hype, but in the intervening months these tools have done little in the way of productive work outside of a few edge cases, and mostly serve to burn tons of cash while turning the Internet into even more of a desolate wasteland than it was before. They do this largely by regurgitating human creations like text, audio, and video into inferior simulacra and, if you still want to exist on the Internet, there’s basically nothing you can do to prevent this sort of plagiarism. Except, that is, to feed the AI models garbage data, as this YouTuber has started doing.

At least as far as YouTube is concerned, the worst offenders of AI plagiarism work by downloading a video’s subtitles, passing them through some sort of AI model, and then generating another YouTube video based on the original creator’s work. Most subtitle files use the fairly straightforward .srt filetype, which only allows for timing and text information. But a more obscure subtitle filetype known as Advanced SubStation Alpha, or .ass, allows for all kinds of subtitle customization like orientation, formatting, font types, colors, shadowing, and many others. YouTuber [f4mi] realized that with this subtitle system, extra garbage text could be placed in the subtitle file but kept out of view of the video itself, either by positioning the text outside the viewable area or by making it transparent. So when an AI crawler downloads the subtitle file, it can’t distinguish the real subtitles from the garbage placed alongside them.
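To illustrate the trick (a hypothetical sketch, not [f4mi]’s actual scripts), an .ass file can carry a decoy line whose text is made fully transparent with the \alpha override tag, so video players never draw it but scrapers reading the raw text still ingest it:

```python
# Hypothetical sketch: write an .ass subtitle script where real lines render
# normally and decoy lines are hidden with full transparency.
# Header trimmed for brevity; a real [V4+ Styles] section lists more fields.
ASS_HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour
Style: Default,Arial,48,&H00FFFFFF

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

def dialogue(start, end, text, hidden=False):
    # {\alpha&HFF&} sets the line fully transparent; the text stays in the file.
    override = r"{\alpha&HFF&}" if hidden else ""
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{override}{text}"

lines = [
    dialogue("0:00:01.00", "0:00:04.00", "Real subtitle viewers can see."),
    dialogue("0:00:01.00", "0:00:04.00",
             "Decoy text for scrapers only.", hidden=True),
]
script = ASS_HEADER + "\n".join(lines) + "\n"
print(script)
```

A human with subtitles enabled sees only the first line; anything that strips the file down to bare text gets both, with no way to tell which one was ever on screen.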

[f4mi] created a few scripts to do this automatically so that it doesn’t have to be done by hand for each video. It also doesn’t impact the actual on-screen subtitles for people who need them for accessibility reasons. It’s a great way to “poison” AI models and at least make it harder for them to rip off the creations of original artists, and [f4mi]’s tests show that it does work. We’ve actually seen a similar method used to poison email data sets long ago, back when we were all collectively much more concerned about groups like the NSA using automated snooping tools on our emails than we were about machines stealing our creative endeavors.

Thanks to [www2] for the tip!

Continue reading “Preventing AI Plagiarism With .ASS Subtitling”

Networking History Lessons

Do they teach networking history classes yet? Or is it still too soon?

I was reading [Al]’s first installment of the Forgotten Internet series, on UUCP. The short summary is that it was a system for sending files between computers that were connected, intermittently, by point-to-point phone lines. Each computer knew the phone numbers of a few others, but none of them had anything like a global routing map, and IP addresses were still in the future. Still, it enabled file transfer and even limited remote access across the globe. And while some files contained computer programs, other files contained more human messages, which makes UUCP also a precursor to e-mail.

What struck me is how intuitively many of this system’s natural conditions and limitations led to the way we network today. From phone numbers came the need for IP addresses. And from the annoyance of having to know how the computers were connected, and of using bang notation to route a message from one computer to another through intermediaries, would come our modern routing protocols, simply because computer nerds like to automate hassles wherever possible.
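For flavour, bang-path routing is simple enough to sketch in a few lines (illustrative only, using classic example host names; real uucp did far more than this):

```python
# Toy sketch of UUCP-style bang-path routing: the address spells out every
# hop, and each machine strips its neighbour's name off the front before
# forwarding the message along.

def next_hop(bang_path):
    """Split 'hostA!hostB!user' into the neighbour to dial and the remainder."""
    hop, _, remainder = bang_path.partition("!")
    return hop, remainder

path = "seismo!mcvax!ukc!alice"
while "!" in path:
    hop, path = next_hop(path)
    print(f"forward via {hop}, remaining: {path}")
print(f"deliver locally to {path}")
```

The sender had to know, or look up, the whole chain of machines in advance, which is exactly the hassle that automated routing protocols later did away with.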

But back to networking history. I guess I learned my networking on the mean streets, by running my own Linux system, and web servers, and mail servers. I knew enough networking to get by, but that mostly focused on the current-day application, and my beard is not quite grey enough to have been around for the UUCP era. So I’m only realizing now that knowing how the system evolved over time helps a lot in understanding why it is the way it is, and thus how it functions. I had a bit of a “eureka” moment reading about UUCP.

In physics or any other science, you learn not just the status quo in the field, but also how it developed over the centuries. It’s important to know something about the theory of the aether to know what special relativity was up against, for instance, or the various historical models of the atom, to see how they inform modern chemistry and physics. But these are old sciences with a lot of obsolete theories. Is computer science old enough that they teach networking history? They should!

AI Mistakes Are Different, And That’s A Problem

People have been making mistakes — roughly the same ones — since forever, and we’ve spent about the same amount of time learning to detect and mitigate them. Artificial Intelligence (AI) systems make mistakes too, but [Bruce Schneier] and [Nathan E. Sanders] make the observation that, compared to humans, AI models make entirely different kinds of mistakes. We are perhaps less equipped to handle this unusual problem than we realize.

The basic idea is this: as humans we have tremendous experience making mistakes, and this has also given us a pretty good idea of what to expect our mistakes to look like, and how to deal with them. Humans tend to make mistakes at the edges of our knowledge, our mistakes tend to clump around the same things, we make more of them when bored or tired, and so on. We have as a result developed controls and systems of checks and balances to help reduce the frequency and limit the harm of our mistakes. But these controls don’t carry over to AI systems, because AI mistakes are pretty strange.

The mistakes of AI models (particularly Large Language Models) happen seemingly randomly and aren’t limited to particular topics or areas of knowledge. Models may unpredictably appear to lack common sense. As [Bruce] puts it, “A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.” A slight re-wording of a question might be all it takes for a model to suddenly be confidently and utterly wrong about something it just a moment ago seemed to grasp completely. And speaking of confidence, AI mistakes aren’t accompanied by uncertainty. Of course humans are no strangers to being confidently wrong, but as a whole the sort of mistakes AI systems make aren’t the same kinds of mistakes we’re used to.

There are different ideas on how to deal with this, some of which researchers are (ahem) confidently undertaking. But for best results, we’ll need to invent new ways as well. The essay also appeared in IEEE Spectrum and isn’t terribly long, so take a few minutes to check it out and get some food for thought.

And remember, if preventing mistakes at all costs is the goal, that problem is already solved: GOODY-2 is undeniably the world’s safest AI.