This week, Hackaday’s Elliot Williams and Kristina Panos met up over the tubes to bring you the latest news, mystery sound, and of course, a big bunch of hacks from the previous seven days or so.
In Hackaday news, we’ve got a new contest running! Read all about the 2025 Component Abuse Challenge, sponsored by DigiKey, and check out the contest page for all the details. In sad news, American Science & Surplus is shuttering online sales, leaving just the brick-and-mortar stores in Wisconsin and Illinois.
On What’s That Sound, it’s a results show, which means Kristina gets to take a stab at it. She missed the mark, but that’s okay, because [Montana Mike] knew that it was the theme music for the show Beakman’s World, which was described by one contestant as “Bill Nye on crack”.
After that, it’s on to the hacks and such, beginning with a really cool way to smooth your 3D prints in situ. We take a much closer look at that talking robot’s typewriter-inspired mouth from about a month ago. Then we discuss several awesome technological feats, such as running code on a PAX credit card payment machine, using the alphabet as joinery, and the beautiful design of UTF-8. Finally, we discuss the detection of spicy shrimp and marvel at the history of email.
Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
Download in DRM-free MP3 and savor at your leisure.
Episode 338 Show Notes:
News:
- 2025 Hackaday Component Abuse Challenge: Let The Games Begin!
- American Science And Surplus Ends Online Sales
What’s that Sound?
- Congratulations to [Montana Mike], the Beakmaniest of them all!
Interesting Hacks of the Week:
- Smooth! Non-Planar 3D Ironing
- A Closer Look Inside A Robot’s Typewriter-Inspired Mouth
- Running Code On A PAX Credit Card Payment Machine
- Jointly Is A Typeface Designed For CNC Joinery
- Original Mac Limitations Can’t Stop You From Running AI Models
- UTF-8 Is Beautiful
Quick Hacks:
- Elliot’s Picks:
- Kristina’s Picks:
Regarding radiation damage: “Statistically speaking, it is the right thing to do”.
This assumes that the means to mitigate the problem are themselves problem-free, that you can only do good. Overreacting to a minor problem and/or implementing ineffective remedies causes economic harm, harm to public perception, and political harm from choosing worse solutions over better ones, all of which create problems of their own.
If the original problem was small, then even small unintended consequences of the remedy are likely to be statistically equal to or worse than the problem you were originally trying to solve: an imperceptibly small number of cancer cases or birth defects where radioactive contamination may have been a contributing factor.
For example, suicides and abortions peak after a nuclear event because people are driven to panic, or become destitute through heavy-handed attempts at “controlling” the aftermath that leave them homeless and jobless where no such measures were necessary.
If you’re trying to smite a fly with an axe, be careful where you swing it.
Or, if a proportional response is called for, then a problem which is hard to prove even exists should command a response in equal measure.
In ethics, we consider the principle: “First do no harm.”
https://en.wikipedia.org/wiki/Primum_non_nocere
Absolutely! But keeping some shrimp out of the country is not that case. When setting standards for food safety, it’s probably much better to err on the side of caution. In the case of an event like Fukushima, you have to take a ton of other factors into account, like uprooting entire populations, etc. That case is much harder.
Here’s what I meant, but didn’t want to get too deep into the weeds on during the podcast:
Testing for harm from low levels of radioactivity is hard. What we have are statistical/epidemiological studies that try to correlate small exposures with tiny increases in the probability of disease across the population. When researchers “test” for this, they are looking for significant results; that is, they are looking for some regression coefficient (or other statistic) that exceeds a threshold. That threshold is set by saying they want to be 95% sure that, if there weren’t an effect, they would not claim there was one. In other words, the threshold is set to regulate the false-positive rate.
How often does the test actually pick up an effect when there is one? That is called the “power” of the test, and it’s not something under experimental control: it’s a fact of the data-generating process and the statistics used. As you slide the significance threshold up, you decrease the chance of a false positive, but you also decrease the power, the chance of detecting a true effect, and vice versa.
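To make the power idea concrete, here’s a minimal simulation sketch, my own illustration in Python with made-up rates (not anything from the episode): it estimates how often a two-group study at the 95% significance level actually detects a true excess risk of a given size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(baseline, excess, n, trials=2000, alpha=0.05):
    """Fraction of simulated studies that declare a significant effect
    (one-sided two-proportion z-test) when the true excess risk is `excess`."""
    hits = 0
    for _ in range(trials):
        control = rng.binomial(n, baseline)           # cases in unexposed group
        exposed = rng.binomial(n, baseline + excess)  # cases in exposed group
        p_pool = (control + exposed) / (2 * n)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n)   # SE of difference under H0
        if se == 0:
            continue
        z = (exposed - control) / (n * se)
        if stats.norm.sf(z) < alpha:                  # one-sided p-value
            hits += 1
    return hits / trials

# Same study size, shrinking true effect: power collapses.
for excess in (0.02, 0.005, 0.001):
    print(f"true excess risk {excess}: detected "
          f"{detection_rate(0.01, excess, 5000):.0%} of the time")
```

With these entirely hypothetical numbers, the big effect is caught nearly every time, while the smallest one is missed in most studies, even though it’s just as real.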
What matters for setting the threshold for consumer radiation safety, though, is finding a level below which there is no (or little) disease. So using evidence from studies where they control the false positive rate at the traditional 95% is the wrong thing to do.
The result of not doing so is that you get a nice linear relationship at high doses, where it’s easy to show significant harm at the 95% confidence level. But when the harm gets small enough, it falls out due to the low power of the tests used, and you get something that looks like a threshold effect: no significant disease at low doses. That isn’t evidence of an actual threshold; it’s a side effect of controlling the level of the statistical tests in the face of potentially existent, but smaller, effects.
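That apparent threshold is easy to reproduce. Continuing the sketch above (again, purely illustrative numbers): feed a perfectly linear dose-response into the same 95%-level test, and the low-dose groups come back “no significant effect” even though the underlying line never kinks.

```python
# Reuses detection_rate() from the sketch above. The true dose-response
# here is exactly linear -- any "threshold" in the output is an artifact
# of the test's power, not of the underlying risk.
slope = 0.002  # hypothetical excess risk per unit dose
for dose in (10, 5, 2, 1, 0.5, 0.1):
    power = detection_rate(baseline=0.01, excess=slope * dose, n=5000)
    verdict = "effect found" if power > 0.5 else '"no effect"'
    print(f"dose {dose:>4}: significant in {power:.0%} of studies -> {verdict}")
```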
What you care about from a public health standpoint is that there is no effect at a given low enough exposure level. But you can’t test for “no effect” because you don’t know what the power of the test is. You can only test for “there is an effect”. Absence of proof is not proof of absence, right? And because of that, the entire low-exposure regime is a big “we dunno”, statistically.
So then the regulators are in this position of making a threshold for safe levels of radiation, but they have to draw the threshold down in the region where the studies don’t provide any reliable guidance. So they take a stab, and draw a straight line down from what they do see. I don’t really have a dog in this fight, but if you asked me, I would say that the linear extrapolation is probably the best you can do without further evidence.
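For the curious, that “straight line down” is the linear no-threshold extrapolation, and the mechanics are just a line through the origin fitted to the high-dose points. A toy version with invented numbers:

```python
import numpy as np

# Hypothetical high-dose observations (dose in mSv, observed excess risk).
doses = np.array([100.0, 200.0, 500.0])
excess_risk = np.array([0.005, 0.011, 0.026])

# Least-squares slope for a line constrained through (0, 0):
slope = np.dot(doses, excess_risk) / np.dot(doses, doses)

# Extrapolate far below any measured point -- this is the leap of faith.
low_dose = 1.0  # mSv
print(f"extrapolated excess risk at {low_dose} mSv: {slope * low_dose:.2e}")
```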
If you had a biological model where you showed the mechanisms that the body uses to more effectively combat the damage from lower levels of radiation, then that would be convincing in the threshold-model direction, but the regulators still have the problem of locating this kink in the curve down where they have no data, and they’re understandably unwilling to just guess because lives are on the line. (My read is that this is exactly where we are, both biology and public-health-wise.)
Still, if the linear model is a bit conservative, it’s probably the way you’d go if you’re trying to prevent people from getting cancer due to eating radioactive food. First do no harm, right?
But not “go ahead and do harm as long as it’s small enough that it’s hard to test for it with enough confidence to show up on a population epidemiological study at the 95% level.”
Anyway, that’s what I mean by “statistically sketchy” or whatever. There’s actually a branch of Bayesian hypothesis testing that aims to handle this loss of power at the low end of the spectrum, but we’ll save that for my next TEDx talk…
Depends. It may cost someone their job, and the panic alone makes people collectively throw away food and money.
If the damage caused by the shrimp were something like 1-2 potential cases of cancer over the next 50 years, then this action, which seems completely insignificant or trivial, nevertheless ends up costing people millions of dollars and does more harm overall.
That is why it’s sometimes better to do nothing even when you could do something, because even a tiny tiny misstep would be worse.
For example, the average cost of treating a cancer patient is between $100k and $200k.
So if the whole hoopla around the shrimp starts costing more than a few hundred grand, the reaction would damage society on the same scale as the radioactive shrimp itself.
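In back-of-the-envelope form, using the rough figures above (none of these numbers are real estimates):

```python
# Rough harm comparison with the figures quoted above (all hypothetical).
expected_cases = 1.5       # "1-2 potential cases of cancer over 50 years"
cost_per_case = 150_000    # midpoint of the $100k-200k treatment range

harm_from_shrimp = expected_cases * cost_per_case  # ~$225,000
response_cost = 500_000    # assumed cost of the recall/panic response

print(f"harm from shrimp ~${harm_from_shrimp:,.0f}, "
      f"response cost ~${response_cost:,.0f}")
```

On those assumptions, the response costs roughly twice the harm it prevents.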
That’s a different argument entirely. That’s permitting future harm, whereas we’re talking about reacting to something which already happened, which we just have to deal with.
It’s like the trolley problem – whichever way you choose, someone’s going to get hurt. Do you pick the least harm?
Or, since we have to accept that doing something inevitably risks and causes other harm, we’re really talking about redistributing harm to soften the outcomes for the individuals affected worst.
In doing so, the ethical concern becomes about whether the total harm is reduced, and whether the people who we harm instead consent to being harmed in this manner.
What I’m getting at is this: the question is not simply whether you can show there is an effect. That is not the deciding factor. It’s that failing to confidently show an effect, despite strong attempts, suggests the effect, even if it exists, is not much of an effect in the first place.
So even if it exists, do we really have to put so much effort into dealing with it, or can we just let it be?
Another similar case is cellphones and cancer. People suspect it, and there is some evidence for it, but multi-decade studies and meta-studies have never conclusively shown it happening in humans. People pour millions of man-hours and dollars into the problem, which ends up consuming more resources than simply saying, “Okay, it might cause cancer, but we’re going to risk it anyway, because we’re obviously spending more effort here than whatever problems it is causing.” That’s basically what we’re doing with RF exposure and permissible limits anyhow: the main concern is EMI with other devices and spectrum congestion rather than human safety.