Putting Some Numbers On Your NEMAs

It’s official: [Engineer Bo] wins the internet with a video titled “Finding NEMA 17,” wherein he builds a dynamometer to find the best stepper motor in the popular NEMA 17 frame size.

Like a lot of subjective questions, the only correct answer to which stepper is best is, “It depends,” and [Bo] certainly has that in mind while gathering the data needed to construct torque-speed curves for five samples of NEMA 17 motors using his homebrew dyno. The dyno itself is pretty cool, with a bicycle disc brake to provide drag, a load cell to measure braking force, and an optical encoder to measure the rotation of the motor under test. The selected motors represent a cross-section of what’s commonly available today, some of which appear in big-name 3D printers and other common applications.
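The arithmetic behind a torque-speed curve is simple enough to sketch. Here's a minimal Python illustration of how a load-cell force reading and an encoder speed become torque and power figures; the force, lever-arm, and RPM numbers are made-up examples, not [Bo]'s actual measurements or his dyno's geometry.

```python
# Sketch: turning dyno readings into points on a torque-speed curve.
# The force, lever arm, and RPM values are illustrative, not measured.
import math

def torque_nm(load_cell_force_n: float, lever_arm_m: float) -> float:
    """Brake torque = force measured at the load cell x lever arm length."""
    return load_cell_force_n * lever_arm_m

def mechanical_power_w(torque: float, rpm: float) -> float:
    """Shaft power = torque x angular velocity (RPM converted to rad/s)."""
    return torque * rpm * 2 * math.pi / 60

# Example: 8 N at the load cell on a 50 mm lever arm, motor at 300 RPM
t = torque_nm(8.0, 0.050)
p = mechanical_power_w(t, 300)
print(f"torque = {t:.2f} N*m, power = {p:.1f} W")
```

Sweep the speed while logging both values and you have one torque-speed curve per motor-driver combination.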

[Bo] tested each motor with two different drivers: the TMC2209 silent driver to start with, and, because he released the Magic Smoke from those, the higher-current TB6600 module. The difference between the two drivers was striking, with lower torque and top speeds for the same settings on each motor using the TB6600, as well as more variability in the data. Motors did better across the board with the TB6600 at 24 volts, showing improved torque at higher speeds and slightly higher top speeds. He also tested the effect of microstepping on torque using the TB6600 and found that using full steps resulted in higher torque across a greater speed range.

At the end of the day, it seems as if these tests say more about the driver than they do about any of the motors tested. Perhaps the lesson here is to match the motor to the driver in light of what the application will be. Regardless, it’s a nice piece of work, and we really appreciate the dyno design to boot — reminds us of a scaled-down version of the one [Jeremy Fielding] demonstrated a few years back.

Continue reading “Putting Some Numbers On Your NEMAs”

A Brief History Of Perpetual Motion

Conservation of energy isn’t just a good idea: It is the law. In particular, it is the first law of thermodynamics. But, apparently, a lot of people don’t really get that, because history is replete with inventions that purport to run forever or produce more energy than they consume. Sometimes these are hoaxes, sometimes they are frauds, and sometimes, we expect, they are simple misunderstandings.

We thought about this when we ran across the viral photo of an EV with a generator connected to the back wheel. Of course, EVs and hybrids do try to reclaim power through regenerative braking, but that’s recovering a fraction of the energy already spent. You can never pull more power out than you put in, and, in fact, you’ll pull out substantially less.
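A quick back-of-the-envelope sketch shows why the wheel-generator scheme can never come out ahead: every conversion stage multiplies in a loss. The efficiency figures below are illustrative assumptions, not measurements from any particular vehicle.

```python
# Sketch: why a wheel-driven generator can't charge an EV "for free".
# Each stage (motor, generator, battery charging) is lossy, so the
# recovered energy is always a strict fraction of the energy spent.
# Efficiency figures here are illustrative assumptions only.
def round_trip_energy(e_in_wh: float, motor_eff: float, gen_eff: float,
                      charge_eff: float) -> float:
    """Energy back in the battery after motor -> generator -> charger."""
    return e_in_wh * motor_eff * gen_eff * charge_eff

recovered = round_trip_energy(1000, motor_eff=0.90, gen_eff=0.85,
                              charge_eff=0.90)
print(f"{recovered:.1f} Wh recovered from 1000 Wh spent")
```

Even with generous per-stage efficiencies, roughly a third of the energy is gone after one round trip, and each subsequent trip loses the same fraction again.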

Not a New Problem

If you think this is a scourge of social media and modern vehicles, you’d be wrong. Leonardo da Vinci, back in 1494, said:

Oh ye seekers after perpetual motion, how many vain chimeras have you pursued? Go and take your place with the alchemists.

There was a rumor in the 8th century that someone built a “magic wheel,” but this appears to be little more than a myth. An Indian mathematician also claimed to have a wheel that would run forever, but there’s little proof of that, either. It was probably an overbalanced wheel — a design in which shifting weights are supposed to keep one side perpetually heavier, so that gravity alone keeps it turning.

Continue reading “A Brief History Of Perpetual Motion”

Can We Ever Achieve Fusion Power?

Fusion power has long held the promise of delivering near-endless energy without as many unfortunate side effects as nuclear fission. But despite huge investment and some fascinating science, the old adage about practical power generation being 20 years away seems just as true as ever. But is that really the case? [Brian Potter] has written a review article for Construction Physics, which takes us through the decades of fusion research.

For a start, it’s fascinating to learn about the many historical fusion approaches: the magnetic pinch, the stellarator, and finally the basis of many modern reactors, the tokamak. He demonstrates that we’ve made an impressive amount of progress, but at the same time warns against misleading comparisons. There’s a graph comparing fusion progress with Moore’s Law that he debunks, but he ends on a positive note. Who knows, we might not need a Mr. Fusion to arrive from the future after all!

Fusion reactors are surprisingly easy to make, assuming you don’t mind putting far more energy in than you’d ever receive in return. We’ve featured more than one Farnsworth fusor over the years.

Tired Of Your Robot? Why Not Eat It?

Have you ever tired of playing with your latest robot invention and wished you could just eat it? Well, that’s exactly what a team of researchers is investigating. There is a fully funded research initiative (not an April Fools’ joke, as far as we know) delving into the possibilities of edible electronics and mechanical systems used in robotics. The team, led by EPFL in Switzerland, combines food process engineering, printed and molecular electronics, and soft robotics to create fully functional and practical robots that can be consumed at the end of their lifespan. While the concept of food-based robots may seem unusual, the potential applications in medicine and reducing waste during food delivery are significant driving factors behind this idea.

The Robofood project (some articles are paywalled!) has clearly made some inroads into the many components needed. Take, for example, batteries. Normally, ingesting a battery would result in a trip to the emergency room, but an edible battery can be made from an anode of riboflavin (found in almonds and egg whites) and a cathode of quercetin, as we covered a while ago. The team proposed another battery using activated charcoal (AC) electrodes on a gelatin substrate. Water is split into its constituent oxygen and hydrogen by applying a voltage to the structure. These gases adsorb onto the AC surface and later recombine into water, providing a usable one-volt output for ten minutes with a similar charge time. This simple structure is reusable and, once expired, dissolves harmlessly in (simulated) gastric fluid in twenty minutes. Such a device could potentially power a GI-tract exploratory robot or other sensor devices.

But what use is power without control (as some car tyre advert once said)? Microfluidic control circuits can be created using a stack of edible materials, primarily oleogels such as ethyl cellulose mixed with an organic oil like olive oil. A microfluidic NOT gate combines a pressure-controlled switch with a fluid resistor as the ‘pull-up’. The switch has a horizontal flow channel with a blockage that is cleared when a control pressure is applied. As every electronic engineer knows, once you have a controlled switch and a resistor, you can build NOT gates and, from those, all the other logic functions, flip-flops, and memories. Although they are very slow, the control components are, importantly, edible.
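For the curious, that switch-plus-pull-up arrangement can be modelled abstractly as Boolean functions. This is a toy logic-level sketch, not the actual fluid dynamics from the Robofood papers, and the SR-latch composition below is our own illustration of the “flip-flops and memories” claim.

```python
# Sketch: the microfluidic switch-and-resistor logic, modelled as Booleans.
# Abstract logic-level model only; no fluid dynamics here.
def microfluidic_not(control_pressure: bool) -> bool:
    """The fluid-resistor 'pull-up' keeps the output pressurized until
    control pressure opens the switch and vents it: an inverter."""
    return not control_pressure

def nor(a: bool, b: bool) -> bool:
    """Two switches venting the same pulled-up output act as a NOR gate."""
    return not (a or b)

def sr_latch(s: bool, r: bool, q: bool) -> bool:
    """One bit of memory from cross-coupled NOR gates (q = stored state)."""
    return nor(r, nor(s, q))

# NOT and NOR together are functionally complete: the rest of Boolean
# logic, and state-holding elements, follow by composition.
q = sr_latch(s=True, r=False, q=False)   # set the latch
q = sr_latch(s=False, r=False, q=q)      # hold the stored bit
print(q)  # True
```

Slow or not, the same compositional argument the article gestures at applies: invert, combine, and cross-couple, and you have sequential logic.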

Edible electronics don’t feature here often, but we did dig up this simple edible chocolate bunny that screams when you bite it. Who wouldn’t want one of those?

What You Can See With A SEM?

The last time we used a scanning electron microscope (a SEM), it looked like something from a bad 1950s science fiction movie. These days SEMs, like the one at the IBM research center, look like computers with a big tank poised nearby. Interestingly, the SEM is so sensitive that it has to be in a quiet room to prevent sound from interfering with images.

As a demo of the machine’s impressive capability, [John Ott] loads two US pennies, one face up and one face down. [John] notes that Lincoln appears on both sides of the penny and then proves the assertion correct using moderate magnification under the electron beam.

Continue reading “What You Can See With A SEM?”

Human Brains Can Tell Deepfake Voices From Real Ones

Although it’s generally accepted that synthesized voices which mimic real people’s voices (so-called ‘deepfakes’) can be pretty convincing, what does our brain really think of these mimicry attempts? To answer this question, researchers at the University of Zurich put a number of volunteers into fMRI scanners, allowing them to observe how their brains would react to real and synthesized voices. The perhaps somewhat surprising finding is that the human brain shows differences in two brain regions depending on whether it’s hearing a real or a fake voice, meaning that on some level we are aware of the fact that we are listening to a deepfake.

The detailed findings by [Claudia Roswandowitz] and colleagues are published in Communications Biology. For the study, 25 volunteers were asked to accept or reject the voice samples they heard as being natural or synthesized, as well as perform identity matching with the supposed speaker. The natural voices came from four male (German) speakers, whose voices were also used to train the synthesis model. Not only did identity-matching performance crater with the synthesized voices, but the resulting fMRI scans also showed very different brain activity depending on whether the subject heard a natural or a synthesized voice.

One of these regions was the auditory cortex, which clearly indicates that there were acoustic differences between the natural and fake voices; the other was the nucleus accumbens (NAcc). This part of the basal forebrain is involved in the cognitive processing of motivation, reward, and reinforcement learning, and plays a key role in social, maternal, and addictive behavior. Overall, the deepfake voices are characterized by acoustic imperfections and do not elicit the same sense of recognition (and thus reward sensation) as natural voices do.

Until deepfake voices can be made much better, it would appear that we are safe, for now.

The Guinness Brewery Invented One Of Science’s Most Important Statistical Tools

The Guinness brewery has a long history of innovation, but did you know that it was the birthplace of the t-test? A t-test is usually what underpins a declaration of results being “statistically significant”. Scientific American has a fascinating article all about how the Guinness brewery (and one experimental brewer in particular) brought it into being, with ramifications far beyond that of brewing better beer.

William Sealy Gosset (aka ‘Student’), self-trained statistician. [source: user Wujaszek, Wikipedia]
Head brewer William Sealy Gosset developed the technique in the early 1900s as a way to more effectively monitor and control the quality of stout beer. At Guinness, Gosset and other brilliant researchers measured everything they could in their quest to optimize and refine large-scale brewing, but there was a repeated problem. Time and again, existing techniques of analysis were simply not applicable to their gathered data, because sample sizes were too small to work with.

While the concept of statistical significance was not new at the time, Gosset’s significant contribution was finding a way to effectively and economically interpret data in the face of small sample sizes. That contribution was the t-test: a practical and logical approach to dealing with uncertainty.

As mentioned, t-testing had ramifications and applications far beyond that of brewing beer. The basic question of whether to consider one population of results significantly different from another population of results is one that underlies nearly all purposeful scientific inquiry. (If you’re unclear on how exactly the t-test is applied and how it is meaningful, the article in the first link walks through some excellent and practical examples.)
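For a flavor of what Gosset’s test actually computes, here is a minimal pooled-variance two-sample t statistic in Python. The two sample sets are hypothetical measurements (imagine yields from two barley strains), not Guinness data.

```python
# Sketch: Student's two-sample t-test with pooled variance, on made-up
# small samples of the kind Gosset had to work with.
import math
from statistics import mean, variance

def t_statistic(a: list[float], b: list[float]) -> float:
    """Two-sample t statistic, assuming roughly equal variances."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

batch_a = [5.1, 5.3, 4.9, 5.2, 5.0]   # hypothetical measurements
batch_b = [4.6, 4.8, 4.5, 4.9, 4.7]

t = t_statistic(batch_a, batch_b)
df = len(batch_a) + len(batch_b) - 2
print(f"t = {t:.2f} with {df} degrees of freedom")
# Compare |t| against a t-distribution table at the chosen significance
# level for df degrees of freedom to decide whether the batches differ.
```

With only five samples per batch, the t distribution’s fatter tails (relative to the normal distribution) are exactly the correction Gosset worked out.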

Dublin’s Guinness brewery has a rich heritage of innovation so maybe spare them a thought the next time you indulge in statistical inquiry, or in a modern “nitro brew” style beverage. But if you prefer to keep things ultra-classic, there’s always beer from 1574, Dublin castle-style.