Physical Computing Used To Be A Thing

In the early 2000s, the idea that you could write programs on microcontrollers that did things in the physical world, like run motors or light up LEDs, was kind of new. At the time, most people thought of coding as stuff that stayed on the screen, or in cyberspace. This idea of writing code for physical gadgets was uncommon enough that it had a buzzword of its own: “physical computing”.

You don't hear much about "physical computing" these days, but that's not because the concept went away. Rather, it's probably because it's almost become the norm. I realized this as Tom Nardi and I were talking on the podcast about a number of apparently different trends that all point in the same direction.

We started off talking about the early days of the Arduino revolution. Sure, folks were building hobby projects with microcontrollers in them before the Arduino came along, but the combination of a standardized board, a wide-ranging software library, and abundant examples to learn from brought embedded programming to a much wider audience. In particular, it brought this to an audience of beginners who were not only blinking an LED for the first time, but maybe even taking their first steps into coding. For many, the Arduino hello world was their coding hello world as well. These folks are "physical computing" natives.

Now, it's to the point that when Arya goes to visit FOSDEM, an open-source software convention, there is hardware everywhere. Why? Because many successful software projects support open hardware, and many others run on it. People port their favorite programming languages to microcontroller platforms, and as those platforms become more powerful, the lines between the "big" computers and the "micro" ones start to blur.
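
MicroPython is a good example of that blurring, and it also makes for a gentle first step into physical computing. Here's a minimal sketch of the microcontroller "hello world": blink an LED forever. It assumes a board with an LED on GPIO 25, which is true of the original Raspberry Pi Pico; adjust the pin number for your own hardware.

```python
# The microcontroller "hello world" in MicroPython: blink an LED forever.
# Assumes an LED on GPIO 25 (the original Raspberry Pi Pico's onboard LED);
# change the pin number to match your board.
from machine import Pin
import time

led = Pin(25, Pin.OUT)

while True:
    led.toggle()     # flip the LED state
    time.sleep(0.5)  # half a second on, half a second off
```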

And I think this is awesome. For one, it's somehow more rewarding, when you're just starting to learn to code, to see the letters you type cause something to happen in the physical world, even if it's just blinking an LED. At the same time, everything has a microcontroller in it these days, and hacking on those devices is another flavor of physical computing – there's code in everything you might think of as hardware. And with open licenses, everything under version control, and more openness in open hardware than we've ever seen before, the open-source hardware world reflects the open-source software ethos.

Are we getting past the point where the hardware / software distinction is even worth making? And was “physical computing” just the buzzword for the final stages of blurring out those lines?

UScope: A New Linux Debugger And Not A GDB Shell, Apparently

[Jim Colabro] is a little underwhelmed with the experience of low-level debugging of Linux applications using traditional debuggers such as GDB and LLDB. These programs have been around for a long time, developing alongside Linux and other UNIX-like OSs, and are still solidly in the CLI domain. Fed up with their staleness, their user experience, and their lack of support for modern data structures, [Jim] has created UScope, a new debugger written from scratch with no code from the existing projects.

GDB, in particular, has quite a steep learning curve once you dig into its more advanced features. Many people side-step this learning curve by running GDB within Visual Studio or some other modern IDE, but it is still the same old debugger core at the end of the day. [Jim] gripes that existing debuggers don't support commonly used modern data structures and offer poor customizability. It would be nice, for example, to write a little code and have the debugger render a data structure graphically, to aid visualisation of the problem being investigated. We know that GDB at least can be customised with Python to create application-specific pretty printers, but perhaps [Jim] has bigger plans?
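
For reference, here's roughly what that looks like: a minimal sketch of a GDB pretty printer in Python, written for a made-up point3 struct with double fields x, y, and z. It shows the kind of customisation GDB already allows, even if the output is still plain text.

```python
# Minimal GDB pretty-printer sketch. Load it inside GDB with
# `source point3_printer.py`. The point3 struct (double x, y, z) is a
# made-up example; adapt the type check to your own data structures.
import gdb

class Point3Printer:
    """Render a point3 value as one compact line instead of a member dump."""
    def __init__(self, val):
        self.val = val

    def to_string(self):
        return "point3(%g, %g, %g)" % (
            float(self.val["x"]),
            float(self.val["y"]),
            float(self.val["z"]),
        )

def point3_lookup(val):
    # Return a printer for point3 values, None for everything else.
    if val.type.strip_typedefs().tag == "point3":
        return Point3Printer(val)
    return None

gdb.pretty_printers.append(point3_lookup)
```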

Continue reading “UScope: A New Linux Debugger And Not A GDB Shell, Apparently”

Learn New Tools, Or Hone Your Skill With The Old?

Buried in a talk on AI from an artist who is doing cutting-edge video work was the following nugget that entirely sums up the zeitgeist: "The tools are changing so fast that artists can't keep up with them, let alone master them, before everyone is on to the next." And while you might think that this concern is only relevant to those who have to stay on the crest of the hype wave, the deeper question resonates with every hacker.

When was the last time you changed PCB layout software or refreshed your operating system? What other tools do you use in your work or your extra-curricular projects, and how long have you been using them? Are you still designing your analog front-ends with LM358s, or have you looked around to see that technology has moved on since the 1970s? "OMG, you're still using STM32F103s?"

It’s not a simple question, and there are no good answers. Proficiency with a tool, like for instance the audio editor with which I crank out the podcast every week, only comes through practice. And practice simply takes time and effort. When you put your time in on a tool, it really is an investment in that it helps you get better. But what about that newer, better tool out there?

Some of the reluctance to update is certainly sunk-cost fallacy; after all, you've put so much sweat and tears into the current tool. But there is also a real cost to overcome in learning the new hotness, and that's no fallacy. If you're always trying to learn a new way of doing something, you're never going to get good at actually doing it, and that's the lament of our artist friend. Honing your craft requires focus. You won't know the odd feature set of that next microcontroller as well as you do the old faithful – not without sitting down and reading the datasheet and doing a couple of finger-stretching projects first.

Striking the optimal balance here is hard. On a per-project basis, staying with your good old tool or swapping to the new hotness is a binary choice, but across your projects, you can do some of each. Maybe it makes sense to budget some of your hacking time into learning new tools? How about ten percent? What do you think?

38C3: Save Your Satellite With These Three Simple Tricks

BEESAT-1 is a 1U cubesat launched in 2009 by the Technical University of Berlin. Like all good satellites, it has redundant computers onboard, so when the first one failed in 2011, it just switched over to the second. And when the backup failed in 2013, well, the satellite was “dead” — or rather sending back all zeroes. Until [PistonMiner] took a look at it, that is.

Getting the job done required debugging the firmware remotely — like 700 km remotely. Because it was sending back all zeroes, but valid zeroes, something had to be wrong either in the data collection or in the assembly of the telemetry frames. A quick experiment confirmed that the assembly routine fired off very infrequently, and that interval turned out to be a parameter modifiable in SRAM. Setting a shorter assembly time led to success: a valid telemetry frame.

Then comes the job of patching the bird in flight. [PistonMiner] pulled the flash down, and cobbled together a model of the satellite to practice with in the lab. And that’s when they discovered that the satellite doesn’t support software upload to flash, but does allow writing parameter words. The hack was an abuse of the fact that the original code was written in C++. Intercepting the vtables let them run their own commands without the flash read and write conflicting.

Of course, nothing is that easy. Bugs upon bugs, combined with the short communication window, made it even more challenging. And then there was the bizarre bit with the camera firing off after every flash dump because of a missing break in a case statement. But the camera never worked anyway, because the firmware didn’t get finished before launch.

Challenge accepted: [PistonMiner] got it working, and after fifteen years in space, and ten years of being "dead", BEESAT-1 was taking photos again. What caused the initial problem? NAND flash memory needs to be cleared to zeroes before it's written, and a bug in the code led to a long pause between the two steps, during which a watchdog timeout fired and the satellite reset, blanking the flash.

This talk is absolutely fantastic, but may be of limited practical use unless you have a long-dormant satellite to play around with. We can nearly guarantee that after watching this talk, you will wish that you did. If so, the Orbital Index can help you get started.

FreeCAD Is Near 1.0

The open-source parametric 3D modelling software FreeCAD is out in a release candidate for version 1.0. If you've tried FreeCAD before and found a few showstoppers, it might be a good time to test it out again, because the two biggest of them have been solved in this latest version.

First, version 1.0 finally implements a solution to the "topological naming problem". Imagine you want to put a hole into a surface. The program needs to know which surface to put the hole on, and so it refers to this surface by name or number. Now imagine you subdivide the surface, and both subsections get new names. Where does your hole go now? If you want to dig into the issue, the inimitable [MangoJelly] has a great video about the topo naming problem. Practically speaking, there were workarounds, like adding chamfers only after the main design had stabilized, but frankly it was a hassle to remember all of the tricks. This is a huge fix.
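
Since FreeCAD is scriptable in Python, you can see the underlying issue for yourself from its Python console. Here's a rough sketch; the object and face names are just illustrative.

```python
# Rough sketch, run from FreeCAD's Python console, of why topological
# naming matters. The object and face names here are only illustrative.
import FreeCAD as App

doc = App.newDocument("TopoNamingDemo")
box = doc.addObject("Part::Box", "Box")
doc.recompute()

# Downstream features (a hole, a chamfer...) refer to subshapes by generated
# names like "Face6". If an upstream edit splits or renumbers the faces, that
# stored name can end up pointing at a completely different face.
print(["Face%d" % (i + 1) for i in range(len(box.Shape.Faces))])
```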

The second big fix concerns assemblies. Older versions of FreeCAD were great for making single parts, but combining them all together inside the CAD program was always janky. Version 1.0 combines the previous two patchwork assembly workbenches into one, and it's altogether more pleasant to use. The constraints on how two parts move when held together with an axle just work now, and this is a big deal for multi-part models.

If you’re coming from any other parametric CAD program, most of FreeCAD will seem familiar to you, but there will also be workflow differences that will take some getting used to. In trade, what do you get? Scriptability in Python, real open source software, and all of the bells and whistles for free. Now that its two biggest pain points have been addressed,  FreeCAD has become a lot easier to love. We’re looking forward to some good V1.0 tutorials in the future, and we’ll keep you posted when we find them.

Get Thee To Git

While version control used to be reserved for big corporate projects, it is very mainstream these days. You can attribute much of that to Git, the software that has nearly displaced every other version control system. Git works well, it is versatile, and it scales well. It is easy to use as an individual developer or as part of a worldwide team. But Git is also one of those things that people don't always study; they just sort of "pick it up" as they go. That motivated [Glasskube] to create "The Guide to Git I Never Had."

If you are ready to click away because you are not a software person, hang on. Git is actually useful for many different kinds of data, and there are a number of hardware projects that use Git in some form. That's especially true if the project has some code associated with it, but there are also projects that consist of nothing but PCB layouts, reverse-engineering documentation, or schematics.
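
And if you'd rather script Git than memorize its command line, here's a quick sketch using the GitPython library to put a hypothetical KiCad project under version control; the project and file names are placeholders.

```python
# Quick sketch: version-controlling a hypothetical KiCad project with the
# GitPython library (pip install GitPython). All file names are placeholders.
from pathlib import Path
from git import Repo

project = Path("my-widget")
project.mkdir(exist_ok=True)
(project / "my-widget.kicad_pcb").touch()  # stand-ins for real design files
(project / "my-widget.kicad_sch").touch()

repo = Repo.init(project)                  # same as `git init my-widget`
repo.index.add(["my-widget.kicad_pcb", "my-widget.kicad_sch"])
repo.index.commit("Initial board layout and schematic")
print(repo.head.commit.hexsha[:8])
```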

Continue reading “Get Thee To Git”

Hackaday Links: September 1, 2024

Why is it always a helium leak? It seems whenever there’s a scrubbed launch or a narrowly averted disaster, space exploration just can’t get past the problems of helium plumbing. We’ve had a bunch of helium problems lately, most famously with the leaks in Starliner’s thruster system that have prevented astronauts Butch Wilmore and Suni Williams from returning to Earth in the spacecraft, leaving them on an extended mission to the ISS. Ironically, the launch itself was troubled by a helium leak before the rocket ever left the ground. More recently, the Polaris Dawn mission, which is supposed to feature the first spacewalk by a private crew, was scrubbed by SpaceX due to a helium leak on the launch tower. And to round out the helium woes, we now have news that the Peregrine mission, which was supposed to carry the first commercial lander to the lunar surface but instead ended up burning up in the atmosphere and crashing into the Pacific, failed due to — you guessed it — a helium leak.
Continue reading “Hackaday Links: September 1, 2024”