When you think of NASA, you think of high-stakes, high-cost, high-pressure engineering, and maybe the accompanying red tape. In comparison, the hobby hacker has tremendous latitude to mess up, dream big, and generally follow their bliss. Hopefully you’ll take some notes. And as always with polar extremes, the really fertile ground lies in the middle.
[Dan Maloney] and I were thinking about this yesterday while discussing the 50th flight of Ingenuity, the Mars helicopter. Ingenuity is a tech demo, carrying nothing mission critical, but just trying to figure out if you could fly around on Mars. It was planned to run for five flights, and now it’s done 50.
The last big tech demo was the Sojourner Rover. It was a small robotic vehicle the size of a microwave oven that they hoped would last seven days. It went for 85, and it gave NASA the first taste of success it needed to follow on with 20 years of Martian rovers.
Both of these projects were cheap, by NASA standards, and because they were technical demonstrators, the development teams were allowed significantly more design freedom, again by NASA standards.
None of this compares to the “heck, I’ll just hot-air an op-amp off an old project” of weekend hacking around here, but I absolutely believe that part of the tremendous success of both Sojourner and Ingenuity was due to the risks that the development teams were allowed to take. Creativity and successful design thrive on the right blend of constraint and freedom.
Will Ingenuity give birth to a long series of flying planetary rovers as Sojourner did for her rocker-bogie based descendants? Too early to tell. But I certainly hope that someone within NASA is noticing the high impact that these technical demonstrator projects have, and also noting why. The addition of a little bit of hacker spirit to match NASA’s professionalism probably goes a long way.
37 thoughts on “The Freedom To Fail”
Can’t wait for the Dragonfly mission to Titan!
As a hacker I think it’s important to aim for a predefined project failure rate. Personally, I aim for 20%, and if my success rate creeps up too far I have to choose something more demanding / risky to counterbalance, or else I get told off by the cat.
Any specific reason for 20%? Just experience? I like the idea of being intentional about the balance between chaos and order.
Yep, just past experience and what I’ve become happy with.
I’d rather aim to be successful practically always, but allow reverse feature creep to redefine “successful” as good enough, rather than as good as I wanted it to be. For instance, a dead baby Bose radio got gutted and fitted with a decent audio card, a Pi, and a battery power regulator, but no display or buttons are functional right now. It sounds really good and fits nicely in the corner of the workshop; I haven’t gone back to it yet. So that is a success…
Note: some failures along the way to getting there are possible; scrapping a part or two is to be expected. But as I haven’t got the money to make it worthwhile, I don’t take on projects I am certain to fail at many, many times…
20% is a great target. Pareto distribution.
Long live Vilfredo Pareto !!!
Freedom to fail is also one of the big reasons that SpaceX has come so far so quickly. Like the glorious failure that was the Starship launch on the 20th.
It also helps if you use the ‘Von Braun’ scale when measuring success.
0 – Failed to ignite.
1 – Ignited and immediately went out.
2 – Ignited, caught fire and exploded.
3 – Ignited, burned briefly before catching fire and exploding.
4 – Ignited, burned for several minutes without catching fire, but failed to develop any thrust.
5 – Developed enough thrust to cause it to topple over, catch fire and explode.
6 – Lifted approximately 1m before toppling over, catching fire and exploding.
No overachievers in that list.
“This is the captain… We have a little problem with our entry sequence, so we may experience some slight turbulence and then… explode.” – Malcolm Reynolds in Serenity.
“Our landing might be interesting…”
“Oh God, we’re all gonna die”
Mal – “Just get us on the ground”
Wash – “That part will happen pretty definitely”
I worked in power supply design for many years.
One engineering manager was a strong proponent of increasing burn-in stress to have a few units fail the manufacturing burn-in process. He felt if nothing ever failed, you’re not learning anything to improve the product.
You can get the same result by constantly buying cheaper parts.
Just ask the big car makers. ‘On verge of bankruptcy’ is the perfect place to keep your suppliers. Nothing bad has ever come from that.
Interesting that these demo projects often significantly outlive their expected life, whereas some of the non-demo stuff fails (like that drill which didn’t work, and Hubble’s first mirror…). Maybe confirmation bias here, but do we know any “demo” projects that completely failed? Would be interesting to compare the success rates.
Hate to tell you, but the rank and file knew it wouldn’t focus; they were threatened with serious consequences if word got out.
NASA leaks almost as badly as NATO headquarters; the saying was that anything shared with NATO would be in Moscow within the hour.
If anyone outside middle management had known about the problems with the mirror, it would have leaked.
Everyone, it seems, thought it was good until first light, though in hindsight they should have known better. The mirror manufacturer had multiple instruments to measure the shape of the mirror: some old reliable ones and a shiny new custom-built one. They used the new one to guide the final figuring step. Once figuring was done, the old and new instruments disagreed about the shape of the mirror. The manufacturer trusted the shiny new one because it was shiny and new, and ignored the old reliable ones. After the telescope was launched, the shiny new instrument was found to have been assembled incorrectly. (Source: Wikipedia)
PS: The first time I tried to post this comment, I got a “nonce verification error”, presumably because I’d left the comment field open for many hours and the nonce had expired, but the error message could be more informative, and it would be nice if it gave an opportunity to copy the comment before reloading the page. (I tried to recover it with Typio, a form recovery extension, but it doesn’t seem to work on here, maybe somehow to do with the comment form being dynamically added to the page when you click “Reply”, so I had to retype it from memory.) And, just in case this comment ends up at the bottom, as sometimes happens, it’s intended as a reply to Piotrsko’s comment that’s currently at position 5.1.
No, InSight’s drill wasn’t necessary to the baseline mission. That’s why missions like that have multiple instruments – because they know that one or more of them can fail. Even Hubble’s original mirror, though very bad, would have still allowed Hubble to do the baseline science it was funded for.
The main reason that the “demo projects” seem to “succeed spectacularly” is that the baseline mission for them is set extremely low. Having been part of NASA mission planning, I can tell you there’s significant effort involved in making sure that you “bake in” known risks, so that the baseline goal of the mission is achievable so long as you don’t have complete catastrophic failure.
“but do we know any “demo” projects that completely failed?”
Just look at the success rate of the cubesats that were launched on Artemis I. It’s like, 50% or less-ish. Easier to tell there because they all launched at once.
> Hubble’s first mirror
Fun fact: Hubble is still using that mirror to this day! There was a backup mirror, made by a different manufacturer and to the correct shape, but it would’ve been too expensive to install it (involving bringing the entire telescope back down to the ground and then launching it a second time). Instead, corrective optics were designed into the instruments to be installed on the telescope during servicing missions, starting with WFPC2, and a dedicated correction module, COSTAR, was installed to help the early instruments without correction built in (and later removed to make room for another instrument, once they all had their own correction).
“If we don’t succeed.. we run the risk of failure” .. Dan Quayle (Although he lowered the bar on dumb sayings, the current administration seems to easily stroll under it)
Thankfully, your article is a friendly reminder that failure is part of the discovery process, and sometimes reveals more about discovery than a success, which sometimes only confirms a previous idea.
It is true the middle ground is more fertile in terms of financial support, but it too can hamper simple experiments. That said, well-funded research can develop a lot of new tech, whereas independent research can discover new applications for existing tech. Sourcing cheap components can be practical for new prototypes when error tolerances are lax and strict performance levels aren’t required, which broadens the proof-of-concept potential.
Doubtful… NASA is now too deeply entrenched in bureaucracy for any of their mainstream projects to exhibit true innovation, even with bloated budgets and decades of development. Everything is “designed by committee” and too risk-averse. The future is private, now, and that’s not a bad thing. Innovation rises from adversity and risk, not safe, bloated budgets and Congressional dictates. NASA has been in a rut for fifty years… time to move on.
But I have to wonder if it’s a good idea to leave it to the likes of Bezos and Musk. Especially considering the childish temper tantrum Bezos threw because Branson beat him to space.
We don’t “leave it to the likes of Bezos and Musk”, they were just the people who decided to do it. Plenty of other people could have done so in their place, but didn’t.
We don’t always get the chance to pick our heroes.
Next year’s Hackaday competition: Put a person into space and win $50,000!!
Ralph Kramden (Jackie Gleason -The Honeymooners) predated NASA’s Moon landing… “One of these days Alice… To the moon !”
Ingenuity, still running even though it is using commercial off-the-shelf electronics NOT radiation hardened and, therefore, having FAR smaller feature size and component count with the resulting far greater computational power. I read that Ingenuity has much greater total computational capability than all past rovers combined. If highly redundant combinations of the same are allowed in rovers in the future, some impressive AI might be possible that will NEVER be possible with the late 1990’s rad-hardened CPUs currently used.
Radiation-hardened central processor with PowerPC 750 Architecture: a BAE RAD 750
Operates at up to 200 megahertz speed, 10 times the speed in Mars rovers Spirit and Opportunity’s computers
2 gigabytes of flash memory (~8 times as much as Spirit or Opportunity)
256 megabytes of dynamic random access memory
256 kilobytes of electrically erasable programmable read-only memory
Damn your lack of editor:
“having FAR smaller feature size and much larger component count”
Ingenuity CPUs. Not mentioned are the MEMS sensors.
The Snapdragon processor from Intrinsyc with a Linux operating system performs high-level functions on the helicopter. The Snapdragon processor has a 2.26 GHz Quad-core Snapdragon 801 processor with 2 GB Random Access Memory (RAM), 32 GB Flash memory, a Universal Asynchronous Receiver Transmitter (UART), a Serial Peripheral Interface (SPI), General Purpose Input/Output (GPIO), a 4000 pixel color camera, and a Video Graphics Array (VGA) black-and-white camera. This processor implements visual navigation via a velocity estimate derived from features tracked in the VGA camera, filter propagation for use in flight control, data management, command processing, telemetry generation, and radio communication.
The Snapdragon processor is connected to two flight-control (FC) Microcontroller Units (MCU) via a Universal Asynchronous Receiver/Transmitter (UART). These MCU processor units operate redundantly, receiving and processing identical sensor data to perform the flight-control functions necessary to keep the vehicle flying in the air. At any given time, one of the MCU is active with the other waiting to be hot-swapped in case of a fault. The MCU from Texas Instruments is a TMS570LC43x high-reliability automotive processor operating at 300 MHz, with 512 K RAM, 4 MB flash memory, UART, SPI, GPIO.
Stanford University teaches students how to succeed.
Stanford does not teach students how to fail, a Sandia Labs contractor said.
Stanford female graduate got laid off from her important position.
And later died from alcohol poisoning, poster told by contractor.
Am I the only one who saw this post’s title and picture and thought it was an obituary?
(Ingenuity’s doing fine, BTW, and has been flying a lot in the last few months, largely due to having to keep ahead of Perseverance’s moving no-fly zone as they traveled along a canyon. Failing to stay ahead would’ve meant falling behind and not being able to get ahead again due to the canyon being too narrow for passing.)
Please be kind and respectful to help make the comments section excellent. (Comment Policy)