It’s easy to imagine that once a spacecraft leaves Earth’s atmosphere and is in a stable orbit, the most dangerous phase of the mission is over. After all, that’s when we collectively close the live stream and turn our attention back to terrestrial matters. Once the fire and fury of the launch are over, all the excitement is done. From that point on, it’s just years of silently sailing through the vacuum of space. What’s the worst that could happen?
Unfortunately, satellite radio provider Sirius XM just received a harsh reminder that there’s still plenty that can go wrong after you’ve slipped Earth’s surly bonds. Despite a flawless launch in early December 2020 on a SpaceX Falcon 9 and a reportedly uneventful trip to its designated position in geostationary orbit approximately 35,786 km (22,236 mi) above the planet, their brand new SXM-7 broadcasting satellite appears to be in serious trouble.
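That 35,786 km figure isn’t arbitrary: it falls directly out of Kepler’s third law once you require the satellite’s orbital period to match one sidereal day, so it appears fixed over a point on the equator. A quick back-of-the-envelope check (the constants are standard published values, not taken from the article):

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and equatorial radius (m)
MU_EARTH = 3.986004418e14
R_EARTH = 6.378137e6

# One sidereal day in seconds: Earth's rotation period relative to the stars
T_SIDEREAL = 86164.0905

# Kepler's third law: T^2 = 4*pi^2 * a^3 / mu  =>  a = (mu * T^2 / 4*pi^2)^(1/3)
a = (MU_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_km = (a - R_EARTH) / 1000
print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Run it and you get the same altitude quoted in every geostationary launch press release, which is a nice sanity check that there’s only one place SXM-7 could have been headed.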
Maxar Technologies, prime contractor for the SXM-7, says they’re currently trying to determine what’s gone wrong with the 7,000 kilogram satellite. In a statement, the Colorado-based aerospace company claimed they were focused on “safely completing the commissioning of the satellite and optimizing its performance.” But the language used by Sirius XM in their January 27th filing with the U.S. Securities and Exchange Commission was notably more pessimistic. No mention is made of bringing SXM-7 online, and instead, the company makes it clear that their existing fleet of satellites will be able to maintain service to their customers until a replacement can be launched.
So what happened, and more importantly, is there any hope for SXM-7? Neither company has released any concrete details, and given the amount of money on the line, there’s a good chance the public won’t get the full story for some time. But we can theorize a bit based on what we do know, and make some predictions about where things go from here.
Today the National Science Foundation released a pair of videos that document the collapse of the Arecibo Observatory in incredible detail. A wide shot, apparently taken from the Visitors Center, shows the 900 ton instrument platform breaking free and swinging on the remaining support cables until it smashes into the edge of the dish. The second clip, recorded by an airborne drone, is focused directly on the cables at the moment they failed. Both can be seen in the video embedded below.
Together, they produce an invaluable visual record of what finally brought the iconic radio telescope down. As was predicted by engineers earlier in the month, the failure of another support cable on tower 4 triggered a chain reaction that brought the entire platform crashing down onto the 305 meter reflector. Footage from a drone observing the top of tower 4 shows that the entire sequence, from the first visible wire break to the remaining cables being torn from their mounts, took only five seconds. While some initially doubted the NSF’s determination that it was too dangerous to repair Arecibo, this footage seems to prove just how tenuous the structural integrity of the Observatory really was.
These videos will hopefully help investigators who still need to determine why the cables failed in the first place. The cable that failed in August didn’t snap, it simply pulled loose from its mount. It was suspected that the cable may have been incorrectly installed, but as it was only a backup, the situation was not seen as critical. But when the second cable failed in November, it was found to have snapped at just 60% of its minimum breaking strength.
This immediately called into question the condition of the remaining cables, and ultimately led to the decision by the NSF to proceed with a controlled demolition of the Observatory that would preserve as much of the scientific equipment as possible. Unfortunately, the remaining cables didn’t last long enough to put that plan into action.
[Tweepy]’s TV stopped working, and the experience is a brief reminder that if a modern appliance fails, it is worth taking a look inside because the failure might be something simple. In this case, the dead TV was actually a dead LED backlight, and the fix was so embarrassingly simple that [Tweepy] is tempted to chalk it up to negligently poor DFM (design for manufacture) at best, or even some kind of effort at planned obsolescence at worst.
What happened is this: the TV appeared to stop working, but one could still make out screen content while shining a bright light on the screen. Seeing this, [Tweepy] deduced that the backlight had failed, and opened up the device to see if it could be repaired. However, the reason for the backlight failure was a surprise. It was not the power supply, nor even any of the LEDs themselves; the whole backlight wouldn’t turn on because of a cheap little PCB-to-PCB connector, and the two small spring contacts inside that had failed.
From the outside things looked okay, but wiggling the connector made the backlight turn on and off, so the connection was clearly bad. Investigating further, [Tweepy] saw that the contact points of the PCBs and the two little conductors inside the connector showed clear signs of arcing and oxidation, leading to a poor connection that eventually failed, resulting in a useless TV. The fix wasn’t to clean the contacts; the correct fix was to replace the connector with a soldered connection.
Using that cheap little connector doubtlessly saved some assembly time at the factory, but it also led to failure within a fairly short amount of time. Had [Tweepy] not been handy with a screwdriver (or not bothered to investigate), the otherwise working TV would likely have ended up in a landfill.
It serves as a good reminder to make some time to investigate failures of appliances, even if one’s repair skills are limited, because the problem might be a simple one. Planned obsolescence is a tempting doorstep upon which to dump failures like this, but a good case can be made that planned obsolescence isn’t really a thing, even if manufacturers compromising products in one way or another certainly is.
After a decade in development, the Boeing CST-100 “Starliner” lifted off from pad SLC-41 at the Cape Canaveral Air Force Station a little before dawn this morning on its first ever flight. Officially referred to as the Boeing Orbital Flight Test (Boe-OFT), this uncrewed mission was intended to verify the spacecraft’s ability to navigate in orbit and safely return to Earth. It was also planned to be a rehearsal of the autonomous rendezvous and docking procedures that will ultimately be used to deliver astronauts to the International Space Station; a capability NASA has lacked since the 2011 retirement of the Space Shuttle.
Unfortunately, some of those goals are now unobtainable. Due to a failure that occurred just 30 minutes into the flight, the CST-100 is now unable to reach the ISS. While the craft remains fully functional and in a stable orbit, Boeing and NASA have agreed that under the circumstances the planned eight day mission should be cut short. While there’s still some hope that the CST-100 will have the opportunity to demonstrate its orbital maneuverability during the now truncated flight, the primary focus has switched to the deorbit and landing procedures which have tentatively been moved up to the morning of December 22nd.
While official statements from all involved parties have remained predictably positive, the situation is a crushing blow to both Boeing and NASA. Just days after announcing that production of their troubled 737 MAX airliner would be suspended, the last thing that Boeing needed right now was another high-profile failure. For NASA, it’s yet another in a long line of setbacks that have made some question if private industry is really up to the task of ferrying humans to space. This isn’t the first time a CST-100 has faltered during a test, and back in April, a SpaceX Crew Dragon was obliterated while its advanced launch escape system was being evaluated.
We likely won’t have all the answers until the Starliner touches down at the White Sands Missile Range and Boeing engineers can get aboard, but ground controllers have already started piecing together an idea of what happened during those first critical moments of the flight. The big question now is, will NASA require Boeing to perform a second Orbital Flight Test before certifying the CST-100 to carry a human crew?
Let’s take a look at what happened during this morning’s launch.
Sometimes, a project turns out to be harder than expected at every turn and the plug gets pulled. That was the case with [Chris Fenton]’s efforts to gain insight into his curling game by adding sensors to monitor the movement of curling stones as well as the broom action. Luckily, [Chris] documented his efforts and provided us all with an opportunity to learn. After all, failure is (or should be) an excellent source of learning.
The first piece of hardware was intended to log curling stone motion and use it as a way to measure the performance of the sweepers. [Chris] wanted to stick a simple sensor brick made from a Teensy 3.0 and IMU to a stone and log all the motion-related data. The concept is straightforward, but in practice it wasn’t nearly as simple. The gyro, which measures angular velocity, did a good job of keeping track of the stone’s spin but the accelerometer was a different story. An accelerometer measures how much something is speeding up or slowing down, but it simply wasn’t able to properly sense the gentle and gradual changes in speed that the stone underwent as the ice ahead of it was swept or not swept. In theory a good idea, but in practice it ended up being the wrong tool for the job.
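Some rough numbers make it clear just how tall an order this was for the accelerometer. Using illustrative figures (assumed here, not measured values from [Chris]’s logs), a stone released at about 2 m/s that glides to a stop over roughly 25 seconds is decelerating at only about 8 milli-g on average, and the *difference* sweeping makes is a small fraction of that, which puts it right down among the noise and bias drift of a consumer MEMS part:

```python
# Rough feasibility check: is a curling stone's deceleration measurable
# with a hobby-grade MEMS accelerometer? All figures are illustrative
# assumptions, not measured values.

G = 9.81  # standard gravity, m/s^2

release_speed = 2.0  # m/s, ballpark delivery speed (assumed)
glide_time = 25.0    # s, time for the stone to coast to a stop (assumed)

decel = release_speed / glide_time  # average deceleration, m/s^2
decel_mg = decel / G * 1000         # same thing in milli-g

# Assumed consumer MEMS noise density (~300 ug/sqrt(Hz)) over a 50 Hz bandwidth
noise_density_ug = 300.0
bandwidth_hz = 50.0
noise_mg = noise_density_ug * bandwidth_hz**0.5 / 1000  # RMS noise, milli-g

print(f"Average deceleration: {decel_mg:.1f} mg")
print(f"Sensor RMS noise:     {noise_mg:.1f} mg")
```

The average deceleration barely clears the noise floor, and the sweeping-induced *change* in deceleration would be a fraction of an already tiny signal, exactly the regime where accelerometer bias drift and vibration swamp what you’re trying to measure.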
The other approach [Chris] attempted was to make a curling broom with a handle that lit up differently based on how hard one was sweeping. It wasn’t hard to put an LED strip on a broom and light it up based on a load sensor reading, but what ended up sinking this project was the need to do it in a way that didn’t interfere with the broom’s primary function and purpose. Even a mediocre curler applies extremely high forces to a broom when sweeping in a curling game, so not only do the electronics need to be extremely rugged, but the broom’s shaft needs to be able to withstand considerable force. The ideal shaft would be a clear and hollow plastic holding an LED strip with an attachment for the load sensor, but no plastic was up to the task. [Chris] made an aluminum-reinforced shaft, but even that only barely worked.
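The electronics side of the broom really was the easy part: the logic amounts to mapping a load-cell reading onto a number of lit LEDs. A minimal sketch of that mapping (Python for clarity; the function name, force scale, and LED count are all invented for illustration, and the real firmware would run on a microcontroller driving the strip):

```python
def leds_to_light(force_n, max_force_n=400.0, num_leds=30):
    """Map a load-sensor reading to a count of lit LEDs.

    force_n:     current force on the broom head, in newtons
    max_force_n: force that lights the whole strip (assumed value)
    num_leds:    LEDs in the strip (assumed value)
    """
    # Clamp to [0, 1] so sensor spikes can't overflow the strip
    fraction = max(0.0, min(force_n / max_force_n, 1.0))
    return round(fraction * num_leds)

# Example readings: light sweep, hard sweep, sensor spike
for force in (50.0, 300.0, 1000.0):
    print(f"{force:6.1f} N -> {leds_to_light(force):2d} LEDs lit")
```

As [Chris] found, none of this is where the difficulty lies; the hard part was packaging it in a shaft that could survive the forces of actual sweeping.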
You may have noticed that we’re fans of the Raspberry Pi here at Hackaday. Hardly a day goes by that we don’t feature a hack that uses a Pi somewhere in the build. As useful as the Pis are, they aren’t entirely without fault. We’ve talked about the problems with the PoE hat, and published multiple articles about keeping SD cards alive. But a new failure mode has popped up that is sometimes, but not always, caused by shorting the two power rails on the board.
The Pi 3 B+ has a new PMIC (Power Management Integrated Circuit) made by MaxLinear. This chip, the MxL7704, is a big part of how the Raspberry Pi foundation managed to make the upgrades to the Pi 3 without raising the price over $35.
A quick look at the Raspberry Pi forum shows that some users have been experiencing a specific problem with their new Raspberry Pi 3 B+ units, where the power LED will illuminate but the unit will not boot. The giveaway is zero voltage on the 3v3 pin. It’s a common enough problem that it’s even mentioned in the official boot problems thread.
Make sure the probe you are measuring with does not slip and simultaneously touch any of the other GPIO pins, as that might instantly destroy your Pi. Shorting the 3V3 pin to the 5V pin in particular will prove fatal.
[Mark Rehorst] has been busy with his Ultra MegaMax Dominator (UMMD) design for a 3D printer, and one of the many things he learned in the process was how not to design a 3D printed belt clamp. In the past, we saw how the UMMD ditched the idea of a lead screw in favor of a belt-driven Z axis, but [Mark] discovered something was amiss when the belts were flopping around a little, as though they had lost tension. Re-tensioning them worked, but only for a few days. It turned out that the belt clamp design he had chosen led to an interesting failure.
The belts used were common steel-core polyurethane GT2 belts, and the clamp design uses a short segment of the same belt to lock together both ends, as shown above. It’s a simple and effective design, but one that isn’t sustainable in the longer term.
The problem was that this design led to the plastic portion of the belt stretching out and sliding over the internal steel wires. The stretching of the polyurethane is clear in the image shown here, but any belt would have had the same problem in the clamp as it was designed. [Mark] realized it was a much better idea to use a design in which the belts fold over themselves, so the strain is more evenly distributed.
[Mark] has been sharing his experiences and design process when it comes to building 3D printers, so if you’re interested be sure to check out the UMMD and its monstrous 695 mm of Z travel.