The January 16th “Green Run” test of NASA’s Space Launch System (SLS) was intended to be the final milestone before the super heavy-lift booster would be moved to Cape Canaveral ahead of its inaugural Artemis I mission in November 2021. The full duration static fire test was designed to simulate a typical launch, with the rocket’s main engines burning for approximately eight minutes at maximum power. But despite a thunderous start, the vehicle’s onboard systems triggered an automatic abort after just 67 seconds, making it the latest in a long line of disappointments surrounding the controversial booster.
When it was proposed in 2011, the SLS seemed so simple. Rather than spending the time and money required to develop a completely new rocket, the super heavy-lift booster would be based on lightly modified versions of Space Shuttle components. All engineers had to do was attach four of the Orbiter’s RS-25 engines to the bottom of an enlarged External Tank and strap on a pair of similarly elongated Solid Rocket Boosters. In place of the complex winged Orbiter, crew and cargo would ride atop the rocket using an upper stage and capsule not unlike what was used in the Apollo program.
There’s very little that could be called “easy” when it comes to spaceflight, but the SLS was certainly designed to take the path of least resistance. By using flight-proven components assembled in existing production facilities, NASA estimated that the first SLS could be ready for a test flight in 2016.
If everything went according to schedule, the agency expected it would be ready to send astronauts beyond low Earth orbit by the early 2020s. Just in time to meet the aspirational goals laid out by President Obama in a 2010 speech at Kennedy Space Center, including the crewed exploration of a nearby asteroid by 2025 and a potential mission to Mars in the 2030s.
But of course, none of that ever happened. By the time SLS was expected to make its first flight in 2016, with nearly $10 billion already spent on the program, only a few structural test articles had actually been assembled. Each year NASA pushed back the date for the booster’s first shakedown flight, as the project sailed past deadlines in 2017, 2018, 2019, and 2020. After the recent engine test ended before engineers were able to collect the data necessary to ensure the vehicle could safely perform a full-duration burn, outgoing NASA Administrator Jim Bridenstine said it was too early to tell if the booster would still fly this year.
What went wrong? As commercial entities like SpaceX and Blue Origin move in leaps and bounds, NASA seems stuck in the past. How did such a comparatively simple project get so far behind schedule and over budget?
Raspberry Pi was synonymous with single-board Linux computers. No longer. The $4 Raspberry Pi Pico board is their attempt to break into the crowded microcontroller module market.
The microcontroller in question, the RP2040, is also Raspberry Pi’s first foray into custom silicon, and it’s got a dual-core Cortex M0+ with luxurious amounts of SRAM and some very interesting custom I/O peripheral hardware that will likely mean that you never have to bit-bang again. But a bare microcontroller is no fun without a dev board, and the Raspberry Pi Pico adds 2 MB of flash, USB connectivity, and nice power management.
As with the Raspberry Pi Linux machines, the emphasis is on getting you up and running quickly, and there is copious documentation: from “Getting Started” type guides for both the C/C++ and MicroPython SDKs with code examples, to serious datasheets for the Pico and the RP2040 itself, to hardware design notes and KiCAD breakout boards, and even the contents of the on-board Boot ROM. The Pico seems designed to make a friendly introduction to microcontrollers using MicroPython, but there’s enough guidance available for you to go as deep down the rabbit hole as you’d like.
Our quick take: the RP2040 is a very well thought-out microcontroller, with myriad nice design touches throughout, enough power to get most jobs done, and an innovative and very hacker-friendly software-defined hardware I/O peripheral. It’s backed by good documentation and many working examples, and at the end of the day it runs a pair of familiar ARM M0+ CPU cores. If this hits the shelves at the proposed $4 price, we can see it becoming the go-to board for many projects that don’t require wireless connectivity.
For many years now, the so-called ‘Blue Pill’ STM32 MCU development board has been a staple in the hobbyist community. Finding its origins as an apparent Maple Mini clone, the diminutive board is easy to use in breadboard projects thanks to its dual rows of 0.1″ pin sockets. Best of all, it only costs a few bucks, even if you can only really buy it via sellers on AliExpress and eBay.
Starting last year, boards with a black soldermask and an STM32F4 Access (entry-level) series MCU, such as the F401 or F411, began to appear. These boards carry the nickname ‘Black Pill’ or ‘Black Pill 2’, which is a little confusing, since F103 boards with a black soldermask also circulated for a while. The F4xx Black Pills are available via the same sources as the F103-based Blue Pill, for a similar price, but feature an MCU that’s considerably newer and more powerful. This raises the question of whether it makes sense at this point to switch to these newer boards.
When the Space Shuttle Atlantis rolled to a stop on its final mission in 2011, it was truly the end of an era. Few could deny that the program had become too complex and expensive to keep running, but even still, humanity’s ability to do useful work in low Earth orbit took a serious hit with the retirement of the Shuttle fleet. Worse, there was no indication of when or if another spacecraft would be developed that could truly rival the capabilities of the winged orbiters first conceived in the late 1960s.
While its primary function was to carry large payloads such as satellites into orbit, the Shuttle’s ability to retrieve objects from space and bring them back was arguably just as important. Throughout its storied career, sensitive experiments conducted at the International Space Station or aboard the Orbiter itself were returned gently to Earth thanks to the craft’s unique design. Unlike traditional spacecraft that ended their flight with a rough splashdown in the open ocean, the Shuttle eased itself down to the tarmac like an airplane. Once landed, experiments could be quickly unloaded and transferred to the nearby Space Station Processing Facility where science teams would be waiting to perform further processing or analysis.
For 30 years, the Space Shuttle and its assorted facilities at Kennedy Space Center provided a reliable way to deliver fragile or time-sensitive scientific experiments into the hands of researchers just a few hours after leaving orbit. It was a valuable service that simply didn’t exist before the Shuttle, and one that scientists have been deprived of ever since its retirement.
Until now. With the successful splashdown of the first Cargo Dragon 2 off the coast of Florida, NASA is one step closer to regaining a critical capability it hasn’t had for a decade. While it’s still not quite as convenient as simply rolling the Shuttle into the Orbiter Processing Facility after a mission, the fact that SpaceX can guide their capsule down into the waters near the Space Coast greatly reduces the time required to return experiments to the researchers who designed them.
The United Kingdom is unusual in requiring households that view broadcast television to purchase a licence for the privilege. Initially coming into being with the Wireless Telegraphy Act in 1923, the licence was required for anyone receiving broadcast radio, before being expanded to cover television in 1946. The funds generated from this endeavour are used as the primary funding for the British Broadcasting Corporation.
Of course, it’s all well and good to require a licence, but without some manner of enforcement, the measure doesn’t have any teeth. Among other measures, the BBC have gone as far as employing special vans to hunt down illegally operating televisions and protect its precious income.
The Van Is Coming For You
To ensure a regular income, the BBC runs enforcement operations under the TV Licencing trade name, the entity which is responsible for administering the system. Records are kept of licences and their expiry dates, and investigations are made into households suspected of owning a television who have not paid the requisite fees. To encourage compliance, TV Licencing regularly sends sternly worded letters to those who have let their licence lapse or have not purchased one. In the event this fails, they may arrange a visit from enforcement officers. These officers aren’t empowered to forcibly enter homes, so in the event a homeowner declines to cooperate with an investigation, TV Licencing will apply for a search warrant. This may be on the basis of evidence such as a satellite dish or antenna spotted on the roof of a dwelling, or a remote spied on a couch cushion through a window.
Alternatively, a search warrant may be granted on the basis of evidence gleaned from a TV detector van. Outfitted with equipment to detect a TV set in use, the vans roam the streets of the United Kingdom, often dispatched to addresses with lapsed or absent TV licences. If the van detects that a set may be operating and receiving broadcast signals, TV Licencing can apply to the court for the requisite warrant to take the investigation further. The vans are almost solely used to support warrant applications; the detection van evidence is rarely if ever used in court to prosecute a licence evader. With a warrant in hand, officers will use direct evidence such as a television found plugged into an aerial to bring an evader to justice through the courts.
The modern consumer is not overly concerned with their phone conversations being monitored. For one thing, Google and Amazon have done a tremendous job of conditioning them to believe that electronic gadgets listening to their every word isn’t just acceptable, but a near necessity in the 21st century. After all, if there was a better way to turn on the kitchen light than having a recording of your voice uploaded to Amazon so they can run it through their speech analysis software, somebody would have surely thought of it by now.
But perhaps more importantly, there’s a general understanding that the nature of telephony has changed to the point that few outside of three letter agencies can realistically intercept a phone call. Sure we’ve seen the occasional spoofed GSM network pop up at hacker cons, and there’s a troubling number of StingRays floating around out there, but it’s still a far cry from how things were back when folks still used phones that plugged into the wall. In those days, the neighborhood creep needed little more than a pair of wire strippers to listen in on your every word.
Which is precisely why products like the TA-1356 Tap Trapper were made. It was advertised as being able to scan your home’s phone line to alert you when somebody else might be listening in, whether it was a tape recorder spliced in on the pole or somebody in another room lifting the handset. You just had to clip it onto the phone distribution panel and feed it a fresh battery once in a while.
If the red light came on, you’d know something had changed since the Tap Trapper was installed and calibrated. But how did this futuristic defender of communications privacy work? Let’s open it up and take a look.
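Before cracking the case open, it’s worth sketching the likely principle. A POTS line idles around 48 VDC when every phone is on-hook and sags sharply when any device loads the line, so a gadget like this only needs to remember a calibrated baseline voltage and flag any later deviation. The real unit is analog, but the logic amounts to a comparator, as in this toy Python model (the class name, thresholds, and voltages are illustrative, not taken from the actual TA-1356):

```python
class TapTrapper:
    """Toy model of a line-voltage tap detector.

    A POTS line idles near 48 V DC when all phones are on-hook; an
    off-hook handset or a parasitic tap loads the line and shifts
    that voltage. The detector remembers a calibrated baseline and
    alarms on any significant deviation.
    """

    def __init__(self, tolerance_v=1.5):
        self.tolerance_v = tolerance_v  # allowed drift before alarming
        self.baseline_v = None

    def calibrate(self, line_voltage_v):
        """Record the known-good on-hook line voltage at install time."""
        self.baseline_v = line_voltage_v

    def check(self, line_voltage_v):
        """Return True (red light) if the line no longer matches baseline."""
        return abs(line_voltage_v - self.baseline_v) > self.tolerance_v


detector = TapTrapper()
detector.calibrate(48.0)     # installed and calibrated on a clean line
print(detector.check(47.6))  # normal drift: no alarm
print(detector.check(43.0))  # extra load on the line: red light
```

A hypothetical sketch under those assumptions; the teardown below shows what the hardware actually does.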
The history of storage devices is a constant race between the medium and the computing power it serves, with the bottleneck of shuttling billions of ones and zeros standing in the way of computing nirvana. The most recent player is Non-Volatile Memory Express (NVMe), something of a hybrid of what has come before.
The first generations of home computers used floppy disk and compact cassette-based storage, but gradually, larger and faster storage became important as personal computers grew in capabilities. By the 1990s hard drive-based storage had become commonplace, allowing many megabytes and ultimately gigabytes of data to be stored. This would drive up the need for a faster link between storage and the rest of the system, which up to that point had largely used the ATA interface in Programmed Input-Output (PIO) mode.
This led to the use of DMA-based transfers (UDMA interface, also called Ultra ATA and Parallel ATA), along with DMA-based SCSI interfaces over on the Apple and mostly server side of the computer fence. Ultimately Parallel ATA became Serial ATA (SATA) and Parallel SCSI became Serial Attached SCSI (SAS), with SATA being used primarily in laptops and desktop systems until the arrival of NVMe along with solid-state storage.
All of these interfaces were designed to keep up with the attached storage devices, yet NVMe is a bit of an odd duck considering the way it is integrated in the system. NVMe is also different for not being bound to a single interface or connector, which can be confusing. Who can keep M.2 and U.2 apart, let alone which protocol the interface speaks, be it SATA or NVMe?
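One practical way to cut through the connector confusion, at least on Linux, is that the kernel names the devices by protocol rather than by connector: drives speaking SATA (or SAS) go through the SCSI layer and appear as /dev/sdX, while NVMe drives bypass it and appear as /dev/nvmeXnY. A small hypothetical helper (no substitute for `lsblk`, which reports the transport directly) illustrates the naming:

```python
import re

def storage_protocol(device: str) -> str:
    """Guess the protocol from a Linux block-device name.

    SATA/SAS drives go through the SCSI subsystem and enumerate as
    sdX (partitions sdX1, sdX2, ...); NVMe drives enumerate as
    nvme<controller>n<namespace> (partitions ...p1, p2, ...).
    """
    name = device.removeprefix("/dev/")
    if re.fullmatch(r"nvme\d+n\d+(p\d+)?", name):
        return "NVMe"
    if re.fullmatch(r"sd[a-z]+(\d+)?", name):
        return "SATA/SAS (SCSI layer)"
    return "unknown"

print(storage_protocol("/dev/nvme0n1"))  # the first namespace on the first NVMe controller
print(storage_protocol("/dev/sda1"))     # the first partition on a SCSI-layer drive
```

Note that this only identifies the protocol: an M.2 slot can host either kind of drive, and an M.2 SATA SSD will still show up as sdX.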
Let’s take an in-depth look at the wonderful and wacky world of NVMe, shall we?