Developing Guidelines For Sustainable Spaceflight

In the early days of spaceflight, when only the governments of the United States and the Soviet Union had the ability to put an object into orbit, even the most fanciful of futurists would have had a hard time believing that commercial entities would one day be launching sixty satellites at a time. What once seemed like an infinite expanse above our heads is now starting to look quite a bit smaller, and it’s only going to get more crowded as time goes on. SpaceX is gearing up to launch nearly 12,000 individual satellites for their Starlink network by the mid-2020s, and that’s just one of the “mega constellations” currently in the works.

Just some of the objects in orbit around the Earth

It might seem like overcrowding of Earth orbit is a concern for the distant future, but one need only look at recent events to see the first hints of trouble. On September 2nd, the European Space Agency announced that one of its research spacecraft had to perform an evasive maneuver due to a higher-than-acceptable risk of colliding with one of the first-generation Starlink satellites. Just two weeks later, Bigelow Aerospace was informed by the United States Air Force that there was a 1 in 20 chance that a defunct Russian Cosmos 1300 satellite would strike their Genesis II space station prototype.

A collision between two satellites in orbit is almost certain to be catastrophic, leaving both spacecraft either severely damaged or destroyed outright. In the worst case, the relative velocity between the vehicles is so great that the impact generates thousands of individual fragments. The resulting cloud of shrapnel can circle the Earth for years or even decades, threatening to tear apart any spacecraft unlucky enough to pass through it.

Fortunately, avoiding these collisions shouldn’t be difficult, assuming everyone can get on the same page before it’s too late. The recently formed Space Safety Coalition (SSC) is made up of more than twenty aerospace companies that recognize the importance of taking proactive steps to ensure humanity retains unfettered access to outer space, and it aims to do so by establishing some common “Rules of the Road” for future spacecraft.

Continue reading “Developing Guidelines For Sustainable Spaceflight”

DNS-over-HTTPS Is The Wrong Partial Solution

Openness has been one of the defining characteristics of the Internet for as long as it has existed, and much of its traffic is still passed around without any form of encryption. Plenty of requests for HTML pages and their associated content are made in plain text, and the responses come back the same way, even though HTTPS has been around since 1994.

But sometimes there’s a need for security and/or privacy. While encryption of internet traffic has become widespread for online banking and shopping, the privacy-preserving aspects of many other internet protocols haven’t kept pace. In particular, when you look up a website’s IP address by hostname, the DNS request is almost always transmitted in plain text, allowing every computer and ISP along the way to determine what website you were browsing, even if you use HTTPS once the connection is made.
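To make that concrete, here is a minimal sketch of what a stub resolver does on the wire, using only Python’s standard library. It hand-builds a wire-format query for an A record and sends it over plain UDP to a public resolver; the resolver address and hostname are just examples, and the point is that the name being looked up sits in the packet in cleartext for anyone on the path to read.

```python
# Minimal sketch of an ordinary, unencrypted DNS lookup. The resolver
# address (1.1.1.1) and the hostname are only examples.
import socket
import struct

def build_query(hostname: str) -> bytes:
    # Header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, root terminator, QTYPE=A, QCLASS=IN
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in hostname.split(".")
    )
    return header + labels + b"\x00" + struct.pack(">HH", 1, 1)

query = build_query("www.example.com")
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(query, ("1.1.1.1", 53))   # plain UDP, port 53, no encryption
    response, _ = sock.recvfrom(512)

# The hostname is right there in the packet for any middlebox to read.
print(b"example" in query)                # True
print(len(response), "bytes of equally unencrypted response")
```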

The idea of also encrypting DNS requests isn’t exactly new, with the first attempts starting in the early 2000s, in the form of DNSCrypt, DNS over TLS (DoT), and others. Mozilla, Google, and a few other large internet companies are pushing a new method to encrypt DNS requests: DNS over HTTPS (DoH).

DoH not only encrypts the DNS request, it also sends it to a “normal” web server rather than a DNS server, making DNS traffic essentially indistinguishable from ordinary HTTPS. This is a double-edged sword. While it protects the DNS request itself, just as DNSCrypt or DoT do, it also makes it impossible for the folks in charge of security at larger firms to monitor DNS traffic for signs of spoofing, and it moves responsibility for a critical networking function from the operating system into an application. It also doesn’t do anything to hide the IP address of the website you just looked up; you still go on to visit it, after all.
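As a rough illustration of how ordinary this looks on the wire, here is a sketch of a DoH lookup using nothing but Python’s standard library and Cloudflare’s JSON flavour of the service (the application/dns-json variant); the endpoint and query parameters are Cloudflare-specific and used purely as an example.

```python
# Sketch of a DoH lookup via Cloudflare's JSON API; the endpoint and
# parameters are Cloudflare-specific, shown only as an example.
import json
import urllib.request

req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?name=www.example.com&type=A",
    headers={"Accept": "application/dns-json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    answer = json.load(resp)

# To any observer on the network, this is just another HTTPS request
# to TCP port 443.
for record in answer.get("Answer", []):
    print(record["name"], record["type"], record["data"])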

And in comparison to DoT, DoH centralizes information about your browsing in a few companies: at the moment Cloudflare, who says they will throw your data away within 24 hours, and Google, who seems intent on retaining and monetizing every detail about everything you’ve ever thought about doing.

DNS and privacy are important topics, so we’re going to dig into the details here.

Name Servers and Trust

The concept of the Domain Name System dates all the way back to the ARPANET days, when a single text file on each ARPANET node – called HOSTS.TXT – contained the mapping of system names on the network to their numeric addresses. With so few hosts, it was easy enough to keep this file correct by hand. As the network grew, however, it became unrealistic to keep the central copy and every local copy of this file in sync, and by the early 1980s efforts were underway to create a system that automated the process.

The first DNS name server (Berkeley Internet Name Domain Server, or BIND) was written in 1984 by a group of UC Berkeley students, based on RFC 882 and RFC 883. By 1987 the DNS standard had been revised a number of times, resulting in RFC 1034 and RFC 1035, which have largely remained unchanged since then.

The essential structure of DNS is a tree-like hierarchy, with its nodes and leaves subdivided into zones. The DNS root zone is the top-level zone; it is served by thirteen root server clusters, which form the authoritative DNS root servers. Any newly set up DNS server (e.g. at an ISP or at a company) will end up getting its DNS records from at least one of those servers.

Each further DNS zone adds another domain to the name system. Each country tends to manage its own top-level domains, while generic domains that aren’t bound to any specific country (like .org and .com) are managed by separate entities. Resolving a domain name thus means starting with the top-level domain (e.g. .com), then the name (e.g. ‘google’), and finally any sub-domains. This can involve a few trips through DNS zones if the requested data has not been cached already.
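To see that walk in action, here is a rough sketch of an iterative lookup, assuming the third-party dnspython package (pip install dnspython). It starts at one of the root servers and follows glue records down through the zones; a real resolver also handles CNAMEs, missing glue, caching, and retries, which this deliberately ignores.

```python
# Rough sketch of iterative resolution: start at a root server and follow
# referrals down through the zones described above. Requires dnspython.
import dns.message
import dns.query
import dns.rdatatype

def walk(name: str, server: str = "198.41.0.4"):  # a.root-servers.net
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)
        if response.answer:                        # reached the authoritative zone
            return [rrset.to_text() for rrset in response.answer]
        # A referral: grab a glue A record for one of the next zone's servers
        glue = [rdata for rrset in response.additional
                for rdata in rrset if rdata.rdtype == dns.rdatatype.A]
        if not glue:
            raise RuntimeError("referral without glue; a real resolver would "
                               "resolve the NS name in a separate lookup")
        server = glue[0].address                   # descend one zone

print(walk("www.google.com."))
```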

DNSSEC: Adding Trust to DNS

Before we get around to encrypting DNS requests, it’s important to be sure that the DNS responses we receive can be trusted. The need for this became clear during the 1990s, culminating in the first workable DNS Security Extensions (DNSSEC) standard (RFC 2535) and the revised RFC 4033 (DNSSEC-bis).

A map of the internet in 2006. (Opte project, CC BY 2.5)

DNSSEC works by signing DNS records with public-key cryptography. The authenticity of a record can then be verified via a chain of trust that leads up to the keys of the DNS root zone, which acts as the trusted third party in this scheme. Domain owners generate their own keys, which are signed by the zone operator and added to the DNS.

While DNSSEC allows one to be relatively certain that the responses one gets from the DNS resolver are genuine, it does require DNSSEC support to be enabled in one’s OS. Unfortunately, few OSes ship a DNS service that is more than merely ‘DNSSEC-aware’, meaning that they do not actually validate the DNS responses. In practice, this means that today one cannot be sure that the DNS responses one receives are genuine.
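One way to see where you stand is to ask your resolver directly. The sketch below (again assuming the third-party dnspython package) sets the DNSSEC-OK bit on a query and checks whether the reply carries the AD (‘Authenticated Data’) flag, i.e. whether the resolver actually validated the answer rather than merely passing the signatures along.

```python
# Check whether a resolver validates DNSSEC: send a query with the DO bit
# set and look for the AD flag in the reply. The resolver IP is an example.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("example.com.", dns.rdatatype.A,
                               want_dnssec=True)
response = dns.query.udp(query, "1.1.1.1", timeout=5)

if response.flags & dns.flags.AD:
    print("resolver validated the answer (AD flag set)")
else:
    print("no validation: the response may or may not be genuine")
```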

The Problems with DoH

But let’s imagine that you are using DNSSEC. You’re now ready to encrypt the communication to add a level of privacy to the transaction. There are a number of motivations for keeping one’s DNS queries secret from prying eyes. The more innocent reasons include dodging corporate and ISP filters, preventing the tracking of one’s internet habits, and so on. More serious motivations include avoiding political persecution for expressing one’s views on the internet. Naturally, encrypting one’s DNS queries prevents people from snooping on those queries, but it leaves the larger security issues with DNS untouched, to say nothing of every other communication protocol.

Here the main contenders are DoT, using TLS, and the proposed DoH, using HTTPS. The most obvious difference between the two is the port they run on: DoT has a dedicated port, TCP 853, whereas DoH mixes in with other HTTPS traffic on port 443. This has the questionable benefit of making DNS queries indistinguishable from the rest of the traffic, which removes options for network operators (private and corporate) to secure their own networks, as one of the architects of DNS, Paul Vixie, pointed out on Twitter last year.
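For comparison, a DoT query is just the usual DNS message carried over a TLS connection to port 853. A minimal sketch with dnspython, where the resolver address and certificate name are examples for Cloudflare’s public resolver:

```python
# Sketch of a DNS-over-TLS query: the same DNS message as always, but
# carried over a TLS session to TCP port 853. Server details are examples.
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
response = dns.query.tls(query, "1.1.1.1", timeout=5,
                         server_hostname="one.one.one.one")

for rrset in response.answer:
    print(rrset)
```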

The second main difference is that whereas DoT simply sends DNS queries over a TLS connection, DoH is essentially DNS-over-HTTP-over-TLS, complete with its own MIME media type (application/dns-message) and significant added complexity. Because DoH mixes in with existing protocols, every DNS request and response has to pass through an HTTPS stack. For embedded applications this is a nightmare scenario, and it is also incompatible with nearly every piece of existing security hardware out there.
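In practice the wire-format variant looks like this: the binary DNS message becomes the body of an HTTPS POST carrying the application/dns-message media type. A sketch using dnspython to build and parse the message and the standard library for the HTTP layer; the Cloudflare endpoint is just an example.

```python
# Sketch of RFC 8484 "wire format" DoH: a binary DNS message POSTed over
# HTTPS with the application/dns-message media type. Endpoint is an example.
import urllib.request
import dns.message
import dns.rdatatype

query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query",
    data=query.to_wire(),
    headers={"Content-Type": "application/dns-message",
             "Accept": "application/dns-message"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    response = dns.message.from_wire(resp.read())

for rrset in response.answer:
    print(rrset)
```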

DoT has the further advantage of already being widely implemented and in use for far longer than DoH, with many parties, including Cloudflare, Google, some national ISPs, and standard DNS server software like BIND supporting it out of the box. On Android Pie (version 9, for those keeping track) and later, DNS over TLS will be used by default if the selected DNS resolver supports DoT.

Why switch over to DoH just as DoT is finally gaining traction? When apps like Firefox circumvent the system’s DoT-based DNS and use their own DoH resolver instead, the result is a highly opaque security situation. Moving DNS resolution into individual applications, as we see happening now, seems like a massive step backwards. Do you know which DNS resolver each application uses? And if the traffic blends into everything else on TCP port 443, how would you even know?

Encryption Doesn’t Stop Tracking

Two big parties behind DNS over HTTPS are Cloudflare and Mozilla, the latter of which has produced this cutesy little cartoon in which they try to explain DoH. Unsurprisingly, they completely omit any mention of DNSSEC (despite it being referenced as ‘crucial’ in RFC 8484), instead proposing something called Trusted Recursive Resolver (TRR), which seems to boil down to ‘use a trustworthy DNS resolver’, which for Mozilla means ‘Cloudflare’.

Unrelated to DoH, they mention a standard called ‘QNAME minimization’ (RFC 7816), which aims to reduce the amount of non-critical information a DNS resolver sends along to the upstream DNS servers, as covered in this Verisign blog article. As noted, this standard has no bearing on DoH and would work just fine without any DNS encryption. Like DNSSEC, it is a further evolution of the DNS standard that improves its security and privacy aspects.
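To give a feel for what this changes, here is a small, non-network illustration (in Python, with a purely hypothetical name) of which part of a query each server on the resolution path gets to see with and without QNAME minimization:

```python
# Illustration only: what each server on the resolution walk learns about
# the name being looked up, with and without QNAME minimization (RFC 7816).
full_name = "login.mail.example.com."

# Without minimization, the full name is sent to every server on the walk:
without_minimization = {
    "root server":        full_name,
    ".com server":        full_name,
    "example.com server": full_name,
}

# With minimization, each zone sees only one label more than it already owns:
with_minimization = {
    "root server":        "com.",
    ".com server":        "example.com.",
    "example.com server": full_name,
}

for server in without_minimization:
    print(f"{server:22} {without_minimization[server]:25} -> {with_minimization[server]}")
```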

The kicker is in the ‘What isn’t fixed by TRR with DoH?’ section, however. As pointed out by experts on many occasions, encrypting DNS doesn’t prevent tracking. Any subsequent requests to the IP address that one so secretly resolved would still be visible clear as day. Everybody will still know that you’re visiting Facebook.com, or that risky dissident website. No amount of DNS and internet traffic encryption will hide information that is crucial to the functioning of a network like the internet.

The Future Internet is a Single Point of Failure?

Mozilla’s answer to the IP tracking problem is essentially to say that there is no problem, because of the Cloud. As more and more websites and content delivery networks (CDNs) get lumped onto a handful of services (Cloudflare, Azure, AWS, etc.), a single IP address says less and less about which site is actually being visited; you just have to trust whichever cloud service you pick not to steal your data, or go down for a day.

This year there was a massive downtime event on June 24th, when a configuration mistake at Verizon left Cloudflare, Amazon, Linode, and many others unavailable for much of the day. Then on July 2nd Cloudflare as a whole went down for about half an hour, taking many websites that rely on its services down with it.

Coincidentally, Microsoft’s cloud-hosted Office 365 also suffered a multi-hour outage that same day, leaving many of its users stranded and unable to use the service. Meanwhile, over the US Labor Day weekend, a power outage at AWS’s US-East-1 data center led to 1 TB of customer data vanishing as the hardware it was stored on went FUBAR. Clearly there are some issues to be ironed out with this ‘centralizing the internet is good’ message.

What’s Old is New Again

It’s in many ways astounding that in this whole discussion about privacy and tracking there’s no mention of Virtual Private Networks (VPNs). These solve the issues of encrypting your data and DNS queries, of hiding your IP address, and much more, by simply moving the point where your PC or other internet-enabled device ‘exists’ on the internet. VPNs have been commonly used by dissidents in authoritarian regimes for decades to get around internet censorship, and along with specialized tools such as the Tor network they are a crucial element of online freedom.

If one can trust a big commercial entity like Cloudflare in a scheme like DoH, then finding a trustworthy VPN provider who won’t store or sell your data should be just as easy. Even better, the Opera browser comes with a free, built-in proxy that offers many of the benefits of a VPN.

In summary, one can state that DoH honors its acronym by doing poorly what DoT already does. The focus should instead be on getting DNSSEC fully implemented everywhere, along with DoT and QNAME minimization. And if dodging tracking for true privacy is your goal, then you should be looking at VPNs, especially if you’re a dissident trapped in some authoritarian regime.

Worn Out EMMC Chips Are Crippling Older Teslas

It should probably go without saying that the main reason most people buy an electric vehicle (EV) is because they want to reduce or eliminate their usage of gasoline. Even if you aren’t terribly concerned about your ecological footprint, the fact of the matter is that electricity prices are so low in many places that an electric vehicle is cheaper to operate than one which burns gas at $2.50+ USD a gallon.

Another advantage, at least in theory, is reduced overall maintenance cost. While a modern EV will of course be packed with sensors and complex onboard computer systems, the same could be said for nearly any internal combustion engine (ICE) car that rolled off the lot in the last decade. But mechanically, there’s a lot less that can go wrong on an EV. For the owner of an electric car, the days of oil changes, fouled spark plugs, and the looming threat of a blown head gasket are all in the rear-view mirror.

Unfortunately, it seems the rise of high-tech EVs is also ushering in a new era of unexpected failures and maintenance woes. Case in point, some owners of older model Teslas are finding they’re at risk of being stranded on the side of the road by a failure most of us would sooner associate with losing some documents or photos: a disk read error.

Continue reading “Worn Out EMMC Chips Are Crippling Older Teslas”

The Final Days Of The Fire Lookouts

For more than a century, the United States Forest Service has employed men and women to monitor vast swaths of wilderness from isolated lookout towers. Armed with little more than a pair of binoculars and a map, these lookouts served as an early warning system for combating wildfires. Eventually the towers would be equipped with radios, and later still a cellular or satellite connection to the Internet, but beyond that the job of fire lookout has changed little since the 1900s.

Like the lighthouse keepers of old, there’s a certain romance surrounding the fire lookouts. Sitting alone in their tower, the majority of their time is spent looking at a horizon they’ve memorized over years or even decades, carefully watching for the slightest whiff of smoke. The isolation has been a prison for some, and a paradise for others. Author Jack Kerouac spent the summer of 1956 in a lookout tower on Desolation Peak in Washington state, an experience which he wrote about in several works including Desolation Angels.

But slowly, in a change completely imperceptible to the public, the era of the fire lookouts has been drawing to a close. As technology improves, the idea of perching a human on top of a tall tower for months on end seems increasingly archaic. Many are staunchly opposed to the idea of automation replacing human workers, but in the case of the fire lookouts, it’s difficult to argue against it. Computer vision offers an unwavering eye that can detect even the smallest column of smoke amongst acres of woodland, while drones equipped with GPS can pinpoint its location and make on-site assessments without risk to human life.

At one point, the United States Forest Service operated more than 5,000 permanent fire lookout towers, but today that number has dwindled into the hundreds. As this niche job fades even further into obscurity, let’s take a look at the fire lookout’s most famous tool, and the modern technology poised to replace it.

Continue reading “The Final Days Of The Fire Lookouts”

Europeans Now Have The Right To Repair – And That Means The Rest Of Us Probably Will Too

As anyone who has been faced with a broken, recently-manufactured household appliance will know, they can be surprisingly difficult to fix. It is often not in the interest of manufacturers keen to sell more products to make a device that lasts significantly longer than its warranty period, to design it with dismantling or repairability in mind, or to make spare parts available to extend its life. As hardware hackers we do our best with home-made replacement components, hot glue, and cable ties, but all too often another appliance that should have plenty of life left in it heads for the dump.

Czech waste management workers dismantle scrap washing machines. Tormale [CC BY-SA 3.0].
If we are at a loss to fix a domestic appliance then the general public are doubly so, and the resulting mountain of electrical waste is enough of a problem that the European Union is introducing new rules governing their repairability. The new law mandates that certain classes of household appliances and other devices for sale within the EU’s jurisdiction must have a guaranteed period of replacement part availability and that they must be designed such that they can be worked upon with standard tools. These special classes include washing machines, dishwashers, refrigerators, televisions, and more.

Let’s dig into the ramifications of this decision which will likely affect markets beyond the EU and hopefully lead to a supply of available parts useful for repair and beyond.

Continue reading “Europeans Now Have The Right To Repair – And That Means The Rest Of Us Probably Will Too”

Off-World Cement Tested For The First Time

If the current Administration of the United States has its way, humans will return to the surface of the Moon far sooner than many had expected. But even if NASA can’t meet the aggressive timeline it has been given by the White House, it seems inevitable that there will be fresh boot prints on the lunar surface within the coming decades. Between commercial operators and international competition, we’re seeing the dawn of a New Space Race, with the ultimate goal being the long-term habitation of our nearest celestial neighbor.

Apollo astronaut Harrison Schmitt covered in lunar dust after retrieving samples from the Moon

But even with modern technology, it won’t be easy, and it certainly won’t be cheap. While commercial companies such as SpaceX have significantly reduced the cost of delivering payloads to the Moon, we’ll still need every advantage to ensure the economic viability of a lunar outpost. One approach is in situ resource utilization, where instead of transporting everything from Earth, locally sourced materials are used wherever possible. This technique would not only be useful on the Moon, but many believe it will be absolutely necessary if we’re to have any chance of sending a human mission to Mars.

One of the most interesting applications of this concept is the creation of a building material from the lunar regolith. Roughly analogous to soil here on Earth, regolith is a powdery substance made up of grains of rock and micrometeoroid fragments, and contains silicon, calcium, and iron. Mixed with water, or in some proposals sulfur, it’s believed the resulting concrete-like material could be used in much the same way it is here on Earth. Building dwellings in-place with this “lunarcrete” would be faster, cheaper, and easier than building a comparable structure on Earth and transporting it to the lunar surface.

Now, thanks to recent research performed aboard the International Space Station, we have a much better idea of what to expect when those first batches of locally-sourced concrete are mixed up on the Moon or Mars. Of course, like most things related to spaceflight, the reality has proved to be a bit more complex than expected.

Continue reading “Off-World Cement Tested For The First Time”

What On Earth Is A Pickle Fork And Why Is It Adding To Boeing’s 737 Woes?

It’s fair to say that 2019 has not been a good year for the aircraft manufacturer Boeing, as its new 737 MAX has been revealed to contain a software fault that could cause the aircraft to enter a dive and crash. Now stories are circulating of another issue with the 737: some of the so-called “pickle forks” in the earlier 737NG aircraft have been found to develop cracks.

It’s a concerning story, and there are myriad theories surrounding its origin, but it should also have a reassuring angle: the painstaking system of maintenance checks that underpins the aviation industry has worked as intended, and the problem has been identified before any catastrophic failures occurred. It’s not the story Boeing needs at the moment, but they and the regulators will no doubt be working hard to produce a new design and ensure that it is fitted to the affected aircraft.

The Role of the Pickle Fork

For those of us who do not work in aviation, though, it presents a question: what on earth is a pickle fork? The coverage of the story tells us it’s something to do with attaching the wing to the fuselage, but without a handy 737 to open up and take a look at, we’re none the wiser.

Fortunately there’s a comprehensive description of one along with a review of wing attachment technologies from Boeing themselves, and it can be found in one of their patents. US9399508B2 is concerned with an active suspension system for wing-fuselage mounts and is a fascinating read in itself, but the part we are concerned with is a description of existing wing fixtures on page 12 of the patent PDF.

A cross-section of the aircraft wing fixing, in which we’ve highlighted the role of the pickle forks. (Boeing)

The pickle fork is an assembly so named for its resemblance to the kitchen utensil. It attaches firmly to each side of the fuselage and has two prongs that extend below it, where they are attached to the wing spar.

For the curious engineer with no aviation experience the question is further answered by the patent’s figure 2, which provides a handy cross-section. The other wing attachment they discuss involves the use of pins, leading to the point of the patented invention. Conventional wing fixings transmit the forces from the wing to the fuselage as a rigid unit, requiring the fuselage to be substantial enough to handle those forces and presenting a problem for designers of larger aircraft. The active suspension system is designed to mitigate this, and we’d be fascinated to hear from any readers in the comments who might be able to tell us more.

We think it’s empowering that a science-minded general public can look more deeply at a component singled out in a news report by digging into the explanation in the Boeing patent. We don’t envy the Boeing engineers in their task as they work to produce a replacement, and we hope to hear of their solution as it appears.

[via Hacker News]

[Header image: AMX Boeing 737 XA-PAM by Jean-Philippe Boulet CC-BY 3.0]