Accidental Climate Engineering With Disintegrating Satellites

For decades, humankind has entertained the notion that we might tweak the Earth’s atmosphere or biosphere to, for example, undo the harms of climate change, or otherwise nudge the climate to our benefit. These schemes often involve spreading substances into parts of the atmosphere to reflect incoming solar radiation, retain thermal radiation, or induce rain.

Yet despite how limited in scope these intentional experiments have been so far – most proposals die before ever being implemented – we have already embarked on a potentially planet-wide atmospheric reconfiguration that could affect life on Earth for centuries to come. This accidental experiment comes in the form of rocket stages, discarded satellites, and other human-made space litter burning up in the atmosphere at ever-increasing rates.

Rather than burning up cleanly into harmless components, this debris introduces metals and other compounds into the upper layers of the atmosphere. What the long-term effects will be is still uncertain, but with the most dire scenarios involving significant climate change and ozone layer degradation, we ought to figure this one out sooner rather than later.

Continue reading “Accidental Climate Engineering With Disintegrating Satellites”

The Curse Of The Everything Device

In theory, a single device that combines the features of multiple dedicated devices is a great idea, saving a lot of space, time, and money. In reality, however, it mostly means that these features conflict with each other, saddle us with more complex devices that don’t last nearly as long, and become veritable vampires for our precious attention.

Whereas in the olden days a phone was just used for phone calls, now it’s also a video and photo camera, multimedia computer, pager, and more, but at any point an incoming phone call can interrupt what you are doing. There’s also always the temptation of doom scrolling on one of the infinite ‘social media’ apps. Even appliances like televisions and refrigerators are like that now, adding ‘smarts’ that also vie for your attention, whether it’s with advertisements, notifications, or worse.

Meanwhile, trying to simply do some writing work on your PC is a battle against easy distractions, leading people to flee to the digital equivalent of typewriters out of sheer desperation. Similarly, we increasingly see ‘dumb’ phones and other single-task devices making a comeback, both as commercial options and as DIY projects by the community.

Are we seeing the end of the ‘everything device’ and a return to a simpler time?

Continue reading “The Curse Of The Everything Device”

The Requirements Of AI

The media is full of breathless reports that AI can now code and human programmers are going to be put out to pasture. We aren’t convinced. In fact, we think the “AI revolution” is just a natural evolution that we’ve seen before. Consider, for example, radios. Early on, if you wanted to have a radio, you had to build it. You may have even had to fabricate some or all of the parts. Even today, winding custom coils for a radio isn’t that unusual.

But radios became more common. You can buy the parts you need. You can even buy entire radios on an IC. You can go to the store and buy a radio that is probably better than anything you’d cobble together yourself. Even with store-bought equipment, tuning a ham radio used to be a technically challenging task. Now, you punch a few numbers in on a keypad.

The Human Element

What this misses, though, is that there’s still a human somewhere in the process. Just not as many. Someone has to design that IC. Someone has to conceive of it to start with. We doubt, say, the ENIAC or EDSAC was hand-wired by its designers. They figured out what they wanted, and an army of technicians probably did the work. Few, if any, of them could have envisioned the machine, but they could build it.

Does that make the designers less? No. If you write your code with a C compiler, should assembly programmers look down on you as inferior? Of course, they probably do, but should they?

If you have ever done any programming for most parts of the government and certain large companies, you probably know that system engineering is extremely important in those environments. An architect or system engineer collects requirements that have very formal meanings. Those requirements are decomposed through several levels. At the end, any competent programmer should be able to write code to meet the requirements. The requirements also provide a good way to test the end product.
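The link between a well-decomposed requirement and a testable piece of code can be sketched in a few lines. Everything below — the requirement number, the checksum scheme, the parser — is invented purely for illustration:

```python
# Hypothetical illustration: a decomposed requirement maps directly to a test.
# Invented requirement REQ-104: "The telemetry parser shall reject any frame
# whose trailing checksum byte does not match its payload."

def checksum(payload: bytes) -> int:
    """XOR checksum -- a stand-in for whatever the spec would actually mandate."""
    result = 0
    for b in payload:
        result ^= b
    return result

def parse_frame(frame: bytes):
    """Return the payload if the trailing checksum byte matches, else None."""
    payload, chk = frame[:-1], frame[-1]
    if checksum(payload) != chk:
        return None  # REQ-104: reject mismatched frames
    return payload

# The requirement doubles as the acceptance test:
good = bytes([1, 2, 3]) + bytes([checksum(bytes([1, 2, 3]))])
bad = bytes([1, 2, 3, 0xFF])
assert parse_frame(good) == bytes([1, 2, 3])
assert parse_frame(bad) is None
```

Any competent programmer could implement `parse_frame` from the requirement alone, and the requirement itself tells you exactly how to verify the result.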

Continue reading “The Requirements Of AI”

Real LED TVs Are Finally Becoming A Thing

Once upon a time, the cathode ray tube was pretty much the only type of display you’d find in a consumer television. As the analog broadcast world shifted to digital, we saw the rise of plasma displays and LCDs, which offered greater resolution and much slimmer packaging. Then came the confusingly named ‘LED TV’, which was merely an LCD panel with an LED backlight. The LEDs were just lamps, with the liquid crystal doing all the work of displaying an image.

Today, however, we are seeing the rise of true LED displays. Sadly, decades of muddled marketing have polluted the terminology, making it a confusing space for the modern television enthusiast. We’ll explore how these displays work and disambiguate what they’re being called in the marketplace.

Continue reading “Real LED TVs Are Finally Becoming A Thing”

Practice Makes Perfect: The Wet Dress Rehearsal

If you’ve been even casually following NASA’s return to the Moon, you’re likely aware of the recent Wet Dress Rehearsal (WDR) for the Artemis II mission. You probably also heard that things didn’t go quite to plan: although the test was ultimately completed and the towering Space Launch System (SLS) rocket was fully loaded with propellant, a persistent liquid hydrogen leak and a few other incidental issues led the space agency to delay further testing for at least a month while engineers make adjustments to the vehicle.

This constitutes a minor disappointment for fans of spaceflight, but when you’re strapping four astronauts onto more than five million pounds of propellants, there’s no such thing as being too cautious. In fact, there’s a school of thought that says if a WDR doesn’t shake loose some gremlins, you probably weren’t trying hard enough. Simulations and estimates only get you so far; the real thing is always more complex, and there’s bound to be something you didn’t account for ahead of time.

Continue reading “Practice Makes Perfect: The Wet Dress Rehearsal”

How Vibe Coding Is Killing Open Source

Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case based on observed patterns and some modelling. Their warnings mostly center around the way that user interaction is pulled away from OSS projects, while also making starting a new OSS project significantly harder.

“Vibe coding” here is defined as software development assisted by an LLM-backed chatbot, where the developer asks the chatbot to effectively write the code for them. Arguably this turns the developer into more of a customer of the chatbot, with no requirement to understand what the generated code does, only that it does what the chatbot was asked to create.

This also replaces the typically more organic selection of libraries and tooling with whatever was most prevalent in the LLM’s training data. Even popular projects see website visits decline as downloads and documentation are supplanted by chatbot interactions, reducing opportunities to promote commercial plans, sponsorships, and community forums. Much of this is reflected in the plummeting usage of community sites like Stack Overflow.

Continue reading “How Vibe Coding Is Killing Open Source”

Size (and Units) Really Do Matter

We miss the slide rule. It isn’t so much that we liked getting an inexact answer using a physical moving object. But to successfully use a slide rule, you need to be able to roughly estimate the order of magnitude of your result. The slide rule’s computation of 2.2 divided by 8 is the same as it is for 22/8 or 220/0.08. You have to interpret the answer based on your sense of where the true answer lies. If you’ve ever had some kid at a fast food place enter the wrong numbers into a register and then hand you a ridiculous amount of change, you know what we mean.
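The point about significant digits versus order of magnitude is easy to demonstrate. Here is a quick sketch (the `significand` helper is our own invention, not a standard function) showing that all three divisions look identical on a slide rule’s scales:

```python
# A slide rule only handles significant digits; the user supplies the order
# of magnitude. This shows why 2.2/8, 22/8, and 220/0.08 all read the same.
import math

def significand(x: float) -> float:
    """Reduce x to the range [1, 10): the part a slide rule actually shows."""
    exponent = math.floor(math.log10(abs(x)))
    return x / 10 ** exponent

results = [2.2 / 8, 22 / 8, 220 / 0.08]
# 0.275, 2.75, 2750.0 -- wildly different magnitudes, identical digits
sigs = [significand(r) for r in results]
assert all(math.isclose(s, 2.75) for s in sigs)
```

The slide rule gives you the `2.75`; deciding whether the answer is 0.275 or 2,750 is entirely on you, which is exactly the estimation habit the article laments losing.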

Recent press reports highlighted a paper from Nvidia that claimed a data center consuming a gigawatt of power could require half a million tons of copper. If you aren’t an expert on data center power distribution and copper, you could take that number at face value. But as [Adam Button] reports, you should probably be suspicious of this number. It is almost certainly a typo. We wouldn’t be surprised if you click on the link and find it fixed, but it caused a big news splash before anyone noticed.

Thought Process

Best estimates of the total copper on the entire planet are about 6.3 billion metric tons. We’ve actually only found a fraction of that and mined even less. Of the 700 million metric tons of copper we actually have in circulation, there is a demand for about 28 million tons a year (some of which is met with recycling, so even less new copper is produced annually).

Simple math tells us that a single data center would consume nearly 1.8% of a year’s global copper output. While that could be true, it seems suspicious on its face.

Digging further in, you’ll find the paper mentions 200 kg per megawatt. So a gigawatt should require 200,000 kg, which is only 200 metric tons. That’s a far cry from 500,000 tons. Those 200 metric tons are about 440,000 pounds; we suspect someone rounded that up to “half a million pounds” and then flipped pounds to tons.
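The unit arithmetic is easy to verify yourself. A quick sanity check starting from the paper’s 200 kg-per-megawatt figure:

```python
# Sanity-checking the copper figure: 200 kg per megawatt, scaled to 1 GW.
KG_PER_MW = 200
MW_PER_GW = 1000
LBS_PER_KG = 2.20462

kg_per_gw = KG_PER_MW * MW_PER_GW   # 200,000 kg for a gigawatt
tonnes = kg_per_gw / 1000           # 200 metric tons
pounds = kg_per_gw * LBS_PER_KG     # ~440,924 lbs, i.e. "half a million pounds"

assert tonnes == 200
assert round(pounds) == 440_924
# Mislabeling pounds as tons inflates the figure by a factor of ~2,200 --
# which is how 200 metric tons becomes a headline 500,000 tons.
```

This is exactly the slide-rule habit in action: before trusting a number, check that its order of magnitude even makes sense.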

Continue reading “Size (and Units) Really Do Matter”