Europe’s Proposed Right-To-Repair Law: A Game Changer, Or Business As Usual?

Recently, the European Commission (EC) adopted a new proposal intended to enable and promote the repair of a range of consumer goods, including household devices like vacuum cleaners and washing machines, as well as electronic devices such as smartphones and televisions. Depending on how the European Parliament and Council vote in the next steps, this proposal may shape many details of how devices we regularly interact with work, and how they can be repaired when they no longer do.

As we have seen recently with the Digital Fair Repair Act in New York, which was signed into law last year, the devil is, as always, in the details. In the case of the New York bill, the original intent of enabling low-level repairs on defective devices was hamstrung by added exceptions and loopholes that excluded entire industries and categories of repair. Another example of the ‘right to repair’ being gamed is Apple’s much-maligned ‘self repair’ program, which is both limited and expensive.

So what are the chances that the EU will succeed where the US has not?

Continue reading “Europe’s Proposed Right-To-Repair Law: A Game Changer, Or Business As Usual?”

Gordon Moore, 1929 – 2023

The news emerged yesterday that Gordon Moore, semiconductor pioneer, one of the founders of both Fairchild Semiconductor and Intel, and the originator of the famous Moore’s Law, has died. His continuing influence over all aspects of the technology that underpins our hardware world cannot be overstated, and his legacy will remain with us for many decades to come.

A member of the so-called “Traitorous Eight” who left Shockley Semiconductor in 1957 to form Fairchild Semiconductor, he and his cohort sowed the seeds of what became Silicon Valley and the numerous companies, technologies, and products which have flowed from it. His name is probably most familiar to us through “Moore’s Law,” the rate of semiconductor development he first postulated in 1965 and revisited a decade later, which posits a doubling of integrated circuit component density every two years. It’s a law that has seemed near its end multiple times over the decades since, but successive advancements in semiconductor fabrication technology have arrived in time to maintain it. Whether it will continue to hold from the early 2020s onwards remains a hotly contested topic, but we’re guessing its days aren’t quite over yet.
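As a back-of-the-envelope illustration of what a two-year doubling period compounds to (a sketch of the arithmetic only, not a claim about any particular process node):

```python
# Moore's Law as simple compounding: density doubles every two years.
def relative_density(years: float, doubling_period: float = 2.0) -> float:
    """Relative component density after `years`, given a doubling period."""
    return 2 ** (years / doubling_period)

print(relative_density(10))  # one decade of doubling: 32x
print(relative_density(58))  # 1965 to 2023: roughly 5 x 10^8, were it to hold exactly
```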

Perhaps Silicon Valley doesn’t hold the place it might once have in the world of semiconductors, as Uber-for-cats app startups vie for attention and other semiconductor design hubs worldwide steal its thunder. But it’s difficult to find a piece of electronic technology, whether it was designed in Mountain View, Cambridge, Shenzhen, or wherever, that doesn’t have Gordon Moore and the rest of those Fairchild founders in its DNA somewhere. Our world is richer for their work, and that’s what we’ll remember Gordon Moore for.

You can read our thoughts on Moore’s famous law. If you ever wondered how Silicon Valley became the place for electronics, the story is probably much older than you think.

Plan To Jam Mobile Phones In Schools Is Madness

Mobile phones in schools. If you’re a teacher, school staffer, or a parent, you’ve likely got six hundred opinions about this very topic, and you will have had six hundred arguments about it this week. In Australia, push has come to shove, and several states have banned the use of mobile phones during school hours entirely. Others are contemplating doing the same.

In the state of New South Wales, the current opposition party has made it clear it will implement a ban if elected. Wildly, the party wants to use mobile phone jamming technology to enforce this ban whether students intend to comply or not. Let’s take a look at how jammers work in theory, and explore why using them in schools would be madness in practice.

Continue reading “Plan To Jam Mobile Phones In Schools Is Madness”

The Rise And (Eventual) Fall Of The SIM Card

There are few devices that better exemplify the breakneck pace of modern technical advancement than the mobile phone. In the span of just a decade, we went from flip phones and polyphonic ringtones to full-fledged mobile computers with quad-core processors and gigabytes of memory.

While rapid advancements in computational power are of course nothing new, the evolution of mobile devices is something altogether different. The Razr V3 of 2003 and the Nexus 5 of 2013 are so vastly different that it’s hard to reconcile the fact that they were (at least ostensibly) designed to serve the same purpose, with everything from their basic physical layout to the way the user interacts with them having undergone dramatic changes in the intervening years. Even the network technologies they use to facilitate voice and data communication are different.

Two phones, a decade apart.

Yet, there’s at least one component they share: the lowly SIM card. In fact, if you don’t mind trimming a bit of unnecessary plastic away, you could pull the SIM out of the Razr and slap it into the Nexus 5 without a problem. It doesn’t matter that the latter phone wasn’t even a twinkle in Google’s eye when the card was made; the nature of the SIM card means compatibility is a given.

Indeed, there’s every reason to believe that very same card, now 20 years old, could be installed in any number of phones on the market today, although, once again, some minor surgery would be required to pare it down to size.

Such is the beauty of the SIM, or Subscriber Identity Module. It allows you to easily transfer your cellular service from one phone to another, with little regard to the age or manufacturer of the device, and generally without even having to inform your carrier of the swap. It’s a simple concept that has served us well for almost as long as cellular telephones have existed, and separates the phone from the phone contract.
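Part of that interchangeability comes down to how rigidly standardized even the card’s identifiers are. Every SIM carries an ICCID serial number whose final digit is a Luhn check digit (per ITU-T E.118), so any handset can sanity-check any card the same way. A minimal sketch in Python, using a made-up ICCID:

```python
def luhn_valid(number: str) -> bool:
    """Check a numeric string (e.g. a SIM's ICCID) with the Luhn algorithm."""
    total = 0
    # Walk right to left, doubling every second digit and
    # subtracting 9 whenever a doubled digit exceeds 9.
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# Hypothetical 19-digit ICCID, for illustration only.
print(luhn_valid("8991101200003204514"))  # True
```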

So naturally, there’s mounting pressure in the industry to screw it up.

Continue reading “The Rise And (Eventual) Fall Of The SIM Card”

Will A.I. Steal All The Code And Take All The Jobs?

New technology often brings with it a bit of controversy. When considering stem cell therapies, self-driving cars, genetically modified organisms, or nuclear power plants, fears and concerns come to mind as much as, if not more than, excitement and hope for a brighter tomorrow. New technologies force us to evolve perspectives and establish new policies in hopes that we can maximize the benefits and minimize the risks. Artificial Intelligence (AI) is certainly no exception. The stakes, including our very position as Earth’s apex intellect, seem exceedingly weighty. Mathematician Irving Good’s oft-quoted wisdom that the “first ultraintelligent machine is the last invention that man need make” describes a sword that cuts both ways. It is not entirely unreasonable to fear that the last invention we need to make might just be the last invention that we get to make.

Artificial Intelligence and Learning

Artificial intelligence is currently the hottest topic in technology. AI systems are being tasked to write prose, make art, chat, and generate code. Setting aside the horrifying notion of an AI programming or reprogramming itself, what does it mean for an AI to generate code? It should be obvious that an AI is not just a normal program whose code was written to spit out any and all other programs; such a program would need to contain all possible programs within itself. Instead, an AI learns from being trained. How it is trained raises some interesting questions.

Humans learn by reading, studying, and practicing. We learn by training our minds with collected input from the world around us. Similarly, AI and machine learning (ML) models learn through training. They must be provided with examples from which to learn. The examples that we provide to an AI are referred to as the data corpus of the training process. The robot Johnny 5 from “Short Circuit”, like any curious-minded student, needs input, more input, and more input.
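To make ‘learning from a corpus’ concrete, here is a toy illustration, nothing like a modern model in scale or sophistication, but built on the same principle of extracting statistics from examples rather than being hand-coded: a character-level model that counts which character follows which in its training corpus, then generates new text from those counts.

```python
import random
from collections import defaultdict, Counter

# A toy "data corpus": the examples the model learns from.
corpus = "need input. more input. more input."

# "Training": count how often each character follows each other character.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int) -> str:
    """Generate text by sampling each next character from the learned counts."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # dead end: the corpus never showed what follows this character
        chars, weights = zip(*counts.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate("m", 40))
```

Everything the toy model can ever produce is determined by the examples it saw, which is exactly why the provenance of a training corpus matters.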

Continue reading “Will A.I. Steal All The Code And Take All The Jobs?”

Repurposing Old Smartphones: When Reusing Makes More Sense Than Recycling

When looking at the specifications of smartphones released over the past years, it’s remarkable to see how much aspects like CPU cores, clock speeds, and GPU performance have improved in that time, with even new budget smartphones offering a lot of computing power, as well as a smattering of sensors. Perhaps even more remarkable is that of the approximately 1.5 billion smartphones sold each year, many will be discarded after a mere two years of use. This seems rather wasteful, and a recent paper by Jennifer Switzer and colleagues proposes a metric called Computational Carbon Intensity (CCI) to determine when it makes more sense to recycle a device than to keep using it.
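The paper’s actual CCI metric is more involved, but the underlying intuition is simple amortization: the carbon emitted to manufacture a device is a fixed cost spread over its service life, so every additional year of use lowers its per-year footprint. A rough sketch with made-up numbers:

```python
# Amortizing embodied carbon over years of use; all figures are assumptions
# for illustration, not values from the paper.
EMBODIED_KG_CO2 = 60.0       # one-time carbon cost of manufacturing the phone
USE_KG_CO2_PER_YEAR = 5.0    # ongoing carbon cost of charging and operating it

def carbon_per_year(years_of_use: float) -> float:
    """Total carbon per year of service, with embodied cost amortized over use."""
    return EMBODIED_KG_CO2 / years_of_use + USE_KG_CO2_PER_YEAR

for years in (1, 2, 4, 8):
    print(f"{years} yr of use: {carbon_per_year(years):.1f} kg CO2e per year")
```

Under these assumed numbers, stretching a phone from two to eight years of service cuts its annual footprint by roughly two thirds, which is the sort of trade-off such a metric is designed to quantify.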

What complicates the decision of when it makes more sense to reuse than to recycle is that there are many ways to define when a device is no longer ‘fit for purpose’. It could be argued that the average smartphone is still more than good enough after two years to continue serving as a smartphone for another few years, or at least until the manufacturer stops supplying updates. Beyond their use as smartphones, they’re still devices with a screen, a WiFi connection, and a capable processor, which should make them suitable for a myriad of roles.

Unfortunately, as we have seen with the disaster that was Samsung’s ‘upcycling’ concept a few years ago, or Google’s defunct Project Ara, as promising as the whole idea of ‘reuse, upcycle, recycle’ sounds, establishing an industry standard here is frustratingly complicated. Worse, over the years smartphones have become ever more sealed-up, glued-together devices that complicate the ‘reuse’ narrative.

Continue reading “Repurposing Old Smartphones: When Reusing Makes More Sense Than Recycling”

Answering Some Pico Balloon Questions

When the US Air Force shot down some suspected Chinese spy balloons a couple of weeks ago, it was widely reported that one of the targets might have been a much more harmless amateur radio craft. The so-called pico balloon K9YO was a helium-inflated Mylar balloon carrying a tiny solar-powered WSPR beacon, and it abruptly disappeared at the same place and time at which the USAF claimed one of its kills. When we covered the story, it garnered a huge number of comments both for and against the balloonists, so perhaps it’s worth returning with the views of a high-altitude-ballooning expert.

[Dave Akerman] has been sending things aloft for a long time now; we think he may have been one of the first to fly a Raspberry Pi, back in 2012. In his blog post he attempts to answer the frequently asked questions about pico balloons: their legality, whether they should carry a beacon, and how they differ from the latex “weather balloon” type we’re familiar with. It’s worth a read, because not all of us are part of the high-altitude balloon community, and it’s good to educate oneself.

Meanwhile, you can read our original report here.