A stack of Activation Locked MacBooks destined for the shredder in refurbisher [John Bumstead]’s workshop.

Apple IOS 18’s New Repair Assistant: Easier Parts Pairing Yet With Many Limitations

Over the years, Apple has gone all-in on parts pairing. Virtually every component in an iPhone and iPad has a unique ID that’s kept in a big database over at Apple, which limits replacement parts to only those whose pairing with the host system has been officially sanctified by Apple. With iOS 18 there seems to be something of a change in how difficult it is to get a pairing approved, in the form of Apple’s new Repair Assistant. According to early responses from [iFixit] and a video by [Hugh Jeffreys], the experience is ‘promising but flawed’.

As noted on the official Apple support page, the Repair Assistant is limited to the iPhone 15 and later, the iPad Pro (M4) and the iPad Air (M2), which still leaves many devices unable to make use of this feature. For the lucky few, however, this theoretically means that you can forgo having to contact Apple directly to approve new parts. Instead, the assistant will boot into its own environment, perform the pairing and calibration, and let you go on your merry way with (theoretically) all functionality fully accessible.

Continue reading “Apple IOS 18’s New Repair Assistant: Easier Parts Pairing Yet With Many Limitations”

A Brand-New Additive PCB Fab Technique?

Usually when we present a project on these pages, it’s pretty cut and dried — here’s what was done, these are the technologies used, this was the result. But sometimes we run across projects that raise far more questions than they answer, such as with this printed circuit board that’s actually printed rather than made using any of the traditional methods.

Right up front we’ll admit that this video from [Bad Obsession Motorsport] is long, and what’s more, it’s part of a lengthy series of videos that document the restoration of an Austin Mini GT-Four. We haven’t watched the entire video, much less any of the others in the series, so jumping into this in the middle bears some risk. We gather that the instrument cluster in the car is in need of a tune-up, prompting them to build a PCB to hold all the instruments and indicators. Normally that’s pretty standard stuff, but jumping to the 14:00 mark in the video, you’ll see that these blokes took the long way around.

Starting with a naked sheet of FR4 substrate, they drilled out all the holes needed for their PCB layout. Most of these holes were filled with rivets of various sizes, some to accept through-hole leads, others to act as vias to the other side of the board. Fine traces of solder were then applied to the FR4 using a modified CNC mill with the hot-end and extruder of a 3D printer added to the quill. Components were soldered to the board in more or less the typical fashion.

It looks like a brilliant piece of work, but it leaves us with a few questions. We wonder about the mechanics of this; how is the solder adhering to the FR4 well enough to be stable? Especially in a high-vibration environment like a car, it seems like the traces would peel right off the board. Indeed, at one point (27:40) they easily peel the traces back to solder in some SMD LEDs.

Also, how do you solder to solder? They seem to be using a low-temperature solder and a higher-temperature solder, and working right in between the two melting points. We’re used to seeing solder wet into the copper traces and flow until the joint is complete, but in our experience, without the capillary action of the copper, the surface tension of the molten solder would just form a big blob. They do mention a special “no-flux 96S solder” at 24:20; could that be the secret?

We love the idea of additive PCB manufacturing, and the process is very satisfying to watch. But we’re begging for more detail. Let us know what you think, and if you know anything more about this process, in the comments below.

Continue reading “A Brand-New Additive PCB Fab Technique?”

Small Steam Generator Creates Educational Experience

Steam turbines have helped drive a large chunk of our technological development over the last century or so, and they’ll always make for interesting DIY. [Hyperspace Pirate] built a small turbine and boiler in his garage, turning fire into flowing electrons, and learning a bunch in the process.

[Hyperspace Pirate] based the turbine design on 3D printed Pelton-style turbines he had previously experimented with, but milled it from brass using a CNC router. A couple of holes had to be drilled in the side of the rotor to balance it. The shaft drives a brushless DC motor to convert the energy from the expanding steam into electricity.

To avoid the long heat-up times required for a conventional boiler, [Hyperspace Pirate] decided to use a flash boiler. This involves heating up high-pressure water in a thin coil of copper tube, causing the water to boil as it flows down the tube. To produce the high-pressure water feed, the propane tank for the burner was also hooked up to the water tank to pressurize it, removing the need for a separate pump or compressed air source. This setup allows the turbine to start producing power within twelve seconds of lighting the burner — significantly faster than a conventional boiler.
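
For a rough sense of what the burner is up against, here is an illustrative bit of arithmetic (our own assumed numbers, not figures from the video) for flash-boiling a small water flow at atmospheric pressure:

```python
# Illustrative only: assumed flow rate and atmospheric-pressure boiling, just to
# show the order of magnitude of heat a flash boiler has to deliver.
flow_kg_s = 0.001        # assumed water flow of 1 g/s
cp_water  = 4186.0       # J/(kg*K), specific heat of liquid water
h_fg      = 2.257e6      # J/kg, latent heat of vaporization near 100 degC
delta_T   = 100 - 20     # K, heating the feed water from 20 degC to boiling

heat_watts = flow_kg_s * (cp_water * delta_T + h_fg)
print(f"Heat input needed: {heat_watts:.0f} W")   # roughly 2.6 kW for just 1 g/s
```

Even a tiny steam flow demands kilowatts from the burner, which goes some way toward explaining the conversion efficiency discussed below.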

Throughout the video [Hyperspace Pirate] shows his calculations for the design and the tests, making for a very informative demonstration. By hooking up a variable load and an Arduino to the rectified output of the motor, he was able to measure the output power and efficiency. It came out to less than 1% efficiency for turning propane into electricity, not accounting for the heat loss of the boiler. The wide gaps between the turbine and its housing, as well as the lack of a converging/diverging nozzle on the turbine’s inlet, are likely big contributors to the low efficiency.
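
The efficiency figure itself is simple arithmetic once you have the electrical output and the propane burn rate; here is a hedged sketch with made-up numbers (only the under-1% conclusion comes from the video):

```python
# All inputs below are assumptions for illustration, not measured values.
voltage_v   = 12.0      # rectified output voltage into the variable load
current_a   = 0.5       # current drawn by the load
propane_g_s = 0.05      # propane burn rate in grams per second
propane_lhv = 46.0e3    # J/g, lower heating value of propane (~46 MJ/kg)

p_electrical = voltage_v * current_a         # electrical power out: 6 W
p_chemical   = propane_g_s * propane_lhv     # chemical power in: 2300 W
efficiency   = p_electrical / p_chemical
print(f"Out: {p_electrical:.1f} W, in: {p_chemical:.0f} W, "
      f"efficiency: {efficiency:.2%}")       # about 0.26 %
```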

Like many of his other projects, the goal was the challenge itself, not practicality or efficiency. From a gyro-stabilized monorail, to copper ingots made from algaecide, to a DIY cryocooler, he has sure done some interesting ones.

Continue reading “Small Steam Generator Creates Educational Experience”

How Pollution Controls For Cargo Ships Made Global Warming Worse

In 2020, international shipping found itself faced with new low-sulfur fuel regulations for cargo ships (IMO 2020). This reduced the emission of sulfur dioxide aerosols from these ships across the globe by about 80% practically overnight, resulting in perhaps the biggest unintentional geoengineering event since last century.

As detailed in a recent paper by [Tianle Yuan] et al., published in Nature Communications Earth & Environment, removing these aerosols from the Earth’s atmosphere also removed their cooling effect. Effectively, this change seems to have both demonstrated the effect of solar geoengineering and sped up the greenhouse effect, through a radiative forcing of around 0.2 W/m² over the global ocean.
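
For a rough sense of scale, here is a back-of-envelope conversion of that number (our own arithmetic, not a figure from the paper), spreading the ocean-only forcing over the planet as a whole:

```python
# Rough conversion of an ocean-only forcing into planet-wide terms.
OCEAN_FRACTION   = 0.71      # oceans cover roughly 71% of Earth's surface
EARTH_SURFACE_M2 = 5.1e14    # total surface area of Earth in m^2
FORCING_W_M2     = 0.2       # forcing over the global ocean, per the paper

ocean_area_m2   = OCEAN_FRACTION * EARTH_SURFACE_M2
total_forcing_w = FORCING_W_M2 * ocean_area_m2
global_avg_w_m2 = total_forcing_w / EARTH_SURFACE_M2

print(f"Extra absorbed power: {total_forcing_w:.1e} W")        # ~7e13 W
print(f"Global-average forcing: {global_avg_w_m2:.2f} W/m^2")  # ~0.14 W/m^2
```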

The inadvertent effect of the pollution from these cargo ships appears to have been what is called marine cloud brightening (MCB), with the increased reflectivity of said clouds diminishing rapidly as the pollution controls came into effect. This was studied by the researchers using a combination of satellite observations and a chemical transport model, with the North Atlantic, the Caribbean and the South China Sea as the busiest shipping lanes primarily affected.

Although the lesson one could draw from this is that we should put more ships on the oceans burning high-sulfur fuels, perhaps the better lesson is that MCB is a viable method to counteract global warming, assuming we can find a method to achieve it that doesn’t also increase acid rain and similar negative effects from pollution.

Featured image: Time series of global temperature anomaly since 1980. (Credit: Tianle Yuan et al., Nature Communications Earth & Environment, 2024)

Clockwork Rover For Venus

Venus hasn’t received nearly the same attention from space programs as Mars, largely due to its exceedingly hostile environment. Most electronics wouldn’t survive the 462 °C heat, never mind the intense atmospheric pressure and sulfuric acid clouds. With this in mind, NASA has been experimenting with the concept of a completely mechanical rover. The [Beardy Penguin] and a team of fellow students from the University of Southampton decided to try their hand at the concept—video after the break.

The project was divided into four subsystems: obstacle detection, the mechanical computer, locomotion (tracks), and the drivetrain. The obstacle detection system consists of three (left, center, right) triple-rollers in front of the rover, which trigger inputs on the mechanical computer when the rover encounters an obstacle over a certain size. The inputs indicate the position of each roller (up/down), and the combination of inputs determines the appropriate maneuver to clear the obstacle. [Beardy Penguin] used Simulink to design the logic circuit, consisting of AND, OR, and NOT gates. The resulting five-layer mechanical computer quickly ran into the limits of tolerances and friction, and the team eventually had trouble getting their design to work with the available input forces.
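
To make the logic concrete, here is a purely hypothetical software rendering of the sort of truth table a three-roller obstacle detector could feed into AND/OR/NOT gates; the actual gate arrangement and maneuver mapping the team used aren’t spelled out here, so the rules below are illustrative only.

```python
# Hypothetical mapping from roller states to maneuvers, for illustration only.
def maneuver(left: bool, center: bool, right: bool) -> str:
    """True means that roller has been pushed up by an obstacle."""
    if not (left or center or right):       # nothing detected: keep going
        return "drive forward"
    if center and not left and not right:   # obstacle dead ahead: pick a side
        return "turn left"
    if right and not left:                  # obstacle on the right: steer away
        return "turn left"
    if left and not right:                  # obstacle on the left: steer away
        return "turn right"
    return "reverse"                        # blocked across the front: back up

# Eight input combinations, eight deterministic outputs: exactly the sort of
# truth table that mechanical AND, OR, and NOT gates can implement.
for l in (False, True):
    for c in (False, True):
        for r in (False, True):
            print(int(l), int(c), int(r), "->", maneuver(l, c, r))
```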

Due to the high-pressure atmosphere, an on-board wind turbine has long been proposed as a viable power source for a Venus rover. It wasn’t part of this project, so it was replaced with a comparable 40 W electric motor. The output from the logic circuit goes through a timing mechanism and into a planetary gearbox system, which changes the output rotation direction by either driving the planet gear carrier with the sun gear or locking the carrier in place.
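
As a hedged aside (standard epicyclic gear math, not taken from the team’s write-up), the planetary kinematic relation shows why driving the carrier versus locking it reverses the output:

```latex
% Z_s, Z_r: sun and ring gear tooth counts; \omega_s, \omega_r, \omega_c:
% angular velocities of the sun, ring and planet carrier.
\[
  Z_s\,\omega_s + Z_r\,\omega_r = (Z_s + Z_r)\,\omega_c
\]
% Ring held, carrier as output (\omega_r = 0): output turns with the sun.
\[
  \omega_c = \frac{Z_s}{Z_s + Z_r}\,\omega_s
\]
% Carrier held, ring as output (\omega_c = 0): output turns against the sun.
\[
  \omega_r = -\frac{Z_s}{Z_r}\,\omega_s
\]
```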

As with many undergraduate engineering projects, the physical results were mixed, but the educational value was immense. They got individual subsystems working, but not the fully integrated prototype. Even so, the team received several awards for their project and came third in an international Simulink challenge. The work also allowed another team to continue and refine the subsystems. Continue reading “Clockwork Rover For Venus”

Geochron world time clock

Geochron: Another Time, Another Timeless Tale

The Geochron World Time Indicator is a clock that doubles as a live map of where the sun is shining on the Earth. Back in its day, it was a cult piece that some have dubbed the “Rolex on the wall.” Wired’s recent coverage of the clock reminded us of just how cool it is on the inside. And to dig in, we like [Attoparsec]’s restoration of his own mid-1980s Geochron, lovingly fixing up a clock he picked up online.

[Attoparsec]’s recent restoration shares insights into the clock’s fascinating mechanics. Using a synchronous motor, transparent slides, and a lighted platen, the Geochron works like a glorified slide projector, displaying the analemma—a figure-eight pattern that tracks the sun’s position over the year.
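
If you’d rather see the analemma in numbers than on a platen, here is a small hedged Python sketch (not part of the restoration) that traces out the figure-eight using standard low-precision solar approximations:

```python
import math

def analemma_point(day_of_year: int) -> tuple[float, float]:
    """Return (equation of time in minutes, solar declination in degrees)."""
    # Equation of time: how far solar noon drifts from clock noon over the year.
    b = math.radians(360.0 * (day_of_year - 81) / 365.0)
    eot_minutes = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
    # Solar declination: how far north or south of the equator the sun stands.
    declination = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    return eot_minutes, declination

# Plotting declination against the equation of time gives the figure-eight that
# the Geochron carries as a printed analemma.
for day in range(1, 366, 30):
    eot, dec = analemma_point(day)
    print(f"day {day:3d}: EoT {eot:+6.1f} min, declination {dec:+6.1f} deg")
```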

But if you’re looking for a digital version, way back in 2011 we showcased [Justin]’s LED hack of FlorinC’s “Wise Clock”, which ingeniously emulated the Geochron’s day-night pattern using RGB LEDs, swapping out the faceplate for a world map printed on vellum. That’s probably a much more reasonable way to go these days. Why haven’t we seen more remakes of these?

The Glacial IPv6 Transition: Raising Questions On Necessity And NAT-Based Solutions

A running joke in networking circles is that the switch from IPv4 to IPv6 is always a few years away, even though IPv6 was introduced in the early 90s in response to the feared imminent IPv4 address drought courtesy of the blossoming Internet. Many decades later, [Geoff Huston] in an article on the APNIC blog looks back on these years to try to understand why IPv4 is still a crucial foundation of the modern Internet, while IPv6 has barely escaped the need to (futilely) tunnel via an IPv4-centric Internet. According to a straight extrapolation by [Geoff], it would take approximately two more decades for IPv6 to truly take over from its predecessor.

Although these days a significant part of the Internet is reachable via IPv6, and IPv6 support comes standard in any modern mainstream operating system, for some reason the ‘IPv4 address pool exhaustion’ apocalypse hasn’t happened (yet). Perhaps ironically, this might, as [Geoff] postulates, be a consequence of a lack of planning and pushing of IPv6 in the 1990s, with the rise of mobile devices and their use of non-packet-based 3G throwing a massive spanner in the works. These days we are using a contrived combination of TLS Server Name Indication (SNI), DNS and Network Address Translation (NAT) to provide layers upon layers of routing on top of IPv4 within a content-centric Internet (as with e.g. content distribution networks, or CDNs).
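
As a small illustration of the SNI part of that stack (our own sketch, not from [Geoff]’s article), the hostname carried in the TLS ClientHello is what lets many sites share a single IPv4 address:

```python
import socket
import ssl

def peek_certificate_cn(hostname: str, port: int = 443) -> str:
    """Connect with SNI set and report the common name of the served certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as raw_sock:
        # server_hostname populates the SNI extension in the ClientHello; the
        # server or CDN front-end uses it to pick the certificate and backend.
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            subject = dict(item[0] for item in tls_sock.getpeercert()["subject"])
            return subject.get("commonName", "<no CN>")

# Two sites hosted behind the same shared IPv4 address will still each hand back
# their own certificate, purely on the strength of the SNI value the client sends.
print(peek_certificate_cn("example.com"))
```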

While the average person’s Internet connection is likely to have both an IPv4 and an IPv6 address assigned to it, there’s a good chance that only the latter is a true Internet IP, while the former is just an address behind the ISP’s CG-NAT (carrier-grade NAT), breaking a significant part of the (peer-to-peer) software and services that relied on being able to traverse the IPv4 Internet with perhaps no more than a firewall forwarding rule. This has now in a way left both the IPv4 and IPv6 sides of the Internet broken in their own special ways compared to how they were envisioned to function.
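
One quick way to tell whether your ISP has parked you behind CG-NAT is to compare the WAN address your router reports against the RFC 6598 shared address space; a minimal sketch (ours, not from the article) in Python:

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as shared address space for carrier-grade NAT.
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def classify_ipv4(addr: str) -> str:
    """Roughly classify an IPv4 address as CG-NAT, private/reserved, or public."""
    ip = ipaddress.ip_address(addr)
    if ip in CGNAT_RANGE:
        return "CG-NAT shared address space: not reachable from the Internet"
    if ip.is_private:
        return "private or reserved address: behind your own NAT"
    return "looks like a publicly routable address"

# If your router's WAN address classifies as CG-NAT while a 'what is my IP'
# service shows something different, inbound port forwarding won't work.
for example in ("100.72.13.5", "192.168.1.20", "8.8.8.8"):
    print(example, "->", classify_ipv4(example))
```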

Much of this seems to be due to the changes since the 1990s in how the Internet gets used, with IP-based addressing becoming less important while giants like Cloudflare, AWS, and the like have largely become ‘the Internet’. If this is the path that we stay on, then IPv6 truly may never take over from IPv4, as we will instead transition to something else entirely. Whether that will be something akin to the pre-WWW ‘internet’ of CompuServe and kin, or something new altogether, will be an exciting revelation over the coming years and decades.

Header: Robert.Harker [CC BY-SA 3.0].