Comparing A Clone Raspberry Pi Pico 2 With An Original One

Although [Thomas] really likes the Raspberry Pi Pico 2 and the RP2350 MCU, he absolutely, totally, really doesn’t like the micro-USB connector on it. Hence he jumped on the opportunity to source a Pico 2 clone board with the same MCU but with a USB-C connector from AliExpress. After receiving the new board, he set about comparing the two to see whether the clone board was worth it after all. In the accompanying video you can get even more details on why you should avoid this particular clone board.

In the video the respective components of both boards are analyzed and compared to see how they stack up. The worst issues with the clone Pico 2 board are an improper USB trace impedance of 130 Ω, combined with a cut ground plane below the traces that won’t do signal integrity any favors.
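
For a sense of how bad that is (a back-of-the-envelope check, not a figure from the video): USB 2.0 calls for a 90 Ω differential trace impedance, and the reflection coefficient gives a rough feel for how much of the signal bounces back at the mismatch.

```python
# Back-of-the-envelope check (not a figure from the video): USB 2.0 specifies
# a 90 ohm differential impedance, so a 130 ohm pair is well out of spec.
Z_SPEC = 90.0    # ohms, USB 2.0 differential impedance target
Z_CLONE = 130.0  # ohms, impedance reported for the clone board's traces

gamma = (Z_CLONE - Z_SPEC) / (Z_CLONE + Z_SPEC)
print(f"Reflection coefficient: {gamma:.2f} "
      f"(~{gamma * 100:.0f}% of the incident wave reflected at the transition)")
```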

There is also an issue with the buck converter routing for the RP2350: the VREG_FB pin is left unconnected, despite the recommended layout in the RP2350 datasheet. Power supply issues continue with the LN3440 DC-DC converter used on the clone, which can source only 800 mA instead of the 1 A of the Pico 2’s converter and performed rather poorly during load tests, with one board dying at an 800 mA load.

Continue reading “Comparing A Clone Raspberry Pi Pico 2 With An Original One”

Optical Combs Help Radio Telescopes Work Together

Very-long-baseline interferometry (VLBI) is a technique in radio astronomy whereby multiple radio telescopes combine their received data to, in effect, create a much larger single radio telescope. For this to work, however, it is essential to have exact timing and other relevant information to accurately match the signals from each individual radio telescope. As VLBI is pushed to ever higher frequency ranges and bandwidths, synchronizing the signals becomes much harder, but an optical frequency comb technique may offer a solution here.

In the paper by [Minji Hyun] et al. it’s detailed how they built the system and used it with the Korean VLBI Network (KVN) Yonsei radio telescope in Seoul as a proof of concept. This still uses the same hydrogen maser atomic clock as the timing source, but by transmitting the timing pulses optically a higher accuracy can be achieved, limited only by the photodiode on the receiving end.

In the demonstration up to 50 GHz was possible, but commercial 100 GHz photodiodes are available. It’s also possible to send additional signals over the fiber on different wavelengths for further functionality, all with the ultimate goal of better timing and adjustment for things like the atmospheric fluctuations that can affect radio observations.
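
As a rough illustration of why higher observing frequencies demand tighter synchronization (back-of-the-envelope figures, not numbers from the paper), a timing error τ between stations translates into a phase error of 2πfτ at observing frequency f:

```python
import math

# Back-of-the-envelope sketch: how tight the timing must be to keep the
# phase error between VLBI stations below roughly one radian. The listed
# frequencies are illustrative, not taken from the paper.
for freq_ghz in (22, 50, 100):
    max_timing_error_s = 1 / (2 * math.pi * freq_ghz * 1e9)
    print(f"{freq_ghz:>3} GHz: timing error below "
          f"{max_timing_error_s * 1e12:.1f} ps for <1 rad of phase error")
```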

Rewinding A Car Alternator For 240 Volt

Two phases installed on the stator. (Credit: FarmCraft101, YouTube)

As part of his quest to find the best affordable generator for his DIY hydroelectric power system, [FarmCraft101] is trying out a range of off-the-shelf and DIY solutions, in his most recent video trying his hand at the very relaxing activity of rewinding the stator of a car alternator.

Normally car alternators output 12 VDC after internal rectification, but due to the hundreds of meters of cable between the turbine and the shed, he’d like a higher voltage to curb transmission losses. The easiest way to get a higher voltage out of a car alternator is to change up the wiring on the stator, which is definitely one of those highly educational tasks.
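
To get a feel for why the higher voltage matters (a minimal sketch with assumed figures, not numbers from the video): for a given power draw the current scales as P/V and the cable loss as I²R, so raising the transmission voltage slashes the resistive losses.

```python
# Minimal sketch with assumed figures (not from the video): line loss for a
# fixed load drops with the square of the transmission voltage.
POWER_W = 100            # assumed load at the far end of the cable
LINE_RESISTANCE_OHM = 1  # assumed round-trip resistance of the long cable run

for volts in (12, 240):
    current_a = POWER_W / volts
    loss_w = current_a ** 2 * LINE_RESISTANCE_OHM
    print(f"{volts:>3} V: {current_a:6.2f} A, ~{loss_w:5.1f} W lost in the cable")
```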

Disassembling an alternator is easy enough, but removing the copper windings from the stator is quite an ordeal, as they were not designed to ever move even a fraction of a millimeter after assembly.

With that arduous task finished, the rewinding was done using 22 AWG enamelled copper wire, compared to the original 16 AWG wire, while increasing the loops per coil from 8 to 30. This rewinding isn’t too complicated if you know what you’re doing, with each coil on each of the three windings placed in an alternating fashion, matching the alternating north/south poles on the rotor.
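
A rough way to think about what the rewind buys you (a sketch based on the general scaling relations, not on measurements from the video): at a given rotor speed and field current the stator EMF scales with the turns per coil, while the winding resistance grows with both the thinner wire and the extra length of it.

```python
# Rough scaling sketch (general relations, not measurements from the video):
# EMF per phase scales with turns per coil; winding resistance scales with
# ohms-per-metre of the wire times the total wire length.
ORIGINAL_TURNS = 8
NEW_TURNS = 30
OHMS_PER_METRE = {"16 AWG": 0.0132, "22 AWG": 0.0530}  # approximate copper values

voltage_ratio = NEW_TURNS / ORIGINAL_TURNS
resistance_ratio = (OHMS_PER_METRE["22 AWG"] / OHMS_PER_METRE["16 AWG"]) * voltage_ratio

print(f"Output voltage scales by roughly {voltage_ratio:.2f}x at the same speed and field")
print(f"Per-phase resistance grows by roughly {resistance_ratio:.0f}x, "
      "trading current capability for voltage")
```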

Continue reading “Rewinding A Car Alternator For 240 Volt”

How Resident Evil 2 For The N64 Kept Its FMV Cutscenes

Originally released for the Sony PlayStation in 1998, Resident Evil 2 came on two CDs and used 1.2 GB in total. Of this, full-motion video (FMV) cutscenes took up most of the space, as was rather common for PlayStation games. This posed a bit of a challenge when the game was ported to the Nintendo 64 with its paltry 64 MB of cartridge-based storage. Somehow the developers managed to do the impossible and retain the FMVs, as detailed in a recent video by [LorD of Nerds]. Toggle the English subtitles if German isn’t among your installed natural language parsers.

Instead of dropping the FMVs and replacing them with static screens, the developers opted for a technological solution. Because of the N64’s rather beefy hardware, it was possible to apply video compression that massively reduced the storage requirements, but this required repurposing the hardware for tasks it was never designed for.

The people behind this feat were developers at Angel Studios, who had 12 months to make it work. Ultimately they achieved a compression ratio of 165:1, with software decoding handling the decompression, while the Reality Signal Processor (RSP) that’s normally part of the graphics pipeline was used both for audio tasks and for things like upscaling.
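
To put that ratio in perspective (only the 165:1 ratio and the 64 MB cartridge size come from the article; the FMV footprint below is a hypothetical figure):

```python
# Hypothetical sanity check: only the 165:1 ratio and the 64 MB cartridge
# size are from the article; the 1000 MB FMV footprint is an assumption.
CART_SIZE_MB = 64
COMPRESSION_RATIO = 165
ASSUMED_FMV_MB = 1000  # rough guess at the FMV share of the 1.2 GB PS1 release

compressed_mb = ASSUMED_FMV_MB / COMPRESSION_RATIO
print(f"~{ASSUMED_FMV_MB} MB of FMV -> ~{compressed_mb:.1f} MB after compression, "
      f"leaving ~{CART_SIZE_MB - compressed_mb:.0f} MB of the cartridge for the game itself")
```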

Continue reading “How Resident Evil 2 For The N64 Kept Its FMV Cutscenes”

KDE Binds Itself Tightly To Systemd, Drops Support For Non-Systemd Systems

The KDE desktop’s new login manager (PLM) in the upcoming Plasma 6.6 will mark the first time that KDE requires the underlying OS to use systemd if one wishes for the full KDE experience. This has upset the FreeBSD community in particular, but it will also affect Linux distros that do not use systemd. The direction of the KDE team is clear, as stated in the referenced Reddit thread, where a KDE developer replies that the goal is to rely on systemd for more tasks in the future, meaning that PLM is just the first step.

In the eyes of KDE, it seems that OSes that do not use systemd are ‘niche’ and not worth supporting, with the niche Linux distros that would be cut off including everything from Gentoo to Alpine Linux and Slackware. Regardless of your stance on systemd’s merits or lack thereof, it seems quite drastic for one of the major desktop environments across Linux and BSD to suddenly make this decision.

It also raises the question of to what extent this is related to the push towards a distroless and similarly more integrated, singular version of Linux as an operating system. Although there are still many other DEs that will happily run for the foreseeable future on your flavor of GNU/Linux or BSD – regardless of whether you’re more about a System V or OpenRC init-style environment – this might be one of the most controversial divides since systemd was first introduced.

Top image: KDE Plasma 6.4.5. (Credit: Michio.kawaii, Wikimedia)

Teardown Of An Apple AirTag 2 With Die Shots

There are a few possible ways to do a teardown of new electronics like the Apple AirTag 2 tracker, with [electronupdate] opting to go all the way down to the silicon level, taking die shots of the major ICs in a recent teardown video. Some high-resolution photos can also be found on the separate blog page.

First we get to see the outside of the device, followed by the individual layers of the device’s sandwiched rings, starting with the small speaker, which is surrounded by the antenna for the ultra-wideband (UWB) feature.

Next is the PCB layer, with a brief analysis of the main ICs, before they get lifted off and decapped for an intimate look at their insides. These include the Nordic Semiconductor nRF52840 Bluetooth chip, which also runs the firmware of the device.

The big corroded-looking grey rectangle on the PCB is the UWB chip assembly, with its die shot visible in the heading image. It provides the localization feature of the AirTag that lets you tell precisely where the tag is. In the die analysis we get a basic explanation of what the visible structures are for. Basically, it uses an array of antennas to determine time-of-flight and, with it, the direction of the requesting device relative to the tag.
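
As a general illustration of the direction-finding principle (this is the textbook angle-of-arrival approach, not Apple’s actual implementation, and the spacing and timing figures are made up): with two antennas a known distance apart, the difference in arrival time of the same pulse reveals the bearing of the other device.

```python
import math

# Textbook angle-of-arrival sketch, not Apple's actual implementation; the
# antenna spacing and timing figures below are illustrative assumptions.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def angle_of_arrival(delta_t_s: float, antenna_spacing_m: float) -> float:
    """Bearing (degrees off boresight) from the time difference of arrival."""
    path_difference_m = SPEED_OF_LIGHT * delta_t_s
    # Far-field approximation: path difference = spacing * sin(theta)
    return math.degrees(math.asin(path_difference_m / antenna_spacing_m))

# Example: 5 mm antenna spacing and a 10 ps arrival-time difference.
print(f"{angle_of_arrival(10e-12, 5e-3):.1f} degrees off boresight")
```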

In addition to die shots of the BT and UWB chips we also get the die shot of the Bosch-made accelerometer chip, as well as an SPI memory device, likely an EEPROM of some description.

As for disabling the speaker in these AirTag 2 devices, it’s nestled deep inside, well away from the battery. This is said to make disabling it much harder without a destructive disassembly, yet as iFixit demonstrated, it’s actually fairly easy to do it non-destructively.

Continue reading “Teardown Of An Apple AirTag 2 With Die Shots”

How Vibe Coding Is Killing Open Source

Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case, based on observed patterns and some modelling. Their warnings mostly center on the way that user interaction is pulled away from OSS projects, while starting a new OSS project also becomes significantly harder.

“Vibe coding” here is defined as software development assisted by an LLM-backed chatbot, where the developer asks the chatbot to effectively write the code for them. Arguably this turns the developer into more of a customer or client of the chatbot, with no requirement for the former to understand what the latter’s code does, only that the generated code does the thing the chatbot was asked to create.

This also removes the typically more organic selection process for libraries and tooling, replacing it with whatever was most prevalent in the LLM’s training data. Even for popular projects, visits to their websites decrease as downloads and documentation are replaced by LLM chatbot interactions, reducing the opportunity to promote commercial plans, sponsorships, and community forums. Much of this is also reflected in the plummeting usage of community forums like Stack Overflow.

Continue reading “How Vibe Coding Is Killing Open Source”