Google Will Require Developer Verification Even For Sideloading

Do you like writing software for Android, perhaps even sideloading the occasional APK onto your Android device? In that case some big changes are heading your way, with Google announcing that they will soon require developer verification for all applications installed on certified Android devices – meaning basically every mainstream device. Those of us who have distributed Android apps via the Google Play Store will have noticed this change already, with developer verification in the form of sending in a scan of your government ID now mandatory, along with providing your contact information.

What this latest change effectively means is that workarounds like sideloading or using alternative app stores, like F-Droid, will no longer suffice to escape these verification demands. According to the Google blog post, these changes will be trialed starting in October of 2025, with developer verification becoming ‘available’ to all developers in March of 2026, followed by Google-blessed Android devices in Brazil, Indonesia, Thailand and Singapore becoming the first to require this verification starting in September of 2026.

Google expects that this system will be rolled out globally starting in 2027, meaning that every Google-blessed Android device will maintain a whitelist of ‘verified developers’, not unlike the locked-down Apple mobile ecosystem. Although Google’s claim is that this is for ‘security’, it does not prevent the regular practice of scammers buying up existing – verified – developer accounts, nor does it harden Android against unscrupulous apps. More likely is that this will wipe out Android as an actual alternative to Apple’s mobile OS offerings, especially for the hobbyist and open source developer.


Confirmation Of Record 220 PeV Cosmic Neutrino Hit On Earth

Neutrinos are exceedingly common in the Universe, with billions of them zipping around us throughout the day from a variety of sources. Due to their extremely low mass and lack of electric charge they barely ever interact with other particles, making these so-called ‘ghost particles’ very hard to detect. That said, when they do interact the result is rather spectacular, as they impart significant kinetic energy. This resulting flash of energy is what neutrino detectors look for, with the most energetic neutrinos detected so far generally topping out at around 10 petaelectronvolts (PeV) – except for one 2023 event.

This neutrino event, which occurred on February 13th back in 2023, was detected by the KM3NeT/ARCA detector and has now been classified as an ultra-high energy neutrino event at 220 PeV, suggesting that it was likely a cosmogenic neutrino. When we originally reported on this KM3-230213A event, the data was still being analyzed based on a muon detected from the neutrino interaction event, with the researchers also having to exclude the possibility of it being a sensor glitch.
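
For a sense of scale, here is a quick back-of-the-envelope conversion of that 220 PeV figure into everyday units; the ping-pong ball comparison is just an illustrative pairing of mass and speed, not something taken from the KM3NeT analysis:

```python
# Rough scale check: what does 220 PeV mean in everyday units?
# Only the event energy and the SI electronvolt definition are used;
# the ping-pong ball mass/speed pairing is purely for illustration.

E_eV = 220e15                # 220 PeV in electronvolts
eV_J = 1.602176634e-19       # joules per electronvolt (SI definition)

E_J = E_eV * eV_J
print(f"220 PeV ≈ {E_J:.3f} J")   # ≈ 0.035 J

# Kinetic energy of a 2.7 g ping-pong ball at roughly 5 m/s
m, v = 2.7e-3, 5.0
print(f"Ping-pong ball at 5 m/s ≈ {0.5 * m * v**2:.3f} J")  # ≈ 0.034 J
```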

By comparing the KM3-230213A event data with data from other events at other detectors, it was possible to deduce that the most likely explanation was one of these ultra-high energy neutrinos. Since these are relatively rare compared to neutrinos that originate within or near our solar system, it’ll likely take a while before we see more of these detection events. As the KM3NeT/ARCA detector grid is still being expanded, we may yet catch many more of them in Earth’s oceans. After all, if a neutrino hits a particle but there’s no sensor around to detect it, we’d never know it happened.


Top image: One of the photo-detector spheres of ARCA (Credit: KM3NeT)

Very Efficient APFC Circuit In Faulty Industrial 960 Watt Power Supply

The best part about post-mortem teardowns of electronics is when you discover some unusual design features, whether or not these are related to the original fault. In the case of a recent [DiodeGoneWild] video involving the teardown of an industrial DIN-rail mounted 24 V, 960 Watt power supply, the source of the reported bang was easy enough to spot. During the subsequent teardown of this very nicely modular PSU, the automatic power factor correction (APFC) board turned out to have an unusual design, which is captured in a schematic and explained in the video.

Choosing such an APFC design seems to have been done in the name of efficiency, bypassing two of the internal diodes in the bridge rectifier with external MOSFETs and ultrafast diodes. In short, it avoids some of the typical diode voltage drops by removing diodes from the path of the current.

Although this is not a new design, as succinctly pointed out in the comments by [marcogeri], the video explains how cutting out even one diode’s worth of voltage drop in a PSU like this can save 10 Watts of losses. Since DIN rail PSUs rarely feature fans for active cooling, this kind of APFC design is highly relevant and helps to keep passively cooled PSUs from spiraling into even more of a thermal nightmare.
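
As a rough sanity check on that figure, here is a quick estimate of the conduction loss in a single bridge diode at low-line input; the mains voltage, efficiency and forward-drop numbers below are assumptions for illustration rather than values from the video or the PSU datasheet:

```python
# Back-of-the-envelope estimate of what skipping one bridge diode buys
# at low-line input. All numbers are assumed illustrative values.

P_out      = 960.0    # W, rated output power
efficiency = 0.92     # assumed overall efficiency
V_line     = 115.0    # V RMS, worst-case low-line mains
V_f        = 1.0      # V, assumed forward drop of one bridge diode

P_in  = P_out / efficiency              # ≈ 1043 W drawn from the mains
I_rms = P_in / V_line                   # ≈ 9.1 A RMS (PFC gives near-unity PF)
I_avg = I_rms * 2 * 2**0.5 / 3.14159    # ≈ 8.2 A average rectified current

print(f"Loss in one bridge diode ≈ {V_f * I_avg:.1f} W")   # ≈ 8 W, in the right ballpark
```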

As for the cause behind the sooty skid marks on one of the PCBs, that will be covered in the next video.

Continue reading “Very Efficient APFC Circuit In Faulty Industrial 960 Watt Power Supply”

Dealing With The 1970s EPROM Chaos In 2025

It could be argued that erasable programmable ROMs (EPROMs) with their quaint UV-transparent windows are firmly obsolete today in an era of various flavors of EEPROMs. Yet many of these EPROMs are still around, and people want to program them. Unfortunately, the earliest EPROMs were made during a time when JEDEC standardization hadn’t taken root yet, leading to unique pinouts, programming voltages, and programming sequences, as [Anders Nielsen] explains in a recent video.

[Anders]’s Relatively Universal-ROM-Programmer project recently gained the ability to program even the oldest types of EPROMs, something which required modifying the hardware design to accommodate EPROMs like TI’s TMS2716 and the similar-but-completely-different TMS2516. Although not the hardest thing to support – requiring just a diode and resistor added to the BOM along with a firmware update – it’s just one of those pre-standardization traps.
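
To get a feel for the bookkeeping this forces onto programmer firmware, here is a minimal sketch of per-device programming profiles; the part numbers match the video, but the parameter values flagged as placeholders are rough figures from memory and no substitute for the actual datasheets:

```python
# Illustrative sketch of why a "universal" programmer needs per-device
# profiles for pre-JEDEC EPROMs. Values marked as placeholders are
# assumptions, not a programming reference.

EPROM_PROFILES = {
    "TMS2516": {                      # TI's Intel-2716-compatible part
        "supplies_V": (5,),           # single +5 V rail
        "vpp_V": 25,                  # programming voltage on Vpp
        "pulse_ms": 50,               # one programming pulse per byte
        "pinout": "JEDEC-2716",
    },
    "TMS2716": {                      # same capacity, completely different beast
        "supplies_V": (12, 5, -5),    # three rails just to operate it
        "vpp_V": 25,                  # placeholder, check the datasheet
        "pulse_ms": 50,               # placeholder, check the datasheet
        "pinout": "TI-proprietary",   # not pin-compatible with the TMS2516
    },
}

def profile_for(part: str) -> dict:
    """Look up a programming profile, refusing to guess for unknown parts."""
    try:
        return EPROM_PROFILES[part]
    except KeyError:
        raise ValueError(f"No profile for {part}; pre-JEDEC parts cannot be assumed")

print(profile_for("TMS2716")["supplies_V"])   # (12, 5, -5)
```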

As [Anders] put it, it’s sometimes good to be unencumbered by the burden of future knowledge. Who would have willingly subjected themselves to the chaos of incompatible pinouts, voltages, etc., if they had known beforehand that in a few years EEPROMs and JEDEC standardization would make life so much easier? Maybe that’s why messing with retro hardware like this is fun, as afterwards you can go back to the future.

Continue reading “Dealing With The 1970s EPROM Chaos In 2025”

How Intel’s 386 Protects Itself From ESD, Latch-up And Metastability

To connect the miniature world of integrated circuits like a CPU with the outside world, a number of physical connections have to be made. Although this may seem straightforward, these I/O pads pose a major risk to the chip’s functioning and integrity, in the form of electrostatic discharge (ESD), a type of short circuit called latch-up, and metastability caused by factors like noise. Shielding the delicate ASIC from the cruel outside world is the task of the I/O circuitry, with [Ken Shirriff] recently taking an in-depth look at this circuitry in Intel’s 386 CPU.

The 386 die, zooming in on some of the bond pad circuits. (Credit: Ken Shirriff)

The 386 has a total of 141 of these I/O pads, each connected to a pin on the packaging with a delicate golden bond wire. ESD is at the top of the list of potential risks, as a surge of high voltage can literally blow a hole in the circuitry. The protective circuit for this can be seen in the above die shot, with its clamping diodes, current-limiting resistor and a third diode.
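
To put some rough numbers on what those clamps have to absorb, here is the standard human body model (HBM) discharge worked out; the 2 kV, 1.5 kΩ and 100 pF figures are the usual HBM test values, while the on-chip series resistance is an assumed figure for illustration, not Intel’s actual value:

```python
# Rough human-body-model (HBM) numbers for an ESD strike on an I/O pad.
# Standard HBM test circuit: 100 pF capacitor discharged through 1.5 kohm.

V_esd  = 2000.0    # V, typical HBM test voltage
R_hbm  = 1500.0    # ohm, series resistor in the HBM test circuit
C_hbm  = 100e-12   # F, HBM storage capacitor
R_chip = 25.0      # ohm, assumed on-chip series resistor at the pad

I_peak = V_esd / (R_hbm + R_chip)   # ≈ 1.3 A peak through the clamp diodes
tau    = (R_hbm + R_chip) * C_hbm   # ≈ 150 ns discharge time constant
E_cap  = 0.5 * C_hbm * V_esd**2     # ≈ 200 µJ stored in the capacitor
                                    # (most of it burned in the series resistance)

print(f"Peak current ≈ {I_peak:.2f} A, tau ≈ {tau*1e9:.0f} ns, stored energy ≈ {E_cap*1e6:.0f} µJ")
```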

Latch-up is the second major issue, caused by the inadvertent creation of parasitic structures underneath the PMOS and NMOS transistors. These parasitic transistors are normally inactive, but if activated they can cause latch-up, which in the best case causes a momentary failure and in the worst case melts part of the chip due to high currents.

To prevent I/O pads from triggering latch-up, the 386 implements ‘guard rings’ that should block unwanted current flow. Finally, there is metastability, which as the name suggests isn’t necessarily harmful, but can seriously mess with the operation of a chip that expects clean binary signals. On the 386, two flip-flops per I/O pad are used to mostly resolve this.
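
The reason a second flip-flop helps so much can be seen from the standard synchronizer MTBF model; the process parameters below are illustrative guesses for a 1980s CMOS part rather than measured 386 figures:

```python
import math

# Standard synchronizer reliability model:
#   MTBF = exp(t_r / tau) / (T0 * f_clk * f_data)
# where t_r is the time a metastable output gets to resolve before it is
# sampled again. All parameter values below are assumed for illustration.

f_clk  = 16e6      # Hz, a typical 386 clock
f_data = 1e6       # Hz, assumed rate of asynchronous input transitions
tau    = 2e-9      # s, assumed metastability resolution time constant
T0     = 1e-9      # s, assumed metastability window parameter

def mtbf(resolve_time_s: float) -> float:
    return math.exp(resolve_time_s / tau) / (T0 * f_clk * f_data)

T_clk = 1 / f_clk
one_flop  = mtbf(0.5 * T_clk)   # roughly half a period to resolve
two_flops = mtbf(1.5 * T_clk)   # the second flop adds a full clock period

print(f"single flop : MTBF ≈ {one_flop:.2e} s")
print(f"double flop : MTBF ≈ {two_flops:.2e} s")   # roughly e^(T_clk/tau) times better
```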

Although the 386’s 1985-era circuitry was very chonky by today’s standards, it was still no match for these external influences, making it clear just how important these protective measures are for today’s ASICs with much smaller feature sizes.

Playing DOOM On The Anker Prime Charging Station

At this point the question is no longer whether a new device runs DOOM, but rather how well. In the case of Anker’s Prime Charging Station it turns out that it’s actually not too terrible at controlling the game, as [Aaron Christophel] demonstrates. Unlike the similar Anker power bank product with BLE and a big display that we previously covered, this device has quite the capable hardware inside.

Playing a quick game of DOOM while waiting for charging to finish. (Credit: Aaron Christophel, YouTube)

According to [Aaron], inside this charging station you’ll not only find an ESP32-C3 for Bluetooth Low Energy (BLE) duty, but also a 150 MHz Synwit SWM341RET7 (Chinese datasheet) ARM-based MCU along with 16 MB of external flash and 8 MB of external RAM. Both of these are directly mapped into the MCU’s memory space. The front display has a 200×480 pixel resolution.

This Synwit MCU is a bit of a curiosity, as it uses ARM China’s Star-MC1 architecture for which most of the information is in Chinese, though it’s clear that it implements the ARMv8-M profile. It can also be programmed the typical way, which is what [Aaron] did to get DOOM on it, with the clicky encoder on the side of the charging station being the sole control input.

As can be seen in the video it makes for a somewhat awkward playing experience, but far more usable than one might expect, even if running full-screen proved to be a bit too much for the hardware.

Continue reading “Playing DOOM On The Anker Prime Charging Station”

Building A Robotic Arm Without Breaking The Bank

There are probably at least as many ways to construct a robotic arm as there are uses for them. In the case of [Thomas Sanladerer], the primary requirement for the robotic arm was to support a digital camera, which apparently has to be capable of looking vaguely menacing in a completely casual manner. Meet Caroline, whose styling and color scheme are completely coincidental and do not promise yummy moist cake for anyone who is still alive after all experiments have been run.

Unlike typical robotic arms, where each joint in the arm is directly driven by a stepper motor or similar, [Thomas] opted to use a linear rail that pushes or pulls the next section of the arm in a manner reminiscent of the action of opposing muscles in our mammalian appendages. This 3D printer-inspired design is pretty sturdy, but the steppers like to skip steps, so he is considering replacing them with brushless motors.

Beyond this, the rest of the robotic arm uses hollow aluminium stock, a lot of 3D-printed sections, and, for the head, a bunch of Waveshare ST3215 servos with internal magnetic encoders for angle control. One of these ~€35 ST3215s did cook itself during testing, which is somewhat worrying. Overall, the total cost was a few hundred Euros, which for a nine-degree-of-freedom robotic arm like this isn’t too terrible.

Continue reading “Building A Robotic Arm Without Breaking The Bank”