We know, we know. Generally speaking, you should try to switch your household devices over to rechargeable cells rather than using disposable alkaline batteries. But while they might seem increasingly quaint in the lithium-ion era, features such as a long shelf life make it worth keeping a pack of disposables around. So which ones should you buy? That’s what [Moragor] wanted to find out with his personal battery analyzer.
Designed as a shield for the Arduino Mega 2560, the analyzer combines a small programmable electronic load with an INA219 current sensor, OLED display, and SD card reader. The user selects the cutoff voltage and discharge rate before the test begins, and once it’s running, data is collected every second and saved to the SD card for later analysis. Once the battery voltage reaches the predetermined value, the test is over and you’re ready to put a new cell through its paces.
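If you wanted to roll your own version, the logging loop really is as simple as it sounds. Here’s a minimal sketch of the idea for an Arduino Mega, assuming the common Adafruit INA219 library and the stock SD library; the pin assignments, file name, and cutoff voltage are our own placeholders, and driving the electronic load plus the user interface are left out:

```cpp
// Minimal battery-discharge logger in the spirit of [Moragor]'s analyzer.
// Assumptions: Adafruit INA219 library, stock SD library, SD chip select on
// pin 53 (Mega hardware SS), and a fixed 0.8 V cutoff. Not the original code.
#include <Wire.h>
#include <SD.h>
#include <Adafruit_INA219.h>

Adafruit_INA219 ina219;
const int SD_CS = 53;
const float CUTOFF_V = 0.8;
File logFile;

void setup() {
  Wire.begin();
  ina219.begin();
  SD.begin(SD_CS);
  logFile = SD.open("battery.csv", FILE_WRITE);
  logFile.println("seconds,volts,milliamps");
}

void loop() {
  static unsigned long seconds = 0;
  float volts = ina219.getBusVoltage_V();
  float milliamps = ina219.getCurrent_mA();

  logFile.print(seconds);  logFile.print(',');
  logFile.print(volts, 3); logFile.print(',');
  logFile.println(milliamps, 1);
  logFile.flush();                    // survive an abrupt power loss

  if (volts <= CUTOFF_V) {            // cutoff reached: test is over
    logFile.close();
    while (true) {}
  }
  seconds++;
  delay(1000);                        // one sample per second
}
```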
After testing 27 different brands of batteries, [Moragor] tabulated all the data and produced some helpful charts to illustrate the results. With few exceptions, the performance level for most of the batteries was remarkably similar. If anything, the test seemed to show that higher tier batteries from companies like Duracell and Energizer actually performed slightly worse than the mid-range offerings. Perhaps the biggest surprise is that, when the per-cell cost was factored in, the local cheapo batteries provided a better value than anything else in the test.
While the selection of battery brands may be different from where you live, the data [Moragor] collected is still fascinating even if you don’t recognize some of the names on the chart. Of particular note is the confirmation that lithium batteries handily outperformed any of the alkaline cells tested when it came to high-drain applications. We’d still rather they came in rechargeable form, but at least it’s a step in the right direction.
This is a story about a successful system that nevertheless failed to make the cut. An experimental LED brightness adjustment is something [Mitxela] explored in a project for a high-precision clock: one that shows time down to the nearest millisecond, and won’t flicker or otherwise look weird when photographed with a high-speed camera. Pulling this off means reinventing many things about a clock display, including how to handle brightness adjustment elegantly. Now, to be clear, the brightness adjustment idea described here is something that did not end up being used, but it’s interesting enough that [Mitxela] wrote it up and we’re very glad he did.
The idea was to have a smooth and seamless automatic brightness adjustment, ideally with no added components. Since LEDs can be used as light sensors, [Mitxela] saw an opportunity to use elements of the clock displays themselves as sensors. This is how it works: a charge in the p-n junction that makes up an LED will decay at a rate proportional to the amount of light hitting the junction. By measuring the speed of this decay, it’s therefore possible to tell how much light is hitting the LED. It’s effective and elegant, but there are a few practical issues to deal with.
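For those who haven’t played with this trick before, here’s roughly what the measurement looks like on a generic microcontroller. This is our own illustrative sketch rather than [Mitxela]’s code, and it assumes the LED’s anode and cathode each get a GPIO pin to themselves; the pin numbers and timeout are arbitrary:

```cpp
// Using an LED as a light sensor by timing the decay of a reverse-bias charge.
// Illustrative only: pin numbers and the 50 ms timeout are arbitrary choices.
const int ANODE   = 2;
const int CATHODE = 3;

unsigned long readLedDecayMicros() {
  // Reverse-bias the LED to charge its junction capacitance.
  pinMode(ANODE, OUTPUT);   digitalWrite(ANODE, LOW);
  pinMode(CATHODE, OUTPUT); digitalWrite(CATHODE, HIGH);
  delayMicroseconds(50);

  // Float the cathode and time how long the charge takes to leak away.
  // More light means more photocurrent, so a faster decay and a shorter time.
  pinMode(CATHODE, INPUT);
  unsigned long start = micros();
  while (digitalRead(CATHODE) == HIGH) {
    if (micros() - start > 50000UL) break;   // give up after 50 ms in the dark
  }
  return micros() - start;                   // small value = bright room
}
```

In the clock itself the same idea has to be squeezed in between display refreshes, which is where the PWM juggling described below comes in.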
The first idea was to employ the unused decimal points in the seven-segment LED modules as sensors, but that turned out to have issues. One was the common-cathode wiring of the display modules; this makes them very convenient to drive as displays, but makes using a decimal point as a light sensor impractical. The other issue was that the built-in diffuser that makes the displays easier to read absorbs a lot of ambient light. A much better option was to use the LEDs in the colon separators between digits, since they’re driven independently. Naturally they still have to light up in addition to acting as sensors, but [Mitxela] made a successful prototype by squeezing the necessary measurements in between the PWM pulses that drive the LEDs.
Despite how clever and efficient the solution was, in the end what sank it was the fact that the LEDs just don’t do a very good job of sensing ambient light for this purpose. The LEDs are simply too directional. Even after sanding away the top (lens) part of the LEDs, they still had a very narrow field of view. As [Mitxela] describes it, tilting the clock towards the ceiling could send it to full brightness, while the shadow of one’s head falling across the clock would plunge it into “night mode” dimness. In short, it responded to what was directly in front of it, rather than the ambient light level as a whole.
It’s a reminder that sometimes a solution simply won’t tick all the right boxes, and it can happen for unexpected reasons. Still, LEDs are versatile things. Not only can they sense light, but as the name implies they’re also diodes. And since diodes can be used as temperature sensors, that means LEDs can be as well.
If you’re a reader of Hackaday, then you’ve almost certainly encountered an Espressif part. The twin microcontroller families ESP8266 and ESP32 burst onto the scene and immediately became the budget-friendly microcontroller option for projects of all types. We’ve seen the line expand recently with the ESP32-C3 (packing a hacker-friendly RISC-V core) and the ESP32-S3 with oodles of IO and fresh new CPU peripherals. Now we have a first peek at the ESP32-C6: a brand new RISC-V-based design with the hottest Wi-Fi standard on the block, Wi-Fi 6.
There’s not much to go on here besides the standard Espressif block diagram and a press release, so we’ll tease out what detail we can. From the diagram it looks like the standard set of interfaces will be on offer; they even go so far as to say “ESP32-C6 is similar to ESP32-C3” so we’ll refer you to [Jenny’s] excellent coverage of that part. In terms of other radios the ESP32-C6 continues Espressif’s trend of supporting Bluetooth 5.0. Of note is that this part includes both the coded and 2 Mbps Bluetooth PHYs, allowing for either dramatically longer range or a doubling of speed. Again, this isn’t the first ESP32 to support these features but we always appreciate when a manufacturer goes above and beyond the minimum spec.
The headline feature is, of course, Wi-Fi 6 (AKA 802.11ax). Unfortunately this is still exclusively a 2.4 GHz part, so if you’re looking for 5 GHz support (or 6 GHz in Wi-Fi 6E) this isn’t the part for you. And while Wi-Fi 6 brings a bevy of features, from significantly higher speed to better support for mesh networks, that isn’t the focus here either. Instead, Espressif has brought a set of IoT-centric features: two radio improvements, OFDMA and MU-MIMO, and the protocol feature Target Wake Time.
OFDMA and MU-MIMO are both ways of allowing multiple connected devices to communicate with an access point simultaneously. OFDMA allows devices to slice up and share channels more efficiently, giving the AP more flexibility in allocating its constrained wireless resources. With OFDMA the access point can elect to give an entire channel to a single device, or slice it up to multiplex between more than one device simultaneously. MU-MIMO works similarly, but with entire antennas. Single User MIMO (SU-MIMO) allows an AP and a connected device to communicate using more than one antenna each. In contrast, Multi User MIMO (MU-MIMO) allows APs and devices to share antenna arrays between multiple devices simultaneously, grouped directionally.
Finally there’s Target Wake Time, the simplest of the bunch. It works very similarly to the Bluetooth Low Energy (4.X and 5.X) concept of a connection interval, allowing devices to negotiate when they’re next going to communicate. This allows devices more focused on power than throughput to negotiate long intervals between which they can shut down their wireless radios (or more of the processor) to extend battery life.
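To get a feel for why that matters, here’s a quick back-of-the-envelope calculation. Every number in it is hypothetical rather than an ESP32-C6 specification, but it shows how stretching the wake interval dominates the average current draw:

```cpp
// Toy duty-cycle calculation for a wireless sensor using long wake intervals.
// All figures are made-up placeholders, not ESP32-C6 datasheet numbers.
#include <cstdio>

int main() {
  const double activeCurrent_mA = 100.0;   // radio on, talking to the AP
  const double sleepCurrent_mA  = 0.02;    // radio and most of the chip asleep
  const double activeTime_s     = 0.05;    // 50 ms awake per interval
  const double interval_s       = 60.0;    // negotiated wake-up once a minute

  double avg_mA = (activeCurrent_mA * activeTime_s +
                   sleepCurrent_mA * (interval_s - activeTime_s)) / interval_s;
  double hours  = 2000.0 / avg_mA;         // assuming a 2000 mAh battery

  printf("Average draw: %.3f mA, battery life: roughly %.0f days\n",
         avg_mA, hours / 24.0);            // ~0.103 mA, on the order of 800 days
  return 0;
}
```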
These wireless features are useful on their own, but there is another potential benefit. Some fancy new wireless modes are only available on a network if every connected device supports them. A Wi-Fi 6 network with 10 Wi-Fi 6 devices and a single Wi-Fi 5 (802.11ac) device may not be able to use all the bells and whistles, degrading the entire network to the lowest common denominator. The recent multiplication of low cost IoT devices has meant a corresponding proliferation of bargain-basement wireless radios (often Espressif parts!). Including new Wi-Fi 6 exclusive features in what’s sure to be an accessible part is a good start to alleviating problems with our already strained home networks.
When will we start seeing the ESP32-C6 in the wild? We’re still waiting to hear but we’ll let you know as soon as we can get our hands on some development hardware to try out.
Thanks to friend of Hackaday [Fred Temperton] for spotting this while it was fresh!
Since the introduction of the Raspberry Pi Compute Module 4, power users have wanted to use NVMe drives with the diminutive ARM board. While it was always possible to get one plugged in through an adapter on the IO Board, it was a bit too awkward for serious use. But as [Jeff Geerling] recently discussed on his blog, we’re not only starting to see CM4 carrier boards with full-size M.2 slots onboard, but the Raspberry Pi Foundation has unveiled beta support for booting from these speedy storage devices.
The MirkoPC board that [Jeff] looks at is certainly impressive on its own. Even if you don’t feel like jumping through the hoops necessary to actually boot to NVMe, the fact that you can simply plug in a standard drive and use it for mass storage is a big advantage. But the board also breaks out pretty much any I/O you could possibly want from the CM4, and even includes some of its own niceties like an RTC module and I2S DAC with a high-quality headphone amplifier.
Once the NVMe drive is safely nestled into position and you’ve updated to the beta bootloader, you can say goodbye to SD cards. But don’t get too excited just yet. Somewhat surprisingly, [Jeff] finds that booting from the NVMe drive is no faster than the SD card. That said, actually loading programs and other day-to-day tasks are far snappier once the system gets up and running. Perhaps the boot time can be improved with future tweaks, but honestly, the ~7 seconds it currently takes to start up the CM4 hardly seems excessive.
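For reference, picking NVMe as a boot device comes down to the bootloader’s BOOT_ORDER setting, with the beta firmware adding 0x6 for NVMe. Read right to left, a fragment like the one below tries NVMe, then the SD card, then USB mass storage, and restarts if all of them fail; the exact value is our illustration rather than [Jeff]’s configuration, and it gets written to the CM4’s bootloader EEPROM with the usual usbboot/rpi-eeprom tooling:

```
[all]
BOOT_ORDER=0xf416
```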
At the extreme budget end of tube audio lie single-tube amplifiers usually using very cheap small-signal pentodes. They’ve appeared here before in various guises, and a fitting addition to those previous projects comes from [Kris Slyka]. It’s a classic circuit with a transformer output, and it provides enough amplification to drive a pair of headphones or even a speaker at low levels.
Most tube enthusiasts will instantly recognize the anode follower circuit with a transformer in the anode feed through which the output is taken. The tube works in Class A, which means that it’s in its least efficient mode but the one with the least distortion. The transformer itself isn’t an audio part, but a small mains transformer taken from a scrap wall wart. It serves not only for isolation, but also to transform the high impedance output from the tube into a low impedance suitable for driving a headphone or speaker.
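The impedance transformation goes with the square of the turns ratio, so even a humble mains transformer gets you a long way. As a worked example with numbers plucked from thin air (the write-up doesn’t specify the windings or the load):

```latex
% Assumed figures: a 230 V : 12 V wall-wart transformer and 32-ohm headphones.
Z_\text{anode} = n^2 Z_\text{load} = \left(\tfrac{230}{12}\right)^2 \times 32\,\Omega \approx 11.8\,\mathrm{k\Omega}
```

That lands in roughly the right ballpark for a small pentode’s anode load, which helps explain why a repurposed wall wart can stand in for a proper output transformer.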
The HT voltage is a relatively low 24 V, but it still manages to drive headphones acceptably. Speaker levels require a pre-amp, but even then it’s likely that this circuit is pushing the tube beyond what it’s capable of with a speaker. The further it operates towards the edge of its performance envelope, the more distortion it will generate and the worse the sound it will produce. That wouldn’t be such a problem in a guitar application, but hi-fi enthusiasts may find it to be too much. It would be interesting to subject it, as a headphone amplifier, to a series of audio tests to evaluate the effect of using a mains transformer instead of a dedicated audio one.
As time has gone by and PCB assembly companies have reached further into the space of affordability for our community, the available types of board have multiplied. No longer are we limited to FR4 with a green solder mask; we can have all colours of the rainbow and a variety of substrates. The folks at BotFactory have taken things a step further with their PCB printer though, by printing a fully-functional PCB on a quarter.
As a base layer they printed five passes of insulation on the coin, before printing the traces. Holes are left in the insulation to create a form of via that connects to the coin. On the board is an ATtiny2313 microcontroller that flashes an LED, and on the reverse side of the coin is a CR2032 cell that’s secured with a set of bolts and washers. You can see it taking shape in the video below the break.
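The firmware isn’t really the point of the demo, but for completeness, the whole thing likely boils down to something along the lines of this classic AVR blinker; the pin choice, clock assumption, and blink rate are our guesses rather than anything published by BotFactory:

```cpp
// Guess at a minimal LED flasher for the ATtiny2313 on the coin.
// Assumes the stock 1 MHz internal clock and an LED on PB0, both made up.
#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void) {
  DDRB |= (1 << PB0);          // LED pin as output
  while (1) {
    PORTB ^= (1 << PB0);       // toggle the LED
    _delay_ms(500);            // roughly 1 Hz blink
  }
}
```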
It’s true that an LED flasher isn’t exciting, and that this is a marketing stunt for BotFactory’s printer. But it’s an inventive one, and reminds us that with a bit of ingenuity anything can become a board. We’ve had our share over the years, and instantly springing to mind is this stretchable PCB.
It’s an exciting time in the world of microprocessors, as the long-held promise of devices with open-source RISC-V cores is coming to fruition. Finally we might be about to see open-source from the silicon to the user interface, or so goes the optimistic promise. In fact the real story is considerably more complex than that, and it’s a topic [Andreas Spiess] explores in a video that looks at the issue with a wide lens.
He starts with the basics, looking at the various layers of a computer from the user level down to the instruction set architecture. It’s a watchable primer even for those familiar with the topic, and gives a full background to the emergence of RISC-V. He then takes Espressif’s ESP32-C3 as an example, and breaks down its open-source credentials. The ISA of the processor core is RISC-V with some extensions, but he makes the point that the core hardware itself can still be closed source even though it implements an open-source instruction set. His conclusion is that while a truly open-source RISC-V chip is entirely possible (as demonstrated with a cameo Superconference badge appearance), the importance of the RISC-V ISA is in its likely emergence as a heavyweight counterbalance to ARM’s dominance in the sector. Whether or not he is right can only be proved by time, but we can’t disagree that some competition is healthy.