Galaxy Users Accuse Samsung Of Throttling Performance And Benchmark Rigging

A lot of Samsung Galaxy users think that Samsung has been throttling smartphone performance, so much so that the phones don’t live up to their published specifications. At issue is the Game Optimizing Service (GOS), which is intended to throttle the CPU while playing games to prevent overheating. S22 owners have recently discovered that it’s not only games that are throttled: there’s a list of over 10,000 apps subject to GOS control, and there is no way to disable it.

What they’re really upset over is the fact that popular benchmarking apps are not subject to GOS throttling — something that’s hard to see as anything but a blatant attempt to game the system. In fact, this past weekend the folks at Geekbench banned four generations of Samsung Galaxy phones (S10, S20, S21, S22) for benchmark manipulation.

Admittedly, thermal management is critical on today’s incredibly powerful handheld devices, and throttling is an accepted solution in the industry. But people are upset at the opacity of GOS and the lack of user control, not to mention the cherry-picking of apps in order to excel at benchmarks. Furthermore, Samsung has removed its vapor chamber cooling system from recent models, which makes GOS even more important and looks like a cost-saving measure that may have backfired. Currently there’s a petition with the government claiming false advertising, and users are actively pursuing a lawsuit against Samsung.

A Close Look At USB Power

It’s not a stretch to say that most devices these days have settled on USB as their power source of choice. While we imagine you’ll still be running into the occasional wall wart and barrel jack for the foreseeable future, at least we’re getting closer to a unified charging and power delivery technology. But are all USB chargers and cables created equal?

The answer, of course, is no. But the anecdotal information we all have about dud USB gear is just that, which is why [Igor Brkić] wanted to take a more scientific approach. Inspired by the lightning bolt icon the Raspberry Pi flashes on screen when the voltage drops too low, he set out to make a proper examination of various USB chargers and cables to see which ones aren’t pulling their weight.

In the first half of his investigation, [Igor] tests four fairly typical USB chargers with his TENMA 72-13200 electronic load. Two of them are name brand, the other two cheap clones. He was surprised to find that all of the power supplies not only met their rated specifications, but in most cases over-performed by a fair amount. For example, the Lenovo-branded charger rated for only 1 A was still putting out a solid 5 V at 1.7 A. Of course there’s no telling what would happen if you ran them that high for hours or days at a time, but it does speak to their short-term burst capability at least.

He then moved on to the USB cables, where things started to fall apart. The three generic cables saw significant voltage drops even at currents as low as 0.1 A, though the name-brand cable with 20 AWG power wires did fare a bit better. But by 0.5 A they were all significantly below 5 V, and at 1 A, forget about it. Pulling anything more than that through these cables is a non-starter, and in general, you’ll need to put at least 5.2 V in if you want to actually run a USB device on the other side.
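As a rough sanity check on those numbers, a measured sag can be turned into an effective round-trip cable resistance, which in turn tells you how much current a cable can carry before the far end falls out of the USB 5 V ±5% window. The sketch below is our own back-of-the-envelope illustration; the example figures are not [Igor]’s measurements.

# Estimate a USB cable's round-trip resistance from its voltage sag, then
# work out how much current it can carry before the far end drops below
# the USB minimum of 4.75 V. Example numbers are illustrative only.

V_SUPPLY = 5.0    # volts at the charger end
V_MIN_USB = 4.75  # lower edge of the 5 V +/- 5% window

def cable_resistance(v_loaded, current):
    """Round-trip resistance (ohms) implied by the drop at a given current."""
    return (V_SUPPLY - v_loaded) / current

def max_current(resistance, v_in=V_SUPPLY):
    """Largest current that keeps the device end above V_MIN_USB."""
    return (v_in - V_MIN_USB) / resistance

r = cable_resistance(4.6, 0.5)                    # e.g. 4.6 V measured at 0.5 A
print(f"cable resistance ~ {r:.2f} ohm")          # ~0.80 ohm round trip
print(f"max load from 5.0 V ~ {max_current(r):.2f} A")       # ~0.31 A
print(f"max load from 5.2 V ~ {max_current(r, 5.2):.2f} A")  # ~0.56 A

Which is exactly why bumping the supply to 5.2 V buys so much headroom on a marginal cable.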

Admittedly this might not be groundbreaking research, but we appreciate [Igor] taking a scientific approach and tabulating all the information. If you’re still getting low voltage warnings on the Pi after swapping out your cheapo cables, then maybe the problem is actually elsewhere.

NVMe Boot Finally Comes To The Pi Compute Module 4

Since the introduction of the Raspberry Pi Compute Module 4, power users have wanted to use NVMe drives with the diminutive ARM board. While it was always possible to get one plugged in through an adapter on the IO Board, it was a bit too awkward for serious use. But as [Jeff Geerling] recently discussed on his blog, we’re not only starting to see CM4 carrier boards with full-size M.2 slots onboard, but the Raspberry Pi Foundation has unveiled beta support for booting from these speedy storage devices.

The MirkoPC board that [Jeff] looks at is certainly impressive on its own. Even if you don’t feel like jumping through the hoops necessary to actually boot to NVMe, the fact that you can simply plug in a standard drive and use it for mass storage is a big advantage. But the board also breaks out pretty much any I/O you could possibly want from the CM4, and even includes some of its own niceties like an RTC module and I2S DAC with a high-quality headphone amplifier.

Once the NVMe drive is safely nestled into position and you’ve updated to the beta bootloader, you can say goodbye to SD cards. But don’t get too excited just yet. Somewhat surprisingly, [Jeff] finds that booting from the NVMe drive is no faster than the SD card. That said, actually loading programs and other day-to-day tasks are far snappier once the system gets up and running. Perhaps the boot time can be improved with future tweaks, but honestly, the ~7 seconds it currently takes to start up the CM4 hardly seems excessive.
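Boot time aside, the day-to-day difference is easy to quantify with a quick sequential-read test. This is our own minimal sketch rather than anything from [Jeff]’s post, and the device paths are typical names you should confirm on your own board; run it as root, and drop the page cache between runs if you want honest numbers.

# Crude sequential-read throughput check for comparing the SD card and the
# NVMe drive. Our own sketch, not [Jeff]'s tooling; device names below are
# typical but not guaranteed. Run as root, and consider running
# 'echo 3 > /proc/sys/vm/drop_caches' between runs so the page cache
# doesn't flatter the results.
import time

def read_throughput(path, total_bytes=256 * 1024 * 1024, chunk=1024 * 1024):
    """Return sequential read speed in MB/s over the first total_bytes."""
    done = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while done < total_bytes:
            data = f.read(chunk)
            if not data:
                break
            done += len(data)
    return done / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    for dev in ("/dev/mmcblk0", "/dev/nvme0n1"):   # SD card, NVMe (typical)
        try:
            print(f"{dev}: {read_throughput(dev):.0f} MB/s")
        except OSError as err:
            print(f"{dev}: {err}")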

NVMe drives are exciting pieces of tech, and it’s good to see more single-board computers support them. While it might not help your CM4 boot any faster, it definitely offers a nice kick in performance across the board and expands what the system is capable of.

Hands-On: The RISC-V ESP32-C3 Will Be Your New ESP8266

We just got our hands on some engineering pre-samples of the ESP32-C3 chip and modules, and there’s a lot to like about this chip. The question is what to compare it to: is it more an ESP32 or an ESP8266? The new “C3” variant has a single 160 MHz RISC-V core that outperforms the ESP8266, and at the same time includes most of the peripheral set of an ESP32. While RAM often ends up scarce on an ESP8266, with around 40 kB or so, the ESP32-C3 sports 400 kB of RAM, and manages to keep it all running while burning less power. Like the ESP32, it has Bluetooth LE 5.0 in addition to WiFi.

Espressif’s website says multiple times that it’s going to be “cost-effective”, which is secret code for cheap. Rumors are that there will be eight-pin ESP-01-style modules hitting the streets priced as low as $1. We usually require more pins, but if medium-sized ESP32-C3 modules are priced near the ESP8266-12-style modules, we can’t see any reason to buy the latter; for us it will be an ESP8266 killer, plain and simple.

On the other hand, it lacks the dual cores of the ESP32, and simply doesn’t have as many GPIO pins. If you’re a die-hard ESP32 abuser, you’ll doubtless find some features missing, like the ultra-low-power coprocessor or the DACs. But it does share a lot of the ESP32 standouts: the LEDC (PWM) peripheral and the unique parallel I2S come to mind. Moreover, it shares the ESP-IDF framework with the ESP32, so despite running on an entirely different CPU architecture, a lot of code will run without change on both chips just by tweaking the build environment with a one-liner.

One of these things is not like the other

If you were confused by the chip’s name, like we were, a week or so playing with the new chip will make it all clear. The ESP32-C3 is a lot more like a reduced version of the ESP32 than it is like an improvement over the ESP8266, even though it’s probably destined to play the latter role in our projects. If you count in the new ESP32-S3 that brings in USB, the ESP32 family is bigger than just one chip. Although it does seem odd to lump the RISC-V and Tensilica CPUs together, at the end of the day it’s the peripherals more than the CPUs that differentiate microcontrollers, and on that front the C3 is firmly in the ESP32 family.

Our takeaway: the ESP32-C3 is going to replace the ESP8266 in our projects, but it won’t replace the ESP32, which simply has more of everything for when we need it. The shared codebase and peripheral architecture make it easier to switch between the two when we don’t need the full-blown ESP32. In that spirit, we welcome the newcomer to the family.

But naturally, we’ve got a lot more to say about it. Specifically, we were interested in exactly what the RISC-V core brings to the table, and ran the module through power and speed comparisons with the ESP32 and ESP8266 — it beats them both by a small margin in our benchmarks. We’ve also become much better friends with the ESP-IDF SDK that all of the ESP32-family chips use, and love how far it has come in the last year or so. It’s not as newbie-friendly as ESP-Arduino, for sure, but it’s a ton more powerful, and we’re totally happy to leave the ESP8266 SDK behind us.


Lowering JavaScript Timer Resolution Thwarts Meltdown And Spectre

Attacks exploiting the Meltdown and Spectre vulnerabilities infer protected information from subtle differences in hardware behavior. It takes less time to access data that has been cached than data that needs to be retrieved from main memory, and precisely measuring that time difference is a critical part of these attacks.

Our web browsers present a huge potential attack surface, as JavaScript is ubiquitous on the modern web. Executing JavaScript code necessarily exercises the processor cache, and a high-resolution timer is accessible via the browser performance API.

Web browsers can’t change processor cache behavior, but they can take away malicious code’s ability to exploit it. Browser makers are intentionally degrading time-measurement capability in the API to make attacks more difficult. These changes are being rolled out for Google Chrome, Mozilla Firefox, Microsoft Edge, and Internet Explorer. Apple has announced upcoming Safari updates that are likely to follow suit.

After these changes, the timestamp returned by performance.now() will be less precise due to lower resolution. Some browsers are going a step further and degrading accuracy by adding random jitter. There will also be degradation or outright disabling of other features that can be used to infer data, such as SharedArrayBuffer.
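To get a feel for why a coarser, jittered clock blunts these attacks, here’s a toy illustration. It’s plain Python rather than browser JavaScript, and the access times are invented, but it shows how a sub-microsecond gap between a cached and an uncached access stands out with a fine timer and gets buried in noise once readings are quantized and jittered.

# Toy model (Python, not browser JS) of why coarse, jittered timestamps
# frustrate cache-timing side channels. Access times below are invented.
import random

HIT_US, MISS_US = 0.1, 0.4      # pretend cache hit vs. miss, in microseconds

def fine_timer(t_us):
    return t_us                  # high-resolution reading

def coarse_timer(t_us, step_us=1000.0):
    jitter = random.uniform(0.0, step_us)
    return ((t_us + jitter) // step_us) * step_us   # quantized + jittered

for name, clock in (("fine", fine_timer), ("coarse", coarse_timer)):
    hits = [clock(HIT_US) for _ in range(1000)]
    misses = [clock(MISS_US) for _ in range(1000)]
    gap = sum(misses) / 1000 - sum(hits) / 1000
    print(f"{name} timer: apparent hit/miss gap = {gap:+.3f} us")

With the fine timer the 0.3 µs gap shows up exactly every time; with the coarse one it is swamped by noise of comparable size, forcing an attacker to collect vastly more samples.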

These changes will have no impact on the vast majority of users. The performance API is used by developers to debug sluggish code; the actual run speed is unaffected. Other features like SharedArrayBuffer are relatively new, and their absence would go largely unnoticed. Unfortunately, web developers will have a harder time tracking down slow code under these changes.

Browser makers are calling this a temporary measure for now, but we won’t be surprised if it becomes permanent. It is a relatively simple change that blunts the immediate impact of Meltdown/Spectre, and it would also mitigate yet-to-be-discovered timing attacks in the future. If browser makers offer a “debug mode” to restore high-precision timers, developers could activate it just for their performance-tuning work, and everyone should be happy.

This is just one part of the shock wave Meltdown/Spectre has sent through the computer industry. We have broader coverage of the issue here.

BeagleBone Pin-Toggling Torture Test

Benchmarks often get criticized for their inability to perfectly model the real-world situations we’d like them to represent. So take what follows in the limited scope in which it’s intended, and don’t read too much into it. [Joonas Pihlajamaa]’s experiments with toggling a hardware pin as fast as possible on different single-board computers can still show us something.

The take-home result won’t surprise anyone who’s worked with a single-board computer: the higher-level interfaces are simply slow compared to direct memory-mapped GPIO access. But really slow. We’re talking around 5 kHz from Python or any of the file-based interfaces to the pins versus 3 MHz for direct access. Worse, as you’d expect when a non-realtime operating system is in the middle, there are glitches on the order of ten milliseconds with all the file-based methods.
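For reference, the file-based path that tops out in the kilohertz range looks roughly like this. This is our own sketch of the legacy sysfs GPIO interface, not [Joonas]’s code, and the pin number is an arbitrary example; every toggle goes through a write() syscall and the kernel’s VFS layer, which is where both the speed ceiling and the scheduler-induced glitches come from.

# Minimal sysfs-style GPIO toggle benchmark (our sketch, not [Joonas]'s code).
# GPIO 60 is an arbitrary example; pick a pin that's actually free on your
# board. Each os.pwrite() is a syscall, which is why this tops out around a
# few kHz and stutters whenever the scheduler preempts the process.
import os
import time

GPIO = 60
BASE = f"/sys/class/gpio/gpio{GPIO}"

def export_pin():
    if not os.path.isdir(BASE):
        with open("/sys/class/gpio/export", "w") as f:
            f.write(str(GPIO))
    with open(f"{BASE}/direction", "w") as f:
        f.write("out")

def toggle_rate(cycles=20000):
    """Return the achieved full high/low toggle frequency in Hz."""
    fd = os.open(f"{BASE}/value", os.O_WRONLY)
    start = time.perf_counter()
    for _ in range(cycles):
        os.pwrite(fd, b"1", 0)
        os.pwrite(fd, b"0", 0)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return cycles / elapsed

if __name__ == "__main__":
    export_pin()
    print(f"~{toggle_rate():.0f} Hz via sysfs")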

This test only tells us so much, though, and it’s not really taking advantage of the BeagleBone Black’s ace in the hole, the PRUs — onboard hardware processors that bring real-time IO capabilities to the system. We’d like to see a re-write of the code to take advantage of libpruio, for instance. A 20 MHz square wave is a piece of cake with the PRUs.

Of course, it’s not interacting, which is probably in the spirit of the benchmark as written. But if raw hardware speed on a BeagleBone is the goal, it’s likely that the PRUs are going to feature prominently in the solution.

Benchmarking The Raspberry Pi 2

The Raspberry Pi 2 has only been available for a few days, but already those boards are heading through the post office and onto workbenches around the world. From the initial impressions, we already know this quad-core ARMv7 system boots in about half the time of its predecessor, but other than that, there aren’t many real benchmarks comparing the new Raspberry Pi 2 to the older Raspi 1 or other similar tiny Linux dev boards. This is the post that fixes that.

A word of warning, though: these are benchmarks, and benchmarks aren’t real-world use cases. However, we can glean a little bit of information about the true performance of the Raspberry Pi 2 with a few simple tools.

For these tests, I’ve used Roy Longbottom’s Raspberry Pi benchmarking tools, nbench, and a few custom tools to determine how fast both hardware versions of the Raspberry Pi are in real-world use cases.
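If you just want a quick first-order comparison of your own before digging into the full results, timing a fixed CPU-bound workload on each board gets you most of the way there. The sketch below is an illustrative stand-in, not one of the custom tools used for these tests.

# Quick-and-dirty single-core CPU check: run a fixed integer workload and
# report how long it takes. This is an illustrative stand-in, not one of the
# tools used for the article; run it on each board and compare directly.
import time

N = 2_000_000

def busy_work(n=N):
    total = 0
    for i in range(n):
        total += (i * i) % 97
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    busy_work()
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.2f} s for {N:,} iterations "
          f"({N / elapsed / 1e6:.2f} M iterations/s)")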
