ASUS GPU Uses Gyroscope To Warn Of Sagging Cards

It’s no exaggeration to say that over the years video cards (GPUs), much like CPU coolers, have become rather chonky. Unfortunately, the PCIe slots they plug into were never designed with multi-kilogram cards in mind, and all that extra weight is of course happily acted upon by gravity.

The dialog in Asus' GPU Tweak software that shows the degrees of sag for your GPU. (Credit: Asus)

The problem has gotten to the point that ASUS fitted its ROG Astral RTX 5090 card with a Bosch Sensortec BMI323 inertial measurement unit (IMU) to provide accelerometer and angular-rate (gyroscope) measurements, as reported by [Uniko’s Hardware] (in Chinese; see the English [Videocardz] article).

There are so-called anti-sag brackets that provide structural support to the top of the GPU, where it isn’t normally secured. But since this card weighs in at over 6 pounds (3 kilograms) for the air-cooled model, it appears a bracket alone wasn’t deemed enough, and active monitoring was added.

The software lets you set a sag angle at which you receive a notification, presumably so you can power down the system and readjust the GPU, or at least get fair warning before the card rips itself loose from the PCIe slot and crashes to the bottom of the case.
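
ASUS hasn’t said exactly how GPU Tweak computes the sag figure, but the underlying math is simple: a stationary accelerometer measures the gravity vector, and the angle between a reference reading (taken when the card sits where it should) and the current reading gives the tilt. Below is a minimal, purely hypothetical Python sketch of that calculation; the sample readings and the 1° threshold are invented for illustration.

    import math

    def sag_angle_deg(ref, cur):
        """Angle in degrees between two gravity vectors (x, y, z in g)."""
        dot = sum(r * c for r, c in zip(ref, cur))
        mag = math.sqrt(sum(r * r for r in ref)) * math.sqrt(sum(c * c for c in cur))
        # Clamp to [-1, 1] to guard acos() against rounding error
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

    # Hypothetical accelerometer readings: at install time vs. months later
    reference = (0.000, -1.000, 0.020)
    current = (0.000, -0.998, 0.070)

    if sag_angle_deg(reference, current) > 1.0:  # user-configured threshold
        print("GPU sag detected: check your anti-sag bracket!")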

Import GPU: Python Programming With CUDA

Every few years or so, a development in computing results in a sea change and a need for specialized workers to take advantage of the new technology: COBOL in the 60s and 70s, HTML in the 90s, SQL more recently. There’s always something new to learn in the computing world. The introduction of graphics processing units (GPUs) for general-purpose computing is perhaps the most important recent development, and if you want to develop some new Python skills to take advantage of it, take a look at this introduction to CUDA, which allows developers to use Nvidia GPUs for general-purpose computing.

Of course, CUDA is a proprietary platform and requires one of Nvidia’s supported graphics cards to run, but assuming that barrier to entry is met, it’s not much extra effort to use it for non-graphics tasks. The guide takes a closer look at the open-source library PyTorch, which lets a Python developer get up to speed quickly with the features of CUDA that make it so appealing to researchers and developers in artificial intelligence, machine learning, big data, and other frontiers of computer science. It covers how threads are created, how they move through the GPU and cooperate with other threads, how memory is managed on both the CPU and the GPU, how CUDA kernels are written, and how everything else involved is handled, largely through the lens of Python.
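
As a taste of how little ceremony PyTorch demands, here’s a minimal sketch (our own, not lifted from the guide) that runs a matrix multiplication on the GPU when one is available and falls back to the CPU otherwise; the 4096-square matrix size is an arbitrary choice.

    import torch

    # Fall back to the CPU if no CUDA-capable GPU is present
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Allocate two large matrices directly on the chosen device
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    # Dispatches to a CUDA kernel when the tensors live on the GPU
    c = a @ b

    if device.type == "cuda":
        torch.cuda.synchronize()  # kernel launches are asynchronous
    print("Computed on", device, "checksum:", c.sum().item())

The same multiply line dispatches to either a CUDA kernel or a CPU routine depending on where the tensors live, which is a large part of PyTorch’s appeal.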

Getting started with something like this is almost a requirement to stay relevant in the fast-paced realm of computer science, as machine learning has taken center stage in almost everything computer-related these days. It’s worth noting that, strictly speaking, an Nvidia GPU is not required for GPU programming like this; AMD has a GPU computing platform called ROCm which, despite being open source, still trails Nvidia in adoption and arguably in performance as well. Some other learning tools for GPU programming we’ve seen in the past include this puzzle-based tool, which illustrates some of the specific problems GPUs excel at.

Laptop GPU Upgrade With Just A Little Reballing

Modern gaming laptops are in an uncomfortable spot: often too underpowered for the newest titles, but too bulky to be genuinely portable. It doesn’t help that they’re not often upgradeable, so you’re stuck with what you’ve bought. Unless, say, you’re a hacker equipped with some tools for PCB reflow? If that’s the case, welcome to [TechModLab]’s video showing the process of upgrading a laptop’s soldered-on NVIDIA GPU, replacing the 3070 chip with a 3080.

You don’t need much. The most exotic tool is a BGA rework station, which holds the mainboard steady and flat while heating a specific large chip on the board with an infrared lamp from above. That one is definitely a specialty tool, but we’ve seen hackers build their own. From there, general soldering supplies like flux and solder wick, a stencil for your chip, BGA balls, and a $20 USB-C hotplate are instrumental for reballing chips, and those are tools you ought to have anyway.

Reballing was perhaps the hardest step of the journey, and essential for preparing the GPU before the transplant. Afterwards, only a few steps remained: poking a BGA ball that didn’t connect, changing board straps to account for the extra VRAM our enterprising hacker added alongside the upgrade, and fiddling with the driver install process a little. Use this method to upgrade from a lower-end binned GPU you’re stuck with, or perhaps to repair your laptop if artifacts start appearing. It’s a worthwhile reminder of the methods laptop repair shops use on the daily.

Itching to learn more about BGAs? You absolutely should read this article series by our own [Robin Kearey]. We’ve mostly seen reballing used for upgrading RAM on laptop and Raspberry Pi boards, but seeing it applied to an entire GPU is nice; it’s the same technique, just scaled up, and you can always start by practicing at a smaller scale. It might feel like we’ve left the era of upgradeable laptop GPUs behind, and today’s project won’t necessarily ease that worry, but the Framework 16 definitely bucks the trend.


ROG Ally Community Rebuilds The Proprietary Asus eGPU

As far as impressive hacks go, this one is more than enough for your daily quota. You might remember the ROG Ally, a Steam Deck-like x86 gaming console that’s graced our pages a couple of times. Now, this is a big one – from the ROG Ally community, we get a fully open-source eGPU adapter for the ROG Ally, built by reverse-engineering the proprietary and overpriced eGPU sold by Asus.

We’ve seen this journey unfold over a year’s time, and the result is glorious: two different PCBs, one an upgraded drop-in replacement for the original eGPU’s board, and another designed to fit a common eGPU form-factor adapter. The connector on the ROG Ally is semi-proprietary, but its cable can be obtained as a repair part. From there, it was a matter of scrupulous pinout reverse-engineering, logic-analyzer protocol captures, ACPI and BIOS decompiling, multiple PCB revisions, and months of work. What we got is a masterpiece of community effort.

Want to learn how the reverse-engineering process unfolded? Check out the Diary.md; it’s certainly got something to teach you, especially if you plan to walk a similar path. Then make sure to read up on all the other resources on the GitHub, too! This achievement follows a trend from the ROG Ally community; we’ve featured its dual-screen mods and battery replacements before. If it continues this way, who knows, maybe next time we’ll see a BGA replacement or laser fault injection.

Learn GPU Programming With Simple Puzzles

Have you wanted to get into GPU programming with CUDA, but found the usual textbooks and guides a bit too intense? Well, help is at hand in the form of a series of increasingly difficult programming ‘puzzles’ created by [Sasha Rush]. The first part of the simplification is to use the excellent Numba Python JIT compiler, which lets easy-to-understand Python code be deployed as GPU machine code. Working the puzzles is even easier if you use this linked Google Colab as your programming environment, which drops you straight into a Jupyter notebook with the puzzles laid out. You can use your own GPU if you have one, but that’s not detailed.

The puzzles start by assuming you know nothing at all about GPU programming, which is totally the case for some of us! What’s really nice is the way the result of each program is displayed, showing graphically how data are read from the input arrays and written to the output arrays you’re working with. Each essential CUDA programming concept is introduced one at a time with a real programming example, making it a breeze to follow along. Just make sure you don’t watch the video below all the way through the first time, as in it [Sasha] explains all the solutions!
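
For a taste of the style, here’s our own minimal sketch in the spirit of the first puzzle (not one of [Sasha]’s actual exercises); it assumes a CUDA-capable GPU with Numba installed, and adds 10 to every element of an array using one thread per element.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_ten(out, a):
        i = cuda.threadIdx.x      # this thread's index within the block
        if i < a.size:            # guard against out-of-range threads
            out[i] = a[i] + 10

    a = np.arange(8, dtype=np.float32)
    out = np.zeros_like(a)
    add_ten[1, 8](out, a)         # launch 1 block of 8 threads
    print(out)                    # [10. 11. 12. 13. 14. 15. 16. 17.]

Numba transparently copies the NumPy arrays to the GPU and back for you, which is exactly the sort of detail the later puzzles make you reason about explicitly.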

Confused about why you’d want to do this? Then perhaps check out our guide to CUDA first. We know what you’re thinking: what about non-Nvidia hardware? Well, there’s SCALE for that! Finally, once you understand CUDA, why not have a play with WebGPU?


Hacking An NVIDIA CMP 170HX Crypto GPU For EM Sim Work

A few years back, NVIDIA created a dedicated cryptocurrency mining GPU, the CMP 170HX: a heavily restricted version of its flagship A100 datacenter accelerator, using the same GA100 chip. It was intended for accelerating Ethash, the Ethereum proof-of-work algorithm, and nothing else. [niconiconi] bought one to use for accelerating PCB electromagnetic simulations and put a lot of effort into repairing the card, converting it to water cooling, and figuring out how best to use this nobbled GPU.

Typically, the GA100 silicon sits at the heart of the mighty A100 GPU card, which would normally be found in a server rack, cooled by forced air. That was not an option at home, so an off-the-shelf water-cooling block was wedged in. During this process, [niconiconi] found that the board wouldn’t power on, prompting a deep dive into the power-supply tree with the help of a leaked A100 schematic. The repair and modification details can be found in the appendix at the very end of the article; it’s a long read to get there.


Las Vegas’ Sphere: Powered By Nvidia GPUs, With An Impressive Power Bill

A daytime closeup of the LED pucks that comprise the exosphere of the Sphere in Paradise, Nevada (Credit: Y2kcrazyjoker4, Wikimedia)

As the United States’ pinnacle of extravagance, the Las Vegas Strip and the rest of the town of Paradise are on a seemingly never-ending quest to become brighter, glossier, and more over the top as each venue tries to overshadow the competition. A good example is the ironically rather uninspiredly named Sphere, which pairs an incredibly dull name with a completely outrageous entertainment venue: a roughly 15,000 m² (~3.7 acre) wrap-around interior LED display (16 x 16K displays) and an exterior LED display (the ‘Exosphere’) consisting of 1.23 million LED ‘pucks’. Although the venue opened in September of 2023, details about the hardware that drives those displays were only recently published by Nvidia in a blog post.

Driving all these pixels are around 150 Nvidia RTX A6000 GPUs, installed in computer systems networked using Nvidia BlueField data processing units (DPUs) and Nvidia ConnectX-6 NICs (up to 400 Gb/s), with visual content transferred from Sphere Studios in California to the Sphere itself. These GPU systems draw about 45 kW when running at full blast; add the LED displays and related hardware, and the total is estimated at up to 28 MW. That has caused local environmentalists grief, despite the owner’s claim that solar power will cover 70% of the venue’s needs, a claim that sits oddly with its many night-time events. Another item locals take issue with is the light pollution the exterior display adds.

Although it’s popular to either attack or defend luxurious excesses like the Sphere, it’s interesting to note that the state of Nevada mostly gets its electricity from natural gas. Meanwhile, the Sphere’s 2.3 billion USD price tag would have bought Nevada about 16.5% of a nuclear power station like Arizona’s Palo Verde (before the recurring power bill), but Palo Verde’s reactor spheres are admittedly less suitable for rock concerts.