Homebrew GPU Tackles Quake

Have you ever wondered how a GPU works? Even better, have you ever wanted to make one? [Dylan] certainly did, because he made FuryGPU — a fully custom graphics card capable of playing Quake at over 30 frames per second.

As you might have guessed, FuryGPU isn’t in the same league as modern graphics cards — those are built from thousands of math-specialized cores, which are then programmed with whatever shaders you want. FuryGPU is a more “traditional” GPU: it has dedicated hardware for each function the GPU needs to perform, and it doesn’t support “shader code” the way an AMD or NVIDIA GPU does. According to [Dylan], the hardest part of the whole thing was writing Windows drivers for it.
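
The fixed-function versus programmable distinction is easier to see in miniature. Below is a rough Python sketch of the idea — purely illustrative, and not FuryGPU’s actual pipeline: a fixed-function stage bakes one formula into the hardware, while a shader-capable GPU runs whatever per-pixel program you hand it.

```python
# Illustration only -- not FuryGPU's real pipeline. A fixed-function stage
# hard-wires one behavior; a programmable stage runs an arbitrary shader.

def fixed_function_pixel(tex_color, light_intensity):
    """Hard-wired behavior: modulate a texture sample by the vertex lighting."""
    return tuple(c * light_intensity for c in tex_color)

def programmable_pixel(shader, tex_color, light_intensity):
    """Shader-style: the 'hardware' simply runs whichever function it is given."""
    return shader(tex_color, light_intensity)

inputs = ((0.8, 0.4, 0.2), 0.5)
print(fixed_function_pixel(*inputs))                      # always the same formula
print(programmable_pixel(
    lambda color, light: tuple(min(1.0, c * light * 2 + 0.1) for c in color),
    *inputs))                                             # whatever the shader says
```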

On his blog, [Dylan] tells us all about how he went from the obligatory [Ben Eater] breadboard CPU to playing with FPGAs, and then to ever-larger FPGAs that could bear the weight of this mighty GPU. While this project isn’t exactly revolutionary in the GPU world, it certainly is impressive, and we can’t wait to see what comes next.

Continue reading “Homebrew GPU Tackles Quake”

The Raspberry Pi 5 Can Use External Graphics Cards Now

The Raspberry Pi line is full of capable compact computers, but they’ve never been the strongest in the bunch when it comes to graphical output. Nor have they been particularly expandable in that regard. However, that’s all beginning to change, with [Jeff Geerling] reporting success getting external GPUs to work on the Raspberry Pi 5.

Unlike its predecessors, the Raspberry Pi 5 has a far less quirky PCI Express implementation. Previous editions threw up all sorts of issues when paired with GPUs, but [Jeff] has found much more success this time around. He’s gotten an AMD RX 460 working with the setup and running a good chunk of the glmark2 test suite. He’s working through a variety of other AMD cards too, but suspects NVIDIA parts could be harder, thanks to some initialization issues that are proving difficult to quash.

It still takes some funky adapters and a lot of work, but finally GPUs are starting to work with the platform. Keep up with his list of card trials on the PiPCI website. We’ve seen [Jeff]’s work with earlier iterations of the Raspberry Pi before, too. Video after the break.

Continue reading “The Raspberry Pi 5 Can Use External Graphics Cards Now”

Here’s Why GPUs Are Deep Learning’s Best Friend

If you’re curious about how fancy graphics cards actually work, and why they’re so well suited to AI-type applications, then take a few minutes to read [Tim Dettmers]’s explanation of why this is so. It’s not a terribly long read, and while it does get technical, there are also car analogies, so there’s something for everyone!

He starts off by saying that most people know GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them so useful is their ability to work with large amounts of memory very efficiently.

Essentially, a CPU is a latency-optimized device while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can’t carry much. A truck is slower, but far better at moving a lot at once. Continue reading “Here’s Why GPUs Are Deep Learning’s Best Friend”
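
To put some very rough numbers on that analogy, here’s a small back-of-the-envelope sketch. Every figure in it is an assumption chosen for illustration, not a benchmark or anything from [Tim Dettmers]’s article: the “race car” fetches small, cache-line-sized pieces at low latency, while the “truck” hauls big coalesced bursts on each trip.

```python
# Back-of-the-envelope sketch of the race car vs. cargo truck analogy.
# All numbers are assumptions for illustration, not measurements.

cargo_bytes = 1 * 1024**3  # pretend we must move 1 GiB of weights/activations

vehicles = {
    # name: (latency per trip in ns, payload per trip in bytes) -- assumed
    "race car (latency-optimized, CPU-like)": (10, 64),                # cache-line loads
    "cargo truck (bandwidth-optimized, GPU-like)": (500, 128 * 1024),  # big coalesced bursts
}

for name, (latency_ns, payload) in vehicles.items():
    trips = cargo_bytes / payload
    total_s = trips * latency_ns * 1e-9
    effective_gbps = cargo_bytes / total_s / 1e9
    print(f"{name}: {trips:,.0f} trips, {total_s:.3f} s, ~{effective_gbps:.0f} GB/s effective")
```

Despite its far higher per-trip latency, the truck finishes moving the cargo orders of magnitude sooner, which is exactly why deep learning workloads favor the bandwidth-optimized device.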

The Tale Of The Final EVGA GPU Overclocking Record

It’s not news that EVGA is getting out of the GPU card game, after a ‘little falling out’ with NVIDIA. It’s sad news nonetheless, as this enthusiastic band of hardware hackers has a solid following in certain overclocking and custom PC circles. The Gamers Nexus gang decided to fly over to meet up with the EVGA team in Zhonghe, Taiwan, and follow them around a bit as they tried for one last overclocking record on the latest (unreleased, RTX 4090-based) GPU card. As you will note early on in the video, things didn’t go smoothly, with their hand-lapped GPU burning out the PCB after a small setup error. Continue reading “The Tale Of The Final EVGA GPU Overclocking Record”


Hackaday Links: July 3, 2022

Looks like we might have been a bit premature in our dismissal last week of the Sun’s potential for throwing a temper tantrum, as that’s exactly what happened when a G1 geomagnetic storm hit the planet early last week. To be fair, the storm was very minor — aurora visible down to the latitude of Calgary isn’t terribly unusual — but the odd thing about this storm was that it sort of snuck up on us. Solar scientists first thought it was a coronal mass ejection (CME), possibly related to the “monster sunspot” that had rapidly tripled in size and was being hyped up as some kind of planet killer. But it appears this sneak attack came from another, less-studied phenomenon: a co-rotating interaction region, or CIR. These sound a bit like eddy currents in the solar wind, which can bunch up plasma that then suddenly bursts forth from the Sun, all without the usual telltale sunspots.

Then again, even people who study the Sun for a living don’t always seem to agree on what’s going on up there. Back at the beginning of Solar Cycle 25, NASA and NOAA, the National Oceanic and Atmospheric Administration, were calling for a relatively weak showing during our star’s eleven-year cycle, as recorded by the number of sunspots observed. But another model, developed by heliophysicists at the U.S. National Center for Atmospheric Research, predicted that Solar Cycle 25 could be among the strongest ever recorded. And so far, it looks like the latter group might be right. Where the NASA/NOAA model called for 37 sunspots in May of 2022, for example, the Sun actually threw up 97 — much more in line with what the NCAR model predicted. If the trend holds, the peak of the eleven-year cycle in April of 2025 might see over 200 sunspots a month.

So, good news and bad news from the cryptocurrency world lately. The bad news is that cryptocurrency markets are crashing, with the flagship Bitcoin falling from its high of around $67,000 down to $20,000 or so, and looking like it might fall even further. But the good news is that this has put a bit of a crimp in the demand for NVIDIA graphics cards, as the economics of turning electricity into hashes start to look a little less attractive. So if you’re trying to upgrade your gaming rig, that means there’ll soon be a glut of GPUs, right? Not so fast, maybe: at least one analyst has a different view, based mainly on the distribution of AMD and NVIDIA GPU chips in the market, as well as how much revenue each draws from crypto rather than from traditional uses of the chips. It’s important mainly for investors, though, so it doesn’t really matter much if you’re just looking for a graphics card on the cheap.

Speaking of businesses, things are not looking too good for MakerGear. According to a banner announcement on their website, the supplier of 3D printers, parts, and accessories is scaling back operations, to the point where everything is being sold on an “as-is” basis with no returns. In a long post on “The Future of MakerGear,” founder and CEO Rick Pollack says the problem basically boils down to supply chain and COVID issues — they can’t get the parts they need to make printers. And so the company is looking for a buyer. We find this sad but understandable, and wish Rick and everyone at MakerGear the best of luck as they try to keep the lights on.

And finally, if there’s one thing Elon Musk is good at, it’s keeping his many businesses in the public eye. And so it is this week with SpaceX, which is recruiting Starlink customers to write nasty-grams to the Federal Communications Commission regarding Dish Network’s plan to gobble up a bunch of spectrum in the 12-GHz band for their 5G expansion plans. The 3,000 or so newly minted experts on spectrum allocation wrote to tell FCC commissioners how much Dish sucks, and how much they love and depend on Starlink. It looks like they may have a point — Starlink uses the lowest part of the Ku band (12 GHz – 18 GHz) for data downlinks to user terminals, along with big chunks of about half a dozen other bands. It’ll be interesting to watch this one play out.

Asahi GPU Hacking

[Alyssa Rosenzweig] has been tirelessly working on reverse engineering the GPU built into Apple’s M1 architecture as part of the Asahi Linux effort. If you’re not familiar, that’s the project adding support to the Linux kernel and userspace for the Apple M1 line of products. She has made great progress, and even got primitive rendering working with her own open-source code just over a year ago.

Trying to mature the driver, however, has hit a snag. For complex rendering, something in the GPU breaks, and the frame is simply missing chunks of content. Some clever testing discovered the exact failure trigger — too much total vertex data. Put simply, it’s “the number of vertices (geometry complexity) times amount of data per vertex (‘shading’ complexity).” That… almost sounds like a buffer filling up, but on the GPU itself. This isn’t a buffer that the driver directly interacts with, so all of this sleuthing has to be done blindly. The Apple driver doesn’t have corrupted renders like this, so what’s going on?
Continue reading “Asahi GPU Hacking”
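
To make the suspected failure mode concrete, here’s a tiny sketch of the “vertex count times bytes per vertex” arithmetic. The buffer limit and scene sizes below are made-up placeholders for illustration, not real Apple M1 figures.

```python
# Illustration of the suspected trigger: total vertex data crossing some
# internal limit. The limit and scene sizes below are invented placeholders.

ASSUMED_BUFFER_LIMIT = 64 * 1024 * 1024  # hypothetical on-GPU buffer, 64 MiB

def total_vertex_data(vertex_count: int, bytes_per_vertex: int) -> int:
    """Geometry complexity times per-vertex ('shading') complexity."""
    return vertex_count * bytes_per_vertex

scenes = [
    ("simple scene", 100_000, 32),     # few vertices, lean vertex format
    ("complex scene", 2_000_000, 64),  # many vertices, fatter vertex format
]

for name, verts, stride in scenes:
    total = total_vertex_data(verts, stride)
    status = "renders fine" if total <= ASSUMED_BUFFER_LIMIT else "chunks go missing"
    print(f"{name}: {total / 2**20:.1f} MiB of vertex data -> {status}")
```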

NVIDIA Releases Drivers With Openness Flavor

This year, we’ve already seen sizeable leaks of NVIDIA source code and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and has just released open-source GPU kernel modules for Linux. The GitHub repository, named open-gpu-kernel-modules, has people rejoicing, and we are already testing the code out, making memes, and speculating about the future. This driver is currently claimed to be experimental and only “production-ready” for datacenter cards – but you can already try it out!

The Driver’s Present State

Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver, and it will only work on cards from the RTX 2000 and Quadro RTX series onward (aka Turing and newer). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project is that a good portion of the features that AMD and Intel drivers implement in the Linux kernel are instead provided by a binary blob that runs inside the GPU. This blob runs on the GSP, a RISC-V core that’s only available on Turing GPUs and younger – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!
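
As a quick illustration of that cutoff, the sketch below checks whether a card’s architecture falls on the supported side of the Turing line. It’s purely my own illustration, not anything shipped by NVIDIA; the card-to-architecture pairings are just well-known examples.

```python
# My own illustration, not NVIDIA tooling: the open kernel modules need the
# GSP, which only exists on Turing-and-newer silicon.

ARCH_ORDER = ["Kepler", "Maxwell", "Pascal", "Volta", "Turing", "Ampere"]
SUPPORTED_FROM = "Turing"

def open_modules_supported(arch: str) -> bool:
    """True if the architecture is Turing or newer (i.e., has a GSP)."""
    return ARCH_ORDER.index(arch) >= ARCH_ORDER.index(SUPPORTED_FROM)

for card, arch in [("GTX 1080", "Pascal"), ("RTX 2060", "Turing"), ("RTX 3090", "Ampere")]:
    verdict = "open kernel modules" if open_modules_supported(arch) else "proprietary driver only"
    print(f"{card} ({arch}): {verdict}")
```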

That aside, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only increase going forward. Not everything’s open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL, and CUDA drivers remain closed, for now. The same goes for the old NVIDIA proprietary driver, which, I’d guess, will be left to rot – fitting, as “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards. Continue reading “NVIDIA Releases Drivers With Openness Flavor”