A Dedicated GPU For Your Favorite SBC

The Raspberry Pi is famous for its low cost, its versatile and open Linux environment, and its plentiful I/O, making it a perfect device not only for its originally-intended educational purposes but for basically every hobbyist from gardeners to roboticists to amateur radio operators. Most builds tend to make use of the GPIO pins, which allow easy connections to various peripherals and sensors, but the Pi also supports PCIe devices, which means that, in theory, it could use a GPU in much the same way that a modern computer would. After plenty of testing and development, [Jeff Geerling] brings us this custom graphics card interface for the Raspberry Pi.

The testing for all of these graphics cards has been done with a Pi Compute Module 4, and the end result is an interface device which looks much like a graphics card itself. It breaks the Pi’s PCIe bus out onto a more familiar x16 slot connector and adds physical connections for power, USB, and Ethernet. When plugged into the carrier board, the Compute Module can be attached to any number of graphics cards, including the latest and highest-end of Nvidia’s and AMD’s offerings.

Perhaps unsurprisingly, though, the RTX 4090 and RX 7900 cards don’t work with the Raspberry Pi. This is partially due to 32-bit limitations in the Pi’s PCIe address space and other memory-mapping issues, but even after attempting some workarounds, Nvidia’s cards aren’t open-source enough to test properly (although the card is recognized by the Pi), and AMD’s drivers crash the system even after compiling a custom kernel. [Jeff] did find an Nvidia card that worked, although it requires using the USB interface, and second-hand examples are selling for around $3,000 USD. For a more economical choice there are some other graphics cards that he was eventually able to get working, albeit not with perfect performance, including some of the ones we’ve seen him test already.
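
If you’re tinkering with a similar CM4-plus-GPU setup, the first question is usually whether the card even enumerates on the PCIe bus. Here’s a minimal Python sketch, assuming a Linux system with the usual sysfs layout, that lists any display-class PCI devices and whether a driver has bound to them. It’s only a sanity check, not a substitute for the kernel and driver gymnastics described above.

from pathlib import Path

# PCI class 0x03xxxx = display controller; a couple of well-known GPU vendor IDs
GPU_VENDORS = {"0x10de": "NVIDIA", "0x1002": "AMD"}
PCI_ROOT = Path("/sys/bus/pci/devices")

def list_gpus():
    if not PCI_ROOT.is_dir():
        print("No PCI sysfs tree found; is this a Linux system with PCIe enabled?")
        return
    for dev in sorted(PCI_ROOT.iterdir()):
        if not (dev / "class").read_text().strip().startswith("0x03"):
            continue  # skip anything that isn't a display controller
        vendor = (dev / "vendor").read_text().strip()
        driver = (dev / "driver").resolve().name if (dev / "driver").exists() else "no driver bound"
        print(f"{dev.name}: vendor {GPU_VENDORS.get(vendor, vendor)}, {driver}")

if __name__ == "__main__":
    list_gpus()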


The Tale Of The Final EVGA GPU Overclocking Record

It’s not news that EVGA is getting out of the GPU card game after a ‘little falling out’ with Nvidia. It’s sad news nonetheless, as this enthusiastic band of hardware hackers has a solid following in certain overclocking and custom PC circles. The Gamers Nexus crew decided to fly over to meet up with the EVGA team in Zhonghe, Taiwan, and follow them around a bit as they tried for one last overclocking record on the latest (unreleased, RTX 4090-based) GPU card. As you will note early on in the video, things didn’t go smoothly, with their hand-lapped GPU burning out the PCB after a small setup error.

The connector after teardown, seen from the side with the wire solder points, showing how thin the metal pads are and that one wire has already broken off.

NVIDIA Power Cables Are Melting, This May Be Why

NVIDIA has recently released their lineup of 40-series graphics cards, with a novel generation of power connectors called 12VHPWR. See, the previous-generation 8-pin connectors were no longer enough to satiate the GPU’s hunger. Once cards started getting into the hands of users, surprisingly, we began seeing pictures of melted 12VHPWR plugs and sockets online — specifically, involving ATX 8-pin GPU power to 12VHPWR adapters that NVIDIA provided with their cards.

Now, [Igor Wallossek] of igor’sLAB proposes a theory about what’s going on, with convincing teardown pictures to back it up. After an unscheduled release of plastic-scented magic smoke, one of the NVIDIA-provided adapters was destructively disassembled. It turned out that these connectors weren’t crimped like we’re used to; instead, they had flat metal pads that the wires were soldered onto. There are good reasons this isn’t the norm for power-carrying connectors. You can make a soldered joint work, but the odds were not in favor of this particular design.

The metal pads in question seem to be far too thin and structurally unsound: as one can readily spot, their cross-section is dwarfed by the cross-section of the cables soldered to them. This creates a segment of increased resistance and therefore localized heating, exacerbated by any flexing of the thick and unwieldy cabling. With the metal being so thin, the stress points are quite flimsy, and one of the metal pads broke straight off during disassembly of the connector.
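
To get a feel for why a few extra milliohms in one tiny joint matter, here’s a back-of-the-envelope sketch. The resistance figures are purely illustrative assumptions, not measurements from the teardown; the point is how quickly the heat scales with the roughly 8 A that each 12 V pin has to carry on a 600 W card.

# Back-of-the-envelope Joule heating in a 12VHPWR pin. The resistance
# values below are illustrative assumptions, not teardown measurements.
TOTAL_POWER_W = 600.0          # worst-case draw the connector is rated for
SUPPLY_V = 12.0
CURRENT_PINS = 6               # 12VHPWR carries current on six 12 V pins

current_per_pin = TOTAL_POWER_W / SUPPLY_V / CURRENT_PINS   # ~8.3 A

for label, resistance_ohm in [("healthy crimped joint", 0.001),
                              ("thin soldered pad", 0.005),
                              ("cracked pad / partial break", 0.020)]:
    heat_w = current_per_pin ** 2 * resistance_ohm           # P = I^2 * R
    print(f"{label:28s} {resistance_ohm*1000:5.1f} mOhm -> {heat_w:.2f} W of heat in one tiny joint")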

If this theory is true, the situation is a blunder to blame on NVIDIA. On the upside, the 12VHPWR standard itself seems to be viable, as there are examples of PSUs with native 12VHPWR connections that don’t exhibit this problem. It seems gamers with top-of-the-line GPUs can now empathize with the problems that we hackers have been seeing in very cheap 3D printers.


Hackaday Links: July 3, 2022

Looks like we might have been a bit premature in our dismissal last week of the Sun’s potential for throwing a temper tantrum, as that’s exactly what happened when a G1 geomagnetic storm hit the planet early last week. To be fair, the storm was very minor — aurora visible down to the latitude of Calgary isn’t terribly unusual — but the odd thing about this storm was that it sort of snuck up on us. Solar scientists first thought it was a coronal mass ejection (CME), possibly related to the “monster sunspot” that had rapidly tripled in size and was being hyped up as some kind of planet killer. But it appears this sneak attack came from another, less-studied phenomenon, a co-rotating interaction region, or CIR. These sound a bit like eddy currents in the solar wind, which can bunch up plasma that can suddenly burst forth from the Sun, all without showing the usual telltale sunspots.

Then again, even people who study the Sun for a living don’t always seem to agree on what’s going on up there. Back at the beginning of Solar Cycle 25, NASA and NOAA, the National Oceanic and Atmospheric Administration, were calling for a relatively weak showing during our star’s eleven-year cycle, as recorded by the number of sunspots observed. But another model, developed by heliophysicists at the U.S. National Center for Atmospheric Research, predicted that Solar Cycle 25 could be among the strongest ever recorded. And so far, it looks like the latter group might be right. Where the NASA/NOAA model called for 37 sunspots in May of 2022, for example, the Sun actually threw up 97 — much more in line with what the NCAR model predicted. If the trend holds, the peak of the eleven-year cycle in April of 2025 might see over 200 sunspots a month.

So, good news and bad news from the cryptocurrency world lately. The bad news is that cryptocurrency markets are crashing, with the flagship Bitcoin falling from its high of around $67,000 down to $20,000 or so, and looking like it might fall even further. But the good news is that’s put a bit of a crimp in the demand for NVIDIA graphics cards, as the economics of turning electricity into hashes starts to look a little less attractive. So if you’re trying to upgrade your gaming rig, that means there’ll soon be a glut of GPUs, right? Not so fast, maybe: at least one analyst has a different view, based mainly on the distribution of AMD and NVIDIA GPU chips in the market as well as how much revenue they each draw from crypto rather than from traditional uses of the chips. It’s important mainly for investors, so it doesn’t really matter to you if you’re just looking for a graphics card on the cheap.

Speaking of businesses, things are not looking too good for MakerGear. According to a banner announcement on their website, the supplier of 3D printers, parts, and accessories is scaling back operations, to the point where everything is being sold on an “as-is” basis with no returns. In a long post on “The Future of MakerGear,” founder and CEO Rick Pollack says the problem basically boils down to supply chain and COVID issues — they can’t get the parts they need to make printers. And so the company is looking for a buyer. We find this sad but understandable, and wish Rick and everyone at MakerGear the best of luck as they try to keep the lights on.

And finally, if there’s one thing Elon Musk is good at, it’s keeping his many businesses in the public eye. And so it is this week with SpaceX, which is recruiting Starlink customers to write nasty-grams to the Federal Communications Commission regarding Dish Network’s plan to gobble up a bunch of spectrum in the 12-GHz band for their 5G expansion plans. The 3,000 or so newly minted experts on spectrum allocation wrote to tell FCC commissioners how much Dish sucks, and how much they love and depend on Starlink. It looks like they may have a point — Starlink uses the lowest part of the Ku band (12 GHz – 18 GHz) for data downlinks to user terminals, along with big chunks of about half a dozen other bands. It’ll be interesting to watch this one play out.

NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately, the technique struggles where overlapping elements cause significant occlusions, and shiny or reflective surfaces, which appear to be a different color in each photo, can also cause problems.

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images (a few dozen, according to NVIDIA), but the AI is better able to cope with the pain points of traditional photogrammetry: filling in the gaps in occluded areas and leveraging reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
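
At the core of any NeRF is a learned function that maps a 3D point (and viewing direction) to a color and a density, which is turned into pixels by alpha-compositing samples along each camera ray. The Python sketch below shows just that volume-rendering step, with the trained network swapped out for a toy analytic “fuzzy sphere” field so it runs with nothing but NumPy. In the real Instant NeRF, the radiance_field function is a small neural network (NVIDIA’s trick is a multiresolution hash encoding that cuts training down to minutes) fitted to the input photos.

import numpy as np

# The heart of a NeRF: a function mapping a 3D point to color and density,
# rendered by alpha-compositing samples along each camera ray. The learned
# MLP is replaced here with a toy analytic field so the sketch just runs.

def radiance_field(points):
    """Return (rgb, sigma) for each 3D point. Stand-in for the trained network."""
    dist = np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1)
    sigma = np.where(dist < 0.5, 20.0, 0.0)           # dense inside the sphere
    rgb = np.tile([1.0, 0.2, 0.2], (len(points), 1))  # constant red
    return rgb, sigma

def render_ray(origin, direction, near=0.5, far=4.0, n_samples=128):
    """Classic NeRF volume rendering for a single ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = radiance_field(points)

    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)        # composited pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print("rendered pixel RGB:", np.round(pixel, 3))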


If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.
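
Before diving in, it’s worth checking what the NVIDIA driver reports about your card. Here’s a minimal sketch using standard nvidia-smi query fields; the Instant-NeRF tooling itself is the final arbiter of whether your GPU and driver are supported.

import subprocess

# Ask nvidia-smi for the installed card(s); falls through gracefully if
# there's no NVIDIA driver on the system at all.
try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("CUDA-capable GPU(s) found:")
    print(out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("No NVIDIA driver/GPU detected - the photogrammetry route below still works.")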

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.


NVIDIA Releases Drivers With Openness Flavor

This year, we’ve already seen sizeable leaks of NVIDIA source code and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and has just released open-source GPU kernel modules for Linux. The GitHub repository, named open-gpu-kernel-modules, has people rejoicing, and we are already testing the code out, making memes, and speculating about the future. This driver is currently claimed to be experimental and only “production-ready” for datacenter cards – but you can already try it out!

The Driver’s Present State

Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver, and it will only work on cards from the RTX 20 and Quadro RTX series onward (aka Turing and newer). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project: a good portion of the features that AMD and Intel drivers implement in the Linux kernel are instead provided by a binary blob that runs inside the GPU. This blob runs on the GSP, a RISC-V core that’s only present on Turing GPUs and younger – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!
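
Just how hefty? If you’ve got an NVIDIA driver installed, a few lines of Python will tell you, assuming the typical distro firmware layout (adjust the path if your system stores NVIDIA firmware elsewhere).

from pathlib import Path

# List the GSP firmware blobs and their sizes. The path is where most
# distros install NVIDIA firmware files; this is an assumption, not a spec.
FIRMWARE_ROOT = Path("/lib/firmware/nvidia")

if not FIRMWARE_ROOT.is_dir():
    print(f"{FIRMWARE_ROOT} not found; is the NVIDIA firmware package installed?")
else:
    for blob in sorted(FIRMWARE_ROOT.rglob("gsp*.bin")):
        size_mib = blob.stat().st_size / (1024 * 1024)
        print(f"{blob}  {size_mib:.1f} MiB")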

That aside, this driver already provides more coherent integration into the Linux kernel, with benefits that will only grow going forward. Not everything’s open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL, and CUDA drivers remain closed, for now. Same goes for the old NVIDIA proprietary driver, which, I’d guess, will be left to rot – fitting, as “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards.

NVIDIA Unveils Jetson AGX Orin Developer Kit

When you think of high-performance computing powered by NVIDIA hardware, you probably think of applications leveraging the capabilities of the company’s graphics cards. In many cases, you’d be right. But naturally there are situations where the traditional combination of x86 computer and bolt-on GPU simply isn’t going to cut it; try packing a modern gaming computer onto a quadcopter and let us know how it goes.

For these so-called “edge computing” situations, NVIDIA offers the Jetson line of ARM single-board computers, which include a scaled-down GPU that gives them vastly better performance in machine learning applications than something like the Raspberry Pi. Today during their annual GPU Technology Conference (GTC), NVIDIA announced the immediate availability of the Jetson AGX Orin Developer Kit, which the company promises can deliver “server-class AI performance” in a package small enough for use in IoT or robotics.

As with the earlier Jetsons, the palm-sized development kit acts as a sort of breakout board for the far smaller module slotted into it. This gives developers access to the full suite of the connectivity and I/O options offered by the Jetson module in a desktop-friendly form that makes prototyping the software side of things much easier. Once the code is working as intended, you can simply pop the Jetson module out of the development kit and install it in your final hardware.

NVIDIA is offering the Orin module in a range of configurations, depending on your computational needs and budget. At the high end is the AGX Orin 64 GB at $1,599 USD, which offers a 12-core ARM Cortex-A78AE processor, 32 GB of DDR5 RAM, 64 GB of onboard flash, and an Ampere GPU with 2048 CUDA cores and 64 Tensor cores, all of which enables it to perform an incredible 275 trillion operations per second (TOPS).

At the other end of the spectrum is the Orin NX 8 GB, a SO-DIMM module that delivers 70 TOPS for $399. It’s worth noting that even this low-end flavor of the Orin is capable of more than double the operations per second of 2018’s Jetson AGX Xavier, which until now was the most powerful entry in the product line.
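
A few lines of arithmetic put the announced figures in perspective. The 32 TOPS used for the AGX Xavier below is NVIDIA’s published rating for the 2018 module and is included only for scale.

# Rough comparison using the figures quoted above; prices are the
# announced module prices, and the Xavier rating is only here for scale.
orin_modules = {
    "Jetson AGX Orin 64 GB": (275, 1599),   # (TOPS, USD)
    "Jetson Orin NX 8 GB":   (70,  399),
}
XAVIER_TOPS = 32

for name, (tops, usd) in orin_modules.items():
    print(f"{name:24s} {tops:4d} TOPS  ${usd:5d}  "
          f"{tops / usd:.3f} TOPS per dollar  {tops / XAVIER_TOPS:.1f}x an AGX Xavier")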

The Jetson AGX Orin Developer Kit is available for $1,999 USD and includes the AGX Orin 64 GB module. Interestingly, NVIDIA says the onboard software is able to emulate any of the lower-tier modules, so you won’t necessarily have to swap out the internal module if your final hardware will end up using one of the cheaper variants. Of course, the inverse of that is that even folks who only planned on using the more budget-friendly units either have to shell out for an expensive dev kit or try to spin their own breakout board.

While the $50 USD Jetson Nano is far more likely to be on the workbench of the average Hackaday reader, we have to admit that the specs of these new Orin modules are very exciting. Then again, we’ve covered several projects that used the previously top-of-the-line Jetson Xavier, so we don’t doubt one of you is already reaching for their wallet to pick up this latest entry into NVIDIA’s line of diminutive powerhouses.