Intel GPUs On Raspberry Pi Is So Wrong It Feels Right

While you might not know it from their market share, Intel makes some fine GPUs. Putting one in a PC with an AMD processor already feels a bit naughty, but AMD’s x86 processors still ultimately trace their lineage back to Intel’s original 8086. Putting that same Intel GPU into a system with an ARM processor, like a Raspberry Pi, or even better, a RISC-V SBC? Why, that seems downright deviant, and absolutely hack-y. [Jeff Geerling] shares our love of the bizarre, and has been working tirelessly to get a solid how-to guide written so we can all flout the laws of god and man together.

According to [Jeff], all of Intel’s GPUs should work, though not yet flawlessly. In terms of 3D acceleration, OpenGL works well, but Vulkan renders suffer texture artifacts, when they get textures at all. The desktop has artifacts, and so do images; see for yourself in the video embedded below. Large language models are restricted to the not-so-large due to memory-addressing issues: ARM and RISC-V both handle memory somewhat differently than x86 systems do, and apparently the difference matters.
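If you want to poke at the state of things on your own setup, a couple of stock diagnostics will confirm which drivers are actually in play. These assume Debian-flavoured package names (mesa-utils and vulkan-tools); consider it a quick sanity check rather than part of [Jeff]’s guide:

    glxinfo -B | grep -i renderer          # OpenGL should report Intel's "iris" driver
    vulkaninfo --summary | grep -i driver  # the Vulkan side comes from Intel's "ANV" driver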

The most surprising thing is that we’re now at a point where you don’t need to recompile the Linux kernel yourself to get this working. Reconfigure, yes, but not recompile: [6by9] has a custom kernel all ready to go. In testing on his Pi 5, [Jeff] did have to recompile Mesa manually, however. Unsurprisingly, the build shipped for Raspberry Pi wasn’t compiled against the iris driver for Intel GPUs, because apparently the Mesa devs are normal.
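For the curious, rebuilding Mesa with iris switched on looks roughly like the following. The exact meson options drift between Mesa releases, so treat these flags as illustrative rather than as [Jeff]’s exact recipe:

    # Pull Mesa and enable Intel's drivers alongside the Pi's own v3d/vc4.
    git clone https://gitlab.freedesktop.org/mesa/mesa.git && cd mesa
    meson setup build -Dgallium-drivers=iris,v3d,vc4 -Dvulkan-drivers=intel,broadcom
    ninja -C build && sudo ninja -C build install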

Compared to AMD cards, which already work quite well, the Intel cards don’t shine in benchmarks, but that wasn’t really the point. The point is expanding the hardware available to SBC users, and perhaps allowing for a sensible chuckle at the misuse of an “Intel Inside” sticker. (Or a cackle of glee, depending on your sense of humour. We won’t judge.) [Jeff] is one of the people working to get these changes upstreamed into the Linux kernel and Raspberry Pi OS, and we wish him well in that endeavour.

Now, normally we wouldn’t encourage a completely unknown fellow like this [Jeff], of whom no one has ever heard, to go poking about in the kernel, but we have a good feeling about this guy. It’s almost as if he’s been at this a while. That couldn’t be, could it? Surely we’d have noticed him.

6 thoughts on “Intel GPUs On Raspberry Pi Is So Wrong It Feels Right”

  1. Intel has used old 486 cores inside their iGPUs to schedule shader work, so it wouldn’t surprise me if there was one (or more) in their discrete cards. Nvidia might use RISC-V for their cards’ GSP processors to avoid licence fees, but Intel doesn’t have to pay anyone to use x86.

    1. There are two cores, called GuC and HuC, but they are not x86 (the GuC was originally x86, but Intel switched to a licenced core a few generations ago; I can’t remember which core they use now). These cores are indeed in the discrete cards too.

      The HuC manages media DRM and firmware loading. The GuC does way more than shader scheduling: it offloads ALL submission to the GPU (though it can still be bypassed), along with various other low-level functions.

  2. What is the problem with compiling a Linux kernel? I’ve been doing it since the ’90s whenever I want to update something. That is normal for Linux users, and it is ten times easier than compiling gcc or Qt, or the BS software done by our youngsters that needs umpteen different scripting languages and special makefiles. :-)

    1. No issues, but for ARM you have to cross-compile unless you have a very beefy setup and a lot of time (a bare-bones example below). The guy running the Armbian project has a YouTube channel showing his homelab and build servers, where he compiles all his distros for literally 50+ different SBC models. 64-128 GB of RAM really helps.
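      For reference, an arm64 cross-build from an x86 box goes something like this (Debian-ish package names, and only a sketch):

          # Cross-compile an arm64 kernel from an x86 host.
          sudo apt install gcc-aarch64-linux-gnu bison flex libssl-dev
          make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
          make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules dtbs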
