Open Source GPU Released

GPLGPU

Nearly a year ago, an extremely interesting project hit Kickstarter: an open source GPU, written for an FPGA. For reasons that are obvious in retrospect, the GPL-GPU Kickstarter was not funded, but that doesn't mean these developers don't believe in what they're doing. The first version of this open source graphics processor has now been released, giving anyone with an interest a look at what a late-'90s-era GPU looks like on the inside. If you're cool enough, there's also enough supporting documentation to build your own.

A quick note for the PC Master Race: this thing might run Quake eventually. It's not a powerhouse. That said, [Bunnie] had a hard time finding an open source GPU for the Novena laptop, and the drivers for the VideoCore IV in the Raspi have only recently been open-sourced. A completely open GPU simply doesn't exist, and short of a few very, very limited thesis projects, there hasn't been anything like this before.

Right now, the GPL-GPU has 3D graphics acceleration working with VGA on a PCI bus. The plan is to update this late-90s setup to interfaces that make a little more sense, and add DVI and HDMI output. Not bad for a failed Kickstarter, right?

25 thoughts on "Open Source GPU Released"

      1. Wouldn’t this be similar to how ffmpeg and other open source decoding libraries work? The end user / distributor of compiled binaries is responsible for the royalties?

        If they just publish VHDL / Verilog for an FPGA, which could be used as a hardware decoder card, would they be in the clear?

        (A complete CPU core + decoder would probably be way too slow on an FPGA.)

        1. Yeah, I guess technically the MPEG-LA could go after everyone who distributes, or perhaps even uses, ffmpeg binaries, but it would be a bit of a wasted effort, like trying to stop music piracy. I don’t think there’s anything they can do about the source code, though.

          IIRC it’s possible to build ffmpeg / libav without the non-free codecs.

  1. Ha Ha, he beat me to it. I was designing an FPGA based GPU (for an SoC project) a while ago but then life got in the way. Oh well, I guess mine would have had vertex and pixel shaders. Good work! I gotta see how (if?) he handled triangle clipping.

    1. P.S. I’m not sure I’d call this a ‘GPU’ as it doesn’t appear to have a programmable pipeline, just a register interface. However, it is very complete HDL for a video card, with optional PCI interface and CRT controller.

      1. It’s a GPU – pipelining is a modern attribute – a GPU is simply a graphics processor. This one is based on the Number Nine series of chips released in the DOS/early-Windows days, which helped set the bar for 2D acceleration. You should still release your work.

        1. I was referring to the ‘graphics pipeline’ as the transform, lighting, rasterization etc. stages in the GPU, not the electrical ‘put logic stages between banks of flip-flops’ pipelining (though they are related :).

          Maybe it’s just me, but when I hear ‘GPU’ I think of a processor that executes a compiled program to form images (though modern ones may be used exclusively for computation). I generally consider non-programmable devices to be video controllers or accelerators, since their behavior is dictated solely through registers. I would still classify some 90’s era devices as GPUs, most notably the Rendition Verite. And of course just because it isn’t programmable doesn’t mean it’s easy to make. In fact, I designed mine to be programmable BECAUSE I didn’t want to deal with endless state machines.

          I never actually implemented any of my GPU in an FPGA; in fact, I never finished designing it (though there wasn’t much left to do). All my diagrams were down to the ‘primitive’ level though, and what I simulated appeared to work. I’m sure I’ll get back to working on it, but if I don’t I’ll at least clean it up, document it, and stick it on Github.
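
          For anyone unsure what the "logic between banks of flip-flops" sense of pipelining looks like in practice, here is a minimal Verilog sketch (purely illustrative, not from the GPLGPU sources; module and signal names are made up): a multiply and an add split across two register stages, so a new pixel can enter the pipeline every clock.

            // Two-stage pipelined multiply-add: each always block is one
            // "bank of flip-flops" with combinational logic in front of it.
            module pixel_madd_pipe (
                input  wire        clk,
                input  wire [7:0]  a,       // e.g. texel
                input  wire [7:0]  b,       // e.g. blend factor
                input  wire [7:0]  c,       // e.g. background
                output reg  [16:0] result
            );
                // Stage 1: register the product (and carry the addend along)
                reg [15:0] prod_s1;
                reg [7:0]  c_s1;
                always @(posedge clk) begin
                    prod_s1 <= a * b;
                    c_s1    <= c;
                end

                // Stage 2: register the sum; throughput is one result per clock,
                // latency is two clocks
                always @(posedge clk) begin
                    result <= prod_s1 + c_s1;
                end
            endmodule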

  2. I really just got started with FPGAs (still looking for that reasonably priced board to cut my teeth on), but isn’t this a poor use for an FPGA?

    What I mean is, doing a “soft CPU” on an FPGA is costly in terms of the gates consumed, and a GPU is just a very specialized processor.

    I can see uses for this as part of a dynamically configured bit of hardware. Need a specific CPU and GPU to run an old Commodore game? Upload a new bitstream and voilà! Feel like playing Quake? Done. But using an FPGA dedicated to nothing else but a single GPU? Doesn’t that take some of the power of the FPGA away?

    What am I missing?

    1. What else would you choose to do the development on?
      FPGA-based prototypes are nice because they can be turned into ICs eventually (given enough demand, at least). Metalized gate arrays are a nice in-between for lower-quantity production runs.

    2. Even low-end FPGAs can contain the equivalent of hundreds of thousands of logic gates plus SRAM and multipliers, which is plenty for a basic processor. What’s more, you can design it just how you want. Need special hardware-accelerated instructions, a powerful coprocessor, or full custom peripherals? An FPGA can do that.

      Also, while an off-the-shelf CPU or GPU may be a lot cheaper per unit and faster than an FPGA, that is only because they are mass-produced. The upfront costs of a custom chip in a modern process are in the tens of millions of dollars. Large FPGAs are often used to prototype such custom chips.

      Now, if you can just use a standard processor or microcontroller in your project, then it is probably cheaper and easier to do so. But FPGAs are the only hobbyist-accessible way to have a lot of custom logic running at high speed.
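
      As a concrete (and entirely hypothetical) example of the "full custom peripherals" point, here is a short Verilog sketch of a multiply-accumulate block hung off a toy register interface – the kind of thing you can bolt next to a soft CPU in an FPGA. The register map and names are invented for illustration, not taken from any real design.

        // Register-controlled multiply-accumulate peripheral (illustrative).
        module mac_peripheral (
            input  wire        clk,
            input  wire        rst,
            input  wire        wr_en,     // write strobe from the CPU bus
            input  wire [1:0]  addr,      // toy 2-bit register address
            input  wire [31:0] wr_data,
            output wire [31:0] acc_out
        );
            reg [31:0] op_a, op_b, acc;

            always @(posedge clk) begin
                if (rst) begin
                    op_a <= 0;
                    op_b <= 0;
                    acc  <= 0;
                end else if (wr_en) begin
                    case (addr)
                        2'd0: op_a <= wr_data;            // operand A
                        2'd1: op_b <= wr_data;            // operand B
                        2'd2: acc  <= acc + op_a * op_b;  // "go": accumulate A*B
                        2'd3: acc  <= 0;                  // clear accumulator
                    endcase
                end
            end

            assign acc_out = acc;   // CPU reads the result back
        endmodule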

    3. All points true. And an Intel i7 is a far better processing tool than an ATtiny13 – so why bother with the latter? Most hard GPUs (those not currently obsolete/EOL) are expensive, come in difficult-to-manage packaging options, require high-speed design modeling for memory traces, and are generally proprietary architectures. Even though it’s slower than an ASIC, an FPGA-based design can evolve and grow openly. And Bruno’s project is a turbo-charged jump start on that end.

    4. A normal CPU is general purpose and not that parallel (compared to what you can achieve on an FPGA in terms of computation), whereas GPUs can be made massively parallel. The modern way to do things (OpenGL, Direct3D) uses two programs (shaders) that run as many instances, like threads if you like: one positions triangles and the other draws pixels, and each program is run for every vertex of the triangle / every pixel in the final image. Plus there’s other work going on in the background of the graphics pipeline (rasterization, blending, and so on) that’s also “embarrassingly parallel.”

      So, while the FPGA isn’t suitable for this per se, it has its advantages for ease of development (described in the posts above), and it’s probably more fun to make a GPU than a CPU, given that you can use all that parallel power.
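
      To put the "embarrassingly parallel" point in hardware terms, here is a rough Verilog sketch (illustrative only, not from the GPLGPU design): the same per-pixel blend computation replicated across several lanes with a generate loop, so several pixels are processed every clock instead of one.

        // N identical blend lanes, each handling one pixel per clock.
        module parallel_blend #(
            parameter LANES = 4
        ) (
            input  wire               clk,
            input  wire [LANES*8-1:0] src,    // packed 8-bit source pixels
            input  wire [LANES*8-1:0] dst,    // packed 8-bit destination pixels
            input  wire [7:0]         alpha,  // shared blend factor
            output wire [LANES*8-1:0] out
        );
            genvar i;
            generate
                for (i = 0; i < LANES; i = i + 1) begin : lane
                    // src*alpha + dst*(255-alpha), approximated by /256
                    wire [15:0] blended = src[i*8 +: 8] * alpha
                                        + dst[i*8 +: 8] * (8'd255 - alpha);
                    reg [7:0] pix;
                    always @(posedge clk)
                        pix <= blended[15:8];
                    assign out[i*8 +: 8] = pix;
                end
            endgenerate
        endmodule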

  3. Noticed that the “ssi_sfifo” module used in the pixel pipeline looks to infer distributed RAM. Won’t that chew the FPGA up rather savagely? Might one prefer to swap that out for a BRAM-based FIFO implementation of some kind? Inspirational stuff to look at though, much respect & many thanks to the authors. :)

    1. (Can’t wait to see how this compiles; starting to wonder if the spread-out nature of distributed RAM might actually be better for evenly laying out the pipeline stages?)

    2. I just checked. It compiles into BRAM when the FIFO is deep enough, at least if it’s alone in the design. I’m using Altera’s Quartus II, but I assume it would work in Xilinx’s Vivado or ISE.
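
      For reference, here is a hedged sketch of the kind of BRAM-friendly synchronous FIFO being discussed (it is not the GPLGPU's ssi_sfifo; names, widths, and depth are invented). The synchronous read port and the depth are what usually push Quartus, ISE, or Vivado toward inferring block RAM instead of distributed RAM:

        // Simple synchronous FIFO intended to map onto block RAM.
        module bram_sfifo #(
            parameter WIDTH      = 32,
            parameter DEPTH_LOG2 = 9            // 512 entries, deep enough for BRAM
        ) (
            input  wire             clk,
            input  wire             rst,
            input  wire             wr_en,
            input  wire [WIDTH-1:0] wr_data,
            input  wire             rd_en,
            output reg  [WIDTH-1:0] rd_data,    // valid one clock after rd_en
            output wire             full,
            output wire             empty
        );
            reg [WIDTH-1:0] mem [0:(1<<DEPTH_LOG2)-1];
            reg [DEPTH_LOG2:0] wr_ptr, rd_ptr;  // extra MSB distinguishes full/empty

            assign empty = (wr_ptr == rd_ptr);
            assign full  = (wr_ptr[DEPTH_LOG2] != rd_ptr[DEPTH_LOG2]) &&
                           (wr_ptr[DEPTH_LOG2-1:0] == rd_ptr[DEPTH_LOG2-1:0]);

            always @(posedge clk) begin
                if (rst) begin
                    wr_ptr <= 0;
                    rd_ptr <= 0;
                end else begin
                    if (wr_en && !full) begin
                        mem[wr_ptr[DEPTH_LOG2-1:0]] <= wr_data;
                        wr_ptr <= wr_ptr + 1'b1;
                    end
                    if (rd_en && !empty) begin
                        rd_data <= mem[rd_ptr[DEPTH_LOG2-1:0]];  // registered read -> BRAM
                        rd_ptr  <= rd_ptr + 1'b1;
                    end
                end
            end
        endmodule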

  4. I wonder what it’d cost to buy the Rendition IP from Micron? Rendition was briefly atop the GPU heap at one point in the late ’90s but didn’t have the money to sustain that. They never did finish a complete OpenGL driver for the Verite 2100 and 2200; IIRC they did for the Verite 1000. Not long after the 2x00 chips were introduced, nVidia and ATi leapfrogged ahead, leaving Rendition, S3, Trident, 3dfx and everyone else way behind.

    Micron bought the company and put up a web page for it, said they’d get the drivers completed for the 2100 and 2200 – and that’s all that happened. No more chips based on the technology, no finished drivers, nothing.

    Far as I know buying Rendition was a total loss for Micron.

  5. This is the Number Nine Ticket2Ride, BTW.
    Same code that was on the “GPL Graphics Accelerator” Kickstarter a while ago.

    Basically, he didn’t raise the money he’d wanted for open-sourcing it, so he decided to GPL it anyway and drum up support that way.

    Anyway, it’s pretty well written, if obtuse sometimes, but don’t get carried away thinking it’ll run Crysis. It’s a DirectX 6-era fixed-function pipeline. You could maaayybe run Quake 3 with it. No pixel/vertex shaders here.

    While it’s a good reference, there isn’t really much that’s reusable about it.

  6. >That said, [Bunnie] had a hard time finding an open source GPU for the Novena laptop,

    I suspect you meant “a GPU with open source drivers”. He could have used something with a Mali GPU that has open drivers, but I guess the openness of the rest of the i.MX6 won him over.

  7. Evidently HAD has broken me because my first thought was “but Quake used software rendering” rather than something at all related to the actual content.

    It’s cool that Bunnie managed to get this working. Realistically I don’t see this ever being truly competitive with the Big Two graphics card makers, but I suspect that’s not the point.
