Slug Algorithm For On-GPU Rendering Of Fonts With Bézier Curves Now In Public Domain

The Slug Algorithm has been around for a decade now, quietly rendering fonts (and later entire GUIs) with Bézier curves directly on the GPU in games and other software. Due to its proprietary nature, however, it saw little adoption outside of commercial settings. That has now changed: its author, [Eric Lengyel], has released it into the public domain without any limitations.

[Eric] originally received a software patent on the algorithm in 2019, which would have prevented anyone else from implementing it until the patent’s expiration in 2038. In his view, however, he and his business have drawn sufficient benefit from Slug since 2016, making it unnecessary to hold on to the patent and its exclusivity any longer.

To help anyone implement their own version of the algorithm, there is a GitHub repository containing reference shader implementations with plenty of inline comments, which should get anyone with some shader experience started.

Although pretty niche in the eyes of the average person, the benefits of rendering elements like fonts directly on the GPU are obvious in terms of rendering optimization. With this change, open source rendering engines for games and more can finally use it as well.

Thanks to [Footleg] for the tip.

16 thoughts on “Slug Algorithm For On-GPU Rendering Of Fonts With Bézier Curves Now In Public Domain”

  1. Can it be used to render simple vector graphics too? I remember Doom 3/Quake 4 had to render their in-game UI screens as a stack of big raster images.

      1. As someone who has done game development professionally for the past 20 years, with a specialization on UI/UX programming within that field since 2017, I can tell you that Slug is hands-down the absolute top of the pops when it comes to visual quality. If you’re not convinced it’s better than SDF/MSDF, you simply haven’t used it in your day-to-day work and seen the difference first-hand.

        SDF and MSDF font rendering have gotten popular over the years among FOSS (and even commercial) projects largely due to Slug’s per-unit pricing and patent encumbrance. As pointed out in the very article that you linked, SDF has the fundamental issue of lacking definition, i.e. the ability to represent sharp corners; MSDF makes things better but not necessarily perfect. Problems can still crop up during extreme magnification or minification, and both SDF and MSDF tend to get worse when projected into a 3D scene rather than restricted to the orthographic plane. Trust me, we went with MSDFs for the font solution on Helldivers 2, and getting the atlas generators tuned exactly right for all situations was a dive into hell in and of itself.
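        The corner-rounding problem mentioned above is easy to demonstrate numerically. The sketch below (my own toy example, not from the article or any SDF library) samples the exact signed distance field of a sharp 90° corner on a coarse texel grid, then reconstructs the value at the corner with bilinear filtering the way a GPU texture fetch would:

```python
import math

# Signed distance to the convex corner region {x >= 0 and y >= 0}:
# negative inside the region, positive outside.
def corner_sdf(x, y):
    if x >= 0 and y >= 0:
        return -min(x, y)        # inside: distance to the nearest edge
    if x >= 0:
        return -y                # below the region: nearest edge is y = 0
    if y >= 0:
        return -x                # left of the region: nearest edge is x = 0
    return math.hypot(x, y)      # diagonal quadrant: nearest point is the corner

# Sample the exact SDF at the four texel centers surrounding the corner,
# then reconstruct the corner value with bilinear filtering (all weights 1/4).
def bilinear_at_origin(h):
    samples = [corner_sdf(sx, sy) for sx in (-h, h) for sy in (-h, h)]
    return sum(samples) / 4.0

true_d = corner_sdf(0.0, 0.0)       # exactly 0: the corner lies on the outline
approx_d = bilinear_at_origin(1.0)  # (sqrt(2) + 1 + 1 - 1) / 4, about 0.6
print(true_d, approx_d)
```

        The filtered field reports a clearly positive distance at the corner, i.e. it places the corner outside the glyph, which is exactly the rounded-off corner artifact; a curve-exact method like Slug never goes through this lossy sampling step.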

        Slug side-steps the fiddliness of the three-step data-munging process involved in SDF/MSDF: Vector data (TTF/OTF) -> Raster data -> On-screen data, and eliminates the middle one. While you still end up having to process the TTF/OTF into a texture containing band data and a texture containing curve data, the two combined are a provably accurate mathematical representation of the vector data contained within the font files themselves.
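        To make the "curve data" idea concrete, here is a rough CPU-side sketch (my own illustration, not Slug’s actual shader code) of the underlying inside/outside test: for each pixel you shoot a horizontal ray and count its crossings with the glyph’s quadratic Bézier outline segments by solving a quadratic in t. Slug’s real contribution is doing this robustly and fast in a shader, using the precomputed band texture to limit which curves each pixel has to test; none of that pruning appears here.

```python
# Toy even-odd inside/outside test against quadratic Bezier outlines,
# the kind of curve data a Slug-style shader evaluates per pixel.
# Each curve is ((x0,y0), (x1,y1), (x2,y2)): on-curve, control, on-curve.

def ray_crossings(curves, px, py, eps=1e-9):
    """Count crossings of a rightward horizontal ray from (px, py)."""
    crossings = 0
    for (x0, y0), (x1, y1), (x2, y2) in curves:
        # y(t) = a*t^2 + b*t + c; solve y(t) = py for t in [0, 1).
        a = y0 - 2*y1 + y2
        b = 2*(y1 - y0)
        c = y0 - py
        if abs(a) < eps:                      # curve is (near-)linear in y
            roots = [-c / b] if abs(b) > eps else []
        else:
            disc = b*b - 4*a*c
            if disc < 0:
                continue                      # ray misses this curve entirely
            s = disc ** 0.5
            roots = [(-b - s) / (2*a), (-b + s) / (2*a)]
        for t in roots:
            # Half-open [0, 1) avoids double-counting shared endpoints.
            if 0 <= t < 1:
                x = (1-t)*(1-t)*x0 + 2*(1-t)*t*x1 + t*t*x2
                if x > px:                    # crossing lies to the right
                    crossings += 1
    return crossings

def inside(curves, px, py):
    return ray_crossings(curves, px, py) % 2 == 1  # even-odd fill rule

# A unit square outline built from four degenerate (straight) quadratics.
square = [
    ((0, 0), (0.5, 0), (1, 0)),
    ((1, 0), (1, 0.5), (1, 1)),
    ((1, 1), (0.5, 1), (0, 1)),
    ((0, 1), (0, 0.5), (0, 0)),
]
print(inside(square, 0.5, 0.5), inside(square, 2.0, 0.5))
```

        Because the test solves the curve equations exactly, there is no resolution-dependent raster step anywhere in the pipeline, which is what makes the band/curve texture pair a lossless stand-in for the font’s vector data.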

        As for it being light on “how you use it,” the HLSL shaders over on the GitHub repo are incredibly well-documented, far beyond anything I’ve ever tended to see on the job: https://github.com/EricLengyel/Slug

        For generating the band-data texture and curve-data texture, 30 seconds of searching on Google turned up this other repo, Sluggish – https://github.com/mightycow/Sluggish – of which the relevant parts are all released under a public-domain license (the Unlicense).

        The above two will get you pretty-looking individual glyphs on-screen; for kerning and glyph layout, HarfBuzz is permissively licensed as well – https://github.com/harfbuzz/harfbuzz

        All of these taken together put Slug font rendering within reach: an experienced developer, or even an intermediate one, should need something on the order of a day or two to integrate it into an engine they already know.

        1. I would naively hope that since the base library is now free, the high-level paid frameworks will consider integrating it into their pipelines. So one experienced developer at Unity and one at Unreal do some work, and all the intermediate developers silently benefit from sharper UI text.

          1. A colleague asked me that, and frankly, I give it at most a few days to a couple weeks before both the folks at Epic and the folks at Unity have workable solutions integrated into their pipelines. Epic has some of the top-tier graphics folks within the game industry, their people can read these shaders and associated whitepapers like a teenager would read “See Spot Run”.

            Just as an off-the-cuff guess, I would expect to see support for Slug rendering in projects like Godot even sooner than it lands in UE5 or Unity, as there are fewer layers of bureaucracy, approval, and code reviews to sort through on a purely FOSS project like Godot.

            Suffice it to say, this is a HUGE change, and mad props to Eric Lengyel for behaving how I believe commercial software should operate: If you have something really great, make your bag, then open it up for everyone.

        2. You might never read this. But first let me thank you for your detailed answer. It makes it clearer that this is a big deal.

          Also, it shows the “expert’s problem”: it’s clear to you because you are extremely familiar with the subject, and you assume anyone else will be at least 10% as experienced with it. As someone whose glyph rendering experience is limited to using libfreetype2, nothing I read here sparked any familiarity. Until I read your post it wasn’t even clear to me that you need “band data” and “curve data”.

          But I’m glad to read that this could be revolutionary for a lot of UI libraries. (I’ll be sticking with my ugly libfreetype2 solution for now, though – yes, that likely has you looking at me in disappointment right now.)

    1. The paper shows an example of rendering a colored emoji, so it should be possible. It does mention that it has to loop over all colors though, which is probably slow for more than a few colors. (Gradients would be impractical)

  2. The USPTO has been ignoring the Supreme Court’s 2014 Alice decision, which mostly bans software patents. The office finances itself through the number of patents it grants. Recently the USPTO expanded its own practice to grant patents for AI, without any public debate or legislative change.

    PS: my association FFII.org has opposed Software patents in the EU since 1998.

  3. I wonder if this is why Microsoft Flight Simulator has adjustments for the cockpit instrument refresh rate, etc. I never assumed this 2D sort of thing would be a heavy lift, but I guess I don’t know squat. Anybody who game devs care to politely enlighten us?

    1. As a game dev with about two decades of experience: Cockpit-instrument refresh rate is more likely there so that long-time fans of the series can preserve the staggered refresh rate in order to have a more nostalgic experience if they choose to do so.

      I’m not a pilot, but perhaps there are some instruments that also have a non-realtime update rate, thus making adjusting such a parameter more accurate to real life.

      What you need to really be aware of is the massive change in UI complexity that started occurring sometime between 2010 and 2015.

      In games, UI was always seen as the “who cares” subject, the thing that could be foisted off onto some junior. It’s just a menu with some options, it’s a static HUD with a few readouts, how hard can it be?

      Once artists and designers realized that they could have a more engaging gameplay experience by tightening the relationship between the game and the UI, things started to change. Rather than a set of diffuse-textured elements on an orthographic plane, games started wanting UI to be added into the scene with perspective projection (Metroid Prime), or there was a desire for significantly more movement and gameplay-relevant glitch effects (Detroit: Become Human).

      Developers have also become more aware that usability and accessibility matter more now than they ever have – aside from some intransigent whining from people who feel the inexplicable need to gatekeep the concept of playing frickin’ games, folks no reasonable person pays any attention to anyway.

      As I like to tell colleagues: You can have the most compelling and engaging gameplay, the most whiz-bang graphical features, spatial audio, networking that doesn’t block the user, but if your UI is so obtuse that the average user has to spend more time than necessary working out how to simply start a game, then the user is going to say “screw it,” refund the game, and move on to another game.

      With UI systems now supporting shaders, various wowie-zowie graphical effects, perspective projection, layering against in-world objects and other features that people see but don’t understand, yes, the performance impact of UI systems has gotten larger. Slug helps minimize that impact while also offering top-tier visual fidelity when it comes to rendering text, both in-world (i.e. decals) and in a more standard UI context.

  4. I cannot wait to try it, hoping that it will beat SDF/MSDF text rendering. Thank you Eric Lengyel! And thank you Maya for another great article.
