Firefox Brings The Fire: Shifting From GLX To EGL


You may (or may not) have heard that Firefox is moving from GLX to EGL for the Linux graphics stack. It’s an indicator of which way the tides are moving in the software world. Let’s look at what it means, why it matters, and why it’s cool.

A graphics stack is a complex system with many layers. But on Linux, there needs to be an interface between something like OpenGL and a windowing system like X11. X11 provides a fundamental framework for drawing and moving windows around a display, capturing user input, and determining focus, but little else. An X11 server is just a program that manages all the windows (clients). Each application that connects to the server is a client, and a client may own several windows. A client connects to the server over a Unix domain socket or a network connection.
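To make that client/server split concrete, here is a minimal sketch of an X11 client in C using Xlib (a sketch only, assuming a running X server; build with cc x11_client.c -lX11). Everything it does, from creating a window to receiving keystrokes, is a request or event travelling over that socket:

/* Minimal X11 client: connect to the server named by $DISPLAY,
 * ask it to create and map a window, then wait for events. */
#include <X11/Xlib.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* connect via $DISPLAY */
    if (!dpy) {
        fprintf(stderr, "cannot connect to X server\n");
        return EXIT_FAILURE;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 320, 240, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);                /* ask the server to show it */

    XEvent ev;
    do {                                 /* events arrive on the same socket */
        XNextEvent(dpy, &ev);
    } while (ev.type != KeyPress);       /* any key closes the demo */

    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}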

OpenGL focuses on what to draw within the confines of the screen space given by the window system. GLX (which stands for OpenGL Extension to the X Window System) was originally developed by Silicon Graphics. It has changed over the years, gaining hardware acceleration support and DRI (the Direct Rendering Infrastructure). DRI is a way for OpenGL to talk directly to the graphics hardware if the server and the client are on the same computer. At its core, GLX provides three things: an API that makes OpenGL functions available to X11 programs, an extension to the X protocol that allows 3D rendering commands to be sent to the server, and a server-side extension that reads those commands and passes them to OpenGL.
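To see how tightly GLX is coupled to X11, here is a heavily trimmed sketch of classic GLX context setup in C (a sketch of the old-school code path only; build with cc glx_demo.c -lX11 -lGL). Note that the context is created from X11-specific types like Display and XVisualInfo:

/* Classic GLX setup: pick an X visual, create an OpenGL context,
 * and bind it to an existing X11 window. */
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int glx_attach(Display *dpy, Window win)
{
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };

    /* The visual comes from the X server itself */
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi)
        return -1;

    /* 'True' asks for direct rendering (DRI) when client and server
     * share a machine; 'False' forces indirect rendering, where GL
     * commands travel over the X protocol */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx)
        return -1;

    glXMakeCurrent(dpy, win, ctx);  /* subsequent GL calls target win */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);       /* present the back buffer */
    return 0;
}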

EGL (Embedded-System Graphics Library) is a successor to GLX, but it started with a different environment in mind. Initially, the focus was embedded systems, and platforms such as Android, the Raspberry Pi, and BlackBerry lean heavily on EGL for their graphical needs. Eventually, Wayland settled on EGL as well, since GLX drags in X11 dependencies and EGL offers closer access to the hardware.
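The EGL equivalent of the GLX sketch above carries no X11 types at all; the same code (with a different native display and window handle) works on Wayland, Android, or a bare KMS console. A minimal sketch, assuming a GLES2-capable EGL driver (build with cc egl_demo.c -lEGL):

/* EGL setup: the same job as the GLX example, but windowing-system
 * agnostic. native_dpy/native_win might wrap an X11 Display/Window,
 * a wl_display/wl_egl_window, or something else entirely. */
#include <EGL/egl.h>

int egl_attach(EGLNativeDisplayType native_dpy, EGLNativeWindowType native_win)
{
    EGLDisplay dpy = eglGetDisplay(native_dpy);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
        return -1;

    EGLint cfg_attribs[] = {
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n_cfg;
    if (!eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n_cfg) || n_cfg == 0)
        return -1;

    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_win, NULL);
    EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
    if (surf == EGL_NO_SURFACE || ctx == EGL_NO_CONTEXT)
        return -1;

    eglMakeCurrent(dpy, surf, surf, ctx);  /* GLES calls now target surf */
    eglSwapBuffers(dpy, surf);             /* present */
    return 0;
}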

When Martin Stránský initially added Wayland support to Firefox, he used EGL instead of GLX. Additionally, the Wayland implementation had zero-copy GPU buffer sharing via DMABUF (a Linux kernel subsystem for sharing buffers). Unfortunately, Firefox couldn’t turn this on for X11, where it would have improved WebGL performance (the code existed but was never stable enough). Nevertheless, features kept coming, making Wayland (and consequently EGL) more of a first-class citizen. Now EGL will be enabled by default in Firefox 94+ with Mesa 21+ drivers (Mesa is an implementation of OpenGL, Vulkan, and other specifications that translates API calls into instructions the GPU can understand).
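If you want to check or override the rollout yourself: as of Firefox 94 the switches live in about:config under gfx.x11-egl.force-enabled and gfx.x11-egl.force-disabled (pref names can change between releases), and the graphics section of about:support reports which backend you actually ended up with.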

Why This Move Matters

EGL brings two crucial features: the zero-copy shared buffers mentioned earlier, and partial damage support. Zero-copy means WebGL can be sandboxed and fast. Partial damage means the whole window doesn’t need to be redrawn if only a small part has changed, saving power. This shift also speaks to the ongoing tides in the software world. Slowly but surely, the world is moving towards the EGL/Wayland style of compositing. This change mainly means fewer abstractions and layers and closer access to the hardware. EGL benefits simply from being newer and (hopefully) less buggy with strange edge cases. Additionally, running Wayland natively by default in Firefox rather than through XWayland is a significant shift.
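Partial damage is exposed through the EGL_KHR_swap_buffers_with_damage extension: instead of a plain buffer swap, the client hands the compositor a list of dirty rectangles. A hedged sketch in C (the extension must be queried at runtime, and the fallback is simply a full swap):

/* Present a frame, telling the compositor that only one rectangle
 * changed. Everything outside it can be reused as-is, saving GPU
 * work and power. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <string.h>

EGLBoolean swap_with_damage(EGLDisplay dpy, EGLSurface surf,
                            EGLint x, EGLint y, EGLint w, EGLint h)
{
    /* Extension entry points are resolved at runtime */
    PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC swap_damage =
        (PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC)
            eglGetProcAddress("eglSwapBuffersWithDamageKHR");

    const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
    if (!swap_damage || !exts ||
        !strstr(exts, "EGL_KHR_swap_buffers_with_damage"))
        return eglSwapBuffers(dpy, surf);   /* fall back to a full swap */

    /* One dirty rectangle in surface coordinates (origin bottom-left) */
    EGLint rect[4] = { x, y, w, h };
    return swap_damage(dpy, surf, rect, 1);
}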

Anecdotally, people who have tried it say the performance gains have been stellar, particularly when watching videos. The shared buffer helps because, on many GPUs, video is decoded (converting the compressed stream, like H.264, into raw bitmaps) and then composited. Having a shared buffer and closer access to the hardware allows the GPU to transfer that decoded frame directly into the compositor buffer, rather than making a trip to CPU RAM and back out to the GPU, a round trip that is especially costly when CPU and GPU memory are separate.
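That zero-copy path is what DMABUF enables: the decoder exports the frame as a dma-buf file descriptor, and the consumer wraps the fd in an EGLImage without touching the pixels. A rough single-plane sketch using the EGL_EXT_image_dma_buf_import extension (real video frames are usually multi-planar YUV, so treat this as the shape of the technique rather than a recipe):

/* Wrap a dma-buf fd (e.g. from a video decoder) in an EGLImage.
 * No pixel data is copied; the GPU samples the decoder's buffer
 * directly. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h>   /* DRM_FORMAT_* codes */

EGLImageKHR import_dmabuf(EGLDisplay dpy, int fd, EGLint width,
                          EGLint height, EGLint stride, EGLint offset)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    if (!create_image)
        return EGL_NO_IMAGE_KHR;

    EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_XRGB8888, /* one plane, for brevity */
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    /* dma-buf imports require EGL_NO_CONTEXT and a NULL client buffer */
    return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                        NULL, attribs);
}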

To many of us, Firefox and other incredibly complex programs are mysterious boxes of wizardry. A peek inside to see the dedicated people who make them and how they make decisions and weigh tradeoffs is fascinating.

Curious about more Linux internals? Why not dive into a journey to main()?

55 thoughts on “Firefox Brings The Fire: Shifting From GLX To EGL”

  1. That’s too bad.

    Any major piece of software that includes X11 dependencies could help slow down the adoption of Wayland. It’s sad to see each one go!

    Sadly the kids don’t want a fun OS that does cool things like let you run your program on one machine but display it on a second. I guess they just want an OSX that they didn’t have to pay for.

    1. X11 remote display doesn’t work right; in fact, it never did. Sound is completely unsupported, security is nonexistent, 3D rendering is not supported, and the clipboard is just broken. Anything you can do with remote X11 can be done more securely and with more support by using VNC.

      When you have problems with your remote X11 setup, who can you call for support? Nobody, that’s who, because nobody supports it.

      1. > X11 remote display doesn’t work right, in fact it never did.

        What utter nonsense. X11 worked perfectly fine 20 years ago and still does. One can use it for CAD applications; the performance difference between a local and a remote screen is almost nonexistent. One can rotate complex 3D models just as fast as on the local machine. In comparison, VNC is an exercise in patience. I still use X11 today (Ubuntu 20.04 LTS) because it’s more reliable and less buggy than Wayland.

        The only problem with X11 is the unwillingness of developers to modernize it. Creating some fancy new thing is more fun for them, just because it’s new.

        1. Actually the issue lies with the fact that only two people have an understanding of X11.

          X11 is full of hacks to work around different issues, the fact that X11 once included a print server is nuts.

          That people often complain about systemd having too much feature creep but will then defend X11 till the cows come home actually amazes me.

        2. “worked perfectly fine 20 years ago and still does.”

          It only “worked” because network security was a joke 20 years ago. Running X11 remotely is a security disaster. Even tunneling it over SSH is just a partial fix, since it’s still in the clear on both machines.

        3. If you have a fast LAN, sure. But even over a remote VPN, it’s really bad. And anyway, CAD doesn’t need sound – which I assume is a large point against X11 forwarding.

          I love X11 forwarding as a capability, but I’d love for it to be replaced with an even better protocol. Wayland may not be the replacement, but X11 is sure not the thing we should want continued.

    2. Likewise… being able to run ssh -CY user@remotehost x11client and have x11client appear on the screen in front of you is a brilliant feature.

      Needed to print off a PDF on the weekend… the printer was at home, I was staying with family… my netbook was not configured to talk to the printer at home directly, but my laptop was. Brought up the VPN, scp’d the file over… then fired up a PDF viewer. Worked fine. Latency can be a problem with the X11 protocol because it is very “chatty”, but otherwise it’s a fine protocol when used over secure tunnels / networks.

      I’ve also got a Tektronix X terminal lying around here, and while seeing the world in 8-bit colour is not exactly groundbreaking, it’s still a neat party trick.

        1. You can read the printed copy with a livecam, and maybe turn pages with something hacked together with an Arduino. Just to keep it all simpler than having a printer at home. Better if something in the pipeline is written in Rust. Just joking…

        2. Redhatter has the PDF. House-sitter needs the PDF and has physical access to the printer. House-sitter doesn’t have a device that can print to the printer (they aren’t on that network or it’s an older phone that can be painful for setting up printing on, etc) or Redhatter doesn’t want to upload the PDF to a cloud storage system and share the link. Or it was simply easier and faster to do it remotely than to send a link to it to the house-sitter, etc.

        1. VNC was great when first invented, but I’m sad to say it’s objectively inferior to MSFT RDP today. Crank down the bandwidth, or up the latency, or introduce a cross-platform requirement where one of the platforms doesn’t easily support SSH tunneling of the connection for security, and VNC starts showing its age. Heaven forbid you should need all three.

          1. This. RDP is positively state-of-the-art compared to VNC. USB forwarding, printer forwarding, clipboard support, seamless application support (for goodness’ sake, we need that in open source protocols!); basically everything you need to forget that you’ve ever logged into anything and are viewing a remote workstation right now.

            I’m not even kidding. I used to run all my university computing off an AWS Windows instance, and the latency and performance over RDP was so good I could literally forget that I wasn’t sitting at a physical Windows machine. In fact, it was likely faster to use that VM than use the actual Windows 7 install deployed on the local machine.

  2. Wow. My ADHD is getting in the way – I couldn’t get past the statement “An X11 server is just a program that manages all the windows (clients).” An X11 server is not a window manager. XFCE, Gnome, and KDE are not X11 servers, but window managers. The X11 server is not a window manager, either – they are separate processes designed for specific functions.

    1. Clients != Windows. A client can have multiple windows all managed over the same network connection. The clients talk to the display server, not the window manager. The display server manages the network connections to the clients.
      The window manager is equivalent to a waiter in a restaurant: it manages the details but is not involved in the process of producing and consuming the display data.

      By the way it is perfectly okay to run one full screen app in X11 with no window manager at all, it works just fine.

      1. > By the way it is perfectly okay to run one full screen app in X11 with no window manager at all, it works just fine.

        Yes, indeed – I’ve noticed that when my blackbox conf didn’t fire up properly ;-)

  3. Although I’ve used X11 displays on both Unix and Linux, I was under the impression that it is a very inefficient way to communicate (i.e. display) graphics.
    So, am I wrong, WRONG, or WAY WRONG????

    1. X11 was designed in the 1980s to run on 1980s video. There is no support for any modern display technology. X11 can’t take advantage of any of it, it assumes you have a Sun 3 or a VAX station and dumbs down everything to that level. It’s like taking a tricycle on the freeway.

      1. Yet more nonsense. X11 supports every display technology available on the displaying terminal. It even allows remote applications to use technologies of the local display, which don’t exist on the remote side. Think of a graphics application running on a display-less server, forwarding all graphics to the user’s terminal. AFAIK, Wayland can’t do such things.

        X11 is a brilliant abstraction of user interfaces. That’s why it has survived all these decades, and why Wayland developers have a hard time doing any better.

        1. X11 survived due to lack of options and ruthless devs who were willing to do the very hard work getting around X in order to make it work.
          Also, Wayland!==X11. X11 is a mess of conflated intentions that’s both too complicated and inadequate.
          Talk to actual X devs. Talk to Ajax, Daniel Stone, Keith Packard. You don’t see them lamenting the replacement of X.
          Users only know whether things work or don’t work now, but have either forgotten or never experienced the massive headaches X caused the community back in the ’00s, when the modern stack was largely being formed and ignoring the inadequacies of X was no longer possible.
          Sure, a lot works now but that’s only due to developers who were willing to do the work necessary to bypass X as much as possible.

    2. On a local machine it’s efficient, just insecure.

      Also, nobody uses plain X11 forwarding anymore anyway; they rely on SSH encapsulating it for them. The same is possible with Wayland: it wouldn’t work exactly the same way, but it would be possible.

  4. Given Mozilla’s stream of questionable decisions, such as: not officially supporting Thunderbird, removing bookmark functionality in Firefox Mobile, data collection by default, changing the APIs so often that no one can keep up, the quest to gather all of your activity and paste it on a homescreen, and whoring itself to its partners – I am worried about any change.

    On both of my computers, I have GPU acceleration off and cores set to one. It’s a freakin’ browser; it shouldn’t take over your whole system.

    1. If you are only running on one core then any browser is going to take up your whole system, even lynx can get bogged down on a complex page.

      If you have GPU acceleration off you might as well be running on an old ATI Mach 8 video card.

      The Firefox changelist is usually very long, do you audit each change or only the ones mentioned in hackaday articles?

      1. Mozilla is steadily becoming a subsidy outlet, and Firefox is the worst example of how software should be managed.
        Facts:
        * Memory/GPU/CPU: Firefox is by far the worst browser; it exhausts even workstations if you are not aware of what it is doing.
        * Code quality/readability: it barely exists. Most Firefox code, while open source, is what we coders call obscure source: despite not being closed or proprietary, in a practical sense it is impossible to audit or recycle into another project, unless you copy-paste everything without knowing exactly what it does.
        * Stability: even if there were people with the resources and good intentions to audit and improve the Firefox code, it wouldn’t last long enough.
        * Apologetic rather than committed developers: there are N forums where you can find serious memory bug reports and the way to reproduce them; try them and ask the developers for help, but they don’t care. Try your very own memory bugs, report them, and watch what the Moz devs tell you. No need to dig deeper: open YouTube on any video that allows comments, try to write a comment with YouTube’s embedded emojis (don’t use your keyboard), and don’t post it for a while; soon you’ll see your system exhausted and RAM aggressively swapped to disk until it crashes with a beautiful BSOD.

    2. Tell that to the website content creators – go to a motherboard maker’s page for your motherboard and you get all manner of stupidly fancy graphics that really do need serious graphical processing – so GPU acceleration is a good idea, when all you bloody wanted was to download the latest BIOS version, or read the manual to make sure you put the RAM in the right slots, etc.

      One of the reasons I really like the solar-powered version of Low Tech Magazine – it’s so easy to render and not full of other bloat (though they have taken that minimal-footprint idea to a visually striking extreme). I’d not be shocked if somebody actually did browse to it on a really retro machine – if more websites were somewhere on that end, not having graphics acceleration would actually make sense…

      1. +1
        The real problem is that we are the Morlocks, but the shiny eye candy is what the Eloi love.
        We want things to be efficient and fast; the Eloi cannot see it unless it is shiny, no matter how much wasteful bloat that takes.

        1. So true.

          My website is hand typed HTML 1.1(ish) with the most modern tag being the paragraph tag. It’s super fast, super small, and is very easy to navigate. I have some pictures where unavoidable, but otherwise, it’s all text. I love it.

          1. I don’t understand the HTML 1.1 thing. I mean, you are not alone in praising the “older, better times”.
            But I don’t understand why you don’t use HTML5 and CSS.
            I too hand-type my website, and, golly, CSS is a life saver. I remember when I needed to add tags everywhere in the HTML to get colors or styles.
            Now, my HTML is the data (as it should be) and the CSS is the appearance (as it should be).
            No need for fancy animated stuff or whatever; it’s just really easier to write your CSS once and apply it to all your HTML pages.

            The problem is not the HTML5/CSS3 technology; it’s that most websites are created by incompetent people who copy-paste stuff into pre-built templates and then add a layer of tracking on top of that.

            HTML5 and CSS3 let you make beautiful, lightweight, simple pages; it seems dumb to deprive yourself of modern comfort when it doesn’t impact performance.

    3. Yeah, I suppose that’s more secure, but you sound like you want to get the kids off your lawn. Meanwhile, in 2021, the browser is pretty much the only application most people use to do everything. It should be able to use your whole system, because it’s probably the only thing running.

      But if you want to keep partying like it’s 1995 and only display static HTML, more power to you. Just don’t expect anyone else to do that.

      1. The shitty thing is that every relatively big app assumes it can take your whole system. And I have one system, not multiple (browser, MS Teams, IDE, and so on).

        I have nothing against web apps; the problem is that only web apps tend to remain (even if they are packaged as native ones).

  5. It’s about time we had dedicated hardware for the browser.

    It’s just dictating too much of how the operating system under it is supposed to work. It’s not enough that Firefox mandates PulseAudio (yah, I can cheat it with apulse, thankyouverymuch); now it wants to talk directly to the GPU.

    The least trusted piece of software in my box.

    “Zero-copy means WebGL can be sandboxed …”

    Yeah, right. How does one see that those browser outfits are a Spawn of The Propaganda Industry?

    And no, Chrom*ium et al are far worse.

    I’ve the impression I ended up in some wrong branch of reality.

    1. > but now it wants to talk directly to the GPU.

      Sorry to break it to you, but that has now been the case for many years. That’s what DRI and surrounding tech (GBM, dma-buf, …) is for; bar X11 forwarding, the days of indirect rendering by piping all GPU commands through the display server are long gone.

  6. Wayland has been in development for 13 years now, and what do they do? Celebrate being slightly faster than X11 in one or two use cases.

    What a waste of resources. Putting all this workforce into X11 would have made it a stellar technology. Too bad so many developers still think starting from scratch and throwing away decades of experience and bug fixing is more efficient than improving already existing software.

  7. We use lab machines with Android devices connected to them for testing. My development machine is a VM that I can remote to from my thin client pretty much anywhere. I use scrcpy to see the screen of the Android device. It forwards just like any other X11 application. So I ssh into the machine and can have a nice little window to my device. If I had to use VNC it would be a worse experience in almost every way.

  8. It’s got two pain points – video acceleration and 3D. If you have a powerful machine that runs the X11 server, but a very weak X11 client machine, advanced 3D is out of reach. Those two are perfectly viable use cases in 2021 (or even 2017): I want to be able to game and watch YouTube remotely, so why can’t I do that?

  9. As is obvious from this thread, lots of us will still be using X11 in 10 years, and many others will be using something else.

    And the people using the new system refuse to believe we have reasons. And yet, we do. Among them, valuing normal networking capabilities above graphics speed.

    In any case, EGL can be used under either windowing system, and is the future for most applications.

    As is so often the case, the application devs wrote a buggy interface and blamed some other part of the system, on account of their second attempt, with that other part replaced, being better written than their first.
