WebAssembly: What Is It And Why Should You Care?

If you keep up with the field of web development, you may have heard of WebAssembly. A relatively new kid on the block, it was announced in 2015, and managed to garner standardised support from all major browsers by 2017 – an impressive feat. However, it’s only more recently that the developer community has started to catch up in terms of adoption and tooling.

So, what is it? And what use case is so compelling that it caused such quick browser adoption? This post aims to explain the need for WebAssembly, give a conceptual overview of the technical side, and walk through a small hands-on example for context.

What is WebAssembly?

Javascript has been around since the 90s. To be honest, it’s done a pretty good job; whilst it certainly has its drawbacks, it’s succeeded at creating an interactive modern web, even escaping from the browser more recently.

But, inevitably, the type of applications being served up on the web today are far more demanding than could have been imagined when Javascript was first created. Computationally intensive applications and tasks like 3D graphics, video/music editing and VR/AR have traditionally only been able to run natively, because Javascript inherently doesn’t have the performance capabilities to support them.

But what if it were different? What if there was a way to run low-level, optimised code in the browser with near-native performance? That’s exactly the mission statement that WebAssembly aims to fulfil, and it’s a pretty compelling one.

All this talk can evoke grumblings from those who argue that intensive applications are still best off running natively. But ultimately, no-one can deny the convenience that web applications provide end users – no installation means no disk space eaten, no local security worries, and no installation process. It also provides a large amount of appeal to developers, who only need to deal with one platform. Code only needs to be written once, and is far easier to support and maintain.

WebAssembly was created with these high-level goals:

  • Harness common hardware capabilities, be portable and efficient
  • Produce modular binaries which use imports and exports in a similar way to Javascript objects
  • Support non-browser embedding
  • Integrate into the existing web platform (enforce the same security policies, access browser functionality through the same APIs available to Javascript, etc.)

Note: if you’re interested in reading more about the security of WebAssembly, the official docs provide a good overview, and Lin Clark has done a great deep dive on memory access.

How does it work?

Designed to operate at the lowest level possible without being compiled for a specific platform, WebAssembly runs directly in the browser, and is the second ever language to be directly understood by browsers. This means that whilst it isn’t machine code, it’s at a low enough level that the browser needs to do very little work to execute it.

WebAssembly comes in two flavours: the .wat text format and the .wasm raw binary. The two are exactly equivalent and exist purely for convenience; the binary is what actually ships to the browser, whilst the text format is there for humans to read and write. Note that sending compact binary code to the browser instead of Javascript source files also has benefits for the page download size.
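To make that concrete, here’s roughly what loading a raw .wasm binary looks like from plain Javascript, with no toolchain involved (the file name and the compute export here are made up for illustration):

// Fetch, compile and instantiate the binary in one step;
// instantiateStreaming compiles the module while it's still downloading.
WebAssembly.instantiateStreaming(fetch('compute.wasm'))
    .then(({ instance }) => {
        // Exported functions are called just like Javascript ones.
        console.log(instance.exports.compute(3, 4));
    });

In practice you’ll usually let a toolchain generate both the module and the loading code, as we’ll see below.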

Whilst it’s perfectly possible to write WebAssembly yourself, you probably don’t need to, just like you don’t always write in your favourite CPU’s assembly language. WebAssembly is designed to be a compile target for languages like C/C++ and Rust.

The end result is that you can expect WebAssembly to execute at pretty similar speeds to a native application – usually just 10-20% slower. Of course, it’s harder to put a figure on how much faster than Javascript it is, since it depends heavily on the use case and platform.

Because existing C/C++ codebases can be compiled to WebAssembly, it’s easy to re-use program logic when porting from native applications to the web. That’s what Autodesk did with web.autocad.com. They were able to use the same C++ core from their existing desktop apps and slot the WebAssembled version into a Javascript UI, meaning the AutoCAD editor is now available in the browser. The beauty of this is that the C++ core can continue to be improved and debugged by C++ developers who don’t need to know anything about Javascript or the web UI.

How does it compare to asm.js?

You might be reading this and thinking that some parts sound familiar. You’d be right: asm.js has been around for a while now, and also allows cross-compiling C for the browser. Asm.js is just an optimised subset of Javascript which the browser has to do less work to interpret and execute. It’s an approach which certainly works, but it’s essentially a hack around the limitations of Javascript. It also suffered from never being standardised, so performance improvements were inconsistent across different platforms.
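For a flavour of what that subset looks like, here’s a tiny hand-written asm.js module (purely illustrative – real asm.js was almost always generated by a compiler rather than written by hand):

function FastMaths(stdlib, foreign, heap) {
    "use asm";
    // The |0 annotations tell the engine these values are 32-bit integers,
    // letting it skip Javascript's usual dynamic-type machinery.
    function compute(a, b) {
        a = a | 0;
        b = b | 0;
        return (a + (b << 1)) | 0;
    }
    return { compute: compute };
}

Code like this is still valid Javascript, so browsers without special asm.js support can run it anyway – which is exactly the fallback property WebAssembly trades away for a leaner, faster format.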

WebAssembly effectively supersedes asm.js, simply by being a standard directly understood by browsers. It’s faster, more consistent, quicker to download and easier to cache than asm.js.

Figma moved their online editor from asm.js to WebAssembly and talk about the benefits in this article, which is worth reading if you’re interested in the finer details of how they differ.

How do I use it?

Now we’ll walk through a super simple example of using WebAssembly, to illustrate the mechanics of the process. Let’s suppose that we have some computationally intensive graphics code, written in C, which we want to be able to use in the browser.

#include <emscripten.h>

// EMSCRIPTEN_KEEPALIVE stops the compiler from stripping compute as
// dead code, keeping it exported and callable from Javascript.
EMSCRIPTEN_KEEPALIVE
int compute(int a, int b) {
    // Loads of maths
    return a + 2 * b;
}

For this simple example, all we need to do is include the emscripten.h header and mark the compute function so that it’s exported. Emscripten is what we’ll be using to compile our source code to WebAssembly. It’s a powerful tool which does a lot of work behind the scenes to make things run smoothly – from simple things like generating helper HTML and JS files, to automatically converting OpenGL calls to WebGL and providing different levels of build optimisation. We can use it to compile our example like so:

emcc graphics.c -o graphics.js -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' 

The flags say that we’re compiling with aggressive optimisation (-O3), that we’re targeting WebAssembly (-s WASM=1), and that Emscripten should expose its cwrap helper at runtime (more on that in a bit).

This compilation process produces a binary graphics.wasm file as well as a handy graphics.js Javascript “glue” file, which contains some conveniently generated helper code for quickly getting off the ground with our new WebAssembly module.

Let’s now go ahead and create a page which uses this newly compiled module.
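The page itself only needs a few lines; something along these lines does the job (the element id and the example arguments are illustrative):

<!DOCTYPE html>
<html>
  <body>
    <p id="result">Computing…</p>
    <script>
      // Emscripten's glue file looks for a global Module object.
      // onRuntimeInitialized fires once the .wasm has been fetched and
      // compiled, so it's safe to call into the module from there.
      var Module = {
        onRuntimeInitialized: function () {
          // cwrap(name, return type, argument types) binds the compiled
          // C function to an ordinary Javascript function.
          const compute = Module.cwrap('compute', 'number', ['number', 'number']);
          document.getElementById('result').textContent = compute(3, 4);
        }
      };
    </script>
    <script src="graphics.js"></script>
  </body>
</html>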

As you can see, the helper file graphics.js makes it trivial to use our module. After including graphics.js, we simply use cwrap to bind our compiled compute function to a compute function in the Javascript namespace. Then we’re free to use it as we would a normal Javascript function – in this case we just update an HTML element with the result of calling it. The result is exactly what we’d expect from the C code: compute(3, 4) gives 3 + 2 × 4 = 11.

It’s clear that, for a simple example like this, it takes very little effort to import a compiled WebAssembly module whose functions can, in many cases, be used interchangeably with their Javascript equivalents. This is part of the conceptual appeal of WebAssembly; as it matures and becomes more widely used, it will be easy to find and make use of libraries which implement optimised code with virtually no extra overhead for the developer. You might even end up using libraries built on WebAssembly without realising.

Conclusion

We’re likely to see a lot more of WebAssembly in the near future. For the majority of sites it’s not something which is required, or even useful, but it’s nice to know the capability is there if higher performance is needed or existing C/C++ code could be reused. WebAssembly is simply the natural evolution of the browser to keep pace with the experiences that developers want to deliver on the web – and that can only be a good thing.

45 thoughts on “WebAssembly: What Is It And Why Should You Care?”

  1. That’s great!

    So long as they are working on the deficiencies of client side programming how about an alternative to (eventual replacement of) javascript for the non-low level stuff.

    I’m thinking something sane that wasn’t originally created just to enable a bit of fluffy eye candy. I’m thinking how about a language with actual class, function, struct and interface keywords.

    All of this a function is a function is a class is an object marklar is just marklar and it really frosts my marklar to be forced to use it!

    1. Getting every browser to comply with a new language is a non-starter. They aren’t even all in agreement on how to execute javascript (though they have gotten quite a bit better at that). The best solution right now is to use a language that will compile to javascript. That way you can write in a language you prefer and the browser still gets to run javascript.

      There are quite a few options out there now: Typescript, Elm, ReasonML, ScalaJs, Purescript, to name a few. Even the latest versions of javascript will transpile to older versions with more browser support, giving you the option of using new features without IE and Safari holding you back too much.

      1. Why would you want to use a language that compiles into and then runs as javascript? That’s just adding yet another layer of horrendous bloat on top of the already miserably-performing javascript jungle we’re stuck with in 2019. There’s no way that would be preferable.

        Unless, as the article says:
        “It also provides a large amount of appeal to developers.”

        That’s the real reason for a lot of these shenanigans. For some weird reason, many shops conflate user experience with dev experience. Now we have barely-functioning massive lumps of code to display a few bytes of text in a blog post more slowly than it worked in the early 2000s, despite our current god-like hardware. They said up there it’s for convenience that nobody could deny–but is that really true? Is it so undeniably convenient, in this age of ludicrously cheap solid-state local storage, to have everything function as a lazily-developed and bloated web app? Something that was vomited out during a few agile sprints and pushed the instant it became minimally viable, since (in theory anyway) you can update it whenever? Keeping everything on the cloud enables public undisclosed alpha testing like that, and also conveniently denies any form of user control. Is it so convenient to access badly-performing web apps that you can never truly own and might be pulled out from under you or ‘updated’ to exploit user data as a resource once the real owners hit a monetization wall? Which party is it convenient for?

        This is more aimed at the article, by the way. Gotta rant.

        I don’t think more obfuscated javascript is a good idea, but web apps using low-level languages probably aren’t a great idea either. I think developer convenience and development-cycle stinginess is forcing it on users who probably wouldn’t prefer it if they were actually given the choice, which is apparently just not in vogue anymore. God my job would suck if all my software was on the cloud!

        Software behaves as a gas–it always expands to fill the shape and size of its container.

        1. “Why would you want to use a language that compiles into and then runs as javascript?”
          Because I don’t want to write code in javascript. Which, by the way, is the same reason I don’t write assembly code. If I can write code in a type-checked and compiled language instead of a scripting language where all errors are run-time errors, that’s what I’m going to do. My code runs in browsers, so it has to end up as javascript somehow, that’s unavoidable, but I’m going to do my best to have as many checks in place as I can to verify my code works before a user sees it.

        2. This “JavaScript is slow as molasses” meme needs to die already.
          JS is stupid-quick now. It can run Unreal Engine 3 without lag at 60FPS. And likely better now, since that was several years ago and there have been some pretty major compilation optimizations since then (particularly with Canvas).

          The reason all these guff websites are slow is due to crap like jQuery and so on, all these horrific high-overhead libraries that, due to the ability to chain things to infinity, make it horribly slow.
          JS has this really nasty scoping issue that means there is SIGNIFICANT overhead for every layer you go deeper into a chain.
          If you avoid that, you can write JS that runs incredibly fast and, for a large number of things, runs better than a surprisingly high number of NATIVE applications! Particularly some older code in Windows, like the old MS Paint. Hell, even the new MS Paint. Old MS Paint, now that runs like molasses. Even old IE-era JS was faster than that crap.
          I have no idea what they’ve done in MS Paint to make it that crappy.

          It’s no longer 2005. JS is so quick that CPU timing attacks (Spectre, Meltdown) were possible in it.
          These have, for the most part, been patched. But those are about as good as most of the other supposed patches we’ve seen! LOL. I expect they’ll be defeated sooner or later like some recent Intel ones were.

          1. So what I hear you saying is you want the more NUMEROUS and BETTER attacks that web assembly can offer.
            Also Unreal Engine 3 is 14 years old now and was HIGHLY optimised for the terrible DVD based consoles of the day so “FPS” is nothing to brag about, how about number of milliseconds till the last texture streams in…. 1000,2000,3000,4000…

          2. Sure, Vanilla JS is quite fast and capable enough for Real Programmers ™ But unfortunately people keep using frameworks like jQuery no matter how many times you send them to http://vanilla-js.com. After a few years, a popular web app might call 3 or 4 huge code frameworks just to render a static page, as each new web intern loads his or her favorite unnecessary crutch.

      2. With the edge browser, safari and chrome all running from the same web engine, it doesn’t take much to implement a different language in a wide variety of browsers. If google puts its mind to it, I think it can be done. Firefox will have to follow suit.
        Not that there’s a need for it, as you said anything can be transpiled to javascript.

        1. “With the edge browser, safari and chrome all running from the same web engine, it doesn’t take much to implement a different language in a wide variety of browsers. If google puts its mind to it, I think it can be done. Firefox will have to follow suit.”

          What I don’t get is how this doesn’t scare people. With Microsoft switching to the chromium web engine, we now have google mainly in control of how the new standards of the web are developed. I think that this is the main reason that google needs to be broken up into smaller companies, as we are right back in the 90s dealing with the same issues that we had with Microsoft and internet explorer. Hopefully the EU will realize this and try and take some action, as we could probably correctly assume that the US won’t do a thing about it and probably already has TLA plants inside google (which would also explain why they are trying to silently continue project dragonfly).

        2. Edge and Chrome are both using V8 as their javascript engine now, but I believe Safari is still using JavascriptCore, and Firefox uses SpiderMonkey. I think if Google were serious about running a different language in the browser, they’d ship Chrome with Dart VM, especially since it’s apparently much faster than V8, yet they don’t. My suspicion is that, even though the major browsers all seem to be playing nice (finally), there’s still quite a bit of tension between the companies and if any one of them were to try something as major as supporting a new language, the others would resist.

      3. This may not be such a big deal. I know Microsoft has a Blazor project (Razor on the front end) that uses WebAssembly. It works by having the server send stripped-down .NET framework DLLs (I mean the supporting libraries, not specifically “.NET Framework”) to the browser. The browser does not need to support .NET or Razor, just the WebAssembly piece.
        I have no idea what security headaches will crop up from all this.

    2. I generally seek moderation when people take a strong position of criticism or favor (i.e. ‘it’s not that good’, or ‘it’s not that bad’).

      JavaScript can be a beautiful language when used to its strengths. Trying to force it to work like something it is not (a class-based language), when it works just fine if you use it how it is designed (a prototype-based language), can make things difficult for sure. A function is not a class: the trouble comes when you try to make it one, using a paradigm meant for a different language.

  2. I just sort of scanned this article, so I didn’t see what interests me most about building a web page…
    Security.
    Is there any consideration that a web noob like me can build a page, and not end up with more holes in it than a screen door?

    1. I don’t think WebAssembly is going to help (or hinder) you much in that regard. It really comes down to being able to run JavaScript in your browser. If you can do something dangerous with plain JavaScript, then you can do something equally as dangerous with WebAssembly. It also depends on what kind of web page you are trying to create. Is it something that has front-end/back-end communication, or is it something that can be served as static content with a bit of front-end behaviour?

    2. This is actually as easy as finger paints!

      Use HTML. Done. You have no security holes. You actually can’t have security holes.

      If it is actually a legit “web application” instead of a “web page,” then you’ll need to understand security. But most web sites could just be a collection of linked pages, and don’t really need to be web applications.

      For example, in many cases you don’t need a live database; you could export that data into static HTML pages. Now when the data changes you have to update the site before it shows up, but on the other hand, no live data means no live exploits or hacking of that data.

      1. The amount of overhead that would create would be horrendous from a business perspective.
        For that to work effectively you’d have to export entire database tables out as static html.
        Then find some way of filtering based on the users selections server side which would be even more overhead.
        Typically the one thing you don’t want is running massive queries on a database / exporting the entire table on a database that has a lot of data and has to be updated often.
        Not to mention joined database queries

    1. I can’t help but feel that’s like serving a medium-rare steak to an infant.
      I mean I understand your desire from a “well why the fuck not” perspective of curiosity, but in the end Chromebooks were designed for being nothing more than the hardware link to the web.
      A Facebook portal to a mere mortal if you would.

  3. “All this talk can evoke grumblings from those who argue that intensive applications are still best off running natively.”

    I’m sure someone would argue the consequences of taking a platform whose original intent wasn’t to be a place for applications and trying to make the paradigm fit. Kind of like seeing how many clowns one can stuff in a VW bug, just because.

    1. “Kind of like seeing how many clowns one can stuff in a VW bug, just because.”

      So… as important as something some people will likely devote their entire lives to, then.

  4. “…WebAssembly runs directly in the browser, and is the second ever language to be directly understood by browsers.” really niggles my pedantic bone! HTML & CSS are languages, and I’m pretty sure they’re directly “understood” by browsers!

    1. Your pedantic bone has insufficient ligamenture connecting it to context; language in this case being short for “programming language.” And HTML and CSS of course being mere markup languages, not programming languages.

  5. My gut tells me it’ll primarily be used for even more dickish adverts and consumer privacy violations.
    But that might just be my experience with the big “players” of the web being ruthless dickbags.

    1. That was my thinking too.

      “What if there was a way to run low-level, optimised code in the browser with near-native performance?”

      That means if I don’t turn it off before I even start browsing, then by the time I want to turn it off my GUI will be non-responsive and I’ll have to ssh into my desktop from a tablet just to kill the browser process!

      No, no, no, no, and no, pretty much sums up my desire to even allow this “feature.”

  6. Given that many MANY languages can be transpiled down to JS these days, and given that most JS engines do JIT, I’m struggling to see the point of WASM… so… it’s like a non-native assembly, an assembly for a sandboxed virtual machine if you like, so… it’s trying to be Java instead of JS? Like that’s a GOOD thing?

  7. “[JavaScript] even escaping from the browser more recently.”

    You mean recently like ca. 24 years ago, when Netscape introduced server-side scripting for Netscape Enterprise Server? About 3 months after JS debuted in the browser.
    (ref Wikipedia)

  8. Nice demo. Tried this out and noticed some errata here:
    In the HTML page the cwrap mapping is a bit different (you specify the function name, return type, and calling param types), and you also do not use names but one of these: number, array, string (using another name like a, b here defaults to ‘number’, but it gives the incorrect impression that the names matter):
    const compute = Module.cwrap('compute', 'number', ['number', 'number']);

    In graphics.c, the EMSCRIPTEN_KEEPALIVE is to be added after the return type, just before the function name, according to the documentation… although I tested that both ways work, and I like just adding the line above the way it’s shown here. The keepalive is to be added to all functions you want to call from JS; alternatively, a compiler flag to Emscripten with method names is also allowed.

    1. I was thinking just that, too. Like Java, WebAssembly is targeting a common agreed-upon “virtual machine”.

      Unlike Java, Wasm isn’t controlled by one company, so other companies don’t have their backs up over using it, like many did with Java (cough -M$-cough). Also Wasm has hooks into JS by design.

      1. “Like Java, except with hooks into the DOM instead of needing to run in its own stoopid rectangle” has SOME appeal admittedly, but I still have a gut feeling that “JS JIT-compiled to native” ought to get better performance than “HLL-of-choice into WASM into native”. WASM feels like it might save a bit of bandwidth, but probably not much more than minification and gzip… and WASM might add a bit of DRM-ish-ness, making it harder to reverse engineer / “see the source”, but that probably counts AGAINST it :-D

    2. In a sense, what’s being defined is the thing you’re compiling to (in the case of Java, the Java byte code) rather than the thing you’re compiling from, which could be anything such as javascript, typescript, react, vuejs, c++, rust etc.

  9. All due respect, but this is bullshit, Ben:
    “…no *local* security worries…”
    Whatever runs locally, in the browser (pretty much always), someone’s got cause to worry — I know, people choose not to, but, um, it ain’t universal.

    P.S. Sandboxes don’t put those concerns to rest; I’ve seen enough to be skeptical of the quality of software offered to me on assurances.

    1. I don’t worry. The reason I don’t worry is that I’m using scriptblock, uMatrix, and uBlock Origin.

      By default, nothing gets to run code. If I wanted dynamic behavior from the site, I turn it on in scriptblock. But that only loads things they host themselves. Third party stuff I have to turn on domain-by-domain in uMatrix. If it isn’t considered malware by uBlock, then it will finally load.

      When I attempt to visit a site and I only see a blank screen, instead of turning things on, I turn more things off; CSS. Turning off CSS makes any site that is “accessible” visible. It has to or it won’t work with the accessibility tools. If it isn’t accessible, it is probably pretty lame anyways; I can acquire data elsewhere.

      1. OK, I’m about to be pessimistic again, here.
        AIUI, the effectiveness of script-blockers et al. is under jeopardy, from changes in the underlying API — particularly Chrome/Chromium, but because Firefox has adopted the same add-on API, I think it will surely follow suit. (FWIW I’m not wholly confident that WebExtensions were ever as accommodating as Firefox’s previous API (I’m no expert, but) I seem to remember Giorgio Maone mentioning something about how chrome addons couldn’t fully prevent scripts from executing, even if blocked from visibly manipulating the DOM or whatever.)
        So, I’ve seen people say that creating a truly secure browser (for normal use cases) is practically impossible. At the risk of sounding intractably defeatist, I don’t that assesment is far off. For one thing, just by observing, I’ve come to view Google and Mozilla[1] particularly with a fair amount cynicism (did you know? every[2] “new” browser uses code maintained by Google employees. Browsers are a lot of work!). To be fair though, why is it hard? Don’t website … publishers (whatever they are, a broad and diverse group) make it difficult because they so love all the *aaS, the whiz-bang presentation, the, um, intelligence-gathering, and whatever I’m neglecting among the wild possibilities that become necessary to use whatever they offer. And these things do become necessary for users in non-negotiable ways. It’s a tangled, interdependent ecosystem — little wonder some look at it and say it’s impossible; among those who have clout/ability/whatever, putting user safety and comfort as a first priority is rare.

        TL;DR Browsers are not “user agents”, there’s the rub ( I’m a pedant, you could protect users from themselves — whole nother can of worms)

        Pardon me ranting, but my motivation is optimistic. I think more comprehensive appreciation can lead to better decision making. Change is constant, and sometimes that brings opportunities to nudge things in a better (or worse) direction. All grandiose about web browsers — but *hey, it’s a big deal to me!*

        [1] not to make this about blaming, but to be clear, I’m lumping them together as bad influence and corruptee. They have their own distinct dysfunctions.
        [2] almost

  10. “no installation means no disk space eaten, no local security worries, and no installation process. ”

    So no installation means no installation process? Brilliant.
    No installation means no disk space eaten? And running on a browser? Sure..
    No installation means no security worries? Over the web, and running on a browser? Right…
    Smells like Silicon Valley already.

    “It also provides a large amount of appeal to developers, who only need to deal with one platform. Code only needs to be written once, and is far easier to support and maintain.”

    “Write once, run anywhere”…. Have we heard this somewhere before?
    “large amount of appeal to developers” .. in a clueless-agile-hipster sorta way I guess.

  11. For a while now it’s been fairly common for websites with modern approaches to do something called “transpiling”. This is where we take a different language and then convert it down to javascript, typically whatever common standard is in use by most browsers at the moment.
    This includes stuff like

    Typescript – which adds type safety to the language
    it also prevents you from doing random stupid shit like assigning an int to a string or adding a new class property without declaring it (my favourite personally, coming from a C# / .Net background), and also allows for things like auto-completion, since the IDE can predict what properties or functions should exist.
    Even though it’s Microsoft-originated, I’ve noticed a few projects like Vue 3.0 that are going to be moving to it, so I think it’s gaining popularity.

    Next generation javascript
    want to code in javascript but also want to use all the new language features not yet supported by all the browsers? no problem. Just write it in the new latest version standard then transpile it down to the more commonly used by most browser standard

    React
    This is sort of a hybrid of javascript and html, has the benefit of automatically updating things in the UI on property changes

    VueJS
    Similar to React but breaks down everything into components with a handlebars template file, css / scss, typescript / javascript etc. (again personally my favourite)

    Strictly speaking, these sorts of frameworks allow you to do things that would be very difficult and time-consuming to write directly in JS.
    While modern javascript might be fairly fast now, it does make more sense, if you’re going to “transpile” or “compile” something to a common standard, to compile it down to something lower-level for performance reasons.
    To give an example https://github.com/AssemblyScript/assemblyscript

    But you can also compile C++, Rust, insert language of your choice.
    Granted, there are probably going to be security concerns during the early days, but at the same time, if all these different langs can be compiled down to a single standard, it does put a lot (but not all) of the bugs in one place.
    Since folks are compiling to javascript at the moment anyway, it can’t hurt to give them something that has better performance / is pre-compiled. If anything, it makes the whole transpiling thing a bit more sane.
    The way things currently are makes me cringe at the implementation, but I’m also amazed that it works so well.

    Webpack’s current approach is to try and optimise and uglify (remove line returns) the overall components into a single blob for performance reasons. Typically, when debugging in browser, there’s also support for “source maps”, which are basically auto-generated files that allow the web console or IDE to map compiled code back to the original source for debugging.

  12. Escaped from the Browser RECENTLY?

    It was used in UOX3 if not the original UOX Emulator back in *1999-2000* right after the original open source release of Gecko/Spidermonkey. While Node.JS and company are new, embedded javascript outside of the browser has been around for 20 years now.
