OpenCredo

December 11, 2020 | Cloud, Cloud Native, Kubernetes, Microservices

WebAssembly – Where is it going?

“WebAssembly is a safe, portable, low-level code format designed for efficient execution and compact representation.” – W3C

In this blog, I’ll cover the different applications of Wasm and WASI, some of the projects that are making headway, and the implications for modern architectures and distributed systems.


Mateus Pimenta

Lead Consultant


When WebAssembly (Wasm) first reached major browser support back in 2017, it presented a secure alternative for speeding up computation on the web. But when Mozilla unveiled the WebAssembly System Interface (WASI) in 2019, that was a watershed moment: WebAssembly made its way out of the browser to virtually everywhere. A complete revolution in the way we code, package, distribute and run our applications is coming!


Wasm in browsers

In November 2017, Wasm became supported in all major browsers (Microsoft Edge, Safari, Google Chrome and Firefox). An official W3C standard was proposed soon after and reached Recommendation status in December 2019. That means you can now build processing-heavy applications that run inside the browser with near-native performance, without being held back by the performance limitations of the JavaScript engine or by cross-browser problems.

With WebAssembly, you can still develop your typical single-page application in JavaScript and off-load complex operations to your Wasm code via the WebAssembly JS API.
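As a minimal sketch of what that interaction looks like, the snippet below instantiates a Wasm module and calls an exported function through the WebAssembly JS API. To keep the example self-contained, the module is a tiny hand-assembled binary exporting a single `add` function; in a real application you would fetch a `.wasm` file compiled from C, C++, Rust or another language.

```javascript
// A minimal WebAssembly module, hand-assembled so the example is
// self-contained. It exports one function, add(a, b) -> a + b, on i32s.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic number + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0, local.get 1, i32.add
]);

// The synchronous API keeps the sketch short; in a browser you would
// normally use WebAssembly.instantiateStreaming(fetch('lib.wasm')).
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

The exported functions appear on `instance.exports` as plain JavaScript functions, which is what makes off-loading hot paths to Wasm so ergonomic.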

Gaming was the initial application for Wasm, and it gained a lot of traction when game engines like Unreal Engine announced Wasm support. However, Wasm is not limited to gaming. There are many other applications, such as image processing, video processing or a full-blown sound-processing studio, that until now were limited to desktops and servers.

Machine learning (ML) model inference is also an excellent application of Wasm. Front-end applications can start to use complex ML models, such as face detection, speech-to-text and other AI models, directly in the browser without the need for a supporting backend. All the heavy calculations are taken out of the JavaScript engine and run as native code.

TensorFlow.js is one of the leading projects in the ML space and offers a Wasm backend. It is also able to use multithreading and Single Instruction Multiple Data (SIMD) to speed up inference even further. Once the Threads and SIMD specifications are officially released, we should see other projects exploiting those new capabilities.

The ability to execute compute-intensive applications in the browser has two significant consequences. First, software companies can start to offer previously desktop-only applications through the browser. With this, they reap the benefits of easy binary distribution and streamlined rollout of software updates and security patches, and can provide a deeper Software-as-a-Service (SaaS) experience to their customers.

Secondly, heavy-processing backend services can be shifted to the browser, using the customer's machine to provide a low-latency, responsive experience while reducing the compute capacity and operating costs of running those services in the cloud or a data centre.

But WebAssembly doesn’t stop here.

Wasm in… well, everywhere

In March 2019, Mozilla announced that the WebAssembly System Interface (WASI) specification was on the way. But what does this mean?

Until then, Wasm could only run sandboxed native code inside a browser. With WASI, Wasm is now capable of interacting with the host system to read and write files, create network sockets and much more. So now we can compile applications, services and tools to Wasm and run them just as we usually would on our machines – all with the same portability, security and compact binary format that we get from WebAssembly.
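The mechanism behind this is worth a quick illustration. WASI is built on Wasm's import system: the host hands system capabilities (files, sockets, clocks) to the module as imported functions, and the module can call only what it has been given. The sketch below shows that general import mechanism with a hypothetical host function `env.mul2` – it is not the real WASI interface, just the capability-passing principle WASI is built on.

```javascript
// A hand-assembled module that *imports* a function "mul2" from the host
// (module "env") and exports "run", which simply calls the import.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic number + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f, // type: (i32) -> i32
  0x02, 0x0c, 0x01, 0x03, 0x65, 0x6e, 0x76,       // import from module "env"...
  0x04, 0x6d, 0x75, 0x6c, 0x32, 0x00, 0x00,       // ...the function "mul2"
  0x03, 0x02, 0x01, 0x00,                         // one local function, same type
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b, // body: call the import
]);

// The module receives exactly the capabilities we hand it – nothing more.
// A WASI host does the same with its file/socket/clock functions.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes), {
  env: { mul2: (x) => x * 2 },
});

console.log(instance.exports.run(21)); // 42
```

Because nothing reaches the module except through these explicit imports, the host stays in complete control of what the guest code can touch.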

The implications here are quite profound. Let’s take a look at them one by one.

Containers and Container Orchestration

“A self-contained artefact that can run as native code in isolation from the rest of the system.” If you thought that sounds like a container, you are absolutely right. And I could not write a blog referencing Wasm and containers without mentioning that Solomon Hykes, the creator of Docker, has stated that if Wasm and WASI had existed in 2008, there would have been no need to create Docker. It's not that we will be replacing our containerised workloads with WebAssembly all of a sudden, but we are looking at a great technology that could fill the gaps and shortcomings of containers.

So it's entirely possible that many container-based products like Kubernetes, AWS ECS, AWS Lambda (now with containers!), Google Cloud Run, Azure Container Instances and others could start running WebAssembly applications alongside traditional containers. And this world might not be that far off. Deis Labs, owned by Microsoft, has already created an experimental project, Krustlet, that adds WebAssembly support to Kubernetes! Expect others to come soon.

There are a few benefits to WebAssembly over containers.

First, even though traditional container security is feature-rich, it's often complicated, because processes have direct access to the kernel; even with proper security contexts applied, there's still a large attack surface to exploit. There's been a recent trend towards sandboxed runtimes and micro-VMs like gVisor, Firecracker and Kata Containers that limit a container's exposure to the host kernel and provide better isolation. So security is still a significant concern in the container world.

WebAssembly, on the other hand, comes with a different security model that provides a true sandbox for native code to run in. That means controlled access to system calls, memory safety and a smaller attack surface.

Secondly, container images are in general heavyweight artefacts, often weighing in at a few hundred megabytes. Minimal base images like Alpine or Distroless can reduce the overall size significantly, but images still need to ship system libraries, interpreters, virtual machines, tooling and scripts alongside your application code. With WebAssembly, the artefact is simply your application (and the binary itself is roughly 15% smaller, too). The effect is that scheduling a WebAssembly workload on a Kubernetes node, for instance, would need less download time, and many more "images" could be cached on that node within the same disk space.

When you combine this small footprint with the start-up times of some current Wasm runtime implementations, problems like the cold starts we usually see in cloud functions become nearly negligible. With Fastly's Lucet, which uses ahead-of-time (AOT) compilation, start-up times can be around 50 microseconds; with Cloudflare running Google's V8, it's about 5 milliseconds.

Without this performance penalty on the first request or during scaling, it's entirely possible to have a full-blown application deployed without any actual process running until a request comes in – with zero pre-allocation or ongoing cost. It is the holy grail of serverless computing.

Edge Computing

For a couple of years, cloud providers and CDNs have been offering general-purpose computing at the edge, such as AWS Lambda@Edge and Cloudflare Workers – most of it based on JavaScript. But now WebAssembly is emerging as a powerful ally to push these offerings to their limits.

WebAssembly, with all the benefits discussed previously in terms of security, performance and start-up times, joined with the highly distributed point-of-presence networks offered by CDNs, can create a new level of distributed systems and compelling application architectures. These new applications would not only offer reduced latency and better responsiveness for users, but would also have distinctive resilience properties and reduce the load on central cloud locations and data centres.

And this is available today. Fastly offers Compute@Edge, a pure WebAssembly solution, and Cloudflare has added WebAssembly support to its Workers offering.

Embedding, IoT, Mobile, Machine Learning, etc

Wasm is truly getting everywhere.

You can also embed Wasm into your own application. For instance, applications that allow users to upload custom code, such as rules engines, could use WebAssembly to evaluate rules securely. Wasmer is probably the best Wasm runtime for this task, as it supports bindings for many host languages.
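To make the rules-engine idea concrete, here is a sketch of the embedding pattern using the JavaScript WebAssembly API (a host written with Wasmer bindings in another language would follow the same shape). The "user-uploaded rule" is the same tiny hand-assembled module used for illustration: a function named `add` – the export name and the validation step are assumptions of this sketch, not a fixed convention.

```javascript
// A stand-in for bytes a user uploaded; in practice these would be
// compiled from the user's own source. This module exports add(a, b).
const userRuleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic number + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: a + b
]);

function loadRule(bytes) {
  // Reject malformed uploads before doing anything else.
  if (!WebAssembly.validate(bytes)) throw new Error("invalid rule module");
  // Empty import object: the untrusted rule gets no host capabilities.
  const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {});
  const rule = instance.exports.add; // the entry point we require rules to export
  if (typeof rule !== "function") throw new Error("rule must export 'add'");
  return rule;
}

const rule = loadRule(userRuleBytes);
console.log(rule(40, 2)); // 42
```

The key design point is the empty import object: the user's code runs fully sandboxed, with no access to the host beyond the values the engine passes in.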

For IoT, there are a few Wasm runtimes like wasm-micro-runtime and wasm3. They bring the potential to simplify the portability of code across a multitude of different microcontrollers. Wasm3 also supports Android and iOS mobile devices!

Intel's engineers have been working on wasi-nn, an effort to standardise a machine-learning interface for Wasm modules and guarantee a model's portability between different architectures.

There are so many fascinating projects pushing the boundaries and discovering different use cases for WebAssembly that it is challenging to mention them all here.

So, where are we today?

WebAssembly is probably the first thing all the different browser vendors have agreed on unanimously. WebAssembly in browsers is an approved specification and a W3C Recommendation, so it's production-ready for the browser.

WebAssembly, as a specification, is also steadily evolving. Many more proposals are at advanced specification stages, and soon we should have many more features, like threading, added to it. Language-wise, there's already a good list of languages that can compile to WebAssembly, with more maturing every day.

The WebAssembly community is vibrant and active. There's an ever-growing list of Wasm projects, runtimes, tools and acronyms that you can happily get lost in.

But outside the browser, apart from edge computing, it’s still early days. It’s not that we will be replacing containers anytime soon. It’s more likely that we will see orchestration systems executing a mix of different types of workloads depending on the specific use cases.

The WASI standardisation effort is a recent move, and most proposals are still in phase 2 of 4. WebAssembly/WASI performance has also come under scrutiny, and it's not yet clear how close to native speed it will get for general use cases.

The ecosystem needs a bit more consolidation as well. Hopefully, the recently formed Bytecode Alliance, with four of the big WebAssembly sponsors behind it, will unite the different fronts and help deliver Wasm and WASI to their full potential.

But keep an eye on it! It’s exciting and evolving fast!



