
Modern Cloud Application Delivery: WASM and WASI

---

I described in a previous blog post that modularity will play a key role in future enterprise applications. This is demonstrated by current trends such as serverless functions and containerized architectures. However, those solutions are not perfect.

One of the inventors of Docker, one of the many tools supporting containerized applications, stated on Twitter in 2019:

Solomon Hykes: “if WASM+WASI existed in 2008, we wouldn’t have needed to created Docker. That’s how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!”

In this blog post I will explain what WebAssembly (WASM) and the WASM System Interface (WASI) are, and present the benefits they offer compared to contemporary solutions for modularised applications on servers, in browsers and on mobile devices. Another topic is the large ecosystem around WASM and WASI. Furthermore, I will explain the relationship to the Open Neural Network Exchange (ONNX), an open format to represent machine learning models. Finally, I will give an outlook highlighting the importance of WASM and WASI for future enterprise applications.

WebAssembly (WASM)

WebAssembly (WASM) is a standard by the World Wide Web Consortium (W3C) with contributions of large software vendors.

Essentially, it is a format for binary code, similar to the Portable Executable (PE) format you may know from Windows or the Executable and Linkable Format (ELF) on Linux.

WASM binary code can be interpreted, compiled just in time (JIT) or compiled ahead of time (AOT). These can be combined: a program starts executing immediately, and the results are reused for AOT compilation so that it runs even faster the next time it is executed.

The main difference is that the format is standardised across hardware platforms and extremely efficient. Furthermore, application code written in many different languages can be compiled to WASM. Contrary to other languages that use a platform-independent instruction set, WASM has been designed for higher performance, portability, modularity and safety.

Since WASM binary code needs to run on many platforms, some highly specialised instructions specific to a small subset of CPUs/GPUs/TPUs are not supported yet. However, extensions to the standard that enable those types of instructions are currently work in progress. Additionally, WASM binary code can be further optimized by the WASM runtime to leverage such functionality. Furthermore, you can combine WASM binary code with native code that executes those instructions natively.

Originally, WASM was designed for web applications executed in the browser. However, WASM applications are increasingly executed in server backends as well as serverless applications due to WASM's efficiency, safety, portability and support for many programming languages. Going further, by leveraging WASM one can even move computation dynamically between browser and server/serverless backend depending on the capabilities of the client. WASM is supported by many browsers, operating systems and platforms.

WASM modules can also load other WASM modules, making WASM suitable for very flexible, dynamic and safe plugin systems.

WASM additionally has a text representation called the WebAssembly Text Format (WAT).
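To make this concrete, below is a minimal sketch of a host program that embeds a WASM module written in WAT. It assumes the Rust wasmtime and anyhow crates (my choice for illustration; any other runtime with an embedding API would work similarly), and the module with its exported add function is made up for this example.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // A tiny module in the WebAssembly Text Format (WAT):
    // it exports an `add` function summing two 32-bit integers.
    let wat = r#"
        (module
          (func (export "add") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))
    "#;

    let engine = Engine::default();
    // wasmtime accepts WAT directly and translates it into WASM binary code.
    let module = Module::new(&engine, wat)?;

    // Instantiate the module (no imports needed here) and call the export.
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}
```

The same embedding mechanism is what makes the plugin scenario above attractive: the host decides which functions and resources a loaded module may access, while the module itself stays portable.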

WASM is not the first of its kind. For a long time there have been approaches to writing portable applications that run on several operating systems and platforms:

- interpreted languages, such as Javascript or Python, whose source code is executed by an interpreter available for each platform
- languages compiled to a platform-independent intermediate bytecode, such as Java, which is executed by a virtual machine available for each platform

Those approaches were very successful and still are nowadays. Javascript is the language of the Web as it is easy to understand and write. Python is used for shell scripting and data processing applications. Java is used in many server-based applications and is heavily used in the backends of hyperscalers, such as Amazon Web Services (AWS) or Google Cloud; its original vision was “write once, run everywhere”.

WASM seems very similar to the second approach, but there are differences. Previous approaches usually relied on a very complex virtual execution environment containing a lot of functionality, such as Java's garbage collection, that was not needed for all programming languages and use cases and could even hinder certain types of applications, for example very efficient, highly concurrent applications or applications on embedded devices. The runtimes were also usually big, requiring hundreds of megabytes of binaries to be deployed with the application. While they can in theory be fitted to any setting, they often become much less efficient there, as they were not designed with this in mind. Furthermore, the larger the runtime, the more attack surface it offers for security attacks and the more technical debt it carries. Thinking about maintenance and versioning, one has to maintain different versions of those runtimes for different applications.

WASM takes a different approach here. It is much more lightweight and has been designed with potentially very constrained environments in mind. Similarly, WASI has been designed to be highly modular, so that one only needs to bundle the modules an application needs to run.

Interpreted languages have a similar issue to interpreted binary code in that they require large runtimes. They are also usually significantly slower than equivalent compiled versions of a program.

In conclusion, the main difference of WASM is that it is simple, safe, highly performant, highly modular and universally portable between backends and frontends.

WASM Sandbox

The WASM Sandbox is a modern security layer for WASM applications. It can encompass various techniques and is subject to further improvements, such as:

- isolation of a module's linear memory with bounds checking, so that a module cannot read or write memory outside its own sandbox
- structured control flow, so that a module cannot jump to arbitrary code addresses
- no direct access to host resources: files, network access or other capabilities have to be explicitly granted to the module, e.g. via WASI

See also the wasmtime sandbox features, browser sandboxes (e.g. Firefox) or wasmer runtime protection.

However, the sandbox mechanism in WASM is still evolving, and the user running an application needs to decide how to grant minimal permissions via the sandbox while still leveraging all features of the application. For example, a user might need to give access to a specific folder in the home directory so that the application can read/write configuration files, but not to all pictures in the home folder. This is similar to what mobile applications already provide nowadays - even those run as web applications in a mobile browser.

WASM System Interface (WASI)

While WebAssembly provides portable binary code, it lacks a portable standard library, such as glibc, the Java standard library or the Rust std library, so that developers do not have to rewrite everything from scratch. This is especially important for backends or serverless applications. When running a WASM application in a browser, one can use the W3C Web APIs, which provide access to a wide range of functionality.

Luckily, there is the WASM System Interface (WASI), which provides such a standard library for all environments. The need for WASI grew in 2018/2019 as developers pushed the boundaries of WASM from browser-based applications to sophisticated backend and serverless applications. Its core design principles are security, modularity and portability, similar to the principles of WASM.

WASI can be understood as a conceptual operating system, similar to WASM being binary code for a conceptual machine. In this way, software compiled to WASM using WASI does not need to know about, and does not incorporate, operating-system-specific code.
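As a small sketch of what this means in practice (assuming the Rust toolchain with its wasm32-wasi target and the wasmtime CLI; the file and crate names are made up), the program below uses only the ordinary Rust standard library, and the same .wasm artifact runs unchanged on Linux, Windows or macOS under any WASI-capable runtime.

```rust
use std::fs;

// Plain Rust standard library code with no operating-system-specific calls.
// Built for the WASI target, e.g.:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi
// the resulting module can be executed by a WASI-capable runtime, e.g.:
//   wasmtime run --dir=. target/wasm32-wasi/debug/config_reader.wasm
// where --dir grants the sandboxed module access to the current directory only.
fn main() {
    // File access goes through WASI; the runtime decides via the sandbox
    // which host directories this module is actually allowed to see.
    let config = fs::read_to_string("config.toml")
        .unwrap_or_else(|_| String::from("# no configuration found"));
    println!("configuration:\n{config}");
}
```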

Browsers nowadays are mini operating systems themselves, providing many functions as part of the previously described W3C Web APIs. However, they are designed for the needs of browsers and not for backends or serverless applications. Hence, WASI was developed to cater for the needs of the latter.

WASI still requires that a core runtime library, which I describe later, is installed for the specific operating system. Nevertheless, everything on top of WASI is platform and operating system independent.

WASI has a core that can be extended with further WASI-specific modules depending on the needs. This also means that a big runtime is not needed, which also takes devices with much lower capabilities into account.

Other similar concepts, such as the Java standard library, only integrated modularisation much later in their evolution and hence are far away from the modularisation of WASI. Furthermore, those other approaches still face the issue that their modularisation is incomplete and users need to install the full library anyway.

Supported Programming Languages

WASM can be supported by a programming language in two different ways:

- the language can be compiled to WASM binary code, so that applications written in it run inside a WASM runtime
- the language can load and execute WASM binary code, i.e. act as a host by embedding a WASM runtime

Rust was one of the first programming languages supporting WASM and has one of the most active communities in this space. However, most popular languages now have stable WASM support, and even less popular languages have at least basic support. Given that most modern compiler frameworks support WASM, the choice of languages is very large - probably much larger than for any similar predecessor technology.

The following table gives an overview of WASM support in popular programming languages, in alphabetical order. The table is non-exhaustive, and several link collections exist describing even more languages (cf. here or here). Note: some tools are experimental.

| Programming language | Can load and execute WASM code | Can be compiled to WASM |
| --- | --- | --- |
| C | wasmtime, wasmer, wasmedge | llvm (optional emscripten) |
| C++ | wasmtime, wasmer | llvm (optional emscripten) |
| C#/F# (.NET) | wasmtime, Blazor (implicitly, only frontend applications) | RyuJIT to llvm conversion (experimental); the Blazor runtime is written in WebAssembly and runs .NET applications in Microsoft Common Intermediate Language (CIL) |
| Go | wasmtime, wasmer | llvm (optional emscripten) |
| Java/JVM | wasmer, wasmtime (not official: kawamuray or bluejeckyll) | Bytecoder, TeaVM (experimental); theoretically one can compile a Java Runtime Environment to WASM |
| PHP | wasmer | llvm (optional emscripten) |
| Python | wasmtime, wasmer | llvm (optional emscripten); the Python interpreter is compiled to WASM |
| R | theoretically possible, as R can load any runtime using its C API or using Node.js | llvm (optional emscripten); the R interpreter is compiled to WASM |
| Rust | wasmtime, wasmer, wasmedge | llvm (optional emscripten) |
| Typescript/Javascript | standard as part of the browser; as part of the backend using Node.js | AssemblyScript (Typescript-like) |

Non-exhaustive List of Popular Programming Language Support for WASM

You may observe that some of these languages are scripting languages, such as Python. In those cases, the interpreter of the scripting language is compiled to WASM; the original scripts are left as they are, but the WASM version of the interpreter interprets them.

Ecosystem

Compiler

Nowadays, compilers have become highly modular with clear separation and interfaces. This makes it easy to create new programming languages that can be compiled for hardware platforms that did not even exist when the language was created - without any modification of the language itself.

Modern compiler stacks usually contain the following elements:

- a frontend that parses the source language and translates it into a language-independent intermediate representation (IR)
- optimizers that work on the intermediate representation
- a backend that generates optimized binary code for a specific target platform, such as x86-64, ARM or WASM

This is the reason why WASM is so widespread across different technologies. LLVM is such a compiler stack: one only needs to implement a frontend for a programming language and gets out-of-the-box support for multiple platforms and operating systems. Since LLVM together with Clang is very popular among programming languages and supports WASM as a target out of the box, many programming languages can be compiled directly to WASM. Emscripten further facilitates the generation of WASM binary code.

Runtimes

Since WASM is platform-independent code, it needs a runtime so that it can be executed on any platform. The runtime can work in two ways:

- just in time (JIT): the WASM binary code is translated into platform-specific code while the application is executed
- ahead of time (AOT): the WASM binary code is translated into platform-specific code once, before execution, and the precompiled result is reused for subsequent executions

Usually the execution itself is much faster with AOT-compiled WASM binary code, but the first run takes longer as the precompiled code needs to be generated first. Furthermore, new improvements to the runtime in terms of performance optimizations or security sandboxing require the optimized platform-specific code to be regenerated.

JIT is suitable when you want to start execution immediately, especially for large binaries where it is unlikely that all code paths are exercised by a single user.
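To illustrate the trade-off, here is a sketch using the Rust wasmtime crate (assuming its precompile/serialize API; the file names are made up): the module is compiled to platform-specific code once and reused on later starts, which corresponds to the AOT path, while compiling at load time corresponds to starting right away at the cost of upfront compilation.

```rust
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let wasm = std::fs::read("app.wasm")?; // platform-independent WASM binary code

    // Compile-at-load path: the module is translated to machine code now,
    // which delays the start of execution but needs no extra artifacts.
    let _module = Module::new(&engine, &wasm)?;

    // AOT path: pre-compile once to platform-specific code and persist it ...
    let precompiled = engine.precompile_module(&wasm)?;
    std::fs::write("app.cwasm", &precompiled)?;

    // ... then later runs load the machine code directly and start faster.
    // Deserialization is unsafe because the bytes are native code and must
    // come from a trusted source (and match the wasmtime version and target).
    let precompiled_bytes = std::fs::read("app.cwasm")?;
    let _fast_module = unsafe { Module::deserialize(&engine, &precompiled_bytes)? };
    Ok(())
}
```

Note that regenerating the precompiled artifact is exactly the step that has to be repeated whenever the runtime's optimizations or sandboxing improve, as described above.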

Additionally, runtimes provide implementations of WASI and further modules that are not part of WASI, such as WASI-NN for running machine learning code on specialised hardware such as GPUs, TPUs or FPGAs.

The following is a non-exhaustive list of WASM runtimes and their key features.

| Runtime | Features |
| --- | --- |
| Any browser | Currently, all major browsers on desktop and mobile support running WASM binary code embedded into a webpage. Note: the Web APIs provide rich feature sets but are not WASI-compatible, so special wrappers are needed for WASI, such as wasmer/wasi |
| wasmtime | Supported by the Bytecode Alliance, which has several large software vendors as its members; full support of the WASI standard; targets backend/cloud/serverless/machine learning; runs in Kubernetes via krustlet; supports many platforms and operating systems |
| wasmedge | Supported by the Cloud Native Computing Foundation; targets cloud-native, edge and decentralized applications; powers serverless apps, embedded functions, microservices, smart contracts and IoT devices; runs in Kubernetes; supports many platforms and operating systems |
| wasmer | Supported by Wasmer; targets backend/cloud/serverless/machine learning/browser; supports many platforms and operating systems |
| wamr | Supported by the Bytecode Alliance; small footprint suitable for embedded devices, Internet of Things (IoT), smart contracts, cloud etc. |
| krustlet | Supported by the Bytecode Alliance; a kubelet to run WebAssembly workloads in Kubernetes |
| node.js | Runtime for running Javascript on the server side with support for running WebAssembly/WASI binary code |

Non-exhaustive list of wasm runtimes

There are the following things to note:

Repositories

All modern application development and delivery frameworks support module repositories or registries, where modules of applications are stored in a versioned manner. They are fetched during development or at runtime to build and deliver an application. Even smaller (non-WASM) applications usually use tens of modules from different registries, mostly based on open source technologies.

Examples of such repositories for other languages are Maven Central (Java), npm.js (Javascript/Typescript), PyPI (Python), CRAN (R) and Open Container Initiative (OCI) registries (containers).

There are now also more and more repositories for WASM modules appearing, such as WebAssemblyHub. Another example is the WebAssembly Package Manager (WAPM).

While there is not yet a standard, there are promising standardisation efforts that can reuse OCI compliant registries for distributing WASM modules (e.g. here).

Relationship to Open Neural Network Exchange (ONNX)

ONNX is an open format built to represent machine learning models. It is a universal format to run machine learning models on any platform, framework and operating system. For example, it can run machine learning models in a browser or a cloud service without the need to change/adapt code. Models can be generated and run in PyTorch, TensorFlow or other frameworks. Hence, it is similar to WASM, but with a narrow focus on machine learning models.

I believe WASM and ONNX are very complementary and suitable in combination to build complex machine learning applications that are portable, secure and efficient.

Use Cases

Serverless applications

The serverless paradigm enables developers to focus on business functionality and deploy it without having to take care of the infrastructure. Essentially, they provide a function developed in one of various languages together with the criteria/events that determine when to run it. The rest is handled transparently in the background.

These functions can be written in many different languages, such as Typescript, Python or Java. Although the developer does not take care of where and how the function is run, many cloud providers offer different compute options, e.g. x86-64 hardware, ARM on servers, RISC on servers and custom hardware. For each of those options the developer needs to provide specific packages, which makes the deployment of serverless applications more complex and error-prone.

Furthermore, the runtimes, such as those for Python or Java, are quite big and thus require more memory and CPU than needed. This is especially relevant if serverless functions are executed at high frequency. Additionally, the isolation is usually based on container technologies, such as Linux cgroups, which could be improved.

WASM can address all of those issues. The runtimes are very lightweight and can be modularised at a fine-granular level. The execution of WASM binary code can be optimized for the desired target architecture without the developer needing to provide and test platform-specific binaries. The sandbox model brings further benefits in terms of security.

Server

While the serverless case is focused on individual, potentially high-frequency business transactions, a server is a long-running process that may handle high-frequency business transactions, run long-running business processes or enable communication between different clients.

The benefits of WASM also come into play here: it requires fewer resources, is portable and has a sandboxed security model.

Browser applications

WebAssembly originated in browser applications. Originally, the main motivation was high-performance web applications across browsers and platforms.

While it is not limited to offline applications, it also enabled more high-performance, rich-client types of applications in the browser that do not require an internet connection, such as video editing, video playback, machine learning, encryption, enterprise applications and more.

Some examples:

WASM also has the potential to replace many mobile- or desktop-specific applications by being integrated into Progressive Web Applications (PWAs).

Dynamic hybrid Server-Browser apps

This is a more advanced use case. Imagine an application that can dynamically decide to run all or some parts of itself on a server or on a device - depending on the capabilities of the device.

This is especially interesting for supporting multiple devices over a long time - even if they no longer run the most up-to-date hardware. Older devices can leverage the capabilities of a server, while less server infrastructure is needed for users with more capable, recent devices.

At the moment, there are no frameworks that make implementing such an approach easy. While one can easily create different modules and simply move them from client to server and vice versa, the communication between both can be a challenge, especially if some graphical elements are streamed to the device instead of being rendered on the device itself. Nevertheless, I believe that once a framework for this is provided, it will become trivial to do.

Federated Learning

Federated Learning is about training a machine learning model in a decentralised manner on different devices, each contributing to a central model used for inference. The idea is that the original training dataset is not shared, e.g. for data privacy or performance reasons, but the inference is still as powerful as if the model had been trained centrally on one large dataset.

Since many different devices and server platforms can be involved, it makes sense to leverage WASM here as well. It can be complemented by ONNX as described above. However, ONNX alone would not be sufficient: WASM would be needed for the communication and integration layer as well as for providing access to special hardware, such as GPUs.

Conclusions

WASM, WASI and their ecosystems have clear advantages for serverless, server-based and client applications implemented in any of the many supported programming languages:

- very lightweight, modular runtimes that require fewer resources than comparable technologies
- portable binary code that runs on servers, in browsers and on constrained devices without platform-specific builds
- a sandboxed security model with fine-granular permissions
- support for a very large number of programming languages

While WASM is already usable for certain types of applications and stable runtimes exist, there is clearly also a need for further development, especially in the areas of further standardisation of the security sandbox in the context of WASI, multithreading and large memory. Nevertheless, those activities are on the near-term roadmap, and given the existing large-scale applications as well as the investment of large technology companies, they are likely to be available in the existing WASM runtimes soon.

Outlook

As mentioned before, there are further extensions to the existing WASM standard on the roadmap.

The ecosystem will also benefit from further development:

More and more mobile applications will be realised as progressive web applications (PWAs) with selected WASM modules, reducing the cost of developing applications for different mobile platforms significantly. Since PWAs and WASM are supported out of the box on iOS and Android, this can bring faster innovation and more security to those platforms.