I described in a previous blog post that modularity will play a key role in future enterprise applications. This is demonstrated by the current trends towards serverless functions and containerized architectures. However, those solutions are not perfect:
- The growing diversity of computing architectures, such as ARM on servers, Internet of Things (IoT) edge devices or smartphone processors, increases deployment complexity significantly.
- Containerized architectures in particular are difficult to secure and still contain far too much unnecessary application code, leading to security and reliability issues. This is aggravated by large container repositories, which carry high security risks: container images are difficult to secure even for experts, and malicious actors can easily sneak in insecure configurations that look innocent at first and even second glance, or that only become harmful in combination with other container images.
- Modular applications themselves are still based on traditional, operating-system-based techniques using Java binaries, Python modules, C/C++ applications and so on. Their runtimes are large and have historically not incorporated the latest paradigms for secure applications into the runtime itself. Furthermore, they still carry a lot of legacy baggage that is difficult to remove, as the modularization of those runtimes is very limited.
One of the inventors of Docker, one of the many tools supporting containerized applications, stated on Twitter in 2019 that if WASM and WASI had existed in 2008, there would have been no need to create Docker – underlining how important WebAssembly on the server is for the future of computing.
In this blog post I will explain what WebAssembly (WASM) and the WASM System Interface (WASI) are, and present the benefits they bring compared with contemporary solutions for modularised applications on servers, in browsers and on mobile devices. Another topic is the large ecosystem around WASM and WASI. Furthermore, I will explain the relationship to the Open Neural Network Exchange (ONNX), an open format for representing machine learning models. Finally, I will give an outlook highlighting the importance of WASM and WASI for future enterprise applications.
WebAssembly (WASM)
WebAssembly (WASM) is a standard by the World Wide Web Consortium (W3C) with contributions from large software vendors.
Essentially it is a format for binary code, similar to the Portable Executable (PE) format on Windows or the Executable and Linkable Format (ELF) on Linux.
WASM binary code can be interpreted just in time (JIT) and/or compiled ahead of time (AOT). A combination of both can be used to start executing a program immediately and to reuse the results for AOT compilation, so that it runs even faster the next time it is executed.
The main difference is that the format is standardised across hardware platforms and extremely efficient. Furthermore, application code in many different languages can be compiled to WASM. In contrast to other languages that use a platform-independent instruction set, WASM has been designed for high performance, portability, modularity and safety.
Since WASM binary code needs to run on many platforms, some highly specialised CPU instructions specific to a small subset of CPUs/GPUs/TPUs are not supported yet. However, extensions to the standard that enable those types of instructions as well are currently work in progress. Additionally, WASM binary code can be further optimized by the WASM runtime to leverage such functionality. Furthermore, you can combine WASM binary code with native code that executes those instructions natively.
Originally, WASM was designed for web applications executed in the browser. However, WASM applications are increasingly also executed in server backends as well as serverless applications due to WASM's efficiency, safety, portability and support for many programming languages. Going further, by leveraging WASM one can even move computation dynamically between browser and server/serverless backend depending on the capabilities of the client. WASM is supported by many browsers, operating systems and platforms.
WASM modules can also load other WASM modules, making WASM suitable for very flexible, dynamic and safe plugin systems.
WASM additionally has a text representation called the WebAssembly Text Format (WAT).
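To illustrate both points, the following minimal sketch embeds a WASM runtime in a host program, loads a tiny module written in WAT and calls its exported function. It assumes the wasmtime and anyhow crates and a reasonably recent wasmtime version, whose typed-function API has varied slightly between releases.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // A tiny module in WebAssembly Text Format (WAT) exporting an `add` function.
    let wat = r#"
        (module
          (func (export "add") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))
    "#;

    let engine = Engine::default();
    // wasmtime accepts both WAT text and binary .wasm bytes here.
    let module = Module::new(&engine, wat)?;
    let mut store = Store::new(&engine, ());
    // No imports are needed for this module, hence the empty import list.
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}
```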
Excursus: Related technologies
WASM is not the first of its kind. For a long time there have been approaches to writing portable applications that run on several operating systems and platforms:
- Interpreted programming languages, such as Javascript or Python
- Interpreted binary code for a virtual execution environment available on many different platforms, such as Java bytecode on a Java Virtual Machine. Usually different programming languages can be compiled to the binary code and run on various platforms.
Those approaches were very successful and still are today. Javascript is the language of the web, as it is easy to understand and write. Python is used for shell scripting and data-processing applications. Java is used in many server-based applications and heavily in the backends of hyperscalers, such as Amazon Web Services (AWS) or Google Cloud. Its original vision was "write once, run everywhere".
WASM seems very similar to the second approach, but there are differences. Previous approaches usually came with a very complex virtual execution environment containing a lot of functionality, such as the Java garbage collector, that was not needed for all programming languages or use cases and could even hinder certain types of applications, for example highly concurrent applications or applications on embedded devices. The runtimes were also big, requiring hundreds of megabytes of binaries to be deployed with the application. While they can in theory be fitted to any setting, they often become much less efficient there because they were not designed with that in mind. The larger a runtime is, the more attack surface it offers and the more technical debt it contains. In terms of maintenance and versioning, one also has to maintain different versions of those runtimes for different applications.
WASM offers a different approach here. It is much more lightweight and has been designed with potentially very constrained environments in mind. Similarly, WASI has been designed to be highly modular, so that one only needs to bundle the modules an application actually requires.
Interpreted languages suffer from a similar issue as interpreted binary code in that they require large runtimes. They are also usually significantly slower than equivalent compiled versions of a program.
In conclusion, the main difference of WASM is that it is simple, safe, high-performance, highly modular and universally portable between backends and frontends.
WASM Sandbox
The WASM sandbox is a modern security layer for WASM applications. It encompasses various techniques, such as the following, and is subject to further improvement:
- Pointers can only refer to the WASM-internal linear memory. A WASM module does not know about and cannot access any virtual memory outside of it.
- Call stacks are not accessible by WASM modules. This makes buffer-overflow/stack-smashing attacks very difficult.
- All control transfers to another part of the program (e.g. calling a function) are type-checked, i.e. one cannot jump into the middle of a function.
- Interactions with the outside world happen exclusively via imports/exports. One cannot issue system calls or similar operations directly.
- Arbitrary (undefined) behaviour is forbidden by the WASM standard.
See also the wasmtime sandbox features, browser sandboxes (e.g. Firefox) or wasmer runtime protection.
However, the sandbox mechanism in WASM is subject to further evolution, and the user running an application still needs to decide which minimal permissions to grant via the sandbox while retaining all features of the application. For example, a user might grant access to a specific folder in the home directory so that the application can read/write configuration files, but not to the pictures in the home folder. This is similar to what mobile applications already offer nowadays – even those running as web applications in a mobile browser.
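To make this concrete, here is a minimal sketch from the guest's perspective, assuming a Rust program compiled to the wasm32-wasi target: the program simply uses the standard library, and whether the file access succeeds depends entirely on which directories the host runtime has preopened (for example via wasmtime's --dir option). The path /config/app.toml is purely hypothetical.

```rust
use std::fs;

fn main() {
    // This read only succeeds if the host runtime has preopened a directory
    // mapped to "/config"; otherwise the WASI sandbox denies access and the
    // guest cannot touch anything else on the host filesystem.
    match fs::read_to_string("/config/app.toml") {
        Ok(cfg) => println!("configuration loaded ({} bytes)", cfg.len()),
        Err(err) => eprintln!("no access to /config: {err}"),
    }
}
```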
WASM System Interface (WASI)
While WebAssembly provides portable binary code, it lacks a portable standard library, such as glibc, the Java standard library or the Rust std library, that would save developers from rewriting everything from scratch. This is especially important for backends or serverless applications. When running a WASM application in a browser, one can use the W3C Web APIs, which provide access to a wide range of functionality.
Luckily there is the WASM System Interface (WASI), which provides such a standard library for all environments. The need for WASI grew in 2018/2019 as developers pushed WASM beyond browser-based applications towards sophisticated backend and serverless applications. Its core design principles are security, modularity and portability, similar to the principles of WASM.
WASI can be understood as a conceptual operating system, just as WASM is binary code for a conceptual machine. In this way, software compiled to WASM using WASI does not need to know about, and does not incorporate, operating-system-specific code.
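As a small illustration – a sketch assuming the Rust toolchain's wasm32-wasi target and a WASI runtime such as wasmtime are installed – ordinary Rust code compiles into a single .wasm file that runs unchanged on any operating system:

```rust
// src/main.rs – no WASM- or OS-specific code at all.
// Assumed build/run commands:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
//   wasmtime target/wasm32-wasi/release/<crate>.wasm
use std::time::SystemTime;

fn main() {
    // println! and SystemTime::now() are served by WASI imports
    // (fd_write, clock_time_get) instead of direct system calls.
    println!("Hello from WASI at {:?}", SystemTime::now());
}
```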
Browsers nowadays are mini-operating systems themselves providing many functions as part of the previously described W3C Web APIs. However, they are designed for the needs of browsers and not backends or serverless applications. Hence, WASI was developed to cater for their needs.
WASI still requires a core runtime, which I describe later, to be installed for the specific operating system. Nevertheless, everything on top of WASI is platform- and operating-system independent.
WASI has a core that can be extended with further WASI-specific modules depending on the needs. This also means that no big runtime is required, which matters for devices with far fewer capabilities.
Other similar concepts, such as the Java standard library, only integrated modularisation much later in their evolution and are hence far away from the modularisation of WASI. Furthermore, those approaches still face the issue that their modularisation is incomplete, so users need to install the full library anyway.
Supported Programming Languages
WASM can be supported in two different ways:
- Programming languages can load and execute WASM code using a WASM Runtime
- Software written in a programming language can be compiled to WASM code that can be run directly by a WASM runtime and integrated with WASM modules written in other languages
As written before, Rust was one of the first programming languages to support WASM and has one of the most active communities in this space. However, most popular languages now have stable WASM support, and even less popular languages have at least basic support. Given that most modern compiler frameworks support WASM, the choice of languages is very large – probably much larger than for any similar predecessor technology.
The following table gives an overview of WASM support in popular programming languages, in alphabetical order. The table is non-exhaustive and several link collections exist describing even more languages (cf. here or here). Note: some tools are experimental.
| Programming language | Can load and execute WASM code | Can be compiled to WASM |
|---|---|---|
| C | wasmtime, wasmer, wasmedge | llvm (optionally emscripten) |
| C++ | wasmtime, wasmer | llvm (optionally emscripten) |
| C#/F# (.NET) | wasmtime; Blazor (implicitly, only frontend applications) | RyuJIT-to-llvm conversion (experimental); the Blazor runtime is written in WebAssembly and runs .NET applications in Microsoft Common Intermediate Language (CIL) |
| Go | wasmtime, wasmer | llvm (optionally emscripten) |
| Java/JVM | wasmer; wasmtime (not official: kawamuray or bluejeckyll) | Bytecoder; TeaVM (experimental); theoretically one can compile a Java Runtime Environment to WASM |
| PHP | wasmer | llvm (optionally emscripten) |
| Python | wasmtime, wasmer | llvm (optionally emscripten); the Python interpreter is compiled to WASM |
| R | theoretically possible, as R can load any runtime via its C API or using Node.js | llvm (optionally emscripten); the R interpreter is compiled to WASM |
| Rust | wasmtime, wasmer, wasmedge | llvm (optionally emscripten) |
| Typescript/Javascript | standard in the browser; on the backend using node.js | AssemblyScript (Typescript-like) |
You may notice that some of these are scripting languages, such as Python. In those cases, the interpreter of the scripting language is compiled to WASM; the scripts themselves stay as they are and are interpreted by the WASM build of the interpreter.
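For compiled languages, the "compiled to WASM" column typically boils down to something like the following minimal sketch in Rust, assuming the wasm32-unknown-unknown target has been installed via rustup:

```rust
// Assumed build commands:
//   rustup target add wasm32-unknown-unknown
//   cargo build --target wasm32-unknown-unknown --release
// For a library crate, set crate-type = ["cdylib"] in Cargo.toml.
// The resulting .wasm file exports `add`, callable from any WASM runtime
// or from Javascript in the browser.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```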
Ecosystem
Compilers
Nowadays, compilers have become highly modular, with clear separation and interfaces. This makes it easy to create new programming languages that can compile to hardware platforms which did not even exist when the language was created – without any modification of the language itself.
Modern compiler stacks usually contain the following elements:
- Frontend: essentially implements parsing and checking of the source code in the original programming language and translates it into a lower-level intermediate representation (IR)
- Middle End: Platform independent optimization of the code in IR
- Backend: takes the optimized IR and generates low-level, optimized, platform-specific binary code. This is where the WASM binary code is created.
This is the reason why WASM is so widespread across different technologies. LLVM is such a compiler stack: one only needs to implement a frontend for a programming language and gets out-of-the-box support for multiple platforms and operating systems. Since LLVM together with Clang is very popular among programming languages and supports WASM as a target out of the box, many programming languages can be compiled directly to WASM. Emscripten further facilitates the generation of WASM binary code.
Runtimes
Since WASM is platform-independent code it needs a runtime so that it can be executed on any platform. The runtime can work in two ways:
- Just-in-time (JIT): Parses the WASM binary code during execution in near real-time
- Ahead-of-Time (AOT): Parses the full binary code before execution and translates it into a highly optimized platform-specific code.
Execution itself is usually much faster with AOT-compiled WASM binary code, but the first run takes longer because the precompiled code has to be generated first. Furthermore, improvements to the runtime in terms of performance optimizations or security sandboxing require the optimized platform-specific code to be regenerated.
JIT is suitable when you want to start execution immediately, especially for large binaries where it is unlikely that a single user exercises all code paths.
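As a sketch of the AOT path, assuming the wasmtime and anyhow crates and hypothetical file names app.wasm/app.cwasm, a host can precompile a module once and later load the cached artifact without recompiling (in recent wasmtime versions, deserializing is marked unsafe because the bytes must come from a trusted, compatible precompilation):

```rust
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let wasm = std::fs::read("app.wasm")?;

    // Ahead-of-time: compile once to platform-specific code and cache it.
    let precompiled = engine.precompile_module(&wasm)?;
    std::fs::write("app.cwasm", &precompiled)?;

    // Later runs skip compilation by loading the cached artifact.
    // Safety: the bytes stem from precompile_module with a compatible engine.
    let module = unsafe { Module::deserialize(&engine, &precompiled)? };
    println!("loaded precompiled module with {} exports", module.exports().count());
    Ok(())
}
```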
Additionally, runtimes provide implementations of WASI and further modules that are not part of WASI, such as WASI-NN for running machine learning code on specialised hardware such as GPUs, TPUs or FPGAs.
The following is a non-exhaustive list of WASM runtimes and their key features.
| Runtime | Features |
|---|---|
| Any browser | All major desktop and mobile browsers currently support running WASM binary code embedded into a web page. Note: the Web APIs provide rich feature sets but are not WASI-compatible, so special wrappers such as wasmer/wasi are needed for WASI |
| wasmtime | Supported by the Bytecode Alliance, which counts several large software vendors among its members; full support of the WASI standard; targets backend/cloud/serverless/machine learning; runs in Kubernetes via krustlet; supports many platforms and operating systems |
| wasmedge | Supported by the Cloud Native Computing Foundation; targets cloud-native, edge and decentralized applications; powers serverless apps, embedded functions, microservices, smart contracts and IoT devices; runs in Kubernetes; supports many platforms and operating systems |
| wasmer | Supported by Wasmer; targets backend/cloud/serverless/machine learning/browser; supports many platforms and operating systems |
| wamr | Supported by the Bytecode Alliance; small footprint suitable for embedded devices, Internet of Things (IoT), smart contracts, cloud etc. |
| krustlet | Supported by the Bytecode Alliance; a kubelet to run WebAssembly workloads in Kubernetes |
| node.js | Runtime for running Javascript on the server side, with support for running WebAssembly/WASI binary code |
There are the following things to note:
- Many programming languages are supported, but the runtimes themselves are often implemented in Rust due to its security, portability and efficiency. Nevertheless, you do not need to know any Rust, as the runtimes execute WASM binary code compiled from any language.
- A lot of functionality overlaps, and standards such as WebAssembly/WASI are well supported, although the standards themselves are still evolving
- There are differences in special use cases, such as machine learning in WebAssembly or edge devices
- WASM runtimes are highly modular and usually only tens of megabytes in size, compared to, for example, Java environments which weigh in at hundreds of megabytes
- WASM runtimes do not enforce complex runtime facilities such as garbage collection. This is good for languages, such as Rust, that do not need them. However, it is also no problem to add such features if needed – simply compile them to WASM binary code
- Nearly any programming language can embed a WASM runtime as a library and dynamically load WASM modules into a given application, as sketched below
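The following minimal sketch shows such an embedding, assuming the wasmtime and anyhow crates and a hypothetical plugin.wasm that imports a ("host", "log") function and exports a run function; the host exposes exactly one function to the guest, and everything else stays outside the sandbox.

```rust
use wasmtime::{Engine, Linker, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // Hypothetical plugin compiled from any language that targets WASM.
    let module = Module::from_file(&engine, "plugin.wasm")?;

    let mut linker = Linker::new(&engine);
    // The guest can only reach the outside world through imports like this one.
    linker.func_wrap("host", "log", |value: i32| {
        println!("guest says: {value}");
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```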
Repositories
All modern application development and delivery frameworks support module repositories or registries, where modules of applications are stored in a versioned manner. They are fetched during development or at runtime to build and deliver an application. Even smaller (non-WASM) applications use tens of modules from different registries, usually based on open-source technologies.
Examples for those repositories for other languages are Maven Central (Java), npm.js (Javascript/Typescript), Pypi (Python), CRAN (R), Open Container Initiative (OCI) (Containers).
There are now also more and more repositories for WASM modules appearing, such as WebAssemblyHub. Another example is the WebAssembly Package Manager (WAPM).
While there is not yet a standard, there are promising standardisation efforts that can reuse OCI compliant registries for distributing WASM modules (e.g. here).
Relationship to Open Neural Network Exchange (ONNX)
ONNX is an open format built to represent machine learning models. It is a universal format to run machine learning models on any platform, framework and operating system. For example, it allows a machine learning model to run in a browser or a cloud service without any need to change or adapt code. Models can be generated and run in PyTorch, TensorFlow or other frameworks. Hence, it is similar to WASM, but with a narrow focus on machine learning models.
I believe WASM and ONNX are very complementary and well suited in combination to build complex machine learning applications that are portable, secure and efficient.
Use Cases
Serverless applications
The serverless paradigm enables developers to focus on business functionality and to deploy it without having to take care of the infrastructure. Essentially, they provide a function developed in one of various languages together with the criteria/events that trigger it. The rest is handled transparently in the background.
These functions can be written in many different languages, such as Typescript, Python or Java. Although the developer does not control where and how a function is run, many cloud providers offer different options in terms of computing, e.g. x86-64 hardware, ARM on servers, RISC on servers and custom hardware. For each of those options the developer needs to provide specific packages, which makes the deployment of serverless applications more complex and error-prone.
Furthermore, the runtimes, such as Python or Java, are quite big and thus require more memory and CPU than necessary. This is especially relevant when serverless functions are executed at high frequency. Additionally, the isolation is usually based on container technologies, such as Linux cgroups, which leaves room for improvement.
WASM can address all of those issues. The runtimes are very lightweight and can be modularised at a fine-granular level. The execution of WASM binary code can be optimized for the target architecture without the developer having to provide and test platform-specific binaries. The sandbox model brings further benefits in terms of security.
Server
While the serverless case focuses on individual, potentially high-frequency business transactions, a server is a long-running process that may handle high-frequency business transactions, long-running business processes or communication between different clients.
Here, too, the benefits of WASM come into play: it requires fewer resources, is portable and has a sandboxed security model.
Browser applications
WebAssembly originated in browser applications. Originally, the main motivation was high-performance web applications across browsers and platforms.
While it is not limited to offline applications, it also enables high-performance, rich-client-style applications in the browser that do not require an internet connection, such as video editing, video players, machine learning, encryption, enterprise applications and more.
Some examples:
- Amazon Prime uses it for high-performance video playback on more than 8000 device types
- A video-editor
- Various games
- Machine learning
- A Linux virtual machine
- A full Python interpreter in the browser
- A full office suite (LibreOffice) in the browser
- A JupyterLab environment that works fully without the need for a server (neither locally nor remotely)
- Terrarium – a multi-language deployment platform for WASM
- Porting old complex legacy applications into Web applications in an instant by recompiling them for WASM/WASI, e.g. by using wasi-libc
- A web shell that can load WASM modules from a repository
- … many more
WASM also has the potential to replace many mobile- or desktop-specific applications by being integrated into Progressive Web Applications (PWA).
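As a sketch of how such browser functionality is typically written in Rust – assuming the wasm-bindgen crate and the wasm-pack build tool – a function is exported to Javascript roughly like this:

```rust
// Assumed build command: wasm-pack build --target web
// The generated Javascript glue code exposes `greet` to the web page or PWA.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    // Runs inside the browser's WASM sandbox; the DOM is reached only via
    // the generated Javascript bindings.
    format!("Hello, {name}!")
}
```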
Dynamic hybrid Server-Browser apps
This is a more advanced use case. Imagine an application that can dynamically decide to run all or some of its parts on a server or on a device – depending on the capabilities of the device.
This is especially interesting for supporting a range of devices over a long time – even once they no longer run on the most up-to-date hardware. Older devices can leverage the capabilities of a server, while less server infrastructure is needed when users have capable, more recent devices.
At the moment, there are no frameworks that make implementing such an approach easy. While one can easily create different modules and move them between client and server, the communication between the two can be a challenge, especially if some graphical elements are streamed to the device instead of being rendered on the device itself. Nevertheless, I believe that once a framework for this exists, it will become trivial to do.
Federated Learning
Federated learning means training a machine learning model in a decentralised manner on different devices, which all contribute to a central model used for inference. The idea is that the original training data is not shared, e.g. for data privacy or performance reasons, while the inference is still as powerful as if the model had been trained centrally on one large dataset.
Since many different devices and server platforms can be involved, it makes sense to leverage WASM here as well. It can be complemented by ONNX as described above. However, ONNX alone would not be sufficient, as WASM is needed for the communication and integration layer as well as for providing access to special hardware, such as GPUs.
Conclusions
WASM, WASI and their ecosystems have clear advantages for serverless, server-based and client applications implemented in any of the many supported programming languages:
- Portability
- Modularity
- Security
- Performance
While WASM is already usable for certain types of applications and stable runtimes exist, there is clearly a need for further development, especially regarding the standardisation of the security sandbox in the context of WASI, multithreading and large memories. Nevertheless, those activities are on the near-term roadmap, and given the existing large-scale applications as well as the investment of large technology companies, they are likely to be available in the existing WASM runtimes soon.
Outlook
As mentioned before, there are further extensions to the existing WASM standard on the roadmap.
Also the ecosystem will benefit from further development:
- A sandbox security model for WASI where users can decide what a given WASM module can access – this will probably be similar to today's situation where a user can decide, for example, whether a web application has access to the camera or the filesystem
- Bringing together the world of WASI and the world of the Web APIs in the browser, e.g. similar to wasmer/wasi
- Instruction sets for specific hardware operations, such as cryptographic operations or machine learning acceleration (see WASI-NN)
- Business programming languages (4GL), such as ABAP for SAP, can use WASM and be compiled to WASM to make them available on any device and server (see here for an example on ABAP)
- While it is already possible to run WASM applications in all popular serverless runtimes, this can be standardised and offered as a default choice. Users will no longer need to choose the target platform; instead, they specify security, reliability, performance and cost requirements, and the platform provider automatically selects the hardware platform based on those choices.
More and more mobile applications will be realised as progressive web applications (PWA) with selected WASM modules, significantly reducing the cost of developing applications for different mobile platforms. Since PWAs and WASM are supported out of the box on iOS and Android, this can bring faster innovation and more security to those platforms.