Choosing a WebAssembly Run-Time

I expand on these ideas in my S4 presentation WebAssembly at the IoT Edge.

As I mentioned in my article Why Am I Excited About WebAssembly?, selecting a WebAssembly run-time is challenging because there are many to choose from, they have different characteristics, and they are evolving rapidly:

For the edge, Wasmer, Wasmtime, Wasm3, WasmEdge, and others, all seem like viable options that can be used as a library from various programming languages. WebAssembly is portable across run-times, but changing from one host run-time to another would be a major investment once a significant amount of code or tooling has been developed. If WebAssembly is only used for pure functions, however, porting code from one host run-time to another may be relatively straightforward.

In this article, I share my thoughts on selecting a WebAssembly run-time, then I experiment with changing the WebAssembly run-time as a practical example.

Run-Time Considerations

Perhaps the most compelling feature of WebAssembly is its sandboxed execution model. However, this means the security of the platform is critically dependent on the quality of the WebAssembly run-time that implements the sandbox.

V8 is the WebAssembly run-time used by Google Chrome and Node.js. V8 is written in C++, a language that is notoriously prone to security vulnerabilities because developers manage memory directly.[1] This leads to issues like heap out-of-bounds, use after free, and the use of uninitialized variables. V8 has been around since 2008 and it has received enormous scrutiny given its prominent use. But even with this scrutiny, V8 has had critical security bugs. A Common Vulnerabilities and Exposures (CVE) search returns hundreds of results for V8. Trust in V8 has been built over time, by taking security seriously, openly disclosing vulnerabilities, responding quickly, documenting the security policy, eliminating bugs through tools for static and dynamic analysis, and supporting security research through academic research grants and bug bounty programs. Trust in newer WebAssembly run-times—particularly ones that will be embedded in industrial computing and IoT, controlling critical infrastructure—will be built similarly: over time, through widespread adoption, and through a responsible, responsive, and transparent security process.

From my perspective, as of this writing, the leading WebAssembly run-time is Wasmtime. As a Bytecode Alliance project, Wasmtime has published guidelines for reporting and disclosing security vulnerabilities. Wasmtime has also been intentional about securing dependencies using cargo vet for the Rust programming language, minimizing the use of unsafe code in Rust, using continuous and targeted fuzzing, attempting to formally verify security-critical code, and focusing on strongly-typed APIs that can be statically verified. Nick Fitzgerald wrote an excellent article entitled Security and Correctness in Wasmtime expanding on these topics.

WebAssembly run-times will also benefit from choosing a safe implementation language. V8 was developed in C++ to deliver native performance when safer alternatives did not exist at the time. Both Wasmtime and Wasmer are written in Rust, which delivers native performance in addition to ensuring type-safety and memory-safety at compile-time. Even when unsafe code must be written in Rust, it is preferable to C or C++, in my opinion, because the code is demarcated by Rust’s unsafe keyword, and extra scrutiny can be given to these sections of code.[2]
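To sketch this point, consider how a safe Rust API can wrap a single, auditable unsafe block. This is a hypothetical example, not taken from the Wasmtime or Wasmer code bases:

```rust
// A safe wrapper around an unsafe operation. The `unsafe` block is easy
// to locate and audit, and the function upholds the invariant (a bounds
// check) that makes the unchecked access sound.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that `bytes` is non-empty,
        // so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}
```

Reviewers can grep for unsafe and concentrate their scrutiny on these few lines, rather than treating the entire code base as potentially memory-unsafe, as they must with C or C++.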

I am skeptical of WebAssembly run-times that continue to use C or C++, especially without explicitly justifying this decision and publishing guidelines for how they are ensuring the code, including dependencies, is safe. I have used both of these programming languages for years and I have a fondness for them, but I believe it is unsafe to continue to develop software in C and C++, especially for critical infrastructure, when alternatives like Rust exist.[3] The WebAssembly Micro Runtime (WAMR), a lightweight WebAssembly run-time with a small footprint targeted at embedded environments, is mainly written in C. Like Wasmtime, WAMR is part of the Bytecode Alliance and follows their security policy, but before I adopt it, I want to understand more about why it continues to be implemented in C, and all the steps being taken to secure the WebAssembly sandbox.

It makes it very hard to compose software, because even if you and I both know how to write memory-safe C, it’s very hard for us to have an interface boundary where we can agree about who does what.
Bryan Cantrill on the dangers of composing code in C.

As of this writing, a CVE search for Wasmtime returns 14 results. A search for Wasmer and WAMR returns zero results. I find it hard to believe that these two projects have never had a security vulnerability, and I can’t help but attribute this to the limited emphasis they have so far placed on reporting and disclosure.

Changing the WebAssembly Run-Time

If you start developing with one run-time and another one emerges that has better security, more trust, superior tooling, or improved performance, how difficult would it be to port your existing code to this new run-time? In my article WebAssembly at the IoT Edge: A Motivating Example I provided an example using WebAssembly to update code on a point-of-sale terminal without requiring a firmware release. The example in the article used Wasmer as the WebAssembly run-time. Let’s see what would be involved to port the example to Wasmtime.

The Wasmer code to load the WebAssembly and export the pricing function was as follows:

let wasm_bytes = std::fs::read("price_func.wasm")?;
let store = Store::default();
let module = Module::new(&store, &wasm_bytes)?;
let import_object = imports! {};
let instance = Instance::new(&module, &import_object).unwrap();

let price_func = instance
    .exports
    .get_native_function::<(i32, i32, i32), i32>("price_func")
    .unwrap();

Using Wasmtime, the code is not much different:

let engine = Engine::default();
let module = Module::from_file(&engine, "price_func.wasm").unwrap();
let mut store = Store::new(&engine, ());
let instance = Instance::new(&mut store, &module, &[]).unwrap();

let price_func = instance
    .get_typed_func::<(i32, i32, i32), i32>(&mut store, "price_func")
    .unwrap();

In the Wasmer example, calling the pricing function looked like this:

let price = price_func
    .call(count, unit_price, hour)
    .unwrap();

Switching to Wasmtime, the calling semantics change to the following:

let price = price_func
    .call(&mut store, (count, unit_price, hour))
    .unwrap();

Overall, pretty similar, and relatively straightforward to change from one run-time to another. Some things worth emphasizing:

  • Moving to a new WebAssembly run-time did not require any changes to the WebAssembly itself (i.e., the price function exported from the WebAssembly module).
  • The team developing the firmware for the point-of-sale terminal could swap out the WebAssembly run-time without the team developing the billing calculation ever knowing or caring.
  • If the team that develops the billing calculation runs the same WebAssembly in the cloud—perhaps using yet another WebAssembly run-time that integrates well with Kubernetes for dynamically scaling workloads and back-testing pricing models—they can continue to do this, even if the run-time used by the firmware is different.
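To make this separation concrete, the billing team’s side of the boundary might look something like the following Rust source, compiled to WebAssembly with a wasm32 target. The pricing rule shown here is hypothetical (a simple happy-hour discount), since the real logic lives in the earlier article; the point is that nothing in it mentions Wasmer or Wasmtime:

```rust
// Exported with the C ABI and an unmangled name so that any host
// run-time can look it up as "price_func". The function itself is
// run-time-agnostic: it is a pure function of its arguments.
#[no_mangle]
pub extern "C" fn price_func(count: i32, unit_price: i32, hour: i32) -> i32 {
    // Hypothetical pricing rule: half price during a 17:00-19:00 happy hour.
    if (17..19).contains(&hour) {
        count * unit_price / 2
    } else {
        count * unit_price
    }
}
```

Because the module imports nothing from the host, the same .wasm binary runs unmodified under Wasmer, Wasmtime, or a cloud-hosted run-time.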

Conclusion

WebAssembly run-times are evolving rapidly and the canonical ones have yet to emerge.[4] If you invest in WebAssembly as a technology, changing to a different run-time in the future should not be a huge amount of work and many of your investments will be preserved. Most WebAssembly run-times will be comparable, especially with the increasing emphasis on the WebAssembly System Interface (WASI) and the WebAssembly Component Model to help ensure interoperability and portability. However, if a security vulnerability is found in your run-time of choice, this could lead to patching millions of IoT devices in the best-case scenario, or, in the worst case, a large-scale disruption of critical infrastructure through cyberattack. Selecting a trusted run-time that has an open disclosure process and makes every attempt to minimize vulnerabilities is of utmost importance.[5] I commend Wasmtime for taking the lead with a responsible and transparent security policy. To be relevant, the other run-times need to do the same.


  1. Using smart pointers, like std::unique_ptr, in modern C++ means developers do not have to manage memory as explicitly as they used to, but many C++ projects do not use these patterns. ↩︎

  2. For example, Chromium’s "The Rule Of 2" can be applied: code should never handle more than two of 1) untrustworthy inputs, 2) code written in an unsafe language (e.g., C, C++, or unsafe Rust), and 3) code that runs outside of a sandbox with high privilege. ↩︎

  3. Adam Crain believes that the adoption of Rust in ICS is inevitable, not just because of security, but because of the productivity. See Adam’s excellent talk from S4: Applying the Rust Programming Language in ICS. ↩︎

  4. As I said in Why Am I Excited About WebAssembly?, some consolidation of run-times may eventually be beneficial. ↩︎

  5. I should add, if you are avoiding WebAssembly out of skepticism for the technology but implementing your own mechanisms for sandboxing, portability, dynamic reloading, etc., I think you are fooling yourself: your code will also have security vulnerabilities, but without the benefit of the scrutiny and support of a large developer community. ↩︎