A RedMonk Conversation: WASM Component Model with Fastly



Luke Wagner (Distinguished Engineer at Fastly) joins Rachel Stephens of RedMonk to talk more about:

  • comparing and contrasting how code executes in a WASM module vs a container
  • the power of the WASM component model, with an animated explainer of how calls run through the stack
  • a discussion about the Bytecode Alliance and the standards they are helping implement

This was a RedMonk video, sponsored by Fastly.


Transcript

Rachel Stephens: Hi, welcome to RedMonk Conversations. I am Rachel Stephens and I am delighted to be here today with one of the original people who have done amazing work in the WebAssembly space. So I have with me Luke Wagner from Fastly. Luke, could you just do a quick intro? I met you originally when you were at Mozilla, but I would love to hear just who you are, what you’re working on, and how you got here.

Luke Wagner: Yeah, I work at Fastly in the Technology Research and Incubation team, and I work on WebAssembly standards and evolution. I’ve been at Fastly for a little over four years, and I’m also a co-chair of the WebAssembly Working Group in the W3C. And I’m championing some of the proposals that I think we’ll be talking about later.

Rachel: Yeah. And so the original WebAssembly idea came out of Mozilla. Is that right? You were part of the original team that kind of was working on this?

Luke: Yeah, we were working on a project called asm.js. It was a funny thing with JavaScript, all about performance, and then it morphed into what we call WebAssembly today. And I got to do that with colleagues at Google and Apple and then Microsoft.

Rachel: Yes. And I think we’ll get into this later. I think one of the fun and amazing things about WebAssembly as a technology, and I think this is true of a lot of great technologies, is that it cuts across industries and really pulls in people from a lot of different places. And so I love that it didn’t start just with the Firefox browser, but started by pulling in a whole bunch of people from the Chromium space. Always interesting. But anyway, we’re not here to do a whole history of Wasm. I think a lot of people at this point have probably heard of it. 2015. Is that right?

Luke: 2017 is when it shipped in browsers. But we started work in 2015.

Rachel: Okay. So it’s been around for a while. Probably a lot of people are at least surface-level familiar with WebAssembly at this point. It started out as a portable mechanism to let people quickly and securely run executables in their web pages, but what people are maybe less aware of is how it has evolved in the intervening years and where it is now. So could you talk to us a little bit about what it means to run WebAssembly outside of the browser, and what does the WASI environment look like?

Luke: Yeah. And this is a whole surprising thing, because when we started, we weren’t sure whether people would even want to run WebAssembly outside the browser. But we could look at JavaScript, which had started in the browser and then exploded in Node.js, and we were like, hey, maybe that’ll happen with WebAssembly. But we weren’t sure, because are we just drinking our own Kool-Aid, thinking this is the coolest thing ever? But in fact, yeah, there have been a bunch of reasons that people want to run WebAssembly outside the browser, including at Fastly, where I work. And, you know, it started with sandboxing: WebAssembly was developed around sandboxing from the start. When you’re running code from the web that you don’t trust, you’ve got to sandbox it to protect your system from whatever is in that tab, and the tabs from each other. And it turns out when you are a cloud platform or an edge platform or any of these other places running your customers’ code, you have to protect your platform from a potentially malicious customer, and the customers from each other. So it’s kind of the same situation with the directions flipped. And so it started with caring about sandboxing. But then, there are lots of ways to do sandboxing. One big thing that makes people excited about WebAssembly is cold-start performance. At Fastly we can start a WebAssembly sandbox in 35 microseconds, often even better. And that allows us to make a fresh sandbox for each request that comes into Fastly, so we can isolate all the requests from each other.

So that’s really fast. Last year at Microsoft’s Build conference, their CTO, Mark Russinovich, demoed a micro-VM sort of approach that is able to start WebAssembly micro VMs in less than a millisecond, but fully hardware isolated from each other, which is also really impressive. So cold starts are a whole reason to want to use WebAssembly. Also cost savings. Earlier this year, at one of the Wasm conferences, ZEISS showed a use case that they worked on with Fermyon, where they were able to show a 60% cost reduction. Because WebAssembly modules are so much smaller, they can pack a bunch more onto the same hardware and just get better density. So cost savings there. They also showed being able to easily migrate from x86 to ARM because of the portability of Wasm. And then lastly, we see people who like serverless as an operational model. It’s very simple, it’s auto-scaling, there’s just less to manage. But they’re kind of worried about vendor lock-in, because a lot of serverless today is very specific to what cloud you’re using. And so WebAssembly offers the promise of standardized serverless. You have Wasm for the code, and then WASI for the interfaces of how that code talks to the surrounding environment. So that’s where WASI comes in: it’s a standard way to talk to a key-value store or a database or do an HTTP request to the outside world.

Rachel: Love it. And that was such a great overview of why people are interested in this space. I think what’s also interesting is the number of various players you have mentioned in this space. So we talked about how it started in the browser. You’re talking edge providers, which is Fastly. We talked about Azure, which is cloud. We have Fermyon, which is kind of a specific provider in the Wasm space. So we have a whole bunch of people who have organized in and around this technology in various ways. And so I kind of want to take my question here into the consortium space. Let’s talk a little bit about the Bytecode Alliance, because we have people who are really working to make sure that Wasm is running across various runtimes, devices, environments. How does this cross-industry effort look from the inside? And can you maybe talk a little bit about what problems the Bytecode Alliance specifically is tackling?

Luke: Yeah, totally. So the Bytecode Alliance was founded originally by Mozilla, Fastly, Intel, and Red Hat, I think around 2019. But now there are around 30 companies, as of the last time I checked the website. And what everyone’s doing there is working on shared open source implementations of these open standards that are being worked on in the W3C. So the Bytecode Alliance isn’t making standards, but rather shareable implementations of these standards, which then inform, and are co-developed with, the standards. So we’re working together to share common code. One area where we do a lot of work is upstream implementations of Wasm support in different languages. We’re actively working on JavaScript, Rust, Python, Go, .NET, C and C++, and maybe a few others that I’m not aware of. There’s joint work with the CNCF on how we get all the code to run Wasm efficiently on Kubernetes and other cloud-native execution environments; containerd in particular is one project that has a lot of Wasm integration. And also things around OCI registries: how can we use OCI registries, which are already standardized and widely deployed, to distribute Wasm artifacts? And then lastly, there’s a really exciting recent thing, which is SIG Embedded, a special interest group with a bunch of embedded companies focusing on how we can have common shared open source implementations of these standards in embedded environments that are running Wasm with less than a megabyte of memory.

Rachel: Love it. So this is cross-industry, cross-standards-and-implementations, cross-foundations. This is a big effort that a lot of people are putting time and money and effort into. And I think that’s really interesting when we talk about the importance of standards in terms of helping us drive this forward, and it’s a topic that we are going to talk about in a lot more depth today, specifically as it relates to the Wasm component model. And so rather than me trying to explain it to everyone, let’s hear from the expert. Can you help us understand what’s going on with the Wasm component model? And since you brought up the OCI and containerd space, how does code execute differently in a Wasm module than it does in a container? Let’s start there, and then we can go into more of the standards-based questions.

Luke: Yeah. Well, that’s a great comparison point, because there are a lot of similarities and some interesting differences between Wasm and containers. So starting with the similarities, you can kind of think of components as just a Wasm container. Containers have separate memory address spaces for the different containers. And this is really valuable when you want to have separate languages running in different containers; it’s totally normal today, which we kind of take for granted, that this container is Go and this one is Python. And that’s easy to do because they have fully separate memories, and language runtimes tend to want to take over all the memory in a process, so that’s what you want. Components similarly put different components in different memories. Second, containers have different namespaces. This is the whole set of kernel features that allowed containers in the first place: separating the file systems and the network namespaces and the configuration namespaces of different containers. And similarly with components, there’s no ambient file system or ambient anything; components have to import whatever they can do, and different components can be given different imports. So they’re kind of namespaced in that same way. And then lastly, especially now with the kind of API-first design of microservices, people start in the container world with an IDL, like OpenAPI or gRPC, and then they codegen bindings for whatever functions they declare. And this factors out a bunch of otherwise redundant, error-prone, and maybe even security-sensitive work. You put that into the bindings and it makes your life a lot easier.

And similarly, in components we have an IDL called WIT. You write your WIT interface once, and then you generate the bindings that you use to call into and out of your component. So those are ways that containers are similar to components. But then talking about what makes them interestingly different, why don’t we just stick Wasm in a container: one thing is that components can call directly between each other, just doing function calls. So not cross-process, not cross-thread; same thread, same call stack even. I can call across a component boundary, which makes these calls really fast. And furthermore, if I have a handle to a resource, like I’m passing HTTP requests around, I can just pass that pointer across the boundary without having to copy the whole thing. In the container world, I would have had to serialize it all to a socket, so a bunch of syscalls to write into a buffer. Even if the other container is running on the same machine, I’m going to have to serialize through the OS to go through a socket. So we’re talking about cutting out a lot of overhead when components communicate with each other. That’s one thing. The second is that containers take sort of a hands-off approach to how the language inside runs. It’s like, look, I give you a process and some sockets and you do whatever you want. Whereas with components, we’re talking about integrating closely with the language ecosystems, and we want to make a component feel like a module or a package in your language’s own native idioms and source language.

So for example, JavaScript has imports, so you can import from another JavaScript module: you can say import foo from bar. And what we have working is you can say import foo from bar, where bar is a component, and foo is just a function exported by that component. And because components have these high-level types from the IDL, WIT, I can just say this function takes a string and returns a string, and then JavaScript just gets to pass a JavaScript string and get back a JavaScript string. And that can kind of just work, which is a much easier way to develop and a lot tighter integration. Also, I’m just doing function calls; I’m not having to talk HTTP or sockets or anything when I just want to call across a boundary. And then the last one is that components are not running on separate machines, and they’re not in separate processes, so they’re not a partial failure boundary. I just don’t expect the module I’m calling to crash and go away. Components are all kind of in the same partial failure zone. And this, again, is part of what lets us run them all in the same process and have a really low-overhead way of talking between them. So, overall, summarizing these, I would say components are sort of like a lovechild between containers and modules. They have the isolation of containers, but they have the developer experience and ergonomics of a module.

Rachel: Love it. That’s a great metaphor. So one of the things that you talked about was the function calls across things. Could you walk us through that and how a function call runs through the stack, and help narrate how these things integrate together?

Luke: Yeah, totally. And for this, maybe I can just go through this fun animation. This is borrowed from a talk I gave earlier, but sometimes the visuals help explain how these things go, and there are a lot of different bits and bytes involved. So let’s say I’ve got two components here that I’m going to develop. And as I said earlier, they’re isolated, so they have separate memories. Now let’s say this component is implementing a secret store. Its job is to implement a secret store interface, which I’ll show in just a second. And it’s going to do that by making HTTP requests to some database, like S3 or something like that. So this is the world that this component implements. I’m showing WIT here. And when I describe this component, I’m saying here’s what it needs to run: it needs to be able to make outbound HTTP requests, so I’m importing that. And here’s what I’m doing: I’m exporting a secret store interface. The secret store interface looks like this: I say there is a resource called a secret. And a resource is an opaque thing that I’m not copying around; it’s at arm’s length. It’s like a file: I have a descriptor to it, but I can’t see the insides. And what’s a secret? Well, this is a demo, a little example, so it’s just a silly little interface, but it lets me expose the secret and see what the string inside the secret actually is. And then there’s a get function that says, given the name of a secret, can I look up the secret? So this is an overly simplified, for this demo, example secret store interface that I want to implement.

And then on the other side, I want to consume that. I want to import that secret store interface, and I want to use it from whatever I’m doing, under my run function. So these are the two components. And because I’m using the component model, I can implement them in different languages. So maybe I’m going to implement my secret store in Rust. I won’t show you the full implementation of this Rust code; I’m going to use some dot-dot-dots judiciously here. But I’m going to implement a trait, and this trait was generated for me from this WIT that I wrote. So I’m going to implement a secret store trait. And to do that, I’m going to implement this get function. And I get to take a nice Rust type like a string, because over here in the get function I took a string, so that turns into a Rust string. And then it returns a secret. Rust has an ownership model, so I’m going to return an owned secret. And I can implement that somehow by making an HTTP request. Then I’m going to implement another trait for the secret, where I’m going to implement this expose function that was here. I take a self handle to the internal state of the secret, and I return a string. So that’s roughly what the Rust code looks like.
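Luke’s actual demo code isn’t shown in full, but a rough plain-Rust sketch of the shapes he describes can help. The WIT is paraphrased in a comment, and the traits below stand in for what a bindings generator such as wit-bindgen might emit; every name here is illustrative rather than the real generated API, and a toy in-memory lookup replaces the HTTP fetch:

```rust
use std::collections::HashMap;

// Hypothetical WIT for the interface described in the talk (illustrative):
//
//   interface secret-store {
//     resource secret {
//       expose: func() -> string;
//     }
//     get: func(name: string) -> option<secret>;
//   }

/// Stand-in for the trait a bindings generator might emit for the exporter.
pub trait SecretStore {
    type Secret: Expose;
    /// Look up a secret by name; None if it does not exist.
    fn get(&self, name: &str) -> Option<Self::Secret>;
}

/// Stand-in for the `secret` resource's methods.
pub trait Expose {
    /// Reveal the string held inside the secret.
    fn expose(&self) -> String;
}

/// A toy in-memory implementation (the real one would do an HTTP fetch).
pub struct InMemoryStore {
    pub secrets: HashMap<String, String>,
}

/// Returned by value: the caller owns it, per Rust's ownership model.
pub struct OwnedSecret(String);

impl Expose for OwnedSecret {
    fn expose(&self) -> String {
        self.0.clone()
    }
}

impl SecretStore for InMemoryStore {
    type Secret = OwnedSecret;
    fn get(&self, name: &str) -> Option<OwnedSecret> {
        self.secrets.get(name).map(|s| OwnedSecret(s.clone()))
    }
}

fn main() {
    let store = InMemoryStore {
        secrets: HashMap::from([("db-password".to_string(), "xyz".to_string())]),
    };
    let secret = store.get("db-password").expect("secret exists");
    println!("{}", secret.expose()); // prints: xyz
}
```

The split into two traits mirrors the transcript: one for the interface’s free function, one for the resource’s methods, with the resource’s internal string hidden behind `expose`.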

From the JavaScript side, it’s super simple to use this thing. I just say import get from the secret store, and I get this function. And then from my run function I can just call get, passing it a JavaScript string, and I’ll get back that secret. And then I can call the expose method on it, because there’s an expose method here. So that’s the setup. Now let’s see how this works: at a low level, how the runtime implements this. A function call comes in to the run function, and now I’m in the JavaScript code, or the Wasm that’s implementing the JavaScript code. And that implementation, the Wasm runtime or the JavaScript running inside Wasm, is going to copy this key string into the Wasm’s linear memory, and we’re going to get a pointer to that at just some address, like address 4.

Now I want to pass that key to the get function. So I’m going to call get: I’m going to say, get the key that I’m pointing to at address 4. But if I just pass this 4 all the way to Rust, Rust does not have access to JavaScript’s memory. If it tried to look at 4, it would just crash, because there’s going to be some garbage at 4. So we need the component model in the middle here to act kind of like the role of an operating system. The component model knows, okay, this is a pointer to a string, so I’m going to copy this string from JavaScript’s memory into Rust’s memory. And that’s going to be wherever the Rust allocator puts it, like at address 8. So now I have a pointer into Rust memory of a string that’s laid out like Rust expected, and I pass that into Rust. So Rust gets a key that’s at address 8. And this is kind of what the low-level Wasm code gets; of course, what I wrote in Rust was much higher level.

So my get function synthesizes some URL it wants to get based on what this key is and fetches it from the database; I’m not going to go into that detail. And I get back the secret. Let’s say the secret is XYZ and it’s at address 1. Now, I don’t want to return just the actual string, because I want to wrap it up into this secret resource. So what do I do? I use another feature of the component model: I put it into a resource. I take this address and put it into a resource, which is this little purple blob. And I say, okay, now I’ve got a handle to this resource. And a resource encapsulates this detail, so the clients of the resource can’t see the 1; they can only see that there is a secret, and I currently have a handle to it in my address space. Now I want to return it to my caller. So I say, return the secret at handle zero, and the component model knows what’s up, so it just moves that handle to the caller and says, okay, now JavaScript has the handle to that secret in JavaScript’s table at index zero. And so JavaScript gets back zero. It says, okay, I have a secret, but I can’t see this thing inside. So if I want to call, let’s say, the expose method and get the actual contents, I’m going to say, okay, expose the secret at index zero. So I call back, and the component model knows what’s up. It’s like, all right, I’m going to pull out what’s actually in that secret, I’m going to get the 1, and I’m going to pass that into the Rust code. And the Rust code now says, all right, I know this is the secret you’re calling expose on; I can expose it to you. So I’m just going to return that secret directly from Rust. I just return the address of the string I want to copy, and it gets copied, and now JavaScript has the actual exposed secret.

So what this call sequence illustrates is that talking between components is a lot like interprocess communication. We’re copying values between memories, and we’re passing handles via these tables, which is kind of like file descriptors being passed between processes. But we’re doing this all within one OS process, or we can if we want to. And there was no context switch; this was just a call stack, running on a single stack in the process, and we’re just doing function calls. So it can be a lot faster, even though we’re getting that same strong degree of isolation. So yeah, hopefully this example paints a little bit of a picture there.
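The copy-and-handle mechanics of the walkthrough above can be sketched as a toy model in plain Rust: each component’s linear memory is a byte vector, passing a string means copying it between memories, and a resource travels as an index into a handle table. This is an illustration of the idea only, not how a real runtime such as Wasmtime is implemented, and it simplifies by keeping the handle in the owner’s table:

```rust
// A toy model (illustration only, not real runtime code) of a
// cross-component call in the component model.

/// One component instance: a private memory plus a handle table.
struct Instance {
    memory: Vec<u8>,
    /// Handle table: index -> a private address in this owner's memory.
    /// (Simplified: a real runtime tracks handles in the holder's table.)
    handles: Vec<usize>,
}

impl Instance {
    fn new() -> Self {
        Instance { memory: Vec::new(), handles: Vec::new() }
    }

    /// Copy a string into this instance's memory, returning (ptr, len).
    fn store_string(&mut self, s: &str) -> (usize, usize) {
        let ptr = self.memory.len();
        self.memory.extend_from_slice(s.as_bytes());
        (ptr, s.len())
    }

    fn load_string(&self, ptr: usize, len: usize) -> String {
        String::from_utf8(self.memory[ptr..ptr + len].to_vec()).unwrap()
    }
}

/// The "component model" glue: copy a string argument from the caller's
/// memory into the callee's memory, as the canonical ABI does.
fn pass_string(
    caller: &Instance,
    (ptr, len): (usize, usize),
    callee: &mut Instance,
) -> (usize, usize) {
    let s = caller.load_string(ptr, len);
    callee.store_string(&s)
}

fn main() {
    let mut js = Instance::new();   // the JavaScript-side component
    let mut rust = Instance::new(); // the Rust-side secret store

    // JS writes the key into its own memory, and the glue copies it
    // into Rust's memory for the `get` call.
    let key_in_js = js.store_string("db-password");
    let key_in_rust = pass_string(&js, key_in_js, &mut rust);
    let key = rust.load_string(key_in_rust.0, key_in_rust.1);
    assert_eq!(key, "db-password"); // Rust sees its own copy of the key

    // Rust "fetches" the secret, keeps it in its own memory, and hands
    // back only an opaque handle, so JS never sees the raw address.
    let (secret_ptr, secret_len) = rust.store_string("xyz");
    rust.handles.push(secret_ptr);
    let handle = rust.handles.len() - 1;

    // Later, JS calls expose(handle): the glue looks up the private
    // address and copies the secret's bytes back into JS memory.
    let ptr = rust.handles[handle];
    let exposed = pass_string(&rust, (ptr, secret_len), &mut js);
    println!("{}", js.load_string(exposed.0, exposed.1)); // prints: xyz
}
```

The key property the sketch preserves is that neither side ever dereferences the other’s addresses: only copies and opaque handle indices cross the boundary.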

Rachel: That’s great. We talked a bit about performance and overhead; are these calls moving back and forth still high performance?

Luke: Yeah. If you compare this with what a container would have done: it would have done a socket write, and that copies all the bytes into a stream, and then it transfers it across. And then how we even pass handles across containers is a whole question. So yeah, we’re cutting a lot of context switches, a lot of copies, a lot of trips into the operating system.

Rachel: Gotcha. So performance is definitely one thing. Can you run us through some other key benefits of the component model?

Luke: Yeah. So here I’m referring to a talk I gave last year called What is a Component (and Why)? That was at WasmCon. We had four taglines, kind of summaries of why you might even care about the component model. One tagline was SDKs for free. And this is a thing I got at earlier, which is, if you’re writing a platform, you want to let your customers choose whatever language they want. You don’t want to say, okay, these are the three languages I’ve blessed, and if you want another one, you have to contact your business representative. No, you want to say, here’s the interface; I’m going to write it once in an IDL, WIT, and then any language that binds to WIT can be used on my platform. So I’m decoupling these things, and I’m letting the customers choose their own language. And I’m also doing a lot less work. It’s a pain to maintain an SDK. Fastly started its Wasm work before there was any component model, so we do maintain our own SDKs, and it’s a lot of work. You add a feature, and now you have to do the additional work of adding it to each of the SDKs, and you get these weird in-betweens where, oh, we have the feature, but it’s not yet in the SDK, so it’s annoying. We want to get out of that business and divest that work to the language communities, so we get SDKs for free.

The second one is, when I have all these components, they’re like packages of code that I can reuse across language boundaries. So now I implement some functionality in Rust or C, I package it up, and I can reuse it from JavaScript. Before components, you could take that C code, but you’d have to compile it to a Node.js extension or plugin and wrap that up, and that’s a lot of manual work to take that language into that other language. With this, basically anything I package up in a component I can reuse from any other language, as if it were a package in that language’s own registry. And also, these packages are isolated, so they don’t inherit all the ambient file system and networking capabilities of the caller. I can give them as few privileges as I want. So I can say, I’m going to run this code, and yeah, you can decompress this image, but you can’t talk to the file system; you shouldn’t need to do that if all you’re doing is decompressing an image. And that helps me mitigate all these supply chain attacks that are becoming a prevalent problem in registries today. So secure polyglot packages is number two.
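The image-decompression example can be sketched in plain Rust, where a function’s parameter list acts as its capability list: code that is only handed bytes has no way to touch the file system, analogous to a component only receiving what it explicitly imports. The toy run-length decoding below is purely illustrative, not a real codec or Wasm API:

```rust
/// A least-privilege "decompressor": its only capability is the input
/// bytes. It imports nothing ambient, so it cannot read files or open
/// sockets, mirroring how a component can only do what it imports.
/// Toy format: pairs of (count, byte); a trailing odd byte is ignored.
fn decompress(input: &[u8]) -> Vec<u8> {
    input
        .chunks_exact(2)
        .flat_map(|pair| std::iter::repeat(pair[1]).take(pair[0] as usize))
        .collect()
}

fn main() {
    let out = decompress(&[3, b'a', 1, b'b']);
    println!("{}", String::from_utf8(out).unwrap()); // prints: aaab
}
```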

The third one, which we kind of talked about with this example, is what we call modularity without microservices. I want to isolate this code because isolation is great for software engineering reasons: different teams working on different isolated code bases is a nice way to avoid these big-ball-of-mud, monolithic architectures. So people do that. But if the only alternative is microservices, there’s a huge deployment and management cost and performance cost to actually putting every team in another microservice. And there are lots of stories about people over-rotating on that: you get a thousand microservices, and it can be a nightmare. So now components give me an in-between solution where I can have the modularity of separate teams working on separate components, but I don’t have to fully deploy them as wholly separate microservices, with all that pain.

And then the last one, which is kind of the most abstract, but I think also one of the most compelling, is called virtual platform layering. If you’re a platform and you want to offer your customers a certain API, you can use components to implement your platform, unbeknownst to your customers. And you can switch between: do I want to implement this natively, in my core platform, in whatever my implementation language is? That’s maybe what you want to do, but it’s harder to push out those updates; updating the base layer of the platform can be a slow process, and any code in this trusted computing base can crash the whole system, so you have to be very careful and slow about it. But if you want to rapidly develop features and push them out there, you can develop them as isolated components that are part of your platform, and your customer code doesn’t know that, oh, the other side of this host call I’m making is also Wasm; there’s this strong isolation boundary. So it gives you this flexibility as a platform vendor to rapidly iterate on features, change how you implement them, change the language you implement them in, and that’s useful for building a platform. So yeah, those are the four big reasons we’re thinking about.

Rachel: Wonderful. That sounds like a great talk. Can you remind me again where you gave it?

Luke: Yeah, I think it was at WasmCon last year. It’s called What is a Component (and Why)?

Rachel: Perfect. We’ll be sure to link to it because it sounds wonderful. All right. So last but not least, I just want to hear a little bit about how you’re actually using this. I hate the phrase dogfooding, but how are you dogfooding this? How does this actually get used at Fastly?

Luke: It’s actually that last one, virtual platform layering, that we’re starting with. Because what’s great about virtual platform layering is we can take an existing... so we already run customers’ Wasm, and they make host calls to do various things. At the moment, some of those host calls we implement using a microservice: the host call goes out and calls to a microservice. And why do we do that? Well, we want to isolate that code from the customer’s code; we can’t have the customer’s code walking over and messing with our billing. So we put them in separate microservices, and because we’re doing that, we incur all the overhead of service chaining and calling through a network stack. So the first experiment is: can we take one of those microservices, recompile it as a component, and link it directly with our customer code? Now the host call is just a local, fast function call. So we’ve improved the latency of this host call without the customer doing anything, without them touching anything, and we’ve reduced our operational overhead. That’s our first big experiment. And it lets us be users of all these open source Bytecode Alliance tools that we’ve been building; now we get to be on the consuming end, sand off the rough edges, and feed that back into the design of all these standards and tools.

Rachel: Wonderful. Wonderful. Well, Luke, this has been so enlightening. I think you have such a wonderful way of describing things in a way that is both complex and technical and also very understandable. So I appreciate you taking the time to come chat. If people are interested in exploring more about working with Fastly or looking at Wasm in any kind of way, Fastly or not Fastly, what do you recommend people do?

Luke: Yeah, well, if you want to check out and play with the component model, there’s a great online book at component-model.bytecodealliance.org; if you just Google Component Model Bytecode Alliance, it’s the top hit. That will help you write your hello world in several different languages, run it locally, and see how you can do a lot of things with components. So that’s the open standards and open source side. And if you want to try running Wasm on Fastly, if you want to write Wasm once and then deploy it to a network that’ll run your code mere milliseconds away from basically everyone in the world, in hundreds of PoPs all over the world, you can try out Fastly. That’s fastly.com/signup/; we just added the ability to create an account for free, so in a few clicks you can be running code all over the world in Wasm. And we also have a great learning resource, which is fastly.com/learning/what-is-webassembly/ with hyphens between those words. That’s a great launching point for learning more about the details of WebAssembly, how to use WebAssembly from Fastly, and just general learning; a great collection of resources.

Rachel: Wonderful. Well, thank you so much. We’ll get links to all of these things in the show notes. If anyone is excited to follow up. And Luke, thank you again for your time. I really appreciate it.
