A RedMonk Conversation: WASM and Edge Compute – Building a Performant Cloud with Akamai and Fermyon

What happens when you build an ultra low-latency compute platform and then realize the round trips are going to kill you? Absent changing physics or your cap table: you partner!

This RedMonk Conversation explores how Fermyon – a platform that compiles serverless compute into Wasm – partnered with Akamai – a company that started out as a CDN and has evolved into a full-fledged alternative cloud – to build a better together story. Join RedMonk’s Research Director Rachel Stephens, Fermyon CEO Matt Butcher, and Akamai’s Director of Product Management Allen Duet to discuss the future of serverless computing at the edge.

This RedMonk video is sponsored by Akamai.

Transcript

**Rachel Stephens** (00:14)
Hi everyone. I’m Rachel Stephens with RedMonk, and today I’m so excited to be joined by Allen Duet and Matt Butcher. We’re going to be talking about Wasm, edge compute, and the future of cloud performance. It’s going to be a great conversation. And one of the reasons I’m so excited is that we have a couple of different organizations with us as guests today. Allen is part of Akamai, and Matt is the CEO of Fermyon. Let’s go ahead and have you both jump in and introduce yourselves, and maybe also introduce us to your companies as well. Matt, could you kick us off?

**Matt Butcher** (00:43)
I’m Matt Butcher, CEO of Fermyon. We started the company in about 2021 with the vision of pioneering a new wave of cloud computing powered by WebAssembly. We started out by building an open source tool called Spin, designed to make it super, super easy for developers to build serverless applications in the language of their choice, have them compiled to WebAssembly, and be able to deploy, test, and run them locally.

And then we carried on from that and said, okay, now that we’ve got a good local developer story, we need to build some places to deploy this. So we built SpinKube, another open source project. Both Spin and SpinKube are part of the CNCF, the same organization that hosts Kubernetes. SpinKube allows you to run these kinds of Spin applications inside of Kubernetes. And then we built Fermyon Platform for Kubernetes, which allows you to run very high-scaling serverless functions inside of your Kubernetes cluster.
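For readers who haven’t seen Spin, a minimal HTTP component in Rust looks roughly like the sketch below. This assumes the spin-sdk 2.x crate and the stock `spin` CLI workflow (`spin build`, `spin up`); treat it as illustrative rather than canonical.

```rust
// A minimal Spin HTTP component, sketched against the spin-sdk 2.x crate.
// `spin build` compiles it to a Wasm module; the same artifact runs
// locally with `spin up` and deploys unchanged to a hosted platform.
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

#[http_component]
fn handle(_req: Request) -> anyhow::Result<impl IntoResponse> {
    // Runs to completion per request; there is no long-running server process.
    Ok(Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body("Hello from Spin!")
        .build())
}
```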

But what I’m most excited about is that in March of this year, we launched Fermyon Wasm Functions with Akamai: the world’s fastest serverless platform running inside of the world’s most distributed network. I’m really excited about that because it has opened up this whole story about edge computing and high-performance computing at the edge, with everything from GPUs to key-value storage and all the offerings that Akamai already brings to the table. And so I’m really excited to get to talk about that today.

**Rachel Stephens** (02:10)
Allen, before we dive all the way in, can we please hear more about you and what you’re doing with Akamai?

**Allen Duet** (02:14)
Sure, yeah. Hi, I’m Allen Duet, Director of Product Management here at Akamai in our cloud division. I head up our cloud native and cloud networking offerings. And really, the story of Akamai is that for many, many years it has been around as a content delivery network, and it has grown into various other areas, including security and, with the acquisition of Linode a few years back, our cloud division. So Akamai’s real premise here is: we come from a legacy of building this globally distributed product with our CDN, and as a result we see this demand from customers, moving forward, for how to build what we’re calling edge native applications — no, rather — the modern way to bring technology, and especially low-latency experiences, to customers, and their ability to bring that through in cloud.

With Akamai Cloud, we’ve got this massive set of resources: 1,000 terabits per second of capacity across the network, 4,100 locations, presence in 130 countries, and 26 core data centers. And really, that’s coupled with the massive network Matt mentioned earlier, being able to reach out and get to people in that low-latency way. So it’s a really, really interesting overlap with Wasm for us. That’s why we’re so excited to be working with Fermyon: Wasm and low latency really, really work well together.

**Rachel Stephens** (03:34)
Yes, I love all of these things, and one of the things I want to underpin about what you were just talking about, for people who maybe have not kept up with Akamai over its various iterations, and for people who know Akamai purely as a CDN: via acquisitions and via technical growth in the product over probably the last half decade or so, you all have really become an edge compute and distributed compute platform as well.

Do you want to maybe just walk people through that one more time, to make sure everyone understands the fullness of the platform?

**Allen Duet** (04:07)
Yeah, absolutely, that’s right. What we’re really doing here is taking the classic core cloud compute resources, so dense offerings of virtual machines, database as a service, networking services, et cetera, and making them available to customers with really, really attractive egress pricing. So we feel like we compete really well there, in data centers where usability is a key factor for us, right? We don’t have the complexity of the hyperscalers. We are specialized to the extent that we bring those primitives to customers and let them build the solutions they want on top of our world-class network.

Beyond the core data centers, we then have this strategy where we’re bringing what we call distributed data centers, and then our edge locations, to customers as well. That really allows customers to start in the classic sense, from the way you would normally architect a cloud product, but also take advantage of and grow into a global system where you can legitimately, as you’ll hear Matt talk about here, bring forward technologies that then get distributed to thousands of locations around the world, help provide this extremely low-latency experience, and then integrate with Akamai’s CDN and security products.

So from our end, this is really the evolution, as you mentioned earlier, of a CDN moving into a “let’s bring these technologies to the world” kind of business. We’re often compared to hyperscalers, but the reality is that we think the way to build applications here, what we’re calling an edge native experience moving forward, is a fundamentally different way to do it. We see a variety of different solution topologies, including AI, all over a distributed compute kind of experience. And when you talk about distributed compute, it is getting away from the legacy “here’s your single data center, move forward with it.” So yeah, that’s really where Akamai Cloud is centered, but the broader Akamai business is heavily invested in this point of view as well.

**Matt Butcher** (06:11)
And if I can augment what Allen said from Fermyon’s point of view: the thing that made Akamai such an attractive partner to build this with is that in addition to having the standard, and even advanced, edge story you might find with some of the other edge and CDN providers, the Linode offering brings a full accoutrement of IaaS-type primitives. So we could build things using the tools we were already comfortable with, Kubernetes and object storage and virtual machines, and then have those available around the world almost instantly. It has been a fantastic experience to work on this platform because of that integration of the classic edge plus the IaaS layer they bring.

**Rachel Stephens** (06:55)
I think that’s definitely kind of a peanut butter and chocolate story, in terms of bringing together general compute capabilities with edge latency speeds. Can we dive in just a little bit? You talked at the beginning, Matt, about serverless compiling to Wasm. Can we talk a little bit more about that? I think…

**Matt Butcher** (07:11)
Mm-hmm.

**Rachel Stephens** (07:12)
At this point, with this audience, I think it’s fair to assume a baseline understanding of WebAssembly and the benefits the technology brings in terms of sandboxing, fast cold starts, portability, things like that. So I think we can definitely touch on those benefits, but in particular, I would love to talk about where you see the value of the distributed network.

**Matt Butcher** (07:21)
Mm-hmm.

Yeah. Because WebAssembly was built for the world’s most security-sensitive environment, the web browser, and for the world’s most performance-sensitive environment, also the web browser, it had some key characteristics that we, as a bunch of cloud compute nerds, really resonated with, right? The fact that you can cold start a WebAssembly binary in under a millisecond. That is, from the time we get a request to the time your user code is running: under a millisecond.

That was a huge thing for us. The fact that the security sandboxing is just second to none, that it’s a fantastic security environment, meant we could run user code side by side, knowing there’s a low risk of one user being able to attack another, and a low risk of a user being able to attack the underlying substrate. That kind of security model is a quintessential feature of a good cloud service; whether it’s containers or virtual machines or WebAssembly, you need that layer of sandboxing.
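To make the cold-start number concrete: Spin is built on the Wasmtime runtime, and the cost Matt describes is essentially the time to instantiate a fresh sandboxed module per request. A rough, illustrative measurement with the wasmtime crate might look like this; the module path is a placeholder, and real platforms add routing, WASI wiring, and pre-initialization on top.

```rust
use std::time::Instant;
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // Compilation happens once, ahead of time; it is not on the request path.
    let module = Module::from_file(&engine, "app.wasm")?;

    // The per-request "cold start" is roughly this: a fresh store and
    // a fresh sandboxed instance for each incoming event.
    let start = Instant::now();
    let mut store = Store::new(&engine, ());
    let _instance = Instance::new(&mut store, &module, &[])?;
    println!("instantiated in {:?}", start.elapsed());
    Ok(())
}
```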

And WebAssembly piqued our interest because it checked off those two fundamental checkboxes. Then, in addition to that, it’s an open standard, standardized by the venerable W3C, right? It’s a standard that’s evolving, with broad language support and broad vendor support. We saw all those features and said: if we are to give a new definition of serverless, a serverless V2, something much faster and more robust than you would see with AWS Lambda, we need to build it on a compute engine that has those characteristics. So we got very excited about WebAssembly even in 2019, 2020, before we started Fermyon, simply because it had those characteristics.

But then as we began building, we said, okay, we really want to make sure the rudiments of this are in open source. We saw the way virtual machines flourished. We saw the way Docker containers flourished. All these other cloud technologies had an open source engine at their core. And having worked on OpenStack and then on Kubernetes, to me it was the obvious truth that we needed to build a good, solid, open source developer experience and runtime. That’s what we set about doing when we built Spin and when we built SpinKube.

But then, as we got going, we decided the second big hill we needed to climb was making it very easy for developers to get something deployed globally without having to set up all the infrastructure on their own. And that’s really what got us to Fermyon Wasm Functions with Akamai. So we start with an open source story and build the rudiments that way,

**Rachel Stephens** (09:55)
Gotcha.

**Matt Butcher** (10:03)
and then find a way to make it easier and more powerful for enterprises, particularly large enterprises, who need the strong guarantees about security and performance that WebAssembly provides, but also strong support models, hosted models, and the ability to call somebody when something goes wrong. This has ended up being sort of the blending of the open source model with a really strong commercial offering.

**Rachel Stephens** (10:24)
You talked about some of the limitations of previous generations of serverless cloud compute: slower cold starts, things that are not open source, things of that nature. Why don’t you talk about how WebAssembly solves some of those problems in a way that previous generations of cloud computing couldn’t?

**Matt Butcher** (10:41)
Yeah, one of my favorite anecdotes is that it takes about 100 milliseconds to blink your eye, right? So when we tune into speeds…

**Rachel Stephens** (10:50)
I’m very blinky all of a sudden.

**Matt Butcher** (10:52)
Yeah, you know, everybody in the audience finds themselves blinking and going, how fast was that? But it takes about 100 milliseconds to blink your eye. Google and Amazon and all of these large e-commerce, e-tailer style sites have done tons of research on user retention and cart retention. And they have discovered that you have about 100 milliseconds, one blink of an eye, to show the user that you’re making progress on loading their website. Otherwise, in one blink of an eye, they start to lose interest.

**Matt Butcher** (11:21)
Right? So anybody who’s doing Core Web Vitals is paying close attention to their website loading and performance, and knows these numbers cold. Yet somehow serverless, which has been very popular, particularly for utility computing, has gotten away with these slow cold starts. An AWS Lambda function takes between 200 and 500 milliseconds to cold start. That means by the time your code starts to execute, you have already lost something like 10% of the eyeballs that were on the site to begin with, if you don’t put some kind of caching in front of it. And so you find yourself making all these performance trade-offs if you’re going to try to make Gen 1 serverless work really well for delivering directly to customers.

We saw that problem close up when we were all at Microsoft. The first 10 engineers at Fermyon all came from Microsoft; we worked in Azure, and we saw the limitations of systems like Azure Functions or Google Cloud Functions or AWS Lambda up close. And we said, that’s a problem we could tackle, because developers tell us over and over again: we love writing serverless functions. It’s easy. It’s a small code base to maintain. It’s a highly flexible medium for getting straight to the business logic and doing the work we need to do, without having to worry about long-running servers or pulling in 50,000 lines of dependencies just to get all of that up and running. We heard that over and over again,

but we’re racing the blink of an eye. So in order to hit that, we had to find a solution that allowed us to cold start much, much faster. We looked at a lot of technology. We actually tried hyper-optimizing containers. We looked at virtual machines and all of these various strategies. And in the end, we were probably as surprised as anybody else that the solution sat in our web browser. We went, well, if we just take this engine, pull it out, and put it on the other side of the server connection, we can build a system that’s going to have those performance characteristics.

So we’re pretty happy to say that even over the four years Fermyon’s been around, we’ve gone from 10 milliseconds of cold start, which is already a fraction of what we were seeing at AWS, down to one millisecond. We’re now at about half a millisecond to cold start, which puts it at pretty much near-native speed. And that, I think, is the magic behind being able to run serverless functions straight up to the end user, without necessarily requiring caching or anything like that. And again, that’s why we talk relatively proudly about being the world’s fastest serverless platform.

**Rachel Stephens** (13:29)
Okay.

**Allen Duet** (13:40)
Yeah, I think the other thing I’d add in there is that on the other side of serverless is this idea that you’re doing cost savings, because you’re only paying for what you use. But like you said, if you want the performance, then you actually pay a premium, because you do have to keep what are essentially containers alive behind the scenes, right? Whereas here…

**Rachel Stephens** (13:40)
Got it.

**Allen Duet** (14:00)
You break this entire paradigm. And I bring this up because on our end, we were looking to build a better serverless experience, obviously an infrastructure-less experience, for customers. We have container services and managed Kubernetes, and we said, well, if we were to build this at scale, what would we do? Yes, of course we would keep these things alive. But how do you charge customers for that? Where is the premium going to be paid? You realize you can really only do this at scale. And so now this really opens up the world, I think, to the idea of just paying for what you use from the compute perspective, without paying for that performance guarantee; you get the performance as part of the technology. That’s what’s so transformative about Wasm, and what makes it interesting for our cloud business.

**Matt Butcher** (14:39)
And we sort of had this funny moment too, where, again, we are a bunch of compute nerds. At Fermyon, almost all of us have worked deeply in compute in one form or another for most of our careers, right? And we were so proud when we got to the half-millisecond cold start. And then we realized, well, the problem is: if you’ve got a half-millisecond cold start and 300 milliseconds of network latency, it doesn’t actually solve the problem. I remember pretty vividly the day we went, oh no, the story doesn’t work. We have to find a network that’s fast enough and distributed enough. Because in networking, you’re dealing with the laws of physics, right? You’ve got to get close enough to the end user that you can actually preserve that level of performance.

**Matt Butcher** (15:25)
And I think that’s what was so attractive to us when we looked at Akamai: the most distributed network, I think it’s over 4,000 points of presence around the globe. Even with 20 or 30 core data centers, being able to deploy once and have these applications go to all of them, that’s when we start to get into this story where we say, yeah, we’re cutting the compute time down from yay big to just a tiny fraction, and then we can cut the network side down from 300 milliseconds to 10 milliseconds, seven milliseconds, ranges like that. So it’s been a very exciting way of tackling the problem at both ends, leaning into the skills the Akamai team brings and the skills the Fermyon team brings, to create something that is genuinely really, really fast.

**Rachel Stephens** (16:09)
So one of the things I think is interesting: it’s not like serverless compute is a new paradigm, but I do think there were some really natural transition points for a lot of enterprises. Going from a VM to a container translated really easily. I think it’s maybe a little more complicated sometimes for people to grasp what, architecturally, they need to be doing to compose an application using serverless functions and WebAssembly. It’s a little bit more work to figure out that art of the possible. So I’m interested: how have you seen people start to build these applications? What do they need to be thinking about? What does that look like?

**Matt Butcher** (16:47)
I think the easiest way to dive into the concept of serverless functions, which is what we’re specifically talking about, is to contrast what we mean by a server with what serverless means, and see what that core difference is. Because as a developer, when I think of creating a server, I’m thinking about the software, right? A software server is a process that starts up, opens some ports, listens for socket connections, and runs for days, weeks, months. A single process handling potentially millions or even billions of requests. Serverless effectively gets rid of that long-running process. Instead, you’re writing an event handler that says: hey, when you detect this event, start me up, run me to completion, I’ll send back a response, and you can shut me down, right?

**Matt Butcher** (17:43)
And so it is different in kind. You know, one of the cool things about the transition from virtual machines to containers, in my experience, was that in a lot of cases I was just rebuilding the software from a VM image into a Docker container, which was actually a pleasant cutting down of the amount of DevOps-y work I was doing. But the transition from writing serverful server software to serverless software is a code change, right? Instead of cutting down on the DevOps artifacts, you’re really cutting down on your code weight.

One of our main pieces of concerted effort over the course of Fermyon’s existence, and you see this in Spin quite frequently, has been to say: okay, if we’re going to tell a developer, hey, you have to alter your code in order to run this, then we have to give them a story that says this is a task of making something easier, rather than making something more complex. So if you look at the Rust API, the request-response model there mirrors all the popular Rust frameworks. If you look at the JavaScript API, it mirrors exactly the kind you see in JavaScript land. If you look at the Go API, again, the Go paradigm is what we follow. So instead of having to start over, scrap their code, import a bunch of foreign libraries, and redo all this work, developers can say: okay, what I’m really doing is removing this, removing this, removing this, and then this one Spin library is going to shim together the Go notion of a request and response model with whatever it is Spin does under the covers. And that, I think, is how we’ve tried to tackle the developer experience story: to tell a story that is essentially, we’re working hard to make it possible for you to simplify, and make your code smaller and easier to test, instead of: here’s a completely foreign way of doing it, rewrite all your stuff, right?

**Allen Duet** (19:31)
Yeah, I have kind of a different view there. I 100% agree on the developer experience approach. But I think a lot of serverless adoption was challenging; I remember that even going from VMs to containers, the infrastructure necessary to support, operate, and secure that was still in flight and in development over time. And I think those technologies have emerged and matured to a point where…

**Rachel Stephens** (19:31)
Yeah.

**Allen Duet** (19:58)
Folks are already adopting, like you say, classic functions. Being able to then apply that same paradigm, that same concept, to something that is hugely ephemeral within my architecture space, but still secured, still operationally visible to support, et cetera: that kind of stuff has gotten to the point where architects aren’t really the folks saying yay or nay here anymore. It really comes down to the developer experience as the last hurdle, if you will, to adopting a new product like Wasm, and particularly Wasm in the cloud. I mention this because on the cloud side of the world we get a lot of asks: hey, integrate that for me, build it for me, I don’t want to have to build all those things individually. So we see this, and we think of serverless as pushing all of that classic 12-factor app stuff into service provider land, which we then build for you as portions of these services moving forward.

**Matt Butcher** (20:53)
And that really segues well into why edge is the platform Fermyon targeted, why we said edge is the next wave, right? Because we’re talking about simplifying the developer experience, and Allen is right, it’s a game changer for developers when we can make their life easier, when it’s less a matter of “we buy into this new sophisticated thing” and more a matter of “can we get more work done faster?” But then what the edge story opened up for us is the ability to say: look, we can also relieve you of the responsibility of maintaining all this infrastructure, where you have to stand up a region here and a region there and a region there, and then keep all of them in sync, and write your own software to make sure the same things are getting deployed, and your CI/CD pipeline starts to get really complicated, and your rollback story gets really confusing. We looked at all that and said: wait, if we just build a platform that can deploy to the edge, where we can push one of these functions up and it just goes everywhere, and it’s orchestrated behind the scenes, then the developer never has to worry about it, and the operations team has a much simpler runbook. That is a huge win, not just for the developer experience, but for the operational experience of this platform too.

And operational experience is a broad term, right? Because we want the developer to be able to self-serve, following that same 12-factor model that has once again surfaced as the way developers really do want to write code, but it also simplifies the story for platform engineering, SRE, and DevOps teams who would otherwise be responsible for trying to keep track of all the moving parts necessary for developers to achieve their goals.

**Rachel Stephens** (22:25)
So I’m curious then, what kind of applications are you seeing customers assemble with these technologies? What seem to be the popular use cases?

**Matt Butcher** (22:33)
I think early on, what we at Fermyon saw as the first wave was people hosting websites, right? And the reason was that that was something they couldn’t really do well on AWS Lambda. They all wanted to, and they couldn’t necessarily do it that well. So in the first wave, back in 2021, we saw a whole bunch of websites go up. Then we started to see API servers go up. Then we started to see the rise of AI apps, and what we saw at Fermyon were largely experimental AI apps. And then somewhere along the line, a bit flipped. Now what we’re starting to see is the emergence of what we all now call edge native computing, where people say: I can just start pushing increasingly sophisticated applications toward the edge. We can start moving anything that needs faster delivery out to the edge.

So a lot of the stories Allen and I work on together have to do with expediting the delivery of data of one sort or another, or securing something closer and closer to the edge to hold back the ever-increasing number of bots and bad actors trying to steal API keys and the like. Being able to push authentication, verification, and authorization further and further out to the edge is another place we’ve found an early foothold.

And one of the exciting things we’re seeing emerge, and then I’ll let Allen answer too, because he’s probably got a different set, is that because we can get closer and closer to the user, we’re also just now starting to have customers come in and say: well, Akamai provides geographic information about where this request is coming from, because we’re in a data center out here. Consequently, can we start making decisions all the way out at the edge about what content people see, based on their locale? You can think of simple cases like store finders for e-commerce or news localization, but it gets even more sophisticated, into digital rights management in some cases, as you have the proliferation of streaming providers and some fairly restrictive ways in which they’re allowed to use the content they’ve licensed. So it has been fascinating to see this go from the simplest case, right, I just want to deploy my personal blog, up to these fairly sophisticated and in many ways very sensitive use cases.
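As a sketch of what that locale-based branching could look like in a Spin component: the geo header name below is hypothetical, since the conversation doesn’t specify how Akamai surfaces location data to the function.

```rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

// Hypothetical header name; the real mechanism for exposing request
// geography to the function is an assumption here, not confirmed above.
const GEO_HEADER: &str = "x-client-country";

#[http_component]
fn handle(req: Request) -> anyhow::Result<impl IntoResponse> {
    let country = req
        .header(GEO_HEADER)
        .and_then(|v| v.as_str())
        .unwrap_or("unknown");
    // Decide at the edge what the user sees, based on locale.
    let body = match country {
        "DE" => "Store finder: Berlin, Munich, Hamburg",
        "FR" => "Store finder: Paris, Lyon, Marseille",
        _ => "Store finder: choose your region",
    };
    Ok(Response::builder().status(200).body(body).build())
}
```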

**Allen Duet** (24:52)
Yeah, on our end we’re definitely seeing those use cases; we’re talking to a lot of the same kinds of companies as Matt. But there’s a pattern I’ve started to see that I wanted to share. I think early on we had customers who were interested because of the technology, because of what Wasm was, coming to us saying: hey, I want to learn about this technology. They’d then map it to a problem space within their domain and deploy it. That’s why I think we saw a lot of point solutions: they learned about the technology and then found a place for it. And now, and Matt, I haven’t mentioned this to you yet, we’ve started to demo Spin as part of our App Platform piece. And the feedback we’re getting from some of these developers, who by the way aren’t there for Wasm at all, they’re there to learn how to more easily deploy containerized solutions, is: boy, Spin is a great experience. I want that,

**Matt Butcher** (25:27)
Mm-hmm.

**Allen Duet** (25:46)
irrespective of the fact that it’s fast and secure. Those are just, like, cool. The runtime support and the experience of deploying it are really the appealing part. So it gets into a more interesting state, where these things are in some cases being used in areas they weren’t originally designed for, and now we’re in this hybrid environment, if you will, from an architecture perspective, where you have Wasm functions, classic functions, and containerized workloads.

So it’s really interesting to see. What Matt mentioned is a very, very interesting case: customers have often built containerized, what they consider distributed, architectures, and are now splitting them up further, right? This piece can run at the edge; let’s let that part run there, and move that business logic from here to there, because it’s better, faster, more secure, et cetera. Those are the complex pieces, but I feel like the next time we talk, we’re going to be talking about a whole new set of use cases, because people are, I think, finding this amazing developer experience and then applying it in creative ways. I understand mapping the problem space to the unique domains Wasm fits, but it really feels like there’s a functions 2.0 kind of revolution going on here.

**Matt Butcher** (26:48)
Yeah, yeah.

Hey, you should introduce App Platform too, because that’s something else we’ve been really excited about. You just mentioned it, and I believe we’re demoing it together at KubeCon coming up in Atlanta.

**Allen Duet** (27:14)
Yeah, that’s right. So we have a managed Kubernetes offering, which is great. I think a lot of folks are obviously adopting Kubernetes as the standard. But when you adopt Kubernetes, building out the developer experience is really the key. Obviously we’re big in the CNCF and big participants in KubeCon as well. We have taken open source technologies from the CNCF and built an application developer experience that we call App Platform.

It is designed for teams to easily build and deploy containerized workloads, pods essentially, through their Kubernetes, their managed LKE clusters. We have open sourced that platform as well, so you can use it in other places. But really the idea is: a lot of folks who start with Kubernetes have the intent of building a platform, and it sometimes takes years to build a platform. This, you can have over lunch, kind of thing. So we’ll get you started off and get you going with that platform.

**Matt Butcher** (28:03)
haha

**Allen Duet** (28:06)
Within that, we offer a range of different CNCF open source projects, including Spin. So that’s what I mentioned here earlier, as we were demoing some of those experiences. Very fun to see the reaction from folks.

**Rachel Stephens** (28:17)
Allen, can you give us just a flavor of the, I guess, composite nature of the applications people are building? I’m assuming few, if any, are fully Wasm. I’m assuming we’re seeing assembly of pieces across your infrastructure. Is that right?

**Allen Duet** (28:32)
Yeah, that’s absolutely true. And I think part of that is that you still have stateful workloads in some cases, with a heavy reliance on storage, so we definitely see that kind of thing. We’re heavy in the media space, so we often see streaming services. The classic composite here is where you have a control plane sitting someplace, and then they’ll spin up, say, LKE clusters in various locations.

They stream, right, so that pod represents that stream. You’re going through that particular piece, and along the way you want to do something like ad injection into that media. That’s where your Wasm container is firing, doing that injection into the stream. This is the classic case where you’ll have a very distributed, spread-out kind of architecture. But the use cases here use different technologies, whatever is applicable for their specific needs, and because we’re all one network behind the scenes, it really makes it easy to wire that stuff together.

**Matt Butcher** (29:28)
And I think you see the carryover of many standard patterns, like microservice-based development, as a way of standing up a long-running piece inside of App Platform or an LKE instance, and then these super fast, responsive serverless pieces that can get pushed closer to the user being written as WebAssembly functions. So we have definitely seen a convergence of patterns we’re all very comfortable with, right? The serverless pattern on one side, which is now faster and better; the long-running, stateful server process on the other, which is now getting easier to deploy into a really high-performance network, the Akamai network. And I think that convergence makes it not just tenable to build more sophisticated applications, but actually fairly straightforward for people to say: yeah, I can whiteboard this up with four or five boxes, instead of having to design a very complex and sophisticated network spanning multiple providers or something like that.

**Rachel Stephens** (30:25)
And in terms of thinking about the architecture, sometimes it can be fully edge-based, and sometimes we can be pulling back into those Linode-esque data centers that are going to be more regional than a traditional PoP, and then combining all of that together.

**Allen Duet** (30:41)
Yeah, that’s definitely the case. What’s been interesting is that in what we call our core data centers you have compute density, you have a lot of stuff so you can scale quite high there, and you also have this rich set of as-a-service offerings. Whereas as you go further and further toward the edge, you’re dealing with a thinner and thinner set of capabilities, intentionally so, by design, to keep that latency down. So what we like is when technologies like Fermyon’s bring that sandbox over to the edge, because you can do quite a bit there. And then we bring additional services there, like a KV store at the edge, right? A KV store at the edge and a KV store at a core data center that scale differently are going to be two different technologies, and yet you get this abstraction through the Fermyon technologies where you kind of don’t care. Somebody writes their code and says, I need a KV store, and it will find the appropriate resource depending on where it’s running and use it.

That’s the ideal case: you fully abstract these domain concerns away from the developer, so they can stay in that logic space Matt talked about earlier and write the code the way they need to. So we really love this match-up: as you move from the edge through to these core areas, the services can still support the exact same deployment. There are no code changes necessary for that Fermyon Wasm function to move out.
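That “I need a KV store” abstraction is visible in Spin’s key-value API: the component opens the default store, and the host decides which concrete service backs it at that location. A minimal sketch, assuming the spin-sdk 2.x key_value module; a real app would also grant the component access to the store in its manifest.

```rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;
use spin_sdk::key_value::Store;

#[http_component]
fn handle(_req: Request) -> anyhow::Result<impl IntoResponse> {
    // "I need a KV store": the host maps "default" to whatever backing
    // store exists where this instance runs (edge PoP or core region).
    let store = Store::open_default()?;
    let hits = store
        .get("hits")?
        .and_then(|v| String::from_utf8(v).ok())
        .and_then(|s| s.parse::<u64>().ok())
        .unwrap_or(0)
        + 1;
    store.set("hits", hits.to_string().as_bytes())?;
    Ok(Response::builder()
        .status(200)
        .body(format!("hit #{hits}"))
        .build())
}
```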

**Rachel Stephens** (32:05)
Well, this has been an absolutely delightful conversation with the two of you. Now, I know that we can tell people to come find you at KubeCon, but assuming they would like to know more sooner, or maybe they’re not in Atlanta this year, where can people go online to learn more?

**Matt Butcher** (32:18)
Fermyon.com is a great place to go to get an overview of the serverless technologies. You can link from there off to the CNCF Spin and SpinKube projects, and from there to Fermyon Wasm Functions running on Akamai. Allen, I’ll pass it over to you for the definitive source of URLs for Akamai.

**Allen Duet** (32:36)
Yeah, we’re locatable in several places, obviously akamai.com. I think most folks would probably know us from the Linode brand, which we continue to operate, at linode.com, so that’s a great place to find us. There’s still a rich set of documents and data out on the internet around the Linode brand, so that’s still useful. We have a little bit of a rebranding going on with Akamai Cloud, so that is one other way you can find us. But we’ll have a big booth at KubeCon as well, if you do happen to swing by, and I think Matt and other Fermyon folks will be there with us.

**Matt Butcher** (33:09)
Yep, yep.

**Rachel Stephens** (33:10)
Well, thank you both so much for your time today. This has been great. I appreciate it.

**Matt Butcher** (33:14)
Yep, great to chat with you both.

**Allen Duet** (33:15)
Cheers.
