In this RedMonk conversation, Pascal Martin, principal software engineer at Bedrock Streaming, and Valentin Clavreul, principal backend engineer at Bedrock Streaming, discuss with RedMonk’s James Governor the evolution of streaming architectures and their partnership with Fastly as a crucial element in optimizing performance. They explore the challenges of scaling streaming services, the transition from PHP to Go for backend development, and the importance of interoperability with AWS and edge compute. The discussion also highlights the significance of observability in ensuring reliable architectures and the preparations needed for major sporting events like the Euros.
This RedMonk Conversation is sponsored by Fastly.
Transcript
James Governor (00:26)
Hi, this is James Governor. I’m co-founder of RedMonk. I’m here at Fastly Xcelerate London 2025, and we’re here to talk about scale: how you build modern architectures for scaling streaming. I’ve got a couple of great guests here with me from a company called Bedrock, Pascal Martin and Valentin Clavreul. Why don’t you introduce what you do at Bedrock? Or in fact, we’ll start with what you do, and then we’ll find out what Bedrock is.
Pascal Martin (00:55)
Well, I’m a Principal Engineer at Bedrock. We are a company that helps create streaming champions: we create a streaming platform and we sell it to broadcasters like M6+ in France or Videoland in the Netherlands. As a Principal Engineer, I work mostly on backend, resiliency, performance, scalability and cost.
James Governor (01:17)
Okay.
Valentin Clavreul (01:18)
Yeah, and me, Valentin, I work as a principal engineer too, mostly on the backend part of Bedrock, helping the developers working on the APIs that power and provide the features needed for our users.
James Governor (01:33)
Okay. And so I think Bedrock is not a household name because you’re very much the back office: you’re selling services to these companies. What sort of scale are we talking about? Maybe people don’t know Bedrock, but they do know the European Championship, if there are any football watchers here. What sort of scale do you get at the biggest events?
Valentin Clavreul (01:59)
It’s a scale in millions, basically. We know that football matches are really appreciated by the users of a platform. People may come all at once, so we need to be ready for it.
James Governor (02:13)
And so, you say millions; I think sometimes you’re talking tens of millions or even…
Pascal Martin (02:21)
All in all, I’d say our customers have more than 45 million users, and they have seen 1 million users in one evening for some events.
James Governor (02:33)
Right. So I think one of the questions for me is about the sort of transformation we’re seeing. Historically, in a fairly traditional media streaming business, you would be looking at caching into a CDN. But it feels like that’s evolving: there’s more of a need to do interesting work in and around APIs, and perhaps it’s not just caching but compute as well. So could you talk a bit about the journey you’ve been on, in terms of how you’re serving your customers, what you’re doing that’s streaming and what you’re doing that’s more compute oriented?
Pascal Martin (03:09)
We’ve been working with Fastly as a CDN for video delivery for many years, maybe since before my time. And these last few years, we noticed more and more big events, think football matches and this kind of thing, and our backend APIs, in PHP at the time, were starting to have some difficulties. So we moved in two ways. First, we started programming in Go more and more: we are moving APIs towards Go because it scales much better than PHP-FPM. And we also started moving some workloads to the edge, especially during the Euros in 2024.
Valentin Clavreul (03:46)
And the key feature was really this user personalization, because if we cached everything, it would work great, but our customers still want us to retain every user-based feature. And this is what brought us to Fastly Compute, basically.
James Governor (04:02)
And I like the name of the personalization engine.
Valentin Clavreul (04:09)
The BFF. It’s an API gateway. It brings in a lot of small business rules to feed the frontend client, basically.
James Governor (04:09)
The BFF
Okay, so it’s a backend for frontend, BFF. I’m pretty sure if that was me, I’d be calling it bestie. But yeah, the whole PHP to Go transition has got to be interesting. Have you hired new people for that? Or have some of the PHP people been learning new language skills?
Pascal Martin (04:40)
Both. We trained every PHP developer wanting to move to Go, and now when we hire a developer, we hire Go developers, and they do some PHP maintenance on the side.
James Governor (04:53)
Okay. We spoke before, and you actually said that if people are Rust-curious, there may be some teams in there that are writing Rust as well.
Valentin Clavreul (05:02)
We had a phase where we experimented with both languages. Actually, we chose Go because it was more natural coming from PHP and it was easier to train everyone in Go. Still, we did build some Rust services that performed great. It was amazing, but we felt that maintenance, and getting everyone upskilled enough to go to Rust, was a bit harder.
James Governor (05:26)
Okay, and tell me a bit about the support, from a Go perspective, for Compute at the edge. Was that useful for you, the fact that you were writing Go?
Valentin Clavreul (05:39)
Yeah, yeah, of course, because what we did in Compute was using Go in a really natural way. We already had tooling, we already had best practices, and we could just do the same for Compute, with just a slight Fastly layer on top of it: a small API to call the cache layer, the HTTP layer, and that’s it. And if we ever had to change it, we could rework this thin layer and everything else would work the same.
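As a rough illustration of that thin layer, here is a minimal Go sketch. The names (`EdgeCache`, `personalize`) are hypothetical, not Bedrock’s actual code, and an in-memory map stands in for Fastly’s cache API; the point is only that the business logic depends on a small interface, so the platform behind it can be swapped by reworking one adapter:

```go
package main

import (
	"fmt"
	"sync"
)

// EdgeCache is the thin layer the business code depends on.
// On Fastly Compute it would wrap the platform's cache API;
// elsewhere it can be backed by anything with the same shape.
type EdgeCache interface {
	Get(key string) ([]byte, bool)
	Set(key string, value []byte)
}

// memCache is an in-memory stand-in used outside the edge runtime.
type memCache struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func newMemCache() *memCache {
	return &memCache{data: make(map[string][]byte)}
}

func (c *memCache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *memCache) Set(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}

// personalize is ordinary Go business logic: it only sees the
// EdgeCache interface, never the platform behind it.
func personalize(cache EdgeCache, userID string) string {
	if v, ok := cache.Get("greeting:" + userID); ok {
		return string(v)
	}
	msg := fmt.Sprintf("hello, %s", userID)
	cache.Set("greeting:"+userID, []byte(msg))
	return msg
}

func main() {
	cache := newMemCache()
	fmt.Println(personalize(cache, "42")) // computed and cached
	fmt.Println(personalize(cache, "42")) // served from the cache
}
```

On Fastly Compute, the same interface would be implemented once against the platform’s cache and HTTP APIs, and the rest of the Go code, tooling and best practices carry over unchanged.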
Pascal Martin (06:08)
But if you are starting from nothing and you know Rust, go with Rust. It’s funny to say that: start with Rust and deploy it on Fastly, because Rust is a bit better supported by Fastly Compute. But if your developers know Go, it’s well supported as well, it works well, and it’s not slow or anything. So if you have the tooling and the developers, use that.
James Governor (06:26)
I mean, clearly, it’s been quite interesting watching the transition. It’s not so long ago that Rust was something people were doing on the weekend, and now, clearly, a lot of organizations are using it at scale for systems programming. So yeah, it’s got a lot of momentum. Both are extremely useful languages, and it’s important, I guess from Fastly’s perspective, that they’re supporting all of these languages, because these are the modern languages of the web. One question I’m interested in is that you’re a big AWS shop as well. So talk a bit about, I guess, the interoperability aspects of running compute at the edge while taking advantage of Amazon services as well. There’s almost an aspect of offload here: there are some things that you can do with Fastly that you can’t do with AWS. Talk about what that experience looked like.
Pascal Martin (07:37)
I’ll start with the backend, maybe, and then you can go on to the data pipeline. Most of our backend is on AWS. We’ve been using AWS for years now; when we moved to the cloud, we moved to AWS, and we use it quite extensively: DynamoDB, Step Functions, these kinds of things. We love them, and it works really well. But Fastly Compute looked interesting, so we did some prototypes, and it worked well; especially the Go part was really interesting for us. And we needed a data store at the edge, so we went with the KV Store, and it scales well, it answers our needs. So from an interoperability perspective, no problem, except we needed to get the data into the KV Store.
Valentin Clavreul (08:26)
Yeah, and that’s where the data pipeline comes in. Basically, we need to stream every data update from our AWS databases, mostly DynamoDB, to the KV Store in near real time. It can be a bit challenging, but once the pipeline is nicely done, it works well.
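To make the shape of that pipeline concrete, here is a small, stdlib-only Go sketch of the write side: building the HTTP request that pushes one updated record into a Fastly KV Store. The REST path and `Fastly-Key` header follow Fastly’s public KV Store API at the time of writing, but the store ID, token and key naming are placeholders, and the real pipeline would be driven by DynamoDB Streams rather than a hard-coded record:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/url"
)

// kvPutRequest builds the HTTP request that writes one record to a
// Fastly KV Store. storeID, apiToken, and the key scheme are
// placeholders; the real values come from your Fastly account.
func kvPutRequest(storeID, apiToken, key string, value []byte) (*http.Request, error) {
	endpoint := fmt.Sprintf(
		"https://api.fastly.com/resources/stores/kv/%s/keys/%s",
		url.PathEscape(storeID), url.PathEscape(key))
	req, err := http.NewRequest(http.MethodPut, endpoint, bytes.NewReader(value))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Fastly-Key", apiToken)
	req.Header.Set("Content-Type", "application/octet-stream")
	return req, nil
}

func main() {
	// In the real pipeline this would be invoked for every change
	// event coming off a DynamoDB stream; here we just build one
	// request without sending it.
	req, err := kvPutRequest("example-store", "example-token",
		"user:42:subscription", []byte(`{"plan":"premium"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Host)
}
```

Sending the request with an `http.Client` (plus retries and batching) is what turns this into the near-real-time sync the team describes.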
James Governor (08:50)
Okay. With the sort of patterns you’re talking about, observability has got to be pretty important. So tell me a bit about the observability approach.
Valentin Clavreul (09:00)
Of course.
Because I believe if you cannot trust your own architecture, it’s worthless, basically. So what we did was put a tiny bit of observability on each step, both in the data pipeline and on the compute side, to really know, and really be able to debug, what’s happening: how many users had an issue while synchronizing their subscription, and so on. It really helped us build the solution iteratively. If you can’t see an error, you won’t be able to do anything. First see the errors, and then think about how to solve them.
Pascal Martin (09:37)
Yeah, we did not want to add one more observability platform. So what we did was stream the logs to our existing platform, so developers can find what’s running on Compute or in the data pipeline where they are already looking.
James Governor (09:53)
And Fastly made that an easy process, in terms of the streaming?
Pascal Martin (09:58)
They have a log streaming feature: you enable it, and the logs go automatically.
James Governor (10:02)
So it’s really easy to take the telemetry and…
Valentin Clavreul (10:04)
A line of Terraform: URL and key. That’s it.
Pascal Martin (10:06)
Maybe one.
James Governor (10:11)
And so, Terraform. I talked about interop between Fastly’s platform and what you’re doing on AWS. Terraform, I understand, plays a fairly important role.
Pascal Martin (10:21)
Our infrastructure is managed with Terraform, and when we choose a vendor today, if the vendor does not provide Terraform modules or support, we don’t choose that vendor. Fastly provides Terraform modules and a Terraform provider, so that’s okay. The AWS Terraform provider is great too. Everything is managed with Terraform; even our observability platform is managed with Terraform.
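For a sense of what that looks like in practice, here is a hedged sketch of declaring a Compute service with Fastly’s Terraform provider. The `fastly_service_compute` resource and `fastly/fastly` provider source match the provider’s public documentation, but the service name, domain and package path below are placeholders, not Bedrock’s actual configuration:

```hcl
terraform {
  required_providers {
    fastly = {
      source = "fastly/fastly"
    }
  }
}

# Illustrative only: a Compute service declared alongside the
# rest of the infrastructure, in the same Terraform workflow.
resource "fastly_service_compute" "bff" {
  name = "edge-bff"

  domain {
    name = "bff.example.com"
  }

  package {
    filename         = "pkg/edge-bff.tar.gz"
    source_code_hash = filesha512("pkg/edge-bff.tar.gz")
  }
}
```

The package referenced here is the Wasm bundle produced by the Fastly CLI from the Go (or Rust) source, so the edge service is versioned and deployed with the same `terraform apply` as everything else.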
James Governor (10:45)
Okay, there you go. So I guess you’re gearing up now. I mean, I think the sporting events are probably the bigger challenges you have, and I guess the World Cup now is a big one. From an infrastructure perspective, what have you got to do between now and then? What have you got to refactor? What are the complex challenges ahead of you to be ready for that?
Valentin Clavreul (11:10)
So, basically, going further into what we did with Compute, because what we did first was a really tailored feature for special events. Now we are trying to industrialize a lot of it and propagate it further in our code base, and then to continue our load testing operations, basically, to know every bottleneck of our architecture.
Pascal Martin (11:35)
Last year, to prepare for the Euros, we made more than 100 changes on our platform for performance, resiliency, scalability and elasticity, and we expect we will do some more, because that’s part of the game.
James Governor (11:50)
One thing I thought was interesting from the blog post I read about the work you’ve been doing was, arguably, the architectural partnership between you and Fastly: that you were identifying potential bottlenecks together. I think there was one where one of the Fastly engineers said, I think maybe you need to think about sharding this, because otherwise you might struggle. Tell me a bit about that; I suppose that kind of relationship is so important at this kind of scale.
Pascal Martin (12:17)
We did two steps: first the architecture step, and then the implementation step. I think you’ll speak about the implementation one, but for the architecture one, we did some prototypes and then we called Fastly. We were like, okay, the Euros are coming up, can we chat about what we are thinking we can do? So they set up a call with us. We drew some diagrams together, and we were like, okay, we want to do this: use Compute there, the KV Store there. Is this the right tool? And they were like, yes, okay, this is the right tool for this, for this, for this; run some load tests there and there. It was very reassuring to have this kind of stamp of approval on the architecture before starting to implement it. And then, when implementing and load testing…
Valentin Clavreul (13:02)
Yeah, and then we tried it. We tried a lot of small things, but since it’s a really specific environment, a Wasm engine, a CDN, there are a lot of small things to know about. We really talked with Fastly a lot to learn how it works. A quick example: our load tests were coming from a single origin, targeting a single PoP at Fastly, so they weren’t using the full capability of the Fastly network at all.
James Governor (13:33)
So I think my favorite story there was the testing in production, when you went, oh no, it doesn’t actually work in production. And it turned out that actually you just needed to…
Pascal Martin (13:45)
There are some knobs and levers at Fastly. As I said, we did our first load tests in our staging environment, of course, and then when we switched to production, we had a call with them to ensure the environment was ready for everything.
James Governor (13:58)
I mean, I talk about progressive delivery, where you think about the blast radius: you have to do testing in production. So when I read that case study, I was like, excellent. It’s good to know that people streaming at this kind of scale are doing that. So that’s great, super interesting. As I say, there’s a great blog post on the Fastly website. That was a really fascinating conversation, so I’d like to say thank you to Pascal, thank you to Valentin, and thank you to Fastly for hosting us.