A RedMonk Conversation: Kelly Shortridge Talks Security at Fastly


Kelly Shortridge (Senior Director, Security Product Line at Fastly) joins RedMonk’s Kelly Fitzpatrick to talk security. The Kellys cover isolation and modularity (with a few nature metaphors from Kelly S.), bridging the gap between security and development teams, and secure by design thinking (this time with an ice cream cone metaphor). Kelly S. also covers how her security philosophy translates to Fastly through her work with both the Security product line and the Security Research Team.

This was a RedMonk video, sponsored by Fastly.


Transcript

Kelly Fitzpatrick: Hello. Welcome. This is Kelly Fitzpatrick with RedMonk here with a RedMonk Conversation. Today we’ll be talking about security. With me is Kelly Shortridge, Senior Director of the Security product line at Fastly. Kelly, thank you for joining me today on this RedMonk conversation where we have more Kellys than usual. To start things off, can you tell us a little bit about who you are and what you do?

Kelly Shortridge: Yes, absolutely. It’s Kelly squared–I love it. So I lead Fastly’s Security product lines. So I figure out the vision and strategy for Fastly, both why we should build things, why it matters to organizations that need to secure their apps and services, as well as what we should build, in what order, and all of those other important prioritization kind of decisions.

Kelly F: So, you know, not a lot. You just don't have much that you have to worry about. It's not stressful at all. Yeah.

Kelly S: Herding cats, both literally–you'll probably see some cats in the background, so that's a little cat herding–and figuratively, but it's really fun, especially at Fastly. We have some of the best engineers in the world, especially on the networking side of things. That's our heritage. So colluding and brainstorming is always a pleasure.

Kelly F: That is good to hear. Well, to start things off–and I promise we'll talk about Fastly per se in a bit–looking at your body of work and all the things that you've done, I came across a talk that you did at, what was it, Wasm Day at KubeCon last year?

Kelly S: Yes, last year.

Kelly F: Called “A Love Letter to Isolation.” And I think I’ve watched that talk like 3 or 4 times. We’ll include it in the show notes so people can go look at it. In a nutshell, for people who have not seen that, in that, you talk about the Wasm component model and what isolation and modularity means, and how this is just a really good thing for security all around. Can you just speak to that a bit for folks who have not seen the talk?

Kelly S: For sure. I think the condensed version of the talk, and a similar theme that I've echoed even in talks this year, as well as in the book I wrote, is there's a lot we can steal from nature. Nature has a lot of great defenses that have emerged over the competitive pressures of evolution. One of them is isolation and modularity. Because isolation is quite literally the basis of life. Life wouldn't exist without it, and we see it at all sorts of scales. Like when I knock on here, it doesn't hurt my brain, because it's isolated. We have this nice hard skull that enables that modularity. And same thing if somehow I got a horrible bruise on my arm: that doesn't affect my ability to walk. That's modularity in action, too. We see it again in terms of ecosystems, down to the species level, even on the scale of the universe and planetary systems. So to me, it's this really beautiful thing. That's part of the reason why it's a love letter, because I feel so strongly about the beauty of isolation and modularity. It's also the unsung hero in software engineering. So if you think about most modern computing systems, even things like user permissions: the fact that we have isolation means that we can have multiple users using the same system, which wasn't always true in the really old school model.

Kelly S: Even browser isolation. That's a thing that's been hugely important for security as well as performance concerns. Process isolation–I think engineers are more familiar with that now because of things like microservices. But even if you aren't conscious of it every day, isolation is almost the equivalent of the cosmic microwave background. It's just in the background of everything you're doing as an engineer, which I think is really powerful. And my view is that we don't treat it with the respect it deserves, and don't harness it enough as a strategy for security and especially security by design. So we don't even have to think about things. Because yes, you can espouse security policies like "don't write bugs," but that's impossible to adhere to. We're always going to make mistakes, but with isolation, it means we can contain the impact, which is just so powerful.
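The containment property described above can be sketched in a few lines of Python. This is purely an illustrative toy (not anything from the talk or from Fastly): a risky workload runs in a separate worker process, so even a hard crash in the worker cannot take down the caller.

```python
import subprocess
import sys

# Run risky logic in a separate process: even a hard crash there
# cannot take down the caller, because the processes are isolated.
risky = "import sys; sys.exit(139)"  # simulate a crashing workload

# The worker dies with a non-zero exit code...
result = subprocess.run([sys.executable, "-c", risky])
print("worker exited with", result.returncode)

# ...but the parent process keeps running, unaffected.
print("parent still alive")
```

The same principle scales up: the blast radius of a failure (or an exploit) stops at the isolation boundary, whether that boundary is a process, a container, or a Wasm sandbox.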

Kelly F: Yeah. And very much I feel like when you're talking about developers and software development in general, the idea is: how do you make whatever the thing is that you should be doing the easy thing to do, so that you do not have to think about it? And, you know, to your point, isolation is one of those principles that makes that possible.

Kelly S: Absolutely. And I think modularity more generally, you know, being able to separate out concerns is, again, kind of an unsung hero. And certainly where I’ve taken umbrage with the security industry writ large as it focuses so much more on policing human behavior rather than thinking, like you said, of how do we make security invisible? How do we make it so developers don’t have to think about it, but they still achieve the outcomes we want, which is to make sure ultimately that services, apps and services behave as intended despite the fact that attackers exist out there. Right?

Kelly F: Yeah, absolutely. So moving on a little bit, you also are the lead author on a book called Security Chaos Engineering. So, TLDR, for folks who haven’t had time to read a book (but maybe they should go read this book), what’s it about? What was that process like? Are there any points from there that you want to highlight?

Kelly S: Sure. I’ll try to condense. I wrote, I think, around 140,000 words in around nine months; I will try to condense as quickly as possible. It’s really a book about software resilience. “Security chaos engineering”–that’s kind of the buzzword. I retcon it to mean chaos theory rather than chaos experimentation, though I do talk about that in the book as well. It’s really, though, about, again, how do we make sure that we minimize the impact of attacks when they happen? And more generally, how do we make sure that we minimize the impact of failures that happen in our software systems? How do we make sure that we recover gracefully and that we continue to adapt? I think that’s the key thing that’s missing, again, in cybersecurity strategy in general. And certainly in software security, it’s often seen as, you know, oh, we have to secure these individual components, we whack-a-mole all the vulnerabilities, things will be fine. But it’s not really about that. It’s more about how software interacts once it’s out in the wilds of production. Right? It’s about making sure that we can change on demand, because if we can release changes on demand, we can also release security patches on demand. And I’m a big fan of: how do we harness, you know, tactics that benefit us both performance-wise or reliability-wise as well as security-wise?

So I was trying to accomplish two key things with the book, just as an enticement to read. One was certainly teaching developers about software security and how to do it the right way. I don’t think it should be seen as this arcane art, because it’s not. And a lot of the things that developers already know you can repurpose for security, so I think it’s powerful on that front. The second part was teaching security people about software development and software delivery. And again, all the tactics that security people, who live in their world of security tools, may not know about, including things like modularity or modular architecture. So I’m trying to bridge those two communities and hopefully get people excited that, you know, security can be innovative and be an enabler rather than, you know, a “Department of No” type of activity.

Kelly F: I think that aspect of being a bridge between those two populations is so important, because that’s something we see a lot–almost like this separation. Talk about a scenario where isolation is bad: having developers isolated over here and security people isolated over there, not speaking to each other. That is not going to give us as good of outcomes as we would hope. So being that bridge from point A to point B, and having that language and ability to facilitate that communication and collaboration among those groups, I think is extremely important. The fact that you could put it into a book just blows my mind.

Kelly S: Well, so far feedback’s been great on that front in terms of finding common ground, because there’s a lot more than people think, including things like SRE and, you know, system signals that they may want to detect. Another classic example I love is Layer 7 DDoS attacks. I know a lot of our customers struggle with that. A lot of my friends who are security leaders and platform engineering leaders, they’re struggling with it. Turns out caching is a really great strategy to defend against those, because if you cache more content, then attackers can’t target as many objects to flood your origin. Which is, again, counterintuitive–or not what you think about first when you think about security strategies against DDoS attacks–but it works. And so again, it’s about what are the things that we have in common rather than, you know, the kind of petty squabbles, because our common adversary should be the actual adversary who is attacking us, not each other. Right?
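The caching point can be made concrete with a toy model (illustrative only; the function and numbers are hypothetical, not anything Fastly-specific): the higher the cache hit ratio across the objects an attacker targets, the smaller the slice of a Layer 7 flood that ever reaches the origin.

```python
# Toy model: a higher cache hit ratio shrinks the request volume
# that falls through the cache to the origin during an L7 flood.
def origin_load(requests: int, cache_hit_ratio: float) -> int:
    """Number of requests that miss the cache and hit the origin."""
    return round(requests * (1 - cache_hit_ratio))

flood = 1_000_000  # attacker requests against cacheable objects
for ratio in (0.0, 0.5, 0.9, 0.99):
    print(f"hit ratio {ratio:.0%}: {origin_load(flood, ratio):,} reach origin")
```

At a 99% hit ratio, only 1% of the flood reaches the origin: the same knob that improves everyday performance also absorbs most of the attack.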

Kelly F: So in the book–I think it’s in this book–you talk about the ice cream hierarchy of security. Could you speak a little bit about that? Because I’m here for ice cream. But anytime you can take security and give it that kind of metaphor to help people understand it better, I’m here for that as well.

Kelly S: I am all about the metaphors, whether nature-wise or ice cream-wise. I have a lot of whimsical analogies in the book that I hope make some of these principles more memorable. So the idea with the ice cream cone is basically an inverted triangle, but it’s a lot more fun to say it’s an ice cream cone. The idea is basically: the more a security solution relies on human behavior to succeed–like a human being hypervigilant 24/7 and never making mistakes–the less likely, obviously, it is to work. I always joke that if you only rely on policies and, again, humans being perfect, you have the base of the cone: when you finish an ice cream, that bit that’s all drippy. It’s good as a last bite, but, as a metaphor, you can’t scoop a lot of resilience ice cream into it. It’s a very leaky strategy that’s not very reliable. Whereas at the top–the big, kind of meaty, almost cup-like part of the cone in the analogy–is secure by design. And that’s making security invisible to developers, making sure that, you know, there are certain properties that are just baked into the infrastructure, like isolation, or things like immutability and ephemerality, things like memory safety, where you’re ideally cutting off certain actions or attack paths so they can’t happen at all, rather than, again, a human having to make sure that something bad doesn’t happen. It’s like, well, the bad thing just can’t happen by design. That’s really the goal.

And I did not invent this, you know, in a cave with a box of scraps. I drew very heavily on safety solutions and how physical industries approach the problem–including things like nuclear safety, where lives are at stake, and where I think in general the stakes are much higher than in software–right down to, you know, whether humans can follow procedures all the time. Or in the case of, I think it was Three Mile Island: there were a bunch of lights blinking, but the correct light they should have paid attention to was on the backside of the panel, out of, you know, dozens of lights. You can’t expect a human to be able to figure that out. So that’s at the very base of the cone. Whereas making sure there are, by design, safety procedures or safety principles in place in nuclear reactors or in mines or petrochemical plants, all those physical industries–that’s so much more reliable. And it’s how those industries have made great strides. It’s the same thing with airplanes–that’s probably the most consumer-friendly analogy–where they don’t place coffeemakers next to fundamental electrical wiring anymore because of an incident. And so they changed the design. So to me that was very deep inspiration, again, that we could draw from in software. And again, I feel like we have it lucky, because it’s a lot easier for us to change the design of our software than it is to change how an airplane works or a nuclear reactor works, right? So in some sense, I feel like we’re a little privileged. So it’s almost like we should be kind to these other industries and leverage those privileges that they wish they had.

Kelly F: As somebody who grew up near a nuclear power plant and who spends a lot of time on planes, I am happy that these industries have kind of gotten that down, and that we can learn from them, as opposed to the other way around, where it’s like, well, they’re going to have to model their behavior on the software industry.

Kelly S: Exactly.

Kelly F: So I feel that you have thought about security in many different contexts, and you speak about it very brilliantly to a number of different audiences. What has it been like taking all of that experience and translating it to your philosophy on security at a place like Fastly?

Kelly S: So part of the reason why I chose Fastly is because it’s so aligned to that philosophy, because Fastly helps, in a certain sense, power the internet. And it has such an opportunity, as some of that fundamental infrastructure underpinning how software is delivered to end users, to build in that security by design, which we have and we continue to invest in. And also with the Security product line, because we have that heritage of being, you know, really for platform engineers before there was the term “platform engineering,” and even being friendly for app developers, it’s like they are fully on board, top to bottom, with the idea that security has to be developer friendly. You just can’t compromise on that. You can’t have security that slows down software delivery. We hear some horror stories about, you know, WAFs in some places slowing down software delivery by three weeks. That means you ship, what, like a third fewer features per year. We just view that as unacceptable.

So I feel like Fastly is very aligned to my own philosophy, where you want to make sure that it’s developer friendly, it works with software delivery, not against it. It’s an enabler of business rather than a blocker. We’re very proud of the fact that we have a very, very, very low false positive rate, which means we’re not blocking legitimate users when we detect malicious behavior. And they’re very on board with that kind of layered approach that I think is so important. Right? Because there is no silver bullet. That was something clear to me from, I feel like, nearly day one in my journey in the cybersecurity industry, despite what, you know, we sometimes see in the vendor halls, like “this fixes everything.” It’s like, no, we know that a WAF isn’t all you need. There are strategies, again, like caching. There are things you need like isolation, where, when you use our compute platform, for instance, you don’t pay more for the isolation feature–it’s just part of the design. You’re gaining memory safety and that kind of isolation just by using the platform. And I really love that we have that aligned approach. So to me, it feels conceptually like the Security product line just adds those layers on top of a whole bunch of different strategies that are baked in by design. And because of that focus on the platform engineering stakeholder–and, by extension, how they serve application developers–it really allows you to rethink, with that persona first and foremost and at the forefront, how does security look? And it turns out it looks very different from what we see in the rest of the industry. I think that’s a great thing. And that’s why I’m, at least, really excited about the opportunity.

Kelly F: I love this idea of building for the folks who are enabling developers, right? So we talk a lot about how much developers are expected to know these days, and one of the things that often does fall through the cracks, of course, is security, compliance, things like that. So having folks who are paying attention to that on behalf of developers, so that (again) the easiest thing to do is the secure thing or, you know, the quote unquote right thing.

Kelly S: Exactly. We definitely feel like you shouldn’t have to be an expert in the latest weirdly named attack that’s out there, right? Most of the time, platform engineering teams, their focus is keeping the site up and running. And attacks, in their various forms, are one thing that can jeopardize that reliability and performance and availability. And so we very much view it as: you shouldn’t have to be an expert. We do the hard “thinky-thinky” (as I call it) to build solutions that make it easier for the platform engineering teams. They can take all the credit internally in their organizations: like, you know, we minimized the impact of this attacker by using this solution, we were able to get both improved performance–which we see a lot of the time–as well as, you know, stop Layer 7 DDoS. Right? So we really want to–it’s kind of a hokey word, but–actually empower the platform engineering teams to start with the “set and forget” model for a lot of security, or in some cases, again, it being invisible. And then if they want to get more advanced, they can. Right? Like, we allow a lot of customization in terms of rules. You can even write custom rules to block, like, ChatGPT bots if you don’t want them scraping your site. So we offer a lot of customization, but we very much view it as: you can graduate into that as you see fit and as it meets your own roadmap. Because right now, especially, a lot of teams are resource constrained. And so we know that security, like you said, is not always the top priority. And we just want to make it as easy as possible. And a lot of times, knowing that they can get the CDN side with all of the security stuff they need to make sure that everything, again, stays up and available and performant and fast–they really like just being able to trust us as the partner who figures all that out.

Kelly F: So I think that’s been, in a nutshell, a really good explanation of a) how your security philosophy affects what you do at Fastly, but also b) what that means for Fastly’s clients in terms of the offerings that Fastly has. If folks want to learn more about the Security product line, where can they go?

Kelly S: That’s a great question. We have quite a bit of information on our site. I highly recommend checking out the blogs by our Security Research Team too. We do always write them with that non-security person in mind. You don’t have to be an expert to read them, but there is a lot of interesting stuff out there where we talk about the attacks we see, as well as some of the tools we’re creating, including like a simulator, which is kind of like its own way of doing staging and testing, but for WAF logic. So that’s really cool. I definitely recommend checking out, um, some of the docs just to get a feel, including, you know, how we have Terraform support. Again, we understand, you know, how modern workflows work. So I would say between docs and the general public facing site and checking out some of that threat research or the Security Research Team’s blog that covers the gamut. There’s some demos on YouTube as well, but you can also always just add us, especially the Fastly devs account on Twitter is really active. We’re always happy to chat about that.

Kelly F: Cool. And we will include links to all of your socials in the show notes so people can go and find you on the interwebs. But are you doing any speaking in the near future?

Kelly S: Yes. So I just spoke at StaffPlus New York, which was great. I love seeing more platform, infrastructure engineering-type content in my hometown, New York. Next up is SecTor, where I’m going to be talking about chaos experimentation. This is to a security audience, but talking about how things like unit tests and integration tests, even canary deployments, can be helpful for security. So kind of the other direction, education-wise. And then we’ll see in terms of next year. Right now, I’m obviously very focused on getting the roadmap going for 2025. I’m very excited about what we have coming out. And as always, I continue to nerd out on, you know, what can we learn from nature, especially more recently in the vein of interesting deception tactics. So, highly recommend: if you do have some interesting thoughts or things that you’ve learned from other areas–even volcanic plumbing systems have been a recent focus of mine–please add me on socials. I would love to nerd out with you on that stuff.

Kelly F: Kelly, thank you again for a great conversation.

Kelly S: Likewise, thank you. Again, Kelly squared–it’s a good combo.
