A RedMonk Conversation: The Burden of Shifting Left



In this RedMonk Conversation between Stephen O’Grady and John Amaral, CEO and co-founder of Slim.AI, the two discuss the idea of “shifting left,” or moving security tasks earlier in application development workflows. In particular, they examine not just the perceived benefits of shifting left, but also its costs, and what to do about them.

This was a RedMonk video, sponsored by Slim.AI.


Transcript

Stephen O’Grady: Good morning, good afternoon, good evening. This is Stephen O’Grady with RedMonk and I’m here with John from Slim.ai. John, would you like to introduce yourself, please?

John Amaral: Sure. I’m John Amaral, I’m CEO and co-founder of Slim.ai. We’re a company focused on software supply chain security and helping companies automate vulnerability remediation.

Stephen: Outstanding. So we’re here today to talk about, well, shift left, among other things, and developer velocity and so on. And I guess I’ll start by asking: so, given the environment today with customers who are trying to deliver at velocity but are facing more complicated environments, and obviously we all know the pressures that we’re under from a security standpoint, what are you hearing from developers, about their comfort level? Where are they at in terms of how they’re thinking about security from an application development standpoint?

John: Well, I think there’s been a lot of progress lately in awareness, and even work on solving security closer to developers. When I talk to developers, they are certainly aware of the concerns, and they’re making investments in how they work and how they act to be able to make better choices in the software they use. Curating the software they develop with, their open source choices, is really a complex and hard thing to do. But there is certainly more and more awareness that they share in the responsibility for securing the code they build. And that’s been a feedback loop from the right, as more and more security activities are pushed back into the developers’ workspace, and their work is being impinged on by what I’ll call latent rework. They’re getting feedback that they need to secure something, and more and more of their time is being spent in those activities. I’ve heard that as much as 25% or 30% of developers’ time is being invested in activities related to security and software curation, making sure they’re producing better and more secure output. It depends on the organization and the environment that organization operates in. If they’re regulated by something like FedRAMP, or they have high security concerns, those developers are certainly feeling the pinch. And I think developers struggle to know what to do, or to have the tools to do it easily. It’s a lot of new work, much of it manual, and that certainly makes it more difficult to develop software at velocity.

Stephen: Yeah. So you spoke of developers’ responsibilities, and obviously one of the terms of the last couple of years has been shift left: essentially pushing traditional security tasks from the end of the life cycle to earlier in it. On the one hand, this sort of makes sense on paper, right, because many of these tasks are beneficial to perform earlier and earlier in the life cycle. But on the flip side, a lot of shift left is basically putting more and more work on a developer’s plate at a time when, frankly, they already have pretty heavy loads. So what’s your take on shift left?

John: As you said, almost perfectly, shift left in theory is a great idea. It means you catch problems in their infancy, in their formative stages, so that they don’t propagate toward production. And I think the old adage is “it’s a thousand times cheaper to fix something at the developer’s desk than it is to fix it in production,” or whatever that ratio is. It’s always a big number on the right and a little number on the left.

Stephen: Yep.

John: I think that’s interesting, and maybe a little naive as well. Shift left, again, is great in theory, but not if you don’t shift it left with automation and tooling that developers can use to do this quickly, or even to do basically nothing. I say: shift the solution toward the developer, but automate the hell out of it. Anything that’s toil or work for developers that can be removed should be, because we want them spending their precious cycles building value into our products, building value for our customers. And boy, it can sure be a giant tax on them, especially given that they’re not generally security experts, and oftentimes they’re being asked to do things that require a lot of preparation, planning, and effort. So shift left without automation, or shift left with a poor plan, is basically just slowing down your business. We know we need the outcomes, but we need to be thoughtful about how we ask developers to participate.

Stephen: Yeah. So it sounds like — I think I know where you’re going here, but I’ll ask the question anyway. If we’re in agreement that shifting left is going to cause some issues and some friction from a developer standpoint, what’s the alternative? You mentioned automation. So when you talk to customers, what are you proposing they do as an alternative to just taking these things literally off one plate and putting them on another?

John: Yeah. One area we spend a lot of time talking to customers, developers, and users of our software about is vulnerability remediation. It’s one of the topics that always comes up, and it’s something our tools help with. And it’s an area where shift left often happens, but it happens, I’d say, in the bad sense, naively. Somebody finds a vulnerability downstream, or in the process of building the software; it depends on the maturity of the organization. They may be finding vulnerabilities in production, or right at the edge of production where they’re about to push something. And then all sorts of questions arise once you find this critical or high vulnerability. Is it in scope for us? We know we saw it in the container, for instance; that’s where we play a lot. But is it a vulnerability that really impacts us? You start asking all these triage questions about what to do about that vulnerability. And that often lands back on the developer’s desk, because really no one knows whether that software actually runs, whether it’s vulnerable, whether it has impact, or whether it’s reachable. In the most typical version of that interaction, developers are asked to go investigate the vulnerability: tell me where it affects us; does that code actually run? And oftentimes the developers don’t know. They may look at their code and say, yeah, I’m using that library to do X, Y, Z, but that specific part of the code, does it run? I don’t know. So the trend we’re really seeing lately, and certainly something we do, is making triage easy and prioritizing vulnerabilities, and our tools do this.

John: We saw it run; it actually is a real vulnerability for you. Taking away all that investigatory work, all that triage work, and making it so you can give actionable advice to the folks who care about the vuln, like the SecOps or DevSecOps people and the engineers. You can say: yes, we use that package; yes, there’s a fix; the fix will work. Get it down to something actionable, so that the developer takes that insight and just uses it to solve the problem rather than spending who knows how many hours, and it can be many, many hours, doing all that legwork. So automation in triage is really a first step, and we’re seeing a trend toward it. That’s one example of the kinds of automations that are possible, and there are lots and lots of use cases analogous to what I just described: what’s the work to get to the bottom of the security issue I care about, and how do I get to the point where I know what to change? That gap between awareness and actionable work is a big gray area that developers, on average, aren’t equipped to close when it comes to software supply chain or software composition issues. So that’s an area I think can use a lot of innovation.
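As a rough illustration of the triage John describes, here is a minimal sketch that filters a vulnerability scanner’s findings down to the actionable ones: serious severity plus a known fix. The report structure and field names below are invented for the example (loosely in the spirit of container scanners like Grype or Trivy, not an exact tool schema), and the sketch deliberately skips the hardest part John mentions, runtime reachability, which requires instrumentation rather than report filtering.

```python
import json

# Hypothetical, simplified scanner report. The schema here is illustrative
# only; real tools emit richer (and differently shaped) JSON.
SAMPLE_REPORT = json.loads("""
{
  "matches": [
    {"artifact": {"name": "openssl", "version": "1.1.1k"},
     "vulnerability": {"id": "CVE-2023-0001", "severity": "Critical",
                       "fix": {"state": "fixed", "versions": ["1.1.1t"]}}},
    {"artifact": {"name": "zlib", "version": "1.2.11"},
     "vulnerability": {"id": "CVE-2023-0002", "severity": "Low",
                       "fix": {"state": "fixed", "versions": ["1.2.12"]}}},
    {"artifact": {"name": "libxml2", "version": "2.9.10"},
     "vulnerability": {"id": "CVE-2023-0003", "severity": "High",
                       "fix": {"state": "not-fixed", "versions": []}}}
  ]
}
""")

def triage(report, severities=("Critical", "High")):
    """Return actionable findings: serious vulnerabilities with a known fix."""
    actionable = []
    for match in report["matches"]:
        vuln = match["vulnerability"]
        if vuln["severity"] not in severities:
            continue  # below the priority bar: track it, don't page a developer
        if vuln["fix"]["state"] != "fixed":
            continue  # no upgrade path yet: nothing for a developer to action
        actionable.append({
            "package": match["artifact"]["name"],
            "current": match["artifact"]["version"],
            "upgrade_to": vuln["fix"]["versions"][-1],
            "cve": vuln["id"],
        })
    return actionable

for item in triage(SAMPLE_REPORT):
    print(f"{item['cve']}: upgrade {item['package']} "
          f"{item['current']} -> {item['upgrade_to']}")
```

The point of the sketch is the shape of the output: instead of handing a developer a raw finding to investigate, the triage step hands them a concrete instruction (which package, which version to move to), which is the “awareness to actionable work” gap John describes.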

Stephen: Makes sense to me. Cool! And with that, we are all set for this chat. John, thanks so much for the time, I really appreciate it.

John: Awesome.
