A RedMonk Conversation: Better Security, Faster



For many years, the enterprise approach to security has been scanning vast seas of manifests for open source software and then dumping alerts on developers. But how much of the software being scanned is actually used? How much time does it take a developer to investigate an incident, and how much time could be saved if they only worked on the software actually in use? In this conversation with Endor Labs’ CEO Varun Badhwar we’ll explore those questions and more as we rethink the traditional approach to enterprise security.

This was a RedMonk video, sponsored by Endor Labs.

 


Transcript

Stephen O’Grady: Good morning. Good afternoon. Good evening. I am Stephen O’Grady. I’m the co-founder of RedMonk. RedMonk is a developer-focused industry analyst firm, and I’m here to talk today with Varun. Would you care to introduce yourself, sir?

Varun Badhwar: Yes. Hi, everyone. I’m Varun Badhwar, founder and CEO of Endor Labs. We help organizations adopt open source software more securely and confidently.

Stephen: Excellent. And that’s actually a good introduction, or segue, as it were, into our conversation overall, which is going to touch on the security side and some of the key takeaways from a security standpoint. But before we get there, I wanted to get into some of the context that we at RedMonk see in the conversations that we’re having. In other words, developers are in an interesting spot today, in the sense that they’re being pushed to move faster and faster and faster. Right? Organizational priorities at this point are across the board on velocity. The organization wants to move faster, and they’re pushing their developers to do that. At the same time, developers are being asked to do this with fewer of them. We’ve had layoffs across a variety of industries, so there are fewer actual bodies to do the work. And at the same time, they’ve had more jobs piled on their shoulders, right? Jobs that used to be upstream, or to the right, if you will, of the actual developer work are now being placed back on their shoulders. And the context of all of this, of course, is that security has never been more of a critical consideration, because the attacks get more novel and more sophisticated by the day. So that’s the setup. That’s the context here. Not to put too fine a point on it, it’s a hard time to be a developer in many ways. And to that end, you all have done some interesting research and survey work in terms of, okay, what precisely does this mean? What are the impacts? So can you talk about that work and pull out some of the key findings in your view?

Varun: Yeah. So, Stephen, first to your point: as we talk to a lot of organizations, the biggest frustration is what we call the developer productivity tax that security tools, security results, and scanning are causing. And in a lot of places that’s ending up being upwards of 50% of a developer’s time. So we dug deep to ask, why is this happening? And look, I’m sure we’ve all seen statistics that a majority of our code is now open source. In fact, specifically, over 90% of our code is open source. Now, when that’s happening and you’re running all of these security scanning tools against that code, you’re producing tens of thousands of alerts. In reality, when developers are tasked with manually doing code reviews to uncover whether something is a real problem or not, only about two out of ten are, meaning eight out of ten are false positives. Why does this happen? Because only 12% of the code you’re importing from the open source ecosystem is actually used by your first party application. So think for a moment: if you could eliminate the 88% of the code that you knew wasn’t being called in your application and isn’t exploitable, you could be so much more targeted on the things you need to go find and fix out of cycle versus the things you will just fix in a regular release cadence as an engineer.

Stephen: Yeah. I know, dating back to my own days as a developer, we had this happen so many times: QA would come back and say, hey, we have X issue, or some bug to fix, and you’d dig into it, take it apart for a couple of hours, and it turns out, well, guess what? That pertains to an older version of the product, it’s no longer an issue, and you’ve just wasted a bunch of your time. Right? So this is a cycle that developers know well. I guess my question is, what can be done? In other words, we can all agree, and it shouldn’t be terribly controversial, that we’re consuming a lot of open source software. A high percentage of that open source, however, is not actively utilized. It’s just present. So, obviously, if we’re going to target security issues and incidents within a given codebase, you only want to work on the stuff that’s actually relevant and in use. So okay, great. I think that’s the obvious takeaway. We want to focus on the things that actually represent vulnerabilities, as opposed to some artifact that somebody imported a while ago, isn’t used, and doesn’t represent a vulnerability. So what do we do? When you’re talking to developers today, what are your recommendations?

Varun: So first, let’s understand the cost, right? Because ultimately developers will want to act once you can quantify the problem. What our surveys found is that each vulnerability a developer is tasked with reviewing, just to triage it to the point of deciding whether or not to actually do something about it, takes on average eight hours. And it is not uncommon to have tens of thousands or hundreds of thousands of these. Let’s take round numbers: 10,000 fewer vulnerabilities to investigate manually is 80,000 development hours given back for you to write code and innovate. So that’s the magnitude of the problem. Why does this problem exist, and what do you as a developer do about it? The root of this problem, Stephen, is the fact that most of the scanning tools around software composition, or open source scanning, whatever you want to call it, rely on manifest files. They’ll rely on your package.json or requirements.txt files and things like that. These are basically proxies, guesstimates of the reality in your application. At best, you know what packages are being imported, their names and versions. How they’re being used, what’s dynamically being invoked in your application, and all of the other nuances of languages like Python that are extremely popular with the advances in AI: that ground reality doesn’t exist in manifest files. So where does it exist? Source code. Your code as it’s being built is the best source of truth. That’s the ground reality of what’s happening. So if, instead of scanning manifest files, you can scan the information in your source code through static analysis, you now have a very fine-grained set of specific findings about what’s actually happening in your code and what you care about.
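To make the manifest-versus-usage gap concrete, here is a minimal, hypothetical Python sketch. It assumes a requirements.txt of pinned name==version lines and a src/ directory of first-party code, and it treats top-level import names as if they matched package names, which is a simplification; real tools go much further, down to call-level reachability. It does not reflect any particular vendor’s implementation.

```python
# A minimal sketch (not any vendor's implementation): compare what a
# manifest declares against what first-party code actually imports.
# Assumes requirements.txt uses pinned "name==version" lines and that
# top-level import names roughly match package names (a simplification).
import ast
import pathlib

def declared_packages(manifest="requirements.txt"):
    """Package names declared in the manifest: the 'proxy' view."""
    pkgs = set()
    for line in pathlib.Path(manifest).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pkgs.add(line.split("==")[0].lower())
    return pkgs

def imported_modules(src_dir="src"):
    """Top-level modules actually imported by first-party code."""
    mods = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                mods.add(node.module.split(".")[0])
    return {m.lower() for m in mods}

if __name__ == "__main__":
    declared = declared_packages()
    used = imported_modules()
    print("declared but never imported:", sorted(declared - used))
    print("declared and imported:", sorted(declared & used))
```

Anything declared but never imported is an obvious candidate to de-prioritize; anything that is imported is where deeper, call-level static analysis earns its keep.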

And beyond that, you can apply layers of prioritization, like exploitability and the maturity of known exploits. Is this code even available and running in production, or is it just in test? Have you shipped this release to a customer? There are so many mechanisms you can apply, and what we find on average is that when you do, you can get rid of 90% of the noise, and the 10% that remains is the set of things you have evidence for, showing exactly how this is happening in your code. So one, the time saving comes from the majority of those findings that you, as a developer, don’t have to invest time in at all. That’s step one. The other cost benefits come from the ease of processing the information provided to you: okay, where is this in my code? Where is my first party code calling this dependency? What is the path to upgrading it? What is the impact? And furthermore, for the things you do want to act on, if I can give you more confidence about how and where to act on them, we are looking at millions of dollars of savings in a typical open source governance program. So these are true savings for enterprises of any scale and any magnitude, whether you have a handful of developers or hundreds of thousands of engineers.
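As a rough illustration of that layered filtering, the following Python sketch sorts findings on a few assumed signals. The Finding fields and the prioritize rule are placeholders for the kinds of evidence a real pipeline would attach (reachability from static analysis, exploit maturity from advisory feeds, deployment context); they are not any product’s API.

```python
# Hypothetical sketch of layered prioritization, not a vendor API.
# Each finding is assumed to carry flags produced by earlier analysis.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    package: str
    reachable: bool        # is the vulnerable code called by first-party code?
    exploit_known: bool    # is a public exploit or PoC available?
    in_production: bool    # does the affected artifact run outside test?

def prioritize(findings):
    """Split findings into out-of-cycle work and routine release work."""
    urgent = [f for f in findings
              if f.reachable and f.in_production and f.exploit_known]
    routine = [f for f in findings if f not in urgent]
    return urgent, routine

findings = [
    Finding("CVE-2024-0001", "libfoo", reachable=True,  exploit_known=True,  in_production=True),
    Finding("CVE-2024-0002", "libbar", reachable=False, exploit_known=True,  in_production=True),
    Finding("CVE-2024-0003", "libbaz", reachable=True,  exploit_known=False, in_production=False),
]
urgent, routine = prioritize(findings)
print(len(urgent), "urgent;", len(routine), "can wait for the next release")
```

In this toy example only one of the three findings demands out-of-cycle work; the rest can ride the normal release train.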

Stephen: Yeah. I mean, the analogy I would use is that it’s almost like an intake nurse, right, in an emergency department, where you have to sift through the actual critical injuries versus the ones that are like, hey, you’re fine, here’s some aspirin, go home, we don’t need to see you. And at scale, particularly the scale that we’re talking about here, where some of the larger organizations are going to have tens if not hundreds of thousands of developers, the math gets pretty compelling pretty quickly, you know, for literally saving them an hour a week, let alone eight or more per incident. So that seems pretty straightforward to me. I guess what we’ll leave with here is this: if you’re talking to an enterprise that employs however many developers, small teams, large teams, anything in between, and the argument is clear to them on paper, okay, yeah, I understand there’s a better way to focus my time, my resources, essentially my developer availability, what’s the next step? Where do they go? What do they do? How would you recommend they go about getting a handle on all of this wasted time?

Varun: Yeah, look, I think this is where a lot of the Endor Labs focus on product development comes in, and what we offer to our customers. We are very proud to say: go out there, look at your existing tool that’s scanning your manifest files, pick your “best in class” tool, and then run Endor Labs against a similar environment or the same code base and see for yourself. Here’s what you’re typically going to find. First, you’re going to find that the accuracy of even just the inventory of what’s being used is so much better when you do it through static analysis versus manifest files. So the accuracy of knowing which dependencies you rely on, both direct and transitive, is much higher. The second piece is the prioritization. You will see that in action. But Stephen, the third piece we haven’t talked about is that, as an engineer, security issues are bugs, right? But there are other bugs and concerns I have with my code, things like maintainability. And there’s another important statistic worth calling out here: 62% of the time, once we import an open source package, we never go back to upgrade or update it.

And those issues linger. We have a saying: open source software ages like milk, not wine. So if you’re thinking you’re going to keep this code base, it’s like everything else; you have to maintain it, you have to update it, you have to know when to update it and find the best paths to update. And so those are the kinds of things we’re really highlighting to engineering teams: understand what’s in your code, understand when the best time to update is, and flag issues when somebody is falling behind, because this is all tech debt at the end of the day, right? If I’m not doing something and I’m 67 releases behind on an open source library that’s critical to my web application, that’s a problem, because tomorrow, if my pager beeps and something’s down, I don’t have an easy path to update 67 releases in one shot. So I think these are the kinds of issues that really are front and center to software development. And yes, open source is free, but it’s not really free, because you have to put in the time and effort to maintain, manage, and nurture it.
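To put a number on that kind of lag, here is a rough, illustrative Python sketch that counts how many releases a pinned dependency trails on PyPI, using PyPI’s public JSON API. The package name and pin are placeholder values, the requests and packaging libraries are assumed to be installed, and a real assessment would also weigh breaking changes and safe upgrade paths rather than just counting releases.

```python
# Rough sketch: count how many PyPI releases a pinned dependency is behind.
# Uses PyPI's public JSON API; network access is required to run it.
import requests
from packaging.version import parse, InvalidVersion

def releases_behind(package: str, pinned: str):
    """Return (number of newer releases, latest version string)."""
    data = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10).json()
    pinned_v = parse(pinned)
    newer = 0
    for version in data["releases"]:
        try:
            if parse(version) > pinned_v:
                newer += 1
        except InvalidVersion:
            continue  # ignore version strings that don't parse cleanly
    return newer, data["info"]["version"]

# Example with placeholder values: an old pin of the 'requests' library.
behind, latest = releases_behind("requests", "2.19.0")
print(f"requests==2.19.0 is {behind} releases behind (latest is {latest})")
```

Run across an entire requirements file, a report like this makes the “ages like milk” point in concrete numbers.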

Stephen: So you’re saying there’s a better way to approach things than handing your developers a list of thousands of theoretical vulnerabilities that have turned up in a scanner?

Varun: Correct. There’s a better way than that, and there’s a better place to scan, which is not your manifest files. I could go on about all the reasons why treating manifest file based scanning as your ground reality is so wrong.

Stephen: Well, we will have to hold that for a future conversation. Varun, really appreciate you stopping by to talk to us about this. This has been great.

Varun: Thanks for having me, Stephen.

 
