In this RedMonk conversation, Dan Rogers, CEO of LaunchDarkly, joins Kate Holterhoff, senior analyst at RedMonk, to discuss the transformative impact of AI on software development, particularly in the context of feature management and DevOps. They explore the evolving role of developers, the challenges and opportunities presented by AI adoption, and the importance of leadership in navigating these changes. The discussion also highlights the need for quality control and risk management in AI-generated code, emphasizing the balance between innovation and safety in the development process.
This RedMonk conversation is sponsored by LaunchDarkly.
Transcript
Kate Holterhoff (00:12)
Hello and welcome to this RedMonk Conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk, and with me today is Dan Rogers, CEO at LaunchDarkly. Dan, thanks so much for joining me on the MonkCast.
Dan Rogers (00:23)
Thanks for having me on, Kate.
Kate Holterhoff (00:24)
A real pleasure. For this episode, Dan and I will be chatting about AI, a technology that is already fundamentally changing the way we build and deploy software. It’s a subject we speak about at RedMonk every day with vendors, customers, and developers. And it is no exaggeration to say this entire domain is moving extremely fast. So talk to me about what you’re seeing around AI in the feature management space, Dan. How do you anticipate AI transforming the broad landscape of DevOps and software development and what specific changes are you preparing for in the next six to 12 months?
Dan Rogers (00:56)
Well, that’s a big question. Maybe I’ll break that down into a few pieces. First, let’s look at the world of the developer. It’s been exciting times, really, in the last six months and the next six months. The first piece is around productivity. So anyone who’s doing any coding now is probably using GitHub Copilot. That’s not only helping with code reviews, but now actually documentation as well. So what was once boring, doing the documentation afterwards, can now be done through, you know, Gen AI. Amazon CodeWhisperer as well, helping you work right there in the IDE. And then just the ability to do debugging much more easily, and, you know, you can resolve errors much more quickly. And that just gives you higher code quality. So what does this all mean for developers? It means a lot of their roles are changing, that many of the applications they are building will have AI in them, and that therefore they need to understand how LLMs work.
They need to work alongside those AI models, constantly tweaking and training them. You know, the prompts actually get dialed in over time. So the core domain of a developer is slightly different and has evolved in the last six months. You’re going to have to do a lot more problem solving and, yes, think a little bit about AI ethics, because you may get some answers within a chat experience that you weren’t anticipating. So you’re going to have to think through all of that: what are you going to do with hallucinations?
So overall, the great thing is you’ve got a lot more automation for writing the code, a lot more automation for testing the code. Quality is going to go up. You’re going to be able to detect vulnerabilities that much more easily. You’re going to be able to give your teams a lot more visibility. AI has really enhanced DevOps so that you can actually see what’s happening at each stage, and that makes things like pipelines and rollbacks that much easier. And yeah, as I say, this means that in the future, that’s kind of the next six months, developers are going to have to be on their toes. They’re going to have to continue to learn new tools. My son is a citizen developer, and the application he’s building is an LLM application. That’s how he’s starting with coding: he’s starting with an LLM. And so, you know, he’s very aware of the fact that he has to constantly tune, and that he’s potentially going to have to change models along the way. This kind of runtime experimentation is just par for the course for, I think, modern development teams.
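To make the runtime experimentation Dan describes a little more concrete, here is a minimal sketch of switching models behind a feature flag. It assumes the LaunchDarkly server-side Python SDK (launchdarkly-server-sdk); the SDK key, flag key, and model names are illustrative placeholders rather than anything from the conversation.

```python
# Sketch: selecting which LLM configuration a request uses at runtime,
# so changing models or prompt settings is a flag update rather than a redeploy.
# Assumes the LaunchDarkly server-side Python SDK; keys and names are placeholders.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # placeholder SDK key
client = ldclient.get()

def model_config_for(user_key: str) -> dict:
    """Resolve the model configuration this user should get right now."""
    context = Context.builder(user_key).kind("user").build()
    # A JSON flag whose variations describe the model to call; the default is a
    # conservative fallback used if the flag can't be evaluated.
    return client.variation(
        "llm-model-config",  # hypothetical flag key
        context,
        {"model": "fallback-model", "temperature": 0.2},
    )

config = model_config_for("user-123")
print(config["model"])  # the application would call whichever model the flag selected
```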
Kate Holterhoff (03:22)
Yeah, and you’re really pointing to the fact that this is an exciting time, but also one of huge disruption here. So with AI adoption increasing across so many business verticals, can you talk about the tailwinds and headwinds that come from this, both for the industry and LaunchDarkly specifically?
Dan Rogers (03:40)
Yeah, you know, some companies are going to be on the wrong side of history on this one and some companies are going to be on the right side of history. So, you know, which companies are going to be on the wrong side of history? If any of those, let’s call it primary functions that are now going to be abstracted is your core domain, then that is a worry for you, right? So, you know, those domains around debugging, around testing, around documentation, the stuff that was really a bit of a drudgery for developers.
That’s obviously ripe for disruption. So I’d say that’s the wrong space to be in as a company. Fortunately, LaunchDarkly is in the right space to be in, which is really around progressively rolling out software. And guess what? With AI applications, you’re going to need to progressively roll those out. You definitely do not want to put a non-deterministic AI experience out there to the masses, to all of the users all at once, without having progressively rolled that out. So that suits us really well.
You’re definitely going to want to target those AI experiences to subsets of users and really make sure those users love that experience before all of the users get it. And you’re definitely going to want to, as I talked about before, do a lot of experimentation. You’re going to need to experiment around costs, which models use more tokens. You’re going to want to do experimentation around performance, which prompts actually get you better efficacy of answers and actually create fewer errors.
And then of course, given that the models are constantly updating and being tuned themselves, you’re going to want to switch out and try the new ones, and that might happen, you know, really every three or four weeks at this point. And so it’s exciting times and companies like LaunchDarkly really help you as a platform for those new AI applications, allowing you to put release principles that we’ve all known for the last decade into practice for those AI applications.
Just because it’s an AI application doesn’t mean everything’s new. You’re still going to want to control your risks. You’re going to want to control your rollout. You’re going to want to experiment in runtime with these AI capabilities without affecting all users. You’re going to want to be able to do rollbacks. Let’s bring rollbacks back, especially for AI applications. At LaunchDarkly, we can do rollbacks in about 200 milliseconds. That’s going to be essential when things start to go wrong, because things will go wrong as you experiment and dial these features in.
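As a rough illustration of the rollback point, the sketch below gates an AI code path behind a boolean flag, so turning the flag off routes traffic straight back to the existing, deterministic path. It again assumes the LaunchDarkly Python SDK; the flag key and the two answer functions are hypothetical stand-ins.

```python
# Sketch: an AI feature behind a kill-switch flag. Disabling the flag (or rolling
# back) immediately sends users to the known-good, non-AI path.
# Assumes the LaunchDarkly server-side Python SDK; keys and helpers are placeholders.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # placeholder SDK key
client = ldclient.get()

def generate_ai_answer(question: str) -> str:
    # Placeholder for a real LLM call.
    return f"[AI] answer to: {question}"

def lookup_canned_answer(question: str) -> str:
    # Placeholder for the existing, deterministic code path.
    return f"[canned] answer to: {question}"

def answer(user_key: str, question: str) -> str:
    context = Context.builder(user_key).kind("user").build()
    use_ai = client.variation("enable-ai-answers", context, False)  # default: off
    if use_ai:
        try:
            return generate_ai_answer(question)
        except Exception:
            pass  # fall through to the deterministic path if the AI call fails
    return lookup_canned_answer(question)
```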
Kate Holterhoff (06:04)
I understand that you recently proposed a new rule for Gen AI projects, to quote, “move fast on AI, break nothing.” Now, Dan, this strikes me as an extremely ambitious proposition, especially as the amount of code being written and shipped is expected to explode in the coming years owing to the influence of generative AI developer tools, such as the code assistants you listed earlier. What do you tell folks who ask you how LaunchDarkly can help teams address these and related issues introduced by AI?
Dan Rogers (06:35)
You know, I recently spent time with one of the largest banks in the world and asked them, look, how are your AI projects going? And they said, well, slowly. And I said, well, why is that? And they said, that’s our choice. You know, we are very scared and we are risk averse. And so we have a backlog of AI projects. I did a bit of sense checking with other large Global 2000 CTOs and kind of asked, look, do you have a backlog of AI projects? Yes. So they are aware of the inherent risks of some of these AI applications.
And the solution for that, unfortunately, is infinite testing and kind of worrying about those AI applications. So the proposition I had, that I want to boldly put out there, is: imagine you could move fast on your AI apps, parentheses, without breaking anything. And so imagine that with that backlog of projects, you have confidence that you can push them out, because you know that you’re pushing them out in a way that allows you to roll back if things go wrong. Because you’re pushing them out in a way
that lets you roll them out to a thousand users instead of 10 million users. And so with a thousand users, it turns out you get a pretty good sample size. You can actually understand in production whether people are enjoying the experience. Are they clicking where you thought they were going to click? Are they responding how you thought they were going to respond? Are they satisfied with the answer? And that constant dialing, that constant tuning, it turns out you can do all of that in production, safely, in runtime, with a platform like LaunchDarkly.
So the paradigm of move fast and break things, we think, is an old-fashioned paradigm. What if you could move fast and not break things? And then that backlog of AI projects we talked about gets out into the world. Push out your innovation, because this is how you’re going to differentiate. Increasingly, the way you’re going to differentiate on your application experiences is through AI capabilities. It is the game changer we’ve been looking for. This is how bank A versus bank B is going to differentiate:
how much they integrate some of those AI capabilities, how much they understand their customers and dial in and tune specific applications to their specific needs. This is the big differentiator. You do not want to be holding yourself back on your potential competitive advantage.
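For readers wondering how that in-production tuning might look in code, here is a small sketch of recording a satisfaction metric from users in the rollout cohort, which experimentation tooling or an analytics pipeline could then aggregate. It assumes the LaunchDarkly Python SDK; the flag key and event name are hypothetical.

```python
# Sketch: measuring whether the small rollout cohort is satisfied with AI answers,
# so prompts and models can be dialed in before a wider release.
# Assumes the LaunchDarkly server-side Python SDK; keys and names are placeholders.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # placeholder SDK key
client = ldclient.get()

def record_feedback(user_key: str, satisfied: bool) -> None:
    context = Context.builder(user_key).kind("user").build()
    # Only users who actually saw the AI experience should produce metric events.
    if client.variation("enable-ai-answers", context, False):
        client.track(
            "ai-answer-satisfied",  # hypothetical metric event
            context,
            metric_value=1.0 if satisfied else 0.0,
        )
```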
Kate Holterhoff (08:43)
So let’s dig into this issue of AI-authored code specifically that banks and enterprise customers are really concerned about. What challenges do you see arising from the influx of code that is AI-generated, particularly in terms of risk, code quality, and overall visibility into user impacts?
Dan Rogers (09:03)
You know, I was talking to your colleague James Governor about this actually just a couple of weeks ago. And we talked about the unpredictable outcomes of AI. And so what makes it so unpredictable? It’s this notion that these experiences often actually require volume to train, and only when you put them out into production do you really see how they’re going to manifest. So that is a kind of frightening and scary aspect of this non-deterministic part of Gen AI. Worse, what we can see happening is
you know, error propagation can really set in with some of these AI models when they’re integrated into an application without any validation. If you just assume and build everything off the back of a particular model, then errors can actually propagate very quickly. And so we also see a little bit of difficulty with the quality control. This is a black box, right? These large models are very hard for the average human to debug or consume or even understand, other than at an abstract level.
And so how do you do quality control on a black box? It’s really opaque. And I would say that can obviously lead to bias. It can lead to errors. And they’re very hard for an average developer to detect. And so what could end up happening as a result of all this is customer frustration. We’ve heard, and I guess these are urban legends at this point, of various chat experiences where
you know, the Gen AI model got a mind of its own and started recommending, you know, competitor car brands, or started discounting the product because it thought the pricing was unfair. You know, these things can happen. And so that inconsistency in the experience is very real with Gen AI projects. And ultimately, I think it’s going to be very hard to explain a lot of the results. You know, if the results, let’s say in a medical application, are different than you expect, and the AI, which everyone now thinks has certainty, has almost, you know, omniscient-like powers, if that’s giving you a result that’s different than you were expecting, who’s going to be there to explain the nuance of what the model is actually doing?
Kate Holterhoff (11:06)
Dan, you recently spoke about great code being led, not just written. So that’s another quote that I would love to hear more about. I like this idea. I want to better understand what it means, though. Can you unpack the role of leadership in our generative AI code assistant present, and how it’s going to impact the future?
Dan Rogers (11:25)
Yeah, funny, again I had a very similar conversation around this with James, and what we talked about is, I think something has changed in the last maybe five years, where yes, every company is becoming a software company. Yes, software is eating the world. And as a result, the way in which companies define themselves is increasingly going to be around their digital experiences. If that is true, and we take that for granted, and, by the way, increasingly many of those experiences are going to be generated by AI. So if all of that is true, what is the role of a company’s leadership? Well, I would posit the company’s leadership therefore cannot abstract themselves away from the software that their company represents. So just imagine a consumer brand company. You can’t just say, hey, you know, we’re all about the athletes and we’re all about, you know, the shoe material. It turns out your largest distribution channel is your digital application.
So you need to understand the release practices for that digital application. If your digital app is down, if the digital experience is down, you look bad as a company. You look as bad as a company as if the sole of that sneaker fell off, as an example. And so guess what? As leaders, your purview is how your applications are being run and how you’re releasing software. You don’t get to say that’s the CTO’s job, and the CTO doesn’t get to say that’s the head of platform engineering’s job. After all, you’re now a software company.
And so I’m kind of making this case, or imploring leaders, to actually come down and understand their release practices. Because when bad stuff happens, you know, we call it bad ship. When bad ship happens, it turns out that’s bad for everyone. It’s bad for the developers, who become demotivated. It’s bad for your brand. It’s bad for your customers. And we see, you know, actually in the unfortunate case of the CrowdStrike outage, it can be bad for the world too. You know, hospitals shutting down, airlines not flying.
These are real consequences. So as a leader, do you get to divorce yourself from those and say, that’s the technology department’s purview? I’d say no.
Kate Holterhoff (13:28)
Right. And with the AI landscape evolving rapidly, the folks we speak with at RedMonk are, almost without exception, both excited by the promise of AI and increasingly alarmed by security issues, like those you mentioned with CrowdStrike. A lot of them relate to data privacy, but also jailbreaking and prompt injection, some of these unaddressed issues that some of the smartest minds in computer science are still trying to grapple with. So I’m curious, how should companies approach this moving forward in order to ensure that they are appropriately balancing safety with innovation when building generative AI software?
Dan Rogers (14:05)
You know, I kind of go back to what’s old is new, and this idea of controlling the release progression, you know, understanding the development life cycle; that doesn’t change with AI applications. You need to make sure that you progressively roll the software out. You need to make sure that you’re ready to do testing prior to actually pushing it out to larger groups. You need to make sure that you’re monitoring for errors as you roll things out, right there in the software release process. And you need to be able to do rollbacks instantly and holistically, across all of your applications, to the last known good state. Those core concepts were de rigueur kind of five years ago. Don’t forget them just because the application has the label Gen AI on it.
Kate Holterhoff (14:43)
I’ve really enjoyed speaking with you, Dan. Again, my name is Kate Holterhoff, Senior Analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you are watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments.