In this RedMonk conversation, Claire Vo, Chief Product and Technology Officer at LaunchDarkly, discusses the significance of experimentation in feature management and software development with Kate Holterhoff, senior analyst at RedMonk. They address how LaunchDarkly stands out in the experimentation space, the impact of AI on experimentation, and the evolving role of software engineers in this process. Claire emphasizes the importance of a culture of experimentation and risk mitigation in software releases, encouraging organizations to integrate these practices into their operations.
This RedMonk conversation is sponsored by LaunchDarkly.
Transcript
Kate Holterhoff (00:13)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk, and with me today is Claire Vo, Chief Product and Technology Officer at LaunchDarkly. Claire, thanks so much for joining me on the MonkCast.
Claire Vo (00:25)
Thanks for having me.
Kate Holterhoff (00:27)
All right, and LaunchDarkly is well known for feature management, but also offers feature experimentation. So before we jump into this topic, let’s set the table. Can you explain what is experimentation in the context of feature management?
Claire Vo (00:41)
Sure, so feature management is really a set of capabilities to get your code into production and in the hands of the user. And so it’s everything from feature flags to progressive delivery to targeted delivery, so that you can build features and have really granular control over how those features actually touch your users. But that’s really just the beginning. It’s not enough that your code hits production. What you want is to build things that actually make a difference to your users and therefore make a difference to your business. And that’s where experimentation comes into play. Feature management and experimentation fit really nicely together, because feature management lets you get features into production and experimentation lets you get the right feature into production by testing it, tuning it, and measuring what matters.
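To make the pairing Claire describes concrete, here is a minimal sketch of gating a feature behind a flag and feeding an experiment metric, assuming LaunchDarkly’s server-side Python SDK (ldclient); the flag key, the event key, and the checkout helpers are hypothetical illustrations, not anything from this conversation.

```python
# Minimal sketch (assumptions noted above): a flag controls which checkout a
# user sees, and a tracked event gives the experiment something to measure.
import ldclient
from ldclient import Context
from ldclient.config import Config

def render_new_checkout():
    print("new checkout flow")   # stand-in for real application code

def render_old_checkout():
    print("old checkout flow")

ldclient.set_config(Config("YOUR_SDK_KEY"))   # initialize once at startup
client = ldclient.get()

# Evaluate the flag for a specific user context (this is where targeted and
# progressive delivery come in).
context = Context.builder("user-123").build()
if client.variation("checkout-redesign", context, False):
    render_new_checkout()
else:
    render_old_checkout()

# Record the outcome the experiment cares about.
client.track("checkout-completed", context)
client.close()
```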
Kate Holterhoff (01:29)
All right, well that makes a lot of sense to me. As a former academic who still keeps a foot in the research world by maintaining an affiliated researcher status at Georgia Tech, I love this idea of experimentation as a sort of general paradigm. It has the scientific approach that balances creativity with data-driven decision-making that I find really appealing. Now, LaunchDarkly has obviously done a lot of innovating in the feature management space around experimentation, but I’m curious to hear what you think sets LaunchDarkly apart as an experimentation platform from competing vendors in the market.
Claire Vo (02:03)
Yes, and I’ve done experimentation now for two decades. So I feel like I have a good point of view of why I think LaunchDarkly has a really amazing product experimentation platform. And what that is, is we don’t think of experimentation as the realm of just one piece of the organization, whether it’s marketing or product or someone else. And we really don’t think that you can only experiment on the surface parts of your experience, like copy or imagery or design. We really believe that experimentation
should be baked into how you build digital experiences and how you build products. And from my point of view, and I’m sure you would share the same point of view, the folks and the teams that are at the center of building digital experiences and digital products are software engineers. You just can’t get around it. And so what I like about LaunchDarkly is we’ve built this end-to-end system that brings together the great feature management that software engineers love and then allows them to partner with their product teams, with their data teams, to really optimize and tune those features
at more than just the surface level. So of course, you can do the kind of experimentation we’re all thinking of, you know, this color button versus that color button, but you can also do experimentation deep in the stack, this database versus that database, this API query versus that API query, or this, you know, AI model versus that AI model. That’s the level at which experimentation can really impact your users and your business. And I believe that LaunchDarkly is the best at enabling software engineering teams, product teams, and data teams to do that altogether.
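The same flag-plus-metric pattern works below the surface. As a rough illustration of the “this database versus that database” case, here is a sketch that routes a query through one of two backends and records latency as a numeric experiment metric; it reuses the client from the sketch above, assumes the SDK’s track call accepts a numeric metric_value, and the flag key, event key, and query helpers are hypothetical.

```python
import time

def run_query_v1(sql):
    return []   # stand-in for the current database client

def run_query_v2(sql):
    return []   # stand-in for the candidate database client

def fetch_orders(client, context, sql):
    # The flag decides which backend serves this request.
    backend = client.variation("orders-query-backend", context, "v1")
    start = time.monotonic()
    rows = run_query_v2(sql) if backend == "v2" else run_query_v1(sql)
    elapsed_ms = (time.monotonic() - start) * 1000
    # A numeric metric the experiment can compare across variations.
    client.track("orders-query-latency-ms", context, metric_value=elapsed_ms)
    return rows
```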
Kate Holterhoff (03:32)
That makes sense. Let’s talk brass tacks. How should organizations be approaching experimentation in software development to ensure that they effectively measure the impact of new features?
Claire Vo (03:42)
What I always say is that you have to make sure things aren’t going wrong, or at least you’re having a neutral impact on your customers. So many of us presume that just adding something new is net positive when often when we add something new, new code, whether front-end or back-end, we can actually degrade the user experience without knowing it. Sometimes you’ll see that as a big, terrible crisis where everybody’s laptop goes down and no one can use the internet or their computer to get their job.
or life done. It could be something as big as that, or it could be something as small as, you know what, I increased latency on this checkout page by 30% and people are dropping off because they’re getting impatient waiting for the checkout. And all those things can be measured through experimentation tactics and strategies. And so first, I always like to tell engineering teams: think about your experimentation as current state versus new state, and are you sure it’s working well?
Once you’ve gotten past that, you really want to think about what things can you experiment on in the product experience and plan that roadmap ahead of time. So don’t just be one and done with your feature release. Instead think, my feature release is going to be my first best path, my MVP of what I can do, but I’m going to have plans to tune and iterate and optimize along the way. Product managers should have those plans, engineers should have those plans, and then they should figure out the measures that matter for their business, whether those are technical measures, revenue measures or user behavior measures and make sure that every release has those measures implemented.
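One lightweight way to make “every release has its measures” explicit is to write the measures down next to the flag itself. The sketch below is a hypothetical illustration, not a LaunchDarkly API; the metric names echo the checkout example above.

```python
from dataclasses import dataclass

@dataclass
class ReleasePlan:
    flag_key: str
    success_metrics: list[str]    # what "better" means: revenue, user behavior
    guardrail_metrics: list[str]  # what must not get worse: latency, errors

# Hypothetical plan for the checkout release discussed above.
checkout_redesign = ReleasePlan(
    flag_key="checkout-redesign",
    success_metrics=["checkout-completed"],
    guardrail_metrics=["checkout-latency-ms", "checkout-error-rate"],
)
```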
Kate Holterhoff (05:17)
Let’s pivot to the elephant in the room by which, of course, I mean AI. How has AI influenced experimentation? Why is it even more important for organizations to experiment when introducing new AI features? And how do you see post-AI experimentation evolving over the next 6 to 12 months?
Claire Vo (05:33)
Yeah, so as somebody who builds with AI, as I always say, non-deterministic systems non-determine, which is to say it’s really hard to predict the real experience of an end user using an AI- or LLM-backed product, because these systems are by nature non-deterministic.
You can only be sure to some level that you’re going to get the output that you expect when you put in the input. That’s just the nature of building with these products. And so experimentation and in fact, experimentation in production is one of the best ways to nail AI experiences that work. You really do have to figure out what model works, what prompt works, what configuration and what parameters work. Do you want to chain models together? Do you not? Do users like
long lengthy responses, but they have to wait for them? Or do they like short snappy ones that are a little faster? You really have to determine that. And the only way to do that is through this iteration and experience and experimentation with user feedback at the center. So I think it’s more important than ever because of the nature of building with these products.
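As a sketch of what “which model, which prompt, which parameters” can look like in practice, the configuration itself can be served as a flag variation and user feedback tracked as the success metric. The flag key, the config shape, the call_model helper, and the event keys below are hypothetical; this again assumes the same server-side SDK client as the earlier sketches.

```python
def call_model(config, user_message):
    # Stand-in for a real LLM call that would use config["model"],
    # config["prompt"], and config["temperature"].
    return f"({config['model']}) answer to: {user_message}"

def answer_question(client, context, user_message):
    default_config = {"model": "model-a",
                      "prompt": "You are a concise assistant.",
                      "temperature": 0.2}
    # A JSON flag variation carries the whole model/prompt/parameter bundle.
    config = client.variation("assistant-config", context, default_config)
    return call_model(config, user_message)

def record_feedback(client, context, thumbs_up):
    # User feedback is the signal the experiment optimizes for.
    event = "assistant-thumbs-up" if thumbs_up else "assistant-thumbs-down"
    client.track(event, context)
```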
Kate Holterhoff (06:42)
Here at RedMonk, we are developer focused. So I want to hear more about how developers are engaging with experimentation, because historically, experimentation was perceived to be more of a playground for marketers. Now, however, experimentation has entered the domain of software engineers and product managers. So I’m interested, what precipitated this shift?
Claire Vo (07:03)
I think, like all things, consumer demand and technology advances precipitated this shift, in that consumers have now experienced
amazing digital experiences, whether that’s a mobile app or a website where you can use your voice to chat with an all-knowing AI bot, the consumer experience expectations are quite high. And to deliver those experiences, you have to build pretty complex software. This isn’t just a CMS and some HTML and a static site. These are dynamic, interactive, adaptive applications. And so the
technology underlying those is more complex and implementing those technologies requires software engineers. And so you can’t get away from engaging the engineering team when you want to do meaningful experimentation on your experiences, because they’re complex. To change them requires writing code; it requires more than changing colors and copy, as I said. And you can be more creative when you involve a software engineer in the experimentation hypothesis generation
process, in the design process, all those things, because often software engineers can think of ways to build products that maybe product managers and marketers haven’t thought of yet, because they don’t understand the underlying technology to the depth that the engineering team does. So I really believe it’s a combination of we all expect pretty fancy experiences, and the ones that can give it to us are software engineers.
Kate Holterhoff (08:34)
Sounds like there’s an element of dog fooding going on in your explanations, Claire. How do you encourage a culture of experimentation at LaunchDarkly? And by extension, how do you encourage that philosophy shift for your customers and prospects?
Claire Vo (08:48)
Yeah, so, you know, it is really a culture of experimentation, and that leadership comes from the top. And the first thing that folks need to do if they want to instill a culture of experimentation is get their ego and everyone else’s ego out of the way. So often, and I’m sure you’ve had this experience, there’s some debate about what we should do with something we build, and, you know, two people are on side A (“I think the button should be red”) and two people are on side B (“No, red is a scary color and our users will hate it”),
and they think that this sort of academic debate will get them to the right answer. And the reality is the users will give you the right answer. So that exact example came up. We were debating a specific flow in our site. There were two sides to it. Both had well-reasoned arguments for why, you know, their design was the best practice. And I said, you know what?
We don’t get to decide, our users get to decide, build it as an experiment and tell me what happens. And that was the most honest reflection of the correct experience we should build. But it requires somebody saying, I don’t know, you don’t know, and you don’t know. The person that knows, the people that know are actual users, and let’s rely on them to tell us what’s right.
Kate Holterhoff (10:03)
That’s a fantastic example. As you look to the new year ahead, what developments do you expect to see in the experimentation space? And as CPTO, how do you ensure that you’re on the forefront of those evolutions?
Claire Vo (10:16)
So many people think that experimentation is about optimization. It’s about getting from good to great or great to excellent. And I would encourage folks, as we look back at the last year, we’ve seen a lot of kind of public failures of software. And I won’t name names, it hasn’t just been one. It’s been many. And I think folks in technology teams don’t appreciate the practice of experimentation as risk mitigation. Essentially, every change you implement in a system
should be measured. You should measure what the difference is between the new change and the old change. And you should know for sure, within statistical significance, if it’s having a negative impact, because risk management is huge. And if you put this risk management lens on it, there is no reason that every single feature release should not be a version of an experiment. Then you can get yourself to the good stuff of how do I optimize? And I think that’s the sort of fun stuff that we all like to think about with experimentation.
But I would encourage folks to consider: are they underutilizing experimentation and the aspect of risk management in software releases? And how can they integrate that as part of their engineering operations to ensure that the things they’re doing are having the impact that they want?
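As a rough illustration of the risk-mitigation framing, a guardrail check can be phrased as “is the new variation measurably worse, within statistical significance?” The sketch below uses a one-sided two-proportion z-test on hypothetical error counts; it illustrates the idea and is not LaunchDarkly’s analysis engine.

```python
from math import sqrt
from statistics import NormalDist

def worse_than_control(control_fail, control_n, treat_fail, treat_n, alpha=0.05):
    """One-sided two-proportion z-test: is the treatment's failure rate
    (e.g. checkout errors) significantly higher than the control's?"""
    p1, p2 = control_fail / control_n, treat_fail / treat_n
    pooled = (control_fail + treat_fail) / (control_n + treat_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treat_n))
    z = (p2 - p1) / se
    p_value = 1 - NormalDist().cdf(z)   # chance of seeing this gap if nothing changed
    return p_value < alpha

# Hypothetical counts: 1.2% errors on the old path vs 1.9% on the new one.
print(worse_than_control(control_fail=120, control_n=10_000,
                         treat_fail=190, treat_n=10_000))   # True: investigate or roll back
```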
Kate Holterhoff (11:30)
I’ve really enjoyed speaking with you, Claire. Again, my name is Kate Holterhoff, Senior Analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you are watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments.