A RedMonk Conversation: What’s the role of generative AI in production ops?


In this RedMonk Conversation between Stephen O’Grady and Anurag Gupta, founder and CEO of Shoreline, the focus is on the potential impact of generative AI on infrastructure. “It’s gonna be a game changer.” 

They highlight the challenges faced by those responsible for maintaining operational stability and incident response, such as the lack of reliable and up-to-date information available on platforms like Stack Overflow and Confluence wikis. They also discuss how generative AI can address these challenges by synthesizing data into short, accurate answers, and enabling follow-up questions.

The conversation also addresses the risk of hallucinations, or incorrect responses, and how testing can help ensure the reliability of generative AI systems. Anurag concludes with an explanation of how curation can enhance confidence in generative AI and unlock its full potential in infrastructure management.

This was a RedMonk video, sponsored by Shoreline.


Transcript

Stephen O’Grady: Good morning. Good afternoon. Good evening. I am Stephen O’Grady. I’m a Principal Analyst and Co-founder of RedMonk and I’m here to talk to Anurag. Anurag, would you care to introduce yourself?

Anurag Gupta: Yeah. Hey, my name is Anurag Gupta. I’m the founder and CEO of Shoreline Software. We’re focused on automated incident response. Before Shoreline, I used to run database and analytics for AWS for eight years, which really taught me the importance of keeping the lights on.

Stephen: Indeed. And we are here to talk today about a topic, certainly of the moment. I don’t think it’s any secret, certainly for us or for anybody watching this I’m sure, that generative AI is having a moment and certainly making a huge impact on the industry as a whole in sort of short order. So I guess we’ll just sort of kick off with a basic question: as you think about it, what do you think that the sort of potential impact or the potential role for generative AI should be with respect to infrastructure? Let’s just start there.

Anurag: So I think it’s going to be a game changer, because the challenge for all of us who are responsible for keeping the lights on is figuring out what to do when there’s an incident. You can go to Stack Overflow, you can go to Reddit, you can go to Google. And the problem is that the information there is often incorrect and almost always incomplete. The best in class today is Confluence wikis, but they’re also incomplete and often out of date. And so that’s a challenge. I mean, it’s just like a design doc: who keeps those up to date once the code is written? So what generative AI gives us is the ability to synthesize all the data that’s out there into short answers and to follow chain-of-thought reasoning, which becomes really important for asking the follow-up questions to figure out what to do. So I think it really has the potential to be very important.

Stephen: Yeah, yeah, for sure. And you know, we’re seeing it employed in a wide variety of areas. And, to your point, being able to generate more complete and more accurate responses across a wide range of disciplines. So obviously, depending on what it’s trained on and what you’ve fed it, I think game changer is really not an overstatement in that regard. But let’s focus on today. So if we think about where we’re at today, and you think about this intersection that we’re talking about of generative AI and infrastructure, what in your mind is state of the art? What can be done? If I’m a customer, or anybody else watching this right now, what can I expect? What can I do right now in the moment?

Anurag: So, I mean, there’s a lot of noise out there, right? Every single company is doing AI washing right now, talking about their generative AI capabilities. So two things I’ve liked are: there’s a company out there that is providing the capability to automatically generate Terraform modules from prompts. There’s another that automatically generates kubectl commands, again based on English-language prompts. And the value of that is it reduces labor and lookup time, but the outputs are testable. Because the problem with generative AI is hallucinations. I mean, imagine you were talking to somebody who told you a wrong answer just as confidently as a correct one. You’d stop talking to them, right?

Stephen: Yeah, we’ve all had that experience for sure.

Anurag: That’s a challenge. And there aren’t a lot of guardrails in production operations, and the mistakes matter.

Stephen: Yeah. So, yeah, that’s the thing. I think that those are good examples and I think we’ll just see more and more of these over time, as AI gets rolled out and deployed towards these use cases. And I think you’re right, cautioning against AI washing, that’s a thing for sure. But this is notably different than past waves in our experience, because when you dig under the covers for a lot of the use cases we’re talking about, there’s actually — there’s real meat there, right? You know, this is, “hey, this is interesting. I haven’t seen this before.” And this is useful for customers and so on. So I’ll use that as a lead in to our last question here, which is, we’ve talked about the potential, right? It’s a game changer, potentially transformative from an infrastructure standpoint versus where we’re at today, right? Which is, hey, we have a couple use cases. It’s interesting. You can do these sort of discrete things. So, what in your mind has to happen for us to reach that potential? What are the next steps? Where do you go as a company moving forward?

Anurag: So in a world where it’s incredibly easy to create content, but content has hallucinations, the answer is going to be curation. Wikipedia lets everybody add content, but there are editors who control what is accepted. Same as open source projects, right? And so we need to do the same thing for Runbooks. At Shoreline we create about 100 Runbooks a month now using generative AI, but we also make sure they work, stay up to date, and are searchable, so you can actually find them. In a world where content is ever easier to create, what you need is confidence. And confidence comes from curation: trusted sources that you can go to that are providing you answers that you can believe in. That’s how I think we get from Confluence wikis to modern-day Runbooks.

Stephen: Yeah, I like it. I like it, well, one, because maybe we’re not all out of work just yet, but two, I like the way that you’re decoupling, essentially, the generation from the quality, the confidence, as you put it, in that content. Because that’s the nature of the systems now, right? They’re spinning things out and asking you to take them at face value. But that sort of Wikipedia-like, “hey, is this actually accurate? Somebody check this,” is certainly a means of mitigating the greatest weakness of these systems today.

Anurag: You know, the other thing we’re doing is co-piloting. So somebody can ask a prompt to do something like, “hey, can you convert this from running for AWS to Azure” or “from Linux to Windows” and all of those sorts of things. So you don’t have to be an expert in every piece of your tech stack anymore. I think that’s going to be really valuable for people.

Stephen: So that makes perfect sense to me. And with that, we’ll wrap up. I just want to say thanks so much for stopping by, Anurag.

Anurag: Yeah, thank you, Stephen.

Stephen: Awesome, yeah. Pleasure. Cheers, everybody.
