Earlier this week Red Hat announced the general availability of Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. Packaged as a service and accessed via VS Code, it generates Ansible code based on user prompts and is one of two initial use-case-specific iterations of IBM’s watsonx Code Assistant (the other is aimed at mainframe modernization). While generative AI is often thought of in more all-purpose contexts, these domain-specific AI offerings leverage fine-tuning and customization with the aim of producing more reliable code than that generated by more general-purpose AI chatbots and code assistant tools.
It’s worth noting that I have been following Ansible for quite a while (and even got to keynote AnsibleFest 2021), which means that I was at AnsibleFest in October 2022 when they announced Project Wisdom, an earlier incarnation of Ansible Lightspeed. If you are up on your AI history, you’ll recognize that this dropped the month before ChatGPT’s initial release kicked the tech industry–and society at large–into a full-blown genAI frenzy. This matter of timing has given IBM/Red Hat a bit of an advantage in how they think and talk about leveraging AI. The result is that Ansible has managed to get a head start on IaC platforms from the likes of HashiCorp and Pulumi when it comes to leveraging generative AI to facilitate and enable automation. And while it remains to be seen precisely how generative AI will continue to shape the automation landscape, Ansible Lightspeed makes a convincing case for domain-specific approaches.
In order to dig further into the approach Red Hat/IBM is taking with Ansible Lightspeed I spoke with Kaete Piccirilli, Director of Product Marketing for Red Hat Ansible, a few weeks before the GA announcement. Kaete was kind enough to talk a bit about the background and evolution of this approach:
We started it probably close to two years ago now, and we were collaborating with IBM Research to see what was possible. We’ve got a great partner in IBM, and we wanted to keep a few things in mind, which is how do we bring the power of AI to the Ansible code experience? Now that’s kind of generic in nature, but we also found as we talked to customers and the community that there was a little bit of a challenge with automation skills and maybe hiring people to be a part of the team and to be able to accelerate automation as much as people wanted to. So we were thinking, okay, how do we make it more accessible to more IT professionals to be able to utilize automation at scale?
And additionally, we wanted to make our Ansible creators, our automation developers and creators, be more productive, more efficient, maybe error-free, be able to accelerate the path to building automation, and maybe even help them do some of the automation that they have a little bit better, think about some of those pieces and parts.
Last but not least, we really wanted to think about a purpose-built model–that’s where IBM comes in–that was more efficient, more accurate, and is very specific to the Ansible domain. One of the cool things about Ansible YAML is that it’s pretty structured in nature. So it was actually a great place to start with thinking about how do you train on a language, because it’s so structured in nature. So the outcome, what we have here today, is Red Hat Ansible Lightspeed and the integration with IBM watsonx Code Assistant.
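To make the prompt-to-code experience concrete: in VS Code, a creator writes a task’s `name` line as a natural-language prompt, and Lightspeed suggests the module content beneath it. The sketch below is illustrative only–the playbook structure is standard Ansible, but the “generated” task bodies are hypothetical, not captured output from the service:

```yaml
---
# The creator supplies the play header and each task's `name` line as
# a prompt; the ansible.builtin.* content under each name is the kind
# of suggestion Lightspeed returns (illustrative, not actual output).
- name: Install and configure nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install the latest version of nginx
      ansible.builtin.package:
        name: nginx
        state: latest

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

That predictable shape–a named task mapping to a module and a small set of parameters–is what Kaete means by Ansible YAML being “structured in nature,” and it is part of why the domain lends itself to a fine-tuned model.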
Having had the opportunity to speak with Kaete and the rest of the Red Hat Ansible team often over the past few years, I am completely unsurprised to see the needs of technical users centered in this vision of the intersection of AI and automation. Indeed, one of the strongest arguments I have heard for purpose-built AI offerings–especially when they can be trained on use-case-specific data–is that they have the potential not only to make users more productive, but to augment and improve existing skill sets in ways that feel aligned with existing coding practices across individuals and teams. As Kaete notes, this also has the potential to reach parts of an organization that may be resistant to automation efforts:
One thing I hear about a lot when I talk with customers is my team loves automation. We’re all in. But these three other teams, they’re not all in. And so we want to be able to reduce those barriers for that code creation because maybe they’re just not comfortable with automation or how to write it or how to use it. And if we can empower those individuals and those teams to work together, you’re able to see just an amazing thing happen across teams. That collaboration that really takes place, that helps just expand what teams can do. And last but not least, we talked about this before, (and I’d love to hear from your perspective) is the trust that comes with what those results have to say. And we have been thinking about that in mind with both the accelerated user, someone who’s using it every day, and then that new user. That’s where we get lots of questions. We’re just gonna toss this into a place where someone doesn’t know how to use automation? And so it’s really to help people get a little bit further every day[….] It just gives people a place to start. And that’s sometimes a challenge when bringing in people who are developing new skills is they just need a place to start. And so if they can trust the start, then they can begin to build that framework within their organization of a kind of automation-first mindset.
Kaete and I cover a lot in the rest of the conversation, including concerns that both Red Hat and RedMonk have been hearing from clients around generative AI, data, and privacy; some insights into training the model behind Ansible Lightspeed; and some of the advantages of using a domain-specific AI tool over more general-purpose AI offerings.
You can watch the video of this conversation below or see the full transcript, related resources, and options for listening to this conversation as a podcast.
Disclosure: The video discussed here was sponsored by Red Hat (but this post was not). IBM, HashiCorp, Microsoft, and Red Hat are currently RedMonk clients; Pulumi is not.
Related RedMonk posts: