A RedMonk Conversation: Private AI with VMware by Broadcom


AI-based solutions are cropping up in every portion of the tech stack and at every stage of the SDLC, from chips to applications and from writing code to running it in production. With new AI solutions available across the board, how can enterprises decide where to place their investments in this new and evolving space?

Join RedMonk’s Rachel Stephens as she discusses these questions with Himanshu Singh, who runs AI Product Marketing at VMware by Broadcom. VMware created their Private AI solution to help address these enterprise AI concerns. Join Stephens and Singh as they discuss how enterprises are adopting this technology, how VMware thinks about privacy and risk when it comes to AI, and how enterprises can orchestrate AI workloads to be part of their broader technology solutions.

This was a RedMonk video, sponsored by VMware by Broadcom.


Transcript

Rachel Stephens: Hi everyone. Welcome to RedMonk Conversations. I’m Rachel Stephens, and with me today I have Himanshu Singh. He runs AI product marketing at VMware by Broadcom. I’m really excited for our conversation today because I think we’re going to talk about some of the things that make AI a really unique technology entering our space, and I’m excited to hear Himanshu’s thoughts. So first, do you want to give us a quick intro to who you are and what you’re doing, anything that will help set the stage?

Himanshu Singh: Hi, Rachel. First of all, thank you for having me. This is a fantastic opportunity to talk to your audience about what’s going on in AI, and specifically what we’re doing from a VMware perspective as well. I’ve actually been at VMware for quite some time now, a little over eleven years, and done a variety of different things. Right now I focus on product marketing for our core compute platform, which is vSphere, as well as our AI offering, which we’re calling VMware Private AI. And of course, as we’ve moved from being VMware, an independent company, to being part of Broadcom, we’re excited about the enhanced opportunities we now have to bring a lot of this really cool technology to our customers.

Rachel: Very cool. And I think one of the things that’s interesting about VMware, especially in your new Broadcom era, is that you really do have this enterprise audience that is unique. Your account reach across the technical world is broad and massive; you really are everywhere. So I think you have a really interesting take on what it means to bring software into an enterprise. One of the things that stood out to me when I was at your conference last fall, and I keep calling it VMworld, but it’s not VMworld anymore, it’s VMware Explore. When I was at VMware Explore, one thing impressed me so much that I ended up tweeting it: you brought your general counsel on stage during a technical keynote, which I have never seen in my career as an analyst. It’s the first time I’ve seen a general counsel on stage. You brought Amy Olli up to speak about how VMware is trying to address some of the specific market concerns around AI. And what I thought was interesting is that, as we talked about, VMware operates all the time in a world where compliance and security are paramount concerns for your customers in everything they do.

So what was it about AI specifically that brought you to a place where you needed to bring Amy on stage? I have some thoughts of my own, but I’d love to hear: what are you hearing from customers about what it means to bring AI into an enterprise space?

Himanshu: Yeah, so many thoughts. A few things I want to call out. I’m sure Amy was super surprised as well by the idea of bringing a general counsel up to the main stage at a technical conference. That just doesn’t happen. But that’s the reality of where we are today with AI and the opportunity around it. If you think back over the years, we’ve had these moments in time when things like the PC came about, the web became easily accessible to everybody, or the mobile revolution happened. I think we are in that moment now with AI, where we’re really looking at opening up the impact of AI to everybody. It doesn’t matter if you’re in the tech industry, in manufacturing, or a legal counsel, for that matter. AI has the potential to really enhance all the things we do around us. And there are all sorts of Skynet situations possible as well. But what we want to do is responsible AI. We want to do AI in a way that impacts us positively.

It makes us a lot more productive and helps us do the things we want to do in an easier way, in a way that improves our innovative capabilities. Now, AI is very nascent at this point in time; of course, it’s new. What we wanted to do at the VMware Explore event by bringing in Amy was to make exactly that point: for the folks in attendance at the conference, and the people watching online, to have that moment of saying, oh, hang on, this is not just another tech executive, a product marketer, or someone from engineering. This is somebody my company has too; every single organization is going to have a general counsel. And she talked about how information retrieval, being able to get to documentation and content much, much faster, is so important to her team and makes them so much more productive. Well, that’s a use case that can be applied anywhere, right? The most common one you can think of, or the obvious one where it’s already being applied, is contact centers, for example.

It just makes it so much easier for employees to leverage all the knowledge that’s been built up within the organization and have it available to them, so they can serve their end customers much more easily. Now, from a VMware perspective, we serve 300,000 to 400,000 customers across all sorts of verticals, and they’re coming in and saying, hey, we want to make sure we can get the benefits of AI to our employees, our organizations, and our customers. Because it’s so new, they’re looking for a trusted partner, a trusted advisor, to work with them in their specific scenario and really help them adopt these technologies. At the same time, there are certain challenges they think about. AI used to be done in silos, with data scientists creating unicorn models that only did one thing. We’ve now moved to this whole era of generative AI, where the idea is that you have a model that can be fine-tuned on your organization’s data, it can serve your needs, it can be used and reused, and the average employee can interact with it in natural language rather than a programming language.

All of that also means this needs to be treated as a tier 1 enterprise workload. What that in turn means is that you have to think about all the typical things: availability, performance, cost, et cetera. But the big one we hear about is privacy and security, right? Fundamentally, anything related to AI uses vast amounts of data, and we’re specifically talking about data that might be unique to an organization, that is their IP. It needs to be protected, and you need to make sure it’s not leaking outside the organizational boundary. Which is exactly why, from a VMware by Broadcom perspective, we’ve come up with the idea of Private AI, putting privacy at the very top of the AI conversation. There are all sorts of other things, as I mentioned, around performance and cost, and you want to get all of that; if there are compliance needs, you’ve got to think about those as well. But fundamentally, privacy is very important. That’s what our customers tell us, it’s resonating very well with them, and it’s exactly why we get so much interest from customers talking to us about it.
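To make the knowledge-retrieval use case above concrete, here is a minimal sketch of what question answering over internal documents can look like when every component runs inside an organization’s own boundary. The library, embedding model, and sample documents are illustrative assumptions, not part of any VMware offering.

```python
# Minimal sketch: private retrieval over internal documents.
# Everything runs locally, so no proprietary data leaves the
# organization. Model and documents are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

internal_docs = [
    "Contract renewals require legal sign-off within 30 days.",
    "Vendor NDAs are archived in the compliance document store.",
    "Support escalations go to the on-call contact center lead.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
doc_vectors = encoder.encode(internal_docs, convert_to_tensor=True)

query = "Who has to approve a contract renewal?"
query_vector = encoder.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity; in a fuller pipeline the best
# matches would be handed to an LLM as context for a generated answer.
scores = util.cos_sim(query_vector, doc_vectors)[0]
print(internal_docs[int(scores.argmax())])
```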

Rachel: Yeah, that makes sense. I think that speaks to some of the concerns: we’re going to bring Amy on stage, we’re going to show that we have the compliance and legal footing and framework for what VMware is talking about as we enter this space. And I think that’s interesting, because a lot of the time we at RedMonk have that developer-practitioner focus on what people are actually using. That’s absolutely the case with AI: we want to understand how people are kicking the tires on these new products, with new products coming online every day, and how they impact people’s workflows. But AI is a fundamentally different beast in terms of needing that legal framework and permission to use enterprise data in a safe and effective way. So it’s an interesting situation where you have to balance two separate sets of concerns: productivity, and also just making sure that you’re in a good place. So let’s talk a bit about what the VMware solution actually looks like. Can you tell me about your Private AI Foundation?

Himanshu: Yeah. So as I said earlier, when we talk about Private AI, it’s not private AI as in a private cloud kind of scenario; the idea with Private AI is privacy, and we do get that question all the time. The solution essentially focuses on four or five key things. The first is privacy: ensuring that the customer’s intellectual property, their data, and the access to that data are all private, protected, behind secure boundaries. The second is making sure customers have a variety of choices in how they build their stack, what type of large language models they use, and what type of frameworks they use, and we can talk more about what we’re doing with our partners in that space. Then there’s cost as well as performance. If you’re trying to do generative AI in house, that involves a significant investment, if you think about GPUs, for example. So you want to get all the performance from the investment you’re making, and you want to look at the overall TCO picture as well.

So we have to focus on that. And then the last one is compliance: making sure the workload meets all the needs of a typical tier 1 workload running in your environment, and that your IT is able to support it. Fundamentally, the way we do it is with VMware Cloud Foundation, the standardized private cloud platform we offer. The idea is for VCF, VMware Cloud Foundation, to be that ubiquitous layer that powers all tier 1 workloads in the organization. With that comes consistency, and with that also come things like security and privacy. It helps the organization manage and run these workloads through central IT without creating yet another silo, as we’ve seen happen with specialized workloads in the past. That standardization and IT support helps from a cost perspective, from a skill set perspective, and from the perspective of deploying fast and making sure the customer succeeds in their AI deployment. Now, on top of the VCF stack we are working with (…) you have your typical layer where you make your choices about the large language models you want to use, the frameworks you want to use, and any specialized tool sets you’re looking for.

So with that stack in place, you can take your LLM of choice, fine-tune it on the data that is unique to you and your organization, then deploy your large language model for inferencing and have your employees go and use it. That’s where we come in. Fundamentally, we are an infrastructure company, and what we’ve done is make sure that the platform we offer, VCF, is optimized for running AI workloads. And we also have a large ecosystem of partners that enables the choice aspect I mentioned earlier.
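As an illustration of that workflow (take an open model, fine-tune it on data that stays in house, then save the result for inferencing), here is a minimal sketch using the Hugging Face transformers, datasets, and peft libraries. The base model name, data file, and hyperparameters are placeholder assumptions, not details of the VCF stack itself.

```python
# Minimal sketch: fine-tune an open LLM on in-house text with LoRA
# adapters, so training stays on the organization's own GPUs.
# Model name, data file, and hyperparameters are illustrative only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"            # any open base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all of the base weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]))

data = load_dataset("json", data_files="internal_corpus.jsonl")["train"]
data = data.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("out/adapter")  # ship just the adapter for inferencing
```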

Rachel: I feel like that was such a fantastic answer that I might not even need to ask my next question, because you’ve covered a lot of it: tier 1 workloads, standardization, making sure you’re not creating silos. All of these are great stories, but one thing I really wanted to ask explicitly is: why does this need to happen at the VMware level of the stack? What we’re seeing in the AI tool space is an explosion of AI-based tooling up and down the stack, all the way from the chips up into developer IDE coding tools. It’s ubiquitous throughout the stack. And one of the questions everyone is trying to answer, as they figure out what AI adoption looks like in their organization, is where it needs to happen. So could you make the case for why the VMware solution is the approach you think makes sense for an enterprise?

Himanshu: Yeah. Part of it goes back to the idea that if you looked at maybe 20 years ago, or even longer, databases were a new thing, right? And now you cannot think of a workload that doesn’t use a database; that sentence does not make sense anymore. The thought here is that as technology evolves, AI is going to become that ubiquitous in its applicability and in its ability to enhance any kind of workload. So no matter what type of organization you are, no matter how big or small, there will be a way for you to benefit from this. And if you think about organizations looking to manage their costs, drive more innovation, gain a competitive edge in the market, do a lot more for their customers, and give their end customers a better experience, this is where AI applicability becomes very important from an implementation perspective. Hence all the interest in AI, hence all the queries about AI.

Now, as we think about this as a workload from a VMware perspective, we want to make sure the fundamental infrastructure layer you have is optimized to deliver the performance you need at the cost you want, while giving you all the things we talked about earlier, like security and privacy. The fact that 300,000 to 400,000 customers today trust VMware to run their most business critical workloads leads into this: those same workloads are going to be enhanced by AI, or you’re building net new workloads that you’re deploying. So we want customers to continue to work with us as their strategic partner and strategic advisor, and we want what we offer to keep evolving and helping customers meet their needs and objectives going forward. That’s beyond the point-in-time hype of any new technology. The benefit comes when it becomes obvious and ubiquitous; that’s when the adoption curve has been crossed, and that’s when all the value really becomes available to everybody rather than a select few.

And that’s exactly what we’re trying to do. We’re trying to democratize AI, to make sure that no matter what type of organization you are and no matter what type of AI you’re looking to do, there is an option available to you as a VMware customer.

Rachel: I think you mentioned primarily inferencing. Is that where you’re thinking most of the workloads will go, at least in the early days?

Himanshu: For the typical enterprise, I think that’s where it is. If you think about core training, look at the companies coming up with these models; there are a lot of LLMs being created right now. I was watching something the other day that noted Hugging Face has more than half a million models available to pick and choose from. There are just so many options out there. So you’re going to start with an LLM that’s related to your field, for example. What you don’t want to do as an enterprise is reinvent the wheel. There are going to be organizations with massive compute power and resources that build these LLMs and train them on huge amounts of data, and what you want to do is leverage that and make it work for you based on your internal IP. The way we see enterprise adoption happening is to take that standardized LLM and fine-tune it with your domain specific data.

And then you drive inferencing, which could happen in various locations, because it matters where your users reside. Do you have specific edge locations? You have remote offices, point of sale locations, data centers, and people in the field who might need access through a mobile interface. All of these different use cases need to be addressed, and you want the computing that happens at the end to be as light as possible. So you do the fine-tuning, and then the last stage, inferencing, is where the most value is going to be from an end user perspective. Now, as the space matures, I think it’s going to evolve a lot. We’re going to see many more vendors and organizations offering a variety of services here as well, so we’ll see how things go. One thing is for sure: we’re watching the space, and we’re setting ourselves up to serve the needs of our customers as we go forward.
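On the point about keeping the computing at the edge as light as possible, here is a minimal sketch of serving a quantized model at an edge site with the llama-cpp-python library. The model file is a placeholder; any GGUF-format model, including a fine-tuned one, would load the same way.

```python
# Minimal sketch: lightweight inference at an edge site (a store,
# branch office, or point-of-sale server) using a quantized model.
# The model path is a placeholder for a fine-tuned, quantized artifact.
from llama_cpp import Llama

llm = Llama(model_path="models/company-slm-q4.gguf", n_ctx=2048)

response = llm(
    "Q: How do I process a return without a receipt?\nA:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model drifts into a new question
)
print(response["choices"][0]["text"].strip())
```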

Rachel: Wonderful. And we’ve talked about how you partner with your customers, but I also want to hear about the ecosystem. The AI space is so nascent right now, and people are all trying to figure out how all of these pieces come together, so we’ve seen some really crucial and interesting partnerships emerging across ecosystems as people try to bring AI tooling and their best possible products forward. Could you talk a little about your own partnership strategy and what you’re working on within the ecosystem?

Himanshu: Yeah. Fundamentally, if you think about how VMware has operated, we internally talk about being kind of the Switzerland of the ecosystem, right? We want to have the broadest set of partnerships and the broadest set of tool sets available on our platform, on VMware Cloud Foundation. The same thought process carries over to AI workloads. So far we’ve announced three major partnerships. One is with Nvidia. We’ve been working with Nvidia for more than a decade at this point, and we’ve had AI offerings with them for about five years now, in fact. And so from a GenAI perspective, the obvious one, given everybody’s working with Nvidia as well, is the VMware–

Rachel: I was looking at how many partnerships Nvidia has done and how many times Jensen has shown up on stage at somebody else’s conference, and his air miles last year must have just been out of control.

Himanshu: I know, right? I could borrow some of them, actually; I’d love to take a vacation sometime soon. So, hey Jensen, if you’re watching this…

Rachel: Nvidia is a partnership that makes sense.

Himanshu: Yeah. So that’s the thing with Nvidia, right? We’ve had a fantastic partnership, with a lot of jointly engineered solutions, for over a decade at this point. They’re the partner we’ve been working with the longest, so it was very natural, and it’s the partnership we’re furthest along on, I would say. We’re coming out with an offering called VMware Private AI Foundation with Nvidia, which is essentially an overall stack that has all the VMware components and all the Nvidia components. It becomes, I would say, that easy button for enterprises: the leader in the AI space and the leader in the cloud infrastructure space coming together to make this offering available for customers to adopt. It’s a product offering that you’ll be able to purchase very soon. In fact, we made the announcement last year at Explore, and we’re expecting to release the solution very soon, so watch out for that announcement. The other partnership is with Intel, another company we’ve worked with for decades at this point.

What they’ve done is come up with their new generation of Xeon CPUs with embedded AI accelerators. That makes AI enabled chipsets available to you as part of a typical refresh cycle, right? GPUs right now can be, one, expensive, and there’s a whole supply chain concern there as well; if you’re new to the market and trying to get your hands on an Nvidia GPU, there’s a wait period. But beyond that, if you’re looking to run what’s now being called a small language model instead of a large language model, or a typical AI workload that is not a generative AI workload, you could very well look at the capabilities that chipset from Intel offers. It’s the same idea: you have the VCF stack, then the set of tools Intel has packaged up and made available, and you run that on the new Intel chipset, with the AMX capabilities in Intel’s CPUs on top of that as well.

So with Nvidia we’re offering a whole solution, essentially a SKU, a product that you can purchase. With Intel, what we’ve done is create a jointly validated solution across the overall stack. We’ve put together a reference architecture that folks can leverage, work with, and deploy in their own environments. So that’s Intel.
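To illustrate the CPU-based path described here, the sketch below runs a small model on an AMX-capable Xeon using the Intel Extension for PyTorch. The model name and prompt are placeholder assumptions, and this is not the packaged Intel tool set mentioned above.

```python
# Minimal sketch: CPU-only inference on an AMX-capable Xeon, using
# Intel Extension for PyTorch to exploit the built-in accelerators.
# Model choice and prompt are illustrative placeholders.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-1.3b"  # stands in for a small language model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

# ipex.optimize rewrites ops to use the AMX/bfloat16 paths on newer Xeons.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Our branch server refresh plan:", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```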

Then the third one is with IBM. IBM has been in the AI space for quite some time; their Watson offering has been there, and they’ve been doing some fantastic work. They’ve typically made Watson available in the cloud, and what we’re doing now is making all the capabilities of the Watson ecosystem available to our customers on an on premises basis as well. That’s the big new partnership with IBM, another one we announced recently. And with IBM it’s a similar idea: a jointly validated solution across the stack, reference architectures, and all sorts of guidance and documentation for customers to adopt. We’re looking forward to continuing to do lots of good work with Intel and IBM.

Now, of course, we’re not going to stop there, right? The idea is to build the broadest ecosystem we can, so we’re continuing to work with a key set of partners to bring more and more of these options, and that choice, to our customers.

Rachel: Very cool. And when you say Nvidia is full stack, I’m assuming that includes the chips all the way up to the CUDA kinds of things you can use to run AI/ML workloads? Is it software and hardware working together?

Himanshu: Yeah. Essentially, you can think of it as: choose your favorite OEM from a hardware perspective. Then you’ve got the Nvidia GPUs, you’ve got the VMware VCF software stack, and on top of that the Nvidia AI Enterprise suite. Then you can look at the NeMo framework and the LLMs that Nvidia offers, or you can use a non-Nvidia LLM, for that matter. So it’s a full stack with different components coming together. I wouldn’t say it’s a hardware-and-software stack from us; it’s more of a software stack, and the customer comes in with the favorite OEM they typically work with.

Rachel: Okay, well, thank you so much for your time. It’s been a really fun discussion, and I’ve loved hearing your perspective on where the market is and how you all are adapting to it. For the people out there watching and listening who want to hear more about VMware and Private AI, where should I send them?

Himanshu: Yeah. First of all, thank you so much for this fantastic opportunity; I really loved the conversation. If folks are looking to learn more about what we’re doing in this space, just go to VMware.com and you’ll find the links to our AI story there. That’s the easiest place to find it. What I would say is that it’s a new field, but it’s changing, there’s a lot happening very quickly, and the pace of adoption is just fantastic to see. I’m really excited to be working in this space and excited to have conversations with folks like you.

Rachel: This was delightful, and we will look forward to watching all the things that come up as the adoption of everything just continues apace. So we’ll have to talk again.

Himanshu: Absolutely.

Rachel: Thank you for your time, Himanshu.

Himanshu: Thanks, Rachel.
