MonkCast on the Road: Microsoft Build 2025 Announcements with GitHub’s Mario Rodriguez



In this MonkCast on the Road, Mario Rodriguez, CPO of GitHub, discusses the latest innovations and announcements at Microsoft Build 2025 with Kate Holterhoff, senior analyst at RedMonk. They chat about GitHub’s MCP server, GitHub SWE agents, open sourcing VS Code and WSL, and the future of intelligent applications.

This RedMonk conversation is sponsored by GitHub.


Transcript

Kate Holterhoff
Hello and welcome to this MonkCast on the road. My name is Kate Holterhoff, senior analyst at RedMonk, and I am excited to have the CPO of GitHub with me today to talk about his experience at Build 2025. Mario Rodriguez, thank you so much for joining me on MonkCast.

Mario Rodriguez
Thank you so much for having me here. This is pretty exciting for me as well.

Kate Holterhoff
Wonderful.

Mario Rodriguez
We usually talk a lot, but not kind of in this setting.

Kate Holterhoff
It’s true.

Mario Rodriguez
So I want to see what you got for me. What are your questions?

Kate Holterhoff
Okay. Well, there’s been a lot of announcements at this Build. I’ve been very interested in, I guess, how GitHub has been working on its own MCP server. So I was wondering if you could talk to me a little bit about what that does.

Mario Rodriguez
One thing that I think is important for people to know: we have a very close relationship with Anthropic on MCP. We have been partnering with them a significant amount, and we are even on the steering committee of MCP. We recently made a GitHub repo public that looks into an MCP registry, and an API spec for that. We're kind of all in on it. And you probably heard Kevin Scott mention in the keynote how MCP is kind of that HTTP layer, in his opinion, if you're going to compare it to the web. We feel the same. If you think about it, these models have intelligence, and they're going to get better and better and better at intelligence, right? They have like three PhDs stacked on each other. And what they don't really have is knowledge at the end, especially in the enterprise, where the knowledge lives in proprietary data and in other places. What MCP really allows, without you writing a ton of code, is the connection of that intelligence to the knowledge to solve a task. The models right now, even Sonnet as an example, are very good at tool calling, right? So we're very excited about being able to connect all of these graphs that we have in GitHub, and all of the things that exist in the developer ecosystem, with GitHub Copilot. And we think MCP is the way to do that. And the fact that it's open source and no one owns it, yes, there's a steering committee and all those things, but it's kind of like the web. No one owns HTTP at the end, and this is a protocol now that can serve that for all of humanity. So we're very excited about that. And yes, we have a local MCP server that you could run now, and we're also working on a remote one, and we hope to make that available very, very soon, hopefully next month.
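The "connecting intelligence to knowledge" idea can be pictured with a toy sketch. Everything below is illustrative: the tool name, the fake issue store, and the dispatcher are made up for this example, and none of it is the real MCP protocol or GitHub's MCP server API.

```python
# Toy sketch of the MCP idea: a "server" exposes tools backed by knowledge
# (here, a fake issue store), and the model side calls them to ground its
# answers. All names are illustrative, not real MCP or GitHub APIs.

def list_issues(repo: str) -> list[str]:
    """Stand-in for a tool backed by proprietary data, e.g. a repo's issues."""
    fake_store = {"octo/hello": ["#1 Fix login bug", "#2 Add dark mode"]}
    return fake_store.get(repo, [])

TOOLS = {"list_issues": list_issues}

def handle_tool_call(name: str, **kwargs):
    """Server side: dispatch a model's tool call to the named tool."""
    return TOOLS[name](**kwargs)

# A model that is good at tool calling emits a call like this, and the host
# connects intelligence to knowledge without bespoke glue code:
issues = handle_tool_call("list_issues", repo="octo/hello")
```

The point of the sketch is the shape, not the details: the model never needs to know where the knowledge lives, only which tool to call.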

Kate Holterhoff
Interesting. Yeah, I really enjoyed Kedasha Kerr's presentation of how she's using GitHub's MCP server as part of her workflow. I mean, it seems to really expedite a lot of the work she was trying to accomplish there. As we're moving from using coding assistants, Copilot, to this more agentic approach, that seems like it's going to be a really core part…

Mario Rodriguez
I think in the demos that we did, you also saw Dev Mode by Figma MCP, which is great. If you have a designer with all of these assets and files, how are you going to transfer that, and all of those components, into your code? Now MCP allows you to do it, because it could get that knowledge and it could then give it to the model. If you think about documentation, now I could do that. If you think about things like web search, now I could do it. So I think it's this connector of tools that provide knowledge. We're very bullish on it, and you're going to see a lot of innovation because of that, for sure. And it will enable a lot of people that are not even professional developers to accomplish things. Because then, kind of through these tools, you just get connected to them, and you don't have to create all that knowledge yourself and pass it to the model.

Kate Holterhoff
Right, right, right. Yeah, open specifications, ways of making sure that we're communicating across all these tools and platforms. I mean, that's absolutely essential, and that sounds like something that MCP is going to be able to provide us. So GitHub's leaning into that. I also want to talk about what used to be called Project Padawan but is now GitHub SWE. So can you talk to me about what that is?

Mario Rodriguez
So we did announce Copilot coding agent. If you all saw the keynote, there's a special moment there where Satya Nadella actually did the demo of Copilot coding agent. You don't usually get Satya to do a demo, let me say that much. And he did it, and it was great. He went to an issue, scrolled through the issue, explained what it is, assigned it to Copilot, and that's it. Copilot picks it up, analyzes the issue, starts creating a plan, then codes that, and then, through a draft PR, it tells you: here it is, and it's ready for review. And really, if you think about it, in 2021 we started with code completion. That was kind of like your pair programmer. Then we added conversational. I think that came around 2023. Then we added edits, but there you have to select the files yourself. And then very recently, we did Copilot agent mode in VS Code, right? Agent mode in VS Code takes you further, but at the same time you're kind of in this IDE waiting room, because it's synchronous. You have to prompt it. You have to wait for it to finish. And once it finishes, you're like, okay, here's the next thing. You're building incrementally, but your imagination is stuck within that session, in my opinion. I think with Copilot coding agent, now everything is possible, right? For me, as an example, it's not only the toil that we could remove through issues and dependency updates and all those things.

But you could just be like, OK, I'm thinking of doing X. Let me create an issue for that. Or let me go into copilot.com and say, I want you to implement this algorithm in these three different ways based on this code base, and let me see what works the best. And you have this multi-threaded asynchronous experience with Copilot. And I believe the world will not be the same. This is the year of agents, and this is really the year of asynchronous agents, in my opinion. Because now you're no longer waiting for something; you have full multiplexing, and the power that gives you from a coding perspective.

Kate Holterhoff
Right. And this is a silly question, but I love that when the agent is thinking it uses googly eyes. Who chose that?

Mario Rodriguez
It's very GitHub-y, I would say. I mean, if you just go to the booth, it's pretty sweet what we have in there. So I think that was very GitHub-y. And I don't know who chose it; it was probably their team, or someone else. So if someone at GitHub knows, maybe they can correct me, but I would say it's very GitHub-y, and I love it. It says, hey, I'm here, let me look into it. And then it gives you a draft PR and it's like, okay, I'm finished, go review this.

Kate Holterhoff
Okay. I'm all for emojis signaling to me what stage the agent is at.

Mario Rodriguez
I think so. I think GitHub is about enabling developers, but we also want to have fun.

Kate Holterhoff
Yeah. Okay. Well, I had fun with it. Yeah. Okay. Amazing. And so what else are you excited about during this Build? How many Builds have you been to at this point?

Mario Rodriguez
Oh my god, a lot of them.

Kate Holterhoff
A lot?

Mario Rodriguez
Yes. I did take a break, probably the last two or three, but I've been to probably 10 or 15. At least 10, I think. I was even around before, at what was called the Professional Developers Conference. Microsoft used to do PDCs, and that's probably dating me.

Kate Holterhoff
Wow. Okay.

Mario Rodriguez
I've been to a lot of them. I have very fond memories of this building. Many breakout sessions. This one has been great, in my opinion. If you just look at the keynote, it was three hours, so there's a lot packed in there. And even today's keynote with Jay, where we tried to have a story end to end. We started with Jess, then utilizing Copilot in VS Code, specifically Copilot agent mode, then utilizing Padawan, and utilizing the SRE agent. And then after that, it was Foundry. I think the Foundry demos were amazing, so shout out to them, and then everything else that followed, all the way to what Charles Lamanna ended up talking about. So I think it's been an incredible Build. We have a large presence from a GitHub perspective at Build right now, and we have had a lot of great conversations with customers. A lot of them have to be about agents and what we're doing in that space, but also about our core platform. I've been speaking to them a lot about, look, right now we have these creation tools, VS Code as an example; Spark from us is another creation tool. GitHub is known for that collaboration section. But now we're expanding even into that app layer as well, with all of these agents and MCP, right? And what we're going to be doing is providing this agentic layer across all of that, with a command center on top. So I've been talking to them a lot about that. It's resonating very well. So yeah, we're excited.

Kate Holterhoff
Absolutely. What has interested me, and I have not been to as many by a long shot, this is my second Build, is the open source stories. I mean, there's been a lot around how VS Code is now fully open source, and even under the MIT license, which is remarkable. And following that from a RedMonk perspective, there's been a lot in the news about open source and different license changes. So the fact that VS Code is really leaning into that, and Microsoft is taking open source seriously and even using an MIT license, that was pretty remarkable.

Mario Rodriguez
If you think about it, VS Code has always been open source, and so has the foundation of that team. If you date back to Erich Gamma and what he has done before, with IBM too, that has been their DNA forever. I think this is another step that we're taking to make sure that the editor is an AI editor, in my opinion. The editor had AI on it, and that AI was closed source, right?

So the editor was open source, which is VS Code, but all of the AI capabilities were not. And what this allows us to do is to say: look, the AI editor of now and the future is open source. That is within the DNA of VS Code, that is within the DNA of Microsoft and everything that Satya has been pushing for the last 15-plus years overall. And that's also within the DNA of GitHub; we are the home of open source. For us, what this allows to happen is that the innovation you're going to see in the market hopefully goes up significantly, as more people can remix overall. That remix capability of open source is what excites me as well. So I think this is not going to be an isolated event from us. You're probably going to see more and more of that. I mean, even here, I think they announced the open sourcing of WSL, the Windows Subsystem for Linux. You'll see more from us.

Kate Holterhoff
So there’s been a lot of announcements around the models as well. I’m super interested in where GitHub’s going with that. Can you talk about, yeah, what are you seeing in terms of models?

Mario Rodriguez
We've all seen the rise of this AI engineer, right? You have engineers and developers, and in our opinion, every single application going forward is going to have intelligence. We've seen that these engineers are going to have to deal with prompts. They're going to have to deal with evals. They're going to have to deal with an understanding of how to integrate all of those things, with model deployment and inference, into the SDLC. And I don't know about you, but the tools right now that we use for that are not great. Now, we got our feet a little bit in the water with models and playgrounds. That's what we first announced, and that was just an entry point for us. It's like, hey, there are all these models in the ecosystem, go and play with them. And we get you to a point, and then after that you have to go into Azure. The main feedback that we got back was: that's great, but it would be great if you started integrating some of those capabilities into the core platform. And that's why you saw us announce that there's now a Models tab in GitHub, just like code, just like issues, just like projects. You go into it and you have the playground right there, and you can just play. But now there's first-class support for prompts, and prompts in several formats, and we'll add even more. There's a way for you to do a very quick eval and see: okay, with this prompt, does it do well or not? And then start integrating more and more of that into your Actions ecosystem and SDLC. And you could very quickly set up inference as well.

So we're very excited now, because this is the beginning of how GitHub is going to get transformed around these intelligent apps, and, in my opinion, how the SDLC is going to get transformed around these intelligent apps as well, which is very different than how you do normal feature development. In these intelligent apps, you have to start with evals many times, because almost every project you pick up is a research project at times. Unless someone else has done it already and you're trying to replicate it, there are a lot of new ways of working that we still don't have the right tooling around. So we're very excited about it. We want to make that super native in the platform. And I think we also announced Grok 3 support and a couple other things like that. But what I'm excited about is that Models tab with prompts and evals. And that's why you're going to see us go in a learning loop and continue to deliver features very, very quickly.
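The "very quick eval" workflow described here can be sketched in a few lines. This is a hedged illustration only: the model is a stub, and `quick_eval` and `CASES` are made-up names, not anything from GitHub's Models tab or API.

```python
# Minimal sketch of a prompt eval: run each test case through a model and
# report the fraction that match the expected output. The "model" is a stub
# that evaluates arithmetic; real use would call an inference endpoint.

CASES = [("2+2", "4"), ("3+3", "6"), ("10-7", "3")]

def stub_model(prompt: str) -> str:
    # Pretend inference: just evaluate the arithmetic expression.
    return str(eval(prompt))

def quick_eval(model, cases) -> float:
    """Score a model against (prompt, expected) pairs; 1.0 means all pass."""
    hits = sum(model(prompt) == expected for prompt, expected in cases)
    return hits / len(cases)

score = quick_eval(stub_model, CASES)
```

Starting from a scored list of cases like this, rather than from code, is what "start with evals" means in practice: the eval defines done before the prompt exists.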

Kate Holterhoff
Yeah. And can you define intelligent apps?

Mario Rodriguez
It's essentially saying it's an app that calls into a model for inference to accomplish something. So intelligent meaning it's connected to a model. If it doesn't ever call a model for inference on something, then it's just a normal app.

Kate Holterhoff
Just a normal app. Okay. A stupid app.

Mario Rodriguez
Non-intelligent.

Kate Holterhoff
It’s even worse.

Mario Rodriguez
Is it even worse?

Kate Holterhoff
Okay. Okay, I get it. All right. No, but I think the point is well taken in terms of having that choice. I mean, that's a debate that James Governor, my colleague who is also attending Build, and I have all the time: how much choice do developers actually want with their models? In my opinion, that is something all developers are looking for. They want that choice of models because they want to play, they want to kick the tires. I think maybe we will move towards having some pre-selected models, so that folks who aren't interested in making these tough decisions will have the choice made for them.

Mario Rodriguez
I think in Copilot, for example, right now we have a model picker. Later on, there will probably be something that says auto, and then we pick the model based on the task. In this case with Models, it's actually a little bit more about: hey, I have an application, I want to have intelligence in that application, meaning I am going to do a prompt against some model inference endpoint. And for that, you want to choose the right model for the specific case that you're coding against. I think for the most part, what you'll see is a little bit what you're saying: people doing these things are not going to ask the consumer to have a model picker and figure out which model should be used. Now, some big apps will do that, 100%. But if you integrate something into a podcast app or something like that, you might end up picking the model yourself, meaning the app [unintelligible]. And in this case, what you want to know is: out of all these models, which one is performing the best for this specific task? And that's where evals come in, in my opinion, and evaluation of that as well. So I agree with you: for things like Copilot, eventually we're going to be picking the model for you. For things where, hey, you're a developer, you're integrating inference and intelligence into your application, model choice is the name of the game, in my opinion.
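The "auto" idea Mario mentions, where evals feed a picker that routes each task to its best model, can be sketched like this. Model names, tasks, and scores are entirely made up; nothing here reflects how Copilot's picker actually works.

```python
# Illustrative sketch of an "auto" model picker: given per-task eval scores,
# route each request to the best-scoring model. All names and numbers are
# invented for this example.

EVAL_SCORES = {
    "summarize": {"model-a": 0.72, "model-b": 0.91},
    "code-review": {"model-a": 0.88, "model-b": 0.79},
}

def pick_model(task: str) -> str:
    """Return the model with the highest eval score for this task."""
    scores = EVAL_SCORES[task]
    return max(scores, key=scores.get)
```

This is the connection between the two halves of the answer: the same evals that tell a developer which model to hard-wire are what an auto picker would consult per task.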

Kate Holterhoff
Right. Yeah. Okay. Well, it’ll be interesting to see how that develops.

Mario Rodriguez
Exactly.

Kate Holterhoff
Well, I think that’s probably a good place for us to wrap up. Thank you so much for coming on the MonkCast.

Mario Rodriguez
Thank you for having me.

Kate Holterhoff
Yeah, it’s been a pleasure.

Mario Rodriguez
See ya.
