Microsoft did an excellent job at its Build conference last month, showcasing its strengths and ambitions for the AI era.
It’s always hard to summarise Build, because the company has such a long history and a number of different developer and ops centers of gravity – including Microsoft Azure (cloud deployment and management), Developer Division – aka DevDiv – and GitHub (developer tools and platforms), Power Platform (low code and AI powered business applications), Windows (desktop and server infrastructure). From year to year one division or another will get more attention than others, depending on the vagaries of Big Launches. Then of course there are the cross-cutting concerns – issues which impact the company across its entire portfolio – open source and AI are good examples.
Everyone gets their moment on the keynote stage after Satya Nadella kicks things off on day one, which can make things feel hurried, and storytelling suffers as a result. What Microsoft wanted us to come away with this year was the emergence of what it is calling The Agentic Web – the idea that AI-based agents are going to fundamentally remake how we use technology to conduct business.
While a lot of tech companies are currently proposing AI agent platforms as thinly disguised (sometimes not disguised at all) replacements for humans, Microsoft is keen to thread the needle of agents as platforms to enable humans to get their work done more effectively. Recent and future layoffs may not help that message, but it’s the message nonetheless.
After reflecting on the event (for quite a while, apparently), I am ready to tell some stories from Build as I see them.
The GitHub Embrace
Let’s start with DevDiv and GitHub – a key takeaway was that this was the most integrated Build we’ve seen yet from Microsoft and GitHub. They set the tone from minute one of the conference, introducing Seth Juarez, Microsoft principal program manager, and Kedasha Kerr of GitHub as co-hosts, to introduce the winners of this year’s Imagine Cup, before handing over to Microsoft CEO Satya Nadella for his keynote. Presentations and demos interwove GitHub seamlessly into the keynote content and the conference content more generally. So the framing was pretty clear – if GitHub and Microsoft are closer than ever at Build, you can rest assured that they’re closer than ever the rest of the time too.
Which is to say – one important implication of the AI revolution is that the arm’s length relationship between Microsoft and GitHub is going to be… less arm’s length. The integration isn’t just going to be at Build keynotes. As Microsoft gears up for a more competitive, winner take all, more cut-throat era, GitHub is a key asset, and is going to be treated as such.
Microsoft has kept GitHub at relative arm’s length since acquiring the company in 2018, which has made a lot of sense (up until now). It’s effectively an independent subsidiary. Microsoft has given GitHub time to grow up, with its own distinct culture, approach to UX and developer experience largely intact, while at the same time lighting a fire under the company from a product delivery perspective, with an infusion of its own people (see Nat Friedman and Thomas Dohmke, both Microsofties, but importantly company founders in their own right – see Xamarin and HockeyApp).
GitHub ships now. Perhaps not at the relentless pace of some startups, but fast enough to get ahead of the market, and respond to threats. The Microsoft-GitHub combination came into its own by leading the current AI wave crashing onto the shores of the industry. It’s worth remembering that GitHub Copilot launched in technical preview in June 2021. OpenAI didn’t launch ChatGPT until late November 2022. And now in 2025, with the emergence of AI-native editors like Cursor and Windsurf and a slew of agent platforms, everything is up for grabs.
There are some that would argue that Microsoft and GitHub didn’t respond quickly enough to the emerging threats, but I would argue that’s overblown. Things are moving extremely quickly in the industry right now. Responding to threats doesn’t always mean heading them off entirely.
Peacetime assumptions are behind us – competition is fierce, decisions need to be made quickly, and new products and features shipped remorselessly. Let’s just say GitHub is only too aware of this reality. As is the mothership.
Just as accessing LLMs with chat to generate code has been supplanted by integrating LLMs directly into editors, there is another center of gravity which arguably makes even more sense for agents – that center of gravity being the GitHub workflow. Agents are asynchronous – developers can’t be sitting around waiting for them to finish a job. The kind of long-running reasoning that agents provide is really anathema to developer flow, if you’re in an editor or chat interface. But GitHub was built for the kind of asynchronous workflow we’re seeing emerge. Agents and pull requests go together like peanut butter and chocolate.
GitHub Introduces Coding Agent For GitHub Copilot
The agent starts its work when you assign a GitHub issue to Copilot or ask it to start working from Copilot Chat in VS Code. As the agent works, it pushes commits to a draft pull request, and you can track it every step of the way through the agent session logs. Developers can give feedback and ask the agent to iterate through pull request reviews.
The agent is expressly designed to preserve your existing security posture, with additional built-in features like branch protections and controlled internet access to ensure safe and policy-compliant development workflows. Plus, the agent’s pull requests require human approval before any CI/CD workflows are run, creating an extra protection control for the build and deployment environment.
This human approval is essential. Vibe coding is all very well and good, but someone has to take responsibility for checking in the code. And indeed take the credit for doing so. Developers are the human in the loop, which is just as it should be. Software engineering in general, and enterprise software engineering in particular, is really not amenable to You Only Live Once. So GitHub and Microsoft are positioning the engineer and engineering team as the point of control and quality, as much as curation and overall decision-making.
From an industry context perspective it’s worth mentioning Jules, “an asynchronous coding agent”, which Google launched the same week. It’s another autonomous agent platform, designed to fit into the GitHub workflow. As Google stressed:
GitHub integration: Jules works where you already do, directly inside your GitHub workflow. No context-switching, no extra setup.
So deep GitHub integration is the new hotness for coding agents. Go where developers are, and enable asynchronous work.
In a move that was both offensive and defensive, GitHub also announced that it is open sourcing Copilot Chat in VS Code, shoring up Code’s position as the modern developer’s editor of choice. As Cursor and Windsurf double down on integration and tightly packaged experiences, contributing the VS Code GitHub Copilot Chat extension makes VS Code more appealing from an ecosystem perspective. Open source is still a very useful lever to pull in 2025 – using the permissive MIT license underlines the ecosystem play. Microsoft and GitHub also made a commitment to integrate and open source further AI capabilities into VS Code core.
Talking of ecosystems, Coding Agent will also be rolled out for JetBrains, Xcode and Eclipse.
Beast mode Windows
Let’s use open source as the transition to the next section – where we examine Windows – specifically to see if Microsoft can make the platform more appealing to developers. Apple has owned the modern web developer for the last 15 years or so. Developers may kvetch about Apple Xcode, but they love the shiny, highly performant hardware and OS user experience, and MacBooks continue to dominate.
So Microsoft needs to compete on the basis of software, hardware (performance matters!), dev tool integration, and flexibility. That’s a lot.
A long standing industry joke is that this is (finally) the year of Linux on the desktop. Ironically enough, Microsoft has been doing its best to build Linux support into the OS with Windows Subsystem for Linux (WSL), allowing you to run Linux on your Windows machine without needing a virtual machine or dual-boot setup. The WSL experience has been steadily improving since its introduction in 2016. Nine years later Microsoft has finally responded to the very first issue on the WSL repo – to open source the code. WSL, which supports most popular distros including Arch Linux, doesn’t suddenly become every Linux user’s tool of choice – but the target isn’t actually Linux, it’s macOS. The open source decision just removes one potential objection.
If Microsoft really wants to turn the dial it needs to improve WSL performance and the out of the box experience for drivers and third party apps. The faster, snappier and less hassle it is, the more developers are likely to adopt it. Lag is one of the common complaints about WSL. But the open source move certainly won’t hurt. That said, Apple isn’t standing still – at its WWDC event it just announced a new Swift-based container runtime framework, enabling developers to create and run Linux container images directly on their Macs. It’s VM-based, but Apple performance is such that this is unlikely to be an issue. For a version one feature it looks pretty compelling.
Talking of performance, Microsoft is clearly a long way behind Apple when it comes to silicon – Apple is ahead on both raw performance and energy consumption. Indeed, unlike Apple, Microsoft has to rely on traditional OEM partners for its microprocessors. In order to become more competitive it had to push ahead with support for ARM-based chips. It reached an important milestone in that respect last year with the launch of the Copilot+ PC form factor, and Microsoft’s first Surface laptops based on Qualcomm Snapdragon X chips.
The Copilot+ series was positioned as an AI-first architecture, including neural processing units (NPUs) dedicated to AI and machine learning tasks, to showcase those workloads and run them efficiently. But optimising for NPUs was always going to be problematic in terms of getting developers excited about what they could do with a Windows machine. At launch Qualcomm got all of the attention because its silicon was the basis of Microsoft’s own machines. Yay to ARM laptops! But in the land of LLMs, GPUs are king. Requiring NPUs is all well and good – but if you don’t do a great job of supporting Nvidia you’re really not in the game.
Which brings us back to Build 2025 and one of the more interesting announcements – which played directly to the idea of developer choice. Microsoft announced Windows AI Foundry, which, through Windows ML, supports inferencing regardless of microprocessor – AMD, Intel, Nvidia and Qualcomm, whether NPU, GPU or CPU. The platform also offers catalogs like Ollama and Nvidia’s NIM microservices packaging, as well as LoRA fine-tuning for Microsoft’s Phi Silica small language model. This could be a big deal. Just as with driver support in Windows back in the day, out of the box model and framework support across all hardware architectures is a classic Microsoft play.
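Under the hood this kind of hardware abstraction tends to come down to a preference list: try the NPU, fall back to the GPU, and always keep the CPU as a floor. A minimal sketch of that selection logic – this is an illustration, not the Windows ML API, though the provider names are modelled on ONNX Runtime execution providers:

```python
# Illustrative sketch of hardware-agnostic provider selection.
# Not the Windows ML API; provider names mirror ONNX Runtime's
# execution providers, and the real runtime negotiates this for you.

PREFERENCE = [
    "QNNExecutionProvider",   # Qualcomm NPU
    "CUDAExecutionProvider",  # Nvidia GPU
    "DmlExecutionProvider",   # DirectML (AMD/Intel/Nvidia GPUs)
    "CPUExecutionProvider",   # always-available fallback
]

def pick_provider(available: list[str], preference: list[str] = PREFERENCE) -> str:
    """Return the first preferred provider actually present on this machine."""
    for provider in preference:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider found")

# A Snapdragon X laptop with no discrete GPU prefers its NPU:
print(pick_provider(["QNNExecutionProvider", "CPUExecutionProvider"]))
# An x86 desktop with an Nvidia card lands on CUDA:
print(pick_provider(["CUDAExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]))
```

The point of the play is that the developer never writes this list per vendor – the platform owns it, which is exactly the driver-support analogy.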
Microsoft also announced the private developer preview of a framework for managing Model Context Protocol (MCP) access to Windows applications. It makes perfect sense to adopt MCP – it has overnight become a de facto industry standard – but it also pays to be cautious: MCP lacks a strong security model, and Microsoft needs to be very careful about potentially opening Windows to LLM-based security exploits.
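To see why caution is warranted, it helps to look at the shape of an MCP tool call. MCP is JSON-RPC 2.0 under the hood, and `tools/call` is the method the spec defines for invoking a tool; the tool name and arguments below are hypothetical, standing in for a Windows app exposing capabilities to an agent:

```python
import json

# Illustrative MCP tool-call request (JSON-RPC 2.0). The "open_document"
# tool and its arguments are hypothetical examples, not a real Windows API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "open_document",                    # hypothetical app tool
        "arguments": {"path": "C:\\notes\\todo.txt"},
    },
}

wire = json.dumps(request)
print(wire)
```

Note what is absent: nothing in the message itself establishes who is asking or whether they should be allowed to – authentication and authorization live (or don’t) in the transport and host around it, which is precisely why a Windows-wide MCP gateway needs a careful permission model.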
One missed opportunity here was to show a beast mode gamer rig also used for running AI workloads. A lot of gamers choose Windows because of Xbox and hardware flexibility – there has to be a subset of folks that would love to upgrade their machine to use for both gaming and LLMs. A single machine for vibe gaming and vibe coding. That remains an open marketing opportunity.
Low Code is finally a thing
Not everything is about hardcore developers though – the Microsoft Power Platform also got its moment in the sun. Charles Lamanna, CVP of Business and Copilots at Microsoft, always makes a compelling case, but the infusion of AI agents into the platform with Copilot Studio has the potential to finally deliver on the promises of low code. As a long-term low-code skeptic I am increasingly coming round to the idea that AI is the underlying technology that will make a ton of new use cases possible for a broad range of users. Salesforce is in a similar position with its Agentforce platform – Salesforce admins and developers are going to build a ton of cool extensions for Salesforce customers. Salesforce has a deeply engaged global community, folks with business domain experience, and they’re ready to be unleashed with Agentforce if the user experience is right. The Salesforce platform is increasingly horizontal, no longer all about CRM, and Agentforce is a marker for that.
While some in the industry argue that AI will remove the need for packaged applications – just have a solid data foundation and a bunch of agents instead! – for companies with solid enterprise application customer bases, and a low-code/AI play, there is going to be plenty of opportunity for upside. And if there’s a question about whether AI will replace low-code, Power Platform provides an answer and the answer is no – the centers of gravity will come together and package agents up for business users.
So yes I think we can expect significant further growth from Power Platform – in a business that already has a lot of momentum picking up new enterprise logos. Microsoft did a solid job of showcasing Copilot Studio. We’re obviously not going to see everyone learning to code just because code assistants are out there; the drag and drop application development experience, augmented with natural language commands and prompts, makes a great deal of sense in the AI agent era.
So that’s Microsoft’s low code business, but what about infrastructure?
A quiet year for Azure
Azure… was not the belle of the ball at Build this year. Cloud infrastructure was somewhat relegated at Build 2025 – which is not to say there were no announcements, but mostly Azure was simply the place to run all the apps built with the new tooling.
One bit of news that did catch my eye was Azure AI Foundry. Model management and guardrails will be a core part of any infrastructure cloud, but what really struck me was that Microsoft is effectively describing what I call Progressive Delivery as a core AI pattern (our book on the subject will be published in November).
Azure AI Foundry will offer a new Model Router, in preview, which will automatically select the best OpenAI model for prompts, leading to higher quality and lower cost outputs. Additionally, automated evaluation, A/B experimentation and tracing in Foundry Observability will support rollback to proven models if new ones underperform, enabling developers to stay on the cutting-edge of model capabilities to deliver cost-effective solutions.
Stay on the cutting edge, but manage the blast radius.
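The progressive delivery pattern Microsoft is describing can be sketched in a few lines: send a fraction of traffic to a candidate model, feed in automated evaluation scores, and roll back to the proven model if quality drops. This is an illustrative sketch of the idea, not the Azure AI Foundry API, and the model names are hypothetical:

```python
import random

class ModelRouter:
    """Illustrative progressive-delivery router (not the Foundry API).

    Routes a small fraction of traffic to a candidate model and rolls
    back to the proven model if measured quality falls below a floor.
    """

    def __init__(self, stable: str, candidate: str,
                 canary_fraction: float = 0.1, quality_floor: float = 0.8):
        self.stable = stable
        self.candidate = candidate
        self.canary_fraction = canary_fraction
        self.quality_floor = quality_floor
        self.scores: list[float] = []
        self.rolled_back = False

    def route(self) -> str:
        """Pick a model for this request; canary traffic goes to the candidate."""
        if self.rolled_back:
            return self.stable
        return self.candidate if random.random() < self.canary_fraction else self.stable

    def record_eval(self, score: float) -> None:
        """Record an automated evaluation score (0..1) for the candidate."""
        self.scores.append(score)
        # After enough samples, roll back if the candidate underperforms.
        if len(self.scores) >= 20:
            mean = sum(self.scores) / len(self.scores)
            if mean < self.quality_floor:
                self.rolled_back = True  # blast radius contained

# Hypothetical model names, purely for illustration:
router = ModelRouter(stable="proven-model", candidate="frontier-preview")
```

The real Foundry machinery layers routing, A/B experimentation and observability-driven rollback onto this basic shape – cutting edge by default, with a tested escape hatch.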
Scott Guthrie, executive vice president of the Microsoft Cloud + AI Group, used his keynote slot on day two to position Cosmos DB as an underlying service that has helped propel OpenAI forward. OpenAI is taking advantage of Microsoft managed data services, not just infrastructure as a service.
He also talked up Microsoft’s green credentials, one of the few times sustainability was mentioned at the show. It was good to see (at least some) lip service paid to sustainability, when other major vendors seem to be jettisoning their commitments.
Multi-model is not a position of strength
Finally – some thoughts on Large Language Models and industry ecosystems. Here is where the news is not quite so good for Microsoft. Betting on “multi-model” is not the winning position.
At Build this reality was brought into stark relief in the “CEOs of competitors” section of Nadella’s keynote. Sam Altman of course appeared in a Zoom interview, just a couple of weeks after rumours emerged that OpenAI might be acquiring Windsurf. Whether or not OpenAI does so, with its coming push into consumer devices, led by Jony Ive, it’s going to be competing directly with both Apple and Microsoft.
After Sam Altman we had a recorded appearance from Elon Musk, because Microsoft was announcing support for xAI’s Grok model. Whatever you think of Musk, and clearly he has his boosters, he’s a deeply divisive figure, and there were definitely people in the room and online who were not happy to see him featured. Nadella’s values and those of Musk would not seem to be terribly well aligned.
If Microsoft is forced to talk up the benefits of multi-model and provide platforms for third parties building frontier models, that’s arguably a problem for the company. Essentially all of the developer-facing tools described above are either running OpenAI or Anthropic.
Compare and contrast with Google, which is now puffing out its chest at its own events, trumpeting the capabilities of Gemini. Sure, it supports multi-model in Google Cloud, but it can happily lead with Gemini in demos and its own code assist products and agents.
Microsoft on the other hand is not in charge of its own destiny when it comes to frontier models, which is ironic for the company that is now marketing the idea of Frontier Firms. According to Microsoft research:
We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.
As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.
Well, in that case it’s probably a good idea to own one of the most powerful AI and LLM companies. Models may be commoditising, but that doesn’t mean you don’t want to own one. If the game moves on to winning customers with tokens, you don’t want to be relying on a third party. And it’s increasingly a token-based world.
In closing – Microsoft is in excellent shape, with some great assets, but in my view it either needs to embark on a moonshot to build or foreground its own models, or it should make an era defining acquisition of Anthropic, which offers the best experience for code. Anthropic would be expensive – certainly north of the $61.5bn valuation of its last funding round – but Wall Street trusts Nadella, and the stakes are absurdly high. Rumours currently abound that OpenAI is trying to change the terms of its contract with Microsoft, and that OpenAI is under-cutting Microsoft in Copilot sales deals. I don’t really see how this will end well. This last paragraph deserves a post in its own right, so that’s what you’re going to get. I will be following up shortly.
Disclosure: GitHub, Microsoft, Google Cloud, and Salesforce are all clients.