The Developer Experience Gap



When the iPhone was first introduced in January of 2007, it took the world by storm. The first device to compress – successfully, at least – a mobile phone, a computer with internet access and the 21st century’s equivalent of the Walkman into something that would easily fit into a pocket, the sheer breadth of its capabilities was without precedent. And this was before, notably, the SDK that made possible the “there’s an app for that” campaign. An SDK, as an aside, that Steve Jobs was originally opposed to.

What was less remarked upon, however, was the attention Apple paid to how this dizzying array of capabilities worked together, and worked together well. The first time a call came in while a user was listening to music, in fact, was a startling experience, at least in how it highlighted what low expectations people had for their electronics at the time.

It’s an experience that is taken for granted now, but relative to the competition then it was revolutionary. Call comes in while music is playing. In response, the iPhone fades the music out while fading the ring in. The call can be taken seamlessly, and once ended the music fades back in. Simple. Obvious.

On the one hand, there’s nothing particularly earth-shattering about that example. Or rather, there should not have been. But this clear evidence that the team that designed the mobile phone software had actually spoken with their colleagues writing the software for the music player, and hundreds more examples of similar behavior, was enough to hand the established, hard-fought market for mobile phones to Apple almost overnight – a lead that, in the United States at least, has not been relinquished. The iPhone’s elegant, integrated design disrupted the landscape so thoroughly and completely that the case study is taught in business schools today.

It is still, however, a lesson that the industry has yet to internalize and apply at scale.

At the time that the iPhone was introduced, developers were in the early years of a golden age of software and hardware. The open source software they favored was gradually clawing its way towards mainstream acceptance, and the sheer volume of it was exploding. Entirely new categories of software, in fact, were being created from scratch and being quickly filled out by multiple competing projects released under licenses that varied in their requirements and obligations but were, in almost every case, OSI approved.

Less apparent at the time but no less important was the birth of the cloud market as we know it today. Some five months before Steve Jobs stood on stage waving the sleek, tiny slab of metal and glass around, an online retailer primarily known for selling books released a very rough beta of the industry’s first cloud computing service. No one knew it at the time, but this simple on demand compute service, along with its immediate predecessor, simple addressable storage, would herald the rise of a new computing model that would eventually assimilate the rest of the enterprise technology industry and position its creator as the accepted market leader.

As a result, developers then and since have had a vast array of tools and services at their fingertips, with more software and services arriving by the day. Nearly anything that a developer could want is available, at either no cost or for an amount that is accessible for most, if only on a trial basis. As Steve Jobs might have told them, “You’ve got everything you need.”

There’s one problem, however. Where the phone and music player teams were controlled by a party with the authority to make them play nicely together, this isn’t true of modern development infrastructure. Most toolchains, from where the first lines of code are written through test, build, integration and deployment all the way out to production, are made up of a patchwork quilt of products and services from different suppliers.

What the market is telling developers and their employers alike, effectively, is that the market can provide a system that will shepherd code from its earliest juvenile days in version control through to its adult stage in production. It is telling them that the system can be robust, automated and increasingly intelligent. And it is also telling them that they have to build and maintain it themselves.

It should come as no surprise, of course, that there is no central supplier capable of natively satisfying every last operational need. To some extent, this is a reaction to the backlash against Microsoft for trying to be just that. By attempting to own the entire stack as they owned the office productivity and operating system markets, Microsoft taught a generation of technology buyers lessons that could have been learned from Amdahl and IBM.

No vendor is or will be in a position to provide every necessary piece, of course. Even AWS, with the most diverse application portfolio and historically unprecedented release cadence, can’t meet every developer need and can’t own every relevant developer community. The process of application development is simply too fragmented at this point; the days of every enterprise architecture being three tier, every database being relational and every business application being written in Java and deployed to an application server are over. The single most defining characteristic of today’s infrastructure is that there is no single defining characteristic; it is diverse to a fault.

Fragmentation makes it impossible for vendors to natively supply the requisite components for a fully integrated toolchain. That does not change the reality, however, that developers are forced to borrow time from writing code and redirect it towards managing the issues associated with highly complex, multi-factor developer toolchains held together in places by duct tape and baling wire. This, then, is the developer experience gap. The same market that offers developers any infrastructure primitive they could possibly want is simultaneously telling them that piecing them together is a developer’s problem.

The technology landscape today is a Scrooge McDuck-level embarrassment of riches.

If there is and can be no Apple-equivalent enterprise vendor arriving to make sure the enterprise telephone app team plays nicely with the enterprise music player team, what progress can be expected? It’s early, and the next few quarters should provide hints at who has accurately identified the depth of the problem and taken steps to address it. In the meantime, here are five adjectives that will describe the next generation of developer experience.

  • Comprehensive: One of the singular failures of today’s most successful platforms is the degree to which they treat the process of writing code, then building and deploying it, as near afterthoughts. Once the application gets to the platform it’s in good hands; the journey there, however, can be arduous. Early signs of change are visible, though. Consider the degree to which GitHub’s Actions product, for example, extended a bridge from what had been an island to production environments, with base CI/CD capabilities in between. That’s one example of how vendors are evolving beyond their original areas of core competency, extending their functional base horizontally in order to deliver a more comprehensive, integrated developer experience. From version control to monitoring, databases to build systems, every part of an application development workflow needs to be better and more smoothly integrated.

  • Developer Native: In order to make the life of a developer easier, rather than harder, a quality developer experience is focused on allowing developers to use the tools they’re familiar with, or at least tools that closely mimic them. Whether that’s the out of the box support for VS Code that is a common feature of today’s platforms or the Git-like syntax Heroku pioneered in its CLI, the less that developers have to context switch between tools and steps, the less friction there is in usage.

  • Elegant: While the technology industry has a profound appreciation for aesthetics in certain settings – think hardware design, for example – it is woefully under-appreciated in enterprise infrastructure. The sheer joy of using an interface that is thoughtful, well laid out and has done the little things to anticipate usage and potentially steer it is, regrettably, a rare experience. For all that vendors will profess to understand that developers are the New Kingmakers, they don’t tend to design their offerings that way. Too often products are designed and built as if in a vacuum, and limited attention is paid to the pattern of how applications are used in conjunction with one another, and where that works or doesn’t. Next generation developer experiences will be distinguished by what might appear to be magic but what is in fact merely the long, hard work of teams that have thought deeply about precisely how developers will use their tooling and what can be done to make that experience more seamless.

  • Multi-Runtime: As far back as 2007, there have been vendors such as Salesforce (Force.com, 9/07) and Google (GAE, 4/08) that have offered more tightly integrated platforms. For operational reasons, however, these platforms have dramatically limited the supported runtimes and thereby the targetable workloads. This lesson was learned early by the Cloud Foundry project; when it launched in 2011, it included support for Java, JavaScript and Ruby. This allowed the project to cast a far wider net than projects which are limited to one supported programming language, or more problematically, one supported proprietary programming language. With most enterprises and a growing majority of applications constructed from more than one language, broad support for multiple runtimes is essential.

  • Multi-Vendor: If it’s not possible for one vendor to natively own the entire workflow – and at present it’s not – then by definition a working solution must be multi-vendor. For some, this would seem to doom the prospects for a truly high end developer experience. In reality, we’ve had solutions for years that have delivered quality developer experiences, if only for modest subsets of the total addressable market. For recent examples, consider models such as Netlify. Developers using that service can choose from a variety of external projects, host them at a choice of version control hubs (Bitbucket, GitHub or GitLab), link in external third party services like Algolia, Cloudflare, Prisma or Snyk before an automated build kicks in and pushes the output to its own CDN. On paper, that’s a lot of moving pieces vendor-wise. From a developer’s perspective, however, they never have to leave Git.
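What makes the Netlify model notable is how little of that multi-vendor machinery the developer actually sees: the whole pipeline is typically driven by a small declarative file committed alongside the code. As a rough sketch – the build command and publish directory below are assumptions that vary with whichever static site generator a project uses – a minimal `netlify.toml` might look like:

```toml
# Minimal Netlify build configuration (hypothetical project layout).
[build]
  # Command the service runs after each push to the linked Git repository.
  command = "npm run build"
  # Directory whose contents are deployed to the CDN once the build succeeds.
  publish = "dist"
```

Everything else – watching the Bitbucket, GitHub or GitLab repository, running the build, pushing the output to the CDN – happens on the vendor’s side of the fence; the developer’s interaction remains a `git push`.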

The vision of what a high end developer experience might look like will evolve, of course, and vary from domain to domain. It will need to change quickly, in fact, given the recent spike in adoption of higher level managed services. But sooner or later, a better experience will emerge and, like the iPhone, be the subject of conversations that discuss, in breathless amazement, integrations that probably should have been obvious and are certainly overdue.

It might even sell well too.

Disclosure: Algolia, Amazon, Cloudflare, GitHub, GitLab, Google, Microsoft, Salesforce and Snyk are RedMonk customers. Apple, Atlassian, Netlify and Prisma are not currently RedMonk customers.