The Kubernetes Lesson

When Kubernetes was first announced in 2014, reactions were mixed. Some pointed to its pedigree and that of its creators, Brendan Burns, Craig McLuckie and Joe Beda, as reason enough to pay attention. Others focused on the fact that it was derived from Google’s Borg software but was not itself Borg, dismissing it as “Borg-lite” or little more than an interesting science project. Both camps were forced to acknowledge, however, that it was entering a crowded and fragmented software market. It was one project among a rapidly expanding array of options.

In this first quarter of 2018, however, Kubernetes is arguably the most visible of core infrastructure projects. Kubernetes has gone from curiosity to mainstream acceptance, crossing any number of chasms in the process. The project has been successful enough that even companies and projects that have competing container implementation strategies have been compelled to adopt it.

The obvious question in the wake of this success, and one that we’re asked constantly, is: how did this happen? What were the factors behind this meteoric growth? The obvious answer, or at least part of it, is developers.

Many of the projects already in the market when Kubernetes was announced, such as the three-year-old Cloud Foundry, were operationally focused. Designed to ease the burdens of organizations attempting to run software at scale, the original PaaS offerings had a value proposition that was in most cases functionally equivalent to middleware.

This approach was strategically sound. Buyers then, as now, were actively looking for a platform that would bridge them from a world of self-hosted datacenters and three-tier J2EE application portfolios to a new reality of polyglot applications run both on- and off-site. And from the vendor perspective, buyers accustomed to paying premium middleware pricing are an ideal target market.

But the stage for Kubernetes’ growth had already been set. Two years after the introduction of Cloud Foundry and a year before the announcement of Kubernetes, Docker exploded onto the scene, popularizing almost overnight the long-established concept of software containers. Containers solved several typical problems for developers – dependency management, environment portability and more – neatly and in a much lighter-weight fashion than traditional virtual machines. It was thus no surprise that they became one of the fastest-growing technologies we had ever seen at RedMonk.
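A minimal sketch helps illustrate that appeal. The hypothetical Dockerfile below (the file names and base image are illustrative, not drawn from any particular project) declares an application’s runtime and dependencies in one place, so the resulting image behaves the same on a developer’s laptop as on a production server:

```
# Hypothetical example: the image pins the runtime and dependencies,
# so the container runs identically wherever it is deployed.
FROM python:3.9-slim

WORKDIR /app

# Dependencies are declared and installed inside the image,
# not on the host machine.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The same image moves unchanged through development, testing, and production.
CMD ["python", "app.py"]
```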

While most large organizations at the time were focused on software that eased operational and management burdens, developers were increasingly embedding containers deep into their software engineering practices.

Two decades ago, that might not have mattered. In modern software development organizations, however, what gets used in development and testing environments has a habit of showing up in production. This was the opportunity that Kubernetes was built to take advantage of. It provided developers with a means – an open source means, naturally – of taking the containers they were so enamored of and running them in production environments, but without having to make determinations such as which containers run on which hardware. In its initial incarnation, this was the simple, basic job that Kubernetes was hired for.
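For a sense of what that abstraction looks like in practice, here is a minimal sketch of a Kubernetes Deployment in today’s apps/v1 API (the application name and image are hypothetical). The developer declares the container and a replica count; the Kubernetes scheduler, not the developer, decides which machines the copies actually run on:

```yaml
# Hypothetical example: a declarative spec for running three copies
# of a container. Nothing here names a specific machine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                          # hypothetical application name
spec:
  replicas: 3                              # run three copies of the container
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: example.com/hello-app:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

The manifest says nothing about hardware; placement is the scheduler’s job, which is precisely the determination developers were happy to hand off.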

Kubernetes, in other words, effectively attached itself to a developer-friendly technology that was in a period of high growth, which put it on the path to becoming a mainstream technology choice not just for developers but for the organizations they work for. Though there are many factors associated with the growth of Kubernetes, then, none is as important as its ability to satisfy developers’ need for containers at every level of infrastructure.

When we’re asked, then, how Kubernetes got to where it is today and what the lesson is for those that would emulate its success, our answer is the same as it’s always been: developers are kingmakers. Ignore them at your peril.