tecosystems

What is the Atomic Unit of Computing?


[Image: defining the unit of atomic weight]

According to published reports, Docker (née dotCloud) is in the process of securing $40M in financing. (Update: this post originally misstated the amount of financing, but its substance stands.)

If popularity is a guiding metric, this infusion will come as no surprise. Docker is one of the fastest growing projects we have ever seen at RedMonk, and virtually no one we speak with is surprised to hear that. In a little over a year, Docker has exploded into a technology that is seeing near universal uptake, from traditional enterprise IT suppliers (e.g. Red Hat) to emerging infrastructure players (e.g. Google).

There are many questions currently being asked about Docker. Most obviously: why now? The idea of containers is not new; conceptually it can be dated back to the mainframe, with more recent implementations ranging from FreeBSD Jails to Solaris Zones. What is it about Docker that has captured mainstream interest where previous container technologies could not?

Rather than one explanation, it is likely a combination of factors. Most obviously, there is the popularity of the underlying platform. Linux is exponentially more popular today than any of the other platforms offering containers have been. Containers are an important, perhaps transformative feature. But they historically haven’t been enough to compel a switch from one operating system to another.

Perhaps more importantly, however, there are two larger industry shifts at work which ease the adoption of container technologies. First, there is the near ubiquity of virtualization within the enterprise. When Solaris Zones dropped in 2004, for example, VMware was six years old, five months removed from its acquisition by EMC (a move that baffled the industry) and three years away from an IPO. Ten years later, virtualization is, quite literally, everywhere. At OSCON, for example, one database expert noted that somewhere between 30% and 50% of his very large database workloads were running virtualized. The last workload to be virtualized, in other words, is now virtualized almost half the time. Just as the failure of the ASP market paved the way for later SAAS entrants, the long fight for virtualization acceptance has likely eased the adoption of container technologies like Docker.

More specific to containers, however, is the steady erosion in the importance of the operating system. To be sure, packaged applications and many infrastructure components are still heavily dependent on operating system-specific certifications and support packages. But it's difficult to make the case that the operating system is as all powerful as it once was, given the complete reversal of attitudes towards Ubuntu in the cloud era. Prior to the ascension of Amazon and other public cloud suppliers, large scale enterprise support for Ubuntu was near zero. Today, besides being far and away the most popular distribution on Amazon, Ubuntu is supported by enterprise stalwarts from HP to IBM. Nor has IAAS been the only factor in the ongoing disintermediation of the operating system; as discussed previously, PAAS is the new middleware, and middleware's explicit mission has historically been to abstract the application from the operating system underneath it.

These developments imply that there is a shift at work in the overall market importance of the operating system (a shift that we have been expecting since 2010), which in turn helps explain how containers have become so popular so quickly. Unlike virtual machines, which replicate an entire operating system, containers act like a diff of two images. Operating system components common to the two images are shared, leaving the container to house just the difference: little more than the application and its specific dependent libraries. This means that containers are substantially lighter weight than full VMs. If applications are heavily operating system dependent and you run a mix of operating systems, containers will be problematic. If the operating system is a less important question, however, containers are a means of achieving much higher application density on a given instance than virtual machines that fully emulate an operating system.
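As a minimal sketch of that difference (the base image, application file and install step below are hypothetical, not drawn from the post), a Dockerfile describes a container image as a shared operating system base plus only the application-specific layers on top of it:

    # Hypothetical Dockerfile: the FROM line names a base image whose layers
    # are shared by every container built on top of it; only the instructions
    # that follow add application-specific layers.
    FROM ubuntu:14.04
    # Install the application's only dependency.
    RUN apt-get update && apt-get install -y python
    # Copy in the application itself.
    COPY app.py /srv/app.py
    # Define what runs when the container starts.
    CMD ["python", "/srv/app.py"]

Dozens of containers built from an image like this can share a single host kernel and base layers, where each full virtual machine would carry its own copy of the operating system.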

Taken in the aggregate, this is at least a partial answer to the question of “why now?” As is typical with dramatic movements, Docker’s success is as much about context as the quality of the underlying technology – intending no disrespect to the Docker engineers, of course. Engineering is critical; it’s just that timing is usually more critical.

The most important question about Docker, however, isn’t “why now?” It is rather the one being asked more rarely today, by those struggling to understand where the often overlapping puzzle pieces fit. The explosion of Docker’s popularity raises a more fundamental question: what is the atomic unit of infrastructure moving forward? At one point in time, this was a server: applications were conceived of, and deployed to, a given physical machine. More recently, the base element of infrastructure was a virtual recreation of that physical machine. Whether you defined that as Amazon did or as VMware might was less important than the idea that an image resembling a server, from virtualized hardware and networking interfaces to a full instance of an operating system, was the base unit from which everything else was composed.

Containers generally and Docker specifically challenge that notion, treating the operating system and everything beneath it as a shared substrate, a universal foundation that’s not much more interesting than the raised floor of a datacenter. For containers, the base unit of construction is the application. That’s the only truly unique element.
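To make that contrast concrete (the image name myapp is hypothetical, carried over from the sketch above), the workflow never names a server or an operating system instance; the application image is the only artifact the operator handles:

    # Build an image from the application's Dockerfile, then run it.
    # No machine, virtual or physical, is provisioned in either step;
    # the host's kernel and shared base layers are simply reused.
    docker build -t myapp .
    docker run -d myapp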

What this means is as yet undetermined. Users are for the most part years away from understanding this division, let alone digesting its implications. But vendors and projects alike should be, and in some cases are, beginning to critically evaluate the lens through which they view the world. Infrastructure players like VMware and the OpenStack ecosystem, for example, need to project forward the potential opportunities and threats presented by an application-centric as opposed to a VM-centric worldview, while Docker and others in similar orbits (e.g. Cloud Foundry) conversely need to consider how to traverse the comprehension gap between what users expect and what they get.

Google App Engine, Force.com and others, remember, tried to sublimate the underlying infrastructure in the first generation of PAAS offerings and the result was a market dwarfed by IAAS – which not coincidentally looked a lot more like the physical infrastructure customers were used to. But as the Turkey Fallacy states, “it hasn’t happened so it won’t happen” is not the most sustainable defense imaginable. Just because PAAS struggled to get customers beyond thinking in terms of physical hardware doesn’t mean that Docker will as well.

In any event, expect to see players on both sides of the VM / app divide aggressively jockeying for position, as no one wants to be the one left without a chair when the music stops.

7 comments

  1. […] “According to published reports, Docker (neé dotCloud) is in the process of securing $40M in financing…If popularity is a guiding metric, this infusion will come as no surprise. Docker is one of the fastest growing projects we have ever seen at RedMonk, and virtually no one we speak with is surprised to hear that. In a little over a year, Docker has exploded into a technology that is seeing near universal uptake, from traditional enterprise IT suppliers (e.g. Red Hat) to emerging infrastructure players (e.g. Google). There are many questions currently being asked about Docker. Most obviously, why now?…Rather than one explanation, it is likely a combination of factors…” Via Stephen O’Grady, RedMonk […]

  2. Excellent analysis Stephen, but I disagree a bit about the importance of the OS. I think, in a limited sense, Docker makes the OS more important. If you are going to be running 100s of containers on an OS, you want to make sure your OS is rock solid and secure enough to handle all that multi-tenancy. You also want that OS to treat containers as a first class feature so the devops/sysadmin team can use the tools they like to manage all those containers.

    As a developer who has fully bought into PaaS, I am not that excited about Docker by itself, though. On its own, I see it as a more elegant and efficient VM – still requiring me to do all the updates, networking, and general sysadminy stuff I don’t want to do. I am extremely excited about the next 12 months as my company (Red Hat), Docker Inc., and Google start to bring more PaaS-like features to the Docker ecosystem.

    Should be exciting times ahead!

    1. There is still a lot of work to do in this space. I think it might take longer than that.

      Almost every PaaS I’ve seen so far has been great at running stateless components. What I’m missing is great support for running stateful data stores at the PaaS layer.

  3. The whole time I was reading it, I was thinking: when is the author going to mention service oriented architectures?

    So I guess I’ll do it.

    So what was the smallest unit of computing a while ago? People were running a few applications (webserver and database) or one application per machine. Now that people are building for resilience (read: distributed) with SOA, the unit of computing is getting even smaller: smaller than a machine or a VM.

    Also, a lot of containers, like at Google, Twitter, etc., are running on bare metal, not VMs.

  4. Docker-based private PaaS is a much better meet-half-way than public PaaS was at pulling IT’s focus up the stack to business value: the application. Here’s hoping that more make the journey now that hybrid is a possibility and building cloud-native apps in house is more accessible.

  5. […] mainframe days, but never for an operating system as popular as Linux is today, as RedMonk analyst Stephen O’Grady notes. Few will change their OS merely to get the benefit of nifty container technology. But if it […]

  6. […] With microservices we’re dealing with disposability at a different layer of the stack, but the pattern is the same. As Stephen argues the key to the current container frenzy is that the Atomic Unit of computing is now the app. […]
