tecosystems

The Road to Abstraction


Number 7, 1951 by Jackson Pollock

Computers are hard, which is why it’s no surprise that one of the long-running trends over the history of the technology industry is abstraction. From machine code to assembler to COBOL, even the earliest platform implementations have exhibited a tendency to progress incrementally away from low-level primitives, which are non-intuitive to human beings and typically difficult and time-consuming to learn. COBOL, in fact, owes its existence to this trend. Facing high costs for programmers, a group convened in May of 1959 at the United States Department of Defense to discuss the creation of a computer language that “should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power.”

The goals that led to the approval of the COBOL specification in 1960 are the same ones that many platforms pursue today. As the combination of Moore’s Law and the vast array of on-demand infrastructure it spawned has rendered performance less of an immutable tradeoff, they might even be more relevant in 2017 than they were in 1959.

The most common way of making technologies easier to use has been abstraction. The further away a given platform gets from the underlying hardware that speaks in bits and bytes, the larger the number of humans that are capable of using it. This has been the fundamental assumption behind platforms from Microsoft’s Visual Basic to Salesforce’s Lightning.

While it’s unusual to stop and think about abstraction explicitly as a driving force, neither is it a surprise when pointed out: it’s taken for granted rather than completely unknown. What can be interesting, however, is the pace of these cycles of abstraction and what they tell us about the future of the industry for buyers and sellers.

Consider the basic case of an enterprise application developer. The following is a non-exhaustive list of some of the platform types they might have used or, depending on the timeframe, been directed to use.

  • Java Application Server (~1997)
    There are many different ways to date this category, but one reasonable starting point is with one of the first releases of WebLogic in November of 1997. IBM followed with WebSphere in 1998 and Apache Tomcat followed that in 1999, but even in 1997 the emergence of what would come to be known as the Java Application Server category was clear. Its value proposition for clients was straightforward: Java was a known and trusted, if not beloved, language for creating enterprise applications. Application servers essentially functioned as an abstraction layer – a layer that later came to be called middleware – between the application itself and the underlying operating system / hardware platform. The reality on the ground was never as simple as the “write once, run anywhere” promises implied, but the portability of applications was sufficient to lead to massive adoption by enterprises and governments worldwide. Many of the applications built for and run on Java app servers are still in service today, though uncertainties about the future of specific Java frameworks, the emergence of cloud environments and the availability of app server talent have led to the porting of some applications. This adoption led to substantial valuations: BEA, the vendor behind WebLogic, was offered $6.7 billion by Oracle in 2007 and turned the offer down because it believed it undervalued the company. A year later, Oracle paid $8.5 billion, or just under $10B in today’s dollars. That was about a billion more than it paid for Sun Microsystems, the creator of the Java technology WebLogic was built on, and it represented the company’s second-largest acquisition, behind only PeopleSoft in 2005.

  • Hosted PaaS (~2007)
    Ten years after WebLogic debuted, Salesforce kicked off the hosted Platform-as-a-Service (PaaS) wars with the introduction of its Force.com platform in September of 2007. Particularly in its initial incarnation, Force.com was orthogonal to the application server market. Most obviously, it involved hosting applications, which conservative enterprise buyers were not yet comfortable with at scale. Second, applications weren’t written in the industry standard Java language, but in Apex, a Java-like language proprietary to Salesforce. Lastly, the application architecture was radically different from what enterprises were accustomed to. Gone were notions of operating systems and servers and storage; instead, Force.com and the hosted platforms that followed it such as Google App Engine offered an application fabric for developers to deploy against. This category saw relatively anemic interest from increasingly empowered developer populations in the early years, to the degree that many in the industry – particularly those on the investment side – wrote off PaaS in general as a fundamentally failed model. As later versions began to open up and embrace industry standard technologies, however, adoption picked up. Heroku, which allowed the creation of Rails-based applications backed by Postgres and deployed and managed via Git, saw enough adoption that it was ultimately acquired by Salesforce. Like the Java app servers that preceded it, hosted PaaS represented an important further abstraction: in addition to abstracting developers away from the underlying operating system and hardware, it abstracted them from the computing environment entirely.

  • Open Source PaaS (~2011)
    Targeting a middleware-like opportunity with a PaaS platform that was itself open source and freely available, VMware released Cloud Foundry as an open source project in 2011, some four years after Force.com debuted. Cloud Foundry was a multi-runtime application platform that essentially served as a wrapper or container – more on that term shortly – for enterprise applications, providing a number of platform features. Crucially, it and similar projects like OpenShift made those applications portable across a range of environments. Its key abstraction was, and arguably still is, the ability to run and host workloads across a variety of environments, clouds in particular; the more recent “Cloud Native” branding that has replaced the former PaaS messaging simply makes this latter promise explicit. As cloud adoption has accelerated, traditional middleware has struggled to achieve a foothold in emerging platforms like Amazon’s, and this has created a market opportunity for what used to be called open source PaaS platforms.

  • Containers (~2013)
    Lest contributors from projects like FreeBSD and Solaris take exception, let alone those who contributed to similar mainframe technologies, let’s note up front that containers existed long before 2013. But 2013, two years after the introduction of Cloud Foundry, was the year that saw the first release of Docker as open source software. The explosive growth of and interest in Docker from developers was almost unprecedented; it remains one of the fastest growing projects RedMonk has ever tracked. The growth presumably surprised dotCloud – the company we now know as Docker – because container technology was not its initial focus; a PaaS was. Docker didn’t invent the concept of containers, but it packaged them in such a way that they were easy for developers to experiment with, and experiment they did. In short order, the technology world was captivated by containers in a way that it had never been with FreeBSD Jails or Solaris Zones. Docker containers effectively allowed developers to abstract the operating system away to such a degree that many today argue it no longer matters, neatly encapsulating application environments and their dependencies in a lightweight and portable fashion (a minimal sketch of that packaging appears after this list). In the early days of Docker containers, their usage was typically restricted to development – the technology was, by the company’s own admission, not ready for production. Today, there are multiple platforms which help to schedule and orchestrate the deployment and running of containers in production, and the combination of technologies is meeting some of the same needs that previous middleware offerings met before it.

  • Serverless (~2014)
    Introduced at its re:Invent show a year after Docker burst on the scene, AWS’ Lambda service was, by our estimation and that of many others, the most significant announcement of that event. Lambda was a new and fundamentally distinct model. It required developers to rethink the way they constructed applications, because it supported only event-driven services which were triggered and then retired upon completion. The server was abstracted, the operating system was abstracted, and even the middleware was abstracted. All that remained was code waiting for an event to trigger it (a sketch of that model follows this list). Adoption according to the metrics RedMonk tracks was surprisingly modest initially given the novelty of the service, but this was easily attributed to its unique – and to some, alien – approach and the fact that at launch it was, like the original hosted PaaS platforms that preceded it, a single-vendor, proprietary platform. Since then, the model that Lambda pioneered – serverless – has become substantially more popular, as its benefits have become better understood and as alternative implementations and frameworks have emerged. It’s probably not accurate to say that serverless has gone mainstream from a production usage standpoint as yet, though many companies worldwide are running it in production, but it is apparent that serverless is now a viable model over the longer term.
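
To make the container point concrete, here is a minimal, hypothetical Dockerfile of the sort developers began experimenting with; the base image, file names and application are illustrative assumptions rather than anything drawn from the examples above.

    # Hypothetical example: package a small Python service and its dependencies
    # into one portable image, independent of the host operating system.
    FROM python:3.11-slim
    WORKDIR /app
    # Dependencies are declared explicitly and baked into the image at build time
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # The application code travels together with its runtime environment
    COPY . .
    # The container then starts the same way on any host that can run it
    CMD ["python", "app.py"]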

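The serverless model can be sketched just as briefly. What follows is a minimal, illustrative Python function in the general shape of an AWS Lambda handler; the event field and the greeting logic are assumptions made for the example, not part of any real event schema.

    # Minimal sketch of an event-driven, serverless-style function. There is no
    # server, operating system or middleware to manage; the platform invokes
    # handler() when an event arrives and retires the execution when it returns.
    import json

    def handler(event, context):
        # 'event' carries the trigger payload; the 'name' field is illustrative only
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})
        }
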
As mentioned, this list of platform technology innovations is far from exhaustive. These technologies are also, more often than not, distinct from one another in their architecture, philosophy and approach. They have different strengths, different weaknesses and, often, different production uses. It is fair to characterize many of these technologies as apples and oranges. Yet each of these technologies has competed to some degree with the generations that preceded it, and with those that have come after. Cloud Foundry and similar projects actively target traditional middleware such as WebLogic; container technologies like Docker coupled with management technologies like Kubernetes, meanwhile, have been a thorn in the side of projects like Cloud Foundry since their inception. So much so that the company that spawned Cloud Foundry has chosen to also offer customers Kubernetes with its Kubo offering. Hosted services such as Force.com or Lambda, for their part, compete with all of the above, selling against pure software with software that’s delivered as a service.

These examples and this competition, divergent though they might be, tell us three things.

First, that the industry’s conflation of abstraction with progress continues. The fundamental truths that drove the design and creation of COBOL are just as present in Lambda. The key to each category’s success, in fact, was mapping some new or repackaged form of abstraction to a real world market issue – application portability, development velocity, dependency management and environment complexity, to name a few.

Second, that innovation is arriving more quickly. If you’re keeping track at home, the timing between the cycles identified here is ten years, four years, two years, and then one year. Gone are the days in which vendors had a decade or more to build businesses without an imminent threat of disruption. The only major technology area which is not subject to that fear at this time is cloud infrastructure, because the capital requirements and the attendant economies of scale combine to ensure a more limited field of competitors. Everyone else is left to innovate while wondering about what competition is just over the horizon.

Lastly, that this continual march towards higher levels of abstraction, and the increasing pace of its arrival, is having and will continue to have profound impacts on both the buyers and sellers of technology.

Buyers are coping with a paradox unthinkable to those of us in the technology industry – too much innovation. Even as the market delivers newer, more powerful and more efficient solutions at a furious rate, that pace has made choice and selection a real problem. This is certainly true of the market as a whole, where buyers are forced to choose between public and private, DIY vs fully managed, VM vs container vs hermetically sealed platform, and so on. It’s increasingly also the case within vendors themselves; AWS, for example, is introducing new features and services so quickly that even customers that are all in on the platform are regularly unaware of new capabilities.

Sellers, for their part, can count on less runway to get their businesses established, as new approaches barely have time to achieve mainstream recognition before the next one arrives. The faster pace also means increased competition, both in volume and in model. When there was effectively one middleware approach, with competitors more similar than not, market forces naturally led to consolidation. Today, there are many potential approaches for a given workload, and each of those approaches will boast multiple viable competitors – many of which are at least somewhat distinct from one another.

If you’re in another area of the technology market, feeling grateful or even smug that you don’t have to cope with this accelerating stream of disruption, you’re going to be unpleasantly surprised sooner or later. This is not unique to the application market; the exact same pattern is identifiable in markets from databases to hardware. Abstraction is a force of nature in this industry, and if anything it’s getting stronger. Those who fail to recognize this will be supplanted by those who do.

Disclosure: AWS, The Cloud Foundry Foundation, Docker, IBM, Microsoft, Pivotal, Oracle, Red Hat and Salesforce are RedMonk customers.

2 comments

  1. Is the trend really abstraction or is it simplification?

    Think about the theme that VMs, Containers, and Serverless are all variations upon: creating absolutely minimal units of code deployment.

    One other way I see this showing up is the trend towards static binaries in compiled languages like Rust and Go. If you can get something running by downloading a single file and providing a few command line parameters, you’ve saved your “users” a *LOT* of time screwing around with dependencies and configuration files and default configuration paths and environment variables and all that other crap. But static binaries don’t really increase abstraction meaningfully; they just make it easier to get software running by obviating the need to build against a defined platform.

    Another avenue that’s being worked on is unikernels. This is, again, not increasing abstraction, but making the software smaller and simpler to deploy.

    We always need to allow for some complication in scalable systems so that processing can be distributed correctly, but ultimately, the biggest convenience of all cited progress is removing complexity around packaging and deployment. Not *simply* abstraction. Actually, I think if it were simply users piling on abstractions, there wouldn’t be quite as much confusion. The hard part here is figuring out what is the true, simplest way for software to be packaged and deployed and subsequently coordinated.

  2. on the other hand,

    http://fingfx.thomsonreuters.com/gfx/rngs/USA-BANKS-COBOL/010040KH18J/index.html

    Old systems can’t be innovated away; they have to be kept running. Banks, airlines, governments (federal, state and local), the military, etc., are very fond of reliability and stability. The robotic arm in the space shuttle used the Intel 80186, right up until the shuttle itself was retired. I believe there is still a good market for non-disruptive technology.
