Addition By Abstraction

[Photo: Lowe's, La Quinta]

The process of home construction is a complicated one. Parts and materials are manufactured around the world, shipped to regional lumberyards and home centers, from which contractors purchase the raw materials necessary to build the property per its design. Because it’s in essence an engineering exercise, one with high-impact downsides up to and including loss of life, construction is an industry dominated by both specialized experts – referred to as “subs” in the trade – and compliance requirements, most often in the form of first permitting and then code inspection and enforcement.

The advantages of the custom construction approach are many. Buyers get a home built specifically for their purposes, designed according to their functional needs and aesthetic preferences. They can also expect lower maintenance costs, more efficient heating and cooling systems, and the location they prefer.

Unsurprisingly, however, these benefits come at a cost – on average, a roughly thirty percent premium over the median single-family home. For that reason alone, most buyers opt to purchase an existing home rather than construct one from scratch. The house might not be perfectly suited to their needs, but it is likely close enough and is available immediately, versus the higher latency inherent to new construction.

Because most would-be buyers opt for existing homes, then, they are not sent into the maze of home center aisles. They are instead met by realtors, who serve as an insulating layer between the buyer and a potentially overwhelming array of options, byzantine legal contracts, and local permitting and inspection requirements.

The technology industry is not the construction or real estate industries, obviously. For one, the average enterprise is less cost conscious than the average home buyer. The average enterprise’s requirements are likewise more complex than the average homeowner’s.

But in spite of the differences, there are parallels between construction and technology that may be instructive, because with notable exceptions such as Heroku, the technology market today is still firmly in the home center business.

As has been discussed previously, beginning with the introduction of the first cloud services in 2006, Amazon gradually refashioned the industry in its image. Less than a decade after the infrastructure-as-a-service (and thus cloud) market was born, the default expectation became base-level infrastructure primitives available as a web service, paid for on usage and available more or less instantly. This became the mainstream norm so quickly, in fact, that it is taken for granted today. On the rare occasion that infrastructure can’t be instantly provisioned and accessed, it’s considered an anti-pattern.
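
To make that primitive-centric model concrete, the following is a minimal sketch of provisioning a raw compute instance on demand through a web service API, using the AWS SDK for Python (boto3) as an illustration; the region, image ID and instance type are placeholders rather than recommendations, and the snippet assumes credentials are already configured.

```python
# Minimal sketch: requesting a base-level compute primitive on demand,
# paid for on usage, via a web service call. Assumes boto3 is installed
# and AWS credentials are configured; identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned {instance_id} in seconds rather than weeks")
```

Everything above the instance, from operating system configuration to runtimes, databases and scaling, remains the developer's responsibility, which is precisely the DIY burden discussed below.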

Miraculous as this nearly unending library of primitives would have seemed in 2005, however, it has been clear for several years that the DIY approach the library necessarily implies has limitations. There are instances where developers want and need the ability to tune every last service underlying their application, but those instances are declining over time. It is also less feasible for any developer, no matter how skilled, to be fully proficient in all of the underlying infrastructure components, given how many more of them exist today than a decade or two ago. All of which means that the complexity inherent to a huge catalog of available services can become, in certain settings, less a strength than a liability.

This is particularly true in today’s environment. Where a developer’s first – and at times, only – priority might once have been scale, today it’s much more likely to be velocity. Scale is not a solved problem, to be sure, but an application development process that can operate at speed is the greater concern.

Interestingly, this is now as true within the executive ranks as it is in the trenches. Developers have always wanted to move quickly, but historically their employers have wanted the opposite. For decades, enterprises equated speed with risk, and because this attitude was prevalent among any given business’s peers, there was little competitive pressure to accelerate the pace of innovation. Over the last decade-plus, this changed, as Marc Andreessen articulated in his “Why Software Is Eating the World” op-ed. In the years before and since, so-called digital natives have flooded into industry after industry, and their single most defining characteristic is speed. From their technology industry roots, they learned to conflate innovation with velocity, and to prize the ability to iterate at rates that would have been unthinkable a decade prior, let alone two. This in turn has put incumbent competitors under immense pressure, first to get their offline businesses online – an exercise that is increasingly complete, at least by some definition of that term – and subsequently to move at the pace that digital natives do.

With both buyers and practitioners prioritizing velocity, then, enterprises are increasingly focused on identifying ways to move more quickly. They are pursuing this goal with single-minded purpose and through any means necessary. Technology budgets are increasingly tilting away from support and maintenance and toward new application development and growth. The appetite for research into evidence-based practices for improving the rate of innovation, meanwhile, was such that GCP felt compelled to acquire the most prominent organization in that space.

As ever, speed kills.

Which is perhaps the biggest reason that interest in and demand for managed services have begun to spike. While there are many factors behind that trend, arguably none are more important than the market’s recent obsession with velocity. Abstractions have been a feature of the technology industry as long as there has been a technology industry, of course. The now-antiquated COBOL language, for example, was developed as an abstraction to make programming more accessible.

But beyond the initial infrastructure (IaaS), application (SaaS) and platform (PaaS) abstractions, we’re witnessing the rise and expansion of a distinct category of domain-specific alternatives. Instead of providing a layer above base hardware, operating systems or other similar underlying primitives, they abstract away an entire infrastructure stack and provide a higher-level, specialized managed function or service. This model isn’t new; even the more visible of today’s managed service providers, such as Algolia (2012), Auth0 (2013), Cloudinary (2011), Contentful (2013), Jumpcloud (2010), Snyk (2015), Stripe (2010) or Twilio (2008) / Segment (2011), have been around a while. The average age of those providers is a tick under nine years.
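
To illustrate what consuming one of these domain-specific abstractions looks like in practice, the following is a minimal sketch using Stripe's Python library to accept a payment; the secret key and amount are placeholder test values, and the point is simply that a single API call stands in for an entire payments stack.

```python
# Sketch: consuming a domain-specific managed service (payments) rather
# than assembling the underlying stack. Assumes the stripe package is
# installed; the secret key below is a placeholder test value.
import stripe

stripe.api_key = "sk_test_placeholder"

# One call abstracts away card networks, PCI compliance, retries,
# ledgering and the rest of the payments infrastructure.
intent = stripe.PaymentIntent.create(
    amount=2000,  # amount in cents
    currency="usd",
    payment_method_types=["card"],
)

print(intent.id, intent.status)
```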

But while they’ve been around for years, these higher level managed services have never been more popular than they are today.

Consider the recent valuation of Snyk, which raised $200M at a $2.6B valuation. That would be surprising enough on its own, but doing so a mere nine months after raising $100M at a valuation just north of $1B is startling. Not many companies more than double their valuation during a global pandemic. But even that figure is beneath the $3.2B Twilio paid in October to acquire Segment, which in turn is shy of the $36B valuation Stripe achieved when it took $600M in funding in April. At that valuation, for context, Stripe is worth more than EA, Splunk or Twitter, and more than Akamai and Citrix combined.

These valuations are not an accident, but a reflection of the demand for lower operational responsibilities on the part of organization and developer alike. These abstractions, which allow developers to make challenges such as authentication, commerce, identity and search someone else’s problem, are exploding in popularity. The conventional wisdom says that most developers, given the opportunity in a vacuum to reimplement services that have already been built, will. Even if this were true, however, developers no longer have that luxury. Pushed to move quickly, developers today are meeting those demands by narrowing the scope of their own workloads, which is accomplished in turn by offloading discrete functional areas to third-party managed services.
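
As an example of that narrowing in one functional area, search, the following is a minimal sketch against Algolia's Python client (the v2-style API); the application ID, API key, index name and records are all hypothetical placeholders.

```python
# Sketch: offloading a discrete functional area (search) to a managed
# service rather than operating a search cluster in-house. Assumes the
# algoliasearch package (v2-style API); credentials and data are placeholders.
from algoliasearch.search_client import SearchClient

client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
index = client.init_index("products")

# Indexing, replication, ranking and relevance tuning are the provider's problem.
index.save_objects([
    {"objectID": "1", "name": "mechanical keyboard", "price": 129},
    {"objectID": "2", "name": "wireless mouse", "price": 49},
])

results = index.search("keyboard")
for hit in results["hits"]:
    print(hit["name"], hit["price"])
```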

The implications of the growth spurt for managed services are varied and, in some areas, still unclear.

  • Velocity is inarguably improved, with the primary tradeoff being the introduction of external dependencies. In a world in which most businesses have already traded some portion of their on-premises infrastructure for public cloud, however, this is not the fraught decision it would once have been.

  • It also raises questions for the development, build and deploy processes, particularly around how managed services are integrated with each. The developer experience can be dramatically improved via higher-level managed services, but much depends on how they’re integrated and, on an even more basic level, procured.

  • Likewise, what this means for the large hyperscale cloud infrastructure providers, whether from an acquisition, in-house development or partnership standpoint, is a complicated question. It does seem unlikely, however, that any provider would be in a position to supply all or even most of the lower- and higher-level abstractions necessary. Twilio’s Segment acquisition implies that further rollups of previously distinct services are likely, but where will the centers of gravity be?

  • Lastly, one area of potential concern for some is what the rise of managed services means for open source. Most of the services, after all, are not made available as open source software, dependent as each may be upon it.

    In a recent interview, Jay Kreps, CEO of Confluent, acknowledged that if he were at LinkedIn today, the team he was a part of with Neha Narkhede and Jun Rao would likely not have written and released Kafka as open source software. He said in part, “It’s not how Kafka was written, it’s how it was adopted. You could get the exact same thing from a number of different companies today.”

    On the one hand, this is a tremendous validation of the work completed to date in the open source world. Unlike in 2010 when Kafka was being written, developers today can look around and find almost anything they need available as open source software, a managed infrastructure service or both. It also hints at a potential high-growth avenue of commercialization for authors. On the other hand, the clear implication is less open source software being written and released. More subtly, it may mean a model of delivery and commercialization for open source that represents a departure from tradition. But more on that later.

The technology industry today may not be ready to offer the mass market buyer the technology equivalent of a house. But there is ample evidence to suggest that we’re drifting away from sending buyers and developers alike out into a maze of aisles, burdening them with the task of picking primitives and assembling from scratch. If the first era of the cloud is defined by primitives, its days are coming to an end. The next is likely to be defined, as the computing industry has been since its inception, by the abstractions we build on top of those primitives. Whether those abstractions take the form of a house, however, has yet to be determined.

Disclosure: Algolia, Amazon, Auth0, Citrix, Google, Jumpcloud, Snyk, Splunk and Twilio are RedMonk clients. Akamai, Cloudinary, Confluent, Contentful, EA, Stripe and Twitter are not current clients.
