At this point, there’s a real chance that “cloud” becomes as misused a term as “SOA.” Which would be a crying shame, because the cloud – to me – is profoundly transformative: a shift universally more pertinent than SOA ever could be. But given that neither I nor Dell is legally permitted to regulate usage of the term, I suppose I will leave cloud to its fate.
Whatever we may call the set of disparate technologies that are collectively – and often mistakenly – referred to as cloud computing, the impact has been undeniable. Look no further than vendor budgets for cloud-related hiring, marketing, and staffing. Firms large and small are furiously repositioning themselves for what could be a sea change in the way that development, deployment, and management are done.
Which is as it should be. Because whether we’re talking about the traditional external clouds that have come to characterize the term, or the private and semi-private clouds that vendors envision for the future, clouds are an increasingly viable, even critical, deployment paradigm.
There are a multitude of potential problems, to be sure: Tim’s piece this week on lock-in echoes concerns of my own, concerns that have led to both talk (1, 2) and action (ping me for details) on the question of standards.
However that question is resolved – or isn’t – I remain convinced that the forecast ahead is cloudy; more convinced than ever, in fact. While it’s possible to debate the root cause of the worldwide financial crisis, the fact that it will affect technology spending and decision making cannot be disputed. Not least because it already has.
As we move forward, I expect the cloud to be something of a beneficiary of the economic meltdown. For the following five reasons:
- Economies of Scale (People):
One of the natural consequences of an economic downturn – besides belt-tightening – is significant introspection into resource allocation, particularly people. Organizations are likely to be asking themselves detailed questions along the lines of: do I need these people? Is what they do a core competency? And if not, what can I do about it?
One option, of course, is outsourcing. It is extremely unlikely that organizations of any size will be content to outsource the entirety of their infrastructure, but it will be only natural for those with limited budgets to explore in great detail the potential cost savings of making infrastructure someone else’s problem to manage. If you’re a Fortune 500 organization, ask yourself this: can AmBayGooglHoo run their datacenters with fewer people than I can?
- Economies of Scale (Hardware):
While the potential savings in terms of human resources are significant, the economies of scale are perhaps most glaring with respect to hardware costs. With the exception of very large enterprises and governments – and in some cases, not even then – the major cloud suppliers can purchase hardware, networking equipment, bandwidth, and so on far more cheaply than you or I can. Meaning that even if all else were hypothetically equivalent – resources, experience, and so on – cloud providers would be able to deliver the same functionality more cost effectively than I could. True, the cloud model has disadvantages – latency, primarily – which may outweigh these savings for a variety of workloads, but the economics will only become more compelling in a capital-strapped environment.
- Pay as You Go:
Web and application hosts, in particular, were slow to wake up to the threat that Amazon presented, in part because they did not see the technology as revolutionary. Whether that view is correct or not, it misses what is perhaps the real significance of Amazon’s offering: the pay-as-you-go model. Ridiculed by some at launch – Amazon was issuing bills of 12 cents in some cases – it now looks as if it will form the basis for the commercial models that follow it. Not that Amazon should be credited with inventing the model: SaaS offerings have been pay-as-you-go for a few years now, and even in the hardware space, Sun’s Network.com has offered on-demand hardware since its launch.
But Amazon was the first to truly popularize the pricing structure in the context of a near universally applicable cloud model. In its wake, other would-be cloud entrants will likely be forced to adapt their traditional pricing models to the pay-as-you-go expectation that Amazon has fostered. And what better time to pay only for what you use than a recession?
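The appeal of the model comes down to simple arithmetic. A minimal sketch, using entirely hypothetical rates (the figures below are illustrative, not actual Amazon or hosting-provider pricing):

```python
# Back-of-the-envelope comparison of pay-as-you-go vs. flat-rate hosting.
# All rates are hypothetical, chosen for illustration only.

HOURS_PER_MONTH = 730  # average hours in a calendar month


def pay_as_you_go_cents(rate_cents_per_hour: int, hours_used: int) -> int:
    """Metered billing: pay only for the hours an instance actually runs."""
    return rate_cents_per_hour * hours_used


def break_even_hours(rate_cents_per_hour: int, flat_fee_cents: int) -> float:
    """Usage level at which metered billing catches the flat monthly fee."""
    return flat_fee_cents / rate_cents_per_hour


if __name__ == "__main__":
    rate = 10        # hypothetical: 10 cents per instance-hour
    flat_fee = 5000  # hypothetical: $50/month managed-hosting fee

    # A prototype used 40 hours a month costs $4 instead of $50...
    print(pay_as_you_go_cents(rate, 40))               # 400 cents
    # ...while a machine running around the clock exceeds the flat fee:
    print(pay_as_you_go_cents(rate, HOURS_PER_MONTH))  # 7300 cents
    print(break_even_hours(rate, flat_fee))            # 500.0 hours
```

The point the sketch makes: for intermittent or experimental workloads – exactly the kind a budget-constrained shop runs – metered billing is dramatically cheaper, and the customer, not the vendor, decides where on that curve they sit.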
- Time to Market:
Many have written before about the potential time-to-market advantages that the cloud offers to startups as well as established enterprises. Pre-cloud, launching a startup typically meant picking either a) underpowered, inflexibly priced managed hosting, or b) overpriced co-location or self-hosting. The former was often not an option because the machines were too locked down, preventing necessary customization, while the latter added considerable overhead in time: time to find a host, time to configure the machine, time to ship the machine, and most importantly time to manage the machine on an ongoing basis. Today, startups can spin up an instance in literally seconds, meaning that the latency from conception to deployment – potentially wide-scale deployment – is about that long. Of potentially greater impact is the ability of some cloud platforms, like Google’s App Engine, to assume the burden of scaling. Developers, then, may focus on their core competency – building applications – and let Google worry about the challenge of scaling them.
In either case, time to market – and potentially, revenue – is dramatically reduced.
- Tools, Languages and Runtimes:
While dynamic languages such as PHP have enjoyed massive success in the web startup world, which is overwhelmingly LAMP, enterprises – even with the backing of suppliers like IBM and Oracle – have been much slower on the uptake. One common justification for this reluctance has been the “dynamic languages can’t scale” mantra. Regardless of whether that is actually true, many enterprises have managed to convince themselves that it is. This has led to a massive buildout of expensive, complicated middleware architectures – primarily Java, with a fair amount of .NET – because these “scale.” The question that enterprises may begin to ask more and more is this: if dynamic language developers tend to be cheaper, and the tools and infrastructure are generally lower cost, and we can make scaling the applications someone else’s “problem,” don’t we have to look at that? The equation isn’t quite that simple, of course, but the underlying economics might well be.
Tim is correct when he says:
We haven’t quite figured out the architectural sweet spot for cloud platforms. Is it Amazon’s EC2/S3 “Naked virtual whitebox” model? Is it a Platform-as-a-service flavor like Google App Engine? We just don’t know yet; stay tuned.
That said, I don’t consider this solely a problem. The offerings differ, often significantly, when compared on the axes of “Control of Environment” and “Effort to Scale,” but there are, as always, different tools for different jobs. There are those deploying to Amazon for whom App Engine would be a non-starter, and vice versa. These distinct, and generally incompatible, architectural approaches doubtless present challenges to customers from a lock-in or standardization perspective, which I assume is Tim’s concern. But conversely they do offer customers some choice and flexibility.
Whichever cloud approach one considers, however, the underlying economics will be compelling. The cloud has certain disadvantages relative to traditional on-premise architectures, without question, but it is uniquely adapted to a marketplace that will increasingly be cost and time sensitive.
Disclosure: Dell and Sun are RedMonk customers, while Amazon and Google are not.