tecosystems

What IBM Joining the Cloud Foundry Project Means


When the OpenStack project was launched in 2010, IBM was one of many vendors in the industry offered the opportunity to participate. And though OpenStack launched with a nearly unprecedented list of supporters, IBM was not among them. In spite of having no public commitment to an existing open source cloud platform – they had their own service offering in SmartCloud – they declined to join the project.

Until they did two years later.

In 2012, IBM joined along with Red Hat, another industry player that had passed on the initial opportunity to get on the OpenStack train. The original decision and the subsequent about-face may seem contradictory, but they are nothing more or less than the inevitable consequence of how IBM approaches emerging markets.

For many customers, particularly risk-averse large enterprises and governments, one of IBM’s primary assets is trust. IBM is in many respects the logical reflection of its customers, who are disinclined – for better and for worse – to reinvent themselves technically as each new wave of technology breaks, as each new “game changing” technology arrives. Instead, IBM adopts a wait-and-see approach. It was nine years after the Linux kernel was released that IBM determined that the project’s momentum, not to mention the potential strategic impact, made it a worthwhile bet. At which point they promised to inject $1 billion into the ecosystem, a figure that represented a little over 1% of their revenue and fully a fifth of their R&D expenditures that year.

Which is not to compare IBM’s commitment last week to Cloud Foundry to its investment in Linux, in either dollars or significance. As much as Paul Maritz, one-time head of VMware and now head of Pivotal, is seeking to make Cloud Foundry “the 21st-century equivalent of Linux,” even the project’s advocates would likely admit there’s a long way to go before such comparisons can be made.

The point is rather that when evaluating the significance of IBM’s decision to publicly back Cloud Foundry, it’s helpful to put their decision-making in context. Decisions of this magnitude cannot be made lightly, because IBM cannot return in two years to enterprise customers who have built on top of Cloud Foundry at their recommendation with a mea culpa and a new platform recommendation.

IBM’s support for the Cloud Foundry project signals their belief that the PaaS market will be strategic. Given the aforementioned context, it also means that after an extended period of evaluation, IBM has decided that Cloud Foundry represents the best bet in terms of technology, license and community moving forward. These are the facts, as they say, and they are not in dispute. The primary question to be asked around this announcement, in fact, is less about Cloud Foundry and IBM – we now know how they feel about one another – and more about what it portends for the PaaS market more broadly.

A great many in the industry, remember, have written off Platform-as-a-Service for one reason or another. For some VCs it’s the lack of return from various PaaS-related investments; for the odd reporter here or there it’s the lack of traction for early PaaS players like Force.com or Google App Engine relative to IaaS generally and Amazon specifically. And for developers, it’s frequently the question of whether yet another layer of abstraction needs to be added on top of the virtual machine, IaaS fabric, operating system, runtime/server, programming language framework and so on. The developer’s primary complaint used to be the constraints – runtime choice, database options and so on – but these have largely subsided in the wake of what we term third-generation PaaS platforms: platforms that offer multiple runtimes and other choices, in other words. Platforms like Cloud Foundry, OpenShift and so on.

But while it’s difficult to predict the future of PaaS, particularly the rate of uptake – certainly it hasn’t gone mainstream as quickly as anticipated here – the history of the industry may offer some guidance. For as long as we’ve had compute resources, additional layers of abstraction have been added to them. Generally speaking this has been for reasons of accessibility and convenience; it’s easier to code in Ruby, as but one example, than in Assembler. But some abstractions, middleware in particular, have long served business needs by offering greater portability between application environments. True, the compatibility was never perfect, and write-once-run-anywhere claims tested the patience of anyone who actually tried them.

Greater layers of abstraction, nevertheless, appear inevitable, at least from a historical perspective. Few would debate that C is a substantially more performant language than JavaScript. Despite this advantage, accessibility, convenience and other factors such as Moore’s Law have conspired to favor the more abstract, interpreted language over the closer-to-the-metal C, as demonstrated in this data from Ohloh.

Will PaaS benefit from the long-term industry trend towards greater levels of abstraction? With many of the early mistakes that led to premature dismissals of PaaS now corrected, it’s certainly possible. Oddly, however, many of the would-be players in the space remain reluctant to make the obvious comparison: that PaaS is the new middleware. Rather than attempt to boil the ocean by educating and evangelizing the entire set of capabilities PaaS can offer, it would seem that the simplest route to market for vendors would be to articulate PaaS as an application container, one that can be passed from environment to environment with minimal friction. It’s not a dissimilar message from the idea of “virtual appliances” that VMware championed as early as 2006, but it has the virtue of being simpler than packaging up entire specialized operating systems, and is thus more likely to work.

If we assume for the sake of argument, however, that PaaS will continue to make gains with developers and the wider market, the question is what the landscape looks like in the wake of the Cloud Foundry-IBM announcement. It’s obviously early days for the market; IBM-approved or no, Cloud Foundry isn’t yet listed as a LinkedIn skill, and the biggest LinkedIn user group we track had a mere 195 members as of July 15th. But in an early market, the IBM commitment is unquestionably a boost to the project. Open source competitors such as Red Hat’s OpenShift project, closed source vendors like Apprenda, hosted providers like Engine Yard, Force.com/Heroku or GAE will all now be answering questions about Cloud Foundry and IBM, at least in their larger negotiated deals.

As it always does, however, much will come down to execution. Specifically, execution around building what developers want and making it easy for them to get it. All the engineering and partnerships in the world can’t save a project that makes developers’ lives harder, as we’ve already seen with the first wave of PaaS vendors that failed to take over the world as expected. Whether Cloud Foundry can do that with the help of IBM and others will depend on who wins the battle for developers, and that battle is far from over.

Disclosure: IBM is a RedMonk customer, as are Apprenda, Red Hat and Salesforce.com/Heroku. Pivotal is not a RedMonk customer, nor are Google or Engine Yard.
