tecosystems

What the OpenStack Foundation Needs to Do


[Photo: Jonathan Bryce, Lauren Sell, Mark Collier]

In the wake of last week’s well-attended OpenStack Summit, there has been much discussion of the state of the project. As is typical, this ranges from heated criticism of the project’s community, governance or technology to grandiose claims regarding its trajectory and marketplace traction. And as is typical, the truth lies somewhere in between.

Critics of the project suggesting that it has real organizational issues and engineering shortcomings to address are correct. So are proponents arguing that the project’s momentum is accelerating, both through additions to its own community and through the lack of comparable growth in competing projects and products. The latter is, in all probability, the more important of the two developments. Engineering quality is important, but as we tell all of our clients, it has become overvalued in many technology industry contexts. With the right resources, quality of implementation is – usually – a solvable problem. The lack of a community, and attendant interest, is much less tractable. More often than not, the largest community wins.

In the case of OpenStack, however, this can be considered a positive for the project only as long as there is one OpenStack community. It is unclear that this will remain the case moving forward.

Historically, some of the most important and highest profile platform technologies – Linux being the most obvious example – have been reciprocally licensed. In practical terms, this requires vendors distributing the codebase to make any modifications to it available under precisely the same terms as the original code. OpenStack, like Cloud Foundry, Hadoop and other younger projects, is permissively licensed. Unlike reciprocally licensed assets, then, distributors of OpenStack technologies are not required to make any of their bugfixes, feature upgrades or otherwise available under the same terms, or indeed available at all.

Though not required by the license, the overwhelming majority of code is contributed back to the project, because there is little commercial incentive to heavily differentiate from OpenStack. There are, however, commercial incentives to differentiate in certain areas. Which could, over the longer term, lead to fragmentation within the OpenStack community.

To combat this, the OpenStack Foundation and its Board of Directors must make two difficult decisions regarding compatibility.

First, it needs to answer a currently existential question regarding OpenStack: specifically, what is it, exactly? What constitutes an OpenStack instance? One interpretation is that an OpenStack instance is one that has implemented Nova and Swift, the compute and object storage components within OpenStack. What of vendors or customers who have found Swift wanting, and turned to Ceph or RiakCS, then, as an alternative? Are they not OpenStack? Further, how might the definition of what constitutes an OpenStack project evolve over time? Over what timeframe, for example, might customers have to implement Quantum (networking), Keystone (identity), Heat (orchestration) to be considered ‘OpenStack?’

Answering this question will involve difficult decisions for the OpenStack project, because opinions on the answer are likely to vary depending on the nature of existing implementations and the larger strategies they reflect. Because much of OpenStack’s value to customers – and the marketing that underpins it – lies in its avoidance of lock-in, however, answering this question is essential. A customer that cannot move with relative ease from one OpenStack cloud to another because the underlying storage substrates differ is, open source or no, effectively locked in.

The OpenStack Foundation could decline to take an aggressive position on this question, leaving it to the market to determine a solution. This would be a mistake, because as we’ve seen previously in questions of compatibility (e.g. Java), trademark is the most effective weapon for keeping vendors in line. OpenStack implementations that are denied the right to call themselves OpenStack as a result of a breach of interoperability guidelines are effectively dead products, and vendors know it. Given that the Foundation controls the trademark guidelines, then, it is the only institution with the power to address the question of what is OpenStack and what is not.

Assuming that the question of what foundational components are required versus optional in an OpenStack implementation can be answered to the market’s satisfaction, the second cause for concern lies in compatibility between the differing implementations of those foundational components. The nature of implementations, for instance, may introduce unintended, accidental incompatibilities. Consider that shipped distributions are likely to be based on older versions of the components than those hosted, which are frequently within a week or two of trunk. How then can a customer seeking to migrate workloads to and from public and private infrastructure be sure that they will run seamlessly in each environment?

This type of interoperability is by definition more complex, but it is not without historical precedent. As discussed previously in the context of Cloud Foundry, one approach the Foundation may wish to consider is Sun’s TCK (Technology Compatibility Kit) – should a given vendor’s implementation fail to pass a standard set of test harnesses, it would be denied the right to use the trademark. Indeed, this seems to be the direction that Cloud Foundry itself is following in an attempt to forestall questions of implementation compatibility.
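The TCK approach described above can be sketched roughly as follows. This is a purely hypothetical illustration, not Sun’s actual TCK or any real OpenStack test suite: the check names and the shape of the “implementation” are invented stand-ins for whatever behavioural tests a foundation might actually mandate.

```python
# Hypothetical sketch of a TCK-style compatibility gate: a set of required
# behavioural checks is run against a vendor implementation, and the right
# to use the trademark is granted only if every check passes.

def has_compute_api(impl):
    # Required: the implementation must expose a compute (Nova-style) API.
    return "compute" in impl.get("apis", {})

def has_object_storage_api(impl):
    # Required: an object storage API exposing the canonical operations,
    # regardless of the backend (Swift, Ceph, RiakCS, ...).
    api = impl.get("apis", {}).get("object-storage", set())
    return {"put", "get", "delete", "list"} <= set(api)

REQUIRED_CHECKS = {
    "compute-api": has_compute_api,
    "object-storage-api": has_object_storage_api,
}

def run_compatibility_suite(impl):
    """Return a pass/fail verdict plus the names of any failed checks."""
    failures = [name for name, check in REQUIRED_CHECKS.items()
                if not check(impl)]
    return {"may_use_trademark": not failures, "failures": failures}
```

Under this scheme a distribution that swapped Swift for Ceph would still pass, so long as the object storage checks, which are defined against behaviour rather than implementation, succeed.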

Ultimately, the pride on display at the OpenStack Summit last week was well justified. The project has come a long way since its founding, when several of what are now its members declined to participate after examining the underlying technologies. But its future, as with any open source project, depends heavily on its community, which in turn is dependent on the Foundation keeping that community from fragmenting. The good news for OpenStack advocates is that there are indications the board understands the importance of these questions, and is working to address them. How effective they are at doing so is likely to be the single most important factor in determining the project’s future.

Disclosure: Multiple vendors involved in the OpenStack project, including Cisco, Cloudscaling, Dell, HP, IBM and Red Hat, are RedMonk customers. VMware, which is both a participant in the OpenStack community and a competitor to it, is a customer.

4 comments

  1. Steve, I see the exact same concerns you mention here, but more from a business perspective. If most vendors implement an identical version of OpenStack (solving your Concern #2), either as a service or a product, then they have no differentiation, the market becomes commoditized and their only differentiator becomes price. Customers will be able to move their workloads seamlessly between various OpenStack clouds, and therefore will constantly hunt around for the best price. This will be a good thing for consumers of cloud, but is obviously a troubling business proposition for cloud vendors. However, if vendors don’t implement identical versions, then there won’t be interoperability between vendors, and the value of having multiple OpenStack clouds to choose from will be eliminated (see your concern #1).

    From a vendor perspective then, I don’t see any potential good outcomes. Sure, OpenStack makes sense as a consulting business: CloudScaling or Mirantis (or someday IBM and Accenture) can come in and build an OpenStack cloud for you. But from a software or service provider perspective it’s hard to imagine a world where you can offer an OpenStack cloud that is BOTH differentiated from other offerings and interoperable with other OpenStack clouds.

    Am I missing something?

    DISCLOSURE: I formerly worked for Eucalyptus, an open source cloud company that competes with OpenStack.

    1. Many web hosting companies use the same Apache and PHP etc. as others in the same business, but that doesn’t mean they can only compete on price. The same thing applies here.

      OpenStack providers can stand out on many different points:
      * Jurisdiction. I imagine a fair amount of people have compliance requirements mandating that their service must be hosted in a particular country, for instance.
      * Upstream network bandwidth. Saving a couple of cents, but being stuck on a narrow pipe may be ok for some people, but not for others.
      * ISO-9000 compliance of the provider. Maybe you care and want to pay for it. Maybe not.
      * IOPS. Perhaps paying a couple of cents extra for a significant I/O throughput boost is exactly what you need. Perhaps not.
      * Proximity to other interesting services that the provider either hosts or offers themselves.
      * Proven track record of not being a pile of suck.
      * Etc., etc.

      All of this without keeping bugfixes or improvements to themselves or similar shenanigans.

    2. While Soren makes some good points, the comparison of an OpenStack cloud and a web host w/ Apache and PHP falls short. The difference is how far up the stack, or put another way, how far down the rabbit hole the “same-ness” gets you. Providing a web server and a language is like providing blank paper and a pen and expecting the proverbial next great American novel 😉

      Sure it might happen, in the right hands, hands that have all or most of the other tools needed to accomplish the objective.

      Which is the real rub as I see it. Web hosting is about having a website. OpenStack is about having cloud. The future of the next generation (10-20 years) of the internet will belong to the folks who realize first and best that the battleground isn’t cloud at all (and it sure as hell isn’t web), it’s computing itself. Whether that’s an OpenStack team or not, remains to be seen – far too early to predict.

      But I will predict this, after the winner is clear, experts will describe the victory to laymen something like this, “Since the earliest days of computers everyone thought the challenge was to make them interface and interoperate, whether it was internally within the same system (mainframe) or across geographic borders (e.g. internet) or even across systems from different vendors (e.g. packet switching). Now we realize the objective all along was to make people interoperable, because the breakthroughs at XYZ made it happen. Finally Computer Science delivered on the promise of computers.”

      Who wants to be XYZ company? The future is up for grabs more so now than any time since 1995 when the march began to the web plateau. PI, People Interoperability, is the future. The question everyone must consider is, “Do we want to pay for it, or get paid for it?”

      @DanFarfan

  2. We’ve had discussions around some of this at the OpenStack summits before. With regard to people deploying something other than Swift, the general consensus was that as long as you exposed the same API, you’re fine. Some even suggested that if a better Swift than Swift came along, it could take its place as the canonical “OpenStack Object Storage”. Again, as long as the API didn’t change.

    I believe there was some work being put into a FITS (faithful implementation test suite) project that would validate the exhibited behaviour of a “cloud” and give a pass/fail verdict as to whether it was indeed “OpenStack”. I’m not sure where that project went, though.
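    The “same API” consensus described in this comment can be illustrated with a minimal duck-typing sketch. The class and method names here are hypothetical; the real Swift API is an HTTP interface, not a set of Python methods, but the principle is the same: the check is defined against the exposed operations, not the backend.

    ```python
    # Hypothetical sketch of the "same API" test: an object store qualifies
    # as "OpenStack Object Storage" if it exposes the canonical operations,
    # regardless of which backend (Swift, Ceph, ...) implements them.

    CANONICAL_OBJECT_API = {"put_object", "get_object",
                            "delete_object", "list_container"}

    def exposes_canonical_api(store) -> bool:
        # Collect the store's public methods and require the canonical set.
        public = {name for name in dir(store) if not name.startswith("_")}
        return CANONICAL_OBJECT_API <= public

    class CephBackedStore:
        """Hypothetical Ceph-backed store exposing the canonical operations."""
        def put_object(self, container, name, data): pass
        def get_object(self, container, name): pass
        def delete_object(self, container, name): pass
        def list_container(self, container): return []

    class IncompleteStore:
        """Missing delete/list: would fail a FITS-style verdict."""
        def put_object(self, container, name, data): pass
        def get_object(self, container, name): pass
    ```

    On this reading, a Ceph- or RiakCS-backed store is a legitimate substitute so long as the exhibited behaviour matches, which is exactly what a FITS-style suite would verify.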
