(One of our clients emailed recently to ask about the ideas of open source and cloud computing – what seems to be going on there at the moment? Stephen has tackled this problem several times – the fate of “open source” in a cloud world is a hot topic: many people have a lot of heart and sweat tied up in the concepts of “open source” and want to make sure their beliefs and positions are safe or, at worst, evolve. Here’s a polished-up version of what I wrote in reply on the general topic:)
Public vs. private
Much of the software used in cloud computing is free and open, but in the cloud it’s running the software that matters. Being able to freely get, contribute to, and generally do open source is fine, but when you have to spend millions of dollars to set up and run your own cloud, the open source software you rely on is a small, if important, part. And if you’re running your own private cloud, at the end of the day you’re just running on-premise software, and I don’t expect that to change the dynamics of open source – e.g., the same open/core models will evolve (with a small exception for “telemetry”-enabled services) and the same concerns of lock-in, community, collaboration, payments, and “freedom” that have existed in the open source world will probably still exist.
For the most part, getting involved in the cloud means using a cloud, not building your own. (See the “private cloud” exception, here.) When it comes to consuming and building on the cloud, the first dynamic that changes is that you’re paying for the use of the cloud, always. Contrast this with on-premise open source where, according to the conversion numbers I usually hear from vendors, you often get a free ride for a long time, if not forever. I can’t stress enough how much this change from “free as in beer” to actually paying for using software means: actually paying for IT from day one, even if it’s just “pennies per hour.”
“Freedom” in cloud-think
Cloud computing – Azure aside – is open source dependent through and through. The philosophic sentiments you’d expect from on-premise open source don’t exist as strongly in the cloud world, though – or, for that matter, in the operations department vs. the development department. Aside from Tim O’Reilly & co. going on about open data and open government (yay!), there’s no real champion of open source, as we know it, in the cloud.
Cloud offerings are “open as in feel free to come use them,” but they’re hardly “free as in beer.” Software like Eucalyptus, Chef & Puppet, OSes, etc. is free – but the actual use of clouds you’d do something with is not. Amazon, Rackspace, etc. aren’t free as anything except “feel free to pay for.” And while there are standards for interop under way, the early activity around cloud interop (from last year, when there were several cloud standards efforts and scandals) has quietly drifted away except for a core group of die-hards.
Don’t misunderstand any of this: open source is important for cloud, critical even. I just don’t think most people in the cloud world look at it as a “freedom” issue – a philosophic part of what it means to be in software, or IT. Instead, open source software is just one of the good, quality parts to build not-so-“open” clouds from. Making money with software is increasingly the new “freedom,” in slight contrast with the “it’s all free until we have a billion dollar exit” dreams of past “makers.”
Aggregated Analytics
The aspect that does become interesting is things like sharing configuration and best practices among users. Instead of locking up knowledge of how to run, configure, and manage clouds, there are people looking to commodify (“open source,” if you like) ops as we know it. You can see a lot of this in the so-called dev/ops bucket, from people like Chef and Puppet, and from very non-open source things like Spiceworks. SplunkBase tried to do this years before anyone knew what the hell was going on there, and others have played around with the idea over the years.
Aggregating and then doing analytics that you shoot back to your customer base as features and services in your “product” is where fascinating things happen. Stephen spends a lot of time on this – he calls it “telemetry” – and if you search for “collaborative systems management” or “collaborative IT management” in my stuff from a few years ago, you can see me whacking at the idea.
Thinking by counter-example
The best foil for all this is increasingly Apple. Throw up their stock price over 5 years, units shipped of iWhatever, and the now infamous section restricting development on their platforms to Objective-C, C/C++, and JavaScript. Then ask, how does open source win against that? Should we even care? All that iClosed business is like cloud in your pocket, and if Apple can continue to be successful creating and sustaining another AOL (a walled garden of the Internet, this time controlling metal to glass, even better than Facebook’s run at the same model!), open source is in a weird spot.
Disclosure: Splunk, Opscode, Puppet Labs, Eucalyptus, and other relevant folks are clients.
champaign -> champion?
Apple is like Wal-Mart
Wal-Mart says: “Want to be in our store? Play by our rules.”
Both companies manage to deliver outstanding value to customers, and both are consistently praised and reviled in the media.
Both are also very wealthy.
I guess that’s the way to be wealthy: use the current infrastructure (highway system, trucks, ships and warehouses — internet, cell phone networks and computer hardware) and build your own, fully-controlled system on top of that, managing it with an iron fist and forcing anyone who wants to participate to follow all your rules to the letter or not participate at all.
What does this have to do with the cloud?
Cloud providers tout themselves as unique. But they have to become common infrastructure – making it easy for companies to get on them, get off them, and move between them – in order for the Wal-Marts of the world to build their own proprietary systems on top. As long as they want to be the unique system themselves, they will fail. Just like the railroads in Europe in the 19th century: different gauges all over the place. England standardized and made rail common infrastructure, and English firms used it to power the Industrial Revolution.
Let’s not put the cart before the horse: Cloud Computing is infrastructure. By itself it provides no value. Infrastructure has to be standard in order to really be useful.
I think the cloud vendors should create a standard image format that can run on any of the cloud providers, and offer a standard API for that image format to interact with the internal cloud vendor services. So someone with Amazon images should be able to deploy them on RackCloud as-is and expect them to work, and vice versa.
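To make the proposal concrete, here is a minimal sketch of what such a provider-neutral deployment interface might look like. Everything here is hypothetical – the class names, method names, and providers are invented for illustration; no such standard API exists.

```python
# Hypothetical sketch of a provider-neutral deployment API.
# All names (CloudProvider, deploy, migrate, etc.) are invented for illustration.
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Any vendor implementing this interface can run the same standard image."""

    @abstractmethod
    def deploy(self, image_id: str) -> str:
        """Deploy a standard-format image; return an instance handle."""


class AmazonLike(CloudProvider):
    def deploy(self, image_id: str) -> str:
        return f"amazon-instance-for-{image_id}"


class RackCloudLike(CloudProvider):
    def deploy(self, image_id: str) -> str:
        return f"rackcloud-instance-for-{image_id}"


def migrate(image_id: str, dst: CloudProvider) -> str:
    # The "freedom to leave" feature: the same image deploys
    # unchanged on a different vendor's cloud.
    return dst.deploy(image_id)
```

The point of the sketch: once every vendor speaks the same image format and API, switching providers is a one-line change, and competition shifts to price and service quality.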
At that point, the lowest sustainable price vendor will win.
Thoughts?
Thanks for the correction(s), as always.
I think you’re pretty much right: if there were image interop among clouds, the confusion of choosing a cloud would be much easier to work through, and more people would be willing to use one. It’s that old “freedom to leave” feature, which works well when the majority of the market (cloud providers) supports it. Interop in that regard lets people assuage fears of lock-in and start using the technology, allowing vendors to dream up new “lock-ins” to make cash off of.
The other thing I think would be great is a guaranteed cap on bandwidth costs between data centers of different companies. For example, I might want to run a cluster of Cassandra NoSQL databases on RackCloud, but also run front-end processes on Amazon Web Services that connect to the Cassandra cluster. Even better, I want to run a Cassandra cluster across multiple cloud providers: two nodes at Amazon, two nodes at RackCloud, and two more elsewhere. I want to know what the bandwidth costs between the data centers will look like.
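A back-of-the-envelope sketch shows why that bandwidth question matters so much for a cross-cloud cluster. The per-GB prices below are made-up placeholders, not real vendor rates; the point is only that inter-provider replication traffic is billed where intra-data-center traffic typically is not.

```python
# Illustrative cost arithmetic for cross-cloud replication traffic.
# Prices are placeholders, not real vendor rates.
INTRA_DC_PRICE_PER_GB = 0.00    # traffic within one provider's data center
CROSS_CLOUD_PRICE_PER_GB = 0.15  # traffic between providers, over the public internet


def monthly_transfer_cost(gb_per_day: float, price_per_gb: float, days: int = 30) -> float:
    """Estimate a month of replication traffic at a flat per-GB rate."""
    return gb_per_day * days * price_per_gb


# A cluster replicating 50 GB/day to peer nodes: free inside one cloud,
# but a real line item once the peers sit in another provider's data center.
same_cloud = monthly_transfer_cost(50, INTRA_DC_PRICE_PER_GB)      # 0.0
cross_cloud = monthly_transfer_cost(50, CROSS_CLOUD_PRICE_PER_GB)  # 225.0
```

Under these assumed rates, the same replication workload goes from free to a few hundred dollars a month just by crossing a provider boundary, which is why a guaranteed inter-provider rate would make multi-cloud clusters much easier to plan.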
Ultimately, the data should live in more than one cloud.