
Three Questions from the Cloud Foundry Summit

Cloud Foundry LEGO!

Six months ago there was no Cloud Foundry Foundation. This week, its biggest problem at the user conference was the fire marshal. At a reported 1,500 attendees, the event was several times the size of the project’s inaugural Platform event – to the point that it’s hard to see the Summit going back to the Santa Clara convention center. Enthusiasm will make people patient with standing-room-only sessions and seating along the walls two deep, but there are limits.

For an event reportedly put together in a little over two months, execution was solid. From HP’s magician/carnival barker to the Restoration Hardware furniture strewn liberally across the partner pavilion – excuse me, “Foundry” – walking the show floor had a different feel to it. Sessions were a reasonable mix of customer case studies and technical how-tos, which was fortunate because the attendees were an unusual mix of corporate and very pointedly non-corporate.

The conference comes at an interesting point in the Cloud Foundry project’s lifecycle. The first time we at RedMonk heard about it, as best I can recall, was a conversation with VMware about this project they’d written in Ruby a week or two before its release in 2011 – two years after the acquisition of Cloud Foundry. There are two things I remember about that call. First, that I was staying at the Intercontinental Boston at the time. Second, that I spent most of the briefing trying to imagine what kind of internal battles had been fought and won for a company like VMware to release that project as open source software.

By the end of that year, the project had enough traction to validate one of my 2011 predictions. Still, Cloud Foundry, like all would-be PaaS platforms, faced a substantial headwind. Disappointment in PaaS ran deep, dating all the way back to the anemic initial adoption of the first generation of Google’s App Engine and Salesforce’s Force.com – released in April of 2008 and September of 2007, respectively. All anyone wanted to buy, it was argued, was infrastructure. Platform-as-a-Service was one too many compromises, for developers and their employers both. AWS surged while PaaS offerings stagnated.

Or so it appeared. Heroku, founded June 2007, required less compromise from developers. Built on standard, externally available pieces such as Git, Ruby and Rails, Heroku was rewarded with growing developer adoption. Which was why Salesforce paid $212M to bring the company into the fold. And presumably why, when Cloud Foundry eventually emerged, it was open source. Given that one of the impediments to the adoption of Force.com and GAE in their initial incarnations was the prospect of being locked in to proprietary technologies, the logical alternative was a platform that was itself open source.

Fast forward to 2015. After some stops and starts, Cloud Foundry is now managed by an external foundation, a standard mechanism allowing erstwhile competitors to collaborate on a common core. The project has one foot in the old world with participation from traditional enterprise vendors such as EMC, HP and IBM and another in the future with its front and center “Cloud Native” messaging. How it manages to bridge that divide will be, to some degree, the determining factor in the project’s success. Because as Allstate’s Andrew Zitney discussed on Monday, changing the way enterprises build software is as hard as it is necessary. This is, in fact, one of three important questions facing the Cloud Foundry project in the wake of the Summit.

Is the Cloud Native Label Useful or a Liability?

There are several advantages to the Cloud Native terminology. First, it’s novel and thus unencumbered by the baggage of prior expectations. Unlike terms such as “agile,” which even one of its originators acknowledges has become “sloganized; meaningless at best, jingoist at worst,” Cloud Native gets to start fresh. Second, it’s aspirational. As evidenced by the growth of various cloud platforms, growing numbers of enterprises are hyper-aware that the cloud is going to play a strategic role moving forward, and Cloud Native is a means of seizing the marketing high ground for businesses looking to get out in front of that transition. Third, it’s simple in concept. Microservices, for example, requires explanation where Cloud Native is comparatively self-descriptive. By using Cloud Native, Cloud Foundry can postpone more complicated, and potentially fraught, conversations about what, precisely, that means. Lastly, the term itself explicitly disavows potentially fatal application compromises. The obvious implication of the term “native,” of course, is that there are non-native cloud applications, which is another way of saying applications not designed for the cloud. While it might seem counterintuitive, acknowledging a project’s limitations is a recommended practice, as customers will inevitably discover them anyway. Saving them this disappointment and frustration has value.

All of that being said, much depends on timing. Being exclusionary is an appropriate approach if a sufficient proportion of the market is ready. If it’s too early, Cloud Native could tilt towards liability instead of asset, as substantial portions of the slower-moving market self-select out of consideration by determining – correctly or not – that while they’re ready to tactically embrace the cloud, going native is too big a step. Even if the timing is perfect, in fact, conservative businesses are likely to be cautious about Cloud Native.

Cloud Native, then, is a term with upside, but not without costs.

How will the various Cloud Foundry players compete with one another?

The standard answer to questions of this type, whether for Cloud Foundry or other large collaborative projects, is that the players will collaborate on the core and compete above it. Or, as IBM’s Angel Diaz put it to Barb Darrow, “We will cooperate on interoperability and compete on execution.” From a high level, this is a simple, digestible approach. On the ground, temptations can be more difficult to resist. The history of the software industry has taught us, repeatedly, that profit is a function of switching costs. Which is where the incentive comes from for ecosystem players to be interoperable enough to win a customer and yet proprietary enough to lock them in.

Which is why the role of a foundation is critical. With individual project participants motivated by their own self-interest, it is the foundation’s responsibility to ensure that these interests do not subvert the purpose, and thus value, of the project itself. The Cloud Foundry Foundation’s primary responsibility should ultimately be to the users, which means ensuring maximum interoperation between competing instances of the project. All of which explains why the Foundation’s role will be interesting to watch.

How will the Cloud Foundry ecosystem compete with orthogonal projects such as Docker, Kubernetes, Mesos, OpenStack and so on?

On the one hand, Cloud Foundry and projects like Docker, Kubernetes, Mesos and OpenStack are very different technologies with very different ambitions and scope. Comparing them directly with one another, therefore, would be foolish. On the other hand, there is overlap between many of these projects at points and customers are faced with an increasingly complicated landscape of choices to make about what their infrastructure will look like moving forward.

While there have been obvious periods of transition, historically we’ve had generally accepted patterns of hardware and software deployment, whether the underlying platform was mainframe, minicomputer, client/server, or, more recently, commodity-driven scale-out. Increasingly, however, customers will be compelled to make difficult choices with profound long-term ramifications about their future infrastructure. Public or private infrastructure? What is their approach to managing hardware, virtual machines and containers? What is the role of containers, and where and how do they overlap with PaaS, if at all? Does Cloud Foundry obviate the need for all of these projects? And the classic, rhetorical question of one-stop-shopping versus best-of-breed.

While Cloud Foundry may not be directly competing against any of the above, then, and certainly is not on an apples to apples basis, every project in the infrastructure space is on some level fighting with every other project for mindshare and visibility. The inevitable outcome of which, much as we saw in the NoSQL space with customers struggling to understand the difference between key-value stores, graph databases and MapReduce engines, will be customer confusion. One advantage that Cloud Foundry possesses here is available service implementations. Instead of trying to make sense of the various infrastructure software options available to them, and determining from there a path forward, enterprises can essentially punt by embracing Cloud Foundry-as-a-Service.

Still, the premium in the infrastructure is going to be on vision. Not just a project’s own, but how it competes – or complements – other existing pieces of infrastructure. Because the problem that a given project solves is always just one of many for a customer.

Categories: Cloud, Configuration Management, Containers, Platform-as-a-Service.

Is Collaborative Software Development the Next Big Thing?

Working Hard

For all of the technology industry’s historical focus on collaboration tooling, the vision of collaboration it espoused was typically narrow. When vendors talked about collaborating, what they meant was collaborating within your organization. The idea of working with those outside the corporate firewall was an afterthought, when it was thought about at all. Scheduling and calendar applications are perhaps the best evidence of this. Nearly twenty years after the introduction of Microsoft Exchange and nine after the release of Google Calendar, the simple act of scheduling a meeting with someone who doesn’t work directly with you remains a cache invalidation-level problem.

While this is baffling on the one hand, because it does seem like a solvable problem from an engineering perspective, it is at the same time unsurprising. The tools we get are the tools we demand, eventually. For the better part of the history of the technology industry, infrastructure software development involved very little collaboration. Instead, the creation of everything from databases to application or web servers to operating systems was outsourced by enterprises to third parties such as HP, IBM, Microsoft or Oracle. These suppliers built from within and sold to buyers unable or unwilling to create the software internally, and the closed nature of this system demanded very little low-level collaboration between individual suppliers and buyers. Buyer or seller, organizations were inwardly focused.

With the rise of the internet, however, came not just new opportunities but an entirely new class of problems. Problems that, in many cases, the existing technology suppliers were ill equipped to solve, because scaling an individual bank’s transactions is less difficult than, say, scaling the world’s largest online retail site in the weeks leading up to Christmas. Forced by need and by economics to develop their own software infrastructure, internet pioneers like Amazon, Facebook, Google, LinkedIn and Twitter evolved not only different software, but distinct attitudes towards its value.

These have been discussed at length in this space in the past. What’s interesting is that in public comments from web-scale engineers, and in projects like WebScaleSQL, we can see the possibility of a shift towards greater inter-organizational collaboration.

The industry has had large collaboratively developed projects for some time, of course: Linux is the most obvious example. But to a large extent, projects such as Linux or more recently Cloud Foundry and OpenStack have been the exception that proved the rule. They were notable outliers of cross-organizational projects in a sea of proprietary, single entity initiatives. For commercial software organizations, Linux was a commodity or standard, and the higher margin revenue opportunities lay above that commonly-held, owned-by-no-one substrate. In other words, software vendors were and are content to collaborate on one project if it meant they could introduce multiple proprietary products to run on top of the open source base.

What if a higher and higher percentage of the infrastructure software portfolio was developed by organizations with no commercial ambitions for the software, however? What if a growing portion of the projects used to build modern infrastructure came instead from Facebook, Netflix or Twitter, who behaved as if software was non-differentiating and saw more benefit than cost to sharing it as open source software?

In theory, that would remove one of the more important barriers to inter-organizational collaboration. If Facebook intended to sell PrestoDB as a commercial product, or Google to sell Bazel, it would be natural for them to protect those assets and shun opportunities to collaborate with would-be competitors. But given that the software being produced by Facebook, Google, Twitter and so on is not built for sale, it would be logical for these organizations to collaborate centrally on common problem areas – just as they do with Linux, just as they do with WebScaleSQL, and just as they do with the Open Compute Project. Logic isn’t enough by itself to overcome NIH, of course, but collectively distributing the burden of scalable infrastructure promises enough financial benefit that the economic incentives should eventually win out. Particularly if people such as Chris Aniszczyk or James Pearce have anything to say about it.

We could, in short, be looking at the emergence of a new attitude towards software development.

  1. 1960: IBM: “Software is For Selling Hardware”
  2. 1981: Microsoft: “Software is For Making Money”
  3. 1994: Amazon: “Software is Used for Services That Make Money and Worth Protecting”
  4. 2004: Facebook: “Software is Used for Services That Make Money and Not Worth Protecting”

To which we might now add:

  1. 2015: TBD: “If We’re A) All Building Similar Software, and B) It’s Not Competitively Differentiating, Why Are We Not Building it Together?”

The best part? If organizations finally decide to collaborate at scale across company boundaries, maybe, at long last, the end of our scheduling nightmare will come into view.

Categories: Collaboration, Open Source.

Who’s Contributing to Configuration Management Projects?

One of the more common areas of inquiry around open source for us at RedMonk concerns project contributors. Who is contributing to what project? What are the relative rates of contribution from contributor to contributor? How do the contributions to a project compare to contributions from competitive projects?

In many cases, these are difficult if not impossible questions to answer, because the identities and affiliations of project contributors are obscure, whether by design or simply because developers prefer to keep their individual identities independent from their employers. But just because a question is difficult to answer and may return imperfect results does not mean that it’s not worth asking.

With my colleague having updated some of the basic community metrics from Hacker News and Stack Overflow that we track in the configuration management space, we’ll take a look here at the top 5 contributors by domain to the Ansible, Chef, Puppet and SaltStack projects. CFEngine is omitted here because its GitHub repository for Version 3 is not backwards compatible with Version 2 and thus doesn’t accurately represent total project traction.

To set the context, however, here are a few charts comparing the surveyed projects to one another. First, we’ll look at the number of accepted pull requests across the four projects.


As noted by Donnie, Salt is disproportionately represented because pull requests are its sole contribution mechanism, but as Ansible’s Greg DeKoenigsberg observes, it’s important to caveat both this metric and the following two by noting that Ansible and Salt are GitHub-native and thus can be expected to outperform in that context.
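For readers who want to reproduce a count like this, GitHub’s search API can filter merged pull requests per repository. A minimal sketch – the repository names in the usage comment are assumptions rather than the exact repositories behind the charts, and unauthenticated requests are heavily rate-limited:

```python
import json
import urllib.request

def search_url(repo):
    """Build a GitHub search API query counting merged PRs for one repo."""
    return ("https://api.github.com/search/issues"
            f"?q=repo:{repo}+type:pr+is:merged&per_page=1")

def merged_pr_count(repo):
    """Return the total number of merged pull requests for a repository."""
    req = urllib.request.Request(
        search_url(repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        # total_count covers all matches even though we fetch only one page
        return json.load(resp)["total_count"]

# Usage (requires network; pass an auth token for real use):
#   for repo in ("ansible/ansible", "chef/chef",
#                "puppetlabs/puppet", "saltstack/salt"):
#       print(repo, merged_pr_count(repo))
```

Note the same caveat applies to this sketch as to the chart: for projects where pull requests are only one of several contribution paths, the count understates total activity.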

With that said, here is how the projects compare relative to one another in terms of GitHub stars accumulated.

Even with the aforementioned qualifiers attached, Ansible’s performance here is notable. This trend is not new; the project has been popular on GitHub at least since 2013 when we added it to the projects in this space we track. Ansible is also the leader on GitHub in the number of forks, although its lead is less substantial in that category.

As noted, the fact that Ansible and Salt are the leaders in these categories is unsurprising given their relationship with GitHub the platform. But these metrics are, as mentioned, opaque. Who, precisely, is contributing to the projects?

To explore this question we turned to the actual Git commit logs for each project. More specifically, we extracted the email address attached to each commit, and then looked at the contributions on a per-domain basis. The following charts look at the top five contributing domains for each project. One quick caveat: no edits, corrections or consolidation have been made to these charts, so if multiple domains represent one company, they are not consolidated here. We’ll revisit this approach in future, but the results are presented here without alteration, so it’s important to take that into consideration when evaluating relative contribution levels.
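The extraction described above can be approximated in a few lines. A sketch using Git’s author-email format specifier (`%ae`), with no consolidation of related domains, mirroring the caveat above; function names are illustrative:

```python
import subprocess
from collections import Counter

def count_domains(emails, n=5):
    """Tally email domains and return the n most common as (domain, count)."""
    domains = [addr.rsplit("@", 1)[-1].lower()
               for addr in emails if "@" in addr]
    return Counter(domains).most_common(n)

def top_contributing_domains(repo_path, n=5):
    """Top n contributing domains from a local clone's commit log."""
    # %ae prints the author email of each commit, one per line
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_domains(log.splitlines(), n)
```

Run against a local clone, e.g. `top_contributing_domains("./ansible")`. Note that this counts commits, which is only one of several possible measures of contribution.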



The dominant contributing domain to the Ansible project is gmail.com, with a substantial majority of contributions coming from there. This potentially reflects the project’s age and permissive policies with respect to contributions, presumably including contributions from internal employees as well. The strong presence here from Fedora is no surprise, given both Red Hat’s ties to the Ansible project and Fedora’s heavy usage of it. The remainder of the contributions are attributable to Ansible-related domains, including the personal domain of James Cammarata, an Ansible employee.



As might be expected for a project of Chef’s age and maturity, the majority of contributions to the project originate from Chef-owned domains. Independent addresses are a distant second (~9X fewer), but it was interesting to see India-based Clogeny check in at fourth place. Fifth place is rounded out by what appears to be the personal domain of Matthew Kent, currently a Basecamp (née 37signals) employee.



Much like Chef, the other elder statesman of this category, Puppet contributions are dominated by Puppet Labs domains. There are 14X more contributions from the top domain than from the second-place domain, itself the URL of a company since acquired by Puppet. Third place, for its part, is the personal domain of Adrien Thebo, currently a Puppet Labs employee. After the fourth-largest contributor – contributions from gmail addresses – comes Days of Wonder, a board game manufacturer (e.g. Ticket to Ride). This appears to largely be the work of developer Brice Figureau, a major contributor to the project.



Following in Ansible’s footsteps, the younger Salt project is overwhelmingly composed of contributions from gmail.com addresses. The number three and four contributing domains effectively reflect company contributions, as Seth House is a SaltStack employee. The number two contributing domain, however, belongs to Pedro Algarvio, a Python developer with no formal affiliation with the project, as documented by Matt Asay. The fifth-largest contributing domain, meanwhile, belongs to a university, no surprise given that institution’s public usage of Salt.

The Net

In general, the findings from this project are mostly unsurprising. Older, more mature projects skew towards contributions from employees, while the younger would-be disruptors may feature similar percentages of employee contributions but are, at a minimum, less formal in their contribution policies. It will be interesting to see whether Ansible’s or Salt’s contribution policies become more formal and employer-centric over time. It will likewise be necessary to monitor whether and how these relative contribution levels evolve: does it become more difficult for individual developers to rank amongst the top contributors? Do we see influxes of new contributions as the relative dynamics of project adoption shift? Can we expect changes in the internal-to-external contribution ratios?

Either way, it’s interesting to go beyond strict contribution metrics to get a closer look at who the contributors are and where they’re coming from, even if it’s difficult or impossible to discover in some cases.

Disclosure: Ansible and Chef are RedMonk clients, while Puppet and Salt are not.

Categories: Configuration Management.

Who is Going to be the Ubuntu of Developer Infrastructure?

There were many things that made the early Linux desktop candidates difficult to manage. Lacking the vast catalog of drivers that Windows had at its disposal, for example, peripheral device support was a challenge. As was getting functionality like suspend working properly – not that Windows supported it flawlessly, of course. But assuming you could get these early builds up and running, at least, one of the most under-appreciated challenges of navigating the very different user interface was choice.

Or more accurately, the various distributions’ collective decision to not make any. Rather than risk offending a given community by selecting a competitive project, distributions instead shipped them all. So instead of one browser, you might have three. Looking for an Office equivalent, you might find five. While this decision was perhaps defensible from the distribution’s perspective, it was less than optimal from a usability standpoint.

Which the Ubuntu distribution, among others, recognized. Like Rails and other projects that followed, Ubuntu was “opinionated software,” though that term wouldn’t be popularized for another five years. Rather than attempting to appease all parties, the project made decisions on behalf of the user, decisions that would reduce the burden of choice. Instead of asking users to evaluate and select from multiple browsers, the distribution had evaluated the options for you and chosen Firefox. The distribution didn’t prevent users from installing an alternative if that was their preference; it simply offered what it believed to be the best option at the time.

That packaging is a critical function within this industry is not exactly a revelation; my colleague, among many others, has discussed it at length. But if the experience of developers today is any indication, opinionated packaging in the developer infrastructure space cannot come quickly enough.

As noted by Tim Bray and others, it’s difficult if not impossible for developers to understand in depth the infrastructure they rely on. Worse, it’s only getting more complicated as new projects arrive and existing projects extend their capabilities into adjacent areas.

Consider the questions surrounding even a project as popular as Docker. In spite of visibility growth that’s among the fastest we have ever seen at RedMonk, developers are still struggling to understand where it fits within their existing infrastructure options.

Wherever one looks in developer infrastructure today, choices are multiplying. Even in relatively stable markets like build systems – where the numbers tell us Jenkins is developers’ preference, complemented by credible alternatives like Bamboo, TeamCity or Travis – potentially interesting new competitors like Bazel, Google’s internal build tool, recently open sourced, continue to arrive.

For its part, it was not all that long ago that configuration management was considered a solved problem. Puppet appealed to the systems-administration end of the spectrum, while Chef built out a sizable audience of its own, with more fans coming from the other, developer end of the spectrum. It was at this point that Ansible and Salt arrived, and look where they are now.

If these are the choices for well established, well understood software categories, imagine how an average user will react when faced with understanding the role of and distinctions between software like Kubernetes, Mesos, Spark, Storm, Swarm, Yarn, etc – and then being forced to evaluate all of the above against public infrastructure alternatives.

For elite developers or organizations, choice is a positive, as they are more likely to have the time and ability to understand, evaluate and compare their options to determine best fit. The rest of the world, however, is going to need assistance in their technology selection process.

This assistance, in many cases, will arrive in the form of packaging. Much as Ubuntu attempted to make rational decisions on behalf of desktop users the world over, purveyors of developer infrastructure will be compelled to do the same for their customers. Best of breed is an ideal approach, but like many ideal approaches it scales poorly. In a perfect world the choices made by packagers will allow for substitution – to exchange Chef for Puppet, for example, or Ansible for Chef – but the simple fact is that as the complexity of infrastructure accelerates there are simply too many choices to be made.

Which is the point at which opinions, and opinionated infrastructure, become increasingly valuable. The question isn’t whether packaging will become an increasingly popular tactic, but rather who will employ it most efficiently. The best bets moving forward, for this reason, lie in service-based providers, whose very nature abstracts their customers from the wide array of choices before them, presenting instead a single, unified service. But whether the business model is on-premise or as-a-service, becoming more opinionated is about to become more popular.

Categories: Choice, Containers, Devops.

Every Exit is an Entry Somewhere

A little over nine years ago, RedMonk made its first analyst hire. As an aside, if that number makes you feel old, well, you’re not alone. Anyway, our choice for the first non-founder analyst was a then little-known BMC software developer based out of Austin, who was perhaps best recognized for his rather irreverent technology blog, Drunk and Retired. At the time, there was some consternation in the industry about the idea of hiring a developer to be an analyst. We fielded a lot of questions about the selection, but the quality of Cote’s work pretty quickly put those to rest.

It is probably in part due to Cote’s success – both with RedMonk and in his subsequent career at Dell, The 451 Group and now Pivotal – that we didn’t get nearly as many questions when we hired his replacement out of the Mayo Clinic. Superficially it might sound odd to hire as a technology industry analyst a Research Fellow doing drug discovery, but we’ve always been believers that we can teach someone to be an industry analyst – we’ve been in the business for thirty years collectively, after all. What we can’t do as easily is teach the skills necessary to be a good analyst: being creatively inquisitive, communicating effectively, and having the ability to grasp the macro trends shaping our industry.

When we find those, then, wherever they might be and whatever the background, we’re interested.

And find those we did in Donnie Berkholz. In spite of – or was it perhaps because of? – his non-traditional industry background, Donnie hit the ground running with us. With his background in statistical and quantitative analysis, he quickly made a name for himself exploring statistical trends, making predictions and that most important of RedMonk analyst duties: buying developers beers. He’s done nothing but prove us right in our initial belief that he could do this work at a high level, which is why we’re sad to be saying goodbye.

But like his predecessor, the time has come for Donnie to graduate from RedMonk. He’ll have more to tell you about his future plans shortly, I’m sure, but suffice it to say you will still be seeing him around. His last day with us will be next Friday, as he wraps up a few projects with us. In the meantime, on behalf of all of us at RedMonk: we wish you every future success, Donnie, and thank you for all of your efforts in helping RedMonk keep advancing the ball downfield. We’re happy to have played a small part in helping you transition into this industry.

As with any departure, the obvious next question is: what does this mean for RedMonk?

In the short term, more travel – they’re only making more conferences, and there are only so many of us. And we’re no more looking forward to filling Donnie’s shoes than we were to filling Cote’s. But over the longer term, our mission remains the same: we’re the analyst firm that is here for – and because of – developers. We will continue to fight the good fight on behalf of that constituency, even as market awareness of their importance adds more and more allies to our ranks. As a species we have a tendency to take progress for granted, but if you stop and think it really is amazing how different the reception our developer-centric message is today versus even four or five years ago.

Who will we hire? The best fit we can find. Like the Oakland A’s, we’ll think creatively about the opening and we’re already in the process of talking to some interesting candidates. That said, we’re open to all interested parties. And given our trajectory, we might even be adding more than one new analyst, but we’ll take it one step at a time.

Fair warning to all applicants: we will be very picky. You need to be able to communicate effectively, write well and be committed to rational discourse. You should have a reasonable online presence and a passion for developers and the tools they use. Other things we’ll look for include programming skills, economics and statistics training and experience with rich media. Previous experience as an analyst is a bonus, but absolutely not required. Interested? Send a CV and anything else you believe we should consider to hiring @ redmonk.com.

You will have big shoes to fill, whoever you are. The analysts that have come before you have done some incredible work, and we expect nothing less from you.

Why work here? The most obvious reason is that RedMonk remains, in my obviously biased opinion, an amazing place to work. There aren’t too many jobs that allow you to influence the strategic direction and decision-making processes of some of the biggest and most important technology companies in the world – as well as their disruptors – and that give you a pulpit to produce public research for some of the best and brightest developers on the planet. Fewer jobs still let you work on things that are important, things that improve the day-to-day lives of developers and, by extension, the users they serve. Tim O’Reilly says to “work on stuff that matters“; we think we do, almost every day. And as you might guess from conferences like the Monktoberfest, we try to have fun doing it.

Add in the flexibility that working for a small firm offers, from the ability to define your own research agenda to good hardware to variable vacation time to the option of working from home, and it’s a damn good gig. If any of that sounds interesting to you, drop us a line.

Last, to our clients and customers: if any of you have questions about this news, feel free to contact me (sogrady @ redmonk.com) or James (jgovernor @ redmonk.com) if you like, or Juliane (juliane @ redmonk.com) as always. We’re happy to answer anything we can.

So we wish you well, Donnie, and look forward to seeing who will step up in your place.

Categories: RedMonk Miscellaneous.

Open Source and the Rise of as-a-Service Businesses

I Want You To Open Source!

As discussed in my recap of the 2014 predictions from this space, it has been interesting to see Oracle’s SEC filings reflect the structural changes to both its business and the industry as a whole.

In 2012 and prior, Oracle reported on:

  • New software licenses
  • Software license updates and product support

By 2013, that became (additions in bold):

  • New software licenses and cloud software subscriptions
  • Software license updates and product support

And for the last fiscal year, Oracle expanded that into:

  • New software licenses
  • Cloud software-as-a-service and platform-as-a-service
  • Cloud infrastructure-as-a-service
  • Software license updates and product support

In just a few lines of a Consolidated Statement of Operations is writ the recent history of the software industry. Even companies that have efficiently extracted billions of dollars in profit from traditional, perpetual license software businesses are increasingly looking to cloud and service-enabled lines of business for future growth.

The numbers make it easy to see why. From 2012 to 2014, Oracle’s new software license revenue was down 0.37%. Over the same time period, its IaaS, PaaS and SaaS offerings combined reported growth of 75%; if you exclude IaaS, which is a nascent business for the company, the growth rate jumps to 146%.
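The arithmetic behind those rates is simple percentage change. Here's a minimal sketch, using hypothetical revenue figures chosen only to reproduce the stated rates (the actual line items are in Oracle's filings):

```python
def growth_rate(start, end):
    """Percentage change from a starting to an ending figure."""
    return (end - start) / start * 100

# Hypothetical revenues in $M, chosen to illustrate the stated rates;
# the real figures come from Oracle's consolidated statements.
licenses_2012, licenses_2014 = 9_900.0, 9_863.0
cloud_2012, cloud_2014 = 900.0, 1_575.0  # IaaS, PaaS and SaaS combined

print(f"New licenses:   {growth_rate(licenses_2012, licenses_2014):+.2f}%")
print(f"Cloud combined: {growth_rate(cloud_2012, cloud_2014):+.2f}%")
```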

If we can accept for the sake of argument that this is not a unique adjustment of Oracle’s, but a pattern replicating itself across a wide range of businesses and industries, there are many questions to be answered about what the impacts will be to the industry around it. Of all of these questions, however, none is perhaps as important as the one I have discussed with members of the Eclipse and Linux Foundations over the past few weeks: what does the shift towards as-a-service businesses mean for open source? Is it good or bad for open source software in general?

The problem is that this question is difficult to answer precisely, because evidence can be found to support opposing arguments.

The Good News

On the positive front, the creation of services businesses has indirectly and directly led to an enormous portfolio of open source software. With the introduction and subsequent commercialization of the internet, a new class of problems demanded a new class of solutions. Prior to the internet, the types of scale-out architectures that are now standard within large service providers were relatively uncommon outside of specialized areas like HPC. The prevailing design assumption at the time, as Joe Gregorio observed, was N = 1, not the N > 1 fundamental assumption on which today’s platforms are built.

To satisfy the immediate demand for an enormous volume of new software written to solve new classes of problems, fundamental shifts were required. Most obviously, an unusually high percentage of this software was not written for purposes of sale. Unlike prior eras in which industry players lacking technical competencies effectively outsourced the job of software creation to third party commercial software organizations, companies like Amazon, Facebook and Google looked around and quickly determined that help was not coming from that direction – and even if it did, the economics of traditional software licensing would be a non-starter in scale-out environments.

Which is how this scale imperative led to a seismic shift in the way that software was designed and written. The decision by many of the original organizations to make these assets freely available as open source software consequently led to similarly titanic shifts in how software was distributed, marketed and sold.

The sheer number of companies not in the business of selling software who are releasing their creations as open source has dramatically inflated both the number and quality of available open source solutions. It has also put enormous competitive pressure on software vendors to either compete against open with a closed alternative or make their software similarly available.

The net, then, is that the rise of service-based businesses has directly and indirectly led to the creation of a lot of new open source software, which is positive for the industry – from a functional if not commercial standpoint – and customers alike. And having disrupted first the enterprise software industry and then compute, open source is now turning its eyes towards previously immune sectors like networking and storage.

The Bad News

One of the most important advantages open source enjoyed and continues to enjoy over proprietary alternatives is availability. As developers began to assert control over technical selection and direction in increasing numbers, even in situations where a proprietary alternative is technically superior, the sheer accessibility of open source software gave it an enormous market advantage. Choosing between adequate option A that could be downloaded instantly and theoretically superior option B gated by a salesperson was not in fact a choice. Thus Linux became the most widely adopted operating system on the cloud and MySQL the most popular relational database on the planet.

What is widely under-recognized, however, is the fact that from a convenience standpoint, open source does not enjoy the same advantages over its services counterparts that it did over proprietary competitors. Open source is typically less convenient than service-based alternatives, in fact. If it’s easier to download and spin up an open source database than talk to a salesperson, it’s even easier to download nothing at all and make setup, operation and backup of that database someone else’s problem.

If convenience is an increasingly important factor in technology adoption, then, and all of the available evidence suggests that it is, open source’s relative disadvantage in this area is a potential problem.

Particularly when you consider the motivations of vendors, who have not forgotten one of the primary lessons of the proprietary software market: locking in customers is good for business. As Shapiro and Varian put it in 1999, “the profits you can earn from a customer – on a going-forward, present-value basis – exactly equal the total switching costs” (emphasis theirs). Put another way, the more it costs to switch, the more profit it is theoretically possible to extract.
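Shapiro and Varian's point can be made concrete with a little present-value arithmetic. A sketch with invented numbers (the $200 premium and 8% discount rate are assumptions for illustration, not data from any vendor):

```python
def present_value(cashflows, discount_rate):
    """Discounted present value of a stream of future cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cashflows, start=1))

# Hypothetical: a vendor sustains a $200/yr premium over the competition
# for five years before the customer re-evaluates.
premium = present_value([200.0] * 5, discount_rate=0.08)
print(f"PV of the lock-in premium: ${premium:,.2f}")
# Per Shapiro and Varian, that extractable value is bounded by the
# customer's total switching cost: if migrating away costs less than
# roughly $799, the premium cannot be sustained.
```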

Among services companies, meanwhile, we see radically different attitudes towards the value of software, and thereby attitudes towards the act of open sourcing internal software. Facebook, at one end of the spectrum, is radically open, contributing everything from infrastructure software (e.g. Cassandra, HHVM, PrestoDB) to datacenter designs to the public knowledge pool. Amazon and Microsoft, however, are not major contributors of net new projects or packages to upstream projects.

The Net

There is little debate that to this point, the rise of service-based businesses has been a boon for open source. And for many of the pioneers of these scale-out businesses, open source is a competitive weapon in their biggest challenge: talent recruitment and retention. Developers evaluating open positions today are often faced with a choice: develop in a black box, where your work will touch thousands of other developers daily but you’ll receive no external credit for it, or work on interesting problems with some latitude for sharing your work outside the firewall, improving your visibility and marketability moving forward.

To some extent, the lack of interest in the AGPL is a testament to the foundational role of open source in building out service-based businesses. If collaborative development and upstream contributions to key infrastructure projects were a major issue, the AGPL would probably be employed more frequently as a solution. Instead, it is an infrequently used license whose protections are widely regarded as unnecessary.

All of that being said, however, it is equally true that the future for open source in a services world is ambiguous. There are substantial incentives for vendors to drift towards non-open software, and the current trend towards permissive licensing, if anything, could accelerate this. By placing few if any restrictions on usage of open source software, permissive licenses present no barrier to any form of usage of open source software in proprietary contexts. It is true, however, that for services-based businesses, even copyleft licenses such as the GPL pose little threat because the distribution trigger is not tripped.

Taken as a whole, then, open source advocates would be wise to be appreciative of how far service businesses have gotten them to date, while being wary and watchful of their intentions moving forward.

Disclosure: Amazon, Microsoft and Oracle are RedMonk clients. Facebook is not.

Categories: Cloud, Hardware-as-a-Service, Open Source, Software-as-a-Service.

What’s in Store for 2015: A Few Predictions

If it seems odd to be posting predictions for the forthcoming year almost three months in, that’s because it is. In my defense, however, the 2015 iteration of this exercise comes little more than ten days late by last year’s standards. Which were, it must be said, very late themselves. Delayed or not, however, predictions are a useful exercise if only because, as Bryan Cantrill says, they may tell us as much about the present as the future.

Before we continue, a brief introduction to how these predictions are formed, and the weight you may assign to them. The forecast here is based, variously, on both quantitative and qualitative assessments of products, projects and markets. For the sake of handicapping, the predictions are delivered in groups by probability, beginning with the most likely and concluding with the most volatile.

From the initial run in 2010, here is how my predictions have scored annually:

  • 2010: 67%
  • 2011: 82%
  • 2012: 70%
  • 2013: 55%
  • 2014: 50%

You may note the steep downward trajectory in the success rate over the past four years. While that may rightly be considered a reflection of my abilities as a forecaster, it is worth noting that the aggressiveness of the predictions was deliberately increased in 2013. This has led to possibly more interesting but provably less reliable predictions since; you may factor that adjustment in as you will.


  • Amazon is Going to Become More Dominant Thanks to “Honeypots”
    Very few today would argue that Amazon is anything other than the dominant player in the public cloud space. In spite of its substantial first mover advantage, the company has continued to execute at the frantic pace of a late market entrant. This has maintained or even extended the company’s lead, even as some of the largest technology companies in the world have realized their original mistake and scramble to throw resources at the market.

    In 2015, Amazon will become even more dominant thanks to its ability to land customers on what I term “honeypot” services – services that are exceedingly easy to consume, and thus attractive – and cross/upsell them to more difficult-to-replicate or proprietary AWS products. Which are, notably, higher margin. Examples of so-called “honeypot” services are basic compute (EC2) and storage (S3) services. As consumption of these services increases, which it is across a large number of customers and wide range of industries, the friction towards other AWS services such as Kinesis, Redshift and so on decreases and consumption goes up. Much as was the case with Microsoft’s Windows platform, the inertia keeping customers on AWS will become overwhelming.

    The logical question to ask about escalating consumption of services that are difficult or impossible to replicate outside of AWS, of course, concerns lock-in.

    The answer to what impact this will have on consumption can be found from examining the history of the software industry. If the experience of the past several decades, from IBM to Microsoft to VMware, tells us anything, it’s first that when asked directly, customers will deny any willingness to lock themselves into a single provider. It also demonstrates, however, that for the right value – be that cost, or more typically convenience or some combination of the two – they are almost universally willing to lock themselves into a single provider. Statements to the contrary notwithstanding. Convenience kills.

  • Kubernetes, Mesos, Spark et al are the New NoSQL
    Not functionally, of course. But the chaos of the early NoSQL market is remarkably similar to the evolution of what we’re seeing from projects like Mesos or Spark. First, there has been a rapid introduction of a variety of products which require a new conceptual understanding of infrastructure to appreciate. Second, while there may be areas of overlap between projects, in general they are quite distinct from one another. Third, the market’s understanding of what these projects are for and how they are to be used is poor to quite poor.

    In the early days of NoSQL, for example, we used to regularly see queries to our content that were some variation of “hadoop vs mongodb vs redis.” While these projects all are similar in that they pertain to data, that is about all they have in common. This was not obvious to the market for some time, however, as generations accustomed to relational databases being the canonical means of persisting data struggled to adapt to a world of document and graph databases or MapReduce engines and key-value databases. In other words, the market took very dissimilar products and aggregated them all under the single category NoSQL, in spite of the fact that the majority of the products in said category were not comparable.

    This is currently what we see when customers are evaluating projects like Kubernetes, Mesos and Spark: the initial investigation is less about functional capability or performance than about basic education. Not through any failing on the part of the individual projects, of course. It just takes time for markets to catch up. For 2015, then, expect these and similar projects to achieve higher levels of visibility, but remain poorly understood outside the technical elite.

    If, per Wikipedia, the term NoSQL was reintroduced in 2009 and may have achieved mainstream status in 2014, it may be 2020 before the orchestrators, schedulers and fabrics are household names.


  • Docker will See Minimal Impact from Rocket
    Following the announcement of CoreOS’s container runtime project, Rocket, we began to field a lot of questions about what this meant for Docker. Initially, of course, the answer was simply that it was too soon to say. As we’ve said many times, Docker is one of the fastest growing – in the visibility sense – projects we have ever seen. Along with Node.js and a few others, it piqued developer interest in a way and at a rate that is exceedingly rare. But past popularity, while strongly correlated with future popularity, is not a guarantee.

    In the time since, we’ve had a lot of conversations about Docker and Rocket, and the anecdotal evidence strongly suggested that the negative impact, if any, to Docker’s trajectory would be medium to long term. Most of the conversations we have had with people in and around the Docker ecosystem suggest that while they share some of CoreOS’s concerns (and have some not commonly cited), the project’s momentum was such that they were committed for the foreseeable future.

    It’s still early, and the results are incomplete, but the quantitative data from my colleague above seems to support this conclusion. At least as measured by project activity, Docker’s trendline looks unimpacted by the announcement of Rocket. I expect this to continue in 2015. Note that this doesn’t mean that Rocket is without prospects: multiple third parties have told us they are open to the idea of supporting alternative container architectures. But in 2015, at least, Docker’s ascent should continue, if not necessarily at the same exponential rate.

  • Google Will Hedge its Bets with Java and Android, But Not Play the Swift Card
    Languages and runtimes evolve, of course, and eventually Google may shift Android towards another language. Certainly the fledgling support for Go on the platform introduced in 1.4 was interesting, both because of the prospect of an alternative runtime option and because of the growth of Go itself.

    That being said, change is unlikely to be coming in 2015 if it arrives at all. For one, Go is a language largely focused on infrastructure for the time being. For another, Google has no real impetus to change at the moment. Undoubtedly the company is hedging its bets internally pending the outcome of “Oracle America, Inc. v. Google, Inc.,” which has seen the Supreme Court ask for the federal government’s opinion. And certainly the meteoric growth of Swift has to be comforting should the company need to make a clean break with Java.

    But the bet here is, SCOTUS or no, we won’t see a major change on the Android platform in 2015.

  • Services Will Be the New Open Core
    I’ve written extensively (for example) on how companies are continuing to shift away from the tried and true perpetual license model, and for those interested in this topic, I have an O’Reilly title on it due any day now entitled “The Software Paradox.” The question is what comes next?

    Looking at the industry today, it’s clear that at least with respect to infrastructure software, it is difficult to compete without some component of your solution – and typically, a core that is viable as a standalone product – being open source. As Cloudera co-founder Mike Olson puts it, “You can no longer win with a closed-source platform.” Which sounds like a win for open source, and indeed to some extent is.

    It is equally true, however, that building an open source company is inherently more challenging than building one around proprietary software. This has led to the creation of a variety of complicated monetization mechanisms, which attempt to recreate proprietary margins while maintaining the spirit of the underlying open source project. Of these, open core has emerged as the most common. In this model, a company maintains an open source “core” while layering on premium, proprietary components.

    While this model works reasonably well, it does create friction within the open source community, as the commercial organization is inevitably presented with complicated decisions about what to open source and what to withhold as proprietary software. Couple that with the fact that for some subset of users the open source components may be “good enough,” and it’s clear that while open core is a reasonable adaptation to the challenges of selling software today, it’s far from perfect.

    Which is why we are and will continue to see companies turn to service-based revenue models as an alternative. When you’re selling a service instead of merely a product, many of the questions endemic to open core go away. And even in models where 100% of the source code is made available, selling services remains a simpler exercise because selling services is not just selling source code: it’s selling the ability to run, manage and maintain that code.

    In the late 1990s, when the services model was first proposed by companies then referred to as “Application Service Providers,” the idea was laughable. Why rent when you could buy?

    Whether it’s software or cars today, however, customers are increasingly turning towards on-demand, rental models. It’s partially an economic decision, as trading up-front capital expense for manageable recurring payments is often desirable. But more importantly, it’s about outsourcing everything from risk to construction, support and maintenance to third parties.

    Consider, for example, Oracle. In the space of three years the company has gone from reporting on “New software licenses” in its SEC filings to “New software licenses,” “Cloud software-as-a-service and platform-as-a-service,” and “Cloud infrastructure-as-a-service.” There will be exceptions, as there always are, but the trajectory of this industry is clear and it’s towards services. We’ll leave the implications of this shift for open source as a topic for another day.


  • One Consumer IoT Provider is going to be Penetrated with Far Reaching Consequences
    If it weren’t for the fact that there’s only ten months remaining in the year and the “far reaching” qualifier, this would be better categorized as “Likely” or even “Safe.” But really, this prediction is less about timing and more about inevitability. Given the near daily breaches of organizations regarded as technically capable, the longer the time horizon the closer the probability of a major IoT intrusion gets to 1. The attack surface of the consumer IoT market is expanding dramatically from thermostats to smoke detectors to alarms to light switches to door locks to security systems. Eventually one of these providers is going to be successfully attacked, and bad things will follow. The prediction here is that that happens this year, much as I hope otherwise.

  • AWS Lambda Will be Cloned by Competitive Providers
    With so many services already available and more being launched by the day, it’s difficult for any single service to stand out. Redshift is hailed as the fastest growing service in the history of the business unit, EC2 and S3 continue to provide the foundation to build on, and accelerating strikes into application territory via WorkDocs and WorkMail make it easy for newly launched AWS offerings to get lost. Particularly when they don’t fit pre-existing categories.

    For my money, however, Lambda was the most interesting service introduced by AWS last year. On the surface, it seems simplistic: little more than Node daemons, in fact. But as the industry moves towards services in general and microservices specifically, Lambda offers key advantages over competing models and will begin to push developer expectations. First, there’s no VM or instance overhead: Node snippets are hosted effectively in stasis, waiting on a particular trigger. Just as interest in lighter weight instances is driving containers, so too does it serve as incentive for Lambda adoption. Second, pricing is based on requests served – not running time. While the on-demand nature of the public cloud offered a truly pay-per-use model relative to traditional server deployments, Lambda pushes this model even further by redefining usage from merely running to actually executing. Third, Lambda not only enables but compels a services-based approach by lowering the friction and increasing the incentives towards the servicification of architectures.

    Which is why I expect Lambda to be heavily influential if not cloned outright, and quickly.
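To see why the per-request model is compelling, consider a deliberately simplified comparison. The prices below are invented for illustration and ignore free tiers and duration-based charges; they are not actual AWS rates:

```python
# Invented prices, for illustration only; not actual AWS rates.
INSTANCE_HOURLY = 0.05        # $/hour for an always-on instance
PER_MILLION_REQUESTS = 0.20   # $/million requests, Lambda-style

def monthly_instance_cost(hours=730):
    # Billed for running time, whether or not any requests arrive.
    return hours * INSTANCE_HOURLY

def monthly_request_cost(requests):
    # Billed only when code actually executes.
    return requests / 1_000_000 * PER_MILLION_REQUESTS

for reqs in (100_000, 10_000_000):
    print(f"{reqs:>12,} requests: always-on ${monthly_instance_cost():.2f}, "
          f"per-request ${monthly_request_cost(reqs):.2f}")
```

For spiky or low-volume workloads, paying only for execution is dramatically cheaper; the always-on model only begins to make sense once traffic is sustained and heavy.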


  • Slack’s Next Valuation Will be Closer to WhatsApp than $1B
    In late October of last year, Slack took $120M in new financing that valued the company at $1.12B. A few eyebrows were raised at that number; as Business Insider put it, “when it was given a $1.1 billion valuation last October, just 8 months after launching for the general public, there were some question marks surrounding its real value.”

    The prediction here, however, is that that number is going to seem hilariously light. The basic numbers are good. The company is adding $1M in ARR every 11 days, and has grown its user base by 33X in the past twelve months. Those users have sent 1.7B messages in that span.

    But impressive as those numbers might be, how does that get Slack anywhere close to WhatsApp? The same month that Slack was valued at a billion dollars, WhatsApp hit 600 million users – or 1200 times what Slack is reporting today. How would they be even roughly comparable, then?

    In part, because Slack’s users are likely to be more valuable than WhatsApp’s. The latter is free for users for the first year, then priced at $0.99 per annum following. Slack meanwhile, offers a free level of service with limitations on message retention, service integrations and so on, with paid pricing starting at $6.67 – per month. Premium packages go up to $12.50 per month, with enterprise services priced at $49-99 coming. This structure should allow Slack to hit a substantially higher ARPU than WhatsApp – which it will need to because it’s so far behind in subscriber count.

    The real reason that Slack is undervalued, however, is that its versatility is currently being overlooked. Slack is currently viewed by external parties as a business messaging tool, not a consumer one. See, for example, this graphic from the Wall Street Journal.

    Source: The Wall Street Journal

    The omission of Slack here is understandable, because Slack itself has made no effort to brand itself or price itself as a consumer-friendly messaging tool. From the marketing to the revenue model, Slack is pitched as a product for teams.

    The interesting thing, however, is that what makes it useful for teams also makes it useful for social groups. A group of my friends, for example, has turned Slack into a replacement not just for email or texting but WhatsApp. While a number of us started using WhatsApp while in Europe last year, Slack’s better for two reasons. First, it’s mobile capable, but not mobile only. True, WhatsApp has added a web client, but that’s Chrome only. And frankly, for a messaging app that you use heavily, a browser tab doesn’t offer the experience that a native app does – and Slack’s native apps are excellent. For Slack, the desktop is a first class citizen. For WhatsApp and the like, it’s an afterthought.

    Second, the integrations model makes Slack far more than just a messaging platform. My particular group of friends has added a travel channel where our Foursquare and TripIt notifications are piped in, an Untappd channel where our various checkins are recorded, a blog channel which notifies us of posts by group members and so on. We also have our own Hubot linked to our Slack instance, so we can get anything from raccoon gifs to Simpsons quotes on demand.

    Slack has been so successful with this group of friends, in fact, that I created another room for our local Portland chapter of Computers Anonymous. While it’s early, it seems as if the tool will have legs in terms of helping keep an otherwise highly distributed group of technologists in touch.

    Anecdotal examples do not a valuation make, of course. And there’s no guarantee that Slack will do anything to advance this usage, or even continue to permit it. But it does speak to the company’s intrinsic ability to function well as a message platform outside of the target quote unquote business team model. What might a company with the ability to sell to both WhatsApp users and enterprises be worth? My prediction for 2015 is a lot more than $1.12B.
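The pricing gap described earlier is worth quantifying. Using the published figures cited above ($0.99 per user per year for WhatsApp, $6.67 per user per month for Slack's entry paid tier):

```python
WHATSAPP_ANNUAL = 0.99       # $/user/year after the free first year
SLACK_MONTHLY_ENTRY = 6.67   # $/user/month, entry paid tier

slack_annual = SLACK_MONTHLY_ENTRY * 12
print(f"Slack entry tier:  ${slack_annual:.2f}/user/year")
print(f"Ratio vs WhatsApp: {slack_annual / WHATSAPP_ANNUAL:.0f}x")
```

In other words, one paying Slack seat is worth roughly eighty WhatsApp subscribers per year, which is why the subscriber-count gap is less damning than it first appears.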


  • The Apple Watch Will be the New Newton
    If this were any other company besides Apple, this prediction would likely be in the “Safe” category. “Likely,” at worst. Smartwatches as currently conceived seem like solutions in search of a problem. Is anyone that desperate to see who’s calling them that they need a notification on their wrist? Who makes phone calls anymore anyway? As for social media and email notifications, aren’t we already dangerously interrupt driven enough? And then there’s the battery life. For the various Android flavors, it varies from poor to abysmal. Apple, for its part, hasn’t talked much about the battery life on their forthcoming device, which probably isn’t a great sign.

    But it’s Apple, and counting them out is dangerous business indeed. How many couldn’t see the point of combining a computer with a phone? Or the appeal of a tablet (in 2010, not 1993)? Hence the elevation of this prediction to the spectacular category.

    It could very well be that Apple will find a real hook with the Watch and sell them hand over fist, but I’m predicting modest uptake – for Apple, anyway – in 2015. They’ll be popular in niches. The Apple faithful will find uses for it and ways to excuse the expected shortcomings of a first edition model. If John Gruber’s speculation on pricing is correct, meanwhile, Apple will sell a number of the gold-plated Edition model to overly status conscious rich elites. But if recent rumors about battery life are even close to correct, I find it difficult to believe that the average iPhone user will be willing to charge their phone once a day and their watch twice in that span.

    While betting against Apple has been a fool’s game for the better part of a decade, then, call me the fool. I’m betting the under on the Apple Watch. Smartwatches will get here eventually, I’m sure, but I don’t think the technology is quite here yet. Much as it wasn’t with the Newton once upon a time.

Categories: Business Models, Cloud, Collaboration, Containers, Databases, Hardware-as-a-Service, Microservices, Mobile, Platform-as-a-Service, Programming Languages, Software-as-a-Service.

Revisiting the 2014 Predictions

With the calendar now reading January, it’s time to look ahead to 2015 and set down some predictions for the forthcoming year. As is the case every year, this is a two part exercise: first, reviewing and grading the predictions from the prior year, and second, making the predictions for this one. The results articulated by the first hopefully allow the reader to properly weight the contents of the second – with one important caveat that I’ll get to.

This year will be the fifth year I have set down annual predictions. For the curious, here is how I have fared in years past.

Before we get to the review of 2014’s forecast, one important note regarding the caveat mentioned above. Prior to 2013, the predictions in this space focused on relatively straightforward expectations with well understood variables. After some ferocious taunting constructive feedback from Bryan Cantrill, however, the emphasis shifted towards trying to anticipate the totally unexpected rather than the safely predictable.

You can see from the score how that worked out. Nevertheless, we press on. Without further delay, here are the 2014 predictions in review.


38% or Less of Oracle’s Software Revenue Will Come from New Licenses

As discussed in November of 2013 and July of 2012, while Oracle has consistently demonstrated an ability to grow its software related revenues, the percentage of same derived from the sale of new licenses has been in decline for over a decade. Down to 38% in 2013 from 71% in 2000, there are no obvious indications that 2014 will buck this trend.

Because of some changes in reporting, it’s a bit tricky to answer this one simply. When Oracle reported its financial results in 2012, their consolidated statement of operations included just two categories in software revenue: “New software licenses” and “Software license updates and product support.” The simple, classic perpetual license software business, in other words. A year later, however, Oracle was still reporting in two categories, but “New software licenses” had become “New software licenses and cloud software subscriptions.”

While this made it impossible to compare 2012 to 2013 on an apples-to-apples basis, the basic premise held: with 2012 theoretically reporting new software licenses only, the percentage of overall software revenue that category represented was 37.93%. The number in 2013, with “cloud software subscriptions” now folded in? 37.58%. With or without cloud revenue included, then, a distinct minority – and declining percentage – of overall Oracle revenue was derived from the sale of new licenses.

For the fiscal year 2014, however, Oracle finally abandoned the two category reporting structure and broke cloud revenue out into not just one but two entirely new categories. In its 2014 10-K, Oracle provides revenue numbers for the following:

  • New software licenses
  • Cloud software-as-a-service and platform-as-a-service
  • Cloud infrastructure-as-a-service
  • Software license updates and product support

Which raises the question: if we’re trying to determine what percentage of Oracle’s software revenue derives from the sale of new software licenses, do we include one or both cloud categories, or evaluate software on a standalone basis?

If the latter, the 2014 prediction is easily satisfied. If we exclude cloud revenue, only 32.25% of Oracle’s non-hardware revenue was derived from new software licenses – a drop of 5.68 percentage points since the last time Oracle reported on that category alone. But given that 2013 conflated software and cloud revenue and was used as the basis for the 2014 prediction here, it seems only fair to use that as the basis for judgement.

So what percentage of overall revenue did new cloud (IaaS, PaaS and SaaS included) and software licenses generate for the company in 2014? 37.65%.
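The arithmetic above is easy to reproduce. The line items below (in $M) are approximate figures reconstructed to be consistent with the percentages cited in this post, not values copied from Oracle’s filings; treat them as illustrative.

```python
# Approximate Oracle FY2014 software line items (in $M), reconstructed to
# match the 32.25% / 37.65% figures cited above -- illustrative only.
fy2014 = {
    "new_software_licenses": 9416,
    "cloud_saas_paas": 1121,
    "cloud_iaas": 456,
    "license_updates_and_support": 18206,
}

software_total = sum(fy2014.values())

# Software-only basis: new licenses as a share of all software revenue
licenses_only = 100 * fy2014["new_software_licenses"] / software_total

# 2013-comparable basis: fold both cloud categories in with new licenses
licenses_plus_cloud = 100 * (
    fy2014["new_software_licenses"]
    + fy2014["cloud_saas_paas"]
    + fy2014["cloud_iaas"]
) / software_total

print(f"{licenses_only:.2f}%")        # ~32.25%
print(f"{licenses_plus_cloud:.2f}%")  # ~37.65%
```

Either denominator excludes hardware and services entirely, which is why both bases are described above as shares of “software revenue” rather than total revenue.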

Which means we’ll count this prediction as a hit.

The Biggest Problem w/ IoT in 2014 Won’t Be Security But Compatibility

Part of the promise of IoT devices is that they can talk to each other, and operate more efficiently and intelligently by collaborating. And there are instances already where this is the case: the Nest Protect smoke alarm, for example, can shut off a furnace in case of fire through the Nest thermostat. But the salient detail in that example is the fact that both devices come from the same manufacturer. Thus far, most of the IoT devices being shipped are designed as individual silos of information. So much so, in fact, that an entirely new class of hardware – hubs – has been created to try and centrally manage and control the various devices, which have not been designed to work together. But while hubs can smooth out the rough edges of IoT adoption, they are more band-aid than solution.

And because this may benefit market leaders like Nest – customers have a choice between buying other home automation devices that can’t talk to their Nest infrastructure or waiting for Nest to produce ones that do – the market will be subject to inertial effects. Efforts like the AllSeen Alliance are a step in the right direction, but in 2014 would-be IoT customers will be substantially challenged and held back by device to device incompatibility.

If the high profile penetrations of JP Morgan, Sony et al had been IoT related, this prediction would have been more problematic. But while there were notable IoT related security incidents, like the one described in this December report in which a blast furnace at a German steel mill was remotely manipulated, in 2014 the bigger issue seems to have been compatibility.

Perhaps in recognition of this limiting factor, manufacturers have indicated that 2015 is going to see progress in this area. Early in January, for example, Nest announced at CES that it would be partnering with over a dozen new third party vendors, from August to LG to Philips to Whirlpool. 2014 also saw the company acquire the manufacturer of a potentially complementary device, the Dropcam. This interoperation will be crucial to expanding the market as a whole, because connected devices unable to interoperate with each other are of far more limited utility.

I’ll count this as a hit.

Windows 7 Will Be Microsoft’s Most Formidable Competitor

The good news for Microsoft is that Windows 7 adoption is strong, with more than twice the share of Windows XP, the next most popular operating system according to Statcounter. The bad news for Microsoft is that Windows 7 adoption is strong.

With even Microsoft advocates characterizing Windows 8 as a “mess,” Microsoft has some difficult choices to make moving forward. Even setting aside the fact that mobile platforms are actively eroding the PC’s relevance, what can or should Microsoft tell its developers? Embrace the design guidelines of Windows 8, which the market has actively selected against? Or stick with Windows 7, which is widely adopted but not representative of the direction that Microsoft wants to head? In short, then, the biggest problem Microsoft will face in evangelizing Windows 8 is Windows 7.

The good news for Microsoft is that Windows 7 declined slightly, from 50.32% in January to 49.14% in December. The bad news is that Windows 8.1 (11.77%) is still behind Windows XP (11.93%) in share. Back on the bright side, that was up from Windows 8’s 7.57% in January and the next closest non-Microsoft competitor was Mac OS at 7.83%.

Still, it seems pretty clear that Windows 7 is Microsoft’s most formidable competitor – we’ll see how Windows 10 does against it. Hit.

The Low End Premium Server Business is Toast

Simply consider what’s happened over the last 12 months. IBM spun off its x86 server business to Lenovo, at a substantial discount from the original asking price if reports are correct. Dell was forced to go private. And HP, according to reports, is about to begin charging customers for firmware updates. Whether the wasteland that is the commodity server business is more the result of defections to the public cloud or big growth from ODMs is ultimately irrelevant: the fact is that the general purpose low end server market is doomed. This prediction would seem to logically dictate decommitments to low end server lines from other businesses besides IBM, but the bet here is that emotions win out and neither Dell nor HP is willing to cut that particular cord – and Lenovo is obviously committed.

It’s difficult to measure this precisely because players like Dell remain private and shipment volumes from ODM suppliers are opaque, but there are several things we know. In spite of growth in PCs, HP’s revenue was down 4% (1% net) in 2014. And while CEO Meg Whitman expects x86 servers to play a part in a 2015 rebound, there are no signs that that was the case in 2014. Cisco, meanwhile, which eclipsed HP for sales of x86 blade servers in Q1, grew its datacenter business (which includes servers) 27.3% in 2014 over the year prior – but that was down from 59.8% growth from 2012 to 2013, and the 2014 revenue represents only 7.3% of Cisco’s total for the year.

Amazon, on the other hand, is growing by virtually any metric, and rapidly in terms of users, consumption metrics and its portfolio of available services. Nor is Amazon the only growth area in public cloud: DigitalOcean has become the fourth largest web host in the world in less than two years, according to Netcraft.

Whether you base it on Amazon’s one million plus customers, then, or the uncertain fortunes of the x86 alternatives, it’s clear that traditional x86 businesses remain in real trouble. Hit.


2014 Will See One or More OpenStack Entities Acquired

Belatedly recognizing that the cloud represents a clear and present danger to their businesses, incumbent systems providers will increasingly double down on OpenStack as their response. Most already have some commitment to the platform, but increasing pressure from public cloud providers (primarily Amazon) as well as proprietary alternatives (primarily VMware) will force more substantial responses, the most logical manifestation of which is M&A activity. Vendors with specialized OpenStack expertise will be in demand as providers attempt to “out-cloud” one another on the basis of claimed expertise.

There are a few acquisitions here that are not OpenStack entities but certainly influenced by same – HP/Eucalyptus and Red Hat/Inktank come to mind – but it’s not necessary to include these to make this prediction come true. Just in the last year we’ve seen EMC acquire Cloudscaling, Cisco pick up Metacloud and Red Hat bring on eNovance. That leaves a variety of players still on the board, from Blue Box to Mirantis to Piston, and it will be interesting to see whether further consolidation lies ahead. But in the meantime, this prediction can safely be scored as a hit.

The Line Between Venture Capitalist and Consultant Will Continue to Blur

We’ve already seen this to some extent, with Hilary Mason’s departure to Accel and Adrian Cockcroft’s move to Battery Ventures. This will continue in large part because it can represent a win for both parties. VC shops, increasingly in search of a means of differentiation, will seek to provide it with high visibility talent on staff and available in a quasi-consultative capacity. And for the talent, it’s an opportunity to play the field to a certain extent, applying their abilities to a wider range of businesses rather than strictly focusing on one. Like EIR roles, they may not be long term, permanent positions: the most likely outcome, in fact, is for talent to eventually find a home at a portfolio company, much as Marten Mickos once did at Eucalyptus from Benchmark. But in the short term, these marriages are potentially a boon to both parties and we’ll see VCs emerge as a first tier destination for high quality talent.

The year 2014 did see some defections to the VC ranks, but certainly nothing that could be construed as a legitimate trend for the year. This is a miss.

Netflix’s Cloud Assets Will Be Packaged and Create an Ecosystem Like Hadoop Before Them

My colleague has been arguing for the packaging of Netflix’s cloud assets since November of 2012, and to some extent this is already occurring – we spoke to a French ISV in the wake of Amazon re:Invent that is doing just this. But the packaging effort will accelerate in 2014, as would-be cloud consumers increasingly realize that there is more to operating in the cloud than basic compute/network/storage functionality. From Asgard to Chaos Monkey, vendors are increasingly going to package, resell and support the Netflix stack much as communities have sprung up around Cassandra, Hadoop and other projects developed by companies not in the business of selling software. To give myself a small out here, however, I don’t expect much from the ecosystem space in 2014 – that will only come over time.

In spite of some pilot efforts here and there including services work, there was little “acceleration” of the packaging of Netflix’s cloud assets. This is a miss.


Disruption Finally Comes to Storage and Networking in 2014

While it’s infrequently discussed, networking and storage have proven to be largely immune from the aggressive commoditization that has consumed first major software businesses and then low end server hardware. They have not been totally immune, of course, but by and large both networking and storage have been relatively insulated against the corrosive impact of open source software – in spite of the best efforts of some upstart competitors.

This will begin to change in 2014. In November, for example, Facebook’s VP of hardware design disclosed that they were very close to developing open source top-of-rack switches. That open source would eventually come for both the largely proprietary networking and storage providers was always inevitable; the question was timing. We are finally beginning to see signs that one or both will be disrupted in the current year, whether it’s through collective efforts like the Open Compute Project or simply clever repackaging of existing technologies – an outcome that seems more likely in storage than networking.

As discussed previously, strictly speaking, disruption had already come for storage at the time this was originally written. As for networking, the disclosure that some of the largest potential networking customers – Amazon and Facebook, among others – are now designing and manufacturing their own networking gear instead of purchasing it from traditional suppliers was disruptive enough. The fact that Facebook’s custom network designs, at least, are likely to be released to the public should be that much more concerning to traditional networking suppliers.

With the caveat then that the storage timing, at least, was off, this is a hit.


The Most Exciting Infrastructure Technology of 2014 Will Not Be Produced by a Company That Sells Technology

More and more today, the most interesting new technologies are being developed not by companies that make money from software – one reason that traditional definitions of “technology company” are unhelpful – but by those that make money with software. Think Facebook, Google, Netflix or Twitter. It’s not that technology vendors are incapable of innovating: there are any number of materially interesting products that have been developed for purposes of sale.

The difficulty with predictions like these, as I should know by now, is that they’re dependent on arbitrary and subjective definitions – in this case, what’s the most “exciting” project of 2014. While there are many potential candidates, for us at RedMonk, Docker was one of our most discussed infrastructure projects over the past calendar year. By a variety of metrics, it’s one of the most quickly growing projects we have ever seen. The Google Trends graph above corroborates this, albeit in an understated manner.

As a result, it seems fair to argue that Docker is a good candidate for the most exciting infrastructure technology of 2014. And unfortunately for my prediction, it is in fact produced by a company that sells software. So this is a miss.

Google Will Buy Nest – er, Google Will Move Towards Being a Hardware Company

In the wake of Google’s acquisition of Nest, which I cannot claim with a straight face that I would have predicted, this prediction probably would have been better positioned in the Safe or Likely categories, as it seemed to indicate a clear validation of this assertion. But then they went and sold Motorola to Lenovo, effectively de-committing from the handset business.

So while I don’t expect hardware to show up in the balance sheet in a meaningful way in 2014, it seems probable that by the end of the year we’ll be more inclined to think of Google as a hardware company than we do today.

In spite of the launch of the Nexus Player, the acquisition of Nest, the continued success of the Chromecast, beating Apple to market with Android-powered smartwatches and a new pair of Nexus phone and tablet devices – not to mention the self-driving cars – it can’t realistically be claimed that people think of Google as a hardware company today. Certainly the company has more involvement in physical hardware than it ever has, but by and large the company’s perception is shaped by its services: Search, AdSense/AdWords, Gmail, GCE etc. That might have shifted somewhat if the Nest brand had been folded into Google’s and the company had released additional device types, but that’s merely speculation.

The fact is that Google is not materially more of a hardware company today than it was when these predictions were made. Ergo, this is a miss.


Google Will Acquire IFTTT

Acquisitions are always difficult to predict, because of the number of variables involved. But let’s say, for the sake of argument, that you a) buy the prediction that a major problem with the IoT is compatibility and b) believe that Google is becoming more of a hardware – and, more broadly, IoT – company over time: what’s the logical next step if you’re Google? Maybe you contemplate the acquisition of a Belkin or similar, but more likely you (correctly) decide the company has quite enough to digest at the moment in the way of hardware acquisitions. But what about IFTTT?

By more closely marrying the service to their collaboration tools, Google could a) differentiate same, b) begin acclimating consumers to IoT-style interconnectivity, and c) begin generating even more data about consumer habits to feed their existing (and primary) revenue stream, advertising.

Not much argument here, as IFTTT was not acquired by anyone, Google included. The logic behind the prediction remains sound, but there’s no way to count this as anything other than a miss.

The Final Tally

To wrap things up, how did the above predictions score? The short answer is not well. Out of the ten predictions for the year, five were correct. Which means, unfortunately, that five were not, good for a dismal 50% average. In the now five years of this exercise, 50% is the lowest score ever, undercutting even last year’s previous low – the year that saw the debut of the new, more aggressive format, which is obviously not a coincidence.

In my defense, however, the misses were primarily drawn from the least certain predictions; all of the “Safe” predictions, for example, were hits. In terms of scoring, then, the context is important. The failure rate of predictions is highly correlated to their difficulty. It’s simpler, obviously, to predict acquisitions in a given category than to predict a specific acquirer/acquiree match.

All of that said, the forthcoming predictions for 2015 will remain aggressive in nature, even if that means 2016 will see a similarly contrite and humble predictions wrap-up.

Categories: Cloud, Hardware, IoT, Network, Storage.

The RedMonk Programming Language Rankings: January 2015

Update: These rankings have been updated. The third quarter snapshot is available here.

With two quarters having passed since our last snapshot, it’s time to update our programming language rankings. Since Drew Conway and John Myles White originally performed this analysis late in 2010, we have been regularly comparing the relative performance of programming languages on GitHub and Stack Overflow. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion (Stack Overflow) and usage (GitHub) in an effort to extract insights into potential future adoption trends.

In general, the process has changed little over the years. With the exception of GitHub’s decision to no longer provide language rankings on its Explore page – they are now calculated from the GitHub archive – the rankings are performed in the same manner, meaning that we can compare rankings from run to run, and year to year, with confidence.

This is brought up because one result in particular, described below, is very unusual. But in the meantime, it’s worth noting that the steady decline in correlation between rankings on GitHub and Stack Overflow observed over the last several iterations of this exercise has been arrested, at least for one quarter. After dropping from its historical .78 – .8 range to .74 during the Q314 rankings, the correlation between the two properties is back up to .76. It will be interesting to observe whether this is a temporary reprieve, or if the lack of correlation itself was the anomaly.
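The correlation figure in question is a straightforward rank correlation. A minimal, stdlib-only sketch of the Spearman computation follows; the rank lists are invented for illustration and are not RedMonk’s actual data.

```python
# Minimal sketch of the rank-correlation step behind these rankings: a
# Spearman coefficient over each language's rank on GitHub vs. Stack
# Overflow. Rank data below is invented for illustration.

def spearman(xs, ys):
    """Spearman rho for two equal-length rank lists (no tie correction)."""
    n = len(xs)
    d2 = sum((x - y) ** 2 for x, y in zip(xs, ys))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

github_ranks = [1, 2, 3, 4, 5, 6]
stackoverflow_ranks = [2, 1, 3, 4, 6, 5]

print(round(spearman(github_ranks, stackoverflow_ranks), 2))  # 0.89
```

Identical orderings yield 1.0; the .74–.78 values discussed above correspond to strong but imperfect agreement between the two populations.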

For the time being, however, the focus will remain on the current rankings. Before we continue, please keep in mind the usual caveats.

  • To be included in this analysis, a language must be observable within both GitHub and Stack Overflow.
  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers on the plot, however, is generally representative of substantial differences in relative popularity.
  • GitHub language rankings are based on raw lines of code, which means that a repository written in a given language that includes a greater amount of code in a second language (e.g. JavaScript) will be counted as the latter rather than the former.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top tiers of languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.

(click to embiggen the chart)

Besides the above plot, which can be difficult to parse even at full size, we offer the following numerical rankings. As will be observed, this run produced several ties which are reflected below (they are listed out here alphabetically rather than consolidated as ties because the latter approach led to misunderstandings).

1 JavaScript
2 Java
3 PHP
4 Python
5 C#
5 C++
5 Ruby
9 C
10 Objective-C
11 Perl
11 Shell
13 R
14 Scala
15 Haskell
16 Matlab
17 Go
17 Visual Basic
19 Clojure
19 Groovy
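The tied positions in the list above follow standard competition ranking: tied entries share a rank, and the next entry skips ahead by the size of the tie. A sketch of that scheme follows; the `competition_rank` helper and the scores are invented for illustration, not RedMonk’s tooling or data.

```python
# Standard competition ("1224") ranking, as used in the list above: tied
# scores share a rank, and the following rank skips past the tie.
# Scores here are invented; lower score = more popular.

def competition_rank(scores):
    ordered = sorted(scores.items(), key=lambda kv: kv[1])
    ranks, prev_score, prev_rank = {}, None, 0
    for i, (name, score) in enumerate(ordered, start=1):
        if score == prev_score:
            ranks[name] = prev_rank          # tie: reuse the shared rank
        else:
            ranks[name] = prev_rank = i      # new rank skips past any ties
            prev_score = score
    return ranks

scores = {"C#": 5.0, "C++": 5.0, "Ruby": 5.0, "C": 6.2, "Objective-C": 6.5}
print(competition_rank(scores))
# C#, C++ and Ruby tie at 1; C takes 4 and Objective-C 5
```

This is why three languages share fifth place above and the next entry appears at ninth rather than sixth.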

By the narrowest of margins, JavaScript edged Java for the top spot in the rankings, but as always, the difference between the two is so marginal as to be insignificant. The most important takeaway is that the language frequently written off for dead and the language sometimes touted as the future have shown sustained growth and traction and remain, according to this measure, the most popular offerings.

Outside of that change, the Top 10 was effectively static. C++ and Ruby each jumped one spot to split fifth place with C#, but that minimal distinction reflects the lack of movement in the rest of the “Tier 1,” or top grouping of languages. PHP has not shown the ability to unseat either Java or JavaScript, but it has, for its part, remained unassailable in the third position. After a brief drop in Q1 of 2014, Python has been stable in the fourth spot, and the rest of the Top 10 looks much as it has for several quarters.

Further down in the rankings, however, there are several trends worth noting – one in particular.

  • R: Advocates of the language have been pleased by four consecutive gains in these rankings, but this quarter’s snapshot showed R instead holding steady at 13. This was predictable, however, given that the languages remaining ahead of it – from Java and JavaScript at the top of the rankings to Shell and Perl just ahead – are more general purpose and thus likely to be more widely used. Even if R’s growth does stall at 13, however, it will remain the most popular statistical language by this measure, and this in spite of substantial competition from general purpose alternatives like Python.

  • Go: In our last rankings, it was predicted based on its trajectory that Go would become a Top 20 language within six to twelve months. Six months following that, Go can consider that mission accomplished. In this iteration of the rankings, Go leapfrogs Visual Basic, Clojure and Groovy – and displaces Coffeescript entirely – to take number 17 on the list. Again, we caution against placing too much weight on the actual numerical position, because the differences between one spot and another can be slight, but there’s no arguing with the trendline behind Go. While the language has its critics, its growth prospects appear secure. And should the Android support in 1.4 mature, Go’s path to becoming a Top 10 if not Top 5 language would be clear.

  • Julia/Rust: Long two of the notable languages to watch, Julia and Rust’s growth has typically been in lockstep, though not for any particular functional reason. This time around, however, Rust outpaced Julia, jumping eight spots to 50 against Julia’s more steady progression from 57 to 56. It’s not clear what’s responsible for the differential growth, or more specifically if it’s problems with Julia, progress from Rust (with a DTrace probe, even), or both. But while both remain languages of interest, this ranking suggests that Rust might be poised to outpace its counterpart.

  • Coffeescript: As mentioned above, Coffeescript dropped out of the Top 20 languages for the first time in almost two years, and may have peaked. From its high ranking of 17 in Q3 of 2013, in the three runs since, it has clocked in at 18, 18 and now 21. The “little language that compiles into JavaScript” positioned itself as a compromise between JavaScript’s ubiquity and syntactical eccentricities, but support for it appears to be slowly eroding. How it performs in the third quarter rankings should provide more insight into whether this is a temporary dip or more permanent decline.

  • Swift: Last, there is the curious case of Swift. During our last rankings, Swift was listed as the language to watch – an obvious choice given its status as the Apple-anointed successor to the #10 language on our list, Objective-C. Being officially sanctioned as the future standard for iOS applications everywhere was obviously going to lead to growth. As was said during the Q3 rankings which marked its debut, “Swift is a language that is going to be a lot more popular, and very soon.” Even so, the growth that Swift experienced is essentially unprecedented in the history of these rankings. When we see dramatic growth from a language it typically has jumped somewhere between 5 and 10 spots, and the closer the language gets to the Top 20 or within it, the more difficult growth is to come by. And yet Swift has gone from our 68th ranked language during Q3 to number 22 this quarter, a jump of 46 spots. From its position far down on the board, Swift now finds itself one spot behind Coffeescript and just ahead of Lua. As the plot suggests, Swift’s growth is more obvious on StackOverflow than GitHub, where the most active Swift repositories are either educational or infrastructure in nature, but even so the growth has been remarkable. Given this dramatic ascension, it seems reasonable to expect that the Q3 rankings this year will see Swift as a Top 20 language.

The Net

Swift’s meteoric growth notwithstanding, the high level takeaway from these rankings is stability. The inertia of the Top 10 remains substantial, and what change there is in the back half of the Top 20 or just outside of it – from Go to Swift – is both predictable and expected. The picture these rankings paint is of an environment thoroughly driven by developers; rather than the heavy concentration around one or two languages that platform vendors have aspired to in the past, we’re seeing a broad distribution amongst a larger number of top tier languages followed by a long tail of more specialized usage. With the exceptions mentioned above, then, there is little reason to expect dramatic change moving forward.

Update: The above language plot chart was based on an incorrect Stack Overflow tag for Common Lisp and thereby failed to incorporate existing activity on that site. This has been corrected.

Categories: Programming Languages.

DVCS and Git Usage in 2014

To many in the technology industry, the dominance of Decentralized Version Control Systems (DVCS) generally and Git specifically is taken as a given. Whether it’s consumed as a product (e.g. GitHub Enterprise/Stash), service (Bitbucket, GitHub) or base project, Git is the de facto winner in the DVCS category, a category which has taken considerable share from its centralized alternatives over the past few years. With macro trends fueling further adoption, it’s natural to expect that the ascent of Git would continue unimpeded.

One datapoint which has proven useful for assessing the relative performance of version control systems is Open Hub (formerly Ohloh)’s repository data. Built to index public repositories, it gives us insight into the respective usage at least within its broad dataset. In 2010 when we first examined its data, Open Hub was crawling some 238,000 projects, and Git managed just 11% of them. For this year’s snapshot, that number has swelled to over 674,000 – or close to 3X as many. And Git’s playing a much more significant role today than it did then.

Before we get into the findings, more details on the source and issues.


The data in this chart was taken from snapshots of the Open Hub data exposed here.

Objections & Responses

  • “Open Hub data cannot be considered representative of the wider distribution of version control systems”: This is true, and no claims are made here otherwise. While it necessarily omits enterprise adoption, however, it is believed here that Open Hub’s dataset is more likely to be predictive moving forward than a wider sample.
  • “Many of the projects Open Hub surveys are dormant”: This is probably true. But even granting a sizable number of dormant projects, it’s expected that these will be offset by a sizable influx of new projects.
  • “Open Hub’s sampling has evolved over the years, and now includes repositories and forges it did not previously”: Also true. It also, by definition, includes new projects over time. When we first examined the data, Open Hub surveyed less than 300,000 projects. Today it’s over 600,000. This is a natural evolution of the survey population, one that’s inclusive of evolving developer behaviors.

With those caveats in mind, let’s start with the big picture. The following chart depicts the total share of repositories attributable to centralized (CVS/Subversion) and distributed (Bazaar/Git/Mercurial) systems.

Even over a brief three year period (we lack data for 2011, and have thus omitted 2010 for continuity’s sake) it’s clear that DVCS systems have made substantial inroads. DVCS may not be quite as dominant as is commonly assumed, but it’s close to managing one in two projects in the world. When considering the inertial effects operating against DVCS, this traction is impressive. In spite of the fact that it can be difficult even for excellent developers to shift their mental model from centralized to decentralized, that version control systems are typically a lower priority than other infrastructure elements, and that the risks associated with moving from one system to another are non-trivial, DVCS has clearly established itself as a popular, mainstream option. Close observation of the above chart, however, reveals a slight hiccup in adoption numbers which we’ll explore in more detail shortly.

In the meantime, let’s isolate the specific changes per project between our 2014 snapshot and the 2010 equivalent. How has their relative share changed?

As might be predicted, comparing 2010 to 2014, Git is the clear winner. The project with the idiosyncratic syntax made substantial gains (25.92%), partially at the expense of Subversion (-12.02%) but more so CVS (-16.64%). Just as clearly, Git is the flag bearer for DVCS more broadly, as the other decentralized version control systems, Bazaar and Mercurial, showed only modest improvement over that span – 1.33% and 1.41% respectively. The takeaways, then, from this span are first that DVCS is a legitimate first class citizen and second that Git is the most popular option in that category.
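The share-change arithmetic above is simple percentage-point subtraction between two snapshots. In the sketch below, Git’s 11% 2010 share comes from the text earlier in this post; every other figure is a hypothetical placeholder, not Open Hub data.

```python
# Percentage-point deltas between two share snapshots. Only Git's 11%
# 2010 share is taken from the text; the rest are hypothetical
# placeholders for illustration, not Open Hub's actual figures.
share_2010 = {"git": 11.0, "svn": 48.0, "cvs": 26.0, "bazaar": 5.0, "hg": 4.0}
share_2014 = {"git": 36.9, "svn": 36.0, "cvs": 9.4, "bazaar": 6.3, "hg": 5.4}

deltas = {vcs: round(share_2014[vcs] - share_2010[vcs], 2) for vcs in share_2010}
print(deltas)  # positive = share gained, negative = share lost
```

Note that these are deltas in share of the total repository population, not growth rates: a system can add repositories in absolute terms while still losing share to a faster-growing rival.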

What about the past year, however? Has Git continued on its growth trajectory?

The short answer is no. With this chart, it’s very important to note the scale of the Y axis: the changes reflected here are comparatively minimal, which is to be expected over the brief span of one year. That being said, it’s interesting to observe that Subversion shows a minor bounce (1.28%), while Git (-1.17%) took a correspondingly minor step back. Bazaar and CVS were down negligible amounts over the same span, while Mercurial was ever so slightly up.

Neither quantitative nor qualitative evidence supports the idea that Git adoption is stalled, nor that Subversion is poised for a major comeback. Wider market product trends, if anything, contradict the above, and suggest that the most likely explanation for the delta in Open Hub’s numbers is the addition of major new centrally managed codebases to Open Hub’s index.

It does serve as a reminder, however, that as much as the industry takes it for granted that Git is the de facto standard for version control systems, a sizable volume of projects have yet to migrate to a decentralized system of any kind. The implications for this are many. For service providers who are Git-centric, it may be worth considering creating bridges for users on other systems or even offering assistance in VCS migrations. For DVCS providers, the above may be superficially discouraging, but in reality indicates that the market opportunity is even wider than commonly assumed. And for users, it means that those still on centralized systems should consider migrating to decentralized alternatives, but by no means are condemned to the laggard category.

While it is thus assumed here, however, that the step back for Git is an artifact, it will be interesting to watch the growth of the platform over the next year. One year’s lack of growth is easily dismissed as an anomaly; a second year would be more indicative of a pattern. It will be interesting to see what the 2015 snapshot tells us.

Disclosure: Black Duck, the parent company of Open Hub, has been a RedMonk customer but is not currently.

Categories: Version Control.