
The Implications of Cloud Native

Two months ago, “Cloud Native” was something of a new term, adopted most visibly by the Cloud Foundry project; a term at once aspirational and unburdened by legacy. As of this week at OSCON, it’s a statement, borderline manifesto. As if it weren’t enough that Google and a host of others adopted the term as well, it now has its own open source foundation – the imaginatively titled Cloud Native Computing Foundation. In the wake of its relatively sudden emergence, the obvious questions are, first, what is Cloud Native, and second, what does it mean for the industry?

As far as the term itself, the short answer is a new method for building applications. The long answer depends in large part on who you ask.

There is a rough consensus on many Cloud Native traits. Containers as an atomic unit, for example. Micro-services as the means of both construction and communication. Platform independence. Multiple language support. Automation as a feature of everything from build to deployment. High uptime. Ephemeral infrastructure (cattle not pets). And so on.

Bigger picture, the pattern Cloud Native platforms have in common is that they are a further abstraction. Individual compute instances are subsumed into a larger, virtual whole, what has been referred to here as a fabric.

Where the consensus breaks down is what software – precisely – one might select to achieve the above.

One of the most interesting aspects of OSCON in the year 2015 is what may be taken for granted. In the early days of the conference, attendees were the rebel alliance, insurrectionists waging war against the proprietary empire and desperately asserting their legitimacy at the same time. Today, open source has won to the degree that you hear phrases like “single-entity open source is the new proprietary.” As Mike Olson once put it, “you can no longer win with a closed-source platform.” This victory for open source has many implications, but one much in evidence at OSCON this year is choice.

With open source rapidly becoming the default rather than a difficult-to-justify exception, naturally the market has more open source options available to it. Which on the one hand, is excellent news, because more high quality software is a good thing. Choice does not come without a cost, however, and that’s particularly evident in what is now known as the Cloud Native space.

One of the biggest issues facing users today is, paradoxically, choice. In years past, the most difficult decision customers had to make was whether to use BEA or IBM for their application server. Today, they have to sort through projects like Aurora, Cloud Foundry, Kubernetes, Mesos, OpenShift and Swarm. They have to understand where their existing investments in Ansible, Chef, Puppet and Salt fit in, or don’t. They have to ask how Kubernetes compares to Diego. Bosh to Mesos. And where do containers fit in with all of the above, which container implementation do they leverage and are they actually ready for production? Oh, and what infrastructure is all of this running on? Bare metal? Or is something like OpenStack required? Is that why Google joined? And on and on.

Even if we assume for the sake of argument that the Cloud Native vision will be an increasingly appealing one for enterprises, how to get there is an open question. One that, with rare exceptions such as the Cloud Foundry Foundation’s OSCON slides, few of the participants are doing much to help answer, concerned as they are with their own particular worldviews.

Beyond the problem of choice, Cloud Native is, as mentioned previously, deliberately and explicitly exclusionary. It posits that there are pre- and post-cloud development models, and implies that the latter is the future. Certainly it’s the recommended approach. Traditional packaged applications or legacy three-tier architectures, in other words, need not apply.

But if we step back, Cloud Native also represents the return trajectory of a very long orbit. Decades ago, early mainframe virtualization capabilities notwithstanding, the notion of computing was a single large machine. When you deployed a mainframe application, you didn’t decide which mainframe, it went to the mainframe. With the subsequent mini-computer and client-server revolutions came a different notion, one further propagated and accelerated by Infrastructure-as-a-Service offerings: individual servers – physical or otherwise – as the atomic unit of computing. Instead of hosting an application on a large machine, as in the mainframe days, architectures were composed of individual machine instances – whether measured in the single digits or tens of thousands.

This has been the dominant computing paradigm for decades. While the primary obstacle to using the first wave of PaaS platforms like Google App Engine was their proprietary nature, their break from this paradigm was a secondary obstacle. PaaS, like Cloud Native today, implies a fundamental rethinking of the nature of infrastructure. Where once applications would be deployed to a forest of individual instances, these platforms instead would have users push them to a single fabric – one large, virtual computer. Almost as if we’re back to the mainframe, if the mainframe were a federation of large numbers of individual instances integrated via systems software with scheduling capabilities.

The current Cloud Native crop of software isn’t the first time we’ve seen this “single virtual computer” concept made manifest, of course. There are many examples of this extant today, the most familiar of which may be Hadoop. There are no individual Hadoop servers; there are instead fleets of machines linked via distributed filesystems and schedulers to jointly execute large scale data operations.

In that respect, Cloud Native can be thought of as Hadoop for modern applications. Applications aren’t pushed to a server or servers, they’re deployed to a fabric which decides where and how they are run. Running an application on a single large computer requires that it be constructed in a fundamentally different way than if it were run on lots of traditional instances, which is why Cloud Native is deliberately and explicitly exclusionary.

Many applications will never make that jump; it won’t be technically feasible or, more likely, economically practical, to become Cloud Native. But the trajectory at this point seems clear. Just as organizations today take their hardware cues from internet pioneers like Facebook, Google or Twitter, so too will they be following the internet pioneers’ lead in software infrastructure. If a company builds its datacenter like Facebook, why run it differently? Not in the binary, all-or-nothing sense, but rather that the idea of Cloud Native, virtual single-computer infrastructures or fabrics will become a mainstream deployment option in a way that they are not today. The question is: on what timeframe?

That question is incredibly important, because Cloud Native going mainstream would have profound implications for vendors, open source projects and hosted offerings alike. Cloud Native becoming a first class citizen, if not the default, would be a major opportunity for projects such as Cloud Foundry or Kubernetes, for the vendors that support those projects and for the companies that offer them as a service – the likes of Google, HP, IBM, Pivotal or Red Hat, in other words. Any transition that impacted standard notions of infrastructure, meanwhile, would require projects (e.g. OpenStack) or vendors (e.g. Amazon, VMware) focused on that paradigm to adapt, and potentially implement one of the Cloud Native projects themselves.

To date, in spite of the long term availability of fabric-like PaaS products – the first of which debuted in September of 2007, some thirteen months after EC2 – infrastructure models that resembled traditional physical architectures have dominated the market. Nor should we expect this crown to be surrendered in the near term, given Amazon’s comical growth rate. But an increasing number of datapoints suggest that the traditional infrastructure paradigm will be at least complemented by an alternative, whether it’s called Cloud Native, a fabric or a platform. The question is not whether Cloud Native will be a destination, then, but rather how one gets there.

Disclosure: Amazon, Ansible, Chef, CoreOS, HP, IBM, Pivotal, Red Hat, and VMware are customers, as are multiple OpenStack participating projects. Facebook, Google, Puppet and Salt are not currently RedMonk customers.

Categories: Cloud.

Nadella’s Tough Decision

In late June, CEO Satya Nadella emailed Microsoft’s staff with an announcement that was part updated mission statement and part warning shot. In it, Nadella articulated his vision for Microsoft, which “is to empower every person and every organization on the planet to achieve more.” A little less concrete than “a PC on every desk and in every home,” but it certainly doesn’t lack for ambition. Nadella also served notice, however, that “tough choices in areas where things are not working” were coming. The only question was what he meant.

On July 8th, he provided the answer. Microsoft cut 7,800 jobs from its phone business, or around 6% of its total work force, in what is generally understood to be a repudiation of Steve Ballmer’s acquisition of Nokia. Microsoft isn’t exiting the phone business entirely, but the company’s plans for the business have been dramatically scaled back to something more closely resembling Google’s Nexus hardware line.

While it’s difficult to argue the point that Microsoft’s phone business was, to borrow Nadella’s words, “not working,” there was nevertheless some surprise and discontent amongst observers of the company. The principal objection to this decommitment is perhaps best encapsulated by the Ars Technica piece “Analysis: Nadella threatens to consign Microsoft to a future of desktop obscurity.” These arguments can be summed up relatively simply: mobile is a vital and growing market, particularly when measured against its massive but stagnant desktop counterpart, and therefore Microsoft has no choice but to compete in this market regardless of its performance to date.

Given the stakes of mobile, this argument is understandable. Former Microsoft CEO Steve Ballmer, the man responsible for the Nokia deal, was hardly the first to take major risks in search of mobile rewards. Google, who once enjoyed a close relationship with Apple, earned the Cupertino manufacturer’s undying enmity when the search giant felt compelled to jump into the market itself with Android.

But there are a few problems with the argument that Microsoft should have continued its Charge of the Light Brigade with what remained of Nokia.

  • First, there’s the question of approach. Even the critics of Nadella’s move would likely concede that Microsoft’s mobile platforms – both hardware and software – are also-rans in their respective markets. This is in spite of years of investment and the multi-billion dollar acquisition of what was once one of the handset market’s preeminent manufacturers. If you’re going to argue that Microsoft should not backtrack from the Nokia assets, then, it is necessary to provide a strategy for success in the market that Microsoft has not attempted yet. Otherwise, you’re essentially arguing that the company should throw good money after bad. If you want to argue that they should compete in a given market, that’s fine, but you also have to plausibly explain how they could compete in that market.
  • Second, there is the question of return. Let’s assume, counterfactually, that a unique and untried strategy was conceived and propelled Microsoft back to relevance. What would the return be? The market suggests that the financial return would be limited. As has been documented many times, in spite of its marketshare minority, the overwhelming majority of profits in the handset market are owned by Apple.

    It’s difficult to conceive of a scenario in which this would not be true of Microsoft as well. Microsoft has tried to duplicate Apple-like margins in other hardware lines, such as the marginally more successful Surface, but those are markets without carrier intermediaries, and thus more straightforward. To be relevant in this market, Microsoft would likely have to follow the same course as the Android manufacturers, which is to keep margins minimal.

    Some have argued that Microsoft’s return for a minimally profitable hardware business would come elsewhere. The question is where? They don’t need a mobile platform to sell Office; that’s already available – thanks in part to Nadella – on both of the most popular mobile platforms. They don’t need a hardware platform to sell OS licenses, because the company has already conceded to the market reality: the market value of a mobile OS is $0, thanks to Google. What about the message of “Universal Windows Apps” for the legions of Windows developers out there? This idea has some intrinsic problems, in that independent of technology, universal applications are very difficult to build because of fundamental differences in form factors and input methods. But Microsoft also doesn’t need a flagship handset business to make this argument.

  • Third, there are other opportunities. Arguments that Microsoft must win in mobile or perish ignore the reality that the public cloud is going to be a large and growing market. And unlike mobile, where it was likely facing a Sisyphean task, Microsoft is as well positioned in the public cloud as anyone save Amazon to capitalize on that growth. Some will look at Apple’s comical, absurd profit lines and conclude that Nadella’s decision to abandon the path Ballmer tried to set the company on is like walking away from a potentially winning lottery ticket. But thus far, that lottery has only produced one Apple.

    It is also worth asking whether or not Microsoft was positioned to compete effectively in a consumer market. While there are obvious exceptions – the Xbox, for example – Microsoft is at its core more business oriented than consumer. Gates may have wanted the PC in every home, but for most consumers the operating system was an afterthought: it was never an object of desire in the way that an iPhone is. Microsoft did well to extend into the home from the business, and to get consumers using its business-focused Office software, but the company was never really about consumers.

    Azure, on the other hand, is explicitly and expressly a business play, in a market that offers massive opportunities for growth. One that Microsoft has the DNA to be far more successful – and profitable – in than handsets.

Even if, as reported, Nadella was not in favor of the Nokia acquisition originally, it was undoubtedly, as he put it, a tough decision for the company. There’s the human cost of telling almost eight thousand people that they need to seek employment elsewhere, and there’s the public relations cost of telling the market the company you lead had effectively made a $7 billion mistake. But as tough as it must have been, Nadella made the logical decision for the company. Now it’s up to Microsoft to capitalize on the focus he has afforded it.

Disclosure: Microsoft is not currently a RedMonk customer.

Categories: Mobile.

Meet the New Monk: Fintan Ryan

The problem with a good problem to have is that it’s still a problem. It is, by definition, better than whatever the alternative is. It is also, by definition, still a challenge. This is how I came to consider our recent hiring process. On the one hand, we had a legitimately overwhelming number of bright, talented and passionate candidates. On the other, well, we had a legitimately overwhelming number of bright, talented and passionate candidates. How does one sift through dozens of applicants who would all bring something different, something important, to the table?

In our case, the answer is: very deliberately. We went through multiple interview rounds. We reviewed submitted materials. We researched backgrounds. We tested. And internally we debated. And debated. And debated. We’d spend a half hour agonizing over whether one candidate would simply make it on to the next round. Just to help myself in the decision-making process, I put together a baseball-style scouting scoreboard for our finalists, ranking them on a variety of characteristics as a scout would, with a numerical ranking from 20-80.

We could have made our lives easier, of course, by narrowing the funnel. One of our candidates asked us about this, in fact. My answer was simple:

We kept the funnel wide, knowing that it would cost us time, because we wanted to get it right. Hiring a BMC developer and a Mayo Clinic scientist worked for us in the past, after all, so we talked to evangelists, electrical engineers, professors, COOs, consultants, a marketer or two and developers, naturally. The notes from the first round alone stretched over 40 pages.

Eventually, however, our lengthy starting list was funneled down to a single name, and that name was Fintan Ryan.

Fintan may be familiar to some of you, whether it’s from the work he’s done with RedMonk in the past on a few conferences or some of the community work he’s done in London. In any event, those of you who follow what we do at RedMonk will have the chance to get to know him better.

As you’ve come to expect with new RedMonk analysts, Fintan brings an eclectic mix of skills to the table. He’s been a developer – holds a few US patents, in fact. He’s been the one tasked with managing developers as well, from waterfall to agile. He’s done yeoman’s work in community organizing, whether it’s conferences with us like IoT at Scale and Thingmonk or external events such as the CoreOS London meetups.

Analytically, his quantitative research chops are excellent; he did some very interesting research just prior to our opening, in fact, for no other reason than he was curious. And it’ll be nice to have someone else on board working in R. Beyond the technical skills, however, Fintan seems to have a knack for asking interesting questions, a trait that can be harder to find than the ability to answer them.

Most importantly, however, Fintan is passionate about what we do at RedMonk. From my perspective, almost every other requirement for this job is negotiable. Believing in what we do, however, isn’t optional.

Starting on August 5th, then, Fintan will be the next monk. With us, he’ll be covering the same broad spectrum of topics that we cover, and based on the quality of the research he did as a function of our interview process, you’re going to enjoy his work. In the meantime, please join me in welcoming Fintan to the RedMonk family, and if you’re so inclined, feel free to hunt him down on Twitter to say hello.

Categories: RedMonk Miscellaneous.

Fit Which Culture?

Now hiring

In early June, a number of people I follow on Twitter linked to this piece by Mathias Meyer entitled “Why Hiring for ‘Culture Fit’ Hurts Your Culture.” Although it was apparently triggered by the specific example of drinking as an element of company culture, Meyer correctly and succinctly articulates a broader definition of culture itself, one that encompasses everything from the copy you use for recruiting to the health merits of the food and drink provided in an office.

Culture, in other words, is everything – in the literal sense – that a company is. Every decision, every activity, every product, every message and every hire is both driven by and representative of an organization’s culture.

In the startup world in particular, culture can be a competitive weapon, a means of asymmetrically competing with larger, better resourced competitors.
If you’re a cash poor startup, you need more than equity to recruit against cushy, well paying gigs with Big Co. Culture is, at least in theory, an asset that differentiates a startup from other opportunities. On the one hand, Big Cos offer stability, good money and real benefits. On the other, startups have higher theoretical upside from a compensation standpoint – but that’s not enough. Enter culture.

Most large company cultures, if they were ever employee-centric, eventually find that that scales poorly. Which is, as an aside, why so many startups that grow past the point of being a startup have significant HR issues. For the smaller startups, however, perks that might not scale to a large organization become a selling point. Startup culture, in other words, is by its very nature intended not just to run a business, but to serve as a recruitment tool.

Which is intrinsically neither good nor bad, in my view. Culture, in this sense, is just a tool. The problem is that while startup culture is designed to appeal to potential employees, what appeals to one employee may be unappealing or impractical for another. And when the fact that one aspect is unappealing or impractical is used as an organizational filter, under the banner of ‘hiring for culture fit,’ you have a problem. Hiring only people like you is not a great approach, for reasons that are hopefully obvious. Worse, it’s both an insidious problem to identify and one that can be difficult to remedy. If a majority of employees have bought into, and to some extent signed up for, an aspect of a given culture that may need to be scaled back or deprecated entirely, they are likely to fight it.

All of that being said, I don’t believe Meyer’s answer – “Stop using ‘culture fit’ ” – is necessarily the correct approach. The obvious risk, of course, is throwing the baby out with the bathwater. Instead, it’s important to be absolutely clear on which aspects of your culture – specifically – are being considered during the hiring process.

At RedMonk, for example, alcohol is to some extent a part of our culture. Not in the way that it is for startups, because none of us work in the same city. At the Monktoberfest and Monki Gras, at least, we deliberately incorporate craft beer into the experience. But while we tried to have some fun with our job description, we have never and will never consider whether someone drinks, or what they drink (even if it’s major domestics), as part of the hiring process. Because it’s simply not relevant to being an analyst, and therefore will never be a part of how we evaluate a candidate for that role.

There are cultural aspects, however, that are critical to us. Personality, for example, is important. Whether someone fits into our distributed culture, for example, is very relevant to us, as is our estimation of how they will in the cultural sense represent RedMonk while at events, with clients and so on. The most crucial cultural consideration, however, is whether a candidate believes in what we do. While we are not blind to other lenses, we are passionate about the importance of developers at RedMonk and believe that this underpins the work that we produce. If someone lacks that same passion, they probably aren’t a great fit for us.

There seems to be little question, ultimately, that hiring for “culture fit” can be a mistake, one that reinforces organizational stereotypes and limits organizational diversity. As someone put it to me recently, when you hire your friends, you’re likely to look around eventually and realize that a number of them were hired just because they were your friends, not on their merits – and they’re simply happy to be there. Which is a problem for everyone.

But removing “culture fit” from the equation entirely seems to be an over-rotation in the opposite direction, one that may lead to the hire of employees who are legitimately poor fits for one reason or another.

Instead of dismissing ‘culture fit’ entirely, then, the best approach seems to be to whittle down the huge, unmanageably broad definition of culture down to just those few characteristics that do matter. There are many more aspects of a given culture that don’t matter than do, and should have no impact on hiring. But there are always a few that should play a role, so understanding what those are and being explicit about them is key.

Categories: People.

The RedMonk Programming Language Rankings: June 2015

This iteration of the RedMonk Programming Language Rankings is brought to you by HP. The tools you want, the languages you prefer. Built on Cloud Foundry, download the HP Helion Development Platform available today.

It being the third quarter, it is time at RedMonk to release our bi-annual programming language rankings. As always, the process has changed very little since Drew Conway and John Myles White’s original analysis late in 2010. The basic concept is simple: we regularly compare the performance of programming languages relative to one another on GitHub and Stack Overflow. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion (Stack Overflow) and usage (GitHub) in an effort to extract insights into potential future adoption trends.

In general, the process has changed little over the years. With the exception of GitHub’s decision to no longer provide language rankings on its Explore page – they are now calculated from the GitHub archive – the rankings are performed in the same manner, meaning that we can compare rankings from run to run, and year to year, with confidence. There was, however, a minor issue with this month’s run which had an interesting impact which will be discussed in more detail below.

In the first quarter run, we noted that an erosion in the typically strong correlation between how a language performed on GitHub and Stack Overflow had been arrested. Down to .74 in Q3 2014, the correlation in Q1 was back up to .76. For the third quarter, however, the correlation has resumed its slide; this ranking’s .73 represents an all-time low. The correlation between the two properties remains strong from a statistical standpoint, but it will be interesting to observe whether the two properties continue to drift apart.
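For readers curious about the mechanics, the figures above (.73–.76) are rank correlations between the two populations. The exact method of the original analysis may differ, but a minimal sketch of one standard approach, Spearman’s rho computed over hypothetical per-language ranks, looks like this:

```python
# Hedged sketch: computing a rank correlation like the ~.73 reported above.
# The language names and ranks below are hypothetical illustration only.

def spearman_rho(xs, ys):
    """Spearman's rank correlation for two equal-length rank lists.
    Assumes no ties, where rho = 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(xs)
    d2 = sum((x - y) ** 2 for x, y in zip(xs, ys))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

# Hypothetical ranks for the same five languages on each property
github_rank = [1, 2, 3, 4, 5]  # rank by GitHub usage
stack_rank = [2, 1, 3, 5, 4]   # rank by Stack Overflow discussion
print(round(spearman_rho(github_rank, stack_rank), 2))  # 0.8
```

A rho near 1 would mean the two properties agree almost perfectly on language order; the reported slide from .76 to .73 reflects the orderings drifting apart.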

Before we continue, please keep in mind the usual caveats.

  • To be included in this analysis, a language must be observable within both GitHub and Stack Overflow.
  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers on the plot, however, is generally representative of substantial differences in relative popularity.
  • GitHub language rankings are based on raw lines of code, which means that repositories written in a given language that include a greater amount of code in a second language (e.g. JavaScript) will be read as the latter rather than the former.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top tiers of languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.
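The lines-of-code caveat above is worth making concrete. The sketch below, with hypothetical numbers, shows how attributing a repository to whichever language contributes the most code can misclassify a project that vendors a large framework in another language:

```python
# Hedged sketch of the lines-of-code caveat: a repository counts toward
# whichever language has the most lines, so a "Ruby" app bundling a large
# JavaScript framework is read as JavaScript. Numbers are hypothetical.

def primary_language(lines_by_language):
    """Return the language contributing the most lines to a repository."""
    return max(lines_by_language, key=lines_by_language.get)

repo = {"Ruby": 4_000, "JavaScript": 12_500, "Shell": 300}
print(primary_language(repo))  # JavaScript
```

This is one reason JavaScript's raw GitHub numbers benefit: it rides along in repositories whose primary development language is something else entirely.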

(Chart: plot of language rankings by GitHub and Stack Overflow popularity; click to embiggen)

Besides the above plot, which can be difficult to parse even at full size, we offer the following numerical rankings. As will be observed, this run produced several ties which are reflected below (they are listed out here alphabetically rather than consolidated as ties because the latter approach led to misunderstandings). Note that this is actually a list of the Top 21 languages, not Top 20, because of said ties.

1 JavaScript
2 Java
3 PHP
4 Python
5 C#
5 C++
5 Ruby
8 CSS
9 C
10 Objective-C
11 Perl
11 Shell
13 R
14 Scala
15 Go
15 Haskell
17 Matlab
18 Swift
19 Clojure
19 Groovy
19 Visual Basic
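The tie-aware numbering above is standard competition ranking: tied languages share a rank, and the ranks that follow are skipped accordingly, which is how a Top 20 cutoff can contain 21 names. A minimal sketch, with hypothetical scores:

```python
# Hedged sketch of "1224"-style competition ranking, as used in the list
# above: tied items share the best rank, and subsequent ranks are skipped.
# Scores are hypothetical; the real rankings combine two data sources.

def competition_ranks(scores):
    """Map each item to its rank; higher score is better.
    Ties share a rank and the following ranks are skipped (1, 2, 2, 4...)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    ranks, prev_score, prev_rank = {}, None, 0
    for position, item in enumerate(ordered, start=1):
        if scores[item] == prev_score:
            ranks[item] = prev_rank  # tie: reuse the earlier rank
        else:
            ranks[item] = prev_rank = position
            prev_score = scores[item]
    return ranks

scores = {"JavaScript": 99, "Java": 98, "C#": 95, "C++": 95, "Ruby": 95, "C": 90}
print(competition_ranks(scores))
# {'JavaScript': 1, 'Java': 2, 'C#': 3, 'C++': 3, 'Ruby': 3, 'C': 6}
```

Note how the three-way tie consumes three rank slots, so the next language lands at 6 rather than 4, exactly the effect that produces a 21-entry "Top 20."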

As with last quarter, JavaScript maintains a slim margin on second-place Java, with the caveat that the difference between numerical rankings is slight. The language’s sustained performance, however, reflects its versatility and growing strategic role amongst startups and enterprises alike.

Aside from those two languages, the Top 10 has been static. With minor exceptions, in fact, it has remained static for several years. While we see periodic arguments from advocates of a particular language, or a particular style or type of language, the simple fact is that the group of the most popular languages has changed little and shows little propensity for future change, though there are two notable would-be challengers discussed below. This raises some interesting questions about language adoption and whether fragmentation has reached its apogee.

Outside of the Top 10, however, we have several changes worth discussing in more detail.

  • Go: A year ago, we predicted that Go would become a Top 20 language within a six to twelve month timeframe. Six months ago, it achieved that goal landing as the #17 language in our January rankings. In this quarter’s run, Go continues on that same trajectory, up another two spots to #15. In the process, it leapfrogged Haskell and Matlab. While the language has appeared at times to be in the trough of disillusionment following an extended honeymoon period, none of the periodic criticism has had any apparent impact on the project’s growth. And with an increasingly strategic foundational role within projects that are themselves strategic, Go’s future appears bright. It’s also worth considering whether the Supreme Court decision could eventually, indirectly lead to a more significant change in Go’s fortunes given recent project activity.

  • Erlang: One of the long time choices for developers struggling with concurrency, Erlang jumped one spot on our rankings from #26 to #25, which merits mention because of a recent change in the licensing of the project. Two weeks ago, at the urging of a few prominent Erlang community members, Erlang dropped its early-MPL derived Erlang Public License in favor of the Apache License, Version 2. While a change of this type will not by itself do much to affect the project’s fortunes, removing friction to the adoption of a project – which transitioning from a vanity license to a widely accepted public alternative represents – is certainly a welcome development.

  • Julia/Rust: Historically, we’ve discussed these two languages together because they were both languages to watch, they were closely ranked and on similar trajectories. Last quarter, however, Rust put some distance between itself and its erstwhile rankings-mate, jumping eight spots to Julia’s three. This time around, however, Julia (#52) was the higher jumper, moving up four spots to Rust’s two (#48) – too bad that information wasn’t available in time for JuliaCon. As for Rust, anecdotal evidence has been accumulating for some time that the language was piquing the interest of developers from a variety of spaces, and the quantitative evidence supporting this observation is ample. Both remain languages to keep an eye on.

  • CoffeeScript: This ranking marks the fourth drop in five quarters for CoffeeScript. From its high ranking of 17 in Q3 of 2013, in the four runs since, it has clocked in at 18, 18, 21 and now 22. It’s not impossible that the language finds a foothold and at least stabilizes its position, but its prospects for re-entering the Top 20 appear dim, both because of its own lack of momentum and the competition around it.

  • Dart / Visual Basic: Two quick notes on languages we’re asked about frequently. Visual Basic dropped from 17 into a three-way tie for 19th along with Clojure and Groovy. That’s fine company to be keeping, but the future of VB in the Top 20 is unclear. Dart, for its part, is a language that we field regular questions on, both because of its Google pedigree and its ambitions vis-à-vis JavaScript. To date, however, while Dart has shown steady growth, its growth has been minimal next to its Google-born sibling, Go. Dart moved up one spot this quarter, from #34 to #33.

  • Swift: As mentioned at the top, this month’s rankings had a minor issue. At the request of a few parties ahead of Apple’s WWDC, we went to take a look at the rankings to determine how Swift had performed given its meteoric rise from #68 to #22. Unfortunately, due to a change in page structure, our automated Stack Overflow scrape had failed. So we narrowed the scope, did a quick manual lookup of the Stack Overflow numbers for the Top 30 from the prior run, and calculated rankings just for that subset. For this partial run, we had a three-way tie for 18th place and then Lua and Swift tied for 21st, leaving Swift just outside the Top 20.

    For our official rankings, however, we obviously required a complete set of Stack Overflow data, so we collected a full run shortly after WWDC. The partial results from our June 1st run were of course discarded so as to compare all languages on an even footing. When we ran the full rankings, then, with the new, complete Stack Overflow set, we discovered something interesting: Swift had jumped from #21 to #18. Call it the WWDC effect, but Stack Overflow in particular surged, as is evident from the chart, and pushed Swift up just enough to displace the 19th place finishers. This means that its last three rankings, in order, are 68, 22 and 18. While we caution against reading too much into the actual numerical placement, Swift is certainly the first language to crack the Top 20 in a year. By comparison, one of the fastest moving non-Swift languages, Go, ranked #32 in the original 2010 dataset and finally cracked the Top 20 in January of this year. Even if you assign little importance to the actual ranking, then, there is no debate that Swift is growing faster than anything else we track. The forthcoming release of Swift as open source, along with the availability of builds for Linux, should theoretically provide even more momentum going forward.
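    The tie behavior described above – a three-way tie for 18th, Lua and Swift sharing 21st – follows from how placements are assigned when languages score identically. As a minimal sketch only: the averaging of a language’s rank across two sources is an assumption about the methodology rather than a description of it, and all of the data below is invented for illustration.

    ```python
    # Sketch: combine two per-source rankings and assign final
    # positions using "competition" ranking, where tied languages
    # share a position and the next position is skipped (1, 2, 2, 4).
    # Scores are illustrative, not real GitHub/Stack Overflow data.

    def combined_ranking(github_rank, stackoverflow_rank):
        # Average each language's rank across both sources (assumed step).
        combined = {
            lang: (github_rank[lang] + stackoverflow_rank[lang]) / 2
            for lang in github_rank
        }
        # Sort by combined score, best (lowest) first.
        ordered = sorted(combined.items(), key=lambda kv: kv[1])
        positions = {}
        for i, (lang, score) in enumerate(ordered):
            if i > 0 and score == ordered[i - 1][1]:
                # Tied with the previous language: share its position.
                positions[lang] = positions[ordered[i - 1][0]]
            else:
                positions[lang] = i + 1
        return positions

    ranks = combined_ranking(
        github_rank={"Swift": 20, "Lua": 22, "Clojure": 19, "Groovy": 21},
        stackoverflow_rank={"Swift": 22, "Lua": 20, "Clojure": 21, "Groovy": 19},
    )
    # Clojure and Groovy both average 20.0 and share one position;
    # Swift and Lua both average 21.0 and share the next occupied one.
    ```

    Under competition ranking, a partial run and a full run can shuffle positions simply because a tie forms or breaks, which is one reason the placements shifted between the June 1st subset and the post-WWDC run.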

The Net

For several quarters now, we’ve seen a pattern of little to no change at the top of the rankings, with the list becoming more volatile in direct proportion to a descent down the rankings. Go and Swift represent the first two potential challengers for the Top 10 we’ve seen in some time. It will be interesting to see if one of Go or Swift can punch their way into an otherwise static Top 10, and if so, on what timetable. At a minimum, Go would have to displace Objective C, Perl, Shell, R and Scala. Perl and Shell are everywhere but lack the volume of languages higher up the spectrum, while R and Scala are very popular languages but specifically purposed. The best bet for weakening Objective C, meanwhile, is accelerating Swift adoption. Swift, for its part, has to tackle the above list, as well as Matlab, Haskell and Go itself.

Between Go’s increasing popularity as a modern back end language and Swift’s bid for traction outside of the iOS landscape, the next few iterations of this list will be interesting to watch.

Update: Please note that in the plot above the position of Ada and AGS Script are incorrect. Their Stack Overflow rankings were over-represented and thus the plot values are high.

Categories: Programming Languages.

Apple, Google and Privacy


On the surface, Apple WWDC and Google I/O are exactly what they seem to be: showcases for the companies’ respective audiences. The ever longer keynotes are meticulously scripted and rehearsed to dramatically unveil increasingly bloated product portfolios and feature catalogs, whose purpose in turn is to create excitement if not outright lust. The perfect show for either firm is one where an audience member would happily trample the person sitting next to them to get their hands on the latest object of desire debuted on stage.

On another level, however, these shows are implicit statements of direction. As Mahatma Gandhi put it in a very different context, “action expresses priorities.” Google’s I/O show, as described, made its priorities clear: Google is still intent on organizing the world’s information. The win for Google is more data to refine its advertising model, which represents 89% of the company’s total revenue. The win for users is services such as Google Photos – at the cost, arguably, of their privacy. By leveraging its access to so many users and so many photos, Google’s machine learning algorithms are good enough now to tell that one picture of a teenager and another of an infant are, in fact, the same person.

The service is made possible, of course, by tremendously intelligent algorithms created by tremendously intelligent engineers. But its lifeblood, ultimately, is data. As Anand Rajaraman wrote years ago, more data usually beats better algorithms. Which implies that the single most important asset for Google is not in fact software, but access.

Which is why Benedict Evans’ assertion that what keeps Google up at night is reach rings true. In a world absent Android, and where mobile’s corrosive erosion of PC usage continues, Google would be uncomfortably dependent on Apple, at a minimum, for the reach it requires to function. From search to maps to photos, Apple would be in a position to control, on a granular level, what Google had access to.

Because this future was not hard to predict, however, Google not only built Android, but saw it blossom into a popular, volume platform – one that is compared to Windows on a volume basis, in fact. Reach must still be a concern for the company, for as popular as Android is, iOS is a massive platform in its own right and far more opaque to the search giant. But at least in Android it has guaranteed itself unfettered access to a large subset of the market’s available telemetry.

Which is essential, because Google’s vision for computing is clearly cloud centric, and more particularly driven by the aggregate value of millions of users living in the same cloud. The same way that Google can determine where traffic is congested by noticing that thousands of GPS-enabled Android devices are slowing down, it can determine that because a million fans of obscure band A have also liked obscure band B, there’s a reasonable chance you will too. “Did you mean?” is incredible when you own the majority of the world’s search traffic, not so much when trained on a single user.

It’s not that Google has given up innovating on the device itself – see Project Soli, for example – but it correctly understands that services is where the company is strongest. Assuming, of course, that its pipeline of user telemetry is never jeopardized.

But if that’s Google’s existential concern, what is Apple’s? According to Evans, it is the fear that developers leave. This is, to me, less plausible. First, there is the fact that Apple developers and users have a reputation for being fanatically loyal. That could always change, of course, but the fact that iOS generates more revenue on a per user basis than Android gives developers an additional financial incentive to stick around. My concern, if I were Apple, would not be the retention of loyal and (relatively) well compensated developers.

If we assume that Apple’s actions express its priorities, and by extension its fears, then, what does Apple fear? My bet would be services.

Many have noted that Apple has substantially cranked up its rhetoric around privacy lately. At WWDC, for example, Apple made a point of noting that its “Proactive” update to Siri – one intended to match some features of Google Now – operates off of on device data only. A week prior to that, Tim Cook was even more explicit, saying that Apple doesn’t want user data and that users shouldn’t be forced to trade access to information for a free service. To give you an idea of tone, his speech was characterized as “blistering.”

The obvious question is: why is Apple making such an issue of privacy now? The simplest and most charitable explanation, if one that is potentially naive, is that Apple as a corporation is genuinely concerned with user privacy. It is equally plausible, however, that Apple’s attitude here is simply a reflection of its revenue model, a manifestation of its issues trying to build services, or both.

Regarding the former, clearly Apple’s primary revenue engine is not user data, as with Google. Apple generates its unparalleled margins instead by selling a polished hardware and software combination. Cook is, in this sense, being entirely truthful: the Apple of today doesn’t want or need user data. Skeptics, however, will argue that Apple’s primary and overriding concern is its bottom line, and that sentiment has little to do with the current privacy-as-a-feature approach.

With the general elevation of user privacy to mainstream issue in the wake of the Snowden revelations, however, Apple certainly couldn’t be faulted for highlighting its lack of interest in user data and its competitor’s corresponding reliance on it. All’s fair in love and war, as they say. The important question for Apple, however, is whether it can keep these promises to its users.

As Om Malik writes in the New Yorker:

Apple was rather short on details in explaining how it will achieve its goal of using, learning, and building better services without collecting data on a global scale.

Certainly Apple will be able to infer certain information from the device itself. It seems clear, however, that single device inference will always be limited when measured against algorithms that can run against hundreds of thousands or millions of similar devices. More data beats better algorithms, remember.

Which means that Apple has explicitly and publicly committed itself to not leveraging one of the most important ingredients in building the kinds of proactive services its own product development track acknowledges are in demand. More problematically, services are not an area in which Apple has distinguished itself historically, so operating in this space with a self-imposed handicap is unlikely to be helpful.

Apple, in the form of Tim Cook, then, seems to be making a bet that users will value their privacy over new functionality and new capabilities. That they will voluntarily choose a less capable platform if it means they don’t have to sacrifice user information. This is certainly possible, but if we assume that the average developer is more attuned to privacy concerns than the average citizen, charts like these would concern me. They would be, in fact, what kept me up at night if I worked at Apple.

Categories: Mobile, Privacy, Services.

What is OpenStack?

In the wake of the OpenStack Summit, held in Vancouver this year, two major questions remained. First and perhaps most obviously, why in the holy hell aren’t there more technology conferences held in Vancouver? Sure, it’s marginally more difficult to get into than San Francisco by air – at least if your primary carrier is JetBlue, which doesn’t service Vancouver. But this is the view from the conference center, which is itself quite impressive.
Not that I have anything against California as a conference destination, mind. If Las Vegas is Mos Eisley, San Francisco is Shangri-La. But there is not a venue in San Francisco that can hold a candle to the Vancouver Conference Center and its absurd backdrop of mountains, water and lazily circling float planes.

Aside from the interesting but ultimately trivial question of venue, however, there was one big question following the Summit: what does the future hold for OpenStack?

The OpenStack project has always been a fascinating exercise in contradictions. On the one hand, it has attracted the kind of broad industry support and investment that other projects would kill for, and outlasted would-be competitors like CloudStack and Eucalyptus – both of which had distinct and theoretically marketable technical advantages over the offspring of NASA and Rackspace, notably. On the other hand, it is simultaneously and continually maligned by technologists from a variety of quarters. Much of this criticism is naturally the result of competitive messaging; in spite of its participation in the project, for example, VMware’s vCloud/vSphere teams predictably have less than kind things to say about OpenStack’s track record for success. But the criticism of OpenStack is hardly limited to competitors. Some of today’s largest contributors, in fact, initially passed on participating in the project after evaluating the first incarnation. And even significant OpenStack players today acknowledge that OpenStack has a lot of work to do moving forward.

None of which helps answer the question of where OpenStack is headed as a project, unfortunately. Technology is criticized all the time, frequently with merit, but the correlation of criticism with lack of adoption has not been strong, historically. Engineering quality matters, but not as much as the industry would like to believe. For better and, frequently, for worse.

The first question that needs to be unpacked when considering the fate of OpenStack is also one of the least interesting. Many critics of the OpenStack project build their arguments on fundamental questions of the economics of private versus public infrastructure. The arguments essentially boil down to an assertion that private infrastructure makes little sense for most companies – or any but the very largest internet companies, if you’re in the business of providing public infrastructure. These arguments tend to be built on macro-economic foundations: most private companies won’t be able to compete with the economies of scale realized by the public infrastructure providers, public cloud players will achieve outsized technical advantages through deeper vertical integration, and so on. I have made these arguments myself many times: see here for an example.

The simple fact is, however, that even if we assume these arguments to be entirely correct – counter-examples like Etsy notwithstanding – with the math unassailable and the downside risk of private investment perfectly understood, private infrastructure will be a fact of life for the foreseeable future. Whether or not it should be, based on a rational, dispassionate evaluation of the variables, is just not relevant.

Without descending into micro-economics in the form of rational choice theory, the market evidence available to date is that in spite of substantial and frankly unprecedented growth for cloud services, private infrastructure is a preferred strategy for many organizations that on paper would appear to be perfect fits for public alternatives. Whether these choices are rational or correct in an academic sense is, again, beside the point. The cloud is growing at an incredible rate, and the hits to x86 server manufacturers amongst others underscore that point, but there are still more businesses treating servers like pets than cattle. More importantly, there are legions of IT staffers that will be protecting what they believe is their livelihood – the private infrastructure – at all costs. Unless technical leadership is willing to wage total war on its own infrastructure, then, private infrastructure will continue to be a thing.

If we assume, therefore, that private infrastructure will remain a market – if one facing more competitive pressure than at any other time in its existence – the question is whether or not you want your private infrastructure to resemble public infrastructure in the feature sense. Do you want the provisioning of an instance to take a week or ninety seconds, in other words?

The answer to that question hopefully being self-evident, we’re left with two conclusions.

  • First, that there will be private infrastructure on some reasonable scale.
  • Second, that that infrastructure must be or become competitive with base level features of public clouds.

Which implies that there will be a market for private cloud software. This is the market that VMware has been steadily monetizing, and the market in which OpenStack is, with all due respect to Apache CloudStack, the last open source project left standing from a visibility standpoint.

All of which seems to be good news for the OpenStack project, and is, to some extent. There is competition on the way, certainly, from newer combinations such as Docker/Mesos and other private infrastructure fabric alternatives (in spite of complementary use cases today), but at least at present they require users to conceptualize their environment in ways the average OpenStack user may not be ready for. For today, anyway, the most likely t-shirt a customer wears to a meeting with an OpenStack vendor is one from VMworld. OpenStack has a window, in other words. How big that window is depends in part on one’s estimation of cloud growth and the downside impact to private competition, but it is difficult to build the case that the entire world will transition away from its private infrastructure to public alternatives in the short to medium term.

But keeping that window open depends on the answer to a fundamental question OpenStack has, and continues to, struggle with: what is OpenStack? The confusion around this subject manifests itself on several levels. Most obviously there is project composition: while OpenStack is typically referred to as if it were a single project, it is better described as a (growing) collection of independent sub-projects – some of which compete with one another. This in turn has several implications.

  • The independent nature of the projects has made installation and upgrades problematic, historically.
  • It is a burden for OpenStack vendors and marketers, who must educate would-be users of OpenStack on the nature of the project and the choices available to them.
  • And finally it’s meant that defining what – precisely – a canonical OpenStack instance consists of has been a hard enough question that answering it has been a project of its own.

Those issues, however, are solvable problems. Or more correctly, they should be, except for the other major manifestation of definitional issues. OpenStack has assembled one of the most impressive rosters of member companies in the industry. The good news is that this has ensured a growing, vibrant community and excellent project visibility. The bad news is that the number of member companies guarantees that members will inevitably have different, and often competing, visions for the project’s future. It’s not difficult to understand that what a carrier might require from OpenStack, for example, could look very different from what an implementer would like to see. Neither of which is likely to be what an operating system vendor expects. And so on.

OpenStack is hardly the first open source project to be the center of broad-based, cross-category investment and collaboration, of course. Linux has been successful on this front for years. But successful projects have typically had a clear sense of purpose and identity: to be an operating system kernel, for example. OpenStack’s raison d’être, by comparison, has been less clear. On a high level, it’s been a mechanism for building an IaaS-type private cloud, but there have been major disagreements between project participants on how to get there, what to build it from and more. In some circumstances, this diversity can be a strength: the ability of OpenStack to substitute different storage subsystems based either on issues with the default choices, customer preferences or both has been useful. Abruptly attempting to redefine the IaaS vision to suddenly extend into PaaS, less so.

This existential crisis notwithstanding, if it is true that private infrastructure investments will be sustained over time, and that said infrastructure should borrow features from today’s public infrastructure, it is necessary to conclude that OpenStack has a market opportunity. Certainly the large system incumbents believe this. On the heels of 2014’s acquisitions of Cloudscaling (EMC), eNovance (Red Hat) and Metacloud (Cisco) – not to mention related pickups like Eucalyptus (HP), Inktank (Red Hat) and Nebula (Oracle) – consolidation continues in 2015. A few short weeks after the OpenStack Summit, Cisco and IBM announced separate OpenStack acquisitions within hours of each other in Piston and Blue Box respectively.

For these strategies to pay off, however, the OpenStack project and its members need an answer to the fundamental question of what is OpenStack. Without that, the project will have a difficult time improving the developer experience and will leave itself vulnerable to more focused projects with a clear sense of identity and purpose. Many in the industry laugh at the idea that Mesos, for example, is an OpenStack competitor, but the Mesosphere tagline of “an operating system for your datacenter” would seem to put it squarely in contention for the private infrastructure opportunity OpenStack was built to address. OpenStack may today be easier for enterprises to adopt given its resemblance to traditional infrastructure versus Mesos’ more forward vision of knitting many resources into a single large instance, but is that advantage sustainable?

This is but one of many questions OpenStack should be considering as it attempts to discover what, in fact, it is. The answers need to come soon, however, because the window will not remain open indefinitely.

Disclosure: Multiple vendors involved in the OpenStack project, including Cisco, HP, IBM, Oracle and Red Hat are RedMonk customers. VMware, which is both a participant in the OpenStack community and a competitor to it, is a customer. Mesosphere is not a customer, nor is the OpenStack Foundation.

Categories: Cloud, Open Source.

The Software Paradox – Available Now

One of the things you do as an analyst is talk to companies. A lot of companies, potentially, depending on what you cover. In between talking to companies, you’re also doing research on them. What do the trajectories of their products look like relative to one another, as measured by a variety of proxy metrics? What story are their financials telling us? And so on. When you do this for years, the hundreds of conversations and the hours of research, unsurprisingly, begin to reveal patterns. The primary job of analysts, in fact, is to notice such things.

One such pattern I noticed five years ago or so was that companies were having a harder time making money from software. Not a hard time, precisely – Microsoft and others were effectively still printing money through the up-front sale of bits – but harder. Traditional software companies were facing stiff competition from SaaS-players and open source. Commercial open source organizations, for their part, were cycling through models from dual licensing to open core in an effort to try and find a reliable mechanism that would convince customers to pay for something they otherwise could obtain for free. The companies making some of the most interesting and technically sophisticated software, meanwhile, not to mention generating the most revenue and achieving the highest market caps, were not in the business of selling software. Many even made their innovative internal software assets available as open source, implying that they saw more commercial benefit to simply making the code available for free than they would have trying to monetize it.

The evidence suggested, then, that software’s intrinsic commercial value – at least in the traditional, perpetual license sense – was declining. This was the message I discussed with the audience in May of 2011 at the Open Source Business Conference.

The odd thing was that even as software was becoming more difficult to sell, it was simultaneously becoming more strategically important. Less than three months after I spoke to the OSBC audience about the observable transition in market valuations of software companies, Marc Andreessen’s op-ed “Why Software is Eating the World” made its appearance in the Wall Street Journal. Its reception was such that the title has since become a cliché. In the piece, Andreessen succinctly articulated a position that many now take for granted: that software was transforming entire industries, and that non-technology companies were either becoming or being replaced by technology companies at an accelerating rate.

This seeming paradox has occupied a lot of my time over the past five years. Looking at the financials of software companies, SaaS companies and technology companies that don’t sell technology. Talking to companies with very different perspectives on software, from commercial open source to proprietary software to IaaS. And of course speaking with and reading the writings of a lot of developers along the way, to get a sense for where they see things headed, and what they value. It’s all been an attempt to answer a simple question: how is it that software’s strategic importance could be on an opposite trajectory from its commercial value? How could software be more important but worth less?

One output from this work has been “The Software Paradox,” an O’Reilly title that launched last week and which a few of you have been kind enough to notice. It’s available as a free download from O’Reilly, courtesy Paypal, right now. If you prefer to get it via your Kindle, it should be available there in a few weeks. If you have other questions or want to see the Table of Contents, those are at

Either way, my hope is that for anyone in the software business, or those competing with it, the book will help challenge your understanding of the software industry and the economics behind it – particularly on a going forward basis. Even if you disagree with the premise or your business is currently an exception to it – and there are many, to be sure – it’s worth considering the idea, because many that assumed they were exceptions were caught scrambling.

Lest I forget, I need to thank Kate for putting up with the time it took for me to put this together, and for those of you I will not out here who took the time to not only read the book but provide feedback ahead of time – your feedback was invaluable.

Thanks, also, for those of you who have taken the time to read it already or will in future. Whatever your take on the argument, there is nothing more valuable than your time so I appreciate you spending it on this.

Categories: Books, Economics.

What Google I/O Was About

There were other technologies discussed at Google I/O this week, but more so than in years past this was an Android show. So much so that even non-Android projects were either a derivative of Android (Brillo), impacted Android applications (Chrome Custom Tabs) or run on top of the mobile operating system (Photos). Which tells us that Google, like the rest of the industry, believes that this is a mobile world, and we’re just living in it. Which is neither news nor interesting. What is worth noting is precisely how Google is attacking this market.

Back in November of 2012, one of the original iPad engineers made the argument that Google was getting better at design faster than Apple was improving its web service capabilities. Setting aside the often subjective question of design, this year’s I/O made it clear that Google is doubling down on services.

Consider that the theme for the forthcoming Android M release is user experience and usability. To that end, Google has made some useful changes to the stock Android interface, from the way privacy settings are handled to the volume sliders. But the real leaps in user experience are not going to come from the interface, but rather the services it connects to. If it seems counterintuitive that something like machine learning could positively impact user experience, you haven’t been paying attention.

The various digital assistants – Cortana, Siri and, well, Ok Google – suggest that the best mobile user interface may be the one that can be bypassed most effectively. To accomplish this trick, of course, you need two types of data. Bulk telemetry to train the algorithms for tasks like speech recognition, and contextual data to let a given user’s habits and preferences inform a platform’s interactions with them.

Google explicitly acknowledged this at the show when discussing its voice recognition error rate, down to 8% from 23% in less than two years. But it was also implicit in the most important service announced, Google Now On Tap. Announced in 2012, Now – which was reportedly a 20% time project originally – was Google’s first attempt to proactively leverage available contextual data on the Android platform. It knows where you live, and will alert you unsolicited about traffic backups on your commute home from the office. It will keep you up to date on the scores of your favorite teams, tell you when to leave for the airport to catch your flight and attempt to pick out links to content you’ll find interesting. All of which is useful if you visit the standalone Google Now launcher page.

What Google has done with Now On Tap is extend the reach of Now’s contextual data and understanding into any application on the phone. If you’re reading email and don’t understand a reference, Now will explain it to you. If you get a text from your spouse reminding you to do something, it will be there to create a reminder for you. And so on.

Now On Tap is important for two reasons. Most obviously, it’s potentially a big productivity win for users because instead of shifting context from a given app to look something up on Wikipedia or create a reminder, they’ll be able to do so without going anywhere. While Now On Tap is important for users, however, it’s a potential gold mine for Google. Previously, Google’s contextual awareness was limited to the applications it directly managed: it learned from your search history, it mined your Gmail Inbox and so on. With On Tap, Google stands to gain an important new channel into behaviors and actions previously opaque to it, specifically your interactions with third party applications.

None of this is even possible, of course, without deep investment in machine learning and proto-artificial intelligence. Google Photos, for example, ingested the archive from my phone, which includes dozens of photographed receipts for Expensify. There are no tags applied to these images, no categorization or labels – the word “receipts” is entirely absent from them, in fact. But when I query Google Photos, it’s intelligent enough to return pages and pages of nothing but receipts. This is, in computer science terms, dark magic. And it is this dark magic that Google has clearly identified as its differentiator in mobile moving forward.

Which is why one decision of Google’s in the mobile arena is so perplexing: its complete lack of a messaging strategy. The market has accorded messaging a high degree of importance. WhatsApp – conspicuously featured on stage at I/O – was valued at $19 billion. Slack more than doubled its valuation over a six month period to almost $3 billion. And iMessage is so much a part of the iPhone experience that discrimination against so-called “green bubble” users – i.e. anyone without an iPhone – is a thing.

What is Google’s response to this surge in messaging client importance? Crickets. The Android Hangouts client attempts to converge Google Talk-style IM and SMS in one place, but is extraordinarily awkward to use because it does not, in fact, converge them. Google Voice, meanwhile, was at one point a credible SMS and voice service, but appears to have been effectively orphaned. Asked about this at Google I/O, the official response was “we have nothing to announce at this time.” For a conference that was first about Android and second about improving the user experience for the platform, then, the lack of any news or roadmap about what Google’s messaging strategy will be was baffling. Maybe Google can satisfy its back-end telemetry needs by introspecting a variety of clients from Tencent to WhatsApp via Google Now On Tap, but the users are left without a service comparable to what Apple, Facebook or a variety of third party platforms offer.

The odd lack of messaging news notwithstanding, however, I/O has been an impressive, if subtle, look at Google’s ability to put information to work at scale to deliver services sufficiently advanced so as to be indistinguishable from magic. Google may or may not ever match Apple’s design prowess, and the show may have lacked the surprise “One More Thing,” but the company is second to none when building the kinds of artificially intelligent services users will increasingly come to rely on. If it accomplished nothing else, then, I/O was a useful reminder of that fact.

Categories: Conferences & Shows.

Three Questions from the Cloud Foundry Summit

Cloud Foundry LEGO!

Six months ago there was no Cloud Foundry Foundation. This week, its biggest problem at the user conference was the fire marshal. At 1,500 reported attendees, the event was a multiple of the size of the project’s inaugural Platform event – to the point that it’s hard to see the Summit going back to the Santa Clara conference center. Enthusiasm will make people patient with standing-room-only sessions and seating along the walls two deep, but there are limits.

For an event reportedly put together in a little over two months, execution was solid. From HP’s magician/carnival barker to the Restoration Hardware furniture strewn liberally across the partner pavilion – excuse me, “Foundry” – walking the show floor had a different feel to it. Sessions were a reasonable mix of customer case studies and technical how-tos, which was fortunate because the attendees were an unusual mix of corporate and very pointedly non-corporate.

The conference comes at an interesting point in the Cloud Foundry project’s lifecycle. The first time we at RedMonk heard about it, as best I can recall, was a conversation with VMware about this project they’d written in Ruby a week or two before its release in 2011 – two years after the acquisition of Cloud Foundry. There are two things I remember about that call. First, that I was staying at the Intercontinental Boston at the time. Second, that I spent most of the briefing trying to imagine what kind of internal battles had been fought and won for a company like VMware to release that project as open source software.

By the end of that year, the project had enough traction to validate one of my 2011 predictions. Still, Cloud Foundry, like all would-be PaaS platforms, faced a substantial headwind. Disappointment in PaaS ran deep, back all the way to the original anemic adoption of the first generation of Google’s App Engine and Salesforce’s Force.com – released in April of 2008 and September of 2007, respectively. All anyone wanted to buy, it was argued, was infrastructure. Platform-as-a-Service was one too many compromises, for developers and their employers both. AWS surged while PaaS offerings stagnated.

Or so it appeared. Heroku, founded in June 2007, required less compromise from developers. Built off of standard and externally available pieces such as Git, Ruby and Rails, Heroku was rewarded with growing developer adoption. Which was why Salesforce paid $212M to bring the company into the fold. And presumably why, when Cloud Foundry eventually emerged, it was open source. Given that one of the impediments to the adoption of Force.com and GAE in their initial incarnations was the prospect of being locked in to proprietary technologies, the logical alternative was a platform that was itself open source.

Fast forward to 2015. After some stops and starts, Cloud Foundry is now managed by an external foundation, a standard mechanism allowing erstwhile competitors to collaborate on a common core. The project has one foot in the old world with participation from traditional enterprise vendors such as EMC, HP and IBM and another in the future with its front and center “Cloud Native” messaging. How it manages to bridge that divide will be, to some degree, the determining factor in the project’s success. Because as Allstate’s Andrew Zitney discussed on Monday, changing the way enterprises build software is as hard as it is necessary. This is, in fact, one of three important questions facing the Cloud Foundry project in the wake of the Summit.

Is the Cloud Native label useful or a liability?

There are several advantages to the Cloud Native terminology. First, it’s novel and thus unencumbered by the baggage of prior expectations. Unlike terms such as “agile,” which even one of its originators acknowledges has become “sloganized; meaningless at best, jingoist at worst,” Cloud Native gets to start fresh. Second, it’s aspirational. As evidenced by the growth of various cloud platforms, growing numbers of enterprises are hyper-aware that the cloud is going to play a strategic role moving forward, and Cloud Native is a means of seizing the marketing high ground for businesses looking to get out in front of that transition. Third, it’s simple in concept. Microservices, for example, requires explanation, where Cloud Native is comparatively self-descriptive. By using Cloud Native, Cloud Foundry can postpone more complicated, and potentially fraught, conversations about what, precisely, the term means. Lastly, the term itself explicitly disavows potentially fatal application compromises. The obvious implication of “native,” of course, is that there are non-native cloud applications – which is another way of saying applications not designed for the cloud. While it might seem counterintuitive, acknowledging a project’s limitations up front is a recommended practice, as customers will inevitably discover them anyway. Saving them that disappointment and frustration has value.

All of that being said, much depends on timing. Being exclusionary is an appropriate approach if a sufficient proportion of the market is ready. If it’s too early, Cloud Native could tilt towards liability instead of asset, as substantial portions of the slower moving market select themselves out of consideration by determining – correctly or not – that while they’re ready to embrace the cloud tactically, going native is too big a step. Even if the timing is perfect, in fact, conservative businesses are likely to be cautious about Cloud Native.

Cloud Native, then, is a term with upside, but not without costs.

How will the various Cloud Foundry players compete with one another?

The standard answer to questions of this type, whether for Cloud Foundry or other large collaborative projects, is that the players will collaborate on the core and compete above it. Or, as IBM’s Angel Diaz put it to Barb Darrow, “We will cooperate on interoperability and compete on execution.” From a high level, this is a simple, digestible approach. On the ground, temptations can be more difficult to resist. The history of the software industry has taught us, repeatedly, that profit is a function of switching costs. Hence the incentive for ecosystem players to be interoperable enough to sell a customer, yet proprietary enough to lock them in.

Which is why the role of a foundation is critical. With individual project participants motivated by their own self-interest, it is the foundation’s responsibility to ensure that those interests do not subvert the purpose, and thus the value, of the project itself. The Cloud Foundry Foundation’s primary responsibility should ultimately be to users, which means ensuring maximum interoperation between competing instances of the project. All of which explains why the Foundation will be interesting to watch.

How will the Cloud Foundry ecosystem compete with orthogonal projects such as Docker, Kubernetes, Mesos, OpenStack and so on?

On the one hand, Cloud Foundry and projects like Docker, Kubernetes, Mesos and OpenStack are very different technologies with very different ambitions and scope. Comparing them directly with one another, therefore, would be foolish. On the other hand, many of these projects overlap at points, and customers are faced with an increasingly complicated set of choices about what their infrastructure will look like moving forward.

While there have been obvious periods of transition, historically we’ve had generally accepted patterns of hardware and software deployment, whether the underlying platform was mainframe, minicomputer, client/server, or, more recently, commodity-driven scale-out. Increasingly, however, customers will be compelled to make difficult choices with profound long term ramifications for their future infrastructure. Public or private infrastructure? What is their approach to managing hardware, virtual machines and containers? What is the role of containers, and where and how do they overlap with PaaS, if at all? Does Cloud Foundry obviate the need for all of these projects? And then there is the classic, rhetorical question of one-stop-shopping versus best-of-breed.

While Cloud Foundry may not be directly competing against any of the above, then, and certainly not on an apples-to-apples basis, every project in the infrastructure space is on some level fighting every other project for mindshare and visibility. The inevitable outcome, much as we saw in the NoSQL space with customers struggling to understand the difference between key-value stores, graph databases and MapReduce engines, will be customer confusion. One advantage Cloud Foundry possesses here is available service implementations. Instead of trying to make sense of the various infrastructure software options available to them, and determining from there a path forward, enterprises can essentially punt by embracing Cloud Foundry-as-a-Service.

Still, the premium in the infrastructure market is going to be on vision – not just a project’s own, but how it competes with, or complements, other existing pieces of infrastructure. Because the problem that a given project solves is always just one of many for a customer.

Categories: Cloud, Configuration Management, Containers, Platform-as-a-Service.