RedMonk


Ubuntu and ZFS: Possibly Illegal, Definitely Exciting

The project originally known as the Zettabyte File System was born the same year that Windows XP began shipping. Conceived and originally written by Bill Moore, Jeff Bonwick and Matthew Ahrens among others, it was a true next generation project – designed for needs that could not be imagined at the time. It was a filesystem built for the future.

Fifteen years later, it’s the future. Though it’s a teenager now, ZFS’s features remain attractive enough that Canonical – the company behind the Ubuntu distribution – wants to ship ZFS as a default. Which wouldn’t seem terribly controversial as it’s an open source project, except for the issue of its licensing.

Questions about open source licensing, once common, have thankfully subsided in recent years as projects have tended to coalesce around standard, understood models – project (e.g. GPL), file (e.g. MPL) or permissive (e.g. Apache). The steady rise in share of the latter category has further throttled licensing controversy, as permissive licenses impose few if any restrictions on the consumption of open source, so potential complications are minimized.

ZFS, and the original OpenSolaris codebase it was included with, were not permissively licensed, however. When Sun made its Solaris codebase available for the first time in 2005, it was offered under the CDDL (Common Development and Distribution License), an MPL (Mozilla Public License) derivative previously written by Sun and later approved by the OSI. Why this license was selected for Solaris remains a matter of some debate, but one of the plausible explanations centered around questions of compatibility with the GPL – or lack thereof.

At the time of its release, and indeed still to this day as examples like ZFS suggest, Solaris was technically differentiated from the far more popular Linux, offering features that were unavailable on operating system alternatives. For this reason, the theory went, Sun chose the CDDL at least in part to avoid its operating system being strip-mined, with its best features poached and ported to Linux specifically.

Whether this was actually the intent or whether the license was selected entirely on its merits, the perceived incompatibility between the licenses (verbal permission from Sun’s CEO notwithstanding) – along with healthy doses of antagonism and NIH between the communities – kept Solaris’ most distinctive features out of Linux codebases. There were experimental ports in the early days, and these have improved over the years and been made available as on-demand packages, but no major Linux distribution has ever shipped CDDL-licensed features by default.

That may change soon, however. In February, Canonical announced its intent to include ZFS in its next Long Term Support version, 16.04. This prompted a wide range of reactions.

Many Linux users, who have eyed ZFS’ distinctive featureset with envy, were excited by the prospect of having official, theoretically legitimate access to the technology in a mainstream distribution. Even some of the original Solaris authors were enthusiastic about the move. Observers with an interest in licensing issues, however, were left with questions, principally: aren’t these two licenses incompatible? That had, after all, been the prevailing assumption for over a decade.

The answer is, perhaps unsurprisingly, not clear. Canonical, for its part, was unequivocal, saying:

We at Canonical have conducted a legal review, including discussion with the industry’s leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS.

And in doing so, we have concluded that we are acting within the rights granted and in compliance with their terms of both of those licenses. Others have independently achieved the same conclusion.

The Software Freedom Conservancy, for its part, was equally straightforward:

We are sympathetic to Canonical’s frustration in this desire to easily support more features for their users. However, as set out below, we have concluded that their distribution of zfs.ko violates the GPL.

If those contradictory opinions weren’t confusing enough, the Software Freedom Law Center’s position is dependent on a specific interpretation of the intent of the GPL:

Canonical, in its Ubuntu distribution, has chosen to provide kernel and module binaries bundling ZFS with the kernel, while providing the source tree in full, with the relevant ZFS filesystem code in files licensed as required by CDDL.

If there exists a consensus among the licensing copyright holders to prefer the literal meaning to the equity of the license, the copyright holders can, at their discretion, object to the distribution of such combinations

The one thing that seems certain here, then, is that very little is certain about Canonical’s decision to ship ZFS by default.

The evidence suggests that Canonical either believes its legal position is defensible, that none of the actors would be interested in or willing to pursue litigation on the matter, or both. As stated elsewhere, this is, if nothing else, a testament to the quality of the original ZFS engineering. That Canonical evidently perceives the benefits of this fifteen-year-old technology to outweigh its potential overhead is remarkable.

But if there are questions for Canonical, there are for their users as well. Not about the technology, for the most part: it has withstood impressive amounts of technical scrutiny, and remains in demand. But as much as it would be nice for questions of its licensing to give way before its attractive features, it will be surprising if conservative enterprises consider Ubuntu ZFS a viable option.

If ZFS were a technology less fundamental than a filesystem, reactions might be less binary. As valuable as DTrace is, for example, it is optional for a system in a way that a filesystem is not. With technology like filesystems or databases, however, enterprises will build the risk of having to migrate into their estimates of support costs, making it problematic economically. Even if we assume the legal risks to end users of the ZFS version distributed with Ubuntu to be negligible, concerns about support will persist.

According to the SFLC, for example, the remedy for an objection from “licensing copyright holders” would be for distributors to “cease distributing such combinations.” End users could certainly roll their own versions of the distribution including ZFS, and Canonical would not be under legal restriction from supporting the software, but it’s difficult to imagine conservative buyers being willing to invest long term in a platform that their support vendor may not legally distribute. Oracle could, as has been pointed out, remove the uncertainty surrounding ZFS by relicensing the asset, but the chances of this occurring are near zero.

The uncertainty around the legality of shipping ZFS notwithstanding, this announcement is likely to be a net win for both Canonical and Ubuntu. If we assume that the SFLC’s analysis is correct, the company’s economic downside is relatively limited as long as it complies promptly with objections from copyright holders. Even in such a scenario, meanwhile, developers are reminded at least that ZFS is an available option for the distribution, regardless of whether the distribution’s sponsor is able to provide it directly. It’s also worth noting that the majority of Ubuntu in use today is commercially unsupported, and therefore unlikely to be particularly concerned with questions of commercial support. If you browse various developer threads on the ZFS announcement, in fact, you’ll find notable developers from high profile web properties who are already using Ubuntu and ZFS in production.

Providing developers with interesting and innovative tools – which most certainly describes ZFS – is in general an approach we recommend. While this announcement is not without its share of controversy, then, and may not be significant ultimately in the commercial sense, it’s exciting news for a lot of developers. As one developer put it in a Slack message to me, “i’d really like native zfs.”

One way or another, they’ll be getting it soon.

Categories: Open Source, Operating Systems.

What’s in Store for 2016: A Few Predictions


Every so often, it’s worth taking a step back to survey the wider technical landscape. As analysts, we spend the majority of our time a few levels up from practitioners in an attempt to gain a certain level of perspective, but it’s still worth zooming out even further. To look not just at the current technical landscape, but to extrapolate from it to imagine what the present means for the future.

For six years running, then, I’ve conducted this exercise at the start of the new year. Or at least plausibly close to it. From the initial run in 2010, here is how my predictions have scored annually:

  • 2010: 67%
  • 2011: 82%
  • 2012: 70%
  • 2013: 55%
  • 2014: 50%
  • 2015: 50%

You may note the steep downward trajectory in the success rate. While this may rightly be considered a reflection of my abilities as a forecaster, it is worth noting that the aggressiveness of the predictions was deliberately increased beginning in 2013. This has led to possibly more interesting but provably less reliable predictions since; you may factor that adjustment in as you will.
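For the quantitatively inclined, that shift is easy to see in the numbers themselves. A quick sketch of the arithmetic, with the scores copied from the list above (the code itself is merely illustrative, not part of the original exercise):

```python
# Average the prediction scores overall, then split them at the 2013
# shift toward more aggressive predictions.
scores = {2010: 67, 2011: 82, 2012: 70, 2013: 55, 2014: 50, 2015: 50}

overall = sum(scores.values()) / len(scores)
before = sum(v for y, v in scores.items() if y < 2013) / 3   # 2010-2012
after = sum(v for y, v in scores.items() if y >= 2013) / 3   # 2013-2015
print(round(overall, 1), round(before, 1), round(after, 1))  # 62.3 73.0 51.7
```

In other words, roughly a twenty point drop in accuracy once the predictions got bolder.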

Before we continue, a brief introduction to how these predictions are formed, and the weight you may assign to them. The forecast here is based, variously, on both quantitative and qualitative assessments of products, projects and markets, drawing on everything from hard data to offhand conversations. For the sake of handicapping, the predictions are delivered in groups by probability, beginning with the most likely and concluding with the most volatile.

With that explanation out of the way, the predictions for the year ahead:

Safe

  • Bots are the New UI:
    There are two dangers to delaying the release of your annual predictions until well into the new year. First, they can be proven correct before you publish, meaning that your prediction is no longer, technically speaking, a prediction. Second, someone else can make a similar prediction, which can – depending on the novelty of what you forecast – steal your thunder.

    Both of these have unfortunately transpired over the past month. First, Google’s Cloud Functions and IBM’s OpenWhisk obviated the need for my doubling-down on a bullish forecast for serverless architectures. And just a few weeks earlier, Tomasz Tunguz – who is always worth reading, incidentally – unknowingly stole major elements from my prediction regarding bots in a piece entitled The New UI For SaaS – The Question.

    One of the most surprising conversations I have today is with enterprise vendors who dismiss Slack as a messaging vendor, or with engineers who view it as little more than an IRC-implementation for muggles. Both miss the point, in my view. First, because they miss the platform implications, which I’ll get to, but just as importantly because they obscure the reality that bots are the new UI.

    Consider the universal problem of a user interface. If you’re implementing a GUI, you face increasingly difficult decisions about how to shoehorn a continually expanding featureset into the limited real estate of a front end. Making matters worse, aesthetic expectations have been fundamentally reset by the incursion of consumer-oriented applications. And while you’re trying to deliver a clean elegant user interface with too many features, the reality of mobile is that you’ll probably need to do so with even more limited screen real estate.

    Those whose users’ primary or sole interface is the command line have it easier to some degree, but their lives are also complicated by rampant fragmentation. Gone are the days when you could expect developers to memorize every option or flag on every command, because there are simply too many commands. Too many developers today are reduced to Google or Stack Overflow as an interface because they’re not using a given tool quite enough to have completely internalized its command structure and options.

    Attempts to solve these user interface problems to date have essentially been delaying actions, because the physics of the problem are difficult to address. Complexity can only be simplified in so many ways so many times before it’s complex again.

    Enter the bot, which is essentially a CLI with some artificial intelligence baked in. Deployed at present at relatively narrow, discrete functional areas, their ultimate promise – as Tunguz discusses – is much broader. But for now text-based AIs such as X.ai’s Amy or the Slack-based Howdy or Meekan point the way towards an entirely new brand of user interface. One in which there is no user interface, at least as we are typically acquainted with that term. If I want to schedule a meeting with someone via Amy, I don’t log in to a new UI and look at schedules, I use the same user interface I always have: email. Amy the artificial assistant parses the language, has contextual awareness of my calendar and then coordinates with the third party much as a human would. Or if I’m booking with a colleague internally, I no longer have to open Google Calendar: I ask Meekan to pick a time and a date and turn it loose.

    And bots are not just for scheduling meetings – or ordering cars from Uber. Within the coming year we’re going to see tools extensively ported to bots. Why can’t I start and stop machines via a bot as I would the CLI? Or ask questions about my operations performance? Or, elsewhere, my run rate or cashflow? Some of our clients are working on things like this as we speak, and Slack’s December Platform launch, including the botkit Howdy, will speed this along.

    We’ve all had the experience at one point or another – particularly if you’ve ever used Google Analytics – of paging endlessly through a user interface for something we know an application can do, but can’t figure out how. What if you could skip that, and simply ask a bot in plain English (or the language of your choice) to do what you want?

    Folks who have been using things like Hubot for years already know the answer to this. As platforms like Slack expand, more of us will begin to realize the advantages to this in 2016, as bots become the New UI.
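To make the bot-as-CLI idea concrete, here is a minimal sketch of a chat command dispatcher. Every name in it (the start/stop handlers, the command registry) is hypothetical; this assumes nothing about Slack’s, Howdy’s or Hubot’s actual APIs, it simply illustrates treating a chat message the way a shell treats a command line:

```python
# A sketch of a bot dispatching chat messages like CLI commands.
# All handler names here are invented for illustration.
import shlex

HANDLERS = {}

def command(name):
    """Register a function as the handler for a chat command."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@command("start")
def start_machine(target):
    return f"starting {target}"

@command("stop")
def stop_machine(target):
    return f"stopping {target}"

def handle_message(text):
    """Parse a chat message as '<command> <args...>' and dispatch it."""
    parts = shlex.split(text)
    if not parts or parts[0] not in HANDLERS:
        return "sorry, I don't know that command"
    return HANDLERS[parts[0]](*parts[1:])

print(handle_message("start web-01"))  # starting web-01
print(handle_message("weather?"))      # sorry, I don't know that command
```

The real promise, of course, is replacing the literal `<command> <args>` parsing with natural language understanding, so that "spin up another web box" lands on the same handler.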

  • Slack is (One of) The New Platform(s):
    Based in large part on the absurd success, both in terms of market share and revenue, of Microsoft’s twin platforms, Office and Windows, software businesses ever since have attempted to become platforms. Most of these efforts historically have ended in failure. Becoming a platform, as it turns out, is both expensive and entirely dependent on something that is intensely difficult to predict: volume traction. Even for well-capitalized would-be players with platform ambitions, the dynamics that lead to the anointment as a platform are difficult to navigate.

    Few, particularly those who still regard Slack as a jumped up instant messaging client, would have anticipated that Slack would become such a platform, but it’s well on its way. We have had persistent group chat clients and capabilities for decades, at this point, and for all of their immense user traction, even the most popular IM networks never made the jump to platform. Domestically, at least: China’s networks are materially distinct here.

    Most obviously, Slack is growing its userbase: it essentially quadrupled over the past calendar year from around 500,000 users to over 2 million. But the important jump was in its app catalog. From 150 apps in the catalog at launch, Slack has almost doubled that number to 280 at the moment. And we’re seeing significant interest and traction from third parties who’d like to add themselves to that number, because Slack is checking an increasing number of the boxes first class platforms have to check to be taken seriously.

    When we look back on 2016, then, it will be regarded as the year that Slack became a platform.

  • Newsletters are the New Blogs:

    Whether you attribute the decline in RSS and its client applications to the rise of social media like Facebook and Twitter is, to some degree, academic. Whether they were the cause or simply the beneficiary, the fact is that a great many whose consumption of content used to depend on RSS readers now look to the social networks to fill a similar need.

    Similar is not same, however. As Facebook’s algorithmic feed and Twitter’s much excoriated dalliances with something similar have demonstrated, one of the difficulties with social networks is that they’re difficult to scale. With an RSS reader, you don’t miss a post from an author you’ve subscribed to. With Facebook or Twitter, the more friends you have, the more difficult it is not to miss one.

    Enter newsletters. Well, technically that’s not accurate, as they’ve been around since well before RSS readers or social networks. But since the demise of the former and the rise of the latter, newsletters are increasingly becoming the de facto alternative, as Paul Ford has suggested. If you want to be sure readers don’t miss your content, and readers are similarly interested, newsletters have been pressed into service as the solution.

    In 2016, we’ll see this trend go mainstream, and authoring tools designed for actual authors rather than, say, marketers, will emerge.

    All of which means I probably need to start a newsletter already.

Likely

  • Open Source is the Future, and It Will Become Evenly Distributed:
    The rise of open source at this point has been well chronicled. While the most efficient mechanisms for commercializing open source software remain hotly debated, the sustainability of open source itself is no longer in question. In an increasing number of scenarios, open source is viewed even by staunchly capitalistic businesses as a logical strategic choice.

    Even so, we haven’t yet hit the tipping point where it’s the default software development model. There are still many more scenarios in which open source is an exception, a mere science experiment, rather than the most logical choice for a given piece of software.

    There were signs in 2015 that this was changing, and this will accelerate in 2016. Google, for example, has typically guarded its infrastructure software closely. It published the details that made building Hadoop possible, but kept its actual implementation closed. With Microsoft’s CNTK or Google’s TensorFlow and arguably Kubernetes (it’s not Borg, but a reimplementation of it), this pattern has begun to shift. Apple’s decision to make its Swift runtime open source is another example of an organization which has historically been protective of its software assets recognizing that the benefits to open source outweigh the costs of proprietary protections. Even in industry, enterprises are beginning to see the advantages – whether in developer marketing/recruitment/retention, cost amortization, etc – and make strides towards either releasing their own internal software as open source (see Capital One’s Hygieia) or easing restrictions on contributing back to existing projects.

    Open source will become evenly distributed, then, in 2016.

  • SaaS is the New Proprietary…But Will Lead to More Open Source:
    As I have argued previously, SaaS is on several levels a clear and present danger to open source software. First, questions about access to source are deemphasized in off-premise implementations in ways they are not in on-premise alternatives. Second, many SaaS offerings have incorporated the embrace, extend and extinguish model by building attractive proprietary extensions onto open source foundations. Lastly, just as open source enjoyed massive advantages in convenience and accessibility over proprietary alternatives, so too is SaaS more convenient than OSS.

    Where many OSS advocates still consider traditional proprietary software the threat, then, they would do better to shift their attention to SaaS alternatives.

    All of that being said, SaaS is counterintuitively a potential benefactor to open source in important ways. As described above, important SaaS vendors are both investing heavily in software development to tackle very difficult, unique problems and realizing that the benefits to making some or all of this software available as open source outweigh the costs.

    The Platform-as-a-Service market is perhaps the industry’s best evidence of this. The initial implementations in early 2007 – Force.com and Google App Engine – massively lagged IaaS alternatives in adoption not because of technical limitations, but because of their proprietary nature. The technical promise of PaaS – focus on the application, not the infrastructure it runs on – was intriguing from a developer standpoint. But no one wanted to write applications that would never run anywhere else.

    Fast forward six years and the PaaS market is a promising, growing category. Why? Because customer concerns about lock-in have been mitigated via the use of open source software. As ever, developers and the enterprises they work for are more likely to walk through a door they know they can walk back out of.

    AWS’ Lambda is a more recent indication of this phenomenon at work. Technically innovative, it underperformed from a visibility and adoption perspective largely because of concerns around lock-in. These may or may not be lessened by the release of similar serverless services from Google and IBM, but if history is any guide, the simplest path towards dramatically accelerating Lambda adoption would be for AWS to release an open source implementation of the product.

    Whether the famously private Amazon will take such a step is unknown, but on an industry-wide basis the growth of SaaS will lead to the release of more open source software in 2016.

  • Winter is Coming:
    We may be less than a week from the end of meteorological winter, but the metaphorical kind is still looming. The obvious signs of a market correction are there: an increasingly challenging funding environment, systemic writedowns of existing investments, a renewed skepticism of the sustainability and funding models of startups, and existential crises for multiple large incumbents. The less obvious signs are the private conversations, subtle pattern shifts in job hunting trends and so on.

    How deep or prolonged the next dip will be is difficult to predict at this time, but what seems inevitable is that it will start this year.

Possible

  • Google Releases an iMessage Competitor at I/O:
    Google’s strategy with respect to messaging has been perplexing of late. While products like HipChat are correctly regarded as the primary competition for Slack, it is nevertheless true that a good portion of the latter’s traction has come at the expense of Google Talk – a product which has seriously languished in recent years. Towards the SMS end of the messaging spectrum, meanwhile, Google’s general response to the rapid growth and popularity of Apple’s iMessage has been apathy and indifference. Which makes sense if the only business you care about is search. If, however, the enterprise collaboration and mobile markets are of some importance – as Google’s actions on paper suggest they are – this inaction is baffling.

    More to the point, for every quarter they delay a response, they’re that much further behind from an adoption standpoint. Even if they were able to roll out a viable iMessage competitor for Android tomorrow, for example, they’d be facing a protracted battle to win users back from competing services.

    Perhaps Google has come to regard the messaging market as akin to the old IM networks; superficially useful, but limited in their long term value. Or maybe they’re pessimistic about the opportunity to compete with multiple closed, defensible networks and are planning the strategic equivalent of an island hop. The difficulty with either strategy is that if the first prediction above is true, and bots are the new UI, Google’s lack of a visible, well adopted chat vector to their users is a serious problem.

    Which is why I expect Google to attempt to remedy this in 2016, the logical release for which would be at the I/O conference. Google is undoubtedly behind, but not insurmountably so. Yet. Slack is still in low single digit millions from an adoption standpoint, and Apple has artificially created vulnerabilities with its single platform approach – an iMessage that worked seamlessly across platforms and, importantly, had legitimate (i.e. not Mac’s Messages) desktop clients for a variety of desktop operating systems would generate interest, at least.

  • 2016 Isn’t the Year of VR, the Rift/Vive/etc Notwithstanding:
    A little while back I had the opportunity to demo the latest build of Oculus’ VR software and hardware. It was legitimately mindblowing. I haven’t had too many experiences like it in my time in this industry. The last portion of the demo placed you on a city street in the midst of an alien attack. Action was slowed dramatically, so you could turn your head and watch a bullet float by, or watch the car next to you detonate and lift into the air as if it were underwater, but still on fire. Insane.

    But 2016 isn’t going to be the year of VR.

    Most importantly, the equipment is too expensive. As Wired says, the problem isn’t necessarily with the cost of the unit itself, in spite of the $600 price tag (or $800, if you want an HTC Vive): it’s the total cost of ownership, to borrow the enterprise term. First, the $600 doesn’t include higher end controllers. But more importantly, it doesn’t factor in the cost of the associated PC hardware – specifically the graphics card.

    True, you can bring that cost down by going with a desktop, but how many people will buy a desktop over a laptop these days? Even if cost is addressed, it will take time to populate the kind of software catalogs buyers will need to see to justify the expense and the equipment.

    Based on the few times I’ve used VR, I’m bullish on the technology long term. But my expectations for it in 2016 are modest.

Exciting

  • “Boot” projects Will Become a Thing:
    Better than ten years removed from the initial release of Rails, it seems strange to be writing about the “new” emphasis on projects intended to simplify the bootstrapping process. But in spite of the more recent successes of projects like Bootstrap and Spring Boot, such projects are not the first priority for most technical communities. Perhaps because of the tendency to admire elegant solutions to problems of great difficulty, frameworks and on ramps to new community entrants tend to be an afterthought. In spite of this, they frequently prove to be immensely popular, because in any given community the people who know little about it far outnumber the experts. Even in situations, then, when the boot-oriented project and its opinions are outgrown, boot-style projects can have immense value simply because they widen the funnel of incoming developers. Which is, as we tell our customers at RedMonk every day, one of the single most important actions a project can take.

    Based in part on the recent successes mentioned, as well as a growing awareness of this type of project’s value, we’re going to see boot-style projects become a focus in the year ahead, because every project should have one.

  • Open Source Hardware Becomes a Measurable Player:
    We’ve known for some time that the largest internet providers have been heavily vertically integrated, more so by the year. From Google’s custom servers to Facebook’s custom networking gear to Amazon’s custom racks and custom chips built with Intel, the web pioneers have little reliance today on external integrated products. For all that traditional incumbents have attempted to portray themselves as arms suppliers to the world’s biggest and fastest web properties, the reality is that they at best have been relegated to niche suppliers and at worst have been cut out of the supply chain entirely. Initiatives like Facebook’s Open Compute project have only helped accelerate this trend, by democratizing access to hard-won insights in high-scale compute, network, storage problems.

    Vendors have sprung up around these and other efforts – Cumulus Networks, for example – and this will inevitably continue, as the same forces that sought to excise the margin on first software and then compute continue towards networking and storage. Call it the fulfillment of the disruption that began as far back as 2014, but in the year ahead we’ll see hard impacts from open source hardware on large existing incumbents.

Spectacular

  • AI Will Be Turned Loose on Crime:
    For anyone who’s listened to the first season of Serial, one of the things that hits you is just how much data there is to process. From verbal statements to timelines to maps to cell tower records to email threads, it’s an immense amount of information to keep track of, even for a single-victim crime. With each offense, the complexity goes up commensurately.

    Complexity and synthesis of multiple forms of disparate information – particularly tedious, numerical information – is not something that people in general do particularly well. Computers, on the other hand, are exceptional at it. With the accompanying improvements in natural language processing, additionally, it’s possible to envision Philip K Dick-like AI-detectives that can process thousands of streams of information quickly and dispassionately, rendering judgements on outcomes.

    We’re a little ways off from Blade Runner, of course – Moravec’s Paradox still holds, even if yesterday’s Atlas videos are terrifying. But purely from an analysis perspective, we’re clearly at the point where an AI could assist in at least some investigatory elements.

    What would the interest be from the AI side? Clearly not financial, because even if the system worked perfectly it would likely take a decade or more to address law enforcement and legal concerns. No, the primary benefit would be marketing value. IBM didn’t have Watson play Jeopardy for the prize money; the benefit was instead marketing, introducing the first computer to play and beat humans at a spoken language game.

    With that in mind, it’s difficult to imagine a higher profile potential marketing opportunity than true crime. Consider the transcendent success of Serial and the more recent popularity of Netflix’s Making a Murderer. What if an AI project could be a primary factor behind the discovery of a miscarriage of justice?

    It would be very interesting indeed, which is why we might see it in 2016.

Sad

  • Silicon Valley Continues to Follow in Wall Street’s Footsteps:
    My Dad worked on Wall Street for forty years, the entirety of his career. When I was growing up, this fact could be cited with something like pride. If nothing else, Wall Street was a fiercely competitive market that attracted intelligent participants. Whatever else might be said about this flag bearer for capitalism, it meant you knew how to work hard and compete.

    Today, Wall Street is a ruined term, having become synonymous with a spectacular tone-deafness, outrageous excesses of compensation and uncontrolled greed. I’m still proud of my Dad, but in spite of his time on Wall Street rather than because of it. He was, fortunately for us, the antithesis of Wall Street rather than the embodiment of it. He was never corrupted by that business, and that fact did him no favors over the course of his career.

    When I got into technology a few decades ago, I had a lot of pride in my industry, much as I’m sure my Dad did. He probably felt about Wall Street the way I did about Silicon Valley. At least initially.

    Looking around the technology industry today I am regularly dismayed by what I see. From calls for the secession of California to arguments in favor of increasing inequality to literally unbelievable insensitivity to those less fortunate, the term Silicon Valley is – in the circles I travel in, at least – becoming synonymous with…a spectacular tone-deafness, outrageous excesses of compensation and uncontrolled greed. For the first time in my career, I am occasionally embarrassed to tell someone I work in technology.

    The overwhelming majority of the people in this industry, of course, are regular, good people. It is undoubtedly a case of a few bad apples ruining the bunch. But unfortunately much the same is true of Wall Street: most of the people who work there are not members of the 1%, just people trying to get by. That distinction, however, gets lost quickly.

    We in the technology industry are running the same risk, in my opinion. Unless the excesses are widely condemned, and unless we can collectively articulate a vision that isn’t something like “we always know best” or “the homeless should just learn a computer language”, I fear Silicon Valley is headed the way of Wall Street. That most of us aren’t responsible for the appalling lack of empathy won’t matter: we’ll all be tarred with the same brush.

    I don’t expect any progress in this department in 2016, which is why it’s listed here. Alas.

Categories: AI, Business Models, Cloud, Collaboration, Hardware-as-a-Service, Open Source, Platform-as-a-Service, Platforms, Social, Software-as-a-Service, VR.

The RedMonk Programming Language Rankings: January 2016

This iteration of the RedMonk Programming Language Rankings is brought to you by Rogue Wave Software. It’s hard to be a know-it-all. With our purpose-built tools, we know a lot about polyglot. Let us show you how to be a language genius, click here.


It’s been a very busy start to the year at RedMonk, so we’re a few weeks behind in the release of our bi-annual programming language rankings. The data was dutifully collected at the start of the year, but we’re only now getting around to the analysis portion. We have changed the actual process very little since Drew Conway and John Myles White’s original work late in 2010. The basic concept is simple: we periodically compare the performance of programming languages relative to one another on GitHub and Stack Overflow. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion (Stack Overflow) and usage (GitHub) in an effort to extract insights into potential future adoption trends.

With the exception of GitHub’s decision to no longer provide language rankings on its Explore page – they are now calculated from the GitHub archive – the rankings are performed in the same manner, meaning that we can compare rankings from run to run, and year to year, with confidence.

Historically, the correlation between how a language ranks on GitHub versus its ranking on Stack Overflow has been strong, but it had been weakening in recent years. From its high of .78, the correlation was down to .73 during our last run – the lowest recorded. For this run, however, the correlation between the properties is once again robust at .77, just shy of its all-time mark. Given the recent variation, it will be interesting to observe whether or not this number continues to bounce.
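To make the methodology concrete, here is a minimal sketch, in JavaScript, of the rank correlation step. The Spearman formula is standard, but the sample ranks below are invented for illustration and are not drawn from the actual dataset.

```javascript
// Spearman's rank correlation for two rank lists without ties:
// rho = 1 - (6 * sum(d^2)) / (n * (n^2 - 1)), where d is the
// difference between a language's GitHub and Stack Overflow ranks.
function spearman(githubRanks, stackOverflowRanks) {
  const n = githubRanks.length;
  let sumSquaredDiffs = 0;
  for (let i = 0; i < n; i++) {
    const d = githubRanks[i] - stackOverflowRanks[i];
    sumSquaredDiffs += d * d;
  }
  return 1 - (6 * sumSquaredDiffs) / (n * (n * n - 1));
}

// Hypothetical ranks for five languages on each property
const githubRanks = [1, 2, 3, 4, 5];
const stackOverflowRanks = [2, 1, 3, 5, 4];
console.log(spearman(githubRanks, stackOverflowRanks)); // ≈ 0.8
```

Identical orderings on the two properties would yield a rho of 1; the more closely the two populations track one another, the closer the figure gets to the .77 and .78 values reported above.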

Before we continue, please keep in mind the usual caveats.

  • To be included in this analysis, a language must be observable within both GitHub and Stack Overflow.
  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers on the plot, however, is generally representative of substantial differences in relative popularity.
  • GitHub language rankings are based on raw lines of code, which means that repositories written in a given language that include a greater amount of code in a second language (e.g. JavaScript) will be read as the latter rather than the former.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top tiers of languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.

(click to embiggen the chart)

Besides the above plot, which can be difficult to parse even at full size, we offer the following numerical rankings. As will be observed, this run produced several ties which are reflected below (they are listed out here alphabetically rather than consolidated as ties because the latter approach led to misunderstandings). Note that this is actually a list of the Top 21 languages, not Top 20, because of said ties.

1 JavaScript
2 Java
3 PHP
4 Python
5 C#
5 C++
5 Ruby
8 CSS
9 C
10 Objective-C
11 Shell
12 Perl
13 R
14 Scala
15 Go
15 Haskell
17 Swift
18 Matlab
19 Clojure
19 Groovy
19 Visual Basic

JavaScript’s continued strength is impressive, as is Java’s steady, robust performance. The long time presence of these two languages in particular atop our rankings is no coincidence; instead it reflects an increasing willingness to employ a best-tool-for-the-job approach, even within the most conservative of enterprises. In many cases, Java and JavaScript are leveraged side-by-side in the same application, depending on its particular needs.

Just as JavaScript and Java’s positions have remained unchanged, the rest of the Top 10 has remained similarly static. This has become the expectation rather than a surprise. As with businesses, the larger a language becomes, the more difficult it is to outperform from a growth perspective. This suggests that what changes we’ll see in the Top 10 will be slow and longer term, that fragmentation has begun to slow. The two most obvious candidates for a Top 10 ranking at this point appear to be Go and Swift, but they have their work cut out for them before they get there.

Outside of the Top 10, however, here are some of the more notable performers.

  • Elixir: The Erlang-friendly language made a notable jump this time around. The last quarter we surveyed languages, Elixir placed at #60. As of this January, it had jumped to #54. While we caution against reading too much into specific numerical differences, the more so the further down the list one goes, this change is notable as it suggests that the language – a darling amongst some language aficionados – is finally seeing some of the growth we and others have expected from it. Interestingly, Erlang did not benefit from this bounce, as it slid back to #26 after moving up to #25 last quarter.

  • Julia: Julia’s growth has been the tortoise to other languages’ hares historically, and this run was no exception. For this run, Julia moves from #52 to #51. Given the language’s slow ascent, it’s worth questioning the dynamics behind its adoption, and more specifically whether any developments might be anticipated that would materially change its current trajectory. So far, the answer to that question has been no, but certainly its focus on performance and syntactical improvement would seem to offer would-be adopters a carrot.

  • Rust: Another popular choice among language enthusiasts, Rust’s growth has outpaced slower growth languages like Elixir or Julia, but not by much. This time around, Rust moves up two spots from #48 to #46. The interesting question for Rust is when, or perhaps if, it will hit the proverbial tipping point, the critical mass at which usage becomes self-reinforcing and an engine for real growth. Go went through this, where its growth through the mid to low thirties was relatively modest, then picked up substantially until it entered the Top 20. In the meantime, it will have to settle for modest but steady gains quarter after quarter.

  • Swift: Swift’s meteoric rise has predictably slowed as it’s entered the Top 20, but importantly has not stopped. For this ranking, Swift moves up one spot from #18 to #17. As always, growth is more difficult the closer you get to the top, and in passing Matlab, Swift now finds itself a mere two spots behind Go – in spite of being five years younger. It is also three spots behind Scala and only four behind R. Which means that Swift finds itself ranked alongside languages of real popularity and traction, and is within hailing distance of our Tier 1 languages (R is the highest ranking Tier 2). The interesting thing is that Swift still has the potential to move significantly; its current traction was achieved in spite of being a relatively closed option amongst open source alternatives. Less than four weeks before we took this quarter’s snapshot of data, Swift was finally open sourced by Apple, which means that the full effect of this release won’t be felt until next quarter’s ranking. This release was important for developers, who typically advantage open source runtimes at the expense of proprietary alternatives, but also because it allows third parties to feel comfortable investing in the community in a way they would not for a proprietary stack – see IBM’s enthusiastic embrace of Swift. This means that Swift has, uniquely, multiple potential new engines for growth. So it will be interesting indeed to see what impact the release has on Swift’s overall adoption, and whether it can propel it near or actually into the Top 10.

  • TypeScript: One interesting, although unheralded, language to watch is TypeScript. A relatively new first class citizen in the Microsoft world, this (open source, notably) superset of JavaScript is quietly moving up the ranks. In this ranking, TypeScript jumped two spots from #33 to #31, passing ASP in the process. Obviously it’s a small fraction of JavaScript’s traction, but the list of interesting technologies it outranks now is growing longer: ASP (#32), OCaml/TCL (#33), Cold Fusion/DART (#37), among others, as well as the aforementioned Elixir/Julia/Rust. It’s not reasonable to expect any explosive growth from TypeScript, but it wouldn’t be surprising to see it get a bounce should it prove capable of moving into the twenties and becoming more widely visible. Regardless, it’s become a language to watch.

The Net

We are regularly asked why we don’t run the language rankings more often – every quarter or even on a monthly basis. The answer is that there isn’t enough movement in the data to justify it; programming languages are popular enough and the switching costs are sufficient to mean that significantly shifting adoption is a slow, gradual process. For every language except Swift, anyway.

It will be interesting to see whether or not we’ll see new entrants into the Tier 1 of languages, with the most likely candidate at this point being Swift followed by Go. Further down the list, several interesting but currently niche languages are getting close to thresholds at which they have the potential to see substantial, if not guaranteed, growth. Not the kind that will take them into the Top 10, but certainly there are vulnerable languages at the back end of the Top 20. In the meantime, we’ll keep checking in every other quarter to report back on progress – or the lack thereof – in any of the above areas.

One More Thing

(Added 2/22/2016)

Of all the requests we receive around our programming language rankings, the ability to browse the history of their performance is by far the most common. The current rank of a language is of great interest, of course, but for many, previous rankings and the trajectory they imply are at least as interesting, and in some cases more so.

We’ve been thinking about this for a while, and while a number of different visualizations were assessed, there are so many different angles to the data that a one-size-fits-all approach was less than ideal. Which led to the evaluation of a few dynamic alternatives, which were interesting but had some issues. Rather than hold up this quarter’s already delayed release for a visualization that had potential but might not work, then, we went ahead and published the rankings.

Over the weekend, however, the last major obstacles were addressed. It’s not perfect, but the historical depiction of the rankings is in a state where we can at least share a preliminary release. A few notes:

  • This is not a complete ranking of all the languages we survey. It includes only languages that are currently or have been at one time in the Top 20.
  • This graphic is interactive, and allows you to select as many or as few languages as you prefer. Just click on the language in the legend to toggle them on or off. This is helpful because Swift fundamentally breaks any visual depiction of growth: de-select it and the chart becomes much more readable.
  • The visualization here, courtesy of Ramnath Vaidyanathan’s rCharts package, is brand new and hasn’t been extensively tested. Mobile may or may not work, and given the hoops we had to jump through to host a D3-based visualization on a self-hosted WordPress instance, it’s likely that some browsers won’t support the visualization, HTTPS will break it, etc. We’ll work on all of that, and do let us know if you have problems, but we wanted to share at least the preliminary working copy as soon as we were able.

With that, we hope you enjoy this visual depiction of the Historical Programming Language Rankings.

Categories: Programming Languages.

Revisiting the 2015 Predictions

It’s easy to make predictions. It’s even easier to cherry pick correct predictions and hope the rest are forgotten. Believers as we are in accountability, however, for the fifth year in a row it’s time to review my predictions for the calendar year just ended. All of them, good, bad and ugly.

As is the case every year, this is a two part exercise. First, reviewing and grading the predictions from the prior year, and second, the predictions for this one. The results articulated by the first hopefully allow the reader to properly weight the contents of the second – with one important caveat that I’ll get to.

This year will be the sixth year I have set down annual predictions. For the curious, here is how I have fared in years past.

Before we get to the review of 2015’s forecast, one important note regarding the caveat mentioned above. Prior to 2013, the predictions in this space focused on relatively straightforward expectations with well understood variables. After some serious shade thrown by constructive feedback from Bryan Cantrill, however, emphasis shifted towards trying to better anticipate the totally unexpected rather than the reverse.

With the benefit of hindsight, one thing is apparent:

Nevertheless, we press on. Without further delay, here are the 2015 predictions in review.

Safe

  • Amazon is Going to Become More Dominant Thanks to “Honeypots”
    In 2015, Amazon will become even more dominant thanks to its ability to land customers on what I term “honeypot” services – services that are exceedingly easy to consume, and thus attractive – and cross/upsell them to more difficult-to-replicate or proprietary AWS products. Which are, notably, higher margin. Examples of so-called “honeypot” services are basic compute (EC2) and storage (S3) services. As consumption of these services increases, which it is across a large number of customers and wide range of industries, the friction towards other AWS services such as Kinesis, Redshift and so on decreases and consumption goes up. Much as was the case with Microsoft’s Windows platform, the inertia to leave AWS will become excessive.

According to several metrics, AWS satisfied the prediction that it would extend its lead in 2015. This, in spite of notable increases in traction for several other players, most notably Google and Microsoft. Attributing a cause to this performance is more difficult, but it is interesting to note that AWS has a new title holder for the fastest growing service in its history, Aurora, which took the crown from Redshift. The rapid adoption of unique-to-AWS, and thus not externally replicable, services is suggestive of a user population that is willing to invest in premium, in-place services. As quickly as these services are growing, however, it is unrealistic to believe that they are the engines behind AWS’ broader acceleration: they’re simply too new. Which in turn implies that the growth of AWS is still fueled in large part by its core, basic offerings and that services such as Aurora or Redshift are important first for their ability to lock customers in and secondarily for the revenue they generate.

And so we’ll count this prediction as a hit.

  • Kubernetes, Mesos, Spark et al are the New NoSQL
    Not functionally, of course. But the chaos of the early NoSQL market is remarkably similar to the evolution of what we’re seeing from projects like Mesos or Spark. First, there has been a rapid introduction of a variety of products which require a new conceptual understanding of infrastructure to appreciate. Second, while there may be areas of overlap between projects, in general they are quite distinct from one another. Third, the market’s understanding of what these projects are for and how they are to be used is poor to quite poor.

    This is currently what we see when customers are evaluating projects like Kubernetes, Mesos and Spark: the initial investigation is less functional capability or performance than basic education. Not through any failing on the part of the individual projects, of course. It just takes time for markets to catch up. For 2015, then, expect these and similar projects to achieve higher levels of visibility, but remain poorly understood outside the technical elite.

Developer and end user questions about emerging infrastructure projects are nearly ubiquitous. So are vendor efforts to answer these questions. Nor is this just Kubernetes.

Ten years ago, the most difficult software choices an enterprise had to make were a) which relational database and b) which application server to use. Today, they are confronted by a bewildering array not just of projects and products, but of distinct technical approaches and visions. Worse, these are moving targets; two projects that do not functionally overlap today may – probably will – tomorrow.

Until these coalesce around standard, accepted models much as Ajax once came to describe a mixed bag of unrelated technologies which were employed towards a particular approach, market confusion will remain and education will be a significant upfront problem for infrastructure projects major and minor.

We’ll call this a hit.

Likely

  • Docker will See Minimal Impact from Rocket
    Following the announcement of CoreOS’s container runtime project, Rocket, we began to field a lot of questions about what this meant for Docker. Initially, of course, the answer was simply that it was too soon to say. As we’ve said many times, Docker is one of the fastest growing – in the visibility sense – projects we have ever seen. Along with Node.js and a few others, it piqued developer interest in a way and at a rate that is exceedingly rare. But past popularity, while strongly correlated with future popularity, is not a guarantee.

    In the time since, we’ve had a lot of conversations about Docker and Rocket, and the anecdotal evidence strongly suggested that the negative impact, if any, to Docker’s trajectory would be medium to long term. Most of the conversations we have had with people in and around the Docker ecosystem suggest that while they share some of CoreOS’s concerns (and have some not commonly cited), the project’s momentum was such that they were committed for the foreseeable future.

    It’s still early, and the results are incomplete, but the quantitative data from my colleague above seems to support this conclusion. At least as measured by project activity, Docker’s trendline looks unimpacted by the announcement of Rocket. I expect this to continue in 2015. Note that this doesn’t mean that Rocket is without prospects: multiple third parties have told us they are open to the idea of supporting alternative container architectures. But in 2015, at least, Docker’s ascent should continue, if not necessarily at the same exponential rate.

Normally, this is where we’d examine quantitative data from sources like GitHub, Stack Overflow and so on to assess Docker’s trajectory and potential impacts on it. This is unnecessary, however. It would be difficult to build the argument that Docker was not impacted by Rocket in the wake of the June announcement of the Open Container Initiative. To be clear, this is a low level standard, one that permits competing container implementations: it is explicitly not an effort to make either Docker or Rocket the once and future container spec. And Docker’s momentum as a project has continued unabated.

But as welcome as the news was from a market perspective – standard wars are tedious and benefit few over the long term – the impact of Rocket and the drivers behind it are clear. Which makes this, unfortunately, a miss.

  • Google Will Hedge its Bets with Java and Android, But Not Play the Swift Card
    That being said, change is unlikely to be coming in 2015 if it arrives at all. For one, Go is a runtime largely focused on infrastructure for the time being. For another, Google has no real impetus to change at the moment. Undoubtedly the company is hedging its bets internally pending the outcome of “Oracle America, Inc. v. Google, Inc.,” which has seen the Supreme Court ask for the federal government’s opinions. And certainly the meteoric growth of Swift has to be comforting should the company need to make a clean break with Java.

    But the bet here is, SCOTUS or no, we won’t see a major change on the Android platform in 2015.

Scoring this prediction depends on whether the spirit of “major change” is accounted for. If we judge this prediction literally, it’s a miss because the platform did in fact see a major change in 2015. For the first time since the project’s inception, Android is moving to something other than a cleanroom reimplementation of the runtime. For reasons likely stemming from Oracle America, Inc. v. Google, Inc., Google is moving Android N to the OpenJDK project. Which is the literal definition of a major change.

If the context of the prediction is taken into account, however, the prediction is less of a miss. While the shift to OpenJDK is a big change for the Android project, it is still Java. What Google has declined to do, at least this time around, is make a change such as the one Apple is undertaking, replacing Objective-C with the newly released (and open sourced) Swift.

In light of that fact, it seems fair to call this a push.

  • Services Will Be the New Open Core
    Which is why we are and will continue to see companies turn to service-based revenue models as an alternative. When you’re selling a service instead of merely a product, many of the questions endemic to open core go away. And even in models where 100% of the source code is made available, selling services remains a simpler exercise because selling services is not just selling source code: it’s selling the ability to run, manage and maintain that code.

If services were the new open core, what would we expect to see? One logical outcome would be commercial organizations – especially those who have historically embraced open core-like revenue models – deploying open source software as a service. Which is what we began to see in 2015. Previously, we’ve seen Oracle expand not just its business models but its financial reporting from the traditional on premise software offerings to incorporate IaaS, PaaS and SaaS. In 2015, this continued. In March, Elastic – originally known as Elasticsearch – acquired a company called Found, whose principal product was Elasticsearch offered as a service. Asked about the deal, Elastic CEO Steven Schuurman responded:

“At this point we have massive corporations who run 10, 20, 30, or many more instances of ELK servicing different use cases and they’re coming to us asking how to manage and provision different deployments. On the other hand, people ask us when we’re going to provide something as a service.”

IBM came to a similar conclusion regarding the demand for these types of database-as-a-service offerings, adding a company called Compose in July. Like Found, Compose was in the database-as-a-service business, and was added to IBM’s existing stable of software as a service offerings.

Less obviously, a large number of the commercial open source organizations we spoke with over the year were either exploring the idea of similar forays into SaaS models or actively engaged in their execution.

None of which, of course, should come as a surprise given the aforementioned traction of products like Aurora. Service-based models are increasingly in demand, as Schuurman said, which means that vendors will supply that demand.

This is a hit.

Possible

  • One Consumer IoT Provider is going to be Penetrated with Far Reaching Consequences
    If it weren’t for the fact that there’s only ten months remaining in the year and the “far reaching” qualifier, this would be better categorized as “Likely” or even “Safe.” But really, this prediction is less about timing and more about inevitability. Given the near daily breaches of organizations regarded as technically capable, the longer the time horizon the closer the probability of a major IoT intrusion gets to 1. The attack surface of the consumer IoT market is expanding dramatically from thermostats to smoke detectors to alarms to light switches to door locks to security systems. Eventually one of these providers is going to be successfully attacked, and bad things will follow. The prediction here is that that happens this year, much as I hope otherwise.

The problem with this prediction, again, is its specificity. In a year in which attackers found vulnerabilities in everything from Barbie dolls to Jeeps, the fragility of the IoT ecosystem was revealed repeatedly. But while there were attacks on consumer IoT providers such as FitBit, none of them came with far reaching consequences. That we know of now, at least.

As much as this prediction is essentially inevitable, then, for the year 2015 it’s a miss. Thankfully.

  • AWS Lambda Will be Cloned by Competitive Providers
    For my money, however, Lambda was the most interesting service introduced by AWS last year. On the surface, it seems simplistic: little more than Node daemons, in fact. But as the industry moves towards services in general and microservices specifically, Lambda offers key advantages over competing models and will begin to push developer expectations. First, there’s no VM or instance overhead: Node snippets are hosted effectively in stasis, waiting on a particular trigger. Just as interest in lighter weight instances is driving containers, so too does it serve as incentive for Lambda adoption. Second, pricing is based on requests served – not running time. While the on-demand nature of the public cloud offered a truly pay-per-use model relative to traditional server deployments, Lambda pushes this model even further by redefining usage from merely running to actually executing. Third, Lambda not only enables but compels a services-based approach by lowering the friction and increasing the incentives towards the servicification of architectures.

    Which is why I expect Lambda to be heavily influential if not cloned outright, and quickly.

This was, simply put, a miss, and a miss with interesting implications. Lambda’s server-less model was and is an interesting, innovative new approach for constructing certain types of applications. And it certainly attracted a reasonable share of developer attention; the Serverless framework (AKA JAWS) checks in with over 6600 stars on GitHub – or about a third of what Node has – at present.

For all of that, however, Lambda adoption has not been proportionate to its level of market differentiation. For such an interesting new service, actual usage of Lambda has been surprisingly modest. On Stack Overflow, the aws-lambda tag has a mere 388 questions attached to it – Node has almost 110,000.

The question is why, and my colleague actually answered this in October:

So far Lambda adoption has been a little slow, partly because it doesn’t fit into established dev pipelines and toolchains, but also almost certainly because of fears over lock-in.

Lambda, in other words, has the same problem that Force.com and Google App Engine had when they launched: however impressive the service, developers are reluctant to walk into a room that they can’t walk out of. Which in turn implies that, counterintuitively, the best thing for Lambda at this point would be a competitive clone. AWS could potentially address this concern and jumpstart Lambda adoption by releasing an open source implementation of the service, even a functionally limited version, but given the company’s history this is extremely unlikely.

In the interim, then, AWS must either find ways to reduce the potential for lock-in or wait for other competitors to make its own product easier to adopt.

Exciting

  • Slack’s Next Valuation Will be Closer to WhatsApp than $1B
    In late October of last year, Slack took $120M in new financing that valued the company at $1.12B. A few eyebrows were raised at that number: as Business Insider put it: “when it was given a $1.1 billion valuation last October, just 8 months after launching for the general public, there were some question marks surrounding its real value.”

    The prediction here, however, is that that number is going to seem hilariously light. The basic numbers are good. The company is adding $1M in ARR every 11 days, and has grown its user base by 33X in the past twelve months. Those users have sent 1.7B messages in that span.

    Anecdotal examples do not a valuation make, of course. And there’s no guarantee that Slack will do anything to advance this usage, or even continue to permit it. But it does speak to the company’s intrinsic ability to function well as a message platform outside of the target quote unquote business team model. What might a company with the ability to sell to both WhatsApp users and enterprises be worth? My prediction for 2015 is a lot more than $1.12B.

Yet another prediction undone by an overly specific forecast. On the one hand, Slack’s valuation is enormously higher than it was entering the year. By March, in fact, a mere five months after the billion dollar round that mystified some observers, Slack’s valuation more than doubled to just under $3 billion. Having made the leap from application to platform in December, with a variety of third parties now treating it as a viable route to market, the company is unquestionably more valuable on paper than it was during its last round, even if the current funding climate has soured.

All of which validates the underlying premise to the original prediction, which was that the company was “hilariously” undervalued. The problem was the usage of WhatsApp as the yardstick. Even with its valuation more than doubled, Slack is still not within hailing distance of the $19B WhatsApp fetched.

I remain convinced that the company’s strict business focus and blind eye towards off label usage of the service is doing it no favors from a valuation standpoint, but as the saying goes, it is what it is.

Which means that this prediction is, sadly, a miss.

Spectacular

  • The Apple Watch Will be the New Newton

    It could very well be that Apple will find a real hook with the Watch and sell them hand over fist, but I’m predicting modest uptake – for Apple, anyway – in 2015. They’ll be popular in niches. The Apple faithful will find uses for it and ways to excuse the expected shortcomings of a first edition model. If John Gruber’s speculation on pricing is correct, meanwhile, Apple will sell a number of the gold plated Edition model to overly status conscious rich elites. But if recent rumors about battery life are even close to correct, I find it difficult to believe that the average iPhone user will be willing to charge their iPhone once a day and their watch twice in that span.

    While betting against Apple has been a fool’s game for the better part of a decade, then, call me the fool. I’m betting the under on the Apple Watch. Smartwatches will get here eventually, I’m sure, but I don’t think the technology is quite here yet. Much as it wasn’t with the Newton once upon a time.

Out of all of the predictions, this is the most difficult to grade, because with the exception of Apple itself, no one actually knows how well or poorly the Apple Watch is selling. Speculation, however, is not in short supply.

Fred Wilson predicted in 2015 that the Watch would not be a home run, and got irritated when media outlets interpreted that to mean it would be a flop. John Gruber was understanding of the nuance. In December, Wilson scored his prediction that it would be a flop as a hit (as an aside, Gruber was less impressed with Wilson’s predictions for 2016). An analyst with Creative Strategies regarded it as “absolutely mind blowing that people in their right mind think the Apple Watch is a flop.”

Who’s right? In July, in a piece entitled “Why the Apple Watch is Flopping,” Engadget said:

Imagine if months after the iPad release, we learned it still hadn’t outsold some model of Windows tablet. A couple of million units sold sounds okay, but hardly the sort of smash hit we’ve come to expect from Apple. A precipitous decline in sales after just a couple of months? Not a good sign.

In November, Forbes opened a piece on the Watch with the following:

Apple is remaining tight lipped about the official sales figures for its debut wearable – the Apple Watch – but that hasn’t stopped analyst Canalys declaring some pretty staggering numbers.

It has stated that Apple has “shipped nearly 7 million smart watches since launch, a figure in excess of all other vendors’ combined shipments over the previous five quarters.”

And just last month, Wired closed a piece of theirs on the Watch with:

Cook has said in the past that the company keeps quiet on specific sales numbers for the Watch for competitive reasons, even when they exceed expectations. Still, one can’t help but think that if Apple had such a record-breaking December—which would make sense, since Apple likely sold a whole bunch of Watches over the holidays—it would want the world to know.

The tl;dr then is that no one knows if the Watch is a success because Apple declines to provide the information necessary to make a firm judgement. Two assertions seem plausible, however. First, that Apple is outselling other comparable wearables; whatever the market for the Apple Watch, it seems clear that Apple’s logistical, marketing and product design advantages make comparisons to Android, Pebble and other smartwatches pointless. Second, as Wired suggests, if the Apple Watch were a breakout success, it is unlikely that Apple would continue to be opaque about the sales figures. If you have success, particularly in a brand new product category with uncertain prospects, you would typically advertise that fact.

Anecdotally, reactions from Apple Watch buyers suggest that the Watch is a very different product from the iPhone. When the iPhone was released, every buyer was an evangelist. Having the internet in your pocket – the real internet rather than a tiny subset dumbed down via WAP – was life changing. Watch users are, in my experience, much less enthusiastic. Most like it, for the notifications or the fitness capabilities, typically, but I have yet to have a conversation with a Watch buyer who characterizes it as a must buy, let alone a life changing device. The most common review is something like “I like it, but it’s not for everyone.” The iPhone, comparatively, was for everyone.

There are very few products like the iPhone, of course, so judging a product – even an Apple product – by that standard is, to some degree, unfair. But with Apple flush with cash but facing questions about its ability to grow, all eyes were on the company to see if it could reinvent the watch the way it did the mobile phone. So far, at least, the answer appears to be no. Arguably part of the problem is a function of technology limitations – weight, battery life and so on. But the Watch is also still waiting for its killer app, its internet-in-your-pocket, and it may be that there simply isn’t one.

Time will tell on that subject, and unfortunately it will have to for this prediction as well. Absent better data, the verdict is incomplete.

The Final Tally

How did I do? Setting aside the one push and the one incomplete grade, three of six predictions can be argued to be correct. This marks the second year in a row of fifty percent success. Another way of saying this is that I tied the lowest rate since I started doing these predictions.

As was the case last year, the failure rate of predictions is highly correlated to their difficulty. It’s simpler, obviously, to predict some things than others, hence the categories.

For 2016, the predictions will nevertheless attempt to preserve the aggression. Which means that I can probably look forward to a similar failure rate this time next year.

Categories: Cloud, Collaboration, Containers, IoT, Services.

The Time is Nigh

As many of you are already aware, Kate and I are expecting a baby. With a scheduled delivery date of next Monday, today is my last day in the office. As of the close of business, I will be headed out on paternity leave. My return date will depend on how things proceed with mother and baby, but I’m hoping to ramp back up at some point in mid to late January.

For RedMonk customers, not much changes: contact Juliane for any of your engagement needs and Marcia for anything operations related. James, at least, will be around to keep the lights on.

While I’m out of the office, I will be around as usual on Twitter – and I hear parents of newborns have plenty of free time. Until my return then, be well, enjoy your holidays and wish us luck.

Categories: Personal.

DVCS and Git Usage in 2015

For many in the industry today, version control and decentralized version control are assumed to be synonymous. Slides covering the DevOps lifecycle, as but one example, may or may not call out Git specifically in the version control portion of the stack depiction, but when the slides are actually presented, that is in the overwhelming majority of cases what is meant. Git, to some degree, is treated as a de facto standard. Cloud platforms leverage Git as a deployment mechanism, and new collaboration tools built on Git-based services continue to emerge.

Are these assumptions well founded, however? Is Git the version control monster that it appears to be? To assess this, we check Open Hub’s (formerly Ohloh) dataset every year around this time to assess, at least amongst its sampled projects, the relative traction for the various version control systems. Built to index public repositories, it gives us insight into the respective usage at least within its broad dataset. In 2010 when we first examined its data, Open Hub was crawling some 238,000 projects, and Git managed just 11% of them. For this year’s snapshot, that number has swelled to over 683,000 – or close to 3X as many. And Git’s playing a much more significant role today than it did then.

Before we get into the findings, more details on the source and issues.

Source

The data in this chart was taken from snapshots of the Open Hub data exposed here.

Objections & Responses

  • “Open Hub data cannot be considered representative of the wider distribution of version control systems”: This is true, and no claims are made here otherwise. While it necessarily omits enterprise adoption, however, it is believed here that Open Hub’s dataset is more likely to be predictive moving forward than a wider sample.
  • “Many of the projects Open Hub surveys are dormant”: This is very likely true. But the size of the sample makes it interesting even if potentially limited in specific ways.
  • “Open Hub’s sampling has evolved over the years, and now includes repositories and forges it did not previously”: Also true. It also, by definition, includes new projects over time. When we first examined the data, Open Hub surveyed less than 300,000 projects. Today it’s over 600,000. This is a natural evolution of the survey population, one that’s inclusive of evolving developer behaviors.

With those out of the way, let’s look at a few charts.


[Chart: share of Open Hub projects by version control category, centralized vs. decentralized]

If we group the various different version control systems by category – centralized or decentralized – this is the percent of share. Note that 2011 is an assumption because we don’t have hard data for that year, but even over the last four years a trend is apparent. Decentralized tooling has moved from less than one in three projects in 2012 (32%) to closer to one in two in 2015 (43%). That’s the good news for DVCS advocates. The bad news is that this rate has become stagnant in recent years. It was 43% in 2013, actually dipped slightly to 42% in 2014, and returned to 43%, as mentioned, this year.

On the surface, this suggests that DVCS generally and Git specifically might have plateaued. The more likely explanation, however, is that this is an artifact of the Open Hub dataset, and of our imperfect view of same. It is logical to assume that some portion – possibly a very large one – of the Open Hub surveyed projects are abandoned, and therefore not an accurate reflection of current usage. Many of those, purely as a function of their age, are likely to be centralized projects.

Nor did the Open Hub dataset add many projects in the past calendar year; by our count, it’s around 9671 total net new projects surveyed, or around 1% of the total. Which means that even if every new project indexed was housed in a Git repository, the overall needle wouldn’t move much.
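The arithmetic behind that claim is simple enough to sketch. In this back-of-the-envelope illustration, only the project counts come from the data above; the 38% starting share for Git is a hypothetical figure chosen purely for demonstration:

```python
# Why ~9,671 net new projects can barely move share in a ~683,000-project
# sample, even in the extreme case where every new project uses Git.
total_2014 = 683_000 - 9_671   # approximate prior-year project count
new_projects = 9_671
git_share_2014 = 0.38          # hypothetical starting share, for illustration

git_2014 = total_2014 * git_share_2014
git_2015 = git_2014 + new_projects   # extreme assumption: 100% Git capture
share_2015 = git_2015 / (total_2014 + new_projects)

print(f"Share moves from {git_share_2014:.1%} to {share_2015:.1%}")
```

Even under a 100% capture rate, the gain is under one percentage point, which is why the overall needle barely moves.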

Overall, however, if we compare the change in individual share of Open Hub projects from 2010 against 2015, these are the respective losses and gains.


[Chart: change in share of Open Hub projects by individual version control system, 2010 vs. 2015]

Git unsurprisingly is the big winner, CVS the equally unsurprising loser. Nor has any of the data collected suggested material gains for non-Git platforms. DVCS in general has gained considerably and is now close to parity, and Git is overwhelmingly the most popular choice in that segment.

What the specific rate of current adoption is versus the larger body of total projects will require another dataset, or more detailed access to this one. For those who may be curious, we did compare this year’s numbers against last year’s, but as the largest single change was Git’s gain of 0.75% share, it didn’t offer much in the way of new information. Given that existing projects may change their repository, we can’t simply assume that Git captured 75% of the net new projects.

Our annual look at the Open Hub dataset, then, does support the contention that DVCS and Git are effectively mainstream options, but is insufficiently detailed to prove the hypothesis that Git has become a true juggernaut amongst current adoption – even if the anecdotal evidence concluded this a long time ago.

Categories: Version Control.

Changing Tack: Evolving Attitudes to Open Source

Santtu at the helm

Even five years ago, evidence that the role of software was changing was not difficult to find. Microsoft, long the standard bearer for perpetual license software sales, had seen its share price stall for better than a decade. Oracle was in the midst of a multi-year decline in its percentage of revenue derived from the sale of new licenses. Companies that made money with software rather than from it, meanwhile, such as Amazon, Facebook and Google, were ascendant.

It wasn’t that software had become unimportant – quite the contrary. It was becoming more vital by the day, in fact. As Marc Andreessen pointed out later that year, across a wide number of traditional industries, the emerging players were more accurately considered technology companies – and more specifically, software companies – that happened to operate in a given vertical than the reverse.

This counterintuitive trend was what led to the publication of “The Software Paradox,” which attempted to explain why software could become more valuable and less saleable at the same time. And what companies within and outside the software industry should do about it.

One of the most important factors in both making software more difficult to sell and more vital to an organization was open source. In general, organizational attitudes towards open source tended to be informed by a variety of factors, but could be roughly categorized along generational lines. This classification was presented to the OSBC audience in 2011:

  • First Generation (IBM) “The money is in the hardware, not the software”:
    For the early hardware producers, software was less interesting than hardware because the latter was harder to produce than the former and therefore was more highly valued, commercially.

  • Second Generation (MSFT) “Actually, the money is in the software”:
    Microsoft’s core innovation was recognizing, where IBM and others had failed to, the commercial value of the operating system. For this single realization, the company has realized and continues to realize hundreds of billions of dollars in revenue.

  • Third Generation (GOOG) “The money is not in the software, but it is differentiating”:
    Google’s origins date back to a competition with the early search engines of the web. By leveraging free, open source software and low cost commodity hardware, Google was able to scale more effectively than its competitors. This has led to Google’s complicated relationship with open source; while core to its success, Google also sees its software as competitively differentiating and thus worth protecting.

  • Fourth Generation (Facebook/Twitter) “Software is not even differentiating, the value is the data”:
    With Facebook and Twitter, we have come full circle to a world in which software is no longer differentiating. Consider that Facebook transitioned away from Cassandra – a piece of infrastructure it wrote and released as open source software – for its messaging application to HBase, a Hadoop-based open source database originally written by Powerset. For Facebook, Twitter, et al the value of software does not generally justify buying it or maintaining it strictly internally.

While it’s certainly possible to debate the minutiae of these classifications, the more interesting question is whether they would persist. Recently, we’ve begun to see the first signs that they will not. That second and third generation organizations that believed – at minimum – in software as a protectable asset have begun to evolve away from these beliefs.

Google

Google’s release of TensorFlow was particularly interesting in this regard. Google’s history with open source software was and is complex. The company was built atop it, and as representatives like Chris DiBona are correct to note, in the form of projects such as Android Google has contributed millions of lines of code to various communities over time. But it tended to be protective of its infrastructure technologies. Rather than release its MapReduce implementation as open source software, for example, it published papers describing the technologies necessary to replicating it, out of which the initial incarnation of Hadoop was born.

With TensorFlow, however, Google declined to protect the asset. Rather than make the code replicable via the release of a paper detailing it, it released the code itself as open source software. As Matt Cutts put it:

In the past, Google has released papers like MapReduce, which described a system for massive parallel processing of data. MapReduce spawned entire cottage industries such as Hadoop as smart folks outside Google wrote code to recreate Google’s paper. But the results still suffered from a telephone-like effect as outside code ran into issues that may have already been resolved within Google. Now Google is releasing its own code. This offers a massive set of possibilities, without reinventing the wheel.

This move is relatively standard at fourth generation companies such as Facebook or Twitter, but it represents a change for Google. A recognition that the benefits to releasing the source code outweigh the costs. While the market’s understanding of and appreciation for the benefits may lag – the WSJ writeup of the news apparently required a quote to confirm that “It’s not a suicidal idea to release this” – Google’s does not.

Microsoft

For many years and across many teams at Microsoft, open source was a third rail issue. In spite of the rational, good work done by open source advocates within the company like Jason Matusow or Sam Ramji, the company’s leadership delivered a continual stream of rhetoric that alienated and antagonized open source communities. Unsurprisingly, this attitude filtered down to rank and file employees, many of whom viewed open source as an existential threat to their employer, and therefore something to be fought.

With years and a change in leadership, however, Microsoft’s attitude towards open source is perceptibly shifting. While the company has been moving in this direction for years, recent events suggest that the thawing towards open source has begun to accelerate. In November of last year, nine months after Satya Nadella took the reins at Microsoft, large portions of the company’s core .NET technology were released as open source. Last April, the awkward Microsoft Open Technologies construct was decommissioned and brought back into the fold. Seven months after that, Microsoft inked a partnership with open source standard bearer Red Hat, one that president of product and technology Paul Cormier “never would have thought we’d do.” And most recently, the company’s Visual Studio Code project – built on Google’s Chromium among other pieces of existing open source technology – was itself open sourced in a bid to make the editor truly cross-platform.

It can certainly be argued (and was by RedMonk internally) that many of these are simple and logical decisions that should have been made years ago. It’s also important to note that Microsoft’s twin mints, Office and Windows, remain proprietary in spite of public comments contemplating the alternative. All of that being said, however, it’s difficult to argue the point that on multiple levels, it is, as engineer Mark Russinovich says in the above linked piece, “a new Microsoft.”

The Net

What does it mean when an organization that saw software as an asset worth protecting commits to open source? Or one that viewed software as the ends rather than the means and had tens of billions of dollars worth of evidence supporting this conclusion? The short answer is that it means that open source is being viewed more rationally and dispassionately than we’ve seen since the first days of the SHARE user group.

Open source is being viewed, increasingly, as neither an existential threat nor an ideological movement but rather an approach whose benefits frequently outweigh its costs. There’s a long way to go before these concepts become truly ubiquitous, of course. Even if the most anti-open source software vendors are beginning to come around, the fact that the announcement of Capital One’s Hygieia project was considered so unusual and newsworthy suggests that enterprises are lagging the vendors that supply them in their appreciation for open source.

But if the above generational classifications begin to break down in favor of nuanced, strategic incorporation of open source, that will be a good thing for the market as a whole, and for the developers that make it run.

Disclosure: Neither Google nor Microsoft is a RedMonk customer at present.

Categories: Open Source.

Crossing the Amazon: IBM in an Age of Disruption

cloud formation over amazon river

The Wired headline in April of this year read, “Amazon Reveals Just How Huge the Cloud Is for Its Business.” The numbers for AWS were $4.6B for 2014, up 49% from the year before and on track to hit $6.23B by year’s end. The TechCrunch headline from October was “Amazon’s AWS Is Now A $7.3B Business As It Passes 1M Active Enterprise Customers.” Revenue at $7.3B, not $6.23B. A growth rate no longer of 49%, but 81%.

It is the velocity and trajectory of this business that has everyone in the industry spooked and valuations of the business formerly relegated to the “other” revenue category on financial statements accelerating. Even after seeing sales shrink for 14 consecutive quarters, after all, and amidst calls to rebrand the company from Big Blue to Medium Blue, each of IBM’s non-finance business units generated more revenue in 2014 than AWS projects to generate this year. Three out of the four were a multiple of the seven billion figure: GTS was ~$37B, Software $25B and GBS came in at ~$18B.

But the market and evaluators alike are less concerned, at least in the case of Amazon and IBM, with present day revenue figures than how they project to change over time, hence the euphoric AWS headlines and the quarterly pillorying IBM receives. What IBM is going through at present, in fact, suggests that Michael Dell’s original decision to take his firm private was a wise one.

Market disruption is a violent process, and surviving it can be almost as drawn out and painful as succumbing to it. As IBM knows, of course, having been one of the few companies to reinvent itself more than once. Expecting the same patience from investors, however, is a lot to ask, particularly in an age of activist shareholders carrying Damoclean swords.

If bullish perceptions of cloud native players, Amazon and otherwise, are driven by expectations of future returns driven by current models, however, it is perhaps worth taking a step back and evaluating IBM’s current models rather than current returns. The question is how should IBM, or companies in IBM’s position, respond to the macro-market factors currently disrupting its businesses.

From a high level, all of the incumbent systems players – from Cisco to Dell/EMC to HP to IBM to Oracle – need to recognize, among other market dynamics, the following:

  • Between the ascendance of ODMs and the explosion of IaaS, the market for premium low end hardware is gone. What hardware growth there is will come from the cloud – just ask Amazon, Google or Microsoft.
  • Traditional perpetual license software models are not gone, but in systemic decline. Customers instead are shifting to services-based models, with additional value adds from data (both collected and sourced).
  • Open source and commodity services have offered customers some relief from lock-in, but it remains as closely tied to profit as Shapiro and Varian described in 1999. This implies that while it’s important to offer commodity entrypoints, higher-end proprietary services will be critical to both profit and retention.
  • New market conditions require new partners.

Measured by these criteria, at least, IBM is making logical adjustments to its businesses.

  • Low-end hardware businesses have been divested, and investments redirected to potential growth businesses such as Softlayer.
  • An increasing emphasis within its software business is on services, e.g. Bluemix, acquisitions like Cloudant/Compose/etc, or the just announced Spark-as-a-Service.
  • Proprietary or exclusive offerings such as Watson or the Twitter and Weather Company partnerships offer IBM the ability to upsell customers to higher margin services that are more difficult to replicate externally.
  • IBM’s partnership with Apple gives it a premium mobile hardware story, and Box CEO Aaron Levie was prominently on display at Insight.

From a directional standpoint, IBM appears to be responding to the systemic disruption across its footprint with a combination of internal innovation (Watson), open source (Cloud Foundry, Node, OpenStack) and inorganic acquisition (Bluebox, Cloudant, Softlayer, etc). Betting on cloud over traditional hardware, or SaaS rather than shrink-wrapped software may not seem aggressive to independent industry observers for whom the writing has been on the wall since halfway through the last decade, but the larger the business the more difficult it is to turn.

Much of IBM’s ability to reverse its recent downward financial trend, then, depends on its ability to execute in the emerging categories on which it has placed its bets. Some adjustments are clearly necessary. A heavy majority of the airtime at its Insight show this week, for example, has been devoted to its Watson product. While the artificial intelligence-like offering is intriguing and differentiated, however, as a business tool it’s a major marketing challenge. Positioning compute instances or databases offered as a service is a simple exercise. Explaining to audiences what “cognitive computing” means is non-trivial. Not least because unlike cloud, IBM is trying to push that rock up a hill by itself. Strangely, however, the company seems intent on leading with the most difficult to market product, rather than using more widely understood cloud or SaaS businesses as an on ramp and using Watson as a secondary differentiator. It would be as if AWS led with Machine Learning and mentioned, after the fact, that EC2 and RDS were available as well.

That being said, marketing and positioning is a solvable problem if the strategic direction is correct. And 14 quarters of declining revenue or no – remember that as AWS itself demonstrates, revenue is a lagging indicator – IBM is in fact making changes to its strategic direction. The company just makes it harder than it needs to be to see that at times.

Whether they can execute on these new directions, however, is what will determine whether the company’s turnaround is successful.

Disclosure: Amazon, Cisco, Dell, HP, IBM, and Oracle are RedMonk customers. Google and Microsoft are not current customers.

Categories: Cloud, Conferences & Shows.

All In: On Amazon, Dell and EMC

Datacenter Work

In her 1969 book, On Death and Dying, the Swiss psychiatrist Elisabeth Kübler-Ross attempted to capture and document the emotions most frequently experienced by terminally ill patients. The model is famous today, of course. Even if you don’t remember the model’s name, you’ll probably recall that individuals faced with a life-threatening or altering event are expected to experience a series of five emotions: denial, anger, bargaining, depression and acceptance. Though the model’s accuracy has been challenged and research doesn’t support it as either definitive or all-encompassing, its utility has sustained it through the present day.

While there are significant differences between corporate entities and human beings, Citizens United notwithstanding, there are interesting parallels between organizations faced with the threat of disruption and people faced with disruption’s human equivalent, death.

If you listen to incumbents talk about their would-be disruptive competitors year after year, for example, specific, industry-wide patterns begin to emerge. Patterns which, as with the Kübler-Ross model, progress in stages. Talk to any given incumbent about would-be disruptors, and chances are good the conversation will follow the same basic pattern as every other incumbent’s; the timing of the conversational stages may vary, the substance almost never.

  • Stage 1: “I’ve never heard of that company.”
  • Stage 2: “Yes I’ve heard of them, but we’re not seeing them in any deals.”
  • Stage 3: “They’re beginning to show up in deals, but they’re getting killed.”
  • Stage 4: “They’re growing, but it’s all small deals and toy apps, they don’t get the enterprise.”
  • Stage 5: “Here’s how we compete against them in the enterprise.”

As with a patient facing a life-threatening diagnosis, the threat is difficult to acknowledge, let alone process. Acceptance, therefore, is arrived at but gradually.

Which brings us, oddly enough, to Amazon. Even shortly after S3 and EC2 debuted in March and August of 2006, respectively, it was evident that these services – their relatively primitive states notwithstanding – were strategically significant. The reaction of incumbents at the time? “I’ve never heard of Amazon Web Services.” Or if the company representative was especially progressive, “Yes I’ve heard of Amazon Web Services, but we’re not seeing them in any deals.”

Five years ago last month, the only real surprise left was the lack of apparent concern about Amazon from the market incumbents it was busily disrupting. Here was a company that was quite obviously a clear and present danger, but much of the industry seemed stuck on the idea of Amazon as a mere retailer. Where companies should have been moving in earnest, what you’d hear most often was “Amazon Web Services is beginning to show up in deals, but they’re getting killed.”

In the years since, belated recognition of the threat posed has triggered massive responses. While claiming publicly that “Amazon Web Services is growing, but it’s all small deals and toy apps, they don’t get the enterprise,” behind the scenes massive investments in datacenter buildouts were underway, and organizations attempted to quickly retool to embrace and fight the cloud simultaneously.

None of those responses, however, are more massive than the announcement that Dell is acquiring EMC. Should the transaction close, at $67B it would be larger than the second largest technology acquisition – HP/Compaq – by a factor of two if you account for inflation, nearly three if you don’t. The obvious question is what this all has to do with Amazon.
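Those multiples check out on a napkin. In this sketch, the ~$25B announced value for HP/Compaq and the rough 2001-to-2015 CPI factor are my assumptions, not figures from the deal coverage:

```python
# Rough sanity check of the claim that a $67B Dell/EMC deal is ~2x the
# HP/Compaq acquisition inflation-adjusted, and nearly 3x in nominal terms.
dell_emc = 67.0     # $B, 2015 announced value
hp_compaq = 25.0    # $B, 2001 announced value (commonly cited; an assumption)
cpi_factor = 1.34   # approximate CPI inflation, 2001 -> 2015 (an assumption)

adjusted = hp_compaq * cpi_factor   # HP/Compaq in 2015 dollars, ~$33.5B
print(f"{dell_emc / adjusted:.1f}x inflation-adjusted, "
      f"{dell_emc / hp_compaq:.1f}x nominal")
# -> 2.0x inflation-adjusted, 2.7x nominal
```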

On the surface, it might seem that the answer is very little. Dell went private in 2013 and so the numbers we have are old, but as of two years ago the revenue Dell derived from its traditional enterprise lines of business – servers, networking, storage and services – was $19.4B. The numbers for its traditional PC business – desktop PC and mobility – were $28.3B, or nearly fifty percent higher. The problem for Dell, and one of the reasons it was making a big push amongst analysts at the time around its enterprise business, was the relative trajectories of the revenue streams. Even with modest to negative growth from its services (1%) and storage (-13%) businesses, servers and networking buoyed its enterprise business to 4% growth from the year prior. Mobility and PC returns over that same span were down 15%. All of which makes the decision to go private straightforward: it was going to get worse before it got better.

If Dell was going to bet on a business moving forward, then, it had essentially two obvious paths it could follow. Behind door number one was doubling down on the PC and mobile markets. The former market is being actively hollowed out, with both volume and margin cratering. In the latter, Apple effectively owns the entirety of the market’s profits.

Dell’s messaging and behavior, both before and after its decision to escape the limelight of the public market, suggested that Dell had picked door number two. Dating back at least to the 2009 $4B acquisition of Perot Systems, Dell has had ambitions of moving upmarket from the increasingly problematic fortunes and falling margins of the PC business. Every acquisition since then, in fact, has been in service of the company’s enterprise ambitions.

In the context of 2008 and 2009, this directional shift was understandable. Amazon was growing fast, but unless you were paying close attention to the new kingmakers – the developers who were inhaling cloud services – its significance was not apparent. Certainly very few boards understood on a fundamental level the threat that cloud infrastructure and services would pose to their proven enterprise offerings.

The question facing Dell is whether the strategy that made sense in 2008 or 2009 makes sense today. As current EMC CEO Joe Tucci said in announcing the news, “The waves of change we now see in our industry are unprecedented and, to navigate this change, we must create a new company for a new era.” It is true as many observers have noted that this announcement is about more than AWS.

But it is also true that the new era Tucci referred to is increasingly defined by AWS. Witness the AWS announcements at re:Invent last week. It has been understood for a while that commodity servers and storage were vulnerable; if Dell going private wasn’t evidence enough, IBM’s deprecation of its x86 business and HP’s struggles with same should be. Many enterprise providers believed, however, that higher margin software areas were outside the scope of Amazon’s ambitions. This was a mistake. At re:Invent, AWS reconfirmed with offerings like QuickSight that there is very little outside the scope of their ambitions. Traditional enterprise providers must expect AWS to use its core services as a base of operations from which to target any and all adjacent enterprise software categories that promise sufficient margin.

When you couple the accelerating demand for and comfort level with infrastructure and software as a service with widening enterprise-appropriate portfolios, it is indeed a new era, and one in which many traditional suppliers are playing catch up. To borrow Reed Hastings’s metaphor, Amazon is becoming the enterprise incumbents faster than they are becoming Amazon. Much faster.

The addition of EMC would obviously bring Dell a variety of assets that could be deployed toward a number of ends. EMC plugs Dell’s most obvious infrastructure hole, and the company owns stakes in key software entities in Pivotal and VMware among others – the former reportedly expected to go public, the latter expected to be kept that way. The addition of EMC, however, better equips Dell to compete against the likes of Cisco, HP, IBM and Oracle than against Amazon.

Which implies that the combined entity’s short term strategy will be competing against the traditional players for the enterprise dollar. Longer term, however, it will be interesting to see how it leverages its assets to compete in an increasingly cloudy world. Given the size of this deal, acquisitions that would move the needle from a cloud perspective can probably be ruled out. Which in turn means that any major push from the new Dell into the cloud – presuming there is one, eventually – will have to come from within or via the acquisition of much smaller players.

Given the meteoric rise of Amazon and widely assumed growth in demand for cloud services, it’s easy to criticize this acquisition, as many have, on the basis that it doesn’t make Dell an immediate alternative to the major public cloud suppliers. It is less obvious, however, whether another acquisition would. On paper, players like CenturyLink ($14.45B market cap) could potentially be acquired for less than half the cost of EMC and bring with them a wide portfolio of infrastructure capabilities from bare metal through PaaS. In the real world, however, it’s difficult to imagine a company whose DNA, dating back to its dorm room founding, is manufacturing hardware for customers making a success of an acquisition that would be, for all practical purposes, a pivot into the cloud.

Instead, Dell went all in on building an organization that could more effectively compete with the traditional enterprise players. How they’ll all fare in the new era that Tucci referred to is the question.

Disclosure: Amazon, CenturyLink, Cisco, Dell, EMC, IBM and HP are all RedMonk customers. Google and Microsoft are not current customers.

Categories: Cloud, Hardware.

The 2015 Monktoberfest

As pre-conference headlines go, “astronomical tides,” “biblical rain” and “massive coastal flooding” would not be high on my list. Particularly if your conference is, like the Monktoberfest, on the coast. At a distance from it measurable in feet, in fact. The rains were so bad on the Wednesday before the Monktoberfest that, for a brief period, it was not clear that I would be able to make it back to Portland from Freeport, where I was picking up the last of the conference supplies. According to the locals on our Monktoberfest Slack instance, most of the major arteries into the city from Franklin Street to Forest Avenue had leveled up to actual rivers.

The Whole Foods in Portland, which is less than a mile from my office and the conference venue both, is on Franklin. It looked like this a bit before noon on Wednesday.

When that’s the scene a few hours before you’re supposed to host a chartered cruise to welcome your inbound attendees, things get interesting. Forecasts are consulted, phone calls are made, emails are answered and tweets are sent. The meteorologists assured us, however, that the worst was behind us and that the rain would blow through. Which, for once, is exactly what happened.

By five thirty, we were still looking at a lot of clouds but they’d quit actively dumping water on us. We were even treated to an actual sunset.

The moment the boat – biblical rains that day or no – nimbly pushed back from the dock, everything was set in motion. The Monktoberfest at that point began its work, and arguably its most important function: connecting and re-connecting the people who take the time out of their schedule to be with us up here in Maine. On the boat, at dinner afterwards, and at Novare Res late that night, the kinds of conversations that people only have in person were had. Repeatedly.

At 10 AM the next day, we gathered at the Portland Public Library, as we have every year, to listen to talks, to contribute to talks with questions, and to meet each other. Over the next day and a half, we had talks on everything from building a volunteer legion and open APIs/platforms to medievalism in gaming and brewing beer with Cylon.js and Raspberry Pis, from being an impostor to the economics of the hop industry. Our speakers were, as always, prepared, unique and excellent. And before you ask, yes, all of the talks were filmed and will be available later.

The Monktoberfest is, as the saying goes, a labor of love. Like any other conference, it involves hundreds of hours of labor on the part of a great many people. But we love it. We hope the attendees do too, of course, and every year it is reactions like this that make it all worthwhile.

I say this every year because it’s true: it’s the people that make this event worth it. Every Monktoberfest the people who help put it on ask me about how the group we have assembled can possibly be so friendly. My answer is simple:

If I read that from someone else I’d dismiss it as hyperbole. I had a difficult time explaining that, for example, to Whit Richardson, a reporter for the Portland Press Herald, who stopped by to talk about the event with me.

But the simple fact is that it’s not hyperbole. That description is verbatim what I am told, year in and year out, by our caterers, by the people we have staffing the show, by Ryan and Leigh and by all of the people new to our event. Exactly how we end up with such a good group is a mystery to me, but I certainly appreciate it.

The Credit

I said this at the show, but it’s worth repeating: the majority of the credit for the Monktoberfest belongs elsewhere. My sincere thanks and appreciation to the following parties.

  • Our Sponsors: Without them, there is no Monktoberfest
    • HP Helion: We can’t make the investments in food, drink and venue that have come to characterize the Monktoberfest without a lot of help. We were very grateful that HP Helion stood up and made a major commitment to the conference. They’re one of the main reasons there was a Monktoberfest, and that we could deliver the kind of experience you’ve come to expect from us.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the only sponsor to have been with us all five years, if I’m not mistaken – helps us make the show possible.
    • EMC{code}: The fact that we’re able to serve you everything from Damariscotta river oysters to lobster sliders is thanks to EMC{code}’s generous support.
    • Blue Box: We should first thank Poseidon that we were able to get out on the water at all, but once he cleared the weather for us Blue Box was the support we needed for our welcome cruise.
    • Apprenda: Hopefully your brilliant new Libby 16oz tulips made it home safely. When you get a chance, thank the good folks at Apprenda for them.
    • DEIS: Of all of our sponsors, none was quite so enthusiastic as the DEIS project. They sponsored coffee, breakfast, snacks and they bought you a round. Food, coffee and beer make them one of the conference MVPs.
    • Cisco DevNet: Got some bottles while you were out and need to open them? Thank the team over at Cisco DevNet for your bar quality spinner.
    • Oracle Technology Network / Pivotal: Maybe you enjoyed the Allagash peach sour. Maybe it was the To Øl citra pale. Or the Lervig/Surly imperial black ale. Whichever it was, these beers were brought to you by the Oracle Technology Network and the team at Pivotal.
    • CircleCI: Our coffee, supplied to us by Arabica, got excellent reviews this year. Part of the reason it was there? CircleCI.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again and bringing you free books.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they really are one of the best craft beer teams not just in this country, but the world. As I told them, we could not do this event without them; before I even start planning the Monktoberfest, in fact, I check to make sure they’re available. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret to be with us.
  • Lurie Palino: Lurie and her catering crew did an amazing job for us, delivering a fantastic event yet again as she does every year. With no small assist from her husband, who caught the lobsters, and her incredibly hard-working crew at Seacoast Catering.
  • Kate: Besides having a full time (and then some) job, another part time job as our legal counsel, and – new for this year! – being pregnant, Kate did yeoman’s work once more in designing our glasses and fifth year giveaway, coordinating with our caterer, working with the venues, and more. How she does it all is beyond me. As I like to say, the good ideas you enjoy every year come from here. I can never thank her enough.
  • Rachel: Knowing that Kate was going to be incapacitated to some degree by her pregnancy, we enlisted Rachel’s assistance to share some of the load. Little did we know that we were going to get one of the most organized and detail-oriented resources in existence. Every last detail was tracked, interaction by interaction, in GitHub, down to the number and timing of reminder phone calls made. We couldn’t have done this without Rachel.
  • The Staff: Juliane did her usual excellent job of working with sponsors ahead of the conference, and with James secured and managed our sponsors. She also had to handle all of the incoming traffic while we were all occupied with the conference. Marcia handled all of the back end logistics as she does so well. Celeste, Cameron, Kim and the rest of the team handled the chaos that is the event itself with ease. We’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: The Alchemist was fantastic as always about making sure that our attendees got some of the sweet nectar that is Heady Topper, and Mike Guarracino of Allagash was a huge hit attending both our opening cruise and hosting us for a private tour on Friday afternoon after the conference ended. Oxbow Brewing, meanwhile, did a marvelous job hosting us for dinner. Thanks to all involved.

On a Sadder Note


The first year that we held the Monktoberfest – before there was such a thing as the Monktoberfest, in fact – Alex King offered to help. Some of you might know Alex from his early work as a committer on WordPress. Others from Tasks Pro. Or FeedLounge. Or the now ubiquitous Share This icon. Or Crowd Favorite. Anyway, you see where I’m going: Alex was a legitimately big deal professionally, yet still happy to help me get a small event off the ground. His team produced the t-shirt design that has been used every year of the show. His company Crowd Favorite was our first sponsor, and sponsored every year that Alex ran the company. And he attended and evangelized each and every show.

He was, in many respects, the conference’s biggest supporter.

In July, he called to tell me that for the first time he was not going to be able to make the conference – but used the opportunity to keep supporting us. On September 27th, three days before the conference he helped build began, Alex passed away after a long fight with cancer. I did my best to tell our attendees who he was and what he had accomplished, but to my discredit I could not hold it together long enough. The best I could do was call for a moment of silence.


I’ll have more to say about Alex, but in the meantime it is my hope that everyone who wears their 2015 Monktoberfest shirts for years to come will see the crown on the sleeve and be reminded of Alex King – a man who helped ensure there was a Monktoberfest, and a man who was my friend.

Categories: Conferences & Shows.