
Will Python Kill R?

In an article entitled “Python Displacing R As The Programming Language For Data Science,” MongoDB’s Matt Asay made an argument that has been circulating for some time now. As Python has steadily improved its data science credentials, from NumPy to pandas, with even R’s dominant ggplot2 charting library having been ported, its viability as a real data science platform improves daily. More than any other language, in fact, save perhaps Java, Python is rapidly becoming a lingua franca, with footholds in every technology arena from the desktop to the server.

The question, per yesterday’s piece, is what this means for R specifically. Not surprisingly, as a debate between programming languages, the question is not without controversy. Advocates of one platform or the other have taken to Twitter to argue for or against the hypothesis, sometimes heatedly.

Python advocates point to the flaws in R’s runtime, primarily performance, and its idiosyncratic syntax. Which are valid complaints, speaking as a regular R user. They are less than persuasive, given that clear, clean syntax and a fast runtime correlate only weakly with actual language usage, but they certainly represent legitimate arguments. More broadly, and more convincingly, others assert that over a long enough horizon, general purpose tools typically see wider adoption than specialized alternatives. Which is, again, a substantive point.

R advocates, meanwhile, point to R’s anecdotal but widely accepted traction within academic communities. As an open source, data-science-focused runtime with a huge number of libraries behind it, R has been replacing tools like MATLAB, SAS, and SPSS within academic settings, both in statistics departments and outside of them. R’s packaging system (CRAN), in fact, is so extensive that it contains not only libraries for operating on data, but datasets themselves. Not only does it contain datasets for individual textbooks used in academia, it will store different datasets by the edition of those textbooks. An entire generation of researchers is being trained to use R for their analysis.

Typically this is the type of subjective debate which can be examined via objective data sources, but comparing the trajectories is problematic and potentially not possible without further comparative research. RStudio’s Hadley Wickham, creator of many of the most important R libraries, examined GitHub and StackOverflow data in an attempt to apply metrics to the debate, but all the data really tells us is that a) both languages are growing and that b) Python is more popular – which we knew already. Searches of package popularity likewise are unrevealing; besides the difficulty of comparing runtimes due to the package-per-version protocol, there is the contextual difficulty of comparing Python to R. Python represents a superset of R use cases. We know Python is more versatile and applicable in a much wider range of applications. We also know that in spite of Python’s recent gains, R has a wider library of data science libraries available to it.

My colleague Donnie Berkholz points to this survey, which at least is context-specific in its focus on languages employed for analytics, data mining, and data science. It indicates that R remains the most popular language for data science, at 60.9% to Python’s 38.8%. And for those who would argue that current status is less important than trajectory, it further suggests that R actually grew at a higher rate this year than Python – 15.1% to 14.2%. But without knowing more about the composition and sampling of the survey audience, it’s difficult to attribute too much importance to the result. Granted, it’s context-specific, but we have no way of knowing whether the audience surveyed is representative or skewed in one direction or another.

Ultimately, it’s not clear that the question is answerable with data at the present time. Still, a few things seem clear. Both languages are growing, and both can be used for data science. Python is more versatile and widely used, R more specialized and capable. And while the gap has been narrowing as Python has become more data science capable, there’s a long way to go before it matches the library strength of R – which continues to progress in the meantime.

How you assess the future path depends on how you answer a few questions. At RedMonk, we typically bet on the bigger community, but that’s not as easy here. Python’s total community is obviously much larger, but it seems probable that R’s community, which is more or less strictly focused on data science, is substantially larger than the subset of the Python community specifically focused on data. Which community do you bet on then? The easy answer is general purpose, but that undervalues the specialization of the R community on a discipline that is difficult to master.

While the original argument is certainly defensible, then, I find it ultimately unpersuasive. The evidence isn’t there, yet at least, to convince me that R is being replaced by Python on a volume basis. With key packages like ggplot2 being ported, however, it will be interesting to watch for any future shift.

In the meantime, the good news is that users do not need to concern themselves with this question. Both runtimes are viable as data science platforms for the foreseeable future, both are under active development and both bring unique strengths to the table. More to the point, language usage here does not need to be a zero sum game. Users that wish to leverage both, in fact, may do so via the numerous R<==>Python bridges available. Wherever you come down on this issue, then, rest assured that you’re not going to make a bad choice.
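For a concrete, if deliberately minimal, sense of what moving between the two runtimes can look like, here is a dependency-free Python sketch that serializes a small table in a form R’s built-in read.csv() can consume. In-process bridges such as rpy2 avoid this round-trip entirely; the column names and values below are invented for illustration.

```python
import csv
import io

# Build a small "data frame"-shaped table in Python. These rows and
# column names are purely illustrative.
rows = [{"x": 1, "y": 2.5}, {"x": 2, "y": 3.5}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["x", "y"])
writer.writeheader()
writer.writerows(rows)

csv_text = buf.getvalue()
# An R session could consume the same bytes with: df <- read.csv("data.csv")
print(csv_text.splitlines()[0])  # x,y
```

A file-based round-trip like this is the lowest common denominator; the in-process bridges mentioned above trade that simplicity for the ability to share objects directly between the two interpreters.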

Disclosure: I use R daily, I use Python approximately monthly.

Categories: Programming Languages.

The Difficulty of Selling Software

On the surface, this statement by Asymco analyst Horace Dediu is clearly and obviously false. For 2013, Microsoft’s Windows and Business (read: Office) divisions alone generated, collectively, $44B in revenue. This number was up around 4% from the year before, after being up 3% in 2012 versus the year prior. This comment, in other words, is easily dismissed as hyperbole.

But given the overwhelming evidence contradicting the above statement, and his familiarity with capital markets, it’s highly unlikely that Dediu would be unaware of this. Which makes it reasonable to conclude that he did not intend for the statement to be interpreted literally. Which in turn implies that Dediu is making a directional statement rather than a literal description of the market reality.

Even if one gives Dediu the benefit of the doubt, for the sake of argument, and assumes subtlety, the next logical counterargument is that he’s unduly influenced by his focus on consumer markets. The trend there, after all, is clear: the majority of available consumer software is subsidized by either advertising (e.g. Facebook, Google, Twitter) or hardware (e.g. Apple). More to the point, both of these models are attempting to exert pressure on the paid software model, as in the case of Apple’s iWork and Google Docs competing for mindshare with the non-free Microsoft Office, or the now free OS X (non-server) positioned against the non-free Microsoft Windows. Even in hot application spaces like mobile, it’s getting increasingly difficult to commercialize the output.

If this is your analytical context, then – and certainly Dediu’s primary focus (Asymcar notwithstanding) is on Apple and markets adjacent to Apple – the logical conclusion is indeed that software prices are heading towards zero in most categories, and that software producers need to adjust their revenue models accordingly.

No surprise then that it is by labeling the decline of realizable revenues as a consumer software-only phenomenon that enterprise providers are able to reassure both themselves and the market that they are uniquely immune, insulated from an erosion in the valuation of software as an asset by factors ranging from the price insensitivity and inertia of enterprise buyers to technical and/or practical lock-in. And to be fair, enterprise software markets are eminently more margin-oriented than consumer alternatives, not least because businesses are used to regarding technology as a cost of doing business. For consumers, it has historically been more of a luxury.

But the fact is that the assertion that it’s getting more difficult to charge for software is correct, as we have been arguing since 2010/2011.

The surface evidence, once again, contradicts this claim. Consider the chart of Oracle’s software revenue below.

This, for Oracle, is the good news. With few exceptions, notably a market correction following the internet bubble, Oracle has sustainably grown its software revenue every year since 2000. The Redwood Shores software giant, in fact, claimed in October that it was now the second largest software company in the world by revenue behind Microsoft, passing IBM. If a company that large can continue to generate growth, year after year, it’s easy to vociferously argue that the threat of broader declines in the viability of commercial software-only models is overblown. But this behavior, common to software vendors today, increasingly has a whistling-past-the-graveyard ring to it.

Whatever your broader thoughts on the mechanics of Dediu-mentor and Harvard Business School professor Clayton Christensen’s theory of disruption, history adequately demonstrates that even highly profitable, revenue generating companies are vulnerable. Oracle, for example, is challenged as a software-sales business by a variety of actors, from open source projects to IaaS and SaaS service-based alternatives. To its credit, the company has hedges against both in BerkeleyDB/MySQL/etc and its various cloud businesses. It’s not clear, however, that even collectively these could offset any substantial impact to its core software sales business – while not broken out, MySQL presumably generates far less revenue than the flagship Oracle database. Software was 67% of Oracle’s revenue in 2011, a year after it acquired Sun Microsystems and its hardware businesses. In 2013, software comprised 74% of Oracle’s revenue.

The question for Oracle and other companies that derive the majority of their income from software, rather than with software, is whether there are signs underneath the surface revenue growth that might reveal challenges to the sustainability of those businesses moving forward. Consider Oracle’s 10-K filings, for example. Unusually, as discussed previously, Oracle breaks out the percentage of its software that derives from new licenses. This makes it easier to document Oracle’s progress at attracting new customers, and thereby the sustainability of its growth. The chart below depicts the percentage of software revenue Oracle generated from new licenses from 2000-2013.

There are a few caveats to be aware of. First, there are contradictions in the 2002 and 2003 10-Ks; second, where the 2012 10-K reported “New software licenses,” the 2013 10-K is now terming this “New software licenses and cloud software subscriptions.” With those in mind, the trendline here remains clear: Oracle’s ability to generate new licenses is in decline, and has been for over a decade. At 38% in 2013, the percentage of revenue Oracle derives from new licenses is a little more than half of what it was in 2000 (71%). Some might attribute this to the difficulty large incumbents face in organically generating new business, but in the year 2000 Oracle was already 23 years old.
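The arithmetic behind that comparison is simple enough to sketch, using the 10-K percentages cited above:

```python
# Share of Oracle software revenue derived from new licenses, per the
# 10-K figures discussed above.
share_2000 = 0.71
share_2013 = 0.38

ratio = share_2013 / share_2000
print(f"2013 share is {ratio:.1%} of the 2000 share")  # 53.5%
```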

What this chart indicates, instead, is that Oracle’s software revenue growth is increasingly coming not from new customers but from existing ones. Which is to the credit of Oracle’s salesforce, in spite of what the company characterized as their “lack of urgency.”

It may not be literally true, as Dediu argued above, that you can’t charge for software anymore. But it’s certainly getting harder for Oracle. And if it’s getting harder for Oracle, which has a technically excellent flagship product, it’s very likely getting harder for all of the other enterprise vendors out there that don’t break out their new license revenues as Oracle does. This is not, in other words, an Oracle problem. It’s an industry problem.

Consumer software, enterprise software: it doesn’t much matter. It’s all worth less than it was. If you’re not adapting your models to that new reality, you should be.

Disclosure: Oracle is not a RedMonk client. Microsoft has been a RedMonk client but is not currently.

Categories: Business Models, Cloud, Databases, Open Source, Software-as-a-Service.

The Questions for Hadoop Moving Forward

Strata + Hadoop World New York 2013

In the beginning – October, 2003 to be precise – there was the Google File System. And it was good. MapReduce, which followed in December 2004, was even better. Together, they served as a framework for Doug Cutting’s original work at Yahoo, work that resulted in the project now known as Hadoop in 2005.

After being pressed into service by Yahoo and other large web properties, Hadoop’s inevitable standalone commercialization arrived in the form of Cloudera in 2009. Founded by Amr Awadallah (Yahoo), Christophe Bisciglia (Google), Jeff Hammerbacher (Facebook) and Mike Olson (Oracle/Sleepycat) – Cutting was to join later – Cloudera oddly had the Hadoop market more or less to itself for a few years.

Eventually the likes of MapR, Hortonworks, IBM and others arrived. And today, any vendor with data processing ambitions is either in the Hadoop space directly or partnering with an entity that is – because there is no other option. Even vendors with no major data processing businesses, for that matter, are jumping in to drive other areas of their business – Intel being perhaps the most obvious example.

The question today is not, as it was in those early days, what Hadoop is for. In the early days of the project, many conversations with users about the power of Hadoop would stall when they heard words like “batch” or compared MapReduce to SQL (see Slide 22). Even already on-board employers like Facebook, meanwhile, faced with a market shortage of MapReduce-trained candidates, were forced to write alternative query mechanisms like Hive themselves. All of which meant that conversations about Hadoop were, without exception, conversations about what Hadoop was good for.
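For context on why those conversations stalled, the MapReduce programming model Hadoop popularized can be approximated in a few lines of plain Python. This is a toy, single-process analogue for illustration only, not Hadoop itself:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word, as a Hadoop mapper would.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Reducer: sum the counts for each key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data is big", "data is data"]
word_counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(word_counts)  # {'big': 2, 'data': 3, 'is': 2}
```

On a real cluster, a shuffle step between the two phases groups pairs by key across machines; the friction users felt was that even a query this trivial had to be expressed as explicit map and reduce functions rather than a one-line SELECT.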

Today, the reverse is true: it’s more difficult to pinpoint what Hadoop isn’t being used for than what it is. There are multiple SQL-like access mechanisms, some, like Impala, driving towards lower and lower latency queries, and Pivotal has even gone so far as to graft a fully SQL-compliant relational database engine onto the platform. Elsewhere, projects like HBase have layered federated database-like capabilities onto Hadoop’s core HDFS foundation. The net of which is that Hadoop is gradually transitioning away from being a strictly batch-oriented system aimed at specialized large dataset workloads and into a more mainstream, general purpose data platform.

The large opportunity that lies in a more versatile, less specialized Hadoop helps explain the behavior of participating vendors. It’s easier to understand, for example, why EMC is aggressively integrating relational database technology into the platform if you understand where Hadoop is going versus where it has been. Likewise, Cloudera’s “Enterprise Data Hub” messaging is clearly intended to achieve separation from the perception that Hadoop is “for batch jobs.” And the size of the opportunity is the context behind IBM’s comments that it “doesn’t need Cloudera.” If the opportunity, and attendant risk, were smaller, IBM would likely be content to partner. But it is not.

Nor is innovation in the space limited to those who would sell software directly; quite the contrary, in fact. Facebook’s Presto is a distributed SQL engine built directly on top of HDFS, and clones of Google’s Spanner et al. are as inevitable as Hadoop once was. Amazon’s Redshift, for its part, is gathering momentum amongst customers who don’t wish to build and own their own data infrastructure.

Of course, Hadoop could very well be years behind Google from a technology perspective. But even if the Hadoop ecosystem is the past to Google, it’s the present for the market. And questions about that market abound. How does the market landscape shake out? Are smaller players shortly to be acquired by larger vendors desperate not to be locked out of a growth market? Will the value be in the distributions, or higher level abstractions? How do broadening platform strategies and ambitions affect relationships with would-be partners like a MongoDB? How do the players continue to balance the increasing trend towards open source against the need to optimize revenue in an aggressively competitive market? Will open source continue to be the default, baseline expectation, or will we see a tilt back towards closed source? Will other platforms emerge to sap some of Hadoop’s momentum? Will anyone seriously close the gap between MapReduce/SQL analyst and Excel user from an accessibility standpoint?

And so on. These are the questions we’re spending a great deal of time exploring in the wake of the first Strata/HadoopWorld in which Hadoop deliberately and repeatedly asserted itself as a general purpose technology. From here on out, the stakes are higher by the day, and margin for error low. To she who gets more of the answers to the above questions correct go the spoils.

Categories: Big Data, Open Source.

The Depth of Amazon’s Ambition

Not surprisingly for an organization that has updated its product line 200 times this year as of the first of the month, Amazon had a few tricks up its sleeve for its annual re:Invent conference. For the company that effectively created the cloud market, the show was an important one for showcasing the sheer scope of Amazon’s targets.

Amazon is correctly regarded as one of the fastest innovating vendors in the world, with the release pace up over 500% from 2008 through last year. And if Amazon keeps up its pace for releases through the end of the year, it will have released 36% more features this year than last.

But as impressive as the pace is, the more impressive – and potentially more important – aspect to their release schedule is its breadth. Consider what Amazon announced at re:Invent:

  • AppStream (Mobile/Gaming)
  • CloudTrail (Compliance and Governance)
  • Kinesis (Streaming)
  • New Instance Types in C3/I2 (Performance compute)
  • RDS Postgres (Database as a Service)
  • WorkSpaces (VDI)

The majority of cloud vendors today are focused on executing with core cloud workloads, or basic compute and storage. There are certainly players focused on adding value through differentiated, specialized technologies – such as Joyent with its distributed-Unix, data-oriented Manta offering, or ProfitBricks with its scale-up approach – but these are the exception rather than the rule. Whether it’s public cloud providers or enterprises attempting to build out private cloud capabilities, most of the focus is on simply keeping the lights on.

At re:Invent, Amazon did upgrade its traditional compute offerings via C3/I2, but also signaled its intent to embrace and extend entirely new markets. Most obviously, Amazon has, with WorkSpaces, turned its eye towards VDI, for years a market long on promise but short on traction. The theoretical benefits of VDI, from manageability to security, have to date rarely outweighed the limitations and costs of delivery, making it the Linux desktop of IT – with success always just over the horizon. Amazon’s bet here is that by removing the complexity of execution it can engage with customers in a manner that its core cloud businesses cannot, and thereby grow its addressable market in the process.

Similarly, Kinesis is an entry into a specialized market that has typically been the province either of vendor packages – e.g. IBM InfoSphere Streams – or more recent open source project combinations such as Storm/Kafka. Of specific interest with Kinesis is the degree to which Amazon is leading the market here rather than responding to it. When questioned on the topic, Amazon said that Kinesis was unlike other Amazon offerings such as WorkSpaces, which were a response to widespread customer demand. Instead, Amazon is anticipating future market needs with Kinesis, and attempting to deliver ahead of same.

AppStream, for its part, is effectively a Mobile/Gaming-backend-as-a-service, putting providers in that space on notice. The addition of Postgres as an RDS option, meanwhile, came to wide developer acclaim, but means that Amazon will increasingly be competing with AWS customers like Heroku. And CloudTrail, particularly with its partner list, means that AWS is taking the enterprise market seriously, which is both opportunity and threat for its enterprise ecosystem partners.

Big picture, re:Invent was an expansion of ambition from Amazon. Its sights are even broader than was realized heading into the show, which should give the industry pause. It has been difficult enough to compete with AWS on a rate of innovation basis in core cloud markets; with its widening portfolio of services, the task ahead of would-be competitors large and small just got more difficult.

That being said, however, it is worth questioning the sustainability of Amazon’s approach over the longer term. Microsoft similarly had ambitions not just to participate in but fundamentally dominate and own peripheral or adjacent markets, and arguably that near infinite scope impacted their focus in their core competencies. The broader and more diverse the business, the more difficult it becomes to manage effectively – not least because you end up making more enemies along the way. It remains to be seen whether or not Amazon’s increasing appetite to cloudify all the things has a similar effect on its ability to execute moving forward, but in the interim customers have a brand new stable of toys to play with.

Disclosure: Amazon, Heroku, and IBM are RedMonk customers, Joyent, Microsoft and ProfitBricks are not.

Categories: Cloud, Conferences & Shows.

A Look at Public Offerings from 1980-2012

A year ago, a CTO who had landed a large public round and secured a quarter as much in a less public investment candidly described the process, saying, “this used to be called going public.” MongoDB, the recent beneficiary of a $150M round led by Intel and Sequoia, would likely agree. As might Uber, which received $250M in financing from Google Ventures. Going public is clearly no longer the sole route to market for outsized capital requirements.

Which isn’t to imply that venture deal sizes are, on average, increasing. Thanks to a combination of factors from the rise of early stage investment vehicles like Y Combinator to open source software and the public cloud, data gathered by Chris Tacy (below) indicates that if we conflate angel and traditional venture investments, deal volume is up but the size of individual deals is actually in decline.

But at the opposite end of the spectrum, anecdotal evidence suggests that private funding is increasingly competing with public markets in ways not seen previously. The question is whether the data validates the assumption that private companies are being funded on a scale historically competitive with public market returns, and what this means for the wider market moving forward.

To explore the first question, it’s useful to examine data (PDF) on US Initial Public Offerings from 1980-2012 collected by Professor Jay R. Ritter of the University of Florida. In his own words, the sample includes “IPOs with an offer price of at least $5.00, excluding ADRs, unit offers, closed-end funds, REITs, partnerships, small best efforts offers, banks and S&Ls, and stocks not listed on CRSP (CRSP includes Amex, NYSE, and NASDAQ stocks).” For example, here is the total number of IPOs per year beginning in 1980.

It should be no surprise to most that public offerings spiked in the late 1990s. The Tulipmania-style hysteria that consumed the technology industry – and eventually, the world – during the bubble has been well documented. What’s interesting about this chart, however, is that it indicates that the market has yet to recover from the tech-driven crash in public offering volumes. The median number of IPOs per year from 1980 to 2012 is 174. We have not seen that many in a given year since 2004. The recent recession, of course, undoubtedly depressed the appetite for entities to take themselves public. But even in years of relative domestic prosperity, IPOs seem to have lost some of their luster.

One potential explanation would be the returns. Below is a chart of the aggregate proceeds from all IPOs in a given year as calculated by Ritter. To normalize them for context, however, all numbers have been adjusted for inflation. Dollar amounts depicted, therefore, represent an approximated value in 2013 US dollars.

While the trendlines don’t match precisely, it’s interesting and perhaps not surprising to note the strong correlation between the returns from public offerings and their frequency. It is also worth noting that while proceeds have recovered more strongly than volume, the aggregate returns from public offerings remain depressed. From 1980 to 2012, the median return in 2013 dollars for the aggregate of a year’s worth of public offerings is $28.5B – a figure that hasn’t been reached in four of the last six years. An analysis of the average individual returns, however, challenges the hypothesis that the lack of an expected return is preventing would-be IPOs from transacting.

The above chart depicts the aggregate returns for a given year divided by the number of IPOs – providing us with, essentially, an average IPO return. Even after normalizing against a 2013 dollar scale, it’s apparent that the realizable returns per transaction are still growing (if you’re curious about the 2008 outlier, that’s the year Visa went public and raised ~$17B). Which in turn should mean that the incentive to go public remains, and certainly entities from Google (2004) to Facebook (2012) to the aforementioned Twitter have chosen that path in spite of the availability of capital in private markets.
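The per-offering figure behind this chart is just a year’s aggregate proceeds divided by its deal count, restated in 2013 dollars. A sketch of that computation follows; the dollar amounts and price-index values are hypothetical placeholders, not Ritter’s actual data:

```python
def avg_ipo_return_2013_dollars(aggregate_proceeds, num_ipos,
                                index_year, index_2013):
    """Average proceeds per IPO for a year, restated in 2013 dollars."""
    nominal_avg = aggregate_proceeds / num_ipos
    # Inflation adjustment: scale nominal dollars by the ratio of the
    # 2013 price index to that year's index.
    return nominal_avg * (index_2013 / index_year)

# Hypothetical year: $40B in aggregate proceeds across 200 offerings,
# with prices at half their 2013 level.
print(avg_ipo_return_2013_dollars(40e9, 200, 100.0, 200.0))  # 400000000.0
```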

Still, it is interesting to observe that deals like MongoDB’s $150M round dwarf the expected returns from historical IPOs, even after adjusting for inflation. For example, from 1980 to 1997 the average adjusted return from a public offering never eclipsed $100M. Since then it has dramatically expanded, with the median adjusted return since 1997 weighing in at approximately $253M, or approximately $100M more than MongoDB raised in its last round.

If more companies then are either delaying going public or avoiding the public markets entirely, one would expect to see a rise in venture-backed companies eventually going public. While the costs of starting and running businesses have come down in many respects, thanks to dramatic drops in the cost of technical infrastructure among other categories, these savings have been offset by spikes in other areas, notably healthcare. Which means that whether public or private, growing companies are likely to still require financing to fuel growth. And indeed, we find exactly this sort of trajectory in venture-backed companies.

The above chart depicts the percentage of IPOs from technology entities that were backed by venture capital. While the overall percentage has always been high, the trendline is clearly towards greater VC participation. Which makes sense in the wake of a decade of decreased reliance on public market alternatives.

As for what all of this means moving forward, the answers are unclear. In the aggregate, the private market is obviously not lacking for available capital. Just as clearly, decline in volume or no, the returns remain there for public market entrants – or at least some of them. But as the number of large venture deals that approximate the anticipated returns from a public offering appears to be on the rise, it’s worth monitoring the dynamic between public and private funding sources. In the meantime, we’re likely to continue seeing the kinds of deals that “used to mean going public.”

Categories: Venture Capital.

The 2013 Monktoberfest

Monktoberfest 2013
(All photos courtesy Maney Digital)

In a 2001 piece for the New York Times, the now sadly departed Elmore Leonard summed up his tenth and final rule on how to write simply: “Try to leave out the part that readers tend to skip.” Without claiming any particular success, this is essentially the philosophy behind the Monktoberfest. In effect, it’s an attempt to answer the question: what would happen if we threw a conference without the parts that people skip?

Consider sponsored talks, for example. While it is not technically impossible to deliver a sponsored talk that engages an audience, the overlap between great talks and paid talks is tiny. Most end up as little more than infomercials. So we lose them. Then there’s timing. For a conference aimed at and built for developers, who tend to not be the early rising type, why would we start the conference at the more typical 8 AM? 10 AM is much more civilized. And what do people most frequently want to skip at a conference? Meals delivered by a staff whose focus is scaling the food, not crafting the food. Many fewer people, on the other hand, skip a sushi lunch or a dinner that includes lobsters caught by the caterer’s husband the afternoon before.

While this is a bit of a different approach for conferences, the logic behind it seems straightforward. In my experience, the quality of any given conference will ultimately be determined not by the food, drink or even the speakers – as important as they are. The value of a conference is determined instead by its people. Why, then, would we optimize for anything but the people?


Whether we succeeded will be determined in the weeks and months ahead, as the impact of the individual talks ripples outwards, we see the manifestations on social media and elsewhere of new connections made at the show and so on. But the early returns are gratifying.

The last quote from Mike is perhaps the most important to me personally. People who have never attended the Monktoberfest will ask me what it’s all about, and my answer is that it’s about the intersection of social and technology. It’s about how technology changes the way that we socialize, and how the way that we socialize changes the way that we build technology. But within that broad framework, speakers have a great deal of latitude to interpret the constraints in interesting ways. In doing so, as Mike says, they make me think about why I think what I think. They make me think about what I’m doing, why I’m doing it, and how I can help. They inspire me, and I seriously doubt that I’m the only one. They are, in short, the kinds of talks that don’t necessarily have a home at other shows.


As with most large productions, the Monktoberfest is a group effort, and as such, there are many people to thank.

  • Our Sponsors: Without them, there is no Monktoberfest
    • IBM MobileFirst: In an industry littered with the carcasses of businesses that couldn’t adapt to change, IBM is one of the few major technology companies in existence that has survived not one but multiple waves of disruption. The driving force behind most disruption today is the developer – nowhere is this more apparent than in mobile – and we appreciate IBM’s strong support as our lead sponsor in helping to bring them the conference they deserve.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the third year in a row they’ve sponsored the conference, if I’m not mistaken – helps us make the show possible.
    • ServiceRocket: When we post the session videos online in a few weeks, it is ServiceRocket that you will have to thank.
    • EMC: Enjoyed your surf & turf dinner? Take a minute to thank the good folks from EMC.
    • Rackspace/Splunk: It’s much easier to splurge on fresh sushi when you have partners like Rackspace and Splunk helping to make it possible.
    • Basho: When you came in a little under the weather on Thursday and treated yourself to a breakfast sandwich, that was Basho’s doing.
    • Atlassian/AWS/Brick Alloy/Citrix/CloudSpokes/Docker/Moovweb/Opscode/Rackspace: Remember the rare beers served at the event – one of which was poured from the only barrel available in the US? These are the people that brought it to you. And be sure to thank Atlassian especially, as they brought you four separate rounds.
    • Brick Alloy/Crowd Favorite: While we continue to search for a reasonable solution to the difficult challenges posed by a hundred plus bandwidth-hungry geeks carrying three or more devices per person, Brick Alloy and Crowd Favorite at least deferred the load onto local repeaters.
    • Rackspace: The glasses this year came courtesy of Rackspace, as our attendees will be reminded every time they drink a craft beverage from one.
    • Moovweb: Moovweb, meanwhile, addressed the afternoon munchies.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they are one of the best craft beer teams not just in this country, but the world. And they’re even better people, having spent the better part of the last few months sourcing exceptionally hard to find beers for us. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret on behalf of Stillwater Ales down in Baltimore, MD to be with us.
  • Lurie Palino: Lurie and her catering crew have done an amazing job for us every year, but this year was the most challenging yet due to some unfortunate and unnecessary licensing demands presented days before the event. As she does every year, however, she was able to roll with the punches and deliver an amazing event yet again – with no small assist from her husband, who caught the lobsters, and her incredibly hard working crew at Seacoast Catering.
  • Kate (AKA My Wife): Besides spending virtually all of her non-existent free time over the past few months coordinating caterers, venues and overseeing all of the conference logistics, Kate was responsible for all of the good ideas you’ve enjoyed, whether it was the masseuses last year or the cruise this year. She also puts up with the toll the conference takes on me and my free time. I cannot thank her enough.
  • The Staff: From Juliane and James securing and managing all of our sponsors to Marcia handling all of the back end logistics to Kim, Ryan and the rest of the team handling the chaos that is the event itself, we’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: I’d like to thank Jim Conroy of The Alchemist, Josh Wolf of Allagash, Greg Norton of Bier Cellar, Mike Fava and Tim Adams of Oxbow, and Brian Strumke of Stillwater for taking time out of their busy schedules to be with us. The Alchemist and Allagash, in addition, were kind enough to provide giveaways to our attendees and speakers, respectively.
  • Mike Maney: If he’s not the most enthusiastic Monktoberfest attendee, I’m not sure who would be. Last year he embarked on an epic 7 state road trip to the conference, and this year he sourced three bottles of Dogfish hand signed by none other than the founder of the brewery, Sam Calagione. These we were able to give away to attendees thanks to Mike’s efforts.
  • Caroline McCarthy & Mike McClean of Abbey Cat Brewing: At the conclusion of our brewer’s panel featuring the Alchemist, Allagash, Bier Cellar, Oxbow and Stillwater, our panelists were each issued a customized Monktoberfest mash paddle. This came courtesy of a connection from Monktoberfest speaker Caroline McCarthy, who introduced me to Mike McClean, who graciously furnished us with the paddles gratis. Abbey Cat Brewing, in Mike’s words, makes “mash paddles, with the help of a sweatshop staffed entirely by foster kittens.” What he failed to add is that they are gorgeous creations. And before you ask, yes, we have pictures of the paddles with kittens.

With that, we close this year’s Monktoberfest. For everyone who was a part of it, I owe you my sincere thanks. You make all the blood, sweat, tears worth it. Stay tuned for details about next year, and in the meantime, you might be interested in Thingmonk or the Monki Gras, RedMonk’s other two conferences.

Categories: Conferences & Shows.

Are PaaS and Configuration Management on a Collision Course and Four Other PaaS Questions

The following was meant to be ready in time for the Platform conference last week, but travel intervened. Belated though it is, it may be of interest to those tracking the PaaS market. At RedMonk, the volume of inquiries related directly and indirectly to PaaS has been growing rapidly, and these are a few of the more common questions that we're fielding.

Q: Is PaaS growing?
A: The short answer is, by most measurements – search traffic included – yes.

The longer answer is that while interest in PaaS is growing, its lack of visibility on a production basis is adding fuel to those who remain skeptical of the potential for the market. Because PaaS was over-run in the early days by IaaS, there are many in the industry who continue to argue that PaaS is at best a niche market, and at worst a dead end.

To make this argument, however, one must address two important objections. First, the fact that the early failures in the PaaS space were of execution, not model. Single, proprietary runtime platforms are less likely to be adopted than open, multi-runtime alternatives for reasons that should be obvious. But perhaps more importantly, those arguing that the lack of production visibility for PaaS today means that it lacks a future must explain why this is true, given that history does not support this point. Quite the contrary, in fact: dozens of technologies once dismissed as “non-production” or “not for serious workloads” are today in production, running serious workloads. The most important factor for most technologies isn’t where they are today, but rather what their trajectory is.

Q: How convenient is PaaS, really?
A: That depends on one’s definition of convenience. It is absolutely true that PaaS simplifies or eliminates entirely many of the traditional challenges in deploying, managing and scaling applications. And given that developers are typically more interested in the creation of applications than the challenges of managing them day to day, these abilities should not be undersold.

That said, PaaS advocates are frequently unaware of the friction relative to traditional IaaS alternatives. Terminology, for example, is frequently an object of confusion: the linguists of infrastructure-as-a-service, which is essentially a virtual representation of physical alternatives, are simple. Servers are instantiated, run applications and databases, have access to a storage substrate and so on. Would-be adopters of PaaS platforms, however, must reorient themselves to a world of dynos, cartridges and gears. Even the metrics are different; rather than being billed by instance, they may be billed by memory or transactions – some of which can be difficult to predict reliably.
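To make the billing mismatch concrete, here is a minimal sketch comparing the two models. All rates and resource figures below are invented for illustration and do not correspond to any actual provider’s price list.

```python
# Hypothetical comparison of IaaS vs. PaaS billing for the same workload.
# All rates and usage figures are invented for illustration only.

HOURS_PER_MONTH = 730

def iaas_cost(instances, rate_per_instance_hour=0.10):
    """IaaS: billed per server instance, mirroring physical hardware."""
    return instances * rate_per_instance_hour * HOURS_PER_MONTH

def paas_cost(memory_gb, transactions, rate_per_gb_hour=0.05,
              rate_per_million_tx=0.40):
    """PaaS: billed on memory footprint and transaction volume;
    harder to predict, since traffic varies month to month."""
    return (memory_gb * rate_per_gb_hour * HOURS_PER_MONTH
            + (transactions / 1_000_000) * rate_per_million_tx)

# Two identical servers vs. a 4GB app serving 50M requests:
print(f"IaaS: ${iaas_cost(2):.2f}/month")              # fixed and predictable
print(f"PaaS: ${paas_cost(4, 50_000_000):.2f}/month")  # varies with traffic
```

The point of the sketch is not the totals but the inputs: the IaaS bill depends only on a number the buyer controls (instance count), while the PaaS bill depends on traffic the buyer can only forecast.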

Is PaaS more convenient, then? Over the longer term, yes, it will abstract a great deal of complexity away from the application development process. In the short term, however, there are trade offs. It’s akin to someone who speaks your language, but with a heavy accent or in a different dialect. It’s possible to discern meaning, but it can require effort.

Q: What’s the biggest issue for PaaS platforms at present?
A: While the containerization of an application is far from a solved problem – some applications will run with no issues, while others will break instantly – it is relatively mature next to the state of database integrations. Most PaaS providers at present have distanced themselves from the database, for reasons that are easy to understand: database issues associated with multi-tenant, containerized and highly scalable applications are many. But it does present problems for users. PaaS platform database pricing has typically reflected this complexity, with application charges forming a fraction of the loaded application cost next to data persistence. And many platforms, in fact, have openly advocated that the data tier be hosted on entirely separate, external platforms, which spells high latency as applications are forced to call to remote datacenters even for simple tasks like rendering a page. Expect enhanced database functionality and integration to be a focus and differentiation point for PaaS platforms in the future. This is why several vendors in the space have invested heavily in relationships with communities like PostgreSQL and MongoDB.

Q: Where do the boundaries to PaaS end and the layers above and below it begin?
A: This is one of the most interesting, and perhaps controversial, questions facing the market today. In many respects, PaaS is well defined and quite distinct from other market categories; the previously mentioned lack of database integration, for example. But in others, the boundaries between PaaS and complementary technologies are substantially less clear. Given the PaaS space’s ambition to abstract away the basic mechanics of application development and deployment, for example, it seems logical to question the intersection and potential overlap of PaaS and configuration management/orchestration/provisioning software such as Ansible, Chef, Puppet, or Saltstack. PaaS users, after all, are inherently bought into abstraction and automation; will they be content to manage the underlying physical and logical infrastructures using a separate layer? Or would they prefer that be a feature of the platform they choose to encapsulate their applications with?

If we assume for the sake of argument that, at least on some level, traditional configuration management/provisioning will become a feature of PaaS platforms, the next logical question is: what does this mean both for PaaS platform providers and configuration management/orchestration/provisioning players? Should the latter be aggressively pursuing partnership strategies? Should the former rely upon one or more of these projects or attempt to replicate the feature themselves?

From the conversations we’re having, these are the important strategic questions providers are asking themselves right now.

Q: What’s the market potential?
A: We do not do market sizing at RedMonk, believing that it is by and large a guess built on a foundation of other guesses. That said, it’s interesting that so many are relegating PaaS to niche-market status. Forget the fact that even those companies serving conservative buyers such as IBM have chosen to be involved. Consider instead the role that PaaS was built to play. Much as the J2EE application servers abstracted Java applications from the operating systems and hardware layers underneath them, so too does PaaS. It is the new middleware.

Given the size of the Java middleware market at its peak, this is a promising comparison for PaaS. Because while it is true that commercial values of software broadly have declined since traditional middleware’s apex, PaaS offers something that the application servers never did: multi-runtime support. Where middleware players then were typically restricted to just those workloads running in Java, which was admittedly a high percentage at the time, there are few if any workloads that multi-runtime PaaS platforms will be unable to target. Which makes its addressable market size very large indeed.

Disclosure: IBM and Pivotal (Cloud Foundry) are clients, as are Red Hat (OpenShift), MongoDB and Salesforce/Heroku. In addition, Ansible, Opscode and Puppet Labs are or have been clients.

Categories: Cloud, Devops, Platform-as-a-Service.

The Moto X Bet

If you haven’t been following the saga of the Moto X, the short version is that it’s one of the first post-Google acquisition products from the company that gave us the StarTAC and the RAZR. Besides carrying the expectations of a market that needs to see something compelling from Motorola because it’s been a while, the X is also the focal point for analysts seeking an answer to one simple question: why did Google pay over twelve billion dollars for the company? Given the input the folks from Mountain View have had into the product, it’s been assumed that the Moto X would, if not answer that question outright, at least provide a hint.

If that’s the case, however, the answer for many seems to be: because they made a mistake. While the Moto X has seen its share of excellent reviews – see Gizmodo‘s “Moto X Hands On: Forget Specs, This Thing Is Awesome” or the Verge which gave it an 8 out of 10 – negative reactions have been common. It may not be a surprise to see John Gruber dismiss the product, but pieces like BGR’s “Motorola in Dreamland” or TechCrunch’s “Hell no Moto X” are representative of the industry’s disappointment.

While I have yet to get my hands on one of the devices, my bet is some of the gadget reviewers are simply missing the bigger picture. Which is, at least in part, that this phone isn’t built for them.

Consider the various complaints about the device. The disappointing processor? Yes, the benchmarks confirm the Moto X is based on a chip that’s slower than the equivalent in the HTC One or Samsung Galaxy S4. So? How many consumers, realistically, are aware of the chipset in their phone? The only time they’ll notice the processor is if the phone feels slow; none of the hands-on reviews I’ve seen yet make this claim.

How about the “gross” AMOLED Screen? Well, if a reviewer like The Verge’s Joshua Topolsky has to study the Moto X and a higher resolution screen such as the HTC One “side by side to make out the difference,” it seems unlikely the average consumer will have a problem with the display. Particularly given that the pixel density is just shy of iPhone-Retina.

Most of the negative reactions are focusing, in other words, on the phone’s underwhelming technical benchmarks. Which is interesting, because the history of this market, brief though it may be, does not suggest that the market rewards the most sophisticated handset. The iPhone, you might recall, did not add the 3G connectivity common to competitive handsets until Apple could ensure acceptable battery life. The HTC Evo, by contrast, was a marvel of engineering with an enormous screen and every connectivity option known to man – from HDMI to WiMAX. It also had a battery life of about an hour.

What Apple understands, and what the Moto X may reflect, is that technology is less important than experience. All things being equal, faster processors and brighter, higher resolution screens are preferable. But until we see significant advances in battery technology – the kinds of advances that are perpetually two to three years away – all things are not going to be equal.

So while those critiquing the Moto X for its pedestrian processor and so on focus on the components that won’t be found in the coming iFixit teardown, my guess is that the average user will be more impressed by a full day’s worth of usage – which is very different from a full day’s worth of talk time (who uses their phone as a phone these days anyway) – than a faster phone with a brighter screen. Just as they once picked EDGE capable iPhones over 3G competition.

Maybe the Made-in-the-USA factor will emerge as a selling point as well, and the ability to customize the appearance probably will, but at the end of the day the performance of the Moto X will depend on how well Google and Motorola have learned from Apple. Apple’s never been about the underlying technology, and much to the consternation of tech reviewers everywhere, the Moto X doesn’t appear to be either.

Whether that pays off will be interesting to see.

Disclosure: There’s nothing to disclose. Google is not a client, and I do not have a Moto X, review unit or otherwise.

Categories: Mobile.

The Top 55 in Tech: Market Findings and Other Items of Interest

One of the things we track internally, for the sake of contextual curiosity more than anything, is the market performance of firms that can be at least loosely described as technology oriented. While it’s foolish to assign any serious import to rankings based on the vagaries of market performance, it is nevertheless interesting to understand how the market values, or does not value, various entities, particularly in relation to one another. Besides simple market metrics such as market capitalization, it’s also useful to be aware of the wider context: when was a firm founded? How does it generate revenue? From these patterns, and particularly from watching them over time, it is possible to get a sense for how the technology landscape is evolving, and from there understand what adaptations may be necessary moving forward.

The list of the 55 largest public technology entities that we’re tracking at present, ordered by current market cap, is available here. Please note, however, that no claims are made that this list is definitive. The most notable omission is carriers, and their omission looks increasingly problematic as they push further into cloud and network related services. It’s likely they’ll be added in future iterations.

If there are other public entities you believe to be missing, then, by all means let us know in the comments and we’ll review them and amend the list as necessary.

With the aforementioned caveats that the list is not definitive and that market perceptions do not necessarily match company merit, a few notable takeaways from a quick examination of the list.

  1. Age: The median age of the Top 55 tech companies is 28 years old, meaning that a representative entity would have been founded in 1985. This is less than surprising in one sense, given that larger companies have long leveraged startups as a means of outsourced innovation. Rather than enter higher risk emerging markets themselves – markets they are not built to attack in any event – they can sit back and attempt to acquire the successful innovators, considering the M&A premium their cost of innovation. Still, it’s interesting that the shape of the technology market, often considered one of the fastest moving industries in the world, is in part defined by the decisions of companies that might have been founded the year that New Coke debuted and Back to the Future was released.
  2. Revenue Source: Of the 55 companies tracked, 21 derive their revenue primarily from the sales of software while 34 of them do not. In other words, the average Top 55 technology firm is around one and a half times more likely to not generate the bulk of their revenue from software than it is from it. Interestingly, the Top 25 members of the list are even less likely to rely on software as their primary revenue source; 7 are primarily software oriented while 18 are not. Make no mistake: software is absolutely eating the world, as Marc Andreessen has said. Every company on this list relies on software for their business. But the data indicates that more companies are making money with software rather than from software, which is something of a departure from a decade ago when Microsoft’s dominance encouraged many to replicate the software revenue model.
  3. Annual Performance: In market performance over time, here is the list of companies in descending order of market cap generated per year of existence. 1) Google ($20B/year), 2) Apple ($11B), 3) Facebook ($10B), 4) Amazon ($8B), 5) Microsoft ($7B), 6) Cisco ($5B), 7) Oracle ($4B), 8) Qualcomm ($4B), 9) Baidu ($4B), 10) Taiwanese Semiconductor ($3B). Given that this measurement advantages to some degree younger firms, the presence of entities like Apple, Microsoft and Oracle is impressive.
  4. The Arena: A few relative valuations that may be of interest. Google is currently worth almost 10 Yahoos. Dell ($22.4B) is currently worth less than LinkedIn ($23.1B). ARM Holdings is worth less than you might expect, given its importance; Nokia is currently worth ~$2B more. Red Hat, the world’s only pure play open source company, is meanwhile worth almost as much as Teradata ($10.2B to $9.9B) and more than Electronic Arts, F5 and Rackspace. And while Qualcomm tends to be something of a behind the scenes player, its Top 10 performance has it more valuable than VMware and Yahoo combined.
  5. Industry Size: The combined worth of these entities is $2.8T.
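The per-year ranking above is simple arithmetic: current market cap divided by years since founding. A quick sketch shows how it is derived; the market caps and founding years below are rough illustrative approximations of the 2013 figures, not authoritative data.

```python
# Market cap generated per year of existence, as described above.
# Caps (in $B) and founding years are approximations for illustration.
companies = {
    "Google": (290, 1998),
    "Apple": (430, 1976),
    "Microsoft": (270, 1975),
    "Oracle": (150, 1977),
}

def cap_per_year(cap_billions, founded, current_year=2013):
    """Return market cap generated per year of existence, in $B/year."""
    return cap_billions / (current_year - founded)

# Rank firms by value created per year of existence, descending.
ranked = sorted(
    ((name, cap_per_year(cap, founded))
     for name, (cap, founded) in companies.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, rate in ranked:
    print(f"{name}: ${rate:.0f}B/year")
```

As the text notes, the metric flatters younger firms – Google’s 15-year denominator is less than half of Microsoft’s – which is what makes the older companies’ placement in the Top 10 notable.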

Again, it’s important not to read too much into the above, particularly with respect to market valuations which are volatile by nature. As a snapshot of a given point in time, however, it’s useful to understand how companies and their strategies are perceived more broadly, and what this means for them moving forward.

Categories: Market.

What IBM Joining the Cloud Foundry Project Means

When the OpenStack project was launched in 2010, IBM was one of many vendors in the industry offered the opportunity to participate. And though OpenStack launched with a nearly unprecedented list of supporters, IBM was not among them. In spite of their lack of a public commitment to an existing open source cloud platform – they had their own service offering in SmartCloud – they declined to join the project.

Until they did two years later.

In 2012, IBM joined along with Red Hat, another industry player that had passed on the initial opportunity to get on the OpenStack train. The original decision and the subsequent about face may seem contradictory, but it is nothing more or less than the inevitable consequence of how IBM approaches emerging markets.

For many customers, particularly risk averse large enterprises and governments, one of IBM’s primary assets is trust. IBM is in many respects the logical reflection of its customers, who are disinclined – for better and for worse – to reinvent themselves technically as each new wave of technology breaks, as each new “game changing” technology arrives. Instead, IBM adopts a wait and see approach. It was nine years after the Linux kernel was released that IBM determined that the project’s momentum, not to mention the potential strategic impact, made it a worthwhile bet. At which point they promised to inject $1 billion into the ecosystem, a figure that represented a little over 1% of their revenue and fully a fifth of their R&D expenditures that year.

Which is not to compare IBM’s commitment last week to Cloud Foundry to its investment in Linux, in either dollars or significance. As much as the one-time head of VMware and now head of Pivotal, Paul Maritz, is seeking to make Cloud Foundry “the 21st-century equivalent of Linux,” even the project’s advocates would be likely to admit there’s a long way to go before such comparisons can be made.

The point is rather that when evaluating the significance of IBM’s decision to publicly back Cloud Foundry, it’s helpful to put their decision making in context. Decisions of this magnitude cannot be made lightly, because IBM cannot return to enterprise customers who have built on top of Cloud Foundry at their recommendation in two years with a mea culpa and a new platform recommendation.

IBM’s support for the Cloud Foundry project signals their belief that the PaaS market will be strategic. Given the aforementioned context, it also means that after an extended period of evaluation, IBM has decided that Cloud Foundry represents the best bet in terms of technology, license and community moving forward. These are the facts, as they say, and they are not in dispute. The primary question to be asked around this announcement, in fact, is less about Cloud Foundry and IBM – we now know how they feel about one another – and more to do with what it portends for the PaaS market more broadly.

A great many in the industry, remember, have written off Platform-as-a-Service for one reason or another. For some VC’s it’s the lack of return from various PaaS-related investments, for the odd reporter here or there it’s the lack of traction for early PaaS players such as Google App Engine relative to IaaS generally and Amazon specifically. And for developers, it’s frequently the question of whether yet another layer of abstraction needs to be added to virtual machine, IaaS fabric, operating system, runtime / server, programming language framework and so on. The developer’s primary complaint used to be the constraints – runtime choice, database options and so on – but these have largely subsided in the wake of what we term third generation PaaS platforms: platforms that offer multiple runtimes and other choices, like Cloud Foundry, OpenShift and so on.

But while it’s difficult to predict the future of PaaS, particularly the rate of uptake – certainly it hasn’t gone mainstream as quickly as anticipated here – the history of the industry may offer some guidance. For as long as we’ve had compute resources, additional layers of abstraction have been added to them. Generally speaking this has been for reasons of accessibility and convenience; it’s easier to code in Ruby, as but one example, than Assembler. But some abstractions, middleware in particular, have long served business needs by offering greater portability between application environments. True, the compatibility was never perfect, and write-once-run-anywhere claims tried the patience of anyone who actually tried it.

Greater layers of abstraction, nevertheless, appear inevitable, at least from a historical perspective. Few would debate that C is a substantially more performant language than JavaScript. Regardless of this advantage, accessibility, convenience and other factors such as Moore’s Law have conspired to advantage the more abstract, interpreted language over the closer-to-the-metal C, as demonstrated in this data from Ohloh.

Will PaaS benefit from the long term industry trend towards greater levels of abstraction? Having corrected many of the early mistakes that led to premature dismissals of PaaS, it’s certainly possible. Oddly, however, many of the would-be players in the space remain reluctant to make the obvious comparison, that PaaS is the new middleware. Rather than attempt to boil the ocean by educating and evangelizing the entire set of capabilities PaaS can offer, it would seem that the simplest route to market for vendors would be to articulate PaaS as an application container, one that can be passed from environment to environment with minimal friction. It’s not a dissimilar message from the idea of “virtual appliances” that VMware championed as early as 2006, but it has the virtue of being simpler than packaging up entire specialized operating systems, and is thus more likely to work.

If we assume for the sake of argument, however, that PaaS will continue to make gains with developers and the wider market, the question is what the landscape looks like in the wake of the Cloud Foundry-IBM announcement. It’s obviously early days for the market; IBM-approved or no, Cloud Foundry isn’t yet listed as a LinkedIn skill, and the biggest LinkedIn user group we track had a mere 195 members as of July 15th. But in an early market, the IBM commitment is unquestionably a boost to the project. Open source competitors such as Red Hat’s OpenShift project, closed source vendors like Apprenda, and hosted providers like Engine Yard or GAE will all now be answering questions about Cloud Foundry and IBM, at least in their larger negotiated deals.

As it always does, however, much will come down to execution. Specifically, execution around building what developers want and making it easy for them to get it. All the engineering and partnerships in the world can’t save a project that makes developers lives harder, as we’ve already seen with the first wave of PaaS vendors that failed to take over the world as expected. Whether or not Cloud Foundry can do that with the help of IBM and others will depend on who wins the battle for developers, and that’s one that’s far from over.

Disclosure: IBM is a RedMonk customer, as are Apprenda, Red Hat and Pivotal. Google and Engine Yard are not RedMonk customers.

Categories: Cloud, Platforms.