
On Steve Ballmer


Author’s note: this was originally written in September, but was lost amidst a sea of Sublime Text tabs and never published until I rediscovered it earlier this week. As late seems better than never, I offer it up months after the fact. You heard it here last, as always. – sog

Since Microsoft announced Steve Ballmer’s pending retirement, there has been no shortage of commentary and retrospective on his tenure. While opinions on the subject vary widely, it’s probably safe to characterize the conventional wisdom as more critical than not. The contrarian position, therefore, is the one that defends the outgoing Chief Executive. The former viewpoint is generally inclined to focus on a share price that has stagnated since the millennium, the latter on Microsoft’s historically impressive ability to generate and sustain revenue.

As with most large entities, however, it’s probably a mistake to assign too much responsibility for either to a single person. In the case of the man most Microsoft employees refer to as Steve B, it’s true that his ascension to the role of Chief Executive came at a natural apex of the firm. Given how meteoric the rise of the company was, in other words, it would be difficult to expect anyone to sustain the growth that inflated the share price in the first place. On the other hand, it’s worth asking whether Ballmer created the twin revenue engines that have sustained Microsoft for over a decade and subsidized its efforts to replicate the success of Windows and Office, or whether he inherited them.

Which means that it’s probably best to deprecate evaluations of his tenure that focus purely on financial metrics and focus instead on broader assessments of the company’s ability to react to the market around it.

This is, of course, ground that has been covered nearly as completely as the financial metrics. Nearly every piece reacting to Ballmer’s retirement and the forthcoming transition has referenced, appropriately enough, the disruption theory dissected and described by Harvard Business School Professor Clayton Christensen. By now, most are familiar with both the broader argument and how it pertains to Microsoft specifically. Businesses become successful over a period of time, and their success in one or more product areas blinds them to, and in fact actively disincents a response to, the threats posed by those that will succeed them. Companies built to dominate one market, in other words, are rarely able to replicate that success in the model that succeeds it. Applied to Microsoft, even laymen cannot fail to understand how a variety of market forces from open source to cloud to mobile collectively and individually first undermined and then actively disrupted Microsoft’s once unassailable dominant market position.

There might be some dispute over the specific implications, then, but there is near universal consensus that Microsoft has been disrupted just as numerous technology giants have been before it. What’s up for debate is how Steve Ballmer responded – or failed to respond – to these disruptive forces. And, assuming that one acknowledges that the firm has been disrupted, whether anyone might have responded more constructively than Microsoft’s outgoing CEO.

For critics, the answer is simple: Steve Ballmer uniquely failed to identify the potential and threat of markets such as cloud or mobile, and is as a result directly responsible for the disruption. The majority of these arguments imply, further, that others might have done better in Ballmer’s stead.

Asymco’s Horace Dediu describes this line of thinking most clearly in a piece entitled “Steve Ballmer and the Innovator’s Curse”:

The most common, almost universally accepted reason for company failure is “the stupid manager theory”. It’s the corollary to “the smart manager theory” which is used to describe almost all company successes. The only problem with this theory is that it is usually the same managers who run the company while it’s successful as when it’s not. Therefore for the theory to be valid then the smart manager must have turned stupid at a specific moment in time, and as most companies in an industry fail in unison, then the stupidity bit must have been flipped in more than one individual at the same time in some massive conspiracy to fail simultaneously.

So the failures of Microsoft to move beyond the rapidly evaporating Windows business model are attributed to the personal failings of its CEO.

He goes on, however, to call these theories nonsense. He is correct, at least in the implication that Steve Ballmer did not suddenly become stupid. Whatever else that may be said about the man, even his enemies acknowledge his intelligence. And his track record supports this; as Dediu wryly notes, Ballmer’s “only failing was delivering sustaining growth (from $20 to over $70 billion in sales.)”

This defense, however, is built on a core assumption I do not happen to share. Specifically, the following:

The Innovator’s Dilemma is very clear on the causes of failure: To succeed with a new business model, Microsoft would have had to destroy (by competition) its core business. Doing that would, of course, have gotten Ballmer fired even faster.

Having studied under Christensen, Dediu doubtless understands the disruption theory articulated by the Innovator’s Dilemma as well or better than anyone save Christensen himself. But it seems worth examining the assumption that Microsoft was intrinsically and unavoidably vulnerable to the disruptions it is currently coping with.

Consider mobile. It’s easy to forget now, but Microsoft actually didn’t miss this market: it was simply out-executed. Along with the rest of the market, to be fair. Microsoft had a presence, and a sizable one, in mobile prior to the arrival of the iPhone, which fundamentally altered the landscape in one stroke. But its mobile offering was heavily and unfortunately influenced by its desktop roots. The important questions are first, having seen what mobile has done to the PC market, whether Microsoft should have been investing in mobile in the first place, and second, if they had invested, whether a desktop computer company could effectively adapt to a mobile market.

The answer to the first question is simpler than it might appear. Many analyses point to the massive disruption in the PC market as evidence that Microsoft could not have, and in fact should not have, invested in mobile due to the possibility (now a certainty) that their existing PC business would be cannibalized. But this response tends to omit the financial opportunity mobile came to represent. If Microsoft had decimated its own PC business but ended up owning the profits of Apple’s mobile businesses, one suspects the market would have few complaints.

As to whether or not Microsoft could have been successfully innovative in mobile the way that Apple was, what prevented it? Adherents to the theory of disruption might argue that it was impossible; that Microsoft was so fixated on its success on the desktop that success in a fundamentally different model – mobile – was virtually impossible. On a technical level, of course, but in broader terms as well. How could a company built on selling licenses of software, one utterly convinced of the superiority of software over hardware and validated by years of market confirmation, adapt to the radically different model of selling an integrated package of hardware and software?

Besides the fact that Microsoft today is in fact selling integrated hardware and software products, challenging this theory of inevitable disruption are both Apple and Google. Apple was a computer company that created entirely new markets out of MP3 players, smartphones and tablets. Even granting that Apple enjoyed advantages in its singular focus and expertise in user experience, enabled in part by its ownership of the entire hardware and software package, it seems difficult to make the case that what Apple was able to accomplish was fundamentally impossible for Microsoft.

Google, meanwhile, was an online advertising company that created the operating system that’s the closest facsimile to the Windows model in mobile we have seen to date. They did this, in fact, understanding that it was likely to damage a relationship with Apple that was, in retrospect, remarkably close at the time. While their motives in doing so are a matter for speculation, it is probable that Android was created and pushed – much like Chrome – to avoid having their core business – which is, again, advertising – disrupted through third party control points, i.e. mobile operating systems.

If a computer company and an advertising company can both create new markets and stave off disruption, it is not reasonable to conclude that Microsoft – once the biggest, most powerful technology company on the planet, like Apple today – would be fundamentally unable to do the same.

Particularly because there is little intrinsic to mobile that is fundamentally incompatible with their primary software-based revenue model. Certainly Android, as mentioned, resembles it reasonably closely today. True, Google effectively gives Android away, because its development costs are subsidized by its advertising business. But Microsoft has managed, through its aggressive utilization of its intellectual property, to recreate a licensing business all the same. What if, for example, Microsoft had approached all of the current Android partners in the wake of the iPhone’s launch with a version of Windows Phone that was similar to what it is today? The bet here is that, just as happened with Android, they’d rush towards anyone offering them a weapon with which to do battle with Apple.

Which means, in turn, that Microsoft wasn’t necessarily doomed to disruption, but merely executed poorly.

Likewise cloud. If you are convinced that the fundamental value of the cloud lies in price, you must concede that Microsoft was doomed to an uphill battle in cloud without destroying its existing businesses. Microsoft’s primary competition then and now, obviously, was an operating system that could be obtained and run at no cost. And that is fundamentally disruptive to Microsoft’s business. But if you recall that Amazon has always commanded substantial margins above competitors, the opportunity for a premium for software seems somewhat plausible. And if one considers convenience rather than price as the primary driver of cloud consumption, Microsoft’s opportunity becomes clearer. Again, what if Microsoft had been able to offer Windows instances it hosted within a reasonable timeframe of the launch of EC2? As many cloud providers have discovered, it’s much easier to build in the premiums you need when you’re charging by the hour, as it mitigates the sticker shock by masking the premiums.

Both cloud and mobile, in that analysis, represent opportunity as much as threat.

It’s easier to understand, on the other hand, how Microsoft was willing to wage total war against open source for so many years before pivoting towards a more comprehensive strategy. Open source is, after all, a fundamental repudiation of the model Microsoft built itself on. To Microsoft, certainly then and to a lesser extent now, software is an asset of intrinsic and inextricable value. The tens of thousands of people Microsoft employs to write software, in fact, are dependent on this assumption. Open source, however, doesn’t imply that software has no value – but it does require a dramatically different understanding of its commercial value. How one charges money for an asset that is available for free is a question that every open source business contends with daily. Each evolves different mechanisms to adapt, but none have replicated, nor are likely to replicate, the growth that Microsoft, Oracle and other primarily software entities have achieved.

So how could Ballmer – the head of a company built upon an assumption fundamentally undermined by open source – possibly have avoided being disrupted by the explosive growth of open source? Maybe by watching the company Microsoft itself once disrupted and was specifically built to compete with: IBM. No less singularly focused on profit than the Redmond software giant, IBM nevertheless found ways to leverage free-as-in-money software to its strategic gain. Rather than fight the tide, IBM found ways to leverage assets like the Apache web server, Eclipse or most recently Hadoop to its gain. It recognized that for many of its customers, software sales were essentially a packaging exercise. If bottom-line and obsessively profit-focused IBM could perceive opportunities in free-as-in-beer software, it is difficult to make the case that it would be impossible for Microsoft to do the same. Microsoft could have embraced a wide variety of open source strategies while still protecting its crown jewels of Windows and Office, but for the first thirty-three years of its existence open source was a religious rather than business issue within the company.

It’s fine to defend Ballmer by saying that few, if any, perceived the opportunities that Steve Jobs and Jeff Bezos did. But it’s harder to make the case that, having seen that they did, it would be impossible for Ballmer to do the same. Giving him a pass, effectively, on not cannibalizing his Windows or Office franchises would be akin to giving Steve Jobs a pass for not creating the iPhone to protect the iPod, or the iPad to shield the iPhone. Much like the iPhone and iOS, the cloud and mobile both could have been – and arguably are becoming so today – adjacent, complementary markets to Microsoft’s core Office/Windows franchises. It’s one thing to give Ballmer a pass for missing a fundamentally different opportunity like search; it’s quite another to forgive his slow reaction to two markets – cloud and mobile – that are both dependent to varying degrees on operating system technology.

Disruption is more than likely to overcome every company eventually, but the evidence suggests that Microsoft could at a minimum have responded more proactively to the threats and opportunities these markets presented. And that, as much as the overwhelming revenue growth, is Steve Ballmer’s responsibility.

Categories: Cloud, Mobile, People.

Updated IaaS Pricing Patterns and Trends

Thanks to a combination of market factors, but principally increased competition, there is no technology market where prices are moving more quickly than cloud infrastructure. After we completed an initial survey and deconstruction of IaaS pricing trends in August of last year, follow-ups proved difficult because prices dropped so often that an analysis was frequently obsolete before it was even published. As was the case last week, when Google announced both the General Availability of its Compute Engine (GCE) offering and significant price drops – the day after the original analysis had been re-run.

A week later with no corresponding price drops from competitive providers, the decision was made to publish this before the inevitable arrives. As a reminder, this analysis is intended not as a literal expression of cost per service; this is not, in other words, an attempt to estimate the actual component costs for compute, disk, and memory per provider. Such numbers would be speculative and unreliable, relying as they would on non-public information, but also of limited utility for users. Instead, this analysis compares base hourly instance costs against the individual service offerings. What this attempts to highlight is how providers may be attempting to differentiate by prioritizing memory over compute capacity, as one example. In other words, it’s an attempt to answer the question: for a given hourly cost, who’s offering the most compute, disk or memory?
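To make that normalization concrete, here is a minimal sketch of the kind of calculation involved, assuming pandas and a handful of invented rows standing in for the published dataset linked below; the column names and figures are illustrative, not the survey’s actual schema or prices.

```python
# Illustrative only: toy rows standing in for the published dataset linked
# below. Column names and values are assumptions, not the survey's schema.
import pandas as pd

instances = pd.DataFrame(
    [
        # provider, instance, price per hour (USD), cores, memory (GB), disk (GB)
        ("AWS",    "m1.medium",  0.120, 1, 3.75, 410),
        ("Google", "n1-std-2",   0.207, 2, 7.50,   0),
        ("HP",     "standard.m", 0.140, 2, 4.00, 120),
    ],
    columns=["provider", "instance", "usd_per_hour", "cores", "memory_gb", "disk_gb"],
)

# For a given hourly cost, who is offering the most compute, disk or memory?
for component in ("cores", "memory_gb", "disk_gb"):
    instances[component + "_per_dollar_hour"] = instances[component] / instances["usd_per_hour"]

print(instances.sort_values("memory_gb_per_dollar_hour", ascending=False))
```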

As with the previous iteration, a link to the aggregated dataset is provided below, both for fact checking and to enable others to perform their own analyses, expand the scope of surveyed providers or both.

Before we continue, a few notes.

Assumptions

  • No special pricing programs (beta, etc)
  • Linux operating system, no OS premium
  • Charts are based on price per hour costs (i.e. no reserved instances)
  • Standard packages only considered (i.e. no high memory, etc)
  • Where not otherwise specified, the number of virtual cores is assumed to equal available compute units

Objections & Responses

  • “This isn’t an apples to apples comparison”: This is true. The providers do not make that possible.
  • “These are list prices – many customers don’t pay list prices”: This is also true. Many customers do, however. But in general, take this for what it’s worth as an evaluation of posted list prices.
  • “This does not take bandwidth and other costs into account”: Correct, this analysis is server only – no bandwidth or storage costs are included. Those will be examined in a future update.
  • “This survey doesn’t include [provider X]”: The link to the dataset is below. You are encouraged to fork it.

Other Notes

  • HP’s 4XL (60 cores) and 8XL (103 cores) instances were omitted from this survey intentionally for being twice as large and more than three times as large, respectively, as the next largest instances. While we can’t compare apples to apples, those instances were considered outliers in this sample. Feel free to add them back and re-run using the dataset below.

  • Microsoft Azure’s “Extra Small” instance, which lists its core count as “Shared,” has been represented as .5 of a core for this analysis. If a better estimation is available, we’ll include it.
  • All of Google and Microsoft’s instances and two of Amazon’s are omitted from the disk cost comparison because they do not include a fixed disk amount per instance.
  • Versus the original analysis, IBM’s offerings have been replaced by Softlayer’s, following IBM’s acquisition of the latter.
  • While we’ve had numerous requests to add providers, and will undoubtedly add some in future, the original dataset – with the above exception – has been maintained for the sake of comparison.

How to Read the Charts

  • There was some confusion last time concerning the charts and how they should be read. The simplest explanation is that the steeper the slope, the better the pricing from a user perspective. The more quickly cores, disk and memory are added relative to cost, the less a user has to pay for a given asset.
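To illustrate the “steeper is better” reading in code, the sketch below fits a simple least-squares line of memory against hourly price for each of two invented providers; numpy and the toy numbers are assumptions rather than the survey data, but the slope it reports is exactly the quantity the charts visualize.

```python
# Illustrative only: fit a line of memory (GB) against price per hour for each
# provider; a steeper slope means more memory added per additional dollar-hour.
import numpy as np

toy_data = {
    # provider: (hourly prices in USD, memory in GB) for its standard instances
    "Provider A": ([0.06, 0.12, 0.24, 0.48], [1.7, 3.75, 7.5, 15.0]),
    "Provider B": ([0.05, 0.10, 0.21, 0.42], [1.8, 3.60, 7.2, 14.4]),
}

for provider, (prices, memory) in toy_data.items():
    slope, intercept = np.polyfit(prices, memory, deg=1)  # least-squares fit
    print(f"{provider}: ~{slope:.0f} GB of memory per additional $1/hour")
```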

With that, here is the chart depicting the cost of disk space relative to the price per hour.

As mentioned, two of Amazon’s instances (which are EBS only) and all of Google and Microsoft’s are omitted from this chart because they do not include a fixed amount of disk. That said, of the remaining providers, it’s interesting to see that Amazon remains the most aggressive in terms of the disk space made available. Even granting their economy-of-scale cost advantages versus some of the competitors here, with the cost of disk falling it’s interesting that no players besides Joyent have been willing to challenge Amazon on this front. Whether this signals a lack of ambition on the part of AWS competitors or a lack of interest from the market in disk versus memory and compute capacity is unclear, but it seems probable that it’s a combination of both.

If these charts are intended to expose prioritization from a feature perspective, meanwhile, the plot of memory capacity versus hourly cost is potentially the most revealing.

Several things become obvious from this chart.

  1. The high correlation in available memory/hourly costs indicates a shared understanding of the importance of memory pricing.
  2. Google is the most aggressive in terms of the memory per hourly unit of cost.
  3. HP is apparently pegging itself to AWS.
  4. Softlayer and, to a lesser extent, Rackspace, are likely to be less competitive for memory focused buyers.

Lastly, we have a chart of the available compute units relative to the hourly cost.

Clearly signalling its intent to compete with AWS is HP, which matches its pricing on relative available memory costs and betters it in available compute units. Rackspace is more competitive within compute than memory, and AWS – though the dominant market player – is still among the most aggressive in terms of pricing per compute unit. Softlayer and Microsoft’s Azure form the middle of the pack, with Google conspicuously less competitive in terms of available compute units as a function of hourly cost. This is a marked shift from our prior analysis, which had Google among the leaders in terms of compute per hourly cost.

A few overall takeaways, with the reminder that this is merely a survey of standard instances:

  • Amazon remains the standard against which other providers are judged and/or judge themselves. In virtually every category, Amazon is amongst the leaders, with competitive providers looking to advantage themselves by either matching or exceeding the price/component offered by Amazon.
  • Intentionally or not, providers are signaling their prioritizations from an infrastructure perspective. HP, for example, is clearly attempting to separate itself from the pack on the basis of compute value for the dollar (not to mention instance sizes), while Google is doing the same for memory. Disk, meanwhile, seems to be something of an afterthought, with multiple providers not including it as a standard portion of the offering and no one attempting to outcompete AWS as they are in compute and memory, save perhaps Joyent.
  • Given the aforementioned prioritization, it will be interesting to observe its impacts moving forward. Amazon is essentially pursuing a leader’s course: highly competitive by each factor, but not necessarily obsessed with being the absolute leader in each. Two obvious competitive strategies emerge from the above deconstruction. The first, exemplified best by Microsoft here, is a middle of the road value proposition, never the most expensive but never the least. The second appears to be a weighted bet, sacrificing performance in one category (e.g. Google with compute) to achieve leadership in another (Google in memory).
  • Besides the implications for users, it will be interesting to monitor how a given vendor’s price prioritization may vary over time, based both on customer demands as well as internal resource availability and costs. Google and HP’s strategies, as but two examples, certainly appear to have evolved since the last snapshot.

In the next iteration of our cloud pricing research, we’ll explore how pricing has changed over time across the surveyed vendors collectively. In the meantime, here is a link to the dataset used in the above analysis.

Disclosure: Amazon Web Services, IBM (Softlayer), and Rackspace are RedMonk customers. Google, HP, Joyent and Microsoft are not current customers.

Categories: Business Models, Cloud, Economics.

Community Metrics: Comparing Ansible, Chef, Puppet and Salt

In March of last year, spurred in part by a high volume of requests, we examined a few of the community metrics around the configuration management tools Chef and Puppet. Not intended as a technical comparison, it was rather an attempt to assess their traction and performance relative to one another across a number of distinct communities. At the time, there was no clear winner or loser from the comparison.

An interesting thing has happened since we ran those numbers, however. In the interim, two new projects have emerged as alternatives that we’re encountering more and more frequently in our conversations with and surveys of various developer populations.

While it is true that there are a number of open source configuration management tools besides Chef and Puppet, those have commanded the majority of the attention in the category. But increasingly, and in spite of the relative maturity and volume usage of both Chef and Puppet, Ansible and Salt are beginning to attract a surprising amount of developer attention. Where it once was reasonable to conclude that the configuration management space would evolve in similar fashion to the open source relational database market – i.e. with two dominant projects – that future is now in question. Certainly that remains one possible path, but with the sustained interest in alternatives it’s now worth questioning whether configuration management will more greatly resemble the NoSQL market – which is characterized by its diversity – than its relational alternative.

Because there has been no clear winner in the Chef/Puppet battle and because there are two new market entrants, then, it is not surprising that we’ve been fielding a similar volume of requests to compare the projects across some of the same community metrics as we did a year and a half ago. Here then is how Ansible, Chef, Puppet and Salt compare with one another within various developer related communities, open job postings and more.

Debian

Before we get to dissecting the charts, a word on Debian usage. Per a conversation with Jesse Robbins last year following the original Chef vs Puppet analysis, it should be noted that installing via the Debian package management system (apt) – what’s reflected in this chart – is not the preferred installation method for Chef (gems is). This means that Chef will be under-represented in these charts. Salt, meanwhile, provides installation instructions for Debian that leverage apt and Ansible’s documentation explicitly recommends installation via operating system provided package management systems. One other caveat: while there are in some cases multiple packages for the individual projects, this analysis only includes the most popular for each.

While this is useful for communicating the dominance of Puppet in terms of installations via Debian packages, the chart obscures any other useful information on trajectory.

If we grant that Puppet leads in this context, however, and subtract it, it’s easier to perceive the growth of each of the remaining platforms. Chef is outpacing the other two projects, while Salt enjoys a moderate lead on Ansible. It’s possible that Ansible’s performance here is related to its close ties to the Red Hat ecosystem; it ships by default in Fedora and is available on RHEL via EPEL. Surveying the distribution on that ecosystem would be interesting, were data available.

GitHub

GitHub offers a variety of metrics about the projects it hosts. For our purposes here, we’ve chosen the number of times a project has been forked, the number of pull requests accepted over the last 30 days and the number of times a repository has been starred. This is intended to assess, among other things, project activity and developer interest.
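For those who want to pull comparable numbers themselves, here is a minimal sketch against the public GitHub REST API using only the standard library; the repository paths are assumptions about where each project was hosted at the time, not part of the original data collection.

```python
# Illustrative only: fetch fork and star counts for each project from the
# public GitHub API. Repository paths are assumptions; unauthenticated
# requests are subject to low rate limits.
import json
import urllib.request

repos = {
    "Ansible": "ansible/ansible",
    "Chef":    "opscode/chef",
    "Puppet":  "puppetlabs/puppet",
    "Salt":    "saltstack/salt",
}

for project, repo in repos.items():
    with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as response:
        data = json.load(response)
    print(project, data["forks_count"], data["stargazers_count"])
```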

Superficial though the signal of a GitHub star may be, it is interesting nevertheless to see Ansible outperforming Salt and Ansible and Salt both outperforming the better known Chef and Puppet.

The leadership of Ansible and Salt within the pull requests, meanwhile, was predictable. As Chef and Puppet are older and more mature projects, it’s natural that they would see a lower rate of pull requests. It’s interesting to note, however, that Ohloh shows a much higher number of all time contributors for Ansible (559) and Salt (661) than Chef (331) and Puppet (332). GitHub doesn’t concur with those numbers precisely, but does show a similar disparity in terms of contributor volume.

In terms of the number of times each project has been forked on GitHub, the numbers are closer, but still advantage Ansible. Chef is forked slightly more often than Salt, which in turn is more widely forked than Puppet.

As far as we can tell from these rough GitHub metrics, then, developer activity within the new market entrants signals them as projects to be watched closely.

Hacker News

Within the Hacker News community, the metric is merely mentions of the individual technologies plotted over time. Unfortunately, plotting with ‘Salt’ points to an issue with the metric.

Not only does its performance on this chart wildly exceed expectations, the mentions predate the actual existence of the project by some four years. Clearly we’re dealing with artifacts then, recording mentions of “salting” password databases and the like rather than strictly mentions of the project. If we instead query using SaltStack, the results look slightly more reasonable.

It’s necessary to note that this disadvantages Salt in that the ‘SaltStack’ query will omit some legitimate mentions of ‘Salt,’ but that can’t be helped without a Google Trends-style topical understanding of the subject matter. In the meantime, the results are more or less in line with reasonable expectations. Chef and Puppet outperform their younger counterparts, particularly when the latter hadn’t yet been created, and appear to maintain a substantial edge in overall mentions – although Ansible has been spiking this year and may be currently competitive in terms of discussion volume.
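As a rough way to reproduce this kind of count, one option is the Algolia-backed Hacker News search API; the sketch below is an assumption about approach rather than the tooling used for the chart above, and the same “salt” versus “SaltStack” caveat applies to the query terms.

```python
# Illustrative only: count Hacker News stories matching each query term via
# the public Algolia HN search API (not the tooling used for the chart above).
import json
import urllib.request

for term in ("ansible", "chef", "puppet", "saltstack"):
    url = f"https://hn.algolia.com/api/v1/search?query={term}&tags=story"
    with urllib.request.urlopen(url) as response:
        hits = json.load(response)["nbHits"]  # total matching stories
    print(term, hits)
```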

Indeed.com

In terms of job queries, we run into similar issues as above. Chef and Salt massively outperform the other two projects, in part because the results reflect jobs other than those working with these tools. If we attempt to subset the data we’re looking for, adding ‘technology’ to the query to restrict our search to technology jobs only, we still have issues with Salt and thus are forced to omit it, but have somewhat more reasonable looking data for the other three.

In terms of the absolute number of jobs, both Chef and Puppet are massively overrepresented relative to Ansible, as would be expected. Chef’s lead over Puppet is clearly somewhat artificial, as its apparent traction dates back to 2006 while the initial release of the project was in 2009. But in general, it seems reasonable to conclude that Chef and Puppet offer a higher volume of jobs at the present time than either Ansible or Salt.

In terms of their relative performance, rather than the absolute number of jobs, the most notable feature of the chart is Puppet’s rapid and sustained growth. Ansible looks to be growing, but not nearly at the rate that Chef and Puppet are.

LinkedIn

In another counting statistic, meaning that time is a factor, the relative membership rates of LinkedIn user groups were no surprise.

Ansible and Salt were substantially outperformed by both Chef and Puppet. Interestingly, however, Puppet dominated not only the two newer projects but Chef as well. It’s difficult to say, however, whether this genuinely represents an advantage in traction for Puppet’s community, or whether it’s another artifact: this time of the low discoverability of Chef’s user group. Simply entering Chef turns up pages of cooking-related user groups; would-be members have to begin their LinkedIn query with Opscode to turn up the user group they’re looking for.

Stack Overflow

To examine the Stack Overflow dataset, this script by Bryce Boe was used to examine the performance of two of the selected projects by Stack Overflow tags by week over a multi-year period. Ansible and Salt did not generate high enough returns to be plotted here.

While Chef comes out slightly ahead, the correlation between questions tagged Chef or Puppet is strong, with neither taking a commanding lead. Importantly, however, the trajectories for both are upwards, if uneven. To get a sense of how all four projects compare in a snapshot, the following chart depicts the tag volume for each project.

To no one’s surprise, Chef and Puppet have generated substantially more questions over time than either Ansible or Salt – if only because they are older projects. Notable in addition to this, however, is Chef’s lead over Puppet. This is interesting because Puppet’s initial release was in 2005, four years before Chef became available. To be fair, however, Stack Overflow itself was only launched in 2008, so it’s not as if Puppet could capitalize on its first to market status with traction on a site that didn’t yet exist. Apart from Chef and Puppet, Ansible (84) demonstrates marginally more traction than Salt (37), but the total volumes mean the importance of that difference is negligible.
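Snapshot counts like these can also be read directly from the public Stack Exchange API; the sketch below is a minimal illustration of that approach rather than the script used above, and the requests dependency, API version and tag names are assumptions.

```python
# Illustrative only: fetch the current number of questions carrying each tag
# from the public Stack Exchange API (a snapshot, not the per-week series
# produced by the script mentioned above). Tag names are assumptions.
import requests

tags = "ansible;chef;puppet;salt-stack"
url = f"https://api.stackexchange.com/2.3/tags/{tags}/info"
items = requests.get(url, params={"site": "stackoverflow"}).json()["items"]

for item in items:
    print(item["name"], item["count"])
```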

The Gist

What do we take then from all of these charts, with all the mentioned caveats? Most obviously, the data suggests no clear winner of this market at the present time. It indicates greater existing traction and usage for Chef and Puppet, of course, but this is to be expected given their longer track record. Even narrowing the field to those two projects, neither holds a position of dominance as judged by these metrics.

The most interesting conclusion to be taken from this brief look at a variety of community data sources, however, may well be the relevance of both Ansible and Salt. That these projects appear to have viable prospects in front of them speaks to the demand for solutions in the area, as well as the strong influence of personal preferences – e.g. the affinity for Salt amongst Python developers. Neither of the newer market entrants is remotely competitive with the incumbents in terms of counting stats, but they are more than holding their own in metrics reflective of simple interest.

How this market evolves in the future is still unclear, as few projected it to be more than a two horse race as recently as a few years ago. But while Chef and Puppet continue to sustain growth, it is likely that they’ll be facing more competition over time from the likes of Ansible and Salt.

Disclosure: AnsibleWorks is a RedMonk customer. Opscode and Puppet Labs have been RedMonk customers, but are not currently. SaltStack is not a RedMonk customer.

Categories: Configuration Management, Open Source.

Will Python Kill R?

In an article entitled “Python Displacing R As The Programming Language For Data Science,” MongoDB’s Matt Asay made an argument that has been circulating for some time now. As Python has steadily improved its data science credentials, from Numpy to Pandas, with even R’s dominant ggplot2 charting library having been ported, its viability as a real data science platform improves daily. More than any other language in fact, save perhaps Java, Python is rapidly becoming a lingua franca, with footholds in every technology arena from the desktop to the server.

The question, per yesterday’s piece, is what this means for R specifically. Not surprisingly, as a debate between programming languages, the question is not without controversy. Advocates of one platform or the other have taken to Twitter to argue for or against the hypothesis, sometimes heatedly.

Python advocates point to the flaws in R’s runtime, primarily performance, and its idiosyncratic syntax. Which are valid complaints, speaking as a regular R user. They are less than persuasive, given that clear, clean syntax and a fast runtime correlate only weakly with actual language usage, but they certainly represent legitimate arguments. More broadly, and more convincingly, others assert that over a long enough horizon, general purpose tools typically see wider adoption than specialized alternatives. Which is again, a substantive point.

R advocates, meanwhile, point to R’s anecdotal but widely accepted traction within academic communities. As an open source, data-science focused runtime with a huge number of libraries behind it, R has been replacing tools like MATLAB, SAS, and SPSS within academic settings, both in statistics departments and outside of them. R’s packaging system (CRAN), in fact, is so extensive that it contains not only libraries for operating on data, but datasets themselves. Not only does it contain datasets for individual textbooks taught in academia, it will store different datasets by the edition of those textbooks. An entire generation of researchers is being trained to use R for their analysis.

Typically this is the type of subjective debate which can be examined via objective data sources, but comparing the trajectories is problematic and potentially not possible without further comparative research. RStudio’s Hadley Wickham, creator of many of the most important R libraries, examined GitHub and StackOverflow data in an attempt to apply metrics to the debate, but all the data really tells us is that a) both languages are growing and that b) Python is more popular – which we knew already. Searches of package popularity likewise are unrevealing; besides the difficulty of comparing runtimes due to the package-per-version protocol, there is the contextual difficulty of comparing Python to R. Python represents a superset of R use cases. We know Python is more versatile and applicable in a much wider range of applications. We also know that in spite of Python’s recent gains, R has a wider array of data science libraries available to it.

My colleague Donnie Berkholz points to this survey, which at least is context-specific in its focus on languages employed for analytics, data mining, data science. It indicates that R remains the most popular language for data science, at 60.9% to Python’s 38.8%. And for those who would argue that current status is less important than trajectory, it further suggests that R actually grew at a higher rate this year than Python – 15.1% to 14.2%. But without knowing more about the composition and sampling of the survey audience, it’s difficult to attribute too much importance to this survey. Granted, it’s context specific, but we have no way of knowing whether the audience surveyed is representative or skewed in one direction or another.

Ultimately, it’s not clear that the question is answerable with data at the present time. Still, a few things seem clear. Both languages are growing, and both can be used for data science. Python is more versatile and widely used, R more specialized and capable. And while the gap has been narrowing as Python has become more data science capable, there’s a long way to go before it matches the library strength of R – which continues to progress in the meantime.

How you assess the future path depends on how you answer a few questions. At RedMonk, we typically bet on the bigger community, but that’s not as easy here. Python’s total community is obviously much larger, but it seems probable that R’s community, which is more or less strictly focused on data science, is substantially larger than the subset of the Python community specifically focused on data. Which community do you bet on then? The easy answer is general purpose, but that undervalues the specialization of the R community on a discipline that is difficult to master.

While the original argument is certainly defensible, then, I find it ultimately unpersuasive. The evidence isn’t there, yet at least, to convince me that R is being replaced by Python on a volume basis. With key packages like ggplot2 being ported, however, it will be interesting to watch for any future shift.

In the meantime, the good news is that users do not need to concern themselves with this question. Both runtimes are viable as data science platforms for the foreseeable future, both are under active development and both bring unique strengths to the table. More to the point, language usage here does not need to be a zero sum game. Users that wish to leverage both, in fact, may do so via the numerous R<==>Python bridges available. Wherever you come down on this issue, then, rest assured that you’re not going to make a bad choice.
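As one concrete illustration of such a bridge, here is a minimal sketch using rpy2; it assumes rpy2 and a local R installation are available, and the toy data is invented.

```python
# A minimal sketch of calling R from Python via rpy2 (one of the bridges
# mentioned above); assumes rpy2 and an R installation are present.
import rpy2.robjects as ro

# Push a small Python dataset into the embedded R session.
ro.globalenv["x"] = ro.FloatVector([1.0, 2.0, 3.0, 4.0, 5.0])
ro.globalenv["y"] = ro.FloatVector([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit a linear model with R and read the coefficients back into Python.
fit = ro.r("lm(y ~ x)")
coefficients = list(fit.rx2("coefficients"))
print(coefficients)  # [intercept, slope], as estimated by R
```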

Disclosure: I use R daily, I use Python approximately monthly.

Categories: Programming Languages.

The Difficulty of Selling Software

On the surface, this statement by Asymco analyst Horace Dediu is clearly and obviously false. For 2013, Microsoft’s Windows and Business (read: Office) divisions alone generated, collectively, $44B in revenue. This number was up around 4% from the year before, after being up 3% in 2012 versus the year prior. This comment, in other words, is easily dismissed as hyperbole.

But given the overwhelming amount of evidence contradicting the above statement, and his familiarity with capital markets, it’s highly unlikely that Dediu would be unaware of this. Which makes it reasonable, therefore, to conclude that he did not intend for the statement to be interpreted literally. Which in turn implies that Dediu’s making a directional statement rather than a literal description of the market reality.

Even if one gives, for the sake of argument, Dediu the benefit of the doubt and assumes subtlety, the next logical counterargument is that he’s unduly influenced by his focus on consumer markets. The trend there, after all, is clear: the majority of available consumer software is subsidized by either advertising (e.g. Facebook, Google, Twitter) or hardware (e.g. Apple). More to the point, both of these models are attempting to exert pressure on the paid software model, as in the case of Apple’s iWork and Google Docs competing for mindshare with the non-free Microsoft Office or the now free OS X (non-server) positioned against the non-free Microsoft Windows. Even in hot application spaces like mobile, it’s getting increasingly difficult to commercialize the output.

If this is your analytical context, then – and certainly Dediu’s primary focus (Asymcar notwithstanding) is on Apple and markets adjacent to Apple – the logical conclusion is indeed that software prices are heading towards zero in most categories, and that software producers need to adjust their revenue models accordingly.

No surprise then that it is by labeling the decline of realizable revenues as a consumer software-only phenomenon that enterprise providers are able to reassure both themselves and the market that they are uniquely immune, insulated from an erosion in the valuation of software as an asset by factors ranging from the price insensitivity and inertia of enterprise buyers to technical and/or practical lock-in. And to be fair, enterprise software markets are eminently more margin-oriented than consumer alternatives, not least because businesses are used to regarding technology as a cost of doing business. For consumers, it has historically been more of a luxury.

But the fact is that the assertion that it’s getting more difficult to charge for software is correct, as we have been arguing since 2010/2011.

The surface evidence, once again, contradicts this claim. Consider the chart of Oracle’s software revenue below.

This, for Oracle, is the good news. With few exceptions, notably a market correction following the internet bubble, Oracle has sustainably grown its software revenue every year since 2000. The Redwood Shores software giant, in fact, claimed in October that it was now the second largest software company in the world by revenue behind Microsoft, passing IBM. If a company that large can continue to generate growth, year after year, it’s easy to vociferously argue that the threat of broader declines in the viability of commercial software-only models is overblown. But this behavior, common to software vendors today, increasingly has a whistling-past-the-graveyard ring to it.

Whatever your broader thoughts on the mechanics of Dediu-mentor and Harvard Business School professor Clayton Christensen’s theory of disruption, history adequately demonstrates that even highly profitable, revenue generating companies are vulnerable. Oracle, for example, is challenged as a software-sales business by a variety of actors, from open source projects to IaaS or SaaS service-based alternatives. To its credit, the company has hedges against both in BerkeleyDB/MySQL/etc and its various cloud businesses. It’s not clear, however, that even collectively they could offset any substantial impact to its core software sales business – while not broken out, MySQL presumably generates far less revenue than the flagship Oracle database. Software was 67% of Oracle’s revenue in 2011, a year after they acquired Sun Microsystems and its hardware businesses. In 2013, software comprised 74% of Oracle’s revenue.

The question for Oracle and other companies that derive the majority of their income from software, rather than with software, is whether there are signs underneath the surface revenue growth that might reveal challenges to the sustainability of those businesses moving forward. Consider Oracle’s 10-K filings, for example. Unusually, as discussed previously, Oracle breaks out the percentage of its software revenue that derives from new licenses. This makes it easier to document Oracle’s progress at attracting new customers, and thereby the sustainability of its growth. The chart below depicts the percentage of software revenue Oracle generated from new licenses from 2000-2013.

There are a few caveats to be aware of. First, there are contradictions in the 2002 and 2003 10-K’s; second, where the 2012 10-K reported “New software licenses,” the 2013 10-K is now terming this “New software licenses and cloud software subscriptions.” With those in mind, the trendline here remains clear: Oracle’s ability to generate new licenses is in decline, and has been for over a decade. At 38% in 2013, the percentage of revenue Oracle derives from new licenses is a little less than half of what it was in 2000 (71%). Some might attribute this to the difficulty for large incumbents to organically generate new business, but in the year 2000 Oracle was already 23 years old.

What this chart indicates, instead, is that Oracle’s software revenue growth is increasingly coming not from new customers but from existing customers. Which is to the credit of Oracle’s salesforce, in spite of what the company characterized as their “lack of urgency.”

It may not be literally true, as Dediu argued above, that you can’t charge for software anymore. But it’s certainly getting harder for Oracle. And if it’s getting harder for Oracle, which has a technically excellent flagship product, it’s very likely getting harder for all of the other enterprise vendors out there that don’t break out their new license revenues as Oracle does. This is not, in other words, an Oracle problem. It’s an industry problem.

Consumer software, enterprise software: it doesn’t much matter. It’s all worth less than it was. If you’re not adapting your models to that new reality, you should be.

Disclosure: Oracle is not a RedMonk client. Microsoft has been a RedMonk client but is not currently.

Categories: Business Models, Cloud, Databases, Open Source, Software-as-a-Service.

The Questions for Hadoop Moving Forward

Strata + Hadoop World New York 2013

In the beginning – October, 2003 to be precise – there was the Google File System. And it was good. MapReduce, which followed in December 2004, was even better. Together, they served as a framework for Doug Cutting’s original work at Yahoo, work that resulted in the project now known as Hadoop in 2005.

After being pressed into service by Yahoo and other large web properties, Hadoop’s inevitable standalone commercialization arrived in the form of Cloudera in 2009. Founded by Amr Awadallah (Yahoo), Christophe Bisciglia (Google), Jeff Hammerbacher (Facebook) and Mike Olson (Oracle/Sleepycat) – Cutting was to join later – Cloudera oddly had the Hadoop market more or less to itself for a few years.

Eventually the likes of MapR, Hortonworks, IBM and others arrived. And today, any vendor with data processing ambitions is either in the Hadoop space directly or partnering with an entity that is – because there is no other option. Even vendors with no major data processing businesses, for that matter, are jumping in to drive other areas of their business – Intel being perhaps the most obvious example.

The question is not today, as it was in those early days, what Hadoop is for. In the early days of the project, many conversations with users about the power of Hadoop would stall when they heard words like “batch” or compared MapReduce to SQL (see Slide 22). Even already on-board employers like Facebook, meanwhile, faced with a market shortage of MapReduce-trained candidates, were forced to write alternative query mechanisms like Hive themselves. All of which meant that conversations about Hadoop were, without exception, conversations about what Hadoop was good for.

Today, the reverse is true: it’s more difficult to pinpoint what Hadoop isn’t being used for than what it is. There are multiple SQL-like access mechanisms, some like Impala driving towards lower and lower latency queries, and Pivotal has even gone so far as to graft a fully SQL-compliant relational database engine on to the platform. Elsewhere, projects like HBase have layered federated database-like capabilities onto the core HDFS Hadoop foundation. The net of which is that Hadoop is gradually transitioning away from being a strictly batch-oriented system aimed at specialized large dataset workloads and into a more mainstream, general purpose data platform.

The large opportunity that lies in a more versatile, less specialized Hadoop helps explain the behavior of participating vendors. It’s easier to understand, for example, why EMC is aggressively integrating relational database technology into the platform if you understand where Hadoop is going versus where it has been. Likewise, Cloudera’s “Enterprise Data Hub” messaging is clearly intended to achieve separation from the perception that Hadoop is “for batch jobs.” And the size of the opportunity is the context behind IBM’s comments that it “doesn’t need Cloudera.” If the opportunity, and attendant risk, was smaller, IBM would likely be content to partner. But it is not.

Nor is innovation in the space limited to those who would sell software directly; quite the contrary, in fact. Facebook’s Presto is a distributed SQL engine built directly on top of HDFS, and clones of Google Spanner et al are as inevitable as Hadoop was once upon a time. Amazon’s Redshift, for its part, is gathering momentum amongst customers who don’t wish to build and own their own data infrastructure.

Of course, Hadoop could very well be years behind Google from a technology perspective. But even if the Hadoop ecosystem is the past to Google, it’s the present for the market. And questions about that market abound. How does the market landscape shake out? Are smaller players shortly to be acquired by larger vendors desperate not be locked out of a growth market? Will the value be in the distributions, or higher level abstractions? How do broadening platform strategies and ambitions affect relationships with would-be partners like a MongoDB? How do the players continue to balance the increasing trend towards open source against the need to optimize revenue in an aggressively competitive market? Will open source continue to be the default, baseline expectation, or will we see a tilt back towards closed source? Will other platforms emerge to sap some of Hadoop’s momentum? Will anyone seriously close the gap between MapReduce/SQL analyst and Excel user from an accessibility standpoint?

And so on. These are the questions we’re spending a great deal of time exploring in the wake of the first Strata/HadoopWorld in which Hadoop deliberately and repeatedly asserted itself as a general purpose technology. From here on out, the stakes are higher by the day, and the margin for error lower. To she who gets more of the answers to the above questions correct go the spoils.

Categories: Big Data, Open Source.

The Depth of Amazon’s Ambition

Not surprisingly for an organization that has updated its product line 200 times this year as of the first of the month, Amazon had a few tricks up its sleeve for its annual re:Invent conference. For the company that effectively created the cloud market, the show was an important one for showcasing the sheer scope of Amazon’s targets.

Amazon is correctly regarded as one of the fastest innovating vendors in the world, with the release pace up over 500% from 2008 through last year. And if Amazon keeps up its pace for releases through the end of the year, it will have released 36% more features this year than last.

But as impressive as the pace is, the more impressive – and potentially more important – aspect to their release schedule is its breadth. Consider what Amazon announced at re:Invent:

  • AppStream (Mobile/Gaming)
  • CloudTrail (Compliance and Governance)
  • Kinesis (Streaming)
  • New Instance Types in C3/I2 (Performance compute)
  • RDS Postgres (Database as a Service)
  • Workspaces (VDI)

The majority of cloud vendors today are focused on executing with core cloud workloads, or basic compute and storage. There are certainly players focused on adding value through differentiated, specialized technologies such as Joyent with its distributed-Unix data-oriented Manta offering or ProfitBricks with its scale up approach, but these are the exception rather than the rule. Whether it’s public cloud providers or enterprises attempting to build out private cloud abilities, most of the focus is on simply keeping the lights on.

At re:Invent, Amazon did upgrade its traditional compute offerings via C3/I2, but also signaled its intent to embrace and extend entirely new markets. Most obviously, Amazon has with Workspaces turned its eye towards VDI, for years a market long on promise but short on traction. The theoretical benefits of VDI, from manageability to security, have to date rarely outweighed the limitations and costs of delivery, making it the Linux desktop of IT – with success always just over the horizon. Amazon’s bet here is that by removing the complexity of execution it can engage with customers in a manner that its core cloud businesses cannot, and thereby grow its addressable market in the process.

Similarly, Kinesis is an entry into a specialized market that has typically been the province either of vendor packages – e.g. IBM InfoSphere Streams – or more recent open source project combinations such as Storm/Kafka. Of specific interest with Kinesis is the degree to which Amazon is leading the market here rather than responding to it. When questioned on the topic, Amazon said that Kinesis was unlike other Amazon offerings such as Workspaces that were a response to widespread customer demand. Instead, Amazon is anticipating future market needs with Kinesis, and attempting to deliver ahead of same.

AppStream, for its part, is effectively a Mobile/Gaming-backend-as-a-service, putting providers in that space on notice. The addition of Postgres as an RDS option, meanwhile, came to wide developer acclaim, but means that Amazon will increasingly be competing with AWS customers like Heroku. And CloudTrail, particularly with its partner list, means that AWS is taking the enterprise market seriously, which is both opportunity and threat for its enterprise ecosystem partners.

Big picture, re:Invent was an expansion of ambition from Amazon. Its sights are even broader than was realized heading into the show, which should give the industry pause. It has been difficult enough to compete with AWS on a rate of innovation basis in core cloud markets; with its widening portfolio of services, the task ahead of would-be competitors large and small just got more difficult.

That being said, however, it is worth questioning the sustainability of Amazon’s approach over the longer term. Microsoft similarly had ambitions not just to participate in but fundamentally dominate and own peripheral or adjacent markets, and arguably that near infinite scope impacted their focus in their core competencies. The broader and more diverse the business, the more difficult it becomes to manage effectively – not least because you end up making more enemies along the way. It remains to be seen whether or not Amazon’s increasing appetite to cloudify all the things has a similar effect on its ability to execute moving forward, but in the interim customers have a brand new stable of toys to play with.

Disclosure: Amazon, Heroku, and IBM are RedMonk customers, Joyent, Microsoft and ProfitBricks are not.

Categories: Cloud, Conferences & Shows.

A Look at Public Offerings from 1980-2012

A year ago, a CTO that had landed a large public round and secured a quarter as much in a less public investment candidly described the process saying, “this used to be called going public.” MongoDB, the recent beneficiary of a $150M round led by Intel, Salesforce.com and Sequoia would likely agree. As might Uber, who received $250M in financing from Google Ventures. Going public is clearly no longer the sole route to market for outsized capital requirements.

Which isn’t to imply that venture deal sizes are, on average, increasing. Thanks to a combination of factors from the rise of early stage investment vehicles like Y Combinator to open source software and the public cloud, data gathered by Chris Tacy (below) indicates that if we conflate angel and traditional venture investments, deal volume is up but the size of individual deals is actually in decline.

But at the opposite end of the spectrum, anecdotal evidence suggests that private funding is increasingly competing with public markets in ways not seen previously. The question is whether the data validates the assumption that private companies are being funded on a scale historically competitive with public market returns, and what this means for the wider market moving forward.

To explore the first question, it’s useful to examine data (PDF) on US Initial Public Offerings from 1980-2012 collected by Professor Jay R. Ritter of the University of Florida. In his own words, the sample includes “IPOs with an offer price of at least $5.00, excluding ADRs, unit offers, closed-end funds, REITs, partnerships, small best efforts offers, banks and S&Ls, and stocks not listed on CRSP (CRSP includes Amex, NYSE, and NASDAQ stocks).” To begin, here is the total number of IPOs per year beginning in 1980.

It should be no surprise to most that public offerings spiked in the late 1990s. The Tulipmania hysteria that consumed the technology industry – and eventually, the world – during the bubble has been well documented. What’s interesting about this chart, however, is that it indicates that the market has yet to recover from the tech-driven crash in public offering volumes. The median number of IPOs per year from 1980 to 2012 is 174; we have not seen that many in a given year since 2004. The recent recession, of course, undoubtedly depressed the appetite for entities to take themselves public. But even in years of relative domestic prosperity, IPOs seem to have lost some of their luster.

One potential explanation would be the returns. Below is a chart of the aggregate proceeds from all IPOs in a given year, as calculated by Ritter. To normalize them for context, all numbers have been adjusted for inflation; the dollar amounts depicted therefore represent approximate values in 2013 US dollars.

While the trendlines don’t match precisely, it’s interesting and perhaps not surprising to note the strong correlation between the returns from public offerings and their frequency. It is also worth noting that while proceeds have recovered more strongly than volume, the aggregate returns from public offerings remain depressed. From 1980 to 2012, the median return in 2013 dollars for the aggregate of a year’s worth of public offerings is $28.5B – a figure that hasn’t been reached in four of the last six years. An analysis of the average individual returns, however, challenges the hypothesis that the lack of an expected return is preventing would-be IPOs from transacting.

The above chart depicts the aggregate returns for a given year divided by the number of IPOs – providing us with, essentially, an average IPO return. Even after normalizing against a 2013 dollar scale, it’s apparent that the realizable returns per transaction are still growing (if you’re curious about the 2008 outlier, that’s the year Visa went public and raised ~$17B). Which in turn should mean that the incentive to go public remains, and certainly entities from Google (2004) to Facebook (2012) to the aforementioned Twitter have chosen that path in spite of the availability of capital in private markets.
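For anyone inclined to reproduce this style of calculation against Ritter’s published tables, the following is a minimal Python sketch of the underlying arithmetic; the yearly figures and CPI deflators it uses are illustrative placeholders, not values taken from the dataset.

```python
# Illustrative sketch: inflation-adjust aggregate IPO proceeds and compute
# an average per-offering return. All figures below are placeholders, not
# values taken from Ritter's data.

# hypothetical inputs: year -> (number of IPOs, aggregate proceeds in nominal $B)
ipo_data = {
    1999: (476, 65.0),
    2008: (21, 24.0),
    2012: (93, 31.0),
}

# hypothetical CPI-style deflators used to restate nominal dollars in 2013 terms
cpi = {1999: 166.6, 2008: 215.3, 2012: 229.6, 2013: 233.0}

def average_ipo_return_2013_dollars(year):
    count, nominal_billions = ipo_data[year]
    adjusted = nominal_billions * (cpi[2013] / cpi[year])  # restate in 2013 dollars
    return adjusted / count                                # average proceeds per IPO, $B

for year in sorted(ipo_data):
    print(year, round(average_ipo_return_2013_dollars(year), 3))
```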

Still, it is interesting to observe that deals like MongoDB’s $150M round dwarf the expected returns from historical IPOs, even after adjusting for inflation. From 1980 to 1997, for example, the average adjusted return from a public offering never eclipsed $100M. Since then it has expanded dramatically, with the median adjusted return since 1997 weighing in at approximately $253M, or roughly $100M more than MongoDB raised in its last round.

If more companies are either delaying going public or avoiding the public markets entirely, one would expect the companies that do eventually go public to be increasingly venture-backed. While the costs of starting and running businesses have in many respects come down due to dramatic drops in the cost of technical infrastructure, among other categories, those savings have in many respects been offset by spikes in other areas, notably healthcare. Which means that whether public or private, growing companies are likely to still require financing to fuel growth. And indeed, we find exactly this sort of trajectory in venture-backed companies.

The above chart depicts the percentage of IPOs from technology entities that were backed by venture capital. While the overall percentage has always been high, the trendline is clearly towards greater VC participation. Which makes sense in the wake of a decade of increased reliance on alternatives to the public markets.

As for what all of this means moving forward, the answers are unclear. In the aggregate, the private market is obviously not lacking for available capital. Just as clearly, decline in volume or no, the returns remain there for public market entrants – or at least some of them. But as the number of large venture deals that approximate the anticipated returns from a public offering appears to be on the rise, it’s worth monitoring the dynamic between public and private funding sources. In the meantime, we’re likely to continue seeing the kinds of deals that “used to mean going public.”

Categories: Venture Capital.

The 2013 Monktoberfest

Monktoberfest 2013
(All photos courtesy Maney Digital)

In a 2001 piece for the New York Times, the now sadly departed Elmore Leonard summed up his tenth and final rule on how to write simply: “Try to leave out the part that readers tend to skip.” Without claiming any particular success, this is essentially the philosophy behind the Monktoberfest. In effect, it’s an attempt to answer the question: what would happen if we threw a conference without the parts that people skip?

Consider sponsored talks, for example. While it is not technically impossible to deliver a sponsored talk that engages an audience, the overlap between great talks and paid talks is tiny. Most end up as little more than infomercials. So we lose them. Then there’s timing. For a conference aimed at and built for developers, who tend not to be the early-rising type, why would we start the conference at the more typical 8 AM? 10 AM is much more civilized. And what do people most frequently want to skip at a conference? Meals delivered by a staff whose focus is scaling the food, not crafting the food. Many fewer people, on the other hand, skip a sushi lunch or a dinner that includes lobsters caught by the caterer’s husband the afternoon before.

While this is a bit of a different approach for conferences, the logic behind it seems straightforward. In my experience, the quality of any given conference will ultimately be determined not by the food, drink or even the speakers – as important as they are. The value of a conference is determined instead by its people. Why, then, would we optimize for anything but the people?

Monktoberfest 2013

Whether we succeeded will be determined in the weeks and months ahead, as the impact of the individual talks ripples outwards and we see the manifestations on social media and elsewhere of new connections made at the show. But the early returns are gratifying.

The last quote from Mike is perhaps the most important to me personally. People who have never attended the Monktoberfest will ask me what it’s all about, and my answer is that it’s about the intersection of social and technology. It’s about how technology changes the way that we socialize, and how the way that we socialize changes the way that we build technology. But within that broad framework, speakers have a great deal of latitude to interpret the constraints in interesting ways. In doing so, as Mike says, they make me think about why I think what I think. They make me think about what I’m doing, why I’m doing it, and how I can help. They inspire me, and I seriously doubt that I’m the only one. They are, in short, the kinds of talks that don’t necessarily have a home at other shows.

Thanks

As with most large productions, the Monktoberfest is a group effort, and as such, there are many people to thank.

  • Our Sponsors: Without them, there is no Monktoberfest
    • IBM MobileFirst: In an industry littered with the carcasses of businesses that couldn’t adapt to change, IBM is one of the few major technology companies in existence that has survived not one but multiple waves of disruption. The driving force behind most disruption today is the developer – nowhere is this more apparent than in mobile – and we appreciate IBM’s strong support as our lead sponsor in helping to bring them the conference they deserve.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the third year in a row they’ve sponsored the conference, if I’m not mistaken – helps us make the show possible.
    • ServiceRocket: When we post the session videos online in a few weeks, it is ServiceRocket that you will have to thank.
    • EMC: Enjoyed your surf & turf dinner? Take a minute to thank the good folks from EMC.
    • Rackspace/Splunk: It’s much easier to splurge on fresh sushi when you have partners like Rackspace and Splunk helping to make it possible.
    • Basho: When you came in a little under the weather on Thursday and treated yourself to a breakfast sandwich, that was Basho’s doing.
    • Atlassian/AWS/Brick Alloy/Citrix/CloudSpokes/Docker/Moovweb/Opscode/Rackspace: Remember the rare beers served at the event – one of which included the only barrel available in the US? These are the people that brought it to you. And be sure to thank Atlassian especially, as they brought you four separate rounds.
    • Brick Alloy/Crowd Favorite: While we continue to search for a reasonable solution to the difficult challenges posed by a hundred plus bandwidth-hungry geeks carrying three or more devices per person, Brick Alloy and Crowd Favorite at least deferred the load onto local repeaters.
    • Rackspace: The glasses this year came courtesy of Rackspace, as our attendees will be reminded every time they drink a craft beverage from one.
    • Moovweb: Moovweb, meanwhile, addressed the afternoon munchies.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they are one of the best craft beer teams not just in this country, but the world. And they’re even better people, having spent the better part of the last few months sourcing exceptionally hard to find beers for us. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret on behalf of Stillwater Ales down in Baltimore, MD to be with us.
  • Lurie Palino: Lurie and her catering crew have done an amazing job for us every year, but this year was the most challenging yet due to some unfortunate and unnecessary licensing demands presented days before the event. As she does every year, however, she was able to roll with the punches and deliver on an amazing event yet again. With no small assist from her husband, who caught the lobsters, and her incredibly hard working crew at Seacoast Catering.
  • Kate (AKA My Wife): Besides spending virtually all of her non-existent free time over the past few months coordinating caterers, venues and overseeing all of the conference logistics, Kate was responsible for all of the good ideas you’ve enjoyed, whether it was the masseuses last year or the cruise this year. She also puts up with the toll the conference takes on me and my free time. I cannot thank her enough.
  • The Staff: From Juliane and James securing and managing all of our sponsors to Marcia handling all of the back end logistics to Kim, Ryan and the rest of the team handling the chaos that is the event itself, we’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: I’d like to thank Jim Conroy of The Alchemist, Josh Wolf of Allagash, Greg Norton of Bier Cellar, Mike Fava and Tim Adams of Oxbow, and Brian Strumke of Stillwater for taking time out of their busy schedules to be with us. The Alchemist and Allagash, in addition, were kind enough to provide giveaways to our attendees and speakers, respectively.
  • Mike Maney: If he’s not the most enthusiastic Monktoberfest attendee, I’m not sure who would be. Last year he embarked on an epic 7 state road trip to the conference, and this year he sourced three bottles of Dogfish hand signed by none other than the founder of the brewery, Sam Calagione. These we were able to give away to attendees thanks to Mike’s efforts.
  • Caroline McCarthy & Mike McClean of Abbey Cat Brewing: At the conclusion of our brewer’s panel featuring the Alchemist, Allagash, Bier Cellar, Oxbow and Stillwater, our panelists were each issued a customized Monktoberfest mash paddle. This came courtesy of a connection from Monktoberfest speaker Caroline McCarthy, who introduced me to Mike McClean, who graciously furnished us with the paddles gratis. Abbey Cat Brewing, in Mike’s words, makes “mash paddles, with the help of a sweatshop staffed entirely by foster kittens.” What he failed to add is that they are gorgeous creations. And before you ask, yes, we have pictures of the paddles with kittens.

With that, we close this year’s Monktoberfest. For everyone who was a part of it, I owe you my sincere thanks. You make all the blood, sweat and tears worth it. Stay tuned for details about next year, and in the meantime, you might be interested in Thingmonk or the Monki Gras, RedMonk’s other two conferences.

Categories: Conferences & Shows.

Are PaaS and Configuration Management on a Collision Course and Four Other PaaS Questions

The following was meant to be ready in time for the Platform conference last week, but travel. While it’s belated, however, it may be of interest to those tracking the PaaS market. At RedMonk, the volume of inquiries related directly and indirectly to PaaS has been growing rapidly, and these are a few of the more common questions we’re fielding.

Q: Is PaaS growing?
A: The short answer is, by most measurements – search traffic included – yes.

The longer answer is that while interest in PaaS is growing, its lack of visibility on a production basis is adding fuel to the arguments of those who remain skeptical of the market’s potential. Because PaaS was overrun in its early days by IaaS, there are many in the industry who continue to argue that PaaS is at best a niche market, and at worst a dead end.

To make this argument, however, one must address two important objections. First, the early failures in the PaaS space were failures of execution, not of model: single, proprietary runtime platforms are less likely to be adopted than open, multi-runtime alternatives, for reasons that should be obvious. Second, and perhaps more importantly, those arguing that the lack of production visibility for PaaS today means that it lacks a future must explain why this should be true, given that history does not support the point. Quite the contrary, in fact: dozens of technologies once dismissed as “non-production” or “not for serious workloads” are today in production, running serious workloads. The most important factor for most technologies isn’t where they are today, but rather what their trajectory is.

Q: How convenient is PaaS, really?
A: That depends on one’s definition of convenience. It is absolutely true that PaaS simplifies or eliminates entirely many of the traditional challenges in deploying, managing and scaling applications. And given that developers are typically more interested in the creation of applications than the challenges of managing them day to day, these abilities should not be undersold.

That said, PaaS advocates are frequently unaware of the friction relative to traditional IaaS alternatives. Terminology, for example, is frequently a source of confusion: the language of infrastructure-as-a-service, which is essentially a virtual representation of physical alternatives, is simple. Servers are instantiated, run applications and databases, have access to a storage substrate and so on. Would-be adopters of PaaS platforms, however, must reorient themselves to a world of dynos, cartridges and gears. Even the metrics are different; rather than being billed by instance, users may be billed by memory or transactions – some of which can be difficult to predict reliably.
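To make the billing difference concrete, here is a minimal sketch contrasting a per-instance-hour estimate with a per-memory-hour estimate; every rate and sizing figure in it is a hypothetical placeholder rather than any provider’s actual pricing.

```python
# Hypothetical cost comparison: instance-hour billing (IaaS-style) versus
# memory-hour billing (PaaS-style). All rates are invented for illustration.

HOURS_PER_MONTH = 730

def iaas_monthly_cost(instances, rate_per_instance_hour=0.10):
    """Cost is a simple function of how many instances are running."""
    return instances * rate_per_instance_hour * HOURS_PER_MONTH

def paas_monthly_cost(containers, gb_per_container, rate_per_gb_hour=0.05):
    """Cost scales with the memory allocated per container, not instance count."""
    return containers * gb_per_container * rate_per_gb_hour * HOURS_PER_MONTH

# Two ways of running roughly the same application:
print(iaas_monthly_cost(instances=4))                         # e.g. four small VMs
print(paas_monthly_cost(containers=8, gb_per_container=0.5))  # e.g. eight half-GB containers
```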

Is PaaS more convenient, then? Over the longer term, yes: it will abstract a great deal of complexity away from the application development process. In the short term, however, there are trade-offs. It’s akin to someone who speaks your language, but with a heavy accent or in a different dialect. It’s possible to discern meaning, but it can require effort.

Q: What’s the biggest issue for PaaS platforms at present?
A: While the containerization of an application is far from a solved problem – some applications will run with no issues, while others will break instantly – it is relatively mature next to the state of database integrations. Most PaaS providers at present have distanced themselves from the database, for reasons that are easy to understand: the database issues associated with multi-tenant, containerized and highly scalable applications are many. But this does present problems for users. PaaS platform database pricing has typically reflected the complexity, with application charges forming only a fraction of the loaded application cost next to data persistence. Many platforms, in fact, have openly advocated that the data tier be hosted on entirely separate, external platforms, which spells high latency as applications are forced to call out to remote datacenters even for simple tasks like rendering a page. Expect enhanced database functionality and integration to be a focus and differentiation point for PaaS platforms moving forward; it is why several vendors in the space have invested heavily in relationships with communities like PostgreSQL and MongoDB.
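The latency cost of an externally hosted data tier is easy to approximate with back-of-the-envelope arithmetic; the sketch below assumes illustrative round-trip times and query counts rather than measured figures.

```python
# Back-of-the-envelope latency math for a data tier hosted in a remote
# datacenter versus alongside the application. All numbers are assumptions
# chosen for illustration, not measurements.

def page_render_db_time(queries, round_trip_ms):
    """Total database time for a page that issues its queries sequentially."""
    return queries * round_trip_ms

QUERIES_PER_PAGE = 20   # assumed sequential queries needed to render one page
LOCAL_RTT_MS = 1        # assumed same-datacenter round trip
REMOTE_RTT_MS = 40      # assumed cross-datacenter round trip

print("local data tier :", page_render_db_time(QUERIES_PER_PAGE, LOCAL_RTT_MS), "ms")
print("remote data tier:", page_render_db_time(QUERIES_PER_PAGE, REMOTE_RTT_MS), "ms")
```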

Q: Where do the boundaries of PaaS end and the layers above and below it begin?
A: This is one of the most interesting, and perhaps controversial, questions facing the market today. In many respects, PaaS is well defined and quite distinct from other market categories; consider the previously mentioned lack of database integration, for example. In others, however, the boundary between PaaS and complementary technologies is substantially less clear. Given the PaaS space’s ambition to abstract away the basic mechanics of application deployment and management, for example, it seems logical to question the intersection and potential overlap of PaaS and configuration management/orchestration/provisioning software such as Ansible, Chef, Puppet or SaltStack. PaaS users, after all, are inherently bought into abstraction and automation; will they be content to manage the underlying physical and logical infrastructure using a separate layer? Or would they prefer that be a feature of the platform they choose to encapsulate their applications with?
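As a rough illustration of that overlap, the toy sketch below contrasts what a configuration management run and a PaaS application manifest each typically declare; the field names are generic stand-ins rather than the schema of any particular tool.

```python
# Toy illustration of the overlapping concerns claimed by configuration
# management and by PaaS platforms. Field names are generic stand-ins,
# not the schema of any real tool.

config_management_declares = {
    "runtime": "install language runtimes and system packages",
    "processes": "ensure application and worker services are running",
    "scaling": "provision and configure N machines",
    "os_tuning": "users, kernel parameters, firewall rules",
}

paas_manifest_declares = {
    "runtime": "select a language buildpack, cartridge or gear",
    "processes": "declare web and worker process types",
    "scaling": "dial the container count up or down",
    "routing": "map hostnames to running containers",
}

overlap = config_management_declares.keys() & paas_manifest_declares.keys()
print("concerns claimed by both layers:", sorted(overlap))
```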

If we assume for the sake of argument that, at least on some level, traditional configuration management/provisioning will become a feature of PaaS platforms, the next logical question is: what does this mean both for PaaS platform providers and configuration management/orchestration/provisioning players? Should the latter be aggressively pursuing partnership strategies? Should the former rely upon one or more of these projects, or attempt to replicate the feature themselves?

From the conversations we’re having, these are the important strategic questions providers are asking themselves right now.

Q: What’s the market potential?
A: We do not do market sizing at RedMonk, believing that it is by and large a guess built on a foundation of other guesses. That said, it’s interesting that so many are relegating PaaS to niche-market status. Forget the fact that even companies serving conservative buyers, such as IBM, have chosen to be involved. Consider instead the role that PaaS was built to play. Much as J2EE application servers abstracted Java applications from the operating systems and hardware layers underneath them, so too does PaaS abstract applications from the infrastructure beneath them. It is the new middleware.

Given the size of the Java middleware market at its peak, this is a promising comparison for PaaS. Because while it is true that the commercial value of software has broadly declined since traditional middleware’s apex, PaaS offers something that the application servers never did: multi-runtime support. Where middleware players then were typically restricted to just those workloads running in Java, which was admittedly a high percentage at the time, there are few if any workloads that multi-runtime PaaS platforms will be unable to target. Which makes its addressable market very large indeed.

Disclosure: IBM and Pivotal (Cloud Foundry) are clients, as are Red Hat (OpenShift), MongoDB and Salesforce/Heroku. In addition, Ansible, Opscode and Puppet Labs are or have been clients.

Categories: Cloud, Devops, Platform-as-a-Service.