tecosystems

What’s in Store for 2012: A Few Predictions


The cost of delaying my 2012 predictions is that one has already come to pass. Nginx – the web server now powering all of the redmonk.com properties – passed IIS according to a January 4 Netcraft release. Because the quantitative data available to us has indicated surging interest in the alternative web server – the logical result of which was a commercial response – we’ve been expecting something like this. But of course we can’t count this as a prediction any longer, because it’s January 13th.

Here instead are a few things that have not yet come to pass, but will, I believe, in the year ahead. These predictions are informed by historical context and built off my research, quantitative data that’s available to me externally or via RedMonk Analytics, and the conversations I’ve had over the past twelve months, both digital and otherwise. They cover a wide range of subjects because we at RedMonk do.

For context, my 2010 predictions graded out as 66% accurate while 2011’s were 82% correct.

With that, the 2012 predictions.

Data & The Last Mile

It is not technically correct to assert that large scale data infrastructure is a solved problem. Decades of innovation remain, as the Cambrian explosion of projects demonstrates. It is nevertheless true that, relative to the user interface, data storage and manipulation are solved problems. Since the original creation of Hadoop in 2006, for example, we have seen multiple user interfaces applied: connectors (e.g. R), standard MapReduce, scripting (e.g. Jaql/Pig), SQL (e.g. Hive), spreadsheets (e.g. BigSheets), and client tooling (e.g. Karmasphere). Each has its strengths; none bridges the last mile: putting the power of Big Data in the hands of ordinary users.
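To make that interface ladder concrete, here is a minimal sketch in Python. It simulates the canonical word count locally rather than on a cluster, and contrasts it with the equivalent question asked in Hive’s SQL dialect; the corpus and function names are invented for illustration.

```python
# A minimal sketch of the interface ladder described above, simulating the
# classic word-count job locally (no Hadoop cluster assumed). The point is
# the gap between what a programmer writes at the MapReduce level and what
# a SQL-style layer like Hive exposes.

from collections import defaultdict

def mapper(line):
    """Emit (word, 1) pairs -- what a Hadoop Streaming mapper would print."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Sum the counts for a single key -- the reducer side of the job."""
    return word, sum(counts)

def run_job(lines):
    """Simulate the shuffle/sort phase Hadoop would normally perform."""
    grouped = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            grouped[word].append(count)
    return dict(reducer(w, c) for w, c in grouped.items())

if __name__ == "__main__":
    corpus = ["big data big tooling", "data without tooling"]
    print(run_job(corpus))  # {'big': 2, 'data': 2, 'tooling': 2, 'without': 1}

# The same question asked one rung up the ladder, in Hive's SQL dialect:
#   SELECT word, COUNT(*) FROM words GROUP BY word;
# Each step up is friendlier, but none yet reaches the ordinary business user.
```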

Which is perhaps unsurprising; even the mature relational database world uses abstractions of varying levels of complexity to interface with business users. But with data driven decision making on the rise, a premium is being placed on tooling that can expose data in sensible fashion to those without degrees in computer science. Hence the elevated visibility of startups such as Metamarkets, which excites data scientists with tools like Druid but whose valuation may ultimately depend on its last mile expertise.

At this point, whatever my preferred model for data storage and whatever the data type, there is more than one credible option for a data engine. The same cannot be said for presentation. This would be less problematic if the market for Big Data talent were not so desperate; outsourcing to shops like Mu Sigma will be an option in some quarters, but it comes with its own inefficiencies and risks, not to mention per inquiry premiums.

This, then, will be an area of focus in 2012, for both innovation (look for assisted anomaly and correlation identification, a la Google Correlate) and M&A.
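For the curious, here is a minimal, product-agnostic sketch of what assisted correlation identification amounts to in practice: given a target metric, rank candidate series by Pearson correlation so a non-specialist can see which signals move together. The data and metric names below are invented.

```python
# Rank candidate series by strength of correlation with a target series,
# so that likely drivers surface without a data scientist in the loop.
# All data here is invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_correlates(target, candidates):
    """Return candidate names sorted by strength of correlation with target."""
    scored = [(name, pearson(target, series)) for name, series in candidates.items()]
    return sorted(scored, key=lambda item: abs(item[1]), reverse=True)

if __name__ == "__main__":
    weekly_signups = [10, 12, 15, 14, 20, 24]             # hypothetical target
    candidates = {
        "blog_visits":   [100, 120, 150, 140, 210, 235],  # tracks signups closely
        "support_calls": [30, 29, 31, 30, 28, 30],         # essentially flat
    }
    for name, score in rank_correlates(weekly_signups, candidates):
        print(f"{name}: r = {score:.2f}")
```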

Desktop Importance Declines

The most interesting characteristic of the forthcoming Windows 8 release isn’t the technology, which is curious because it’s revolutionary from a Microsoft standpoint. From the support for ARM to the addition of the Windows Store to the ability to author in JavaScript and HTML5, there is much to digest. Instead, the single most defining characteristic of the pending launch is apathy.

Overall inquiries and discussion of the platform demonstrate curiosity but limited interest; the visibility of the once dominant Windows platform is secondary to mobile platforms like Android and iOS.

While this is not a function of any specific or general design failures on the part of Microsoft – indeed, the platform is incorporating important changes while making itself more accessible to developers – it is symptomatic of a broader problem that is more difficult to attack: the declining role of the desktop.

The desktop is simply not as important as it once was. Mobile usage is eroding the central role PCs once played; while they remain the dominant form of computing, the trendline is declining and there is no reason to expect it to invert. It’s been suggested that mobile computing in general is additive; that it is extending computing into areas where PCs were never employed, and is thus non-competitive. But our data, as well as Asymco’s, indicates that, at least in part, mobile usage is coming at the expense of traditional platforms. General search volume data, as we’ve seen, validates this assertion.

There are two implications here. Most obviously, Microsoft’s ability to generate interest in and thus leverage for its flagship operating system is jeopardized. Worldwide developer populations are not strictly zero sum, as skills overlap, but they tend to be rivalrous; an Android or iOS developer is often a lost potential Windows developer, experiments like BlueStacks aside. We can therefore expect Microsoft to have to expend more effort to attract fewer developers to its platform, a cycle that compounds on itself. Second, as the desktop’s primacy abates, we can expect to see greater competition in the marketplace. As enterprises become more heterogeneous by necessity, incorporating Android and iOS devices, the cost of supporting a second operating system drifts towards marginal, which means that forecasts of greater Apple penetration become more probable.

Developer Shortages

It’s become axiomatic that industry hiring is all high demand and short supply, and none of our clients expect any relief in the year ahead. Nor will they receive it. Shortages of in-demand skillsets will continue over the next twelve months, advantaging entities that are either geographically positioned to leverage markets less competitive than the Valley or logistically able to incorporate remote hires.

That said, we will in 2012 see the first steps towards a more rational market, through a combination of cultural shift and educational model innovation that will increase supply. Regarding the former, it’s no secret that technology has had a profound impact on the erosion of middle class jobs. In Race Against the Machine, MIT Professors Andrew McAfee and Erik Brynjolfsson document the role that rapid innovation has had on jobs:

Digital technologies change rapidly, but organizations and skills aren’t keeping pace. As a result, millions of people are being left behind. Their income and jobs are being destroyed, leaving them worse off in absolute purchasing power than before the digital revolution.

Even skilled industries are not immune. From John Markoff’s New York Times piece, “Armies of Expensive Lawyers, Replaced by Cheaper Software”:

“From a legal staffing viewpoint, it means that a lot of people who used to be allocated to conduct document review are no longer able to be billed out,” said Bill Herr, who as a lawyer at a major chemical company used to muster auditoriums of lawyers to read documents for weeks on end. “People get bored, people get headaches. Computers don’t.”

While Brynjolfsson and McAfee are ultimately optimistic about the prospects of technical progress as they relate to employment, the outcome is far from certain.

What is becoming clear, however, is that unemployment rates that have been north of 8% in the US since February of 2009 are driving people into industries that are desperate for help. For some, this means oil & gas employment in traditionally underpopulated environments like North Dakota. For others, however, technology – long an enemy – is becoming a refuge.

We’re seeing a spike in inquiries about transitioning to technology careers. Lawyers, management consultants, teachers and others are seeking – and often finding – homes for themselves within the technology sector. Some are self-taught or trained on the job, others merely apply existing skills in new contexts, but both represent a potential cultural shift. Which raises the question: could technology be the next major middle class employment sector?

For that to happen, the education system needs to improve, because even an industry which has been one of the few economic bright spots of the last decade can only absorb so many unskilled workers without slowing. This is the real significance of applications like Code Academy or programs like Harvard’s free CSCI E-52, MITx or Stanford Engineering Everywhere: they are one potential solution to the perpetual shortage of talent. For all of the limitations of distance learning, the scale means that some subset of motivated students will become productive developers, and by extension, contributors to the larger economy.

This is a long term process, so obvious progress within 2012 will be minimal, and talent shortages will continue. But we will in the next twelve months begin to see distance trained students hired at scale, and this will be one of the first steps towards lower talent costs as well as, possibly, the restoration of middle class employment opportunities.

Monitoring as a Service

We are not oriented around category definitions at RedMonk; we prefer market-driven names to those conceived and marketed by the analyst industry. That said, it seems clear that the time of Monitoring-as-a-Service (MaaS) is at hand. New Relic’s growth led to a $15M round in November, Boundary took $4M a year ago this month, Monktoberfest speaker Theo Schlossnagle’s Circonus has been in market for over a year, and virtually every vendor that we speak with today is adding monitoring and management facilities, from 10gen’s MMS to Cloudera’s Cloudera Manager.

The proliferation of these services is a direct response to the increasingly heterogeneous nature of application architecture and the reality that the substrate is frequently network based rather than local. Given accelerating rather than declining consumption of network resources, we predict a strong increase in interest in and adoption of MaaS tools, much as I don’t care for the term itself.
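For illustration, a minimal sketch of what the agent side of such a hosted monitoring service looks like: sample a few local metrics and ship them to a collector over the network. The endpoint, API key and payload shape below are hypothetical, not any particular vendor’s API.

```python
# Sketch of a MaaS agent: sample host metrics locally, then POST them to a
# hosted collector. The endpoint, key and payload shape are hypothetical.

import json
import os
import time
import urllib.request

COLLECTOR_URL = "https://collector.example.com/v1/metrics"  # hypothetical endpoint
API_KEY = os.environ.get("MAAS_API_KEY", "demo-key")        # hypothetical credential

def sample_metrics():
    """Gather a small set of host metrics; load average as a stand-in."""
    load1, load5, load15 = os.getloadavg()
    return {
        "host": os.uname().nodename,
        "timestamp": int(time.time()),
        "metrics": {"load.1m": load1, "load.5m": load5, "load.15m": load15},
    }

def ship(payload):
    """POST one batch of metrics to the hosted collector."""
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

if __name__ == "__main__":
    # In a real agent, ship() would run on a timer; here we just show the payload.
    print(json.dumps(sample_metrics(), indent=2))
```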

Intelligent usage of generated telemetry – which we’ll come back to – will further cement adoption, delivering previously unseen value.

Open Source and the Paradox of Choice

Gartner in March of last year asserted that open source had hit a tipping point, saying:

“Mainstream adopters of IT solutions across a widening array of market segments are rapidly gaining confidence in the use of open source software.”

We concur, although we would argue that the tipping point actually occurred ten years or more prior. The Apache web server and MySQL were originally written in 1995. In 1999, we saw the public offering of Red Hat and the creation by IBM – as mainstream a technology brand as there is in the enterprise – of the Linux Technology Center. Firefox was first released in 2003. None of these reached their relative levels of popularity in the past twelve months; they have instead been the de facto infrastructure for the better part of the last decade.

Regardless of when one asserts that open source crossed the chasm, however, it remains a model whose popularity is increasing over time. As understanding of the benefits increases and concerns about the risks abate, more organizations are not only consuming open source but contributing to it. Evidence suggests, in fact, that perceptions of the value of software are in decline – we’ll come back to that too – and that the end result is that more proprietary code is being released as open source software.

Widely perceived as a net benefit, however, the influx of new projects does present problems for would-be adopters. Specifically, the paradox of choice implies that developers will increasingly be forced to select from a growing sea of projects which may or may not be suitable for their needs. And while the nature of open source guarantees developers the ability to apply this code to their projects without restriction or commercial engagement, this is a process with a limited ability to scale. Consider the NoSQL space as an example. Presuming for the sake of argument that the developers in question understand the different categories of database – key-value stores, document databases, columnar databases, MapReduce engines, graph databases and so on – well enough to understand their high level needs, there are at least two and sometimes as many as half a dozen credible options to consider.
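To illustrate why the category question precedes the product question, here is the same hypothetical user record shaped for a key-value store and for a document store; no particular product’s API is assumed.

```python
# The same user record modeled two ways. In a key-value store the application
# owns all structure and lookups are by exact key; in a document store the
# record is self-describing and can be queried by its internal structure.

import json

# Key-value model: opaque values behind composite keys.
kv_store = {
    "user:42:name": "Ada Lovelace",
    "user:42:email": "ada@example.com",
    "user:42:orders": json.dumps([1001, 1002]),
}

# Document model: one self-describing record.
document = {
    "_id": 42,
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "orders": [
        {"id": 1001, "total": 19.99},
        {"id": 1002, "total": 5.00},
    ],
}

if __name__ == "__main__":
    # Key-value: the application reassembles the record key by key.
    print(kv_store["user:42:name"], json.loads(kv_store["user:42:orders"]))
    # Document: the record arrives whole, queryable by structure.
    print([o["id"] for o in document["orders"] if o["total"] > 10])
```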

This paradox of choice, or too much of a good thing, will become more problematic over time rather than less, as contributions continue to rise. The net impact is likely to be increased commercial opportunities around selection, and therefore increased attention paid to vendors like Black Duck, Open Logic, Palamida and Sonatype.

PaaS: The New Standard

It has been evident for some time that runtime fragmentation – an aggressive diversification of programming languages and frameworks, specifically – will change the development landscape. The market failure of the first generation of PaaS providers, in fact, was primarily a function of their over-prescriptive natures. The benefits of outsourcing management and scale were outweighed by the constraints; Java shops were never likely to rewrite their application stack in Python or Ruby strictly to benefit from a platform. Which is why virtually every relevant PaaS provider today offers a choice of runtimes, so as to maximize its addressable market.

But in a fragmented world, what might emerge as a standard? From a developer’s perspective, the standard is most often the framework they’re deploying to, whether that’s Django, Node.js, Lift, Play, Rails, Spring, the Zend Framework or another. From a vendor perspective, however, the new standard is likely to be one level of abstraction up from individual language frameworks: the platform itself. Certainly this is VMware’s opinion, as they are, in Maritz’s words, trying to construct “the 21st-century equivalent of Linux” – i.e. the substrate that everything else is built on top of.

In 2012, this will become more apparent. PaaS platforms will emerge as the new standard from a runtime and deployment perspective, the middleware target for a new generation of application architectures.

Service Proliferation

With the inevitable adoption of multiple third party services – varying cloud resources; multiple, possibly overlapping, management and monitoring services; and so on – will come challenges in making sense of the whole. Instrumentation and visibility at the level of the individual service are improving, but aggregating these views into a cohesive picture of overall architectural health and performance is likely to be highly problematic, not least because the services themselves may present conflicting information and data. Google Analytics and New Relic, for example, are frequently at odds over load times and other delivery related performance metrics. Introduce into that mix services like Boundary or CloudWatch and the picture becomes that much more complex. Connecting their data back to underlying log management and monitoring solutions such as 10gen’s MMS or Splunk is more complicated still.

The challenges of service integration will create commercial opportunities for aggregating services which consume individual performance streams, normalize them, and present customers with a single, consolidated picture of their network performance. Commercial solutions will not fully deliver on this vision in 2012, but we will see progress and announcements in this direction.
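As a sketch of the normalization step such an aggregating service would perform, consider reconciling readings that describe the same measurement but arrive with different field names and units; the service names and payloads below are invented.

```python
# Normalize per-service readings of the same measurement into one canonical
# metric, then consolidate them while keeping the disagreement visible.
# All service names and payloads are invented for illustration.

from statistics import median

# Hypothetical per-service readings for one page's load time.
raw_readings = [
    {"source": "analytics",  "metric": "avg_page_load",  "value": 2.4,  "unit": "s"},
    {"source": "apm",        "metric": "page_load_time", "value": 2900, "unit": "ms"},
    {"source": "edge_probe", "metric": "ttlb",           "value": 2.6,  "unit": "s"},
]

UNIT_TO_MS = {"ms": 1, "s": 1000}

def normalize(reading):
    """Map each source's vocabulary onto a single canonical metric in ms."""
    return {
        "source": reading["source"],
        "metric": "page_load_ms",
        "value": reading["value"] * UNIT_TO_MS[reading["unit"]],
    }

def consolidate(readings):
    """Produce one picture from conflicting sources, preserving the spread."""
    values = [normalize(r)["value"] for r in readings]
    return {
        "metric": "page_load_ms",
        "median": median(values),
        "min": min(values),
        "max": max(values),
        "sources": [r["source"] for r in readings],
    }

if __name__ == "__main__":
    print(consolidate(raw_readings))
```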

Telemetry Usage

Five years ago, we began publicly discussing revenue models based around what we termed telemetry, or product generated datastreams. The context was providing open source commercial vendors with a viable economic model that better aligned customer and vendor needs, but the approach is by no means limited to that category: Software-as-a-Service vendors, as an example, are well positioned to leverage the data because they maintain the infrastructure. In 2011, we finally began seeing vendors besides Spiceworks take the first steps towards incorporating data-based revenue models. For products like Sonatype Insight [coverage], data is not a byproduct, but the product.

In 2012, this trend will accelerate as the necessary monitoring capabilities are added to product portfolios and industry understanding and acceptance of the model overcomes conservative privacy concerns. Many more vendors will begin to realize that, like New Relic (which observed a decline in commercial application server usage), their accumulated data is full of insights into both customer behaviors and wider market trends.

Value of Software Will Continue to Decline

Capital markets have not, traditionally, been overly fond of software firms, perhaps because comparatively few of them eclipse the billion dollar annual revenue mark – fewer than twenty, by Forbes‘ count. Microsoft’s share price has languished for over a decade in spite of having not one but two licenses to print money. The mean age of PwC’s Top 20 software firms by revenue is 47 years, a fact which cannot be encouraging to startups.

Higher valuations instead are being awarded to entities that employ software to some end, rather than attempting to realize revenue from it directly. Startups today realize this, and the value of software in their models has been adjusted downward commensurately. Tom Preston-Werner, for example, describes the GitHub philosophy as “open source (almost) everything.” Facebook, LinkedIn, Rackspace, Twitter and others exhibit a similar lack of protectiveness regarding their software assets, all having open sourced core components of their software infrastructure that even five years ago would have been fiercely guarded.

This is becoming the expectation rather than the exception because it is nothing more or less than an intelligent business strategy. Businesses can and will keep private assets they believe represent competitive differentiation, but it will be increasingly apparent that less and less software is actually differentiating. As a result, 2012 will see even less emphasis on the value of software and more on what the software can be used to achieve.

Bonus: Facebook’s Most Important Feature

In 2012, it will be Timeline. Mark it down.

Disclosure: Black Duck, Cloudera, GitHub, IBM, Microsoft, Sonatype and VMware are RedMonk customers, while 10gen, Boundary, Circonus, Facebook, Open Logic, Palamida, and New Relic are not.