tecosystems

What’s in Store for 2013: A Few Predictions


Having completed the review of last year’s predictions, it is now time to look ahead. Every year in January I go through the exercise of forecasting developments for the upcoming months, basing my predictions on an alchemic mix of facts, figures, rumor and speculation. Over the years I have been doing this, I have had a reasonable level of success. In 2010 I was 67% correct; in 2011 that number jumped to 82%. This past year, I was back down to 70%.

The success rate this year, of course, is unknown. If it were otherwise, they wouldn’t be predictions. But as is the case every year, I have confidence in all of these predictions, if not in the actual timing of their completion. You may, of course, assign whatever weight you deem appropriate based on both the track record and substance of the predictions from years prior.

In any event, here is what I expect to see play out in 2013.

Apple Will Be Rumored to Be Looking at Yahoo

In a widely circulated post from November, Patrick Gibson, an engineer at Tilde, wrote about the idea that “Google is getting better at design faster than Apple is getting better at web services.” This was the basis for a proposed acquisition of Twitter, one that GigaOm’s Mathew Ingram later seconded.

Setting aside the proposed Twitter transaction, this echoed a conversation that Alex King and I had a month earlier at the Monktoberfest, when we went through the rhetorical exercise of asking whether it is easier to improve at usability and design or at delivering services at scale. Alex’s point was that Apple is generally considered peerless at the design component, while there are multiple players with an internet-scale competency.

But while I believe that to be correct, Google’s Android design seems to be improving steadily, while Apple’s issues (iCloud, Maps, MobileMe, Ping, etc.) with delivering internet-scale services have become regular enough to raise questions about its ability to remedy them organically. Hence the speculation about potential acquisitions like Twitter that might infuse the company with the DNA necessary to compete effectively with Google, Microsoft and others in the services arena.

While many names will continue to circulate, the prediction here is that one name that will emerge in 2013 will be Yahoo. Long regarded within the industry as a company adrift, one that excelled at alienating and hemorrhaging its best technical talent, Yahoo seems to have been stabilized somewhat by the 2012 hire of Marissa Mayer. Any potential turnaround is likely years away, but Mayer seems to have at least stopped the bleeding. And with some recent successes like the Flickr iPhone app, the company may be righting the ship.

That said, it’s unclear what Yahoo’s ultimate raison d’être is. Google has an effective monopoly on search and other important consumer properties like Gmail and YouTube, Facebook and Twitter collectively own social, and competition within media – where Yahoo has notable strengths – is fierce. What if Yahoo were able to bring its experience and services platform to Apple, however? The combination would be unwieldy in certain areas and would doubtless require spinoffs and divestments, but Yahoo would improve Apple’s services delivery capability while adding more diverse resources than Twitter would by itself.

An actual acquisition is unlikely to occur for any number of reasons, among them Apple’s blind spot with respect to services and the size of the transaction: Yahoo is currently worth more than Dell and nearly as much as Salesforce.com. Which isn’t to say that Apple couldn’t finance the deal – there are virtually no acquisitions out of its reach. But given Apple’s history of small, targeted M&A, an acquisition of that size would almost certainly come with more than a few questions.

But that won’t stop the rumors from circulating at some point, particularly if Yahoo continues to rebound under Mayer’s tenure. And to the Stephen O’Grady of January 2014, no, this piece does not count as a rumor.

The Biggest Innovation in Smartphones Will Be Pricing

Apple’s ability to extract outsized profits in a variety of sectors, but most notably mobile, is legendary. In Q2 of 2011, as one example, Horace Dediu argued that Apple collected two-thirds of the available profits in the mobile phone sector.

Whether one attributes this to their ability to design sought-after products, the brilliance of their supply chain and the economies of scale it permits, or some combination of both is ultimately unimportant. Most companies have to actively choose between volume- and margin-oriented business models; Apple is one of the few that can operate both simultaneously.

Would-be competitors such as Amazon (Kindle), Google (Android), Microsoft (Windows Phone) and others have been forced to innovate furiously in an attempt not only to keep pace, but to differentiate themselves. In certain cases they’ve even been successful.

But as much as strategies for competing with Apple like Google’s iOS application end-around matter on a tactical level, the strategic play may have nothing to do with features or technology. Instead, the real battle may be fought around pricing.

Amazon and Google are clearly trying to compete on price, the former with the Kindle Fire and the latter with the Nexus 7, both of which are more than $100 cheaper than the lowest-cost iPad. Google in particular seems to have learned from the Xoom experiment that iPad prices attached to non-iPad devices are problematic; it will be interesting to see if Microsoft is forced to recalibrate downwards from its premium pricing position.

But potentially the far more interesting pricing battle to come is in smartphones. In January of 2010, Google released the Nexus One for sale directly to consumers, bypassing the traditional carrier route to market. The cost to purchase it directly, unsubsidized, was $529. While Android was more or less credible at the time, it was less competitive with iOS than it is today, and the high price tag was alarming to consumers acclimated to spending $200-$300 on a device. In July of the same year, Google closed its online store, and the model – selling direct to consumers, without carrier assistance – was widely declared a failure.

In November of 2012, however, Google released another Nexus device – the Nexus 4 – again direct to consumers. This time, however, the price for the base device was $299: at the upper bound of what mainstream consumers pay for phones, but equivalent to the subsidized cost of a high-end smartphone from a carrier – and with no carrier restrictions and no carrier software. The response has been such that both models – the 8 GB and the 16 GB versions – are currently sold out.

While most reviews of the Nexus 4 center on its feature list – or lack thereof, in the case of LTE – its ultimate importance may well be the pricing model. Much as Amazon Web Services completely reset the market’s expectation of what hardware should cost, or open source changed realizable price thresholds for proprietary vendors, Google’s $300 Nexus 4 has the opportunity to actually hurt Apple by changing the market dynamics.

On Apple.com right now, the cheapest unsubsidized iPhone is $649.00. Even if one concedes that Apple’s design and polish are worth paying a premium for, the question is whether they’re worth twice as much as an Android device with comparable specifications. Currently, Apple is relying upon carriers to make it price competitive by presenting customers with consumer-affordable price points from $199 to $399.
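To make the pricing dynamic concrete, here is a minimal back-of-the-envelope sketch of two-year cost of ownership. The device prices are those cited above; the monthly plan figures are hypothetical assumptions, since actual carrier pricing varies widely.

    # Back-of-the-envelope two-year cost comparison. Device prices are from
    # the discussion above; both monthly plan costs are assumed placeholders.
    MONTHS = 24

    # Subsidized iPhone: low sticker price, but tied to a contract plan.
    subsidized_device = 199
    contract_plan = 90        # assumed monthly cost of a typical contract plan

    # Unsubsidized Nexus 4: higher sticker price, but free to pair with
    # cheaper off-contract service.
    unsubsidized_device = 299
    prepaid_plan = 45         # assumed monthly cost of an off-contract plan

    iphone_total = subsidized_device + contract_plan * MONTHS   # $2,359
    nexus_total = unsubsidized_device + prepaid_plan * MONTHS   # $1,379

    print(f"Subsidized iPhone over two years:  ${iphone_total}")
    print(f"Unsubsidized Nexus over two years: ${nexus_total}")

Under those assumptions, the subsidy doesn’t make the iPhone cheaper; it simply relocates the cost into the plan.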

But if Google and its partners can continue to make high end devices available at a price point half that of what Apple is charging, something has to give. And that may well be Apple’s margins. Rumors of Apple’s low-cost iPhone indicate that they are not only aware of this pricing umbrella, but poised to eliminate it.

If I’m correct, the cost of smartphones will come down substantially in the next twelve months.

Collaboration Innovation and M&A Will Spike

Rapportive was not the first collaborative software add-on startup to be acquired (LinkedIn bought it in February 2012), but it will certainly not be the last. With more collaboration software users operating in SaaS environments, the opportunities expand horizontally for developers with bright ideas to target the space. Whether startups like 410 Labs (Mailstrom), Baydin (Boomerang) and so on get acquired or merely have their features replicated by the larger platforms they plug into is unclear, but like Rapportive before them, the feature sets they’ve produced independently are too valuable not to be incorporated back into the products they complement.

We will see that happen, one way or another, in 2013.

Data Moats Will Become a Stated Goal

According to Wired, it took Apple less than two years to build and demonstrate the first version of the iPhone. In its first attempt at building a phone, Apple at once delivered a device that was better than anything else on the market and one that for the first time put the real internet, rather than some poor mobile-optimized subset, in a user’s pocket.

While it’s not known precisely how long Apple worked on its Maps application, it seems reasonable that its ambitions date back at least to its first acquisition in the mapping space (Placebase, 2009). The day Maps launched, it was hailed as a welcome update in the software design arena and an unmitigated disaster with respect to the corpus of data behind it. Bad as the reviews of the application were – and they were poor enough to compel Tim Cook to apologize for it – the worse news for Apple was that it was not an immediately fixable problem.

Depending on who you believed, it would take anywhere from “quite some time” to “400 years” to rehabilitate Apple’s Maps database. Even if we regard those projected timeframes as sensationalistic hyperbole, it’s clear that data is something different from software: ground lost in this space cannot be made up quickly, regardless of the resources involved. Unless you can acquire the missing data from a third party, it’s virtually impossible to accelerate the process of data collection and processing.

That a company as large and powerful as Apple can not only fail in a data-based business but be unable to quickly remedy the error has not been lost on some in the industry. Intelligent businesses, and in some cases the venture capitalists that fund them, are beginning to recognize that data is in many cases the best barrier to entry there is.

What if MySQL, and subsequently Sun and Oracle, had been collecting active telemetry from running instances of the database – say, number of nodes, database size, number of tables, query construction metrics, and so on – for several years, for even a small fraction of the total userbase? What would that kind of dataset be worth? Far more than the codebase, in all likelihood, because while there are many databases to choose from, there are comparatively few datasets of information about how tens of thousands of databases are run.
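As a sketch of what that kind of collection might look like – strictly hypothetical, since no such telemetry service existed – the metrics in question are all obtainable from a running MySQL instance with a handful of queries. The reporting endpoint and payload shape below are invented for illustration.

    # Hypothetical sketch: gather basic telemetry from a running MySQL
    # instance and ship it to a (fictional) central collection endpoint.
    import json
    import urllib.request

    import pymysql  # assumes the PyMySQL driver is installed

    def collect_telemetry(host, user, password):
        conn = pymysql.connect(host=host, user=user, password=password)
        with conn.cursor() as cur:
            # Table count and total size across all user schemas.
            cur.execute("""
                SELECT COUNT(*), COALESCE(SUM(data_length + index_length), 0)
                FROM information_schema.tables
                WHERE table_schema NOT IN
                    ('mysql', 'information_schema', 'performance_schema')
            """)
            table_count, total_bytes = cur.fetchone()
            # Cumulative SELECT count since server start, as a crude
            # proxy for query workload.
            cur.execute("SHOW GLOBAL STATUS LIKE 'Com_select'")
            selects = int(cur.fetchone()[1])
        conn.close()
        return {"tables": table_count, "bytes": int(total_bytes),
                "selects": selects}

    def report(metrics):
        # Fictional endpoint; the value lies in the aggregate, not in any
        # single instance's numbers.
        req = urllib.request.Request(
            "https://telemetry.example.com/v1/report",
            data=json.dumps(metrics).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

Any single report is mundane; millions of them, collected over years, would be the moat.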

As mentioned in the review of last year’s predictions, many businesses are beginning to comprehend the opportunity – and more importantly, the threat – of data aggregation and collection. What we’ll see in 2013 is an increased understanding of data moats, and a more widespread utilization of them as points of differentiation.

Google’s Compute Engine Will Emerge as the Most Important Amazon Challenger

By dint of its first-to-market status as well as its relentless pace of innovation, Amazon has a Secretariat-sized lead in the market for cloud services. Competitors large and small are racing to catch up, fearful of ceding more ground in the all-important cloud market, but few question Amazon’s overall dominance.

As Benjamin Black says, however, GCE is a Big Deal. Just as AWS was in 2006.

Amazon is everyone’s target, of course, as the market leader. But many would-be cloud providers are competing with AWS only in the nominal sense that they’re also a cloud provider. Providers like HP and IBM, for example, are typically oriented towards a different buyer type than Amazon.

Google, however, is aimed squarely at Amazon. Its product strategy and, more importantly, its pricing telegraph its intent.

Consider the rough product equivalencies:

  • Amazon offers compute services via EC2, Google’s alternative is GCE
  • Amazon offers storage via S3, Google’s alternative is Cloud Storage
  • Amazon offers hosted MySQL via RDS, Google’s alternative is Cloud SQL
  • Amazon offers non-relational hosting via DynamoDB or SimpleDB, Google’s alternative is BigQuery
  • Amazon offers an application container in Elastic Beanstalk, Google’s alternative is GAE

Google cannot duplicate the full Amazon portfolio, obviously. There are no Google equivalents at present of Amazon services like CloudSearch, SQS, or FPS. But Google is approximating the core services at the heart of AWS. And Google’s product pricing is quite obviously targeting Amazon’s.

When we last looked at IaaS pricing, Google’s strategy with respect to Amazon was apparent. Like Microsoft, Google was taking its pricing cues from Amazon. Unlike Microsoft, it was willing to undercut the market leader.

Since then, the vendors have stepped up the conflict with a public price war: Google cut prices 20%, Amazon cut theirs 24-27%, so Google cut by another 10%.
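Worth noting: successive cuts compound rather than add. A quick worked example, assuming a common starting price for each vendor:

    # Successive price cuts compound: a 20% cut followed by a 10% cut
    # leaves 0.8 * 0.9 = 72% of the original price, a 28% total reduction.
    google_price = 1.0 * (1 - 0.20) * (1 - 0.10)  # 0.72 of original
    amazon_price = 1.0 * (1 - 0.255)              # midpoint of 24-27% range

    print(f"Google: {1 - google_price:.0%} off")  # 28% off
    print(f"Amazon: {1 - amazon_price:.0%} off")  # 26% off

Relative to its own starting price, in other words, Google’s two cuts together leave it slightly more aggressive than Amazon’s single one.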

While there are many would-be cloud providers in the market, Google is different. The company has the advantage of having run infrastructure at massive scale for over a decade: the search vendor is Intel’s fifth-largest customer. It also has deep expertise in relevant software arenas: it has run MySQL for years, the company was built upon customized versions of Linux, and it is indirectly responsible for the invention of Hadoop (via the MapReduce and GFS papers).

In 2013, then, Google will emerge as Amazon’s most formidable competitor.

The Focus of Online Education Innovation Will Be Less on Learning Than Certification

Interest in and subscription to Massive Open Online Courses (MOOCs) has, to date, not been a problem. As documented previously, Stanford’s Artificial Intelligence course alone enrolled 160,000 students from 190 countries. Coursera, for its part, claimed 1 million students from 196 countries in August of last year. Khan Academy, meanwhile, says it serves six million students – per month. Even adjusting those numbers downward, it’s clear that there’s demand.

It’s equally clear that the course content, in many cases, is excellent: on par with in-class instruction, and in some respects a superior experience (the ability to back up and rewatch sections, for example).

What remains undetermined is how employers feel about online educational opportunities. Tactically, they are clear wins because they allow employees to increase their value at a fraction of the cost of traditional classroom-based courses. Strategically, however, they present two major challenges.

First, and most obviously, there are questions regarding the rigor of the educational experience. Without classroom supervision, and at massive scale, how can educators ensure that students are actually paying attention to the coursework? Traditionally, this is accomplished via testing, but distance education poses challenges here as well. How can employers be sure that those claiming to have completed online courses were actually the ones taking tests?

Second, and potentially more problematic, most online learning startups haven’t focused on one of the more important secondary purposes of educational institutions, which is to allow employers to outsource the screening of job applicants. Let’s say, for example, that you’re an employer looking to fill five positions. And let’s say that you receive one hundred applications from graduates of Harvard’s Extension School or online course offerings (edX), and ten from holders of Harvard diplomas. Even if we assume for the sake of argument that the online education is exactly equivalent to the on-campus experience, and even if we further assume that you could be sure the online applicants completed all of the coursework themselves without issue, there’s a problem of scale.

It’s faster to interview ten people than it is one hundred, and if you know that Harvard accepted 6.2% of its applicants in 2011, you can be reasonably sure those candidates have been vetted more carefully than you’ll have time for in your hiring. Which isn’t a fair system, obviously, but it’s the practical reality from an employer’s standpoint.

In 2013, then, we’ll see online educators using innovation to tackle the first problem. The second problem, however, will remain, and may in fact be an innate characteristic of online education.

Every Business Will Throw Money at Data Teams

As LinkedIn’s Pete Skomoroch suggests, one of the most common reactions in the wake of a US election largely viewed to have been heavily influenced by the effective use of data – and to have been called perfectly by an analyst using data – was: “we need people like that.”

The difficulty, of course, is that the scarcity of the skillset in question goes up in direct proportion to its national visibility. Organizations that are understaffed – or simply unstaffed – for data teams now will find that spinning them up in 2013 is a challenge. Even more so than traditional developer skillsets, so-called data science skillsets are in high demand.

Where they can’t hire the right people, businesses will attempt to close the gap using software. Software is a poor substitute for people, of course, but desperate organizations will descend on BI providers large (Business Objects, Cognos, SAS) and small(er) (Tableau), asking them to turn ordinary business analysts into “data scientists.” Service alternatives like Mu Sigma will also benefit.

In other contexts, enterprises will eschew efficiency and consume infrastructure resources (primarily via public cloud offerings like Redshift) at alarming rates, believing that the solution for poor algorithms is more, bigger data.

The outcomes from these collective efforts will be mixed, but spending will continue unabated. With the world having been revealed as the province of the Big Data winners, businesses will ratchet up already heightened spending on data fields to unprecedented levels.

It’ll be a good year to be on a data team, in other words.

Explicit Services Will Be Advantaged Over Implicit Services

If you’re an Android user on Jelly Bean or later, this list of 70 voice commands is amazing. If you ask Android “How hot is it going to be on Sunday?,” or “Where’s the closest bowling alley?,” or “What was the score of the Red Sox game last night?,” or say “Listen to Pearl Jam,” or “Wake me up at 7 am tomorrow,” it will (generally) correctly process your voice input, map it to system actions and do what you’re asking it to do. Who knew?

And that is precisely the problem. I’ve looked at this list at least once a month since it was published in August, and the only voice command I use regularly is to set the alarm. Most Apple users seem to have the same reaction to Siri; it’s interesting, and magical when it works, but the really interesting features are submerged and implicit. And therefore unused.

Google Now, however, is the inverse of this. Compared to Android’s voice search or iOS’s Siri, a user has to know only how to activate Google Now, and nothing else. By combing your search history, inbox and so on and leveraging other data from sources like Maps, Google Now will automatically:

  • Tell you what time you have to leave for your meeting
  • Track your packages
  • Update you on your flight status
  • Report on your monthly walking/biking totals
  • Give you weather forecasts for home, your present location and any other cities in your itinerary
  • Point out the nearest public transit stop

And so on. This is pretty easily Android’s best new feature, and is clearly the shape of things to come. In 2013, then, we’ll see Google continue to make Google Now more useful, but also see competitors like Apple or Microsoft attempt to surface some of the latent features of their platform in a more explicit fashion. Voice search, Siri and competitors may be enormously capable, but as long as vendors are depending on users to discover those hidden features, they will go generally unused.
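To illustrate the distinction this prediction turns on, here is a toy sketch – emphatically not any vendor’s actual architecture – of an explicit, command-driven service versus an implicit one that volunteers results from ambient context.

    # Toy contrast between explicit and implicit services. An explicit
    # service waits for a command the user must already know exists; an
    # implicit one watches context and volunteers the result unprompted.
    from datetime import datetime, timedelta

    # Explicit: a registry of voice commands. Powerful, but useful only
    # if the user discovers that "track my package" is a thing to say.
    COMMANDS = {
        "set alarm": lambda ctx: f"Alarm set for {ctx['alarm_time']}",
        "track my package": lambda ctx: f"Package is in {ctx['package_city']}",
    }

    def explicit_service(utterance, context):
        handler = COMMANDS.get(utterance)
        return handler(context) if handler else "Sorry, I didn't get that."

    # Implicit: rules over ambient data (inbox, calendar, location) that
    # surface cards without being asked.
    def implicit_service(context):
        cards = []
        if context.get("package_city"):
            cards.append(f"Your package is in {context['package_city']}")
        meeting = context.get("next_meeting")
        if meeting and meeting - datetime.now() < timedelta(hours=1):
            cards.append("Leave now to make your next meeting")
        return cards

    ctx = {"alarm_time": "7:00", "package_city": "Memphis",
           "next_meeting": datetime.now() + timedelta(minutes=30)}
    print(explicit_service("track my package", ctx))  # works, if discovered
    print(implicit_service(ctx))                      # volunteered, unprompted

The feature set on each side can be identical; the difference is whether discovery is the user’s problem or the platform’s.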

Telemetry Based Models Will Be Democratized

In November, 37signals’ Noah Lorang wrote about how the Chicago-based SaaS provider had been using telemetry to improve its offerings. While the terabyte of log data the company generates each day from Rails, Nginx, HAProxy and so on couldn’t rightly be classified as a Big Data problem on the order of a Facebook or Google, it was nevertheless enough data to be an obstacle. As Lorang described the challenges of working with their telemetry:

None of this is an insurmountable problem, and it’s all pretty typical of “medium” data – enough data you have to think about the best way to manage and analyze it, but not “big” data like Facebook or Google…The challenges of this medium amount of data are, however, enough that I occasionally wish for a better solution for extracting information from logs or a way to enable more interaction with large amounts of data in our internal analytics application.

And then Cloudera released Impala. Suddenly, queries that took two and a half or three minutes to run on MySQL came back in two or three seconds; their rough benchmarks indicated an average improvement of 96%, in fact. Overnight, at no up-front cost, 37signals had the same ability to explore and mine its generated telemetry that it would have had were it sorting small spreadsheets in Excel.
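As one illustration of what that kind of interactivity looks like in practice, here is a minimal sketch using the impyla Python client against an Impala daemon; the host, table and column names are hypothetical.

    # Interactive query over a day of request logs via Impala. The
    # connection details and schema below are invented for illustration.
    from impala.dbapi import connect

    conn = connect(host="impalad.internal", port=21050)  # assumed host
    cur = conn.cursor()

    # The sort of aggregation that took minutes on MySQL: the slowest
    # endpoints across a day of HAProxy/Nginx request logs.
    cur.execute("""
        SELECT path, COUNT(*) AS hits, AVG(response_ms) AS avg_ms
        FROM request_logs
        WHERE log_date = '2013-01-07'
        GROUP BY path
        ORDER BY avg_ms DESC
        LIMIT 20
    """)
    for path, hits, avg_ms in cur.fetchall():
        print(f"{path}: {hits} hits, {avg_ms:.1f} ms avg")

At two or three seconds per round trip, a query like this becomes something an analyst iterates on interactively rather than schedules and waits for.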

As many have commented, 2013 is going to be a big year for real-time querying of medium to large datasets. Predicting that tools like Impala and Storm will be popular this year is like predicting that the sun will rise in the east. What’s more interesting is how such tools will be leveraged.

The bet here is that the 37signals experience will be predictive, and that many shops will begin collecting and mining their generated telemetry in search of performance gains, feature improvements or even additional value-added services for customers. There’s not much choice in the matter: if competitors will be using their telemetry, businesses will be forced to use theirs to keep up. And the open source nature of these tools, which democratizes access to them, will enable this.

The Most Important Cloud Question Will Not Be Whether to Use It, But How Much Am I Already Using?

At a conference in 2012, an executive at a systems vendor sheepishly admitted to using AWS to complete an internal project. The shadow-IT-style effort was typical of cloud projects: fast-moving, but of only transient importance. At the end of the project, the engineers’ expense report included a $40,000 bill for Amazon’s services. The amount didn’t happen to be a problem in this case, as the project was of strategic importance; the surprise at the total, rather, was the issue.

And this was for services that were actually leveraged. In many cases, businesses are paying for instances that are highly under-utilized. One study of 250,000 AWS instances indicated that the utilization rates by instance type were 16.7% (Small), 11.9% (Medium), 12.8% (Large), 3.9% (Huge). Imagine facing an enormous bill for infrastructure you hadn’t even used.
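A rough illustration of what those utilization rates imply for spend, with hypothetical hourly prices standing in for actual instance costs:

    # At ~12% average utilization, most of every instance dollar buys
    # idle capacity. Utilization figures are from the study cited above;
    # the hourly rates are hypothetical placeholders.
    utilization = {"small": 0.167, "medium": 0.119,
                   "large": 0.128, "huge": 0.039}
    hourly_cost = {"small": 0.06, "medium": 0.12,
                   "large": 0.24, "huge": 0.48}

    MONTHLY_HOURS = 730
    for size, util in utilization.items():
        spend = hourly_cost[size] * MONTHLY_HOURS
        idle = spend * (1 - util)
        print(f"{size:>6}: ${spend:6.2f}/mo, ${idle:6.2f} idle ({1 - util:.0%})")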

For those enterprises that have yet to experience this cloud sticker shock on some level, it’s just a matter of time. As the Jevons paradox states, as technology increases the efficiency of resource consumption, the rate of consumption of that resource rises in response. In simpler terms, the easier cloud vendors make it to consume cloud resources, the more cloud resources will be consumed.

And while the cloud model generally and price competition specifically have acted to make cloud resources more and more affordable, at scale even small costs add up. As executives everywhere are discovering.

It should be no surprise, then, that a cottage industry of startups (Cloudability, Newvem, PlanForCloud (RightScale), etc.) has emerged to provide not only visibility into total costs, but proactive advice on utilization rates and optimization functionality.

While much speculation will still center on whether enterprises are or should be using public cloud resources, then, intelligent organizations will acknowledge that, like it or not, they will be using the public cloud in some form and seek the ability to measure that usage carefully.

3 comments

  1. I love what you’re proposing, but I also want to point out that Apple’s products have always cost ~2x what other computers cost. I think Apple is more comfortable being the expensive option than we give them credit for. 

  2. Interesting article. Big Data is already ruling the roost in the start-up and incubating field. Series funding is largely going towards firms with Big Data aspirations. 

  3. It’s great that you have set out to predict things, but I would agree with Ryan that Apple has an interesting way of selling less than Android and still making a lot more for its shareholders.
