RedMonk

The Depth of Amazon’s Ambition

Not surprisingly for an organization that has updated its product line 200 times this year as of the first of the month, Amazon had a few tricks up its sleeve for its annual re:Invent conference. For the company that effectively created the cloud market, the show was an important one for showcasing the sheer scope of Amazon’s targets.

Amazon is correctly regarded as one of the fastest innovating vendors in the world, with the release pace up over 500% from 2008 through last year. And if Amazon keeps up its pace for releases through the end of the year, it will have released 36% more features this year than last.
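The year-end projection in that last sentence is straight extrapolation. A minimal sketch of the arithmetic, assuming the 200 releases year-to-date cover roughly ten elapsed months as of re:Invent (the 200 figure is from the post; the ten-month window is an assumption based on the conference's November timing):

```python
# Linear extrapolation of a year-to-date release count to a full year.
# Only the 200-release figure comes from the post; the ten-month
# elapsed period is an assumption.
def project_annual(releases_ytd, months_elapsed):
    """Project a full-year total from a partial-year count."""
    return releases_ytd / months_elapsed * 12

projected = project_annual(200, 10)  # 240.0 releases at the current pace
```

On those assumptions, pairing a 240-release year with the 36% growth figure would imply roughly 176 releases the prior year.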

But as impressive as the pace is, the more impressive – and potentially more important – aspect to their release schedule is its breadth. Consider what Amazon announced at re:Invent:

  • AppStream (Mobile/Gaming)
  • CloudTrail (Compliance and Governance)
  • Kinesis (Streaming)
  • New Instance Types in C3/I2 (Performance compute)
  • RDS Postgres (Database as a Service)
  • Workspaces (VDI)

The majority of cloud vendors today are focused on executing with core cloud workloads, or basic compute and storage. There are certainly players focused on adding value through differentiated, specialized technologies such as Joyent with its distributed-Unix data-oriented Manta offering or ProfitBricks with its scale up approach, but these are the exception rather than the rule. Whether it’s public cloud providers or enterprises attempting to build out private cloud abilities, most of the focus is on simply keeping the lights on.

At re:Invent, Amazon did upgrade its traditional compute offerings via C3/I2, but also signaled its intent to embrace and extend entirely new markets. Most obviously, Amazon has, with Workspaces, turned its eye towards VDI, for years a market long on promise but short on traction. The theoretical benefits of VDI, from manageability to security, have to date rarely outweighed the limitations and costs of delivery, making it the Linux desktop of IT – with success always just over the horizon. Amazon’s bet here is that by removing the complexity of execution it can engage with customers in a manner that its core cloud businesses cannot, and thereby grow its addressable market in the process.

Similarly, Kinesis is an entry into a specialized market that has typically been the province either of vendor packages – e.g. IBM InfoSphere Streams – or more recent open source project combinations such as Storm/Kafka. Of specific interest with Kinesis is the degree to which Amazon is leading the market here rather than responding to it. When questioned on the topic, Amazon said that Kinesis was unlike other Amazon offerings such as Workspaces that were a response to widespread customer demand. Instead, Amazon is anticipating future market needs with Kinesis, and attempting to deliver ahead of same.

AppStream, for its part, is effectively a Mobile/Gaming-backend-as-a-service, putting providers in that space on notice. The addition of Postgres as an RDS option, meanwhile, came to wide developer acclaim, but means that Amazon will increasingly be competing with AWS customers like Heroku. And CloudTrail, particularly with its partner list, means that AWS is taking the enterprise market seriously, which is both opportunity and threat for its enterprise ecosystem partners.

Big picture, re:Invent was an expansion of ambition from Amazon. Its sights are even broader than was realized heading into the show, which should give the industry pause. It has been difficult enough to compete with AWS on a rate of innovation basis in core cloud markets; with its widening portfolio of services, the task ahead of would-be competitors large and small just got more difficult.

That being said, however, it is worth questioning the sustainability of Amazon’s approach over the longer term. Microsoft similarly had ambitions not just to participate in but fundamentally dominate and own peripheral or adjacent markets, and arguably that near infinite scope impacted their focus in their core competencies. The broader and more diverse the business, the more difficult it becomes to manage effectively – not least because you end up making more enemies along the way. It remains to be seen whether or not Amazon’s increasing appetite to cloudify all the things has a similar effect on its ability to execute moving forward, but in the interim customers have a brand new stable of toys to play with.

Disclosure: Amazon, Heroku, and IBM are RedMonk customers; Joyent, Microsoft and ProfitBricks are not.

Categories: Cloud, Conferences & Shows.

A Look at Public Offerings from 1980-2012

A year ago, a CTO who had landed a large public round and secured a quarter as much in a less public investment candidly described the process saying, “this used to be called going public.” MongoDB, the recent beneficiary of a $150M round led by Intel, Salesforce.com and Sequoia, would likely agree. As might Uber, which received $250M in financing from Google Ventures. Going public is clearly no longer the sole route to market for outsized capital requirements.

Which isn’t to imply that venture deal sizes are, on average, increasing. Thanks to a combination of factors from the rise of early stage investment vehicles like Y Combinator to open source software and the public cloud, data gathered by Chris Tacy (below) indicates that if we conflate angel and traditional venture investments, deal volume is up but the size of individual deals is actually in decline.

But at the opposite end of the spectrum, anecdotal evidence suggests that private funding is increasingly competing with public markets in ways not seen previously. The question is whether the data validates the assumption that private companies are being funded on a scale historically competitive with public market returns, and what this means for the wider market moving forward.

To explore the first question, it’s useful to examine data (PDF) on US Initial Public Offerings from 1980-2012 collected by Professor Jay R. Ritter of the University of Florida. In his own words, the sample includes “IPOs with an offer price of at least $5.00, excluding ADRs, unit offers, closed-end funds, REITs, partnerships, small best efforts offers, banks and S&Ls, and stocks not listed on CRSP (CRSP includes Amex, NYSE, and NASDAQ stocks).” For example, here is the total number of IPOs per year beginning in 1980.

It should be no surprise to most that public offerings spiked in the late 1990s. The Tulipmania hysteria that absorbed the technology industry – and eventually, the world – during the bubble has been well documented. What’s interesting about this chart, however, is that it indicates that the market has yet to recover from the tech-driven crash in public offering volumes. The median number of IPOs per year from 1980 to 2012 is 174. We have not seen that many in a given year since 2004. The recent recession, of course, undoubtedly depressed the appetite for entities to take themselves public. But even in years of relative prosperity, domestically, IPOs seem to have lost some of their luster.

One potential explanation would be the returns. Below is a chart of the aggregate proceeds from all IPOs in a given year as calculated by Ritter. To normalize them for context, however, all numbers have been adjusted for inflation. Dollar amounts depicted, therefore, represent an approximated value in 2013 US dollars.
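Mechanically, the adjustment described above is a CPI ratio. A sketch, using rounded annual CPI-U averages purely for illustration (Ritter's actual series and deflator are not reproduced here):

```python
# Convert nominal proceeds to approximate 2013 dollars by scaling with
# the ratio of the 2013 CPI to the CPI of the offering year. The CPI
# values below are rounded annual CPI-U averages, for illustration only.
CPI = {1999: 166.6, 2004: 188.9, 2013: 233.0}

def to_2013_dollars(nominal, year, cpi=CPI):
    """Inflate a nominal dollar figure from `year` into 2013 dollars."""
    return nominal * cpi[2013] / cpi[year]

# $60B of nominal 1999 proceeds is on the order of $84B in 2013 dollars.
adjusted = to_2013_dollars(60e9, 1999)
```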

While the trendlines don’t match precisely, it’s interesting and perhaps not surprising to note the strong correlation between the returns from public offerings and their frequency. It is also worth noting that while proceeds have recovered more strongly than volume, the aggregate returns from public offerings remain depressed. From 1980 to 2012, the median return in 2013 dollars for the aggregate of a year’s worth of public offerings is $28.5B – a figure that hasn’t been reached in four of the last six years. An analysis of the average individual returns, however, challenges the hypothesis that the lack of an expected return is preventing would-be IPOs from transacting.

The above chart depicts the aggregate returns for a given year divided by the number of IPOs – providing us with, essentially, an average IPO return. Even after normalizing against a 2013 dollar scale, it’s apparent that the realizable returns per transaction are still growing (if you’re curious about the 2008 outlier, that’s the year VISA went public and raised ~$17B). Which in turn should mean that the incentive to go public remains, and certainly entities from Google (2004) to Facebook (2012) to Twitter have chosen that path in spite of the availability of capital in private markets.

Still, it is interesting to observe that deals like MongoDB’s $150M round dwarf the expected returns from historical IPOs, even after adjusting for inflation. For example, from 1980 to 1997 the average adjusted return from a public offering never eclipsed $100M. Since then it has dramatically expanded, with the median adjusted return since 1997 weighing in at approximately $253M – roughly $100M more than MongoDB raised in its last round.

If more companies are either delaying going public or avoiding the public markets entirely, one would expect to see a rise in venture backing among the companies that eventually do go public. While the costs of starting and running businesses have in many respects come down due to dramatic drops in the costs of technical infrastructure among other categories, those savings have been offset by spikes in other areas, notably healthcare. Which means that whether public or private, growing companies are likely to still require financing to fuel growth. And indeed, we find exactly this sort of trajectory in venture backed companies.

The above chart depicts the percentage of IPOs from technology entities that were backed by venture capital. While the overall percentage has always been high, the trendline is clearly towards greater VC participation. Which makes sense in the wake of a decade of decreased reliance on the public markets.

As for what all of this means moving forward, the answers are unclear. In the aggregate, the private market is obviously not lacking for available capital. Just as clearly, decline in volume or no, the returns remain there for public market entrants – or at least some of them. But as the number of large venture deals that approximate the anticipated returns from a public offering appears to be on the rise, it’s worth monitoring the dynamic between public and private funding sources. In the meantime, we’re likely to continue seeing the kinds of deals that “used to mean going public.”

Categories: Venture Capital.

The 2013 Monktoberfest

Monktoberfest 2013
(All photos courtesy Maney Digital)

In a 2001 piece for the New York Times, the now sadly departed Elmore Leonard summed up his tenth and final rule on how to write simply: “Try to leave out the part that readers tend to skip.” Without claiming any particular success, this is essentially the philosophy behind the Monktoberfest. In effect, it’s an attempt to answer the question: what would happen if we threw a conference without the parts that people skip?

Consider sponsored talks, for example. While it is not technically impossible to deliver a sponsored talk that engages an audience, the overlap between great talks and paid talks is tiny. Most end up as little more than infomercials. So we lose them. Then there’s timing. For a conference aimed at and built for developers, who tend to not be the early rising type, why would we start the conference at the more typical 8 AM? 10 AM is much more civilized. And what do people most frequently want to skip at a conference? Meals delivered by a staff whose focus is scaling the food, not crafting the food. Many fewer people, on the other hand, skip a sushi lunch or a dinner that includes lobsters caught by the caterer’s husband the afternoon before.

While this is a bit of a different approach for conferences, the logic behind it seems straightforward. In my experience, the quality of any given conference will ultimately be determined not by the food, drink or even the speakers – as important as they are. The value of a conference is determined instead by its people. Why, then, would we optimize for anything but the people?

Monktoberfest 2013

Whether we succeeded will be determined in the weeks and months ahead, as the impact of the individual talks ripples outwards, we see the manifestations on social media and elsewhere of new connections made at the show and so on. But the early returns are gratifying.

The last quote from Mike is perhaps the most important to me personally. People who have never attended the Monktoberfest will ask me what it’s all about, and my answer is that it’s about the intersection of social and technology. It’s about how technology changes the way that we socialize, and how the way that we socialize changes the way that we build technology. But within that broad framework, speakers have a great deal of latitude to interpret the constraints in interesting ways. In doing so, as Mike says, they make me think about why I think what I think. They make me think about what I’m doing, why I’m doing it, and how I can help. They inspire me, and I seriously doubt that I’m the only one. They are, in short, the kinds of talks that don’t necessarily have a home at other shows.

Thanks

As with most large productions, the Monktoberfest is a group effort, and as such, there are many people to thank.

  • Our Sponsors: Without them, there is no Monktoberfest
    • IBM MobileFirst: In an industry littered with the carcasses of businesses that couldn’t adapt to change, IBM is one of the few major technology companies in existence that has survived not one but multiple waves of disruption. The driving force behind most disruption today is the developer – nowhere is this more apparent than in mobile – and we appreciate IBM’s strong support as our lead sponsor in helping to bring them the conference they deserve.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the third year in a row they’ve sponsored the conference, if I’m not mistaken – helps us make the show possible.
    • ServiceRocket: When we post the session videos online in a few weeks, it is ServiceRocket that you will have to thank.
    • EMC: Enjoyed your surf & turf dinner? Take a minute to thank the good folks from EMC.
    • Rackspace/Splunk: It’s much easier to splurge on fresh sushi when you have partners like Rackspace and Splunk helping to make it possible.
    • Basho: When you came in a little under the weather on Thursday and treated yourself to a breakfast sandwich, that was Basho’s doing.
    • Atlassian/AWS/Brick Alloy/Citrix/CloudSpokes/Docker/Moovweb/Opscode/Rackspace: Remember the rare beers served at the event – one of which included the only barrel available in the US? These are the people that brought it to you. And be sure to thank Atlassian especially, as they brought you four separate rounds.
    • Brick Alloy/Crowd Favorite: While we continue to search for a reasonable solution to the difficult challenges posed by a hundred plus bandwidth-hungry geeks carrying three or more devices per person, Brick Alloy and Crowd Favorite at least deferred the load onto local repeaters.
    • Rackspace: The glasses this year came courtesy of Rackspace, as our attendees will be reminded every time they drink a craft beverage from one.
    • Moovweb: Moovweb, meanwhile, addressed the afternoon munchies.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they are one of the best craft beer teams not just in this country, but the world. And they’re even better people, having spent the better part of the last few months sourcing exceptionally hard to find beers for us. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret on behalf of Stillwater Ales down in Baltimore, MD to be with us.
  • Lurie Palino: Lurie and her catering crew have done an amazing job for us every year, but this year was the most challenging yet due to some unfortunate and unnecessary licensing demands presented days before the event. As she does every year, however, she was able to roll with the punches and deliver on an amazing event yet again. With no small assist from her husband, who caught the lobsters, and her incredibly hard working crew at Seacoast Catering.
  • Kate (AKA My Wife): Besides spending virtually all of her non-existent free time over the past few months coordinating caterers, venues and overseeing all of the conference logistics, Kate was responsible for all of the good ideas you’ve enjoyed, whether it was the masseuses last year or the cruise this year. She also puts up with the toll the conference takes on me and my free time. I cannot thank her enough.
  • The Staff: From Juliane and James securing and managing all of our sponsors to Marcia handling all of the back end logistics to Kim, Ryan and the rest of the team handling the chaos that is the event itself, we’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: I’d like to thank Jim Conroy of The Alchemist, Josh Wolf of Allagash, Greg Norton of Bier Cellar, Mike Fava and Tim Adams of Oxbow, and Brian Strumke of Stillwater for taking time out of their busy schedules to be with us. The Alchemist and Allagash, in addition, were kind enough to provide giveaways to our attendees and speakers, respectively.
  • Mike Maney: If he’s not the most enthusiastic Monktoberfest attendee, I’m not sure who would be. Last year he embarked on an epic 7 state road trip to the conference, and this year he sourced three bottles of Dogfish hand signed by none other than the founder of the brewery, Sam Calagione. These we were able to give away to attendees thanks to Mike’s efforts.
  • Caroline McCarthy & Mike McClean of Abbey Cat Brewing: At the conclusion of our brewer’s panel featuring the Alchemist, Allagash, Bier Cellar, Oxbow and Stillwater, our panelists were each issued a customized Monktoberfest mash paddle. This came courtesy of a connection from Monktoberfest speaker Caroline McCarthy, who introduced me to Mike McClean, who graciously furnished us with the paddles gratis. Abbey Cat Brewing, in Mike’s words, makes “mash paddles, with the help of a sweatshop staffed entirely by foster kittens.” What he failed to add is that they are gorgeous creations. And before you ask, yes, we have pictures of the paddles with kittens.

With that, we close this year’s Monktoberfest. For everyone who was a part of it, I owe you my sincere thanks. You make all the blood, sweat, and tears worth it. Stay tuned for details about next year, and in the meantime, you might be interested in Thingmonk or the Monki Gras, RedMonk’s other two conferences.

Categories: Conferences & Shows.

Are PaaS and Configuration Management on a Collision Course and Four Other PaaS Questions

The following was meant to be ready in time for the Platform conference last week, but travel. While it’s belated, however, the following may be of interest to those tracking the PaaS market. At RedMonk, the volume of inquiries related directly and indirectly to PaaS has been growing rapidly, and these are a few of the more common questions that we’re fielding.

Q: Is PaaS growing?
A: The short answer is, by most measurements – search traffic included – yes.

The longer answer is that while interest in PaaS is growing, its lack of visibility on a production basis is adding fuel to those who remain skeptical of the potential for the market. Because PaaS was over-run in the early days by IaaS, there are many in the industry who continue to argue that PaaS is at best a niche market, and at worst a dead end.

To make this argument, however, one must address two important objections. First, the early failures in the PaaS space were failures of execution, not of model. Single, proprietary runtime platforms are less likely to be adopted than open, multi-runtime alternatives for reasons that should be obvious. But perhaps more importantly, those arguing that the lack of production visibility for PaaS today means that it lacks a future must explain why this is true, given that history does not support this point. Quite the contrary, in fact: dozens of technologies once dismissed as “non-production” or “not for serious workloads” are today in production, running serious workloads. The most important factor for most technologies isn’t where they are today, but rather what their trajectory is.

Q: How convenient is PaaS, really?
A: That depends on one’s definition of convenience. It is absolutely true that PaaS simplifies or eliminates entirely many of the traditional challenges in deploying, managing and scaling applications. And given that developers are typically more interested in the creation of applications than the challenges of managing them day to day, these abilities should not be undersold.

That said, PaaS advocates are frequently unaware of the friction relative to traditional IaaS alternatives. Terminology, for example, is a frequent source of confusion: the language of infrastructure-as-a-service, which is essentially a virtual representation of physical alternatives, is simple. Servers are instantiated, run applications and databases, have access to a storage substrate and so on. Would-be adopters of PaaS platforms, however, must reorient themselves to a world of dynos, cartridges and gears. Even the metrics are different; rather than being billed by instance, they may be billed by memory or transactions – some of which can be difficult to predict reliably.
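To make the metering mismatch concrete, here is a sketch comparing a bill computed per instance-hour with one computed per GB-hour of memory; every rate and figure below is invented for the example:

```python
HOURS_PER_MONTH = 730  # a common billing approximation

def iaas_monthly_cost(instances, rate_per_instance_hour):
    """IaaS-style billing: you pay per instance, busy or idle."""
    return instances * rate_per_instance_hour * HOURS_PER_MONTH

def paas_monthly_cost(memory_gb, rate_per_gb_hour):
    """One PaaS-style metric: you pay for the memory the app consumes."""
    return memory_gb * rate_per_gb_hour * HOURS_PER_MONTH

# Two small instances vs. an app averaging 3.5 GB of resident memory.
iaas_bill = iaas_monthly_cost(2, 0.10)    # ~$146/month
paas_bill = paas_monthly_cost(3.5, 0.05)  # ~$128/month
```

The point is less the totals than the inputs: an instance count is something an operations team controls directly, while memory consumption or transaction volume moves with the application and its traffic, which is what makes the bill harder to predict.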

Is PaaS more convenient, then? Over the longer term, yes, it will abstract a great deal of complexity away from the application development process. In the short term, however, there are trade offs. It’s akin to someone who speaks your language, but with a heavy accent or in a different dialect. It’s possible to discern meaning, but it can require effort.

Q: What’s the biggest issue for PaaS platforms at present?
A: While the containerization of an application is far from a solved problem – some applications will run with no issues, while others will break instantly – it is relatively mature next to the state of database integrations. Most PaaS providers at present have distanced themselves from the database, for reasons that are easy to understand: database issues associated with multi-tenant, containerized and highly scalable applications are many. But it does present problems for users. PaaS platform database pricing has typically reflected this complexity, with application charges forming a fraction of the loaded application cost next to data persistence. And many platforms, in fact, have openly advocated that the data tier be hosted on entirely separate, external platforms, which spells high latency as applications are forced to call to remote datacenters even for simple tasks like rendering a page. Expect enhanced database functionality and integration to be a focus and differentiation point for PaaS platforms in the future. This is why several vendors in the space have invested heavily in relationships with communities like PostgreSQL and MongoDB.
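The latency penalty of an externally hosted data tier is easy to approximate with back-of-the-envelope numbers; the round-trip times and query count below are assumptions, not measurements:

```python
def render_db_latency_ms(queries, rtt_ms):
    """Total time spent on serialized database round trips per page render."""
    return queries * rtt_ms

# Ten serialized queries per page render, under assumed round-trip times:
in_datacenter = render_db_latency_ms(10, 1)   # 10 ms with a local database
cross_region  = render_db_latency_ms(10, 40)  # 400 ms to an external host
```

Even modest per-query round trips add up quickly when they serialize, which is why pushing the data tier into a separate provider's datacenter is so costly for chatty applications.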

Q: Where do the boundaries to PaaS end and the layers above and below it begin?
A: This is one of the most interesting, and perhaps controversial, questions facing the market today. In many respects, PaaS is well defined and quite distinct from other market categories; the previously mentioned lack of database integration, for example. But in others, the boundaries between PaaS and complementary technologies is substantially less clear. Given the PaaS space’s ambition to abstract away the basic mechanics of application and deployment, for example, it seems logical to question the intersection and potential overlap of PaaS and configuration management/orchestration/provisioning software such as Ansible, Chef, Puppet, or Saltstack. PaaS users, after all, are inherently bought into abstraction and automation; will they be content to manage the underlying physical and logical infrastructures using a separate layer? Or would they prefer that be a feature of the platform they choose to encapsulate their applications with?

If we assume for the sake of argument that, at least on some level, traditional configuration management/provisioning will become a feature of PaaS platforms, the next logical question is: what does this mean both for PaaS platform providers and configuration management/orchestration/provisioning players? Should the latter be aggressively pursuing partnership strategies? Should the former rely upon one or more of these projects or attempt to replicate the feature themselves?

From the conversations we’re having, these are the important strategic questions providers are asking themselves right now.

Q: What’s the market potential?
A: We do not do market sizing at RedMonk, believing that it is by and large a guess built on a foundation of other guesses. That said, it’s interesting that so many are relegating PaaS to niche-market status. Forget the fact that even those companies serving conservative buyers such as IBM have chosen to be involved. Consider instead the role that PaaS was built to play. Much as the J2EE application servers abstracted Java applications from the operating systems and hardware layers underneath them, so too does PaaS. It is the new middleware.

Given the size of the Java middleware market at its peak, this is a promising comparison for PaaS. Because while it is true that the commercial value of software broadly has declined since traditional middleware’s apex, PaaS offers something that the application servers never did: multi-runtime support. Where middleware players then were typically restricted to just those workloads running in Java, which was admittedly a high percentage at the time, there are few if any workloads that multi-runtime PaaS platforms will be unable to target. Which makes its addressable market size very large indeed.

Disclosure: IBM and Pivotal (Cloud Foundry) are clients, as are Red Hat (OpenShift), MongoDB and Salesforce/Heroku. In addition, Ansible, Opscode, Puppet Labs are or have been clients.

Categories: Cloud, Devops, Platform-as-a-Service.

The Moto X Bet

If you haven’t been following the saga of the Moto X, the short version is that it’s one of the first post-Google acquisition products from the company that gave us the StarTAC and the RAZR. Besides carrying the expectations of a market that needs to see something compelling from Motorola because it’s been a while, the X is also the focal point for analysts seeking an answer to one simple question: why did Google pay over twelve billion dollars for the company? Given the input the folks from Mountain View have had into the product, it’s been assumed that the Moto X would, if not answer that question outright, at least provide a hint.

If that’s the case, however, the answer for many seems to be: because they made a mistake. While the Moto X has seen its share of excellent reviews – see Gizmodo’s “Moto X Hands On: Forget Specs, This Thing Is Awesome” or The Verge, which gave it an 8 out of 10 – negative reactions have been common. It may not be a surprise to see John Gruber dismiss the product, but pieces like BGR’s “Motorola in Dreamland” or TechCrunch’s “Hell no Moto X” are representative of the industry’s disappointment.

While I have yet to get my hands on one of the devices, my bet is some of the gadget reviewers are simply missing the bigger picture. Which is, at least in part, that this phone isn’t built for them.

Consider the various complaints about the device. The disappointing processor? Yes, the benchmarks confirm the Moto X is based on a chip that’s slower than the equivalent in the HTC One or Samsung Galaxy S4. So? How many consumers, realistically, are aware of the chipset in their phone? The only time they’ll notice the processor is if the phone feels slow; none of the hands-on reviews I’ve seen yet make this claim.

How about the “gross” AMOLED Screen? Well, if a reviewer like The Verge’s Joshua Topolsky has to study the Moto X and a higher resolution screen such as the HTC One “side by side to make out the difference,” it seems unlikely the average consumer will have a problem with the display. Particularly given that the pixel density is just shy of iPhone-Retina.

Most of the negative reactions are focusing, in other words, on the phone’s underwhelming technical benchmarks. Which is interesting, because the history of this market, brief though it may be, does not suggest that the market rewards the most sophisticated handset. The iPhone, you might recall, did not add the 3G connectivity common to competitive handsets until Apple could ensure acceptable battery life. The HTC Evo, by contrast, was a marvel of engineering with an enormous screen and every connectivity option known to man – from HDMI to WiMAX. It also had a battery life of about an hour.

What Apple understands, and what the Moto X may reflect, is that technology is less important than experience. All things being equal, faster processors and brighter, higher resolution screens are preferable. But until we see significant advances in battery technology – the kinds of advances that are perpetually two to three years away – all things are not going to be equal.

So while those critiquing the Moto X for its pedestrian processor and so on focus on the components that won’t be found in the coming iFixit teardown, my guess is that the average user will be more impressed by a full day’s worth of usage – which is very different from a full day’s worth of talk time (who uses their phone as a phone these days anyway?) – than by a faster phone with a brighter screen. Just as they once picked EDGE capable iPhones over 3G competition.

Maybe the Made-in-the-USA factor will emerge as a selling point as well, and the ability to customize the phone’s appearance probably will, but at the end of the day the performance of the Moto X will depend on how well Google and Motorola have learned from Apple. Apple has never been about the underlying technology, and much to the consternation of tech reviewers everywhere, the Moto X doesn’t appear to be either.

Whether that pays off will be interesting to see.

Disclosure: There’s nothing to disclose. Google is not a client, and I do not have a Moto X, review unit or otherwise.

Categories: Mobile.

The Top 55 in Tech: Market Findings and Other Items of Interest

One of the things we track internally, for the sake of contextual curiosity more than anything, is the market performance of firms that can be at least loosely described as technology oriented. While it’s foolish to assign any serious import to rankings based on the vagaries of market performance, it is nevertheless interesting to understand how the market values – or does not value – various entities, particularly in relation to one another. Besides simple market metrics such as market capitalization, it’s also useful to be aware of the wider context: when was a firm founded? How does it generate revenue? From these patterns, and particularly from watching them over time, it is possible to get a sense for how the technology landscape is evolving, and from there understand what adaptations may be necessary moving forward.

The list of the 55 largest public technologies entities that we’re tracking at present, ordered by current market cap, is available here. Please note, however, that no claims are made that this list is definitive. The most notable omission is carriers, and their omission looks increasingly problematic as they push further into cloud and network related services. It’s likely they’ll be added in future iterations.

If there are other public entities you believe to be missing, then, by all means let us know in the comments and we’ll review them and amend the list as necessary.

With the aforementioned caveats that the list is not definitive and that market perceptions do not necessarily match company merit, a few notable takeaways from a quick examination of the list.

  1. Age: The median age of the Top 55 tech companies is 28 years, meaning that a representative entity would have been founded in 1985. In one sense this is less than surprising, given that larger companies have long leveraged startups as a means of outsourced innovation. Rather than enter higher risk emerging markets themselves – markets they are not built to attack in any event – they can sit back and attempt to acquire the successful innovators, considering the M&A premium their cost of innovation. Still, it’s interesting that the shape of the technology market, often considered one of the fastest moving industries in the world, is in part defined by the decisions of companies that might have been founded the year that New Coke debuted and Back to the Future was released.
  2. Revenue Source: Of the 55 companies tracked, 21 derive their revenue primarily from the sale of software while 34 do not. In other words, the average Top 55 technology firm is around one and a half times more likely to generate the bulk of its revenue from something other than software. Interestingly, the Top 25 members of the list are even less likely to rely on software as their primary revenue source; 7 are primarily software oriented while 18 are not. Make no mistake: software is absolutely eating the world, as Marc Andreessen has said. Every company on this list relies on software for its business. But the data indicates that more companies are making money with software rather than from software, which is something of a departure from a decade ago, when Microsoft’s dominance encouraged many to replicate the software revenue model.
  3. Annual Performance: As a measure of market performance over time, here are the companies in descending order of market cap generated per year of existence: 1) Google ($20B/year), 2) Apple ($11B), 3) Facebook ($10B), 4) Amazon ($8B), 5) Microsoft ($7B), 6) Cisco ($5B), 7) Oracle ($4B), 8) Qualcomm ($4B), 9) Baidu ($4B), 10) Taiwan Semiconductor ($3B). Given that this measurement to some degree advantages younger firms, the presence of entities like Apple, Microsoft and Oracle is impressive.
  4. The Arena: A few relative valuations that may be of interest. Google is currently worth almost ten Yahoos. Dell ($22.4B) is currently worth less than LinkedIn ($23.1B). ARM Holdings is worth less than you might expect, given its importance; Nokia is currently worth ~$2B more. Red Hat, the world’s only pure play open source company, is meanwhile worth almost as much as Teradata ($9.9B to $10.2B) and more than Electronic Arts, F5 and Rackspace. And while Qualcomm tends to be something of a behind-the-scenes player, its Top 10 performance has it more valuable than VMware, Yahoo and Salesforce.com combined.
  5. Industry Size: The combined worth of these entities is $2.8T.
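The per-year metric in the list above is simple division: market capitalization over years since founding. Here is a minimal sketch of that arithmetic; the capitalization figures (in $B) and the 2013 vantage point are rounded approximations consistent with the figures above, not precise market data.

```python
# Market cap generated per year of existence: capitalization divided by
# company age. Caps are rounded approximations in $B for illustration.

companies = {
    "Google": (300, 1998),
    "Apple": (410, 1976),
    "Facebook": (90, 2004),
}

def cap_per_year(cap_billions, founded, as_of=2013):
    """Market cap generated per year since founding, in $B/year."""
    return cap_billions / (as_of - founded)

# Sort companies by the metric, highest first.
ranked = sorted(companies, key=lambda c: cap_per_year(*companies[c]), reverse=True)
for name in ranked:
    print(name, round(cap_per_year(*companies[name])))
```

Run against these rounded inputs, the ordering matches the list above: Google (~$20B/year), then Apple (~$11B), then Facebook (~$10B).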

Again, it’s important not to read too much into the above, particularly with respect to market valuations which are volatile by nature. As a snapshot of a given point in time, however, it’s useful to understand how companies and their strategies are perceived more broadly, and what this means for them moving forward.

Categories: Market.

What IBM Joining the Cloud Foundry Project Means

When the OpenStack project was launched in 2010, IBM was one of many vendors in the industry offered the opportunity to participate. And though OpenStack launched with a nearly unprecedented list of supporters, IBM was not among them. Though they lacked a public commitment to any existing open source cloud platform – they had their own service offering in SmartCloud – they declined to join the project.

Until they did two years later.

In 2012, IBM joined along with Red Hat, another industry player that had passed on the initial opportunity to get on the OpenStack train. The original decision and the subsequent about face may seem contradictory, but it is nothing more or less than the inevitable consequence of how IBM approaches emerging markets.

For many customers, particularly risk averse large enterprises and governments, one of IBM’s primary assets is trust. IBM is in many respects the logical reflection of its customers, who are disinclined – for better and for worse – to reinvent themselves technically as each new wave of technology breaks, as each new “game changing” technology arrives. Instead, IBM adopts a wait and see approach. It was nine years after the Linux kernel was released that IBM determined that the project’s momentum, not to mention the potential strategic impact, made it a worthwhile bet. At which point they promised to inject $1 billion into the ecosystem, a figure that represented a little over 1% of the company’s revenue and fully a fifth of its R&D expenditures that year.

Which is not to compare IBM’s commitment last week to Cloud Foundry to its investment in Linux, in either dollars or significance. As much as one-time head of VMware now-head of Pivotal Paul Maritz is seeking to make Cloud Foundry “the 21st-century equivalent of Linux,” even the project’s advocates would be likely to admit there’s a long way to go before such comparisons can be made.

The point is rather that when evaluating the significance of IBM’s decision to publicly back Cloud Foundry, it’s helpful to put their decision making in context. Decisions of this magnitude cannot be made lightly, because IBM cannot return to enterprise customers who have built on top of Cloud Foundry at their recommendation in two years with a mea culpa and a new platform recommendation.

IBM’s support for the Cloud Foundry project signals their belief that the PaaS market will be strategic. Given the aforementioned context, it also means that after an extended period of evaluation, IBM has decided that Cloud Foundry represents the best bet in terms of technology, license and community moving forward. These are the facts, as they say, and they are not in dispute. The primary question to be asked around this announcement, in fact, is less about Cloud Foundry and IBM – we now know how they feel about one another – and more to do with what it portends for the PaaS market more broadly.

A great many in the industry, remember, have written off Platform-as-a-Service for one reason or another. For some VCs, it’s the lack of return from various PaaS-related investments; for the odd reporter here or there, it’s the lack of traction for early PaaS players like Force.com or Google App Engine relative to IaaS generally and Amazon specifically. And for developers, it’s frequently the question of whether yet another layer of abstraction needs to be added to virtual machine, IaaS fabric, operating system, runtime / server, programming language framework and so on. The developer’s primary complaint used to be the constraints – runtime choice, database options and so on – but these have largely subsided in the wake of what we term third generation PaaS platforms: platforms that offer multiple runtimes and other choices, such as Cloud Foundry, OpenShift and so on.

But while it’s difficult to predict the future of PaaS, particularly the rate of uptake – certainly it hasn’t gone mainstream as quickly as anticipated here – the history of the industry may offer some guidance. For as long as we’ve had compute resources, additional layers of abstraction have been added to them. Generally speaking this has been for reasons of accessibility and convenience; it’s easier to code in Ruby, as but one example, than Assembler. But some abstractions, middleware in particular, have long served business needs by offering greater portability between application environments. True, the compatibility was never perfect, and write-once-run-anywhere claims tested the patience of anyone who actually tried them.

Greater layers of abstraction, nevertheless, appear inevitable, at least from a historical perspective. Few would debate that C is a substantially more performant language than JavaScript. Regardless of this advantage, accessibility, convenience and other factors such as Moore’s Law have conspired to advantage the more abstract, interpreted language over the closer-to-the-metal C as demonstrated in this data from Ohloh.

Will PaaS benefit from the long term industry trend towards greater levels of abstraction? Having corrected many of the early mistakes that led to premature dismissals of PaaS, it’s certainly possible. Oddly, however, many of the would-be players in the space remain reluctant to make the obvious comparison, that PaaS is the new middleware. Rather than attempt to boil the ocean by educating and evangelizing the entire set of capabilities PaaS can offer, it would seem that the simplest route to market for vendors would be to articulate PaaS as an application container, one that can be passed from environment to environment with minimal friction. It’s not a dissimilar message from the idea of “virtual appliances” that VMware championed as early as 2006, but it has the virtue of being simpler than packaging up entire specialized operating systems, and is thus more likely to work.

If we assume for the sake of argument, however, that PaaS will continue to make gains with developers and the wider market, the question is what the landscape looks like in the wake of the Cloud Foundry-IBM announcement. It’s obviously early days for the market; IBM-approved or no, Cloud Foundry isn’t yet listed as a LinkedIn skill, and the biggest LinkedIn user group we track had a mere 195 members as of July 15th. But in an early market, the IBM commitment is unquestionably a boost to the project. Open source competitors such as Red Hat’s OpenShift project, closed source vendors like Apprenda, hosted providers like Engine Yard, Force.com/Heroku or GAE will all now be answering questions about Cloud Foundry and IBM, at least in their larger negotiated deals.

As it always does, however, much will come down to execution. Specifically, execution around building what developers want and making it easy for them to get it. All the engineering and partnerships in the world can’t save a project that makes developers lives harder, as we’ve already seen with the first wave of PaaS vendors that failed to take over the world as expected. Whether or not Cloud Foundry can do that with the help of IBM and others will depend on who wins the battle for developers, and that’s one that’s far from over.

Disclosure: IBM is a RedMonk customer, as are Apprenda, Red Hat and Salesforce.com/Heroku. Pivotal is not a RedMonk customer, nor are Google or Engine Yard.

Categories: Cloud, Platforms.

The RedMonk Programming Language Rankings: June 2013

[January 22, 2014: these rankings have been updated here]

A week away from August, below are our programming language ranking numbers from June, which represent our Q3 snapshot. The attentive may have noticed that we never ran numbers for Q2; this is because little changed. Which is not to imply that a great deal changed between Q1 and Q3, please note. But rather than turn this into an annual exercise, snapshots every six months should provide adequate insight into the relevant language developments occurring over a given time period.

For those that are new to this analysis, it is simply a repetition of the technique originally described by Drew Conway and John Myles White in December of 2010. It seeks to correlate two distinct developer communities, GitHub and Stack Overflow, with one another. Since that analysis, they have published a more real-time version of their data for those who wish for day-to-day insights. In all of the times that this analysis has been performed, the correlation has never been less than .78; this quarter’s correlation is .79.
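For the curious, the mechanics of that correlation step can be sketched in a few lines. The repository and tag counts below are invented placeholders rather than actual GitHub or Stack Overflow data, and the real analysis covers far more languages, but the shape of the computation is the same: rank each language in each community, then measure agreement between the two rankings.

```python
# A sketch of the Conway/White-style technique: rank languages within each
# community, then compute Spearman's rank correlation between the rankings.
# The counts below are invented placeholders, not real data.

github_repos = {
    "JavaScript": 21000, "Java": 15000, "PHP": 12000,
    "Python": 11000, "Ruby": 9000,
}
stack_overflow_tags = {
    "Java": 19000, "JavaScript": 17000, "PHP": 14000,
    "Python": 10000, "Ruby": 8000,
}

def to_ranks(counts):
    """Map each language to its rank, 1 being the most popular."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {lang: rank for rank, lang in enumerate(ordered, start=1)}

def spearman(ranks_a, ranks_b):
    """Spearman's rho for distinct ranks: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(ranks_a)
    d_squared = sum((ranks_a[k] - ranks_b[k]) ** 2 for k in ranks_a)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

gh, so = to_ranks(github_repos), to_ranks(stack_overflow_tags)
print(spearman(gh, so))  # → 0.9
```

A correlation near 1 indicates that the two communities largely agree on a language’s relative popularity, which is what lends the combined ranking its predictive value.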

As always, there are caveats to be aware of.

  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers, however, is representative of substantial differences in relative popularity.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top 20 to 30 languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.

With that, here is the third quarter plot for 2013.

(embiggen the chart by clicking on it)

Because of the number of languages now included in the survey and because of the nature of the plot, the above can be difficult to process even when rendered full size. Here then is a simple list of the Top 20 Programming Languages as determined by the above analysis.

  1. Java *
  2. JavaScript *
  3. PHP *
  4. Python *
  5. Ruby *
  6. C# *
  7. C++ *
  8. C *
  9. Objective-C *
  10. Shell *
  11. Perl *
  12. Scala
  13. Assembly
  14. Haskell
  15. ASP
  16. R
  17. CoffeeScript
  18. Groovy
  19. Matlab
  20. Visual Basic

(* denotes a Tier 1 language)

Java advocates are likely to look at the above list and declare victory, but Java is technically tied with JavaScript rather than ahead of it. Still, this is undoubtedly validation for defenders of a language frequently dismissed as dead or dying. Java’s ranking rests on its solid performance in both environments. While JavaScript is the most popular language on Github by a significant margin, it is only the fourth most popular language on Stack Overflow by the measure of tag volume. Java, meanwhile, scored a third place finish on Github and second place on Stack Overflow, leading to its virtual tie with perennial champ JavaScript. Not that this is a surprise; Java has scored a very close second place to JavaScript over the last three snapshots.

Elsewhere, other findings of note.

  • Outside of Java, nothing in the Top 10 has changed since the Q1 snapshot.
  • For the second time in a row, ASP lost ground, declining one spot.
  • For the first time in three periods, R gained a spot.
  • Visual Basic dropped two spots after rising one.
  • Assembly language, interestingly, jumped two spots.
  • After breaking into the Top 20 in our last analysis, Groovy jumped up to #18.
  • After placing 16th the last two periods, ActionScript dropped out of the Top 20 entirely.

Outside of the Top 20, Clojure held steady at 22 and Go at 28, while D dropped 5 spots and Arduino jumped 4.

In general, then, the takeaways from this look at programming language traction and popularity are consistent with earlier findings. Language fragmentation, as evidenced by the sheer number of languages populating the first two tiers, is fully underway. The inevitable result of which is greater language diversity within businesses and other institutions, and the need for vendors to adopt multiple-runtime solutions. More specifically, this analysis indicates a best tool for the job strategy; rather than apply a single language to a wide range of problems, multiple languages are leveraged in an effort to take advantage of specialized capabilities.

Categories: Programming Languages.

Why Software Platforms Should Be More Like Pandora

For many years after the de facto industry standardization on the MP3 format, the primary problem remained music acquisition. There were exceptions, of course: serious Napster addicts, participants in private online file trading or even underemployed office workers who used their company LAN to pool their collective music assets. All of these likely had more music than they knew what to do with. But for the most part, the average listener maintained a modestly sized music catalog; modest enough that millions of buyers could fit the entirety of their music on the entry level first generation iPod, which came with a capacity of 5 GB. Even at aggressive, borderline-lossy compression settings – which weren’t worth using – that’s just over a thousand songs.

These days, however, more and more consumers are opting into platforms with theoretically unlimited libraries behind them. From iTunes Radio to Pandora to Play’s All Access to Rdio to Spotify, listeners have gone from being limited by the constraints of their individual music collection to having virtually no limits at all. Gone are the days when one needed to purchase a newly released album, or worse still, drive to a store to buy it. Instead, more often than not, it’s playable right now – legally, even.

The interesting thing about music lovers getting what they always wanted – frictionless online access to music – was that it created an entirely new set of problems. Analysis paralysis, the paradox of choice, call it what you will: it’s become exponentially harder to choose what to listen to.

Which is why those who would continue to sell music are turning to data to do so. Consider iTunes Genius, for example, introduced in 2008. It essentially compares the composition of your music library, and any ratings you might have applied, with the libraries and ratings of every other Genius user. From the dataset created from the combined libraries, it automatically generates a suggested playlist based on a seed track. While it can seem like magic to anyone tired of curating playlists manually, it’s really nothing more than an algorithmic scoring problem on the backend. Pandora takes an even more direct route, because it has real-time visibility into both what you’re listening to as well as metadata about that experience: did you rate it thumbs up or down, did you finish listening to it, did you even listen to it at all, are there other similar bands you wish played in the channel? All of this is then fed right back into the algorithms which do the best they can to pick out music that you, and thousands of other users similar to you, might like.
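To make the "algorithmic scoring problem" concrete, here is a deliberately toy version of the idea: score candidate tracks by how often they co-occur with a seed track in other users' libraries. The libraries and track names are invented for illustration, and the real services are vastly more sophisticated, but the core mechanic of pooling many users' preferences is the same.

```python
# A toy sketch of Genius-style playlist seeding: count how often candidate
# tracks appear alongside a seed track across users' libraries, then
# suggest the most frequent co-occurrences. All data here is invented.

from collections import Counter

libraries = [
    ["Track A", "Track B", "Track C"],
    ["Track A", "Track B", "Track D"],
    ["Track A", "Track C"],
    ["Track B", "Track D"],
]

def suggest(seed, libraries, top_n=2):
    """Rank tracks by how often they co-occur with the seed track."""
    co_occurrence = Counter()
    for library in libraries:
        if seed in library:
            co_occurrence.update(t for t in library if t != seed)
    return [track for track, _ in co_occurrence.most_common(top_n)]

print(suggest("Track A", libraries))  # → ['Track B', 'Track C']
```

The point is less the algorithm than the input: the suggestions improve only because many listeners contributed their libraries, which is exactly the dataset the vendor is accumulating.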

While the approaches of these and other services may differ, what they have in common is simple: a critical mass of listeners who are all voluntarily – whether they know it or not – building an ever larger, and ideally ever smarter, dataset of musical preferences on behalf of the vendor they’re buying from.

This is one of the examples that software companies should be learning from, although that should be “non-music” software companies, since just about every important new music company, including the examples above, is a software company first, music company second. Like the music companies, software companies should increasingly not be focused merely on the asset they wish to sell – software, in most cases – but data they might be in a position to collect that can be used to sell that software. Or as a saleable asset in and of itself.

For example, consider the case of a PaaS platform vendor. While the initial generation of platforms – GAE, Force.com, etc – were very opinionated, dictating runtime, database, schema and so on, the majority of players today offer multiple choices. Database options might include MongoDB, MySQL and Postgres, while runtimes might range from Java to JavaScript to PHP to Python to Ruby.

Many incoming customers, of course, may already know what technologies they prefer; they may even be locked into those choices. But those who haven’t made choices, and even some of those who have, would appreciate more detailed information on usage across the platform. What if, for example, you had real-time or near real-time numbers for the adoption of MongoDB, numbers indicating exploding traction amongst other users of the platform? Or a spike in JavaScript runtime consumption? Even more interesting, how are the databases trending broadly versus for customers of a given size? Every choice a customer makes – to use Java, to deploy a MySQL instance – is the equivalent of a Pandora “Like” signal. But you have to capture these signals.

Like music services, most technology platforms – particularly those that are run in a service context – are generating valuable data that can be used to inform customer choices. To date, however, very few platform providers are even thinking about this data in a systematized fashion let alone exposing it back to their customers in meaningful ways. We know this because we ask about it in every briefing.

Those vendors that embrace a software plus data approach, therefore, are likely to have a competitive advantage over their peers. And importantly, it’s the rare competitive advantage that becomes a larger barrier to entry – a data moat, if you will – over time.

Categories: Platforms.

Open Source Foundations in a Post-GitHub World

Solar Eclipse 2009 (NASA, Hinode, 7/22/09)

Two years ago Mikeal Rogers wrote a controversial piece called “Apache considered harmful” that touched a nerve for advocates of open source software foundations. Specifically, the piece argued that the ASF had outlived its usefulness, but in reality the post-GitHub nature of the criticism applied to a wide range of open source foundations.

For many years, open source foundations such as Apache counted project hosting as one of their core reasons for being. But in the majority of cases, the infrastructure supporting this functionality was antiquated, as few of the foundations had embraced modern distributed version control systems such as Git. The Eclipse Foundation, for example, had a number of projects controlled by CVS, an application whose first release was in 1990. The ASF, meanwhile, was fully committed to its own Subversion project, a centralized VCS that was over a decade old at the time of Rogers’ post.

Outside the foundations, meanwhile, the traction of GitHub’s implementation of Git had exploded. It had become, almost overnight, the default for new project hosting. And because GitHub was in the business of hosting a version control system, and was paid for it, it was no surprise that the quality of its hosting implementation was substantially better than what open source foundations like Apache or Eclipse could offer.

This preference for GitHub’s implementation led some developers, like Rogers, to question the need for foundations like Apache or Eclipse. In a world where GitHub was where the code lived and the largest population of developers was present, of what use were foundations?

One answer, in my view, was brand. Others included IP management, project governance, legal counsel, event planning, predictable release schedules and so on. But even assuming those services represent genuine value to developers, it would be difficult to adequately offset GitHub’s substantial advantages in interface and critical mass. GitHub makes a developer’s life easier now; intellectual property policies might or might not make their life easier at some point in the future.

As of this morning, however, developers at one foundation no longer need to choose. As the Eclipse Foundation’s FAQ covers, the foundation will now permit projects – just new ones, for the time being – to host their primary repository external to the foundation’s servers, at GitHub.

The move is not without precedent; the OuterCurve (née CodePlex) Foundation has permitted external hosting for several years. But with this announcement Eclipse becomes one of the first large, mature foundations to explicitly fold external properties such as GitHub into its workflow.

This change should benefit everyone involved. Properties like GitHub gain code and developers, foundations can focus on areas they’re likely to add more value than project hosting, and developers get the benefits of a software foundation without having to sacrifice the tooling and community they prefer. For this reason, it seems probable that over time this will become standard practice, particularly as foundations look to stem criticism that they’re part of the problem rather than part of the solution. In the short term, however, there are likely to be some bumps in the road as new school populations within the foundations push their old school counterparts for change. Eclipse will in that respect be an interesting case study to watch.

Either way, while Eclipse may be the first large foundation to adapt itself to the post-GitHub environment, it’s unlikely to be the last.

Disclosure: The Eclipse and OuterCurve Foundations are RedMonk clients.

Categories: Open Source.