
Crossing the Amazon: IBM in an Age of Disruption


The Wired headline in April of this year read, “Amazon Reveals Just How Huge the Cloud Is for Its Business.” The numbers for AWS were $4.6B for 2014, up 49% from the year before and on track to hit $6.23B by year’s end. The TechCrunch headline from October was “Amazon’s AWS Is Now A $7.3B Business As It Passes 1M Active Enterprise Customers.” Revenue at $7.3B, not $6.23B. A growth rate no longer of 49%, but 81%.

It is the velocity and trajectory of this business that has everyone in the industry spooked, and valuations of the business formerly relegated to the “other” revenue category on financial statements accelerating. Even after seeing sales shrink for 14 consecutive quarters, after all, and amidst calls to rebrand the company from Big Blue to Medium Blue, each of IBM’s non-finance business units generated more revenue in 2014 than AWS projects to generate this year. Three of the four were a multiple of the seven billion figure: GTS was ~$37B, Software ~$25B and GBS came in at ~$18B.

But the market and evaluators alike are less concerned, at least in the cases of Amazon and IBM, with present day revenue figures than with how they project to change over time – hence the euphoric AWS headlines and the quarterly pillorying IBM receives. What IBM is going through at present, in fact, suggests that Michael Dell’s original decision to take his firm private was a wise one.

Market disruption is a violent process, and surviving it can be almost as drawn out and painful as succumbing to it. As IBM knows, of course, having been one of the few companies to reinvent itself more than once. Expecting the same patience from investors, however, is a lot to ask, particularly in an age of activist shareholders carrying Damoclean swords.

If bullish perceptions of cloud native players, Amazon and otherwise, are driven by expectations of future returns based on current models, however, it is perhaps worth taking a step back and evaluating IBM’s current models rather than its current returns. The question is how IBM, or companies in IBM’s position, should respond to the macro-market factors currently disrupting its businesses.

From a high level, all of the incumbent systems players – from Cisco to Dell/EMC to HP to IBM to Oracle – need to recognize, among other market dynamics, the following:

  • Between the ascendance of ODMs and the explosion of IaaS, the market for premium low end hardware is gone. What hardware growth there is will come from the cloud – just ask Amazon, Google or Microsoft.
  • Traditional perpetual license software models are not gone, but in systemic decline. Customers instead are shifting to services-based models, with additional value adds from data (both collected and sourced).
  • Open source and commodity services have offered customers some relief from lock-in, but lock-in remains as closely tied to profit as Shapiro and Varian described in 1999. This implies that while it’s important to offer commodity entry points, higher-end proprietary services will be critical to both profit and retention.
  • New market conditions require new partners.

Measured by these criteria, at least, IBM is making logical adjustments to its businesses.

  • Low-end hardware businesses have been divested, and investments redirected to potential growth businesses such as SoftLayer.
  • An increasing emphasis within its software business is on services, e.g. Bluemix, acquisitions like Cloudant/Compose/etc, or the just announced Spark-as-a-Service.
  • Proprietary or exclusive offerings such as Watson or the Twitter and Weather Company partnerships give IBM the ability to upsell customers to higher margin services that are more difficult to replicate externally.
  • IBM’s partnership with Apple gives them a premium mobile hardware story, and Box CEO Aaron Levie was prominently on display at Insight.

From a directional standpoint, IBM appears to be responding to the systemic disruption across its footprint with a combination of internal innovation (Watson), open source (Cloud Foundry, Node, OpenStack) and inorganic acquisition (Blue Box, Cloudant, SoftLayer, etc). Betting on cloud over traditional hardware, or SaaS rather than shrink-wrapped software, may not seem aggressive to independent industry observers for whom the writing has been on the wall since halfway through the last decade, but the larger the business the more difficult it is to turn.

Much of IBM’s ability to reverse its recent downward financial trend, then, depends on its ability to execute in the emerging categories on which it has placed its bets. Some adjustments are clearly necessary. A heavy majority of the airtime at its Insight show this week, for example, has been devoted to its Watson product. While the artificial intelligence-like offering is intriguing and differentiated, however, as a business tool it’s a major marketing challenge. Positioning compute instances or databases offered as a service is a simple exercise. Explaining to audiences what “cognitive computing” means is non-trivial. Not least because unlike cloud, IBM is trying to push that rock up a hill by itself. Strangely, however, the company seems intent on leading with the most difficult to market product, rather than using more widely understood cloud or SaaS businesses as an on ramp and using Watson as a secondary differentiator. It would be as if AWS led with Machine Learning and mentioned, after the fact, that EC2 and RDS were available as well.

That being said, marketing and positioning is a solvable problem if the strategic direction is correct. And 14 quarters of declining revenue or no – remember that as AWS itself demonstrates, revenue is a lagging indicator – IBM is in fact making changes to its strategic direction. The company just makes it harder than it needs to be to see that at times.

Whether they can execute on these new directions, however, is what will determine whether the company’s turnaround is successful.

Disclosure: Amazon, Cisco, Dell, HP, IBM, and Oracle are RedMonk customers. Google and Microsoft are not current customers.

Categories: Cloud, Conferences & Shows.

All In: On Amazon, Dell and EMC

Datacenter Work

In her 1969 book, On Death and Dying, the Swiss psychiatrist Elisabeth Kübler-Ross attempted to capture and document the emotions most frequently experienced by terminally ill patients. The model is famous today, of course. Even if you don’t remember the model’s name, you’ll probably recall that individuals faced with a life-threatening or altering event are expected to experience a series of five emotions: denial, anger, bargaining, depression and acceptance. Though the model’s accuracy has been challenged and research doesn’t support it as either definitive or all-encompassing, its utility has sustained it through the present day.

While there are significant differences between corporate entities and human beings, Citizens United notwithstanding, there are interesting parallels between organizations faced with the threat of disruption and people faced with disruption’s human equivalent, death.

If you listen to incumbents talk about their would-be disruptive competitors year after year, for example, specific, industry-wide patterns begin to emerge. Patterns which, as with the Kübler-Ross model, progress in stages. Talk to any given incumbent about a would-be disruptor and chances are good you’ll have a conversation like the following – essentially the same conversation, in fact, that you’ll have with any of the incumbents. The timing of the conversational stages may vary; the substance almost never does.

  • Stage 1: “I’ve never heard of that company.”
  • Stage 2: “Yes I’ve heard of them, but we’re not seeing them in any deals.”
  • Stage 3: “They’re beginning to show up in deals, but they’re getting killed.”
  • Stage 4: “They’re growing, but it’s all small deals and toy apps, they don’t get the enterprise.”
  • Stage 5: “Here’s how we compete against them in the enterprise.”

As with a patient facing a life-threatening diagnosis, the threat is difficult to acknowledge, let alone process. Acceptance, therefore, is arrived at only gradually.

Which brings us, oddly enough, to Amazon. Even shortly after S3 and EC2 debuted in March and August of 2006, respectively, it was evident that these services – their relatively primitive states notwithstanding – were strategically significant. The reaction of incumbents at the time? “I’ve never heard of Amazon Web Services.” Or if the company representative was especially progressive, “Yes, I’ve heard of Amazon Web Services, but we’re not seeing them in any deals.”

Five years ago last month, the only real surprise left was the lack of apparent concern about Amazon from the market incumbents it was busily disrupting. Here was a company that was quite obviously a clear and present danger, but much of the industry seemed stuck on the idea of Amazon as a mere retailer. Where companies should have been moving in earnest, what you’d hear most often was “Amazon Web Services is beginning to show up in deals, but they’re getting killed.”

In the years since, belated recognition of the threat posed has triggered massive responses. While claiming publicly that “Amazon Web Services is growing, but it’s all small deals and toy apps, they don’t get the enterprise,” behind the scenes massive investments in datacenter buildouts were underway, and organizations attempted to quickly retool to embrace and fight the cloud simultaneously.

None of those responses, however, are more massive than the announcement that Dell is acquiring EMC. Should the transaction close, at $67B it would be larger than the second largest technology acquisition – HP/Compaq – by a factor of two if you account for inflation, nearly three if you don’t. The obvious question is what this all has to do with Amazon.

On the surface, it might seem that the answer is very little. Dell went private in 2013 and so the numbers we have are old, but as of two years ago the revenue Dell derived from its traditional enterprise lines of business – servers, networking, storage and services – was $19.4B. The numbers for its traditional PC business – desktop PC and mobility – were $28.3B, or nearly fifty percent higher. The problem for Dell, and one of the reasons it was making a big push amongst analysts at the time around its enterprise business, was the relative trajectories of the revenue streams. Even with modest to negative growth from its services (1%) and storage (-13%) businesses, servers and networking buoyed its enterprise business to 4% growth from the year prior. Mobility and PC returns over that same span were down 15%. All of which makes the decision to go private straightforward: it was going to get worse before it got better.
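As a quick sanity check of those figures – a sketch in Python, using only the revenue numbers cited above from Dell’s last public filings – the gap between the two revenue streams works out to roughly 46 percent:

```python
# Dell's last publicly reported revenue split, in $B, as cited above.
enterprise = 19.4  # servers, networking, storage and services
pc = 28.3          # desktop PC and mobility

# How much larger the PC business was, relative to enterprise revenue.
pct_higher = (pc - enterprise) / enterprise * 100
print(f"PC revenue exceeded enterprise revenue by {pct_higher:.0f}%")
```

In other words, the PC side of the house was nearly half again as large as the enterprise side, even as its trajectory pointed the wrong way.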

If Dell was going to bet on a business moving forward, then, it had essentially two obvious paths it could follow. Behind door number one was doubling down on the PC and mobile markets. The former market is being actively hollowed out, with both volume and margin cratering. In the latter, Apple effectively owns the entirety of the market’s profits.

Dell’s messaging and behavior, both before and after its decision to escape the limelight of the public market, suggested that it had picked door number two. Dating back at least to the 2009 $4B acquisition of Perot Systems, Dell has had ambitions of moving upmarket from the increasingly problematic fortunes and falling margins of the PC business. Every acquisition since then, in fact, has been in service of the company’s enterprise ambitions.

In the context of 2008 and 2009, this directional shift was understandable. Amazon was growing fast, but unless you were paying close attention to the new kingmakers – the developers who were inhaling cloud services – its significance was not apparent. Certainly very few boards understood on a fundamental level the threat that cloud infrastructure and services would pose to their proven enterprise offerings.

The question facing Dell is whether the strategy that made sense in 2008 or 2009 makes sense today. As current EMC CEO Joe Tucci said in announcing the news, “The waves of change we now see in our industry are unprecedented and, to navigate this change, we must create a new company for a new era.” It is true as many observers have noted that this announcement is about more than AWS.

But it is also true that the new era Tucci referred to is increasingly defined by AWS. Witness the AWS announcements at re:Invent last week. It has been understood for a while that commodity servers and storage were vulnerable; if Dell going private wasn’t evidence enough, IBM’s deprecation of its x86 business and HP’s struggles with same should be. Many enterprise providers believed, however, that higher margin software areas were outside the scope of Amazon’s ambitions. This was a mistake. At re:Invent, AWS reconfirmed with offerings like QuickSight that there is very little outside the scope of their ambitions. Traditional enterprise providers must expect AWS to use its core services as a base of operations from which to target any and all adjacent enterprise software categories that promise sufficient margin.

When you couple the accelerating demand for and comfort level with infrastructure and software as a service with widening enterprise-appropriate portfolios, it is indeed a new era, and one in which many traditional suppliers are playing catch up. To borrow Reed Hastings’ metaphor, Amazon is becoming the enterprise incumbents faster than they are becoming Amazon. Much faster.

The addition of EMC would obviously bring Dell a variety of assets that could be deployed towards a variety of ends. EMC plugs Dell’s most obvious infrastructure hole, and the company owns stakes in key software entities in Pivotal and VMware among others, the former of which is reportedly expected to go public and the latter of which is expected to be kept that way. The addition of EMC, however, better equips Dell to compete against the likes of Cisco, HP, IBM and Oracle than against Amazon.

Which implies that the combined entity’s short term strategy will be competing against the traditional players for the enterprise dollar. Longer term, however, it will be interesting to see how it leverages its assets to compete in an increasingly cloudy world. Given the size of this deal, acquisitions that would move the needle from a cloud perspective can probably be ruled out. Which in turn means that any major push from the new Dell into the cloud – presuming there is one, eventually – will have to come from within or via the acquisition of much smaller players.

Given the meteoric rise of Amazon and widely assumed growth in demand for cloud services, it’s easy to criticize this acquisition, as many have, on the basis that it doesn’t make Dell an immediate alternative to the major public cloud suppliers. It is less obvious, however, whether another acquisition would. On paper, players like a CenturyLink ($14.45B market cap) could potentially be acquired for less than half the cost of EMC and bring with them a wide portfolio of infrastructure capabilities from bare metal through PaaS. In the real world, however, it’s difficult to imagine a company whose DNA, dating back to its dorm room founding, is manufacturing hardware for customers making a success of an acquisition that would be, for all practical purposes, a pivot into the cloud.

Instead, Dell went all in on building an organization that could more effectively compete with the traditional enterprise players. How they’ll all fare in the new era that Tucci referred to is the question.

Disclosure: Amazon, CenturyLink, Cisco, Dell, EMC, IBM and HP are all RedMonk customers. Google and Microsoft are not current customers.

Categories: Cloud, Hardware.

The 2015 Monktoberfest

As pre-conference headlines go, “astronomical tides,” “biblical rain” and “massive coastal flooding” would not be high on my list. Particularly if your conference is, like the Monktoberfest, on the coast. At a distance from it measurable in feet, in fact. The rains were so bad on the Wednesday before the Monktoberfest that it was for a brief period not clear that I would be able to make it back to Portland from Freeport where I was picking up the last of the conference supplies. According to the locals on our Monktoberfest Slack instance most of the major arteries into the city from Franklin Street to Forest Avenue had leveled up to actual rivers.

The Whole Foods in Portland, which is less than a mile from my office and the conference venue both, is on Franklin. It looked like this a bit before noon on Wednesday.

When that’s the scene a few hours before you’re supposed to host a chartered cruise to welcome your inbound attendees, things get interesting. Forecasts are consulted, phone calls are made, emails are answered and tweets are sent. The meteorologists assured us, however, that the worst was behind us and that the rain would blow through. Which, for once, is exactly what happened.

By five thirty, we were still looking at a lot of clouds but they’d quit actively dumping water on us. We were even treated to an actual sunset.

The moment the boat, biblical rains that day or no, nimbly pushed back from the dock, everything was set in motion. The Monktoberfest at that point began its work, and arguably its most important function: connecting and re-connecting the people who take the time out of their schedule to be with us up here in Maine. On the boat, at dinner afterwards, and at Novare Res late that night, the kinds of conversations that people only have in person were had. Repeatedly.

At 10 AM the next day, we gathered at the Portland Public Library, as we have every year, to listen to talks, to contribute to talks with questions, and to meet each other. Over the next day and a half, we had talks on everything from building a volunteer legion and open APIs/platforms to medievalism in gaming and brewing beer with cylon.js and Raspberry Pis, from being an impostor to the economics of the hop industry. Our speakers were, as always, prepared, unique and excellent. And before you ask, yes, all of the talks were filmed and will be available later.

The Monktoberfest is, as the saying goes, a labor of love. Like any other conference, it involves hundreds of hours of labor on the part of a great many people. But we love it. We hope the attendees do too, of course, and every year it is reactions like this that make it all worthwhile.

I say this every year because it’s true: it’s the people that make this event worth it. Every Monktoberfest the people who help put it on ask me about how the group we have assembled can possibly be so friendly. My answer is simple:

If I read that from someone else I’d dismiss it as hyperbole. I had a difficult time explaining that, for example, to Whit Richardson, a reporter for the Portland Press Herald, who stopped by to talk about the event with me.

But the simple fact is that it’s not hyperbole. That description is verbatim what I am told, year in and year out, by our caterers, by the people we have staffing the show, by Ryan and Leigh and by all of the people new to our event. Exactly how we end up with such a good group is a mystery to me, but I certainly appreciate it.

The Credit

I said this at the show, but it’s worth repeating: the majority of the credit for the Monktoberfest belongs elsewhere. My sincere thanks and appreciation to the following parties.

  • Our Sponsors: Without them, there is no Monktoberfest
    • HP Helion: We can’t make the investments in food, drink and venue that have come to characterize the Monktoberfest without a lot of help. We were very grateful that HP Helion stood up and made a major commitment to the conference. They’re one of the main reasons there was a Monktoberfest, and that we could deliver the kind of experience you’ve come to expect from us.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the only sponsor to have been with us all five years, if I’m not mistaken – helps us make the show possible.
    • EMC{code}: The fact that we’re able to serve you everything from Damariscotta river oysters to lobster sliders is thanks to EMC{code}’s generous support.
    • Blue Box: We should first thank Poseidon that we were able to get out on the water at all, but once he cleared the weather for us Blue Box was the support we needed for our welcome cruise.
    • Apprenda: Hopefully your brilliant new Libby 16oz tulips made it home safely. When you get a chance, thank the good folks at Apprenda for them.
    • DEIS: Of all of our sponsors, none was quite so enthusiastic as the DEIS project. They sponsored coffee, breakfast, snacks and they bought you a round. Food, coffee and beer make them one of the conference MVPs.
    • Cisco DevNet: Got some bottles while you were out and need to open them? Thank the team over at Cisco DevNet for your bar quality spinner.
    • Oracle Technology Network / Pivotal: Maybe you enjoyed the Allagash peach sour. Maybe it was the To Øl citra pale. Or the Lervig/Surly imperial black ale. Either way, these beers were brought to you by the Oracle Technology Network and the team at Pivotal.
    • CircleCI: Our coffee, supplied to us by Arabica, got excellent reviews this year. Part of the reason it was there? CircleCI.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again and bringing you free books.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they really are one of the best craft beer teams not just in this country, but the world. As I told them, we could not do this event without them; before I even start planning the Monktoberfest, in fact, I check to make sure they’re available. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret to be with us.
  • Lurie Palino: Lurie and her catering crew did an amazing job for us, delivering, as she does every year, on a fantastic event. With no small assist from her husband, who caught the lobsters, and her incredibly hard working crew at Seacoast Catering.
  • Kate: Besides having a full time (and then some) job, another part time job as our legal counsel, and – new for this year! – being pregnant, Kate did yeoman’s work once more in designing our glasses and fifth year giveaway, coordinating with our caterer, working with the venues and more. How she does it all is beyond me. As I like to say, the good ideas you enjoy every year come from here. I can never thank her enough.
  • Rachel: Knowing that Kate was going to be incapacitated to some degree by her pregnancy, we enlisted Rachel’s assistance to share some of the load. Little did we know that we were going to get one of the most organized and detail-oriented resources in existence. Every last detail was tracked, interaction by interaction, in GitHub, down to the number and timing of reminder phone calls made. We couldn’t have done this without Rachel.
  • The Staff: Juliane did her usual excellent job ahead of the conference, working with James to secure and manage our sponsors. She also had to handle all of the incoming traffic while we were all occupied with the conference. Marcia handled all of the back end logistics as she does so well. Celeste, Cameron, Kim and the rest of the team handled the chaos that is the event itself with ease. We’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: The Alchemist was fantastic as always about making sure that our attendees got some of the sweet nectar that is Heady Topper, and Mike Guarracino of Allagash was a huge hit attending both our opening cruise and hosting us for a private tour on Friday afternoon after the conference ended. Oxbow Brewing, meanwhile, did a marvelous job hosting us for dinner. Thanks to all involved.

On a Sadder Note


The first year that we held the Monktoberfest – before there was such a thing as the Monktoberfest, in fact – Alex King offered to help. Some of you might know Alex from his early work as a committer on WordPress. Others from Tasks Pro. Or FeedLounge. Or the now ubiquitous Share This icon. Or Crowd Favorite. Anyway, you see where I’m going: Alex was a legitimately big deal professionally, yet still happy to help me get a small event off the ground. His team produced the t-shirt design that has been used every year of the show. His company Crowd Favorite was our first sponsor, and sponsored every year that Alex ran the company. And he attended and evangelized each and every show.

He was, in many respects, the conference’s biggest supporter.

In July, he called to tell me that for the first time he was not going to be able to make the conference – but used the opportunity to keep supporting us. On September 27th, three days before the conference he helped build began, Alex passed away after a long fight with cancer. I did my best to tell our attendees who he was and what he had accomplished, but to my discredit I could not hold it together long enough. The best I could do was call for a moment of silence.


I’ll have more to say about Alex, but in the meantime it is my hope that everyone who wears their 2015 Monktoberfest shirts for years to come will see the crown on the sleeve and be reminded of Alex King – a man who helped ensure there was a Monktoberfest, and a man who was my friend.

Categories: Conferences & Shows.

What SaaS Companies Forget to Talk About


In the beginning, the problem facing Software-as-a-Service offerings was that they weren’t software. At least not in the traditional, on-premise sense that customers were accustomed to. To compete, then, as is often the case, SaaS had to become that which it competed with. Which meant that, fundamentally different delivery model or no, SaaS companies grew to resemble the software companies they were competing with, and in some cases, replacing. Everything from sales models (multi-year contracts) to the people hired away from on-premise software vendors by SaaS alternatives reinforced this idea, which to varying extents persists in SaaS companies to this day.

None of this is surprising, or in most cases, problematic. It was, after all, an adaptation to a market, one driven in large part by customer expectations. It does mean that SaaS companies have curious blind spots, however. So conditioned have they been to think and sell like the on-premise products they compete with that they’ve almost forgotten that they’re not on-premise themselves. The most obvious example of this at work is one that we’ve been discussing with our SaaS clients more and more of late.

Consider the enterprise IT landscape today. At virtually every level of infrastructure, there exist multiple, credible freely available open source options to pick from. From the VM to the container to the operating system to the scheduler to the database, choice abounds. Many of these were solutions first and products second, which means that while there may be rough edges, they have in many cases been proven to work at a scale that an average customer will never experience. Time was you had to have “bakeoffs” between competing commercial products to see if they would handle your transaction volume. These days, unless you handle more traffic than the likes of Facebook, Google or Twitter, that’s probably not necessary.

This is a remarkable achievement for an industry that at one time was dominated by expensive, functionally limited products that might or might not work for a given project but guaranteed a poor developer experience. It is an achievement that has not come without a cost, however, and that cost is choice.

As has been documented in this space multiple times, the increasing volume of high quality open source software is bringing with it unintended consequences, among them lengthier and more challenging evaluations. As much as organizations didn’t appreciate only being able to choose from a small handful of expensive application servers, that was an approachable, manageable choice. There was one model and two or three vendors.

Today, there are many models to choose from, and thus many choices to be made. Public or private infrastructure. If it’s private, what does the infrastructure consist of? OpenStack? Cloud Foundry? Kubernetes? Mesos? What is the appropriate atomic unit of computing? VM? Container? App?

For organizations that are both capable and view their technical infrastructure as a competitive edge, this is a golden age. Never before has so much high quality engineering – tech that would have been unimaginably expensive even a decade ago – been available at no cost. And free to modify.

This does not, however, describe most organizations. By and large enterprises are doing well today to merely keep their heads above water with a well thought out and accepted public versus private infrastructure strategy, never mind all of the other choices that follow from there.

Which brings us back to the businesses offering SaaS solutions. When they brief us, they spend the majority of their time talking about their engineering, their usability, their sales, their partners, their pedigree and so on. They discuss not at all, in general, the biggest potential differentiator they have: the absence of choice.

This again is understandable: very few vendors want to go to market saying “no more choices for you.” But the time is coming, quickly, when this might be exactly the right message. If you have a hosted application platform, for example, do you really want to get bogged down in a feature comparison with an on-premise alternative? Or would you prefer to call attention to the fact that all of the evaluation that goes into determining which of the Container/OpenStack/Cloud Foundry/Kubernetes/Mesos cabal to use and how they can be fit together no longer needs to occur? That the idea of having an application idea in the morning and deploying it in the afternoon is actually realistic because the infrastructure has already been assembled?

There are counters to this argument, clearly. If I’m an on-premise provider, I’d be making a lot of noise about “noisy neighbors,” large scale outages (because enterprises rarely have the internal numbers to know how they compare) and so on. But I’d be making a lot of noise because deep down I’d understand that, paradoxically, SaaS’s removal of choice is an advantage that looks more compelling by the day. Which is why I tell all of our SaaS clients to articulate not only what they offer, but the technology that clients no longer have to evaluate, test and deploy as a result of going with a service, because that is a fundamental value they offer.

They just forget that at times, which happens when you’re used to competing head to head with on-premise every day.

Categories: Software-as-a-Service.

Lighting Out for the Territories

One of the things that gets me through a spring of being on a plane five out of six weeks is looking forward to summer vacation. A vacation, importantly, that involves zero planes. One of the best parts of living in Maine is that I don’t need to fly to experience some truly spectacular scenery. After months of running around from airport to airport, meeting to meeting, call to call, I take full advantage of the slowest month in our business to take a step back, relax and recharge for the fall sprint.

This year’s iteration will feature a week in a cottage on the water up north, some camping and, if all goes well, a fair amount of time spent on the water. With the summer’s major construction project already completed, unlike last year’s, it will hopefully (knock on wood) be a vacation spent injury-free. Instead, I’ll try to plow through a dozen or more books, brew a beer or two and swim under one of my favorite waterfalls.

Though I’ll be doing a bit of work on the Monktoberfest over the next few weeks – some jobs are never done – as of a few hours from now I will not be checking email or voicemail. If you have a legitimate emergency, contact Marcia (marcia @), who will know how to get in touch with me. Otherwise, I will see you all on the other side.

Enjoy your summers.

Categories: Personal.

At Long Last, Some Scheduling Help


Beginning in early March when I was lucky enough to get into the beta (which is still closed), I’ve been using x.ai’s automated personal assistant Amy to schedule meetings for me. The only real problem I’ve had using the technology has nothing to do with the technology. My issue, rather, is one of etiquette. The artificial intelligence behind Amy is good enough that, with the exception of people who’ve heard of her, most people never realize they’re communicating with a bot. Which is a credit to the service, of course. But it leaves users with an important question: do I have to tell people on the other end that Amy’s not a person? Should I tell them?

Now you might be thinking that if the worst problem you have with a new service is etiquette-related, that’s a good sign for the technology. And you’d be right.

As some have noticed, I have a long and unpleasant history with scheduling tools. As an analyst, a big part of my job is to talk to a lot of people, which in turn means that scheduling is a big part of my job. Which explains why I have tried, at one time or another, virtually every scheduling tool known to mankind. Some were people-based – things like FancyHands. Others were services, some of which are still around, some of which have been retired and one or two that have been forgotten entirely: Appointly, Bookd, Calendly, Doodle, Google Free/Busy, MyFreebusy, ScheduleOnce, Sunrise and on and on. None worked for me, though some were less bad than others. Which is why I still waste time – whether it’s mine, Juliane’s or Marcia’s – scheduling meetings.

The root problem with all of them, even MyFreeBusy or Tungle which generated the fewest complaints for our usage, was that each was one more moving piece in an already too complicated process. Request is made via email, check calendar, check third-party site, back to email, hope the slot is still open – rinse, lather, repeat. This was because the technical approach that most of the tools took was to implement an outboard, externalizable version of my calendar. x.ai’s Amy breaks with this tradition. Instead of reproducing my actual calendar minus the private meeting details plus some booking features, Amy replicates a person – a personal assistant, more specifically. Scheduling from my end is very simple: I email back that I’d be happy to schedule a meeting, CC Amy and shortly thereafter I get a meeting invitation in my inbox. She (or he; x.ai has a male counterpart) does the legwork on the back end via nothing more complicated than email, and I end up with an appointment on my calendar.

In terms of process, it’s in truth not that different from my end than sending an email saying I’ll take a meeting and including a link to my Tungle calendar. But I never have to explain what Appointly/Bookd/Calendly/Doodle/MyFreebusy/ScheduleOnce/Sunrise/etc is. I never have to field feedback about how the UI is confusing. I never have to explain the difference between a given service’s version of my calendar and my actual calendar, that just because someone requested an open slot on the former does not mean it’s written and confirmed into the latter. And perhaps most importantly, anyone can find and make a schedule request on a public calendar – which means I can be spammed with non-relevant requests. The only requests Amy schedules are those I’ve explicitly asked for. Big difference.

Amy’s not perfect yet. It’d be nice if I could whitelist email domains from clients, for instance, so that they could schedule me via Amy without having to ask me first, and book me for 60 minute increments while non-clients are limited to 30. There is a learning curve as well; used to a settings page, it wasn’t obvious to me at first that setting defaults like my conference number, weekday availability and so on could be done just by emailing Amy. There is also no way currently to grant her access to my colleagues’ calendars to make group scheduling of multiple analysts simpler, or group features in general.

But as I said back in 2005, the scheduling space has been crying out for innovation, and Amy delivers that in spades – even if I’m not quite sure what the etiquette is for her usage yet. Here’s hoping we see artificial intelligence like Amy employed in many more use cases down the line, because she’s already made my life better just by tackling my calendar – who knows what else she could fix.

In the meantime, we’ll be waiting for access to open up so I can get my analyst colleagues on board, because I can’t see any reason not to standardize RedMonk on Amy as soon as she or it is publicly available. It’s that good.

Categories: Collaboration.

The SaaS Transition


One of the most common misinterpretations of the Software Paradox to date has been the assertion that traditional software license models are a binary, on or off switch. Many who read the book, or at least the title, come away with the impression that it’s arguing that it is impossible to generate revenue from software today. This is not, in fact, the argument being made. Not least because it’s very difficult to build the case that you can’t generate money from traditional software licensing when one company alone is generating double-digit billions of dollars in revenue from the sale of what is effectively two software products.

But more problematically, this simplistic interpretation obscures the reality that it remains very possible to generate revenue with software, it’s simply that the economic model for its monetization is evolving. One of the most common adaptations is to operate software as a service, and sell it in that fashion. This model has obviously been extant in the market for years; Salesforce, for one, went public in June of 2004. Over the past few years, however, we’ve seen a dramatic expansion in the availability of infrastructure software, rather than packaged applications, operated and sold as a service.

One of the latest examples of this is Amazon’s Aurora RDS flavor. Originally announced at its annual conference in November and made generally available last week, it is a bid to offer customers the performance of high-end commercial databases in a MySQL-compatible database. It is, in other words, a service whose addressable market theoretically includes both traditional Oracle customers and the volumes of MySQL users worldwide. Which makes it one of the more interesting arrows in AWS’ quiver, but, for all the engineering behind it, just one. AWS already had the ability to sell to both MySQL and Oracle customers, as well as PostgreSQL and SQL Server customers.
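
The “MySQL compatible” claim is the crux of the pitch: an application already written against MySQL should need only a new endpoint, not a new driver or dialect. A minimal sketch of what that looks like in practice – the hostnames, credentials and schema below are invented placeholders, not a real cluster:

```python
# Sketch: Aurora speaks the MySQL wire protocol, so an existing MySQL
# application's connection settings change only in the host it points at.
# All values here are hypothetical placeholders for illustration.

mysql_config = {
    "host": "mysql01.internal.example.com",  # self-managed MySQL
    "port": 3306,
    "user": "app",
    "database": "orders",
}

# The same configuration, repointed at a (made-up) Aurora cluster endpoint.
aurora_config = dict(
    mysql_config,
    host="example.cluster-abc123.us-east-1.rds.amazonaws.com",
)

# Everything but the host -- driver, port, credentials, schema -- is unchanged.
changed = [k for k in mysql_config if mysql_config[k] != aurora_config[k]]
print(changed)  # ['host']
```

That single-key diff is the point: the migration cost for the MySQL half of the addressable market is, in theory, close to zero.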

While AWS may be the poster-child for infrastructure software-as-a-service disruption, however, it doesn’t actually tell us much about the market direction. AWS is, after all, explicitly and solely a services-based business, so its focus on and attention to that model is no more surprising than Salesforce/Heroku’s efforts with respect to offering PostgreSQL in similar fashion. Services businesses, in other words, should be expected to provide service products. More telling with respect to testing the Software Paradox hypothesis would be traditionally on-premise software companies embracing service-oriented models. Which is what we’re seeing.

Most recently, IBM acquired the Y Combinator graduate Compose. At the time of acquisition, the company formerly known as MongoHQ provided services for Elasticsearch, MongoDB, PostgreSQL, Redis and RethinkDB. Compose adds new functional capabilities and experience to IBM’s portfolio, as well as additional expertise in running these datastores as services. Not that Compose is directionally new for IBM; a year and a half prior, the company picked up Cloudant, the CouchDB-derived database offered as a service. And this is on top of the services push the company is making with its Cloud Foundry-based Bluemix platform, which itself offers software such as Hadoop or Spark as a service.

The acceleration of IBM’s move towards services businesses, from databases to application development, is understandable in the wider context of its financials. Over the last four years, here is IBM’s software revenue growth: 2011 (9.9%), 2012 (2.0%), 2013 (1.88%), 2014 (-1.9%). Whether or not one believes the Software Paradox to be true more broadly, then, two facts about IBM cannot be argued. First, that it is getting harder for the company to sell software. Second, that the longtime technology incumbent is investing heavily in software-as-a-service businesses across the board. The presumption here is that these two facts are not unrelated.
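
Compounded, those rates tell the story in a single number. A quick back-of-the-envelope sketch, indexing 2010 software revenue at an arbitrary 100 (the index is my construction, not an IBM figure):

```python
# Compound IBM's reported software revenue growth rates (from the text)
# against an arbitrary 2010 baseline of 100.
growth = [("2011", 0.099), ("2012", 0.020), ("2013", 0.0188), ("2014", -0.019)]

index = 100.0
for year, rate in growth:
    index *= 1 + rate
    print(year, round(index, 1))

# Four years later the index sits at ~112 -- and is now declining.
```

In other words, four years of nominal “growth” left the business only about 12% above where it started, with the trendline pointing down.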

Some might argue that the struggles of an IBM or Oracle to reliably sell new licenses of on premise products is nothing more than a function of their particular markets. Relational databases, for example, are a mature market that should be expected to see minimal growth, particularly as it’s disrupted by various newer non-relational alternatives.

This argument makes sense on the surface, but ignores the reality that many of the would-be disruptors are eyeing services themselves as an additional growth channel, a means of hedging against the various trendlines that indicate traditional software monetization mechanisms are under pressure, or both. As a halfway step towards becoming more of a SaaS-type business, many commercial open source vendors are actively looking at their monitoring and management software as a potential step towards that model. The core software itself is sold in the traditional manner, but SaaS-style recurring revenue is the goal from the software – frequently proprietary – that manages and monitors the core open source asset.

Other startups are being even more proactive, however, and building or acquiring the resources necessary to spin up a legitimate SaaS product in-house, rather than relying on partners or partial-products like the aforementioned monitoring systems. Elastic, for example, may have acquired Found in part for the short term impact from the acquisition’s ability to package up Elasticsearch-as-a-service into containers deployable on premise. But over the longer term, the new ability to operate and run Elasticsearch as a public service is likely to be far more material to Elastic’s bottom line.

More importantly for Elastic, they timed the move well. As we move forward and see more service-oriented acquisitions like Compose or Found, the price for similar startups is only going to rise: partly because the precedent for exits and escalating valuations is being established, but more importantly because on-premise-only organizations are increasingly going to require the ability to offer customers not just a given software asset, but the ability to operate and maintain that software for them. Expect more SaaS startups and acquisitions, then, at higher prices.

None of which should be taken as a binary statement, again. On premise businesses have today and will continue to have the ability to generate both revenue and profits from traditional models. Some SaaS businesses, in fact, are effectively reverse commuting: taking their SaaS offerings, sealing them in a proprietary image, and selling them to customers that insist on on-premise deployments. GitHub has done this for years, and when we spoke with CircleCI last week, on-premise deployments were a growing opportunity for the SaaS company.

But even as on premise remains an opportunity, it is becoming more difficult to monetize efficiently. Selling software as a service is a very reasonable adaptation, but the window for adapting your on premise organization to the new reality will not remain open forever. Like it or not, the SaaS transition is underway.

Disclosure: Amazon, Elastic, IBM, Oracle and Salesforce are RedMonk customers. CircleCI and GitHub are not currently RedMonk customers.

Categories: Cloud, Software-as-a-Service.

The Implications of Cloud Native

Two months ago, “Cloud Native” was something of a new term, adopted most visibly by the Cloud Foundry project; a term both aspirational and unburdened by legacy at the same time. As of this week at OSCON, it’s a statement, borderline manifesto. As if it wasn’t enough that Google and a host of others adopted the term as well, it now has its own open source foundation – the imaginatively titled Cloud Native Computing Foundation. In the wake of its relatively sudden emergence, the obvious questions are first what is cloud native, and second what does it mean for the industry?

As far as the term itself, the short answer is a new method for building applications. The long answer depends in large part on who you ask.

There is a rough consensus on many Cloud Native traits. Containers as an atomic unit, for example. Micro-services as the means of both construction and communication. Platform independence. Multiple language support. Automation as a feature of everything from build to deployment. High uptime. Ephemeral infrastructure (cattle not pets). And so on.

Bigger picture, the pattern Cloud Native platforms have in common is that they are a further abstraction. Individual compute instances are subsumed into a larger, virtual whole, what has been referred to here as a fabric.

Where the consensus breaks down is what software – precisely – one might select to achieve the above.

One of the most interesting aspects of OSCON in the year 2015 is what may be taken for granted. In the early days of the conference, attendees were the rebel alliance, insurrectionists waging war against the proprietary empire and desperately asserting their legitimacy at the same time. Today, open source has won to the degree that you hear phrases like “single-entity open source is the new proprietary.” As Mike Olson once put it, “you can no longer win with a closed-source platform.” This victory for open source has many implications, but one much in evidence at OSCON this year is choice.

With open source rapidly becoming the default rather than a difficult-to-justify exception, naturally the market has more open source options available to it. Which on the one hand, is excellent news, because more high quality software is a good thing. Choice does not come without a cost, however, and that’s particularly evident in what is now known as the Cloud Native space.

One of the biggest issues facing users today is, paradoxically, choice. In years past, the most difficult decision customers had to make was whether to use BEA or IBM for their application server. Today, they have to sort through projects like Aurora, Cloud Foundry, Kubernetes, Mesos, OpenShift and Swarm. They have to understand where their existing investments in Ansible, Chef, Puppet and Salt fit in, or don’t. They have to ask how Kubernetes compares to Diego. Bosh to Mesos. And where do containers fit in with all of the above, which container implementation do they leverage and are they actually ready for production? Oh, and what infrastructure is all of this running on? Bare metal? Or is something like OpenStack required? Is that why Google joined? And on and on.

Even if we assume for the sake of argument that the Cloud Native vision will be an increasingly appealing one for enterprises, how to get there is an open question. One that, with rare exceptions such as the Cloud Foundry Foundation’s OSCON slides, few of the participants are doing much to help answer, concerned as they are with their own particular worldviews.

Beyond the problem of choice, Cloud Native is, as mentioned previously, deliberately and explicitly exclusionary. It posits that there are pre- and post-cloud development models, and implies that the latter is the future. Certainly it’s the recommended approach. Traditional packaged applications or legacy three-tier architectures, in other words, need not apply.

But if we step back, Cloud Native also represents the return trajectory of a very long orbit. Decades ago, early mainframe virtualization capabilities notwithstanding, the notion of computing was a single large machine. When you deployed a mainframe application, you didn’t decide which mainframe, it went to the mainframe. With the subsequent mini-computer and client-server revolutions came a different notion, one further propagated and accelerated by Infrastructure-as-a-Service offerings: individual servers – physical or otherwise – as the atomic unit of computing. Instead of hosting an application on a large machine, as in the mainframe days, architectures were composed of individual machine instances – whether measured in the single digits or tens of thousands.

This has been the dominant computing paradigm for decades. While the primary obstacle to using the first wave of PaaS platforms like Force.com and Google App Engine was their proprietary nature, their break from this paradigm was a secondary obstacle. PaaS, like Cloud Native today, implies a fundamental rethinking of the nature of infrastructure. Where once applications would be deployed to a forest of individual instances, these platforms instead would have users push them to a single fabric – one large, virtual computer. Almost as if we’re back to the mainframe, if the mainframe was a federation of large numbers of individual instances integrated via systems software with scheduling capabilities.

The current Cloud Native crop of software isn’t the first time we’ve seen this “single virtual computer” concept made manifest, of course. There are many examples of this extant today, the most familiar of which may be Hadoop. There are no individual Hadoop servers; there are instead fleets of machines linked via distributed filesystems and schedulers to jointly execute large scale data operations.

In that respect, Cloud Native can be thought of as Hadoop for modern applications. Applications aren’t pushed to a server or servers, they’re deployed to a fabric which decides where and how they are run. Running an application on a single large computer requires that it be constructed in a fundamentally different way than if it were run on lots of traditional instances, which is why Cloud Native is deliberately and explicitly exclusionary.
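
That division of labor – the fabric, not the developer, choosing where code runs – is at bottom a scheduling problem. A toy sketch of the idea (node names, sizes and the free-memory heuristic are all invented for illustration; real schedulers such as Kubernetes’ or Mesos’ weigh far more than memory):

```python
# Toy illustration of fabric-style scheduling: the user submits an app
# to the cluster as a whole, and a scheduler -- not the user -- picks
# the node. Entirely hypothetical; real schedulers consider CPU,
# affinity, health and much more.

nodes = {"node-a": 8.0, "node-b": 16.0, "node-c": 4.0}  # free GB per node

def schedule(app, mem_needed, nodes):
    """Place app on the node with the most free memory that can fit it."""
    candidates = {n: free for n, free in nodes.items() if free >= mem_needed}
    if not candidates:
        raise RuntimeError(f"no node can fit {app}")
    chosen = max(candidates, key=candidates.get)
    nodes[chosen] -= mem_needed  # reserve the memory on the chosen node
    return chosen

# "Deploy to the fabric": three containers, no node ever named by the caller.
placements = {app: schedule(app, mem, nodes)
              for app, mem in [("web", 6.0), ("api", 6.0), ("db", 6.0)]}
print(placements)
```

The caller’s view is the whole cluster; which physical machine ends up running each container is an implementation detail of the fabric, which is exactly the mainframe-like abstraction described above.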

Many applications will never make that jump; it won’t be technically feasible or, more likely, economically practical, to become Cloud Native. But the trajectory at this point seems clear. Just as organizations today take their hardware cues from internet pioneers like Facebook, Google or Twitter, so too will they be following the internet pioneers’ lead in software infrastructure. If a company builds its datacenter like Facebook, why run it differently? Not in the binary, all-or-nothing sense, but rather that the idea of Cloud Native, virtual single-computer infrastructures or fabrics will become a mainstream deployment option in a way that they are not today. The question is: on what timeframe?

That question is incredibly important, because Cloud Native going mainstream would have profound implications for vendors, open source projects and hosted offerings alike. Cloud Native becoming a first class citizen, if not the default, would be a major opportunity for projects such as Cloud Foundry or Kubernetes, the vendors that support those projects and the companies that offer them as a service – the likes of Google, HP, IBM, Pivotal or Red Hat, in other words. Any transition that impacted standard notions of infrastructure, meanwhile, would require projects (e.g. OpenStack) or vendors (e.g. Amazon, VMware) focused on that paradigm to adapt, and potentially implement one of the Cloud Native projects themselves.

To date, in spite of the long term availability of fabric-like PaaS products – Force.com debuted in September of 2007, some thirteen months after EC2 – infrastructure models that resembled traditional physical architectures have dominated the market. Nor should we expect this crown to be surrendered in the near term, given Amazon’s comical growth rate. But an increasing number of datapoints suggest that the traditional infrastructure paradigm will be at least complemented by an alternative, whether it’s called Cloud Native, a fabric or a platform. The question is not whether Cloud Native will be a destination, then, but rather how one gets there.

Disclosure: Amazon, Ansible, Chef, CoreOS, HP, IBM, Pivotal, Red Hat, and VMware are customers, as are multiple OpenStack participating projects. Facebook, Google, Puppet and Salt are not currently RedMonk customers.

Categories: Cloud.

Nadella’s Tough Decision

In late June, CEO Satya Nadella emailed Microsoft’s staff with an announcement that was part updated mission statement and part warning shot. In it, Nadella articulated his vision for Microsoft, which is “to empower every person and every organization on the planet to achieve more.” A little less concrete than “a PC on every desk and in every home,” but it certainly doesn’t lack for ambition. Nadella also served notice, however, that “tough choices in areas where things are not working” were coming. The only question was what he meant.

On July 8th, he provided the answer. Microsoft cut 7,800 jobs from its phone business, or around 6% of its total work force, in what is generally understood to be a repudiation of Steve Ballmer’s acquisition of Nokia. Microsoft isn’t exiting the phone business entirely, but the company’s plans for the business have been dramatically scaled back to something more closely resembling Google’s Nexus hardware line.

While it’s difficult to argue the point that Microsoft’s phone business was, to borrow Nadella’s words, “not working,” there was nevertheless some surprise and discontent amongst observers of the company. The principal objection to this decommitment is perhaps best encapsulated by the Ars Technica piece “Analysis: Nadella threatens to consign Microsoft to a future of desktop obscurity.” These arguments can be summed up relatively simply: mobile is a vital and growing market, particularly when measured against its massive but stagnant desktop counterpart, and therefore Microsoft has no choice but to compete in this market regardless of its performance to date.

Given the stakes of mobile, this argument is understandable. Former Microsoft CEO Steve Ballmer, the man responsible for the Nokia deal, was hardly the first to take major risks in search of mobile rewards. Google, who once enjoyed a close relationship with Apple, earned the Cupertino manufacturer’s undying enmity when the search giant felt compelled to jump into the market itself with Android.

But there are a few problems with the argument that Microsoft should have continued its Charge of the Light Brigade with what remained of Nokia.

  • First, there’s the question of approach. Even the critics of Nadella’s move would likely concede that Microsoft’s mobile platforms – both hardware and software – are also-rans in their respective markets. This is in spite of years of investment and the multi-billion dollar acquisition of what was once one of the handset market’s preeminent manufacturers. If you’re going to argue, then, that Microsoft should not backtrack from the Nokia assets, it is necessary to provide a strategy for success in the market that Microsoft has not attempted yet. Otherwise, you’re essentially arguing that the company should throw good money after bad. If you want to argue that they should compete in a given market, that’s fine, but you have to be able at the same time to plausibly explain how they could compete in that market.
  • Second, there is the question of return. Let’s assume, counterfactually, that a unique and untried strategy was conceived and propelled Microsoft back to relevance. What would the return be? The market suggests that the financial return would be limited. As has been documented many times, in spite of its marketshare minority, the overwhelming majority of profits in the handset market are owned by Apple.

    It’s difficult to conceive of a scenario in which this would not be true of Microsoft as well. Microsoft has tried to duplicate Apple-like margins in other hardware lines such as the marginally more successful Surface, but markets without carrier intermediaries are more straightforward. To be relevant in this market, Microsoft would likely have to follow the same course as the Android manufacturers, which is to keep margins minimal.

    Some have argued that Microsoft’s return for a minimally profitable hardware business would come elsewhere. The question is where? They don’t need a mobile platform to sell Office; that’s already available – thanks in part to Nadella – on both of the most popular mobile platforms. They don’t need a hardware platform to sell OS licenses, because the company has already conceded to the market reality: the market value of a mobile OS is $0, thanks to Google. What about the message of “Universal Windows Apps” for the legions of Windows developers out there? This idea has some intrinsic problems, in that independent of technology, universal applications are very difficult to build because of intrinsic differences in form factors and input methods. But Microsoft also doesn’t need a flagship handset business to make this argument.

  • Third, there are other opportunities. Arguments that it’s “mobile or else” ignore the reality that the public cloud is going to be a large and growing market. And unlike mobile, where it was likely facing a Sisyphean task, Microsoft is as well positioned in the public cloud as anyone save Amazon to capitalize on that growth. Some will look at Apple’s comical, absurd profit lines and conclude that Nadella’s decision to abandon the path Ballmer tried to set the company on is like walking away from a potentially winning lottery ticket. But thus far, that lottery has only produced one Apple.

    It is also worth asking whether or not Microsoft was positioned to compete effectively in a consumer market. While there are obvious exceptions – the Xbox, for example – Microsoft is at its core more business oriented than consumer. Gates may have wanted the PC in every home, but for most consumers the operating system was an afterthought: it was never an object of desire in the way that an iPhone is. Microsoft did well to extend into the home from the business, and to get consumers using its business-focused Office software, but the company was never really about consumers.

    Azure, on the other hand, is explicitly and expressly a business play, in a market that offers massive opportunities for growth. One that Microsoft has the DNA to be far more successful – and profitable – in than handsets.

Even if, as reported, Nadella was not in favor of the Nokia acquisition originally, it was undoubtedly, as he put it, a tough decision for the company. There’s the human cost of telling almost eight thousand people that they need to seek employment elsewhere, and there’s the public relations cost of telling the market the company you lead had effectively made a $7 billion mistake. But as tough as it must have been, Nadella made the logical decision for the company. Now it’s up to Microsoft to capitalize on the focus he has afforded it.

Disclosure: Microsoft is not currently a RedMonk customer.

Categories: Mobile.

Meet the New Monk: Fintan Ryan

The problem with a good problem to have is that it’s still a problem. It is, by definition, better than whatever the alternative is. It is also, by definition, still a challenge. This is how I came to consider our recent hiring process. On the one hand, we had a legitimately overwhelming number of bright, talented and passionate candidates. On the other, well, we had a legitimately overwhelming number of bright, talented and passionate candidates. How does one sift through dozens of applicants who would all bring something different, something important, to the table?

In our case, the answer is: very deliberately. We went through multiple interview rounds. We reviewed submitted materials. We researched backgrounds. We tested. And internally we debated. And debated. And debated. We’d spend a half hour agonizing over whether one candidate would simply make it on to the next round. Just to help myself in the decision-making process, I put together a baseball-style scouting scoreboard for our finalists, ranking them on a variety of characteristics as a scout would, with a numerical ranking from 20-80.

We could have made our lives easier, of course, by narrowing the funnel. One of our candidates asked us about this, in fact.

We kept the funnel wide, knowing that it would cost us time, because we wanted to get it right. Hiring a BMC developer and a Mayo Clinic scientist worked for us in the past, after all, so we talked to evangelists, electrical engineers, professors, COOs, consultants, a marketer or two and developers, naturally. The notes from the first round alone stretched over 40 pages.

Eventually, however, our lengthy starting list was funneled down to a single name, and that name was Fintan Ryan.

Fintan may be familiar to some of you, whether it’s from the work he’s done with RedMonk in the past on a few conferences or some of the community work he’s done in London. In any event, those of you who follow what we do at RedMonk will have the chance to get to know him better.

As you’ve come to expect with new RedMonk analysts, Fintan brings an eclectic mix of skills to the table. He’s been a developer – holds a few US patents, in fact. He’s been the one tasked with managing developers as well, from waterfall to agile. He’s done yeoman’s work in community organizing, whether it’s conferences with us like IoT at Scale and Thingmonk or external events such as the CoreOS London meetups.

Analytically, his quantitative research chops are excellent; he did some very interesting research just prior to our opening, in fact, for no other reason than he was curious. And it’ll be nice to have someone else on board working in R. Beyond the technical skills, however, Fintan seems to have a knack for asking interesting questions, a trait that can be harder to find than the ability to answer them.

Most importantly, however, Fintan is passionate about what we do at RedMonk. From my perspective, almost every other requirement for this job is negotiable. Believing in what we do, however, isn’t optional.

Starting on August 5th, then, Fintan will be the next monk. With us, he’ll be covering the same broad spectrum of topics that we cover, and based on the quality of the research he did as a function of our interview process, you’re going to enjoy his work. In the meantime, please join me in welcoming Fintan to the RedMonk family, and if you’re so inclined, feel free to hunt him down on Twitter to say hello.

Categories: RedMonk Miscellaneous.