
How operations, design, and data affect software and business: Ops edition

I’ve been doing a lot of thinking recently about the interaction between software development and three key areas: operations, data, and design. After conversations this week with Dell’s James Urquhart (courtesy of the enStratius acquisition) and Rod Smith from IBM’s emerging-tech group, I decided that it’s worth writing up to hopefully drive some discussion around these ideas.

By starting at the center of the below diagram with developers and then blending them with any of operations, data, and design, a number of key trends emerge, not only from software development but, more nascently, from business as well. While RedMonk has traditionally been a firm that looks at the bleeding edge of software, this perspective shows that the same ideas apply to how organizations function as a whole.

[Figure: Venn diagram with developers at the center, overlapping with operations, data, and design]

And yes, I’m very much abusing the purpose of Venn diagrams because I’m going inside-out rather than outside-in, but it gets the idea across. Let’s start at the top and work our way around first the immediate intersections with development, then the more complex interactions.

Development + Operations = DevOps ➡ Agile, collaborative business

[Figure: the same Venn diagram, highlighting the overlap between developers and operations]

The idea of DevOps is most simply described to developers as the extension of agile software development to operations. It’s about bringing (1) software-development techniques like version control, testing, and collaborative development and (2) rapid iteration and minimum viable products (MVPs) to the world of systems administration and infrastructure.

On the side of software development, it’s about transitioning toward a philosophy of infrastructure as code — defining what your datacenter or cloud looks like in terms of tools such as Puppet and Chef. This makes it much easier to apply basic software practices such as storing system configurations in Git and running continuous integration on your infrastructure, not just the code that runs on it. By learning from the distributed, asynchronous collaboration found particularly in open-source communities and largely virtual companies (e.g. GitHub), teams can benefit greatly in terms of the quality of people they can hire as well as their ability to collaborate more effectively.
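To make the infrastructure-as-code idea concrete, here’s a minimal, hypothetical sketch in Python of the declarative, idempotent model that tools like Puppet and Chef implement: desired state is described as data (which can live in Git and be validated by CI), and a converge step acts only on resources that have drifted. The resource names and check logic here are invented for illustration, not any real Puppet or Chef API.

```python
# Minimal illustration of declarative, idempotent configuration management.
# Desired state is plain data; converge() only acts on resources that drift.
# All names here are hypothetical placeholders.

DESIRED_STATE = {
    "packages": {"nginx": "installed", "telnetd": "absent"},
    "services": {"nginx": "running"},
}

def current_state():
    """Stand-in for inspecting the real system (dpkg, systemctl, ...)."""
    return {
        "packages": {"nginx": "absent", "telnetd": "installed"},
        "services": {"nginx": "stopped"},
    }

def converge(desired, actual):
    """Return the actions needed to move the system to the desired state."""
    actions = []
    for kind, resources in desired.items():
        for name, want in resources.items():
            have = actual.get(kind, {}).get(name)
            if have != want:
                actions.append(f"{kind[:-1]} {name}: {have} -> {want}")
    return actions

if __name__ == "__main__":
    # Running a check like this in CI against every commit of the config repo
    # catches drift and bad changes before they reach production.
    for action in converge(DESIRED_STATE, current_state()):
        print(action)
```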

On the side of agility, as trends like the shift to cloud-based systems continue to accelerate, it’s increasingly critical to work in a more agile fashion that decreases cycle times and thus increases the frequency at which teams can iterate on an MVP. This rapid iteration also means that when things break, as they inevitably do, there’s already a process in place to not just fix them quickly but to get those fixes to users quickly without an epic series of manual hacks.

For DevOps, it’s about extending the cycles beyond just development teams all the way through operations, so you can iterate all the way from development to production using continuous delivery or continuous deployment. Companies like Etsy and Netflix deploy as much as 100 times per day, which blows the quarterly deployment model out of the water. When you  change anything like this by an order of magnitude, you have the potential to not just save time but to transform how people work. Some of that transformation is a prerequisite, while some of it is a consequence of making organizational and tooling changes.

Carrying this into broader business, it means shifting the entire company to an agile model that’s highly collaborative. The collaborative aspect is most easily accomplished not by gathering 1000 people around a real-life water cooler but rather by making the whole company social, along the lines of Salesforce Chatter, IBM Social Business, Yammer, and Jive.

Open-source communities have pioneered the kinds of technologies needed to collaborate effectively in our increasingly connected world. Look at IRC, archived mailing lists, wikis, and more. All of them were pioneered by open-source developers. What’s happening in the world of “social business” today is the same thing that happened 25 years ago in the open-source world (IRC was created in 1988). Those developers have lived in the future ever since, while the rest of the world is slowly picking up the importance of improved collaboration and distributed workforces.

Below is a video of GitHub’s Ryan Tomayko from Monktoberfest 2012 talking about how GitHub (the company, not the product) works incredibly effectively as a distributed organization:

Another video worth watching is from this year’s Monktoberfest where GitHub’s Adam Roben talks about the importance of face time in distributed companies:

While I won’t embed it, there’s another classic video on distributed development by Zach Urlocker (former COO of Zendesk, who previously ran product at MySQL) from Monki Gras 2012. Here’s the writeup.

This interaction between development and operations is a key trend in software, which has been foreshadowed by open-source development and will be followed by its enactment across the broader business.

Disclosure: Dell, IBM, and Salesforce.com are clients. Puppet Labs, Microsoft (Yammer), and GitHub have been clients. Jive, Opscode, Etsy, and Netflix are not clients.


Categories: community, devops, distributed-development, ibm, microsoft, open-source, redmonk-brew, salesforce, social.

Conway’s law but for software: Salesforce and SAP

Conway’s law aptly states:

Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

I’d like to propose a parallel “law,” if you will. I hypothesize that software companies (that’s increasingly all of them) are incented to innovate primarily around the central object, or data structure, in their software — and consequently constrained from innovation outside the central object.

For Salesforce.com (SFDC), its software is centered around people, primarily customers. This has influenced and will continue to influence corporate strategy, product design, and innovation. Witness the recent “customer company” tagline that SFDC is attaching to everything it does and that affects its higher-order thinking.

For SAP, on the other hand, everything is centered around purchases. Think ERP, purchasing, inventory management, and so on. Its central object is in fact the purchase rather than the person, as it is for SFDC.

In each case, innovation will be highly constrained outside of these core objects, particularly when it comes to second-level connections. In other words, for SAP, the purchase connects to the buyer connects to ?? And for SFDC, the converse holds true — it sees what’s directly connected to the customer, but what about the next step? This mindset will influence initial strategy, but it also pervades hiring, promotion, and performance reviews, continually locking companies further into their central objects.
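As a purely illustrative sketch (these are not actual Salesforce or SAP schemas), here’s how a central object shapes what’s easy to model: everything one hop from the center has an obvious home in the data structure, while second-level connections don’t.

```python
# Hypothetical, simplified schemas to illustrate "central objects."
from dataclasses import dataclass, field
from typing import List

# A customer-centric model (SFDC-style): everything hangs off the person.
@dataclass
class Customer:
    name: str
    contacts: List[str] = field(default_factory=list)
    opportunities: List[str] = field(default_factory=list)
    # Second-level questions ("what did this customer's suppliers buy?")
    # have no natural field here, so they tend not to get built.

# A purchase-centric model (SAP-style): everything hangs off the transaction.
@dataclass
class Purchase:
    order_id: str
    buyer: str
    items: List[str] = field(default_factory=list)
    # The buyer is just a key; who the buyer *is*, and who they connect to,
    # sits outside the central object and attracts far less innovation.

if __name__ == "__main__":
    c = Customer("Acme Corp", contacts=["jane@acme.example"])
    p = Purchase("PO-1001", buyer="Acme Corp", items=["SKU-42"])
    print(c, p, sep="\n")
```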

If you want to understand company mindsets and their future strategies, you could do a lot worse than combining Conway’s law and this idea of central objects.

Disclosure: Salesforce.com and SAP are clients.


Categories: corporate-strategy, salesforce, sap.

VMworld: the pundits versus the practitioners

Post-match pundits

After coming home from last year’s VMworld, I said this:

My overall impressions of VMworld were that VMware had one of the world’s largest stages to make some huge announcements, and it didn’t — instead, it showed incremental improvements. … Now it’s only a matter of waiting till next year to see whether VMware continues down that road of stability over major announcements.

The pattern has now become clearer: it’s about continuing a stable path for existing tech like vSphere while pushing the innovation toward new products like NSX (the evolution of Nicira’s software-defined networking product), vSAN, and vCloud Hybrid Service (vCHS). This pattern makes a lot of sense for the enterprise users that comprise the core of vSphere’s customer base, who care much more about stability than the latest bling.

But rather than rely too much on one analyst’s opinion, let’s take a look at some data. I performed an analysis of everyone tweeting about VMworld (tens of thousands of tweets, for the curious) to get a better sense of what VMware users cared about and what kinds of disconnects might exist between them and pundits — analysts, journalists, etc. Here, I’m going to focus on the keynote talks and look into how much practitioners and pundits cared about each of the topics and announcements during those sessions.

Day 1

The first morning started out slow, with acting CMO Robin Matlock coming up to talk about the 10th VMworld conference. She brought up a few of the 10-time attendees to talk about their highlights, which (predictably?) focused on parties and swag over talks. After that, new CEO Pat Gelsinger, announced at last year’s VMworld, came up on stage to sparse applause. He talked through a survey that the audience generally didn’t care too much about, and then got into the real content. To help evaluate the importance of that content, here’s a graph of the Twitter volume and sentiment throughout the keynote. I’m going to use this to target each of the peaks in volume or sentiment that I annotated at the top of the graph:

[Figure: Twitter volume and sentiment during the Day 1 keynote, with peaks annotated]

All #vmworld tweets were automatically filtered to remove the following roles for the practitioner group: analysts, journalists, marketing, PR, evangelism, product, and employees of VMware/EMC/Pivotal/VCE. For pundits, only analysts and journalists were included. I then manually examined the top 100 tweeters in each group to ensure they belonged there. Sentiment measurement shows the positive/negative ratio based on dictionaries from Blue Bird, with word groups of 250 for pundits and 500 for the less expressive practitioners.
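As a rough sketch of the kind of windowed sentiment calculation described above (the real dictionaries and the 250/500-word groups are only mimicked here), this Python snippet slides over tweet text in fixed-size word groups and reports a smoothed positive/negative ratio per group. The word lists are tiny stand-ins.

```python
# Sketch of windowed sentiment: positive/negative word ratio per word group.
# The dictionaries below are tiny placeholders for real sentiment word lists.

POSITIVE = {"great", "excited", "awesome", "love", "win"}
NEGATIVE = {"bad", "boring", "fail", "hate", "slow"}

def windowed_sentiment(words, group_size=250):
    """Yield (positive_count + 1) / (negative_count + 1) for each word group."""
    for start in range(0, len(words), group_size):
        group = words[start:start + group_size]
        pos = sum(w in POSITIVE for w in group)
        neg = sum(w in NEGATIVE for w in group)
        yield (pos + 1) / (neg + 1)  # +1 smoothing avoids division by zero

if __name__ == "__main__":
    tweets = [
        "vSphere 5.5 looks great, excited about 64TB disks",
        "keynote survey section was boring and slow",
    ]
    words = " ".join(tweets).lower().split()
    print(list(windowed_sentiment(words, group_size=5)))
```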

Gelsinger on stage: The CEO gets introduced and steps up on stage. Practitioners and pundits both announce this as the start of the “real” keynote.

Positive quotes: Gelsinger walked through a survey of VMware customers and cited a bunch of statistics in a favorable light, talking about customers as masters of the universe, ninjas, etc. Many people tweeted them using identical language.

Key pillars: He then described the underlying themes behind all of this year’s announcements, wrapped in the philosophies of “mobile-cloud era” and VMware’s now-old standby, the software-defined datacenter:

  • Expand virtual compute to all apps
  • Transform storage by aligning it with app demands
  • Virtualize the network for speed and efficiency
  • Automation: Management tools to define policies

Unsurprisingly, there’s a consistently high pundit volume throughout the positive quotes and the key pillars, while practitioners are much more subdued and want to get into the technical guts of the talk.

5.5 announced: The high point of the keynote for practitioners was the announcement of vSphere 5.5 (and vCloud Suite 5.5, in decidedly parenthetical terms), with the following key features:

  • 2x the number of cores and VMs (and 32x on disks: 64 TB max disk size, which drew audience excitement)
  • Application-aware high availability (HA): monitor not just apps but components of apps
  • Big Data extensions: scaling Hadoop clusters and sharing them

In this section, excitement centered around “64TB” disk images (apparently 62TB in actuality, but still roughly 30x bigger than today’s 2TB). The trend within the 5.5 features was generally mirrored by the pundits in relative terms, although it was definitely not the section of the keynote they were most excited about. Secondary was the app-aware HA, while the audience seemed to perceive Hadoop as basically a fad of the day.

Of additional note is the announcement of Cloud Foundry on vSphere, to a subdued response from pundits and practitioners alike. This is likely due to a combination of factors such as the conference audience (very focused on ops, not developers) and the early state of the PaaS market. This audience mismatch could also partially explain the response to the Hadoop news.

vSAN announced: The first real surprise of the day, in terms of products rather than features, was vSAN — VMware’s entrant into the software-defined storage market. It’s a welcome addition to compete with both the top (e.g. EMC, NetApp) and bottom ends of the market (open-source options like Ceph, Gluster, and Swift), with the easy competitive advantage of “The best one to use with vSphere” and an expected GA in the first half of 2014. But if the pricing isn’t also intermediate, it’s going to run into serious issues. From practitioners, this got nearly as much buzz as the vSphere 64TB announcement and was the most positive peak of the day in terms of sentiment. On the pundit side, it outperformed vSphere 5.5, indicating a possible bias toward new technology rather than what’s useful today without major effort.

NSX announced: Martin Casado, CTO/cofounder of Nicira (acquired last year), announced VMware NSX, framing it as ESX for networks. Importantly, he took time to introduce the SDN concept, which is familiar to cloud developers but perhaps less so to enterprise ops outside of the VLAN context. VMware brought up three customers to tell their stories running previous versions of the software: eBay, GE, and Citi. Among practitioners, there was just as much excitement about this as vSAN. Pundits were quieter, perhaps because they’d been expecting this ever since the Nicira acquisition, but simultaneously the most excited about it. I see this as another example of how practitioners are more excited about “now” and pundits are more excited about potential for the future.

Virt > phys ports: Martin Casado showed an interesting graph of the growth of physical and virtual ports over time, although I’m unsure of its data source. It shows a crossover point in 2012 where the number of virtual ports exceeded the number of physical ports. Gelsinger had just cited 2010 as a similar crossover point for virtual versus bare-metal machine instances, underscoring the importance of this graph, which is surprisingly accurate considering it goes all the way out to 2015.

GA, no Cisco: The GA date for NSX was announced as 4Q2013, and numerous people among both practitioners and pundits noted that Cisco was conspicuously absent from the partners slide.

vCHS announced: The GA of their vCloud Hybrid Service (vCHS) was announced by Bill Fathers, SVP & GM, Hybrid Cloud Services. It will, as the name suggests, be a VMware-hosted cloud designed to integrate with private clouds. Services of note include Cloud Foundry, Disaster Recovery as a Service, and Desktop as a Service. It’s launching with 2 datacenters and will add 2 more this fall, plus another 2 via a Savvis partnership — all US-based. This got a spurt of activity out of practitioners, but far less than the on-prem products.

DRaaS with vCHS: Showing the importance of traditional on-ramps to cloud like backup and disaster recovery (DR), the announcement of DR as a service (DRaaS) on vCHS got just as much buzz as the announcement of vCHS itself. It will use vCHS as a new backend to VMware’s existing Site Recovery Manager product. Practitioners and pundits both got excited about this news, with clear bumps in sentiment and activity visible for both groups.

Day 2

The keynote opened with President and COO Carl Eschenbach coming on stage to talk about the conference itself and rehash the previous day’s news, followed by a deeper dive into the products announced on day 1, in the form of a skit between Eschenbach and principal engineer Kit Colbert. Again, we’ll do a visualization and walk-through of the highlights for our practitioners and pundits:

[Figure: Twitter volume and sentiment during the Day 2 keynote, with peaks annotated]

Analysis here is the same as in the day 1 graph.

22,500 attendees; Carl Eschenbach: Some opening stats. Everyone likes to hear how many people there are who, like themselves, came to the show.

Keynote not streamed: After 10 minutes, it seems like most of the online viewers of Day 1 figured out that Day 2 would not be streamed. This was a particularly strange choice by VMware because attendees have historically enjoyed day 2 more than day 1.

vCAC: This was the start of the skit/demo between Eschenbach and Colbert, which walked through vCAC, App Director, NSX, and vSAN.

70% traffic is betw. VMs: Colbert said that up to 70% of traffic in the datacenter is just between VMs. This got a big sentiment and traffic spike from pundits but relatively little from practitioners. Chances are it’s not news to them since they see it every day in their monitoring.

NSX + vMotion: They showed that vMotion, which moves VMs from one physical host to another, will integrate with virtual networks to migrate the networking as well. Practitioners were extremely interested, but pundits didn’t show much of a response at all. 

vSAN: While pundits showed little enthusiasm, the demo of vSAN got practitioners very excited. They were particularly intrigued by the apparent ease of use.

vSAN beta: Again, practitioners were active and excited about the vSAN beta, while pundits were largely absent. However, there were questions about enterprise readiness regarding scalability and data safety, particularly in the event of failures.

vCAC DaaS (EUC): This was a demo of provisioning an end-user desktop with the vCloud Automation Center, dubbed Desktop as a Service (DaaS). It was the only real focus on end-user computing (EUC) during any of the keynotes.

Joe Baguley: VMware’s CTO for EMEA stepped up on stage.

Policy-based automation: First came the demonstration of auto scale-out via new vCAC + vCOps (vCenter Operations Manager) integration. Then Baguley made the case that scripting is fragile, so we should rely on policy instead. Key quote: “Scripts are just a posh way of saying manual.”

Log Insight giveaway:  VMware announced that every VMworld attendee would get 5 free licenses to its new Log Insight product, which is directly competing with (most obviously) Splunk as well as newer entrants like Sumo Logic and IBM. Unsurprisingly this produced a huge spike in both traffic and sentiment, with more interest from practitioners than pundits since they’ll be the ones actually using the software.

Practitioners versus pundits

This emphasizes a key difference between practitioners and pundits: what’s practical and tangible today versus where the new innovations are. Practitioners cared most about the technical details of updates, as well as things directly bordering on areas they work in and problems they have (vSAN). Today’s VMware customers are conservative — they don’t care a whole lot about SDN or the cloud, whether it’s in the form of vCHS or OpenStack. A mention of OpenStack as a supported vCAC provisioning target on day 1, just after the NSX GA announcement, maintained interest but produced no spike in activity or sentiment among practitioners, although it did show an upward trend among pundits.

Rational does not make right

With the divestment of the somewhat tangential Pivotal assets (SpringSource and Cloud Foundry, among others), VMware has enhanced its focus on its original core of virtualization and is now extending that to storage and networks. Its direction seems to be much more rational and logical at this point, which is encouraging because not many companies can manage products across such a broad array of areas successfully. SocialCast seems to be a weird remnant that I’d expected to go to Pivotal, and I’m waiting to see what comes of that now that Sanjay Poonen has moved over from a high-level position at SAP to run VMware’s end-user computing efforts. VMware’s profit for the past two years combined was below $1.5 billion (and that’s prior to the Pivotal handoff). That number is not much greater than the $1.2 billion purchase price of Nicira, indicating the seriousness of VMware’s future commitment to SDN.

However, a rational product lineup doesn’t mean it’s the right product lineup for today’s market. Everyone’s banging on about how vCHS is terrible, in large part because it’s not Amazon. But there’s always going to be a place in every market for a full-service offering. [See our 2007 writeup predicting a VMware cloud.] However, Amazon’s clearly working on improving its enterprise offerings, as evidenced by analyst sessions at last fall’s Re:invent conference, plenty of webinars (you should see my inbox), and its hire of sales chief Mike Clayville away from VMware.

Disruption tends to come from below, and VMware’s biggest concern (not to mention IBM’s) should not be Amazon in its present form as DIY cloud, but rather Amazon succeeding in bringing its low-margin expertise to higher-touch enterprise audiences. And between clones of the AWS API in any number of IaaS stacks, and Amazon’s own moves in the private cloud sector like winning battles against IBM to build a cloud for the CIA, certainly the hosted aspect of VMware’s hybrid cloud is facing serious, even existential, challenges.

And that’s before we even get into the challenges hybrid cloud faces with data gravity [here's one of our writeups]. Shifting data back and forth between public and private instances depends on pipes of limited bandwidth. And if you get into synchronizing distributed systems, you’re faced with yet another set of challenges for which VMware has little answer today — its in-memory options of SQLFire/GemFire/Redis are not always the right tool.

That said, the strategy of providing identical containers on-prem and in the public cloud is clearly a good one, as evidenced by the recent emergence of tools and companies such as Docker and Ravello. Unfortunately, VMware’s story today appears to be about migration rather than about dev/test and DevOps workflows. This is strange, because migrating heavyweight VMs back and forth between private and public environments is hard-hit by data gravity. I would argue this reflects a failure on VMware’s part to appeal to its new developer audience, which is unsurprising given its long history of working with and understanding sysadmins rather than developers. Not only that, but it’s recently handed off (to Pivotal) or lost pretty much every group with a deep understanding of what developers want, Salvatore of Redis fame notwithstanding.

Don’t alienate your audience

VMware’s announcement of NSX garnered a great deal of attention and excitement from VMworld attendees, but not all of it was positive. In much the same way as companies making a DevOps transition should avoid making enemies of the ops teams or DBAs and instead recruit them to new roles, a shift toward SDN should avoid alienating network admins and instead bring them into the fold. VMware today appears to be failing at this.

Alienating those who could be your most ardent supporters seems like a foolish move, but maybe that’s just me.

Disclosure: VMware, Pivotal, Cisco, Splunk, IBM, Red Hat (which acquired Gluster), and Ravello Systems are clients, as are a number of cloud vendors including Amazon, Cloudscaling, Citrix, Joyent, and Rackspace. EMC (primary owner of both VMware and Pivotal), Inktank (which sells Ceph), SwiftStack, Sumo Logic, Savvis, and dotCloud are not.


Categories: big-data, cloud, devops, virtualization.

Entrepreneurial technologists as another creative class

I recently tripped across another great post by Rick Turoczy (author of Silicon Florist) that really resonated with me, and I couldn’t resist quoting it here:

His hypothesis was that startups and the entrepreneurial technologist who founded them were another creative class; a group of people who chose to express themselves with apps and code instead of pictures and words.

And the company for whom he worked—a Portland-based communications firm called Wieden+Kennedy—wanted to help. They wanted to foster that next generation of creatives in town. Because they were changing the way stories were told. Because W+K wanted to be part of that.

Developers as creatives — that’s an important thought. One that companies like Adobe should definitely be considering, as they think about how code-based products fit into a creative lineup including things like Photoshop and InDesign.

Disclosure: Adobe is a client.


Categories: creatives, design.

What do Stack Overflow developers care about and use?

For the past three years, Stack Overflow has run a survey of its userbase to see what sorts of things they care about in jobs and technologies they use. To my surprise, I couldn’t find anyone who really dug into this data other than Stack Overflow’s own writeup, which wasn’t particularly detailed. Their writeup does do a reasonably good job of breaking down the distributions within individual questions, but it entirely ignores the question of how everything is tied together.

Fortunately, the data from the 2012 survey has already been parsed into Statwing and is available as their demo, which made it convenient to explore. I dug through all of it to pull out all of the statistically significant and meaningful correlations between answers. I then imported that data into the graph visualization tool Gephi to create this (click to enlarge):

Graph showing all of the strong and normal effects between Stack Overflow survey questions, in addition to weaker effects among technologies. (Effect sizes as per Cramér’s V). Data is here.

Labels are sized based on how many other features are connected to that particular one (a.k.a. degree), while the line thicknesses are based on the size of the effect between pairs of features. There’s a whole lot to pull out of this graph, so let’s run through it cluster by cluster.
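For reference, here’s a short Python sketch of how Cramér’s V, the effect size behind the edge weights above, can be computed from a contingency table of two survey answers. The counts in the example are invented.

```python
# Cramér's V for two categorical variables, from their contingency table.
import numpy as np

def cramers_v(table):
    """table: 2-D array of observed counts (rows = categories of A, cols = B)."""
    obs = np.asarray(table, dtype=float)
    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row @ col / n                # expected counts under independence
    chi2 = ((obs - expected) ** 2 / expected).sum()
    k = min(obs.shape) - 1                  # degrees-of-freedom correction
    return np.sqrt(chi2 / (n * k))

if __name__ == "__main__":
    # Hypothetical counts: rows = uses jQuery (no/yes), cols = uses Python (no/yes)
    print(round(cramers_v([[300, 120], [90, 190]]), 3))
```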

Top-left cluster: How developers spend their days. This breaks down the correlations between companies, time at work, and job experience. One thing we can see here is that as team size grows, the time spent in meetings to coordinate a larger team grows with it. Another is an interesting correlation suggesting that most developers aren’t dedicated purely to features vs bugs vs refactoring. Rather, the time spent on each is strongly correlated with the others, so more time on features also means more time fixing bugs in those features, and so on.

Middle cluster: Work-life balance. This is the population of 9-to-5 developers. They want to work 40 hours, avoid nights or weekends, live close to work, and live somewhere nice. They make up 45–50% of survey respondents (who called these factors very important or non-negotiable).

Top-right cluster: What good companies look like. Interestingly, this cluster has a very high interconnectedness, showing how important many of these factors are, not just a few. It’s clear from looking at this cluster for even a moment that good jobs are much more about engagement than salary. They’re about working with top-notch people in teams with opportunities to grow and work on things that matter.

Bottom cluster: What technologies get used by the same developers. Some of the strongest correlations here were actually between OS and languages — PHP developers are very biased toward OS X, while Java developers are very biased toward Windows. Weirdly, C# did not show a strong OS bias. There was also a very strong overlap between C++ and C#, presumably another indication of the Windows-based development ecosystem. Some of the particularly interesting and strong effects in actual use (not just excitement) were jQuery and Python (0.385 on a 0–1 scale) and jQuery and C (0.334). What I found most interesting about the technologies is that one of the strongest effects links two such wildly disparate technologies.

There’s a lot more data to be pulled out of these surveys; this is just a taste. Head over to Statwing and try it out; if you learn anything, post it in the comments.

Disclosure: Statwing and Stack Exchange are not clients.


Categories: adoption, employment, operating-systems, programming-languages.

Are we getting better at designing programming languages?

In the aftermath of my earlier work on the expressiveness of programming languages, I started wondering whether our ability to design and choose optimal languages might have improved since the ’50s when Fortran and Lisp were invented. To answer this question, I returned to my original data on the expressiveness of the top two language tiers by our ranking. By mapping the medians against the years each language was invented, I created the below plot (click to enlarge):

Shown is the median weighted expressiveness of each language (as calculated in the previous post) plotted against the year each language was invented or first published.

I’ve broken this plot into four clusters. The first two, in gray, indicate overall low vs high expressiveness. The third, red cluster shows all new, popular languages from the past decade, while the empty cluster (“The Gap”) shows the complete lack of less expressive new, popular languages. Each language is colored in red (tier one) or black (tier two) to make it easier to assess where the most highly used languages fall.
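Here’s a minimal matplotlib sketch of how such a plot could be assembled. The per-language numbers below are invented placeholders, not the real medians, and they assume the convention implied by the “downhill slope” reading later on: lower values mean greater expressiveness.

```python
# Sketch: median expressiveness vs. year of first release, colored by tier.
# Assumption: lower value = more expressive, per the post's downhill-slope reading.
import matplotlib.pyplot as plt

# Invented sample points: (language, year introduced, median expressiveness, tier)
languages = [
    ("C", 1972, 150, 1), ("Java", 1995, 140, 1),
    ("Python", 1991, 55, 1), ("Ruby", 1995, 50, 1),
    ("Go", 2009, 60, 2), ("CoffeeScript", 2009, 45, 2),
]

for name, year, expressiveness, tier in languages:
    color = "red" if tier == 1 else "black"   # red = tier one, black = tier two
    plt.scatter(year, expressiveness, color=color)
    plt.annotate(name, (year, expressiveness), textcoords="offset points",
                 xytext=(4, 4), fontsize=8)

plt.xlabel("Year language was introduced")
plt.ylabel("Median weighted expressiveness")
plt.title("Expressiveness of popular languages over time")
plt.savefig("expressiveness_vs_year.png", dpi=150)
```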

Caveats? The meaning of expressiveness as measured by this metric is deep and complex. As mentioned in the initial post on the topic, JavaScript appears to be an artifact due to its unusual development norms. And if you look closely, perhaps a language here and there doesn’t fall quite where you’d expect. But overall, things largely make sense, and this is borne out by correlations between my measurement of expressiveness and developer surveys.

What can we get out of this?

Broadly, we’ve gotten better at designing or choosing expressive languages. In both the low- and high-expressiveness clusters, a slight to moderate downhill slope is clearly visible. Because I haven’t shown all new languages but only popular ones, it’s difficult to ascertain whether our design skills have improved or whether it’s a larger crowd effect of natural selection.

Tier-one languages are in both the high- and low-expressiveness clusters. Interestingly, however, most of the ones in the high-expressiveness cluster got much slower uptake than those in the low-expressiveness cluster. For example, compare Java and C# vs Ruby and Python. While you could argue about their relative popularity today, I don’t think you can make a viable argument that Ruby and Python got faster uptake from a large population after their initial releases.

An exceedingly small number of languages have strong staying power. Of today’s tier-one languages, only C and shell script are more than 30 years old, while the remainder are all 20–30 or 10–20 years old. The next-closest example, Objective-C, should nearly be disqualified because it’s only tier one due to the recent (on this timescale) popularity of iOS.

In the past 10 years, no very popular, lower-expressiveness languages have shown up. None of the tier-one languages are less than a decade old. I think this says something about the time it takes to gain significant adoption, even in the predictive communities we rely upon for our popularity analysis (GitHub and Stack Overflow, in this case). Of the ones in the red cluster, it wouldn’t surprise me to see Go, CoffeeScript, or even Clojure show up on the tier-one list in time, based on the changes in our rankings.

All in all, it’s very suggestive that something important has changed in the past decade regarding how developers design, evaluate, and select programming languages. It could be reflective of the shift toward developer empowerment rather than management handing down decrees from on high regarding which technical choices get made.


Categories: adoption, data-science, programming-languages.

OSCON meetups: FLOSS lunch, RedMonk beers

It’s been a few years, but I used to have an OSCON tradition of getting a bunch of interesting people together for lunch, from a variety of free-software and open-source communities.

This year I’m suggesting we revive the FLOSS lunch gathering, on Wednesday of OSCON week. Let me know in the comments or via email if you’re interested. We’ve got around 35 people interested; consider it at capacity.

I’ll also be hosting a RedMonk beering, beginning around 9:30pm Wednesday at Bailey’s Taproom (213 SW Broadway, which is downtown). The place is open till midnight and we’ll likely be there till then.

If you’re thinking you need to choose between them, you’ve got another think coming. Go to both.

Update (2013/07/19): Added beering details.

Update (2013/07/22): Lunch is at capacity.


Categories: open-source, social.

RedMonk’s analytical foundations, part 2: 2006–2007

a.k.a. the advent of Coté

In celebration of RedMonk’s 10th birthday year, this is the next post in the continuing series on RedMonk’s foundational works, which continue to define our philosophy and approach today. The big event of 2006 was the addition of Michael Coté to the RedMonk team.

2006

2007


Categories: adoption, analyst, cloud, data-science, devops, open-source, packaging, social.

6 reasons you can’t ignore the new SAP, and 1 huge caveat

I was at SAP’s Sapphire conference earlier this summer, and it was a great glimpse into the early strides one of the most boring software companies in the world (in a good way, just like boring airplane rides are the best kind) has made toward modernizing its approach to software development, design, and its customers. Here, I’m going to describe why I think SAP is worth a fresh look — it’s not just for old suits anymore — through a series of examples of a diverse set of areas where it’s innovating.

1. The PaaS: HANA Cloud Platform

Although I’m convinced PaaS is the future of development (see the last slide), the lack of traction vs IaaS as far back as Google App Engine makes it continually unclear exactly when that future will arrive for most of us. It’s certainly not next year, but will it be 5 years, 10 years?

At this point, I would argue that any vendor pushing a PaaS is doing one of a few things:

  • Thinking about the long term rather than the next few quarters;
  • Thinking it’s large or influential enough to create its own market; or
  • Simply shipping software it uses for internal productivity.

Thus far, PaaS offerings packaged as a service rather than shipped software have carried the day in terms of use (revenue is more opaque, and questionable, for PaaS as a whole). A look at the traction various PaaS providers have in predictive communities like Hacker News supports this:


PaaS traction in Hacker News, adjusted for changes in posting frequency over time.

SAP is offering its PaaS as a service with a free developer license, so it’s clearly doing what it can to start generating traction as soon as possible. Not to mention that it put Aiaz Kazi on the case, one of the sharpest people I’ve met at the company. Given its long-term expertise in nailing vertical markets, SAP has a rare opportunity to create stories around things like a PaaS for finance, a PaaS for retail, etc.

2. Consumerization of IT: Fiori apps

James wrote last fall about SAP’s focus on design including new Mobility Design Centers. At Sapphire, SAP took the next step and started shipping its own, well-designed mobile apps to help people accomplish the most highly used tasks in its core software. While SAP clearly isn’t the first enterprise software company to start shipping beautiful, easy-to-use software, it’s also far from the last.

3. The Startup Focus program

Last year, SAP created a program to help incubate startups using its technology, namely the HANA database. At just a year old, it’s grown like gangbusters: from 10 startups showcased at last year’s Sapphire Orlando in May, to 150 by November, to 430 as of this summer’s Sapphire Orlando. Oddly, they don’t actually track the numbers any more closely than that (it would’ve been great to look at a graph with more data points), but c’est la vie. Regardless, the roughly 200% growth from 150 to 430 startups in the span of half a year is awfully impressive.

4. The Internet of Things


This little device sticks on the inside of a Pirelli tire, then the yellow piece is removed. The idea is to measure all sorts of things to improve the performance of truck fleets. Credits: Donnie Berkholz

One of the things SAP was pushing at Sapphire was its initiatives on the Internet of Things, in particular around HANA (its realtime, in-memory database) as a core piece of a realtime M2M toolchain. I heard from Pirelli Tyre, an Italian tire company with a large footprint in trucking fleets. They’ve put together a hardware-plus-software prototype based on HANA to send back all kinds of data from a fleet, such as air pressure and GPS location, to aid in improving efficiency and regulatory compliance. Dealing with M2M data is going to become increasingly important in the age of data, and this is just another example that at least parts of SAP see where the future is going.

5. Open-source citizenship

Did you know SAP is one of the top code contributors to Eclipse? I didn’t think so. The HANA Cloud Platform is built on open-source software as well. SAP’s Matthias Steiner wrote about SAP’s recent internal open-source summit, which is aimed at helping the company shift toward a more OSS-friendly mentality. They’ve even got a GitHub site now.

6. Internal DevOps

I was shocked recently when I came across a slide deck that made the rounds on Twitter. It’s from a presentation by SAP’s Darren Hague at JFokus, and it’s titled “Continuous Delivery: From dinosaur to spaceship in 2 years.” If SAP can help its customers through that same journey for the code it ships, it would be a huge deal, because today SAP implementations and upgrades are so complex that they typically require teams of consultants (at significant expense).

Really, is everything this warm and fuzzy?

Of course not. SAP is a huge company with more than 60,000 employees, and that brings with it some serious inertia, politics, and bureaucracy. Its historic revenue stream is based on holding businesses together, and transforming that part of the company will not happen quickly, if at all. That’s why nearly everything I’m talking about here is greenfield for SAP, where it’s much easier to start from scratch with the right approaches. But if SAP wants this transformation to stick, its legacy divisions need to take note of, and invest in, all of the changes it’s pioneering in greenfield projects: develop internally using open-source methodologies, build on its own PaaS and/or account for “infrastructure as code”-style approaches, and apply the attention to design shown in the Fiori apps across the entire portfolio.

In contrast to the negative point about inertia, SAP’s size and existing, basically locked-in revenue stream give it some significant advantages as well. For example, it has the time and financial safety net to wait for the PaaS market to ripen while it contributes its own efforts to the problem.

However, if SAP encounters issues with its ERP revenue, it could decide to decelerate the transformation I’m seeing. In reality, it should do exactly the opposite — if legacy software businesses are going downhill, it’s time to invest even more deeply in their replacements, where the new opportunities exist.

The sheer number and variety of interesting things happening make it obvious that SAP’s a company to watch. It’s clear that SAP is looking toward its future opportunities rather than its existing stable businesses, and that’s where its biggest challenges and rewards may lie.

The broader context

Some enterprise companies are changing with the times, and some aren’t. If I look at companies like IBM and Microsoft, I see a lot of change in response to what’s happening in the IT industry: becoming more interoperable, friendlier to open source, and more thoughtful about the effects of BYOD, the consumerization of IT, migration toward the cloud, and so on. On the other hand, if you look at Oracle, there’s not much of that happening, and things seem pretty well stuck in legacy mode — the recent “partnership” with Salesforce seems like just more evidence of business as usual. And it’s a business that’s going to continue shrinking, based on its revenue models, its reliance on purely top-down approaches, and its failure to adjust to the ways that practitioners want to consume technology. As for the final company on the PricewaterhouseCoopers top 5 by software revenue, EMC, it seems that while the parent company is relying on similar methods as Oracle, significant parts of Pivotal have seen the light, while VMware is somewhere in the middle, searching for the answers.

Disclosure: SAP is a client and covered T&E for Sapphire. IBM, Microsoft, Salesforce.com, and VMware are also clients, while Oracle, EMC, Pivotal, and Pirelli Tyre are not.


Categories: apps, cloud, devops, internet-of-things, mobile, nosql, open-source, packaging, sap.

Interest withering in Java application servers

Prompted by recent encounters with WebSphere’s Liberty Profile, which is targeted at developers, and the realization that we haven’t talked seriously about Java application servers for years, I decided the time was right for a fresh look at them. Going into it, the outlook from my fellow Monks was grim. For example, RedMonk alumnus Coté wrote in 2010:

The idea of an “application server” is being eliminated, component by component. In eliminating that idea there’s a huge vacuum when it comes to the runtime developers use to house their projects.

James even pointed out as far back as 2008 that advances including Ruby on Rails and XML’s ability to aid interop would drag things away from the J2EE world. Let’s examine the data to see whether those predictions held true.

Broadly fading traction for Java app servers

Data from Google Trends search on the app server name plus “java,” normalized by hits for “java” alone.

A look at Google search data, reflecting a global and general technical audience, shows an interesting trend: across the board, overall interest in app servers is shrinking. You can see clear peaks around 2005 for Tomcat, 2007 for JBoss, and 2010 for GlassFish. This is countered in part by an increase in alternatives like Spring, as well as the migration of many use cases to dynamic languages.
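As a sketch of the normalization described in the caption above, the snippet below divides Google Trends interest in each app server term by interest in “java” alone, so the overall decline of Java doesn’t mask or exaggerate per-server trends. It assumes CSVs exported from Google Trends; the filenames and column names are hypothetical.

```python
# Normalize Google Trends interest for each app server by interest in "java".
# CSVs are assumed to be Google Trends exports with a "Month" column and one
# column per search term; the filenames here are hypothetical.
import pandas as pd

servers = ["tomcat java", "jboss java", "glassfish java", "jetty java"]

trends = pd.read_csv("app_server_trends.csv", parse_dates=["Month"],
                     index_col="Month")
baseline = pd.read_csv("java_trends.csv", parse_dates=["Month"],
                       index_col="Month")["java"]

normalized = trends[servers].div(baseline, axis=0)

# A 6-month rolling mean smooths out noise before plotting or eyeballing trends.
print(normalized.rolling(6).mean().tail())
```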

Given the number of acquisitions in this industry over the past decade, it’s worth taking a look at whether they’ve had any impact. Oracle bought BEA (2008), thereby prolonging the life of WebLogic. I’ve noted the “Oracle bump” in the graph above, where WebLogic went from a steady downhill slope to a level one for a couple of years post-acquisition. In contrast, after Oracle bought Sun (2010), it essentially put the open-source GlassFish into stasis, resulting in a downward spiral. As a third example, Red Hat bought JBoss before either of those (2006), and it too transitioned from growth to decline. There may be other factors influencing these changes, but the scattering of acquisitions over time suggests the effect is specific to each acquisition rather than to broader industry trends.

Only two popular app servers showed increases in the last 5 years: GlassFish and Jetty. By these metrics, GlassFish is well past its prime, post-Oracle. On the other hand, Jetty appears to have very quickly found a solid niche and stuck with it, given its nearly static levels since mid-2006 on this graph.

Early-adopter communities agree

If we look at a community of earlier adopters like Stack Overflow, we see general agreement with the above trends with subtle differences.

Data from Stack Overflow tags for each app server, normalized by the “java” tag count. Obtained from this query.

For example, here are some of the differences:

  • Spring passed Tomcat in traction roughly a year earlier than it did for a general audience;
  • JBoss peaked in late 2009 rather than late 2007;
  • GlassFish held steady since its peak rather than declining;
  • Jetty and GlassFish both have stronger showings relative to WebLogic and WebSphere.

Many of these could reflect a stronger bias in the Stack Overflow userbase toward open-source technologies with a low barrier to entry. The GlassFish differential could indicate that Oracle’s preference for WebLogic over GlassFish is showing up in the broader audience that Google data reflects, while a more strongly open-source-leaning audience like Stack Overflow may continue to use GlassFish.

If we look at another audience, Hacker News, the results are largely similar with a couple of exceptions worth discussing.


Data from Hacker News via HN Trends, normalized by Hacker News growth and mentions of “java.” Lines are shown with Bezier smoothing to remove higher levels of noise.

The Spring framework has an overwhelmingly dominant lead over all app servers for this audience. On the left-hand graph, you can see that all the app servers are nearly overlapping down at the bottom of the plot, while Spring is trending even further upward beyond its already high share.

On the right, I’ve removed Spring from the calculations so we can more clearly see what’s happening within the app servers. The trends are largely consistent over time and in relative ranking, although Jetty shows a much higher share on Hacker News than in the other two data sets. To clear this up, perhaps we can correlate these conversational data with different types of information.

How about some actual usage data?

While results based on conversations and searches are all well and good, the question remains whether this correlates with real-world usage. Fortunately we partnered with New Relic and Steve wrote up a post on the state of the stacks at this time last year. I’ve reproduced one of his charts here, and as it turns out, the ranks largely hold true between conversation and usage, with a couple of exceptions:

[Figure: New Relic data on Java app server usage]

A significant exception is Jetty’s emergence with much greater usage than in Google Trends or Stack Overflow, although it agrees fairly well with Hacker News. This may reflect a bias in the Jetty userbase toward more open-source– and startup-oriented audiences, since Hacker News > Stack Overflow > Google Trends. Given my interactions with the DevOps community, this larger share for Jetty seems reasonable among that population, which likely overlaps with the New Relic userbase. The reality of this breakdown is further supported by data from ZeroTurnaround’s 2012 developer productivity report, which shows Tomcat in the lead, followed by a nearly tied JBoss and Jetty.

Another exception, this time among commercial options, is JRun. While not shown earlier, it trails Resin in Google Trends and is almost entirely absent from Stack Overflow. Adobe no longer sells JRun as of this spring, so it can largely be ignored insofar as predictive power goes. Steve talked about some potential reasons for these differences in his post so I won’t discuss them in any greater depth here.

Also worth noting is the size of the differentials: the real-world usage leads that the top app servers hold over their competitors are amplified well beyond the size of the conversations and searches occurring about them. I would argue that this illustrates a benefit of the transparent development we see in open-source communities. If the barrier to entry is low — if it’s easy to install and use, and the documentation is friendly and accessible — then conversational and search data may be relatively depressed because people have no reason to go that far.

Conclusions

Years ago, James and Coté were on to something with their ideas about the future of the app server, and now we have the data to back up their analysis and qualitative research. As James wrote back in 2005, decomposing monolithic software into components has many benefits. Developers today prefer a more composable, flexible system, and I expect anything resembling a successful app server in the future will in reality be a packaging exercise of many components that have already gained developer traction, with mix-and-match pieces to suit more individualized development.

Disclosure: IBM, Red Hat, Adobe, and New Relic are clients. Oracle, Pivotal, Caucho, and ZeroTurnaround are not.


Categories: ibm, java.