
Red Hat’s CentOS “acquisition” good for both sides, but ‘ware the Jabberwock

Red Hat and CentOS announced earlier this week (in the respective links) they are “joining forces” — whatever that means. Let’s explore the announcements and implications to get a better understanding of what’s happening, why, and what it means for the future of RHEL, Fedora, and CentOS.

LWN made some excellent points in its writeup (emphasis and links mine):

The ownership of the CentOS trademarks, along with the requirement that the board have a majority of Red Hat employees makes it clear that, for all the talk of partnership and joining forces, this is really an acquisition by Red Hat. The CentOS project will live on, but as a subsidiary of Red Hat—much as Fedora is today. Some will disagree, but most would agree that Red Hat’s stewardship of Fedora has been quite good over the years; one expects its treatment of CentOS will be similar. Like with Fedora, though, some (perhaps large) part of the development of the distribution will be directed by Red Hat, possibly in directions others in the CentOS community are not particularly interested in.

Plenty of benefits to go around

Whether it’s the rather resource-strapped CentOS gaining access to more people and infrastructure (not to mention help with those pesky legal threats), or Red Hat bringing home a community that has strayed since it split Red Hat Linux into RHEL and Fedora in 2002–3, the benefits are clear to both sides.

I’m not convinced it had to go nearly as far as it did to realize those benefits, though — formalizing a partnership would have sufficed. However, giving three of the existing lead developers the opportunity to dedicate full-time effort to CentOS will be a huge win, as will the other resources Red Hat is providing around infrastructure, legal support, etc. But the handover of the trademark and the governance structure are a bit unusual for the benefits as explained, although entirely unsurprising for an acquisition and company ownership of an open-source project.

What about Fedora?

It’s worth reading what Robyn Bergeron, the Fedora Project Leader, said on the topic.

Red Hat still needs a breeding ground for innovation of the Linux OS, so I don’t see anything significant changing here. What I would hope to see over time is a stronger integration of developers between Fedora and CentOS such that it’s easy to maintain packages in both places if you desire.

Perhaps the largest concern for Fedora is a longer-term decline in Red Hat employees contributing to it on paid time. As the company pivots more toward cloud infrastructure (see its recent appointment of Tim Yeaton and Craig Muzilla to lead groups that own cloud software at Red Hat) with a clear hope of increasing its cloud revenue share, Red Hat’s need to differentiate at the OS level may shrink, and with it the need to contribute as many resources to Fedora. However, Robyn duly points out that Fedora’s role as upstream for RHEL isn’t going anywhere, so neither is the project.

The hidden BDFL

Red Hat’s Karsten Wade seems to have become the closest thing there is to a CentOS BDFL (or at least an avatar of Red Hat as BDFL) by virtue of being the “Liaison” on the newly created governing board. The other named board role is Chair, who is a coordinator and “lead voice” but cannot take decisions for the board as the liaison can. In case you didn’t see the fine print, here’s the reason I say that:

The Liaison may, in exceptional circumstances, make a decision on behalf of the Board if a consensus has not been reached on an issue that is deemed time or business critical by Red Hat if: (1) a board quorum (i.e., a majority) is present or a quorum of Board members has cast their votes; or (2) after 3 working days if a Board quorum is not present at a meeting or a quorum has not cast their votes (list votes); provided that the Chair may (or at the request of the Liaison, will) call a meeting and demand that a quorum be present.

Unless the Liaison specifically indicates on a specific issue that he/she is acting in his/her official capacity as Liaison, either prior to a vote or later (e.g., after an issue has been deemed time or business critical), the Liaison’s voice and vote is treated the same as any other member of the Board. Decisions indicated as Liaison decisions made on behalf of the Board by the Liaison may not be overturned.

Translation? If the board (the majority of which is Red Hat employees) can’t come to a consensus or can’t meet/vote within 3 days, the Red-Hat–appointed liaison can make an irrevocable, unilateral decision on behalf of Red Hat. Also worth noting is that Karsten will be the direct manager of the three CentOS employees joining Red Hat, giving him further influence in both formal and informal forms. Although whoever’s in the liaison role theoretically steps down in power when not acting as liaison, this is much like temporarily removing “operator” status on IRC. Everyone knows you’ve got it and could put it back on at any point in time, so every word you say carries much more weight. It is therefore of great interest to understand Karsten more deeply.

He’s got a long history in community management with Red Hat and I’ve had excellent experiences working with him in the Google Summer of Code and many other venues, so I’m confident in his abilities and intentions in this regard. But it’s definitely worthwhile to read his take on the news and understand where he’s coming from. Here’s an excerpt:

 In that time, Red Hat has moved our product and project focus farther up the stack from the Linux base into middleware, cloud, virtualization, storage, etc., etc. … Code in projects such as OpenStack is evolving without the benefit of spending a lot of cycles in Fedora, so our projects aren’t getting the community interaction and testing that the Linux base platform gets. Quite simply, using CentOS is a way for projects to have a stable-enough base they can stand on, so they can focus on the interesting things they are doing and not on chasing a fast-moving Linux.

In other words, they were putting code directly into RHEL that hadn’t had a chance to bake in Fedora first, which is less than ideal for an enterprise distro. Thus the need for a place to test higher-level software on stable platforms (CentOS).

That post makes it perfectly clear where Karsten’s interests lie, and it, along with his background in community management, is what drives my initial expectations of Red Hat’s influence upon CentOS. It remains to be seen how often Karsten will need to step up to liaison mode, and to what extent his actions in that role will be handed down from higher up in Red Hat versus taken independently, so I’m looking forward to seeing how these changes play out.

 Disclosure: Red Hat is a client.

Categories: community, linux, operating-systems, red-hat.

IBM’s billion-dollar bets mean less today, but you still can’t ignore them

Yesterday’s news about IBM creating a new Watson Group and investing $1 billion in it was surprising to me because the company just announced a different billion-dollar bet on Linux on its Power architecture back in September, another billion on flash memory in April, and another major investment in DevOps over the past couple of years. Not to mention its $2 billion acquisition of SoftLayer to develop a stronger cloud story. [Sidenote: Watson is IBM's Big Data software aimed at what IBM calls "cognitive computing."]

IBM initially made a big bang with its announcement of a billion-dollar investment in Linux back in late 2000. Significantly, it was $1B to be spent in a single year, not some indeterminate future (best of luck verifying that). Given the apparent acceleration in extremely large commitments by IBM, I thought a couple of quick calculations were in order to put the recent ones in context.

Inflation since 2000 puts $1B today at $739M in 2000 dollars (when IBM announced the billion-dollar bet on Linux). Furthermore, IBM’s net income (profit) doubled to $16.6B in 2012 from $8.1B in 2000. The inflation means a $1B bet goes only ~75% as far as it did in 2000, while its significance to IBM’s financials is roughly half of what it was back then. In other words, a bet that was the size of an 800-pound gorilla is only 400–600 pounds these days — but that’s certainly enough to crush most humans, as you might imagine some of its competitors to be. IBM’s increasingly long series of billion-dollar bets continues to draw headlines, and even though each bet means less than it did in 2000, you still can’t ignore the reality that an investment of that size is going to make a significant impact.
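A quick sketch of that arithmetic, for anyone who wants to check it (the inflation factor here is backed out of the $739M figure above rather than taken from an official CPI table):

```python
# Back-of-the-envelope math behind the claims above.
bet_today = 1_000_000_000
bet_in_2000_dollars = 739_000_000          # $1B in 2013, expressed in 2000 dollars
purchasing_power = bet_in_2000_dollars / bet_today

net_income_2000 = 8.1e9                    # IBM net income in 2000
net_income_2012 = 16.6e9                   # IBM net income in 2012
share_2000 = bet_today / net_income_2000
share_2012 = bet_today / net_income_2012

print(f"Purchasing power vs. 2000: {purchasing_power:.0%}")   # ~74%
print(f"Share of annual profit, 2000: {share_2000:.1%}")      # ~12.3%
print(f"Share of annual profit, 2012: {share_2012:.1%}")      # ~6.0%
```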

Disclosure: IBM is a client.

Categories: big-data, devops, ibm, linux, marketing, open-source, operating-systems.

The parallel universes of DevOps and cloud developers

The City and the City, by China Miéville

When I think about people who live in that foggy world between development and operations, I can’t help being reminded of a China Miéville novel called The City & the City. It’s about two cities that literally overlap in geography, with the residents of each completely ignoring the other — and any violations, or breaches, of that separation are quickly enforced by a shadowy organization known as the Breach.

Much like people starting from development or operations (or, for you San Franciscans, the Mission’s weird juxtaposition of its pre-tech and tech populations), The City & the City is a story of parallel universes coexisting in the same space. When I look at the DevOps "community" today, what I generally see is a near-total lack of overlap between people who started on the dev side and on the ops side.

At conferences like Velocity or DevOpsDays, you largely have just the ops who have learned development rather than the devs who learned to be good enough sysadmins. Talks are almost all ops-focused rather than truly in the middle ground or even leaning toward development, with rare exceptions like Adobe’s Brian LeRoux (of PhoneGap fame) at Velocity NY last fall.

On the other hand, that same crowd of developers shows up not at DevOps conferences but rather at cloud conferences. They often don’t care, or perhaps even know, about the term “DevOps” — they’re just running instances on AWS. Or maybe another IaaS or possibly a PaaS, most likely Heroku, GAE, or Azure.

The closest thing to common ground may be events for configuration-management software like PuppetConf or ChefConf, or possibly re:Invent. But even when I was at PuppetConf, the majority of attendees seemed to come from an ops background. Is it because ops care deeply about systems while devs consider them a tool or implementation detail?

The answer to that question is unclear, but the middle ground is clearly divided.

Disclosure: Amazon (AWS), Salesforce (Heroku), and Adobe are clients. Puppet Labs and Microsoft have been clients. Chef and Google are not clients (although they should be).

Categories: cloud, community, devops, Uncategorized.

What were developers reading on my blog and tweetstream in 2013?

As a strong believer in transparency, I wanted to share the actual data from hits on my blog over the past year instead of just a popularity ranking. Using a combination of WordPress stats, Google Analytics, and RedMonk Analytics, I compiled a set of data that reflects what my readers cared about over the past year.

Blog overview: Nearly 90,000 unique visitors

This roughly corresponded to my second year at RedMonk (I started Dec. 1, 2011), so I wanted to take a look at how things changed since the year prior in addition to the raw numbers.

  • 135,114 page views (+393% year over year)
  • 105,133 visits (+383% YOY)
    • 15,770 phone (+163% YOY)
    • 6,015 tablet (+7% YOY)
  • 89,348 unique visitors (+469% YOY)

Beyond being quite pleased at how well I’ve personally done, the disparity between a large increase in phone visitors and a near-constant rate from tablet users is noteworthy. It makes me wonder whether tablet ownership and usage among our generally predictive community is becoming saturated, while the same audience already owned smartphones and is just using them more.

What are people reading?

As is typical, the post traffic is highly asymmetric, with the top hits dwarfing the remainder.

As always, developers love reading about rankings, data, and tooling, and the top posts reflect that. The surprises, to me, are some of the more conversational pieces — one on the Bay Area bubble and the other on SAP. Both of them got fairly strong traction within niche communities on Twitter, which may explain where the traffic came from.

How are people getting here?

Here’s a graph of the top-ranked sources for inbound visits:

The top sources of traffic to my blog in 2013

In terms of the top sources of inbound traffic, search (namely Google, which dominates search at 99.3%) was the best single draw of readership. As a category, however, social media (and Twitter in particular) topped search, with Twitter alone garnering roughly two-thirds of the visits that search did.

Where are they coming from?

Below is a map from Google Fusion Tables that I’ve colored by continent. Deeper greens indicate more visitors, which are absolute rather than normalized by population.

The raw numbers:

  1. 49,429: North America (88% US)
  2. 33,193: Europe
  3. 13,415: Asia
  4. 3,861: South America
  5. 3,062: Australia
  6. 1,259: Africa
  7. (0: Antarctica)

There’s very strong representation among Western countries with 85% of visitors coming from the Americas, Europe, and Australia. This comes as no great surprise since they share the same Latin alphabet and the majority are likely to speak English well.

In fact, I’m quite pleased to have as many readers from Asia as I do, and from South America and Africa as well, because it provides some additional insight into what those developers are doing similarly to, and differently from, their peers elsewhere.

Twitter: 1 million readers in the top week, 4 million in the year

I recently signed up for a service called SumAll to more effectively track how many people are seeing what I’m talking about. Here’s a weekly graph from that service over the course of 2013:

Graph courtesy SumAll. I signed up for their service in September so “mention reach” is missing before then.

Retweet reach (how many total followers see my tweets by following the RT chain) in a typical week is around 75,000. Mention reach (whenever I’m credited, even when I didn’t originate it) has been near or above 500,000 three times since I signed up for SumAll in September. It typically hovers around 3x–5x my RT reach, indicating a combination of independent discovery of my content and Twitter clients that quote me or use the letters “RT” rather than a Twitter API retweet.

I’ve had reasonable success in making data graphics go viral on Twitter — each of the 3 highest peaks over the 4 months where I tracked mention reach was the result of me tweeting a graph based on my original research.

Across the entire year, my retweet reach was 4.02 million users. SumAll didn’t calculate mention reach before I signed up in September, but based on the 4.79 million in the last trimester and the typical multiple mentioned above, I would estimate around 10–25 million users encountered my name this year if that 3x–5x ratio holds true.
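Roughly, the arithmetic behind that estimate looks like this (a sketch, not the exact calculation; the final 10–25 million band is deliberately wider than either number below):

```python
# Two rough ways to extrapolate annual mention reach; neither is exact,
# hence the wide 10-25 million estimate above.
rt_reach_year = 4.02e6             # retweet reach, full year 2013
mention_reach_sep_dec = 4.79e6     # mention reach, September through December only

# 1) Apply the typical 3x-5x mention/RT multiple to the annual RT reach.
ratio_low, ratio_high = 3 * rt_reach_year, 5 * rt_reach_year

# 2) Naively scale the final trimester up to a full year.
trimester_scaled = mention_reach_sep_dec * 3

print(f"Ratio-based: {ratio_low / 1e6:.0f}-{ratio_high / 1e6:.0f} million")   # 12-20 million
print(f"Trimester extrapolation: ~{trimester_scaled / 1e6:.0f} million")      # ~14 million
```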

Year in review wrap-up

2013 was a great year for my RedMonk research, with a gratifying growth in readership over 2012. On average I published 3 posts per month, which I hope to improve in 2014 with a more focused approach to how I balance research time in terms of collection vs production.

Disclosure: SAP and GitHub have been clients. Automattic, Google, Twitter, and SumAll are not clients.

Categories: analyst, social.

BAM! GitHub prediction nailed: 4M users in August, 5M in December

In January, I used data on GitHub’s past growth to predict what would happen over the next year in a post titled “GitHub will hit 5 million users within a year” and said:

In the near term, I’d estimate, based on my Bass model, that GitHub will hit 4 million users near August and 5 million near December.

Prediction from my January 2013 post. Take the red pill.
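For readers curious what a Bass diffusion curve looks like in code, here’s a minimal sketch; the p, q, and m values below are illustrative placeholders, not the parameters fitted for the January post:

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adopters at time t under the Bass diffusion model.

    p: coefficient of innovation, q: coefficient of imitation,
    m: market potential (eventual total number of adopters).
    """
    decay = math.exp(-(p + q) * t)
    return m * (1 - decay) / (1 + (q / p) * decay)

# Hypothetical parameters for illustration only; fitting real values would mean
# minimizing the error against GitHub's actual monthly user counts.
p, q, m = 0.0005, 0.09, 12e6   # per-month p and q, m in users
for year in range(1, 7):
    users = bass_cumulative(year * 12, p, q, m)
    print(f"Year {year}: {users / 1e6:.2f}M users")
```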

On August 7, GitHub reached 4 million and today, it topped 5 million — exactly as I predicted. Given these almost uncannily good results (according to its own user-search API), I couldn’t help but be reminded of a classic XKCD comic:

Science. It works.

Caveat: Users appearing in search may also include GitHub organizations.
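If you want to pull the same kind of number yourself, the search API makes it a short script. Here’s a sketch using today’s v3 endpoint (the cutoff date is only an example, and filtering on type:user is one way to deal with the organizations caveat above):

```python
import json
import urllib.parse
import urllib.request

# Count accounts created before a cutoff date via GitHub's search API.
# type:user excludes organizations; drop it to count both, which is closer to
# what the caveat above describes. Unauthenticated requests are rate-limited.
params = urllib.parse.urlencode({"q": "type:user created:<2014-01-01", "per_page": 1})
url = "https://api.github.com/search/users?" + params

req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

print("Matching accounts:", data["total_count"])
```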

Disclosure: GitHub has been a client.

Categories: Uncategorized.

The year developers and designers collided

We’ll start with a single graph. This shows the number of new CSS repositories created on GitHub every month this year.

Data from the GitHub search API; projected through the end of December as of Dec. 13. Forks were excluded from this search. Classification based on the language in a repository with the most lines of code.

If you’re not blown away, look again. Growth of CSS, the language of web design, has gone completely insane on GitHub over the past year. This does not include forks. This is newly created repositories. Prior to 2013, it hovered near 0% of total repositories created since GitHub’s launch and then suddenly shot up this year. This corresponds to a jump from ~7,500 repos created last year to ~102,000 this year, a 13.6x increase.
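For anyone who wants to reproduce the underlying numbers, here’s a rough sketch of the kind of query involved, using GitHub’s current search API (the month is just an example; repository search excludes forks by default, matching the method in the caption):

```python
import json
import urllib.parse
import urllib.request

def new_css_repos(start: str, end: str) -> int:
    """Count repositories whose primary language is CSS, created between start and end.

    Repository search excludes forks by default, and the language qualifier uses
    GitHub's own classification of each repository's dominant language.
    """
    q = f"language:css created:{start}..{end}"
    params = urllib.parse.urlencode({"q": q, "per_page": 1})
    url = "https://api.github.com/search/repositories?" + params
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total_count"]

# Example month; looping over every month of 2013 would rebuild the graph above
# (mind the unauthenticated limit of roughly 10 search requests per minute).
print(new_css_repos("2013-06-01", "2013-06-30"))
```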

Design is now happening more and more in the form of code. It’s less and less about people who boot up Photoshop (and I use that term purposefully, because it takes that long to start up), work exclusively in there, and hand off a PSD to the development team. It’s about designers who understand the language of code, and coders who understand the visuals of design. If you’ve heard of DevOps [writeup], you’ll notice a lot of similarities. While it’s not about mythical unicorns who are full-stack developers and top-notch designers too, it is a much deeper integration and cooperation than there has been in the past.

Developer vs. designer isn’t a binary choice

In fact, when you mix developers and design you get something incredibly powerful. Last month I wrote the first of a three-part series on the value of blending developers and operations. Here’s the second piece, on design:

[Venn diagram of developers blended with operations, data, and design, with the design overlap highlighted]

If you combine dev + design, you get the next piece of the puzzle, which I’ve summarized as two key points:

  • Developer-designer spectrum, and
  • Developer experience / API experience

The developer-designer spectrum

[The developer-designer spectrum]

While placing people into buckets is a convenient fiction, in reality the interface is blurred. And increasingly so. This is particularly true with the web because of the tight interlink between code and presentation, which became tighter with the advent of Ajax [2004 writeup] and even more recently as JavaScript has spread throughout the stack, courtesy of Node.js [2010 writeup]. In the web community, it’s fairly common for UIs to be designed and customized via coding in an IDE or text editor rather than a GUI along the lines of Adobe’s Dreamweaver. The latter product has led to a near-universal hatred of GUIs for building websites, because maintaining and customizing generated code over time is a miserable experience.

What’s actually happening here is the advent of the technical designer and the creative coder. At Eyeo this summer, a conference exclusively about data visualization, I had a rare opportunity to meet the other side of this spectrum (as someone who spends most of his time around developers). It was very much an artist-centric show — and yet the vast majority of talks used things straight out of the coder’s toolbox rather than manual work in Illustrator, Inkscape, or a GUI data-viz tool like Tableau. 


@boulabiar @strife25 @openframeworks: Presenters at #eyeo2013 are all using d3js, processing or openframeworks. — Donnie Berkholz (@dberkholz) June 7, 2013

When I went to Adobe’s Max show this spring, I proposed a similar thought, so it was very gratifying to see it in real life at Eyeo:

Today’s software missed the boat

While Adobe is hardly the only example, it’s an obvious one: a company whose existing and potential customers exemplify the kind of people I’m talking about. The company as a whole has recently seemed rather unsure about whether it still cares about developers. Back in the Flex and ColdFusion days it clearly did, and the vast majority of its Max conference attendees used to be devs. But no more — at this year’s Max, developers were distinctly in the minority, often overlooked in the mix.

Most of its products no longer cater to developers, and it hasn’t been clear whether Adobe knows why or how much it should care about them in a world of Photoshop and Omniture. The Nitobi acquisition, which brought PhoneGap into Adobe in late 2011, seemed rather out of place in the modern Adobe. But how could PhoneGap help Adobe fit into the broader world of software, which has changed a lot since Dreamweaver was in its prime? And what changing trends in how software gets designed and developed might affect the broader vision of what companies like Adobe should stand for?

Interestingly, Adobe’s begun reaching toward the far end of the design workflow with its recent announcement of Generator for Photoshop. In short, you can take a design in Photoshop and send it directly to Reflow where it’s ready to be transformed into a responsive website. VentureBeat’s Jolie O’Dell was the only journalist to nail the core point in her writeup at the time, getting the broader context of a growing Adobe linkage between designers and developers.

However, the point I’m getting at is bigger than just a linkage. It’s not about a design team shipping things off to a developer team anymore. It’s becoming — and already is, in leading-edge companies — a much tighter integration that resembles the DevOps culture. It’s about developers who can do some design, and designers who can write some code. Maybe not enough to replace the other side, at least not yet and not with existing tools … but enough to have a strong understanding of how the other side lives, what they care about, and how they work. What’s been missing are the equivalents of cloud and configuration management in the DevOps world, to provide those central stepping stones across the river between designers and developers.

Adobe’s made some interesting early steps along these lines. For example:

1) It open-sourced the Generator add-on for Photoshop rather than keeping it proprietary, a clear olive branch held out to developers. Stephen Nielson, Photoshop product manager, said of the decision:

It would enable developers to better understand how to interact with Generator and what it’s capable of. Perhaps the best documentation is the source code itself.

2) As Jeffrey Zeldman aptly put it, Adobe didn’t acquire Typekit; Typekit acquired Adobe. In his words:

It sometimes seemed to me that Adobe hadn’t so much acquired Typekit as the reverse: that the people and thinking behind Typekit are now running Adobe (which is actually true), and that the mindset of some of the smartest consultants and designers in our industry is now driving a huge corporation.

3) PhoneGap and large portions of the Edge suite cater primarily to developers, but some of them, such as Reflow, do reach out to designers. However, they still stick out like a sore thumb in Adobe’s product lineup, because of the lack of integration and the lack of thinking about this as a spectrum.

4) Adobe’s experimented with open-sourcing some fonts, showing that the design side of the house is testing the waters of development. LWN wrote it up; consider it recommended reading on a group open-sourcing something for the first time.

Extending more broadly: developer experience and design thinking

Adobe’s not alone in thinking about design. First SAP, and more recently IBM [writeup], have been pushing the idea of design thinking in software. What makes them interesting is that it’s software that has traditionally been an absolute nightmare to run, because those vendors had only cared about the buyers. Who gives a crap about the users? They don’t have budgets, they can’t make the call about what to buy. But times have changed, and now everyone, even the enterprise behemoths, needs to think about design.

At RedMonk we’re focused primarily on helping companies create a great developer experience, which surprisingly focuses on a lot of things that coders consider peripheral, such as packaging, barriers to entry, and convenience. My colleague James describes it briefly in this off-the-cuff video from the streets of London.

The extension of developer experience is design thinking throughout the company, including its products, portfolio, and even the experience for its own employees. I’m not going to talk about service design in this post to keep it from turning into a book, but that would be a natural consideration if you mix devs, designers, and ops.

It’s time to integrate design into DevOps

As I wrote last month, DevOps needs to pervade the whole company to help it transition toward becoming an agile, collaborative business. In this context, however, we should think about extending DevOps to include design — call it DesDevOps if you like (I don’t particularly care about semantics). Why is it solely an interaction starting with the development team and going through to production? Why shouldn’t it reach back to designers too? In the same way that DevOps enabled companies to deploy software 100 times a day, there’s no reason, besides broken workflows and processes, that we can’t dramatically accelerate how we bring designed software to users to iteratively improve their UX.

Update: Dion Almaer pointed out that what a “CSS repo” is lacked clarity. Caption in the first graph updated to clarify this.

Disclosure: Adobe, IBM, and SAP are clients and covered substantial portions of travel expenses to their events. GitHub has been a client. Tableau is not a client. I randomly won a pass to Eyeo in a drawing after not getting one of the few media passes.

Categories: adobe, creatives, design, devops, ibm, mobile, packaging, sap.

How operations, design, and data affect software and business: Ops edition

I’ve been doing a lot of thinking recently about the interaction between software development and three key areas: operations, data, and design. After conversations this week with Dell’s James Urquhart (courtesy of the enStratius acquisition) and Rod Smith from IBM’s emerging-tech group, I decided that it’s worth writing up to hopefully drive some discussion around these ideas.

By starting at the center of the diagram below with developers and then blending them with any of operations, data, and design, a number of key trends emerge, not only in software development but, more nascently, in business as well. While RedMonk’s traditionally been a firm that looks at the bleeding edge of software, this perspective shows that the same ideas reflect upon the function of organizations as a whole.

[Venn diagram: developers at the center, blended with operations, data, and design]

And yes, I’m very much abusing the purpose of Venn diagrams because I’m going inside-out rather than outside-in, but it gets the idea across. Let’s start at the top and work our way around first the immediate intersections with development, then the more complex interactions.

Development + Operations = DevOps ➡ Agile, collaborative business

[Venn diagram with the development/operations overlap highlighted]

The idea of DevOps is most simply described to developers as the extension of agile software development to operations. It’s about bringing (1) software-development techniques like version control, testing, and collaborative development and (2) rapid iteration and minimum viable products (MVPs) to the world of systems administration and infrastructure.

On the side of software development, it’s about transitioning toward a philosophy of infrastructure as code — defining what your datacenter or cloud looks like in terms of tools such as Puppet and Chef. This enables much easier application of basic software tools such as storing system configurations in Git and using continuous integration on your infrastructure and not just the code that runs on it. By learning from the distributed, asynchronous collaboration found particularly in open-source communities and largely virtual companies (e.g. GitHub), teams can benefit greatly in terms of the quality of people they can hire as well as their ability to collaborate more effectively.
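As a toy illustration of what "continuous integration on your infrastructure" can mean in practice, here’s a minimal sketch (deliberately in plain Python rather than Puppet or Chef DSL) of a check a CI job could run against configuration files kept in Git; the infrastructure/ directory name is hypothetical:

```python
import json
import pathlib
import sys

# Toy CI gate: every JSON file describing infrastructure must at least parse.
# "infrastructure/" is a hypothetical directory in the repository.
errors = []
for path in pathlib.Path("infrastructure").rglob("*.json"):
    try:
        json.loads(path.read_text())
    except json.JSONDecodeError as exc:
        errors.append(f"{path}: {exc}")

if errors:
    print("\n".join(errors))
    sys.exit(1)   # fail the build, just as a failing unit test would
print("All infrastructure configs parse cleanly.")
```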

On the side of agility, as trends like the shift to cloud-based systems continue to accelerate, it’s increasingly critical to work in a more agile fashion that decreases cycle times and thus increases the frequency at which teams can iterate on an MVP. This rapid iteration also means that when things break, as they inevitably do, there’s already a process in place to not just fix them quickly but to get those fixes to users quickly without an epic series of manual hacks.

For DevOps, it’s about extending the cycles beyond just development teams all the way through operations, so you can iterate all the way from development to production using continuous delivery or continuous deployment. Companies like Etsy and Netflix deploy as much as 100 times per day, which blows the quarterly deployment model out of the water. When you  change anything like this by an order of magnitude, you have the potential to not just save time but to transform how people work. Some of that transformation is a prerequisite, while some of it is a consequence of making organizational and tooling changes.

Carrying this into broader business, it means shifting the entire company to an agile model that’s highly collaborative. The collaborative aspect is most easily accomplished not by gathering 1000 people around a real-life water cooler but rather by making the whole company social, along the lines of Salesforce Chatter, IBM Social Business, Yammer, and Jive.

Open-source communities have pioneered the kinds of technologies needed to collaborate effectively in our increasingly connected world. Look at IRC, archived mailing lists, wikis, and more. All of them were pioneered by open-source developers. What’s happening in the world of “social business” today is the same thing that happened 25 years ago in the open-source world (IRC was created in 1988). Those developers have lived in the future ever since, while the rest of the world is slowly picking up the importance of improved collaboration and distributed workforces.

Below is a video of GitHub’s Ryan Tomayko from Monktoberfest 2012 talking about how GitHub (the company, not the product) works incredibly effectively as a distributed organization:

Another video worth watching is from this year’s Monktoberfest where GitHub’s Adam Roben talks about the importance of face time in distributed companies:

While I won’t embed it, there’s another classic video on distributed development by Zach Urlocker (former COO of Zendesk, who previously ran product at MySQL) from Monki Gras 2012. Here’s the writeup.

This interaction between development and operations is a key trend in software, one that has been foreshadowed by open-source development and will be followed by its enactment across the broader business.

Disclosure: Dell, IBM, and Salesforce.com are clients. Puppet Labs, Microsoft (Yammer), and GitHub have been clients. Jive, Opscode, Etsy, and Netflix are not clients.

Categories: community, devops, distributed-development, ibm, microsoft, open-source, redmonk-brew, salesforce, social.

Conway’s law but for software: Salesforce and SAP

Conway’s law aptly states:

Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

I’d like to propose a parallel “law,” if you will. I hypothesize that software companies (that’s increasingly all of them) are incented to innovate primarily around the central object, or data structure, in their software — and consequently constrained from innovation outside the central object.

For Salesforce.com (SFDC), its software is centered around people, primarily customers. This has influenced and will continue to influence corporate strategy, product design, and innovation. Witness the recent “customer company” tagline that SFDC is attaching to everything it does and that affects its higher-order thinking.

For SAP, on the other hand, everything is centered around purchases. Think ERP, purchasing, inventory management, and so on. Its central object is in fact the purchase rather than the person, as it is for SFDC.

In each case, innovation will be highly constrained outside of these core objects, particularly when it comes to second-level connections. In other words, for SAP, the purchase connects to the buyer connects to ?? And for SFDC, the converse holds true — it sees what’s directly connected to the customer, but what about the next step? This mindset will influence initial strategy but pervades into hiring, promotion, and performance reviews to continually lock companies further into their central objects.

If you want to understand company mindsets and their future strategies, you could do a lot worse than combining Conway’s law and this idea of central objects.

Disclosure: Salesforce.com and SAP are clients.

Categories: corporate-strategy, salesforce, sap.

VMworld: the pundits versus the practitioners

Post-match pundits

After coming home from last year’s VMworld, I said this:

My overall impressions of VMworld were that VMware had one of the world’s largest stages to make some huge announcements, and it didn’t — instead, it showed incremental improvements. … Now it’s only a matter of waiting till next year to see whether VMware continues down that road of stability over major announcements.

The pattern has now become clearer: it’s about continuing a stable path for existing tech like vSphere while pushing the innovation toward new products like NSX (the evolution of Nicira’s software-defined networking product), vSAN, and vCloud Hybrid Service (vCHS). This pattern makes a lot of sense for the enterprise users that comprise the core of vSphere’s customer base, who care much more about stability than the latest bling.

But rather than rely too much on one analyst’s opinion, let’s take a look at some data. I performed an analysis of everyone tweeting about VMworld (tens of thousands of tweets, for the curious) to get a better sense of what VMware users cared about and what kinds of disconnects might exist between them and pundits — analysts, journalists, etc. Here, I’m going to focus on the keynote talks and look into how much practitioners and pundits cared about each of the topics and announcements during those sessions.

Day 1

The first morning started out slow, with acting CMO Robin Matlock coming up to talk about the 10th VMworld conference. She brought up a few of the 10-time attendees to talk about their highlights, which (predictably?) focused on parties and swag over talks. After that, new CEO Pat Gelsinger, announced at last year’s VMworld, came up on stage to sparse applause. He talked through a survey that the audience generally didn’t care too much about, and then got into the real content. To help evaluate the importance of that content, here’s a graph of the Twitter volume and sentiment throughout the keynote. I’m going to use this to target each of the peaks in volume or sentiment that I annotated at the top of the graph:

[Graph: tweet volume and sentiment for practitioners and pundits during the day 1 keynote, with annotated peaks]

All #vmworld tweets were automatically filtered to remove the following roles, for practitioners: analysts, journalists, marketing, PR, evangelism, product, and employees of VMware/EMC/Pivotal/VCE. For pundits, only analysts and journalists were included. I then manually examined the top 100 tweeters in practitioner or pundit groups to ensure they belonged. Sentiment measurement shows the positive/negative ratio based on dictionaries from Blue Bird, with word groups of 250 or 500 for pundits and the less expressive practitioners, respectively.
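Mechanically, the sentiment side of that analysis is simple. Here’s a minimal sketch of a dictionary-based positive/negative ratio over fixed-size word groups (the tiny word lists are placeholders, not the Blue Bird dictionaries used for the real graphs):

```python
# Placeholder sentiment dictionaries; the real analysis used much larger lists.
POSITIVE = {"awesome", "great", "excited", "love", "win", "cool"}
NEGATIVE = {"terrible", "fail", "boring", "hate", "broken", "meh"}

def sentiment_ratios(words, group_size):
    """Positive/negative ratio for consecutive groups of group_size words."""
    words = [w.lower().strip("#@.,!?") for w in words]
    ratios = []
    for i in range(0, len(words), group_size):
        group = words[i:i + group_size]
        pos = sum(w in POSITIVE for w in group)
        neg = sum(w in NEGATIVE for w in group)
        ratios.append(pos / max(neg, 1))   # avoid division by zero
    return ratios

# Usage sketch: 500-word groups for practitioners, 250-word groups for pundits.
practitioner_words = "the vsan demo looks awesome love the ease of use".split()
print(sentiment_ratios(practitioner_words, group_size=500))
```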

Gelsinger on stage: The CEO gets introduced and steps up on stage. Practitioners and pundits both announce this as the start of the “real” keynote.

Positive quotes: Gelsinger walked through a survey of VMware customers and cited a bunch of statistics in a favorable light, talking about customers as masters of the universe, ninjas, etc. Many people tweeted them using identical language.

Key pillars: He then described the underlying themes behind all of this year’s announcements, wrapped in the philosophies of “mobile-cloud era” and VMware’s now-old standby, the software-defined datacenter:

  • Expand virtual compute to all apps
  • Transform storage by aligning it with app demands
  • Virtualize the network for speed and efficiency
  • Automation: Management tools to define policies

Unsurprisingly, there’s a consistently high pundit volume throughout the positive quotes and the key pillars, while practitioners are much more subdued and want to get into the technical guts of the talk.

5.5 announced: The high point of the keynote for practitioners was the announcement of vSphere 5.5 (and vCloud Suite 5.5, in decidedly parenthetical terms), with the following key features:

  • 2x the number of cores and VMs (oops, 32x on disks: 64 TB max disk size — this one excited the audience)
  • Application-aware high availability (HA): monitor not just apps but components of apps
  • Big Data extensions: scaling Hadoop clusters and sharing them

In this section, excitement centered around "64TB" disk images (apparently 62TB in actuality, but still 30x bigger than today’s 2TB). The trend within the 5.5 features was generally mirrored by the pundits in relative terms, although it was definitely not the section of the keynote they were most excited about. Secondary was the app-aware HA, while the audience seemed to perceive Hadoop as basically a fad of the day.

Of additional note is the announcement of Cloud Foundry on vSphere, to a subdued response from pundits and practitioners alike. This is likely due to a combination of factors such as the conference audience (very focused on ops, not developers) and the early state of the PaaS market. This audience mismatch could also partially explain the response to the Hadoop news.

vSAN announced: The first real surprise of the day, in terms of products rather than features, was vSAN — VMware’s entrant into the software-defined storage market. It’s a welcome addition to compete with both the top (e.g. EMC, NetApp) and bottom ends of the market (open-source options like Ceph, Gluster, and Swift), with the easy competitive advantage of "The best one to use with vSphere" and an expected GA in the first half of 2014. But if the pricing isn’t also intermediate, it’s going to run into serious issues. From practitioners, this got nearly as much buzz as the vSphere 64TB announcement and was the most positive peak of the day in terms of sentiment. On the pundit side, it outperformed vSphere 5.5, indicating a possible bias toward new technology rather than what’s useful today without major effort.

NSX announced: Martin Casado, CTO and cofounder of Nicira (acquired last year), announced VMware NSX, framing it as ESX for networks. Importantly, he took time to introduce the SDN concept, which is familiar to cloud developers but perhaps less so to enterprise ops outside of the VLAN context. They brought up 3 customers to tell their stories running previous versions of the software: eBay, GE, and Citi. Among practitioners, there was just as much excitement about this as vSAN. Pundits were quieter, perhaps because they’d been expecting this ever since the Nicira acquisition, but they were simultaneously the most excited about it. I see this as another example of how practitioners are more excited about "now" and pundits are more excited about potential for the future.

Virt > phys ports: Martin Casado showed an interesting graph of the growth of physical and virtual ports over time. Although I’m unsure of its data source, here it is: [graph of physical vs. virtual port counts over time]. As you can see, there’s a crossover point in 2012 where the number of virtual ports exceeded the number of physical ports. Gelsinger had just previously cited 2010 as a similar crossover point for the number of machine instances that were virtual versus bare metal to underscore the importance of this graph, which is surprisingly accurate, considering it goes all the way out to 2015.

GA, no Cisco: The GA date for NSX was announced as 4Q2013, and numerous people among both practitioners and pundits noted that Cisco was conspicuously absent from the partners slide.

vCHS announced: The GA of their vCloud Hybrid Service (vCHS) was announced by Bill Fathers, SVP & GM, Hybrid Cloud Services. It will, as the name suggests, be a VMware-hosted cloud designed to integrate with private clouds. Services of note include Cloud Foundry, Disaster Recovery as a Service, and Desktop as a Service. It’s launching with 2 datacenters and will add 2 more this fall, plus another 2 via a Savvis partnership — all US-based. This got a spurt of activity out of practitioners, but far less than the on-prem products.

DRaaS with vCHS: Showing the importance of traditional on-ramps to cloud like backup and disaster recovery (DR), the announcement of DR as a service (DRaaS) on vCHS got just as much buzz as the announcement of vCHS itself. It will use vCHS as a new backend to VMware’s existing Site Recovery Manager product. Practitioners and pundits both got excited about this news, with clear bumps in sentiment and activity visible for both groups.

Day 2

The keynote opened up with President and COO Carl Eschenbach coming up on stage to talk about the conference itself and rehash yesterday’s news, followed by a dive into the products mentioned yesterday, in the form of a skit between Eschenbach and principal engineer Kit Colbert. Again, we’ll do a visualization and walk-through of the highlights of our practitioners and pundits:

[Graph: tweet volume and sentiment for practitioners and pundits during the day 2 keynote]

Analysis here is the same as in the day 1 graph.

22,500 attendees; Carl Eschenbach: Some opening stats; everyone likes to hear how many people like themselves came to the show.

Keynote not streamed: After 10 minutes, it seems like most of the online viewers of Day 1 figured out that Day 2 would not be streamed. This was a particularly strange choice by VMware because attendees have historically enjoyed day 2 more than day 1.

vCAC: This was the start of the skit/demo between Eschenbach and Colbert, which walked through vCAC, App Director, NSX, and vSAN.

70% traffic is betw. VMs: Colbert said that up to 70% of traffic in the datacenter is just between VMs. This got a big sentiment and traffic spike from pundits but relatively little from practitioners. Chances are it’s not news to them since they see it every day in their monitoring.

NSX + vMotion: They showed that vMotion, which moves VMs from one physical host to another, will integrate with virtual networks to migrate the networking as well. Practitioners were extremely interested, but pundits didn’t show much of a response at all. 

vSAN: While pundits showed little enthusiasm, the demo of vSAN got practitioners very excited. They were particularly intrigued by the apparent ease of use.

vSAN beta: Again, practitioners were active and excited about the vSAN beta, while pundits were largely absent. However, there were questions about enterprise readiness regarding scalability and data safety, particularly in the event of failures.

vCAC DaaS (EUC): This was a demo of provisioning an end-user desktop with the vCloud Automation Center, dubbed Desktop as a Service (DaaS). It was the only real focus on end-user computing (EUC) during any of the keynotes.

Joe Baguley: VMware’s CTO for EMEA stepped up on stage.

Policy-based automation: First came the demonstration of auto scale-out via new vCAC + vCOPS (vCloud Operations Manager) integration. Then Baguley made the case that scripting is fragile and so we should rely on policy instead. Key quote: “Scripts are just a posh way of saying manual.”

Log Insight giveaway:  VMware announced that every VMworld attendee would get 5 free licenses to its new Log Insight product, which is directly competing with (most obviously) Splunk as well as newer entrants like Sumo Logic and IBM. Unsurprisingly this produced a huge spike in both traffic and sentiment, with more interest from practitioners than pundits since they’ll be the ones actually using the software.

Practitioners versus pundits

This emphasizes a key difference between practitioners and pundits: what’s practical and tangible today vs where the new innovations are. Practitioners cared most about the technical details of updates, as well as things directly bordering on areas they work in and problems they have (vSAN). Today’s VMware customers are conservative — they don’t care a whole lot about SDN or the cloud, whether it’s in the form of vCHS or OpenStack. A mention of OpenStack as a supported vCAC provisioning target, just after the NSX GA announcement, on day 1, maintained interest but showed no spike in activity or sentiment among practitioners, although it did show an upward trend among pundits.

Rational does not make right

With the divestment of the somewhat tangential Pivotal assets (SpringSource and Cloud Foundry, among others), VMware has enhanced its focus on its original core of virtualization and is now extending that to storage and networks. Its direction seems to be much more rational and logical at this point, which is encouraging because not many companies can manage products across such a broad array of areas successfully. SocialCast seems to be a weird remnant that I’d expected to go to Pivotal, and I’m waiting to see what comes of that now that Sanjay Poonen has moved over from a high-level position at SAP to run VMware’s end-user computing efforts. VMware’s profit for the past two years combined was below $1.5 billion (and that’s prior to the Pivotal handoff). That number is not much greater than the $1.2 billion purchase price of Nicira, indicating the seriousness of VMware’s future commitment to SDN.

However, a rational product lineup doesn’t mean it’s the right product lineup for today’s market. Everyone’s banging on about how vCHS is terrible, in large part because it’s not Amazon. But there’s always going to be a place in every market for a full-service offering. [See our 2007 writeup predicting a VMware cloud.] Meanwhile, Amazon’s clearly working on improving its enterprise offerings, as evidenced by analyst sessions at last fall’s re:Invent conference, plenty of webinars (you should see my inbox), and moves like hiring former VMware sales chief Mike Clayville.

Disruption tends to come from below, and VMware’s biggest concern (not to mention IBM’s) should not be Amazon in its present form as DIY cloud, but rather Amazon succeeding in bringing its low-margin expertise to higher-touch enterprise audiences. And between clones of the AWS API in any number of IaaS stacks, and Amazon’s own moves in the private cloud sector like winning battles against IBM to build a cloud for the CIA, certainly the hosted aspect of VMware’s hybrid cloud is facing serious, even existential, challenges.

And that’s before we even get into the challenges hybrid cloud faces with data gravity [here's one of our writeups]. Shifting data back and forth between public and private instances depends on pipes of limited bandwidth. And if you get into synchronizing distributed systems, you’re faced with yet another set of challenges for which VMware has little answer today — its in-memory options of SQLFire/GemFire/Redis are not always the right tool.

That said, the strategy of providing identical containers on-prem and in the public cloud is clearly a good one, as evidenced by the recent emergence of tools and companies such as Docker and Ravello. Unfortunately VMware’s story today appears to be about migration rather than about dev/test and DevOps workflows. This is strange, because migrating heavyweight VMs back and forth between private and public environments is hard-hit by data gravity. I would argue that it’s a failure on VMware’s part to appeal to its new developer audience, because it’s got a long history of working with and understanding sysadmins. Not only that, but it’s recently handed off (to Pivotal) or lost pretty much every group with a deep understanding of what developers want, Salvatore of Redis fame notwithstanding.

Don’t alienate your audience

VMware’s announcement of NSX garnered a great deal of attention and excitement from VMworld attendees, but not all of it was positive. In much the same way that companies making a DevOps transition should avoid making enemies of the ops teams or DBAs and instead recruit them to new roles, a shift toward SDN should avoid alienating network admins and instead bring them into the fold. VMware today appears to be failing at this.

Alienating those who could be your most ardent supporters seems like a foolish move, but maybe that’s just me.

Disclosure: VMware, Pivotal, Cisco, Splunk, IBM, Red Hat (which acquired Gluster), and Ravello Systems are clients, as are a number of cloud vendors including Amazon, Cloudscaling, Citrix, Joyent, and Rackspace. EMC (primary owner of both VMware and Pivotal), Inktank (which sells Ceph), SwiftStack, Sumo Logic, Savvis, and dotCloud are not.

Categories: big-data, cloud, devops, virtualization.

Entrepreneurial technologists as another creative class

I recently tripped across another great post by Rick Turoczy (author of Silicon Florist) that really resonated with me, and I couldn’t resist quoting it here:

His hypothesis was that startups and the entrepreneurial technologist who founded them were another creative class; a group of people who chose to express themselves with apps and code instead of pictures and words.

And the company for whom he worked—a Portland-based communications firm called Wieden+Kennedy—wanted to help. They wanted to foster that next generation of creatives in town. Because they were changing the way stories were told. Because W+K wanted to be part of that.

Developers as creatives — that’s an important thought. One that companies like Adobe should definitely be considering, as they think about how code-based products fit into a creative lineup including things like Photoshop and InDesign.

Disclosure: Adobe is a client.

Categories: creatives, design.