DevOps and cloud: A view from outside the Bay Area bubble

[Figure: Venn diagram of DevOps and cloud]

I saw two starkly different worlds of IT almost side-by-side last week, thanks to the absurdities of airline pricing, and it illustrated very clearly the contrast between how we perceive the world in our Bay-Area–centric bubble and how the world really is.

First, I spent some time at Amazon’s AWS Summit in San Francisco, where Amazon was pushing best practices at the bleeding edge of tech to one of the most technically sophisticated communities on the planet. Following that, I spent a day at DevOpsDays in Austin, Texas, en route to my home in Minnesota. (For some reason this was hundreds of dollars cheaper than a direct flight.)

In the Bay Area, I saw the same thing that’s endemic to the area. There’s a clear best way to do things, pretty much everyone is aware of it, and that’s what everyone does. Thanks to the heavy startup presence, there’s much less inertia in terms of existing cultures or infrastructure, so changes are easier. When you’ve got a next-door neighbor doing something amazing, it’s very hard to resist the peer pressure and the local culture, so everyone’s doing The Right Thing™. Very similar things hold true in the open-source world, where neighbors may be virtual but they’re still highly visible.

In Austin, it was an entirely different story. I saw yet another example of how the rest of the IT world, at least in this country, lives. I’ve seen it in places like Minnesota, Maine, and Oregon. It’s a world where trendy software vendors and startups don’t represent any meaningful part of the tech community, where businesses mostly don’t yet realize that software is eating the world. It’s a world where inertia rules the day, where business is king and sysadmins have little to no say in major changes. And it’s a world where even experimentation is difficult and must be done on the smallest of scales.

What happens in places like this? Let’s call it Everytown, USA. In Everytown, IT departments can’t afford to build a new infrastructure from scratch using Puppet or Chef in the cloud. They don’t have the freedom to do it externally or the resources to implement a private cloud internally.

Even at a conference like DevOpsDays Austin, if you ask people what they’re actually doing today, most of the time it bears little to no resemblance to how a new Bay Area startup would set up its infrastructure. (Don’t ask about their plans; those are often so ambitious as to be useless as a guide to reality.) Maybe they’re maintaining cloud instances by hand in AWS, or maybe they’re slowly migrating a large datacenter full of pets to configuration management, a project they’ve been working on for the past five years. If they’re open-source fans, chances are they’re running Nagios and have a huge collection of Nagios-related infrastructure that would take serious, dedicated effort to shift to anything different.
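To make the Nagios point concrete: a setup like that is built from thousands of small object definitions, one per host and check. Here’s a hedged sketch of a single check declared through Puppet’s built-in nagios_service resource type (the host, check, and file names are invented for illustration); plenty of shops write the equivalent by hand in plain Nagios .cfg files instead, but the shape is the same.

```puppet
# Hypothetical sketch: one HTTP check for one host, expressed via
# Puppet's built-in nagios_service type. Names and paths are invented.
nagios_service { 'web01_http':
  ensure              => present,
  host_name           => 'web01.example.com',
  service_description => 'HTTP',
  use                 => 'generic-service',  # inherits check interval, contacts, etc.
  check_command       => 'check_http',
  target              => '/etc/nagios3/conf.d/puppet_services.cfg',
}
```

Multiply that by every host, service, contact, and escalation accumulated over the years, and it’s clear why moving to a different monitoring system is a serious, dedicated project rather than a weekend experiment.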

More modern shops may have migrated most or all of their servers to tools like Puppet or Chef, so everything’s at least under configuration management and thus documented and reproducible. But in many cases this covers the datacenter only, whether true on-prem or in a colo. Gaining the capacity, budget, and permission even to migrate to a private cloud is impossible for many companies, and it may stay that way for a while.
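As a rough illustration of what “under configuration management and thus documented and reproducible” means in practice, here’s a minimal Puppet manifest in the classic package/file/service pattern. It’s a sketch under assumptions (an ntp module providing the config file), not anyone’s production code.

```puppet
# Minimal sketch: declare the desired state of the ntp service.
# Puppet converges every managed node to this state on each run,
# so the manifest doubles as documentation of how the box is built.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf', # assumes an ntp module exists
  require => Package['ntp'],                   # install before managing the file
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'], # restart ntp when its config changes
}
```

Because the desired state lives in version-controllable text rather than in a sysadmin’s memory, rebuilding a lost server (or standing up ten more like it) becomes a mechanical step instead of archaeology.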

You can see the same thing at conferences for larger enterprise vendors like IBM. Talking to attendees at IBM’s Pulse conference this spring, most of them were in exactly the situation I’ve described. IBM’s jump into both cloud and DevOps will make a significant difference to adoption in many of these places; it’s like a stamp of approval that these things are really ready for the enterprise.

“Shadow IT” developers working outside the purview of IT-controlled infrastructure, on the other hand, often don’t have, or don’t want to develop, expertise in DevOps philosophies and approaches. Developers may well be working in the cloud, but chances are they aren’t running tools like Puppet or Chef, and they don’t have any monitoring set up. They may hack things together by hand and hope nothing breaks too often, or they may outsource the infrastructure entirely and run on a PaaS.

IT shops like this may be aware that better ways exist, and they may have ambitions of going there, someday. The Bay Area view of the right infrastructure is always going to be years away for the rest of us; we even put William Gibson’s quote about this on our website:

The future is already here, it’s just unevenly distributed.


Update (5/5/13): Of course this is a generalization; reality is always more complex than a single answer at either end of the spectrum. I’ve simplified it to communicate the overall points, which remain true regardless of the details. Reality looks like a distribution on both ends, but the two distributions are shifted relative to each other; I’m just talking about the most common cases within them. There are clearly going to be some Bay Area companies with plenty of inertia, and some Everytown companies overflowing with cloud- and DevOps-based approaches, but they’re the minority. Even within a single company there’s a distribution of approaches, with some areas more modern and others more legacy (heard of systems of engagement and systems of record?).

Disclosure: Amazon (AWS) and IBM are clients. Puppet Labs has been a client. Opscode and Nagios are not.

Licensed under Creative Commons Attribution-ShareAlike (CC BY-SA).

Categories: cloud, devops, open-source.