
RedMonk


DevOps and cloud: A view from outside the Bay Area bubble

[Image: Venn diagram of DevOps and cloud]

I saw two starkly different worlds of IT almost side-by-side last week, thanks to the absurdities of airline pricing, and it illustrated very clearly the contrast between how we perceive the world in our Bay-Area–centric bubble and how the world really is.

First, I spent some time at Amazon’s AWS Summit in San Francisco, where Amazon was pushing best practices at the bleeding edge of tech to one of the most technically sophisticated communities on the planet. Following that, I spent a day at DevOpsDays in Austin, Texas, en route to my home in Minnesota. (For some reason this was hundreds of dollars cheaper than a direct flight.)

In the Bay Area, I saw the same thing that’s endemic to the area. There’s a clear best way to do things, pretty much everyone is aware of it, and that’s what everyone does. Thanks to the heavy startup presence, there’s much less inertia in terms of existing cultures or infrastructure, so changes come easier. When you’ve got a next-door neighbor doing something amazing, it’s very hard to resist the peer pressure and the local culture, so everyone’s doing The Right Thing™. Very similar things hold true in the open-source world, where neighbors may be virtual but they’re still highly visible.

In Austin, it was an entirely different story. I saw yet another example of how the rest of the IT world, at least in this country, lives. I’ve seen it in places like Minnesota, Maine, and Oregon. It’s a world where trendy software vendors and startups don’t represent any meaningful part of the tech community, where businesses mostly don’t yet realize that software is eating the world. It’s a world where inertia rules the day, where business is king and sysadmins have little to no say in major changes. And it’s a world where even experimentation is difficult and must be done on the smallest of scales.

What happens in places like this? Let’s call it Everytown, USA. In Everytown, IT departments can’t afford to build a new infrastructure from scratch using Puppet or Chef in the cloud. They don’t have the freedom to do it externally or the resources to implement a private cloud internally.

Even at a conference like DevOpsDays Austin, if you ask people what they’re actually doing today, most of the time it bears little to no resemblance to how a new Bay Area startup would set up its infrastructure. Don’t ask them about their plans; those are often so ambitious as to be unusable as a guide. Maybe they’re maintaining cloud instances by hand in AWS, or maybe they’re slowly migrating a large datacenter full of pets to configuration management, which they’ve been working on for the past five years. If they’re open-source fans, chances are they’re running Nagios and have a huge collection of Nagios-related infrastructure that would need serious, dedicated effort to shift to anything different.

More modern shops may have migrated most or all of their servers to tools like Puppet or Chef, so everything’s at least under configuration management and thus documented and reproducible. But in many cases, this is for datacenter use only, whether true on-prem or in a colo. Gaining the capacity, budget, and permission to migrate even to a private cloud is impossible for many companies, and it could stay that way for a while.
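To make concrete what “under configuration management” means in practice, here’s a minimal sketch of a Puppet manifest (a hypothetical illustration; the class, package, file source, and service names are assumptions, not anyone’s actual setup). Rather than hand-editing a server, you declare the desired state, and every machine that applies the manifest converges to it:

```puppet
# Declare the desired state of NTP on every node that applies this class.
# Puppet compares each resource against reality and corrects any drift.
class profile::ntp {
  package { 'ntp':
    ensure => installed,
  }

  # The config file lives in the module under version control,
  # not as a hand-edited file on each server.
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/profile/ntp.conf',
    require => Package['ntp'],
    notify  => Service['ntpd'],  # restart the service when the file changes
  }

  service { 'ntpd':
    ensure => running,
    enable => true,
  }
}
```

This is why configuration management doubles as documentation: anyone can read off exactly how a service is configured, and rebuilding a lost server means re-applying the manifest rather than remembering what was once done by hand.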

You can see the same thing at conferences for larger enterprise vendors like IBM — talking to attendees at IBM’s Pulse conference this spring, most of them are in exactly the situation I’ve described. IBM’s jump into both cloud and DevOps will make a significant difference to their adoptability in many places; it’s like a stamp of approval that these things are really ready for the enterprise.

“Shadow IT” developers outside the purview of IT-controlled infrastructure, on the other hand, often don’t have or don’t want to develop the expertise to learn DevOps philosophies and approaches. Developers may well be working in the cloud, but chances are they aren’t running tools like Puppet or Chef, and they don’t have any monitoring set up. They may hack things around by hand and hope everything doesn’t break too often, or they may outsource the infrastructure to somewhere external and run in a PaaS.

IT shops like this may be aware that better ways exist, and they may have ambitions of going there, someday. The Bay Area view of the right infrastructure is always going to be years away for the rest of us — we even put William Gibson’s quote regarding this on our website:

The future is already here, it’s just unevenly distributed.


Update (5/5/13): Of course this is a generalization of reality, which is always more complex than a single answer at either end of the spectrum. I’ve just simplified it to communicate the overall points, which remain true regardless of the details. Reality looks like a distribution on both ends — but the distribution is shifted. I’m just talking about the most common cases within those distributions. There are clearly going to be some Bay Area companies with plenty of inertia, and some Everytown companies overflowing with cloud- and DevOps-based approaches — but it’s the minority. Even within a single company, there’s a distribution of approaches, with some areas more modern and others more legacy (heard of systems of engagement and systems of record?).

Disclosure: Amazon (AWS) and IBM are clients. Puppet Labs has been a client. Opscode and Nagios are not.

Licensed under Creative Commons BY-SA.

Categories: cloud, devops, open-source.

  • http://twitter.com/sixty4k sixty4k

    Some of us in the Bay Area still look like the world outside the Bay.

  • blahism

    I’d also argue that on the face the bay area may look good, but deep down inside the gizzards of those companies, they’re still running the same pets. There is what makes your business money, and there is what makes your business run. If you’re not in the business of making businesses run, well then, don’t worry about the pets that come with the territory.. (accounting/hr/finance/compliance/audit/benefits.. yaddy yaddy yadda)

  • SleeZee Lyers

    Interviewed with Amazon AWS in the past week and it’s frustrating – I am from outside the Bay Area / Seattle center of excellence and want in. Amazon seems to want me in, but acknowledges that with my outside the Bay Area experience, it’s difficult to hire me in.

    Forever alone

    • blahism

      After interviewing for Amazon, I wouldn’t want to work there..

  • Doug K

    The best way to run a Bay Area startup is not necessarily the best way to run a business that has a family of Cobol, SAP, JD Edwards, .NET and Java applications of varying ages. If we had the luxury of throwing them away and starting again, it would be done differently..

    • blahism

The generation today often doesn’t realize what the legacy of yesterday was able to accomplish. They don’t know of the days when a manufacturing plant used to have 50–100 people working on receivables and payables, and how a simple ERP running on an old HP K server brought that down to 4–5 people with fewer errors, faster service, and better workflows. The “wins” they feel they’re winning or innovating with are the same ones everyone has felt with every prior generation. Nothing today says what we have from yesterday is wrong; I’d just wager they’re solving entirely different problems. We used to have HR manage the people we replaced with systems, then we thought process could control the systems, and now we’re back to realizing people are the key, and the same way we manage people with manufacturing WIP can apply to any WIP, regardless of that WIP being Cobol, SAP, JD Edwards, eBusiness, .NET, or Java. And to be completely honest, Java and .NET are absolutely amazing solutions.

  • Pingback: M-A-O-L » DevOps and cloud: A view from outside the Bay Area bubble

  • la6470

No sir, we don’t have inertia. We just don’t want unemployed developers to walk into sysadmin territory… Nowadays a simple change on 10 systems takes three weeks so that DevOps can put it into their backlog, agile voodoo board, and scrum-bum meetings, and then implement it. Before, all we would do was write a good old bash script and push those changes in one hour, after appropriate change-management approvals. A perfectly good way of doing things for true Unix sysadmins. Why don’t you automate development itself and make code-generation tools as easy as MS Word, and leave sysadmins to develop their own tools instead of forcing Puppet crap on them?

  • http://twitter.com/taidevcouk Daniel Bryant

    Interesting thoughts, and it’s something I also see in the UK, obviously on a more micro scale. Many small companies around London are investing heavily in the new DevOps philosophy, particularly around the “Silicon Roundabout” area which is a haven for tech-focused start-ups.

    Although there are many other tech-hubs around the UK (as I’m sure there are all over the US; NYC for example?) it’s all too easy to see the pattern mentioned above, especially in the more established IT sectors. When it comes to the Cloud these people talk a good game, but often play very badly.

Having said this, I am generally very encouraged to see these new ‘DevOps’ approaches emerging. Anyone who is a true practitioner of this philosophy knows and can demonstrate the benefits it will bring to a business. My personal favourites are how DevOps methodologies can enable the implementation of Continuous Delivery, which allows more iterative deployments (agile for Ops?), and how the automation of provisioning/deployments/rollouts can lower the cost of experimentation.

In my opinion it’s only a matter of time before these early DevOps adopters spread the good word and this practice becomes mainstream, just like many other methodologies that came before, which were also built on sound and proven principles… TDD and Agile development, anyone? Surely only the cool kids do that? ;-)

  • Pingback: DevOps and the Cloud. Not just localised in the Bay Area… | The Tai-Dev Blog

  • http://twitter.com/cloudtoad Derick Winkworth

    I have ranted about this before. This is absolutely true.

  • Paul D

I grew up in the Bay Area and worked for startups, then moved 10 years ago to Everytown, USA (remember the 2002 dot-com bubble?). I agree with your premise entirely. It’s all just a matter of perspective.

    If you are a brand new greenfield startup in the bubble, you can make choices that allow you to go with the latest and greatest technologies. Those in Everytown have the weight of possibly decades of hardware and software choices to be considered.

As an Everytowner now, I’d love to get to a full DevOps cloud solution, but we have to make the choices that are best for the business. Where we can make incremental, best-of-breed choices that make us more efficient and more reliable and provide true business value, we go there at a planned pace. The weight of the past makes it harder.

But while the future is DevOps and the cloud, I think what the bubble is missing is the reality that in 10 years many of them won’t be around (lack of funding, not enough business value), and that DevOps and cloud technologies didn’t make them viable; they just provided the smartest and quickest way to market.

I think the business value of DevOps in the bubble or in Everytown is the same: reduce footprint and cost in the short term, and gain stability and scalability in the long term. That’s what makes it the right thing to do; the bubble isn’t any smarter, they just have more freedom right now.
Check back in a few years and see if they’re still at warp speed.

  • Pingback: How can Everytown IT adopt DevOps and achieve business benefit?

  • Pingback: On SF and the Shared Craft bubble | Monki Gras

  • Pingback: What were developers reading on my blog and tweetstream in 2013? – Donnie Berkholz's Story of Data