James Governor's Monkchips

Is capacity planning dead or set for a revival?


Capacity planning is something I broadly associate with the mainframe market. It's something data centers used to do before the advent of commodity computing, when chips became cheap and storage even cheaper. Web companies are notorious for their "throw servers at the problem, don't worry, it's scale-out" approach. But virtualisation, and dare I say it, the Cloud, are bringing issues of strategic infrastructure planning back to the fore. I recently recommitted to blogging, and said I would try to bring some of my Twitter conversation back to my blog, making it less ephemeral and perhaps even more timeless. The simplest method I can think of is to search and paste, like so. I got some really solid responses. The cool thing is that now you can go talk to these people yourself.

@jiludvik: @monkchips Capacity planning will be less of a worry for end users and more so for cloud providers. Capacity does cost money,even at scale

@jesserobbins: @monkchips Capacity Planning = Tactical Advantage, Strategic Ability. John Allspaw explains why & *how* http://tr.im/1g79

@jevdemon: @monkchips capacity planning will be dead when people stop building and deploying systems. In other words, never.

2 comments

  1. I don’t agree that capacity planning was simply a mainframe discipline. It has certainly been a discipline at the various large organizations that I have worked with. SAP systems with 15,000 worldwide users demand that we plan and manage capacity carefully. Shipping millions of packages per day globally demands great forethought in the planning and management of capacity. We can’t just throw resources at that kind of problem.

    However, when problems are smaller or more discrete, then maybe the needs change. Certainly there is so much headroom in server boxes today, at such low price points, that we may not need to worry because we know that two boxes clustered for failover will run the critical services.

    I do argue, though, that most large organizations (and no, I won’t define precisely what "large" means) will need to plan and manage the capacity and scale of their core infrastructure – even as it becomes more of a utility.

  2. Capacity of what? Not necessarily the same things as in the mainframe era. Sometimes there are more interesting or complex factors that limit the scalability of large systems.

    For example, if you have x faults per million transactions, and you need one full-time systems engineer for every y faults, then your system capacity is limited by the number of systems engineers you can employ.

    Of course you can try to change x and y, but maybe that’s an important part of capacity planning as well.
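The back-of-envelope arithmetic in that comment can be sketched as a tiny calculation. (The function name and the numbers below are illustrative assumptions, not figures from the post.)

```python
# Capacity limited by support staff rather than hardware, per the comment above:
#   x = faults per million transactions
#   y = faults one full-time systems engineer can handle
def max_transactions(engineers: int, x: float, y: float) -> float:
    """Maximum transaction volume supportable by a given engineering headcount."""
    faults_handled = engineers * y           # total faults the team can absorb
    return faults_handled * 1_000_000 / x    # transaction volume at that fault budget

# Illustrative numbers: 50 faults per million transactions,
# each engineer handles 200 faults, team of 10 engineers.
print(max_transactions(10, x=50, y=200))  # → 40000000.0
```

Changing x (fault rate) or y (faults per engineer) shifts that ceiling, which is the commenter's point: improving reliability or tooling is itself a form of capacity planning.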
