Tom Raftery recently wrote a piece calling for public cloud providers to be more open about the energy footprints of their services, to allow for customer and consumer benchmarking. You might expect the likes of Amazon and Google to be open to publishing such footprints, but sadly that’s not the case yet. The Silicon Valley leviathans are doing some great work in terms of efficient IT – see Facebook’s Open Compute initiative, for example.
But it’s interesting that a company with a rather different heritage is banging the drum for sustainability metrics in the cloud. Step forward traditional outsourcing and systems integration firm CSC, and its VP of cloud computing, Siki Giunta.
“We really need to understand a workload and how long it runs. We need to understand the rhythm of the business, and provision to that… At the moment metrics collection is all over the place.”
Giunta said that regulatory environments such as the UK Carbon Reduction Commitment would start to force enterprises to be more rigorous about energy monitoring and management. But what should you measure, in order to get a better handle on energy use?
Obviously we eventually need to instrument everything, for an internet of things that drives more sustainable outcomes, but Siki argues that a simpler metric would be a good place to start.
“In terms of servers, a common area of metrics is RAM. It doesn’t matter how many VMs you have; what matters is RAM. But customers don’t know the RAM capacity… of their workloads. They just provision to the spike.”
“Going forward we’ll see spot markets for memory, at a couple of banks, like energy markets today: there is a spot rate. In IT, RAM is the metric – like kilowatts.”
It’s very early days, but it’s good to see Giunta leading the debate, talking to her customers, and to folks like the CSC Advisory Council, about measuring server use and moving towards more sustainable clouds.
Given that CSC bills for cloud on the basis of RAM, you can see the attraction of a RAM-based energy measurement metric.
I also like the idea of a brute force metric so organisations can’t use complexity as an excuse not to report on energy use.
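To make the idea concrete, here is a minimal sketch of how a RAM-based energy proxy could work in practice: take a host’s measured power draw, divide by its installed RAM to get a blended watts-per-GB rate, then attribute a share of host power to each workload in proportion to the RAM it is allocated. All numbers and function names below are hypothetical illustrations, not anything CSC has published.

```python
def watts_per_gb(host_watts: float, host_ram_gb: float) -> float:
    """Blended power rate for a host, expressed per GB of installed RAM."""
    return host_watts / host_ram_gb


def vm_energy_share(vm_ram_gb: float, host_watts: float, host_ram_gb: float) -> float:
    """Estimate of the host power attributable to one VM, in watts,
    apportioned by the RAM the VM has been allocated."""
    return vm_ram_gb * watts_per_gb(host_watts, host_ram_gb)


# Hypothetical blade: 400 W measured draw, 256 GB RAM installed.
rate = watts_per_gb(400.0, 256.0)         # 1.5625 W per GB
vm = vm_energy_share(16.0, 400.0, 256.0)  # a 16 GB VM is attributed 25 W
print(f"{rate:.4f} W/GB, VM share = {vm:.1f} W")
```

The obvious caveat, which the comments below get into, is that RAM allocation does not cause energy use directly; the metric only works to the extent that RAM correlates with overall server utilisation.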
William Louth says:
June 28, 2011 at 11:51 am
James, we should separate metering from billing. We should always meter resources (especially for conservation reasons) but billing itself is based on economics and the business (i.e. rate plans).
Meters can represent any sort of cost and consumption – latency, liability, levy, lease. Using metering we can optimize and trade across a number of variables in the delivery of a service.
We should also think about the causes of such cost and consumption, which can be done using ABC better than any other method available today.
Here are some links for an alternative but broader vision on metrics and meters.
Activity Based Costing & Metering (ABC/M)
http://opencore.jinspired.com/?page_id=3210
Cost Aware Runtimes & Services (CARS)
http://opencore.jinspired.com/?page_id=3063
James Governor says:
June 28, 2011 at 12:02 pm
Thanks William, that’s helpful. Totally agree about metering and billing being different things.
Tom Raftery says:
June 29, 2011 at 12:53 pm
I don’t get it James.
Surely, to get a better handle on energy use, you should measure energy use, no?
I’m not seeing the relationship to RAM. I have 16GB of RAM in my desktop. Before I upgraded it from 8GB it wasn’t using half as much energy.
Cloud is more complicated, I get that.
But wouldn’t a metric like watts/compute cycle be a more accurate metric?
James Governor says:
June 29, 2011 at 2:43 pm
Great question. As I understand it, RAM is effectively a proxy here for server utilisation. That is, virtualisation means counting VMs rather than processors, which creates a quandary in terms of the “unit of work”. And also – and I should say this is only my understanding – RAM would be correlated with an energy measure. RAM in itself won’t give you the data you need; rather, it’s the correlation with energy used.
Sunil Bhargava (CSC - Cloud Services) says:
July 1, 2011 at 3:45 pm
Tom – you are right that the measure should be watts/some_compute_measure. Measuring computing has always been complex – IBM came up with MIPS, which many consider inaccurate; SAP came up with SAPS, which many deride just as often. We wanted to take the opportunity cloud provides to make measuring compute simpler. Simpler means a metric that application owners can relate to. Having assessed workloads across a large variety of x86-based outsourced/hosted servers, we have determined that the single metric that best aligns to compute resources is RAM utilization.
Further, contemporary blade-based technology continues to improve RAM density and CPU density at about the same rate. Though not ideal, for blade-based architecture RAM is definitely a good proxy, and by our measure it is the best proxy. So, for energy consumption of cloud infrastructure, I would put forth the measure of watts/RAM. Of course, as in the case of your example, for traditional server-based architecture this is a poor metric. Our focus on a metric for blade-based architecture is further reinforced by the fact that last year blade shipments outpaced physical server shipments – it is the future.
James Urquhart says:
July 2, 2011 at 2:51 pm
I would love to have you invite someone from CSC to guest post an explanation of how this would work. I have to admit being confused, as well.
RAM is an interesting measurement of compute resource usage, so I guess a correlation is possible. But I’m not sure it’s “simple” or even best, though I am open to learning more.
Thanks for the write-up, though. Thought provoking.
James Governor says:
July 4, 2011 at 9:06 am
James – of course I asked CSC for clarification, particularly after Tom asked the question. And thanks Sunil for giving it… Now I just have to hassle CSC to provide some watts/RAM metrics for their public and private cloud offerings… 😉