If you spend enough time talking with fans or purveyors of the various virtualization technologies, eventually the conversation will turn to the migration of “live workloads” – the movement, in other words, of running software. VMware talks about this quite a bit, and Red Hat hastened to point out this feature in its latest release.
There are a variety of reasons users would want to do this. Say, for the sake of argument, that you have a workload running on a failing hardware platform. Would it not be preferable to seamlessly move it to an uncompromised machine? Of course. Ditto for consolidation, upgrades, and a host of other situations.
What I’m curious about, however, is when this will become less a virtualization feature and more a standard backup practice.
At RedMonk, for example, we have an automated set of scripts that backs up our webroot, MySQL databases, and various other configuration files nightly to S3. If or when our underlying hardware – a Sun V20z – fails, I’m confident that I could recreate our current environment, less a few hours’ worth of comments depending on the timing, within an hour or two. Assuming that I can arrange new hardware, which is in our case a rather bold assumption.
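For the curious, a job of that sort can be sketched in a few dozen lines. This is not our actual script – the paths, bucket name, and database name below are placeholders, and it shells out to the modern AWS CLI purely for illustration – but it captures the shape of the thing: archive the webroot and configs, dump the database, push both to S3.

```python
#!/usr/bin/env python
"""Nightly backup sketch: archive the webroot, dump MySQL, push both to S3.

All names below (paths, bucket, database) are illustrative placeholders,
not RedMonk's real configuration. By default the script only prints the
commands it would run; pass --run to actually execute them.
"""
import datetime
import subprocess
import sys

WEBROOT = "/var/www/html"     # hypothetical webroot path
DATABASE = "blogdb"           # hypothetical MySQL database name
BUCKET = "example-backups"    # hypothetical S3 bucket


def backup_commands(date=None):
    """Return the shell commands for one nightly backup run."""
    stamp = (date or datetime.date.today()).isoformat()
    site_tar = "/tmp/webroot-%s.tar.gz" % stamp
    db_dump = "/tmp/%s-%s.sql.gz" % (DATABASE, stamp)
    return [
        # Archive the webroot plus the web server's configuration files.
        "tar czf %s %s /etc/httpd/conf" % (site_tar, WEBROOT),
        # Dump the database; credentials assumed to live in ~/.my.cnf.
        "mysqldump %s | gzip > %s" % (DATABASE, db_dump),
        # Push both artifacts to a dated prefix in the bucket.
        "aws s3 cp %s s3://%s/%s/" % (site_tar, BUCKET, stamp),
        "aws s3 cp %s s3://%s/%s/" % (db_dump, BUCKET, stamp),
    ]


if __name__ == "__main__":
    execute = "--run" in sys.argv
    for cmd in backup_commands():
        print(cmd)
        if execute:
            subprocess.check_call(cmd, shell=True)
```

Cron it nightly and the restore story becomes “pull the latest dated prefix, untar, reload the dump” – which is exactly why it takes an hour or two rather than a minute or two.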
If we were to leverage virtualization capabilities, however, we could easily back up the environment itself as an instance snapshot – a capture of our workload and applications at a particular point in time. It’s not migrating a live workload, per se, but rather an image of one. Given this ability, the time required to recreate the software environment is reduced to the time it takes to click a few buttons.
Combine that with Hardware as a Service (HaaS) platforms such as EC2/S3, which can provision new hardware near-instantaneously, and the mean time to recreate hicks – our production server – would drop from an hour or two to a minute or two.
For virtualization customers, of course, this notion is (very) old hat. A significant percentage of VMware’s customers embrace the technology strictly for the purpose of disaster recovery. Larger customers, too, are not likely to discover anything new in this combination, as the cost of downtime is such that they already have elaborate backup solutions in place.
But it seems clear that the combination of open source virtualization technologies and economical HaaS options has dramatic implications for down-market customers, those who typically cannot afford to maintain a complex backup infrastructure.
What would be the simplest means of economically assembling a virtualized infrastructure, I wonder? What pieces would I need to cut redmonk.com’s potential downtime by hours?