James Governor's Monkchips

On multi-cloud tradeoffs and the paradox of openness

(Image: woman wiring an early IBM computer)

In any technology adoption decision, organisations face a balancing act between openness and convenience. Open standards and open source, while in theory driving commoditization and lower cost, also create associated management overheads. Choice comes at a cost. Managing heterogeneous networks is generally more complicated, and therefore more resource-intensive, than managing homogeneous ones, which explains why in every tech wave the best packager wins, and wins big: they make decisions on behalf of the user which serve to increase convenience and manageability at the individual or organisational level.

One of the key reasons Web Scale companies can do what they do, managing huge networks at scale, is aggressive control of hardware, software, networks and system images: reducing choice at certain levels of the stack increases flexibility in other areas. The basic premise of scale-out networks is that the nodes are the same.

The cloud is no different from previous technology waves. Unix, for example, was a standard that drove choice, but it also created new management costs, which translated into multi-million dollar enterprise expenditures on systems management software such as CA Unicenter, BMC Patrol and IBM Tivoli. Over time customers began to coalesce around a single well-packaged Unix standard, Solaris, which of course was then supplanted by Linux. In the Linux wave Red Hat quickly emerged as the opinionated Linux for enterprise application deployment. There were plenty of other distribution choices out there, but IT organisations needed a de facto standard to keep their costs down and do some of the management for them. Of course, outside the Unix world, Windows had emerged as a de facto application deployment platform.

While enterprises were adopting Red Hat for on-prem workloads, the cloud was emerging as a thing, and we saw the associated rise of Ubuntu. Turns out apt-get suited the way modern developers wanted to work, with simple networked installation, removal and management of open source software packages and libraries.

In the early days of cloud, choices were limited. Amazon won not because it offered a huge choice, but because it kept things simple. At least initially, folks were worried about cloud lock-in to specific services. Today the landscape is a lot more complicated and fluid. Anyone with kids knows that sometimes apples do indeed compete with oranges. Stephen does a sterling job laying out some of the history and layered complexity in The Road to Abstraction.

“These technologies are also, more often than not, distinct from one another in their architecture, philosophy and approach. They have different strengths, different weaknesses and, often, production uses. It is fair to characterize many of these technologies as apples and oranges. Yet each of these technologies has competed to some degree with the generations that preceded it, and those that have come after. Cloud Foundry and similar projects actively target traditional middleware such as WebLogic; container technologies like Docker coupled with management technologies like Kubernetes, meanwhile, have been a thorn in the side of projects like Cloud Foundry since their inception. So much so that the company that spawned Cloud Foundry has chosen to also offer customers Kubernetes with its Kubo offering. Hosted services such as Force.com or Lambda, for their part, compete with all of the above, selling against pure software with software that’s delivered as a service.”

So with all this choice, what is an enterprise to do when looking for the investment protection of avoiding a single vendor solution?

One answer comes in the shape of abstraction. The container revolution is making operating system choices far less important, in a sense. One of Docker’s key selling points is application portability across clouds and on prem. To that end Docker this week announced its Modernize Traditional Applications (MTA) program. Docker also made an explicit play for OS independence this week with the release of LinuxKit, a toolkit for building container-optimised Linux operating systems. There is no love lost between Docker and Red Hat. Red Hat is an interesting target because of the significant headway it has been making since it retooled OpenShift around Kubernetes, which has emerged as basically every platform player’s hedge against full stack Docker.

Obviously OpenStack was also designed to allow cloud portability, but the platform has so many different centers of gravity in terms of modules, and it’s so open, that there is indeed a significant management overhead, which has led to it primarily being deployed in very large organisations with large IT staffs.

So what about multicloud?

Avoid platform-specific APIs and calls: an obvious aspect of ensuring application portability across clouds. Easier said than done, though, in an age of serverless, PaaS and microservices, with developers making use of whatever is nearest to hand to solve a problem. The swift adoption of Amazon Lambda shows just how attractive an event-based model for taking advantage of cloud platform services can be, and with great power comes great lock-in. IBM sought to create its own open source serverless model in the shape of OpenWhisk, but serverless is really about managed platform services, and as such tends by definition towards lock-in. Amazon is the only third-party adopter.
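
To make the lock-in point concrete, here is a minimal sketch of a Lambda handler in Python consuming an S3 event. The nested event shape is Amazon’s own; the bucket and key handling is purely illustrative.

```python
# Minimal sketch of an AWS Lambda handler in Python. The event shape
# (Records -> s3 -> bucket/object) is Amazon-specific, so code written
# against it will not port to another provider's event model unchanged.
def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print("processing %s from bucket %s" % (key, bucket))
    return {"status": "done"}
```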

PaaS can also enable portability. Thus customers can choose to run, for example, Pivotal Cloud Foundry apps either on prem, on AWS Cloud, or on Google Cloud Platform. On the other hand, portability between different CF platforms such as IBM Bluemix is hard to achieve out of the box because of third-party platform API calls.
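
One common mitigation, sketched below with hypothetical class names rather than any real library’s API, is to quarantine platform-specific service calls behind a thin interface, so that moving an app between platforms means swapping an adapter rather than rewriting application code.

```python
# Sketch of quarantining platform-specific service calls behind an
# interface. All class and function names here are hypothetical,
# for illustration only.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class BluemixObjectStore(ObjectStore):
    def put(self, key, data):
        pass  # would call a Bluemix-specific storage service here

class S3ObjectStore(ObjectStore):
    def put(self, key, data):
        pass  # would call Amazon S3 here

def object_store_for(platform: str) -> ObjectStore:
    # the single point where the platform choice leaks into the codebase
    return BluemixObjectStore() if platform == "bluemix" else S3ObjectStore()
```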

Management-as-a-service tradeoffs make it hard to avoid platform lock-in. Thus, for a database on AWS Cloud, a developer can choose to run MySQL themselves on EC2, or adopt Amazon RDS for MySQL. But developers don’t want to manage databases, they want to write code. The same goes for MongoDB or Postgres. Mongo is now building its own database-as-a-service platform, Atlas, to enable management portability and better support on-prem customers. Specialist management providers for Postgres are emerging, such as Citus.
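
The convenience gap shows up directly in code: self-managing MySQL on EC2 means provisioning, patching and backups, while the managed route is roughly one API call. A hedged sketch using boto3 follows; every identifier, size and credential is a placeholder.

```python
import boto3

# Roughly one API call buys a managed MySQL instance; backups, patching
# and failover become Amazon's problem, and that convenience is the
# lock-in. All identifiers and credentials below are placeholders.
rds = boto3.client("rds", region_name="us-east-1")
rds.create_db_instance(
    DBInstanceIdentifier="appdb",
    Engine="mysql",
    DBInstanceClass="db.t2.micro",
    MasterUsername="admin",
    MasterUserPassword="change-me",
    AllocatedStorage=20,  # GiB
)
```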

Monitoring can support a multicloud strategy. It’s no accident that Google acquired Stackdriver, which began life as a cross-platform monitoring platform. Google still pitches the value of multicloud monitoring as it seeks to compete more effectively with AWS Cloud and Microsoft Azure.
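
The pattern underneath is simple: wherever a workload runs, its metrics land in one sink so they can be compared side by side. A toy sketch of that fan-in, with no real vendor API involved:

```python
import time

# Toy sketch of multicloud metric fan-in: one sink, many clouds.
# Nothing here is a real monitoring vendor's API.
class MetricSink:
    def __init__(self):
        self.points = []

    def record(self, cloud: str, metric: str, value: float) -> None:
        self.points.append((time.time(), cloud, metric, value))

sink = MetricSink()
sink.record("aws", "cpu_utilisation", 0.42)
sink.record("gcp", "cpu_utilisation", 0.35)
# with both clouds in one sink, side-by-side comparison is trivial
```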

CI/CD is another on-ramp that can support some platform portability. See, for example, Jenkins Kubernetes toolchains. Spinnaker, from Netflix, was designed explicitly for multicloud CD. Oracle acquired Wercker with this kind of promise in mind.

Configuration as code. Chef, Puppet and Ansible automation scripts can also support application portability. Chef Habitat has the notion of a “supervisor” rather than a hypervisor, so that code once developed can be deployed consistently across containers, cloud, VMs, or bare metal.
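
The shared idea behind these tools is declarative, idempotent convergence: describe the desired state, act only when the actual state differs, and the same script can run safely on any substrate, any number of times. A minimal sketch of that pattern in plain Python; ensure_file is a hypothetical helper, not Chef’s, Puppet’s or Ansible’s API.

```python
import os

# Sketch of the idempotent "converge" pattern behind configuration-as-code
# tools. ensure_file is a hypothetical helper for illustration: it acts
# only when actual state differs from desired state, so re-running it
# is always safe.
def ensure_file(path: str, contents: str) -> bool:
    """Return True if a change was applied, False if already converged."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == contents:
                return False  # desired state already holds
    with open(path, "w") as f:
        f.write(contents)
    return True

changed = ensure_file("/tmp/app.conf", "port = 8080\n")
print("applied change" if changed else "already converged")
```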

Cost as a management challenge. Companies pursuing a multicloud strategy are going to need good third-party spend management tools to help them decide where to run workloads. There are many vendors in the space, e.g. Cloudability.
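
Even the crudest placement decision needs comparable price data across providers. A toy sketch follows; the hourly rates are placeholders, not real cloud prices.

```python
# Toy workload-placement choice on hourly price alone. The rates are
# placeholder values, not real prices; real spend management tools also
# weigh egress charges, committed-use discounts and data gravity.
hourly_rate = {"aws": 0.096, "gcp": 0.089, "azure": 0.092}

def cheapest(hours: float) -> str:
    return min(hourly_rate, key=lambda cloud: hourly_rate[cloud] * hours)

print(cheapest(720))  # e.g. one month of runtime
```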

Data gravity is another challenge. Storage costs are low enough that it is potentially viable to maintain the same data sets in multiple clouds, but data governance and regulatory management are going to make such a strategy extremely management-intensive.

The list goes on. In sum, it is certainly possible to adopt a multicloud strategy, and there are multiple control points and vectors for doing so. However, if organisations are adopting multicloud for portability reasons, rather than to take advantage of the respective strengths of particular clouds for particular apps and workloads, they’re going to have a tough job justifying the management overhead for anything beyond the most basic Infrastructure as a Service workloads. Convenience is the killer app.

AWS, Citus, Docker, MongoDB and Red Hat are clients.

Comments

  1. Great post. I might add that OpenStack is being packaged by companies like Stratoscale, and in containers too…

  2. Data gravity is an interesting one. Data gravity essentially decreases as latency and bandwidth improve: better networking components ultimately reduce data gravity. The counterpoint is that as storage costs decrease, we store more data, which increases data gravity.

    In terms of compliance as a reason for data gravity, I don’t see this as a long-term challenge. Most countries will eventually harmonise on a limited number of compliance standards allowing interchange of data. The technical challenge of data gravity is still the bigger one.

