Rancher Labs recently ran an analyst day in San Francisco. The event had thirteen customer speakers, which, given Rancher claims to have 200 paying customers, is a pretty decent chunk of the installed base. The talks were high on signal and low on noise – every speaker had interesting technical insights to impart.
Rancher CEO Sheng Liang kicked off proceedings with a solid take on platforms, with respect to power versus simplicity. AWS may have started simple, but it’s now undeniably a powerful platform with an advanced set of primitives that compare very favourably with traditional enterprise suppliers such as VMware.
So customers that say they want open platforms to avoid lock-in – by using vanilla Kubernetes, for example – are faced with a dilemma: openness versus power and convenience. Generally people choose convenience, which is one reason AWS dominates the tech market today.
“It’s not enough to be able to work with multiple clouds – the solution has to be better than using the clouds directly,” said Liang. “To get people to use multiple clouds you need something better than AWS.”
So that’s the challenge for Rancher and the rest of the industry. Offering portability isn’t going to be enough – the cross-platform experience needs to be demonstrably better.
The idea that we should treat cloud infrastructure and microservices as cattle rather than pets is now widely accepted in the industry – VMs and containers are transitory and disposable. In an age of continuous deployment and scale-out apps, images and services aren’t meant to last. Patching is replaced by immutable infrastructure.
When Docker took off, Rancher Labs ran with the disposability narrative and created its own container orchestration and management platform, called Cattle. Rancher 1.6 gained a reputation for being easy to deploy and manage, with Cattle managing the Docker images built on developer laptops. Kubernetes, on the other hand, while very powerful, is not simple to manage.
But Kubernetes established itself as the de facto deployment environment for container-based services. For version 2.0 Rancher therefore took Cattle and put a bolt to its head. Rancher actually lived the disposability we’re all talking about. Customers were concerned though – having been through selection processes and deciding that Cattle was more appropriate to their needs than Docker Swarm, Apache Mesos or Kubernetes, their platform choice was going to be deprecated. Rancher isn’t alone here – Docker itself has to manage customer inertia around Swarm, for example. Mesosphere now supports Kubernetes on DC/OS.
Kubernetes is not the competition, it’s the environment in which you compete.
The Rancher event was probably most telling and useful because this customer tension was exposed. Customers had a degree of cognitive dissonance: on the one hand, they wanted to just keep using Cattle, while on the other they realised that the Kubernetes juggernaut is unstoppable. Meanwhile, for Rancher, standardising on Kubernetes was always going to be easier from an engineering perspective than supporting multiple third party orchestration engines.
So for Rancher 2.0 the job is to provide the familiar Rancher experience – API, command line interface (CLI), user interface (UI), Compose support and so on – but on top of Kubernetes pods. Supporting Kubernetes natively means all the usual tooling works for organisations looking to use kubectl, Helm charts and the like.
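To make “the usual tooling works” concrete: a cluster managed by Rancher 2.0 should behave like any conformant Kubernetes cluster, so stock commands apply. A minimal sketch – the kubeconfig path, manifest, chart and release names here are all hypothetical:

```shell
# Use the kubeconfig Rancher generates for a managed cluster (path is hypothetical)
export KUBECONFIG=$HOME/.kube/rancher-cluster.yaml

# Standard kubectl commands work unchanged
kubectl get pods --all-namespaces
kubectl apply -f deployment.yaml

# Helm charts install as on any conformant cluster (Helm 2 syntax, current at the time)
helm install stable/prometheus --name monitoring
```

The point is that nothing Rancher-specific appears in the workflow – teams keep their existing Kubernetes muscle memory.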
Rancher also plans to deliver better integration with tools such as Prometheus (for monitoring and metrics), and role-based access control. Rancher has its own authentication model and supports SAML, LDAP and Microsoft Azure Active Directory. Users can set alerts and thresholds – for example, if etcd memory consumption exceeds 70% it will notify the team on Slack.
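The etcd example above might translate into a Prometheus alerting rule along these lines. This is a sketch, not Rancher’s actual configuration – the metric choice and the 2 GiB memory budget are illustrative assumptions, and in practice Rancher exposes this through its UI, with notifications routed to Slack via Alertmanager:

```yaml
groups:
  - name: etcd-alerts
    rules:
      - alert: EtcdHighMemory
        # process_resident_memory_bytes is a standard Go process metric that
        # etcd exposes; 2 GiB is an assumed memory budget for this sketch
        expr: process_resident_memory_bytes{job="etcd"} > 0.7 * 2 * 1024 * 1024 * 1024
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "etcd memory consumption above 70% of its assumed 2GiB budget"
```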
Darren Shepherd, Rancher chief architect, took a slightly different take on the power vs simplicity theme, pointing to a new project Rancher is working on called Rio: “familiar and simple docker 1.11.x style UX on k8s, end to end including build, runtime, logging, monitoring, serverless”.
I asked about the balance of training to product, service and support. Shannon Williams, Rancher co-founder and VP of sales, said: “With Kubernetes most of the services are training. AWS didn’t need that, VMware didn’t need that. Tech waves don’t need that if products are easy to use”.
So what did the customers have to say? Here is a selection of views.
Sling TV is currently running containers on VMware on premises, and wants to avoid further lock-in by adopting vanilla Kubernetes. It has plans to burst to AWS, so portability is at a premium. Thus Rancher.
Toyota Connected was an interesting case study, partly because unlike the other customers, it had chosen Rancher 2.0, rather than 1.6 or earlier. That is, it chose Rancher because of Kubernetes, rather than in spite of it.
Ross Edman and Lucas Harms, Toyota Connected senior devops engineers, said: “Kubernetes isn’t perfect but it has enough batteries included functionality, and is sustainable.”
Toyota is using Kubernetes to support the back end services for communication with the “head unit”, the part of the car dashboard with radio functions, Bluetooth, networking and so on. Toyota wanted the flexibility to allow for rapid development, and use of various stacks (it writes software in Java and Elixir, which requires the Erlang virtual machine, for example). Toyota telematics for the Camry will constitute over 100 microservices. At launch the back end will need to be capable of supporting 15m vehicles. The cluster size is currently 20 to 30 nodes in HA configurations. It’s kind of incredible that software which began life as an open source implementation of the systems used to manage Google’s server fleet will now underpin car dashboards.
Edman continued: “Rancher Kubernetes Engine removes the work of Bring Your Own Containers. It’s not tied to underlying infrastructure. When we picked up RKE it really lowered startup costs in using other clouds.”
National Energy Research Scientific Computing Center (NERSC), part of the Department of Energy, was another fascinating use case – including running Docker on a Cray supercomputer. It uses containers for computational workloads – Shifter, a NERSC open source project, converts Docker images on the fly to run as unprivileged users in the Cray supercomputing environment – that’s 9000 nodes. NERSC also uses containers in more “traditional” fashion, for application development workflows. It chose Rancher for authentication, CLI, management tools, and policy enforcement.
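The Shifter workflow looks roughly like this on a Cray system running Slurm – a sketch based on Shifter’s documented CLI, with the image and script names being hypothetical:

```shell
# Pull a Docker image into Shifter's image gateway (image name is hypothetical)
shifterimg pull docker:myapp/simulation:latest

# Slurm batch script: the image runs as an unprivileged user on compute nodes
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --image=docker:myapp/simulation:latest
#SBATCH --nodes=4
srun shifter python ./run_simulation.py
EOF
sbatch job.sh
```

The appeal is that developers build ordinary Docker images on their laptops, while the supercomputer runs them without granting root – squaring container convenience with HPC security requirements.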
In conclusion – like other players in the container ecosystem, Rancher is now focusing on making Kubernetes easier to adopt and use. Kubernetes will be the infrastructure play for the next few years. I came away from the event with the impression that Rancher has a good basis to win new customers in that context.
Disclosure: Rancher paid some expenses for my trip, but is not a RedMonk client. Docker and Microsoft are RedMonk clients.