James Governor's Monkchips

Progressive Delivery at Sumo Logic


Last year I started talking about what I call Progressive Delivery, because I feel like the industry is missing a term to describe a set of approaches and technologies used by Web companies, such as feature flags, blue-green deployments, A/B testing at scale, and routing new services to a subset of users before broader rollout. I still haven’t nailed an elegant definition, though I quite like this from Carlos Sanchez at CloudBees:

“Progressive Delivery is the next step after Continuous Delivery, where new versions are deployed to a subset of users and are evaluated in terms of correctness and performance before rolling them to the totality of the users and rolled back if not matching some key metrics.”
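To make that definition concrete, here is a minimal sketch of the two moving parts it implies: picking the subset of users who see the new version, and deciding from key metrics whether to continue or roll back. Everything here is an illustrative assumption on my part, the feature names, thresholds, and percentages are made up and not any particular vendor’s API.

```python
import hashlib

def in_rollout_cohort(feature: str, user_id: str, percent: int) -> bool:
    """Deterministically place a user in or out of the rollout subset.

    Hashing feature+user gives each user a stable bucket from 0-99, so
    raising `percent` widens the cohort without moving existing users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Illustrative guardrails for "correctness and performance": roll back if the
# new version breaches an absolute limit or regresses >20% against baseline.
THRESHOLDS = {"error_rate": 0.01, "p99_latency_ms": 500}

def canary_passes(canary: dict, baseline: dict) -> bool:
    for metric, limit in THRESHOLDS.items():
        if canary[metric] > limit or canary[metric] > baseline[metric] * 1.2:
            return False
    return True

# Example: roughly 5% of users land in the new version's cohort...
print(in_rollout_cohort("new-search", "user-123", percent=5))
# ...and this canary would be rolled back because its error rate is 3%.
print(canary_passes(
    canary={"error_rate": 0.03, "p99_latency_ms": 240},
    baseline={"error_rate": 0.004, "p99_latency_ms": 210},
))  # -> False
```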

Definition aside, I’ve spoken to a bunch of folks with whom the idea has resonated, often with an aha moment, so I am going to keep working on research around it. I recently met Bruno Kurtic, founding VP of product and strategy at Sumo Logic, the cloud-based log monitoring and analysis platform. This is what he had to say when I asked him if the idea made sense:

“Progressive Delivery makes sense. We use our own product, Sumo Logic, to do this. We’re heavy users of feature flags and progressive rollout, and we’re multi-tenant. We have customers that are extremely bursty, such as online gaming companies with unexpected successes. Customers can create so much extra traffic that it’s effectively Distributed Denial of Service (DDoS) traffic. But we can turn any feature in our product on, and we can roll it out to one specific customer, in this region, for this particular use case. We can also do shadow testing in production. We have a number of machine learning techniques we expose as capabilities to customers, for example. A customer might complain our pattern recognition isn’t working. But how do we know that if we change the algorithm it won’t break the experience for other customers? We can silently spin up two clusters and test the performance of this algorithm. We do candidate testing of each service we roll out. How do we test it? We have a shadow copy of Sumo Logic we use for testing, industry regulations etc. We roll out a new service to 5% of our customers first. What sort of users choose to use this feature? We roll out the service then leverage our logs to understand the behaviours of the system and users. Logs are integral to understanding how new code is being shipped, and how you do A/B testing in production. We do testing in production.”
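The shadow-testing pattern Bruno describes, running a candidate algorithm on the same traffic as the current one while only the current one’s results reach the customer, can be sketched roughly as follows. The function names, stand-in “algorithms” and log fields are my own illustration, not Sumo Logic’s actual implementation.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("shadow-test")

def detect_patterns_v1(events: list) -> list:
    """Current production algorithm (trivial stand-in)."""
    return [e for e in events if "error" in e]

def detect_patterns_v2(events: list) -> list:
    """Candidate algorithm under shadow test (trivial stand-in)."""
    return [e for e in events if "error" in e or "timeout" in e]

def handle_query(tenant: str, events: list) -> list:
    # The customer always receives the current algorithm's answer.
    start = time.monotonic()
    served = detect_patterns_v1(events)
    v1_ms = (time.monotonic() - start) * 1000

    # The candidate runs silently on the same input; its output is logged
    # for comparison but never returned, so a regression can't hurt anyone.
    start = time.monotonic()
    shadow = detect_patterns_v2(events)
    v2_ms = (time.monotonic() - start) * 1000

    logger.info(json.dumps({
        "tenant": tenant,
        "v1_matches": len(served), "v1_ms": round(v1_ms, 3),
        "v2_matches": len(shadow), "v2_ms": round(v2_ms, 3),
        "results_differ": served != shadow,
    }))
    return served

handle_query("acme-corp", ["login ok", "error: disk full", "timeout on shard 3"])
```

In a real deployment the two code paths would run on separate clusters, as Bruno describes, but the contract is the same: the shadow result is observed in the logs, never served to the customer.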

Bruno argued that logs are particularly useful in the context of Progressive Delivery, because developers don’t know what problems are going to emerge with new services as they are rolled out. Metrics, he said, have to be premeditated.

“But logs are basically developers writing messages to themselves in the future. They’re going to be woken up at 3am, then the next morning say, ‘I wish I had this extra data.’ How do you take data that isn’t even in there and take advantage of it? Developers create this information to understand how something works. Ultimately key performance indicators, metrics and data send a signal about what is right and wrong, and the logs tell you what is wrong.”
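One way to read “messages to themselves in the future” is structured log lines that carry more context than any premeditated metric would, so that questions nobody anticipated can still be answered later. A small illustrative sketch, with field names I have made up:

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def log_event(message: str, **context) -> None:
    """Emit one structured, machine-queryable log line."""
    logger.info(json.dumps({"message": message, **context}))

# During a progressive rollout, tagging every request with the flag variant
# it hit lets the logs answer "who used the new feature, and what went wrong?"
log_event(
    "search completed",
    request_id=str(uuid.uuid4()),
    tenant="acme-corp",
    flag_variant="new-search",  # hypothetical flag from the earlier sketch
    duration_ms=182,
    result_count=0,             # the detail you'll wish you had at 3am
)
```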

We’re at an interesting point in the industry, where we’re seeing significant tooling fragmentation, yet at the same time a lot of interest in a convergence of logging, monitoring, and tracing. Progressive Delivery is a use case where an integrated toolset will make a lot of sense.


full disclosure: this is an independent piece of research but CloudBees is a client.
