James Governor's Monkchips

Research in 2020: stuff I am thinking about


So it’s a new year, and I thought it might be useful to jot down some of the things I am likely to be thinking about, researching, and publishing on in 2020.

First off is Progressive Delivery, a term I came up with to describe a basket of approaches to application delivery that reduce risk by routing traffic to selected, targeted user or infrastructure segments before broader deployment, allowing for more flexibility for experimentation and greater safety – think A/B testing, canarying, feature flags, feature management, and blue/green deployments. Progressive Delivery isn’t new; rather it builds on the foundations of Continuous Integration and Continuous Delivery, but it has gained currency as a term in the industry from people and companies I greatly respect, I believe because it scratches an itch we continue to have. While industry leaders such as Charity Majors argue that we must improve our delivery processes so that we can be confident enough to ship code at any time, including Friday afternoons – “fear of deploys is the ultimate technical debt” – for the vast majority of enterprises the idea of rolling out new features on a Friday afternoon still sounds like a horrorshow.
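To make the segmented-rollout idea concrete, here is a minimal sketch of a percentage-based feature flag check – not any vendor's implementation, just the underlying technique. The function name, feature name, and user ids are illustrative.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a gradual rollout.

    Hashing the user id together with the feature name gives each
    user a stable bucket from 0-99, so raising `percent` only ever
    adds users to the cohort -- existing users never flip-flop.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Expose a hypothetical new checkout flow to 5% of users first,
# then widen the percentage as confidence grows.
if in_rollout("user-42", "new-checkout", 5):
    pass  # serve the new code path
```

The key design property is determinism: because the bucket is derived from a hash rather than a random draw, widening the rollout from 5% to 50% keeps everyone who already had the feature, which is what makes canary cohorts comparable over time.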

We all have work to do in getting to a place where everyone is more scientific about rollouts, and hopefully Progressive Delivery is part of the story that will get us there. Progressive Delivery is about putting the product owner in charge of when features are rolled out. Progressive Delivery is currently being packaged and automated in tooling by vendors such as CloudBees, GitLab, Split.io and Weaveworks. Launch Darkly’s Trajectory conference in April will have a lot of related content this year – the CFP is here. Whether you’re practicing Progressive Delivery, or building tools to support it, I’d love to talk to you.

Of course feature management and experimentation only really make sense if you can close the loop, and query the system from end to end at any time in order to understand performance, find bugs, and make fixes. That’s one reason I am so excited about the rise of Observability (Charity Majors of Honeycomb again!) as a guiding principle for application development and service management. 2019 saw the existing APM vendors begin to turn their eyes towards Observability. As with so many important technology areas, Observability is a mindset and set of practices as much as it is a technology, though of course the underlying sensor and storage technologies also matter, with the right granularity, cardinality and flexibility to map to the domain of modern distributed/cloud native applications.
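A sketch of what "the right cardinality" means in practice: one wide, structured event per unit of work, carrying high-cardinality fields you can slice by after the fact. The field names and the JSON-to-stdout transport are illustrative assumptions, not any particular vendor's API.

```python
import json
import time
import uuid

def emit_event(fields: dict) -> str:
    """Emit one wide, structured event per unit of work.

    High-cardinality fields (user_id, trace_id, build id) are what
    let you ask arbitrary questions of the data later, rather than
    pre-aggregating into a fixed set of dashboard metrics.
    """
    event = {
        "timestamp": time.time(),
        "trace_id": str(uuid.uuid4()),
        **fields,
    }
    return json.dumps(event)  # in practice, ship to your observability backend

emit_event({
    "service": "checkout",
    "user_id": "user-42",          # high cardinality: unique per user
    "duration_ms": 183,
    "feature.new_checkout": True,  # correlate behaviour with feature flags
})
```

Note the last field: tagging events with the feature flags in effect is how Progressive Delivery and Observability close the loop – you can compare error rates or latency between the cohort with the flag on and the cohort without it.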

So we have Progressive Delivery, and total system Observability, but what about the automation approach to rolling out technology, making fixes and or rollbacks?

The term GitOps, coined by Alexis Richardson at Weaveworks, does a great job of capturing how modern development teams are developing, describing and managing systems with automation. Using Git as the system of record, and having all changes made with pull requests, means that you’re not dealing with infrastructure where you’re not sure who has done what, with related configuration creep and sprawl. You can’t have developers randomly SSHing into servers and making changes with Bash scripts, and still expect a team to be able to track performance of the system overall. GitOps maps particularly well to the kind of declarative container-based infrastructures we’re building with Kubernetes, but makes just as much sense with serverless systems such as AWS Lambda. As Kubernetes moves down the stack in terms of the targeted abstraction layer, GitOps will become even more important. In 2019 Microsoft Azure, AWS, and Google Cloud all coalesced around GitOps as an approach to configuration as code, and we’re going to see a lot more from the industry going forward.
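The declarative model at the heart of GitOps boils down to a reconcile loop: compare the desired state stored in Git against the actual state of the running system, and compute the actions that close the gap. Here is a toy sketch of that loop – the resource names and specs are made up for illustration, and a real operator would of course query a live cluster rather than take dicts.

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compute the actions needed to drive the live system toward
    the desired state stored in Git -- the core GitOps loop."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Desired state comes from manifests merged via pull request;
# actual state is observed from the running system.
desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
print(reconcile(desired, actual))  # ['update web', 'create worker', 'delete legacy']
```

Because every change flows through a pull request before it lands in `desired`, the Git history doubles as an audit log, and a rollback is just reverting a commit and letting the loop converge again.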

Another area I will be digging into this year is Software Delivery Management based on telemetry of application delivery tools, to get a better handle on the performance of development teams. As the cloud has become pervasive, so has the ability to track what developers are doing, while Git has become the standard communication and collaboration mechanism, and standardisation makes everything easier from an aggregation and comparison standpoint. CloudBees did an excellent job in 2019 of pulling this kind of thinking about metrics together with the term Software Delivery Management – it’s as good a term as any. I am interested in talking to folks about the metrics and measurements they use, or the tools they use to track them. Also last year GitHub acquired Semmle, and you can do some amazing things with the likes of source{d}. As an industry we need to get much better at measuring and explaining our work – and this will be another research focus for me.

Drop me a line at jgovernor at redmonk.com if you’d like to discuss any of the ideas laid out in this post. This won’t be the entirety of my work of course, something new is bound to come up, and I am not going to stop tracking all the usual goodness in Agile, Data management, DevOps, Kubernetes, Middleware and so on, but I just wanted to quickly flag some areas of interest.


disclosure statement: CloudBees, Google Cloud, GitHub, GitLab, Launch Darkly, and Microsoft are all clients but this is not a sponsored post, and the analysis is independent.

2 comments

  1. Hi James. Thanks for sharing your plans!
    Could you explain or point me to some further reads on “GitOps maps particularly well to the kind of declarative container-based infrastructures we’re building with Kubernetes…”? I’d like to understand this better.
    Enjoy 2020,
    -Vitaliy

    1. Sure Vitaliy. You can deploy K8s based systems imperatively or declaratively. With GitOps, Git becomes the place where you store a versioned, desired state with your manifests, which thus supports reproducible deployments. With Kubernetes, you can create an API object to represent what you want the system to do, then the components in the system work to drive towards that state. Does that make sense? Here’s a longer explanation https://www.weave.works/blog/automate-kubernetes-with-gitops
