Git-based workflows are currently underappreciated as a compliance tool, but there is a growing understanding in the industry that they represent a significant opportunity. One of the biggest challenges in any compliance project is understanding who did what, and when. With a GitOps-based approach you naturally track not just what changed in a system, but who made the change.
A lot of the work in compliance – the job to be done – is being able to track changes to records or infrastructure. Whether you are providing information to auditors, or just trying to meet internally defined standards, historically you’re often running around after the fact with a spreadsheet, trying to reconstruct an audit trail. Having a system of record in place can alleviate this overhead and leave you free to do productive work. Using GitHub or GitLab means identity and access management are effectively baked into how you work: with pull requests or merge requests, permissions are part of the workflow itself. Who signed off on a change, and when? Git-based workflows capture that automatically.
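To make that concrete, here is a minimal, hypothetical sketch using plain git commands – a throwaway repository standing in for a configuration repo, a tracked change, and a one-line query that answers “who changed what, and when”. Names and the file are invented for illustration:

```shell
# Hypothetical sketch: a throwaway repo standing in for a config repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.name "Alice Auditor"
git config user.email "alice@example.com"

# A tracked infrastructure change:
echo "replicas: 3" > deployment.yaml
git add deployment.yaml
git commit -q -m "Scale web tier to 3 replicas"

# Who changed what, and when -- the raw material of an audit trail:
git log --format='%h | %an | %ad | %s' --date=short -- deployment.yaml
```

The same metadata is what a GitHub or GitLab pull request surfaces through its UI and API – the point is that the audit trail is a by-product of doing the work, not a separate exercise.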
In systems today you often have people making ad hoc infrastructure changes, SSHing into servers, VMs or containers without controls in place. That’s potentially a big problem in a regulated environment. GitOps can potentially be used to enforce guardrails. Meanwhile, the vast majority of system downtime is caused by configuration changes – yet we do a frankly horrific job of tracking those changes in IT.
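One simple guardrail of this kind, assuming a GitHub-hosted repository: a CODEOWNERS file (the team names here are hypothetical) that routes any infrastructure change through a designated reviewer, typically combined with branch protection so nobody can push to the mainline directly:

```
# .github/CODEOWNERS -- hypothetical teams for illustration
/infrastructure/  @example-org/platform-team
/policies/        @example-org/security-team
```

With that in place, a change to anything under /infrastructure/ cannot merge without sign-off from the platform team – a control that is both enforced and recorded.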
One of the current buzzwords making the rounds is the “software supply chain.” It’s perhaps worth thinking about actual supply chain software and the way the industry delivered on it. Before enterprise resource planning (ERP) systems became commonplace, data was scattered everywhere: in spreadsheets, paper-based systems, and in separate warehousing, manufacturing, logistics, and financial systems. ERP was a packaging exercise intended to let organizations track everything that happened in a supply chain, with a system of record – a single source of truth – at its heart.
Git is beginning to play a similar role in software: from planning and designing a system or application change, through to the new code and configuration. Our source code management system has become a key source of truth, and can reveal a host of useful metrics about team and individual performance. That same metadata could be applied to compliance reporting.
One of the things that helped drive and accelerate the ERP wave as it was embraced by enterprises around the world was business process re-engineering (BPR). The two went hand-in-hand: BPR and ERP software, provided by a consulting company and a software company respectively. Today, we’re really looking to package the best practices in modern software development, and the kind of approaches and opinionated methodologies that the most effective companies use. Git is a common denominator in pretty much all elite performing software development organizations.
Talking of elite performers, Dr. Nicole Forsgren has helped the industry understand the practices common to elite and high performing software organisations through her DevOps Research and Assessment (DORA) work and the book Accelerate (with contributions from Jez Humble and Gene Kim). Forsgren is now leading research into individual and organisational performance at GitHub.
We’re frankly still trying to work out the best methodology to help organisations modernise traditional IT processes – the equivalent of a BPR for the shift from Waterfall to Agile, DevOps and production excellence. There have been some successes: Continuous Delivery, Agile, Test Driven Development (TDD), and so on. But organisations are still struggling to modernise at scale. Pivotal did a great job of packaging an opinionated methodology, the Pivotal Way, with opinionated software (Cloud Foundry) in some Fortune 500 organisations (Allstate, Comcast, Staples, etc.), allowing some repeatability, but it never became industry redefining. With the rise of Kubernetes it became clear we’d be looking at a different software stack, one based on Kubernetes.
But Git is helping us move forward as an industry, as a common underlying platform, not just in the Kubernetes world, but in pretty much every community and platform in software.
If we think of Martin Fowler’s precepts for Continuous Integration, some of the directives are increasingly commonplace:
- Maintain a Single Source Repository
- Automate the Build
- Make Your Build Self-Testing
- Everyone Commits To the Mainline Every Day
- Fix Broken Builds Immediately
When we get to Fowler’s “Everyone Can See What’s Happening” and “Automate Deployment,” that’s a sweet spot for GitOps: visibility and declarative automation.
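Those two precepts map directly onto the declarative model. As a minimal sketch – the repository URL, paths and names here are hypothetical – an Argo CD Application that continuously reconciles a cluster against a Git repository looks something like this:

```yaml
# Hypothetical Argo CD Application: the cluster converges on what Git says.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/infra.git  # hypothetical repo
    targetRevision: main
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert out-of-band changes made directly to the cluster
```

Everyone can see what’s happening, because the repository is the deployment; and with selfHeal enabled, an undocumented manual change to the cluster is automatically reverted to match Git.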
Of course we shouldn’t get carried away with the idea that GitOps is the only way to do this stuff. Fowler was writing before Git was even a thing. Some of these patterns are also common to Infrastructure as Code, and organisations already use tools like Ansible, Chef, Puppet and Hashicorp’s Terraform to define guardrails and support compliance initiatives in enterprise IT. Declarative configuration management, with feedback loops, is a well understood discipline.
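By way of illustration, a single Ansible task (the handler name here is hypothetical) already expresses the declarative, guardrail-friendly style these tools share – state what must be true, and let the tool converge on it:

```yaml
# Hypothetical Ansible task: declare the desired state, not the steps.
- name: Ensure sshd disallows password authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PasswordAuthentication'
    line: 'PasswordAuthentication no'
  notify: restart sshd  # hypothetical handler defined elsewhere in the playbook
```

Run repeatedly, the task changes nothing once the file is compliant – which is exactly the property auditors care about.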
However, if we’re talking about software for tracking changes in any set of files, Git is indeed the common industry standard, and as such has huge utility as a platform to build higher level services on.
GitOps emerged from the ecosystem fostered by the Cloud Native Computing Foundation (CNCF). It was invented by engineers at a company called Weaveworks. CEO Alexis Richardson was smart enough to run with the idea and market the name. Now companies across the industry have adopted it. The term is nicely resonant: engineers, developers and platform people can all hear “GitOps” and kind of know what it means.
I spoke to Richardson recently, and argued that if GitOps is truly to find its place as a compliance tool, it needs to be positioned accordingly: GitOps for HIPAA, GitOps for ISO/IEC 27001, GitOps for FedRAMP, and so on. It potentially needs to be a conversation for auditors and those directly involved in compliance. Hashicorp does some of this, explicitly marketing its automation tooling alongside its Vault secrets management technology for compliance with specific standards – “Terraform can get you about 75-85% to FedRAMP” and PCI. Security and incident management companies often lead with regulations. Of course automation software is not the same thing as logging software, but if we’re talking about markets smooshing together, they are certainly adjacent and related.
Richardson argues though that if we’re shifting compliance left, into a developer concern, it should be more fluid and less constrained by trying to sell into security teams etc. He used the example of Snyk, which doesn’t sell to security teams, but development organisations, as the kind of pattern that GitOps is likely to take, as it sees broader adoption for these kinds of challenges.
Compliance to internal standards is particularly important in the Kubernetes space, because of the complexity of the core platform itself, the likelihood of infrastructure drift, and fragmentation within the wider ecosystem. One of the industry conversations directly aligned to compliance is policy. Hashicorp Sentinel is positioned for Policy as Code. Cloud native players are investing in and contributing to Open Policy Agent, while Kyverno (it’s named after me, naturally) is another policy engine designed for Kubernetes. Red Hat acquired StackRox to improve its container native security credentials.
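As a sketch of what Policy as Code looks like in practice – the policy and label names here are invented for illustration – a Kyverno ClusterPolicy can refuse any Deployment that lacks an owner label, exactly the kind of machine-enforced guardrail an auditor can point to:

```yaml
# Hypothetical Kyverno policy: every Deployment must declare an owner.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  validationFailureAction: Enforce   # reject non-compliant resources at admission
  rules:
    - name: check-owner-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Deployments must carry an 'owner' label for audit purposes."
        pattern:
          metadata:
            labels:
              owner: "?*"   # any non-empty value
```

Because the policy itself lives in Git, both the control and its change history are part of the same audit trail.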
Kubernetes is also seeing increasing adoption in heavily regulated industries – such as telecoms, financial services, and the public sector – which have formal requirements for the platforms they adopt. Industry regulations and security standards are indeed market-making mechanisms. Whether bottom up or top down, I would argue that GitOps and similar patterns will play an increasingly important role in supporting compliance initiatives.
Disclosure: Hashicorp, GitHub, GitLab, VMware (Pivotal), Red Hat and Weaveworks are RedMonk clients.
Alex Hudson says:
November 19, 2021 at 1:43 pm
I like this overall approach, and having used gitops- and CD-type approaches within healthcare, I don’t think there are any great reasons not to do this. In particular, moving from CAB-based processes to small commits de-risks deployments and pushes decision-making down into engineering.
One thing I observe with k8s as a whole is that teams typically run a consolidated infrastructure (which makes sense – lots of containers on small numbers of pods is efficient), but this ends up “shifting down” significant security controls: network segregation is software within k8s rather than a VPC within AWS (for example). k8s admins tend to be pretty all-powerful.
Figuring out how to maintain strong audit, division of responsibility, checks/audit on security controls, and the overall “validation” that a k8s software-based definition of a control is strong enough is a tough problem. I think it’s better than the alternative – running disparate pods or even separate k8s clusters – and ensuring that your CD-deployed infra still means your compliance goals / control requirements feels a very new area!
James Governor says:
November 19, 2021 at 2:21 pm
Thanks Alex. Great feedback. Obviously this post was to some extent a trial balloon, so this is extremely helpful. Strongly agree about division of responsibility/separation of concerns. Yeah K8s security/audit/controls is a whole thing we need to work out. Perhaps I should have only focused on that here, but I do think the regulatory issue is always such a good driver of that kind of work.
B W says:
November 30, 2021 at 12:07 am
I find the focus on git a bit odd, git isn’t the only source control system and IMO it’s not the best for this type of usage. Git misses 2 key capabilities of other VCS’s, specifically intra repo merge/copy and granular security. There are of course systems that ‘bolt’ security on top of git, however it’s not git.
Trying to maintain a hierarchy of configurations in git is made much more difficult by the lack of intra repository copy/merge. Using SVN, for example, allows you to have a base configuration and copy that to as many other ‘child’ configs in the same repository. Child configurations can be modified at will, and when you change the base config, you can propagate that to ‘child’ items in an automated fashion with a simple script, without risk of losing changes.
I’ve been using this tactic effectively for a variety of configuration controls, and it’s saved tons of time. Also makes controlling software supply chain simpler in cases where you’ve had to make modifications to third party sources that upstream hasn’t incorporated yet.
James Governor says:
December 1, 2021 at 5:17 pm
BW could have said GitHub, GitLab and GitPod. Fact is, Git-based systems are winning and becoming the standard in the market. That said, the patterns are applicable with other technologies of course – as I tried to point out in my IaC section.
Alexis Richardson says:
January 18, 2022 at 11:21 am
It may be more useful to have signed OCI images downstream from Git, if supply chain is a concern. A signed git commit can provide an attestation of the user, but it cannot be inherently relied upon to verify the authenticity of an artefact. This is due to the nature of the git protocol itself: https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits
Rob Hirschfeld says:
November 30, 2021 at 3:36 pm
This is really important analysis of the impact of immutable controls for infrastructure. With GitOps, we’re often talking about just the configuration or declarative state. What about the downstream automation that executes and maintains those configurations? That also needs to be a managed, immutable and observable part of the system.
James Governor says:
December 1, 2021 at 5:15 pm
Thanks Rob. Yes, the pattern really appears to make sense, and declarative container infrastructures map really well to it.
Radar trends to watch: December 2021 – O’Reilly says:
December 2, 2021 at 5:48 am
[…] Because Git by nature tracks what changes were made, and who made those changes, GitOps may have a significant and underappreciated role in compliance. […]
Alexander Hutton says:
December 12, 2021 at 1:57 pm
The drag on regulated enterprise IT (esp. global orgs) is such that this is inevitable. As a security exec – I long for the day when I can “push the audit button” and have:
1.) Evidence that IT and the business did a thing (utilize PaaS, deploy a containerized app, etc.),
2.) Evidence that the “thing” was according to expectation, and that the expectation itself was aligned to the expectation of oversight (regulators, auditors, etc.),
3.) Evidence that when the “thing” did “things” (that is, events of security interest occurred), they were dispositioned.
There’s an old saw in security that the Threat Agent wakes up in the morning and works 8 hours a day to monetize. The Security person wakes up and spends 2 hours in meetings, 3 hours supporting audit, 1 hour for lunch (if lucky), 90 minutes actually defending against the actions of the Threat Agent, and another 30 minutes finalizing the paperwork.
I will invest ALL.DAY.LONG. To recover those other 5.5 hours for my team.