On September 9, 2023, I attended a Snyk-hosted webinar called “AI Hallucinations and Manipulation: How to use AI coding tools securely.”
The webinar was interesting and well done, but what I want to focus on here are the results of some of the audience polls. The responses got me thinking about the intersection of AI code generation and DevSecOps philosophy. Both the assistive AI tools and the security tools will inevitably evolve rapidly in the months to come, but what the polls raised for me was a broader question about developer perceptions of productivity and security.
At the beginning of the session, the moderators polled the audience. The first question asked: If you use AI-generated code tooling, do you think it improves your efficiency?
The results were:
- 42% YES I use it and it MODERATELY improves my efficiency
- 25% NO my company blocks us from using it
- 16% NO I choose not to use it
- 11% YES I use it and it GREATLY improves my efficiency
- 6% YES I use it but don’t see any improvement in my efficiency
The second question asked: How secure is AI-generated code?
The results were:
- 42% Moderately LESS secure than code I write
- 22% As secure as the code I write
- 19% Much LESS secure than code I write
- 15% Moderately MORE secure than code I write
- 2% Much MORE secure than code I write
Reminder: this is data from a self-selected population of Snyk webinar attendees, and I’m unsure about the sample size (though the chat was active and the webinar seemed well attended). In other words, these should not be treated as statistically significant data points. (That said, there were some interesting nuggets here. For example, the 59% of respondents who reported using AI coding tools is a useful check on AI tool penetration as of September 2023.)
What I found most interesting, and what I want to dig into today, is that the plurality of respondents perceived that they were moderately more productive with AI tooling, but also that this tooling produced moderately less secure results than what they could write on their own.
What are some possible narratives that would support both of those perceptions being true at the same time?
1. It’s faster to have an AI tool spit out insecure code and then correct it than it is to write secure code from scratch.
In this interpretation, AI offers a big enough productivity gain that it outweighs any additional work required to correct the security flaws in the generated code (see the sketch after the second narrative below).
2. Developers conceptually separate “productivity” and “security.”
The DevSecOps movement is predicated on the idea that the sooner you surface an issue in the SDLC, the easier and less costly it is to address it. Bugs that are found while the developer is writing code are much easier to address than bugs that are running in production. Cost of resolution increases the longer it takes to notice, isolate, and fix an issue. In a DevSecOps culture, developers take ownership of the security of their code so that they can find security issues early.
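To make the first narrative concrete, here is a minimal sketch of the generate-then-correct workflow. It’s a hypothetical illustration in Python using the standard-library sqlite3 module; the schema and function names are invented, but the flaw (string-interpolated SQL) is representative of the insecure patterns coding assistants can produce, and the fix (a parameterized query) is the kind of quick correction a developer makes afterward.

```python
import sqlite3

# Throwaway in-memory database, just for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

def find_user_insecure(email: str):
    # The kind of lookup an assistant might draft: string interpolation
    # leaves the query open to SQL injection.
    query = f"SELECT id, email FROM users WHERE email = '{email}'"
    return conn.execute(query).fetchall()

def find_user_secure(email: str):
    # The quick human correction: a parameterized query, so the driver
    # handles escaping and the input can't change the query's structure.
    query = "SELECT id, email FROM users WHERE email = ?"
    return conn.execute(query, (email,)).fetchall()

# A malicious input that the insecure version happily obeys.
payload = "' OR '1'='1"
print(find_user_insecure(payload))  # returns every row in the table
print(find_user_secure(payload))    # returns no rows, as intended
```

The correction itself takes seconds once it’s spotted; the open question is whether spotting and fixing it gets counted as part of “productivity” or as a separate security chore.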
Snyk is an industry leader in the DevSecOps space; given that they were polling a group of technologists gathered in a Snyk forum, is the second interpretation plausible?
I suspect that Snyk would strongly argue for the former interpretation of the poll results: that the productivity gains from AI outweigh the cost of addressing its security shortcomings. And in my experience, it’s amazingly helpful to have Copilot or ChatGPT template something out and then tweak it as needed.
But what if there’s a grain of truth in the second interpretation?
It’s possible that developers, when asked to assess their own productivity without any outside prompting or qualification of what counts as productivity, instinctively think about the act of generating code before they think about the act of securing it.
Maybe the DevSecOps mindset is more nascent than we think.