In this RedMonk conversation, Kate Holterhoff chats with Daniel Stenberg, founder of the cURL project, about the challenges and implications of AI-generated code in open source software, particularly focusing on the phenomenon known as AI Slop. They discuss the impact of AI on bug reporting, the effectiveness of bug bounty systems, and the importance of community engagement. Daniel shares insights on how AI can be both a tool for good and a source of problems, emphasizing the need for communication and collaboration in the open source community. The conversation also touches on the role of platforms like GitHub and the future of open source in an AI-driven world.
Links
- LinkedIn: Daniel Stenberg
- daniel.haxx.se
- Mastodon: @[email protected]
- The cURL Project
- Kate Holterhoff, “AI Slopageddon and the OSS Maintainers,” RedMonk Blog, 3 Feb 2026.
Transcript
Kate Holterhoff (00:04)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff. I'm a senior analyst at RedMonk. And with me today, I have Daniel Stenberg. He's the founder and lead developer of the cURL project. He's also the president of the European Open Source Academy and a cURLmaster at wolfSSL. Daniel, you'll have to tell us exactly what that means.
Daniel Stenberg (00:26)
I don’t know really, but you know I need to have a title sometimes so I need to come up with something creative. But basically that means that I work on cURL every day as much as possible.
Kate Holterhoff (00:35)
Wonderful. I love to hear stories when folks get to work on their open source projects as their day job. This is fantastic.
Daniel Stenberg (00:43)
Yes, yeah, it is hard sometimes, when you start a project and do it as a spare-time thing, to figure out how you actually convert that into something you do for a living. And of course people tend to mention that you shouldn't work with your spare-time hobby, because that's also a way to sort of ruin the fun. But I did the transition in 2019, and since then I'm selling services or support
Kate Holterhoff (00:55)
Right. Yeah.
Daniel Stenberg (01:11)
primarily on cURL to whoever needs it.
Kate Holterhoff (01:14)
Yeah, and they couldn’t ask for a more knowledgeable person to work with on those sort of projects, I can imagine. So, seems like a good fit.
Daniel Stenberg (01:21)
Yeah, that’s the plan, yeah, the thought.
Kate Holterhoff (01:24)
All right, fantastic. So I asked Daniel to join me today to chat about AI slop specifically. It seems to be a topic everyone is writing about, all the thought leaders. A lot of things happened in January where folks were wrestling with this larger issue. And I think the cURL project in particular kind of set the bar on how maintainers are going to be grappling with this
looming issue of how we are going to be dealing with AI-generated code in our slop era, if you will, the onslop. So I published a RedMonk blog post on it. cURL came up a lot. I think maybe it would be smart for us to begin our conversation by, Daniel, just having you lay out what happened in January on the AI slop front at cURL.
Daniel Stenberg (02:13)
Well, I think it started way before January, right? So I think what happened in January was more like it continued or it got even worse than it was before. So it was more of a trend that has been going on for the, I would say probably the last two years, right? So the frequency of just stupid reports generated with the help of AI has just been increasing over time. And we have seen it coming and we have
Kate Holterhoff (02:16)
Okay.
Oh no.
Daniel Stenberg (02:43)
I don’t know, sometimes you see it coming and you just cross your fingers and you hope that it’s going to become better somehow. But there’s really no indication that anything has actually changed that would make it better. But you don’t want to really face it. So early on actually mid 2025 something we started to think about what should we do about it because it’s becoming a problem. So we had this, yeah, maybe we should remove the money thing from the bug bounty as one of the
plans. And then the entire rest of 2025 passed, and 2026 started, and it started out even crazier than ever before, and it just made it clear to us that we needed to do something. Because getting a lot of security reports... I mean, first, they have more content these days, right? They sort of get worse, but they also get longer and more complicated,
because that’s what AI does. is not only the volume goes up, the quality goes down. So we spend more time than ever to get less out of it than ever. And we’re all, well, I just said, I do this full time, but the rest of the team, they’re not doing this full time. So we’re just putting a lot of burden on people and it’s actually quite sort of tiring and annoying to have to plow through this.
Kate Holterhoff (03:51)
Mm-mm.
Daniel Stenberg (04:10)
It is a little bit of abuse, because most of these, what we call AI slop, are just generated by someone really quickly. They don't really understand it, they haven't really verified it. The AI told them this is a problem, and they pass it on to us. No effort, quick, easy for them, and they just hand it over to us, and we get a couple of hours of work for every such instance. It doesn't work.
Kate Holterhoff (04:36)
Mm-mm.
Daniel Stenberg (04:39)
We need to do something, right? Previously, I mean two years ago, every such report, if it had some kind of substance, meant that the reporter actually spent some time thinking, testing, verifying. They actually invested something and then passed it on. That was a built-in sort of balance to the whole thing. That balance is off now.
Kate Holterhoff (05:06)
Okay. And so for folks who haven't been following this, the way that I've been thinking about AI slop, maybe to build on how you've sort of characterized it implicitly, is just using AI code assistants or AI generation to create code that doesn't really address a problem as well as maybe it should. Can we add anything to that? How are you defining it?
Daniel Stenberg (05:29)
Yeah, so
the primary way we see it is basically when you fire up ChatGPT and you ask, please point out the security problem in the curl project and make it sound horrible and write it in at least 400 lines. And it'll do that, right? It'll happily please you. I mean, it "knows", within quotes, right, it knows that it has a problem, and it'll tell you about it, and it sounds horrible and perfectly
Kate Holterhoff (05:42)
Oh no. Yeah.
Daniel Stenberg (05:56)
reasonable-sounding if you don't know the details. So from our point of view, it's more that kind. They ask, they try to find a security problem, the AI tells you one, and you report that. Is it true? Well, it could be true. But in most cases, when you just do it like that, you're most likely just finding something that is one of these TV-show problems. Yeah, it looks real, but it isn't in fact.
Kate Holterhoff (06:09)
Okay.
Oh my gosh.
Daniel Stenberg (06:25)
Many times it turns out that the AI just made up something in the end. The function call doesn’t actually exist. The output is wrong. Something is… Yeah, the AI pleased you with giving you an answer, but the answer is not correct.
Kate Holterhoff (06:35)
Yeah. Right, right. And I feel like the curl project in particular was hit with this because of the incentive problem, which has to do with the bug bounty. So can you tell folks who aren't familiar: what was the bug bounty, and why might you have been targeted more than most?
Daniel Stenberg (06:56)
So in general, a bug bounty is, in a way, an old system that has worked for decades, where software projects in general, or typically, offer money to anyone who reports a problem that is a confirmed security issue in some way, a vulnerability somehow, a flaw that can be used badly or maliciously or turned into a problem. Which is a great way,
or has been a great way, at least in the past, to actually get people to spend a little extra time trying to pinpoint the error and report it to us. We get a better product and we reward the researcher with some money, which has been a good way. And in particular, it's been a good way to attract some of those who actually do this as a profession, right? The really awesome bug bounty people, those you want on your team because they are awesome. They do this. They know stuff.
We want them to find our problems so that we can improve our product and everyone gets more secure as an end result. So it has traditionally been really good. In the curl project, we have paid over a hundred thousand dollars in reward money over the years. And I think we count 87 confirmed security problems fixed thanks to that. So over the years, I’ve always been a fan of this because I think it has worked.
Kate Holterhoff (08:09)
Wow.
Daniel Stenberg (08:23)
in particular for us, because we have improved the code, the product, and all of us have sort of gotten the benefit out of it. Because we didn't really say this, but curl is used in somewhere north of 20 billion installations in the world, right? So it's literally in everything. In every internet-connected device you can find, there is probably at least one installation of curl.
Kate Holterhoff (08:50)
Okay, so a hugely, enormously important project. So finding these bugs is a service to the world at large that's online, right? And I mean, maybe it's even worth pausing briefly for folks who aren't in the weeds here. What does curl actually do? What is the function of the tool? Yeah. Yeah.
Daniel Stenberg (09:07)
It’s a good question and I always struggle to actually explain that. So curl is actually, well I would say, usually I explain this as a two-part because the common thing developers know about typically is that it’s a command line tool for internet transfers. But the large volume installation is actually the library which is just a component or an engine for doing internet transfers and that’s a really vague description.
So it’s fairly complicated. It helps services, tools, devices, to do transfers, internet transfers really in a convenient way. And we celebrate 30 years later this year. we’ve been around for a while. I think that’s also what the explanation why it is as widespread because we’ve been there, not since the beginning, but for a very long time.
Kate Holterhoff (10:02)
Right, right. And it is open source. What is the license on curl?
Daniel Stenberg (10:07)
It’s an MIT patched thing, but so it’s very liberal. You can do whatever you want with it without any requirements basically. Well, you have to show the license somewhere and you can’t say that you wrote it, otherwise, and I think that’s one of the explanation why it is used everywhere because it’s very easy for everyone to just integrate it into anything. So it’s in every phone, car, fridge printer, TVs, game consoles, know, light bulbs, kitchen devices, anything.
Kate Holterhoff (10:37)
Wow, so it’s everywhere, amazing. Okay, so again, extremely important, we wanna make sure there’s no bugs in this, it because it is so important, folks look to curl as a way of getting maybe involved in open source, even if they aren’t just interested in the financial benefits of maybe, you know.
finding an actual bug. So you mentioned that there have been some upsides to having folks use AI, but that they’re a little more nuanced here. I guess maybe now would be a good time to introduce the situation with Joshua Rogers. How is it that he used AI in a non-slop way, to help the curl project?
Daniel Stenberg (11:16)
So, yeah, I think we just have to realize that AI is, first, a very generic term, but it's more of a tool set, right? And that tool set can be used in all sorts of ways and can augment whatever you do. And if you want to go the easy route and just get some fast output, it'll give you that. But there are also ways to use it to really make you a better version of yourself, to actually use it to find real problems and research the problems properly.
Kate Holterhoff (11:24)
Indeed.
Daniel Stenberg (11:46)
And then, in the more positive vein, there are several tools now, and more researchers, who actually try to not only get the AI to tell you something, but also, you know, take the extra step. Was it correct? And dig a little more. And there are more tools now that do that more automatically. So even now, if you head over to a chat LLM and ask it for a curl security problem, it will make one up
that’s incorrect. But at the same time, you will find another AI tool that is a little bit more specialized and tweaked for this that will give you potentially an actual vulnerability or a problem or something to research. And I think that is so Joshua was basically the first one we talked to who did the job and actually, and he used one of these now more dedicated tools for finding flaws in code with AI.
Kate Holterhoff (12:42)
I see.
Daniel Stenberg (12:42)
And he used a particular tool that is one of these code analyzers with some AI spice on top. I'm not sure exactly what they do, because they don't tell all that; it's a proprietary product. But it's basically an AI tool analyzing code to find mistakes in code in a really powerful way. In ways other
existing code analyzers don't do, in the way that AI is really powerful to do. I mean, it's good at text: finding patterns, or deviations from patterns, and tiny mistakes. Actually, AI, when used correctly, is a really powerful thing to use when working with code, because code is a lot of text. And, for example, in the context of curl: curl speaks internet stuff, right? Internet protocols
are often dictated by a specification: this is how to do it. And then we have an implementation: this is how we do it. And then the LLMs can actually compare the specification with the code and find, well, you know, the code does this, the specification says that, and they don't actually match here. So yeah, that's what Joshua started doing in autumn 2025 or something.
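A rough sketch of what that spec-versus-implementation check might look like, assuming some LLM client; `ask_llm` is a stand-in, not a call from any real library.

```python
# Hypothetical sketch of the spec-versus-code comparison Daniel describes.
# `ask_llm` is a placeholder; wire it to whatever LLM provider you use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def check_against_spec(spec_excerpt: str, code_excerpt: str) -> str:
    """Ask the model whether the implementation deviates from the spec."""
    prompt = (
        "Protocol specification excerpt:\n"
        f"{spec_excerpt}\n\n"
        "Implementation excerpt:\n"
        f"{code_excerpt}\n\n"
        "List concrete places where the implementation deviates from the "
        "specification. If they match, say so. Do not invent deviations."
    )
    return ask_llm(prompt)
```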
Kate Holterhoff (13:59)
Yeah, and that just seemed like a good example that it's not just, you know, "I'm pro using AI-generated code in my project" or "I'm anti," which of course some open source projects have gone that route. Whether or not that's going to be enforceable in the long term is a little debatable, but I feel like you have a little bit more of a nuanced, practical view of this, and the example of Joshua is one. You've also spoken positively about AISLE and their ability to find some of these issues, so I know that you're thinking very critically about where this is valuable and where it isn't.
Daniel Stenberg (14:32)
Yes, and also, like everything, even in those cases when you actually use a good tool to do this... it's more like anyone who's ever used one of these tools, even before. Like 10 years ago, when we started to get these code analyzers, or even 20 years ago, we had code analyzers. They were a little bit more crude back then, but they could still, you know, give you hundreds and hundreds of possible problems in your source code. And
a possible problem? Yeah, sure, it might be right, but you have to sort of go through that list, maybe rate it: I don't agree with this, and I don't care about that. And then, you know, filter out the top 10, and maybe one of those could be a problem. And it's still the same now, even though you can get that list done with AI. It's still going to be like, well, maybe 90% of that, or 20%, depending on tool and situation, is going to be crap, and you'd still need to sort of filter through it, sort it, and
figure out which parts are interesting. In our case, then, Joshua did all that: he sorted and filtered out the top candidates that he found, that he thought we might be interested in. And I worked with AISLE in the same way. They have a similar tool; they developed the same kind of thing. They run it on the code and they figure out which of these look like they could be real. Like, here are my top 10
things, and they send them over to me and say, hey, these look weird. And then we work together to actually try to conclude: are they real, what do we do about them, what do we think about them, how do we fix them, and so on. Because, I mean, just handing over hundreds and hundreds of errors, talk about a busload of work, right? To just get, here's an afternoon, look at these errors.
Kate Holterhoff (16:18)
Yeah.
Daniel Stenberg (16:22)
And maybe each one could take a couple of hours. That's a lot of time and effort to go through. So there it is helpful and important that we actually have a little bit of back and forth, and we work together to actually manage to get through them. But of course, if there are valid bugs, we want to fix them.
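The filter-to-the-top-candidates step Daniel describes could be as simple as this hypothetical sketch; the field names and confidence score are illustrative, not taken from any particular tool.

```python
# Hypothetical triage sketch: a tool emits hundreds of candidate findings,
# and only the most plausible few are worth a maintainer's afternoon.
from dataclasses import dataclass

@dataclass
class Finding:
    location: str      # e.g. a file and line; illustrative field
    summary: str       # what the tool thinks is wrong
    confidence: float  # tool-assigned plausibility, 0.0 to 1.0

def top_candidates(findings: list[Finding], n: int = 10) -> list[Finding]:
    """Keep only the n findings the tool itself rates most plausible."""
    return sorted(findings, key=lambda f: f.confidence, reverse=True)[:n]
```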
Kate Holterhoff (16:29)
right.
Yeah.
Absolutely. And I mean, I think you're pointing to, I guess, one of the points of advice that I was hearing from a lot of different maintainers, which is to join the community first. Don't just blindly send off these PRs. Actually, maybe join the Discord or email the maintainers and try to have a dialogue before you dive in with these huge multi-page PRs that you're talking about, that have 400 lines. Yeah. Yeah.
Daniel Stenberg (17:05)
Absolutely, and I think communication is always key here. I mean, just the load could be a potential breaker, right? We heard about this other project... I talked to the FFmpeg people, and sure, they get a lot of valid reports from one of these tech giants, but just overloading them with a lot, that's also work. So you need to
do it correctly. You need to have communication and cooperation, researcher to project, on sort of how to go about it. It is also slightly rude to just overload a poor little project with lots and lots of reports. So maybe... yeah.
Kate Holterhoff (17:53)
Yeah, I like that. Okay, so we're all going to endeavor to not be rude to these maintainers. They're working hard. Okay, so I want to talk about the broader landscape here, because, as I mentioned, you've been around for a long time. A lot of the other folks who are grappling with the AI slop issue maybe don't have the sort of...
Daniel Stenberg (18:02)
In many cases, yes, exactly.
Kate Holterhoff (18:17)
breadth of experience that you have, starting... it was 1996, right, when the curl project was launched under a different name? So, with that in mind, how are you trying to model leadership to some of these other projects in terms of how they deal with AI? I mean, do you think that it's worth having some sort of, I don't know, a consortium, or...
Daniel Stenberg (18:24)
Yep. Yep.
Kate Holterhoff (18:42)
Is it maybe, I mean, so you’re involved with some foundations. Who is it that should be providing leadership and what should that look like?
Daniel Stenberg (18:50)
Good question. I don’t know really. Personally, try to, I think of myself as having a pretty good position in that I don’t have any masters. I don’t need to be careful about, I don’t hurt anyone by stating what I think and know, voicing my opinion. And that’s what I try to do. And I try to then, I try to be pragmatic and just.
Kate Holterhoff (19:09)
Yeah, sure.
Daniel Stenberg (19:17)
do things the way I think they should be done, and try to lead by example rather than state how anyone else should do it. And also, for some of these things, it's not that easy to say exactly what the correct way forward is, right? So we also have to try out a few things. Like I said, now we closed down the bug bounty part of this setup as an attempt to stop the flood of incoming AI slop. But we don't know if that's gonna work. As you mentioned,
There’s also that, you know, we have a name already. People want to have, they want to have get credit because of finding a CVE in curl, right? They want that on their CV and they think that’s so maybe turning away the money won’t make a significant difference. don’t know. And I think right now, and it’s also a wider problem than just this, right? Because this is now the new era in the world where everything that previously
Kate Holterhoff (19:55)
yeah.
Mm-hmm.
Daniel Stenberg (20:18)
took some effort to make, you know, applying for a job, writing a science paper, who knows how many other things, where it was previously sort of assumed or understood that it took some time to do. Whatever it was, we thought it took some time; now it doesn't. So now everyone can flood all of those systems with a lot of incoming data. This is a situation we sort of share across a lot of areas now.
It’s going to take some time, think, to adjust to and insert rules and guidance how to deal with it. A little bit like spam and anti-spam. Suddenly, we need a way to filter out the spam. What’s the best way to do that? Difficult, but I’m sure we are going to have to work with this going forward.
Kate Holterhoff (21:07)
Yeah, and it’s moving so quickly too, right? The models get better, you know, if, yeah, you’re, good luck.
Daniel Stenberg (21:12)
and I think that’s also why it’s important to not get stuck on the AI part but on the abuse part because we’re not going to detect the AI because I don’t think it’s helpful to detect the AI because that’s not the point. The point is the crap, the abuse of it because if you use the AI to find something good then it’s good right. If you just send us junk then that’s bad.
Kate Holterhoff (21:32)
Yeah. That is really... those are pearls of wisdom. I like that. I like how you put that. That makes a lot of sense to me. So let me ask you this. When I was writing my own piece on this subject, I did look at the foundations as maybe, I don't know, having a SIG or something where they're talking about this. I don't know. Someplace where they're debating how to go about
helping maintainers to deal with all the slop coming in. So I’m curious, are you bringing that up at all as part of your role in the European Open Source Academy? Is that a conversation you have there?
Daniel Stenberg (22:08)
I don’t think we have it in particular there. But okay, or rather perhaps it’s the other way around. It’s probably held there. Right now it feels like everyone is talking and debating this. Maybe also because I’ve talked so much about it. I’ve talks about it. I blogged about it and everyone is bringing up me and Curly in relation to that. So yeah, I think it’s certainly a subject that is
Kate Holterhoff (22:10)
Okay.
Okay.
Yeah.
Daniel Stenberg (22:36)
on everyone’s radar right now, I think. But again, it’s one of those things that we don’t have any really good answers yet. So I think it’s, yeah, a lot of ideas, people, people will, and when I say we do something, I get a lot of people telling me I should do something else instead, or why don’t you try this? So I don’t think there’s, there’s no shortcoming of ideas and approaches on how to do this. It’s just that I don’t think we’ve seen anyone have any real
Kate Holterhoff (22:39)
Okay.
I see.
Daniel Stenberg (23:06)
successful way of handling any of it. So I think we're all just going to have to try different things moving forward, and then try to learn from each other and see where we go.
Kate Holterhoff (23:18)
I see. Okay, so we’re at the throwing spaghetti at the wall stage. we’re, we’re individuals are trying things. Exactly. That’s great. Maybe we throw some rigatoni up there. Yes. Okay. So we’re, we’re throwing all the pasta at the wall, seeing what sticks. Got it. All right. But it sounds like maybe foundations are not the best place to, I don’t know about solve this issue, but maybe
Daniel Stenberg (23:23)
Yeah, and ideally the different teams try different spaghetti and then we see which spaghetti is the best and then we go that way. Exactly.
Kate Holterhoff (23:47)
to set standards, I guess, maybe to finally come down on it.
Daniel Stenberg (23:50)
Yeah, the way I see it, the ones who are actually best suited to work on this are the platforms that host all these incoming reports, PRs, issues, etc., like GitHub, HackerOne, and all those sorts. But they also struggle with this, and with how to offer the correct tweaks or knobs and filters for all the projects that deal with it.
Kate Holterhoff (24:19)
Yeah.
Daniel Stenberg (24:19)
I guess also because all the projects have different ideas of what they want to allow and how they want to tweak it. I think we're in a little bit of an experimental phase, or early days still. So it's still a little bit of "we don't really know how to deal with this." Yeah, that's the way I feel. And to go back to, you know, issues and pull requests: they are slightly easier to manage in this regard when
people apply AI, because then they are made in the open at least. Then at least you theoretically have an entire universe of people who can help out. It's a trickier thing with security reports, because you want to have them handled in private, right? So you have a limited number of people who can deal with them, because if they are genuine security problems, you want them handled carefully and responsibly first, before you disclose them to the world, right? So they have to be managed by a small set of people.
Kate Holterhoff (25:17)
Right. And so, thinking of the platform issue, or these larger bodies that are involved in open source: another thing that came up a lot when I was researching the AI slop issue was GitHub's role in this. And I noticed that curl is also hosted on GitHub. I assume... well, let me ask: when did you move to GitHub?
Daniel Stenberg (25:36)
Long time ago, 2010 I think.
Kate Holterhoff (25:39)
Long time ago. Oh my goodness. Yes. Wow. See, I keep saying you guys are the OGs. This is it. You've been there from the beginning. Okay. Amazing. So, you know, some folks are unhappy with the way GitHub has made AI-generated PRs sort of invisible. It's not pointed out that these are AI. And that could be either because of licensing issues, which come up a lot. We haven't even discussed that yet.
Kate Holterhoff (26:08)
Or maybe it’s just because they’re frustrated by the AI slop, right? That it’s just too easy to make these PRs. Yeah, what’s your sense of maybe where GitHub sits in this entire conversation?
Daniel Stenberg (26:23)
Well, that’s exactly, I think they are the primary open source hoster in the world, right? So they are situated and they’re actually one of the best ones, most suitable to actually do something about it. But I think they struggle with how to behave or what to do about it as much as everyone else.
Kate Holterhoff (26:40)
Mm-hmm.
Yeah.
Daniel Stenberg (26:51)
So they’re going to be forced to do something at some point. I get the feeling that they are thinking about it. There’s this demand that is coming from a lot of projects now to allow projects to limit, for example, who’s going to be allowed to open a pull request in a project. Which previously has, why would you limit that? You want to have the entire world be able to help you out. But now suddenly, well, maybe you don’t. If everyone is going to send
Kate Holterhoff (26:55)
Okay.
Mm-hmm.
Daniel Stenberg (27:19)
crap at you? Do you really want the entire world to send crap at you? Or do you want to be able to limit that somehow? So I know there's a discussion going on about that. And I think that's a sign of the times, right? Because previously the entire world has been open: sure, we have potentially millions of contributors. Now we have potentially millions of abusers instead. And we need to
Kate Holterhoff (27:37)
Mm-hmm.
Daniel Stenberg (27:49)
handle that somehow. I hope, of course, that we can go forward and find a way where we can, I don't know, back again to my thoughts about spam and how we handle that. By volume, maybe. If 10 other projects have banned a user on GitHub, maybe I don't need to ban that user myself. Maybe I could just say: if a user has been banned from another project, maybe I won't allow pull requests from that user. Or use something else where we
mark users, or accept things more as communities, maybe. I don't know. But we're certainly going to have to try different ways like that going forward, and figure out how to assess what is a, I don't know, good human versus a bad human. Because I don't believe in detecting AI, or in fighting AI with AI, which people tend to tell me we should do. I don't believe in that, because the AIs are changing all the time. And again:
When the AIs do good, we want to have them. So we just want to get rid of the abuse and get all the good stuff.
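As a thought experiment, the cross-project signal Daniel floats might look like the hypothetical sketch below; no hosting platform exposes ban lists like this today, and both the data shape and the threshold are invented for illustration.

```python
# Hypothetical cross-project reputation heuristic, per Daniel's idea above.
# `bans_by_project` maps a project name to the set of users it has banned.
def should_auto_decline(user: str,
                        bans_by_project: dict[str, set[str]],
                        threshold: int = 10) -> bool:
    """Decline pull requests from users banned by `threshold`+ projects."""
    ban_count = sum(1 for banned in bans_by_project.values() if user in banned)
    return ban_count >= threshold
```

The design question Daniel raises still applies: any such shared signal would need care, since a ban in one project is not always evidence of abuse.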
Kate Holterhoff (28:54)
Yeah, I mean, this is making sense to me. Yeah, I always think it's so interesting when folks talk about just removing AI contributions in general, because I've done some series on, like, the terminology now. And so everyone's talking about, of course, vibe coding, my personal favorite, but also AI engineers. And so folks like Shawn Wang, who goes by swyx online.
I had him on the podcast to talk about this, and he actually said that AI engineers could be agents, basically, that we can count them basically as engineers. And so it's like, if they're the same as folks leveraging these tools, how are you going to differentiate? And I don't know that everyone agrees with that definition, but I think that is what we're seeing down the road: it's going to become more and more difficult to divorce folks who are augmented by these tools and
Daniel Stenberg (29:34)
Yeah.
Kate Holterhoff (29:46)
agents that are just kind of doing it in the background asynchronously.
Daniel Stenberg (29:50)
And then there are the people who pretty much don't care, or aren't even aware. You just fired up your tool, you wrote your code, you got some tab completions, and it did something for you. It's your code, right? Or is it? Did you use an AI or not? In the end, I have a much more pragmatic view there. So in the end, if you produce really good code and you provide it to us... what's the mix that should be allowed?
Kate Holterhoff (29:54)
Yeah.
Daniel Stenberg (30:17)
I don’t know. And no one is going to know exactly how much AI there was or not. And who am I going to tell the difference? And then people might bring up the license idea or the rights holders, who knows who owns this code. But as an open source maintainer, I didn’t know that before either. People could copy that from their proprietary employers code and submit that to us. And I have no idea.
Kate Holterhoff (30:31)
Mm-hmm.
Daniel Stenberg (30:46)
That doesn’t change now either. If the code is good and it’s sort of all the test cases go come back green and they say all thumbs up and a human reviewer and someone says looks great we should merge it then then I merge it and I don’t know if it was made by an AI or not and I don’t think it’s important. The distinction doesn’t matter to me if it was AI or not.
Kate Holterhoff (31:11)
Yeah, I think that’s what’s coming. I don’t know how anyone’s going to be able to differentiate here pretty soon. I don’t know that folks can differentiate now.
Daniel Stenberg (31:21)
No, I don’t think anyone can. Unless it’s super stupid and super obvious. But that’s going to be rare and soon to extinct.
Kate Holterhoff (31:31)
And then obviously they should have reviewed it a little bit or prompted better, I don't know what. Yeah, still, that becomes their problem.
Daniel Stenberg (31:35)
Right, but still, I mean people are still stupid and do stupid things even without AI. So I mean you’re going to get a lot of weird things suggested, you get that already, so of course you’re going to get that with AI too.
Kate Holterhoff (31:51)
Yeah. Well, let’s talk about some CTAs here. So, I’m interested, how should engineering leaders be approaching this problem? How are you speaking to folks who are, I don’t know, maybe even in the C-suite? Like, what should they know about AI slop and how to, I guess, empower their engineers to grapple with it better? Like, what should they be doing and knowing and, you know…how should they be leading in this particular era?
Daniel Stenberg (32:19)
Well, I always try to come back to the augmented thing: the AI is going to augment everyone. But if you don't know what you're doing, if you don't understand what you're doing, you might get augmented in the totally wrong way, and you won't know about it until someone else tells you, or sort of informs you about it. So right now you certainly need...
In most cases, you still need to know what you’re doing and have ideas of the outcome to assess it, to sort of value whatever you’ve got. Is it right? Is it wrong? And should we continue down this path or should we just make a little turn maybe? And if you don’t know what you’re doing, just ask an AI, you know, the vibe coding style. Sure, that works in some limited areas and depending on your use case, but
it’s not going to work for a lot of use cases and then you’re just going to get lost if you don’t know about it. So I think it’s important to understand that the AIs are still very, you know, the people pleasers. They say, yes sir, here it is. Even though it might be totally incorrect because it’s there to answer your questions, not to say no. And it’s going to be like that for a while too, for I’m sure. tools have always been like that, you need to
be able to filter out the bad suggestions and go with the good suggestions. So that is still key: to have knowledgeable people involved. And that's also why it's so weird that we're talking so much about people getting rid of their juniors or whatever. Because who's going to know this? You can't get any seniors if you don't have any juniors.
Kate Holterhoff (34:04)
I know it. Yeah, it’s bleak. Okay, so what I’m hearing from this is, be a little curious about what some of these tools are doing. I recognize that things are changing, that we’re all going to be augmented in some capacity. And so it’s a good idea, whether you’re a developer or someone who is in leadership or marketing or product or whatever, kick the tires on some of these tools and maybe have a little bit of empathy for your developers who are suddenly being inundated.
now with PRs that maybe are not doing what they should. And also, all teams are using open source; curl is in everything. So this is not just some abstract issue that's not affecting you. It really is a core part of all of our experiences online. This is not a small problem.
Daniel Stenberg (34:52)
Yes. Yes. No, it’s not. And it’s really very much a real thing that happens now. And when you have code in literally every device on the globe, the security part of everything is kind of important. You want it to not burn the world. You want it to be sensible and fine. And you want to know that the stuff you write and ship is fine.
Kate Holterhoff (35:03)
Okay. I’m interested in, do you have any thoughts or prognoses about the state of open source today? I mean, we talk a lot about burnout here, but any more pearls of wisdom you’d like to share about, where you see open source going, maybe in our era of AI? I mean, Is there a bright side to this beyond the PRs?
Daniel Stenberg (35:41)
Well, I think there are a lot of different trends going at the same time. And as things are moving, it's hard to tell exactly where we're going. I mean, the general trend of more open source everywhere has been ongoing for a long time. So we have more open source now than ever. Every product, everything that has software, has more
Kate Holterhoff (35:53)
Yeah.
Daniel Stenberg (36:06)
open source than ever before. It means that basically everything made today has open source in it. It's now more of an exception if you don't do that. So open source has suddenly gotten into everything, and that's a precondition for the digital infrastructure and society that we have today, because it wouldn't be possible otherwise.
Kate Holterhoff (36:13)
Mm-hmm.
Daniel Stenberg (36:34)
We can bring a lot of the code we have on into the next product, the next generation, thanks to it being open source. So we've certainly seen that happening, and it's ongoing. But at the same time, of course, AI adding to this is going to do several things, I think. First, a lot of code that is vibe coded or AI-generated somehow is going to use open source components, so that's going to just push you to
Kate Holterhoff (36:59)
Mm-hmm.
Daniel Stenberg (37:00)
keep using open source, because whatever you write is going to use those open source components, because that's how the models are trained. You always have to use some components and dependencies, and a lot of those are going to be open source. So even when you generate new code with AI, you're going to keep using open source. But on the other hand, if you just rewrite everything from scratch with AI, maybe that's also going to take away some of the drive
to contribute and help out in open source, because previously maybe you would have used an open source project and just tweaked it a little bit for your use case. Maybe the attraction now is to instead try to write your own vibe-coded version rather than starting from the open source version. And where we're going, I don't know.
Kate Holterhoff (37:53)
Right. I hear you. All right, so you're not riding the "SaaS is dead" train, that everyone's just going to vibe code their own tools, anything that they use right now. They're going to vibe code their own curl.
Daniel Stenberg (38:08)
No, they are. Everyone is going to vibe code their own. In particular, I think maybe 2026 is going to be the year everyone vibe codes their own. And then, at the end of 2026, everyone is going to start realizing what kind of burden they've actually built themselves into. And, wait a minute, my vibe-coded stuff actually doesn't really work. Now we need to fix those 22 bugs we have. How can we do that when no one knows the code?
Do we just rewrite a new version? Sure, we can do that, and then we get 22 other bugs instead. So I don't actually believe in that future. Sure, it's going to be a part of everything, but we can't build building blocks in the software ecosystem like that. As someone said, writing code is easy. Writing the first code, that's easy. Sure, you can use an AI to write the code; that's
Kate Holterhoff (38:40)
Oh no.
Daniel Stenberg (39:04)
Okay, you do it faster, but writing code is not the challenge. The challenge is to keep it there, maintain it there, make sure that it works and be a solid, stable component in an ecosystem and do that over time. That’s hard and you don’t do that as easily with an AI.
Kate Holterhoff (39:22)
Amazing. All right. Well, I think that's a good place for us to wrap up. Talk to me about where folks can follow you, where your musings are housed, where you are contributing all your thoughts and participating in these conversations on AI slop or other topics.
Daniel Stenberg (39:37)
I am primarily on Mastodon as @[email protected] and I’m also posting on LinkedIn every once in a while. Otherwise my website is daniel.haxx.se with H-A-X-X.
Kate Holterhoff (39:49)
Amazing. Yes, and you have like a newsletter too, is that right? Okay.
Daniel Stenberg (39:54)
I do. It’s more like me rambling about what I’ve done this week and I’m gonna do next week. you know, complete into curl and open source. But if anyone is interested in that, I can keep them satisfied.
Kate Holterhoff (40:04)
Yes, I love that. Well, I know I have certainly benefited from reading a lot of what you have to say on these subjects in particular. But yes, I was pleasantly surprised by the breadth of your thoughts on this. I also saw that you did a talk at FOSDEM this year. So you’re participating in all the places, you’re doing all the talks, and on AI Slop, no less.
Daniel Stenberg (40:32)
Yes, yeah, right. That was just a few weeks ago. Yeah, it was well attended, it was fun. So yeah, on exactly this topic that we've talked about today: AI slop, and also mentioning some of the good stuff that you can actually do with AI.
Kate Holterhoff (40:48)
Right, yeah. Well, I think it's something that, for as much as developers are keenly aware of this issue, it is important to bring awareness to, because I think maybe folks take for granted that this is an unmitigated good for developers, or what have you. They're like, how could it be a bad thing to send in more PRs? This is...
Daniel Stenberg (41:07)
Yeah, I actually find this to be very... AIs in general tend to be very polarizing. You're either very anti or very pro. And I try to be neither, or both; I try to be more pragmatic about the actual outcomes rather than the tool itself.
Kate Holterhoff (41:18)
Right. I think that that’s a good way to be. I think, I don’t know, the way that you framed it earlier, which is that we’re all just going to be augmented by it in a sense that it’s just going to become, I don’t know, just part of everything. That’s the future that I’m seeing. I think we’re going to have a hard time divorcing it from most aspects of our lives, especially ones that have to do with computers, right? OK, fantastic. All right, well, I have really enjoyed speaking with you today. Again, my guest was Daniel Stenberg.
Daniel Stenberg (41:49)
Yeah.
Kate Holterhoff (41:56)
My name is Kate Holterhoff. I’m a senior analyst at Redmonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you’re watching us on Redmonk’s YouTube channel, please like, subscribe, and engage with us in the comments.