In this RedMonk conversation, Shawn (swyx) Wang discusses the evolving role of the AI Engineer with Kate Holterhoff. They chat about the definition of an AI Engineer, the differences between AI Engineers and traditional software developers, vibe coding, frontend observability, and the AI Engineer World’s Fair.
Links
- LinkedIn: Shawn Wang
- swyx.io
- Twitter/X: @swyx
- Bluesky: @swyx.io
- AI Engineer World’s Fair
- Shawn Wang, “The Rise of the AI Engineer,” Latent Space, 30 June 2023.
- Kate Holterhoff, “AI Engineers and the Hot Vibe Code Summer,” RedMonk Blog, 14 July 2025.
- Shawn Wang, “Observability for Frontend Developers,” Swyx blog, 17 March 2020.
- Kate Holterhoff, “Is Frontend Observability Hipster RUM?” RedMonk Blog, 2 April 2025.
Transcript
Kate Holterhoff (00:12)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk. And with me today, I have Shawn Wang, better known as Swyx. Shawn is deep in the AI, DevTools and DevRel space. And I’m jazzed to have him here to chat with me about the AI Engineer. Shawn, thanks so much for joining me.
Shawn (swyx) Wang (00:27)
It’s my pleasure, and you have a great radio voice. I actually was just reflecting on it while I listened to you.
Kate Holterhoff (00:33)
Well, thank you. Thank you. That means a lot to me. All right. So let’s begin with the big idea that inspired me to reach out to you, and that’s just: what in the world is an AI Engineer? I’m seeing it a lot more on LinkedIn. I’m hearing about it on all my podcasts. And of course, you wrote a very influential post two years ago called “The Rise of the AI Engineer,” so I can’t think of a better person to speak to about this. You know, you had folks like Andrej Karpathy responding to it. So I guess just to begin with: you define it in that piece kind of obliquely, I would say; you talk about it being a new sub-discipline. So I’m interested to hear where we are today, especially in the wake of, you know, your conference, the AI Engineer World’s Fair. What is an AI Engineer? How do you define it?
Shawn (swyx) Wang (01:15)
So I typically define an AI Engineer as, well, there are three types of AI Engineers. This is from the first keynote that I did for the AI Engineer Summit. The first is a software engineer that is enhanced by AI, so they use AI coding tools. The second is a software engineer building AI products. And the third is a non-human software engineer that is completely AI, so the Devins of the world, where they are autonomously creating software. And I think the last two years have seen us play through each stage of this evolution, and obviously they’re all evolving at their own pace.

But I basically started this whole thing because, you know, I’m not an industry analyst like yourself, but I sometimes like to pretend I am. And I’ve seen various forms of this sort of professionalization of an industry, where people kind of don’t take it seriously initially, and then it starts to develop its own language, its own frameworks, its own conferences, and really its own subcommunity. And I felt that, you know, back in early 2023, people still weren’t taking software for AI seriously enough. People were saying things like, software is going to go away because AI is going to write all software. That’s very sloppy thinking, because obviously this is the kind of thing that is said by people who don’t actually do the work. If you actually do the work, you realize there’s a lot of things to learn and do and get right. And that’s the role of the AI Engineer.

I also noticed that the people who are successful in this field are qualitatively very different from the ML engineers and the ML researchers. Those are much more established titles, but the AI Engineers are much more on the product side. They are much more creative. They may be less tethered to the past in terms of ML optimization or MLOps techniques. People with 10 or 15 years of ML experience come to AI and say, “I know everything,” and then they don’t do anything of consequence. Why is that? So I think that is a sort of anthropological observation. It’s not even technical, it’s just anthropology. There’s a new generation of software that’s trying to be built, and the people who are successful in it qualitatively have different backgrounds than the people who were successful in the past. And I just wanted to articulate that difference.
Kate Holterhoff (03:33)
Right, right. And actually, you framed that so clearly, so thank you. And you’re kind of blowing my mind here. Some of the ways that I’ve approached this have been similar to yours, where I will go on Hacker News and just see what developers are actually saying about AI Engineers. I’ll go on Reddit. And there’s a lot of folks who have competing definitions right now. There’s a lot of turbulence in trying to land on what exactly this actually means, a lot of disagreement. And what I like about your definition is that it really focuses on the builder, and the fact that there is, I guess, a mentality shift going on right now about the difference between this sort of data scientist role, the folks who have historically been in machine learning and have really dominated the space; that’s no longer taken for granted as being an essential part of self-identifying as an AI Engineer. And what I also like about this is, when I have
done my own write-ups trying to define what an AI Engineer means today in 2025, you know, two years after you wrote the original post, I didn’t include robots, or agents maybe, as AI Engineers. And you’re including them. You’re including the nonhuman ones. To me, that’s remarkable. I actually didn’t see that at all in my research; I had missed it. Could you elaborate
on that? Like, is that interchangeable with an agent?
Shawn (swyx) Wang (04:53)
Is that interchangeable with an agent? Yeah, I think agent is a broader concept. You know, you can have customer support agents, you can have research agents. Here, we’re talking specifically about agents that write software. And honestly, I think I got lucky in laying this out in the first keynote that I did, because I haven’t had to change it in the last two years. It’s been…
Kate Holterhoff (04:56)
Yeah.
Okay.
Mm-hmm.
Shawn (swyx) Wang (05:19)
beautiful when you’re right. You know, it’s very rare, but it’s nice when you get it right. I think that just comes from taking AI seriously enough to believe that someday they will literally be your coworker in Slack, as though they were working remote in the Philippines or in Dallas, Texas; it doesn’t really matter. And I think, yeah, you should build with that future in mind, because it is coming, and reasonable people can disagree on whether it’s this year
Kate Holterhoff (05:38)
Wow.
Shawn (swyx) Wang (05:48)
or 10 years from now, or 100 years from now. But it is coming, and I think it’d be stupid to ignore.
Kate Holterhoff (05:55)
Yeah, that’s amazing. And what I like about the way you’ve articulated this is how it fits our remote-work present, the fact that so many teams, engineering teams especially, work globally, internationally. And frankly, I’ve been talking to a lot of companies that are offshoring their entire engineering team. So there really is this sense of, I guess, geographic displacement that is occurring at the same time as we’re seeing the rise of the AI Engineer. And so knowing that someone is a human being is not as important to identifying an AI Engineer, right? You can say, hey, I would like to hire X number of AI Engineers; I don’t care who they are, as long as they can accomplish these tasks. It’s a jobs-to-be-done situation rather than, you know, a warm-bodies-in-a-room situation.
Shawn (swyx) Wang (06:47)
Yeah, and look, I think the beauty of software engineers is we are perhaps the most adept people at going up levels of abstraction. We used to manually manage memory with lower-level languages, and then we didn’t have to. We used to have to write systems languages, and now we have scripting languages and all that stuff. And so I think there’s some point at which the AI Engineer sort of rises to become a manager of AI agents and steps in as needed.

Yeah, I do think that sort of human-AI interchangeability should be taken seriously. Right now we’re still at the place where a lot of people are using this to write sort of routine, boring software. The exciting software, the mission-critical software, is still human-driven. But there’s a lot of boring, change-the-copy-on-this-button work that nobody really wants to touch, and it actually takes up a lot of software engineer time: write the tests, write the documentation. Then there’s all sorts of issues that are introduced. I’m not pretending that this is perfect by any means; security issues are huge with all this AI stuff. But I think the leverage on human time and cost is undeniable.
Kate Holterhoff (08:05)
And so we’re not there yet. You can’t hire an AI Engineer that is not human right now, but it’s coming, and that’s…
Shawn (swyx) Wang (08:12)
We can’t, well, you can do percentages of this. There are different levels of it. You used to be able to hire an AI Engineer for $20 a month to auto-complete your text, you know, single lines; then it became functions, then entire files and whole programs. So there are different levels, and they’re ramping up. I think it’s too blasé to treat this as a binary, zero or one, either you can or you cannot. You can hire
Kate Holterhoff (08:16)
Okay.
Shawn (swyx) Wang (08:40)
10% of an AI Engineer today. And that’s going up to maybe 15 or 20%. But it’s not nothing, and it’s not 100%. I think people have to calibrate that we are in the middle of this transition and adjust accordingly.
Kate Holterhoff (08:55)
OK, well, you are expanding my definition significantly here. I had not even considered that. So talk to me about what you’re seeing in terms of more conventional engineers adding the term AI to their LinkedIn profiles, self-identifying as AI Engineers. What do you see around that? What is the motivation for it? Is it just that the market is really in demand right now? Do you think that they’re forward-looking in the same way that you are? Talk to me about what you’re seeing around that.
Shawn (swyx) Wang (09:24)
Well, I mean, I don’t want to do any comparisons between myself and them. I have a very unconventional career, and I think lots of people should have more traditional careers than mine, because mine has been very stressful. But also, I think people adding AI to their LinkedIn is, yeah, just a branding exercise, a lot of it, and I think it’s an indication of where their interests currently lie. For the individual, it is rational and something to be encouraged.
Kate Holterhoff (09:30)
Okay.
Shawn (swyx) Wang (09:52)
But in the aggregate, it can be very noisy and self-defeating. So I don’t really know what to do about that. You can’t really control the individual decisions of masses. But people have talked about things like, let’s do certifications. And LinkedIn is like the chief culprit of pointless certifications. But I do think there should be some kind of standards that evolve over time. Just keep in mind this field is, I don’t know, three, four years old.
Software engineering didn’t really have standards for the first 20 years of its existence, right? We’re just in the middle of it. And everyone’s complaining, like, people add AI to their LinkedIn and overnight they’re an AI expert. Yeah, that’s bad if you hold yourself out to be an expert when you’re really not. But if you’re saying, aspirationally, this is what I’m interested in, this is where I’m doing the work, and here’s where I want to go with my career, I think that’s completely reasonable.
Kate Holterhoff (10:49)
Yeah, and I think what we could add to that is that even the term engineer has gained a lot of credence among employers in ways that the term web developer didn’t years ago. Actually, when I was a front-end engineer, I was there when our entire department relabeled and suddenly became engineers. We had all been developers before. That kind of cracked me up, of course. But, you know, please.
Shawn (swyx) Wang (11:13)
I have a comment on that one. Do you know the Europeans don’t like it? Yeah, because for Europeans, engineer means something, whereas in the US it’s just a title they slap on anything. For the Europeans, you have to be part of an institute, you have to pay membership dues. They take it as seriously as a practicing accountant or a doctor with a medical board.
Kate Holterhoff (11:19)
I didn’t know that.
Shawn (swyx) Wang (11:40)
Like, this is a professional title with professional obligations and responsibilities and training. And here in the US, you just go to a bootcamp or something. Yeah. But obviously, I think people want to feel like the engineer. And I do think that there are a lot of serious elements to engineering; it’s just that it’s not enforced. And so I think there’s a wide, wide variation between the good engineers and the bad ones.
Kate Holterhoff (11:47)
I had no idea. How interesting. Yeah, okay.
Yeah, I’ve often heard folks complain, really before all the post-ZIRP labor issues arose, when there was still a very hot market for engineers, that there is a need for hiring managers to have a way to vet all of these developers. And it doesn’t exist right now. It’s not like being a hairdresser, where you need to get some sort of certification from the state saying that you’ve done X number of hours; that just doesn’t exist here.
Shawn (swyx) Wang (12:2)
Yeah.
Kate Holterhoff (12:34)
And so folks are left floundering. And this can become predatory for folks who are trying to break into the industry and spend a bunch of money on, yeah, certifications, like you already mentioned.
And as a front-end engineer, I mean, I’m sure you can relate to the fact that full-stack engineers get paid more than front-end engineers. And so there’s a real reason for a lot of my colleagues in that field to sort of change their job title, even if the actual thing that they’re doing on a day-to-day basis doesn’t change at all.
Shawn (swyx) Wang (13:05)
Yeah. Look, I think, you know, ultimately there are going to be inefficiencies on both sides, but in aggregate the market will suss it out. If you say you’re capable of something that you’re not, then ultimately your employer will find out. So it is what it is. People are always going to try to represent the best version of themselves
online, and sometimes it’s going to be too much of a reach, and sometimes it’ll be just right. Some people even understate themselves, and that’s a problem. But yeah, I mean, there’s nothing we can do about that on our end. I do think, you know, coming back to the AI side, it is fresh grass. Yeah, so blue ocean, kind of. I think in terms of general career advice,
it’s always nicer and easier to be an expert in a thing that is newer, because there’s less background. You can’t really be an expert in Java if you haven’t been around for the last 30 years. But you can be in LLMs, just because everyone is running up the same learning curve.
Kate Holterhoff (14:04)
Yeah, that’s a really good example. And that can make for a really exclusionary experience. I often attend a Java conference in Atlanta called Devnexus, and I’ve actually heard that sentiment from folks who say, how do you even compete with the weight of history? I think when I’m…
Shawn (swyx) Wang (14:27)
You can; it’s not impossible. It just takes skill and a lot of optimism. So I think what we’re seeing is that the people who figure out ways to buck the conventional wisdom, by bringing sort of a new energy or whatever to the space, actually do extraordinarily well. I’ve seen people do this in PHP. In Java, I’m not aware of people doing that, but they could; it just takes some enterprising person. It’s not even about age, it’s about the energy to do that. And yeah, you’re going against the trend there. Right now it’s relatively shooting fish in a barrel with LLMs. Just by deciding to dedicate the majority of your waking hours to covering the space, developing techniques in the space, and building with it, you will be ahead of everyone else, because everyone else has day jobs where they are not working with LLMs all day long.
Kate Holterhoff (15:25)
Well, let me ask you this then. If the future is going to have all developers working side by side with AI Engineers and using AI tools themselves, with many of them actually building AI products, should we all just be called engineers then? Because the AI will be taken for granted.
Shawn (swyx) Wang (15:4)
Yeah, so I use this as an example of sloppy thinking, because that means there are no returns to specialization. Just think to yourself: I don’t have objective evidence to point to that says there are returns to specialization, but do you believe fundamentally that people who specialize in the techniques, who spend a hundred hours a week on this, will be better at it than you? If yes,
Kate Holterhoff (15:49)
Okay.
Shawn (swyx) Wang (16:1)
then they will be comparatively better performing, and they will do better as AI Engineers. So labeling all AI Engineers as software engineers is only true in a trivial sense, as I call it, in that they all develop software. But in the same way that a front-end developer is just gonna be way better at front end than a generalist developer, and a database developer is gonna be way better at database stuff than a generalist engineer, there is a meaning to specialization,
Kate Holterhoff (16:24)
Okay.
Shawn (swyx) Wang (16:40)
and you just have to believe that meaning exists. Now, with a magical, all-knowing AGI that can fully translate your thoughts when you plug into the matrix and build software, when that exists, maybe that goes away. But to me, that’s the distant future; I’m dealing in the present.
Kate Holterhoff (17:0)
I think one of the most impactful things that you said, from my perspective, in that 2023 article was that it is a full-time job to follow AI models, tools, and products. Because as an analyst, that is absolutely true. I struggle to make sure that I am on top of it. I’ve joined all these newsletters. I listen to podcasts all the time. And I’m supposed to not only follow the AI situation, but also everything else going on in the industry. So it is a tremendous challenge. And I like that idea of someone who is just really specialized, and that’s what makes the job description particularly apt for AI Engineers. I just wonder, for some of the AI Engineers that I’ve interacted with at conferences, because I’ve met a few who self-identified and introduced themselves that way to me, if they truly spend that much time studying, if they really are staying on top of the latest models, if they really are living up to this aspirational AI Engineer pinnacle that you’ve established.
Shawn (swyx) Wang (18:04)
Probably not, but that’s okay. It’s just a directional ideal. I think ideals are important in the sense that nobody will ever reach them, but you know directionally what is good and what is not so good. And so I think establishing ideals is helpful even if they’re never realized. And then everyone is on a journey to better themselves; they have their own journey and we have ours, so I don’t judge there.
Kate Holterhoff (18:30)
Yeah. All right. Fair enough. So I’m also interested in juxtaposing the vibe coding idea with the AI Engineer idea. I would self-identify as a vibe coder. I would not self-identify as an AI Engineer. But I suspect I’m defining things on vibes. So in your mind, what differentiates these two roles?
Shawn (swyx) Wang (18:32)
Hahaha
Mmm. Yeah.
Someone will pay you to AI Engineer; no one will pay you to vibe code. Yeah, right. So I think this is one of the rare instances where, so, Andrej Karpathy is a mentor of mine. We talk maybe two or three times a year, and he’s definitely guided a lot of the things that I do, including the original AI Engineer post.
Kate Holterhoff (19:00)
Okay.
Shawn (swyx) Wang (19:23)
This is one of the rare instances where I feel like he’s done a net disservice to the industry by coining this thing and making it cool, because it should probably not be as cool as it currently is. And I do think it is probably due for a little bit of a correction. I think it’s useful; it’s a necessary outcome of the models getting good enough that you can just kind of put whatever you think in there, and it mostly comes out aligned to what you want.

But I think it’s currently being used to excuse sloppiness, to excuse 80-20 thinking, where you think you’ve done the work when you’ve done 80% of the work, when actually the last 20% is the majority of the work. And so I’ve tended to shy away from endorsing vibe coding. I think it’s nice; I definitely think it gives everyone the warm fuzzies. And it enables a lot more people to produce software than previously was possible, which is obviously a good thing. What I’m trying to tweak the emphasis on is that it should not just be about the vibe; it should be about the efficiency of the output, the quality of the output. Because the way I put it in one of my articles on Latent Space is that vibe coding is a slop attractor. It excuses slop: “I was just vibe coding.”

No, you know, if you are producing software meant for yourself or other people to use, and something bad happens as a consequence of it, you are responsible, not the model. So what really matters is the efficiency: I didn’t have to hire an engineer for this, or I just put in a few words and out came a fully working program. That’s fantastic. So the emphasis I’ve been trying to turn this into is what I’ve been calling tiny teams, where, aspirationally, the virtue here is that these are teams that are generating more millions in revenue than they have employees. And vibe coding is a means to an end, a means to an end for efficiency, right? Let’s say I have one person now who can do the job of five. Great, because they’re vibe coding, sure. But there are a lot of other things, and it has to get to a meaningful output. I think a lot of people are measuring vibe coding by, like, the number of lines of code they generate. And that’s just a proxy for the thing you actually want, which is useful software.
Kate Holterhoff (21:59)
I guess when I compare vibe coding to its predecessor, prompt engineering, and the things that have come after, like context engineering, it becomes more challenging for me to draw clear lines distinguishing them. I think maybe a good example of this would be Kiro. I suspect you’ve had time to play around with AWS’s new IDE. But there’s…
Shawn (swyx) Wang (22:19)
I’ve seen it,
but I actually haven’t tried it out. Is it good?
Kate Holterhoff (22:23)
It’s very good. It’s very good. Yeah, I love the idea of spec-driven development. There are some other folks, like Tessl, that are kind of in this space. But if you have seen some screen captures of it, you probably noticed that there is a section for spec-driven development and also for vibe coding. So again, they’re targeting two markets: the folks like me, who don’t want to work very hard and aren’t planning on pushing anything, you know, this isn’t going to be attached to someone’s credit card numbers; and then the folks who are really deliberate and would self-identify as AI Engineers and maybe are doing this for a living. And I did some spec-driven development myself. I tried to make a Ruby on Rails app. It did not work. And again, it was probably because I was trying to do a hybrid model: once I got the spec written, I tried to vibe code the rest.

But I think all of that is leading me to this overall sense of confusion about what is actually going on here. Sure, folks who are really keeping on top of it and doing basically the work that machine learning laborers have done for years, obviously they fit right in. They know what they’re doing; they’re working with the data. But the rest of us, I feel, are floundering. And so we’re looking for terms. And a lot of us are afraid of being taken in by something that’s not real. If we think back to the Web3 developers, the blockchain engineers, all of that, even metaverse engineers, there have been so many of these pushes, and not all of them have staying power. And so when I talk to clients about what they should label things, whether that’s a vibe coding tool or something else, I don’t know that anyone knows quite where they sit yet. And so I appreciate you speaking with me about this, because I think you’ve seen so many facets of it, and you’ve seen it play out over years, which not many folks have.
Shawn (swyx) Wang (24:07)
Sure, there’s a lot to respond to in there. The only thing I’ll say is, I’ll speak up for the Web3 and the metaverse people. I think they were just early, you know, but that doesn’t mean they’re wrong. And I think the amount of excitement people had was just relative to the fact that there was nothing else going on in tech at the time. So people were just looking for things
Kate Holterhoff (24:1)
Yeah.
Please.
Shawn (swyx) Wang (24:33)
to get excited about. And I think obviously blockchain itself has a financial element that is very rarely present among technologies. No one can get as excited about object-oriented programming when they can sort of see the coin go up.
Kate Holterhoff (24:5)
Right. Right. And you make this point in your post as well. I mean, you compare it to SRE, to DevOps engineers; there have been so many of these moves. Actually, one of my colleagues at RedMonk was a DBA, and you don’t hear as much from DBAs anymore, you know. So there’s even, like, a cool factor around these titles that we’re not talking about.
Shawn (swyx) Wang (25:10)
I think we should bring DBAs back. DBAs are very critical and we should rebrand the DBA into something cool again.
Kate Holterhoff (25:19)
It’s on you, this is you, you’re the one, you’re the cool maker. I mean, what’s it gonna be, Swyx?
Shawn (swyx) Wang (25:22)
I mean, I’m doing AI Engineering for the next 10 years. That’s my thing. You know, someone else can do DBA.
Kate Holterhoff (25:29)
I love it. Okay. Well, maybe RedMonk will add that to our docket. But you know, I think it’s so interesting that our entire industry hinges on this cool factor, but at the same time, there is this earnest nerdiness at the core of what we do, and this excitement around the details. And so maybe it comes down to marketing trying to walk the line between marketing goo and something real, something to get excited about, something that actually has staying power and that is meaningful and that’s gonna have ROI. I mean, that’s what everyone’s looking for, right?
Shawn (swyx) Wang (26:00)
I agree.
Kate Holterhoff (26:00)
Okay.
Why don’t we talk a little bit about the AI Engineer World’s Fair? It happened relatively recently. Talk to me about the motivation for creating a conference around this entire notion. I mean, that’s no small feat.
Shawn (swyx) Wang (26:15)
Yeah. Basically, I view the whole thing, everything I do, Latent Space, AI News, and the World’s Fair, as giving the industry the necessary tools and community that it needs to flourish. And to some extent, it would have happened without me. But I think with my guidance, it at least is on a more sane path, and hopefully it’s more substantial and incorporates the lessons that I’ve learned in my career from being in front end and cloud and data engineering.

So, yeah, I went to NeurIPS, which is the largest machine learning conference in the world, and I saw that NeurIPS was entirely based around PhDs and research. It is, I think it might be, the oldest machine learning conference in the world. It’s 40 years old, and every part of the schedule is based around a published paper by a grad student. And mostly it’s kind of like a meat market for grad students, where the professors stand around and say, here’s my batch from this year, these are all the papers they wrote, please hire them. And the companies and AI labs mostly go pick up the grad students.

And I felt like things are moving from research into production. I felt like a lot of interesting new work doesn’t have a paper involved, and a lot of interesting work happens at the engineering layer that is maybe not worthy of a paper, but is interesting in the products. Basically, people in an industry need a place to gather and exchange the state of the art. You have this in the AAA gaming industry with GDC, which I really, really like. You have this in KubeCon for the cloud-native people. We had it in data engineering: we had dbt’s conference, Coalesce, as well as the Modern Data Stack Conference that was done by Fivetran. So every industry has its thing, and I felt like AI Engineering didn’t have its thing. At the time, even OpenAI didn’t have a developer day. So I was like, damn, OpenAI doesn’t really care about developers enough. I’m going to make my own conference that gathers people to talk about this, specifically the engineers.

I think the other criticism of AI conferences in general is that they’re all fireside chats. They’re all talking heads. They’re all talking about their AGI timelines: yours is two years, mine’s three years, who gives a shit? It’s the engineers who are actually building real products that people use and that make money. And I think that thesis has generally proven to be true. So I think those were all important things. And personally for me, it’s nice to be able to network with all my speakers. This year we had around 300 speakers and 3,000 attendees, and it was a blast.
Kate Holterhoff (29:18)
That sounds amazing. What I really like about your framing of the AI Engineer is how you focus on the builders. And, maybe I’m putting words in your mouth, but the fact that you come from a front-end background does not seem to me separate from this move on your part. Because I can say, from talking to a lot of front-end developers and leaders in that space, that there’s something about the builder mentality and the inclusiveness that AI is enabling. Maybe more on the vibe coding front; I’m still on the fence on that.

Let me just give you an example. When I was a front-end engineer, I worked on an interactive team. And what I saw was that if our team was overworked, if we didn’t have enough resources to help with a project, we would get a designer to spin up a Webflow project. And that was just a way of empowering them to do the work themselves. Now, this was five years ago; today they could use an AI tool to do it and probably have a pretty successful result. I love this idea of bringing more folks to the table who’ve been on the margins, whether that’s UX, design, product managers, folks like that who are involved and have a vision, and they want to build. They want to be these builders that you’re characterizing. I think a lot of times they are focused on the front end. They’re focused on that usability angle. They want to experience this in a visual way. They want to see these things work.

I mean, I often talk about how I had a great experience with one of these tools being able to clone a website, because of that visual experience. It was like a single pane of glass where I saw it being created. It sounds like you’ve played around with that tool.
Shawn (swyx) Wang (30:54)
I haven’t played around with it, but I’ve heard about it. But look, I think it was just cloning HTML and JavaScript and doing a static capture, you know, so.
Kate Holterhoff (30:56)
Okay.
Sure. Yeah.
But what I liked about it is that it was focused on the visuals, in a way our industry has a history of moving toward. Not just the no-code and low-code movement, which of course nobody’s talking about anymore, but also page builders. I spent a lot of time in the CMS using things like WPBakery, building out these marketing websites. So I think there’s a long tradition of folks who are designers at heart, who have a vision of what they want things to look like and maybe just not the technical know-how, or, you know, limited technical abilities. Now they can become builders too, and I love that. And I just feel, with your personal background as a JavaScript developer, that maybe, even if that’s not how you’re framing it, in my mind that’s the hope I harbor: that there’s something about the front end that is deeply involved here. Maybe it’s the Python people too, I don’t know. We’re attracted to the AI Engineering zeitgeist. There’s something there.
Shawn (swyx) Wang (32:04)
Yeah, I will say, it’s weird, because I think people pegged me as a JavaScript or React guy, but that’s just where I got famous. I’ve now been out of front end longer than I was in front end, and it’s kind of interesting how that perception sticks around. But I do think that front end is basically synonymous with really good product, because ultimately that’s where the engineers shine compared to the researchers. The researchers are going to be concentrated in the labs, and they’re going to produce the models. Where do you stand out as a separate entity that is not the models eating the world? It is when you can produce comparatively good product, and most of product is going to be front end. There’s going to be a bit of back end with that, but most of product is going to be front end, where you can introduce affordances and expose the model in interesting ways that sort of mind-meld the domain and the user with the model. And I think that’s where, comparatively, the JavaScript people have an advantage over the Python people, right? Who historically have been the data scientists and the machine learning people. And this is one of those things where, yeah, the AI Engineer is going to have comparatively more JavaScript in them than the ML engineer. And the ML engineers don’t get it; they don’t value JavaScript at all.
Kate Holterhoff (33:23)
Oh man. Well, I’ve got to ask you this, because I did a research project on this new batch of front-end observability tools. And anyone who looks up front-end observability can really trace it to your conversation with Charity Majors, like, five years ago. So, well, yes, I’m telling you, it is. I don’t know that you coined it, but it is one of the…
Shawn (swyx) Wang (33:37)
Really? What?
No, I didn’t.
Kate Holterhoff (33:46)
It’s one of the earlier conversations that I can find where folks are sussing out what observability means. And so I point to you as the voice of the front-end engineer in the observability space, trying to figure out what the difference is between Fullstory and Honeycomb or something.
Shawn (swyx) Wang (33:53)
Wow.
Yeah, LogRocket, Honeycomb, yeah.
Kate Holterhoff (34:04)
But now we’ve got this new front-end observability market. You’ve got Grafana Cloud front-end observability, Honeycomb for front-end observability, and Observe front-end observability. In that interview, I know you did identify, at that period in your life, as a JavaScript engineer. So it’s just interesting to see how things have shaped up. I don’t know. Do you follow that at all anymore, or is that old hat now that you’re out of it?
Shawn (swyx) Wang (34:20)
Mm-hmm.
I don’t follow that as much. Don’t forget Sentry and all the other guys. I think, like…
Kate Holterhoff (34:3)
Okay.
Absolutely. They don’t call it front-end observability, though, so I actually exclude them on purpose.
Shawn (swyx) Wang (34:39)
Okay, all right, I see what you mean. I think they’re all of a kind to me. And obviously Charity is a force of nature, and her take on observability is always worth listening to, and I think just right. But there are many facets to observability, and you can get there with a number of tools; I think she would support that too. So I don’t care so much about what tool you use, so much as whether you have taken care of each of these aspects. And I think that’s part of engineering. Engineers who take their work seriously care about things like observability. They don’t just care about the happy path; they care about the unhappy path and about getting feedback from their users in ways that are not just feedback forms. They actually care about load times, and what percentage of the time this thing errors, and all that stuff. And it’s surprisingly not well documented, particularly in the React world. I think it’s improving, but probably not as quickly as the observability folks would like.

But I actually come at this more from a product management and user advocacy point of view. As a user, I use apps all day long that have terrible observability. And I know this because they have bugs all day that never get fixed, and I’m sure the dev team doesn’t even know they exist. So yeah, that’s terrible observability right there. I come at this from the PM hat of: let’s get good instrumentation in there so we know what our users are feeling and we can fix things, because a lot of these bugs never even get reported. Users just go on with their day, or they just think that’s how the web works, when no, it actually could be a lot better. And probably it’s even a five-minute fix for us; we just don’t know it’s a problem. And that’s bad. That’s not even professional.
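To make the instrumentation Shawn describes concrete, here is a minimal sketch in TypeScript of the kind of front-end error and load-time reporting he is pointing at. The /api/telemetry endpoint is a hypothetical stand-in; a real team would more likely wire up a vendor SDK such as Sentry, Honeycomb, or Grafana than hand-roll this.

```typescript
// Minimal front-end telemetry sketch. The /api/telemetry endpoint is
// hypothetical; swap in your collector or vendor SDK of choice.
type TelemetryEvent = {
  kind: "error" | "unhandledrejection" | "page-load";
  message?: string;
  stack?: string;
  durationMs?: number;
  url: string;
  ts: number;
};

function report(event: TelemetryEvent): void {
  const body = JSON.stringify(event);
  // sendBeacon survives page unloads and doesn't block the main thread.
  if (!navigator.sendBeacon("/api/telemetry", body)) {
    // Fall back to fetch if the beacon is rejected (e.g. payload too large).
    void fetch("/api/telemetry", { method: "POST", body, keepalive: true });
  }
}

// Catch the bugs users never report: uncaught runtime errors...
window.addEventListener("error", (e) => {
  report({
    kind: "error",
    message: e.message,
    stack: e.error?.stack,
    url: location.href,
    ts: Date.now(),
  });
});

// ...and rejected promises, which "error" listeners alone will miss.
window.addEventListener("unhandledrejection", (e) => {
  report({
    kind: "unhandledrejection",
    message: String(e.reason),
    url: location.href,
    ts: Date.now(),
  });
});

// Load-time reporting, so caring about load times becomes measurable.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (nav) {
    report({
      kind: "page-load",
      durationMs: nav.duration,
      url: location.href,
      ts: Date.now(),
    });
  }
});
```

Even this much closes the loop he describes: the five-minute fixes stop being invisible.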
Kate Holterhoff (36:37)
Right. I’ve heard some horror stories. So I think you’re right on. Is there an AI story around observability?
Shawn (swyx) Wang (36:47)
Huge. There are at least 80 companies in LLM observability. All of them are basically old-school observability people coming to AI and going, all right, what can I build here in AI? Oh, I know, I’ll do a logger and traces around your LLM calls and your agent calls. So there are many, many of them. I think the sort of golden era of expansion was the last three years, maybe, and this year they’re consolidating. Weights & Biases got bought by CoreWeave. Humanloop just announced that they’re shutting down. And I think there will just be more of these; that 80 is gonna come down to two or three. So it’s gonna be pretty brutal in that space. May the best one win, right?

And observability is weird, because everyone just wants metrics, logs, traces, and whatever the fourth thing is that Charity Majors wants. But I mean, I think that’s right. I think if you have Datadog, and everyone in your company already uses Datadog, you’re probably gonna use the Datadog thing, unless the money’s an issue, and then you go ClickHouse or whatever. But there’s a whole range of solutions, so many choices. And there are two problems. One is, as a buyer, which one do you buy? That’s an evaluation of the products and your needs and your team and your existing context. And then the second layer is, as an investor, which one will be the winner-take-most? Which one will be the new Datadog? Those are just different levels of difficulty in effectively forecasting the performance of the team and the evolution of the market. I don’t have any particular insight there. I have tried.

I think the Braintrust approach is really interesting. Braintrust was a sponsor of our evals track. They’ve come at it from the point of view that evals will lead everything, and observability will come in as a natural part of evals. Whereas other people have different points of view, where it’s the agent traces that lead everything, and then you add evals on top of that. These are all very minor nuances; they’re all functionally equivalent, and they all build each other’s features over time. So it’s just really hard to cover this space. I don’t envy your job as an analyst if you’re trying to cover this, because I’ve tried. I wanted to start, like, a Wirecutter of AI, where I got a bunch of people together to do personal playthroughs and testing and write them up. And ultimately I killed it, just because everyone was bringing their own personal biases and I didn’t really agree with their reviews. I was like, I don’t think this is fair to the companies being evaluated.
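As a rough illustration of what “a logger and traces around your LLM calls” amounts to, here is a minimal TypeScript sketch. The LLMCall signature and the console-based logTrace sink are hypothetical stand-ins; the commercial tools Shawn mentions layer evals, token counts, cost, and dashboards on top of this same wrap-and-record pattern.

```typescript
// Hypothetical shape of an LLM client call; real SDK signatures differ.
type LLMCall = (prompt: string) => Promise<string>;

interface LLMTrace {
  traceId: string;
  prompt: string;
  response?: string;
  error?: string;
  latencyMs: number;
  ts: number;
}

// Stand-in sink: a real product would ship this to its own backend.
function logTrace(trace: LLMTrace): void {
  console.log(JSON.stringify(trace));
}

// Wrap any LLM call so every prompt/response pair is recorded,
// including failures, which is where observability earns its keep.
function withTracing(call: LLMCall): LLMCall {
  return async (prompt) => {
    const traceId = crypto.randomUUID();
    const start = performance.now();
    try {
      const response = await call(prompt);
      logTrace({
        traceId,
        prompt,
        response,
        latencyMs: performance.now() - start,
        ts: Date.now(),
      });
      return response;
    } catch (err) {
      logTrace({
        traceId,
        prompt,
        error: String(err),
        latencyMs: performance.now() - start,
        ts: Date.now(),
      });
      throw err;
    }
  };
}
```

The design choice worth noting is that the wrapper changes nothing about the call itself, which is why so many vendors can offer drop-in instrumentation for the same underlying clients.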
Kate Holterhoff (39:15)
Yeah, you have to be selective in what you’re looking at. I don’t think anyone is going to be able to cover it all. And yeah, consolidation, you hit the nail on the head there, because I don’t think we’re going to see all these companies next year, or in five years, whatever. It’s just moving too fast. We often compare this to the cloud era, where we thought cloud moved fast, but it’s nothing compared to what we see around AI. I mean.
Shawn (swyx) Wang (39:34)
It’s… oh my God, it moves so fast. It’s crazy.
Kate Holterhoff (39:38)
Yeah, I mean, look at MCP. It didn’t even exist, what, a year ago, and now everyone has to have an MCP server. I mean, it’s remarkable. It’s fun, but the folks who’ve been doing this longer than me, James and Steve of RedMonk, often talk about how it’s unprecedented. So I don’t know what…
Shawn (swyx) Wang (39:45)
Yeah.
Yeah, I think this is in line with different waves of tech. You know, I’m not that familiar with the founding story of RedMonk, but I think if you were covering the early consumer internet in the 1990s, you maybe felt this way, you know? It’s just that this happens to be a bigger wave than the others, with smaller waves inside the bigger wave.
Kate Holterhoff (40:19)
Yeah. RedMonk’s been around for 20 years, so not quite that long. It’s different in a lot of ways, although we are able to apply some of the broader things that we’ve always argued. So, you know, the open source conversations are there: what does openness mean today? And also just the focus on developers: what is the developer experience like with these tools, and how are developers going to be contributing to these conversations? And of course, I’m focused on front end, design, that sort of stuff, but we take a variety of approaches as an
Shawn (swyx) Wang (40:42)
Yeah.
Kate Holterhoff (40:48)
institution. I know, every day is a challenge, though. It is no easy task.
Shawn (swyx) Wang (40:55)
Yeah, my typical sound bite on this one, well, first of all, on the open source side: open source has kind of lost in AI. Everyone’s going proprietary. I mean, this might be a hot take depending on where you come from, but it’s not at all a hot take in Silicon Valley. I think, again, this is one of those things where Git is open source and won, but before Git you had BitKeeper. And before BitKeeper there was…
Kate Holterhoff (41:10)
Please.
Shawn (swyx) Wang (41:24)
I don’t know, Mercurial, SVN. And I think people just don’t understand: we’re in a much earlier phase of this industry, and you can’t force this along. We’re just gonna go through that churn, that thrash. Closed goes first, because some people have to make money and figure out the patterns, and then open comes in. And particularly with models, open models are just financially not going to keep up with closed models, because of the bitter lesson and the returns to scale of putting a trillion-dollar cluster into a model, which OpenAI is going to do at the end of this year. So yeah, you’re just not going to have that in open source.

On the developer focus, that one I agree a lot more with RedMonk on, to the point that I have this fun sound bite to consider: which of these two jobs do you think will last longer, the research scientist, the LLM research scientist, or the AI Engineer? My argument is actually that the engineer will be the last job. If you’re actually worried about job security, you should be an AI Engineer, because the AI Engineer will be the job that automates all the other jobs. It’s not the research scientists, because research scientists are done when they train the model. But to automate the research scientist’s job, it will be the engineer again. So because our job is the last mile, we’re definitionally the last job.
Kate Holterhoff (42:50)
I think that qualifies as a hot take. I’m going to have to think about that one. This is my first time hearing it; if it’s a sound bite, it’s new to me. So I’ll have to chew on that one a little bit. I don’t know. I was a former academic, so
Shawn (swyx) Wang (42:56)
Yeah, I don’t say it that often.
Kate Holterhoff (43:00)
I tend to stand up for the research scientists in their labs, not getting the credit, not getting invited to the Silicon Valley parties.
Shawn (swyx) Wang (43:08)
Yeah.
So ultimately, to sort of play this through: the research scientists do a lot of intuition and inference by studying the logs of their training runs, and then they set up the next experiment to improve the models. Ultimately, this will all be automated. Labs are actually tracking that sort of recursive self-improvement of models as a benchmark. And I think ultimately what you’re going to have to do is engineer a research scientist, an AI research scientist that is not human, that will self-improve better than we can. And I think that’s the ultimate bootstrapping sequence toward these things ultimately becoming beyond our comprehension. We already can’t really interpret what the weights mean, what the layers mean. We can kind of manually map them to transformer circuits, which is what Anthropic is doing for interpretability. But these are all really low-dimensional projections for our small, primitive human brains. Ultimately, I think, yeah, they should be automated, and the person building that is probably an engineer, not a scientist.
Kate Holterhoff (44:12)
Man, I love ending this conversation on the notion of my small human brain. So I think let’s do it. Thank you so much for coming on the MonkCast today, Shawn. This has been absolutely amazing. If folks want to hear more of your hot takes and insights on the AI Engineer, where would you direct them?
Shawn (swyx) Wang (44:20)
Hahaha
Yeah, my pleasure.
Yeah, so the podcast and newsletter are called Latent Space. Just go to latent.space; we like the short URLs. And then the conference, where I do a keynote each time and obviously there are a few hundred speakers, is ai.engineer as a website. And you can also find all that on YouTube, Instagram, Twitter, all that stuff.
Kate Holterhoff (44:5)
All right, all the places. So let’s go ahead and wrap up there. It’s been an absolute pleasure speaking with you, Shawn. Super excited to hear your thoughts on AI Engineering and a little bit on front-end observability, which is a personal hobbyhorse of mine. So thank you for that. Again, my name is Kate Holterhoff, Senior Analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you’re watching us on YouTube, please like, subscribe, and engage with us in the comments.
Shawn (swyx) Wang (45:19)
Thanks, Kate.