A RedMonk Conversation: Do Frontend Developers Want Frontend Observability? (with Todd Gardner)

Todd Gardner, CEO of Request Metrics and TrackJS, discusses “Frontend Observability” with RedMonk senior analyst Kate Holterhoff. In an attempt to answer the question “Is Frontend Observability Hipster RUM?,” Kate and Todd delve into the concept of frontend observability by exploring its definition, historical context, and the marketing hurdles involved in educating web developers about observability tools. Their conversation highlights the disconnect between observability terminology and the practical needs of developers (from SREs to frontend engineers), emphasizing the need for a user-centric approach to positioning these products for the market.

Transcript

Kate Holterhoff (00:12)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk, and with me today is Todd Gardner, CEO of TrackJS and Request Metrics, which are web analytics, monitoring, and observability companies targeted at web devs and frontend engineers. Todd, thanks so much for joining me on the MonkCast.

Todd Gardner (00:32)
Thanks for having me, Kate, I appreciate it.

Kate Holterhoff (00:35)
All right, I’m so excited to have Todd on here. Today we are going to talk about frontend observability, which is quite possibly a term that I learned from you, Todd. I am not entirely sure where I heard it first, but you might be to blame.

Todd Gardner (00:49)
Yeah, yeah, I’ve been throwing that word around for a while, and I have this weird love-hate relationship with it where, when I first heard it, it was great. I was like, this is an awesome word about, like, combining a bunch of things that I care about together. But I also kind of hate it because it, you know, everybody has a different meaning for it, and those meanings all seem to strangely correlate with whatever that particular vendor is selling.

Kate Holterhoff (01:13)
I know. When I did research for my recent post, “Is Frontend Observability Hipster RUM?,” on what makes frontend observability distinct from predecessors like Real User Monitoring, what it’s even doing, and who the potential buyer and user might be, yeah, I ran into this as well. I think there’s a lot of confusion about whether “Frontend Observability” is doing anything different than what we’ve been doing all along or whether it is just marketing goo. What’s your sense?

Todd Gardner (01:42)
Mm-hmm.

I think, so I mean, it depends on what we mean by frontend observability, or client-side observability, because I’ve heard both terms. It depends on what we think it is, right? Because the word itself came out of the DevOps-y space, and some vendors had very specific definitions of what it was. It had to be about logs, traces, and spans. And if it wasn’t about those things, then it wasn’t observability. But that doesn’t translate to the client side really at all.

Kate Holterhoff (01:53)
Okay.

Todd Gardner (02:10)
So like, if that’s what it is, then like, I think it’s just marketing spam. I kind of took it back to like, when I applied it to the frontend, I thought about like, well, what are they, what is like that one sentence snapshot definition that we were talking about, which is like understanding how a production system is operating. And if that’s the case, then I like to think of that observability word in my head was like, it’s about everything. It’s Real User Monitoring. It’s error monitoring. It’s performance monitoring. It’s analytics. It’s

Kate Holterhoff (02:10)
Okay.

I see.

Todd Gardner (02:39)
product analytics, it’s traffic, it’s security, it’s all of these things to understand how is my app running out in the end user’s device? Can I understand is it running effectively? Is it running fast? And is it accomplishing the goal? And so to me it was like this, this is this umbrella that we’re gonna bring all of these different facets of understanding the client side experience together under one roof. But.

Kate Holterhoff (03:04)
Right.

Todd Gardner (03:04)
I don’t know who wants that other than me. Like, I want that. But I don’t know anybody out there other than me who has, like, you know, money to spend on things. I don’t know if anybody wants that.

Kate Holterhoff (03:09)
That’s awesome. And what I like is that you have defined what it is in your mind. And it is… Katamari Damacy, that video game? It’s a ball and it rolls around and takes everything in? Are you familiar with this? Okay. Yeah, in any event, it sounds like observability for you. It just rolls around and picks up all the different things and smooshes it together and…

Todd Gardner (03:26)
Mm-hmm. Yeah, yeah.

Kate Holterhoff (03:38)
sort of making sure that that client experience is as good as it possibly can be. That’s the vision I have when you talk about observability.

Todd Gardner (03:41)
Yeah.

Uh-huh, I’d agree with that.

For me, it meant that top-level umbrella of, if I want to understand, is my web app good for the user? Is it performing the way I want? I need all of these different kinds of technology concepts to really understand that. Like I could have an app where, you know, all the logs are good and the performance is great. Fine. But the user’s never clicking on the button that I want them to click on. Well, that’s kind of an observability thing in my mind. Although historically it would be a marketing responsibility, it’s all kind of the same thing to me.

Kate Holterhoff (04:20)
I think, so when I define it, I’m trying to keep it to a sound bite, because I guess I’m trying to think about it historically and also take into account all of the different folks who have defined it in very specific ways. So when I think about it, I just think of monitoring the frontend by way of the UI, browsers, and end users. OK. All right.

Todd Gardner (04:41)
Mm-hmm. Yeah. And I would say, if that’s the case, I don’t think those two definitions are necessarily separate. I think that we’ve been doing observability on the frontend for 25 years. We’ve just been doing very specific slices of it. The first one, Urchin Analytics, was

Kate Holterhoff (04:49)
No.

Todd Gardner (05:02)
a simple kind of observability. And then the first error monitoring tools, like Track.js and Errorception and all the original ones, those were observability. And then newer RUM tools, monitoring the real user experience, those are observability. They were all little slices of it, you know, like I only see part of the whole picture, but that’s all that we could ever hope for given the technology and how the world worked at the time.

Kate Holterhoff (05:30)
Yeah, and I appreciate you giving us a history of this. Because the terminology is so fluid and because there are so many different ways of characterizing what frontend observability now encompasses, it’s hard to research, because there are so many tools out there that have been doing some oblique things. And now there are also new things that need to be tracked that didn’t exist in the past, like Web Vitals. So…

Todd Gardner (05:51)
Right. Yeah.

Kate Holterhoff (05:53)
It’s very challenging to try to put timestamps on anything or to try to figure out what is and what isn’t frontend observability.

Todd Gardner (06:03)
Right, right. Because, yeah, the challenge of the frontend has always been that it’s not an environment that we control, right? It’s somebody else’s computer, right? I used to say this when I was talking about error monitoring: error monitoring on the server side is easy. You built that server. It’s running the versions that you say it’s running. It’s always running that. And if an exception happens in your server, you know it’s something you screwed up

either you didn’t plan for it. It might not be catastrophic, but it’s every single exception in a server environment you should probably look into. But on the client side environment, you have no control of that. That is a browser that somebody else downloaded, somebody else is running it. They might have manipulated it in some way. They might have put browser extensions in some way. There might be malicious network actors between you and them. so I would say, I mean, based on our experience, like less than 10 % of exceptions that happen on the client side,

are something that you care about. It’s anything to do with you at all. It’s more likely something with that particular user’s device or a malicious actor of some kind or a bad network or something like that. And so that difference, that tension of that you can’t fully trust that other environment and you have to kind of discern the signal from all the noise that happens there is the big challenge of this space. And what all the tools

like error monitoring tools and performance monitoring tools and analytics tools, they all have to fight that headwind of like, I can’t trust the source of my data, not completely. I have to like throw away a certain percentage of that data because it’s not real.
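For readers who want to see what that triage looks like in practice, here is a minimal TypeScript sketch of the kind of noise filtering Todd describes. The patterns, helper name, and the /errors endpoint are illustrative assumptions, not TrackJS’s actual implementation.

```ts
// Sketch: triage a browser error before reporting it (illustrative only).
const NOISE_PATTERNS: RegExp[] = [
  /^Script error\.?$/i,   // opaque cross-origin errors carry no detail
  /ResizeObserver loop/,  // a widely seen, generally benign browser warning
];

function isActionable(event: ErrorEvent): boolean {
  // Errors thrown from browser-extension code come from the user's
  // environment, not from your application.
  if (/^(chrome|moz|safari)-extension:/.test(event.filename)) return false;
  // Known-benign or opaque messages carry no actionable signal.
  return !NOISE_PATTERNS.some((pattern) => pattern.test(event.message));
}

window.addEventListener("error", (event) => {
  if (!isActionable(event)) return;
  // Hypothetical collection endpoint; sendBeacon avoids blocking the page.
  navigator.sendBeacon(
    "/errors",
    JSON.stringify({
      message: event.message,
      source: event.filename,
      line: event.lineno,
    })
  );
});
```

Real products layer many more heuristics on top, but the shape is the same: classify first, report second.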

Kate Holterhoff (07:40)
Right. And I know synthetics get brought into this conversation pretty often. Do you have a sense of how useful those can be for testing that user experience?

Todd Gardner (07:44)
Yeah.

they’re super good. Synthetic testing is valuable in that it’s got a very high signal level, signal to noise. You set up a robot somewhere on the internet to go and do a thing, and you can control it. And so you can test the happy path through your application. You can test that when it’s this version of Chrome on this fast a network and they do X, Y, and Z, then I can guarantee that there’s no errors and it performs well and it does all those sort of things.

Kate Holterhoff (07:51)
Okay.

Todd Gardner (08:16)
So that’s great. It’s super valuable to have that data. It’s just not reality because your users aren’t on that version of Chrome on a super fast network doing the order of things that you need to have. And so you need both sides of it. Like synthetic testing can tell you a whole lot about a little bit of use case and you need the real side, Real User Monitoring or observability or whatever you want to call it. You need that to understand like what users actually do but you need to understand there’s a lot of noise in that as well.

Kate Holterhoff (08:47)
Yeah, and I think when I spoke with David Cramer at Sentry for the post, one of the things that we spent some time discussing was just the challenge that there isn’t a clear signal on the frontend in the way that there is on the backend. And what we mean by that is that things don’t necessarily break and not work entirely. They can just be a little weird, or not optimized, or, you know, things don’t load in quite the way that you anticipated.

Todd Gardner (09:10)
Yeah. Yeah.

Kate Holterhoff (09:15)
And so there’s not just a, it’s not like Boolean in the sense of works does not work. There’s all these shades of gray here which are really frustrating and hard to track.

Todd Gardner (09:19)
Mm-hmm.

Yeah, exactly right. That’s the same thing I was touching on: an error on the server is something that you care about; every single one means something is bad, Boolean false. But an error on the client side might be nothing. It might be totally transparent to the user and it doesn’t matter. It could be, the button’s not in the right place anymore, or it’s the wrong color, or whatever. Or it could be a catastrophic failure. And it’s really hard to know the difference between the three things it could be. It’s either

not broken, or it’s broken, or maybe it’s broken some of the time. You don’t really know unless you have, you know, the visibility and start recreating and understanding more of those kinds of errors and why they happen.

Kate Holterhoff (10:07)
So we’re thinking about the challenges here. We’re discussing the fact that a lot of this has to do with the user’s machine, the user’s browser, network issues, like all of these things that the developers can’t control necessarily, but they still have to monitor. And so this creates these outrageous issues that I think we’re still grappling with here.

Todd Gardner (10:22)
Mm-hmm.

Kate Holterhoff (10:29)
So when would you say people started using the terminology of observability, which kind of emerged from, like, microservices, you know, from the distributed systems era? It’s certainly an SRE type of term. When did you start seeing people take this concept of the three pillars, right, and apply it to what’s happening on the frontend or the client?

Todd Gardner (10:47)
Well, I’ve heard people talking about it since probably like 2018, 2019. Like I’ve heard people like, like you’re absolutely like, that’s where I heard observability come from too. I heard it come from the distributed DevOps kind of space and starting like, you know, 2018, 2019, I was at conferences and I’d be talking to people and

Kate Holterhoff (10:53)
Okay.

Todd Gardner (11:11)
I’d be talking to somebody talking about observability and they’d be like, but Todd does client-side monitoring. Like that’s observability. How do we like fold this in? And we would, you know, hash out ideas of what these things could look like. I have even like a couple of old blog posts where I actually use the word observability in talking kind of directly about indirectly about client-side. And so like this, this idea is there. It’s just, there wasn’t a really.

comprehensive story about what does client-side observability mean, and how does it fold in with the rest of that ecosystem. And I would say I’ve invested a whole lot into trying to make that work. With one of my companies, Request Metrics, we ran for about 18 months on a whole strategy that we were going to be client-side observability. That’s what we were going to try and do: fold in analytics and performance and error monitoring and those things together. And we ended up backing away from that because

I don’t think there is that super clear story. Like it was, we were trying to be client-side observability, but there was no clear hook-in to, I don’t know, server-side observability, for lack of a better term. And more than that, like frontend people just didn’t want it. Like there was the silos that exist in that frontend ecosystem around the engineering team.

which is separate than like the security team, which is separate than the DevOps team, which is separate than the like marketing teams. And they didn’t really want to share data. They didn’t have a shared budget to like see the value of bringing any of these things together. And so we ended up backing away from it and that word entirely, I don’t really even use that word anymore because frontend people tend not to know what it means or tend to immediately categorize it as, that’s a big core know, DevOps-y thing. That’s not what I’m doing.

Kate Holterhoff (12:57)
OK, that is super helpful. All right, so let’s dig in there, because I think what’s interesting is that observability is not only the set of tools and specifications, something like OpenTelemetry, that encompass this space, but also a marketing term. And it is an organizational term. And it has to do with, yeah, who’s going to be paying for these tools? And so your experience is that you were trying to market to frontend buyers, if I’m understanding correctly. Great.

Todd Gardner (13:22)
Yeah, yeah, totally.

Kate Holterhoff (13:24)
And so what sort of challenges did you notice? Not only did they hear the word observability and maybe… Would you say they felt alienated by that? You know, they just felt like this isn’t something that they could grok?

Todd Gardner (13:35)
Absolutely. Absolutely. So I would start in with the tagline of, we’re client-side observability. Because that’s the term I was using; I wasn’t saying frontend, I was using client-side. But those terms, I think, are largely interchangeable. So I would come in and say, hey, we’re client-side observability. And I would have to do a ton of education up front, just saying, this is what that word means to me. This is what we’re solving. This is why it’s different than RUM, or why it’s different than error monitoring, or why it’s different than

Kate Holterhoff (13:43)
Okay.

Todd Gardner (14:01)
anything else you might have done, and why it’s better. And if I have to educate a buyer, if I have to spend so much energy educating a buyer about why this is even important, I have a strong headwind to go up against. And I’m a small business. I don’t have a deep marketing budget or big sales teams to do that and try and educate a market about why this is interesting. So me, I was never going to succeed at that level of education. I needed to come in with, like, a

Hey, you have problem A, I have a solution to problem A. Here’s how we can help. But nobody saw client-side observability as a problem yet. Like nobody was looking for that. They were looking for RUM. They were looking for performance monitoring. They were looking for error tracking. They were looking for analytics or security scanning or something like that. Nobody was looking to integrate that data together.

Kate Holterhoff (14:52)
So today, we’ve got a number of vendors who maybe together do have the capital to do that sort of education move. So I’m thinking of Observe has a frontend observability product, so does Honeycomb and Grafana. And so with these new companies that are entering the space, do you think that if they make enough certification programs and get enough dev rel folks at frontend conferences like Render,

Todd Gardner (15:02)
Mm-hmm.

Kate Holterhoff (15:20)
that they’ll be able to approach that headwind in a concerted way and educate the market so that frontend engineers will be more receptive and see themselves as the appropriate users of observability tools?

Todd Gardner (15:31)
I think it’s possible. If there’s a swelling of like larger education that this is interesting and valuable, because I think that’s the real struggle is like, why is integrating this data valuable? And I think a lot of the vendors that you mentioned and others are treating client-side observability less as like trying to integrate the different facets of the client side and more trying to just, how do we stamp the pattern that we’ve had success with on the backend?

How do we stamp that onto the frontend so that we have another thing to sell our existing clients, another checkbox, to integrate these things together? Not that it’s solving a really unique and interesting problem on the client side that isn’t already solved. And what I would love to see is actually integrating the different kinds of client-side data together to have a better understanding of that, in addition to integrating it with the backend.

Kate Holterhoff (16:24)
Right, right. And I’m even thinking of, in terms of that education, when I look at observability tools, you open up the dashboard, and there’s all these graphs immediately. And that is just not what frontend engineers spend a lot of time looking at. mean, maybe in the console, if you go into the network and try to figure out some JavaScript issue, you’re going to encounter a few. But in general, I could see that being very intimidating.

Todd Gardner (16:33)
Mm-hmm.

No.

Yeah.

Kate Holterhoff (16:49)
Are these vendors gonna have to rethink the dashboard experience?

Todd Gardner (16:54)
I think so. I mean, if they want to appeal to the frontend engineer, I think they have to, because we already struggle a little bit with that. Like with Track.js, we have different kinds of audiences. So sometimes it’s an engineering manager, and they want a high-level view; they want the charts and graphs about how many errors are happening in the user experience. But the engineers frequently, they don’t want any of that. They just want to see, hey, what is the biggest error on my site right now? And how do I fix it?

That’s what they want. And that’s what we’ve always gravitated towards. That’s how it’s always been our priority is, how do we give the actual engineers what they need to fix the thing? And there’s too much data in a lot of tools. Oftentimes, we’ll win in a bake-off versus a much bigger company just because they’re giving too much data. There’s too much, too many roll-ups, too many charts, too many graphs, too many things.

that like, it’s just hard. Like, I shouldn’t need to be trained on how to use a tool to debug my own system.

Kate Holterhoff (17:57)
And can we bring in the issue of mobile development here? So remind me, Request Metrics and TrackJS don’t deal with mobile currently. Okay, yeah.

Todd Gardner (18:07)
No, I mean, I’m happy to talk about that, but like I don’t I work super deep in that. Like there are a lot of common patterns between mobile challenges and and web challenges. But like My products don’t work in mobile beyond when mobile is just a wrapper around web.

Kate Holterhoff (18:24)
Yeah, I’ve written a little bit about PWAs and where we’re going with that. But yeah, so, but for like native development, know, wanting the sort of React Native experience specifically. around that topic, I’m curious what your thoughts are in terms of like marketing to those mobile devs who…

I mean, back when I was a frontend engineer, we were doing both because we were creating responsive websites that were intended to be consumed in all device sizes, including tablets. So it seems like there is some overlap there. Do you anticipate that mobile developers are also going to be, I guess, feel a little alienated from the term observability in the same way that folks who typically are JavaScript developers, web devs, have felt a little hesitant to adopt this type of tooling.

Todd Gardner (19:13)
I would think so. And I think that’s partially because of where our focus tends to be. I’ve been a client-side person for a very long time, and it’s just part of how we think about problems, how we approach them. When I have an engineering problem, the first thing I start with, when I pick up a pencil and start sketching something out, I’m drawing the UI. I’m drawing how I think it’s gonna work and what the interaction is gonna feel like. And I’ll figure out the technical details later.

Kate Holterhoff (19:14)
Okay.

Todd Gardner (19:41)
And that’s also how, like, I think a lot of us, a lot of frontend people think about problems and it’s how we think about our systems. And so if it’s not organized in a way of like, it’s the user first, the user is what’s important. How do I give the user the best experience and guide them down the journey I want them to take and like get value out of the product? I don’t really think about it in terms of like servers and flow and traces and stuff like that.

unless it’s grounded in like, this is the user action that I’m trying to like make better. And I think, I think that’s very hard. I think the whole observability space is very much like a tech details up point of view, which is why it’s so hard for a lot of frontend people to understand it because we’re very user experience down people. And so like, there’s just a bit of a mismatch in what we think is important.

Kate Holterhoff (20:37)
That’s really heavy. I like that. I’m thinking of mobile first development and how that Anticipated a lot of the the way that I would approach projects when I was handed a PSD file and told to go code it out And yeah make it Make it make it work. I know so we have this sort of marketing issue. What

Todd Gardner (20:49)
Yeah, make the web match the PSD. I remember that.

Kate Holterhoff (20:57)
lessons have you taken toward, I guess, marketing to web developers? It sounds like you have a real sense of how they work.

How do you market to those developers to make sure that they understand that this product is for them?

Todd Gardner (21:10)
Yeah, so we’re a very different animal. We are a small company playing in a sea of big companies. And so we do things a little bit differently than most of the other ones do. So I have two products. Track.js does error monitoring and Request Metrics does performance monitoring. And the first thing is, well, why are those two products? Many of our competitors on both sides do both things together in one platform. And for us, it’s just about like,

What we learned, the big lesson we learned from our client-side observability attempt, is that when you have too many buyers, there’s nobody to sell to. I was trying to convince too many different people of too many different things, and there were competing priorities, and we just couldn’t get that right level. And so we decided to keep the two products totally separate, because the person who goes and buys an error monitoring product tends to be different than the person who comes and buying a web performance product.

If I look at our customers on both sides, they’re very different companies with different priorities. An error monitoring user is typically like a PWA. It’s a rich client application with a high value per user, usually behind a login wall. And they’re doing something interesting with a big JavaScript framework to build some sort of ambitious web app. But the people who care about performance tend to be content sites, marketing sites, media sites,

publishing sites, e-commerce stores that live and die based on performance, where an individual user isn’t as valuable to them, but the performance is valuable in aggregate. They care about the Core Web Vitals. They care about their SEO. They care about making sure their bounce rate is super low. And so they’re different people that I’m selling to, and they rarely tend to overlap. And then

There’s just different technological limitations. So for example, on Track.js, on error monitoring, my priority is I want to catch every error. If you install our product, I want to make sure that I capture every error on your website. In order to do that, there’s a performance impact, because I have to be the first JavaScript on the page. If I’m not the first JavaScript on the page, then I can’t tell you errors that happened before I exist. But there is a performance impact. Whereas Request Metrics is like,

the goal is, I never want to slow you down. I never want to be the cause of a performance issue. And so the system is built in such a way that it minimizes performance impact in order to capture that performance data. Request Metrics wouldn’t be as good at capturing errors as TrackJS. And TrackJS is not as good at capturing performance as Request Metrics, because it would hurt performance. And so there are both these different buyers and different technical approaches to solve those specific things.

Whereas I think if we combined the products together into a larger suite, now I’m building a harder sell on both sides. I have to charge more because I have a bigger product, and it’s a bigger surface area to learn. So I’ve now built a more complicated thing, and chances are a lot of my customers are only going to care about half the functionality. And I’m making both sides worse: I’m slowing down the product and making it more likely to cause some sort of impact in order to capture the data that I want to capture.
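To illustrate the “first JavaScript on the page” trade-off Todd mentions, here is a rough sketch of an early error queue: a tiny snippet that would run before any application code so startup errors aren’t lost. The names are hypothetical, not TrackJS’s actual agent.

```ts
// Sketch: a tiny queue installed before any application script, so errors
// thrown during startup are captured and handed to the full agent later.
// In practice this would ship as a small inline <script> in the <head>.
interface QueuedError {
  message: string;
  source: string;
  line: number;
}

const earlyErrors: QueuedError[] = [];

// Must be registered before any other JavaScript runs.
window.addEventListener("error", (e: ErrorEvent) => {
  earlyErrors.push({ message: e.message, source: e.filename, line: e.lineno });
});

// The full monitoring agent calls this once it loads, draining the backlog.
export function drainEarlyErrors(report: (err: QueuedError) => void): void {
  earlyErrors.splice(0).forEach(report);
}
```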

Kate Holterhoff (24:28)
That’s really interesting. Are either of them Real User Monitoring products?

Todd Gardner (24:33)
Yeah, I mean, depending on how you define Real User Monitoring, they’re both Real User Monitoring products. They just do different things. I would say Request Metrics is much closer to what you typically think of as a Real User Monitoring product. It tracks the performance over time: here are all your Core Web Vitals, and here’s the performance of each individual user, and all that sort of thing. Track.js doesn’t record that performance data.

Kate Holterhoff (24:36)
Okay. All right.

Todd Gardner (24:57)
But it does show you the real user errors: here’s every user that came in and the errors they experienced. And so they’re both kind of Real User Monitoring products, just with different emphases based on what the customer’s coming in with, what problem they’re looking to solve.
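For a sense of what the Core Web Vitals side of RUM looks like in code, here is a minimal sketch using Google’s open-source web-vitals library. The /vitals endpoint is a placeholder, and this is not Request Metrics’ actual implementation.

```ts
// Sketch: report Core Web Vitals from real user sessions.
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";

function report(metric: Metric): void {
  // sendBeacon survives page unload, so late-arriving metrics still ship.
  navigator.sendBeacon(
    "/vitals", // placeholder collection endpoint
    JSON.stringify({ name: metric.name, value: metric.value, id: metric.id })
  );
}

onLCP(report); // Largest Contentful Paint
onINP(report); // Interaction to Next Paint
onCLS(report); // Cumulative Layout Shift
```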

Kate Holterhoff (25:13)
OK, and the reason I ask that is because earlier in this conversation, we discussed the fact that frontend engineers and web developers feel more comfortable around the word RUM than they do, possibly, around the term observability. Am I remembering that right? Is that accurate? OK.

Todd Gardner (25:27)
Yeah. I would say so. But even within the term RUM: when we’re talking about TrackJS, when I’m marketing TrackJS, I don’t use the word RUM. It’s nowhere on the website. I don’t try and rank for that keyword. There’s nothing there. I’m doing error monitoring. I’m doing crash reporting. I’m doing error tracking, or any of the, you know, thousand little variants of those words chained together. Because that’s what they’re looking for. They’re not…

Like that customer isn’t necessarily looking to monitor every one of their users. They’re looking to see when do things blow up in production and how do I like fix it. Now, is it RUM? I would say it is RUM. It’s just that’s not the word that the customer is looking for. And I think that’s the big overall lesson of this client side observability conversation is like, that’s just not a word that the customer is looking for. So anybody who goes down that path is just fighting headwind of like they’re not necessarily.

Like that’s not where their customer is right now. And so they just have to pay this education cost upfront because like I don’t think a lot of people are searching for a solution to that problem today.

Kate Holterhoff (26:36)
I’d love to take a step back because I’d say a lot of my research at RedMonk has to do with why is the frontend so weird and how do we access this buyer because they’re just different. And what I’m hearing from you is that frontend engineers really have this “jobs to be done” mentality. And that is, how do I get the most performant web experience possible? And everything else is just unimportant. And so

What they’re going to do is search for the problem that they’re having and how to mitigate that rather than maybe following trends or building a community around it. I don’t know. mean, maybe I’m reaching here. But I guess another thing that we haven’t mentioned is the OpenTelemetry and CNCF connection, which Honeycomb has. I I mentioned mobile. So Embrace, that’s the big part of their.

pitch is their connection to OTel. So I think that’s another sort of facet of like, do we access this particular buyer persona? And it seems like open source maybe is something that gets a lot of enthusiasm from folks working in the backend, maybe doesn’t in the frontend in the same way. I mean, I know the JavaScript frameworks, there’s a lot of buzz around those, but in terms of like observability,

I’m not seeing that in the same way. OK. So yeah, tell me if I’m off base here. Would you say that frontend engineers just tend to be more problems focused? Like, we’re most interested in buying a developer tool that fixes a problem that we have right now.

Todd Gardner (28:08)
Yeah, I would say so. Most teams that I’ve interacted with are very tactical, I guess, about that. They would be like, here’s the problem, here’s the thing we’re going to use to fix it. And not necessarily about following super high-level macro trends. They also tend to very much buy into their ecosystem. So, this is a React shop, or this is a Svelte shop, or this is a whatever. And so,

just because it exists in open source, if it doesn’t exist in open source within that niche, I think it’s a lot harder to market to them. So if there was an OpenTelemetry React hook, which, I mean, maybe there is, I don’t know, I think that would have much better buy-in with that community than just having a general thing. Now, to your earlier comment about open source: there’s definitely an advantage to open source. A lot of developers are like,

they want to use those open source tools, but not exclusively. There are plenty of commercial closed source things that exist and are successful. I think the bigger problem with just the OpenTelemetry approach is that it’s still trying to solve problems in a similar way. And

the last time I looked at the OpenTelemetry source for JavaScript, which was, granted, a long time ago, and so it might have changed, it was really complicated, and it definitely had a performance impact with what it was doing. And so I would never put it in any of my code, because the data that it gathered, I don’t think it was worth the cost.

Kate Holterhoff (29:38)
Okay, and so, but maybe you can speak to this. Do you hear your customers talking about OpenTelemetry at all and do they have any desire for it? Okay.

Todd Gardner (29:46)
No. No, nobody has ever asked. Other than people who are in the OpenTelemetry space, and you, nobody has ever asked. Nobody has ever said a word about it on the frontend side. I think it is a solution looking for a problem.

Kate Holterhoff (30:03)
Okay, that’s helpful. When I have spoken with the folks from the OpenTelemetry Project, yeah, it sounds like there just aren’t a lot of folks from the frontend community that are participating in it. And so I do think you have like a chicken and an egg problem where it’s like, you know, how are you going to get frontend engineers to care about something that doesn’t care about them, right? You know?

Todd Gardner (30:23)
Right, right, you’re coming at this thing, you know, trying to add another notch to their supported environment like thing, but like the environment that they’re trying to talk to, they’re not part of that community, they’re not, they’re just trying to come in and do it.

Kate Holterhoff (30:40)
Right, right, right. So if OpenTelemetry is aspiring to become something that frontend engineers are using and excited about, it’ll probably need to be vendor-led at this point. The vendors are going to need to push it so that it does meet the needs of these users, with their very unique needs and demands and ability levels, right? We talked about the fact that they’re kind of alienated by the word observability at all. There’s going to be a lot of…

Todd Gardner (31:00)
Yeah.

Kate Holterhoff (31:06)
It’ll be difficult. We’re definitely pushing a boulder up the hill to try to make this a thing.

Todd Gardner (31:10)
Right. And there’s like some incumbents. so like why if I was looking to if I didn’t want to like purchase like a commercial monitoring tool and I wanted to use an open source thing and integrate into everything else, like Boomerang has been around for a long time and is widely open source and super compatible and no performance impacts. And like that came out of like Akamai, but like it’s very open about what it does. And a ton of vendors use Boomerang under the covers.

Why isn’t OpenTelemetry part of Boomerang? Why isn’t it an extension, why is it something separate? That’s where it kind of like feels like, why didn’t you two talk?

Kate Holterhoff (31:48)
I mean, I love that you said that. That is such a big part of my experience studying this: hearing observability people talk past frontend engineers and possible users. And yeah, in the blog post, I mentioned Charity Majors five years ago having a conversation with swyx where he was using the term observability in a way that didn’t align with the way that Honeycomb was thinking about it. And…

Todd Gardner (32:10)
Mm-hmm.

Kate Holterhoff (32:12)
And at one point in that conversation, he actually brings in Fullstory and some of these tools that frontend engineers feel comfortable with to try to see how end users are perceiving the UI. And the impression that you get is, oh, that’s not observability. And yeah, if you go on the website of Fullstory, they say, no, no, that’s not what we do either. And yet there are these parallels and there are these integrations. And so it’s like we’re meeting, but we’re not overlapping. It’s kind of oblique. And we see that we’re all kind of…

Todd Gardner (32:22)
Yeah.

Yeah.

And why

isn’t it observability? Why isn’t it another tool, another dimension of looking at it? That just reeks of, that’s not a product that they sell, so it’s not observability. To me, it should be about, does it give me better understanding into the production use of a system? That was the one-sentence description of it. And if you’re saying that anything under that umbrella isn’t observability, that just kind of reeks of, you think you’re the only vendor that sells it, and you’re trying to define it in terms of, my product is observability and their product isn’t.

Kate Holterhoff (33:19)
Right, right.

Todd Gardner (33:20)
I mean, I would call Fullstory an observability product. They might choose not to use that word, the same way I choose not to use that word. But if we’re talking hypotheticals: there’s this funny thing that I’ve done with people at conferences, talking about the ISandwich interface. Have you ever done this? So, what is a sandwich? Define a sandwich. If it is a thing between two other things, right?

Kate Holterhoff (33:39)
No, tell me.

Yeah. Yes.

Todd Gardner (33:47)
Ravioli is a sandwich. A burrito is a sandwich. A Pop-Tart is a sandwich. A dumpling is a sandwich. And just, how do you categorize these things? And to me, I think there are similarities between that and observability. If it helps me understand how my production system is running, it’s observability; under that broad definition, it’s a sandwich. Now, I might not,

you know, advertise myself as a sandwich shop if I sell ravioli. But that’s just me telling my users in a more specific way what I do. I sell error monitoring tools. I sell performance monitoring tools. I am an observability vendor, I guess. I would call myself that in an organizational kind of way. It’s just, from a marketing aspect of, who am I trying to talk to? That’s not the word that they use, so I don’t use it.

Kate Holterhoff (34:42)
I think it’s worth reiterating here that it was these observability vendors who have entered the frontend domain. So I feel like at this point, it’s okay for us as web devs to say, hey, we’re gonna talk back to this a little bit because I get it, in computer science, there’s all these terms and you don’t step on toes and it’s like, okay, this is what it means in this context, we’ve got a Wikipedia page about it, it’s well defined. But with observability,

if you’re going to talk frontend to the front of it and enter that domain and say, no, no, we’re doing client-side observability, well, suddenly it’s okay now for us to say, hold on, I thought we weren’t included in this. thought, yeah, like the Fullstorys, all these things are sort of oblique and not quite it. for the record, I just wanna say that like, we’re here now, we’re all here.

Todd Gardner (35:11)
Yeah.

Kate Holterhoff (35:28)
We joined this conversation. You invited us.

Todd Gardner (35:28)
Right, right. Right. So, yeah,

I don’t want to step on your toes. If you want to, fight out, like, what is observability and what isn’t observability in the the distributed system space. Cool. Have fun. I hope you all have a good time. But if you come into the client side and now you’re saying, hey, this thing about capturing performance metrics is observability and error tracking is observability, but somehow screen recording and analytics isn’t. What’s the difference? Like it’s

Kate Holterhoff (35:44)
Right?

These are the questions I have, Todd. This is why we’re here. I’m trying to figure this out. And I think it comes down to, how are we going to market it? Because that’s a separate question. And is there a frontend buyer? And are they even marketing to frontend engineers? Is it the SREs who are suddenly interested in what the end users want? I mean, maybe that’s a good question for you. Do you see that folks who have been the typical users of observability products suddenly, in 2025, care about the end user experience? Is that what has changed? Okay.

Todd Gardner (36:23)
No, I don’t think so. In fact, like, so I have I have three customers that that are bigger companies. through whatever kind of corporate mandates, you know, we’re we’re told that they have to use. Big observability vendor or whatever, like they they sign some sort of corporate deal. It was got to be on everything. It was going to be used on the client side, But they still use TrackJS.

and because it’s better for their team. And so like we see like we’re both on the page together and sometimes that vendor is like actually causing an error because of something they did or whatever and TrackJS just reports it. Or I have one particular customer who most of their back end issues get exposed to some sort of client side failure. And so TrackJS tends to be the canary for their whole system is that

TrackJS will go off with an alert saying, hey, you’re having a bunch of users with this issue, like an hour before their observability tool knows that anything’s happening. And I find that both a little funny, but it also hurt their overall system. By mandating that there’s one tool to rule them all, you actually hurt your overall system. your frontend team couldn’t use that system, still uses a system that was built for them, and that your own tool

hurt the overall user experience. And so I think that’s the risk of this I’ve often heard it called of I’m buying into a single pane of glass for my whole system, is that systems are very different. And depending on what layer and what the priorities are, there might not be one tool that is the best way to look at it.

Kate Holterhoff (38:00)
I think we have hit on all the big issues that I’ve been grappling with, at least. So let’s wrap it up. Before we go, how can folks hear more from you, Todd? What are your social channels? Are you speaking at all in 2025? How can folks keep up with your ongoing thoughts on frontend observability and monitoring in general for web developers?

Todd Gardner (38:04)
Ha!

Okay.

I love to rant about this sort of stuff. You’re one of the few channels that lets me openly just blast on it, because usually it’s just me screaming into the void. Mostly I’m on Bluesky lately; I’m toddhgardner.com on Bluesky. You can also find me on LinkedIn. I’m pretty easy to find: you type in Todd Gardner, that’s who’s going to show up. It’ll be my face kind of looking on some sort of colored background.

Kate Holterhoff (38:27)
Yes.

Todd Gardner (38:46)
I tend I don’t have any scheduled speaking so far this year. I will be hanging out at Open Source North, which is here in Minneapolis in May. But that’s mainly because it’s local to me, kind of sticking around home as far as travel this year. But I’ll be blogging a lot. I’m on YouTube a lot. And you can, you know, hit me up on any social media channel.

Kate Holterhoff (39:07)
Right, and you’re on Frontend Masters, correct?

Todd Gardner (39:09)
I am. So that’s bit of a side thing I work on is so I work with Mark and the Frontend Masters team. I have two training courses out there. I have the fundamental of web performance, which teaches you all about like how to track frontend things, how to think about performance, what the different metrics mean and how tactics to make them better. And then I have an older debugging course about thinking through JavaScript errors and how they work and what they mean and how to go about debugging them

Kate Holterhoff (39:34)
All right. Always a pleasure chatting with you, Todd. Again, my name is Kate Holterhoff, senior analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you are watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments. All right.

Todd Gardner (39:52)
Like and subscribe.
