A RedMonk Conversation — “Good docs are still good docs”: AX, DX, & LLMs (with Taylor Barnett-Torabi & Lyle Schemmerling)

Share via Twitter Share via Facebook Share via Linkedin Share via Reddit

In this RedMonk conversation, Taylor Barnett-Torabi , Staff Product Manager at Netlify, and Lyle Schemmerling, Senior Software Engineer at FusionAuth, discuss the impact of large language models (LLMs) on developer experience with Kate Holterhoff. They chat about the evolution of agent experience (AX), the integration of LLMs into documentation, and the challenges of ensuring that documentation remains relevant and secure in a rapidly changing landscape.

Links

 

Transcript

Kate Holterhoff (00:12)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk, and I am thrilled to be joined by two guests today. First is Taylor Barnett-Torabi, Staff Product Manager at Netlify. And my second guest is Lyle Schemmerling, Senior Software Engineer at FusionAuth. Taylor and Lyle, thanks so much for joining me on the MonkCast.

Lyle Schemmerling (00:32)
Hi, thanks for having us.

Kate Holterhoff (00:34)
This is gonna be such a good conversation here. both Netlify and FusionAuth serve developer communities. What was the a-ha moment when you realized that LLMs were fundamentally changing how developers discover and interact with your documentation? Taylor, would you start us off?

Taylor Barnett-Torabi (00:50)
Yeah, yeah, definitely. I was actually thinking about this right before this, what was like the, something is different going on here. And that was last year when I realized you could get ChatGPT to basically write a site code, package it up, and have it basically ready just to deploy straight to Netlify. It wasn’t doing all of, like, you know, we were much further along now with what that capability is.

but it was able to like, here’s a zip file that you could, and here’s your instructions that it told me how to go deploy the site to Netlify. And this was before some of the Lovable, before Bolt, before all those things started really coming on the scene. So it was like, okay, this is interesting. And we were seeing people do it too. And so we’re like, okay, there’s something new here that is going to only increase for sure.

Kate Holterhoff (01:48)
Amazing. And how about you, Lyle?

Lyle Schemmerling (01:50)
Yeah, so was just thinking about this. It was actually at GlueCon in 2023. And I remember cause we were going there and I didn’t realize that almost every single talk was about AI the whole day that we were there. And up till that point, I was still kind of on the, it’s a chat bot, it’s interesting train, but you know, mid 2023, was really when like people were kind of going, oh, wow. Okay. This is going to pick up and start to be a big thing. And then later I was,

just doing some volunteering and working with some young, kind of getting into the industry developers. And one of these guys was just showing me a conversation that he had with ChatGPT and he was building an app with it and he was talking with it in a way that I hadn’t seen anyone interact with the chatbot before because it was very conversational and loose but he was also correcting it and I was like, didn’t… You don’t really understand that whole sort of the nature of these things until you see someone would truly interact with it and that was sort of the eye-opener for me.

Kate Holterhoff (02:52)
man, I love the GlueCon reference as well because a lot of what I was first introduced to around that was at what used to be GlueCon and is now SW2Con. So I also had that experience going from booth to booth and just being like, okay, what is your your AI story? Like how are you integrating LLMs into these products that you’re here to talk about?

Okay, next question So Netlify has done a lot to pioneer idea of agent experience or AX. I’ve heard Netlify CEO and co-founder Matt speak about it at length. So I’m interested to hear how you’re defining AX these days, especially for folks who maybe haven’t heard this term. How do you define it?

Taylor Barnett-Torabi (03:32)
Yeah, so, you know, Matt’s really defining it initially came earlier this year as the like holistic experience of like using your product by like the agent using your product. know, and that really came from, first, you know, initially conversation in software development was UX. And then later, you know, Jeremiah Lee, you know, coined the term developer experience. And then this is kind of like that building on.

And one thing I will say is like there is no there is no DX without UX and there’s no AX without DX. Like all these things really build on top of each other. And I know some people think, AX is a thing, no more DX. Like what, you know, that’s out the window, old news. No, like I think it’s really like they’re all like building blocks on top of each other. And it’s really just about like what context of experience, you know, whether that be from an agent perspective, whether that’s a developer perspective.

And often at Netlify, we even split up AX kind of into different ways. There’s the agent experience of using Netlify, and that’s really what we focus right now with a lot of things that we’re focusing on, or is that? And then, you know, our customers also have, like, what is the experience of someone who’s deployed their site to Netlify? What is their customers and their, the agents of their customers’ experience of using their sites? And so we kind of break it up to kind of make it sometimes easier to talk about, but, yeah, for the most part, often we’re talking about the agent experience of Netlify lately.

Kate Holterhoff (05:06)
Got it, got it. And is that a term that you use at FusionAuth as well AX?

Lyle Schemmerling (05:11)
You know, I don’t think that I’ve actually heard us use the term AX, you know, just around the shop. We’ve been talking about AI a lot, but that specific term we haven’t brought up yet, so.

Kate Holterhoff (05:24)
Yeah, that’s totally fine. How are you characterizing, speaking to these AIs and making sure that we’re going to be talking a little bit about documentation here momentarily, that you’re able to communicate with them in a way that works well?

Lyle Schemmerling (05:37)
We don’t really see it right now as too different from the same conversations that we have when we’re talking about developers coming to our site and being able to navigate the documentation, find what they need to, and have the explicit detail in there that we need. So really, it’s been like, are we gonna do the whole click your download markdown thing? Are we gonna put the LLMs.txt up? But we’re not really talking about… it really is a separate thing. We still always treat our docs as the source of truth for an entity, whether that’s an AI or not, coming to learn how to use FusionAuth.

Kate Holterhoff (06:12)
sure thing. OK, that helps. All right, so Lyle, I was interested in FusionAuth’s blog, which our mutual friend Dan Moore shared with me. And it’s titled, How to Make Your Developer Documentation Work with LLMs. So in that, you’re mentioning that documentation sections should include enough context to be understood independently. And I think that kind of gets to what you’re describing

that docs should be accessible broadly to anyone who’s going to be visiting your site and looking for information about your product. So how do you balance this need for standalone content with avoiding redundancy? And have you had to completely restructure any of your docs to accommodate these LLMs?

Lyle Schemmerling (06:54)
we haven’t had anything to restructure yet. I think if you do, look across the site, you can actually see there’s a fair amount of redundancy just across the pages. There’s little reminders and things that we have pieces of, paragraphs and, instructions that we move from page to page. and that is just in service of sort of giving that whole context, you know, kind of on one single page.

Again, I don’t think we’re doing anything deliberately with the only goal being sort LLM optimization, but it is something that we’re constantly talking about once we do it.

Kate Holterhoff (07:24)
Yeah. and Taylor, I want to talk about how Netlify is using AX as part of the deployment process, because I’ve seen this myself. I thought it was super interesting. I mean, my experience with it is that there’s just a chat bot that comes up when you’re having issues and says, like, would you like my help? Yes, and then you’re able to actually copy and paste some text into your own.

LLMs, your vibe coding tools, whatever you’re using, and it will help you to make sure that your project is going to deploy better. I suspect that it’s broader than that. I’m probably butchering some of the details here. how are you integrating AX into Netlify in this sort of deep way to help users?

Taylor Barnett-Torabi (08:13)
Yeah, I mean, a lot of that, even backing up from that is like, we realize so many of the issues that often LLMs have when it comes to like the AX of Netlify has to do with its either operating on outdated information, old blog post, old content, you know, things like that. And so a lot of what we’re focused on is like, how can we make sure, in this non deterministic like space to give it, actually like better context.

and direction. And you know, like Lyle mentioned, we had initially explored things with the LLMs.txt and different things. Then we started looking at context files, because those were becoming really popular. The context files heavily depend on the agent. There’s a lot of different experiences. Sean Roberts at Netlify has been trying to work on how could we potentially have some standardization around how context files are written, because the experience depends

depends so much on what you’re using. And so what you’re talking about with why did it fail, which Netlify has had for a while, even before I joined. So I joined Netlify about a year ago. Why did fail exist even before that? It’s when a build fails, you have the option of getting it to look at the logs and try to give you some better direction of how can you debug this. It’s just another version of context, basically. And we recently just a couple, we’ve been continually trying to improve it.

And one of the things that was added was like a copy analysis button so that you could then take that to an agent or a chat or whatever and help you if you’re not able to immediately take the direction of what the analysis is. And again, it’s more context to give and more context that is a bit more specific and that we can control a little bit more so that people have a higher chance of success of it actually helping them out.

You know, and, and, but there’s other things, you know, so like so much of this is about the context. And so that’s why, like when we saw MCP, like servers come up, that was a really big thing for us because it was like, we really do struggle I mean, for context, context about context, um, Netlify is over a 10 year, year old company. There’s a lot of old content out there to both help us, but also to our detriment because it’s in these LLMs like data sets. And so, you know, we were really looking in context,

files was like one way of us trying to combat that of like giving them like better information to work off of but like MCP servers have helped us so much more like the Netlify MCP was one of the first that you could out there that you could like actually deploy a site or an app to like a platform but for us it’s even more than that it’s like being able to use that MCP to give more context that is up-to-date information and access the context of the agent has both for good and bad that’s a very powerful thing

and definitely something that’s evolving. even just being able to give it better direction on the Netlify primitives, how to use functions, how to use our blobs, how to use caching properly, those little things that there’s a lot of old information out there. And we see people, we’ll look at their LLM generated code and be like, ooh, they’re using the old version of whatever. they have no idea. And so the MCP server is kind of way to kind of expand that AX and that agent experience to try to help people deal with those pitfalls that do come with using LLMs for sure.

Kate Holterhoff (11:49)
That is such a good point. Yeah, Lyle, I’m less familiar. Is FusionAuth? Do they have an MCP as well?

Lyle Schemmerling (11:54)
not at the moment. just something that like I’ve been talking to people about on the last several weeks on the daily. The thing is, you know, we’re still a small shop, like we have to do things deliberately when we do do them, because we only have a handful of developers. And then the other thing too is, that it’s still kind of a tricky space and we need to be careful, especially because it’s an authentication service. So we don’t want to put something out there that’s going to let everybody see all your junk, right?

That would be really, really bad. And two, the other thing is, we’ve got a good and comprehensive API that would be relatively simple to just sort of slap into a kind of like an API wrapper, MCP server.

Taylor Barnett-Torabi (12:19)
Yeah.

Lyle Schemmerling (12:38)
We’re not sure right now how much value that on its own provides versus maybe providing like a sort of a more curated experience, a set of tools and, prompts and stuff that would allow the LLM to do what a user’s typically trying to do as opposed to just sort of give them all of the options, right, which can, I think, derail your task more than it can help a lot of the time.

Taylor Barnett-Torabi (13:00)
Totally.

Actually, and we, first iteration of playing around with creating MCP server was a lot closer to that and we realized it wasn’t a great experience. And so we actually like clawed that back and then kind of locked down like to a smaller set of tasks that we felt like were a bit more curated and that we felt like, just a better experience. I’ve definitely used MCP servers where it just felt like how is this that much better than if I just was like doing things directly with the API? You know, it just, didn’t, it wasn’t the same experience. And so a lot of the MCP servers that are better are the ones that are a lot more curated for sure.

Kate Holterhoff (13:43)
That’s really helpful. documentation is a really great example of trying to walk the line between giving as much information as you want, but also keeping in mind that there’s bad actors who want access, and you certainly don’t want to do that. Yeah, OK. super interesting. And you mentioned the idea of success. And I would be interested in hearing more about how you’re measuring this idea of success. So how do you know?

that the, I guess in Netlify’s case, that the agent experience is working the way that you want, or that maybe you can even improve it. Like, how do you measure that? And then I guess, I’m also interested in how FusionAuth is making sure that the documentation that you’re creating with LLMs in mind is doing all the necessary stuff, that it is ensuring that those developers are able to find what they want, but also that you have that discoverability that is becoming increasingly important. So maybe, Taylor, would you go first?

Taylor Barnett-Torabi (14:40)
Yeah, I mean, the big thing for us is just watching the data of how many deploys are coming from agents. I mean, that’s a big one. And also, ones that, you know, looking at whether the builds are succeeding or failing is a another thing. I mean, it’s one thing to be able to deploy from an agent. It’s another thing that the deploy is actually like a successful build, especially when like, you know, with Netlify, you could locally build something and see if it’s successful or not. And most of time,

as a predictor of whether it, you know, there’s factors that will still fail once you push it for other various infrastructure reasons, but you know, not all these agents are necessarily doing that local stuff before they actually try, you know, to put, that people are saying, yeah, go push to Netlify or whatever. And so, there’s things around that. we’re looking at just how much people are using different things, like the context files and,

accessing them, are we seeing the LLMs.txt getting, the traffic? What’s the traffic from agents themselves? Like, we can see that, you know, the traffic from ChatGPT coming in to us. and some of this is from, you know, we differentiate between, the crawls that are, these different LLM providers are crawling our stuff to bring in the data. And then there’s the actual, like,

web search type, functionality in, Claude or ChatGPT and stuff that we’re also watching too. And so it’s kind of a culmination of a bunch of those different things and seeing. But also just like we regularly are talking to users who are using these tools and finding, new areas that we want to improve and stuff. And it’s hard because it honestly changes on a week to week basis depending what agent released a new version or what, you know, new models are released and different things like that. And so it’s constantly something you’re chasing for sure.

Kate Holterhoff (16:36)
And how about you, Lyle? What sort of metrics are you using at FusionAuth to gauge success?

Lyle Schemmerling (16:41)
I…won’t list any by name just because that is mainly domain of our, we have a handful of our people in the marketing department who watch the metrics and stuff around the site religiously and so they have a bunch of like, you know, KPIs and things that they’re paying attention to, but it’s things like how many clicks would we get on the copy to markdown button, was the presence of the markdown button driving traffic to or from these things, you know, where is the traffic coming from, who we got referred to, so lot of stuff that Taylor was kind of talking about,

second ago and then there’s just little experimentation knobs, know that people are playing with constantly and it is still like a bit of a I guess a try and see, sort of experimentation right now with like, okay, well we’re gonna put this thing out. We see that it’s getting traction with, the AI community. So let’s try this method out see if it does anything, and then kind of

go from there as to whether or not to continue to pursue it further. And then, to what, Taylor also said, I think some of the best feedback we get is from our customers themselves. We’re fortunate enough to have a lot of our customers just in Slack with us. You know, they have channels that they can directly talk to support the engineering team or, anyone. But, it is nice when they pop in and they’re like, hey, I was, you know, I popped this into ChatGPT and it came back with this and is this right? And it’s like, yeah, sure. Or, sometimes is totally not, the feedback on how they’re using it and what they’re trying to do just in person is really, really valuable.

Kate Holterhoff (18:03)
Yeah, I’m thinking about when I’ve used some of these tools. Sometimes I will click the button and I’ll try it and then it doesn’t work and then I just have to like copy and paste what my prompt was to begin with. I could see that that would be, you your marketing team has their work cut out for them and trying to like figure out what’s actually working and what is folks trying to like, figure out these new tools. I mean, everybody’s workflow is changing again so rapidly.

I mean, it’s fun for kicking the tires. It’s fun for trying new things. But I will say, I kick a lot of tires these days. I’m trying all the new things. I don’t know. I’m sure These are late nights trying to suss all this out.

Taylor Barnett-Torabi (18:42)
One really good example of that that we’ve seen is like we have this one blog post from like 2000 and like, I think it’s like eight years old, you know, something like that, that we see the LLM traffic coming into it. And so it’s like, okay, we gotta update it, but how do we not update it so much that it potentially affects like the traffic coming into it? And you kinda have to really, like that’s something that I think a lot of companies are still like figuring out of like,

Kate Holterhoff (19:06)
Yeah.

Taylor Barnett-Torabi (19:12)
you know, there’s all the different new acronyms that are basically the new versions of SEO and stuff and I don’t know, it’s just kind of goofy to me right now but I know it’s very real and everything and so like us just keeping tabs on like this thing that’s old that’s getting a lot of traffic, we should make sure it at least has the most up-to-date information so that we don’t lead someone down or an agent down, a person or agent down the wrong path basically.

Kate Holterhoff (19:40)
That is huge. You know, we complain about—at RedMonk, with our language rankings, about the fact that, Stack Overflow doesn’t get as much traffic as it used to. And so, trying to get information from all the sources that we used to visit all the time, you know, suddenly, they just don’t have the same value, for the data that we’re looking for. Not to say that they’re not useful, but, things are shifting.

Taylor Barnett-Torabi (19:50)
Yeah, yeah.

Lyle Schemmerling (20:05)
And it’ll be a moving target. mean, if you know right now with a lot of these LLMs and MCP servers and some of the tools that are coming out, they can drive your browser for you and navigate to websites themselves and to the website. That’s basically indistinguishable from a human being. So, as we move from, I’m getting all of my information, typing things into the chat bot interface, which I think is still doing a pretty good job of letting people know that it’s coming from there. As the agentic movement sort of starts to take over, I think there’s gonna be a lot of gray areas in the space.

Kate Holterhoff (20:36)
and I think we’re touching on this, but I want to think about how you are future-proofing documentation. again, LLMs, they’re evolving so rapidly, month to month. How are you thinking about building documentation strategies that won’t become obsolete in six months? Is it a matter of just going back and measuring what sort of things are getting the most traffic, or is there ways that you can be, instead of reactive, like a little proactive about this.

Taylor Barnett-Torabi (21:05)
I think for us, mean, docs are still your product either way, honestly, like it’s still like the representation of the product. There are no LLMs without docs and content and stuff, like they would not exist today without it. And I don’t think that’s like gonna change a lot.

I think a lot of it we still have to just watch of how they’re going through the docs maybe and we can make improvements there but I don’t really think, I think we’re future-proofing ourselves by continuing to write docs basically and write good docs with useful context and that’s not a trivial thing. I mean, I’ve heard some people are just like, your LLM strategy should be write better docs and I’m like, that’s not as easy said as done. And so I don’t,

don’t think we’ve dramatically changed. It’s more about kind of like the features to the docs that we’re thinking a little bit differently, more in like how can docs be used in other ways? How can they plug into different things? And then bring that context to where it’s needed without always having to go search for that context, basically.

Kate Holterhoff (22:16)
Okay. Lyle, do you have any sense of ways that you’re future-proofing? whether it comes to documentation or just ensuring that all of these new AI and AI-adjacent features, tools, things That FusionAuth will position itself in order to function well in this new context, this new situation that we’re all in.

Lyle Schemmerling (22:37)
Yeah, think similar, you know, kind of the same thing. And now I don’t think our strategy for how we are writing docs has fundamentally changed. Good docs are still good docs, right? Do they tell you how to use the product? Can you figure it out on your own? Can you search through the stuff and get what you need to and, you know, walk away with just enough information to do the job that you’re trying to do and not get overwhelmed with all of this extra stuff.

stuff. And to change the technique a little bit. I’ve run into a couple of situations where I’m, because I do a lot of this myself, I will be running, you know, the LLM locally and telling it to do FusionAuth things. And, there’s some pages that just have too much on them, right? Like we, you’ve got context windows that are getting squished or it’s just, a lot of information. So I think there’s room for sort of keeping that type of thing in mind.

there’s new constraints which were already probably really there. If you had a really really long page you probably bug in some human beings out there as well. But you know sort of the AI things really bring it into focus for you and it just makes you pay attention to a couple of different things. You know like we’re gonna try and improve our search and maybe pay attention to the length and size and content of articles that we have but we’re still

still have to write good docs. We want people to be able to go to the website, whether they’re a machine or a human, and be able to figure out what they need to do what they want to do.

Kate Holterhoff (24:11)
So I’m not hearing that either of you are concerned about a future where developers never visit the actual documentation site, that direct relationship between developers, users, and the documentation, the product that these things are not for humans any longer.

Taylor Barnett-Torabi (24:28)
Yeah, I don’t think docs are going anywhere. And the funny thing is even if they were used less by humans, they would still need to exist on the public web for the LLMs to crawl, so they would still exist. I think one of the things, when I use Claude or ChatGPT and different things, I always have instructions like on my account to add source links when they give me responses because I always,

Kate Holterhoff (24:40)
Good point.

Taylor Barnett-Torabi (24:54)
almost always like love to check like where they’re actually getting the information from. But also because I like to then go dig deeper. especially if it’s a thing that I’m wanting to dig deeper on. know, there’s, I’m really asking it like the quick questions. It’s usually a little bit deeper questions and like the source links always really help me. And often that is linking out to different companies’ docs like I often find. And so I don’t think, you know, that those are still just gonna disappear. Now I don’t know if

Kate Holterhoff (24:58)
Yes.

Taylor Barnett-Torabi (25:24)
if I may, you know, an average user or, you know, wanting to look at source links, but I know a lot of people want to look at the, like, show me the rest of the doc that that came from. And, you know, often reveals, like, other useful details that maybe wasn’t mentioned in the first place, especially if I’m trying to debug something,

Lyle Schemmerling (25:41)
think we all know that one person on Facebook who doesn’t follow this rule, but really it’s, should never ever ever blindly trust an LLM and you should always, go and verify the source of the information that it’s talking about. Because I’ve had them just flat out lie to me before, which is always fun. But then a lot of times they just misread or they misinterpret. you know, so what you’re saying, I will actually explicitly say, I want sources. Give me links because I’m going to go look at the stuff myself.

a lot of the smart people, so especially in the technical product land, the people who are used to going and hunting docs for themselves are going to use it more as like an aggregator of things to go look at, then just feed me the answer and I can run away, right?

Kate Holterhoff (26:28)
I love having experts on the podcast because I get to ask these questions to pull back the veil a little bit. So if you’re willing, I would love to hear any horror stories that you have had with trying to, I guess, accommodate AX. Have you seen anything really surprising now that you need to make all of your sites, legible to these LLMs.

Taylor Barnett-Torabi (26:52)
I don’t think I have any like really bad ones. It’s just like annoying things most of the time. I don’t know. I mean, I mean, it’s totally it’s a thing that was that people I’m starting to hear are creating like unlisted docs.

Kate Holterhoff (26:57)
I would love to have just like, what? you have, can you share any?

Taylor Barnett-Torabi (27:11)
to try to correct issues that necessarily the human doesn’t need. It might already exist somewhere in the docs, but it’s just not getting processed right. for some of these different custom AI chats and stuff, I’ve heard multiple people talk about, we have this hidden doc that has this information to try to give it the context that it needs so that it does the right thing. There’s stuff like that.

Kate Holterhoff (27:23)
Hmm.

Taylor Barnett-Torabi (27:39)
that I’ve just like struggle with is like when we released a new library like a couple months ago and like the agents just were not picking it up and so then we had to creatively ask ourselves like are there creative solutions to that I mean honestly the MCP server was like the best way to help with some of that because it had to do with some of our primitives like the platform primitives that we have at Netlify I mean there’s

I don’t want to have to write context files honestly. I don’t think anybody does. Like it would be nice if it just had it all. But like we’re having to do that so that it has that latest information. It all comes down to just like the age of information most of the time for us in context. Like how can we give it better context?

Kate Holterhoff (28:26)
That really helps me out. What about you, Lyle?

Lyle Schemmerling (28:28)
Again, I don’t know that we’ve had any horror stories ourselves. It’s interesting because so you know one of the things I’m paying attention to with all of the stuff because we are a authentication and security company is You know like OWASP now has a new top ten just for LLMs and a couple of different things and it’s interesting that you’re talking about like Yeah, we’ve got secret hidden You know context files that get injected and that stuff and so it’s depending on how you do it that could be

safe, but that is literally like the number one OWASP attack vector right now is prompt injection. So you have to be careful about the things that your LLM is consuming. And so yeah, that’s the one that I guess maybe I care about more is I’m not as worried about our docs necessarily not being up to snuff. I’m worried about people doing it and then getting junk mixed in there that they didn’t

Taylor Barnett-Torabi (29:02)
Yeah.

Kate Holterhoff (29:05)
Wow.

Lyle Schemmerling (29:28)
want, whether maliciously or accidentally. And that A can come back to look on you as like, I was trying to do fusion off stuff and then you deleted all my users, what’s going on? Like, well, you But yeah, it’s just, it’s a new world of security scares that we just are not familiar with yet. So.

Kate Holterhoff (29:37)
Yeah. man, this has been so helpful. Okay, so I’m gonna wrap us up here but before we go, how can folks hear more from you? Do you have a preferred social channel and are you planning to be at any specific conferences, you know, the next couple months?

Taylor Barnett-Torabi (30:01)
You can find me @taylorbar.net on Bluesky. Also, that is my website. And right now, the only conference I have on my calendar is Monktoberfest. So I’ll be there in a few weeks.

Kate Holterhoff (30:14)
I love it.

I will see you there.

Lyle Schemmerling (30:16)
Thanks. I know FusionAuth is planning on being at some conferences up here in the near future. I’m not sure which ones they are. I was just at Gamescom in Germany a couple weeks back, which was crazy. Like, it was huge and fun. And as far as contacting us, I’m Lyle.Schemmerling or you can probably just do [email protected]. And then going to our website, blog or LinkedIn like that’s where a lot of stuff happens. I’m not the most active person on social media. My Facebook is mostly full of dog frisbee photos so like…

Kate Holterhoff (30:48)
There it is. No, that’s reasonable. Okay, amazing. Well, thank you so much for joining me, Lyle and Taylor. Again, my name is Kate Holterhoff, senior analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you’re watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments.

No Comments

Leave a Reply

Your email address will not be published. Required fields are marked *