RMC: Making Docs Better for AI and Humans (with Jennifer Marandola)

Jennifer Marandola, Technical Writing Manager at Cribl, joins RedMonk’s Kelly Fitzpatrick for a discussion on the evolving role of technical writing in an era of AI copilots and assistants. She shares insights on how her team collaborates with Cribl’s AI team to enhance documentation, making it more accessible and effective for both human users and AI tools. The discussion also covers the changing nature of user queries, the importance of clear documentation, and advice for aspiring technical writers navigating this new landscape.

Transcript

Kelly Fitzpatrick (00:12)
Hello and welcome to this episode of the MonkCast on AI and technical writing. My name is Kelly Fitzpatrick, senior analyst at RedMonk and with me today is Jennifer Marandola, Technical Writing Manager at Cribl. Jenn, thank you so much for joining me on the MonkCast.

Jenn Marandola (00:27)
Thanks for having me.

Kelly Fitzpatrick (00:28)
And to start off, tell us a little bit about who you are and what you do. And can I just say, I’m always excited to have anyone who does technical writing on the show.

Jenn Marandola (00:36)
As you said, I’m Jenn Marandola and I run the documentation team at Cribl. We have a team of nine writers plus myself, and we are responsible for all of the documentation that you see on docs.cribl.io. We are also part of the product team, and we work closely with our dev partners. Any of the tooltips and pop-ups and help you see in the product, most of that also goes through my team. So we’re pretty busy.

Kelly Fitzpatrick (01:00)
Yeah, and Cribl has been hiring a lot lately. I think last count, you’re up to something like 800 folks at Cribl.

Jenn Marandola (01:07)
Yes, we are. I’ve been at Cribl for a little more than two and a half years. When I started here, we had four writers. And now we’ve doubled in size in two years, and you know, that’s every team at Cribl right now.

Kelly Fitzpatrick (01:19)
So your team is doubling, can you say a little bit more about the kind of docs and content that fall under your purview?

Jenn Marandola (01:26)
Yeah, so this is mostly the kind of thing you’d expect for using our products. So “how to” docs, concepts, and as I said, we do the in-product documentation too. Plus a lot of reference documentation, everything you need to know to use Cribl products, the whole suite. So we have four products plus, and you know, there’s a lot to cover there.

Kelly Fitzpatrick (01:46)
I like the four products plus.

Jenn Marandola (01:49)
Yeah, I mean, Cribl Copilot is a product, but it’s a little different from our other products.

Kelly Fitzpatrick (01:53)
And speaking of Cribl Copilot, I was at CriblCon in June (2024) in Las Vegas when Cribl Copilot was announced. And one of the features is to use Cribl Copilot to access information in the documentation. And I think the actual tagline, if you go to the Cribl Copilot part of the Cribl website, is “Let Copilot do the book work for you, search thousands of pages of content in seconds and find exactly what you need to get the job done,” which sounds delightful. But also, there immediately we have this intersection of what your team does, so documentation of all kinds, and Cribl Copilot. So we’ve got our AI and technical documentation.

Tell me about that and what role has your team played in supporting this intersection of AI and documentation?

Jenn Marandola (02:37)
I mean, it’s been really interesting for us. This kind of started as, hey, we’ve got this Copilot, can we index your documentation? So, we use a static site generator called Hugo to create our docs. We author them in Markdown. We store them in a Git repo. It’s pretty basic, pretty simple. And the AI team was like, hey, how do we index this stuff? So we kind of explained how the repo is laid out.

You know, for some of our products, we do back versions. So it’s kind of important to know what’s new versus what’s old content. And they just sort of had at it. They indexed it and Copilot started answering questions. If you’ve worked with an AI today, you realize that they’re great for some things. And then every once in a while they just come to you with something that sounds plausible, but really, when you look into it, it’s just not true.

So the AI team will tweak things on their side. They will, and I don’t pretend to understand this, but they will do some work with their algorithms and try to get those answers better. But after a while, we were like, hey, you know, we could also be doing something on our side that is making it really hard for you to learn. And so maybe we can work together and figure this problem out. So now I think we’re in that stage of trying to figure out how we can, from a docs side, make things better for the AI, in addition to what the AI team already does.
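Cribl hasn’t published how Copilot’s indexing actually works, but the workflow Jenn describes, Markdown files in a Git repo fed to a retrieval system, can be sketched minimally. The function, sample page, and URL below are hypothetical illustrations, not Cribl’s implementation; the sketch just splits a Markdown page into per-heading chunks, each carrying its source URL so an assistant can cite where an answer came from:

```python
import re

def chunk_markdown(text, url):
    """Split a Markdown doc into per-heading chunks for retrieval indexing.

    Each chunk keeps the source URL so an assistant can generate a
    "here's where we got this information" link.
    """
    chunks = []
    current_heading = "Introduction"
    current_lines = []
    for line in text.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            # Flush the section collected so far before starting a new one.
            if current_lines:
                chunks.append({"url": url, "heading": current_heading,
                               "body": "\n".join(current_lines).strip()})
            current_heading = m.group(2)
            current_lines = []
        else:
            current_lines.append(line)
    if current_lines:
        chunks.append({"url": url, "heading": current_heading,
                       "body": "\n".join(current_lines).strip()})
    return chunks

doc = """# Sources
Configure where data comes in.

## Syslog
Listens on a port for syslog events.
"""
for c in chunk_markdown(doc, "https://docs.example.com/stream/sources"):
    print(c["heading"], "->", len(c["body"]), "chars")
```

A real pipeline would also attach the product name and doc version to each chunk, which is exactly the context Jenn notes the AI can’t infer from the navigation on its own.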

Kelly Fitzpatrick (03:49)
In the documentarian community more broadly, so think of Write the Docs, and folks who are technical writers, people have been talking about how can you make documentation that is not only human readable, but also more easily readable by all these other kind of tools. What has your team learned in terms of things that you can do to better enable, say, Cribl Copilot?

Jenn Marandola (04:15)
When you really think about it, a lot of the things that are good for Copilot are good for people. So we’re also at Cribl doing an accessibility initiative. We’re trying to work on making our products and documentation more accessible. And when you think about that, so for example, one of the first things we learned about the Copilot is that it doesn’t really know what our navigation looks like. Again, we have a static site generator. We kind of lay out that navigation in a separate file and Hugo just pulls it in.

Copilot doesn’t really read that. It doesn’t understand the file structure really. It knows the URL so that it can generate a, “here’s where we got this information” kind of link. But it doesn’t know anything about navigation. And this was hard for us to understand as a team. We’re like, well, it’s only for (Cribl) Stream because it’s in the Stream document and you should just know that. But maybe we didn’t within the topic actually say, hey, this is for Cribl Stream.

And so we’re learning that humans are really good at inferring information, right? When we give them a topic, they know where they are in the navigation. They can see all the things, you know, they’re very visual. And so they will infer like, yeah, okay, I know this is about Cribl Stream, because I see I’m in the Cribl Stream docs. Your AI won’t do that. And to be honest, we’re also learning.

It’s probably a bad assumption that all humans will do that as well. So again, what’s good for the AI is good for humans in many cases. Another instance is images. We rely on graphics to convey a lot of information, especially flow-type information. And if you have been a technical writer and someone tells you you need to provide a long description for your graphic, this is a groan-worthy moment. Like, no one wants to do this. So we don’t do a great job of it all the time. Sometimes we just rely on that visual. The AI does not really interpret our images, so that information is lost to it. And it’s also lost to a sight-impaired person. So a lot of things that are good for AI are also good for humans.
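One concrete way a docs team could act on this, assuming nothing about Cribl’s actual tooling, is a small lint that flags Markdown images with empty or throwaway alt text before they ship. The page content and file names below are invented for illustration:

```python
import re

# Matches Markdown image syntax: ![alt text](path)
IMAGE = re.compile(r"!\[(?P<alt>[^\]]*)\]\((?P<src>[^)]+)\)")

def missing_alt(markdown):
    """Return image paths whose alt text is empty or a throwaway word.

    Real alt text helps both screen readers and an AI assistant that
    can't interpret the image itself.
    """
    bad = []
    for m in IMAGE.finditer(markdown):
        alt = m.group("alt").strip()
        if not alt or alt.lower() in {"image", "screenshot", "diagram"}:
            bad.append(m.group("src"))
    return bad

page = """To route data, connect a Source to a Destination.

![](routing-flow.png)
![Diagram](pipeline.png)
![Stream routing UI showing a Source connected to two Destinations](routes.png)
"""
print(missing_alt(page))  # flags the first two images only
```

The same check could run in CI on the docs repo, so the “groan-worthy” long description gets written while the page is still in review rather than after an AI (or a screen-reader user) hits the gap.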

Kelly Fitzpatrick (06:04)
That moment of “you cannot just rely on the visual” is important, especially for accessibility, because there’s been a lot said about alt text and making sure people use it. And I’m here for maybe the whole AI craze helping us get more accessible documentation, if that’s what it takes to get more accessible documentation.

Jenn Marandola (06:24)
The great thing about the AI is it doesn’t have any feelings, right? It is flat out reading every word we wrote and giving you the best answer it thinks it can come up with. So it’s really hard to argue with it. With people, we tend to be like, they just weren’t looking right, or maybe they didn’t ask the right question, or whatever. With the AI, there is no “I was being lazy and I just didn’t want to look for it.” There is no “I just didn’t do it.” It’s doing whatever it can with the information we put in front of it. But it’s also very literal. And so you really do learn quickly what it does and doesn’t do very well.

Kelly Fitzpatrick (07:00)
I love the characterization of AI as, well, it has no feelings, so you can’t really be like, well, the AI didn’t read the documentation right. That’s not how that works.

Jenn Marandola (07:10)
Yeah, yeah, doc writers love to say, well, you didn’t read the whole manual and that’s on you, right? But you really can’t say that to an AI. It definitely is the only thing I can guarantee right now reads every word of our documentation. But it also is very literal. One of the things we’ve also noticed is that it has hallucinations. I think most people, again, if you’ve worked with an AI, you realize that sometimes it just makes up an answer and you sort of scratch your head and go, where does that come from?

The interesting thing we found since working with our AI is that sometimes it hallucinates very factual concrete things. For example, a function name. So it describes a function we have, but it gives it a different name and we don’t really know why. A couple of weeks ago, we had one where it hallucinated a port number for our Cribl Lake destination. It’s not in the docs anywhere that that port number is associated with this destination.

So we really start to look at it, like, where does this appear? Where do we talk about this destination? Is there a connection there somewhere? And actually, just last week at our company kickoff, I was talking to one of the AI engineers and going, can we selectively remove content and see if that changes the answer, versus adding content and not knowing where? Because I really want to get to the root of why it makes up an answer for something that’s so clearly documented. For things that are more workflow-oriented or very complex, you can kind of understand that it has limited information. But for something like a function, when we have a function reference with an entry, you really scratch your head and go, why? Why did it decide, I’m going to just make something up?

Kelly Fitzpatrick (08:39)
The science fiction answer is the AI does have thoughts and feelings and it didn’t like the name of the function. So it just came up with a new one. But in the real world, you know, probably not. But I could see that being a mystery that you really would want to uncover.

Jenn Marandola (08:52)
The other thing that throws people is it will serve this up to you with very high confidence. You know, this is your answer. And that is a real struggle for humans, because we’re used to either hedging our bets or saying we aren’t a hundred percent confident in our answer. So if you ask me a question and I give you an answer, but I’m not a hundred percent sure, I’m probably going to say that to you. “Hey, Kelly, this is what I think it is, but I’m not really sure.” Our AI right now, one of its limitations is that it really just doesn’t do that. And so it will very eagerly bring you back an answer.

Kelly Fitzpatrick (09:21)
So another question I have for you, you get to see the queries coming in, or you can see the queries coming in. And one thing I’m really curious about is how is AI and things like a copilot changing the way that people are trying to access information?

Jenn Marandola (09:35)
So it’s interesting because at the same time the AI is learning, it’s sort of changing us. If you just do a Google search right now, you’ll start typing and it will pop up questions for you. I don’t know about you, but I’ve started to be like, yeah, that’s close to the question I want. And I’ll just pick that answer. And so, with Copilot, we can see the queries that are coming in, which is really great for technical writers, because one of the things we always wonder about is what do people really want to know? Like, what is the question they really want to ask? And now we can see, because people will ask the actual question they want to know. So that’s really exciting. But there’s also this whole thing where even in our on-page search, which is not AI driven, I see users typing in fewer keyword searches and more like, “how do I blah,” you know? So that’s really interesting too, to see that human behavior is also changing.

I think pretty soon people will more naturally just type in their question. They won’t really think too much about it. I know people in my life who pride themselves on being able to come up with just the right bunch of keywords to get the answer they want. This is a real art. And I think the AI is probably gonna take some of their thunder, because now you can just ask the question. And then if you take something like Gemini, where you can follow up and refine the question, it teaches us to ask better questions too.

Kelly Fitzpatrick (10:49)
I almost wonder if I’m lazy, because I’m from the keyword in the search engine generation. And I don’t want to put together an entire question just to get the thing that I want. But to your point, I’m being trained to do that by the tools where I heretofore have been able to just kind of do my keyword search.

Jenn Marandola (11:06)
And when it prompts you with the question and you look at the question and go, yeah, that’s kind of what I want to know, it’s really easy to just click on that question and there you go. And you probably are getting a better answer at that point, because we can understand what kind of thing it is, whether it’s a “how to” or “tell me about” or “I don’t understand what is blah, blah, blah.” That’s a very different question than how do I do something. And so that alone, I’m not really sure AI takes that into account right now.

But for us, looking at the questions definitely helps us understand maybe we need more contextual information, because people are asking what is this and what is that, versus, you know, again, “how to” information where they’re asking about specific workflows and queries.
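The intent split Jenn describes, “what is” versus “how do I,” can be approximated with a rough heuristic over incoming search queries. The bucket names and sample queries below are hypothetical, just to show the shape of the analysis a docs team could run on its query logs:

```python
def classify_query(q):
    """Sort an incoming doc-search query into a rough intent bucket,
    so writers can see whether users need conceptual or task docs."""
    q = q.lower().strip().rstrip("?")
    if q.startswith(("how do i", "how to", "how can i")):
        return "task"       # wants steps: point at how-to docs
    if q.startswith(("what is", "what are", "tell me about")):
        return "concept"    # wants context: point at conceptual docs
    if len(q.split()) <= 3:
        return "keyword"    # old-style keyword search
    return "other"

for q in ["pipeline", "What is a Cribl Pack?", "how do I route data to S3"]:
    print(q, "->", classify_query(q))
```

Even a crude bucketing like this, tallied over time, would show the shift Jenn mentions from keyword searches toward full natural-language questions, and which doc types those questions call for.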

Kelly Fitzpatrick (11:46)
AI helping tech writers and just people asking questions do better.

Jenn Marandola (11:51)
I think for tech writers, a lot of people do ask me, like, are you worried about AI taking your job? And the answer is no, not right now. I mean, it’s got a long way to go. But what it does do is let me do the parts of my job that really require my expertise. Even things like, my background is not in English. I was an aeronautical engineering major when I went to college. So I’m not real great with punctuation and grammar sometimes. So before, I would write something, not be real happy with it, think it was wrong, and I’d send that over to another writer and be like, hey, can you take a look at this and tell me what you think? And now I’ll go ask Gemini, and it’ll probably give me a bad answer, but I’ll ask it again and tweak it a little bit until I’m happy with what I have. So we can definitely use it as a tool. It’s really just making our jobs easier all the way around: knowing what people want and being able to use it for simple tasks that really don’t require more than what it can give.

Kelly Fitzpatrick (12:42)
One thing that caught my attention in your earlier description of how your team and the AI team interact is that you actually interact. So you have a company of 800 people, and these two teams have figured out that they need to speak to each other, and that doing that is of great benefit to all. Tell me about that and the relationship that your team has with the AI team.

Jenn Marandola (13:03)
I think we’re still in the early stages of that. But again, as I told you before, like I think for a very long time, the AI team was kind of going it alone and trying to figure out how it could make the answers better. Really a lot of our current interactions started from trying to improve the quality of the answers.

And I think at that point, we were sort of invited into the conversation. They were thinking like, you guys can help us find the actual answer so we can measure the quality and test against it. And in the process of doing that, when we were seeing these answers, we could say like, hey, maybe there’s something we can change. Like maybe we need to mention the product name. Maybe we need to explain links better or something like that. Because again, it’s very literal.

I’m dating myself here, but if your kids read Amelia Bedelia when they were kids, you know, she does this very literal thing, like, make French toast, and she has toast with a little French flag in it. It’s kind of like that. Or if you’ve ever done a home improvement project with someone who knows tools really well and you don’t: okay, can you go get me this chisel? And you run over to the toolbox and you’re like, I don’t know what a chisel is. So, like, I’ll grab these three things and hopefully it was one of them. AI will just grab the one that it’s most confident in and run back with it. And so we’re trying to really work with them to understand, as best they can understand, why the AI comes up with these things. So I think, putting our heads together, hopefully we’re getting better results.

Kelly Fitzpatrick (14:22)
Another question: if someone was just starting out in the tech writing field in 2025 with all of these new kind of considerations and technologies that can be very useful, what type of advice would you have for them?

Jenn Marandola (14:36)
I think it’s really just to think outside the box and don’t just assume. I think tech writers are creatures of habit, right? Like, we’ve done certain things the same way. I mean, I think you and I, when we talked before, we talked about all the time and effort we put into search engine optimization. I don’t know how long that lasts at this point, right? I mean, we have talked about, can metadata help AI?

And I’m also a little wary of, like, I don’t want to create yet another type of SEO I have to manage. So we’re hoping we don’t get to that point. But it’s definitely something to think about. Learning this craft is one thing, but also keeping an open mind to the fact that the landscape has completely changed. In fact, I did a blog post for Cribl, I don’t know exactly when, and it was just my thoughts on when I was a younger tech writer and we were making the transition from these very narrative, very long, lengthy docs, where we kind of expected you to read every word and know everything we said, to, when we got into DITA, more task-based docs, because people just want to know how to do it. They don’t want all this information.

It took a long time for writers to shift. Mark Baker, the author of Every Page is Page One, was saying people are going to search and they’re going to land in the middle of your document, and you cannot assume they are going to use that table of contents. So we already had this navigation problem that we’re talking about with the AI, but it was okay. We learned to do some things and, you know, get around it. But this is another one of those mind shifts, where what was working before may not be what we need to keep doing in the future.

One of the great things about being at Cribl right now is we do have this AI team, and these guys and gals are fantastic. They’re so enthusiastic about what they do and so excited to get this done. So it’s really nice to be able to work with people like that, that just really love what they do and can help explain that to us. So if you can find that connection, you will 100% benefit from it, but don’t be afraid of it, for sure.

Kelly Fitzpatrick (16:33)
Think outside the box and find some good coworkers.

Jenn Marandola (16:36)
That’s what you should do, right? Find the coworkers that are thinking outside the box. And I do think AI forces people to think outside the box, because it just doesn’t follow our rules. And we need to keep on top of that.

Kelly Fitzpatrick (16:51)
So Jenn, I notice in your background and people listening to audio are not gonna be able to see this, but is that a painting of a goat?

[A stylized painting of a goat hangs on a wall. Various goat and Cribl memorabilia sit on shelves surrounding the painting. Also in view: figurines of Inigo Montoya from The Princess Bride (1987) and Ludo from Labyrinth (1986).]

Jenn Marandola (16:58)
It’s a painting of a goat. His name is Vincent Van Goat. So you can see, this is my little, mostly Cribl, shrine behind me. We are very big on the goats here at Cribl. So I collect little bits from every kickoff. Now people know about the goats, so people will buy me goats, like, hey, I found this goat thing for you. I have quite a collection of goat things going on back there.

Kelly Fitzpatrick (17:20)
Cribl Goat collection. I love it.

Well, Jenn, we are about out of time, but before we go, how can folks hear more from you aside from actually going and looking at the Cribl documentation? Because that’s technically partly from you?

Jenn Marandola (17:32)
I would say look for me on LinkedIn, right? That’s probably the place I put the most information like this. You can also come over to the Cribl Slack community. I am Jenny from the Docs on the Slack community, so always happy to interact with people there. And you may see a “Jenny from the Docs” X feed or something like that in the not-so-distant future.

Kelly Fitzpatrick (17:51)
Well, many thanks to Jenn for a great conversation. Again, my name is Kelly Fitzpatrick with RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. And if you’re watching us on YouTube, please like, subscribe, and engage with us in the comments.

Jenn Marandola (18:06)
Thanks so much, Kelly.
