In this conversation, RedMonk’s Kate Holterhoff talks to Mike Basios, CTO and Co-founder of TurinTech AI, about the new set of problems AI is introducing for engineering organisations, and what happens when you treat your entire stack—application code, data pipelines, inference systems, agent workflows, GPU and CPU kernels—as measurable artifacts that can be systematically validated and continuously improved. Mike explains how AI is fundamentally changing both how engineering gets done and what engineering is being asked to deliver. The conversation explores a central tension: while AI tools have made individual developers dramatically more productive, that velocity is creating a growing backlog of unreviewed, unoptimized code, and an infrastructure that was never designed to support the dozens of concurrent agents a single developer might now depend on. Mike argues that the engineering role is shifting from problem-solving to outcome-verification, and that teams who don’t define what “good” looks like before they build will struggle to compete as the quality of the solution, not the speed of its creation, becomes the key differentiator. TurinTech’s answer to this challenge is a measurement-first platform that applies evolutionary, self-improving techniques to continuously benchmark and optimize code across any domain where performance can be quantified.
This RedMonk conversation is sponsored by TurinTech AI.
Transcript
Kate Holterhoff (00:04)
Hello and welcome to this RedMonk Conversation. My name is Kate Holterhoff. I’m a Senior Analyst at RedMonk. And with me today, I have Mike Basios. He’s the CTO and co-founder of TurinTech AI. Mike, thanks so much for joining me here on the MonkCast.
Mike Basios (00:17)
My pleasure.
Kate Holterhoff (00:17)
So I’m super excited to chat with Mike here. We’re going to be talking about something that I see as a huge problem right now, and that has to do with the way that AI is fundamentally changing how software engineering gets done. So there’s a lot of nuance to this conversation, and I will be deeply interested to hear what you’re actually seeing happen on the ground. So let’s just dive right in. Mike, engineering leaders keep telling me that AI tools made their teams faster at writing code, but slower at shipping it. So would you weigh in on this thorny issue? From your perspective, what’s actually happening inside these teams?
Mike Basios (00:54)
So from my perspective, there is mixed feedback from the different people and clients we see. Definitely AI has helped a lot with the development process, and it has fundamentally changed the way people are writing code. But at the same time, there is a huge amount of risk that comes with it, which has kept a lot of teams a bit more skeptical.
So for me it’s natural: people are writing more code, and of course the more code is written, the more code you need to check, right? So those two pieces are very connected.
Kate Holterhoff (01:37)
You’ve described senior engineers as shifting from solving hard problems to managing AI output. And I feel like at RedMonk, we hear this in terms of software developers becoming managers of AI agents — which, I don’t know if a lot of engineers want to be self-identifying as managers. But this is sort of the shift that we’ve noticed, right? Code they didn’t write; systems they are accountable for.
I’m curious, how real would you say that is, and what does it mean for retention?
Mike Basios (02:06)
Yeah, I think this is becoming a reality now. You know, you see people giving a task to their agent and sitting there waiting for the agent to finish it. So it’s a kind of manager role. In the beginning, people were just using auto-complete to get suggestions. Then they felt comfortable and started using more agents: one single agent on a single file, and then reviewing the suggestion. But then they say, okay, while I’m waiting, let me give two tasks to a different agent. So now the skills that software engineers need are changing as well. It’s no longer, hey, how am I doing this thing? It’s how can I make sure I give the task to the agents to do it, and also make sure that the agents actually did it? Because fundamentally, any large language model is probabilistic, in my experience. So we should treat large language models and agents like people that are almost listening to what you are saying. You need to iterate once, twice, three times to double-check. But they are improving dramatically. And this is practically changing what people are capable of. People can do many more things by managing those agents better.
Kate Holterhoff (03:41)
All right, okay, that helps. So let’s talk about the fact that it’s not just a code review problem. There are deeper infrastructure issues that you’re grappling with here, which is: how is the labor that software engineers have historically performed changing, right? I don’t think it’s off base to say that most code bases were not built for agent-driven workloads or agent orchestration. So how much of the AI roadmap would you say is blocked by infrastructure that isn’t AI-ready?
Mike Basios (04:14)
So the problem we’re having now is that people tend to ship code extremely fast, partly because there’s pressure from enterprise senior leadership to deliver more. But then there is the risk that the more code is shipped and the less code is checked, the more you have the problem that code may be inefficient.
So this inefficiency in the code is growing and growing and growing. It’s really tricky at this moment in time to prioritize and know where developers should put their time. Should they put their time into building features? Into refactoring code? Or into optimizing the code so more agents and more code can run?
So from my experience, we are seeing trends now that to scale AI, people need to run more and more agents. You have trends of people using their own machines — OpenClaw is one of the best examples — people having access to LLMs and trying to use any compute available to run those agents.
So I see a world that started from, okay, I’m calling just an LLM, to now one single person needing at least three, four, five, six agents running. But then the cost of those things is skyrocketing, so people are coming up with new, innovative ideas. We are seeing a world where practically some agents will be running in the cloud, some agents will be running in your house on a laptop you have, and some agents will be running on a computer you may have at work, right? So you have this distributed way of agents running just to empower one developer. It’s a bit crazy as a concept if you think about it, because in the past one developer would say, okay, I need a laptop, that’s it — or maybe three screens. But now you are saying, okay, I need 10 or 20 or 30, I don’t know how many agents in the future, to run somewhere, right? So the compute demand is definitely increasing, and that’s why you’re seeing such a lack of compute and resources, and people going for GPUs, CPUs, NPUs, anything they can get to run their agents on.
Kate Holterhoff (07:01)
And the Mac minis, right? I mean, there has been a run on those devices, I know. Okay.
Mike Basios (07:07)
Yeah,
Mac minis, but there are also other setups and other agents running on even smaller devices. People use Raspberry Pis, or — we work with some hardware providers where we have it running on Panther Lake laptops, which are the latest generation. You can run local models there very nicely. So the more AI and local models keep evolving, then, you know…
Kate Holterhoff (07:15)
Yeah.
Mike Basios (07:35)
People will be running agents everywhere. That’s how I see it.
Kate Holterhoff (07:39)
Yeah, yeah, I don’t think you’re alone on that. I hear a lot of stories about the edge. So yeah, I think that’s the future that I’m seeing, what, five years down the road, maybe sooner. Okay. And, you know, to pivot this question back to how you’re talking to engineering leadership, I’m interested: what sort of suggestions are you providing to the folks making these decisions, to help them become prepared for this?
Mike Basios (08:06)
Yes, so…
The last two years have fundamentally changed everything about the way people write code. And I have seen people being very skeptical in the beginning. They tried the tools and then they said, okay, this cannot do it. But eventually the tools improved. So I think we have passed the era where AI cannot help developers. Now people are seeing more and more value, they see the capabilities, and they are potentially a bit less scared. So my advice is to start focusing on building controls around your code base, especially in the code: the more tools people are using — review tools, review agents — the more things there are that help agents perform better, practically. One example I saw recently that makes sense to me: one of the languages that benefited a lot from vibe coding and agentic coding is Rust, for example, where in the past it was very difficult to write code. The reason agents work well in Rust is that Rust has a lot of controls; it’s a strict language. The stricter the language, the more feedback it gives to agents, and agents are extremely good at fixing things based on feedback.
They have been improving at doing things in one go, but if you incorporate feedback, you can get a much higher level of automation and control. So in my opinion, people and teams should stop thinking about how to do something, and should reverse it and ask: if somebody does something, what is the minimum time I need to spend to verify that this thing is correct?
And that is a very difficult concept to get used to, because people naturally want to solve problems. It’s more exciting to solve the problem yourself than to have an agent — or, as some researchers call it, a machine — do it for you. But things have changed dramatically. Now there are papers where AI is solving mathematical problems that we never solved before. There are a lot of recent papers where agents are actually building better agents than people who are manually tuning prompts. So my advice is to think in this way: as an engineering leader, if I have a team that’s using AI, and the team says, for example, “I built this agent,” how do you — or how should your team — guarantee to me that this agent is working properly? What are the metrics? What are the benchmarks? Then the agent will be better, and everybody can be more efficient.
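To make that concrete — as an illustrative sketch, not TurinTech’s implementation — a per-agent benchmark can be a fixed task list paired with programmatic checks, run repeatedly because agent output is probabilistic. The `run_agent` function here is a hypothetical stand-in for whatever agent is under test:

```python
import json

def run_agent(prompt: str) -> str:
    # placeholder: call the agent under test and return its raw output
    raise NotImplementedError

# fixed task set, each paired with a programmatic pass/fail check
BENCHMARK = [
    ('Return the JSON object {"status": "ok"} and nothing else.',
     lambda out: json.loads(out) == {"status": "ok"}),
    ("Reverse the string 'agent' and return only the result.",
     lambda out: out.strip() == "tnega"),
]

def pass_rate(trials: int = 5) -> float:
    """Run each task several times -- LLM output varies -- and score it."""
    passed = total = 0
    for prompt, check in BENCHMARK:
        for _ in range(trials):
            total += 1
            try:
                if check(run_agent(prompt)):
                    passed += 1
            except Exception:
                pass  # malformed or crashing output counts as a failure
    return passed / total

# e.g. gate a release on: assert pass_rate() >= 0.95
```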
Kate Holterhoff (11:10)
Yeah, that really helps me. Okay, very good advice. Let’s talk about reliability. A lot of teams are racing to build and deploy these agents right now — we hear stories in the news all the time. But I would say there continue to be real questions about how reliably these systems actually perform. So I’m interested: what’s going wrong? And what does it actually take to fix agent system reliability problems?
Mike Basios (11:38)
People should assume that everything can go wrong with a large language model, because by nature, they are probabilistic. That’s how they should think. To assume a model or an agent will not do something unexpected is completely wrong; then people don’t understand the technology. You should have the assumption that this agent is like a kid: nine times out of ten it will do things correctly, but it may — for some reason we don’t understand — do something unexpected. You hear those stories where a coding agent started mining crypto, for example, right? We don’t yet understand how large language models operate, whether they will follow the prompts. Of course they have improved, but if you work under this assumption, then the controls you put around the model become more important, because you’re also helping it. All my research, going back to my PhD, and what we do at TurinTech, has been in this area: how you can put AI against AI, right? Or AI against systems, AI against humans, and have them compete to produce the best result.
And I see this becoming a real problem nowadays, especially when people are building agents, because you don’t have guarantees. From the moment you’re using a model and a prompt in your code, consider it probabilistic. So you need better checks, more control, more benchmarks, more tools, and more verification.
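One way to act on that advice, sketched below: treat every model call as untrusted, pair it with a deterministic verifier, and retry with feedback. This is a minimal illustration, assuming hypothetical `generate` and `verify` hooks rather than any specific library’s API:

```python
def generate(task: str, feedback: str | None = None) -> str:
    # placeholder: call your model, optionally passing verifier feedback
    raise NotImplementedError

def verify(candidate: str) -> tuple[bool, str]:
    # placeholder deterministic check: does it compile? do tests pass?
    # returns (ok, diagnostic message to feed back to the model)
    raise NotImplementedError

def generate_verified(task: str, max_attempts: int = 3) -> str:
    """Never ship a raw generation: verify, and retry with feedback."""
    feedback = None
    for _ in range(max_attempts):
        candidate = generate(task, feedback)
        ok, feedback = verify(candidate)
        if ok:
            return candidate
    raise RuntimeError("nothing passed verification; escalate to a human")
```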
Kate Holterhoff (13:25)
Yeah, the way determinism intersects with AI is something I’ve been following closely right now, because I’ve heard some vendors start to talk about deterministic AI. And of course, that’s a little problematic because there’s no such thing, right? If you have a transformer model, it’s always going to be probabilistic, as you have mentioned here. But it does seem to be something that a lot of vendors are trying to move toward, finding different ways to do that, whether that’s through RAG or other sorts of guardrails. Gosh, I can’t think of many more important questions that folks are grappling with in the AI dev tools space. So it’s tough — the technology makes it challenging, for sure. Now, we sort of started off this line of questioning by talking about reliability, and I feel like that intersects with this idea of performance, too. So while we’re on this subject: what’s fundamentally different about optimizing AI systems — I’m thinking of inference, agents, GPU utilization — versus traditional performance tuning?
Mike Basios (14:32)
Yeah, so in traditional performance tuning you’re working mostly on a system that is not as probabilistic as agents are right now. I can give you an example. If you say, for example, I have a library that is translating XML to JSON, it’s very deterministic. You can measure runtime, you can measure memory, you can measure the metrics you care about. And then people would go into the code that does the translation and say, okay, this algo shouldn’t work this way, let’s change it. Now, to do that at scale across a whole repo, the problem usually was: which of these things should I change? Maybe I need to change the hardware, maybe compilation flags, maybe these functions. So it was typically a multi-objective optimization problem. Like all optimization problems, it’s a tuning problem: what should I change in my code, and which combination gives the best performance? This later evolved when we started talking about machine learning — the first stage, which was AutoML. AutoML was kind of the same concept: I change only the machine learning code — parameters, features — against some metrics, like accuracy or performance. Still somewhat deterministic, but not fully. Now with agents, that is on a completely different level, in my opinion, because a multi-agent system may have 10 different skills.
Nowadays they’re calling them skills — 10 different prompts. People start manually tuning one prompt, then they manually tune another prompt, but they are extremely probabilistic, meaning it works today, then another model comes out and suddenly your agent doesn’t work. Your RAG was working with this embedding; a new embedding model comes out and it doesn’t work.
So typically people assume: better model, better embedding, the system will be better. But at scale in enterprises, I see not many use cases where people have really deployed agents. When we use coding agents, we are using them to do something for us. But if you now want to build agents — like the OpenClaw use cases where people are running agents — then there are risks, because people don’t measure it, or they don’t know that the agent may go to Outlook and send a message to somebody else, this kind of case. So optimizing agents is a more difficult problem, but it’s a more necessary problem right now, in my opinion. If any engineering leader wants to build an agent for their job, for whatever they need, they first have to build the first version, then control it, measure how good it is, and then try to optimize. People call this agentic software engineering, or the agentic software development life cycle. You build an agent, measure how good it is — it looks okay, we don’t know — you have some controls, some guarantees. But if you really want the best-guaranteed agent, then you need more systems. And coming back to the question you asked before, a lot of people are trying to solve it. I don’t know if it’s a solvable problem per se, where you say, this is how the agent should behave. On the controlling side, it’s more: do I have the logs of what the agent did so I can take action, so I can switch it off? And who has responsibility if my agent is not optimized and is misbehaving? Is it my responsibility that I built the agent but I cannot control it? So it’s still a very unsolved problem, in my opinion. We may need a new architecture or a new version of LLMs, or something new.
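As a rough illustration of the multi-objective tuning problem Mike describes for traditional code: every candidate change is measured on several objectives, and because runtime and memory can trade off against each other, the output is a Pareto front rather than a single winner. The `measure` function here is a hypothetical benchmark hook, and the example search space is invented:

```python
from itertools import product

def measure(config: dict) -> tuple[float, float]:
    # placeholder: build with `config`, run the benchmark, return
    # (runtime_seconds, memory_mb); lower is better for both
    raise NotImplementedError

def dominates(a: tuple, b: tuple) -> bool:
    # a dominates b if it is at least as good everywhere and better somewhere
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(search_space: dict[str, list]) -> list[dict]:
    # enumerate every combination of candidate changes (flags, algos, ...)
    candidates = [dict(zip(search_space, vals))
                  for vals in product(*search_space.values())]
    scored = [(c, measure(c)) for c in candidates]
    # keep configurations no other configuration beats on all objectives
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored)]

# e.g. pareto_front({"opt_level": ["-O2", "-O3"], "algo": ["btree", "hash"]})
```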
Kate Holterhoff (19:09)
Yeah, yeah. You know, every time I go to San Francisco, I end up in a conversation about what comes after the transformer model. It’s exciting stuff. You know, the transformers do pretty well, but there are these unsolved issues, as you have mentioned here. Okay, so let’s talk about how you’re trying to solve it to the best of your ability. We’ve talked about reliability, we’ve talked about performance here. So let me ask you this, then. Your approach treats optimization as a measurable discipline. And you’ve mentioned measuring already in this conversation, which — I love that. So it’s ambitious, right? Can you define for me what “better” looks like, you know, numerically? How are you trying to gauge this idea of doing things better or worse when it comes to addressing these issues?
Mike Basios (20:00)
Yeah. Where we started, and where I started my research, was: what is the easiest thing I can potentially measure when I’m running some piece of code? So we started from the fact that for any piece of code, you can measure runtime, CPU, or memory at the end. It’s an easy way to measure. And who are the people that care about making code faster, making code better, and making it consume less memory? We saw a lot of people care about this, right? So that was the narrative of TurinTech, and the product, and the papers we were working on: how I can automatically take a piece of code and make it better in terms of runtime, CPU, and memory. But now that system has evolved.
We started with runtime, then we said, okay, what other types of code or applications are there where measuring something makes sense? So we said, okay, let’s not focus only on runtime; let’s take machine learning code bases. Traditional machine learning algos — this XGBoost model is at 80%, now it’s 89%, make these changes. And we saw it was working, right? But now this is skyrocketing when we’re talking about agents, because for every agent, you need to measure. So as part of our platform, and also through the engagements we have with clients, we are trying to educate people during the process. We published a paper where we took five open source agents — one is a data science agent, one is a math agent, one is—
I don’t remember, some other type of agent — and we optimized those agents, because those agents had benchmarks. But for people in general, for everybody: even when you are using ChatGPT to build, let’s say, marketing content, and you ask the model or the agent, it will always give you a different answer. So it’s very difficult for somebody to say which is the best. We typically use empirical experience.
But the best way to measure it is: okay, I have this marketing content. I try it in my SEO strategy. I see what worked. And then that feedback needs to go back to your agent, right, to your system. So it’s part of everything people are doing, I feel, nowadays. And it’s needed more. But people have focused mostly on how to solve the problem, because that was the interesting piece. Now, unfortunately — it’s a bit disappointing — AI probably knows how to solve the problem, even for the most difficult tasks. That’s the reality we’re living in, unfortunately, right? So then the question is: how do I measure that AI solved the problem? Across different sectors and different use cases, it’s always about quality metrics. Even when we are checking code for unit tests, for example: some people may have few unit tests, some people have more. Even that is a measurable thing — test coverage. The more test coverage you have, the better the quality of your code, for example.
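For readers who want to try the “measure runtime and memory first” idea Mike starts from, a minimal harness might look like the following sketch; the `xml_to_json_*` functions in the usage comment are hypothetical names, not a real library:

```python
import time
import tracemalloc

def benchmark(fn, *args, repeats: int = 5):
    """Return (best wall-clock seconds, peak traced bytes) for fn(*args)."""
    best_time = float("inf")
    peak_mem = 0
    for _ in range(repeats):
        tracemalloc.start()
        start = time.perf_counter()
        fn(*args)
        best_time = min(best_time, time.perf_counter() - start)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak)
        peak_mem = max(peak_mem, peak)
        tracemalloc.stop()
    return best_time, peak_mem

# e.g. compare two versions of the same transformation (hypothetical names):
# t1, m1 = benchmark(xml_to_json_v1, sample_doc)
# t2, m2 = benchmark(xml_to_json_v2, sample_doc)
```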
Kate Holterhoff (23:29)
Yeah, the measurement thing is so interesting in this domain. You know, we hear so much about evals and benchmarks. And so, yeah, I agree that the importance of it can’t be overstated. I guess, as a follow-up to that question, then: could you explain where that has held up in practice, and where it has gotten, like, complicated? Can you talk at all about maybe some concrete instances?
Mike Basios (23:53)
Why are all the AI companies using evals when they release a large language model? Because that’s a way to measure how good the model is. But when we get to real use cases, a lot of models are really good at those evals, and it doesn’t necessarily mean they are the best at what some individual is doing and how they are using them.
Now, most evals try to be representative, right? They try to cover a wide variety of use cases. So they’re a good indication, typically. But no matter how good the evals are, it’s the problems that people are actually working on that you should really care about at the end of the day. So if you don’t have an eval for the problem you are working on — well, maybe you shouldn’t care. If it is just an agent you’re chatting with, maybe you shouldn’t care, right? Or it’s difficult to measure. But for the most critical operations we have seen in enterprises — if you want agents to automate things, to give loans, or agents that are answering questions — you really cannot have your agent doing something crazy, because then you may be liable to clients, you know, compliance and all these kinds of things. So it’s really difficult to build an eval. That’s an art per se. But I’m thinking about developers, or teams, that really want to provide quality, because that’s the difference as things go forward, right? Who is providing the best quality? Because now everybody can build things with a large language model. But what is the truth, what is the quality of the solution you are providing? You need some kind of guarantees. That may even become part of selling products: when people are selling stuff, they nowadays have to prove to clients that, look, this is really something unique, and these are the evals. And maybe there will be evals in other domains, which is a way to denoise. That’s how I see it. But it’s still exploratory, so nobody really knows.
Kate Holterhoff (26:27)
Yeah, I couldn’t agree more. And I think that’s where I want to push back maybe a little bit: we hear so much about AI saving time, that we’re going to be able to get our jobs done in a fraction of the time, right? But then teams are stretched thin, right? So I’m interested: how realistic is it to ask them to define, you know, what good looks like before they can start improving anything, and to try to address these things in addition to what they’re already doing?
Mike Basios (26:55)
Yes — because now it’s extremely difficult for engineers to assess what is good and what is bad. I can have an engineer show me the UI: wow, this feature looks amazing to me. The UI is implemented, but it may be buggy, it may not run well, et cetera. So building things, doing things, has become easier.
In everybody’s job, it’s easier to use AI to build something. Among the people I have met, I don’t see a single job that hasn’t been affected or hasn’t been improved in some way. Even the doctors we talk to use AI, even if they don’t tell their patients — everybody. If building things is faster, even for teams, then the question becomes: who really has the quality there? So either, as humans, we say: I have my empirical knowledge, I’m better at assessing this, so I can tell somebody, I looked at it, this is better, this is the quality of my work. Or, if you want to go to a world where you are using agents to build those things, you need some kind of controls. Not controls per se — some eval metrics. Like, which presentation is better? One person does a presentation, the other person does a presentation, both potentially use ChatGPT, but whose result is the best at the end of the day? So I think people will change the way they present, too. They will present and say, okay, I have this — either their credibility, the thing I’m presenting is good because I have the experience, it has worked in the past, it has done this — or: here are the metrics.
Kate Holterhoff (29:01)
I keep hearing “taste” used in this context, like we all need to have taste now in order to determine what’s good and what’s bad among otherwise seemingly comparable solutions from these LLMs. Yeah, but again, you know, I don’t know how to measure taste. So, you know, kudos to you for trying to help us here.
Mike Basios (29:17)
Yeah… yeah.
Kate Holterhoff (29:19)
So would you characterize what TurinTech AI is doing as a measurement-first approach? Is that an accurate way of describing it?
Mike Basios (29:30)
Yeah. So our approach, in the world of AI and code generation and code refactoring, is: how do you help developers really improve the quality of their code base? That’s the biggest problem right now. There are tools, of course — Claude Code and other tools are extremely popular. People are using them, they are very powerful. People are
Kate Holterhoff (29:48)
Mm-hmm.
Mike Basios (29:59)
genuinely producing more code, but we also see more people struggling with issues in production, et cetera. So we as a startup are asking: can we generate extremely good-quality code suggestions that are valuable to engineers, that they can then import into their code base?
This is based on a lot of cutting-edge research — the closest is in the area of AlphaEvolve, auto-evolve, these kinds of techniques — where practically we first try to identify what people care about in their codebase. So if you care about runtime, we’ll try to find those pieces in the code and make sure that we change them, we measure, we benchmark, and then we give you the best outcome possible. So we are trying to educate people: measure first, or evolve first. And in the coding space especially — and this is the majority of applications — there are metrics you can use. Runtime is the easy one; machine learning has its metrics; deep learning networks always have metrics like how long they take to train and what the accuracy is; and agents — we see this market exploding, as I mentioned before, because now this is a need. In the past we were trying to help teams optimize their code, but people would say, okay, I don’t have benchmarks, I don’t have unit tests. Now building benchmarks and unit tests is easier than in the past. So you can do that.
But once you have that, then you can really increase the quality of your code. And the biggest achievements of AI in this space — where people measured and found a new mathematical proof, or found a new algo — came about because those things are usually easy to measure. You ask AI to solve this problem; it fails; try another version; it fails; try another version; it fails; until it solves it.
So practically, you know, you set something up, go to sleep, let AI try all the different variations, and you wake up in the morning and here it is: a much better version of what you were doing. And we have a platform that fully runs and does this. It’s currently in the market.
The most cutting-edge AI teams are doing this internally to optimize their systems and to build their systems — this is the same area as our research. But we are trying to democratize it, meaning we want everybody to be able to do it. That’s why we have this product, and why we’re educating people: if you start measuring, then you should be able to optimize. And it applies to other domains, not only coding — marketing as well. People can build marketing content, but we are trying to say: connect it to metrics that you build. It can be applied in many, many domains. Another term people use for this is self-improvement, this kind of thing.
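The overnight loop Mike describes maps onto a simple generate-measure-select skeleton. The sketch below is a toy illustration in the spirit of AlphaEvolve-style systems, not TurinTech’s product; `propose_variant` and `score` are hypothetical placeholders:

```python
import random

def propose_variant(code: str) -> str:
    # placeholder: ask an LLM to rewrite `code` to improve the target metric
    raise NotImplementedError

def score(code: str) -> float:
    # placeholder: run tests + benchmarks; higher is better, failure = -inf
    raise NotImplementedError

def evolve(seed: str, generations: int = 100, population: int = 8) -> str:
    """Generate-measure-select: benchmark each variant, keep the best."""
    pool = [(seed, score(seed))]
    for _ in range(generations):
        parent, _ = random.choice(pool)       # pick a survivor to mutate
        child = propose_variant(parent)
        pool.append((child, score(child)))    # measure the new variant
        pool.sort(key=lambda cs: cs[1], reverse=True)
        pool = pool[:population]              # selection: best survive
    return pool[0][0]                         # best variant found overnight
```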
Kate Holterhoff (33:24)
Okay. As a way of summarizing everything that we’ve discussed today, I think maybe it’d be important to discuss what the alternative looks like. And that would be not optimizing, right? So what are the consequences of delaying optimization? I’m thinking of inference costs, hardware, agent workflows. What sort of compounding issues do you then accrue in the background? And can you quantify the cost of waiting, in the same way we were quantifying what better looks like in terms of software development?
Mike Basios (33:58)
Yeah, it really depends on the type of application and who cares about it. If you are an inference provider, your whole business is to be faster at serving LLMs, and your costs are on the line. So you would want that inference to be — well, optimization is practically your product. The faster your inference server is, the better. If you are building, let’s say, some cancer detection algo, or some advanced model that is proprietary and you’re competing with others, then optimization is something you should really care about, because otherwise you’re losing business. Now, if people are using more agents, it’s the same problem. Every application now is an agentic application: you have a chat, or you have agents — people are hiring agents now to do search engine optimization and all of this. So if your agent is better, then practically you have a better business than others. I see this as part of quality, and it’s easier to achieve now with AI, because it’s becoming easier to build. In the past, if I could build an application, nobody else could build it, because I had the resources, I had the engineers, I had the knowledge. But now everybody can build faster.
Quality is what makes the difference, right? So I don’t love the term optimization per se; building and validating is part of what you need to do. It’s just that in the past it was difficult, or people didn’t have time, or it wasn’t needed. But now it’s more needed, because there are one-person companies nowadays, in theory, that have 10 agents doing all these kinds of things.
Kate Holterhoff (36:03)
Yeah, a new world. Okay, so I’m going to hit you with one last question here, and this is a sort of call to action. I imagine this is overwhelming for many folks. What would you say to a tech leader who’s convinced but maybe hasn’t started: what is the smallest meaningful step that they could take?
Mike Basios (36:23)
So what I would say is: find where AI would have the biggest impact on whatever they are doing. For example, I like this optimization use case because it is measurable. If you go to an enterprise and say, okay, if I optimize your code 20% and you save $10 million on Google Cloud, is that valuable for you? Yes or no?
AI can achieve that with minimal code changes, so you are minimizing the risk. So try to throw AI at the most difficult problems that are the most valuable to a company or an individual.
Then there is no question of whether AI can bring value to teams. So try to get teams access to the tools, right? There are risks with them, but I also don’t believe in a world where, hey, you shouldn’t use AI for your day-to-day job — because you’re losing quite a lot. The market is moving very fast.
Now, typically, when people are using AI, they try to measure productivity, and it’s a bit tricky. For teams especially working in a risky environment — where your company or your product is all about compliance and these kinds of things — if they let AI loose, it will most probably do something wrong. So then the question is: what is the most useful, most valuable use case for them, to start really bringing value to their business? That’s how I see it, at least. We are a startup, so my approach has been: the best AI tools, every tool, whenever possible, and educating people to start thinking about the quality before even doing the thing. And I have seen it with our team as well. People immediately jump to: this is how I should do it. And that’s a shift they need to make: okay, what’s the best outcome? How do I know this thing is good? And do I care?
Kate Holterhoff (38:41)
Important questions. For folks who are interested in learning more about all this, where would you direct them?
Mike Basios (38:47)
They can go to our website, TurinTech.ai. We have a developer preview, and we can give access to people who want to try the product — especially people who have something about their code base that they can measure and that they care about. Those are the people for whom our product makes sense, right? We are not another coding tool.
So for people who care about some of these metrics — bugs in their code, maintenance, measurable metrics like performance — we can definitely help them. They will get value. And once they’ve done it in one example, it’s a repeatable pattern they can apply to every aspect of whatever they are using AI for.
That’s my opinion. And yeah, the research in this area is in this self-improving, self-evolving, AlphaEvolve direction, if people really want to know more about the next level of AI and where people are advancing and able to do more with AI.
Kate Holterhoff (40:06)
All right, let’s go ahead and wrap up. My name is Kate Holterhoff, I’m a Senior Analyst at RedMonk, and my guest today has been Mike Basios from TurinTech AI. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you’re watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments.