A RedMonk Conversation: Feature platforms – the data foundation for production ML applications



There is so much hype around Generative AI right now, but what does that mean for Predictive ML? Senior RedMonk Analyst Rachel Stephens talks with Tecton VP of Marketing Gaetan Castelein about the challenges of getting ML models into production, the role of feature stores vs. feature platforms, and the overlaps and distinctions between Generative AI and Predictive AI.

This was a RedMonk video, not sponsored by any entity.

 


Transcript

Rachel Stephens: Hey everyone, and welcome to RedMonk Conversations. I’m Rachel Stephens with RedMonk. And with me today I have Gaetan Castelein. Did I get that right?

Gaetan Castelein: You did. You’re very close.

Rachel: My French is really terrible. I’m sorry.

Gaetan: No, it’s all good.

Rachel: Maybe you should introduce yourself.

Gaetan: And I make it tough, because I have a French first name and a Dutch last name. I’m from Belgium, so that weird French/Dutch combination. But it’s Gaetan Castelein. And thanks so much for having me, Rachel.

Rachel: Wonderful. And can you let everyone know just who you are and what you do?

Gaetan: Yeah. So I am VP of Marketing at Tecton. I have been at Tecton for about four years, so I joined the company very early on; there were like seven people in the company when I joined. Before that, I spent a couple of years as VP of product marketing at Confluent, the streaming data company, and before that a few years at a company called Cohesity, so another startup. And before that, a long time at VMware, running product management and product marketing.

Rachel: Wonderful. And Tecton, for those who don’t know, is a feature platform in the machine learning space. And I am recording this call today because I had this lovely conversation with GC, I guess it was a couple of weeks ago, where I came in 100% in this GenAI frame of mind, because that’s what everybody talks about all of the time. And I learned so much, because I did not have this correctly bucketed mentally myself. And so really, we’re here to talk about what a feature platform is, what predictive ML is, and just to understand more about the AI/ML space beyond what is getting headlines these days. So generative AI has all the hype right now, but I would love to talk to you today: what does the predictive ML market look like, and what makes it different?

Gaetan: Yeah, yeah, great question. And you know, obviously GenAI has all the hype, right? Everyone is always talking about it. And I think rightfully so, right? It speaks to us as humans because now we can converse with machines and have machines generate human-like content. And so that’s fascinating. But yeah, I think there is a strong distinction between generative AI and predictive AI. Predictive AI is all about generating highly accurate predictions and generating those in the fastest way possible. And predictive AI has been around for a long time. It’s used by companies like Google or Amazon: when Google figures out which ads to serve you, that is powered by predictive AI. Amazon figures out which products to recommend to you. When you watch Netflix, it recommends shows that you may like. All of that is powered by predictive AI. And so that’s quite different from generative AI. The goal of generative AI is not necessarily accuracy. In fact, we’ve all heard these stories of generative AI sometimes just making stuff up when it doesn’t know. But the goal of generative AI is to understand human content, as large language models do, and to be able to generate text or content that speaks to us as humans. And so it’s really two different objectives between these two forms of AI.

And thinking about predictive AI, which is really about generating those very accurate predictions, what we have seen the most advanced organizations like Google or Amazon or Facebook do is use predictive AI to power applications. And I can give a few examples. Tecton was created by the team that built the Uber Michelangelo ML platform. And Uber Michelangelo is this well known platform in the industry, and it’s used to power all sorts of predictive machine learning at Uber. So for example, if you order a meal on Uber Eats and you get an ETA of when it’s going to be delivered, that’s powered by machine learning. Or if you have surge pricing, those prices are set by machine learning models, not by an army of humans who are constantly monitoring demand and supply. So predictive ML has been around for a long time. That being said, the way it is used in most enterprises today is still very batch in nature. What that means is: we’ve done business intelligence for a long time. We’re all used to having dashboards and having humans make decisions off of dashboards. And the way most enterprises use predictive machine learning today is to provide better insights to a human decision maker.

That’s a good first step, but we believe that predictive ML is most powerful when it is deployed in production to support new applications, new end user facing services, or to automate business processes. And the reason for that is that as long as you have the human in the loop making those decisions, you’ll only be able to operate in batch mode on those decisions. And we’re now generating so much data that is coming at us so fast, with streaming, Kafka, and real time data, that it’s really impossible for humans to keep up. In the examples I threw out, Uber could not have an army of humans generating an ETA prediction for every meal that is ever ordered on Uber Eats. It’s just not practical. And I think many enterprises have situations like this where there’s just too much data to rely on human interpretation alone. And that’s where predictive ML really comes in. It can allow you to make the simple, routine decisions very, very quickly, oftentimes more accurately than a human. And you can then use those predictions to build new customer facing services or applications. And that is what’s really powerful about predictive ML. As you can tell, it’s really about predictions. It’s not about human generated or interpretable content. It’s really about making a prediction, and that’s what predictive ML is all about.

Rachel: Gotcha. Hold on, I want to clarify one thing, because you had human in there a couple of times, and I just want to make sure. So we talked about how batch processing could kind of be like that human in the loop processing, which is not sustainable for a lot of things. You also used the word human in your conversation around generative AI making human content. So I think there are some distinctions there. It’s human in the loop in terms of structured data that we’re processing in a batched way, versus human meaning the unstructured, human-generated content that we’re working with. I feel like human came up in two different forms, so I just want to make sure we’re clear there.

Gaetan: Yeah, yeah, yeah. So in the context of generative AI, when I’m talking about human interpretable content, right, like speech, or text…

Rachel: Like natural language.

Gaetan: Natural language. In the context of predictive ML, the point is that most of the time in the enterprise today, predictive ML is used in a batch way with a human in the loop to make that ultimate decision based on the data that is provided by the model. But that’s where we can really take that human out of the loop and automate those decisions instead of relying on humans. Right?

Rachel: Yeah, perfect. I just wanted to make sure we had that clarified because the whole goal of this conversation is clarification. You also talked about the challenge of bringing things into production. So it’s not just having the data, but how do I actually get these predictive models to be able to make these predictions at scale in these enterprises? Can you talk a little bit about what that looks like?

Gaetan: Yes. And so when we talk about predictive ML, we believe that predictive ML really belongs in production to power new applications and services, right? I mentioned the Uber examples, but there are countless examples. We have customers using predictive ML to automate loan underwriting, or to do fraud detection, or product recommendations, or dynamic pricing. There’s a large number of use cases that are much better once you get predictive ML into production. Now that’s a very complicated transition, because we’re coming from this BI world where everything is batch in nature, right? We have the human in the loop. So it’s okay if we generate a report like once a day. And so pipelines are batch in nature and the data serving is batch in nature. If you’re using a centralized data warehouse, chances are you’re not doing real time transactions on that data warehouse. You’re updating dashboards and such, which is very batchy. And once we’re moving predictive ML into production, now we’re dealing with production data. And that means two things. It means, one, we need to be able to serve the predictive signals — the data that’s going to be used to make the predictions — we need to be able to serve that in real time at scale with enterprise grade service levels.

So the serving aspect becomes a lot more complicated. But also, oftentimes if you’re doing real time predictions, those predictions will become much more accurate if you use real time data to power them. Right? So I’m going to go back to my Uber Eats example, because that’s an easy one for everyone to understand. Imagine that you order a meal on Uber Eats and we’re telling you, hey, it’s going to be delivered in half an hour. That prediction is actually quite complicated, because you need a multitude of data points: how complicated is the meal that you just ordered? Is it just a simple thing for one person, or is it a big meal for like 20 people? How busy is the restaurant right now? How long does the restaurant usually take? What is the traffic situation like between the restaurant and the delivery location? Do we have any drivers in the vicinity? So there are a lot of data points that go into generating that ETA. And you can tell that some of these data points can be batch in nature. That’s fine. Like, how long does the restaurant usually take? If that’s a data point from a day ago, that’s probably still valid.

But some of these data points will really benefit from being real time. Like, what’s the traffic situation right now? Or how busy is the restaurant right now? And so these predictions become more accurate if you’re also using fresh real time data to power them. And that gets us into a world of streaming transformations and real time transformations. And that’s where it gets really, really complicated, because all of the analytics stack has been designed for batch use cases, and now we’ve got to start taking that batch data, combining it with streaming and real time data, transforming it very, very fast, and serving it online at very high volumes and very low latencies to support these real time predictions. And that is what’s making this transition from batch to real time so complicated for organizations. We essentially have to bridge those two stacks, right: the batch analytics stack and then your production data stack. And that’s really the problem that Tecton is setting out to solve: providing tooling to make that transition much easier for organizations.

Rachel: Gotcha. So now that you mentioned Tecton, there are a couple different things. We’re thinking feature platform. Number one, let’s make sure we’ve defined feature. So if we’re saying feature, in this example you just gave, all of those various data points would be features of the model. Is that right? Or how should people be thinking about what a feature is?

Gaetan: Yeah. And features, you know, can be confusing, because we talk about product features, right? And in this case, features are not that. Features are actually high quality data signals that you feed into your model to make a prediction. And so again, going back to the Uber Eats example, how busy the restaurant is right now could be a data point on a scale of 1 to 10, right? And when you make a prediction, you need a feature vector, and a feature vector is something that the application is going to request. It’s going to be like, hey, I need to make a prediction for Rachel. Please send me the feature vector that relates to Rachel, and then we will serve back all of the data points that are going to be used by the model to make a prediction. So that is what a feature is: a high quality predictive data signal that gets fed into a model to power these predictions.
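To make that concrete, here is a minimal sketch of the feature vector lookup described above. The class and method names are illustrative assumptions, not any particular vendor’s API:

```python
# Illustrative only: a hypothetical online feature store client.

class OnlineFeatureStore:
    def __init__(self, features):
        # entity_id -> precomputed feature values
        self._features = features

    def get_feature_vector(self, entity_id):
        # Return all current feature values for one entity.
        return self._features[entity_id]


store = OnlineFeatureStore({
    "rachel": {
        "restaurant_busyness_1_to_10": 7,  # how busy the restaurant is now
        "avg_prep_time_minutes": 18.5,     # how long it usually takes
        "order_item_count": 3,             # size of the order
    }
})

# The application says: "I need to make a prediction for Rachel.
# Please send me the feature vector that relates to Rachel."
feature_vector = store.get_feature_vector("rachel")
# feature_vector is then passed to the model, e.g. model.predict(...)
```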

Rachel: And I wanted to clarify with you: so a feature is different from raw data, because the raw data points are just all the data points. But these features are going to be, like you said, they could be ranked, they could have ETL transformations applied to them. They’re going to have some kind of, I guess, computation in and around them.

Gaetan: Yes. So, creating high quality features is manageable if all you’re using is batch data; we have tools like pipeline orchestration tools and processing engines that will allow us to create features in batch mode. Where it gets really complicated is once you start incorporating streaming data or real time data.

Rachel: Right.

Gaetan: And I’ll throw out a few examples. One of the things that makes it complicated is that we’re generating data for machine learning, which means that, yes, we need fresh data that can be served very fast at high volume, but we also need historical data that we can use for model training, right? So we can’t just keep the very latest fresh data. We also need the historical data, and that increases the complexity of the pipelines. The other thing is that oftentimes, if you’re using streaming data, you’re going to want to be doing aggregations on the data. And these aggregations can be very resource intensive. If you’re using streaming data, you also oftentimes need to back up that streaming data with batch data to get that historical context. Imagine you’re just using a stream and you don’t have any of the historical data behind it. How do you train your model? If you want to train the model on a year’s worth of data, are you going to wait a year for that stream to play out before you can train the model? That’s not practical, right? And so we oftentimes need a batch data source that is going to back up the stream with all the historical data.
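As a rough illustration of those two points, here is a minimal sketch of a time-windowed streaming aggregation that gets backfilled from a batch source. A real system would use a stream processor; this only shows the logic:

```python
# A sliding-window count feature ("orders in the last 30 minutes"),
# seeded from historical batch data so it works before the live
# stream has "played out".

from collections import deque
from datetime import datetime, timedelta

class WindowedCount:
    def __init__(self, window):
        self.window = window
        self.events = deque()  # timestamps inside (or near) the window

    def backfill(self, historical_timestamps):
        # Seed the window from a batch source (e.g. a warehouse table).
        for ts in sorted(historical_timestamps):
            self.events.append(ts)

    def update(self, ts):
        # Called for every new event arriving on the stream.
        self.events.append(ts)

    def value(self, now):
        # Evict anything older than the window, then count what's left.
        cutoff = now - self.window
        while self.events and self.events[0] < cutoff:
            self.events.popleft()
        return len(self.events)


orders_30m = WindowedCount(timedelta(minutes=30))
orders_30m.backfill([datetime(2023, 9, 1, 11, 40), datetime(2023, 9, 1, 11, 55)])
orders_30m.update(datetime(2023, 9, 1, 12, 5))
print(orders_30m.value(datetime(2023, 9, 1, 12, 10)))  # -> 3
```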

And then we need to transform all of that and combine the streaming and batch data sources in a way that keeps the data consistent. So these data transformations that are required to produce the high quality predictive signal are oftentimes very complicated. They rely on data engineers. Data engineers have a lot of work to do; there are backlogs. And one thing I want to mention, too, is that the process is often very convoluted today, because you have data scientists who behave more like scientists, right? They search for data, they do experimentation. They try new things, often operating in a Jupyter notebook on their local machine. And they generate code that is not production quality code. And so once you’re happy with the features that you’re going to use for your model, then you need a production data pipeline. And that’s typically not built by the data scientists; they hand over the notebooks to data engineers or machine learning engineers, who are going to re-implement the pipelines using production quality code. And that handoff process just slows everything down. If you think about the way we do application development today, the software engineer is really empowered to build the code, QA it, and get it to production, and we can release software on a daily basis because the process has become so streamlined. In the world of machine learning, we’re very far from that today because of these complicated handoffs between teams.

Rachel: And I think that ties into my next question, though. So we talked about what a feature is, and then I want to get into the word platform. I’ve definitely heard about feature stores and projects like Feast. What’s a feature platform? And I think your conversation here is starting to touch on that. So let’s dive in there.

Gaetan: Okay, great question. So yes, feature stores have become popular over the past three or four years. We now have Databricks with a feature store. AWS has one, Google has one. There are multiple feature stores that have been created. The feature store aims to solve the serving side of the feature problem. So what feature stores do is curate high quality data in an online store for online serving, and in an offline, cost efficient store, like S3 for example, to generate training datasets to train the models.

Rachel: Okay.

Gaetan: And then they present a clean, very consistent set of APIs to consume the data. They allow the data to be shared between models, because you have a central repo of all of your features. And many times features will be reused across models; oftentimes there are data points which are common between models. And so when you have the central feature store, now anybody can go and reuse existing features, right? So it’s really solving the serving side of the feature challenge. But that’s not enough, because as I mentioned earlier, the data transformation aspect is oftentimes where people get the most bogged down. And so to really enable people to quickly and reliably generate high quality features, you need to not only serve them with a feature store, but you also need to manage the transformation of the raw data into clean feature data.
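A rough sketch of that dual-store idea, one online store for low-latency serving and one offline history for training data, might look like this (all names are illustrative, not a specific product’s API):

```python
# Illustrative only: one online key-value view for serving, one
# offline append-only history for building training datasets.

import time

class FeatureStore:
    def __init__(self):
        self.online = {}    # entity_id -> latest feature values
        self.offline = []   # full history of (timestamp, entity_id, values)

    def write(self, entity_id, values):
        # Each write updates the online view and appends to the history.
        self.online[entity_id] = values
        self.offline.append((time.time(), entity_id, dict(values)))

    def serve(self, entity_id):
        # Low-latency lookup used at prediction time.
        return self.online[entity_id]

    def training_rows(self):
        # Historical records used to generate training datasets.
        return list(self.offline)
```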

These are pipelines. And so at Tecton, because our mission in life is to build high quality data for machine learning, we think it’s important to address both of these sides: the feature pipelines and then the feature store. And that’s what we call a feature platform at Tecton, right? So we allow data scientists and machine learning engineers to create declarative definitions of the features. And then we automate and optimize the pipelines. A lot of our IP is in managing these data transformations on streaming and real time data. So we automate the pipelines to generate high quality feature data, then we curate that data in a feature store, serve it online with great service levels, and make it very easy for people to generate high quality training datasets using the historical data. And that’s how we think about the distinction: a feature platform is that combination of transformations and storage and serving, whereas the feature store really just does the storage and serving.

Rachel: Gotcha. And I think I might have interjected too soon, thinking that I understood platform, because in the last question you were talking about getting that model from the data scientist’s Jupyter notebook into production. And it sounds like that kind of comes after the feature platform? Is that right?

Gaetan: So the way the process works with Tecton, you’re still going to do a lot of data exploration in your notebooks, right? That doesn’t really change. You’ve got to do a lot of experimentation, and the notebook is a great way to do that. The way the process changes is, once you know what you want your feature to be, we allow anybody — that could be a data scientist, a machine learning engineer, a data engineer — to create a feature definition, right? Which is basically a Python file that uses a declarative framework to define what the feature needs to look like. So they’re going to define the data sources, and they’re going to define the transformation logic that needs to be applied. That Python file is managed in a centralized, Git backed repo. And that means anybody can collaborate on these features. I can pull an existing feature, branch it off, create a new version; it’s all centralized, so all these teams can collaborate on features. And once you have created that feature definition, you then apply it to Tecton, and we will automate the end to end process. So we will automate the pipelines and the data transformations, and then we will serve the data online and have the data available for generating training datasets. So that’s really what Tecton does, right? It’s aiming to manage that end to end workflow, aside from the early data exploration phase, which is still mostly going to happen using existing traditional tools.
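As a hedged sketch of what such a declarative feature definition might look like (this is not Tecton’s actual API; the decorator, its parameters, and the data source name are invented for illustration):

```python
# Illustrative only: a declarative feature definition registered in a
# central repo, from which a platform could operate the pipeline.

from datetime import timedelta

FEATURE_REGISTRY = {}  # stands in for the centralized, Git backed repo

def feature_view(source, entities, window):
    # Capture the transformation plus its metadata so a platform
    # could build and run the pipeline from this declaration.
    def decorator(fn):
        FEATURE_REGISTRY[fn.__name__] = {
            "source": source,
            "entities": entities,
            "window": window,
            "transform": fn,
        }
        return fn
    return decorator

@feature_view(
    source="orders_stream",        # hypothetical streaming data source
    entities=["restaurant_id"],
    window=timedelta(minutes=30),
)
def restaurant_busyness(events):
    # The declarative transformation logic: count recent orders.
    return {"orders_last_30m": len(events)}
```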

Rachel: That was really helpful. And so then I wanted to take it back by ending where I started in my confusion. And can we talk about just how feature stores interact and intersect with generative AI and kind of bring this all together because it feels like they’re slightly different, but it also feels like there’s overlap.

Gaetan: Yeah, yeah. So, great question. In the beginning we talked about how generative AI is all about human interpretable content, right? And predictive AI is all about generating high quality predictions. Today, feature stores are mostly used to support predictive AI, because it’s these models that need to be very, very accurate. You can imagine a fraud detection model where accuracy is essential: if you can improve accuracy by a tenth of a percent, that probably has a meaningful impact on the bottom line. So they’re optimized for different things. But that being said, there are many use cases where generative AI and predictive ML play well together and allow you to generate a better overall solution. So for example, I think last time we spoke I mentioned the Stitch Fix example, where you essentially get clothes delivered to you on a regular basis and then you give feedback on what you received. If you like it, great. If you don’t like it, you send it back and you mention why you don’t like it. And that response is human generated content: free text giving the reasons why you did not like that shipment or that set of clothes. And you can then use generative AI to extract the signal from that feedback, right? Generative AI will understand what you wrote in your feedback and extract the main points.

It could be like, hey, the color wasn’t right, or the size was wrong, or the style was a little too out there, what have you. Once generative AI has extracted the signal from your feedback, that signal can then become features that would be managed by Tecton and fed into a predictive model used to generate a recommendation of what to send you next. And so that’s a good example of the two types of ML we talked about — generative and predictive — working in conjunction with each other to lead to a better outcome. Another example: imagine you want to do an email campaign. You’re an online retailer and you want to do an email campaign to all of your customers. Well, similarly, you could use predictive ML to generate a recommendation, like: based on Rachel’s buying history and recent browsing history, we think Rachel will really like this product. And then you can use generative AI to generate content: an email that could be sent to you saying, hey Rachel, check this out, we have a promotion on this product. And so that’s another example of how the two types of ML can be used in conjunction.
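A minimal sketch of that first pattern, assuming a hypothetical call_llm() helper that sends a prompt to some LLM and returns its text response, might look like this:

```python
# Illustrative only: generative AI turns free-text feedback into
# structured signals that can then be stored as features.

import json

def call_llm(prompt):
    raise NotImplementedError("stand-in for a real LLM client")

def extract_feedback_features(feedback_text):
    prompt = (
        "Extract the reasons for this clothing return as JSON with "
        'boolean keys "color_issue", "size_issue", "style_issue":\n'
        + feedback_text
    )
    return json.loads(call_llm(prompt))

# The resulting dict, e.g. {"size_issue": True, ...}, becomes feature
# values fed into the predictive model that picks the next shipment.
```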

Rachel: So to summarize: in one, generative AI is kind of helping generate — is it generating data for the features? Is that right? And in the other, you’re using the features to generate content. Does this kind of go both ways?

Gaetan: So in the case of Stitch Fix, generative AI is used to extract the sentiment from free text, right? In the second case, generative AI is used to generate human content based on a prediction, a recommendation, that comes from a predictive AI model.

Rachel: That’s much better articulated than what I posed.

Gaetan: And then there’s another thing I just want to mention. Even in the case of generative AI, if you use ChatGPT, it’s a human generated prompt, right? So ChatGPT is going to give you a response based on the information that is included in your prompt. Now, that prompt is human generated, so there’s a limited quantity of information and accuracy that you can expect in that prompt. You can augment the prompt with high quality features. You could be asking a question of ChatGPT like, I don’t know, hey, can you write a thank you email to a customer? And then ChatGPT will generate that email. But if you can enhance your request with high quality data, which could be: how much has the customer bought this year? What was the last product the customer bought? Then ChatGPT can generate a response that incorporates that information and makes it more personalized. Right? So there’s also a use case where you can just enhance the prompts that you feed into large language models with high quality features to lead to a better outcome. And so there are these two: predictive ML and generative AI being used in conjunction, but also high quality data just being used to lead to better outcomes from generative AI. And the main conclusion here is that these high quality data signals are extremely useful for an organization today. The features that Tecton manages are mostly used to power predictive AI use cases, but they will increasingly become very useful for generative AI use cases as well. And so really, our mission, the way we think about feature platforms, is that we enable organizations to create, manage, share, and centralize all these high quality data signals that are going to be useful in a variety of machine learning use cases. Today, mostly predictive, but in the future, also including generative AI use cases.
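A minimal sketch of that prompt augmentation pattern, with invented feature names and a stand-in for the online feature lookup, could look like this:

```python
# Illustrative only: enrich an LLM prompt with feature values before
# sending it. get_features stands in for any online feature lookup.

def build_prompt(customer_id, get_features):
    f = get_features(customer_id)
    return (
        "Write a short, personalized thank-you email to a customer.\n"
        f"Purchases this year: ${f['spend_ytd']:,.2f}\n"
        f"Last product bought: {f['last_product']}\n"
    )

# Example with hard-coded feature values:
prompt = build_prompt(
    "rachel",
    lambda _id: {"spend_ytd": 1240.50, "last_product": "running shoes"},
)
# The enriched prompt is then sent to the language model, which can
# reference real customer data instead of a generic template.
```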

Rachel: Well, that was absolutely just as enlightening this time as it was last time. And we’ll share this publicly because I think it will help a lot of people understand the space better. Gaetan, thank you so much for your help today. It was wonderful to chat with you.

Gaetan: Alright, same here. Thanks so much for having me, Rachel. A pleasure as always. Thanks. Bye.

 
