James Governor's Monkchips

IBM and embeddable AI – natural language processing and speech recognition, high touch to self-service.


In this RedMonk Conversation, I talked to IBM Director of Engineering Bill Higgins, responsible for IBM’s foundational AI technologies, about the future of embeddable AI as IBM rethinks its delivery models, from high touch to digital self-service. We discussed the past and future of core technology in IBM’s AI portfolio, and what comes next after the recent launch of Watson Libraries. Embeddable AI is a major strategic project for IBM, involving new ways of thinking about how the company delivers products for functions like natural language processing and speech recognition.

This will entail a fundamental rethink of how IBM engages with third party software providers, with a more developer-led approach and a more focused, modern style of product management. It has started with internal collaboration between IBM and Red Hat. AI and machine learning are a strategic focus for IBM CEO Arvind Krishna, and as such, the company has to deliver successfully on a reset around Watson. Part of that will be encouraging third party adoption by SaaS companies and others delivering AI in their products and services. IBM has to compete on developer experience with the likes of Google or Hugging Face. That’s the challenge at hand, and it’s going to take a lot of work to get there. It’s good to hear Bill call out the importance of great docs!

According to Higgins: 

Internally we have maybe ten common components that really address the common use cases, and they’re adopted in IBM products already. So they’re very robust, because they’ve been running in production, in some cases for up to three years. But what we’re going to do is have a phased rollout for commercialization because, of course, you have to do the product design work, you have to create better documentation, and you’re going to learn a lot from the first encounters with commercial customers. So what we just announced is that we’re providing the embeddable version of our natural language processing technology and our speech technology, which are two of the most mature ones, both in terms of the long history of IBM investing in those technologies over the years, over the decades in fact, but also in terms of our internal architecture, the foundational way we build our technologies. Those are some of the ones that we did first, so they’re already broadly adopted within IBM and therefore are good candidates for that first wave.
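To make “embeddable” concrete: the pitch is that the model runs in-process as a library, with no API key and no network round trip. Here’s a minimal sketch of that consumption pattern in Python. The package name, load() call, and model path are my own illustrative assumptions, not the actual Watson Libraries API:

```python
# Hypothetical sketch of in-process, embeddable NLP.
# The package name, load() call, and model path are illustrative
# assumptions, not IBM's actual Watson Libraries API.
import embeddable_nlp

# Load a pretrained sentiment model from local disk:
# no credentials, no cloud endpoint, no network dependency.
model = embeddable_nlp.load("models/sentiment-en")

result = model.predict("The new docs made integration painless.")
print(result.label, result.score)  # e.g. "positive" 0.97
```

The contrast with the SaaS delivery model is that the library ships inside the product that embeds it, which is exactly the third party adoption story Bill describes.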

It’s crucial to scale down as well as up. Not everyone is a hyperscaler. 

Our strategy is hybrid cloud and AI, and that means that we’re going to bring the AI to the data, not force you to bring the data to the AI. And that’s useful for a couple of reasons. The first one is that every business finds themselves in a hybrid cloud situation, whether they want to or not, because it’s just so easy for division A to pick Amazon and division B to pick Azure, and then they acquire a company who uses Google Cloud. And there are also on premises and edge use cases. So one of our mottos is that our Embeddable AI scales down as well as it scales up. It should be able to run an Internet scale workload on any of the hyperscalers, or it should be able to run in a box in a McDonald’s, a little device that has to run disconnected from the Internet, doing speech to text for ordering food, like Watson Orders.
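The “scales down” point is largely about packaging: if the models ship as self-contained container images, the same inference call works whether the container runs in a hyperscaler cluster or on a disconnected box in a restaurant; only the endpoint changes. A sketch of that pattern, with a hypothetical local endpoint and response shape:

```python
# Hypothetical pattern: the same speech-to-text container image runs in the
# cloud or on an offline edge device; the client only swaps the endpoint.
# The URL, route, and response fields are illustrative assumptions.
import requests

STT_URL = "http://localhost:8080/v1/recognize"  # local container, no internet needed

with open("order.wav", "rb") as audio:
    resp = requests.post(
        STT_URL,
        headers={"Content-Type": "audio/wav"},
        data=audio,
        timeout=10,
    )
resp.raise_for_status()
print(resp.json())  # e.g. {"transcript": "two cheeseburgers and a large fries"}
```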

Anyway, here’s the conversation.

IBM sponsored the making of the video, but editorial control is entirely RedMonk’s.

 
