James Governor's Monkchips

Conversational IoT – ’SUP?


Conversational IoT emerged strongly as a theme at our ThingMonk conference over the last couple of years – and it’s on again next week. Nick O’Leary gave a stellar talk on the subject in 2014, inspired by the work of Tom Coates.

A lot of experimentation in the space happened on Twitter, with tweeting objects including ferries, bridges and thirsty plants (sadly Twitter didn’t support the interesting work developers were doing there – another potential opportunity it never acted on).

“We no longer appear able to send a robot to Mars, or land a probe on a comet without an accompanying twitter account bringing character to the events.”

But O’Leary also pointed out one of the most interesting things about objects that can talk: the emotion.

“There’s always a sense of excitement when these inanimate objects start to have a conversation with one another. The conversations between the Philae lander and its orbiter were particularly touching as they waved goodbye to one another. Imagine: the lander, which was launched into space years before Twitter existed, chose to use its last few milliamps of power to send a final goodbye.”

Ironically enough, while writing this post, this happened:

Lifeless body, lifeless body, lifeless body.

We imbue. It’s what we humans do.

So where would we be without the thoughtful contributions on thinking machines from Douglas Adams and the writers of Red Dwarf? I am fascinated by the notion of authorial “voice”. Any good writer has one. Indeed these talking spaceships are of course humans playing at being machines. My own founding partner did some good work in this regard with @analyticsmonk.

But what of machine voice? Is it really a thing? Well, if machines – or rather the software that drives them – are going to be interacting with people, it’s going to be emotional.

Some cultures are, I suspect, ahead on this stuff – I remember being taken aback in a discussion with a couple of Japanese colleagues when they said the Japanese generally believe that machines have souls. But of course we have the same cultural tic: people name their cars; we anthropomorphise everything. Which is one reason I think we are fetishising chatbots right now. Bots are people too. If companies can be people then machines definitely can.

Jeff is right – we’re being reductive. Chatbots have become our exemplifying vision of talking things. The Developer Aesthetic has emerged from IRC, through GitHub ops, and into Slack as a way of communicating. Slack, rather than Twitter, is beginning to feel like the foundational technology for how we’re going to build conversational interfaces. Slack may not have an IoT play yet, but I expect the platform to increasingly be used for customer relationships rather than internal comms. Part of that customer support is going to be engagement with the machine.
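How low is the barrier? Here’s a minimal sketch of a “thing” reporting in over Slack’s Web API – chat.postMessage is the real method, but the bot token, channel and message below are hypothetical placeholders:

```python
import requests

SLACK_TOKEN = "xoxb-your-bot-token"  # hypothetical token; Slack issues one when you create a bot

def post_status(channel: str, text: str) -> None:
    """Post a message to a Slack channel via the chat.postMessage Web API method."""
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": channel, "text": text},
        timeout=10,
    )
    resp.raise_for_status()
    if not resp.json().get("ok"):
        raise RuntimeError(resp.json().get("error", "unknown Slack API error"))

# the machine as colleague: a device reporting into a hypothetical #devices channel
post_status("#devices", "Coffee machine here: descale cycle complete, ready to brew.")
```

A few lines of glue and the machine is a participant in the channel, not a dashboard you have to go and look at.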

You should definitely read Chris Messina on what he calls Conversational Commerce. My only quarrel with the term is the use of “commerce”.

Before I begin, I want to clarify that conversational commerce (as I see it) largely pertains to utilizing chat, messaging, or other natural language interfaces (i.e. voice) to interact with people, brands, or services and bots that heretofore have had no real place in the bidirectional, asynchronous messaging context. The net result is that you and I will be talking to brands and companies over Facebook Messenger, WhatsApp, Telegram, Slack, and elsewhere before year’s end, and will find it normal.

We’ll shortly be talking to things in the same way. This conversational model isn’t only about buying and selling and brands – it’s deeper than that. It’s going to be completely pervasive.

Stephen wrote a good piece about why Microsoft was acquiring LinkedIn which plays to these scenarios. He imagined this conversation:

“Cortana: Where has Jane Smith worked?”
“Jane Smith has spent ten years as a Java programmer for companies in the insurance and healthcare industries.”
“Cortana: What are Jane’s professional certifications or memberships?”
“Jane is an Oracle Certified Master, Java Enterprise Architect, a certification that less than 5% of the applicants for this position claim.”
“Cortana: What languages does Jane Smith speak?”
“Jane is fluent in English, French and Spanish.”
“Thank you, Cortana. Can you schedule an interview with Jane next week?”
“Of course.”

I haven’t had any briefings on it yet, but I think this kind of thinking is driving Salesforce to create “Einstein”. And of course IBM has made a lot of the running in thinking-machine technology with Watson.

Chatbot startups have raised over $140m in the last eight months. A good chunk of that went into Slack startups and CRM.

The nature of the current dominant user interface – the app – is breaking down. I thought this MG Siegler piece – K I Get Uber – was really instructive about how.

On Friday, I was chatting on Facebook Messenger with my wife about where we should grab dinner. Once we decided, I responded with a simple, informal “K I get uber.” Meaning, of course, that I would call an Uber and meet her there. What’s amazing in this mundanity is that Messenger was able to parse my words and figure out that I indeed wanted to order an Uber car, and so it automatically inserted a “Request a Ride” button right below my chat bubble.

I clicked the bubble, went through an Uber ordering flow — all within Messenger — that in some ways was better than the flow within the Uber app itself, and the car was on its way. I got a message from Uber’s Facebook Messenger bot, letting me know the car was on its way and an embedded map to track it. But I didn’t even need to do that because when it got to me, Uber messaged me, again via Messenger, to let me know the car was there, complete with driver name, car type, and license plate number. When the ride was over, Uber messaged me, yet again through Messenger, with a detailed receipt for the ride.

Who needs apps?

Machines that parse intimate slang.

Frankly this stuff is getting existential.

What are some of the disciplines that underpin Conversational IoT?

We pay a lot of attention to integration for automation, but that feels like a solvable problem, given it’s a core competence of IT. We have emerging protocols – notably MQTT (co-inventor Andy Stanford-Clark will be with us at ThingMonk again this year) – that make sense in a listening, publish/subscribe world, rather than the always-on request/response model of HTTP. We have amazing platforms like IFTTT showing that integration doesn’t have to be painful. IFTTT is likely to emerge as a major IoT player. It has started down the connected-home route, which is already pretty crowded, but it has momentum and a developer-savvy user base to tap into.
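To make the contrast with HTTP concrete, here’s a minimal listener sketch using the Eclipse Paho Python client (the paho-mqtt 1.x-style API); the broker address and home/kettle/status topic are hypothetical:

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # subscribe on connect, so the subscription survives reconnects
    client.subscribe("home/kettle/status")  # hypothetical topic

def on_message(client, userdata, msg):
    # nothing polls; the broker pushes messages to us as they arrive
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)  # assumes a local broker, e.g. Mosquitto
client.loop_forever()  # sit and listen
```

And IFTTT boils the other half of the integration down to a single HTTP POST via its Webhooks (Maker) channel – the trigger URL format is IFTTT’s documented one, while the event name and key below are placeholders:

```python
import requests

EVENT, KEY = "kettle_boiled", "your-ifttt-webhooks-key"  # placeholders

requests.post(
    f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{KEY}",
    json={"value1": "100C"},  # optional payload passed through to the IFTTT action
    timeout=10,
)
```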

We can do the automation.

But writing witty, pithy, even plangent replies, on the other hand, may be a lot harder – especially if we need to be able to respond sensibly to every request. When I think about conversation done right from a user-experience perspective, I think of Joshua Porter’s “microcopy”.

Ironically, the smallest bits of copy, microcopy, can have the biggest impact.

Microcopy is small yet powerful copy. It’s fast, light, and deadly. It’s a short sentence, a phrase, a few words. A single word. It’s the small copy that has the biggest impact. Don’t judge it on its size…judge it on its effectiveness.

Short, sweet and to the point. That’s what we need to automate. Of course machine learning is going to end up writing great copy. ML is already analysing emotional states in movies and recreating photos in the styles of artists’ paintings.

Of course chat is not the only interface – voice is taking off. Siri, Cortana, Alexa and Leah are all going to make huge headway in the next five years. Perhaps we might even get a digital assistant with a male voice. While we associate Alexa Skills with the Echo for now, Amazon plans to make Alexa pervasive, found in all kinds of listening devices.
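For a flavour of what a skill looks like under the hood, here is a stripped-down sketch of an AWS Lambda handler speaking the Alexa Skills Kit request/response JSON; the KettleStatusIntent and its reply are hypothetical, and a production skill would also verify the application ID and handle launch and session-ended requests:

```python
def lambda_handler(event, context):
    """Minimal Alexa Skills Kit handler: map one intent to a spoken reply."""
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request["intent"]["name"] == "KettleStatusIntent"):  # hypothetical intent
        speech = "The kettle boiled three minutes ago."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```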

Then there are emojis. Machines are probably going to send each other emoji as status updates. For a while at least. Until emoji become like cassette tapes.

So great copy-writing, machine learning, chatbots.

What about the platforms and changing infrastructure requirements? We talk a lot in tech today about event-driven, reactive platforms. The serverless economy is going to come into its own here: an architecture based on waiting, where you only pay at the point of customer value. One reason Amazon acquired 2lemetry was that Kyle Roche had worked this out.

The affinity of IoT and serverless is exemplified by the fact that Amazon’s Echo architecture is built end to end on AWS Lambda.
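That architecture based on waiting is easy to picture: an AWS IoT rule (the rules engine uses a SQL-like syntax, e.g. SELECT * FROM 'home/+/status') matches a device message and invokes a function that only runs – and only bills – when something actually happens. A sketch of such a handler, assuming the device publishes a small JSON payload with hypothetical device and status fields:

```python
import json

def lambda_handler(event, context):
    """Invoked by an AWS IoT rule only when a matching device message arrives.
    Nothing sits polling in between; the waiting costs nothing."""
    device = event.get("device", "unknown")  # field names depend on what the device publishes
    status = event.get("status", "unknown")
    print(json.dumps({"device": device, "status": status}))  # lands in CloudWatch Logs
    return {"ok": True}
```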

In Conversational IoT, machines will spend a lot of time waiting. They will be sitting idly by in our cars, in our homes, our workplaces. Listening, waiting. They will probably get lonely, and then they’ll talk to each other. One of the key insights of Thington is that machine chatter should be in natural language – we need to be able to understand what the machines are saying to each other. This is higher-order craziness. Robot drivers arbitraging human lives.

I, Robot indeed. We’re going to be needing those laws of robotics.

At ThingMonk 2015, Coates’ cofounder at Thington, Matt Biddulph, continued the theme with Welcome to the Conversation, pointing at what their company was planning to build – a conversational model for IoT in the home. He is back for ThingMonk next week, now that his company is out of stealth. But Matt is just one of many, many great speakers in a diverse lineup.

We will have coding workshops on Alexa Skills, IBM Watson and Splunk data visualisation. We will have coding by candlelight. Great food. Lovely people.

We have some tickets available. You should buy one.


Amazon, IBM, Microsoft and Salesforce.com are all clients.
