I have been taking a look at the various chatbot platforms, and something struck me about how far we’ve drifted from the original beauty of chat as a UI. The problem is the focus on artificial intelligence and machine learning, as if every bot had to pass a Turing Test. But they really don’t. Disambiguation is what humans are best at. Sure – it’s a nice science experiment to make a chat UI we can’t distinguish from a human call center agent. But frankly, in my experience, call center agents aren’t what we actually want. If they were, why are so many of us choosing banking and financial services platforms that avoid call center agents altogether? We don’t want call center agents; we want self-service banking, and we’re prepared to learn user interfaces and hacks that make our use of services more efficient.
Chat platforms are command line interfaces, but you don’t need to be a developer or IT pro to understand how to use them. When we look back at the history of chat-based UIs, Twitter plays a special role. Long before AI chatbots were a thing, folks were using commands in Twitter to kick off actions. Andy Stanford-Clark could control things in his home, or be notified when the cheese in his mousetraps got dry. He could also talk to the ferries between Southampton and the Isle of Wight. Chris Dalby built the chinposin platform, which had commands to kick off actions – for example, to store updated profile images. At Thingmonk 2016, one of the themes that both Matt Biddulph and Terry Eden talked to in detail was Twitter as a platform for conversational IoT.
Twitter was a wonderful playground for conversational command lines, until Twitter messed things up with too many API changes and restrictions. Few, if any, of these experimental services involved significant AI elements – rather, it was just about triggering commands from Twitter on listener services (in chinposin’s case, at least, driven by some proper hacky cronjobs). Today, Biddulph is building Thington, a “chatty concierge” service for smart homes, and there is clearly some ML in there, but also some command line thinking.
Long before Hubot, IRC was full of non-sentient, rule-driven bots. More recently, look at Slack, where it has become normal to issue commands that require a particular syntax, like:
/giphy #caption “Who’s the GIF master now?” happy dance
Or, more prosaically, to check the weather:
/giphy #weather <insert zip code here>
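Commands like these follow a small, rigid grammar – a slash-prefixed verb, optional #flags, quoted strings, then free-text arguments – which is exactly what makes them easy for software to handle. As a rough sketch of the idea (not Slack’s actual parsing, which happens server-side), Python’s `shlex` copes with the quoting for us:

```python
import shlex

def parse_slash_command(message: str) -> dict:
    """Split a slash-command message into verb, #flags, and arguments.

    Illustrative only -- a real chat platform parses commands on its own servers.
    """
    tokens = shlex.split(message)  # respects quoted strings like "happy dance"
    verb = tokens[0].lstrip("/")
    flags = [t.lstrip("#") for t in tokens[1:] if t.startswith("#")]
    args = [t for t in tokens[1:] if not t.startswith("#")]
    return {"verb": verb, "flags": flags, "args": args}

cmd = parse_slash_command('/giphy #caption "Who\'s the GIF master now?" happy dance')
print(cmd["verb"])   # giphy
print(cmd["flags"])  # ['caption']
print(cmd["args"])   # ["Who's the GIF master now?", 'happy', 'dance']
```

Twenty lines, no training data, no NLU pipeline – the user carries the grammar in their head.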
When I want to listen to music on my Google Home device I have to say: “OK Google, play Eric Sermon music on Spotify.”
Not exactly natural language. So let’s not forget humans and what we’re good at. Disambiguation is a core competence of ours. How hard is it, really, to issue a command when the syntax is clear and defined? We can learn a new syntax far more easily than a chatbot can deal with every possible linguistic utterance and edge case. So while more easily consumable AI, and programmatic models specifically designed to reduce ambiguity, are certainly interesting, we shouldn’t be afraid to make defined commands accessible via a chat interface – humans will pick them up. Customer service is not going to suffer if we ask people to use simple command lines. On the contrary – that’s the real beauty of chatbots.
I suggest you watch those great Thingmonk talks by @mattb and @edent for more on the subject.