For anyone of my generation, the concept of artificial intelligence is viscerally embodied in the jet-black camera lens, with its red-iris-yellow-lens eye, of the sentient computer HAL 9000 in Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey.
Our understanding of artificial intelligence is bound up in the way humans and computers communicate; the easiest way for us to intuitively understand computer intelligence is to imagine how we would talk to each other. In fact, this is precisely the basis for the famous Turing Test, which is essentially a test of natural language processing.
HAL 9000 brought this to life for an entire generation, who saw and heard, for the first time, a computer that could operate a spacecraft, hold natural language conversations, lip-read, interpret human emotions, play chess and, ultimately, fight for its own survival.
Arthur C. Clarke believed this level of conversational AI would be a reality within 20 years. It’s taken us rather longer than that, but the last decade has seen astonishing progress. We are now experiencing almost universal adoption of conversational AI, most notably in Alexa (which operates on more than 100 million devices) and Siri (on more than 500 million).
While we can “ask Siri” almost anything, and get an intelligent response, we are only just starting to see effective conversational interfaces in widespread use in the business world.
Surfing on the leading edge of this wave of innovation is the humble chatbot. These often relatively simple, AI-driven conversational interfaces are rapidly becoming a primary mechanism for interacting with customers and employees, at least as a first line of support.
Chatbots offer many advantages. They:
So, how do these chatbots measure up against HAL 9000?
In 2001: A Space Odyssey, when Dave Bowman removes HAL’s modules one by one, HAL’s cognitive ability progressively diminishes, becoming less and less sentient, until all it can do is sing the song “Daisy Bell”.
We can perhaps see, in some of these final stages of HAL’s regression, the level where the simplest chatbots might be found.
The most basic chatbots are really little more than flowcharts or logic trees, guiding the user through a series of predetermined conversations.
While this may not sound very exciting, these entry-level chatbots are extremely effective at handling customer or employee service requests: anything from booking a meeting room to registering for a new service, or from requesting an account balance to changing an address, can be quickly and efficiently handled by a relatively simple chatbot.
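As a rough illustration of this idea, an entry-level chatbot of this kind can be modelled as little more than a dictionary of nodes, each with a prompt and the options that lead to the next node. The node names and wording below are purely illustrative assumptions, not taken from any real product:

```python
# A minimal sketch of a flowchart-style chatbot: each node holds a prompt
# and a map from user replies to the next node. All names are illustrative.

TREE = {
    "start": {
        "prompt": "How can I help? (1) Book a meeting room (2) Check account balance",
        "options": {"1": "book_room", "2": "balance"},
    },
    "book_room": {
        "prompt": "Which day would you like the room for?",
        "options": {},  # leaf node: hand off to a booking action
    },
    "balance": {
        "prompt": "Your balance enquiry has been passed to the accounts service.",
        "options": {},  # leaf node
    },
}

def step(node: str, user_input: str) -> str:
    """Advance the conversation: return the next node for a given reply."""
    options = TREE[node]["options"]
    return options.get(user_input, node)  # unrecognised input repeats the node

# Walk one predetermined path through the tree.
node = step("start", "1")
print(TREE[node]["prompt"])
```

The whole conversation is predetermined by the tree, which is exactly why these bots are cheap to build and easy to test, yet still perfectly adequate for routine service requests.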
One of the first places chatbots like these have been deployed is in customer service call centres, where they support human service agents by providing them with more flexible and intelligent guidance, significantly improving productivity and customer satisfaction.
Another common deployment is on company websites: at many of the larger banks, for example, they provide a first line of support for customer enquiries. One such bank has seen a 2.5x improvement in productivity and a 50% reduction in the average time taken to resolve a customer enquiry.
The next level of maturity is using chatbots not just to handle customer / employee interactions, but to generate intelligence about these interactions which can be used to progressively optimise the conversations and outcomes for the customers / employees.
This can be done in several ways:
The next level of chatbot maturity is the ability to take into account information about the customer’s / employee’s situation and tailor the conversation accordingly. For example:
The general principle is for the conversation to be customised to the specific needs of the customer / employee, with additional resources and sources of data made available to the chatbot, perhaps via APIs, to enable this to happen.
Sometimes a customer or employee may simply want an answer to a question, but more often than not, what is really wanted is an action or an outcome.
If I tell a company’s chatbot I’ve recently moved home, and ask it if I can notify them online, then it’s fine for the chatbot to say “yes, you can notify us here…” and give me a link to the relevant page on their website. But it would be so much better for it to say “sure, let me do that for you, what’s your new address?” and “there, that’s all done for you, have a great day.”
This is a relatively simple example. Chatbot technology can use APIs to interface with other systems, either to get the information needed to answer a question or to trigger an action in response to a request.
The next step is to go beyond this and integrate chatbots with robotic process automation, or better still intelligent automation, to move from “here’s the answer to your question” to “would you like me to do this for you?” or “here’s what I’ve done for you”.
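The change-of-address example above can be sketched in a few lines. This is a hedged illustration only: `update_address` stands in for a hypothetical back-office API call (in practice it might be an HTTP request to a CRM system), and the customer IDs and messages are invented for the example:

```python
# Sketch of a chatbot intent handler that performs the action itself,
# rather than just linking to a web page. `update_address` is a stand-in
# for a real back-office API call; all names here are assumptions.

ADDRESS_BOOK = {"c-001": "1 Old Street"}  # toy stand-in for a CRM record

def update_address(customer_id: str, new_address: str) -> bool:
    """Placeholder for a real API call (e.g. an HTTP PUT to a CRM system)."""
    ADDRESS_BOOK[customer_id] = new_address
    return True

def handle_moved_home(customer_id: str, new_address: str) -> str:
    """Turn 'I've moved home' into an outcome, not just an answer."""
    if update_address(customer_id, new_address):
        return "There, that's all done for you, have a great day."
    return "Sorry, I couldn't update that, but here's a link to do it online."

print(handle_moved_home("c-001", "42 New Road"))
```

The design point is that the chatbot owns the outcome: the fallback reply (a link) is exactly what the less mature bot would have offered as its only answer.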
Another dimension of chatbot maturity is the enterprise itself. As companies develop and deploy more and more chatbots, they risk proliferating a confusing array of technologies, solutions and architectures, especially as there are a multitude of chatbot and AI tools available on each of the major cloud service platforms.
There are two dimensions to this problem: deciding where to deploy chatbots and deciding how to deploy them.
In selecting suitable use-cases for chatbot implementation, a good place to start is areas that generate relatively high volumes of enquiries. Good questions to ask are:
As ever, the key is to focus on the problems or opportunities that have the greatest potential impact for your organisation, your employees or your customers. Sometimes this is easier said than done; often all you have is some data and a few ideas about how to apply machine learning and conversational interfaces to deliver value.
The skills and experience needed to sift through this data and identify the most promising use-cases are in short supply, so companies often make use of outside expertise. Indeed, one of our member firms, expert in data analysis and machine learning, does exactly this, working with clients to identify the best places to start.
They focus on the human element as much as the technology, and create data assets, machine learning algorithms and conversational interfaces that are owned by the client, aiming to make the client self-sufficient as quickly as possible.
When it comes to actually building and deploying your chatbots, it’s important to get the foundations right. Ideally, you should establish a single chatbot architecture that ensures the basics (security, version control, testing, etc.) are leveraged across all your implementations (rather than being developed over and over again, separately for each deployment).
Fortunately, there are tools to help you do this: for example, our member firm (mentioned above), which is highly experienced in the deployment of AI, provides an enterprise-grade chatbot management platform that orchestrates and connects with multiple underlying chatbot solutions (which might, for example, sit on two or more different cloud services).
If you are using chatbots internally, consider creating a “bot-of-bots” to guide employees to the right chatbot for a given situation. Otherwise, just finding the right chatbot becomes the problem in itself (one major international bank has created more than 20 chatbots to assist employees in just one function).
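At its simplest, a “bot-of-bots” is just a front-door router that matches an employee’s request to the most relevant specialist chatbot. The sketch below uses naive keyword matching; the bot names and keyword sets are illustrative assumptions, and a production router would more likely use an intent classifier:

```python
# Sketch of a "bot-of-bots" router: pick the specialist chatbot whose
# keywords best match the request, falling back to a human agent.
# Bot names and keyword sets are illustrative assumptions.

SPECIALIST_BOTS = {
    "hr_bot": {"holiday", "payslip", "leave"},
    "it_bot": {"password", "laptop", "vpn"},
    "facilities_bot": {"meeting", "room", "desk"},
}

def route(utterance: str) -> str:
    """Return the name of the bot with the most keyword overlap."""
    words = set(utterance.lower().split())
    best_bot, best_score = "human_agent", 0  # no match: hand off to a person
    for bot, keywords in SPECIALIST_BOTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_bot, best_score = bot, score
    return best_bot

print(route("I forgot my vpn password"))
```

Even a crude router like this addresses the discovery problem: the employee talks to one entry point, and finding the right bot among twenty becomes the system’s job rather than theirs.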
Some additional lessons learned from real-life chatbot deployments, both internal and external, include:
I think there’s no doubt conversational interfaces are going to play an increasing role in facilitating the relationship between businesses and their customers / employees.
It seems likely these conversational interfaces will make increasing use of voice interaction (rather than simply text) and perhaps also make use of textual analysis or facial recognition, for example, to detect what mood we’re in when we talk to them.
Perhaps chatbots will become more pro-active and initiate communication with us, maybe via messaging; they might even start proactively offering assistance or telling us about things they’ve done for us without being asked. They might try to sell us things.
Whatever the answers to these questions, it’s clear chatbots are here to stay and they’re going to do a lot more than just answer our questions.
I’ll leave HAL with the final word on the subject:
Andrew Simmonds is Consulting Director at Clustre – The Innovation Brokers www.clustre.net
If you would like to learn more about developing and deploying chatbots, or about enterprise chatbot management tools and techniques, please contact us at firstname.lastname@example.org