Artificial Intelligence (in the sense of full replication of human intelligence) is a pipe dream, in my honest opinion. Why do I say this?
We still don’t fully understand how our amazing brains (which are biological in nature rather than technological) work, so what hope is there for humans to create a non-biological system that emulates the brain? Not in my lifetime (said by someone who delivered an AI app in the mid-1990s and has been looking for an excuse to roll it out ever since)!
The notion of AI taking over the world and making humans redundant is also starting to scare a lot of people, and may therefore be causing a slight dragging of heels over the adoption of AI. This is a shame because AI, even in its present, under-developed form, substantially enriches any app and takes us to an even higher level of automation than previously possible (see my earlier article on this subject: https://www.clustre.net/forget-intel-inside-ai-inside-alexa-outside/ ).
Historically, we have computerised the handling of routine and less routine processes by describing the exact steps to be taken in respect of any event or transaction (the so-called ‘if-then-else’ approach). As we took on more and more complicated processes, this approach led to massively complex programs that are nigh-on impossible to get right first time, nigh-on impossible to test for 100% accuracy and, indeed, cannot handle every scenario they are hit with (some of which will not have been anticipated). Such is our dependence on these complex yet vital programs that failures can now cost even the CEO their job, as was the case with TSB just recently. But there is a way around this, and the answer, ironically, lies in emulating the way that humans work and/or augmenting their efforts.
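To make the brittleness of the ‘if-then-else’ approach concrete, here is a minimal sketch (the rules and fields are entirely invented for illustration): every scenario must be enumerated in advance, and anything the programmers did not anticipate falls through to a failure.

```python
# Hypothetical sketch of the classic 'if-then-else' approach: every case
# must be enumerated up front; unanticipated scenarios fall through.

def handle_transaction(txn: dict) -> str:
    if txn["type"] == "payment" and txn["amount"] <= 1000:
        return "approve"
    elif txn["type"] == "payment" and txn["amount"] > 1000:
        return "refer for review"
    elif txn["type"] == "refund":
        return "approve"
    else:
        # Any scenario nobody thought of ends up here.
        return "fail"

print(handle_transaction({"type": "payment", "amount": 500}))    # approve
print(handle_transaction({"type": "chargeback", "amount": 50}))  # fail
```

Multiply this by thousands of transaction types and edge cases and the testing problem described above becomes clear.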
The IA journey starts and ends with human experience.
Let’s consider how any one of us would process a request for help:
- If the request has been clearly expressed and we’ve handled a request like this before, we pretty much know what to do.
- If a request is unclear, we ask for clarification.
- If the request is new to us, or has some unusual aspects to it, we either:
  - Look up a rule book for guidance, or
  - Ask for help from someone more knowledgeable or experienced than us, or
  - Improvise by adapting the solution to a similar problem, or
  - Pass it on to someone else to handle.
If this is what we humans do, why can’t we have a computer emulate this process, or better still, do as much work as it can before passing it on for follow up by a human?
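The escalation ladder above can be sketched in a few lines of code. This is purely illustrative: the names (`known_requests`, `rule_book`, `similar_solutions`) are my own invented stand-ins, not any IA product’s API.

```python
# A sketch of the human escalation ladder: handle what we know,
# clarify what is unclear, and escalate or improvise for the rest.
# All parameter names are illustrative assumptions.

def process_request(request, known_requests, rule_book, similar_solutions):
    if not request["clearly_expressed"]:
        return "ask for clarification"
    kind = request["kind"]
    if kind in known_requests:
        return "handle it directly"
    # New or unusual request: guidance, improvisation, or hand-off.
    if kind in rule_book:
        return "follow the rule book"
    if kind in similar_solutions:
        return "improvise from a similar solution"
    return "pass to a more experienced colleague"
```

The point of the sketch is the final line: a system built this way always has a graceful hand-off to a human, rather than an unhandled failure.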
Well, now you can, using tools that provide IA (Intelligent Automation). These tools allow you to capture the knowledge of experts and deploy it to augment the capability of the entire workforce, replace some of the more labour-intensive or mundane work done by the workforce, or give customers direct access to that knowledge so they can self-serve. These tools work by recording the knowledge of the experts and then, using cognitive AI, intuiting how to process a request.
Don’t be negative about AI, be excited about the proven potential of IA.
The first step is to sit down with some human experts and ask them to describe what they do, and capture this information in a heuristic rather than a programmatic way. What do I mean by this? Well, rather than ask the human experts to explain, in precise detail, how they respond to a particular event or transaction, ask them to describe:
- What types of queries they get and what decisions they need to take in respect of each.
- What data needs to be obtained, processed or recorded.
- What rules need to be followed and where these are documented.
- What types of judgement they need to exercise.
Once captured and recorded in a graphical manner, this knowledge can be deployed in a number of ways:
- In support of an existing app, like a fraud detection engine, or
- As a stand-alone query tool for use by, say, call centre staff, or
- As part of a bot provided for the benefit of customers, etc.
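One way to picture the difference from the ‘if-then-else’ approach is that the captured knowledge lives as data rather than as code, so the same knowledge base can sit behind a fraud engine, a call-centre query tool, or a customer-facing bot. The rule table below is a hypothetical sketch, not the format of any real IA product.

```python
# Illustrative sketch: expert knowledge captured declaratively as a rule
# table, separate from the code that deploys it. Query types, data fields
# and decisions are invented for illustration.

EXPERT_RULES = [
    {"query_type": "card-lost", "data_needed": ["card_number"], "decision": "block the card"},
    {"query_type": "balance",   "data_needed": ["account_id"],  "decision": "read out the balance"},
]

def decide(query_type: str) -> str:
    for rule in EXPERT_RULES:
        if rule["query_type"] == query_type:
            return rule["decision"]
    # Unrecognised queries go to a person, not to an error.
    return "escalate to a human"
```

Because the knowledge is data, updating what the experts know means editing the table, not rewriting and retesting the program.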
We have seen some considerable success with IA tools. In one instance, a major credit card company reduced the cost of fraud detection by 60%.
What’s interesting about this particular example is that the card company in question was using the industry-standard fraud detection system, Falcon. This tool could not accurately determine whether a particular transaction was actually fraudulent, so it would spit out suspicious-looking transactions for human follow-up. Given the volume of suspicious transactions, a very large workforce was deployed to follow them up. By capturing and deploying the knowledge of the experts amongst this workforce, the IA tool in question was able to investigate, on their behalf, transactions highlighted by Falcon and process a large percentage of these automatically, leading to the 60% cost reduction quoted above. The IA tool still passes transactions on to the workforce for further investigation, but far fewer in number.
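The triage pattern behind this story can be sketched as follows. Falcon’s actual interface is proprietary and the checks here are invented; this shows only the shape of the idea: an IA layer applies captured expert checks to everything the upstream system flags, clears what it can, and passes only the remainder to humans.

```python
# Hypothetical sketch of IA triage downstream of a fraud-flagging system.
# The fields and the expert check are invented for illustration only.

def expert_check(txn: dict) -> bool:
    """True if captured expert knowledge says the flag is a false alarm."""
    familiar_merchant = txn["merchant"] in txn["customer_history"]
    normal_amount = txn["amount"] < txn["typical_spend"] * 2
    return familiar_merchant and normal_amount

def triage(flagged_transactions: list) -> tuple:
    cleared, for_humans = [], []
    for txn in flagged_transactions:
        (cleared if expert_check(txn) else for_humans).append(txn)
    return cleared, for_humans
```

Every transaction the IA layer clears is one a human no longer has to touch, which is where the cost reduction comes from.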
The lesson from this example is not to worry so much about the theoretical negative impact of AI but instead to be excited by the real and proven potential of IA.
Indeed, I hope this short article will encourage you to marvel (or worry) less about AI and instead move to embrace IA now.
Robert Baldock is the MD of Clustre – The Innovation Brokers (and an early AI practitioner)