With bots increasingly common in corporate environments – for example, 50% of service providers intend to deploy them into their call centre environments in the next two years, according to recent Omnisperience research – the question of how they are being used is rising up the agenda for both service providers and their customers (see The benefits and perils of deploying AI).
California has decided that transparency is key when it comes to bots, enacting a law (30 September 2018) that requires companies to make it clear to customers whether they are speaking to a human or a bot. The law comes into force on 1 July 2019.
The problem with this law, of course, is that no-one is absolutely clear what constitutes a bot (it’s just a form of automated program, but how do we define it in a legal sense?) or how the law will work in practice. Let’s take a look at what this means. If you get a human assistant to send out emails for you or manage your Twitter account, that’s fine. But if you use technology to do the same thing, you’re in a grey area, because the line between a simple automated program and an AI-driven bot is not clearly drawn. What does seem clear is that if a bot interacts with a customer (is customer-facing), it will have to identify itself as non-human – at least in California.
Whether this approach is enforceable, helpful or practical remains to be seen. It does point to a wider issue, though: how bots are used and how much information companies need to divulge. If customers know they are talking to a chatbot or an AI, they adjust their conversation accordingly. But if they assume they are talking to a human and later discover it was a bot, they naturally feel deceived. Companies need to think about how they manage such situations to ensure that bots enhance the customer experience rather than undermine it.
Bot traffic is certainly big business for service providers, since more than half of website traffic is now generated by bots rather than humans (source: Bot Traffic Report 2018). Helping enterprise customers manage the risk from bad bots, while gaining the advantages provided by good bots, is a new opportunity. Service providers need to learn botiquette not only to benefit their own businesses and prevent botageddon, but also because it is a saleable skill to their customers.
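To make the good-bot/bad-bot distinction a little more concrete, here is a minimal sketch of how traffic might be triaged. The user-agent patterns, log fields and rate threshold are all illustrative assumptions, not a production bot-management policy – real systems use far richer signals (reverse-DNS verification, behavioural analysis and so on), since user agents can be spoofed.

```python
# Minimal sketch of good-bot / bad-bot triage for web traffic.
# The allow-list, suspicious hints and rate threshold below are
# hypothetical examples, not a real bot-management policy.

KNOWN_GOOD_BOTS = ("Googlebot", "Bingbot")      # welcome crawlers (hypothetical allow-list)
SUSPICIOUS_HINTS = ("python-requests", "curl")  # crude scripted-client hints (hypothetical)
RATE_LIMIT_PER_MIN = 120                        # arbitrary example threshold

def classify_request(user_agent: str, requests_last_minute: int) -> str:
    """Return 'good-bot', 'bad-bot' or 'human/unknown' for one request."""
    if any(bot in user_agent for bot in KNOWN_GOOD_BOTS):
        return "good-bot"   # e.g. search crawlers that drive discoverability
    if any(hint in user_agent for hint in SUSPICIOUS_HINTS):
        return "bad-bot"    # scripted clients scraping or probing the site
    if requests_last_minute > RATE_LIMIT_PER_MIN:
        return "bad-bot"    # too fast to plausibly be a person browsing
    return "human/unknown"

# Example: a scripted scraper hammering the site gets flagged.
print(classify_request("python-requests/2.31", 300))          # -> bad-bot
print(classify_request("Mozilla/5.0 (Windows NT 10.0)", 12))  # -> human/unknown
```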
Omnisperience research shows service providers aggressively adopting AI and chatbots in call centres – often motivated by the desire to reduce the cost of service, but equally to provide faster service. However, this will only succeed if “good” automation is delivered – that is, if the bot is able to answer a query efficiently and satisfy the customer’s enquiry. This in turn requires a virtuous feedback loop, in which enquiries are analysed, common problems are identified and addressed, and solutions and advice are fed back to bots and CSRs alike. If AI is deployed without this feedback loop, the root causes of problems will not be addressed and the volume of calls will simply rise. If the cost of resolution is minimised but the volume of calls is not, deploying AI could prove a false economy.
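As a rough illustration of that feedback loop, the sketch below tallies bot-handled enquiries by topic and flags topics the bot repeatedly fails to resolve, so they can be sent for root-cause analysis and the fix fed back to the bot and to CSRs. The enquiry records and the review threshold are hypothetical placeholders for whatever a real contact-centre platform would provide.

```python
from collections import Counter

# Hypothetical enquiry records: (topic, resolved_by_bot) pairs.
enquiries = [
    ("billing", True), ("billing", False), ("billing", False),
    ("roaming", True), ("password-reset", True), ("billing", False),
]

# Count the topics the bot failed to resolve.
failures = Counter(topic for topic, resolved in enquiries if not resolved)

# Flag topics the bot keeps failing on: these are root-cause candidates.
# Once the underlying problem is fixed, the answer is fed back to both
# the bot and human CSRs, closing the loop and shrinking call volumes.
REVIEW_THRESHOLD = 3  # arbitrary example value
for topic, count in failures.items():
    if count >= REVIEW_THRESHOLD:
        print(f"Escalate '{topic}' for root-cause analysis ({count} failures)")
```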
Triaging enquiries will become increasingly important. AIs can only deal with previously encountered problems or simple enquiries; they cannot handle complex enquiries or irate customers who need an emotional response. The message here is that humans will remain an important part of the mix. Deploying AI isn’t about removing human agents but about using them more intelligently. Bots are great at handling routine enquiries, which frees up humans to focus where they add the most value. As is often the case, the 80:20 rule applies, with roughly 80% of enquiries being routine and related to predictable topics. By letting bots mop up most of these, human agents can spend more time on the 20% of complex, novel or unusual enquiries and provide more proactive customer care rather than just firefighting. The message, as usual, is to let the technology do the heavy lifting.
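A triage policy along these lines might look like the sketch below: known, routine intents go to the bot, while novel or emotionally charged enquiries go straight to a human agent. The intent list and the naive anger check are placeholder assumptions standing in for real intent-recognition and sentiment-analysis components.

```python
# Sketch of the 80:20 triage described above. ROUTINE_INTENTS and the
# crude anger check are placeholders for real intent/sentiment models.

ROUTINE_INTENTS = {"balance", "data-usage", "password-reset", "opening-hours"}
ANGRY_WORDS = {"furious", "ridiculous", "complaint", "cancel"}

def triage(intent: str, message: str) -> str:
    """Route an enquiry to the bot or to a human agent."""
    if any(word in message.lower() for word in ANGRY_WORDS):
        return "human"  # irate customers need an emotional response
    if intent in ROUTINE_INTENTS:
        return "bot"    # previously encountered, routine enquiry
    return "human"      # complex or novel: humans add the most value here

print(triage("data-usage", "How much data do I have left?"))  # -> bot
print(triage("billing-dispute", "This bill is ridiculous!"))  # -> human
```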