Responsibility and AI Ethics in the Contact Centre

Dawn Herbert at Odigo explains how the use of AI to optimise contact centre operations has surged in recent years, as businesses look to harness the power of automation and analytics to provide better customer and agent experiences.

As with all freedom, the price is responsibility, and the same applies to the use of AI. AI ethics need to be considered so that contact centre managers can use AI responsibly to revolutionise customer service.

AI facilitates the collection and processing of massive amounts of customer data, to help organisations predict consumer behaviour and tailor individual experiences.

However, the use of that data, and the interactions between AI systems, customers and contact centre agents, raise concerns over AI ethics. With the beneficial rise of AI in the contact centre set to continue, UK organisations need to take steps to build AI strategies that are both practical and ethical.

Recent years have seen a push from customers for both digital and personalised experiences. Consequently, to provide cost-effective customer service that keeps pace with customer expectations and the competition, artificial intelligence (AI) systems are a must.

However, deploying AI in the contact centre requires consideration of new practical and ethical issues within the customer service sphere.

Ethics is a human thought process, which applies concepts of ‘right’ and ‘wrong’ behaviour to activities and actions. Therefore, as AI-enhanced systems, like automation and intelligent routing, are just tools, the responsibility for AI ethics rests with the people assigning their function and deployment.

By asking the big questions to define ‘how’, ‘why’ and ‘where’ artificial intelligence will be deployed, it is possible to create a framework to minimise risk and maximise effectiveness. If companies consider the applications and output of AI systems, like ‘advice and guidance’ or ‘decision making,’ they can also control the associated risk.

Understanding AI Ethics

In order to visualise where AI ethics must be applied, it is helpful to consider what an AI system is. To do this, one should start by looking at automated tools.

These tools usually work well for repetitive tasks and are essentially ‘decision trees’, boiling down to a simple set of rules: if ‘x’ happens, then the next step is ‘y’.
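
To make that concrete, here is a minimal sketch in Python of what such a rule-based ‘decision tree’ might look like for contact routing; the keywords and queue names are invented purely for illustration.

```python
# An illustrative rule set: if 'x' appears in the message,
# then the next step is queue 'y'. Keywords and queue names are hypothetical.

def route_contact(message: str) -> str:
    """Return a queue name based on simple keyword rules."""
    text = message.lower()
    if "refund" in text or "billing" in text:
        return "billing_queue"
    if "password" in text or "login" in text:
        return "technical_support_queue"
    if "cancel" in text:
        return "retention_queue"
    return "general_queue"  # default when no rule matches

print(route_contact("I can't log in to my account"))  # technical_support_queue
```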

Artificial intelligence comes into play when these decisions are made more accurate by applying models trained with machine learning (ML) methods. Furthermore, the more data these systems work with, the more examples they can use to tune these decisions.

In this way, performance and accuracy constantly increase through learning and adjustment. These decision trees can become increasingly complex, processing masses of information almost instantaneously and producing a potentially endless stream of ‘if-then’ decisions that benefit agents and customers with speed and accuracy.
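
As a hedged illustration of that shift from fixed rules to learned decisions, the sketch below (assuming scikit-learn is available; the training data and queue labels are invented) trains a small text classifier to route contacts and exposes a confidence score that supervisors can monitor.

```python
# A minimal sketch of replacing hand-written rules with a learned model:
# the more labelled examples it sees, the better its routing becomes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical contacts labelled with the queue that resolved them (illustrative).
messages = [
    "I was charged twice this month",
    "My invoice looks wrong",
    "I cannot log in to the app",
    "The website keeps rejecting my password",
    "I want to cancel my subscription",
    "Please close my account",
]
queues = [
    "billing", "billing",
    "technical_support", "technical_support",
    "retention", "retention",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, queues)

# New contact: the model predicts a queue and a probability for supervision.
new_message = ["I was charged twice on my invoice"]
print(model.predict(new_message))        # likely ['billing'] given the overlap
print(model.predict_proba(new_message))  # class probabilities for monitoring
```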

However, in some other applications, the ‘black box phenomenon’ comes into play. In this situation, the reason an AI makes a certain decision becomes obscured by an incomprehensible level of complexity.

Input and output are visible, but the internal workings of the AI decision, ‘the black box’, are unknown. All systems have innate limitations, even newer ‘explainable AI’, which aims to eliminate black box situations.
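
Even so, a black box can still be probed from the outside. One common technique, sketched below using scikit-learn’s permutation importance on invented data and feature names, measures how much a model’s accuracy drops when each input is shuffled, which approximates which inputs actually drive its decisions.

```python
# A hedged sketch of peering into a 'black box': train an opaque model,
# then estimate which inputs matter most. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["wait_time_sec", "previous_contacts", "account_age_days"]
X = rng.random((200, 3))
# Illustrative target: churn risk driven mostly by wait time.
y = (X[:, 0] > 0.6).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```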

However, due to the sheer complexity of these decision trees, and the imperfect data being fed into the imperfect systems in which AI tools are deployed, it is possible to introduce bias inadvertently. In some cases, this has accidentally perpetuated social inequalities relating to race and gender.

For example, social media algorithms have shown science career ads disproportionately to male audiences, to the point that advertising jobs this way could breach the UK Equality Act, and facial recognition software tends to be less accurate at recognising darker skin tones.

Within the contact centre, however, where personalisation is a driving factor and decisions are made based on a customer’s individual input, the impact of such bias is minimised. Of course, no system is ever perfect, which is why ongoing training through ML, and supervision of the mission and success of AI through voice-of-the-customer programmes, analytics and KPI tracking, are essential.
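
That supervision can be as simple as routinely computing a handful of KPIs over interaction logs. The sketch below uses hypothetical records and field names purely for illustration.

```python
# A minimal sketch of ongoing supervision: track a few KPIs over bot
# interactions so drift or harm is spotted early. Fields are hypothetical.
interactions = [
    {"handled_by_bot": True,  "escalated": False, "csat": 5},
    {"handled_by_bot": True,  "escalated": True,  "csat": 2},
    {"handled_by_bot": False, "escalated": False, "csat": 4},
    {"handled_by_bot": True,  "escalated": False, "csat": 4},
]

total = len(interactions)
containment = sum(i["handled_by_bot"] and not i["escalated"] for i in interactions) / total
escalation_rate = sum(i["escalated"] for i in interactions) / total
avg_csat = sum(i["csat"] for i in interactions) / total

print(f"Containment rate: {containment:.0%}")
print(f"Escalation rate:  {escalation_rate:.0%}")
print(f"Average CSAT:     {avg_csat:.1f}")
```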

AI Ethics to Consider During Deployment

A simple consideration that can impact the trust customers develop in AI systems is transparency of use. Icons, names and the voice of AI-driven bots should clearly signal to the user, or customer, when they are interacting with an AI and not an agent.
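
In practice this can be as simple as a disclosure-first greeting and an obviously non-human name and icon. The sketch below is illustrative only; the profile fields and wording are assumptions, not any specific vendor’s configuration.

```python
# A hedged sketch of transparency in practice: the bot identifies itself
# as automated before the conversation starts. Names and wording are invented.
BOT_PROFILE = {
    "display_name": "Virtual Assistant",  # clearly not a human name
    "avatar": "robot_icon.png",           # icon signals automation
    "disclosure": (
        "Hi, I'm an automated assistant. I can help with common questions, "
        "or connect you to a human agent at any time."
    ),
}

def open_conversation(profile: dict) -> str:
    """Always lead with the disclosure message."""
    return f"{profile['display_name']}: {profile['disclosure']}"

print(open_conversation(BOT_PROFILE))
```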

The quality of any artificial intelligence system also depends on the quality of the data it runs on. As the saying goes: rubbish in, rubbish out. Errors and poor results are most often traceable to the information fed into the system.

So, it’s crucial to maintain accurate, up-to-date, non-siloed data. Without it, the outcomes of these AI processes will be unsatisfactory, or in a minority of cases – for example, those involving vulnerable customers – even harmful.
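
A few automated hygiene checks can catch much of this before data ever reaches an AI system. The sketch below uses assumed field names and thresholds purely as an example.

```python
# A minimal sketch of data-quality checks guarding against "rubbish in,
# rubbish out". Field names and thresholds are assumptions for illustration.
from datetime import datetime, timedelta

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one customer record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if not record.get("consent_given", False):
        issues.append("no recorded consent for data use")
    updated = record.get("last_updated")
    if updated is None or datetime.now() - updated > timedelta(days=365):
        issues.append("record stale (older than 12 months)")
    return issues

record = {"customer_id": "C-102", "consent_given": True,
          "last_updated": datetime(2020, 1, 15)}
print(check_record(record))  # ['record stale (older than 12 months)']
```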

AI systems are intended to assist agents, whose empathy and ability to think laterally around complex and unique customer situations will always be needed. Final decisions, and interactions with serious consequences or emotionally charged circumstances, should always be escalated or routed directly (using AI systems) to agents.
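
One hedged example of such a safeguard is an escalation rule that hands the conversation to an agent whenever sentiment or topic suggests a sensitive situation; the keywords and threshold below are illustrative, not a production policy.

```python
# A sketch of escalation logic: AI routes, but sensitive or emotionally
# charged contacts always reach a human agent. Values are illustrative.
DISTRESS_KEYWORDS = {"bereavement", "complaint", "vulnerable", "urgent", "debt"}

def should_escalate(message: str, sentiment_score: float) -> bool:
    """Escalate when negative sentiment or sensitive topics are detected."""
    text = message.lower()
    if sentiment_score < -0.5:  # strongly negative sentiment
        return True
    return any(word in text for word in DISTRESS_KEYWORDS)

print(should_escalate("I need help after a bereavement", sentiment_score=0.1))  # True
print(should_escalate("What time do you open?", sentiment_score=0.2))           # False
```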

This leads us to the next point: the role of AI systems should be to augment and assist agents, not replace them. There had been fears that almost 30% of UK jobs were at risk of being replaced, or made redundant, by the early 2030s as a consequence of automation.

However, more balanced appraisals now suggest that the real result is unlikely to be as black and white, with some jobs being more suitable for automation than others.

This must also be balanced against the ‘Great Resignation’ currently underway, which implies that there are simply some jobs people don’t want, or that don’t currently offer enough job satisfaction. Therefore, where AI is deployed to take on the tasks people don’t want, or to assist them towards greater job satisfaction, this can be considered an ethical way to proceed.

AI Ethics Is an Equation CCaaS Solutions Can Balance

Contact Centre as a Service (CCaaS) solutions offer contact centres not only the AI-powered tools to cater to customers’ needs economically and efficiently, but also the keys to supervision.

Omnichannel bots, intelligent routing and agent assist functions all utilise AI to free up agent time and empower them to use their skills to maximum effect. Analytics and speech analysis facilitate the accumulation of an increasing amount of data to fuel AI accuracy and data-driven policy decisions.

In this way, AI can be seamlessly integrated and expertly monitored, to give business leaders the power of AI, with the assurance of an ethical model.

This blog post has been re-published by kind permission of Odigo – View the Original Article

For more information about Odigo - visit the Odigo Website

About Odigo

Odigo helps large organisations connect with individuals through world-class, cloud-based contact centre solutions. Our cutting-edge, proprietary technologies enable a seamless, efficient, omnichannel experience for your customers and a satisfying, engaging experience for your service agents.

Find out more about Odigo

Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.

Author: Odigo

Published On: 11th Mar 2022 - Last modified: 15th Mar 2022