AI Ethics: Three Pillars to Staying Accountable


Olivier Jouve of Genesys discusses the ethical considerations you may need to address before implementing Artificial Intelligence (AI).

We often talk about the seemingly boundless power of AI when it comes to helping organisations solve difficult problems, realise new efficiencies and achieve very specific business results.

For organisations across nearly every industry, it’s quickly gaining traction as the proverbial magic bullet for doing more with less, and even better than before.

In customer experience specifically, we also talk about how AI can actually fix what technology “broke” – by facilitating the human-to-human connections that simply got lost in many digitalisation efforts.

All of this is great – until it isn’t. Anyone involved in developing and using AI should also be talking about the potential for its extreme side effects – which unfortunately aren’t just fiction.

Already there are real examples – whole neighbourhoods unfairly targeted by police due to their demographic composition; candidates struggling to find jobs because their profiles don’t match a company’s idealised, predetermined “right” fit; and minority groups facing barriers to certain milestones – like being accepted to a university or accessing an affordable loan.

This is all a result of an inherent problem rooted deep within all AI algorithms – biased models. As a society, we have to come to terms with the fact that every model has some degree of bias.

How does this happen? Well, it’s easy. If the majority of your customers are males between 50 and 65 years old, the data that feeds your algorithm is already (albeit unintentionally) skewed towards one demographic.

Unless you’ve done some work to prevent it, your algorithm will likely base its decisions on that group’s past preferences and behaviour. To put it simply, if you don’t use the right data – or all of the data – you introduce a bias.
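
To make this concrete, here is a minimal sketch (entirely synthetic data and a scikit-learn classifier – none of it from the article) of how a training set dominated by one group produces a model that simply reproduces the historical disparity:

```python
# Minimal sketch: synthetic, illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# 90% of historical customers come from group A (e.g. males aged 50-65),
# and group A was historically approved more often.
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])            # 0 = A, 1 = B
income = rng.normal(50, 10, size=n)
approved = (income + rng.normal(0, 5, n) - 8 * group) > 45  # labels encode past bias

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The trained model faithfully reproduces the historical gap:
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {'AB'[g]}: predicted approval rate {rate:.2f}")
```

The model is not “wrong” in any technical sense – it has simply learned the skew that was already in the data.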

Given that some of the highest-performing deep learning models are complete black boxes today, how does an organisation respond? It starts with accountability. From a company’s perspective, there are three pillars to an accountable AI practice:

1. Be Responsible for the Outcomes

The key here is making sure your AI produces the intended results and taking ownership. You don’t want to be exposed when a score produces an unethical or unfair result. Blaming the algorithm or the data is an indefensible position.

AI development is a complex and dynamic process, which is why you need strong guardrails in place to keep up with – and stay in control of – your models as they continuously self-learn and evolve.
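
What might such a guardrail look like? One common pattern – a sketch only, where the parity metric, function names and 5% tolerance are illustrative assumptions rather than any established standard – is to compare outcome rates across groups before an updated model goes live:

```python
# Illustrative guardrail: block a model rollout if the gap in
# positive-outcome rates between demographic groups is too large.
from typing import Sequence

def parity_gap(predictions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    totals: dict[str, list[int]] = {}
    for pred, grp in zip(predictions, groups):
        pos, count = totals.get(grp, [0, 0])
        totals[grp] = [pos + pred, count + 1]
    rates = [pos / count for pos, count in totals.values()]
    return max(rates) - min(rates)

def approve_rollout(predictions, groups, tolerance=0.05) -> bool:
    gap = parity_gap(predictions, groups)
    if gap > tolerance:
        print(f"Rollout blocked: parity gap {gap:.2f} exceeds {tolerance:.2f}")
        return False
    return True

# Example: an updated model approves group "a" far more often than "b".
approve_rollout([1, 1, 1, 0, 1, 0, 0, 0], ["a", "a", "a", "a", "b", "b", "b", "b"])
```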

2. Have the Ability to Explain the Algorithm

When you do not understand how your models make decisions, you allow the algorithm to dictate the reality of many (usually marginalised) people.

While it can be exceedingly challenging to determine how a mathematical model reaches a certain conclusion, businesses have to find a way to explain the “why” behind the decisions reached with AI.

That is particularly important when those conclusions impact the well-being of an individual and could have legal implications.
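
One common first step – a sketch of one approach, not the only one – is to measure how much each input feature actually drives the model’s decisions, for example with scikit-learn’s permutation importance (the model and data below are placeholders):

```python
# Sketch: approximate the "why" of a black-box model by measuring how much
# its accuracy drops when each input feature is randomly shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Feature importance is a crude proxy for a true explanation, but even a crude proxy makes it much harder for an indefensible decision to hide inside the model.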

3. Show Transparency on the Data Sets That Are Used

First, we should be transparent about whether an online decision is made by an algorithm or by a human being. Both will introduce some bias – though the machine-generated bias will likely be more consistent.

Either way, transparency is the most reliable way to mitigate bias, especially for models that are difficult to explain. The key is to give users a view into the data that led to the decision in the first place.

To take that concept a step further, consider giving users a voice so they can provide continuous feedback that enables regular adjustments to the model as needed.
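
As a sketch of what that could look like in practice (all names and structures below are hypothetical), each automated decision can be stored alongside the exact inputs that produced it, with a channel for the affected user to respond:

```python
# Hypothetical audit trail: every automated decision is stored with the
# inputs that produced it, and users can attach feedback for later review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    inputs: dict            # the data the model actually saw
    outcome: str            # e.g. "approved" / "declined"
    model_version: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    feedback: list[str] = field(default_factory=list)

AUDIT_LOG: dict[str, DecisionRecord] = {}

def record_decision(decision_id: str, inputs: dict, outcome: str,
                    model_version: str) -> DecisionRecord:
    record = DecisionRecord(decision_id, inputs, outcome, model_version)
    AUDIT_LOG[decision_id] = record
    return record

def add_feedback(decision_id: str, comment: str) -> None:
    """Let the person affected by the decision contest or annotate it."""
    AUDIT_LOG[decision_id].feedback.append(comment)

record_decision("d-001", {"income": 42_000, "tenure_years": 3}, "declined", "v1.4")
add_feedback("d-001", "My income figure is out of date - please re-check.")
```

Feedback collected this way can then feed the regular model adjustments described above.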

In the book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”, author Cathy O’Neil was one of the first to shine a light on how automation and AI can destroy people’s lives and perpetuate inequality.

She wrote: “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead.”


This is a charge we should all take seriously. It’s not easy, and the paradox is that we are beginning to use AI to explain AI because we can’t always do it ourselves. The point is – you can’t just deploy your AI and leave it. You have to be responsible at the time of deployment and accountable throughout its entire lifecycle.

There is a constant need to make sure you’re truly attuned to what is happening, why it’s happening, and the impact its outcomes have on people – while also making sure you aren’t inadvertently introducing new biases.

It’s a process of continuous improvement – and as they say – we don’t know what we don’t know…yet.

Author: Robyn Coppell

Published On: 5th Apr 2019 - Last modified: 18th Jun 2024
Read more about - Guest Blogs


Recommended Articles

The Ethics of Artificial Intelligence in CX
Responsibility and AI Ethics in the Contact Centre
FAQs - Are You Staying on the Right Side of the Law?
Bias, Accuracy and Benchmarking for Conversational AI