Ben Rigby at Talkdesk asks: what’s the problem with ChatGPT in the contact centre?
Wading through the ethical concerns.
I’ve been writing feverishly lately about the opportunity ChatGPT (and other generative AI systems) presents for the contact centre.
We believe that the generative pre-trained transformer (GPT) large language models (LLMs) powering systems like ChatGPT are going to be the backbone of the next generation of contact centres as a service (CCaaS).
We’ve launched our first feature that harnesses these capabilities to automatically summarize customer conversations and accurately select their dispositions (e.g., requests follow-up, wants to cancel service, etc.)—effectively eliminating most of a customer service agent’s after-call work.
And we are actively integrating these LLMs into other core contact centre use cases, as well as shaping our roadmap to accelerate the value our customers can gain from these natural language processing techniques.
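The summarization-and-disposition flow described above can be sketched in a few lines. This is a minimal illustration, not Talkdesk’s implementation: the disposition labels, prompt wording, and function names below are all hypothetical, and the prompt would be sent to whichever LLM the system uses.

```python
# Sketch of after-call work automation: ask an LLM for a summary plus one
# disposition, then validate the disposition against an approved list.
# Labels and prompt wording are illustrative assumptions.
ALLOWED_DISPOSITIONS = [
    "requests follow-up",
    "wants to cancel service",
    "issue resolved",
]

def build_summary_prompt(transcript: str) -> str:
    """Construct a prompt that constrains the model to a fixed disposition list."""
    options = "\n".join(f"- {d}" for d in ALLOWED_DISPOSITIONS)
    return (
        "Summarize the customer conversation below in two sentences, "
        "then pick exactly one disposition from this list:\n"
        f"{options}\n\nTranscript:\n{transcript}"
    )

def validate_disposition(model_output: str) -> str:
    """Accept only a disposition from the approved list, never free text."""
    lowered = model_output.lower()
    for d in ALLOWED_DISPOSITIONS:
        if d in lowered:
            return d
    return "needs human review"  # fall back rather than trusting the model
```

Constraining the model’s choice to a closed list, and falling back to human review when the output doesn’t match, is one simple way to keep the “accurately select their dispositions” step auditable.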
But we also understand the magnitude of this new technology, and the potential it has to do harm as well as good.
A number of prominent tech leaders—including Elon Musk, Steve Wozniak, and Andrew Yang—are publicly calling for a pause in the training of AI systems more powerful than GPT-4, the general concern being that runaway AI will outpace humanity’s ability to harness it for good.
I thought it would be a good time to delineate some of the top ethical concerns of these new LLMs for the contact centre.
Any company considering using these AI models in their contact centre should be aware of the potential pitfalls and consider processes that will help ensure safe and positive customer experiences.
Five Ethical Concerns of ChatGPT in the Contact Centre.
1. Transparency.
The first ethical concern with using ChatGPT and other generative AI systems in the contact centre is transparency, which applies both to how an AI-driven decision was made and to the fact that it was made by AI at all. Customers have the right to know if their interaction was mediated by AI.
If customers believe they are talking to a human and later discover that they were talking to a chatbot, for example, it could erode trust in the company and damage customer relationships with the brand. Full disclosure to the customer is crucial when designing chatbots and virtual agents.
2. Bias.
Another ethical concern is the potential for bias in the responses of chatbots and virtual agents. ChatGPT is a machine learning model that is trained on vast amounts of text data, and there is a risk that this data could be biased in some way.
If, for example, an automatic quality review favours one pattern of speech over another, this could lead to unequal treatment of agents, resulting in discrimination.
3. Data Security.
As with any software system handling personally identifiable data, data flowing through the LLM needs to be encrypted end to end and at rest, and must follow all of the data privacy practices that keep customers’ data safe.
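Beyond encryption, one common safeguard is to redact personally identifiable information before a transcript ever reaches an external LLM. The sketch below is an illustrative assumption, not any vendor’s pipeline, and its regex patterns are deliberately simple—production PII detection needs to be far more thorough.

```python
import re

# Minimal sketch: redact obvious PII before a transcript is sent to an LLM.
# These patterns are illustrative examples only, not an exhaustive detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before transmission means that even if the LLM provider logs prompts, the customer’s identifiers never leave the contact centre’s systems.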
4. Truthful Output.
A further ethical concern is the potential for ChatGPT to “hallucinate”—to make up answers that are not truthful, yet convey these fabrications in a compelling way.
Of course, we don’t want to be giving misinformation to customers, especially if they’re using it to make life decisions.
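One common mitigation—sketched here as an assumption, not a description of any specific product—is to ground the model’s answers in retrieved reference text and flag responses whose content words don’t appear in that text. The threshold and word-overlap heuristic below are crude illustrations of the idea.

```python
def grounded_enough(answer: str, context: str, threshold: float = 0.6) -> bool:
    """Crude grounding check: flag an answer when too few of its content
    words appear in the retrieved reference text it should be based on."""
    stop = {"the", "a", "an", "is", "are", "to", "of", "and", "in"}
    words = [w.strip(".,!?").lower() for w in answer.split()]
    content = [w for w in words if w and w not in stop]
    if not content:
        return False  # nothing checkable: route to a human
    hits = sum(1 for w in content if w in context.lower())
    return hits / len(content) >= threshold
```

Answers that fail the check would be suppressed or routed to a human agent rather than shown to the customer; real systems use stronger techniques (retrieval-augmented generation, citation verification) built on the same principle.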
5. Job Displacement.
Finally, there is the concern of job displacement. As more companies turn to AI-based chatbots like ChatGPT, there is a risk that human contact centre agents could lose their jobs.
Companies must consider the impact that automation could have on their employees and take steps to mitigate any negative effects, and in particular to imagine new opportunities for better service with much of the routine work being automated.
Pause at GPT-4?
The speed of recent advances in LLMs has been astounding. And there are legitimate concerns, as outlined above.
Rapid development of AI without safeguards—at the very least, consideration and mitigation of the potential harms—could result in those harms becoming reality.
The Future of Life Institute (FLI)—the author of the open letter calling for a pause—wants to ensure that AI technology is developed in a way that benefits humanity and avoids potential risks and dangers.
They are concerned that the major AI labs are in an “out-of-control” arms race, and currently there is no proper oversight ensuring they are working in an ethical manner.
We’ll be tracking these discussions carefully as they evolve over time. But our commitment to our customers and to the millions of people who interact with our software systems every day is to continuously evaluate the impact of our decisions against the ethical considerations outlined above.
Final Thoughts.
For the LLM-powered features that we’ve already delivered to market and those we are developing, we allow for transparent decision-making; modification by a human reviewer to remediate bias if it should exist; absolute commitment to data privacy and security; anti-hallucination techniques; and careful consideration of the impact of LLMs on human agents—with an eye toward imagining expanded roles for humans in the age of automation (I recently wrote about the concept of steering agents that oversee a team of bots).
While LLMs will improve the quality of service for the millions of us looking for help and support from the companies we love, it’s incumbent on the software engineers, designers, and product leaders among us to consider and address these ethical concerns as we build.
This blog post has been re-published by kind permission of Talkdesk – View the Original Article
For more information about Talkdesk - visit the Talkdesk Website
Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.
Author: Talkdesk
Published On: 14th Apr 2023 - Last modified: 24th May 2024