Case Study – Voxpro Increases Productivity by 20%

A large Irish contact centre tells the tale of how it greatly increased productivity by rethinking its quality assurance (QA) process.

Background – The Problem

According to Kieran McCarthy, Improvement Strategist for Voxpro, a traditional quality model involves a substantial number of quality leads/evaluators, who review support tickets and chats or listen to agent calls.

So, quality leads might spend 60-70% of their time reviewing what people did previously, whether an hour or a week ago, trying to find something in a huge haystack of calls substantive enough to inform a coaching conversation. Only then does the quality lead or supervisor get the opportunity to meet with the agent who handled the call.

Finding the opportunity to have that conversation is also a challenge because agents are tied up taking phone calls and answering emails and chats.

Apart from the operational challenges, traditional QA processes also represent a significant investment. On average, there are 15 agents for each team lead and 30 agents for every quality lead.

Even if a lead reviews only one or two interactions per agent per month, team leads are going to spend an inordinate amount of time reading tickets before they get around to speaking with the agents about what they have found.

For example, ten quality leads costing €30k per year each amount to €300k, and with at least 50% of their time spent reviewing someone else’s work, that review time alone costs €150k.
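As a rough sketch, that cost model can be expressed in a few lines of Python; the figures are the article’s illustrative numbers above, not actual Voxpro costs:

```python
# Rough cost model using the article's illustrative figures
# (not actual Voxpro costs).
num_quality_leads = 10
salary_per_lead_eur = 30_000     # EUR per lead per year
review_time_share = 0.5          # at least half their time spent reviewing

total_qa_cost = num_quality_leads * salary_per_lead_eur   # 300,000
review_only_cost = total_qa_cost * review_time_share      # 150,000

print(f"Total QA salary cost: EUR {total_qa_cost:,}")
print(f"Cost of pure review time: EUR {review_only_cost:,.0f}")
```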

At the same time, even once they have finished reviewing the agents’ work, they have still not begun to work on changing agent behaviours.

In addition, what exactly is the team lead listening or looking for? Soft skills, empathy, ownership, knowledge? These are all highly subjective terms, resulting in emotional conversations that are likely to be unproductive.

When the team or quality lead finally meets with the agent, they begin by having a discussion or sometimes even an argument about what those terms really mean.

On top of that, Voxpro has a multicultural, multilingual staff, which adds complexity: quality leads must speak one of the 13 languages Voxpro supports and be available to review the recordings.

This is typical of most outsourcers: agents handle 70-80 calls per week, yet only 1-2% of those calls are reviewed, the process consumes 50% of QA resources, and even after all that effort, leads end up having disagreements with agents about the results.

As Kieran McCarthy says: “Clients really care about how much work are you doing (productivity) and how well are you doing it (quality); after all, these are the only real measurements from the outsourcer’s client perspective.”

Voxpro saw that this traditional quality model was inefficient and, worse, that there was no correlation between the quality scores and the key metrics its clients care about, namely NPS/CSAT from a CX perspective or the amount of work being done.

An Innovative Solution

Taking all of this into account, Kieran and his team decided to do things differently.

Until then, scoring had been done in Excel spreadsheets, so metadata was not available.

So, as Kieran tells us: “The first step we took as part of our strategy was to decide what platform would be used to provide analytics and reporting that we could do with ease. We needed to make sure we could access the data easily. We selected Scorebuddy!”

The next step was to move quality out of the hands of the few, the quality leads, and into the hands of the many: the agents. Most of Voxpro’s more than 4,000 staff are agents, so a great many people participated in this step.

The strategy then turned to educating the agents in self-scoring, that is, quality review of their own work, since they were likely to be more self-critical than if someone else were to score them.

It was determined that it would be far more efficient and effective to have agents review their own work, so we built a simpler quality card around three main questions, requiring the agent to review the ticket if, and only if, they received a negative survey (a sketch of such a card follows the list below).

We were looking for:

  1. Did you authenticate the caller properly? (awareness)
  2. What do you believe was the main reason for any detractor or negative review?  (awareness)
  3. Could you have done anything differently, and if so, what? (awareness)
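Purely as an illustration (this is not Scorebuddy’s actual data model), the card and its trigger rule could be represented like this; the field names and the negative-score threshold are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelfEvaluation:
    """One agent self-review, triggered only by a negative survey."""
    ticket_id: str
    authenticated_properly: bool   # Q1: did you authenticate the caller?
    detractor_reason: str          # Q2: main reason for the negative review
    improvement: Optional[str]     # Q3: what could you have done differently?

def needs_self_review(survey_score: int, negative_threshold: int = 6) -> bool:
    """The card is filled in if, and only if, the survey was negative.

    Treating scores of 6 or below as negative is an assumption here,
    not a rule stated in the article.
    """
    return survey_score <= negative_threshold
```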

People inherently want to do better, and Voxpro noted that a poor score actually worked in everyone’s favour: it opened the door to a productive, awareness-based quality coaching session.

It was clear that agents could be coached more effectively when their CSAT and quality scores didn’t match up.

Agents had several reactions to this process:

  • 90% were delighted, and a large number of self-evaluations began happening quickly.
  • Those who had bad surveys every month but didn’t complete a self-evaluation stood out immediately as disconnected. It came down to a simple yes or no: did you review your bad quality survey or not?

The question then arose, how many were evaluating surveys correctly?

Using Scorebuddy and the huge amount of metadata Voxpro had gathered in a short time, the team was able to examine the data easily.

The Results

This was more than a year and a half ago; within two months, Voxpro had 80% compliance and the data was rich, showing evaluations from the agents’ perspective rather than the team leads’.

Gathering all this metadata had taken no extra time away from agents or team leads.

Analysis of the data showed that 40% of the agents said they could have done a better job! Their comments suggested where and how they could have done better.
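A minimal sketch of this kind of analysis, assuming the self-evaluations can be exported as simple records (the field names here are hypothetical, not Scorebuddy’s schema):

```python
# One record per negative survey an agent received; field names are
# hypothetical, not Scorebuddy's schema.
evaluations = [
    {"agent": "A1", "self_review_done": True,  "could_do_better": True},
    {"agent": "A2", "self_review_done": True,  "could_do_better": False},
    {"agent": "A3", "self_review_done": False, "could_do_better": None},
]

completed = [e for e in evaluations if e["self_review_done"]]

compliance_rate = len(completed) / len(evaluations)   # cf. the 80% figure
self_critical_rate = (
    sum(e["could_do_better"] for e in completed) / len(completed)
)  # cf. the 40% figure

print(f"Compliance: {compliance_rate:.0%}")
print(f"Could have done better: {self_critical_rate:.0%}")
```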

Within a few months they had gathered enough data to give context to workshops and to bring in focus groups to help agents get better.

Quality leads were now freed up to look at the data. They had the time to do root cause analysis, through the Scorebuddy solution, and make improvements rather than spending all their time listening to phone calls and looking at tickets.

Fast Forwarding to Now

One of Voxpro’s contract teams applied this same strategy and was able to detect where the problems were coming from.

Kieran’s team discovered that the Scorebuddy analytical tool was not being used, so they trained people to use it. They were then able to move 500 agents from an NPS of 42 to 56 in three months.
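For context, NPS is calculated from 0-10 survey scores as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch of how such a shift might be tracked:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# A team's score moving from roughly 42 to 56 means the balance of
# promoters over detractors improved by 14 percentage points.
print(nps([10, 9, 9, 8, 7, 10, 9, 3, 10, 9]))  # 60.0
```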

As a result of these improvements, they gained additional business with a major account.

For outsourcers working with a scorecard system like Scorebuddy, the primary problem is that you inherit the ecosystem your clients use when they come on board: you need to integrate each client’s CRM into the way you, as the outsourcer, work.

This makes it very difficult to standardize platforms across all of your customers when delivering the service.

Scorebuddy can be used to standardize on one way of capturing quality, regardless of the ecosystem being used by the client. Capturing all the data in Scorebuddy is invaluable.

Voxpro uses Sisense as an analytical tool and is able to use Scorebuddy’s API to automatically feed Sisense with the data for more sophisticated analyses when needed. The dashboards were taken to the next level in this way.
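A minimal sketch of what such a feed could look like, assuming a REST export on the QA side and an ingest endpoint on the analytics side; the URLs, paths, and token below are placeholders, not documented Scorebuddy or Sisense APIs:

```python
import requests

# Placeholder endpoints and token -- NOT documented Scorebuddy or
# Sisense APIs; consult each vendor's API reference for real paths.
SCOREBUDDY_EXPORT_URL = "https://example-scorebuddy.invalid/api/scores"
SISENSE_INGEST_URL = "https://example-sisense.invalid/api/datasets/quality"
API_TOKEN = "..."

def sync_quality_scores() -> None:
    """Pull quality scores from the QA platform, push them to analytics."""
    resp = requests.get(
        SCOREBUDDY_EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    scores = resp.json()

    # Forward the batch of scores to the analytics dataset.
    requests.post(SISENSE_INGEST_URL, json=scores, timeout=30).raise_for_status()

if __name__ == "__main__":
    sync_quality_scores()
```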

In conclusion, if you are getting tens of thousands of data points from your agents, you can analyse the data quickly and easily to move the needle at little or no cost to the outsourcer.

Evolution

Below, Kieran McCarthy shares his thoughts on the evolution of the Voxpro contact centre, after implementing the Scorebuddy quality solution.

Eventually, we evolved to a behaviour model for the quality team. We asked ourselves, what is it that drives CSAT and NPS? What are the drivers of productivity? And what looks healthy in terms of those behaviours driving the best outcomes?

We wanted to improve productivity on a client contract, and of course we started by focusing on the people with the lowest productivity to raise them; but others, such as the top performers, inevitably suffered because they got less attention. We felt that wasn’t sustainable.

Most of the phone calls came in between 12pm and 4pm, which was also when shifts changed. And we found that this was the same window in which quality leads and team leads were roaming around holding one-on-one coaching sessions and team meetings.

We instructed them to let the agents do their work during those hours and hold their team meetings outside of those hours.

Result: Productivity rose by 20%.

We have integrated that model, from self-assessment to behaviour to automation, into one process, with Scorebuddy as an integral part of it.


Author: Robyn Coppell

Published On: 3rd Feb 2020 - Last modified: 5th Feb 2020
Read more about - Industry News

