We picked up this question from our LinkedIn group
“How do you measure the productivity of QA auditors who are evaluating recorded calls and emails?”
So we put the question to our readers, and here are all the answers you need to know…
-
A Few KPIs
Here are a few KPIs I used in the past for QA evaluators:
- Quantitative KPIs – number of evaluations per hour; average time per eval
- Qualitative KPIs – evaluation accuracy (e.g. – xx% accuracy compared to the calibrated contact); random audits to measure accuracy (QA the QA style)
- Business Impact KPIs (if the QA is also giving feedback/coaching) – improvements generated at individual/team levels due to their coaching/feedback (e.g. – % of improvement in Quality/CSAT scores month over month; number of agents meeting QA KPI in the team on a monthly basis); x number of operational processes improved due to the QA’s analysis
- Administrative KPIs – adherence (lateness and absenteeism); behaviour, etc.
You can use some or all of the above as you see fit. The important thing for a balanced scorecard and an overall view of the QA auditor’s performance is to include at least one KPI from each category.
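To make these KPIs concrete, here is a minimal sketch in Python of the quantitative and qualitative measures above. The record fields, the 5-point calibration tolerance and the sample figures are illustrative assumptions, not part of any particular QA tool.

```python
from dataclasses import dataclass

# Hypothetical evaluation record; field names are illustrative.
@dataclass
class Evaluation:
    minutes_spent: float     # time the auditor spent on this evaluation
    auditor_score: float     # score the auditor gave the contact
    calibrated_score: float  # score agreed in the calibration session

def quantitative_kpis(evals: list[Evaluation], hours_worked: float) -> dict:
    """Evaluations per hour and average time per evaluation."""
    return {
        "evals_per_hour": len(evals) / hours_worked,
        "avg_minutes_per_eval": sum(e.minutes_spent for e in evals) / len(evals),
    }

def accuracy_vs_calibration(evals: list[Evaluation], tolerance: float = 5.0) -> float:
    """Share of evaluations within `tolerance` points of the calibrated score."""
    within = sum(1 for e in evals if abs(e.auditor_score - e.calibrated_score) <= tolerance)
    return within / len(evals)

evals = [Evaluation(12, 85, 88), Evaluation(9, 70, 70), Evaluation(15, 92, 80)]
print(quantitative_kpis(evals, hours_worked=1.0))   # {'evals_per_hour': 3.0, 'avg_minutes_per_eval': 12.0}
print(f"{accuracy_vs_calibration(evals):.0%}")      # 67%
```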
Thanks to Emil
-
Time to Competently Complete Each Audit/Task
One approach is to measure the time taken to competently complete each audit or task, such as report writing, and factor that into your expected utilization figures.
Thanks to Philip
-
Amount of Work They Complete Divided by the Time They’re Scheduled to Be Productive
We measure the amount of work they complete divided by the time they’re scheduled to be productive to identify how productive they are.
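As a rough illustration, Andrew’s measure can be expressed as completed work (in standard minutes) divided by scheduled productive time. The 12-minute standard time per evaluation below is a hypothetical figure, not a benchmark.

```python
def productivity(completed_evals: int, standard_minutes_per_eval: float,
                 scheduled_productive_minutes: float) -> float:
    """Work completed (in standard minutes) divided by scheduled productive time."""
    return (completed_evals * standard_minutes_per_eval) / scheduled_productive_minutes

# e.g. 28 evaluations at a 12-minute standard against a 6-hour productive schedule
print(f"{productivity(28, 12, 6 * 60):.0%}")  # 93%
```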
Thanks to Andrew
-
Look at Completed Audits
I normally look at completed audits to target accuracy in calibration sessions, plus spot checks on their audits.
Thanks to Chris
-
Performance vs Forecast Is the Better Measure of Productivity
I’ve always found performance vs forecast to be the better measure of productivity; it also allows you to plan and manage the workload for your QA team better.
I create one that covers the financial year so I can plan for headcount, with a more specific month-by-month one to account for fluctuating contact volumes and individual performance according to sampling criteria.
My QA Lead will then use this forecast to allocate a number of QA assessments to each of their QA specialists across the course of the month, with the progress of the workload then tracked and discussed in regular 1-2-1s to maintain pace and engagement.
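As a rough sketch of how such a forecast-driven allocation might work, the snippet below spreads each week’s assessments across specialists in proportion to forecast contact volume. The volumes, the 2% sampling rate and the specialist names are all illustrative assumptions, not Matt’s actual figures.

```python
# Illustrative only: allocate QA assessments per week from forecast contact volumes.
forecast_contacts = {"week1": 4000, "week2": 5200, "week3": 4800, "week4": 4000}
sampling_rate = 0.02  # assume we QA 2% of forecast contacts
specialists = ["QA1", "QA2", "QA3"]

for week, volume in forecast_contacts.items():
    assessments = round(volume * sampling_rate)
    per_specialist, remainder = divmod(assessments, len(specialists))
    # Spread any remainder one-by-one across the first few specialists.
    allocation = {s: per_specialist + (1 if i < remainder else 0)
                  for i, s in enumerate(specialists)}
    print(week, allocation)
```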
Alongside productivity, my QAs have their own QA metric which works like “Check the Checker.” This second line QA involves my QA Lead re-QAing the assessments to ensure they’re aligned and that we’re getting the quantity vs quality balance right with QA assessments.
I wouldn’t recommend measuring your QA team on QA performance, as this creates a slight conflict of interest and somewhat defeats the object of having a segregated QA function.
Any improvements in agent performance should be the agent’s line manager’s responsibility, with the QA team being the enabler.
Thanks to Matt
-
QA’s Responsibility to Improve Performance
It is my QA’s responsibility to improve performance along with the supervisor.
I don’t need a bunch of error finders… I want continuous growth and improvement.
I expect to see improvement; if I don’t, I look to them just as much as to the employee.
We also do secondary audits of evaluations and calibration sessions as a group.
Thanks to Katherine
-
Allocate Calls for Our Quality Team
We allocate calls to our quality team and allocate time for them to complete the evaluations.
We also use Microsoft Forms, which provides a ‘time to complete’ measure for each piece of quality work.
Thanks to Richard
-
Reflect on Their Performance Through QA
When call centre agents can reflect on their performance through QA metrics, you’ll see measurable improvements like:
- Improved First Contact Resolution (FCR)
- More strategic use of words and language
- Better communication with customers across the board
- Improved Average Handling Time (AHT)
- Improved problem solving, especially when it comes to troubleshooting for a customer
Thanks to Aoron
-
Auto QA Support From Speech Analytics
We’ve found a lot of people are very interested in, and benefiting from, auto QA support from speech analytics.
Nevertheless, don’t be fooled by those vendors telling you “AI can get you across 80-90% of your scorecard.”
Whilst it CAN score across 100% of calls, the likelihood is that there are elements of your scorecard that the technology just isn’t clever enough to handle yet.
Most scorecards can be about 40-50% automated, with improvements on that over time.
We’re also seeing a lot of success with a coaching platform that takes the metrics created in the auto QA process and tells managers who to coach, and on what.
Similar processes could be added for a QA team, with metrics on completed manual QA added, in order to drive conversations for them too.
Ultimately, the huge benefit of speech analytics is that it highlights areas of “risk” within the business and provides insight for targeted QA. It can even highlight the parts of the call that need listening to, rather than the whole thing, for an initial triage of the call.
Thanks to Paul
Extra Articles on QA Productivity
- How to Calculate Productivity in the Contact Centre
- Call Centre Quality Parameters: Creating the Ideal Scorecard and Metric
- 4 Key Objectives to Improve Productivity and Customer Experience
- Call Centre Quality Assurance: How to Create an Excellent QA Programme
Author: Rachael Trickey
Published On: 30th Sep 2022