Richard Snow shares his thoughts on using the Customer Effort Score in the contact centre.
Transition from the old-fashioned call centre
During my involvement with customer service, CRM and contact centres, I have seen companies transition from the old-fashioned call centre that simply handled customer calls, to contact centres that typically added email and white mail as channels of engagement.
This has now expanded dramatically, with everyone talking about the omni-channel customer experience, where customers can engage companies through the channel of their choice and at a time of their choice.
I am on record as saying that much of this latest talk is just that, talk.
Companies will not be able to support “omni-channel” until they technically integrate all existing channels of engagement, have single interaction routing (i.e. all interactions routed using the same rules), and have processes and systems in place that ensure customers get the same information regardless of touch point.
The central issue is the metrics companies use
Over the same period, I have continually heard phrases such as “customer service is the only differentiator” and “companies need to be customer-centric”. Sadly, in the same way as omni-channel, I think most of this is only hyperbole.
I have come to realise that the central issue is the metrics companies use to monitor and assess the success of customer engagement, and this is typified by the metrics used to manage contact centres.
The top metric is still Average Handling Time
I began my research into contact centre performance management about 10 years ago with a benchmark of global contact centre maturity. I judged maturity across four dimensions: people, process, information and technology – just as I do today.
The results showed that about 10% made it to the highest, innovative level, and the limiting factor was most often the quality of information – including the metrics – companies used to monitor performance. Additionally, there were issues concerning how to up-skill agents to handle different forms of interaction, and around system and information access. But metrics were always the biggest issue.
It probably won’t surprise anyone that the top metric at the time was Average Handling Time, and guess what, in my very latest benchmark into customer analytics, the top metric is still Average Handling Time!
Used in context, Average Handling Time has its place, as it has a direct impact on the cost of handling calls, but it is not very customer-centric. Customers want resolutions: yes, they would like them as quickly as possible, but the key is getting an answer to their issue.
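As a rough sketch of how Average Handling Time is conventionally computed (the call records and field names here are hypothetical, purely for illustration):

```python
# Hypothetical per-call records, in seconds. AHT is conventionally
# talk time + hold time + after-call wrap-up, averaged over handled calls.
calls = [
    {"talk": 240, "hold": 30, "wrap_up": 60},
    {"talk": 180, "hold": 0, "wrap_up": 45},
    {"talk": 300, "hold": 90, "wrap_up": 75},
]

def average_handling_time(calls):
    """Mean of (talk + hold + wrap-up) across all handled calls."""
    total = sum(c["talk"] + c["hold"] + c["wrap_up"] for c in calls)
    return total / len(calls)

print(average_handling_time(calls))  # 340.0 seconds
```

The point the article makes stands out in the arithmetic: nothing in this calculation records whether the customer's issue was actually resolved.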
To become more customer-centric, companies have added various other metrics such as Customer Satisfaction, First Contact Resolution, and the Net Promoter Score. Measured properly, these also have their place.
Make it easy and quick for customers to contact you
Back in 2010, an article appeared in the Harvard Business Review that proclaimed it was time to “stop trying to delight your customers” because it was more important to make it easy and quick for customers to contact you. This would engender greater loyalty and thus more business, and so a new metric was born – the Customer Effort Score.
The idea was simple: ask customers the straightforward question, “How easy was it to contact us?” Then allocate a score according to the response, track the metric over time, and put in place new training, processes and technology to improve the score.
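The score-and-track step might look something like the sketch below. The 1–5 ease scale and the response format are assumptions for illustration; real implementations vary in scale and question wording.

```python
from statistics import mean

# Hypothetical survey responses on a 1-5 ease scale
# (1 = very difficult, 5 = very easy). Field names are illustrative.
responses = [
    {"customer": "A", "ease": 5},
    {"customer": "B", "ease": 4},
    {"customer": "C", "ease": 2},
    {"customer": "D", "ease": 5},
]

def customer_effort_score(responses):
    """Average the ease ratings into a single score to track over time."""
    return mean(r["ease"] for r in responses)

print(customer_effort_score(responses))  # 4.0
```

Tracked period over period, a falling score flags where new training, processes or technology are needed.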
Like all the other metrics, Customer Effort has its place, and my research over the last four years has seen an increasing number of companies adopting it, although adoption rates are still not that high.
Companies now use a range of metrics to monitor customer engagement
My latest research into customer analytics shows that companies now use a range of metrics to monitor and assess the success of customer engagement.
To make analysis easier, I grouped these into three categories:
- Process metrics – the top three are: call outcomes (how issues were resolved), performance against SLAs and agent quality scores. This was the most popular set, with companies using an average of five.
- Financial metrics – the top three are: adherence to budget, overall customer service costs and customer profitability. This was the equal second most popular set, with companies using an average of three.
- Customer-specific metrics – the top three are: customer satisfaction score, cost to serve and lifetime value. This set was equal second, with companies using on average three. Customer Effort was included in this set and was sixth most popular, with 27% of respondents saying it is one of the metrics they use, indicating that although it is quite a recently introduced metric, its use is increasing.
My experience and benchmark research show that companies place high importance on contact centre and, more recently, customer-facing metrics. I believe that no single metric is adequate to monitor and assess customer-related activities, and the recent increase in the number of touch points companies support makes this even more apparent. Companies need a balanced set of metrics covering operational performance, finances and direct customer measures.
Making it simple, quick and easy for customers is vitally important
I think delighting customers is still valid, but I accept that with changing consumer communication and living habits, making it simple, quick and easy for them to engage with your company is vitally important.
So for me, I would go for a core set of metrics based around First Contact Resolution, Customer Effort and Customer Value: First Contact Resolution, because it is what customers want and achieving it will lower costs; Customer Effort, because it reflects customer expectations and, used properly, can lead to good multi-channel customer engagement; and Customer Value, because the first two should lead to more business achieved at lower cost.
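Of the three, First Contact Resolution is the most mechanical to compute: the share of issues closed on the very first contact. A minimal sketch, assuming a hypothetical interaction log that counts contacts per issue:

```python
# Hypothetical issue log: how many contacts each issue took to resolve.
issues = [
    {"id": 1, "contacts": 1},
    {"id": 2, "contacts": 3},
    {"id": 3, "contacts": 1},
    {"id": 4, "contacts": 1},
]

def first_contact_resolution_rate(issues):
    """Fraction of issues resolved on the very first contact."""
    resolved_first = sum(1 for i in issues if i["contacts"] == 1)
    return resolved_first / len(issues)

print(first_contact_resolution_rate(issues))  # 0.75
```

Every repeat contact drags this rate down while also adding handling cost, which is why improving it serves both the customer and the budget.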
Richard Snow, VP & Research Director Customer Engagement at Ventana Research
Author: Megan Jones
Published On: 19th Feb 2014 - Last modified: 30th Oct 2017
Great article, thank you, Richard. It’s also key that companies consider employee performance.
Regardless of the method of communication with a customer, you need a learning platform and delivery method that is consistent and measurable over all of your contact agents, and indeed the wider workforce. You then need to ensure, by tying learning to performance, that the learning, training and development drive the outcomes that you want and expect.
One aspect to add, Richard, is communication and confusion. Many front-line staff and management switch off when given a lot of different metrics to chase.
Although hierarchies of metrics should in theory allow each layer of management to see only the metrics they need, in practice they tend to confuse.
The unifying thought behind making it easier for customers is very powerful, at all layers and in all functions. I’ve seen this implemented in many places to great effect, both to fix that particular customer’s problem and to prevent other customers from having the same problem.
Richard, great start to one of the industry’s longest-running debates: what are the best metrics and why? Like you, we’ve used a maturity model, based on 300+ deployments, to frame this discussion, because there is strong evidence that the type of metrics you use tends to mirror which stage of development or maturity you’ve reached and where you’re headed next. Our Global Contact Centre Benchmarking Report has been tracking this for 16 years, and one of the significant changes is the increasing use of metrics that incorporate proper feedback processes, like NPS. Why? Because the learning from verbatim customer feedback really can be used to drive agent and organisational performance improvements if implemented correctly, as well as driving loyalty and customer satisfaction. The other trend in the metrics area is an increasing focus on metrics like Customer Effort Score and NPS, at the expense of traditional metrics. In our survey three years ago NPS was used by only 8% of our sample; this year it was 50%. CES wasn’t used by anyone in 2012. In this year’s survey, 10% had it as their second choice.