
Mother, FCR, and ACR

Why FCR should be supplemented by active contact resolution.

I read with great interest the Contact Center Pipeline articles written by my friend Rick McGlinchey, “Solving the CX Calculation Discrepancy – Parts 1, 2, and 3.”

In Part 3, he discusses the famous metric first call/contact resolution (FCR) and how it is usually not captured by ACD or contact center-as-a-service (CCaaS) systems. FCR tries to measure something simple and critical to customer experience (CX): how often do our customers have to call more than once to take care of an issue?

For this reason, FCR has been one of the mainstay big-picture contact center metrics for years. It seems every company measures it, and operational executives are keen to report it.

But I’ve always had my issues with this metric, and here’s why.

FCR is an organizational metric. By measuring how often the customer takes more than one attempt to solve a problem, FCR measures the performance of the overall agent group, rather than the individual agent. An individual agent does not control whether the call coming to them is the first, second, or third contact.

Group measures are fine, but...

FCR is not really actionable. How do you improve FCR directly? Given that agents aren’t individually responsible for FCR, direct action to improve it is much more difficult. If FCR is lower than you want, what can you really do about it? Hire a consultant? Training? Chastise supervisors?

FCR is not really calculable. McGlinchey lays out the four common ways FCR is calculated.

First, agent-authored disposition codes or “Wrap-Up Codes” can be used to determine if the agent feels the contact was resolved. This seems a bit sketchy: agents shouldn’t determine FCR performance by subjectively measuring it themselves.

Second, you can use data to determine if there are repeat callers/contacts. I like this, but most companies don’t use this method as it requires a somewhat sophisticated data query.

Third, many companies incorporate within CSAT surveys a question asking: “Did your problem get resolved?”

Given the low return rates of surveys, self-selection bias is most certainly a problem, and the results are suspect. But this approach seems to be the most common avenue for contact center operations.

Finally, new systems use speech analytics and artificial intelligence (AI) to glean from the conversations whether the issues presented by the customers in their contacts were resolved.

The jury is still out on this, but I suspect that if your CCaaS system has this functionality or if you purchase such a system, you might see decent results. But I am not sure.

FCR seems to me like a great idea, but only in theory.

Fixing Mom’s Passwords

Forgive me if you’ve heard this story from me before, but it is one of my favorites. In my last job, I was working for a very cool and forward-thinking company, Sharpen, a CCaaS provider. I was living near Washington, D.C. but commuting weekly to Indianapolis, where I stayed with my parents during the work week.

One day, I got a panicked call from my mom. It seemed that every one of her accounts (Amazon, bank, cable provider, and online pharmacy) had been suspended. Someone from the Middle East had logged into her accounts with her password, and each company had caught the breach and suspended access. Whew.

I told my mom that when I got home that night, I would call each of the companies and take care of changing her passwords and getting her access to the accounts again.

It was easy. One call per company, a few minutes, and no problem.

Except for her cable provider. I contacted them nine times (seven calls and two long chat sessions) over a full four hours to resolve the issue.

I wondered: does this company know that their agents do not know how to instruct their customers on how to change a password after it has been compromised?

The next day, when I got to the office, I pulled my Sharpen buddies Adam Settle and Kevin Schatz into our conference room and told them the story. I asked: “How can we systematically measure the performance of each of these agents, the first eight who didn’t help me and the last one who showed me exactly how to change mom’s password correctly, and surface this issue?”

We started by plotting this graph (see CHART 1). Sharpen’s CCaaS platform has a terrific business intelligence (BI) platform built into it, and we could also look at customer data (with permission).

In CHART 1, we plotted handle time per phone number (X-axis) against the number of times that the specific number called (Y-axis). Each data point represents a single phone number that called the center within a 24-hour period.

We can see at a glance those contacts that were particularly costly to the operation (those within the gray background). If this data showed my mom’s cable company’s performance, it would be easy to spot my calls as problematic by simply looking at this graph.
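
If you want to try this with your own call logs, here is a rough sketch of how such a graph might be built. This is my illustration, not Sharpen’s BI platform, and the column names (caller_id, handle_time_sec, start_time) are placeholders for whatever your ACD export actually uses.

```python
# A rough sketch of a CHART 1-style analysis (illustrative only).
# Assumed input: a call-log export with columns caller_id, handle_time_sec,
# and start_time (all names are hypothetical).
import pandas as pd
import matplotlib.pyplot as plt

calls = pd.read_csv("call_log.csv", parse_dates=["start_time"])

# Keep the most recent 24-hour window of contacts.
cutoff = calls["start_time"].max() - pd.Timedelta(hours=24)
recent = calls[calls["start_time"] >= cutoff]

# One point per phone number: total handle time vs. number of calls.
per_number = recent.groupby("caller_id").agg(
    handle_time=("handle_time_sec", "sum"),
    call_count=("start_time", "size"),
)

plt.scatter(per_number["handle_time"], per_number["call_count"], alpha=0.5)
plt.xlabel("Handle time per phone number (seconds)")
plt.ylabel("Calls from that number in 24 hours")
plt.title("Repeat contacts vs. handle time (CHART 1 style)")
plt.show()
```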

It makes sense for management to listen to each of these difficult calls and try to determine the issue associated with each one. Could it be a needy customer? Could it be a training issue (like the cable company)? Could we, as a company, have created a business process that confuses our customers? We won’t know unless we listen to these calls.

It was surprising to me how many contacts were repeat calls. Note that for this company, management believes that, because of the nature of the function, each contact should be a one-and-done. Each customer who calls back should be seen as a service failure.

Advent of ACR

After looking at this graph, Kevin, Adam, and I developed a new measure, a variation of FCR, that we called active contact resolution (ACR). For each individual contact, we calculated whether the customer called back within a set window, say 24 hours.

By tagging contact resolution to every contact, rather than just the first contact, you can score each agent who handled it as successful or unsuccessful. A thumbs up or thumbs down per contact, and an agent’s score is simply their batting average: how often they get it right.
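
To make that concrete, here is a minimal sketch of how the per-contact thumbs up/thumbs down and the per-agent batting average could be computed with a 24-hour window. It is an illustration only; the column names are assumptions about a generic call log, not Sharpen’s schema.

```python
# A minimal sketch of scoring ACR, assuming a call log with columns
# caller_id, agent_id, and start_time (hypothetical names).
import pandas as pd

def score_acr(calls: pd.DataFrame, window_hours: int = 24) -> pd.Series:
    """Return each agent's ACR: the share of their contacts with no callback."""
    calls = calls.sort_values(["caller_id", "start_time"]).copy()

    # For every contact, find the same number's next call, if any.
    next_call = calls.groupby("caller_id")["start_time"].shift(-1)
    gap = next_call - calls["start_time"]

    # Thumbs up if the customer did not call back within the window.
    calls["resolved"] = gap.isna() | (gap > pd.Timedelta(hours=window_hours))

    # An agent's ACR is simply their batting average over handled contacts.
    return calls.groupby("agent_id")["resolved"].mean()
```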

Interesting, but does ACR matter?

Intuitively, we would expect ACR to matter greatly to our customers; who would ever want to have to call a company back? But it is important to see if ACR registered in the CSAT measures.

So, we plotted each agent’s monthly CSAT score, measured via survey, against that agent’s ACR average for the month and created this chart (see CHART 2). Note that ACR is measured with a 24-hour time window. If ACR matters to customers, you should be able to see a distinct trend in this relationship.
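
Here is a sketch of how that agent-by-month comparison might be assembled, continuing the ACR sketch above. Again, the frame and column names (including csat for returned survey scores) are illustrative, not the actual Sharpen analysis.

```python
# A sketch of a CHART 2-style comparison: each point is one agent-month.
# Assumes the per-contact frame from the ACR sketch above ("calls"), with its
# resolved flag plus a csat column populated only where a survey was returned.
import pandas as pd
import matplotlib.pyplot as plt

def plot_acr_vs_csat(calls: pd.DataFrame) -> None:
    calls = calls.copy()
    calls["month"] = calls["start_time"].dt.to_period("M")

    agent_month = calls.groupby(["agent_id", "month"]).agg(
        acr=("resolved", "mean"),   # agent's ACR for the month (24-hour window)
        csat=("csat", "mean"),      # agent's average survey score for the month
    ).dropna()

    plt.scatter(agent_month["acr"], agent_month["csat"], alpha=0.5)
    plt.xlabel("Agent monthly ACR")
    plt.ylabel("Agent monthly CSAT")
    plt.title("ACR vs. CSAT by agent-month (CHART 2 style)")
    plt.show()
```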

Sure enough, the trend is obvious and intuitive. Agents who have high ACR performance on average have higher CSAT scores on average. We looked at data from many of Sharpen’s other customers and saw that this trend was unmistakable and universal. So ACR matters to our customers. A lot.

How to Improve ACR

The next thing we pondered was how to improve ACR. There is a multitude of great ideas for this. We could create an AI agent helper that guides the conversation toward contact resolution. We could institute a training program that emphasizes the value of improving ACR. We could set up KPIs for supervisors. We chose to do something different.

Adam and Kevin developed a little nugget of a performance management tool that resided on the agent’s desktop. When not engaged with a customer, the agent would see three tiles, which we called Performance Tiles, that simply showed agent performance on three metrics, one of which was ACR (see FIGURE 1).

A remarkable thing happened when we introduced the tiles to the first customer: everything improved. Management explained to their agents what they would see, why each metric was important, and how each metric was calculated. We left it to each agent to decide how to use this information.

What they did was use the three metrics as a guide; the tiles told them what was important to their management team and what it took to do a good job.

In a million years, I wouldn’t have thought that simply showing agents their performance on a new metric would have the impact it did. Yet the results of our first test are in CHART 3. Note that we used the term FCR in that graph because we hadn’t yet coined the term ACR.

Bottom-Line Results

So, let’s connect ACR with CSAT and find out whether ACR made a difference to it and to other metrics.

Each of Sharpen’s groups saw an improvement of 6% in ACR (see CHART 4). That meant a 6% reduction in callbacks and 6% fewer calls. Handle times and transfers also improved.

What does that translate to? A 6% ACR improvement translates to real cost savings and a real (15%) CSAT improvement.
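To put that in rough, purely hypothetical terms: a center handling 100,000 calls a month at a fully loaded $6 per call would see about 6,000 fewer callbacks and roughly $36,000 a month in avoided handling cost. Your own volumes and cost per contact will obviously drive the real number.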

Agents Want To Do Well

My cheerful takeaway from this exercise is that agents are yearning to know what a good job means at their company. When shown, they will happily do their best to impress their bosses. Everyone wants to do a good job.

Doing a good job on ACR has three fantastic results:

First, fewer calls.

Second, improved CSAT scores.

Third, financial benefits from cost savings and possibly from increased customer loyalty and brand advocacy, translating into higher sales and revenues.

Agents, when polled afterward, liked having Performance Tiles in front of them. The tiles were an indicator and reminder of their performance and goals. Management liked the improvement but also liked having metrics to focus on. This was a win-win-win-win.

Later, we rolled this out to a large number of companies and measured performance before and after the tiles. The results were universally great. We saw ACR improvements of 5% to 16% at all implementations, with the resulting financial and CSAT benefits.

You Don’t Need Tiles (Even Though They Are Very Helpful)

To implement performance improvement, like we did at Sharpen, you don’t need a system, although I do recommend having one.

Scoring ACR is a simple SQL query on your ACD or CCaaS database, and other means can be used to remind agents of their performance instead of coding tiles. Maybe a bulletin board, or a daily email with everyone’s performance? Simply regularly letting agents and their supervisors know how they are doing will suffice.
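
If you want a starting point for that query, here is a hedged sketch. The table and column names (contacts, agent_id, caller_id, start_time) are assumptions about your schema, and the date math is written for SQLite, so adjust it for your database’s dialect.

```python
# Illustrative only: compute each agent's ACR with a 24-hour callback window.
# "contacts", "agent_id", "caller_id", and "start_time" are assumed names.
import sqlite3

ACR_QUERY = """
WITH ordered AS (
    SELECT agent_id,
           caller_id,
           start_time,
           LEAD(start_time) OVER (
               PARTITION BY caller_id ORDER BY start_time
           ) AS next_call
    FROM contacts
)
SELECT agent_id,
       AVG(CASE
               WHEN next_call IS NULL
                 OR julianday(next_call) - julianday(start_time) > 1.0
               THEN 1.0 ELSE 0.0
           END) AS acr
FROM ordered
GROUP BY agent_id;
"""

conn = sqlite3.connect("contact_center.db")  # hypothetical local copy of the call log
for agent_id, acr in conn.execute(ACR_QUERY):
    print(f"Agent {agent_id}: ACR = {acr:.2%}")
```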

Here is my expectation. By incorporating ACR, not only will you get fewer calls, not only will you improve CSAT, but your FCR will likely improve, your agents will get smarter, and very possibly, I will not have to call you back over and over to help fix my mom’s password.

Publisher's Note: Above charts and figure provided by Ric Kosiba, Ph.D., Real Numbers.

Ric Kosiba, Ph.D.

Ric Kosiba is an engineer who tripped into the call center industry about 25 years ago. He started Bay Bridge Decision Technologies and probably has seen more contact center strategic plans than anyone on earth. He is working on a new project involving contact center strategic planning called Real Numbers.
