Contact centers too often face what I call the “CX (customer experience) Calculation Discrepancy”: a mismatch in how the same metrics are calculated across contact center platforms. It is also the last thing you want to hear about when a contact center goes live on a new platform.
I recounted in Part 1 of this article, published in the October issue, my first encounter with this phenomenon in 1997 when I was a sales engineer with Interactive Intelligence (now Genesys).
This article will focus on some of the most common metric discrepancies between platforms, in the hopes you might avoid their pitfalls the next time you perform a major upgrade or migrate to a new contact center platform.
Given the complexity of this issue, I’ve split the article into three parts. Part 1 explained what this discrepancy is, while Part 2 (this article) discusses the most common offenders. Finally, in Part 3, we will cover the differences between CX platforms that are voice-first versus their competitors that started with digital channels and added voice.
Common CX Calculation Discrepancy Metrics
Now let’s review each metric and explain how the CX Calculation Discrepancy applies to it.
Pro tip: if you are staying with your current vendor but are moving solutions from premise to cloud, you still need to check for all of these calculations.
Understanding the differences between platforms typically would not (and probably should not) influence your decision as much as inform what, if any, additional consideration you need to give to configuration, data, and reporting as you make the transition.
Abandon
We are starting with Abandon because it also can be a factor in your Service Level Agreement (SLA) metric.
Generally across the CX platforms, Abandon is defined as a conversation that has been routed to a queue (a grouping of agents, also commonly referred to as a workgroup or skill group depending on the platform) and the customer has disconnected before being connected to a contact center agent.
Some platforms by default will count a conversation as abandoned if it disconnects even one millisecond after connecting to queue, which I refer to as true abandon.
The problem, as I introduced in my flashback at the start of the article in Part 1, is that some CX platforms allow for a configuration setting that might be called something like a Short Abandon, Quick Hangup, Ghost Call Filter, etc.
Usually these default to 0, meaning abandon is “true abandon” as configured out of the box. One reason you might want to filter out short abandons is that a certain percentage of your calls are spam or “junk calls.” Another could be along the lines of “I’m never going to staff my center to answer calls in less than five seconds, so I don’t want those calls counted against me.”
Whether a short-abandon filter applies to your center or vendor platform is neither good nor bad in itself. How you report abandons should be tailored to your business and to how your organization chooses to measure it. Know there are often choices for how your platform measures abandons, and it is equally important to communicate to your organization how you calculate them.
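To make the difference concrete, here is a minimal sketch of how a short-abandon filter changes the reported abandon rate. The data, function name, and threshold are all hypothetical and only illustrate the idea; your platform’s actual configuration and field names will differ.

```python
# Hypothetical data: seconds each abandoned caller waited in queue before hanging up.
abandon_wait_seconds = [1, 2, 3, 8, 15, 45, 60, 90]
offered = 100  # total conversations routed to the queue

def abandon_rate(waits, offered, short_abandon_threshold=0):
    """Count an abandon only if the caller waited longer than the threshold.

    A threshold of 0 is "true abandon": every hangup in queue counts.
    """
    counted = [w for w in waits if w > short_abandon_threshold]
    return len(counted) / offered * 100

true_rate = abandon_rate(abandon_wait_seconds, offered)         # threshold 0
filtered_rate = abandon_rate(abandon_wait_seconds, offered, 5)  # ignore <= 5 seconds
print(f"True abandon rate:     {true_rate:.1f}%")   # 8.0%
print(f"Filtered abandon rate: {filtered_rate:.1f}%")  # 5.0%
```

The same eight hangups produce two different abandon rates depending on a single configuration value, which is exactly the discrepancy to watch for when you migrate.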
Regarding Abandon, make sure to capture:
- How is Abandon calculated in your legacy platform? Are there any configuration options for it and how are they set?
- How are you calculating Abandon in any custom reports?
- How does your new vendor define Abandon and what configuration options, if any, are available for it?
These will help you avoid the CX Calculation Discrepancy for Abandon and ensure you are calculating like for like as you migrate.
SLA (Service Level Agreement)
SLA, which is also referred to as Service Level, is usually expressed as something like “We target to answer 80% of our calls in 20 seconds or less.” I won’t go into the history except to mention that AT&T’s platform in the 1980s, and its four-ring standard, is the origin of the original 20-second answer threshold guideline. To be clear, we are talking about how SLA is calculated, not the percentage or threshold.
There are at least a handful of different options for calculating SLA. I will cover two of them to give you an idea. I will not go into too much detail about the benefits and trade-offs, which are significant but better saved for another time.
An example SLA calculation using only answered:
Calls answered in threshold / Answered
This calculation is the most basic flavor of SLA and what I would generally call standard. Notice we are only using answered calls and have not considered the total of offered calls or abandons.
In a broad stroke, this is the default calculation used with many CX platforms if you leave other settings like Short Abandon at the default of 0.
I could make the case that, as an overall metric, this particular version of SLA is nearly cheating. Instead I’ll suggest that you determine whether the calculation is handled by the CX platform and its standard reports, or whether you are calculating it yourself in your own custom reports.
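The “answered only” formula above can be sketched in a few lines. The answer times and threshold below are hypothetical, chosen only to show the mechanics.

```python
# Hypothetical data: seconds it took to answer each answered call.
answer_times = [5, 12, 18, 25, 40, 10, 15, 22, 8, 19]
threshold = 20  # the "80% in 20 seconds" style target threshold

# SLA = calls answered in threshold / answered
answered_in_threshold = sum(1 for t in answer_times if t <= threshold)
sla = answered_in_threshold / len(answer_times) * 100
print(f"SLA (answered only): {sla:.1f}%")  # 70.0%
```

Note that abandoned and offered calls never appear anywhere in this calculation, which is precisely why this flavor flatters the result.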
An example SLA calculation using answered, short abandon, and offered:
(Calls answered in threshold + Short abandons) / Offered
This version of SLA revisits the earlier declaration of “I am never going to staff my center to answer calls in five seconds or less, so I do not want to be penalized for those short abandons.” But with a twist because we are now adding short abandons into the numerator.
We have potentially stepped into CX Calculation Discrepancy territory. Your current-state CX platform may or may not have a filter for Short Abandon, that filter may or may not have been configured for five seconds or less, and this calculation assumes it is working with raw, unfiltered data.
If you have been in your center long enough to know this version of SLA is how you calculate it, you must now determine whether these calculations are handled by the CX platform or by your own custom reporting.
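A sketch of this second variant, using hypothetical counts, shows how short abandons move into the numerator and offered calls into the denominator:

```python
# Hypothetical counts for one reporting interval.
answered_in_threshold = 70
short_abandons = 6   # abandons at or under the short-abandon filter (e.g. 5 seconds)
long_abandons = 4    # abandons over the filter
answered_total = 90

# Offered = everything routed to queue: answered plus all abandons.
offered = answered_total + short_abandons + long_abandons  # 100

# SLA = (calls answered in threshold + short abandons) / offered
sla = (answered_in_threshold + short_abandons) / offered * 100
print(f"SLA (answered + short abandons over offered): {sla:.1f}%")  # 76.0%
```

If your legacy platform had already filtered those six short abandons out of its data feed, a report built on this formula would silently double-count the filter, which is the discrepancy in action.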
Regarding SLA, make sure to capture:
- How is SLA calculated in your legacy platform? Are there any configuration options for it and how are they set?
- How are you calculating SLA in any custom reports?
- How does your new vendor define SLA and what configuration options, if any, are available for it?
- If Abandon is applicable for you, or might be for future consideration, how is Abandon calculated in your legacy platform and custom reports? (see the previous section on Abandon).
These will help you avoid the CX Calculation Discrepancy for SLA and ensure you are calculating like for like as you migrate.
Occupancy and Utilization
These two are especially challenging because Occupancy and Utilization are not consistently defined metrics across the vendors or in the CX industry.
And sometimes their definitions are flipped: you may find a metric labeled Occupancy that calculates what you consider Utilization, and vice versa.
To illustrate, there’s at least one CX vendor I know of that uses the term Utilization to describe an agent setting for the number of concurrent interactions per channel and does not provide any standard reporting for Utilization as it is often defined.
You need to understand how your organization defines these two terms if you measure them in your current state and then research how your legacy and future CX platforms may (or may not) measure them.
I am lumping Occupancy and Utilization together because they both measure how agent time is spent. To keep things simple we will use the following definitions for the purposes of the example, which are usually expressed as a percentage:
- Occupancy. This is the % of time spent interacting with customers versus available time, non-customer interaction time, idle time, and not answering/not available time.
- Low Occupancy can indicate possible overstaffing. High Occupancy for the time agents are scheduled to take customer conversations could mean you are doing well with your forecast, or you are exceptionally busy.
- An example Occupancy calculation:
- Occupancy % = sum (interactingDuration) / sum (interactingDuration + idleDuration + communicatingDuration + notAnsweringDuration) x 100.00
- Utilization. This is the % of time spent interacting with customers versus overall working time. Low Utilization could indicate a supervisor who only occasionally needs to be available to take customer interactions, an agent who is possibly avoiding work, or something in between.
- If Utilization is consistently low, it might be appropriate for your environment because agents are only needed on relatively small shifts to take interactions compared to the other work they perform. It also could point to overstaffing.
- If Utilization trends too high for your environment, your agents are possibly getting too much talk time. They are more apt to become fatigued, which introduces agent-churn risk.
- An example Utilization calculation:
- Utilization % = sum (onQueueDuration) / sum (awayDuration + availableDuration + busyDuration + breakDuration + mealDuration + meetingDuration + onQueueDuration + trainingDuration) x 100.00
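The two example formulas above can be sketched together. The duration buckets and their values below are hypothetical totals (in minutes) for one agent over a shift; real platforms will name and populate these buckets differently.

```python
# Hypothetical duration totals in minutes for one agent over a shift.
durations = {
    "interacting": 300, "idle": 60, "communicating": 30, "notAnswering": 10,
    "onQueue": 400, "away": 50, "available": 300, "busy": 40,
    "break": 30, "meal": 60, "meeting": 60, "training": 60,
}

# Occupancy % = interacting / (interacting + idle + communicating + notAnswering)
occupancy = durations["interacting"] / (
    durations["interacting"] + durations["idle"]
    + durations["communicating"] + durations["notAnswering"]
) * 100

# Utilization % = onQueue / (away + available + busy + break + meal
#                            + meeting + onQueue + training)
utilization = durations["onQueue"] / (
    durations["away"] + durations["available"] + durations["busy"]
    + durations["break"] + durations["meal"] + durations["meeting"]
    + durations["onQueue"] + durations["training"]
) * 100

print(f"Occupancy:   {occupancy:.1f}%")    # 75.0%
print(f"Utilization: {utilization:.1f}%")  # 40.0%
```

Notice that the same agent can look very busy by one measure (75% Occupancy) and lightly loaded by the other (40% Utilization), which is why knowing exactly which denominator your platform uses matters.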
Regarding Occupancy and Utilization, strive to answer these questions:
- Are you using either of these today and if so, how does your business define the term?
- Does your legacy platform provide standard reports for Occupancy or Utilization, and if so, how are they calculated?
- How are you calculating Occupancy or Utilization in any custom reports?
- How does your new vendor define Occupancy or Utilization and how do they calculate each?
Now that I have reviewed the common metrics impacted by the CX Calculation Discrepancy, in the next and final installment of this article I will review first call/contact resolution (hint: it is a trick metric, which is why I’ve singled it out).
I will also cover some differences to look out for between platforms that started with voice as their initial channel and digital-first platforms that later added voice. And then explore how you can win with CX reporting out of the gate in your next migration.