I remember a friend of mine saying to me once that when he sees an ugly baby, he tries so hard to find some positive way to describe them to the parent. “Wow, your baby is so… sturdy!” or “Well, he’s just… got such a great head of hair!” No one would EVER tell a parent, “Hmm… maybe they will grow into their looks when they are older?”
So I am just going to rip off the Band-Aid here and say that most quality monitoring programs are ugly.
This is NOT a measure of the skill or intelligence of the amazing quality monitoring (QM) professionals out there. God knows how hard the job can be. Often viewed by frontline agents as the quality police, or as focused only on finding fault (“I got dinged for my empathy statement!”), QM professionals are regularly put in the awkward position of having to defend their scoring.
QM folks, we love you. You are critical. Your work is hard. So let’s get real about how to step up our quality game and become the propellants of amazing employee and customer experiences rather than the last bastion of policy-keepers.
Customers’ Needs Have Evolved
On the note of not blaming the messenger, let’s assign blame where it really belongs: the customer. Well, not exactly them, but what they need and what they expect, both of which have changed monumentally. Companies have evolved, and with them, so too have customers’ needs. So much has changed, yet so much of how we manage contact centers has stayed the same.
Contact centers were originally a practical and straightforward operation: Customers called (or you called them) about a very specific and simple need that had a very prescriptive solution (think, what’s my account balance?). Agents used a script and clear process, and they did this about 100 times a day: “Hi, sir/madam. Thank you for calling XYZ Company. You say you need a copy of your invoice? We would be happy to mail one out. Can I confirm your address? Thank you so much for choosing XYZ. You have a pleasant day now, you hear?”
The contact center was not all that different from a factory, and the quality monitoring process was handled in the same manner as quality assurance in manufacturing: an agent handled 100 widgets a day and followed a very black-and-white process, and the quality team audited to ensure that all steps were adhered to and that the product was consistently uniform.
Fast forward to 2021. Over 90% of the reasons customers contacted companies from 1970 to 2010 are now either automated (not getting too many password-reset calls these days, are you?) or simply nonexistent because the product has been improved based on the learnings of complaint data. On top of that, you have Amazon delivering products in a millisecond, and digital startups flush with capital that give you everything you ask for when you complain, both to avoid controversy and because, at their smaller scale, they can afford more liberal policies.
What does all that mean?
Today’s customer is calling with a more complex, less common, more urgent and emotionally charged reason, AND they are armed with expectations set by high-standard companies both in and out of your industry.
The 80/20 rule (80% of customers calling about the same few issues) is reversed because, if 80% of contacts were about the same problems, the company would have fixed them using all that compelling quantitative data at its disposal. People contacting customer service have either tried and failed to resolve the issue on their own, or have no time to try. These customers are now the outliers to the happy path created in customer journeys. These customers just got a refund on their Uber order by clicking a button, without ever having to talk to someone, go through an IVR or wait in a queue.
They are more likely than ever to be emotional and upset, and to have high expectations of service. Or they have a reason for calling that the agent wasn’t trained for, that has no clear process, or that happens so rarely the training has been forgotten through lack of use.
Wow. That is a stark contrast from the 1970s. But is how we manage contact centers as dramatically different? It has changed, sure. Better technology, no doubt. But have our processes evolved as much as our customers? Has the quality monitoring form changed?
Likely not.
Traditional QA Scorecards Drive Undesirable Outcomes
A traditional approach to quality includes a scorecard that breaks down the call flow into sections (greeting, authentication, problem resolution, closing, etc.). Within each section, there is a linear set of mechanics an agent needs to follow (said customer’s name, thanked them for their business, used an empathy statement, summarized the call, had a positive tone, etc.). Additionally, a scorecard may include risk compliance or regulatory requirements that are almost always non-negotiable elements.
There is typically a weighted point system associated with each line-item criterion as well, and a pass/fail threshold based on the cumulative points scored (or, as it often feels, on the number of points lost/dinged).
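To make those mechanics concrete, here is a minimal sketch of how such a scorecard tallies up. The line items, weights, 85-point threshold and auto-fail compliance rule are hypothetical illustrations, not any particular company’s form.

```python
# A minimal sketch of a traditional weighted QA scorecard (hypothetical items).

SCORECARD = {
    "used_customer_name":   10,
    "thanked_for_business": 10,
    "empathy_statement":    20,
    "summarized_call":      20,
    "positive_tone":        20,
    "proper_closing":       20,
}
PASS_THRESHOLD = 85                        # pass/fail cutoff on cumulative points
COMPLIANCE_ITEMS = {"verified_identity"}   # non-negotiable; any miss fails the call

def score_call(observed: dict[str, bool]) -> tuple[int, bool]:
    """Return (points, passed) for one evaluated call."""
    if not all(observed.get(item, False) for item in COMPLIANCE_ITEMS):
        return 0, False  # regulatory misses are auto-fails
    points = sum(w for item, w in SCORECARD.items() if observed.get(item, False))
    return points, points >= PASS_THRESHOLD

# An agent who nailed every mechanic except the empathy statement:
points, passed = score_call({
    "verified_identity": True, "used_customer_name": True,
    "thanked_for_business": True, "empathy_statement": False,
    "summarized_call": True, "positive_tone": True, "proper_closing": True,
})
print(points, passed)  # 80 False -- one missed mechanic, call fails
```

Notice what the math does: one missed mechanic flips a pass to a fail, with no input at all from how the customer actually felt about the call.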
This approach worked beautifully in the contact center of yesteryear, which had an established set of proven procedures and scripts that would yield a consistent and reliable outcome.
In today’s world, however, scorecards like these often drive very undesirable outcomes. Why?
- As the customer journey discipline has advanced, we now look at customers as personas or segments on different journeys with different objectives, emotions and expectations. Where once we may have wanted to drive a uniform response, our customers and our insights show us that this will not only miss the mark but could actually worsen customer satisfaction.
- As calls shift to more varied and emotionally charged reasons, service agents need strong emotional intelligence (EQ) skills to meet the new demands. They often need to think quickly, be resourceful, chase things down and provide a custom solution. Hiring profiles are shifting, too. Today’s agent brings a very different set of tools with them. A traditional QA scorecard can limit the individuality and flexibility needed to leverage EQ and tailor responses, prevent authenticity (a building block of connection and understanding), and discourage decision-making. In short, we hire amazing people and then stop them from using the strengths we so value. The outcome is a demotivated employee who either leaves for a better role or, worse, stays but disengages.
- To improve calibration among scorers and mitigate subjectivity, QA monitoring becomes a slave to the rule, even when the rule is delivering an underwhelming employee and customer experience.
- Employees stop talking to customers and start talking to the scorecard. (Listen for this when you call the companies you buy from. Now you won’t be able to un-hear it.) I once asked a bank employee for my account balance (I was driving, so I couldn’t just check) and got an elaborate empathy statement in response: “I’m sorry to hear you are having trouble with this (I wasn’t). I understand how frustrating this could be (it wasn’t). I assure you I will be able to help you today (I was never worried).”
- The best agents’ work, which is likely most in line with the company’s stated values, often is underappreciated, while the most compliant agents’ work is recognized with top scores and trips to Cancun. This conflict erodes employee trust and commitment to values.
Resuscitate Your QA Program to Thrive in This New World
Before you start to renovate your QA program, have a very open mind. It’s easy to fall into the negativity trap: “We tried that once.” But did you iterate or quit when things got hard? “That’s too subjective.” But are there ways to mitigate this without compromising on CX and EX? “This will be a lot more work.” But is it worse than the fallout from adhering to a poor QA form (customer churn, complaints, employee attrition, absenteeism, etc.)?
Also, before you begin, get everyone in a room to listen to calls that have been previously scored, but don’t use scorecards in this session. Answer a simple question at the end: Will that customer rave about us? Whether yes or no, why? If you are being honest, you might find some surprising contradictions.
When building the quality scorecard, the following are some emerging practices and considerations.
1. Flip the script
- Rather than focus on the skills that might contribute to a good experience, focus on the actual customer experience. What might the customer be feeling based on the issue? What are they expecting to hear? What do we want them to feel?
- Build a scorecard that uses the customer’s emotional and physical needs as the basis for evaluation. Did we achieve these? Then the mechanics of “how” become secondary: a set of tools that a savvy EQ agent knows when and how to use. If they need coaching, mark which tools they could choose to use, but the score is not about whether they said a name three times; it’s about whether they made the customer feel valued. It’s not whether they used an empathy statement, but whether it made the customer feel heard and understood. What that looks like can vary by call type and customer persona, as in the rough sketch below.
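Here is a rough sketch of what that flipped rubric might look like. The call types and outcome questions are invented for illustration; the point is that the score records what the customer felt, while the mechanics become coaching notes rather than points.

```python
# A hypothetical outcome-based rubric: questions vary by call type/persona.

OUTCOME_RUBRIC = {
    "billing_dispute": [
        "Did the customer feel heard and understood?",
        "Did the customer leave trusting the resolution?",
    ],
    "bereavement_account_closure": [
        "Did the customer feel their situation was acknowledged?",
        "Was the process made effortless for them?",
    ],
}

def evaluate_call(call_type: str, answers: list[bool]) -> dict:
    """Score the customer's experience; mechanics become coaching notes."""
    questions = OUTCOME_RUBRIC[call_type]
    return {
        "outcomes": dict(zip(questions, answers)),
        "coaching_notes": [],  # e.g. "try pausing before the solution" -- tools, not points
    }

result = evaluate_call("billing_dispute", [True, False])
print(result["outcomes"])
```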
2. Calibrate like you love it
- Think of calibration as more than gaining consistency in scoring; it’s a way to keep redefining and elevating what “good” looks like. Think of it less as a medium for compliance-building and more as a chance to scrutinize and challenge for continuous improvement.
- Calibrate a lot. This scorecard approach is far grayer because the customers’ needs are, too. That means it’s harder. But, hey, nothing worth doing is ever easy. Don’t shy away because it’s gray—embrace it.
- Include the front line. Get permission to share calls with peers (for obvious privacy reasons), but bring the front line in. Let them hear your thoughts, and hear theirs. This is built-in change management, built-in adoption, built-in employee engagement and built-in evidence of the value you place in their opinion and the difficulty of their role.
3. Analyze it
- Compare the changes you are making against the other metrics they will influence: CSat, NPS, Customer Effort, Employee Engagement, Absenteeism, Productivity and so on. Done well, this can help to raise all boats. A quick sketch of this kind of sanity check follows below.
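As a quick, hypothetical sketch of that check: pull per-agent results from your own systems; the numbers below are invented for illustration. (statistics.correlation requires Python 3.10+.)

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Per-agent fraction of calls where the customer-outcome questions were met,
# and the same agents' average CSat (both invented for illustration):
qa_outcome_rate = [0.62, 0.71, 0.80, 0.85, 0.91]
csat            = [3.9,  4.1,  4.3,  4.4,  4.7]

print(f"QA-outcome vs. CSat correlation: {correlation(qa_outcome_rate, csat):.2f}")
# If the new rubric measures what matters, this should be strongly positive;
# repeat the check against NPS, Customer Effort, absenteeism and so on.
```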
4. Stop scoring numerically
- A numeric score can foster healthy competitiveness, but it also invites agents to scrutinize and contest the number itself. What is the difference between an 81 and an 82, or a 79? Not much, and likely nothing but the mindset of the evaluator that day.
- Switch to levels of mastery rather than scores (e.g., Developing, Achieving, Customer Champion, Mastery). This also shifts the focus from a test they have to pass, or a score they got dinged on, to a culture of coaching and development. A minimal sketch of such a mapping follows below.
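A minimal sketch of that shift, using the level names above with a hypothetical mapping rule (your own cutoffs would come out of calibration):

```python
def mastery_level(outcomes_met: int, outcomes_total: int) -> str:
    """Map outcome results to a coaching level rather than a point score."""
    rate = outcomes_met / outcomes_total
    if rate >= 1.0:
        return "Mastery"
    if rate >= 0.75:
        return "Customer Champion"
    if rate >= 0.5:
        return "Achieving"
    return "Developing"

print(mastery_level(3, 4))  # Customer Champion -- no 81-vs-82 hairsplitting
```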
While this is really just scratching the surface, I am for sure exceeding my word count. I hope there is enough here to get you to start thinking about your program critically, and enough direction to help you begin building something that your customers and employees will feel the benefits of.