
Survey Erosion

Customer satisfaction survey data isn’t as reliable as it once was.

In today’s world, the customer satisfaction survey is a given. Some businesses (Lyft, Airbnb) rely on it as part of their business model and could hardly exist without it. Others may not have the same degree of reliance, but are nonetheless diligent (and persistent!) in requesting feedback after every service transaction. Customer feedback today drives purchase decisions, determines employee bonus amounts, and has spurred a cottage industry of people who collect it, analyze it, report on it, and help others improve it. It is part of the corporate fabric of every service organization.


And it is no doubt valuable. But that value has been eroding over the past few years, at the very point when our dependence on satisfaction data has increased. The combination creates a deadly situation in which contact center leaders presume that all is well and trending upward, when the opposite is really the case. The reasons are numerous, led by the trio of survey fatigue, sample bias, and score inflation.


Survey Fatigue


Take off your professional hat and put on your consumer hat. From this perspective, it is hard to ignore the constant drumbeat of requests for feedback. As these compete for our attention with everything else in our lives, people react in different ways. A few appreciate them and take the time to provide accurate, thoughtful insight. Sadly, these people are more often the exception. Others have tuned out completely, never providing any feedback at all (and adding to the sample bias discussed below). The rest of us pick and choose which surveys to answer, and provide responses with little forethought.


And there is little question that organizations have come to recognize survey fatigue as an issue. In response, surveys that once took several minutes to complete have been pared down to one or two questions. That increases the response rate, but of course provides little in the way of actionable results. We get an indication of where we stand, but are left to guess at why we got the rating, or what we can do to improve it.


Sample Bias


Bias enters survey data in many forms, not the least of which is a direct result of the fatigue described above. Despite what may look like positive response rates, certain customer types have dropped out of the game, leaving us with data skewed toward the limited population that still appreciates the attention.


But more and more, I see bias creeping into surveys in other ways that could easily be controlled. Some examples of bias in service transaction surveys include:


  • Leaving it to reps to offer the survey—an unlikely offer when the call has not gone well.
  • Limiting the survey to only certain transaction types, typically those that represent a successful experience.
  • Offering the survey only to those who use certain channels.

Sample bias problems represent the most critical flaws in our customer satisfaction survey structure. Everything that follows—reporting, analytics, action steps, etc.—is based on data that simply does not represent the typical customer experience. Even if all those steps are optimized, you wind up doing all the right things for all the wrong reasons.
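
To see how much non-response alone can distort results, consider a minimal sketch. The response rates below are illustrative assumptions, not measured figures; the point is the mechanism, not the magnitudes:

```python
import random

random.seed(42)

# Hypothetical illustration: 10,000 service interactions where true
# satisfaction is uniform on a 1-5 scale (true average = 3.0), but
# dissatisfied customers are assumed far less likely to respond.
population = [random.randint(1, 5) for _ in range(10_000)]

# Assumed response probabilities by score: happier customers answer more.
response_rate = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.35, 5: 0.50}

responses = [s for s in population if random.random() < response_rate[s]]

true_avg = sum(population) / len(population)
survey_avg = sum(responses) / len(responses)

print(f"True average satisfaction:     {true_avg:.2f}")   # ~3.0
print(f"Surveyed average satisfaction: {survey_avg:.2f}")  # ~4.0
```

Every individual score is reported honestly, yet the surveyed average lands nearly a full point above the true one, simply because unhappy customers opt out more often.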


Score Inflation


The field of education has been dealing with grade inflation for a number of years. An “A” is much more common than it was decades ago, yet independent testing does not show that students are smarter or more prepared than in the past. We are seeing similar patterns in customer satisfaction data. Organizational survey scores often improve, yet many independent studies show that overall satisfaction throughout most industries is not moving upward.


Like grade inflation, score inflation is in part a problem of our own making. Satisfaction data is regularly used in performance objectives, but once bonuses and promotions depend on a set of data, that data is more likely to become suspect. All of a sudden, rating scales get changed. Or threes get combined with fours and fives to represent “satisfied” customers. Or surveys get offered only to those callers with resolved cases. Or customers are told that if they can’t give a top score, they should call to “give us a chance to make it right.” There are many ways to artificially inflate results, but none of them actually does the customer any good.
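
The bucketing trick alone can move the headline number dramatically. Here is a small sketch using a hypothetical month of ratings (the distribution is invented for illustration):

```python
from collections import Counter

# Hypothetical distribution of 1-5 ratings for one month of surveys.
ratings = Counter({1: 40, 2: 60, 3: 150, 4: 300, 5: 450})
total = sum(ratings.values())

# "Satisfied" defined as top-two-box (4s and 5s) -- a common convention.
top_two = (ratings[4] + ratings[5]) / total

# Same data, but with the 3s quietly folded into the "satisfied" bucket.
top_three = (ratings[3] + ratings[4] + ratings[5]) / total

print(f"Top-two-box satisfaction:   {top_two:.0%}")    # 75%
print(f"Top-three-box satisfaction: {top_three:.0%}")  # 90%
```

Nothing about the customer experience changed; a redefinition moved reported satisfaction from 75% to 90%.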


Minimize the Impact


Feedback is a good thing, especially when it comes directly from the source. But too much of a good thing can be a bad thing, and we are treading very close to that line. Make no mistake: the answer is not to eliminate or devalue customer satisfaction surveys. But we have to deal with reality, and that means accepting that the data we get simply isn’t as reliable as it once was. We have other methods of measuring quality, though, and we need to make sure they can pick up the slack. Quality monitoring programs, transaction audits, and customer complaint logs all offer valuable insight into the customer experience. Make sure these are all operating at peak performance, and any impact from survey erosion will be minimized.

Jay Minnucci

Jay Minnucci is the Founder and President of Service Agility, a consulting and training company dedicated to improving customer service and call center operations.
