For me, the best way to reflect on the past 15 years in workforce management (WFM) is to look through my notes from the conferences I attended, especially SWPP, Contact Center Expo, and the WFO Virtual Conferences. The highlight of those events is visiting the vendor booths to watch their demos and see what’s new. I also ask what they’re doing that makes them different from everyone else.
The common theme there has always been, “Winning the Race to AI!” We may have called artificial intelligence (AI) by a different name, like “machine learning,” but the idea was always the same – building on technology for better results.
Interestingly, AI is not a new concept for WFM, and ours is a discipline with AI roots that go back much longer than 15 years. In fact, it was exactly 30 years ago when I was learning about a fun new program called TCS above a hibachi restaurant in Nashville. They showed us how fast it created micro-level forecasts and then generated different batches of schedules. It was smart enough to produce one set of schedules that would balance over/understaffing, and an entirely different set that appealed to employee preferences.
Then they trained us on the key to WFM success: The Administrator. They flattered us by referring to us as the Power-Users and the Super-Users. We were responsible for shaping the human elements of the software. It knew we had requirements 24x7, but we restricted it so our people worked 40 hours a week and not one minute over. We made it schedule a few breaks instead of working people straight through the day. The software didn’t care about our state’s scheduling compliance laws, so we forced it to care. Setting up all those parameters established boundaries limiting how far we allowed the software to go, which is what AI programmers do today when censoring their code.
AI is not a new concept for WFM.
This became even more granular for the forecasting modules. Sure, the software could take in massive amounts of history and spit out a forecast (or hundreds of forecasts). However, it still needed a human to clean the data, tell it what to throw out, normalize the data that was staying in, and tag the holidays as special events. WFM software cannot distinguish signal from noise, and AI cannot tell truth from lies. Computers have intelligence without experience, and therefore without wisdom. AI is the medium for humans to teach a computer what we have learned from our wisdom, but it still only knows what we tell it to know.
I’ve always believed the best WFMers are the ones who know how to do it without using WFM software at all, because that exercise helps us understand whether the software is doing it right. It’s good to know how to build a small set of schedules and exactly what that entails. There was a session at a SWPP conference in 2015 where Denise Kapalko built a template for a board game whose object was building the best schedules. It was a valuable team-building event for everyone in that room, and it also built empathy toward the people we create these schedules for. It’s difficult to teach a computer empathy, and especially difficult to rank who receives the highest levels of empathy, which every Scheduler is eventually called upon to do.
Unexpected changes to an otherwise perfect forecast have deeper consequences than what most WFM systems allow.
The Forecaster who knows how to generate one from scratch is a Forecaster who can read numbers, and they are usually more in touch with which WFM software settings need adjusting for better results. There are human elements in contact center forecasting that don’t exist in traditional forecasting methods. They show up in the relationship between the customer and our service goals, and in the relationship between our demand forecast and our staffing levels.
WFM software using AI will never be as good as what we can produce in Excel until we teach it how to deal with some truths about human behaviors. Unexpected changes to an otherwise perfect forecast have deeper consequences than what most WFM systems allow. To correct this, some basic updates need to happen. For example:
- Stop ignoring carryover volume – if the workload requires 15 people, and only 13 show up, that workload will either hang on until the next interval, or it may abandon and show up as a 2nd attempt right away, or later that day. Maybe it will appear tomorrow, or maybe next week. The forecasting tool should be dynamic enough to reposition and postpone that neglected volume when it shows up later than expected.
- Understand the customer’s tolerance levels to waiting and recognize that it can be different on a Monday morning than it is on a Wednesday afternoon. It changes when the reason for calling changes and when the customer’s sense of urgency changes. This ultimately affects the time to abandon, which then affects abandonment rates, repeat caller rates, and total offered volumes. The ratio should be evaluated and kept up to date as conditions change, in the same way that we reforecast volumes, handle times, shrinkage, etc.
- Stop applying shrinkage to utilization factors. Here’s an example: in a half-hour interval with 50 calls, a 5-minute handle time, a 25% shrinkage rate, and an 80/20 service goal, we need:
- 8.3 people to handle the base workload (talk time + ACW time)
- +3.1 additional people sitting in the idle state, waiting on the next customer (because we want to answer 80% within 20 seconds – the “utilization factor”)
- +2.8 additional people to cover 25% shrinkage on the original workload (8.3 ÷ (1 - 25%) = 11.1, an extra 2.8)
- =14.2 Total Fully-Loaded Required Staff
Most software still seems to inflate shrinkage across both the workload and the utilization requirement, making a 25% shrinkage rate equal 3.8 people in the third line of the example above (11.4 ÷ (1 - 25%) - 11.4). That’s a whole new body, for the sole purpose of adding shrinkage headcount to time that is already spent in the idle state, and that becomes excessive. The original shrinkage on the workload is perfectly calculated as-is. For years, Erlang has unfairly caught the blame for bloating the total requirement, when it’s more likely the software developers’ fault for a flaw in their order of mathematical operations. And it’s a flaw that costs call centers real money in hiring unnecessary additional staff.
- When occupancy rates increase, expect an increase in handle time and non-discretionary shrinkage factors, including schedule adherence. These should be dynamic metrics that change in an intra-day forecast as soon as the net staff turns red (negative). Yes, it will have a snowball effect, and it will ultimately put us in a much better position to set expectations and manage our day. Then, if we can resolve that understaffing, the intra-day plan should recognize that, too, and restore the original non-discretionary impacts, the original schedule adherence expectations, and the original handle time forecast, expecting that these do not move up and down in a linear way.
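The shrinkage arithmetic above can be checked with a short script. This is a sketch, not any vendor’s engine: it uses the standard Erlang C formula, interpolates between whole-agent service levels to get a fractional requirement, and assumes a 30-minute interval (which is what makes 50 calls at 5 minutes equal 8.3 erlangs).

```python
import math

def erlang_c(agents: int, traffic: float) -> float:
    """Probability a call waits (Erlang C), via the stable Erlang B recursion."""
    b = 1.0
    for n in range(1, agents + 1):
        b = traffic * b / (n + traffic * b)   # Erlang B blocking for n servers
    rho = traffic / agents
    return b / (1 - rho * (1 - b))

def service_level(agents: int, traffic: float, aht_s: float, target_s: float) -> float:
    """Fraction of calls answered within target_s seconds."""
    if agents <= traffic:
        return 0.0
    return 1 - erlang_c(agents, traffic) * math.exp(-(agents - traffic) * target_s / aht_s)

# Inputs from the example; the 30-minute interval length is an assumption
calls, interval_min, aht_min = 50, 30, 5
aht_s, target_s, goal, shrink = aht_min * 60, 20.0, 0.80, 0.25

base = calls * aht_min / interval_min          # 8.3 erlangs of raw workload

n = math.ceil(base) + 1                        # smallest whole-agent count meeting 80/20
while service_level(n, base, aht_s, target_s) < goal:
    n += 1
lo = service_level(n - 1, base, aht_s, target_s)
hi = service_level(n, base, aht_s, target_s)
frac_req = (n - 1) + (goal - lo) / (hi - lo)   # ~11.4 bodies, interpolated

idle_overhead = frac_req - base                    # ~3.1 people in the idle state
shrink_right = base / (1 - shrink) - base          # ~2.8: shrinkage on workload only
shrink_wrong = frac_req / (1 - shrink) - frac_req  # ~3.8: shrinkage on everything

print(f"base {base:.1f} + utilization {idle_overhead:.1f} + shrinkage {shrink_right:.1f}"
      f" = {frac_req + shrink_right:.1f} required")
print(f"inflating shrinkage across the whole requirement adds {shrink_wrong:.1f} instead")
```

Run as-is, this lands on the same 8.3 / 3.1 / 2.8 / 14.2 figures, and shows the inflated approach producing the 3.8 extra bodies.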
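The carryover idea from the first bullet can be sketched just as simply. The function and the split ratios below are hypothetical illustrations – real retry behavior belongs in your own historical data – but they show the shape of a forecast that repositions neglected volume instead of dropping it:

```python
# Hypothetical split of unhandled volume; the real ratios live in your history
RETRY_NEXT = 0.6    # assumed share that retries in the very next interval
RETRY_LATER = 0.3   # assumed share that comes back later the same day
LOST = 0.1          # assumed share that abandons for good

def apply_carryover(forecast, capacity):
    """Reposition volume that short-staffed intervals could not handle.

    forecast and capacity are per-interval call counts for one day.
    Returns the adjusted offered volume per interval.
    """
    adjusted = list(forecast)
    for i in range(len(adjusted)):
        overflow = max(adjusted[i] - capacity[i], 0)
        if overflow == 0 or i + 1 >= len(adjusted):
            continue
        adjusted[i] -= overflow                   # only capacity gets handled now
        adjusted[i + 1] += overflow * RETRY_NEXT  # immediate second attempts
        remaining = range(i + 1, len(adjusted))
        for j in remaining:                       # later-today retries, spread evenly
            adjusted[j] += overflow * RETRY_LATER / len(remaining)
        # overflow * LOST simply never comes back
    return adjusted

# One overloaded interval's neglected volume resurfaces through the day
print(apply_carryover([50, 40, 30, 30], [40, 40, 40, 40]))
```

With these assumed ratios, a 10-call overflow in the first interval bleeds forward into later intervals, and only the LOST share disappears from the day’s total.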
The nice thing about solving this is that the story behind each of these behaviors already lives in our historical data. It’s realistic to expect AI to draw conclusions like these through deductive reasoning, because I can already do all of this in Excel, and I’m certainly no programmer. Although, to be fair, I am a little forecast-obsessed - I love it and think about it constantly. But the real power behind automation is that it can do it very quickly for far more groups than I could ever manage. My hope is that at some point in the next 15 years, these fundamental, core issues with WFM - and what it means to work with a human customer - will be corrected.
AI’s decision-making will become successful once we can translate what makes our manual forecasts so great when we create them using instinct and intuition. Many times it just comes down to visualizing what the forecast should be and adjusting the software until it catches up. Until then, we will still be in the background, filtering out which events will happen again, which will repeat under the right circumstances, and which to throw away entirely because they will never repeat.
The real power behind automation is that it can do it very quickly for far more groups.
I’m looking forward to seeing what comes next for the future of WFM, but I think AI still has a long way to go before the forecasting engine reaches V’ger status. So, for now, I remain on Team Human.