How to Measure the Success of Your Service Desk

Service desk metrics not only measure the quality of operational performance from a personnel standpoint; they also capture all of the quantitative data on contact volume and type as it accumulates over time. Any categorization entered by an agent or generated automatically by the ACD or ticketing system can be plotted over time, not only to show where the service desk has been but to chart where it’s going. With that much trending data available, there is no excuse for paralysis by analysis. Instead, each page of a comprehensive service desk report should represent an opportunity to take action, either toward service improvement or in preparation for what may come next.

So, what are some of the standard metrics included in most service desk reports?

First and foremost, the SLA report is the best measure of the service desk team’s overall performance. Typical metrics include Average Speed of Answer for all contact channels (voice, voicemail, email, text, web form) as well as abandon, Level 1 resolution, and customer satisfaction rates. Assuming the ITSM platform or ticketing system can automatically generate surveys upon the closure of an incident, measuring individual satisfaction is the most accurate way to gauge the service desk’s reputation among the end user population. Without those surveys, perception may be limited to anecdotal input or offhand comments at the water cooler. Another metric to include in the SLA report is the contact-to-ticket ratio, as it helps determine the agent staffing levels necessary to meet demand. If the ratio is too high, it is also a key indicator of gaps in training, process documentation, or agent access that hamper incident resolution on first contact.
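As a rough illustration of how these SLA figures come together, here is a minimal sketch in Python using pandas. It assumes a generic export with one row per inbound contact and one row per ticket; the file names and column names (channel, answer_seconds, abandoned, resolved_level, csat_score) are hypothetical and would need to be mapped to whatever your ACD or ITSM platform actually produces.

```python
# Minimal sketch: common SLA metrics from a hypothetical contact/ticket export.
# All file and column names are assumptions, not a specific platform's schema.
import pandas as pd

contacts = pd.read_csv("contacts.csv")   # one row per inbound contact
tickets = pd.read_csv("tickets.csv")     # one row per ticket created

# Average Speed of Answer, broken out by contact channel
asa_by_channel = contacts.groupby("channel")["answer_seconds"].mean()

# Abandon rate: share of contacts dropped before an agent answered
abandon_rate = contacts["abandoned"].mean()

# Level 1 resolution rate: tickets closed without escalation
level1_resolution_rate = (tickets["resolved_level"] == 1).mean()

# Customer satisfaction: average survey score on tickets with a completed survey
csat = tickets["csat_score"].dropna().mean()

# Contact-to-ticket ratio: how many touches it takes to resolve one ticket
contact_to_ticket_ratio = len(contacts) / len(tickets)

print(asa_by_channel)
print(f"Abandon rate: {abandon_rate:.1%}")
print(f"Level 1 resolution: {level1_resolution_rate:.1%}")
print(f"CSAT: {csat:.2f}")
print(f"Contacts per ticket: {contact_to_ticket_ratio:.2f}")
```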

Reviewing contacts broken down by hour and day of the week, and looking at month-over-month volume trends, also helps prepare the operations team to scale up the right resources at the right time. For organizations using a shared model, scalability is not an issue. In such instances, primary agents are assigned to a particular client or internal department and already scheduled during core hours, and during temporary call volume spikes or peak periods, a pool of secondary agents trained on the same incident types, environment, and processes is engaged in the queue to receive those inbound contacts. Organizations staffing a finite number of IT professionals, on the other hand, must pay special heed to seasonal volume trends and either get more creative with the current staff’s shifts or bring on additional agents for that period.
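The hour-by-day and month-over-month breakdowns themselves are straightforward to produce once the contact data is in hand. The sketch below, continuing the same hypothetical export, assumes only a created_at timestamp column on each contact record.

```python
# Hypothetical sketch: trend inbound contact volume by hour, weekday, and month
# so staffing can be scaled to demand. Assumes a "created_at" timestamp column.
import pandas as pd

contacts = pd.read_csv("contacts.csv", parse_dates=["created_at"])

# Heat-map style breakdown: rows are days of the week, columns are hours of the day
by_hour_and_day = (
    contacts
    .assign(day=contacts["created_at"].dt.day_name(),
            hour=contacts["created_at"].dt.hour)
    .pivot_table(index="day", columns="hour", values="created_at", aggfunc="count")
)

# Month-over-month volume with percentage change, to spot seasonal spikes
monthly = contacts.set_index("created_at").resample("MS").size()
mom_change = monthly.pct_change()

print(by_hour_and_day)
print(pd.concat([monthly.rename("contacts"), mom_change.rename("mom_change")], axis=1))
```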

Though some people may say comparisons are odious, being able to run reports that correlate performance by individual agents, IT groups, and even support levels is a must. It’s not about assigning blame, either. From an analytical standpoint, looking at how contact volume, CTIs, and closure rates for both incidents and service requests are distributed throughout an IT organization serves a more noble purpose. CIOs, IT directors, and help desk managers tend to cross-reference this data to assess technology changes (new operating systems, application rollouts, etc.) and their impacts, as well as fluctuations in end user demand (growth, seasonal enrollment periods, new customer offerings). In such instances, the metrics are viewed as symptoms of an ever-changing technical environment. Ideally, the service desk should have an adaptive solution in place in advance of, or directly on the heels of, any unanticipated impacts, with the resounding mantra being increased documentation, access, and training. In other words, has the service desk done all it can with those three service improvement tools to maximize resolution rates and minimize escalations to the onsite desktop or infrastructure support staff? A routing and resolution summary by team, covering Levels 1 through 3, is the best way to tabulate the answer to that question.
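A routing and resolution summary of that kind is essentially a couple of cross-tabulations. The sketch below shows one possible shape for it, again against the same hypothetical ticket export; ticket_id, ticket_type, resolved_by_team, and resolved_level are assumed column names, not any particular platform’s fields.

```python
# Sketch of a routing-and-resolution summary by team and support level.
# Column names are assumptions about the ticket export, not a real schema.
import pandas as pd

tickets = pd.read_csv("tickets.csv")

# How incidents and service requests are distributed across the support teams
routing_summary = tickets.pivot_table(
    index="resolved_by_team",
    columns="ticket_type",          # e.g. "Incident" vs "Service Request"
    values="ticket_id",
    aggfunc="count",
    fill_value=0,
)

# Resolution share by support level -- a quick read on how much work
# escapes Level 1 and lands on desktop or infrastructure teams
resolution_by_level = (
    tickets["resolved_level"].value_counts(normalize=True).sort_index()
)

print(routing_summary)
print(resolution_by_level)
```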

Another way to check the technology pulse is to review the top 10 incidents that were resolved and routed, as well as the service request selection trend. While the incidents generally indicate which interruptions in IT functionality are affecting the end user populace, a similar categorization breakdown for service requests shows which software and hardware assets are most frequently being introduced to the environment. An organization that tracks those assets on a regular basis will also be better prepared, procedurally, to respond to tomorrow’s incidents. For example, employees requesting new smartphones with Office 365 may need to have those devices reconfigured to send and receive email, so IT management needs to ensure all agents are thoroughly trained on those procedures in advance. High resolution rates and customer satisfaction scores will be a direct consequence of that preparation. Either way, the proof is in the numbers.
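For completeness, a top-10 view of resolved versus routed incidents, alongside the most-selected service request items, could be pulled from the same hypothetical export along these lines; the category and ticket_type columns are, as above, assumed names.

```python
# Hypothetical sketch: top 10 incident categories split by whether Level 1
# resolved or routed them, plus the most frequently requested service items.
import pandas as pd

tickets = pd.read_csv("tickets.csv")

incidents = tickets[tickets["ticket_type"] == "Incident"]
requests = tickets[tickets["ticket_type"] == "Service Request"]

top_resolved = (
    incidents[incidents["resolved_level"] == 1]["category"].value_counts().head(10)
)
top_routed = (
    incidents[incidents["resolved_level"] > 1]["category"].value_counts().head(10)
)
top_requests = requests["category"].value_counts().head(10)

print("Top 10 incidents resolved at Level 1:\n", top_resolved)
print("Top 10 incidents routed past Level 1:\n", top_routed)
print("Top 10 service request items:\n", top_requests)
```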