Most MSPs hide behind a phrase I’ve come to dislike: “we provide great service.” It’s the kind of thing you put on a homepage when you can’t, or won’t, show the numbers. In twelve years of running service desks I’ve yet to find a single operator who can’t answer the question “are your customers happy?” with data if they actually want to. The ones who duck the question are usually ducking it for a reason.

So this piece is a bit of a look under the bonnet. These are the metrics we publish internally at CloudMatters, the ones we share with customers, and - importantly - the ones we track that most MSPs don’t. If you run hospitality operations and you’re trying to work out whether your current provider is any good, or whether a new one is worth the switch, this is the lens I’d use.

The four headline numbers

On our stats block we publish four figures. They’re the ones we’re proud of and the ones we refuse to let drift. But numbers without context are just marketing, so let me explain what each one actually measures and why it matters for a hospitality business.

98% customer satisfaction. This is the CSAT score on tickets closed over a rolling 90-day window. Every closed ticket generates a one-click survey - thumbs up or thumbs down, with an optional comment. We count thumbs up divided by total responses. We don’t exclude surveys we don’t like, we don’t pester people who didn’t respond until they do, and we don’t rotate the sample. What it tells you: on almost every ticket we close, the person on the other end felt it was resolved properly. What it doesn’t tell you: whether we resolved the root cause, or just the symptom. That’s why we track other things too.
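If you want to sanity-check a CSAT figure like this, the arithmetic should be embarrassingly simple. Here’s a minimal sketch of the calculation - the ticket fields (`closed_at`, `survey`) are illustrative stand-ins, not our actual PSA schema:

```python
from datetime import datetime, timedelta

def rolling_csat(tickets, now=None, window_days=90):
    """Thumbs-up divided by total responses over a rolling window.
    No exclusions, no re-surveying, no sample rotation."""
    cutoff = (now or datetime.now()) - timedelta(days=window_days)
    responded = [t for t in tickets
                 if t["closed_at"] >= cutoff and t["survey"] is not None]
    if not responded:
        return None  # no responses yet: report nothing, not a guess
    thumbs_up = sum(1 for t in responded if t["survey"] == "up")
    return 100.0 * thumbs_up / len(responded)
```

If a provider needs more than a dozen lines to explain how their CSAT is calculated, ask what the extra lines are doing.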

Under 15 seconds average call pickup. When a duty manager rings our service desk during Saturday service, they don’t want an IVR tree and they don’t want hold music. They want a human. Fifteen seconds is roughly three rings. We measure from the moment the call lands on our system to the moment an engineer - not a receptionist, not a ticket logger - picks it up. If that number starts creeping up, we know we need more bodies on the phones before anything else breaks.

Under 30 minutes average response time. This is the time from a ticket being raised (phone, email, or portal) to an engineer actively working on it and responding to the customer. Not acknowledging with an auto-reply. Not assigning it to a queue. Actually working on it. For hospitality, the difference between “we’ve seen your ticket” and “we’re on it” is the difference between a manager carrying on service and a manager standing by the pass with their arms folded.

75% first contact resolution. Three in four tickets are fixed on the first interaction - no callback, no escalation, no “let me ring you back in an hour”. For a hospitality operator this is the one that changes your life. Every time a ticket bounces to a second engineer, you’ve lost somewhere between twenty minutes and two hours of operational clarity. FCR is the single biggest lever on perceived service quality, and almost no MSP publishes it. Ask yours.
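Measuring FCR honestly means being strict about what counts as “first contact”. A sketch of one defensible rule - single interaction, never escalated, never reopened - with hypothetical ticket fields standing in for whatever your PSA records:

```python
def fcr_rate(closed_tickets):
    """Share of closed tickets fixed in a single interaction:
    one engineer, no escalation, no reopen."""
    def first_contact(t):
        return (t["touches"] == 1       # one interaction, start to finish
                and not t["escalated"]  # never passed to a second engineer
                and not t["reopened"])  # stayed fixed
    if not closed_tickets:
        return None
    return 100.0 * sum(first_contact(t) for t in closed_tickets) / len(closed_tickets)
```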

You can see a bit more of how we think about the service desk on our hospitality IT support page, but the headline is that we design the desk around the shape of a trading day, not the shape of an office.

The metrics customers don’t usually see

Those four are the shop window. Behind them, we track another set that matter just as much - they’re the leading indicators that tell us whether the headline numbers will still be true in six months.

Ticket volume trends per site. If a site’s ticket count is rising month over month, something is wrong. Either the kit is ageing out, the staff are changing, or we’ve failed to fix something properly. A flat or falling trend line tells us the estate is stable. A rising one is a conversation to have before it becomes a complaint.
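You don’t need a BI platform to spot this - a monthly export of per-site ticket counts is enough. A sketch, assuming you can pull those counts as a simple list per site (the site names are made up):

```python
def sites_trending_up(monthly_counts, run=3):
    """Flag sites whose ticket count has risen month over month
    for the last `run` consecutive months."""
    flagged = []
    for site, counts in monthly_counts.items():
        recent = counts[-(run + 1):]  # run+1 data points show `run` rises
        if len(recent) == run + 1 and all(b > a for a, b in zip(recent, recent[1:])):
            flagged.append(site)
    return flagged

# sites_trending_up({"site-7": [14, 15, 19, 24], "site-2": [9, 8, 9, 8]})
# -> ["site-7"]
```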

Repeat tickets. We flag any ticket that looks like a repeat of one closed in the last 30 days. Same site, same category, same symptom. High repeat rates are the classic sign of “we fixed the symptom, not the cause”. We review these weekly and they feed straight into our problem management process.
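The matching rule is deliberately blunt: same site, same category, same symptom, inside 30 days. A minimal sketch of that check, with illustrative field names:

```python
from datetime import timedelta

def is_repeat(new_ticket, recently_closed, window_days=30):
    """True if a ticket with the same site, category and symptom
    was closed in the `window_days` before this one was opened."""
    cutoff = new_ticket["opened_at"] - timedelta(days=window_days)
    return any(
        old["site"] == new_ticket["site"]
        and old["category"] == new_ticket["category"]
        and old["symptom"] == new_ticket["symptom"]
        and cutoff <= old["closed_at"] < new_ticket["opened_at"]
        for old in recently_closed
    )
```

Blunt matching throws up the odd false positive, and that’s fine - we’d rather review ten borderline pairs than miss the printer that fails every fortnight.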

Ticket escalation rate. How often does a ticket need a second or third engineer to close it? Some escalation is healthy - you want juniors to escalate when they should. Too much and you’ve got a training problem, a complexity problem, or a staffing problem. We want this number under 15%.

Engineer tenure. Average time our service desk engineers have been with us. I mention this because it’s the one number you can’t fake and you can’t shortcut. Engineers who’ve been supporting the same estates for two, three, five years know things that no runbook captures. If your MSP has 40% annual staff churn, nearly half the people supporting you each year are learning your business from scratch. Meet the people who’d actually support you on our about page.

Proactive incidents prevented. Every time our monitoring catches something before the customer notices - a failing switch, a misbehaving WAN link, a drive filling up - we log it as a prevented incident. It’s the positive counterpart to the ticket count: work we did that the customer never had to raise a ticket for. This is one of the best arguments for a properly monitored managed network over break-fix, and it’s an easy number to show a finance director.

Why averages lie - and why the 95th percentile tells the truth

Here’s the uncomfortable bit. If I told you our average response time was 30 minutes, you’d probably think “fine, good enough”. But average is a terrible statistic for a service desk, especially in hospitality.

Imagine this: ninety tickets a week are picked up in under five minutes. Eight tickets take around an hour. Two tickets take four hours because they landed at 7:30pm on a Saturday and the queue got swamped. Run the maths and the average is still around 14 minutes - comfortably inside the 30-minute headline. But those two four-hour tickets? They’re the Saturday night service where the manager couldn’t take cards for half the evening.

That’s why we track the 95th percentile on everything: P95 response time, P95 pickup, P95 time-to-resolve. The P95 is the value that 19 tickets out of 20 come in under - the answer to “how bad is it when it’s bad?” - and in hospitality, how bad it is when it’s bad is what people remember. Nobody writes a Google review about the week everything worked. They write it about the Saturday they couldn’t take payments.
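You can check the worked example above with nothing more than Python’s standard library - same hundred tickets, two very different stories:

```python
import statistics

# The week from the example: 90 quick tickets, 8 slow ones, 2 disasters.
response_minutes = [5] * 90 + [60] * 8 + [240] * 2

mean = statistics.mean(response_minutes)                 # 14.1 minutes
p95 = statistics.quantiles(response_minutes, n=100)[94]  # 95th percentile

print(f"mean: {mean:.0f} min, P95: {p95:.0f} min")  # mean: 14 min, P95: 60 min
```

The average says the desk is flying; the P95 says one ticket in twenty waits an hour or worse. The P95 is the number your Saturday duty manager actually experiences.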

Ask your MSP for their P95 numbers. If they don’t track it, or they can’t produce it, you now know what “we provide great service” actually means.

What a good monthly service review looks like

Every customer of ours gets a monthly service review. Not a quarterly one, not a “we’ll dial in if there’s a problem” one. Monthly. Here’s what’s in it:

  • Tickets opened and closed - by site, by category, by priority. Trended against the previous three months.
  • Top five incident categories - so you can see where pain is concentrated. EPOS printer jams? Network drops at site seven? Payment terminals? We tell you where the weight is.
  • SLA breaches - every single one, with a named reason. No “we were busy”. An actual cause.
  • Root cause analysis on P1/P2 incidents - what happened, why, what we’ve changed so it doesn’t happen again.
  • Proactive work delivered - patching, firmware, monitoring changes, documentation updates.
  • Recommendations - the things we think need investment or attention in the next 90 days.

The review takes about an hour. It’s not a sales meeting. It’s the document you use to decide whether we’re earning our retainer.

The questions to ask your current MSP

If you’re trying to work out whether your incumbent is measuring anything meaningful, these are the questions I’d put in front of them at your next review:

  1. What’s your CSAT score, how is it collected, and what’s the response rate?
  2. What’s your P95 response time - not the average?
  3. What percentage of our tickets were resolved on first contact last month?
  4. How many of our tickets last quarter were repeat tickets from the previous 30 days?
  5. What’s the average tenure of the engineers who actually work on our estate?
  6. How many incidents did your monitoring prevent last month that we never saw?
  7. Can we see a trend chart of ticket volume per site for the last 12 months?
  8. What’s your ticket escalation rate, and is it rising or falling?

If the answer to most of those is “I’d have to come back to you on that”, you have your answer. Good MSPs know their numbers because they look at them every week. They’re not secret. They’re just work.

Transparent by default

Our position at CloudMatters is simple: we show customers the numbers whether they ask for them or not. Monthly reviews, a shared dashboard, open escalation paths to me as Operations Director. If we’re having a bad month, you’ll know before you ask. If we’re having a good one, we’ll still be looking for what we could do better.

This isn’t because we’re saints. It’s because measurement is the only honest way to run a service business. You can’t improve what you don’t count, and you can’t charge a retainer for something you won’t be held to.

If you’re reviewing your IT support and you want to see what transparent service measurement actually looks like in a hospitality context, have a look at how we support hospitality operators or drop me a line. I’m happy to walk through our dashboards with you - including the numbers we’re not quite so proud of.