Most hospitality operators I meet have never experienced genuinely good IT support. That isn’t a slight - it’s a structural problem with the market. The reference point for “what IT support is” is whatever they happen to have right now: a generalist MSP that takes four hours to answer a ticket, a freelancer who knows the EPOS box and not much else, or a head office IT person juggling forty sites and a personal phone that never stops ringing. When that’s your baseline, you don’t ask for better, because you don’t know better exists.

I run operations at CloudMatters. My job is to make sure the service desk, the field engineers, the project team and the account managers are doing the right things in the right order, in the right amount of time. So this post isn’t a sales pitch. It’s a walk through five scenarios that actually happen in hospitality, and a description of what should happen on the inside of a good MSP relationship versus a bad one. If you read this and recognise your current setup in the “bad” column, that’s useful information.

Scenario 1: It’s 8pm on a Friday in Mayfair and a till has gone down

Eighty covers on the books, a private dining room mid-service, and the front-of-house terminal at station two is showing a black screen. The manager has rebooted it. Nothing. They’ve called your IT support line.

Here’s what should happen in the next ten minutes.

Minute 0 to 2. The call is answered by a human in the UK who knows your estate. Not “press 1 for hardware, press 2 for software” - answered. The engineer can see, on screen, that the call is coming from your Mayfair site, that you run a particular EPOS version on a particular till model, that you have four terminals on site, and that station two dropped off the network monitoring twenty seconds ago. They already know which till before the manager has finished saying which till.

Minute 2 to 5. The engineer triages: is the till offline at the network layer or is it the EPOS application that’s hung? Because we monitor the network and the application separately, this question is answered in seconds, not by asking the manager to look at the back of the unit. In this case it’s the application - the till is pingable, the EPOS service has crashed. The engineer restarts the service remotely. Forty seconds. Station two is back. Total elapsed: under five minutes from the manager picking up the phone.
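
For the technically minded, that triage step is simple enough to sketch. Here’s a minimal illustration in Python of the network-versus-application check described above - the hostname, service name and SSH access are hypothetical placeholders, not a description of our production tooling:

```python
import subprocess

TILL_HOST = "till-02.mayfair.example.internal"  # hypothetical till hostname
EPOS_SERVICE = "EposTerminalService"            # hypothetical Windows service name

def is_pingable(host: str) -> bool:
    """One ICMP echo with a two-second timeout (Linux-style ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def restart_epos_service(host: str, service: str) -> None:
    """Restart the EPOS service remotely via SSH and the Windows service
    controller. Assumes key-based SSH access to the till is in place."""
    subprocess.run(["ssh", host, f"sc stop {service}"], check=False)
    subprocess.run(["ssh", host, f"sc start {service}"], check=True)

if __name__ == "__main__":
    if not is_pingable(TILL_HOST):
        print("Offline at the network layer: follow the network runbook.")
    else:
        print("Till is pingable, so treat it as an application hang.")
        restart_epos_service(TILL_HOST, EPOS_SERVICE)
        print("EPOS service restarted.")
```

The point isn’t the script; it’s that the “network fault or application hang” decision is made by tooling in seconds, not by a manager crawling behind the till.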

Minute 5 to 10. The engineer doesn’t hang up. They confirm with the manager that the till is taking orders, that the printers are firing, that the card terminal is paired. They log the incident, flag it for root cause analysis on Monday morning, and tell the manager exactly what happened in plain English so the manager can tell the GM. Then they ask if anything else needs attention while they’re on the line.

What bad looks like: the call rings out for ninety seconds, gets picked up by an answering service, a ticket is logged, an on-call engineer phones back fifteen minutes later, asks the manager to read the serial number off the back of the till, talks them through a reboot they’ve already done, and forty minutes in suggests sending an engineer in the morning. By which point the kitchen is on paper, two tables have walked, and the GM has lost the evening.

The difference isn’t heroics. It’s tooling, monitoring, ownership of your estate, and a service desk model that treats a Friday at 8pm as the busiest moment of the week, not an inconvenience. Read more about how we structure this on our hospitality IT support page.

Scenario 2: You’re opening in Shoreditch in six weeks

You’ve signed heads of terms. The lease isn’t fully done. You’re already worrying about kitchen design, hiring a head chef, and getting the licence application moving. Here’s what your MSP should be doing right now, before you’ve even asked.

In week one after heads of terms, your account manager should already be on a call with you scoping the IT brief: how many covers, how many terminals, what EPOS, what KDS, what payment provider, what guest WiFi expectations, whether you’re integrating with an existing group reservations platform. By the end of that call there should be a draft bill of materials, an indicative cost, and a project plan with the long-lead items already flagged.

The long-lead items are the ones that bite operators every time. Business broadband installations from Openreach can take ten to twelve weeks if you need a new line pulled. Static IPs, leased lines, even a basic FTTP install can slip. A good MSP places that order the moment the lease is signed, not the week before opening. Cabling needs to happen before the kitchen fit goes in, because once the stainless steel is bolted down you cannot run Cat6 anywhere sensible. The KDS screens need to be specified before the plumber arrives, because the data points need to be in the right place on the wall.

Three weeks before opening, the network kit should be staged in our office: configured, labelled, and tested. One week before opening, an engineer is on site doing the install, the soak test, and the user training. On opening night, an engineer is either on site or on standby, monitoring the network in real time, ready to react before the GM has noticed anything is wrong.

Bad looks like: you mention the new site three weeks before opening, your MSP says “we’ll get someone out to survey”, the broadband order is placed two weeks before opening, the KDS screens arrive the day before, and on opening night nobody from IT is reachable because it’s a Saturday. I have seen this exact sequence cost an operator their soft launch.

Scenario 3: You’ve just acquired three sites and nothing matches

The deal closes Monday. By Wednesday you’ve discovered the three sites run two different EPOS systems, three different broadband providers, an unknown number of WiFi access points (some still using the default admin password), and a back office PC in one site that nobody has the login for. There’s no asset register. There’s no documentation. The previous IT person was a friend of the owner who has now stopped answering messages.

What good looks like in week one: your MSP sends two engineers to do a physical audit of all three sites within five working days. They photograph every cabinet, label every cable, identify every device on the network, recover or reset every administrative credential, and produce an asset register with serial numbers, warranty status, and end-of-life dates. They flag immediate risks - the AP with the default password, the unpatched Windows 7 machine in the back office, the open guest WiFi that’s bridged to the till network.
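
To make “identify every device on the network” concrete, here’s a rough sketch of the kind of first-pass discovery that seeds an asset register. The subnet and output file are made up, and a real audit layers ARP/MAC lookups, SNMP and the physical walk-round on top:

```python
import csv
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

SITE_SUBNET = "192.168.10.0/24"       # hypothetical site subnet
REGISTER = "asset_register_seed.csv"  # hypothetical output file

def responds(ip: str) -> bool:
    """One echo request with a one-second timeout (Linux-style ping flags)."""
    return subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                          capture_output=True).returncode == 0

hosts = [str(ip) for ip in ipaddress.ip_network(SITE_SUBNET).hosts()]
with ThreadPoolExecutor(max_workers=64) as pool:
    alive = list(pool.map(responds, hosts))

with open(REGISTER, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ip", "device", "serial", "warranty_end", "eol_date"])
    for ip, up in zip(hosts, alive):
        if up:
            # device, serial, warranty and EOL get filled in on the walk-round
            writer.writerow([ip, "", "", "", ""])

print(f"Seeded {REGISTER} from {SITE_SUBNET}")
```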

By the end of week one you should have a one-page risk summary, a remediation plan with priorities, and a recommendation on standardisation: do we lift everyone onto your existing EPOS, do we run two stacks for now, what’s the migration sequence, what does each option cost, what’s the risk of doing nothing? You should be able to walk into your next board meeting with a clear answer to “what did we just buy and what does it need?”

Bad looks like: your MSP says “send us the details when you have them” and waits for you to chase. Six weeks later the new sites are still running on the old shadow IT setup, and you still don’t have a password for the back office PC.

Network standardisation is one of the things we do most often after acquisitions. Our managed network approach is built around getting a multi-site estate onto a single, monitored, documented footprint as quickly as the operator can absorb the change.

Scenario 4: A supplier breach has exposed guest data

You receive an email from your reservations platform vendor at 4pm on a Tuesday. They’ve had a security incident. Some guest data may have been accessed. The ICO has been informed. They will share more by Friday.

What should happen in the next 24 hours.

Your MSP - assuming they look after your security posture - should already be on the phone within the hour. Not waiting for you to forward the email. They should be asking three questions: what data did the supplier hold for you, what integration do they have into your environment, and have you seen any anomalous activity on your own systems in the last 30 days. They should be pulling logs, checking for unusual access patterns, and making sure that the integration credentials between your systems and the breached supplier are rotated immediately.
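
To illustrate the anomalous-activity check, here’s a minimal sketch that scans a 30-day window of access logs for the supplier’s integration account appearing from unexpected source addresses. The account name, log format and IP allow-list are all hypothetical:

```python
from datetime import datetime, timedelta

INTEGRATION_ACCOUNT = "svc-reservations-api"         # hypothetical service account
EXPECTED_SOURCES = {"203.0.113.10", "203.0.113.11"}  # supplier's documented IPs
LOG_FILE = "access.log"
# Assumed line format: 2024-05-01T18:22:03 svc-reservations-api 198.51.100.7 LOGIN_OK

cutoff = datetime.now() - timedelta(days=30)

with open(LOG_FILE) as f:
    for line in f:
        timestamp, account, source_ip, event = line.split()
        when = datetime.fromisoformat(timestamp)
        if when < cutoff or account != INTEGRATION_ACCOUNT:
            continue
        if source_ip not in EXPECTED_SOURCES:
            print(f"ANOMALY {timestamp}: {account} from {source_ip} ({event})")
```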

Within 24 hours you should have: a written summary of your exposure, a rotation log showing every credential and API key that’s been changed, a guest comms draft ready for your marketing team to send if it becomes necessary, and a clear position on whether you need to make your own ICO notification. You should not be Googling “do I need to tell the ICO” at 11pm.

Bad looks like: you forward the email to your IT person, they say “let me know what the supplier comes back with on Friday”, and you spend the rest of the week feeling sick. We cover the proactive side of this in detail on our cyber security page.

Scenario 5: PCI DSS deadline is next week

Your card acquirer has emailed to remind you that your annual PCI self-assessment questionnaire is due in seven days, and if you don’t return it you’ll be charged a non-compliance fee on every transaction.

Good looks like: your MSP already knew the date, already has 80 percent of the answers documented from the previous year, already knows what’s changed in your environment since then, and books a 60-minute call with you to walk through the SAQ together. They produce evidence - network diagrams, segmentation tests, vulnerability scan reports, patch records - without you having to chase. The form is filed two days before the deadline. You sign it, they submit it. Done.
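
One item in that evidence pack, the segmentation test, is easy to illustrate. Run from a device on the guest WiFi, a spot-check along these lines confirms the till network is unreachable - the target addresses and ports are hypothetical:

```python
import socket

# Hypothetical card-data-environment hosts; run this from the guest WiFi
CDE_TARGETS = [
    ("10.20.1.10", 443),  # EPOS back office
    ("10.20.1.20", 22),   # till management
]

for host, port in CDE_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"FAIL: guest network can reach {host}:{port}")
    except OSError:
        print(f"PASS: {host}:{port} unreachable from guest network")
```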

Bad looks like: you find out about the deadline by accident, your MSP says “we don’t really do PCI”, and you spend a weekend trying to fill in a form full of questions you don’t understand.

The common thread

If you read those five scenarios back, the pattern is consistent: anticipation, ownership, specificity, and speed. Anticipation means knowing what’s coming before the operator has to ask. Ownership means picking up a problem and not putting it down until it’s solved. Specificity means knowing your estate, not offering generic advice. Speed means response times measured in minutes, not hours, on the things that matter.

The metrics good MSPs actually report on

Anyone can claim to be fast. Good MSPs publish numbers and let you hold them accountable. These are the ones that actually matter for hospitality (a sketch of how they fall out of raw ticket data follows the list):

  • First response time. How long from the ticket being raised to a human engaging with it. Should be under five minutes during service hours.
  • Time to resolve. How long from ticket raised to problem fixed. Should be tracked by severity, not averaged.
  • First contact resolution. What percentage of issues are fixed on the first call, without escalation. Higher is better.
  • Customer satisfaction. A short post-ticket survey. Real responses, not curated ones.
  • Proactive incidents prevented. Things the MSP’s monitoring caught and fixed before the operator noticed. This is the number most MSPs don’t track because it’s hard. It’s also the most important one.
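
None of these metrics requires exotic tooling. As a minimal sketch, here’s how the first three could be computed from a raw ticket export - the CSV layout and field names are hypothetical, but most PSA exports map onto the same shape:

```python
import csv
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-format timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60

def median(values: list[float]) -> float:
    """Upper median; close enough for a quick sketch."""
    return sorted(values)[len(values) // 2]

# Hypothetical export columns: raised, first_response, resolved, severity, escalated
with open("tickets.csv") as f:
    tickets = list(csv.DictReader(f))

# First response time: ticket raised to a human engaging with it
frt = [minutes_between(t["raised"], t["first_response"]) for t in tickets]
print(f"Median first response: {median(frt):.1f} min")

# Time to resolve: reported per severity, never averaged across the board
by_severity: dict[str, list[float]] = {}
for t in tickets:
    by_severity.setdefault(t["severity"], []).append(
        minutes_between(t["raised"], t["resolved"])
    )
for sev, times in sorted(by_severity.items()):
    print(f"Severity {sev}: median time to resolve {median(times):.1f} min")

# First contact resolution: fixed on the first call, no escalation
fcr = sum(t["escalated"] == "no" for t in tickets) / len(tickets)
print(f"First contact resolution: {fcr:.0%}")
```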

If your current provider can’t show you these numbers, that’s a red flag.

The CloudMatters approach

We are a hospitality specialist by deliberate choice. Our service desk runs to hospitality service hours, our engineers know the EPOS and KDS stacks operators actually use, and our account managers have walked enough kitchens to know where the data point should go. When something breaks at 8pm on a Friday, we are on it before you’ve finished telling us about it.

If anything in this post described what you wish you had - and not what you currently have - we’d like to talk. Have a look at our hospitality IT support page and book a chat. No pressure, no jargon, just a conversation about what good could look like for your estate.