We looked at a year of hospitality service desk tickets from across our customer base. Here are the seven things that caught our attention.

This isn’t a research paper and I’m not going to pretend it is. What it is, is the unvarnished view from a London MSP that spends most of its working week answering phones from restaurant managers, hotel duty teams, multi-site operators and the occasional very stressed head office finance lead at 8pm on a Saturday. We support estates ranging from single-site independents to operators running tens of venues, so the patterns we see across that book of business are reasonably representative of how British hospitality is actually using technology right now - not how the trade press says it is.

I asked our service desk lead to help me pull together what twelve months of tickets across our hospitality customers tell us about where the pain genuinely lives in 2026. Some of it confirmed things we already suspected. Some of it surprised us. All of it is shaping how we’re advising operators to spend their IT budgets over the next eighteen months.

A note on the numbers: I’m deliberately not going to throw spurious decimal places at you. I dislike industry reports that dress aggregated ticket data up as peer-reviewed statistics. What follows are honest characterisations of patterns in our data - “about a third”, “the majority”, “fewer than one in ten” - rather than invented precision.

Finding 1: POS and payment terminal issues still dominate ticket volume

If you’d asked me five years ago whether EPOS would still be the single largest source of tickets in 2026, I’d probably have said no - surely by now the platforms would have matured and the integrations would have settled. They haven’t. Roughly a third of all hospitality tickets in our data over the last twelve months relate to point of sale or payment terminals: tills that won’t take payments, terminals that have lost their pairing, printers that have gone offline mid-service, integrations that have silently dropped overnight.

The reason is simple. POS is the most-touched piece of technology in any venue, it lives in the harshest environment (heat, spills, knocks, cleaning staff unplugging things), and it’s the bit where every other system meets. When something breaks, it breaks loudly and it breaks in front of guests. This is why we treat hospitality IT support as fundamentally different from generic SME IT - the response model has to assume that “we’ll get to it Monday” is never an acceptable answer.

Finding 2: Guest WiFi spikes on Fridays and Saturdays and barely features mid-week

This one made us laugh a bit when we plotted it. Guest WiFi tickets are almost invisible Tuesday through Thursday, then climb sharply on Friday afternoon and peak on Saturday. The pattern is so clean it could be a textbook example of demand-driven failure.

What’s actually happening is that the network only really gets stress-tested when the venue is full. Designs that look fine when the engineer signs off on a Wednesday morning fall over when 200 guests turn up on a Saturday night, all trying to stream, scan menus and post photos. Capacity planning, channel management, AP density - these aren’t sexy topics, but they’re the difference between a venue that quietly works and one that gets one-star reviews mentioning “couldn’t even get the wifi to work”. Most of the operators we onboard have never had a proper site survey done; we usually find a managed network is the single highest-ROI change we can make in the first ninety days.
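If you want a feel for the arithmetic, here’s a deliberately rough sanity check in Python. Every figure in it - devices per guest, concurrency, clients per access point - is an assumption I’ve picked for illustration, not something pulled from our ticket data, and none of it replaces a proper site survey.

```python
# Rough, illustrative guest WiFi capacity sanity check.
# All figures are assumptions for the sketch: real AP counts depend on
# building fabric, channel plan and band steering, which is why a survey matters.

def estimate_access_points(peak_guests: int,
                           devices_per_guest: float = 1.5,
                           concurrency: float = 0.6,
                           safe_clients_per_ap: int = 40) -> int:
    """Estimate APs needed for the Saturday-night peak, not the Wednesday lull."""
    concurrent_devices = peak_guests * devices_per_guest * concurrency
    aps = -(-int(concurrent_devices) // safe_clients_per_ap)  # ceiling division
    return max(aps, 1)

# A 200-cover Saturday service under these assumptions:
print(estimate_access_points(peak_guests=200))  # -> 5 APs, before survey adjustments
```

The point of the sketch isn’t the exact answer; it’s that the design target should be the busiest hour of the week, because that’s the only hour in which anyone will notice.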

Finding 3: Integration failures are under-reported but high-impact

This is the finding I’d most like operators to take seriously. EPOS-to-PMS, EPOS-to-payments, KDS-to-printer, stock-to-EPOS - when these integrations fail, they rarely throw an obvious error. The till keeps taking orders, the kitchen keeps cooking, but the data flowing between systems is wrong, missing, or silently delayed. By the time someone notices, you’ve usually got a reconciliation problem stretching back days or weeks.

In our ticket data, integration failures account for a relatively small slice of raised tickets but a disproportionate share of the engineering hours spent resolving them, and an even bigger share of the after-the-fact “why is our stock count wrong?” conversations. The operators who have invested in monitoring these integrations specifically - not just the underlying systems - catch problems in hours rather than days. The ones who haven’t find out at month end.
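To be clear about what “monitoring the integration specifically” means in practice, here’s a minimal sketch of the idea: check how stale the data flow itself is, rather than whether each system answers a ping. The endpoint URL and JSON field name below are hypothetical stand-ins for whatever your EPOS and PMS vendors actually expose.

```python
# Minimal sketch: watch the data flow, not just whether each system is "up".
# Assumes the sync status endpoint returns an ISO 8601 timestamp with a timezone;
# the URL and field name are hypothetical - substitute your vendors' equivalents.

from datetime import datetime, timezone, timedelta
import requests

MAX_SYNC_LAG = timedelta(hours=2)  # tolerate a quiet afternoon, not a lost day

def check_epos_to_pms_sync(status_url: str) -> None:
    resp = requests.get(status_url, timeout=10)
    resp.raise_for_status()
    last_sync = datetime.fromisoformat(resp.json()["last_successful_sync"])
    lag = datetime.now(timezone.utc) - last_sync
    if lag > MAX_SYNC_LAG:
        # In practice this would raise a ticket or page the on-call engineer.
        print(f"ALERT: EPOS->PMS sync is {lag} behind")
    else:
        print(f"OK: last successful sync {lag} ago")

# check_epos_to_pms_sync("https://example.invalid/epos/sync-status")
```

Run something like that every half hour and the silent week-long drift that ruins a stock count becomes a two-hour blip someone actually gets told about.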

Finding 4: Phishing reports are notably higher than in 2024

We track every reported suspicious email that comes through to us, whether or not it turns out to be malicious. The volume in the last twelve months is meaningfully higher than what we saw across 2024. More importantly, the quality has gone up - fewer obvious “Nigerian prince” attempts, far more convincing supplier impersonation, fake invoice chases, and credential harvesting pages that look pixel-perfect.

The encouraging side of this is that more of the suspicious emails are now being reported by staff rather than clicked. Awareness training and simple report-it buttons in Outlook are doing their job. The discouraging side is that the attackers have noticed hospitality is a soft target with high staff turnover and a lot of finance approvals happening over WhatsApp, and they’re calibrating accordingly. If you haven’t reviewed your cyber security posture since 2024, you are operating against a meaningfully different threat than the one your current controls were designed for.

Finding 5: Connectivity is still a leading cause of lost trading minutes

Despite everything we’ve written over the years about resilience, broadband and connectivity issues remain one of the largest single causes of actual lost trading minutes in the data. Not because the underlying circuits have got worse - they haven’t - but because more of the venue depends on them. EPOS in the cloud, payments routed via IP, stock systems, rota apps, even the music. When the line drops, everything drops together.

The operators who’ve moved to dual-circuit setups with automatic failover almost never appear in this category. The ones still running a single line “because it’s been fine for years” appear in it repeatedly. There’s a clear bifurcation in the data and it almost entirely tracks the resilience decision.
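To make “lost trading minutes” measurable rather than anecdotal, something as simple as the watchdog sketched below will do. The probe targets and interval are illustrative assumptions; the useful part is running it on a device behind your failover, so it records what the tills actually experienced rather than what the ISP status page claims.

```python
# Sketch of a simple line watchdog: probe a couple of independent endpoints
# every minute and record how long the venue was effectively offline.
# Targets and interval are illustrative, not a recommendation.

import socket
import time
from datetime import datetime

PROBES = [("1.1.1.1", 53), ("8.8.8.8", 53)]  # two independent public DNS resolvers
INTERVAL_SECONDS = 60

def line_is_up() -> bool:
    """Consider the line up if any probe accepts a TCP connection."""
    for host, port in PROBES:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            continue
    return False

def watchdog() -> None:
    outage_started = None
    while True:
        if line_is_up():
            if outage_started is not None:
                minutes = (time.monotonic() - outage_started) / 60
                print(f"{datetime.now():%Y-%m-%d %H:%M} line restored after {minutes:.1f} lost minutes")
                outage_started = None
        elif outage_started is None:
            outage_started = time.monotonic()
            print(f"{datetime.now():%Y-%m-%d %H:%M} line down")
        time.sleep(INTERVAL_SECONDS)
```

A month of that log is usually all it takes to turn the dual-circuit conversation from “it’s been fine for years” into a straightforward cost-per-lost-minute calculation.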

Finding 6: The gap between the best-prepared and the worst-prepared has widened

This is the finding that worries me most. When we segment the customer base by the maturity of their IT setup - patching cadence, MFA coverage, backup verification, network design, vendor management - the gap between the top quartile and the bottom quartile is bigger now than it was a year ago.

The well-run estates are getting better. They have fewer tickets per site, fewer P1 incidents, faster recovery when something does go wrong, and they’re spending their IT budget on things that move the needle. The under-invested estates are going the other way. More incidents, longer recoveries, more firefighting, less time for improvement work, and a growing technical debt that eventually demands a painful catch-up project. The middle is hollowing out. There’s no comfortable place to coast any more.

Finding 7: Cyber Essentials is becoming table stakes

A year ago, Cyber Essentials enquiries were mostly driven by operators who’d been told they needed it for a specific contract. In the last twelve months we’ve seen a clear shift: more enquiries overall, more tenders that name it as a requirement, and - this is the new one - more cyber insurance renewals where the broker is asking about it as a precondition or a discount trigger.

If you’re operating in 2026 without Cyber Essentials, you’re increasingly going to find it costs you commercially, not just in security terms. With the April 2026 changes now in force, the time to look at it is now rather than the week before your insurance renewal.

What this tells us about where to invest in 2026 and 2027

If I had to distil the seven findings into a budget conversation, it would go something like this. Spend on the boring things. Resilient connectivity. Properly designed guest WiFi. Integration monitoring. MFA coverage on every account, not just the obvious ones. Backup verification you actually test. Cyber Essentials certification. A service desk that picks up the phone on Saturday night.

What I would not spend on, at least not yet, is the AI-everything pitch decks that landed in every operator’s inbox last year. There’s real potential there, but only on top of fundamentals that are working. Most of the operators we see have plenty of fundamentals to fix first.

A CloudMatters view, briefly

We’re not neutral, obviously - we sell some of the things I’ve described above, and we’d rather you bought them from us than someone else. But the bigger point isn’t who you buy from. It’s that the operators in our data who are pulling ahead are the ones treating IT as an operational system that has to be designed, monitored and improved continuously, not as a pile of boxes you buy once and forget about. The ones falling behind are the ones still treating it as a cost line to minimise.

If the findings above resonated - particularly the gap between best and worst prepared - and you’d like a frank conversation about where your estate sits, that’s the conversation we have most weeks. You can find out more about how we work with hospitality operators on our hospitality IT support page, or just drop us a line. No pitch deck, I promise.