Ransomware in hospitality is up again this year. The pattern we see at CloudMatters is consistent: a mid-sized group, a busy Friday service, a Saturday morning phone call, and a general manager who has never so much as read an incident response plan staring at a black till screen while the first coffees of the day go cold on the pass. The technology varies. The human response almost never does, because there is no plan to follow.
This post is a starting template. It is deliberately written in the form of a runbook - the kind of document you can print, laminate, and pin to the back of the office door. It will not replace a tailored plan written against your actual systems, and it will not substitute for tested backups or a proper cyber security posture. But if the worst happens tomorrow morning, and the only thing standing between your group and a very bad week is a duty manager with a phone in her hand, this is the sequence she needs to follow.
Set a timer at minute zero. Work through it.
Minute 0-5: confirm and contain
Before you do anything else, confirm what you are actually looking at. Not every screen of gibberish is ransomware. A failed Windows update, a corrupted POS database, a tripped RCD in the server cupboard, or a DNS outage can all look apocalyptic at six in the morning and turn out to be nothing. Look for the specific signals: files renamed with an unfamiliar extension, a ransom note text file on the desktop, an on-screen demand in English and a cryptocurrency wallet address, folders that were fine yesterday now showing padlock icons or strings of hexadecimal. If one machine has those signs and the others do not, you have an incident. If every machine in the building has gone dark at once with no ransom note, you probably have a network or power problem - check that first.
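If someone technical is on site with a known-clean way to look at a suspect machine's filesystem, a few lines of script can make that check systematic. What follows is a minimal sketch, not a detection tool: the extensions, note filenames, and scan root are hypothetical illustrations, and the script must only ever read, never write.

```python
# triage_scan.py - a quick, read-only look for the ransomware signals above.
# The extensions and note filenames are illustrative examples only,
# not a definitive indicator list.
import os
from pathlib import Path

SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".crypt"}          # hypothetical
RANSOM_NOTE_NAMES = {"readme.txt", "how_to_decrypt.txt", "restore_files.txt"}

def scan(root: str, limit: int = 20) -> list[str]:
    """Walk the tree and collect up to `limit` suspicious paths."""
    hits: list[str] = []
    for dirpath, _dirs, files in os.walk(root):   # os.walk skips unreadable dirs
        for name in files:
            path = Path(dirpath, name)
            if path.suffix.lower() in SUSPECT_EXTENSIONS:
                hits.append(f"renamed file:         {path}")
            elif name.lower() in RANSOM_NOTE_NAMES:
                hits.append(f"possible ransom note: {path}")
            if len(hits) >= limit:
                return hits
    return hits

if __name__ == "__main__":
    findings = scan(r"C:\Users")        # read-only: never modify the machine
    if findings:
        print("Signals found - treat this machine as compromised:")
        print("\n".join(findings))
    else:
        print("No obvious file-level signals - check power and network causes first.")
```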
Once you have confirmed, contain. The instinct is to switch the machine off. Do not. Powering the device down can destroy exactly the volatile evidence - memory contents, active processes, encryption keys still resident in RAM - that a competent incident responder will need later. Instead, isolate it. Unplug the ethernet cable. If it is on WiFi, turn WiFi off at the device, not at the access point. Leave the machine running, leave it logged in, and step away from the keyboard. The goal in these first five minutes is simple: stop the spread, preserve the evidence, and do no harm.
Minute 5-15: escalate
This is not a ticket. Do not log into your MSP’s web portal, fill out a form, and select “High” from a dropdown. Every managed service provider worth the retainer has a 24/7 incident line - a phone number that rings a human who has the authority to scramble an engineer. If you do not know that number by heart, you should know where it is written down. Ours is on the first page of every client runbook and on a sticker on the server cabinet. Call it.
While your MSP is being mobilised, make two more calls. The first is to your designated internal security lead, if you have one, or to whoever in the business owns IT risk - often an operations director or a finance director. The second is to the managing director or CEO. They do not need to do anything in the first hour except know. Ransomware incidents have a habit of becoming very public very quickly, and the worst possible position for a chief executive is to find out about their own breach from a journalist or a customer on Twitter.
One thing you must not do in this window: communicate with the attacker. Do not click the ransom note. Do not visit the Tor link. Do not start a negotiation. And under no circumstances should anyone - not the GM, not the MD, not a panicked finance director - authorise a payment. Payment decisions, if they are ever made at all, happen hours or days later, under legal and insurer supervision, with specialist negotiators involved. In the first hour, the only correct answer to “should we pay” is “we will decide that later”.
Minute 15-30: contain further
With your MSP on the line and senior leadership informed, widen the containment. Your network engineer - ours sits within our managed network team - will start segmenting the affected VLAN from the rest of the estate, blocking east-west traffic at the firewall, and pulling logs from switches and access points to understand what has touched what. If you run a flat network with everything on one subnet, this is harder and slower, and you will learn expensive lessons about segmentation in the coming week.
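If you are waiting on the responders and happen to have a plain-text log export to hand, even a crude filter gives you a first picture of what the infected host has been talking to. A rough sketch under stated assumptions: the IP address and export filename are hypothetical placeholders, and your MSP's SIEM or EDR will do this far more thoroughly.

```python
# lateral_check.py - first-pass filter over an exported firewall/switch syslog
# for any mention of the isolated host. COMPROMISED_IP and LOG_EXPORT are
# hypothetical placeholders; this is orientation, not forensics.
COMPROMISED_IP = "192.168.10.45"     # the isolated POS terminal, say
LOG_EXPORT = "firewall_syslog.txt"   # plain-text export from your firewall

with open(LOG_EXPORT, encoding="utf-8", errors="replace") as log:
    touched = [line.rstrip() for line in log if COMPROMISED_IP in line]

print(f"{len(touched)} log lines mention {COMPROMISED_IP}; first 50:")
for line in touched[:50]:
    print(line)
```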
In parallel, look for lateral movement. Has the ransomware reached the file server? The back-office PCs at other sites? The domain controller? Crucially, has it reached your backup target? Ransomware families in 2026 are aggressive about hunting and encrypting backups - any backup that is online, writable, and reachable from a compromised host should be assumed at risk until proven otherwise. Immutable, offline, or cloud-isolated backups are a different story, which is why we insist on them.
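That "reachable from a compromised host" assumption can be tested concretely from a known-clean machine: if an ordinary workstation can reach and write to the backup target, assume the compromised one could too. A sketch along those lines, with a hypothetical hostname, port, and share path standing in for your own backup target:

```python
# backup_exposure_check.py - run from a known-clean machine. Hostname, port,
# and share path are hypothetical placeholders.
import socket
from pathlib import Path

BACKUP_HOST = "backup01.example.internal"    # hypothetical
BACKUP_PORT = 445                            # SMB; swap for your backup protocol
MOUNTED_SHARE = Path(r"\\backup01\nightly")  # hypothetical UNC path

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if reachable(BACKUP_HOST, BACKUP_PORT):
    print("Backup host reachable on the network - online backups are in scope.")
    probe = MOUNTED_SHARE / "write_probe.tmp"
    try:
        probe.write_text("probe")
        probe.unlink()
        print("Share is WRITABLE from here - treat these backups as at risk.")
    except OSError:
        print("Share is not writable from here - some protection is in place.")
else:
    print("Backup host unreachable - a good sign for isolated or offline copies.")
```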
Preserve what you can. If your MSP uses an EDR platform, they will already be capturing forensic artefacts. If not, do not start copying files around or running antivirus scans - you will trample the evidence. Leave the affected machines exactly as they are, isolated but powered on, and let the responders work.
Minute 30-60: assess and plan
By the half-hour mark, you should have a rough answer to three questions. What is affected? Can we still trade? What is our next four hours?
Scope first. List the systems you know are compromised, the systems you suspect, and the systems you have positively verified as clean. POS terminals, KDS screens, reservation system, CRM, payroll, back-office file shares, email, CCTV, door entry, building management - walk through the list and mark each one. This list will be wrong and will change, but you need a starting map.
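However you keep this map - whiteboard, paper, spreadsheet - it only needs three states per system. A trivial sketch with illustrative system names, just to show the shape:

```python
# scope_register.py - the starting map described above, as the simplest
# possible data structure. Names and statuses are illustrative examples.
from enum import Enum

class Status(Enum):
    COMPROMISED = "compromised"
    SUSPECTED = "suspected"
    VERIFIED_CLEAN = "verified clean"

register: dict[str, Status] = {
    "POS terminals (site 1)": Status.COMPROMISED,
    "Back-office file share": Status.SUSPECTED,
    "Reservations (cloud)":   Status.VERIFIED_CLEAN,
    "Payroll":                Status.SUSPECTED,
    "CCTV / door entry":      Status.SUSPECTED,
}

for status in Status:
    names = [name for name, s in register.items() if s is status]
    print(f"{status.value.upper()}: {', '.join(names) or 'none'}")
```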
Then trading. Can you take payments? If your card terminals are standalone rather than POS-integrated, you can often fall back to manual order-taking and standalone chip-and-PIN. Can you take bookings? If your reservations system is cloud-based and the attacker has not compromised your Microsoft 365 tenant, probably yes, from a phone or a personal device. Can you open the kitchen? If the KDS is down but the printers work, you are back to tickets on a rail. The point of this exercise is not heroism. It is to decide, deliberately, whether to open for service today, open late, or close and send staff home. That decision belongs to operations, not to IT, but it needs IT’s honest input.
Finally the four-hour plan. Who is drafting the holding statement for staff? Who is ringing the insurer? Who is preparing the ICO notification - remember that the 72-hour clock under UK GDPR started the moment you became aware, not the moment you finished investigating. Who is handling the phone calls from suppliers, delivery drivers, and the pub next door asking why your lights are out? Assign each of these to a named person before the hour is up.
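The deadline arithmetic is worth doing explicitly and writing on the whiteboard. A two-line example with an illustrative awareness time:

```python
# ico_clock.py - the UK GDPR window runs from awareness, not from the end of
# the investigation. The timestamp below is illustrative.
from datetime import datetime, timedelta

became_aware = datetime(2026, 3, 7, 6, 15)   # Saturday 06:15 - first confirmation
ico_deadline = became_aware + timedelta(hours=72)

print(f"Aware at:     {became_aware:%a %d %b %Y, %H:%M}")
print(f"ICO deadline: {ico_deadline:%a %d %b %Y, %H:%M}")
```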
Minute 60+: formal incident response
Once the first hour is over, the pace changes. Forensic work begins in earnest - imaging affected machines, analysing logs, working out how the attacker got in and how long they were inside. Recovery planning begins against known-good backups, with careful validation before anything is restored to production. Law enforcement notifications go to the NCSC and Action Fraud. Your cyber insurer is formally notified and assigns a breach coach, usually a specialist solicitor, who will steer the legal and regulatory response from that point on. If personal data may have been affected, a customer communications plan is drafted and reviewed by legal before anything is sent.
None of this is fast. A mid-sized hospitality breach typically takes days to contain, a week or two to recover from, and months to fully close out. Your job in hour one is simply to make sure hour two, and hour twelve, and day three, are not made worse by decisions taken in panic.
What you need in place before this happens
Reading the above and thinking “we would never manage any of that on a Saturday morning” is the correct reaction. Nobody does, first time, without preparation. The things that separate the groups who survive from the groups who do not are almost all in place before the incident:
- Tested backups. Not just configured - actually restored, end to end, within the last ninety days, with the restore time measured. There is a sketch of that drill after this list.
- A documented asset inventory. You cannot protect, segment, or recover what you do not know exists.
- A written incident response plan tailored to your estate, with named roles and current phone numbers. If you do not have one, a proper business continuity engagement will produce it.
- Your MSP’s 24/7 incident line, written down in at least three places including one that does not require a working computer.
- Cyber insurance with a named breach coach and a clear notification procedure.
- A written communications plan covering staff, customers, suppliers, and press.
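On the first item, "tested" has a specific meaning: restored end to end, checksummed against the source, and timed. A sketch of that drill follows, with hypothetical paths and your backup tool's restore command left as the step it is:

```python
# restore_drill.py - the quarterly backup test from the first item above.
# Paths are hypothetical; run this against a restore target, not production.
import hashlib
import time
from pathlib import Path

SOURCE_DIR = Path("/srv/pos-data")        # hypothetical live data set
RESTORE_DIR = Path("/tmp/restore-test")   # hypothetical restore target

def checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

start = time.monotonic()
# ... invoke your backup tool's restore into RESTORE_DIR here ...
elapsed = time.monotonic() - start

mismatches = []
for src in SOURCE_DIR.rglob("*"):
    if not src.is_file():
        continue
    restored = RESTORE_DIR / src.relative_to(SOURCE_DIR)
    if not restored.is_file() or checksum(src) != checksum(restored):
        mismatches.append(str(src.relative_to(SOURCE_DIR)))

# Files changed since the backup ran will legitimately differ; judge the list.
print(f"Restore took {elapsed:.0f}s; {len(mismatches)} files missing or differing.")
```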
Every one of our IT support for hospitality engagements builds these in as standard, because we have seen what happens when they are missing.
The CloudMatters IR capability
When a client calls our incident line, a duty engineer answers inside three rings, twenty-four hours a day, seven days a week. Within fifteen minutes they are joined by a senior responder. Within the hour, if the incident warrants it, we have a containment team working against your estate, a network engineer at the firewall, and an account director on the phone to your MD. We do this from our office in W1T, and we do it for London hospitality operators because hospitality is what we know. We are not a generalist MSP pretending to know the sector. We are the team who have done this before, on a Friday night, in the middle of service, and who know that the first hour is the one that matters most.
If you do not currently have a plan, or you have one but have never tested it, that is the conversation to have this week - not next quarter. Talk to us about cyber security and incident response, and let us help you write the runbook before you need it.