You trade seven days a week. Your card payments, your EPOS, your booking system, your reservations platform, your back-office accounting, your kitchen display, your guest WiFi, and increasingly your kitchen equipment all run through one broadband line going into the back of the building. That line was probably installed by the previous tenant, terminates somewhere unloved behind the dry store, and has never been touched since the day it went live.
You are one wayward digger, one drunk driver into a roadside cabinet, or one Openreach engineer working on the wrong port away from a very bad Saturday night.
This is not a theoretical risk. We deal with it every few months across the hospitality estates we support in London, and the pattern is almost always the same: the operator knew the line was a single point of failure, kept meaning to do something about it, and the something-about-it became urgent at exactly the wrong moment.
This post is about how to do something about it before that moment arrives.
The reality check on single-line uptime
Single-line broadband uptime at a typical London restaurant runs at around 99.5 per cent annually. That number sounds reassuring until you do the maths.
99.5 per cent uptime works out to nearly 44 hours of downtime per year. Forty-four hours. And those forty-four hours are not distributed kindly. They are not at 03:00 on a Tuesday in February. They cluster around storms, around roadworks, around weekend power events, around the busiest trading windows when every other business on the same exchange is hammering the same infrastructure.
In our experience the median outage is somewhere between forty minutes and four hours. Long enough to ruin a service. Short enough that by the time you have given up and started taking cash, the line comes back and you spend the rest of the night reconciling a mess.
If your venue takes 8,000 pounds on a Friday night and you lose two hours of peak trading, the direct cost is in four figures before you count the labour, the comps, the refunds, and the guests who will not come back. We have written about what EPOS and connectivity downtime actually costs in detail. The headline is that the lost sales line is always the smallest number on the page.
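If you want to sanity-check those numbers yourself, the arithmetic is short. Here is a minimal sketch using the 99.5 per cent uptime figure and the illustrative 8,000 pound Friday night from above; the assumption that roughly half the night's takings land in the two busiest hours is ours, for the example only:

```python
HOURS_PER_YEAR = 365 * 24            # 8,760 hours in a non-leap year

uptime = 0.995                       # 99.5 per cent annual uptime
downtime_hours = (1 - uptime) * HOURS_PER_YEAR
print(f"Expected downtime: {downtime_hours:.1f} hours/year")   # ~43.8 hours

# Illustrative cost of a two-hour outage during Friday peak.
# Assumption: roughly half of an 8,000 pound night is taken in the
# two busiest hours of service.
friday_takings = 8_000
peak_share = 0.5
direct_loss = friday_takings * peak_share
print(f"Direct lost sales from a 2-hour peak outage: ~{direct_loss:,.0f} pounds")
```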
A second internet line is not a luxury for groups with money to burn. It is the cheapest insurance policy in your IT stack.
The three resilience options
There is no single right answer here. The right answer depends on the site, the trading volume, the criticality of the systems running over the link, and the budget. But the options narrow down to three patterns we deploy repeatedly.
Option 1: Primary fibre with 4G or 5G failover
The cheapest workable resilience pattern. You keep your existing fibre line as the primary, and you add a 4G or 5G modem - usually built into the firewall or attached as a USB or Ethernet device - as the failover.
When the fibre goes down, the firewall detects the loss of upstream connectivity within seconds and automatically routes traffic over the cellular link. When the fibre comes back, it fails back.
What this is good for: keeping card payments alive, keeping the EPOS talking to its cloud back-end, keeping the booking system online, keeping kitchen tickets flowing. The bandwidth on a decent 5G connection in central London is genuinely usable - often 100 Mbps or more on a good day - but it is not unlimited and the data is metered.
What this is not good for: running a venue full of guests streaming on WiFi. If you let the guest network fail over with everything else, you will burn through a 4G data allowance in an afternoon. Critical traffic only.
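To put a rough number on how quickly guest traffic eats a metered allowance, here is a back-of-envelope sketch. The guest count, per-device streaming rate, and the 100 GB allowance are illustrative assumptions, not measurements from a real site:

```python
# Rough estimate of how quickly guest WiFi burns a metered 4G/5G allowance.
# All inputs are illustrative assumptions.
guests_streaming = 40          # devices actively streaming video or music
avg_rate_mbps = 3              # average per-device throughput in Mbit/s
allowance_gb = 100             # monthly data allowance on the failover SIM

total_mbps = guests_streaming * avg_rate_mbps          # 120 Mbit/s aggregate
gb_per_hour = total_mbps / 8 / 1000 * 3600             # Mbit/s -> GB per hour
hours_to_burn = allowance_gb / gb_per_hour

print(f"Aggregate guest demand: {total_mbps} Mbit/s (~{gb_per_hour:.0f} GB/hour)")
print(f"A {allowance_gb} GB allowance lasts about {hours_to_burn:.1f} hours")
```

On those assumptions the allowance is gone in under two hours, which is why the guest network stays off the failover link.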
This is the right answer for most single-site restaurants. It costs a fraction of a second fibre line and covers the failure modes that actually happen.
Option 2: Primary fibre plus secondary fibre from a different carrier
The full-fat option. Two separate physical fibre circuits, from two different ISPs, ideally taking different routes into the building, ideally terminating on different exchanges.
This is genuinely diverse - for both lines to fail at once you need two unrelated faults to land in the same window, which happens rarely enough that we treat it as an acceptable residual risk. Both lines run at full speed all the time, both can carry guest traffic, and the failover is invisible to anything connected behind the firewall.
What this is good for: hotels, high-volume venues, sites where the cost of downtime justifies the cost of redundancy, and any site where the primary line struggles to keep up with peak demand on its own (because in normal operation you can load-balance across both).
What this costs: anywhere from twice the price of a single line upwards, plus install costs which can be substantial if you are pulling new fibre into the building. For some sites that is an easy yes. For others it is hard to justify.
Option 3: Primary fibre plus SoGEA or FTTC backup from a different exchange
The middle ground. You keep your primary fibre, and you add a copper-based backup - SoGEA or FTTC - from a different provider, deliberately routed via a different exchange where possible.
The backup is slower than the primary. That is fine. It does not need to be fast. It needs to be different. The whole point of resilience is diversity, and a 70 Mbps SoGEA line that does not share infrastructure with your primary fibre is far more valuable than a second fibre line from the same carrier that goes through the same cabinet.
This is a sensible pattern for sites where 4G coverage is poor (basements, central London steel-framed buildings, sites with thick concrete) and where a second fibre install is not commercially viable.
The kit that makes it work
None of this is useful without the right hardware sitting behind it.
You need a router or firewall that supports dual-WAN with automatic failover and configurable health checks. Not just two ports - actual logic that pings something on the public internet, detects the failure, and switches the route in seconds rather than minutes. Most decent business firewalls do this; most consumer routers do not.
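To make that "actual logic" concrete, here is a minimal sketch of the kind of health check and route switch a dual-WAN firewall performs internally. It assumes a Linux-style gateway with two uplinks; the interface names, gateway addresses, and probe targets are placeholders, and a proper business firewall or SD-WAN appliance does all of this natively, so you should not need to script it yourself:

```python
import subprocess
import time

# Illustrative sketch of dual-WAN health-check failover on a Linux gateway.
# Interface names, gateway addresses, and probe targets are assumptions.
PRIMARY = {"iface": "eth0", "gateway": "203.0.113.1"}    # fibre
BACKUP  = {"iface": "eth1", "gateway": "192.168.8.1"}    # 4G/5G modem
PROBES = ["1.1.1.1", "8.8.8.8"]                          # public health-check targets

def link_is_up(iface: str) -> bool:
    """Ping the probe targets out of a specific interface; any reply counts as up."""
    for target in PROBES:
        result = subprocess.run(
            ["ping", "-I", iface, "-c", "1", "-W", "1", target],
            capture_output=True,
        )
        if result.returncode == 0:
            return True
    return False

def set_default_route(link: dict) -> None:
    """Point the default route at the chosen uplink."""
    subprocess.run(
        ["ip", "route", "replace", "default",
         "via", link["gateway"], "dev", link["iface"]],
        check=True,
    )

active = PRIMARY
while True:
    if active is PRIMARY and not link_is_up(PRIMARY["iface"]):
        set_default_route(BACKUP)        # fail over to cellular
        active = BACKUP
    elif active is BACKUP and link_is_up(PRIMARY["iface"]):
        set_default_route(PRIMARY)       # fail back once fibre recovers
        active = PRIMARY
    time.sleep(5)                        # re-check every few seconds
```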
For multi-site groups, this is where SD-WAN earns its keep. A properly configured SD-WAN appliance at each site gives you central visibility across all the WAN links in the estate, application-aware routing (so payments traffic is prioritised over Spotify in the dining room), and policy-based failover that you can manage from one console. We cover the broader picture in our managed network work.
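Conceptually, that policy layer is just a mapping from traffic class to preferred link, failover behaviour, and priority. A toy sketch of the idea follows; the class names and link labels are made up for illustration, and real SD-WAN products express this in their own policy language rather than code:

```python
# Conceptual sketch of application-aware, policy-based routing.
# Traffic classes, link names, and policy fields are illustrative only.
POLICY = {
    "payments":     {"preferred": "fibre", "failover": "cellular", "priority": 1},
    "epos_cloud":   {"preferred": "fibre", "failover": "cellular", "priority": 2},
    "reservations": {"preferred": "fibre", "failover": "cellular", "priority": 3},
    "guest_wifi":   {"preferred": "fibre", "failover": None,       "priority": 9},
}

def pick_link(traffic_class: str, fibre_up: bool) -> str | None:
    """Return the link a flow should use, or None if it should wait for the primary."""
    rule = POLICY[traffic_class]
    if fibre_up:
        return rule["preferred"]
    return rule["failover"]

# During a fibre outage: payments ride the cellular link, guest WiFi waits.
print(pick_link("payments", fibre_up=False))    # -> "cellular"
print(pick_link("guest_wifi", fibre_up=False))  # -> None
```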
The design principles that actually matter
Three rules we apply on every dual-WAN install:
Critical traffic must fail over automatically within seconds. Payments, EPOS, KDS, and reservations cannot wait for someone to notice and reboot a router. The firewall handles it. No human intervention.
Non-critical traffic can take a hit. Guest WiFi does not need to ride the failover link. If the primary is down, it is acceptable for guest WiFi to be unavailable until the primary comes back. This protects the cellular data allowance and keeps the critical traffic flowing.
Monitoring must alert when the secondary is live. This is the rule operators forget. If your failover is silent and seamless, you will run on the backup for three weeks without realising the primary is dead. By the time anyone notices, you have no resilience left and the next outage is a full one. The monitoring system should alert the support team the moment the secondary takes over, so the ISP gets a call within minutes, not weeks.
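As a minimal sketch of what that check can look like, the snippet below assumes the same Linux-style gateway as the earlier example and reports when the default route is sitting on the backup uplink. The interface name and webhook URL are placeholders; in practice this signal comes from the firewall's own telemetry into the monitoring platform rather than from a script like this:

```python
import json
import subprocess
import urllib.request

# Illustrative check: alert if the default route is on the backup uplink.
# The interface name and webhook URL are placeholder assumptions.
PRIMARY_IFACE = "eth0"
ALERT_WEBHOOK = "https://example.com/hooks/network-alerts"

def default_route_iface() -> str:
    """Return the interface currently carrying the default route."""
    out = subprocess.run(
        ["ip", "-json", "route", "show", "default"],
        capture_output=True, text=True, check=True,
    )
    routes = json.loads(out.stdout)
    return routes[0]["dev"] if routes else ""

iface = default_route_iface()
if iface != PRIMARY_IFACE:
    payload = json.dumps({
        "text": f"Site is running on backup WAN ({iface}) - primary appears down."
    }).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```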
Cost ballparks
Numbers vary by site and carrier, but as a rough guide for a single London restaurant:
- 4G or 5G failover hardware and SIM: typically a few hundred pounds for the kit and around twenty to fifty pounds a month for the data plan, depending on volume.
- SoGEA or FTTC backup line: usually thirty to sixty pounds a month plus an install fee.
- Second fibre line from a different carrier: anywhere from a hundred and fifty pounds a month upwards, with install costs that can run into four figures if civils are required.
Across an estate of ten sites the per-site cost falls because the firewall licensing, monitoring, and management overhead is shared.
The mistake we see most often
Operators decide they want resilience, ring up their existing ISP, and ask for a second line. The ISP is delighted to sell them one. It gets installed, it gets plugged in, and everyone feels safer.
Then the cabinet down the road catches fire, both lines go dark at the same moment, and the operator discovers that the “second line” was provisioned over the same physical infrastructure as the first. Same exchange, same cabinet, same fibre route into the building. Zero diversity. The money was wasted the day it was spent.
Diversity is the whole point. Different carrier. Different physical route where possible. Different last-mile technology if you can manage it. If both lines fail in the same incident, you do not have two lines. You have one line with a more expensive bill.
How CloudMatters does this
For the hospitality groups we support, dual-WAN is standard on every site we design. We survey the existing connectivity, identify where the diversity is missing, procure the secondary circuit from a carrier that genuinely takes a different path, install and configure the firewall or SD-WAN appliance, set up the monitoring and alerting, and then watch it from our operations centre 24/7.
When a primary line drops, our team knows about it before the venue does. The ISP gets the call from us, not from a stressed-out general manager. The venue keeps trading. That is the entire point.
Resilient connectivity also has security implications - a properly designed dual-WAN setup needs the firewall rules, segmentation, and PCI controls to apply consistently across both links, not just the primary. We build that in from the start.
If you are running a London restaurant or hotel group on a single broadband line per site, the question is not whether to fix it. It is whether you fix it before or after the next outage.
Talk to us about a managed network design for your estate. We will survey the sites, design the resilience, procure the circuits, and run it for you.