How National Internet Shutdowns Threaten Exchange Liquidity and What Firms Should Do About It


coindesk
2026-02-04 12:00:00
11 min read

Operational playbook for exchanges and custodians to preserve liquidity and compliance during national internet shutdowns. Practical steps and checklists.

When a whole country goes dark: why exchanges and custodians must act now

For traders and institutional ops teams, nothing is more destabilizing than a sudden, country-level internet shutdown. Markets fragment, spreads blow out, withdrawal queues balloon and compliance teams lose visibility. In January 2026, Iran experienced one of the longest nationwide blackouts on record—an acute reminder that state-directed network outages are no longer rare edge cases but operational realities that exchanges and custodians must plan for.

Executive summary — the operational stakes

National internet shutdowns create a cascade of operational risks that directly threaten exchange liquidity and customer trust. The core vulnerabilities are node centralization, single-point KYC processes, concentrated custody signing, and insufficient cross-border liquidity buffers. This piece lays out practical, prioritized measures exchange and custody teams can take to reduce both the likelihood and the impact of these events, from diversified node distribution to emergency signing protocols and KYC continuity options.

Context: the 2025–2026 pattern and why it matters

State-sponsored shutdowns have increased in frequency and duration through late 2025 and into 2026. NetBlocks and other independent monitors reported extended outages in several countries; in January 2026, Iran's nationwide blackout lasted multiple days, disrupting access for tens of millions of users. As governments increasingly treat network control as a security lever, outages are deployed during civil unrest, elections and economic shocks.

"Iran’s shutdowns remain among the most comprehensive and tightly enforced nationwide blackouts we’ve observed, particularly in terms of population affected." — Isik Mater, NetBlocks (paraphrased)

For digital-asset platforms this is not just a connectivity problem. It is a liquidity and legal-risk event. Local orderbooks can become isolated, fiat rails stall, and custodial signing workflows that rely on local HSM access or single-country staff break down. Preparing for this reality is now a board-level operational requirement.

Principles for resilience

  1. Geo-avoidance: avoid concentration of critical infrastructure inside jurisdictions that may be shut down. Consider sovereign-cloud and isolation patterns such as those described for sovereign cloud deployments.
  2. Graceful degradation: design systems that intentionally reduce functionality (read-only, limit orders, trading-only pools) rather than failing catastrophically; a minimal trading-mode sketch follows this list.
  3. Separation of concerns: decouple compliance/KYC control planes from execution/settlement planes where legally permissible.
  4. Red-team constantly: run outage war games on a quarterly cadence and review operational playbooks such as the Operational Playbook 2026 to keep governance aligned.
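To make graceful degradation concrete, here is a minimal sketch of trading modes and the operations each still permits. The mode names and operation labels are illustrative, not any particular exchange's API:

```python
from enum import Enum

class TradingMode(Enum):
    NORMAL = "normal"          # full order types, deposits and withdrawals
    LIMIT_ONLY = "limit_only"  # market orders rejected to avoid fills on stale prices
    READ_ONLY = "read_only"    # order entry disabled; balances and books stay visible
    HALTED = "halted"          # matching stopped; the status page is the only surface

# Operations each mode still allows; anything absent is rejected at the API gateway.
ALLOWED_OPERATIONS = {
    TradingMode.NORMAL:     {"market_order", "limit_order", "cancel", "deposit", "withdraw", "query"},
    TradingMode.LIMIT_ONLY: {"limit_order", "cancel", "deposit", "withdraw", "query"},
    TradingMode.READ_ONLY:  {"cancel", "query"},
    TradingMode.HALTED:     {"query"},
}

def is_allowed(mode: TradingMode, operation: str) -> bool:
    """Gate an incoming request against the current degradation mode."""
    return operation in ALLOWED_OPERATIONS[mode]
```

Defining modes as an explicit allow-list, rather than a pile of ad-hoc feature flags, makes it easier to audit exactly what customers could do during an outage window.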

Node distribution: the foundation of continuity

Node and validator distribution is the first line of defense. When a country’s networks are turned off, a node fleet that is overly concentrated in a single country or autonomous system (AS) will fall silent and take local liquidity with it.

Design targets

  • Run infrastructure across a minimum of three jurisdictions in different regions (e.g., Europe, North America, APAC).
  • Use multiple network providers per site—do not rely only on a single cloud region or ISP.
  • Prefer a mix: major cloud providers, colocation facilities, and bare-metal nodes in regulated financial centers.
  • Track AS-level diversity: set a maximum share of traffic that may route through any single AS (a common target is ≤ 30%) and alert when the fleet drifts past it.
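One way to operationalize that target is a periodic concentration check over the node inventory. This is a minimal sketch; the inventory format, node names and ASNs are hypothetical:

```python
from collections import Counter

# Hypothetical inventory: each node annotated with the autonomous system it routes through.
NODES = [
    {"id": "val-eu-1", "jurisdiction": "DE", "asn": "AS3320"},
    {"id": "val-eu-2", "jurisdiction": "CH", "asn": "AS6830"},
    {"id": "val-na-1", "jurisdiction": "US", "asn": "AS16509"},
    {"id": "val-na-2", "jurisdiction": "CA", "asn": "AS16509"},
    {"id": "val-ap-1", "jurisdiction": "SG", "asn": "AS14061"},
]

def concentration_report(nodes, max_share=0.30):
    """Return (dimension, value, share) tuples where concentration exceeds max_share."""
    total = len(nodes)
    findings = []
    for key in ("asn", "jurisdiction"):
        counts = Counter(n[key] for n in nodes)
        for value, count in counts.items():
            share = count / total
            if share > max_share:
                findings.append((key, value, share))
    return findings

if __name__ == "__main__":
    for key, value, share in concentration_report(NODES):
        print(f"WARNING: {share:.0%} of nodes share {key}={value} (target <= 30%)")
```

Extending the same report to cloud providers and colocation operators catches other correlated-failure dimensions before an outage exposes them.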

Technical patterns and tools

  • Multi-Cloud + Colocation: run validator clusters across at least two different hyperscalers and one independent colo provider to reduce correlated failure. Beware the hidden costs of one-size-fits-all hosting when planning multi-provider architectures.
  • Hot-standby nodes in neutral jurisdictions: pre-provision nodes in neutral countries that rarely deploy wholesale censorship (subject to legal clearance).
  • Out-of-band management: ensure console and BMC access via independent channels (cellular, satellite uplink) to perform triage even if primary networks falter—pair these with resilient power and portability options (see portable power considerations in portable power station comparisons).
  • Edge relays and light clients: deploy edge relays that aggregate global liquidity data and allow remote clients to operate in degraded modes (e.g., read-only orderbooks). Explore edge oracle and relay patterns like those described in edge-oriented oracle architectures.

Satellite and alternative comms — powerful but conditional

Low-Earth orbit (LEO) satellite internet (e.g., Starlink) proved valuable in conflicts and outages from 2022–2025. It is a viable fallback for remote signing centers and key staff. But satellite hardware can be blocked, confiscated or banned. Before deploying, legal and political risk assessments are mandatory.

KYC continuity: keep compliance alive when local systems die

KYC breaks are both a compliance hazard and a user-experience crisis. If local onboarding systems fail, users will seek other exchanges or unsafe alternatives. The goal is to preserve identity verification and remediation capacity while maintaining AML controls.

Architectural approaches

  • Decouple KYC metadata from local endpoints: store encrypted, verifiable KYC attestations in multiple regions or in dedicated secure enclaves that can be accessed outside the impacted country.
  • Use pre-verified KYC tokens: issue cryptographic KYC tokens (signed JWTs or Verifiable Credentials) that grant limited access during an outage window without re-running full identity checks; a token sketch follows this list.
  • Cross-border KYC agents: develop arrangements with licensed partners in neighboring jurisdictions to accept and re-verify customers when local systems are unavailable—combine these arrangements with secure remote onboarding patterns from edge-aware remote onboarding playbooks.
  • Offline evidence capture: allow agents to capture KYC evidence offline and sync to global backends when connectivity returns—ensure chain-of-custody and timestamps. Use offline-first document backup and sync tools to preserve integrity during offline capture.
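For the pre-verified KYC token pattern, a minimal sketch using PyJWT is shown below. The scope names, TTL and symmetric signing key are illustrative assumptions; in production the key would live in an HSM or KMS and an asymmetric algorithm would be preferable:

```python
import time
import jwt  # PyJWT

SIGNING_KEY = "replace-with-key-from-kms"  # illustrative only; do not hard-code keys

def issue_outage_token(customer_id: str, risk_tier: str, ttl_seconds: int = 900) -> str:
    """Issue a narrow-scope, short-TTL access token during a KYC outage window."""
    # Lower-risk retail tiers get withdrawal-only access; higher tiers stay read-only.
    scope = "withdraw_only" if risk_tier == "low" else "read_only"
    now = int(time.time())
    claims = {
        "sub": customer_id,
        "scope": scope,
        "kyc_status": "pre_verified",  # attests to the last completed check, not a new one
        "iat": now,
        "exp": now + ttl_seconds,      # short TTL limits fraud exposure
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_outage_token(token: str) -> dict:
    """Reject expired or tampered tokens; return claims if valid."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```

The same claims could equally be carried in a W3C Verifiable Credential; the JWT form is shown only because it is the simpler primitive.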

Operational rules for KYC during outages

  1. Classify customers by risk tier. High-risk profiles require stricter continuity controls; low-risk retail users may receive read-only or withdrawal-only access.
  2. Implement temporary access tokens with narrow scopes and short TTLs (time-to-live) to reduce fraud exposure.
  3. Log every outage access decision and retain immutable audit trails to satisfy regulators post-event; a hash-chained log sketch follows this list.
  4. Coordinate with sanctions/compliance teams to ensure KYC continuity does not circumvent legal obligations in the affected jurisdiction.
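To make rule 3 concrete, one simple way to get tamper-evidence is an append-only log in which every entry commits to the previous entry's hash. A minimal sketch, with illustrative field names; a production system would also anchor the chain in WORM storage or an external timestamping service:

```python
import hashlib
import json
import time

class OutageAuditLog:
    """Append-only log where each entry commits to the previous entry via its hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, customer_id: str, decision: str, reason: str) -> dict:
        entry = {
            "ts": time.time(),
            "customer_id": customer_id,
            "decision": decision,  # e.g. "withdraw_only_granted"
            "reason": reason,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any post-hoc edit or deletion breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```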

Custody and signing: keeping assets secure and operational

Custody resilience is a particular challenge: cold storage is secure but immobile; hot signing depends on local HSMs and staff. The answer is resilient signing architectures combined with clear emergency authorizations.

Technical options

  • Distributed HSMs & multi-region key shards: use HSM clusters across jurisdictions with robust quorum policies that permit cross-border signing when local shards are inaccessible (see the quorum-reachability sketch after this list).
  • Threshold Signature Schemes (TSS): adopt TSS to avoid single-HSM dependence. TSS enables flexible quorum policies and easier geographic distribution.
  • Emergency approval workflow: pre-authorize a legal and compliance-controlled emergency sign-off process that can be invoked if local custodial nodes are offline for a defined period.
  • Air-gapped secondary signing centers: maintain geographically distributed air-gapped signing sites with documented processes and redundant power/communications (satellite as a last resort).
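Whatever the signing technology, the quorum policy itself should be checked against blackout scenarios before they happen. A minimal sketch, assuming a hypothetical 3-of-5 shard placement:

```python
# Hypothetical key-shard placement for a 3-of-5 threshold signing policy.
SHARDS = {
    "shard-1": "CH",
    "shard-2": "SG",
    "shard-3": "US",
    "shard-4": "IR",
    "shard-5": "DE",
}
THRESHOLD = 3  # m in an m-of-n scheme

def quorum_reachable(offline_jurisdictions, shards=SHARDS, threshold=THRESHOLD) -> bool:
    """True if enough shards sit outside the jurisdictions currently cut off."""
    reachable = [s for s, j in shards.items() if j not in offline_jurisdictions]
    return len(reachable) >= threshold

# A single-country blackout should never block signing with this placement.
assert quorum_reachable({"IR"}) is True
# Two simultaneous outages still leave three reachable shards here.
assert quorum_reachable({"IR", "US"}) is True
```

Running this kind of check across every plausible combination of outages is a cheap way to prove, on paper, that no single jurisdiction can freeze custody operations.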

Never design an emergency signing process that bypasses legal compliance. Prior to enabling cross-border emergency signing, obtain board-level approval and clearances from compliance, legal, and, where necessary, regulators. Document thresholds for invocation: e.g., outage duration, local regulator orders, depth of local orderbook collapse.

Liquidity preservation: keeping markets functional under isolation

The moment local connectivity is lost, isolated orderbooks can suffer extreme volatility and liquidity evaporation. Your goal is to preserve customer access to fair pricing and prevent systemic losses.

Pre-funded liquidity strategies

  • Cross-border prefunding: maintain buffer pools in multiple jurisdictions and in multiple asset classes (stablecoins, major fiat rails, BTC/ETH) to meet withdrawals without relying on local fiat settlements—pair this approach with forecasting and cash-flow toolkits to size buffers accurately. A buffer-sizing sketch follows this list.
  • Liquidity lines with prime brokers: establish pre-negotiated short-term credit facilities with providers that can be drawn during outages.
  • On-exchange internalization: increase internal matching capacity so trades are netted off-chain locally when external liquidity dries up.
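A simple yardstick for sizing prefunded pools is withdrawal coverage days: how long offshore buffers can meet typical outflows without touching local fiat rails. The figures below are purely illustrative:

```python
def buffer_days(prefunded_pools_usd: dict, daily_withdrawals_usd: float) -> float:
    """Days of typical withdrawals the offshore pools can cover on their own."""
    return sum(prefunded_pools_usd.values()) / daily_withdrawals_usd

# Illustrative figures only: pools held outside the at-risk jurisdiction, valued in USD.
pools = {"usd_stablecoin": 40_000_000, "btc": 25_000_000, "eth": 10_000_000}
avg_daily_withdrawals = 9_000_000

print(f"Liquidity buffer: {buffer_days(pools, avg_daily_withdrawals):.1f} days")  # ~8.3 days
```

The same figure feeds directly into the "liquidity buffer days" KPI discussed later in this piece.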

Automated market controls

  • Dynamic circuit breakers: trigger trading mode changes (read-only, limit-only, or halted) based on predefined indicators: depth collapse, spread widening vs global prices, or loss of connectivity to external pricing oracles.
  • Price anchoring: use global aggregated indices from unaffected regions as reference price—adjust spreads dynamically to account for isolation premium.
  • Orderbook throttling: slow the order-matching cadence when volatility spikes to avoid executing trades against stale prices.

Example escalation triggers

  • Loss of >70% traffic in a jurisdiction for >2 hours.
  • Local orderbook depth drops below a set fraction of global average (e.g., <10%).
  • Price divergence >5–10% from global index sustained for >15 minutes.
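These triggers can be wired directly into the circuit-breaker logic. A minimal sketch, with metric names and thresholds chosen to mirror the examples above (tune them to your own risk appetite):

```python
from dataclasses import dataclass

@dataclass
class RegionMetrics:
    traffic_loss_pct: float      # share of usual traffic lost from the jurisdiction (0-1)
    outage_minutes: float        # how long the loss has persisted
    depth_vs_global: float       # local orderbook depth as a fraction of the global average
    price_divergence_pct: float  # |local - global index| / global index (0-1)
    divergence_minutes: float    # how long the divergence has been sustained

def trading_mode(m: RegionMetrics) -> str:
    """Map escalation triggers to a degraded trading mode for the affected region."""
    if m.traffic_loss_pct > 0.70 and m.outage_minutes > 120:
        return "read_only"
    if m.depth_vs_global < 0.10:
        return "limit_only"
    if m.price_divergence_pct > 0.05 and m.divergence_minutes > 15:
        return "limit_only"
    return "normal"
```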

Monitoring and early-warning systems

Early detection buys time. Implement network and social-signal monitoring to detect government blackouts, ISP throttling and BGP anomalies.

Data sources and tooling

  • Network telemetry: BGP route-monitoring, RIPE Atlas probes, ThousandEyes and active probes from multiple cities inside the jurisdiction. For procurement and incident-response planning see the public procurement news brief for incident response buyers.
  • Third-party monitors: NetBlocks, OONI, local measurement projects that publish outage indicators.
  • Social channels and webhook pipelines: ingest local social-media signals, financial news wires, and regulator notices into a centralized incident dashboard.
  • Automated triggers: tie these feeds into automated runbooks that alert incident commanders and begin pre-authorized mitigation steps—use SOC-grade tooling and playbooks such as those evaluated in the StormStream Controller Pro review for SOC analysts to accelerate detection and response.
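Before any automated runbook fires, detection should be corroborated across independent feeds; the 24-hour playbook below requires at least two. A minimal sketch with hypothetical signal names:

```python
# Hypothetical boolean signals from independent monitoring feeds.
signals = {
    "bgp_route_withdrawal": True,   # routes for local ASNs withdrawn
    "ripe_atlas_probe_loss": True,  # in-country probes unreachable
    "netblocks_alert": False,       # third-party outage confirmation
    "local_social_reports": False,  # user reports of connectivity loss
}

def outage_confirmed(signals: dict, min_sources: int = 2) -> bool:
    """Escalate to the incident runbook only when at least min_sources feeds agree,
    which keeps a single noisy feed from triggering a false declaration."""
    return sum(signals.values()) >= min_sources

if outage_confirmed(signals):
    print("Declare Level 1 outage and page the incident commander")
```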

Communications and customer handling during an outage

Communications are the difference between calm compliance and customer panic. Assume that local users cannot reach your normal channels. Build alternative communication paths and message plans.

Channels and templates

  • Out-of-band notifications: use email, international SMS (if mobile still works), push notifications via apps that use alternate routing, and public status pages hosted in unaffected regions.
  • Localized templates: pre-write messages for every contingency in local languages. Include clear instructions about account access, withdrawal expectations and safety guidance.
  • Regulator liaison: keep a standing channel with local regulators and banking partners to coordinate permitted operations and to avoid legal surprises.

Playbook: step-by-step actions for the first 24 hours

  1. Detect & validate: automated systems detect outage → SOC validates with at least two independent sources (BGP loss, RIPE probe failure, NetBlocks alert).
  2. Declare the incident: classify it as a Level 1 outage. Notify the incident commander, legal, compliance, the liquidity desk, custody ops, and comms.
  3. Activate prep measures: move to pre-authorized emergency liquidity pools and switch relevant nodes to global relay endpoints.
  4. Adjust trading mode: apply pre-defined circuit breakers (limit-only or read-only) if price divergence thresholds are breached.
  5. Enable emergency signing: if custody signing depends on local HSMs and outages persist past threshold, invoke the documented emergency signing protocol (with multi-party approval and legal sign-off).
  6. Customer comms: publish an initial statement on status page and social channels. Provide estimated timelines and safety guidance.
  7. Continual reassessment: every 60–180 minutes re-evaluate liquidity exposure, KYC backlogs and legal constraints; escalate to board if event grows.

Testing, drills and measurable KPIs

A plan is only as good as its last test. Simulate national-scale outages using network partitioning and tabletop exercises with legal/regulatory participation.

  • Quarterly outage simulations: run a full simulated outage for at least one key jurisdiction, with live drills for custody signing and liquidity drawdown.
  • Monthly technical failover tests: switch order routing and node traffic across regions and verify state consistency.
  • Annual regulator engagement: present your contingency playbook and get pre-approval language where possible.

KPIs to track

  • Mean Time To Detect (MTTD) of jurisdictional outages.
  • Mean Time To Recovery (MTTR) for read-only and full-trade modes per region.
  • Liquidity buffer days (how many days of withdrawals can be met from cross-border prefunded pools).
  • Number of successful emergency signatures processed without incident.

Not every operational fix is legally permissible. Emergency cross-border signing or transferring assets may violate local laws, sanctions or court orders. Coordinate legal clearance in advance and ensure every emergency path includes legal counsel sign-off and immutable audit trails.

Case studies and lessons learned

The outages in recent years provide clear lessons. When a jurisdiction loses connectivity, local liquidity frequently becomes isolated within hours, and opportunistic price divergence appears quickly. Exchanges that had multi-jurisdiction node footprints and pre-funded offshore liquidity were able to offer better continuity and suffered fewer customer losses. Those that relied on single-country HSMs or centralized KYC experienced longer recovery times and regulatory scrutiny.

Practical checklist for senior ops teams (15-minute read)

  • Audit node distribution: confirm nodes are spread across at least three jurisdictions and multiple ASNs.
  • Confirm HSM/TSS geography: ensure key shards are not co-located in a single legal jurisdiction.
  • Establish pre-funded liquidity pools in three currencies (local fiat substitute, USD stablecoin, BTC/ETH).
  • Deploy automated outage monitors tied to runbooks and incident alerts.
  • Create KYC token strategy and agreements with cross-border re-verification partners.
  • Pre-approve an emergency sign-off governance framework with legal/compliance and the board.
  • Run an outage tabletop with regulators annually.

Actionable takeaways

  • Reduce single-point concentration of nodes, staff and HSMs in any single country.
  • Build liquidity war chests that can be deployed without local fiat settlement.
  • Design KYC to be portable via verifiable credentials and cross-border agent networks.
  • Formalize emergency signing: TSS and pre-authorized multi-party approval prevent paralysis while preserving legal compliance.
  • Test publicly and often: the best defenses are exercised before the crisis.

Expect more frequent and longer shutdowns tied to political events and economic stress. At the same time, regulators are asking exchanges for stronger operational resilience guarantees. In 2026, exchanges that combine multi-jurisdiction infrastructure, legally vetted emergency signing frameworks and transparent communications will win customer trust and regulatory goodwill.

Conclusion — build resilience before the blackout

National internet shutdowns are not hypothetical. They are operational hazards that can and do fragment markets and threaten client assets. Building resilience requires technical changes (node and custody distribution), operational investments (pre-funded liquidity, outage monitoring) and governance upgrades (legal pre-approval for emergency protocols). Start with a focused 90-day program: audit your exposure, patch the largest single-country dependencies, and run a full outage drill. Do this now—before your next jurisdiction goes dark.

Call to action

Ready to test your readiness? Download our 90-day outage preparedness checklist and incident runbook template, and subscribe to our quarterly resilience drills newsletter to get scenario playbooks and regulator engagement templates that exchanges and custodians use in 2026.


Related Topics

#exchanges #resilience #operations

coindesk

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
