In late April, a massive power outage cascaded across Spain and Portugal, plunging most of the Iberian Peninsula into darkness. What followed wasn’t just a test of the electrical grid — it was a stark reminder of how deeply digital infrastructure depends on physical continuity. The blackout disrupted not only local businesses and residents but also critical network backbones and regional connectivity across ISPs, mobile networks, and datacenters.

NetActuate’s view of internet traffic disruption during the outage

Infrastructure at the Edge of Its Limits

NetActuate’s Madrid presence, while designed with redundancy in mind, encountered momentary generator issues during the blackout. Combined with ISP and backbone instability, this resulted in short periods of isolation for customers single-homed in Madrid. Even though direct access to local peers remained viable in some cases, the collapse of regional transit links significantly impacted global connectivity.

The challenge extended beyond any single facility. Much of the supporting carrier infrastructure—including fiber optic amplification sites and edge POPs—went offline when local power backups exhausted their reserves. Because these systems are typically remote and unmanned, their failure can ripple across multiple networks simultaneously.

Anycast in Action

As Madrid and the surrounding region experienced rolling failures, NetActuate’s global Anycast platform automatically redirected traffic to the next nearest operational locations. Most rerouted traffic flowed through nodes in Paris and London, maintaining service availability for users even when large parts of the grid were offline.
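To make that failover behavior concrete, here is a minimal sketch of how anycast converges when one site drops out: the same prefix is announced from several POPs, and when a site stops announcing, selection simply moves to the next nearest site that is still up. This is an illustrative simplification, not NetActuate's actual routing configuration; the prefix, site names, and distance metric below are assumptions.

```python
# Minimal anycast failover sketch (illustrative only, not NetActuate's config).
# The same service prefix is announced from several POPs; traffic lands at the
# "closest" announcing site. When a site withdraws its announcement, selection
# converges on the next nearest site that is still announcing.

ANYCAST_PREFIX = "192.0.2.0/24"  # documentation prefix used as a stand-in

# Hypothetical POPs with a rough distance metric from a client in Spain.
# In reality this choice is made by BGP path selection, not a static table.
pops = {
    "madrid": {"announcing": True, "distance": 1},
    "paris":  {"announcing": True, "distance": 2},
    "london": {"announcing": True, "distance": 3},
}

def best_site(pops):
    """Return the nearest POP still announcing the anycast prefix."""
    candidates = [(p["distance"], name) for name, p in pops.items() if p["announcing"]]
    if not candidates:
        return None  # total isolation: no site reachable
    return min(candidates)[1]

print("Before the outage:", best_site(pops))   # -> madrid

# Madrid loses power and withdraws its announcement.
pops["madrid"]["announcing"] = False
print("During the outage:", best_site(pops))   # -> paris
```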

While the absolute traffic volume handled by our Madrid site is relatively low compared to major hubs, the platform’s automated routing response ensured continued accessibility for globally distributed services. This proved especially critical for organizations whose services would otherwise have been unreachable to users in Spain.

What the Data Revealed

Despite limited volume, flow-level analysis from the event yielded several compelling insights:

  • Up to 75% drop in unique source IPs: As the outage took hold, the number of unique source IPs originating in Spain dropped precipitously. This sharp decline suggests that internet access—across both mobile and broadband networks—was widely and uniformly lost (a minimal analysis sketch follows this list).
  • Minimal rerouting during blackout onset: Most traffic simply vanished rather than being dynamically rerouted, indicating that users lost power before they could fall back to alternate paths. This was particularly true for mobile networks, which also went dark after brief, staggered delays.
  • Unstable transit behavior: We identified anomalies in traffic entering Madrid from specific autonomous systems (ASes), showing instability likely tied to backbone disruptions. While we can’t confirm root causes, the pattern hints at deeper fragility within regional interconnection.
  • No significant DNS spike post-recovery: Unlike traditional outage scenarios, where a burst of DNS retries marks a return to service, this event saw no such spike. The reason: recovery was intentionally staged. Power was restored methodically to avoid grid imbalance, resulting in a gradual return of services.
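As a rough illustration of the kind of flow-level analysis behind the first observation above, counting distinct Spanish source IPs per time bucket is enough to surface the collapse. The flow records and field layout here are assumptions for the sketch, not our production pipeline.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative flow records: (timestamp, source IP, source country).
# Real data would come from sFlow/NetFlow/IPFIX collectors; this layout is assumed.
flows = [
    ("2025-04-28T12:10:03", "203.0.113.10", "ES"),
    ("2025-04-28T12:14:41", "203.0.113.22", "ES"),
    ("2025-04-28T12:42:07", "198.51.100.5", "ES"),
    # ... many more records ...
]

def unique_spanish_sources(flows, bucket_minutes=15):
    """Count distinct Spanish source IPs per time bucket."""
    buckets = defaultdict(set)
    for ts, src_ip, country in flows:
        if country != "ES":
            continue
        t = datetime.fromisoformat(ts)
        bucket = t.replace(minute=(t.minute // bucket_minutes) * bucket_minutes,
                           second=0, microsecond=0)
        buckets[bucket].add(src_ip)
    return {b: len(ips) for b, ips in sorted(buckets.items())}

for bucket, count in unique_spanish_sources(flows).items():
    print(bucket.isoformat(), count)
```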

Peering vs. Transit

NetActuate’s view of peers vs. transit during the outage

One of the most telling patterns during the Iberian power outage wasn’t just the drop in traffic; it was how the remaining traffic flowed. The graph above illustrates a clear distinction between peering traffic (yellow) and transit traffic (green) throughout the event and subsequent recovery:

What stood out was that peered traffic consistently exceeded transit traffic during and after the blackout. As backbone providers and transit routes struggled to stabilize, direct peering connections allowed for faster and more reliable traffic delivery. Transit paths, which often rely on upstream providers and may be routed through distant or indirect locations, took longer to recover and were less consistent.
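A rough sketch of how the peering-versus-transit split in a graph like the one above can be derived: each flow is attributed to the BGP session it egressed on, and bytes are summed per session type. The session classification, ASNs, and flow fields below are assumptions, not our actual tooling.

```python
# Hypothetical mapping of next-hop ASN to session type at the Madrid edge.
session_type = {
    64500: "peer",     # e.g. a direct peer at a local IX
    64501: "peer",
    64510: "transit",  # e.g. an upstream transit provider
}

# Sampled (next_hop_asn, bytes) pairs during the event (made-up values).
flows = [
    (64500, 1_200_000),
    (64501, 800_000),
    (64510, 400_000),
]

totals = {"peer": 0, "transit": 0}
for asn, nbytes in flows:
    totals[session_type.get(asn, "transit")] += nbytes

total_bytes = sum(totals.values())
for kind, nbytes in totals.items():
    print(f"{kind}: {nbytes / total_bytes:.0%} of delivered bytes")
```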

This reinforces a critical tenet of resilient network design: who you’re directly connected to matters as much as how many paths you have. Peering reduces latency, removes intermediary points of failure, and ensures greater stability, especially when the unexpected occurs.

NetActuate’s Peering Strategy

NetActuate operates the 4th largest network in the world by number of peers, with thousands of direct interconnections to global platforms, carriers, and networks. This dense peering architecture translates to:

  • Faster rerouting and failover during regional outages
  • Greater control over traffic flows
  • Lower latency and improved performance
  • Reduced reliance on upstream transit providers in critical moments

In events like the Iberian blackout, NetActuate’s extensive peering base was instrumental in maintaining availability and minimizing service disruption, a clear demonstration of the importance of network adjacency in real-world crisis conditions.

Engineering for Resilience at Scale

At NetActuate, we focus on building networks that don’t just survive the unexpected — they adapt to it. With over 40 global locations and integrated Anycast support, we help organizations extend their infrastructure beyond geographic constraints and local failure domains.

This event wasn’t just a power failure; it was a resiliency test for networks. And the results were clear: the networks that stayed online weren’t the ones with the most redundancy. They were the ones designed for failure.

Plan for Resilience Today

The Iberian power outage showed that resilient infrastructure is an operational necessity. If you’re ready to take a serious look at your network’s resiliency, our experts are here to help. Contact us today to learn how NetActuate can strengthen your infrastructure against tomorrow’s disruptions.