If Greenland were digitally disconnected tomorrow, how much of its public sector could still operate?

If Greenland were digitally cut off tomorrow, how much of its public sector would still function? The uncomfortable answer: very little. Not only would the public sector grind to a halt; the longer a digital isolation lasted, the more society as a whole would break down with it. This article outlines why it does not have to be this way and suggests remedies and actions that would minimize the impact of an event in which Greenland is digitally isolated from the rest of the internet for an extended period (weeks to months).

We may be tempted to think of digital infrastructure as neutral plumbing. But as I wrote earlier, “digital infrastructure is no longer just about connectivity, but about sovereignty and resilience.” Greenland today has neither.

A recent Sermitsiaq article by Poul Krarup on Greenland’s digital dependency on foreign countries (“Digital Afhængighed af Udlandet”), describing research by Tænketanken Digital Infrastruktur, laid it bare: the backbone of Greenland’s administration, including email, payments, and even municipal services, runs on servers and platforms located mainly outside Greenland (and Denmark). Global giants in Europe and the US hold the keys. Greenland doesn’t. My own study of 315 Greenlandic public-sector domains shows just how dramatic this dependency is: over 70% of web/IP hosting is concentrated among just three foreign providers (Microsoft, Google, and Cloudflare), and for email exchange (MX) it is even worse, with the majority of MX records sitting entirely outside Greenland’s control.

So imagine the cables are cut, the satellite links fail, or access to those platforms is revoked. Schools, hospitals, courts, municipalities: how many could still function? How many could even switch on a computer?

This isn’t a thought experiment. It’s a wake-up call.

In my earlier work on Greenland’s critical communications infrastructure, “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”, I have pointed out both the resilience and the fragility of what exists today. Tusass has built and maintained a transport network that keeps the country connected under some of the harshest Arctic conditions. That achievement is remarkable, but it is also costly and economically challenging without external subsidies and long-term public investment. With a population of just 57,000 people, Greenland faces challenges in sustaining this infrastructure on market terms alone.

DIGITAL SOVEREIGNTY.

What do we mean when we use phrases like “the digital sovereignty of Greenland is at stake”? Let’s break down the complex language (for techies like myself). Sovereignty in the classical sense is about control over land, people, and institutions. Digital sovereignty extends this to the virtual space. It is primarily about controlling data, infrastructure, and digital services. As societies digitalize, critical aspects of sovereignty move into the digital sphere, such as:

  • Infrastructure as territory: Submarine cables, satellites, data centers, and cloud platforms are the digital equivalents of ports, roads, and airports. If you don’t own or control them, you depend on others to move your “digital goods.”
  • Data as a resource: Just as natural resources are vital to economic sovereignty, data has become the strategic resource of the digital age. Those who store, process, and govern data hold significant power over decision-making and value creation.
  • Platforms as institutions: Social media, SaaS, and search engines act like global “public squares” and administrative tools. If controlled abroad, they may undermine local political, cultural, or economic authority.

The excellent book by Anu Bradford, “Digital Empires: The Global Battle to Regulate Technology,” describes how the digital world is no longer a neutral, borderless space but is increasingly shaped by the competing influence of three distinct “empires.” The American model is built around the dominance of private platforms, such as Google, Amazon, and Meta, where innovation and market power drive the agenda. The scale and ubiquity of Silicon Valley firms have enabled them to achieve a global reach. In contrast, the Chinese model fuses technological development with state control. Here, digital platforms are integrated into the political system, used not only for economic growth but also for surveillance, censorship, and the consolidation of authority. Between these two poles lies the European model, which has little homegrown platform power but exerts influence through regulation. By setting strict rules on privacy, competition, and online content, Europe has managed to project its legal standards globally, a phenomenon Bradford refers to as the “Brussels effect” (which is used here in a positive sense). Bradford’s analysis highlights the core dilemma for Greenland. Digital sovereignty cannot be achieved in isolation. Instead, it requires navigating between these global forces while ensuring that Greenland retains the capacity to keep its critical systems functioning, its data governed under its own laws, and its society connected even when global infrastructures falter. The question is not which empire to join, but how to engage with them in a way that strengthens Greenland’s ability to determine its own digital future.

In practice, this means that Greenland’s strategy cannot be about copying one of the three empires, but rather about carving out a space of resilience within their shadow. Building a national Internet Exchange Point ensures that local traffic continues to circulate on the island rather than being routed abroad, even when external links fail. Establishing a sovereign GovCloud provides government, healthcare, and emergency services with a secure foundation that is not dependent on distant data centers or foreign jurisdictions. Local caching of software updates, video libraries, and news platforms enables communities to operate in a “local mode” during disruptions, preserving continuity even when global connections are down. These measures do not create independence from the digital empires, but they give Greenland the ability to negotiate with them from a position of greater strength, ensuring that participation in the global digital order does not come at the expense of local control or security.

FROM DAILY RESILIENCE TO STRATEGIC FRAGILITY.

I have argued that integrity, robustness, and availability must be the guiding principles for Greenland’s digital backbone, both now and in the future.

  • Integrity means protecting against foreign influence and cyber threats through stronger cybersecurity, AI support, and autonomous monitoring.
  • Robustness requires diversifying the backbone with new submarine cables, satellite systems, and dual-use assets that can serve both civil and defense needs.
  • Availability depends on automation and AI-driven monitoring, combined with autonomous platforms such as UAVs, UUVs, IoT sensors, and distributed acoustic sensing on submarine cables, to keep services running across vast and remote geographies with limited human resources.

The conclusion I drew in my previous work remains applicable today. Greenland must develop local expertise and autonomy so that critical communications are not left vulnerable to outside actors in times of crisis. Dual-use investments are not only about defense; they also bring better services, jobs, and innovation.

Source: Tusass Annual Report 2023 with some additions and minor edits.

The Figure above illustrates the infrastructure of Tusass, Greenland’s incumbent and sole telecommunications provider. Currently, five hydropower plants (shown above, location only indicative) provide more than 80% of Greenland’s electricity demand. Greenland is entering a period of significant infrastructure transformation, with several large projects already underway and others on the horizon. The most visible change is in aviation. Following the opening of the new international airport in Nuuk in 2024, with its 2,200-meter runway capable of receiving direct flights from Europe and North America, attention has turned to Ilulissat, on the northwest coast of Greenland, and Qaqortoq. Ilulissat is being upgraded with its own 2,200-meter runway, a new terminal, and a control tower, while the old 845-meter strip is being converted into an access road. In southern Greenland, a new airport is being built in Qaqortoq, with a 1,500-meter runway scheduled to open around 2026. Once completed, these three airports in Nuuk, Ilulissat, and Qaqortoq (the largest town in South Greenland) will together handle roughly 80 percent of Greenland’s passenger traffic, reshaping both tourism and domestic connectivity. Smaller projects, such as the planned airport at Ittoqqortoormiit and changes to heliport infrastructure in East Greenland, are also part of this shift, although on a longer horizon.

Beyond air travel, the next decade is likely to bring new developments in maritime infrastructure. There is growing interest in constructing deep-water ports, both to support commercial shipping and to enable the export of minerals from Greenland’s interior. Denmark has already committed around DKK 1.6 billion (approximately USD 250 million) between 2026 and 2029 for a deep-sea port and related coastal infrastructure, with several proposals directly linked to mining ventures. In southern Greenland, for example, the Tanbreez multi-element rare earth project lies within reach of Qaqortoq, and the new airport’s specifications were chosen with freight requirements in mind. Other mineral prospects, ranging from rare earths to nickel and zinc, will require their own supporting infrastructure, roads, power, and port facilities, if they transition from exploration to production. The timelines for these mining and port projects are less certain than for the airports, since they depend on market conditions, environmental approvals, and financing. Yet it is clear that the 2025–2035 period will be decisive for Greenland’s economic and strategic trajectory. The combination of new airports, potential deep-water harbors, and the possible opening of significant mining operations would amount to the largest coordinated build-out of Greenlandic infrastructure in decades. Moreover, several submarine cable projects have been mentioned that would strengthen international connectivity to Greenland, as well as improve the redundancy and robustness of settlement connectivity, in addition to the existing long-haul microwave network connecting all settlements along the west coast from north to south.

And this is precisely why the question of a sudden digital cut-off matters so much. Without integrity, robustness, and availability built into the communications infrastructure, Greenland’s public sector and its critical infrastructure remain dangerously exposed. What looks resilient in daily operation could unravel overnight if the links to the outside world were severed or internal connectivity were compromised. In particular, the dependency on Nuuk is a critical risk.

GREENLAND’s DIGITAL INFRASTRUCTURE BY LAYER.

Let’s peel Greenland’s digital onion, layer by layer.

Greenland’s digital infrastructure broken down by the layers upon which society’s continuous functioning depends. The illustration shows how applications, transport, routing, and interconnect all depend on external connectivity.

Greenland’s digital infrastructure can be understood as a stack of interdependent layers, each of which reveals a set of vulnerabilities. This is illustrated by the Figure above. At the top of the stack lie the applications and services that citizens, businesses, and government rely on every day. These include health IT systems, banking platforms, municipal services, and cloud-based applications. The critical issue is that most of these services are hosted abroad and have no local “island mode.” In practice, this means that if Greenland is digitally cut off, domestic apps and services will fail to function because there is no mechanism to run them independently within the country.

Beneath this sits the physical transport layer, which is the actual hardware that moves data. Greenland is connected internationally by just two subsea cables, routed via Iceland and Canada. A few settlements, such as Tasiilaq, remain entirely dependent on satellite links, while microwave radio chains connect long stretches of the west coast. At the local level, there is some fiber deployment, but it is limited to individual settlements rather than forming part of a national backbone. This creates a transport infrastructure that, while impressive given Greenland’s geography, is inherently fragile. Two cables and a scattering of satellites do not amount to genuine redundancy for a nation. The next layer is IP/TCP transport, where routing comes into play. Here, too, the system is basic. Greenland relies on a limited set of upstream providers with little true diversity or multi-homing. As a result, if one of the subsea cables is cut, large parts of the country’s connectivity collapse, because traffic cannot be seamlessly rerouted through alternative pathways. The resilience that is taken for granted in larger markets is largely absent here.

Finally, at the base of the stack, interconnect and routing expose the structural dependency most clearly. Greenland operates under a single Autonomous System Number (ASN). An ASN is a unique identifier assigned to a network operator (like Tusass) that controls its own routing on the Internet. It allows the network to exchange traffic and routing information with other networks using the Border Gateway Protocol (BGP). In Greenland, there is no domestic internet exchange point (IXP) or peering between local networks. All traffic must be routed abroad first, whether it is destined for Greenland or beyond. International transit flows through Iceland and Canada via the subsea cables, and, as a limited-capacity fallback, via geostationary GreenSat satellite connectivity through Gran Canaria, which connects back to Greenland via the submarine network. There is no sovereign government cloud, almost no local caching for global platforms, and only a handful of small data centers (being generous with the definition here). The absence of scaled redundancy and local hosting means that virtually all of Greenland’s digital life depends on international connections.
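
To make the ASN concept more tangible, here is a minimal Python sketch, assuming the third-party ipwhois package, that looks up which Autonomous System announces a given IP address and where that AS is registered. In the analysis behind this article, the input would be the IPs resolved from .gl domains; the address below is just a well-known illustration.

# Minimal sketch: map an IP address to the Autonomous System that announces it.
# Assumes the third-party 'ipwhois' package (pip install ipwhois); the example IP is illustrative.
from ipwhois import IPWhois

def asn_for_ip(ip: str) -> dict:
    """Return the ASN, registration country, and description for an IP address."""
    rdap = IPWhois(ip).lookup_rdap(depth=1)
    return {
        "ip": ip,
        "asn": rdap.get("asn"),
        "asn_country": rdap.get("asn_country_code"),
        "asn_description": rdap.get("asn_description"),
    }

if __name__ == "__main__":
    # 8.8.8.8 is a well-known public resolver and is announced by AS15169 (Google).
    print(asn_for_ip("8.8.8.8"))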

GREENLAND’s DIGITAL LIFE ON A SINGLE THREAD.

Considering the many layers described above, a striking picture emerges: applications, transport, routing, and interconnect are all structured in ways that assume continuous external connectivity. What appears robust on a day-to-day basis can unravel quickly. A single cable cut, upstream outage, or local transmission fault in Greenland does not just slow down the internet; it can paralyze everyday life across almost every sector, because much of the country’s digital backbone relies on external connectivity and fragile local transport.

For the government, the reliance on cloud-hosted systems abroad means that email, document storage, case management, and health IT systems would go dark. Hospitals and clinics could lose access to patient records, lab results, and telemedicine services. Schools would be cut off from digital learning platforms and exam systems that are hosted internationally. Municipalities, which already lean on remote data centers for payroll, social services, and citizen portals, would struggle to process even routine administrative tasks.

In finance, the impact would be immediate. Greenland’s card payment and clearing systems are routed abroad; without connectivity, credit and debit card transactions could no longer be authorized. ATMs would stop functioning. Shops, fuel stations, and essential suppliers would be forced into cash-only operations at best, and even that would depend on whether their local systems can operate in isolation.

The private sector would be equally disrupted. Airlines, shipping companies, and logistics providers all rely on real-time reservation and cargo systems hosted outside Greenland. Tourism, one of the fastest-growing industries, is almost entirely dependent on digital bookings and payments. Mining operations under development would be unable to transmit critical data to foreign partners or markets.

Even at the household level, the effects could be highly disruptive. Messaging apps, social media, and streaming platforms all require constant external connections; they would stop working instantly. Online banking and digital ID services would be unreachable, leaving people unable to pay bills, transfer money, or authenticate themselves for government services. As there are so few local caches or hosting facilities in Greenland, even “local” digital life evaporates once the cables are cut. So we will be back to reading books and paper magazines again.

This means that an outage can cascade well beyond the loss of entertainment or simple inconvenience. It undermines health care, government administration, financial stability, commerce, and basic communication. In practice, the disruption would touch every citizen and every institution almost immediately, with few alternatives in place to keep essential civil services running.

GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: ABOUT THE DATA.

In this inquiry, I have primarily analyzed two pillars of Greenland’s digital presence: web/IP hosting and MX (mail exchange) hosting. These may sound technical, but they are fundamental to understanding where Greenland’s digital dependencies lie. Web/IP hosting determines where Greenland’s websites and online services physically reside, whether inside Greenland’s own infrastructure or abroad in foreign data centers. MX hosting determines where email is routed and processed, and is crucial for the operation of government, business, and everyday communication. Together, these layers form the backbone of a country’s digital sovereignty.
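
For readers who want to see what these two lookups look like in practice, here is a minimal Python sketch, assuming the dnspython package. It resolves a domain’s web hosts (A records) and mail exchangers (MX records); it is a simplified illustration of the kind of queries behind the analysis, not the full pipeline published on GitHub.

# Minimal sketch: resolve where a domain's web presence and email routing point to.
# Assumes the 'dnspython' package (pip install dnspython); nanoq.gl is used as in the article.
import dns.resolver

def web_and_mail_hosts(domain: str) -> dict:
    hosts = {"domain": domain, "a_records": [], "mx_records": []}
    try:
        hosts["a_records"] = [r.address for r in dns.resolver.resolve(domain, "A")]
    except dns.resolver.NXDOMAIN:
        return hosts  # the domain does not resolve at all
    except dns.resolver.NoAnswer:
        pass  # no A record, but other record types may still exist
    try:
        answers = dns.resolver.resolve(domain, "MX")
        hosts["mx_records"] = sorted(
            (r.preference, str(r.exchange).rstrip(".")) for r in answers
        )
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass  # no MX record: the domain cannot receive email directly
    return hosts

print(web_and_mail_hosts("nanoq.gl"))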

What the data shows is sobering. For example, the Government’s own portal nanoq.gl is hosted locally by Tele Greenland (i.e., Tusass GL), but its email is routed through Amazon’s infrastructure abroad. The national airline, airgreenland.gl, also relies on Microsoft’s mail servers in the US and UK. These are not isolated cases. They illustrate the broader pattern of dependence. If hosting and mail flows are predominantly external, then Greenland’s resilience, control, and even lawful access are effectively in the hands of others.

The data from the Greenlandic .gl domain space paints a clear and rather bleak picture of dependency and reliance on the outside world. My inquiry covered 315 domains, resolving more than a thousand hosts and IPs and uncovering 548 mail exchangers, which together form a dependency network of 1,359 nodes and 2,237 edges. What emerges is not a story of local sovereignty but of heavy reliance on external, that is, outside Greenland, hosting.
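
Conceptually, that dependency network is just a directed graph: each domain points to the web hosts and mail exchangers it relies on, and each host points to the country where it sits. The sketch below, assuming the networkx package and using a handful of illustrative edges rather than the full dataset, shows the structure.

# Minimal sketch of the dependency network: domain -> host -> country.
# Assumes the 'networkx' package; the edges shown are illustrative, not the full 1,359-node graph.
import networkx as nx

G = nx.DiGraph()
G.add_edge("nanoq.gl", "web.tusass.gl", kind="web")   # illustrative local web host
G.add_edge("nanoq.gl", "amazonses.com", kind="mx")    # mail path via Amazon, as noted in the text
G.add_edge("web.tusass.gl", "GL", kind="country")
G.add_edge("amazonses.com", "US", kind="country")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

# The countries a domain ultimately depends on are the successors of its hosts.
countries = {c for host in G.successors("nanoq.gl") for c in G.successors(host)}
print("nanoq.gl depends on:", countries)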

When broken down, it becomes clear how much of the Greenlandic namespace is not even in use. Of the 315 domains, only 190 could be resolved to a functioning web or IP host, leaving 125 domains, or about 40 percent, with no active service. For mail exchange, the numbers are even more striking: only 98 domains have MX records, while 217 domains, nearly seventy percent of the total, appear to have no email capability at all. In other words, the universe of domains we can actually analyze shrinks considerably once the inactive or unused domains are separated from those that carry real digital services.

It is within this smaller, active subset that the pattern of dependency becomes obvious. The majority of the web/IP hosting we can analyze is located outside Greenland, primarily on infrastructure controlled by American companies such as Cloudflare, Microsoft, Google, and Amazon, or through Danish and European resellers. For email, the reliance is even more complete: virtually all MX hosting that exists is foreign, with only two domains fully hosted in Greenland. This means that both Greenland’s web presence and its email flows are overwhelmingly dependent on servers and policies beyond its own borders. The geographic spread of dependencies is extensive, spanning the US, UK, Ireland, Denmark, and the Netherlands, with some entries extending as far afield as China and Panama. This breadth raises uncomfortable questions about oversight, control, and the exposure of critical services to foreign jurisdictions.

Security practices add another layer of concern. Many domains lack the most basic forms of email protection. The Sender Policy Framework (SPF), which instructs mail servers on which IP addresses are authorized to send on behalf of a domain, is inconsistently applied. DomainKeys Identified Mail (DKIM), which uses cryptographic signatures to verify that an email originates from the claimed sender, is also patchy. Most concerning is that Domain-based Message Authentication, Reporting, and Conformance (DMARC), a policy that allows a domain to instruct receiving mail servers on how to handle suspicious emails (for example, reject or quarantine them), is either missing or set to “none” for many critical domains. Without SPF, DKIM, and DMARC properly configured, Greenlandic organizations are wide open to spoofing and phishing, including within government and municipal domains.
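
Checking SPF and DMARC is straightforward because both live in ordinary DNS TXT records: SPF on the domain itself, DMARC on the _dmarc subdomain. The sketch below, again assuming dnspython, shows the essence of such a check; DKIM is left out because its selector names cannot be discovered from DNS alone.

# Minimal sketch: check a domain's SPF and DMARC posture from public DNS.
# Assumes the 'dnspython' package; DKIM is omitted because selectors are not discoverable from DNS alone.
import dns.resolver

def txt_records(name: str) -> list:
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

def email_auth_posture(domain: str) -> dict:
    spf = [t for t in txt_records(domain) if t.lower().startswith("v=spf1")]
    dmarc = [t for t in txt_records("_dmarc." + domain) if t.lower().startswith("v=dmarc1")]
    return {
        "domain": domain,
        "spf": spf[0] if spf else None,        # None means no SPF policy is published
        "dmarc": dmarc[0] if dmarc else None,  # look for p=none, p=quarantine, or p=reject
    }

print(email_auth_posture("nanoq.gl"))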

Taken together, the picture is clear. Greenland’s digital backbone is not in Greenland. Its critical web and mail infrastructure lives elsewhere, often in the hands of hyperscalers far beyond Nuuk’s control. The question practically asks itself: if those external links were cut tomorrow, how much of Greenland’s public sector could still function?

GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: SOME KEY DATA OUT OF A VERY RICH DATASET.

The Figure shows the distribution of Greenlandic (.gl) web/IP domains hosted on a given country’s infrastructure. Note that domains are frequently hosted in multiple countries. However, very few (2!) have an overlap with Greenland.

The chart of Greenland (.gl) Web/IP Infrastructure Hosting by Supporting Country reveals the true geography of Greenland’s digital presence. The data covers 315 Greenlandic domains, of which 190 could be resolved to active web or IP hosts. From these, I built a dependency map showing where in the world these domains are actually served.

The headline finding is stark: 57% of Greenlandic domains depend on infrastructure in the United States. This reflects the dominance of American companies such as Cloudflare, Microsoft, Google, and Amazon, whose services sit in front of or fully host Greenlandic websites. In contrast, only 26% of domains are hosted on infrastructure inside Greenland itself (primarily through Tele Greenland/Tusass). Denmark (19%), the UK (14%), and Ireland (13%) appear as the next layers of dependency, reflecting the role of regional resellers, like One.com/Simply, as well as Microsoft and Google’s European data centers. Germany, France, Canada, and a long tail of other countries contribute smaller shares.

It is worth noting that the validity of this analysis hinges on how the data are treated. Each domain is counted once per country where it has active infrastructure. This means a domain like nanoq.gl (the Greenland Government portal) is counted for both Greenland and its foreign dependency through Amazon’s mail services. However, double-counting with Greenland is extremely rare. Out of the 190 resolvable domains, 73 (38%) are exclusively Greenlandic, 114 (60%) are solely foreign, and only 2 (~1%) are hybrids, split between Greenland and another country. Those two are nanoq.gl and airgreenland.gl, both of which combine a Greenland presence with foreign infrastructure. This is why the Figure above shows percentages that add up to more than 100%: they represent the dependency footprint, that is, the share of Greenlandic domains that touch each country, not slices of a pie chart of mutually exclusive categories. What is most important to note, however, is that the overlap with Greenland is vanishingly small. In practice, Greenlandic domains are either entirely local or entirely foreign. Very few straddle the boundary.
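
For transparency, the counting rule behind this “dependency footprint” can be expressed in a few lines of Python. The toy input below mirrors the structure of the dataset rather than its actual contents (apart from the two hybrids named above, the domains are invented) and shows why the per-country shares sum to more than 100%.

# Minimal sketch of the counting rule: each domain is counted once per country it touches.
from collections import Counter

domain_countries = {
    "nanoq.gl": {"GL", "US"},            # hybrid: local web host, foreign mail path
    "airgreenland.gl": {"GL", "US"},     # hybrid
    "example-local.gl": {"GL"},          # invented local-only domain
    "example-foreign.gl": {"US", "IE"},  # invented foreign-only domain
}

footprint = Counter(c for countries in domain_countries.values() for c in countries)
total = len(domain_countries)
for country, n in footprint.most_common():
    print(f"{country}: {n}/{total} domains = {100 * n / total:.0f}%")
# Shares sum to more than 100% because hybrid and multi-country domains
# are counted once in every country they touch.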

The conclusion is sobering. Greenland’s web presence is deeply externalized. With only a quarter of domains hosted locally, and more than half relying on US-controlled infrastructure, the country’s digital backbone is anchored outside its borders. This is not simply a matter of physical location. It is about sovereignty, resilience, and control. The dominance of US, Danish, and UK providers means that Greenland’s citizens, municipalities, and even government services are reliant on infrastructure they do not own and cannot fully control.

Figure shows the distribution of Greenlandic (.gl) domains by the supporting country for the MX (mail exchange) infrastructure. It shows that nearly all email services are routed through foreign providers.

The Figure above of the MX (mail exchange) infrastructure by supporting country reveals an even more pronounced pattern of external reliance compared to the above case for web hosting. From the 315 Greenlandic domains examined, only 98 domains had active MX records. These are the domains that can be analyzed for mail routing and that have been used in the analysis below.

Measured against all 315 Greenlandic domains, 19% send their mail through US-controlled infrastructure, primarily Microsoft’s Outlook/Exchange services and Google’s Gmail. The United Kingdom (12%), Ireland (9%), and Denmark (8%) follow, reflecting the presence of Microsoft and Google’s European data centers and Danish resellers. France and Australia appear with smaller shares at 2%, and beyond that, the contributions of other countries are negligible. Greenland itself barely registers. Only two domains, accounting for 1% of the total, utilize MX infrastructure hosted within Greenland. The rest rely on servers beyond its borders. This result is consistent with the sovereignty breakdown above: almost all Greenlandic email is foreign-hosted, with just two domains entirely local and one hybrid combining Greenlandic and foreign providers.

Again, the validity of this analysis rests on the same method as the web/IP chart. Each domain is counted once per country where its MX servers are located. Percentages do not add up to 100% because domains may span multiple countries; however, crucially, as with web hosting, double-counting with Greenland is vanishingly rare. In fact, virtually no Greenlandic domains combine local and foreign MX; they are either foreign-only or, in just two cases, local-only.

The story is clear and compelling: Greenland’s email infrastructure is overwhelmingly externalized. Whereas web hosting still keeps about a quarter of domains within the country, email sovereignty is almost nonexistent. Nearly all communication flows through servers in the US, the UK, Ireland, or Denmark. The implication is sobering. In the event of disruption, policy disputes, or surveillance demands, Greenland has little autonomous control over its most basic digital communications.

A sector-level view of how Greenland’s web/IP domains are hosted, locally versus externally (outside Greenland).

This chart provides a sector-level view of how Greenlandic domains are hosted, distinguishing between those resolved locally in Greenland and those hosted outside of Greenland. It is based on the subset of 190 domains for which sufficient web/IP hosting information was available. Importantly, the categorization relies on individual domains, not on companies as entities. A single company or institution may own and operate multiple domains, which are counted separately for the purpose of this analysis. There is also some uncertainty in sector assignment, as many domains have ambiguous names and were categorized using best-fit rules.
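
The sector shares discussed below are a simple aggregation over that per-domain table. A minimal sketch of the calculation, assuming pandas and using invented rows, sector labels, and column names, could look like this:

# Minimal sketch: share of locally hosted domains per sector.
# Assumes the 'pandas' package; rows, sector labels, and column names are illustrative.
import pandas as pd

df = pd.DataFrame(
    [
        {"domain": "nanoq.gl", "sector": "government", "local_web": True},
        {"domain": "airgreenland.gl", "sector": "transport", "local_web": False},
        {"domain": "example-bank.gl", "sector": "finance", "local_web": False},
        {"domain": "example-school.gl", "sector": "education", "local_web": False},
    ]
)

local_share = (
    df.groupby("sector")["local_web"]
    .mean()                 # fraction of domains per sector resolving to Greenland
    .mul(100)
    .round(0)
    .rename("percent_hosted_in_greenland")
)
print(local_share)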

The distribution highlights the uneven exercise of digital sovereignty across sectors. In education and finance, the dependency is absolute: 100 percent of domains are hosted externally, with no Greenland-based presence at all. Government domains are 90 percent hosted in Greenland, with only 10 percent outside, which is what one would expect from a digital-government sovereignty perspective. Transportation shows a split, with about two-thirds of domains hosted locally and one-third abroad, reflecting a mix of Tele Greenland-hosted (Tusass GL) domains alongside foreign-hosted services, such as airgreenland.gl. According to the available data, energy infrastructure is hosted entirely abroad, underscoring possibly one of the most critical vulnerabilities in the dataset. By contrast, telecom domains, unsurprisingly, given Tele Greenland’s role, are entirely local, making it the only sector with 100 percent internal hosting. Municipalities present a more positive picture, with three-quarters of domains hosted locally and one-quarter abroad, although this still represents a partial external dependency. Finally, the large and diverse “Other” category, which contains a mix of companies, organizations, and services, is skewed towards foreign hosting (67 percent external, 33 percent local).

Taken together, the results underscore three important points. First, sector-level sovereignty is highly uneven. While telecom, municipal, and governmental web services retain more local control, most finance, education, and energy domains are overwhelmingly external. Second, a locally resolved domain does not guarantee a locally run service. When a Greenlandic domain resolves to local infrastructure, it indicates that the frontend web hosting, the visible entry point that users connect to, is located within Greenland, typically through Tele Greenland (i.e., Tusass GL). However, this does not automatically mean that the entire service stack is local. Critical back-end components such as databases, authentication services, payment platforms, or integrated cloud applications may still reside abroad. In practice, a locally hosted domain therefore guarantees only that the web interface is served from Greenland, while deeper layers of the service may remain dependent on foreign infrastructure. This distinction is crucial when evaluating genuine digital sovereignty and resilience. Third, the overall pattern is unmistakable: Greenland’s digital presence remains heavily reliant on foreign hosting, with only pockets of local sovereignty.

A sector-level view of the share of locally versus externally (i.e., outside Greenland) MX (mail exchange) hosted Greenlandic domains (.gl).

The Figure above provides a sector-level view of how Greenlandic domains handle their MX (mail exchange) infrastructure, distinguishing between those hosted locally and those that rely on foreign providers. The analysis is based on the subset of 94 domains (out of 315 total) where MX hosting could be clearly resolved. In other words, these are the domains for which sufficient DNS information was available to identify the location of their mail servers. As with the web/IP analysis, it is important to note two caveats: sector classification involves a degree of interpretation, and the results represent individual domains, not individual companies. A single organization may operate multiple domains, some of which are local and others external.

The results are striking. For most sectors, including education, finance, transport, energy, telecom, and municipalities, the dependence on foreign MX hosting is total: 100 percent of identified domains rely on external providers for email infrastructure. Even critical sectors such as energy and telecom, where one might expect a more substantial local presence, are fully externalized. The government sector presents a mixed picture. Half of the government domains examined utilize local MX hosting, while the other half are tied to foreign providers. This partial local footprint is significant, as it shows that while some government email flows are retained within Greenland, an equally large share is routed through servers abroad. The “other” sector, which includes businesses, NGOs, and various organizations, shows a small local footprint of about 3 percent, with 97 percent hosted externally. Taken together, the Figure paints a more severe picture of dependency than the web/IP hosting analysis.

While web hosting still retained about a quarter of domains locally, in the case of email, nearly everything is external. Even in government, where one might expect strong sovereignty, half of the domains are dependent on foreign MX servers. This distinction is critical. Email is the backbone of communication for both public and private institutions, and the routing of Greenland’s email infrastructure almost entirely abroad highlights a deep vulnerability. Local MX records guarantee only that the entry point for mail handling is in Greenland. They do not necessarily mean that mail storage or filtering remains local, as many services rely on external processing even when the MX server is domestic.

The broader conclusion is clear. Greenland’s sovereignty in digital communications is weakest in email. Across nearly all sectors, external providers control the infrastructure through which communication must pass, leaving Greenland reliant on systems located far outside its borders. However severe this picture may appear in terms of digital sovereignty, it is not altogether surprising: most global email services are provided by U.S.-based hyperscalers such as Microsoft and Google. This reliance on Big Tech is the norm worldwide, but it carries particular implications for Greenland, where dependence on foreign-controlled communication channels further limits digital sovereignty and resilience.

The analysis of the 94 MX hosting entries shows a striking concentration of Greenlandic email infrastructure in the hands of a few large players. Microsoft dominates the picture with 38 entries, accounting for just over 40 percent of all records, while Amazon follows with 20 entries, or around 21 percent. Google, including both Gmail and Google Cloud Platform services, contributes an additional 8 entries, representing approximately 9% of the total. Together, these three U.S. hyperscalers control nearly 70 percent of all Greenlandic MX infrastructure. By contrast, Tele Greenland (Tusass GL) appears in only three cases, equivalent to just 3 percent of the total, highlighting the minimal local footprint. The remaining quarter of the dataset is distributed across a long tail of smaller European and global providers such as Team Blue in Denmark, Hetzner in Germany, OVH and O2Switch in France, Contabo, Telenor, and others. The distribution, however you want to cut it, underscores the near-total reliance on U.S. Big Tech for Greenland’s email services, with only a token share remaining under national control.

Out of 179 total country mentions across the dataset, the United States is by far the most dominant hosting location, appearing in 61 cases, or approximately 34 percent of all country references. The United Kingdom follows with 38 entries (21 percent), Ireland with 28 entries (16 percent), and Denmark with 25 entries (14 percent). France (4 percent) and Australia (3 percent) form a smaller second tier, while Greenland itself appears only three times (2 percent). Germany also accounts for three entries, and all other countries (Austria, Norway, Spain, Czech Republic, Slovakia, Poland, Canada, and Singapore) occur only once each, making them statistically marginal. Examining the structure of services across locations, approximately 30 percent of providers are tied to a single country, while 51 percent span two countries (for example, UK–US or DK–IE). A further 18 percent are spread across three countries, and a single case involved four countries simultaneously. This pattern reflects the use of distributed or redundant MX services across multiple geographies, a characteristic often found in large cloud providers like Microsoft and Amazon.

The key point is that, regardless of whether domains are linked to one, two, or three countries, the United States is present in the overwhelming majority of cases, either alone or in combination with other countries. This confirms that U.S.-based infrastructure underpins the backbone of Greenlandic email hosting, with European locations such as the UK, Ireland, and Denmark acting primarily as secondary anchors rather than true alternatives.

WHAT DOES IT ALL MEAN?

Greenland’s public digital life overwhelmingly runs on infrastructure it does not control. Of 315 .gl domains, only 190 even have active web/IP hosting, and just 98 have resolvable MX (email) records. Within that smaller, “real” subset, most web front-ends are hosted abroad and virtually all email rides on foreign platforms. The dependency is concentrated, with U.S. hyperscalers—Microsoft, Amazon, and Google—accounting for nearly 70% of MX services. The U.S. is also represented in more than a third of all MX hosting locations (often alongside the UK, Ireland, or Denmark). Local email hosting is almost non-existent (two entirely local domains; a few Tele Greenland/Tusass appearances), and even for websites, a Greenlandic front end does not guarantee local back-end data or apps.

That architecture has direct implications for sovereignty and security. If submarine cables, satellites, or upstream policies fail or are restricted, most government, municipal, health, financial, educational, and transportation services would degrade or cease, because their applications, identity systems, storage, payments, and mail are anchored off-island. Daily resilience can mask strategic fragility: the moment international connectivity is severely compromised, Greenland lacks the local “island mode” to sustain critical digital workflows.

This is not surprising. U.S. Big Tech dominates email and cloud apps worldwide. Still, it may pose a uniquely high risk for Greenland, given its small population, sparse infrastructure, and renewed U.S. strategic interest in the region. Dependence on platforms governed by foreign law and policy erodes national leverage in crisis, incident response, and lawful access. It exposes citizens to outages or unilateral changes that are far beyond Nuuk’s control.

The path forward is clear: treat digital sovereignty as critical infrastructure. Prioritize local capabilities where impact is highest (government/municipal core apps, identity, payments, health), build island-mode fallbacks for essential services, expand diversified transport (additional cables, resilient satellite), and mandate basic email security (SPF/DKIM/DMARC) alongside measurable locality targets for hosting and data. Only then can Greenland credibly assure that, even if cut off from the world, it can still serve its people.

CONNECTIVITY AND RESILIENCE: GREENLAND VERSUS OTHER SOVEREIGN ISLANDS.

Sources: Submarine cable counts from TeleGeography/SubmarineNetworks.com; IXPs and ASNs from Internet Society Pulse/PeeringDB and RIR data; GDP and population from IMF/World Bank (2023/2024); Internet penetration from ITU and national statistics.

The comparative table shown above highlights Greenland’s position among other sovereign and autonomous islands in terms of digital infrastructure. With two international submarine cables, Greenland shares the same level of cable redundancy as the Faroe Islands, Malta, the Maldives, Seychelles, Cuba, and Fiji. This places it in the middle tier of island connectivity: above small states like Comoros, which rely on a single cable, but far behind island nations such as Cyprus, Ireland, or Singapore, which have built themselves into regional hubs with multiple independent international connections.

Where Greenland diverges is in the absence of an Internet Exchange Point (IXP) and its very limited number of Autonomous Systems (ASNs). Unlike Iceland, which couples four cables with three IXPs and over ninety ASNs, Greenland remains a network periphery. Even smaller states such as Malta, Seychelles, or Mauritius operate IXPs and host more ASNs, giving them greater routing autonomy and resilience.

In terms of internet penetration, Greenland fares relatively well, with a rate of over 90 percent, comparable to other advanced island economies. Yet the country’s GDP base is extremely limited, comparable to the Faroe Islands and Seychelles, which constrains its ability to finance major independent infrastructure projects. This means that resilience is not simply a matter of demand or penetration, but rather a question of policy choices, prioritization, and regional partnerships.

Seen from a helicopter’s perspective, Greenland is neither in the worst nor the best position. It has more resilience than single-cable states such as Comoros or small Pacific nations. Still, it lags far behind peer islands that have deliberately developed multi-cable redundancy, local IXPs, and digital sovereignty strategies. For policymakers, this raises a fundamental challenge: whether to continue relying on the relative stability of existing links, or to actively pursue diversification measures such as a national IXP, additional cable investments, or regional peering agreements. In short, Greenland’s digital sovereignty depends less on raw penetration figures and more on whether its infrastructure choices can elevate it from a peripheral to a more autonomous position in the global network.

HOW TO ELEVATE SOUTH GREENLAND TO A PREFERRED DIGITAL HOST FOR THE WORLD … JUST SAYING, WHY NOT!

At first glance, South Greenland and Iceland share many of the same natural conditions that make Iceland an attractive hub for data centers. Both enjoy a cool North Atlantic climate that allows year-round free cooling, reducing the need for energy-intensive artificial systems. In terms of pure geography and temperature, towns such as Qaqortoq and Narsaq in South Greenland are not markedly different from Reykjavík or Akureyri. From a climatic standpoint, there is no inherent reason why Greenland should not also be a viable location for large-scale hosting facilities.

The divergence begins not with climate but with energy and connectivity. Iceland spent decades developing a robust mix of hydropower and geothermal plants, creating a surplus of cheap renewable electricity that could be marketed to international hyperscale operators. Greenland, while rich in hydropower potential, has only a handful of plants tied to local demand centers, with no national grid and limited surplus capacity. Without investment in larger-scale, interconnected generation, it cannot guarantee the continuous, high-volume power supply that international data centers demand. Connectivity is the other decisive factor. Iceland today is connected to four separate submarine cable systems, linking it to Europe and North America, which gives operators confidence in redundancy and low-latency routes across the Atlantic. South Greenland, by contrast, depends on two branches of the Greenland Connect system, which, while providing diversity to Iceland and Canada, does not offer the same level of route choice or resilience. The result is that Iceland functions as a transatlantic bridge, while Greenland remains an endpoint.

For South Greenland to move closer to Iceland’s position, several changes would be necessary. The most important would be a deliberate policy push to develop surplus renewable energy capacity and make it available for export into data center operations. Parallel to this, Greenland would need to pursue further international submarine cables to break its dependence on a single system and create genuine redundancy. Finally, it would need to build up the local digital ecosystem by fostering an Internet Exchange Point and encouraging more networks to establish Autonomous Systems on the island, ensuring that Greenland is not just a transit point but a place where traffic is exchanged and hosted, and, importantly, making money on its own Digital Infrastructure and Sovereignty. South Greenland already shares the climate advantage that underpins Iceland’s success, but climate alone is insufficient. Energy scale, cable diversity, and deliberate policy have been the ingredients that have allowed Iceland to transform itself into a digital hub. Without similar moves, Greenland risks remaining a peripheral node rather than evolving into a sovereign center of digital resilience.

A PRACTICAL BLUEPRINT FOR GREENLAND TOWARDS OWNING ITS DIGITAL SOVEREIGNTY.

No single measure eliminates Greenland’s dependency on external infrastructure; some dependencies, such as banking, global SaaS, and international transit, are irreducible. But taken together, the steps described below maximize continuity of essential functions during cable cuts or satellite disruption, improve digital sovereignty, and strengthen bargaining power with global vendors. The trade-off is cost, complexity, and skill requirements, which means Greenland must prioritize where full sovereignty is truly mission-critical (health, emergency, governance) and accept graceful degradation elsewhere (social media, entertainment, SaaS ERP).

A. Keep local traffic local (routing & exchange).

Proposal: Create or strengthen a national IXP in Nuuk, with a secondary node (e.g., Sisimiut or Qaqortoq). Require ISPs, mobile operators, government, and major content/CDNs to peer locally. Add route-server policies with “island-mode” communities to ensure that intra-Greenland routes stay reachable even if upstream transit is lost. Deploy anycasted recursive DNS and host authoritative DNS for .gl domains on-island, with secondaries abroad.

Pros:

  • Dramatically reduces the latency, cost, and fragility of local traffic.
  • Ensures Greenland continues to “see itself” even if cut off internationally.
  • DNS split-horizon prevents sensitive internal queries from leaking off-island.

Cons:

  • Needs policy push. Voluntary peering is often insufficient in small markets.
  • Running redundant IXPs is a fixed cost for a small economy.
  • CDNs may resist deploying nodes without incentives (e.g., free rack and power).

A natural and technically well-founded reaction, especially given Greenland’s monopolistic structure under Tusass, is that an IXP or multiple ASNs might seem redundant. Both content and users reside on the same Tusass network, and intra-Greenland traffic already remains local at Layer 3. Adding an IXP would not change that in practice. Without underlying physical or organizational diversity, an exchange point cannot create redundancy on its own.

However, over the longer term, an IXP can still serve several strategic purposes. It provides a neutral routing and governance layer that enables future decentralization (e.g., government, education, or sectoral ASNs), strengthens “island-mode” resilience by isolating internal routes during disconnection from the global Internet, and supports more flexible traffic management and security policies. Notably, an IXP also offers a trust and independence layer that many third-party providers, such as hyperscalers, CDNs, and data-center networks, typically require before deploying local nodes. Few global operators are willing to peer inside the demarcation of a single national carrier’s network. A neutral IXP provides them with a technical and commercial interface independent of Tusass’s internal routing domain, thereby making on-island caching or edge deployments more feasible in the future. In that sense, the objection accurately reflects today’s technical reality, while the IXP concept anticipates tomorrow’s structural and sovereignty needs, bridging the gap between a functioning monopoly network and a future, more open digital ecosystem.

In practice (and in my opinion), Tusass is the only entity in Greenland with the infrastructure, staff, and technical capacity to operate an IXP. While this challenges the ideal of neutrality, it need not invalidate the concept if the exchange is run on behalf of Naalakkersuisut (the Greenlandic self-governing body) or under a transparent, multi-stakeholder governance model. The key issue is not who operates the IXP, but how it is governed. If Tusass provides the platform while access, routing, and peering policies are openly managed and non-discriminatory, the IXP can still deliver genuine benefits: local routing continuity, “island-mode” resilience, and a neutral interface that encourages future participation by hyperscalers, CDNs, and sectoral networks.
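
One way to make the DNS element of this proposal auditable is to routinely check whether the authoritative name servers for a .gl domain actually sit in Greenlandic address space. The sketch below assumes dnspython, and the prefix list is a placeholder rather than Greenland’s real address space; a real audit would use the prefixes announced under Greenland’s ASN.

# Minimal sketch: audit whether a domain's authoritative name servers are on-island.
# Assumes 'dnspython'; GREENLAND_PREFIXES is a placeholder (a real audit would use the
# prefixes actually announced under Greenland's ASN).
import ipaddress
import dns.resolver

GREENLAND_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]  # placeholder documentation prefix

def is_on_island(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in GREENLAND_PREFIXES)

def ns_locality(domain: str) -> dict:
    result = {}
    for ns in dns.resolver.resolve(domain, "NS"):
        ns_name = str(ns.target).rstrip(".")
        ips = [r.address for r in dns.resolver.resolve(ns_name, "A")]
        result[ns_name] = {ip: is_on_island(ip) for ip in ips}
    return result

print(ns_locality("nanoq.gl"))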

B. Host public-sector workloads on-island.

Proposal: Stand up a sovereign GovCloud GL in Nuuk (failover in another town, possible West-East redundancy), operated by a Greenlandic entity or tightly contracted partner. Prioritize email, collaboration, case handling, health IT, and emergency comms. Keep critical apps, archives, and MX/journaling on-island even if big SaaS (like M365) is still used abroad.

Pros:

  • Keeps essential government operations functional in an isolation event.
  • Reduces legal exposure to extraterritorial laws, such as the U.S. CLOUD Act.
  • Provides a training ground for local IT and cloud talent.

Cons:

  • High CapEx + ongoing OpEx; cloud isn’t a one-off investment.
  • Scarcity of local skills; risk of over-reliance on a few engineers.
  • Difficult to replicate the breadth of SaaS (ERP, HR, etc.) locally; selective hosting is realistic, full stack is not.

C. Make email & messaging “cable- and satellite-outage proof”.

Proposal: Host primary MX and mailboxes in GovCloud GL with local antispam, journaling, and security. Use off-island secondaries only for queuing. Deploy internal chat/voice/video systems (such as Matrix, XMPP, or local Teams/Zoom gateways) to ensure that intra-Greenland traffic never routes outside the country. Define an “emergency federation mode” to isolate traffic during outages.

Pros:

  • Ensures communication between government, hospitals, and municipalities continues during outages.
  • Local queues prevent message loss even if foreign relays are unreachable.
  • Pre-tested emergency federation builds institutional muscle memory.

Cons:

  • Operating robust mail and collaboration platforms locally is a resource-intensive endeavor.
  • Risk of user pushback if local platforms feel less polished than global SaaS.
  • The emergency “mode switch” adds operational complexity and must be tested regularly.

D. Put the content edge in Greenland.

Proposal: Require or incentivize CDN caches (Akamai, Cloudflare, Netflix, OS mirrors, software update repos, map tiles) to be hosted inside Greenland’s IXP(s).

Pros:

  • Improves day-to-day performance and cuts transit bills.
  • Reduces dependency on subsea cables for routine updates and content.
  • Keeps basic digital life (video, software, education platforms) usable in isolation.

Cons:

  • CDNs deploy based on scale; Greenland’s market may be marginal without a subsidy.
  • Hosting costs (power, cooling, rackspace) must be borne locally.
  • Only covers cached/static content; dynamic services (banking, SaaS) still break without external connectivity.

E. Write it into law & contracts.

Proposal: Mandate data residency for public-sector data; require “island-mode” design in procurement. Systems must demonstrate the ability to authenticate locally, operate offline, maintain usable data, and retain keys under Greenlandic custody. Impose peering obligations for ISPs and major SaaS/CDNs.

Pros:

  • Creates a predictable baseline for sovereignty across all agencies.
  • Prevents future procurement lock-in to non-resilient foreign SaaS.
  • Gives legal backing to technical requirements (IXP, residency, key custody).

Cons:

  • May raise the costs of IT projects (compliance overhead).
  • Without strong enforcement, rules risk becoming “checkbox” exercises.
  • Possible trade friction if foreign vendors see it as protectionist.

F. Strengthen physical resilience.

Proposal: Maintain and upgrade subsea cable capacity (Greenland Connect and Connect North), add diversity (spur/loop and new landings), and maintain long-haul microwave/satellite as a tertiary backup. Pre-engineer quality of service downgrades for graceful degradation.

Pros:

  • Adds true redundancy. Nothing replaces a working subsea cable.
  • Tertiary paths (satellite, microwave) keep critical services alive during failures.
  • Clear QoS downgrades make service loss more predictable and manageable.

Cons:

  • High (possibly very high) CapEx. New cable segments cost tens to hundreds of millions of euros.
  • Satellite/microwave backup cannot match the throughput of subsea cables.
  • International partners may be needed for funding and landing rights.

G. Security & trust.

Proposal: Deploy local PKI and HSMs for the government. Enforce end-to-end encryption. Require local custody of cryptographic keys. Audit vendor remote access and include kill switches.

Pros:

  • Prevents data exposure via foreign subpoenas (without Greenland’s knowledge).
  • Local trust anchors give confidence in sovereignty claims.
  • Kill switches and audit trails enhance vendor accountability.

Cons:

  • PKI and HSM management requires very specialized skills.
  • Adds operational overhead (key lifecycle, audits, incident response).
  • Without strong governance, there is a risk of “security theatre” rather than actual security.
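
To illustrate what a local trust anchor means in practice, the sketch below, assuming the Python cryptography package, generates a self-signed root certificate of the kind a Greenlandic government PKI could be built on. All names are illustrative, and in a real deployment the private key would live inside an HSM rather than in a file.

# Minimal sketch: generate a self-signed root CA certificate as a local trust anchor.
# Assumes the 'cryptography' package; names are illustrative, and in a real deployment
# the private key would live inside an HSM, not in a file.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

name = x509.Name([
    x509.NameAttribute(NameOID.COUNTRY_NAME, "GL"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "GovCloud GL (illustrative)"),
    x509.NameAttribute(NameOID.COMMON_NAME, "GovCloud GL Root CA (illustrative)"),
])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject equals issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("govcloud_gl_root.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))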

On-island first as default. A key step for Greenland is to make on-island first the norm so that local-to-local traffic stays local even if Atlantic cables fail. Concretely, stand up a national IXP in Nuuk to keep domestic traffic on the island and anchor CDN caches; build a Greenlandic “GovCloud” to host government email, identity, records, and core apps; and require all public-sector systems to operate in “island mode” (continue basic services offline from the rest of the world). Pair this with local MX, authoritative DNS, secure chat/collaboration, and CDN caches, so essential content and services remain available during outages. Back it with clear procurement rules on data residency and key custody to reduce both outage risk and exposure to foreign laws (e.g., CLOUD Act), acknowledging today’s heavy—if unsurprising—reliance on U.S. hyperscalers (Microsoft, Amazon, Google).

What this changes, and what it doesn’t. These measures don’t aim to sever external ties. They should rebalance them. The goal is graceful degradation that keeps government services, domestic payments, email, DNS, and health communications running on-island, while accepting that global SaaS and card rails will go dark during isolation. Finally, it’s also worth remembering that local caching is only a bridge, not a substitute for global connectivity. In the first days of an outage, caches would keep websites, software updates, and even video libraries available, allowing local email and collaboration tools to continue running smoothly. But as the weeks pass, those caches would inevitably grow stale. News sites, app stores, and streaming platforms would stop refreshing, while critical security updates, certificates, and antivirus definitions would no longer be available, leaving systems exposed to risk. If isolation lasted for months, the impact would be much more profound. Banking and card clearing would be suspended, SaaS-driven ERP systems would break down, and Greenland would slide into a “local only” economy, relying on cash and manual processes. Over time, the social impact would also be felt, with the population cut off from global news, communication, and social platforms. Caching, therefore, buys time, but not independence. It can make an outage manageable in the short term, yet in the long run, Greenland’s economy, security, and society depend on reconnecting to the outside world.

The Bottom line. Full sovereignty is unrealistic for a sparse, widely distributed country, and I don’t think it makes sense to strive for that. It just appears impractical. In my opinion, partial sovereignty is both achievable and valuable. Make on-island first the default, keep essential public services and domestic comms running during cuts, and interoperate seamlessly when subsea links and satellites are up. This shifts Greenland from its current state of strategic fragility to one of managed resilience, without cutting itself off from the rest of the internet.

ACKNOWLEDGEMENT.

I want to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article. I would also like to thank Dr. Signe Ravn-Højgaard, from “Tænketanken Digital Infrastruktur”, and the Sermitsiaq article “Digital afhængighed af udlandet” (“Digital dependency on foreign countries”) by Poul Krarup, for inspiring this work, which is also a continuation of my previous research and article titled “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”. I would like to thank Lasse Jarlskov for his insightful comments and constructive feedback on this article. His observations regarding routing, OSI layering, and the practical realities of Greenland’s network architecture were both valid and valuable, helping refine several technical arguments and improve the overall clarity of the analysis.

CODE AND DATASETS.

The Python code and datasets used in the analysis are available on my public GitHub: https://github.com/drkklarsen/greenland_digital_infrastructure_mapping (the code is still a work in progress, but it is functional and will generate data similar to that analyzed in this article).

ABBREVIATION LIST.

ASN — Autonomous System Number: A unique identifier assigned to a network operator that controls its own routing on the Internet, enabling the exchange of traffic with other networks using the Border Gateway Protocol (BGP).

BGP — Border Gateway Protocol: The primary routing protocol of the Internet, used by Autonomous Systems to exchange information about which paths data should take across networks.

CDN — Content Delivery Network: A system of distributed servers that cache and deliver content (such as videos, software updates, or websites) closer to users, reducing latency and dependency on international links.

CLOUD Act — Clarifying Lawful Overseas Use of Data Act: A U.S. law that allows American authorities to demand access to data stored abroad by U.S.-based cloud providers, raising sovereignty and privacy concerns for other countries.

DMARC — Domain-based Message Authentication, Reporting and Conformance: An email security protocol that tells receiving servers how to handle messages that fail authentication checks, protecting against spoofing and phishing.

DKIM — DomainKeys Identified Mail: An email authentication method that uses cryptographic signatures to verify that a message has not been altered and truly comes from the claimed sender.

DNS — Domain Name System: The hierarchical system that translates human-readable domain names (like example.gl) into IP addresses that computers use to locate servers.

ERP — Enterprise Resource Planning: A type of integrated software system that organizations use to manage business processes such as finance, supply chain, HR, and operations.

GL — Greenland country code top-level domain (.gl): The internet country code for Greenland, used for local domain names such as nanoq.gl.

GovCloud — Government Cloud: A sovereign or dedicated cloud infrastructure designed for hosting public-sector applications and data within national jurisdiction.

HSM — Hardware Security Module: A secure physical device that manages cryptographic keys and operations, used to protect sensitive data and digital transactions.

IoT — Internet of Things: A network of physical devices (sensors, appliances, vehicles, etc.) connected to the internet, capable of collecting and exchanging data.

IP — Internet Protocol: The fundamental addressing system of the Internet, enabling data packets to be sent from one computer to another.

ISP — Internet Service Provider: A company or entity that provides customers with access to the internet and related services.

IXP — Internet Exchange Point: A physical infrastructure where networks interconnect directly to exchange internet traffic locally rather than through international transit links.

MX — Mail Exchange (Record): A type of DNS record that specifies the mail servers responsible for receiving email on behalf of a domain.

PKI — Public Key Infrastructure: A framework for managing encryption keys and digital certificates, ensuring secure electronic communications and authentication.

SaaS — Software as a Service: Cloud-based applications delivered over the internet, such as Microsoft 365 or Google Workspace, typically hosted on servers outside the country.

SPF — Sender Policy Framework: An email authentication protocol that defines which mail servers are authorized to send email on behalf of a domain, reducing the risk of forgery.

Tusass — The national telecommunications provider of Greenland, formerly Tele Greenland, responsible for submarine cables, satellite links, and domestic connectivity.

UAV — Unmanned Aerial Vehicle: An aircraft without a human pilot on board, often used for surveillance, monitoring, or communications relay.

UUV — Unmanned Underwater Vehicle: A robotic submarine used for monitoring, surveying, or securing undersea infrastructure such as cables.

The Telco Ascension to the Sky.

It’s 2045. Earth is green again. Free from cellular towers and the terrestrial radiation of yet another G, no longer needed to justify endless telecom upgrades. Humanity has finally transcended its communication needs to the sky, fully served by swarms of Low Earth Orbit (LEO) satellites.

Millions of mobile towers have vanished. No more steel skeletons cluttering skylines and nature in general. In their place: millions of beams from tireless LEO satellites, now whispering directly into our pockets from orbit.

More than 1,200 MHz of once terrestrially-bound cellular spectrum below the C-band had been uplifted to LEO satellites. Nearly 1,500 MHz between 3 and 6 GHz had likewise been liberated from its earthly confines, now aggressively pursued by the buzzing broadband constellations above.

It all works without a single modification to people’s beloved mobile devices. Everyone enjoys the same, or better, cellular service than in those wretched days of clinging to terrestrial-based infrastructure.

So, how did this remarkable transformation come about?

THE COVERAGE.

First, let’s talk about coverage. The chart below tells the story of orbital ambition through three very grounded curves. On the x-axis, we have the inclination angle, which is the degree to which your satellites are encouraged to tilt away from the equator to perform their job. On the y-axis: how much of the planet (and its people) they’re actually covering. The orange line gives us land area coverage. It starts low, as expected: tropical satellites don’t care much for Greenland. But as the inclination rises, so does their sense of duty to the extremes (the poles, that is). The yellow line represents population coverage, which grows faster than land, maybe because humans prefer to live near each other (or they like the scenery). By the time you reach ~53° inclination, you’re covering about 94% of humanity and 84% of land areas. The dashed white line represents mobile cell coverage, the real estate of telecom towers. A constellation at a 53° inclination would cover nearly 98% of all mobile site infrastructure, which serves as a proxy for economic interest. This curve closely follows the population curve, but adds just a bit of spice, reflecting urban density and tower sprawl.

This chart illustrates the cumulative global coverage achieved at varying orbital inclination angles for three key metrics: land area (orange), population (yellow), and estimated terrestrial mobile cell sites (dashed white). As inclination increases from equatorial (0°) to polar (90°), the percentage of global land and population coverage rises accordingly. Notably, population coverage reaches approximately 94% at ~53° inclination, a critical threshold for satellite constellations aiming to maximize global user reach without the complexity of polar orbits. The mobile cell coverage curve reflects infrastructure density and aligns closely with population distribution.
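
The logic behind such a curve is simple enough to sketch in code. The snippet below assumes a crude model in which a constellation at inclination i serves all latitudes up to roughly |i| degrees, and uses an illustrative population-by-latitude-band table of my own, not the dataset behind the chart, to estimate cumulative population coverage.

```python
# Sketch: crude estimate of population coverage versus orbital inclination.
# Assumptions: a constellation at inclination i serves latitudes |lat| <= i, and
# the latitude-band population shares below are illustrative, not real data.
ILLUSTRATIVE_POP_SHARE_BY_BAND = {
    # (lat_low, lat_high): share of world population living in that |latitude| band
    (0, 10): 0.17, (10, 20): 0.22, (20, 30): 0.26,
    (30, 40): 0.18, (40, 50): 0.09, (50, 60): 0.06,
    (60, 90): 0.02,
}

def population_coverage(inclination_deg: float) -> float:
    """Fraction of population inside the latitudes reachable at a given inclination."""
    covered = 0.0
    for (lo, hi), share in ILLUSTRATIVE_POP_SHARE_BY_BAND.items():
        if inclination_deg >= hi:
            covered += share                                           # whole band covered
        elif inclination_deg > lo:
            covered += share * (inclination_deg - lo) / (hi - lo)      # partial band
    return covered

for inc in (30, 45, 53, 70, 90):
    print(f"inclination {inc:>2}°: ~{population_coverage(inc):.0%} of population")
```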

The satellite constellation’s beams have replaced traditional terrestrial cells, providing a one-to-one coverage substitution. They not only replicate coverage in former legacy cellular areas but also extend service to regions that previously lacked connectivity due to low commercial priority from telecom operators. Today, over 3 million beams substitute obsolete mobile cells, delivering comparable service across densely populated areas. An additional 1 million beams have been deployed to cover previously unserved land areas, primarily rural and remote regions, using broader, lower-capacity beams with radii up to 10 kilometers. While these rural beams do not match the density or indoor penetration of urban cellular coverage, they represent a cost-effective means of achieving global service continuity, especially for basic connectivity and outdoor access in sparsely populated zones.

Conclusion? If you want to build a global satellite mobile network, you don’t need to orbit the whole planet. Just tilt your constellation enough to touch the crowded parts, and leave the tundra to the poets. However, this was the “original sin” of LEO Direct-to-Cellular satellites.

THE DEMAND.

Although global mobile traffic growth slowed notably after the early 2020s, and the terrestrial telecom industry drifted toward its “end of history” moment, the orbital network above inherited a double burden. Not only did satellite constellations need to deliver continuous, planet-wide coverage, a milestone legacy telecoms had never reached, despite millions of ground sites, but they also had to absorb globally converging traffic demands as billions of users crept steadily toward the throughput mean.

This chart shows the projected DL traffic across a full day (UTC), based on regions where local time falls within the evening Busy Hour window (17:00–22:00) and that are within satellite coverage (minimum elevation ≥ 25°). The BH population is calculated hourly, taking into account time zone alignment and visibility, with a 20% concurrency rate applied. Each active user is assumed to consume 500 Mbps downlink in 2045.
This chart shows the uplink traffic demand experienced across a full day (UTC), based on regions under Busy Hour conditions (17:00–22:00 local time) and visible to the satellite constellation (with a minimum elevation angle of 25°). For each UTC hour, the BH population within coverage is calculated using global time zone mapping. Assuming a 20% concurrency rate and an average uplink throughput of 50 Mbps per active user, the total UL traffic is derived. The resulting curve reflects how demand shifts in response to the Earth’s rotation beneath the orbital band.
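
The aggregate figures behind these demand curves are simply the product of busy-hour population in coverage, concurrency, and per-user throughput. Here is a back-of-the-envelope sketch; the 600 million busy-hour population is an illustrative assumption chosen to reproduce the peak downlink and uplink figures quoted later in the text, not a number taken from the charts.

```python
# Sketch: busy-hour traffic = covered BH population x concurrency x per-user rate.
# The 600 million busy-hour population is an illustrative assumption.
BH_POPULATION = 600e6     # people in the evening busy-hour window and within coverage
CONCURRENCY   = 0.20      # share of them active at once
DL_PER_USER   = 500e6     # 500 Mbps downlink per active user (2045 assumption)
UL_PER_USER   = 50e6      # 50 Mbps uplink per active user

active_users = BH_POPULATION * CONCURRENCY
dl_tbps = active_users * DL_PER_USER / 1e12
ul_tbps = active_users * UL_PER_USER / 1e12
print(f"DL demand ≈ {dl_tbps:,.0f} Tbps, UL demand ≈ {ul_tbps:,.0f} Tbps")
# -> DL demand ≈ 60,000 Tbps, UL demand ≈ 6,000 Tbps
```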

The radio access uplink architecture relies on low round-trip times for proper scheduling, timing alignment, and HARQ (Hybrid Automatic Repeat Request) feedback cycles. The propagation delay at 350 km yields a round-trip time of about 2.5 to 3 milliseconds, which falls within the bounds of what current specifications can accommodate. This is particularly important for latency-sensitive applications such as voice, video, and interactive services that require low jitter and reliable feedback mechanisms. In contrast, orbits at 550 km or above push latency closer to the edge of what NR protocols can tolerate, which could hinder performance or require non-standard adaptations. The beam geometry also plays a central role. At lower altitudes, satellite beams projected to the ground are inherently smaller. This smaller footprint translates into tighter beam patterns with narrower 3 dB cut-offs, which significantly improves frequency reuse and spatial isolation. These attributes are important for deploying high-capacity networks in densely populated urban environments, where interference and spectrum efficiency are paramount. Narrower beams allow D2C operators to steer coverage toward demand centers while minimizing adjacent-beam interference dynamically. Operating at 350 km is not without drawbacks. The satellite’s ground footprint at this altitude is smaller, meaning that more satellites are required to achieve full Earth coverage. Additionally, satellites at this altitude are exposed to greater atmospheric drag, resulting in shorter orbital lifespans unless they are equipped with more powerful or efficient propulsion systems to maintain altitude. The current design aims for a 5-year orbital lifespan. Despite this, the shorter lifespan has an upside, as it reduces the long-term risks of space debris. Deorbiting occurs naturally and quickly at lower altitudes, making the constellation more sustainable in the long term.
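
The latency argument is easy to verify with simple spherical-Earth geometry. The sketch below computes slant range and round-trip time at nadir, at a mid-elevation path, and at the 25° coverage edge assumed earlier; the quoted 2.5 to 3 milliseconds corresponds to the higher-elevation paths, with delay growing toward the edge of coverage.

```python
# Sketch: propagation delay from a 350 km orbit using simple spherical geometry.
import math

C_KM_S = 299_792.458   # speed of light, km/s
R_E    = 6_371.0       # mean Earth radius, km
ALT    = 350.0         # orbital altitude, km

def slant_range_km(elevation_deg: float) -> float:
    """Distance from a ground user to the satellite at a given elevation angle."""
    e = math.radians(elevation_deg)
    return math.sqrt((R_E + ALT) ** 2 - (R_E * math.cos(e)) ** 2) - R_E * math.sin(e)

for elev in (90, 45, 25):   # nadir, mid-elevation, coverage edge
    d = slant_range_km(elev)
    rtt_ms = 2 * d / C_KM_S * 1e3
    print(f"elevation {elev:>2}°: slant range {d:6.0f} km, RTT ≈ {rtt_ms:.1f} ms")
```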

THE CONSTELLATION.

The satellite-to-cellular infrastructure has now fully matured into a global-scale system capable of delivering mobile broadband services that are not only on par with, but in many regions surpass, the performance of terrestrial cellular networks. At its core lies a constellation of low Earth orbit satellites operating at an altitude of 350 kilometers, engineered to provide seamless, high-quality indoor coverage for both uplink and downlink, even in densely urban environments.

To meet the evolving expectations of mobile users, each satellite beam delivers a minimum of 50 Mbps of uplink capacity and 500 Mbps of downlink capacity per user, ensuring full indoor quality even in highly cluttered environments. Uplink transmissions utilize the 600 MHz to 1800 MHz band, providing 1200 MHz of aggregated bandwidth. Downlink channels span 1500 MHz of spectrum, ranging from 2100 MHz to the upper edge of the C-band. At the network’s busiest hour (e.g., around 20:00 local time) across the most densely populated regions south of 53° latitude, the system supports a peak throughput of 60,000 Tbps for downlink and 6,000 Tbps for uplink. To guarantee reliability under real-world utilization, the system is engineered with a 25% capacity overhead, raising the design thresholds to 75,000 Tbps for DL and 7,500 Tbps for UL during peak demand.

Each satellite beam is optimized for high spectral efficiency, leveraging advanced beamforming, adaptive coding, and cutting-edge modulation. Under these conditions, downlink beams deliver 4.5 Gbps, while uplink beams, facing more challenging reception constraints, achieve 1.8 Gbps. Meeting the adjusted peak-hour demand requires approximately 16.7 million active DL beams and 4.2 million UL beams, amounting to over 20.8 million simultaneous beams concentrated over the peak demand region.

Thanks to significant advances in onboard processing and power systems, each satellite now supports up to 5,000 independent beams simultaneously. This capability reduces the number of satellites required to meet regional peak demand to approximately 4,200. These satellites are positioned over a region spanning an estimated 45 million square kilometers, covering the evening-side urban and suburban areas of the Americas, Europe, Africa, and Asia. This configuration yields a beam density of nearly 0.46 beams per square kilometer, equivalent to one active beam for every 2 square kilometers, densely overlaid to provide continuous, per-user, indoor-grade connectivity. In urban cores, beam radii are typically below 1 km, whereas in lower-density suburban and rural areas, the system adjusts by using larger beams without compromising throughput.

Because peak demand rotates longitudinally with the Earth’s rotation, only a portion of the entire constellation is positioned over this high-demand region at any given time. To ensure 4,200 satellites are always present over the region during peak usage, the total constellation comprises approximately 20,800 satellites, distributed across several hundred orbital planes. These planes are inclined and phased to optimize temporal availability, revisit frequency, and coverage uniformity while minimizing latency and handover complexity.
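
The chain from peak traffic to beams to satellites to constellation size in the preceding paragraphs can be reproduced with a few divisions. In the sketch below, the roughly 20% share of the constellation over the peak region is an implied assumption, inferred from the 4,200-of-20,800 ratio rather than stated independently in the text.

```python
# Sketch: reproduce the beam, satellite, and constellation sizing quoted in the text.
DL_DESIGN_TBPS   = 75_000     # peak DL demand incl. 25% overhead
UL_DESIGN_TBPS   = 7_500      # peak UL demand incl. 25% overhead
DL_PER_BEAM_GBPS = 4.5
UL_PER_BEAM_GBPS = 1.8
BEAMS_PER_SATELLITE = 5_000
SHARE_OVER_PEAK_REGION = 0.20   # implied: ~4,200 of ~20,800 satellites over the region

dl_beams = DL_DESIGN_TBPS * 1e3 / DL_PER_BEAM_GBPS     # Tbps -> Gbps
ul_beams = UL_DESIGN_TBPS * 1e3 / UL_PER_BEAM_GBPS
total_beams = dl_beams + ul_beams
sats_over_region = total_beams / BEAMS_PER_SATELLITE
constellation = sats_over_region / SHARE_OVER_PEAK_REGION

print(f"DL beams ≈ {dl_beams/1e6:.1f} M, UL beams ≈ {ul_beams/1e6:.1f} M")
print(f"Satellites over peak region ≈ {sats_over_region:,.0f}")
print(f"Total constellation ≈ {constellation:,.0f} satellites")
```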

The resulting Direct-to-Cellular satellite constellation and system of today is among the most ambitious communications infrastructures ever created. With more than 20 million simultaneous beams dynamically allocated across the globe, it has effectively supplanted traditional mobile towers in many regions, delivering reliable, high-speed, indoor-capable broadband connectivity precisely where and when people need it.

When Telcos Said ‘Not Worth It,’ Satellites Said ‘Hold My Beam.’ In the world of 2045, even the last village at the end of the dirt road streams at 500 Mbps. No tower in sight, just orbiting compassion and economic logic finally aligned.

THE SATELLITE.

The Cellular Device to Satellite Path.

The uplink antennas aboard the Direct-to-Cellular satellites have been specifically engineered to reliably receive indoor-quality transmissions from standard (unmodified) mobile devices operating within the 600 MHz to 1800 MHz band. Each device is expected to deliver a minimum of 50 Mbps uplink throughput, even when used indoors in heavily cluttered urban environments. This performance is made possible through a combination of wideband spectrum utilization, precise beamforming, and extremely sensitive receiving systems in orbit. The satellite uplink system operates across 1200 MHz of aggregated bandwidth (e.g., 60 channels of 20 MHz), spanning the entire upper UHF and lower S-band. Because uplink signals originate from indoor environments, where wall and structural penetration losses can exceed 20 dB, the satellite link budget must compensate for the combined effects of indoor attenuation and free-space propagation at a 350 km orbital altitude. At 600 MHz, which represents the lowest frequency in the UL band, the free space path loss alone is approximately 139 dB. When this is compounded with indoor clutter and penetration losses, the total attenuation the satellite must overcome reaches approximately 159 dB or more.

Rather than specifying the antenna system at a mid-band average frequency, such as 900 MHz (i.e., the mid-band of the 600 MHz to 1800 MHz range), the system has been conservatively engineered for worst-case performance at 600 MHz. This design philosophy ensures that the antenna will meet or exceed performance requirements across the entire uplink band, with higher frequencies benefiting from naturally improved gain and narrower beamwidths. This choice guarantees that even the least favorable channels, those near 600 MHz, support reliable indoor-grade uplink service at 50 Mbps, with a minimum required SNR of 10 dB to sustain up to 16-QAM modulation. Achieving this level of performance at 600 MHz necessitated a large physical aperture. The uplink receive arrays on these satellites have grown to approximately 700 to 750 m² in area, and are constructed using modular, lightweight phased-array tiles that unfold in orbit. This aperture size enables the satellite to achieve a receive gain of approximately 45 dBi at 600 MHz, which is essential for detecting low-power uplink transmissions with high spectral efficiency, even from users deep indoors and under cluttered conditions.
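
Both the path-loss and the aperture-gain figures in this section follow from textbook formulas, and a short sketch makes them easy to check. The 0.85 aperture efficiency is an assumption of mine; the text simply quotes a gain of approximately 45 dBi.

```python
# Sketch: free-space path loss at 600 MHz over 350 km, and the gain of a ~725 m² aperture.
import math

C = 299_792_458.0          # speed of light, m/s

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss, 20*log10(4*pi*d/lambda)."""
    lam = C / freq_hz
    return 20 * math.log10(4 * math.pi * dist_m / lam)

def aperture_gain_dbi(area_m2: float, freq_hz: float, efficiency: float = 0.85) -> float:
    """Gain of an aperture antenna, 10*log10(eta * 4*pi*A / lambda^2)."""
    lam = C / freq_hz
    return 10 * math.log10(efficiency * 4 * math.pi * area_m2 / lam**2)

f, d = 600e6, 350e3
print(f"FSPL at 600 MHz, 350 km (nadir): {fspl_db(f, d):.1f} dB")          # ~138.9 dB
print(f"Plus ~20 dB indoor/clutter loss:  {fspl_db(f, d) + 20:.1f} dB")    # ~159 dB total
print(f"Gain of 725 m² aperture @ 600 MHz: {aperture_gain_dbi(725, f):.1f} dBi")  # ~45 dBi
```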

Unlike earlier systems, such as AST SpaceMobile’s BlueBird 1, launched in the mid-2020s with an aperture of around 900 m² and challenged by the need to acquire indoor uplink signals, today’s Direct-to-Cellular (D2C) satellites optimize the uplink and downlink arrays separately. This separation allows each aperture to be custom-designed for its frequency and link budget requirements. The uplink arrays incorporate wideband, dual-polarized elements, such as log-periodic or Vivaldi structures, backed by high-dynamic-range low-noise amplifiers and a distributed digital beamforming backend. Assisted by real-time AI beam management, each satellite can simultaneously support and track up to 2,500 uplink beams, dynamically allocating them across the active coverage region.

Despite their size, these receive arrays are designed for compact launch configurations and efficient in-orbit deployment. Technologies such as inflatable booms, rigidizable mesh structures, and ultralight composite materials allow the arrays to unfold into large apertures while maintaining structural stability and minimizing mass. Because these arrays are passive receivers, thermal loads are significantly lower than those of transmit systems. Heat generation is primarily limited to the digital backend and front-end amplification chains, which are distributed across the array surface to facilitate efficient thermal dissipation.

The Satellite to Cellular Device Path.

The downlink communication path aboard Direct-to-Cellular satellites is engineered as a fully independent system, physically and functionally separated from the uplink antenna. This separation reflects a mature architectural philosophy that has been developed over decades of iteration. The downlink and uplink systems serve fundamentally different roles and operate across vastly different frequency bands, with their power, thermal, and antenna constraints. The downlink system operates in the frequency range from 2100 MHz up to the upper end of the C-band, typically around 4200 MHz. This is significantly higher than the uplink range, which extends from 600 to 1800 MHz. Due to this disparity in wavelength, a factor of nearly six between the lowest uplink and highest downlink frequencies, a shared aperture is neither practical nor efficient. It is widely accepted today that integrating transmit and receive functions into a single broadband aperture would compromise performance on both ends. Instead, today’s satellites utilize a dual-aperture approach, with the downlink antenna system optimized exclusively for high-frequency transmission and the uplink array designed independently for low-frequency reception.

In order to deliver 500 Mbps per user with full indoor coverage, each downlink beam must sustain approximately 4.5 Gbps, accounting for spectral reuse and beam overlap. At an orbital altitude of 350 kilometers, downlink beams must remain narrow, typically covering no more than a 1-kilometer radius in urban zones, to match uplink geometry and maintain beam-level concurrency. The antenna gain required to meet these demands is in the range of 50 to 55 dBi, which the satellites achieve using high-frequency phased arrays with a physical aperture of approximately 100 to 200 m². Because the downlink system is responsible for high-power transmission, the antenna tiles incorporate GaN-based solid-state power amplifiers (SSPAs), which deliver hundreds of watts per panel. This results in an overall effective isotropic radiated power (EIRP) of 50 to 60 dBW per beam, sufficient to reach deep indoor devices even at the upper end of the C-band. The power-intensive nature of the downlink system introduces thermal management challenges (described in the next section), which are addressed by physically isolating the transmit arrays from the receiver surfaces. The downlink and uplink arrays are positioned on opposite sides of the spacecraft bus or thermally decoupled through deployable booms and shielding layers.
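
Taken together, the quoted gain and EIRP figures imply surprisingly modest per-beam RF power. The sketch below simply derives that power from the numbers above; the specific gain and EIRP pairings are illustrative combinations within the quoted ranges.

```python
# Sketch: per-beam transmit power implied by the quoted downlink gain and EIRP.
def per_beam_power_w(eirp_dbw: float, gain_dbi: float) -> float:
    """EIRP(dBW) = 10*log10(P_tx) + G  =>  P_tx = 10^((EIRP - G)/10) watts."""
    return 10 ** ((eirp_dbw - gain_dbi) / 10)

for eirp, gain in [(50, 50), (55, 52), (60, 55)]:
    print(f"EIRP {eirp} dBW with {gain} dBi gain -> ~{per_beam_power_w(eirp, gain):.1f} W per beam")
# A few watts per beam, multiplied across thousands of simultaneous beams per satellite,
# adds up to kilowatts of RF, in line with the thermal discussion later in the article.
```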

The downlink beamforming is fully digital, allowing real-time adaptation of beam patterns, power levels, and modulation schemes. Each satellite can form and manage up to 2,500 independent downlink beams, which are coordinated with their uplink counterparts to ensure tight spatial and temporal alignment. Advanced AI algorithms help shape beams based on environmental context, usage density, and user motion, thereby further improving indoor delivery performance. The modulation schemes used on the downlink frequently reach 256-QAM and beyond, with spectral efficiencies of six to eight bits per second per Hz in favorable conditions.

The physical deployment of the downlink antenna varies by platform, but most commonly consists of front-facing phased array panels or cylindrical surfaces fitted with azimuthally distributed tiles. These panels can be either fixed or mounted on articulated platforms that allow active directional steering during orbit, depending on the beam coverage strategy, an arrangement also referred to as gimballed.

No Bars? Not on This Planet. In 2045, even the polar bears will have broadband. When satellites replaced cell towers, the Arctic became just another neighborhood in the global gigabit grid.

Satellite System Architecture.

The Direct-to-Cellular satellites have evolved into high-performance, orbital base stations that far surpass the capabilities of early systems, such as AST SpaceMobile’s Bluebird 1 or SpaceX’s Starlink V2 Mini. These satellites are engineered not merely to relay signals, but to deliver full-featured indoor mobile broadband connectivity directly to standard handheld devices, anywhere on Earth, including deep urban cores and rural regions that have been historically underserved by terrestrial infrastructure.

As described earlier, today’s D2C satellite supports up to 5,000 simultaneous beams, enabling real-time uplink and downlink with mobile users across a broad frequency range. The uplink phased array, designed to capture low-power, deep-indoor signals at 600 MHz, occupies approximately 750 m². The DL array, optimized for high-frequency, high-power transmission, spans 150 to 200 m². Unlike early designs, such as Bluebird 1, which used a single, large combined antenna, today’s satellites separate the uplink and downlink arrays to optimize each for performance, thermal behavior, and mechanical deployment. These two systems are typically mounted on opposite sides of the satellite and thermally isolated from one another.

Thermal management is one of the defining challenges of this architecture. While AST’s Bluebird 1 (i.e., from mid-2020s) boasted a large antenna aperture approaching 900 m², its internal systems generated significantly less heat. Bluebird 1 operated with a total power budget of approximately 10 to 12 kilowatts, primarily dedicated to a handful of downlink beams and limited onboard processing. In contrast, today’s D2C satellite requires a continuous power supply of 25 to 35 kilowatts, much of which must be dissipated as heat in orbit. This includes over 10 kilowatts of sustained RF power dissipation from the DL system alone, in addition to thermal loads from the digital beamforming hardware, AI-assisted compute stack, and onboard routing logic. The key difference lies in beam concurrency and onboard intelligence. The satellite manages thousands of simultaneous, high-throughput beams, each dynamically scheduled and modulated using advanced schemes such as 256-QAM and beyond. It must also process real-time uplink signals from cluttered environments, allocate spectral and spatial resources, and make AI-driven decisions about beam shape, handovers, and interference mitigation. All of this requires a compute infrastructure capable of delivering 100 to 500 TOPS (tera-operations per second), distributed across radiation-hardened processors, neural accelerators, and programmable FPGAs. Unlike AST’s Bluebird 1, which offloaded most of its protocol stack to the ground, today’s satellites run much of the 5G core network onboard. This includes RAN scheduling, UE mobility management, and segment-level routing for backhaul and gateway links.

This computational load compounds the satellite’s already intense thermal environment. Passive cooling alone is insufficient. To manage thermal flows, the spacecraft employs large radiator panels located on its outer shell, advanced phase-change materials embedded behind the DL tiles, and liquid loop systems that transfer heat from the RF and compute zones to the radiative surfaces. These thermal systems are intricately zoned and actively managed, preventing the heat from interfering with the sensitive UL receive chains, which require low-noise operation under tightly controlled thermal conditions. The DL and UL arrays are thermally decoupled not just to prevent crosstalk, but to maintain stable performance in opposite thermal regimes: one dominated by high-power transmission, the other by low-noise reception.

To meet its power demands, the satellite utilizes a deployable solar sail array that spans 60 to 80 m². These sails are fitted with ultra-high-efficiency solar cells with conversion efficiencies of 30–35% or more, mounted on articulated booms that track the sun independently of the satellite’s Earth-facing orientation. They provide enough current to sustain continuous operation during daylight periods, while high-capacity batteries, likely based on lithium-sulfur or solid-state chemistry, handle nighttime and eclipse coverage. Compared to the Starlink V2 Mini, which generates around 2.5 to 3.0 kilowatts, and the Bluebird 1, which operates at roughly 10–12 kilowatts, today’s system requires nearly three times the generation and five times the thermal rejection capability of the initial satellites of the mid-2020s.
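
The solar-array numbers can be sanity-checked against the solar constant. The sketch below ignores packing losses, pointing, and degradation, so it is an optimistic upper bound rather than a design calculation.

```python
# Sketch: rough solar power from a 60-80 m² array at 30-35% cell efficiency.
SOLAR_CONSTANT = 1361.0   # W/m² above the atmosphere at Earth's distance from the Sun

def array_power_kw(area_m2: float, efficiency: float) -> float:
    return SOLAR_CONSTANT * area_m2 * efficiency / 1e3

for area, eff in [(60, 0.30), (70, 0.32), (80, 0.35)]:
    print(f"{area} m² at {eff:.0%} efficiency -> ~{array_power_kw(area, eff):.0f} kW")
# -> roughly 24-38 kW, bracketing the 25-35 kW power budget quoted in the text.
```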

Structurally, the satellite is designed to support this massive infrastructure. It uses a rigid truss core (i.e., lattice structure) with deployable wings for the DL system and a segmented, mesh-based backing for the UL aperture. Propulsion is provided by Hall-effect or ion thrusters, with 50 to 100 kilograms of inert propellant onboard to support three to five years of orbital station-keeping at an altitude of 350 kilometers. This height is chosen for its latency and spatial reuse advantages, but it also imposes continuous drag, requiring persistent thrust.

The AST Bluebird 1 may have appeared physically imposing in its time due to its large antenna, but in thermal, computational, and architectural complexity, today’s D2C satellite, 20 years later, far exceeds anything imagined two decades earlier. The heat generated by its massive beam concurrency, onboard processing, and integrated network core makes its thermal management challenge not only more severe than Bluebird 1’s but also one of the primary limiting factors in the satellite’s physical and functional design. This thermal constraint, in turn, shapes the layout of its antennas, compute stack, power system, and propulsion.

Mass and Volume Scaling.

The AST’s Bluebird 1, launched in the mid-2020s, had a launch mass of approximately 1,500 kilograms. Its headline feature was a 900 m² unfoldable antenna surface, designed to support direct cellular connectivity from space. However, despite its impressive aperture, the system was constrained by limited beam concurrency, modest onboard computing power, and a reliance on terrestrial cores for most network functions. The bulk of its mass was dominated by structural elements supporting its large antenna surface and the power and thermal subsystems required to drive a relatively small number of simultaneous links. Bluebird’s propulsion was chemical, optimized for initial orbit raising and limited station-keeping, and its stowed volume fit comfortably within standard medium-lift payload fairings. Starlink’s V2 Mini, although smaller in physical aperture, featured a more balanced and compact architecture. Weighing roughly 800 kilograms at launch, it was designed around high-throughput broadband rather than direct-to-cellular use. Its phased array antenna surface was closer to 20–25 m², and it was optimized for efficient manufacturing and high-density orbital deployment. The V2 Mini’s volume was tightly packed, with solar panels, phased arrays, and propulsion modules folded into a relatively low-profile bus optimized for rapid deployment and low-cost launch stacking. Its onboard compute and thermal systems were scaled to match its more modest power budget, which typically hovered around 2.5 to 3.0 kilowatts.

In contrast, today’s satellites occupy an entirely new performance regime. The dry mass of the satellite ranges between 2,500 and 3,500 kilograms, depending on specific configuration, thermal shielding, and structural deployment method. This accounts for its large deployable arrays, high-density digital payload, radiator surfaces, power regulation units, and internal trusses. The wet mass, including onboard fuel reserves for at least 5 years of station-keeping at 350 km altitude, increases by up to 800 kilograms, depending on the propulsion type (e.g., Hall-effect or gridded ion thrusters) and orbital inclination. This brings the total launch mass to approximately 3,000 to 4,500 kilograms, or more than double that of AST’s old Bluebird 1 and roughly five times that of SpaceX’s Starlink V2 Mini.

Volume-wise, the satellites require a significantly larger stowed configuration than either AST’s Bluebird 1 or SpaceX’s Starlink V2 Mini. While both of those earlier systems were designed to fit within traditional launch fairings, Bluebird 1 utilized a folded hinge-based boom structure, and Starlink V2 Mini was optimized for ultra-compact stacking. Today’s satellite demands next-generation fairing geometries, such as 5-meter-class launchers or dual-stack configurations. This is driven by the dual-antenna architecture and radiator arrays, which, although cleverly folded during launch, expand dramatically once deployed in orbit. In its operational configuration, the satellite spans tens of meters across its antenna booms and solar sails. The uplink array, built as a lightweight, mesh-backed surface supported by rigidizing frames or telescoping booms, unfolds to a diameter of approximately 30 to 35 meters, substantially larger than Bluebird 1’s ~20–25 meter maximum span and far beyond the roughly 10-meter unfolded span of Starlink V2 Mini. The downlink panels, although smaller, are arranged for precise gimballed orientation (i.e., a pivoting mechanism allowing rotation or tilt along one or more axes) and integrated thermal control, which further expands the total deployed volume envelope. The volumetric footprint of today’s D2C satellite is not only larger in surface area but also more spatially complex, as its segregated UL and DL arrays, thermal zones, and solar wings must avoid interference while maintaining structural and thermal equilibrium, in contrast to the simplified flat-pack layout of Starlink V2 Mini and the monolithic boom-deployed design of Bluebird 1.

The increase in dry mass, wet mass, and deployed volume is not a byproduct of inefficiency, but a direct result of very substantial performance improvements that were required to replace terrestrial mobile towers with orbital systems. Today’s D2C satellites deliver an order of magnitude more beam concurrency, spectral efficiency, and per-user performance than their 2020s predecessors. This is reflected in every subsystem, from power generation and antenna design to propulsion, thermal control, and computing. As such, it represents the emergence of a new class of satellite altogether: not merely a space-based relay or broadband node, but a full-featured, cloud-integrated orbital RAN platform capable of supporting the global cellular fabric from space.

CAN THE FICTION BECOME A REALITY?

From the perspective of 2025, the vision of a global satellite-based mobile network providing seamless, unmodified indoor connectivity at terrestrial-grade uplink and downlink rates, 50 Mbps up, 500 Mbps down, appears extraordinarily ambitious. The technical description from 2045 outlines a constellation of 20,800 LEO satellites, each capable of supporting 5,000 independent full-duplex beams across massive bandwidths, while integrating onboard processing, AI-driven beam control, and a full 5G core stack. To reach such a mature architecture within two decades demands breakthrough progress across multiple fronts.

The most daunting challenge lies in achieving indoor-grade cellular uplink at frequencies as low as 600 MHz from devices never intended to communicate with satellites. Today, even powerful ground-based towers struggle to achieve sub-1 GHz uplink coverage inside urban buildings. For satellites at an altitude of 350 km, the free-space path loss alone at 600 MHz is approximately 139 dB. When combined with clutter, penetration, and polarization mismatches, the system must close a link budget approaching 160 dB or more, from a smartphone transmitting just 23 dBm (200 mW) or less. No satellite today, including AST SpaceMobile’s BlueBird 1, has demonstrated indoor uplink reception at this scale or consistency. To overcome this, the proposed system assumes deployable uplink arrays of 750 m² with gain levels exceeding 45 dBi, supported by hundreds of simultaneously steerable receive beams and ultra-low-noise front-end receivers. From a 2025 lens, the mechanical deployment of such arrays, their thermal stability, calibration, and mass management pose nontrivial risks. Today’s large phased arrays are still in their infancy in space, and adaptive beam tracking from fast-moving LEO platforms remains unproven at the required scale and beam density.
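
To see why this link is so hard to close, a back-of-the-envelope uplink budget helps. The handset power, array gain, and total loss are taken from the text above; the 20 MHz channel and the 2 dB receiver noise figure are assumptions of mine.

```python
# Sketch: uplink link budget from an indoor handset to a 350 km D2C satellite.
# Assumptions: 20 MHz channel and 2 dB receiver noise figure; other figures from the text.
import math

TX_POWER_DBM    = 23.0     # typical handset maximum output
RX_GAIN_DBI     = 45.0     # large deployable uplink array at 600 MHz
TOTAL_LOSS_DB   = 160.0    # free-space + indoor penetration + clutter (order of magnitude)
BANDWIDTH_HZ    = 20e6
NOISE_FIGURE_DB = 2.0

rx_power_dbm = TX_POWER_DBM + RX_GAIN_DBI - TOTAL_LOSS_DB
noise_dbm    = -174 + 10 * math.log10(BANDWIDTH_HZ) + NOISE_FIGURE_DB
snr_db       = rx_power_dbm - noise_dbm

print(f"Received power: {rx_power_dbm:.1f} dBm")
print(f"Noise floor:    {noise_dbm:.1f} dBm")
print(f"SNR:            {snr_db:.1f} dB (vs. ~10 dB needed for 16-QAM)")
# Under these assumptions the margin is thin to negative, which is precisely why
# indoor-grade uplink at 600 MHz is the most daunting part of the vision.
```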

Thermal constraints are also vastly more complex than anything currently deployed. Supporting 5,000 simultaneous beams and radiating tens of kilowatts from compact platforms in LEO requires heat rejection systems that go beyond current radiator technology. Passive radiators must be supplemented with phase-change materials, active fluid loops, and zoned thermal isolation to prevent transmit arrays from degrading the performance of sensitive uplink receivers. This represents a significant leap from today’s satellites, such as Starlink V2 Mini (~3 kW) or BlueBird 1 (~10–12 kW), neither of which operates with a comparable beam count, throughput, or antenna scale.

The required onboard compute is another monumental leap. Running thousands of simultaneous digital beams, performing real-time adaptive beamforming, spectrum assignment, HARQ scheduling, and AI-driven interference mitigation, all on-orbit and without ground-side offloading, demands 100–500 TOPS of radiation-hardened compute. This is far beyond anything that will be flying in 2025. Even state-of-the-art military systems rely heavily on ground computing and centralized control. The 2045 vision implies on-orbit autonomy, local decision-making, and embedded 5G/6G core functionality within each spacecraft, a full software-defined network node in orbit. Realizing such a capability requires not only next-gen processors but also significant progress in space-grade AI inference, thermal packaging, and fault tolerance.

On the power front, generating 25–35 kW per satellite in LEO using 60–80 m² solar sails pushes the boundary of photovoltaic technology and array mechanics. High-efficiency solar cells must achieve conversion rates exceeding 30–35%, while battery systems must maintain high discharge capacity even in complete darkness. Space-based power architectures today are not yet built for this level of sustained output and thermal dissipation.

Even if the individual satellite challenges are solved, the constellation architecture presents another towering hurdle. Achieving seamless beam handover, full spatial reuse, and maintaining beam density over demand centers as the Earth rotates demands near-perfect coordination of tens of thousands of satellites across hundreds of planes. No current LEO operator (including SpaceX) manages a constellation of that complexity, beam concurrency, or spatial density. Furthermore, scaling the manufacturing, testing, launch, and in-orbit commissioning of over 20,000 high-performance satellites will require significant cost reductions, increased factory throughput, and new levels of autonomous deployment.

Regulatory and spectrum allocation are equally formidable barriers. The vision entails the massively complex undertaking of a global reallocation of terrestrial mobile spectrum, particularly in the sub-3 GHz bands, to LEO operators. As of 2025, such a reallocation is politically and commercially fraught, with entrenched mobile operators and national regulators unlikely to cede prime bands without extensive negotiation, incentives, and global coordination. The use of 600–1800 MHz from orbit for direct-to-device is not yet globally harmonized (and may never be), and existing terrestrial rights would need to be either vacated or managed via complex sharing schemes.

From a market perspective, widespread device compatibility without modification implies that standard mobile chipsets, RF chains, and antennas evolve to handle Doppler compensation, extended RTT timing budgets, and tighter synchronization tolerances. While this is not insurmountable, it requires updates to 3GPP standards, baseband silicon, and potentially network registration logic, all of which must be implemented without degrading terrestrial service. Although NTN (non-terrestrial networks) support has begun to emerge in 5G standards, the level of transparency and ubiquity envisioned in 2045 is not yet backed by practical deployments.

While the 2045 architecture described so far assumes a single unified constellation delivering seamless global cellular service from orbit, the political and commercial realities of space infrastructure in 2025 strongly suggest a fragmented outcome. It is unlikely that a single actor, public or private, will be permitted, let alone able, to monopolize the global D2C landscape. Instead, the most plausible trajectory is a competitive and geopolitically segmented orbital environment, with at least one major constellation originating from China (note: I think it is quite likely we may see two major ones), another from the United States, a possible second US-based entrant, and potentially a European-led system aimed at securing sovereign connectivity across the continent. This fracturing of the orbital mobile landscape imposes a profound constraint on the economic and technical scalability of the system. The assumption that a single constellation could achieve massive economies of scale, producing, launching, and managing tens of thousands of high-performance satellites with uniform coverage obligations, begins to collapse under the weight of geopolitical segmentation. Each competitor must now shoulder its own development, manufacturing, and deployment costs, with limited ability to amortize those investments over a unified global user base. Moreover, such duplication of infrastructure risks saturating orbital slots and spectrum allocations, while reducing the density advantage that a unified system would otherwise enjoy. Instead of concentrating thousands of active beams over a demand zone with a single coordinated fleet, separate constellations must compete for orbital visibility and spectral access over the same urban centers. The result is likely to be a decline in per-satellite utilization efficiency, particularly in regions of geopolitical overlap or contested regulatory coordination.

2045: One Vision, Many Launch Pads. The dream of global satellite-to-cellular service may shine bright, but it won’t rise from a single constellation. With China, the U.S., and others racing skyward, the economics of universal LEO coverage could fracture into geopolitical silos, making scale, spectrum, and sustainability more contested than ever.

Finally, the commercial viability of any one constellation diminishes when the global scale is eroded. While a monopoly or globally dominant operator could achieve lower per-unit satellite costs, higher average utilization, and broader roaming revenues, a fractured environment reduces ARPU (average revenue per user) and increases the breakeven threshold for each deployment. Satellite throughput that could have been centrally optimized now risks duplication and redundancy, increasing operational overhead and potentially slowing innovation as vendors attempt to differentiate on proprietary terms. In this light, the architecture described earlier must be seen as an idealized vision. This convergence point may never be achieved in pure form unless global policy, spectrum governance, and commercial alliances move toward more integrated outcomes. While the technological challenges of the 2045 D2C system are significant, the fragmentation of market structure and geopolitical alignment may prove an equally formidable barrier to realizing the full systemic potential.

Heavenly Coverage, Hellish Congestion. Even a single mega-constellation turns the sky into premium orbital real estate … and that’s before the neighbors show up with their own fleets. Welcome to the era of broadband traffic … in space.

Despite these barriers, incremental paths forward exist. Demonstration satellites in the late 2020s, followed by regional commercial deployments in the early 2030s, could provide real-world validation. The phased evolution of spectrum use, dual-use handsets, and AI-assisted beam management may mitigate some of the scaling concerns. Regulatory alignment may emerge as rural and unserved regions increasingly depend on space-based access. Ultimately, the achievement of the 2045 architecture relies not only on engineering but also on sustained cross-industry coordination, geopolitical alignment, and commercial viability on a planetary scale. As of 2025, the probability of realizing the complete vision by 2045, in terms of indoor-grade, direct-to-device service via a fully orbital mobile core, is perhaps 40–50%, with a higher probability (~70%) for achieving outdoor-grade or partially integrated hybrid services. The coming decade will reveal whether the industry can fully solve the unique combination of thermal, RF, computational, regulatory, and manufacturing challenges required to replace the terrestrial mobile network with orbital infrastructure.

POSTSCRIPT – THE ECONOMICS.

The Direct-to-Cellular satellite architecture described in this article would reshape not only the technical landscape of mobile communications but also its economic foundation. The very premise of delivering mobile broadband directly from space, bypassing terrestrial towers, fiber backhaul, and urban permitting, undermines one of the most entrenched capital systems of the 20th and early 21st centuries: the mobile infrastructure economy. Once considered irreplaceable, the sprawling ecosystem of rooftop leases, steel towers, field operations, base stations, and fiber rings has been gradually rendered obsolete by a network that floats above geography.

The financial implications of such a shift are enormous. Before the orbital transition described in this article, the global mobile industry invested well over 300 billion USD annually in network CapEx and Opex, with a large share dedicated to the site infrastructure layer, construction, leasing, energy, security, and upkeep of millions of base stations and their associated land or rooftop assets. Tower companies alone have become multi-billion-dollar REITs (i.e., Real Estate Investment Trusts), profiting from site tenancy and long-term operating contracts. As of the mid-2020s, the global value tied up in the telecom industry’s physical infrastructure is estimated to exceed 2.5 to 3 trillion USD, with tower companies like Cellnex and American Tower collectively managing hundreds of billions of dollars in infrastructure assets. An estimated 300–500 billion USD invested in mobile infrastructure represents approximately 0.75% to 1.5% of total global pension assets and accounts for 15% to 30% of pension fund infrastructure investments. This real estate-based infrastructure model defined mobile economics for decades and has generally been regarded as a reasonably safe haven for investors. In contrast, the 2045 D2C model front-loads its capital burden into satellite manufacturing, launch, and orbital operations. Rather than being geographically bound, capital is concentrated into a fleet of orbital base stations, each capable of dynamically serving users across vast and shifting geographies. This not only eliminates the need for millions of distributed cell sites, but it also breaks the historical tie between infrastructure deployment and national geography. Coverage no longer scales with trenching crews or urban permitting delays but with orbital plane density and beamforming algorithms.

Yet, such a shift does not necessarily mean lower cost, only different economics. Launching and operating tens of thousands of advanced satellites, each capable of supporting thousands of beams and running onboard compute environments, still requires massive capital outlay and ongoing expenditures in space traffic management, spectrum coordination, ground gateways, and constellation replenishment. The difference lies in utilization and marginal reach. Where terrestrial infrastructure often struggles to achieve ROI in rural or low-income markets, orbital systems serve these zones as part of the same beam budget, with no new towers or trenches required.

Importantly, the 2045 model would likely collapse the mobile value chain. Instead of a multi-layered system of operators, tower owners, fiber wholesalers, and regional contractors, a vertically integrated satellite operator can now deliver the full stack of mobile service from orbit, owning the user relationship end-to-end. This disintermediation has significant implications for revenue distribution and regulatory control, and challenges legacy operators to either adapt or exit.

The scale of economic disruption mirrors the scale of technical ambition. This transformation could rewrite the very economics of connectivity. While the promise of seamless global coverage, zero tower density, and instant-on mobility is compelling, it may also signal the end of mobile telecom as a land-based utility.

If this little science fiction story comes true, and there are many good and bad reasons to doubt it, Telcos may not Ascend to the Sky, but take the Stairway to Heaven.

Graveyard of the Tower Titans. This symbolic illustration captures the end of an era, depicting headstones for legacy telecom giants such as American Tower, Crown Castle, and SBA Communications, as well as the broader REIT (Real Estate Investment Trust) infrastructure model that once underpinned the terrestrial mobile network economy. It serves as a metaphor for the systemic shift brought on by Direct-to-Cellular (D2C) satellite networks. What’s fading is not only the mobile tower itself, but also the vast ancillary industry that has grown around it, including power systems, access rights, fiber-infrastructure, maintenance firms, and leasing intermediaries, as well as the telecom business model that relied on physical, ground-based infrastructure. As the skies take over the signal path, the economic pillars of the old telecom world may no longer stand.

FURTHER READING.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report (January 2025). This has also been published in full on my own Techneconomyblog.

Kim K. Larsen, “Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?”, Techneconomyblog (April 2025).

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future”, Techneconomyblog (March 2024).

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?

Is LEO satellite broadband a cost-effective and capable option for rural areas of Europe? Given that most seem to agree that LEO satellites will not replace mobile broadband networks, it seems only natural to ask whether LEO satellites might help the EU Commission’s Digital Decade Policy Programme (DDPP) 2030 goal of having all EU households (HH) covered by gigabit connections delivered by so-called very high-capacity networks, including gigabit capable fiber-optic and 5G networks, by 2030 (i.e., only focusing on the digital infrastructure pillar of the DDPP).

As of 2023, more than €80 billion had been allocated in national broadband strategies and through EU funding instruments, including the Connecting Europe Facility and the Recovery and Resilience Facility. However, based on current deployment trajectories and cost structures, an additional €120 billion or more is expected to be needed to close the remaining connectivity gap for the approximately 15.5 million rural homes without a gigabit option in 2023. This brings the total investment requirement to over €200 billion. The shortfall is most acute in rural and hard-to-reach regions where network deployment is significantly more expensive. In these areas, connecting a single household with high-speed broadband infrastructure, especially via FTTP, can easily exceed €10,000 in public subsidy, given the long distances and low density of premises. It would be a very “cheap” alternative for Europe if a non-EU-based (i.e., USA) satellite constellation could close the gigabit coverage gap even by a small margin. However, given some of the current geopolitical factors, 200 billion euros could enable Europe to establish its own large LEO satellite constellation if it can match (or outperform) the unit economics of SpaceX, rather than those of its own IRIS² satellite program.

In this article, my analysis focuses on direct-to-dish low Earth orbit (LEO) satellites with expected capabilities comparable to, or exceeding, those projected for SpaceX’s Starlink V3, which is anticipated to deliver up to 1 Terabit per second of total downlink capacity. For such satellites to represent a credible alternative to terrestrial gigabit connectivity, several thousand would need to be in operation. This would allow overlapping coverage areas, increasing effective throughput to household outdoor dishes across sparsely populated regions. Reaching such a scale may take years, even under optimistic deployment scenarios, highlighting the importance of aligning policy timelines with technological maturity.

GIGABITS IN THE EU – WHERE ARE WE, AND WHERE DO WE THINK WE WILL GO?

  • In 2023, Fibre-to-the-Premises (FTTP) rural HH coverage was ca. 52%. For the EU28, this means that approximately 16 million rural homes lack fiber coverage.
  • By 2030, projected FTTP deployment in the EU28 will result in household coverage reaching almost 85% of all rural homes (under so-called BaU conditions), leaving approximately 5.5 million households without it.
  • Due to inferior economics, it is estimated that approximately 10% to 15% of European households are “unconnectable” by FTTP (although not necessarily by FWA or broadband mobile in general).
  • EC estimated (in 2023) that over 80 billion euros in subsidies have been allocated in national budgets, with an additional 120 billion euros required to close the gigabit ambition gap by 2030 (e.g., over 10,000 euros per remaining rural household in 2023).

So, there is a considerable number of so-called “unconnectable” households within the European Union (i.e., EU28). These are, for example, isolated dwellings away from inhabited areas (e.g., settlements, villages, towns, and cities). They often lack the most basic fixed communications infrastructure, although some may have old copper lines or only relatively poor mobile coverage.

The figure below illustrates the actual state of FTTP deployment in rural households in 2023 (orange bars) as well as a Rural deployment scenario that extends FTTP deployment to 2030, using the maximum of the previous year’s deployment level and the average of the last three years’ deployment levels. Any level above 80% grows by 1% pa (arbitrarily chosen). The data source for the above is “Digital Decade 2024: Broadband Coverage in Europe 2023” by the European Commission. The FTTP pace has been chosen individually for suburban and rural areas to match the expectations expressed in the reports for 2030.
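
For transparency, that projection rule is simple enough to express in a few lines. The sketch below applies one plausible reading of it, namely that the annual increase in coverage is carried forward as the maximum of last year’s increase and the three-year average increase, and uses illustrative starting values rather than the per-country data from the EC report.

```python
# Sketch: one reading of the rural FTTP projection rule described above, applied to the
# annual deployment pace (percentage-point increase per year). Starting values are
# illustrative, not the per-country data from the EC Broadband Coverage in Europe report.
def project_fttp(levels: list[float], until_year: int, last_year: int = 2023) -> list[float]:
    out = list(levels)
    for _ in range(last_year + 1, until_year + 1):
        increments = [b - a for a, b in zip(out, out[1:])][-3:]
        pace = max(increments[-1], sum(increments) / len(increments))
        nxt = out[-1] + pace
        if out[-1] > 80.0:            # above 80%, grow by 1% p.a. (arbitrary, per the text)
            nxt = out[-1] + 1.0
        out.append(min(nxt, 100.0))
    return out

# Illustrative rural coverage 2020-2023 (% of households): 40, 44, 48, 52.
print([round(x, 1) for x in project_fttp([40.0, 44.0, 48.0, 52.0], until_year=2030)])
```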

ARE LEO DIRECT-TO-DISH (D2D) SATELLITES A CREDIBLE ALTERNATIVE FOR THE “UNCONNECTABLES”?

  • For Europe, a non-EU-based (i.e., US-based) satellite constellation could be a very cost-effective alternative to closing the gigabit coverage gap.
  • Megabit connectivity (e.g., up to 100+ Mbps) is already available today with SpaceX Starlink LEO satellites in rural areas with poor broadband alternatives.
  • The SpaceX Starlink V2 satellite can provide approximately 100 Gbps (V1.5 ~ 20+ Gbps), and its V3 is expected to deliver 1,000 Gbps within the satellite’s coverage area, with a maximum coverage radius of over 500 km.
  • The V3 may have 320 beams (or more), each providing approximately 3 Gbps (i.e., 320 x 3 Gbps is ca. 1 Tbps). With a frequency re-use factor of 40, 25 Gbps can be supplied within a unique coverage area. With “adjacent” satellites (off-nadir), the capacity within a unique coverage area can be enhanced by additional beams that overlap those of the primary satellite (nadir).
  • With an estimated EU28 “unconnectable” household density of approximately 1.5 per square kilometer, a unique coverage area of 15,000 square kilometers would contain more than 20,000 such households, served by the 20+ Gbps available within that area.
  • At a peak-hour user concurrency of 15% and a per-user demand of 1 Gbps, the aggregate (backhaul) demand would reach approximately 3 terabits per second (Tbps). This corresponds to an oversubscription ratio of roughly 3:1 for a single 1 Tbps satellite, or demand that could be fully served by three overlapping satellites (a short worked sketch of this arithmetic follows after this list).
  • This assumes a 100% take-up rate of the unconnectable HHs and that each would select a 1 Gbps service (assuming such would be available). In rural areas, the take-up rate may not be significantly higher than 60%, and not all households will require a 1 Gbps service.
  • This also assumes that there are no alternatives to LEO satellite direct-to-dish service, which seems unlikely for at least some of the 20,000 “unconnectable” households. Given the typical coverage obligations attached to 5G spectrum licenses, one might hope for some decent 5G coverage; alas, it is unlikely to be gigabit-class in deep rural and isolated areas.
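As referenced in the bullet list above, here is a tiny back-of-the-envelope sketch of that demand model in Python. All inputs are the stated assumptions (roughly 1.5 households per km² over 15,000 km², 15% concurrency, 1 Gbps per active user, a 1 Tbps V3-class satellite); nothing here is a measured value.

```python
# Back-of-the-envelope model for the "unconnectable" demand bullets above.
households = 20_000              # ~1.5 HH/km^2 over 15,000 km^2, rounded as in the text
concurrency = 0.15               # peak-hour share of households active simultaneously
per_user_demand_gbps = 1.0       # demanded rate per active household
satellite_capacity_gbps = 1_000  # Starlink V3-class total downlink capacity (assumed)

peak_demand_gbps = households * concurrency * per_user_demand_gbps   # 3,000 Gbps = 3 Tbps
oversubscription = peak_demand_gbps / satellite_capacity_gbps        # ~3:1 on one satellite
satellites_needed = -(-peak_demand_gbps // satellite_capacity_gbps)  # ceiling division -> 3

print(f"Peak-hour demand        : {peak_demand_gbps:,.0f} Gbps")
print(f"Oversubscription (1 sat): {oversubscription:.1f}:1")
print(f"Overlapping satellites to serve demand fully: {satellites_needed:.0f}")
```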

For example, consider the Starlink LEO satellite V1.5, which has a total capacity of approximately 25 Gbps, comprising 32 beams that deliver 800 Mbps per beam, including dual polarization, to a ground-based user dish. It can provide a maximum of 6.4 Gbps over a minimum area of ca. 6,000 km² at nadir with an Earth-based dish directly beneath the satellite. If the coverage area is situated in a UK-based rural area, for example, we would expect to find, on average, 150,000 rural households at an average density of 25 rural homes per km². If a household demands 100 Mbps at peak, only around 60 households can be online at full load concurrently per area. With 10% concurrency, this implies that we can have a total of 600 households per area out of 150,000 homes. Thus, 1 in 250 households could be allowed to subscribe to a Starlink V1.5 if the target is 100 Mbps per home and a concurrency factor of 10% within the coverage area. This is equivalent to stating that the oversubscription ratio is 250:1, and reflects the tension between available satellite capacity and theoretical rural demand density. In rural UK areas, the household density is too high relative to the available satellite capacity to allow universal subscription at 100 Mbps unless more satellites provide overlapping service. For a V1.5 satellite, we can support four regions (i.e., frequency reuse groups), each with a maximum throughput of 6.4 Gbps. Thus, the satellite can support a total of 2,400 households (i.e., 4 x 600) with a peak demand of 100 Mbps and a concurrency rate of 10%. As other satellites (off-nadir) can support the primary satellite, some areas’ demand may be supported by two to three different satellites, providing a multiplier effect that can increase the capacity offered. The Starlink V2 satellite is reportedly capable of supporting up to a total of 100 Gbps (approximately four times that of V1.5), while the V3 will support up to 1 Tbps, which is 40 times that of V1.5. The number of beams and, consequently, the number of independent frequency reuse groups, as well as the spectral efficiency, are expected to improve over V1.5, all of which are factors that will enhance the overall capacity of the newer Starlink satellite generations.
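The same arithmetic can be written down explicitly. The sketch below reproduces the V1.5 rural-UK example using the figures quoted above; the small differences from the rounded numbers in the text (60, 600, and 250:1) are simply rounding.

```python
# Worked sketch of the Starlink V1.5 rural-UK example above (all inputs as quoted).
beams_total, per_beam_mbps, reuse_groups = 32, 800, 4

sat_capacity_gbps = beams_total * per_beam_mbps / 1_000      # ~25.6 Gbps per satellite
area_capacity_gbps = sat_capacity_gbps / reuse_groups        # ~6.4 Gbps per unique coverage area

area_km2, rural_density = 6_000, 25                          # nadir area and UK rural homes per km^2
homes_in_area = area_km2 * rural_density                     # 150,000 homes

peak_per_home_mbps, concurrency = 100, 0.10
concurrent_homes = area_capacity_gbps * 1_000 / peak_per_home_mbps   # ~64 homes at full load
subscribable_homes = concurrent_homes / concurrency                  # ~640 homes per area
subscriber_ratio = homes_in_area / subscribable_homes                # roughly 1 in 230-250 homes

print(f"Per-area capacity  : {area_capacity_gbps:.1f} Gbps")
print(f"Homes in area      : {homes_in_area:,}")
print(f"Subscribable homes : {subscribable_homes:.0f}")
print(f"Roughly 1 in {subscriber_ratio:.0f} homes could subscribe at 100 Mbps")
```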

By 2030, the EU28 rural areas are expected to achieve nearly 85% FTTP coverage under business-as-usual deployment scenarios. This would leave approximately 5.5 million households, referred to as “unconnectables,” without direct access to high-speed fiber. These households are typically isolated, located in sparsely populated or geographically challenging regions, where fiber deployment becomes prohibitively expensive. Although there may be alternative broadband options, such as FWA, 5G mobile coverage, or copper, it is unlikely that such “unconnectable” homes would sustainably be offered a gigabit connection.

This may be where LEO satellite constellations enter the picture as a possible alternative to deploying fiber optic cables in uneconomical areas, such as those that are unconnectable. The anticipated capabilities of Starlink’s third-generation (V3) satellites, offering approximately 1 Tbps of total downlink capacity with advanced beamforming and frequency reuse, already make them a viable candidate for servicing low-density rural areas, assuming reasonable traffic models similar to those of an Internet Service Provider (ISP). With modest overlapping coverage from two or three such satellites, these systems could deliver gigabit-class service to tens of thousands of dispersed households without (much) oversubscription, even assuming relatively high concurrency and usage.

Considering this, an LEO constellation only slightly more capable than SpaceX’s Starlink V3 satellite appears able to fully support the broadband needs of the remaining unconnected European households expected by 2030. This also aligns well with the technical and economic strengths of LEO satellites: they are ideally suited for delivering high-capacity service to regions where population density is too low to justify terrestrial infrastructure, yet where digital inclusion remains equally essential.

LOW-EARTH ORBIT SATELLITES DIRECT-TO-DISRUPTION.

In my blog “Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?”, I provided some straightforward reasons why LEO satellites with direct-to-unmodified-smartphone capabilities (e.g., Lynk Global, AST Spacemobile) would not make existing cellular networks obsolete. They are of most value in remote or very rural areas where no cellular coverage is present (as explained very nicely by Lynk Global), offering a connection alternative to satellite phones such as Iridium, and are thus complementary to existing terrestrial cellular networks. Despite the hype, we should not expect a direct disruption of regular terrestrial cellular networks from LEO satellite D2C providers.

Of course, the question could also be asked whether LEO satellites directed to an outdoor (terrestrial) dish could threaten existing fiber optic networks, their business case, and their value proposition. After all, the SpaceX Starlink V3 satellite, not yet operational, is expected to support 1 Terabit per second (Tbps) over a coverage area of several thousand kilometers in diameter. It is no doubt an amazing technological feat for SpaceX to achieve a 10x leap in throughput from its present generation V2 (~100 Gbps).

However, while a V3-like satellite may offer an (impressive) total capacity of 1 Tbps, this capacity is not uniformly available across its entire footprint. It is distributed across multiple beams, potentially 256 or more, each with a throughput of approximately 4 Gbps (i.e., 1 Tbps / 256 beams). With a frequency reuse factor of, for example, 5, the effective usable capacity per unique coverage area becomes a fraction of the satellite’s total throughput. This means that within any given beam footprint, the satellite can only support a limited number of concurrent users at high bandwidth levels.

As a result, such a satellite cannot support more than roughly a thousand households with concurrent 1 Gbps demand in any single area (or, alternatively, about 10,000 households with 100 Mbps concurrent demand). This level of support would be equivalent to a small FTTP (sub)network serving no more than 20,000 households at a 50% uptake rate (i.e., 10,000 connected homes) and assuming a concurrency of 10%. A deployment of this scale would typically be confined to a localized, dense urban or peri-urban area, rather than the vast rural regions that LEO systems are expected to serve.
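As a quick cross-check of the equivalence just described, the sketch below converts a 1 Tbps satellite budget into the size of FTTP (sub)network it corresponds to, using the same 10% busy-hour concurrency and 50% uptake assumptions.

```python
# Cross-check: what a single 1 Tbps V3-class satellite corresponds to in FTTP terms.
satellite_capacity_gbps = 1_000

concurrent_1gbps_homes = satellite_capacity_gbps / 1.0     # ~1,000 homes at 1 Gbps concurrently
concurrent_100mbps_homes = satellite_capacity_gbps / 0.1   # ~10,000 homes at 100 Mbps concurrently

concurrency, uptake = 0.10, 0.50                           # busy-hour concurrency and uptake (assumed)
connected_homes = concurrent_1gbps_homes / concurrency     # 10,000 connected homes
homes_passed = connected_homes / uptake                    # 20,000 homes passed

print(f"Concurrent 1 Gbps users   : {concurrent_1gbps_homes:.0f}")
print(f"Concurrent 100 Mbps users : {concurrent_100mbps_homes:.0f}")
print(f"Equivalent FTTP footprint : {homes_passed:.0f} homes passed ({connected_homes:.0f} connected)")
```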

In contrast, a single Starlink V3-like satellite would cover a vast region, capable of supporting similar or greater numbers of users, including those in remote, low-density areas that FTTP cannot economically reach. The satellite solution described here is thus not designed to densify urban broadband, but rather to reach rural, remote, and low-density areas where laying fiber is logistically or economically impractical. Therefore, such satellites and conventional large-scale fiber networks are not in direct competition, as satellites cannot match fiber’s density, scale, or cost-efficiency in high-demand areas. Instead, satellite broadband complements fiber infrastructure and reinforces the case for hybrid infrastructure strategies, in which fiber serves the dense core and LEO satellites extend the digital frontier.

However, terrestrial providers must closely monitor their FTTP deployment economics and refrain from extending too far into deep rural areas below a certain household density, a threshold that is likely to rise over time as satellite capabilities improve. The premise of this blog is that capable LEO satellites by 2030 could serve unconnected households that are unlikely to have any commercial viability for terrestrial fiber and have no other gigabit coverage option. Within the EU28, this represents approximately 5.5 million remote households. A Starlink V3-like 1 Tbps satellite could provide a gigabit service (occasionally) to those households and certainly hundreds of megabits per second per isolated household. Moreover, it is likely that over time, more capable satellites will be launched, with SpaceX being the most likely candidate for such an endeavor if it maintains its current pace of innovation. Such satellites will likely become increasingly interesting for household densities above 2 households per square kilometer. However, if an FTTP network has already been deployed, it seems unlikely that satellite broadband would render the terrestrial infrastructure obsolete, as long as the fiber service is priced competitively with the satellite broadband offering.

LEO satellite direct-to-dish (D2D) based broadband networks may be a credible and economical alternative to deploying fiber to low-density rural households. Over time, the household-density boundary at which a gigabit satellite D2D connection becomes a viable substitute for a fiber connection may shift inward, from deep rural, low-density areas toward somewhat denser ones. This reinforces the case for hybrid infrastructure strategies, in which fiber serves the denser regions and LEO satellites extend the digital frontier to remote and rural areas.

THE USUAL SUSPECT – THE PUN INTENDED.

By 2030, SpaceX’s Starlink will operate one of the world’s most extensive low Earth orbit (LEO) satellite constellations. As of early 2025, the company has launched more than 6,000 satellites into orbit; however, most of these, including V1, V1.5, and V2, are expected to cease operation by 2030. Industry estimates suggest that Starlink could have between 15,000 and 20,000 operational satellites by the end of the decade, which I anticipate will be mainly V3 and possibly a share of V4. This projection depends largely on successfully scaling SpaceX’s Starship launch vehicle, which is designed to deploy 60 or more next-generation V3 satellites per mission at the intended cadence. However, it is essential to note that while SpaceX has filed applications with the International Telecommunication Union (ITU) and obtained FCC authorization for up to 12,000 satellites, the frequently cited figure of 42,000 satellites includes additional satellites that are currently proposed but not yet fully authorized.

The figure above, based on an idea of John Strand of Strand Consult, provides an illustrative comparison of the rapid innovation and manufacturing cycles of SpaceX LEO satellites versus the slower progression of traditional satellite development and spectrum policy processes, highlighting the growing gap between technological advancement and regulatory adaptation. This is one of the biggest challenges that regulatory institutions and policy regimes face today.

Amazon’s Project Kuiper has a much smaller planned constellation. The Federal Communications Commission (FCC) has authorized Amazon to deploy 3,236 satellites under its initial phase, with a deadline requiring that at least 1,600 be launched and operational by July 2026. Amazon began launching prototype satellites in late 2023 and aims to roll out its service in late 2025 or early 2026. On April 28, 2025, Amazon launched its first 27 operational satellites for Project Kuiper aboard a United Launch Alliance (ULA) Atlas V rocket from Cape Canaveral, Florida. This marks the beginning of Amazon’s deployment of its planned 3,236-satellite constellation aimed at providing global broadband internet coverage. Though Amazon has hinted at potential expansion beyond its authorized count, any Phase 2 remains speculative and unapproved. If such an expansion were pursued and granted, the constellation could eventually grow to around 6,000 satellites, although no formal filings have yet been made to support the higher number.

China is rapidly advancing its low Earth orbit (LEO) satellite capabilities, positioning itself as a formidable competitor to SpaceX’s Starlink by 2030. Two major Chinese LEO satellite programs are at the forefront of this effort: the Guowang (13,000) and Qianfan (15,000) constellations. So, by 2030, it is reasonable to expect that China will field a national LEO satellite broadband system with thousands of operational satellites, focused not just on domestic coverage but also on extending strategic connectivity to Belt and Road Initiative (BRI) countries, as well as regions in Africa, Asia, and South America. Unlike SpaceX’s commercially driven approach, China’s system is likely to be closely integrated with state objectives, combining broadband access with surveillance, positioning, and secure communication functionality. While it remains unclear whether China will match SpaceX’s pace of deployment or technological performance by 2030, its LEO ambitions are unequivocally driven by geopolitical considerations. They will likely shape European spectrum policy and infrastructure resilience planning in the years ahead. Guowang and Qianfan are emblematic of China’s dual-use strategy, which involves developing technologies for both civilian and military applications. This approach is part of China’s broader Military-Civil Fusion policy, which seeks to integrate civilian technological advancements into military capabilities. The dual-use nature of these satellite constellations raises concerns about their potential military applications, including surveillance and communication support for the People’s Liberation Army.

AN ILLUSTRATION OF COVERAGE – UNITED KINGDOM.

It takes approximately 172 Starlink beams to cover the United Kingdom, with 8 to 10 satellites overhead simultaneously. Persistent UK coverage therefore requires a constellation on the order of 150 satellites distributed across appropriate orbits. Starlink’s 53° inclination orbital shell is optimized for mid-latitude regions, providing frequent satellite passes and dense, overlapping beam coverage over areas like southern England and central Europe. This results in higher throughput and more consistent connectivity with fewer satellites. In contrast, regions north of 53°N, such as northern England and Scotland, lie outside this optimal zone and depend on higher-inclination shells (70° and 97.6°), which have fewer satellites and wider, less efficient beams. As a result, coverage in these northern areas is less dense, with lower signal quality and increased latency.

For this blog, I developed a Python script, with fewer than 600 lines of code (It’s a physicist’s code, so unlikely to be super efficient), to simulate and analyze Starlink’s satellite coverage and throughput over the United Kingdom using real orbital data. By integrating satellite propagation, beam modeling, and geographic visualization, it enables a detailed assessment of regional performance from current Starlink deployments across multiple orbital shells. Its primary purpose is to assess how the currently deployed Starlink constellation performs over UK territory by modeling where satellites pass, how their beams are steered, and how often any given area receives coverage. The simulation draws live TLE (Two-Line Element) data from Celestrak, a well-established source for satellite orbital elements. Using the Skyfield library, the code propagates the positions of active Starlink satellites over a 72-hour period, sampling every 5 minutes to track their subpoints across the United Kingdom. There is no limitation on the duration or sampling time. Choosing a more extended simulation period, such as 72 hours, provides a more statistically robust and temporally representative view of satellite coverage by averaging out orbital phasing artifacts and short-term gaps. It ensures that all satellites complete multiple orbits, allowing for more uniform sampling of ground tracks and beam coverage, especially from shells with lower satellite densities, such as the 70° and 97.6° inclinations. This results in smoother, more realistic estimates of average signal density and throughput across the entire region.
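The full script is not reproduced here, but the core propagation step can be sketched in a few dozen lines. The snippet below is a stripped-down illustration of the approach (live Starlink TLEs from Celestrak, Skyfield propagation, 5-minute sampling of sub-satellite points over a rough UK bounding box). The Celestrak group URL and the bounding box are my assumptions, and there is no beam modelling or throughput estimation in this fragment.

```python
# Minimal sketch of the propagation step: sample Starlink sub-satellite points over the UK.
# Requires: pip install skyfield numpy
import numpy as np
from skyfield.api import load, wgs84

TLE_URL = "https://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle"  # assumed endpoint
UK_LAT = (49.9, 60.9)   # rough UK latitude bounds, degrees N (assumption)
UK_LON = (-8.2, 1.8)    # rough UK longitude bounds, degrees E (assumption)

def uk_subpoint_samples(hours=72, step_minutes=5):
    ts = load.timescale()
    satellites = load.tle_file(TLE_URL, filename="starlink.tle", reload=True)
    t0 = ts.now()
    offsets_days = np.arange(0, hours * 60, step_minutes) / (24.0 * 60.0)
    times = ts.tt_jd(t0.tt + offsets_days)                  # sampling grid over the simulation window
    samples = []
    for sat in satellites:
        sp = wgs84.subpoint(sat.at(times))                  # sub-satellite latitude/longitude track
        lat, lon = sp.latitude.degrees, sp.longitude.degrees
        mask = (lat >= UK_LAT[0]) & (lat <= UK_LAT[1]) & (lon >= UK_LON[0]) & (lon <= UK_LON[1])
        for i in np.where(mask)[0]:
            samples.append((times[int(i)].utc_iso(), sat.name, float(lat[i]), float(lon[i])))
    return samples

if __name__ == "__main__":
    rows = uk_subpoint_samples(hours=3)                     # short run for illustration
    print(f"{len(rows)} sub-satellite samples over the UK; first few:")
    for row in rows[:5]:
        print(row)
```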

Each satellite is classified into one of three orbital shells based on inclination angle: 53°, 70°, and 97.6°. These shells are simulated separately and collectively to understand their individual and combined contributions to UK coverage. The 53° shell dominates service in the southern part of the UK, characterized by its tight orbital band and high satellite density (see the Table below). The 70° shell supplements coverage in northern regions, while the 97.6° polar shell offers sparse but critical high-latitude support, particularly in Scotland and surrounding waters. The simulation assumes several (critical) parameters for each satellite type, including the number of beams per satellite, the average beam radius, and the estimated throughput per beam. These assumptions reflect engineering estimates and publicly available Starlink performance information, but are deliberately simplified to produce regional-level coverage and throughput estimates, rather than user-specific predictions. The simulation does not account for actual user terminal distribution, congestion, or inter-satellite link (ISL) performance, focusing instead on geographic signal and capacity potential.

These parameters were used to infer beam footprints and assign realistic signal density and throughput values across the UK landmass. The satellite type was inferred from its shell (e.g., most 53° shell satellites are currently V1.5), and beam properties were adjusted accordingly.
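The shell classification itself is straightforward once the TLEs are loaded: the orbital inclination carried in each TLE (exposed by Skyfield via the underlying SGP4 model) can be bucketed against the three nominal shells. The tolerance below is illustrative, not the value used in my script.

```python
# Classify a Skyfield EarthSatellite into one of the three shells by TLE inclination.
import math

SHELLS_DEG = {"53 deg shell": 53.0, "70 deg shell": 70.0, "97.6 deg shell": 97.6}

def classify_shell(satellite, tolerance_deg=1.5):
    incl_deg = math.degrees(satellite.model.inclo)   # SGP4 inclination at epoch, given in radians
    for name, nominal in SHELLS_DEG.items():
        if abs(incl_deg - nominal) <= tolerance_deg:
            return name
    return "other"

# Example, reusing the `satellites` list from the previous sketch:
# from collections import Counter
# print(Counter(classify_shell(s) for s in satellites))
```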

The table above presents the core beam modeling parameters and satellite-specific assumptions used in the Starlink simulation over the United Kingdom. It includes general values for beam steering behavior, such as Gaussian spread, steering limits, city-targeting probabilities, and beam spacing constraints, as well as performance characteristics tied to specific satellite generations to the extent it is known (e.g., Starlink V1.5, V2 Mini, and V2 Full). These assumptions govern the placement of beams on the Earth’s surface and the capacity each beam can deliver. For instance, the City Exclusion Radius of 0.25 degrees corresponds to a ~25 km buffer around urban centers, where beam placement is probabilistically discouraged. Similarly, the beam radius and throughput per beam values align with known design specifications submitted by SpaceX to the U.S. Federal Communications Commission (FCC), particularly for Starlink’s V1.5 and V2 satellites. The table above also defines overlap rules, specifying the maximum number of beams that can overlap in a region and the maximum number of satellites that can contribute beams to a given point. This helps ensure that simulations reflect realistic network constraints rather than theoretical maxima.
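For readers who want to reproduce the approach, the structure of such assumptions can be captured in a small configuration block. The values below are only those quoted in this article’s body text (the 0.25° city exclusion radius, the V1.5 beam count and per-beam throughput, the V2 and V3 totals); the remaining fields are placeholders and not the actual values from my table.

```python
# Illustrative parameter structure for the beam/throughput model (placeholders flagged).
BEAM_MODEL = {
    "city_exclusion_radius_deg": 0.25,   # ~25 km buffer around urban centres (quoted above)
    "max_steering_offnadir_deg": 40.0,   # placeholder steering limit (assumption)
    "max_beam_overlap": 3,               # placeholder overlap rule (assumption)
    "max_sats_per_point": 3,             # placeholder cap on contributing satellites (assumption)
}

SATELLITE_GENERATIONS = {
    "V1.5": {"beams": 32,   "beam_throughput_gbps": 0.8,  "total_gbps": 25},    # figures quoted in the text
    "V2":   {"beams": None, "beam_throughput_gbps": None, "total_gbps": 100},   # only the total is quoted
    "V3":   {"beams": 320,  "beam_throughput_gbps": 3.0,  "total_gbps": 1000},  # expected figures quoted
}
```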

Overall, the developed code offers a geographically and physically grounded simulation of how the existing Starlink network performs over the UK. It helps explain observed disparities in coverage and throughput by visualizing the contribution of each shell and satellite generation. This modeling approach enables planners and researchers to quantify satellite coverage performance at national and regional scales, providing insight into current service levels as well as a basis for exploring future constellation evolution, which is not discussed here.

The figure illustrates a 72-hour time-averaged Starlink coverage density over the UK. The asymmetric signal strength pattern reflects the orbital geometry of Starlink’s 53° inclination shell, which concentrates satellite coverage over southern and central England. Northern areas receive less frequent coverage due to fewer satellite passes and reduced beam density at higher latitudes.

The image above presents the Starlink Average Coverage Density over the United Kingdom, a result from a 72-hour simulation using real satellite orbital data from Celestrak. It illustrates the mean signal exposure across the UK, where color intensity reflects the frequency and density of satellite beam illumination at each location.

At the center of the image, a bright yellow core indicating the highest signal strength is clearly visible over the English Midlands, covering cities such as Birmingham, Leicester, and Bristol. The signal strength gradually declines outward in a concentric pattern—from orange to purple—as one moves northward into Scotland, west toward Northern Ireland, or eastward along the English coast. While southern cities, such as London, Southampton, and Plymouth, fall within high-coverage zones, northern cities, including Glasgow and Edinburgh, lie in significantly weaker regions. The decline in signal intensity is especially apparent beyond the 56°N latitude. This pattern is entirely consistent with what we know about the structure of the Starlink satellite constellation. The dominant contributor to coverage in this region is the 53° inclination shell, which contains 3,848 satellites spread across 36 orbital planes. This shell is designed to provide dense, continuous coverage to heavily populated mid-latitude regions, such as the southern United Kingdom, continental Europe, and the continental United States. However, its orbital geometry restricts it to a latitudinal range that ends near 53 to 54°N. As a result, southern and central England benefit from frequent satellite passes and tightly packed overlapping beams, while the northern parts of the UK do not. Particularly, Scotland lies at or beyond the shell’s effective coverage boundary.

The simulation may indicate how Starlink’s design prioritizes population density and market reach. Northern England receives only partial benefit, while Scotland and Northern Ireland fall almost entirely outside the core coverage of the 53° shell. Although some coverage in these areas is provided by higher-inclination shells (specifically, the 70° shell with 420 satellites and the 97.6° polar shell with 227 satellites), these are sparser in both the number of satellites and the number of orbital planes. Their beams may also be broader and thus less focused, resulting in lower average signal strength in high-latitude regions.

So, why does the coverage not look like the textbook picture of neat hexagonal cells tiled uniformly across the UK? The simple answer is that real-world satellite constellations don’t behave like the static, idealized diagrams of hexagonal beam tiling often used in textbooks or promotional materials. What you’re seeing in the image is a time-averaged simulation of Starlink’s actual coverage over the UK, reflecting the dynamic and complex nature of low Earth orbit (LEO) systems like Starlink’s. Unlike geostationary satellites, LEO satellites orbit the Earth roughly every 90 minutes and move rapidly across the sky. Each satellite only covers a specific area for a short period before passing out of view over the horizon. This movement causes beam coverage to constantly shift, meaning that any given spot on the ground is covered by different satellites at different times. While individual satellites may emit beams arranged in a roughly hexagonal pattern, these patterns move, rotate, and deform continuously as the satellite passes overhead. The beams also vary in shape and strength depending on their angle relative to the Earth’s surface, becoming elongated and weaker when projected off-nadir, i.e., when the satellite is not directly overhead. Another key reason lies in the structure of Starlink’s orbital configuration. Most of the UK’s coverage comes from satellites in the 53° inclination shell, which is optimized for mid-latitude regions. As a result, southern England receives significantly denser and more frequent coverage than Scotland or Northern Ireland, which are closer to or beyond the edge of this shell’s optimal zone. Satellites serving higher latitudes originate from less densely populated orbital shells at 70° and 97.6°, which result in fewer passes and wider, less efficient beams.

The above heatmap does not illustrate a snapshot of beam locations at a specific time, but rather an averaged representation of how often each part of the UK was covered over a simulation period. This type of averaging smooths out the moment-to-moment beam structure, revealing broader patterns of coverage density instead. That’s why we see a soft gradient from intense yellow in the Midlands, where overlapping beams pass more frequently, to deep purple in northern regions, where passes are less common and less centered.
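Mechanically, the time-averaging is simple: for every sampled time step, mark the grid cells that fall inside any active beam footprint, then normalise by the number of steps. The toy sketch below shows the idea on synthetic beams; it ignores the cos(latitude) correction and everything else the full model does.

```python
# Toy sketch of the time-averaging behind the coverage-density heatmap.
import numpy as np

def coverage_density(beam_sets, lat_range=(49.9, 60.9), lon_range=(-8.2, 1.8), n=120):
    """beam_sets: one list of (lat, lon, radius_deg) beam footprints per time step."""
    lats = np.linspace(lat_range[0], lat_range[1], n)
    lons = np.linspace(lon_range[0], lon_range[1], n)
    lon_grid, lat_grid = np.meshgrid(lons, lats)
    counts = np.zeros_like(lat_grid)
    for beams in beam_sets:
        covered = np.zeros_like(lat_grid, dtype=bool)
        for lat0, lon0, r_deg in beams:
            # crude angular distance check; a fuller model would correct for cos(latitude)
            covered |= (lat_grid - lat0) ** 2 + (lon_grid - lon0) ** 2 <= r_deg ** 2
        counts += covered
    return counts / max(len(beam_sets), 1)    # fraction of time steps each cell was covered

# Two synthetic time steps with a couple of beams each, purely for illustration.
density = coverage_density([
    [(52.5, -1.9, 0.3), (51.5, -0.1, 0.3)],
    [(53.4, -2.2, 0.3)],
])
print(float(density.max()), round(float(density.mean()), 4))
```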

The figure illustrates an idealized hexagonal beam coverage footprint over the UK. For visual clarity, only a subset of hexagons is shown filled with signal intensity (yellow core to purple edge), to illustrate a textbook-like uniform tiling. In reality, satellite beams from LEO constellations, such as Starlink, are dynamic, moving, and often non-uniform due to orbital motion, beam steering, and geographic coverage constraints.

The two charts below provide a visual confirmation of the spatial coverage dynamics behind the Starlink signal strength distribution over the United Kingdom. Both are based on a 72-hour simulation using real Starlink satellite data obtained from Celestrak, and they accurately reflect the operational beam footprints and orbital tracks of currently active satellites over the United Kingdom.

This figure illustrates time-averaged Starlink coverage density over the UK with beam footprints (left) and satellite ground tracks (right) by orbital shell. The high density of beams and tracks from the 53° shell over southern UK leads to stronger and more consistent coverage. At the same time, northern regions receive fewer, more widely spaced passes from higher-inclination shells (70° and 97.6°), resulting in lower aggregate signal strength.

The first chart displays the beam footprints (i.e., the left side chart above) of Starlink satellites across the UK, color-coded by orbital shell: cyan for the 53° shell, green for the 70° shell, and magenta for the 97° polar shell. The concentration of cyan beam circles in southern and central England vividly demonstrates the dominance of the 53° shell in this region. These beams are tightly packed and frequent, explaining the high signal coverage in the earlier signal strength heatmap. In contrast, northern England and Scotland are primarily served by green and magenta beams, which are more sparse and cover larger areas — a clear indication of the lower beam density from the higher-inclination shells.

The second chart illustrates the satellite ground tracks (i.e., the right side chart above) over the same period and geographic area. Again, the saturation of cyan lines in the southern UK underscores the intensive pass frequency of satellites in the 53° inclined shell. As one moves north of approximately 53°N, these tracks vanish almost entirely, and only the green (70° shell) and magenta (97° shell) paths remain. These higher inclination tracks cross through Scotland and Northern Ireland, but with less spatial and temporal density, which supports the observed decline in average signal strength in those areas.

Together, these two charts provide spatial and orbital validation of the signal strength results. They confirm that the stronger signal levels seen in southern England stem directly from the concentrated beam targeting and denser satellite presence of the 53° shell. Meanwhile, the higher-latitude regions rely on less saturated shells, resulting in lower signal availability and throughput. This outcome is not theoretical — it reflects the live state of the Starlink constellation today.

The figure illustrates the estimated average Starlink throughput across the United Kingdom over a 72-hour window. Throughput is highest over southern and central England due to dense satellite traffic from the 53° orbital shell, which provides overlapping beam coverage and short revisit times. Northern regions experience reduced throughput from sparser satellite passes and less concentrated beam coverage.

The above chart shows the estimated average throughput of Starlink Direct-2-Dish across the United Kingdom, simulated over 72 hours using real orbital data from Celestrak. The values are expressed in Megabits per second (Mbps) and are presented as a heatmap, where higher throughput regions are shown in yellow and green, and lower values fade into blue and purple. The simulation incorporates actual satellite positions and coverage behavior from the three operational inclination shells currently providing Starlink service to the UK. Consistent with the signal strength, beam footprint density, and orbital track density, the best quality and highest supplied capacity are available south of approximately 53°N latitude, the northern limit of the dense coverage provided by the 53° inclination shell.

The strongest throughput is concentrated in a horizontal band stretching from Birmingham through London to the southeast, as well as westward into Bristol and south Wales. In this region, the estimated average throughput peaks at over 3,000 Mbps, which can support more than 30 concurrent customers, each demanding 100 Mbps, within the coverage area, or up to 600 households at an oversubscription ratio of 20:1. This aligns closely with the signal strength and beam density maps also generated in this simulation and is driven by the dense satellite traffic of the 53° inclination shell. These satellites pass frequently over southern and central England, where their beams overlap tightly and revisit times are short. The availability of multiple beams from different satellites at nearly all times drives up the aggregate throughput experienced at ground level. Throughput falls off sharply beyond approximately 54°N. In Scotland and Northern Ireland, values typically stay well below 1,000 Mbps. This reduction directly reflects the sparser presence of higher-latitude satellites from the 70° and 97.6° shells, which are fewer in number and more widely spaced, resulting in lower revisit frequencies and broader, less concentrated beams. The throughput map thus offers a performance-level confirmation of the underlying orbital dynamics and coverage limitations seen in the satellite and beam footprint charts.

While the above map estimates throughput in realistic terms, it is essential to understand why it does not reflect the theoretical maximum performance implied by Starlink’s physical layer capabilities. For example, a Starlink V1.5 satellite supports eight user downlink channels, each with 250 MHz of bandwidth, which in theory amounts to a total of 2 GHz of spectrum. Similarly, if one assumes 24 beams, each capable of delivering 800 Mbps, that would suggest a satellite capacity in the range of approximately 19–20 Gbps. However, these peak figures assume an ideal case with full spectrum reuse and optimized traffic shaping. In practice, the estimated average throughput shown here is the result of modeling real beam overlap and steering constraints, satellite pass timing, ground coverage limits, and the fact that not all beams are always active or directed toward the same location. Moreover, local beam capacity is shared among users and dynamically managed by the constellation. Therefore, the chart reflects a realistic, time-weighted throughput for a given geographic location, not a per-satellite or per-user maximum. It captures the outcome of many beams intermittently contributing to service across 72 hours, modulated by orbital density and beam placement strategy, rather than theoretical peak link rates.
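The gap between those two views is easy to make explicit. The few lines below compute the theoretical V1.5 ceiling from the figures just quoted (8 × 250 MHz channels, 24 beams at 800 Mbps) purely as a sanity check; they do not model any of the time-weighting effects described above.

```python
# Theoretical V1.5 ceiling from the quoted figures (a peak, not a time-averaged value).
channels, channel_bw_mhz = 8, 250
total_spectrum_ghz = channels * channel_bw_mhz / 1_000       # 2.0 GHz of user downlink spectrum

beams, per_beam_mbps = 24, 800
theoretical_total_gbps = beams * per_beam_mbps / 1_000       # ~19.2 Gbps per satellite

print(f"User downlink spectrum       : {total_spectrum_ghz:.1f} GHz")
print(f"Theoretical satellite ceiling: {theoretical_total_gbps:.1f} Gbps")
# The ~3 Gbps peak shown on the map is a time-weighted, location-specific average,
# shared across users, not this per-satellite peak figure.
```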

A valuable next step in advancing the simulation model would be the integration of empirical user experience data across the UK footprint. If datasets such as comprehensive Ookla performance measurements (e.g., Starlink-specific download and upload speeds, latency, and jitter) were available with sufficient geographic granularity, the current Python model could be calibrated and validated against real-world conditions. Such data would enable the adjustment of beam throughput assumptions, satellite visibility estimates, and regional weighting factors to better reflect the actual service quality experienced by users. This would enhance the model’s predictive power, not only in representing average signal and throughput coverage, but also in identifying potential bottlenecks, underserved areas, or mismatches between orbital density and demand.

It is also important to note that this work relies on a set of simplified heuristics for beam steering, which are designed to make the simulation both tractable and transparent. In this model, beams are steered within a fixed angular distance from each satellite’s subpoint, with probabilistic biases against cities and simple exclusion zones (i.e., I operate with an exclusion radius of approximately 25 km or more). However, in reality, Starlink’s beam steering logic is expected to be substantially more advanced, employing dynamic optimization algorithms that account for real-time demand, user terminal locations, traffic load balancing, and satellite-satellite coordination via laser interlinks. Starlink has the crucial (and obvious) operational advantage of knowing exactly where its customers are, allowing it to direct capacity where it is needed most, avoid congestion (to an extent), and dynamically adapt coverage strategies. This level of real-time awareness and adaptive control is not replicated in this analysis, which assumes no knowledge of actual user distribution and treats all geographic areas equally.
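To give a flavour of what such a heuristic looks like in code, here is a deliberately simplified sketch: beams are scattered with a Gaussian spread around the sub-satellite point, clipped to a steering limit, and probabilistically discouraged from landing inside city exclusion zones. The city list, steering limit, and probabilities below are illustrative stand-ins, not the parameters used in my simulation.

```python
# Simplified beam-steering heuristic: Gaussian spread around the subpoint with city avoidance.
import math
import random

CITIES = [(51.5, -0.13), (52.5, -1.9), (55.9, -3.2)]   # approx. London, Birmingham, Edinburgh
CITY_EXCLUSION_DEG = 0.25    # ~25 km buffer (as quoted above)
MAX_STEER_DEG = 3.0          # illustrative beam-centre offset limit from the subpoint
CITY_TARGET_PROB = 0.2       # illustrative probability of keeping a beam that lands near a city

def steer_beams(sub_lat, sub_lon, n_beams, rng=random):
    beams = []
    while len(beams) < n_beams:
        dlat = max(-MAX_STEER_DEG, min(MAX_STEER_DEG, rng.gauss(0, MAX_STEER_DEG / 2)))
        dlon = max(-MAX_STEER_DEG, min(MAX_STEER_DEG, rng.gauss(0, MAX_STEER_DEG / 2)))
        lat, lon = sub_lat + dlat, sub_lon + dlon
        near_city = any(math.hypot(lat - c_lat, lon - c_lon) < CITY_EXCLUSION_DEG
                        for c_lat, c_lon in CITIES)
        if near_city and rng.random() > CITY_TARGET_PROB:
            continue                                   # probabilistically discourage city placement
        beams.append((lat, lon))
    return beams

print(steer_beams(52.0, -1.0, n_beams=8))
```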

As such, the current Python code provides a first-order geographic approximation of Starlink coverage and capacity potential, not a reflection of the full complexity and intelligence of SpaceX’s actual network management. Nonetheless, it offers a valuable structural framework that, if calibrated with empirical data, could evolve into a much more powerful tool for performance prediction and service planning.

Median Starlink download speeds in the United Kingdom, as reported by Ookla, from Q4 2022 to Q4 2024, indicate a general decline through 2023 and early 2024, followed by a slight recovery in late 2024. Source: Ookla.com.

The decline in real-world median user speeds, observed in the chart above, particularly from Q4 2023 to Q3 2024, may reflect increasing congestion and uneven coverage relative to demand, especially in areas outside the dense beam zones of the 53° inclination shell. This trend supports the simulation’s findings: while orbital geometry enables strong average coverage in the southern UK, northern regions rely on less frequent satellite passes from higher-inclination shells, which limits performance. The recovery of the median speed in Q4 2024 could be indicative of new satellite deployments (e.g., more V2 Minis or V2 Fulls) beginning to ease capacity constraints, something future simulation extensions could model by incorporating launch timelines and constellation updates.

The figure illustrates a European-based dual-use Low Earth Orbit (LEO) satellite constellation providing broadband connectivity to Europe’s millions of unconnectables by 2030 on a secure and strategic infrastructure platform covering Europe, North Africa, and the Middle East.

THE 200 BILLION EUROS QUESTION – IS THERE A PATH TO A EUROPEAN SPACE INDEPENDENCE?

Let’s start with the answer! Yes!

Is €200 billion, the estimated amount required to close the EU28 gigabit gap between 2023 and 2030, likely to enable Europe to build its own LEO satellite constellation, and potentially one that is more secure, inclusive, and strategically aligned with its values and geopolitical objectives? In comparison, the European Union’s IRIS² (Infrastructure for Resilience, Interconnectivity and Security by Satellite) program has been allocated a total budget of 10+ billion euros, aiming at building 264 LEO satellites (at 1,200 km) and 18 MEO satellites (at 8,000 km), mainly by the European “Primes” (i.e., the usual “suspects” of legacy defense contractors), by 2030. For that amount, we should even be able to afford our own dedicated European stratospheric drone program for real-world use cases, as opposed to, for example, Airbus’s (AALTO) Zephyr fragile platform, which, in my opinion, is more an impressive showcase of an innovative, sustainable (solar-driven) aerial platform than a practical, robust, high-performance communications platform.

A significant portion of this budget should be dedicated to designing, manufacturing, and launching a European-based satellite constellation. If Europe could match the satellite cost price of SpaceX, and not that of IRIS² (which appears to be rooted in legacy satellite platform thinking, or at least its unit price tag suggests so), it could launch a very substantial number of EU-based LEO satellites for 200 billion euros (and obviously also for a lot less). Such a fleet would easily match the number of satellites in SpaceX’s long-term plans and would vastly surpass the satellites authorized under Kuiper’s first phase. To support such a constellation, Europe must invest heavily in launch infrastructure. Part of the budget could be leveraged to scale up the Ariane program, with Ariane 6 only now entering service, or to develop a reusable European launch system, mirroring and improving upon the capabilities of SpaceX’s Starship. This would reduce long-term launch costs, boost autonomy, and ensure deployment scalability over the decade. Equally essential would be establishing a robust ground segment covering the deployment of a European-wide ground station network, edge nodes, optical interconnects, and satellite laser communication capabilities.

Unlike Starlink, which benefits from SpaceX’s vertical integration, and Kuiper, which is backed by Amazon’s capital and logistics empire, a European initiative would rely heavily on strong multinational coordination. With 200 billion euros, and possibly less if the usual suspects (i.e., the “Primes”) are managed accordingly, Europe could close the technology gap rapidly, secure digital sovereignty, and ensure that it is not dependent on foreign providers for critical broadband infrastructure, particularly for rural areas, government services, and defense.

Could this be done by 2030? Doubtful, unless Europe can match SpaceX’s impressive pace of innovation. That would mean at least matching the three years (2015–2018) it took SpaceX to achieve a fully reusable Falcon 9 system and the four years (2015–2019) it took to go from concept to the first operational V1 satellite launch. Elon has shown it is possible.

KEY TAKEAWAYS.

LEO satellite direct-to-dish broadband, when strategically deployed in underserved and hard-to-reach areas, should be seen not as a competitor to terrestrial networks but as a strategic complement. It provides a practical, scalable, and cost-effective means to close the final connectivity gap, one that terrestrial networks alone are unlikely to bridge economically. In sparsely populated rural zones, where fiber deployment becomes prohibitively expensive, LEO satellites may render new rollouts obsolete. In these cases, satellite broadband is not just an alternative. It may be essential. Moreover, it can also serve as a resilient backup in areas where rural fiber is already deployed, especially in regions lacking physical network redundancy. Rather than undermining terrestrial infrastructure, LEO extends its reach, reinforcing the case for hybrid connectivity models central to achieving EU-wide digital reach by 2030.

Instead of continuing to subsidize costly last-mile fiber in uneconomical areas, European policy should reallocate a portion of this funding toward the development of a sovereign European Low-Earth Orbit (LEO) satellite constellation. A mere 200 billion euros, or even less, would go a very long way in securing such a program. Such an investment would not only connect the remaining “unconnectables” more efficiently but also strengthen Europe’s digital sovereignty, infrastructure resilience, and strategic autonomy. A European LEO system should support dual-use applications, serving both civilian broadband access and the European defense architecture, thereby enhancing secure communications, redundancy, and situational awareness in remote regions. In a hybrid connectivity model, satellite broadband plays a dual role: as a primary solution in hard-to-reach zones and as a high-availability backup where terrestrial access exists, reinforcing a layered, future-proof infrastructure aligned with the EU’s 2030 Digital Decade objectives and evolving security imperatives.

Non-European dependence poses strategic trade-offs: The rise of LEO broadband providers such as SpaceX and China’s state-aligned Guowang and Qianfan underscores Europe’s limited indigenous capacity in the Low Earth Orbit (LEO) space. While non-EU options may offer faster and cheaper rural connectivity, reliance on foreign infrastructure raises concerns about sovereignty, data governance, and security, especially amid growing geopolitical tensions.

LEO satellites, especially those similar to or more capable than Starlink V3, can technically support the connectivity needs of Europe’s remaining “unconnectable” (rural) households in 2030. Due to geography or economic constraints, these homes are unlikely to be reached by FTTP even under the most ambitious business-as-usual scenarios. A constellation of high-capacity satellites could serve these households with gigabit-class connections, especially when factoring in user concurrency and reasonable uptake rates.

The economics of FTTP deployment sharply deteriorate in very low-density rural regions, reinforcing the need for alternative technologies. By 2030, up to 5.5 million EU28 households are projected to remain beyond the economic viability of FTTP, down from 15.5 million rural homes in 2023. The European Commission has estimated that closing the gigabit gap from 2023 to 2030 requires around €200 billion. LEO satellite broadband may be a more cost-effective alternative, particularly with direct-to-dish architecture, at least for the share of unconnectable homes.

While LEO satellite networks offer transformative potential for deep rural coverage, they do not pose a threat to existing FTTP deployments. A Starlink V3 satellite, despite its 1 Tbps capacity, can serve the equivalent of a small fiber network, about 1,000 homes at 1 Gbps under full concurrency, or roughly 20,000 homes with 50% uptake and 10% busy-hour concurrency. FTTP remains significantly more efficient and scalable in denser areas. Satellites are not designed to compete with fiber in urban or suburban regions, but rather to complement it in places where fiber is uneconomical or otherwise unviable.

The technical attributes of LEO satellites make them ideally suited for sparse, low-density environments. Their broad coverage area and increasingly sophisticated beamforming and frequency reuse capabilities allow them to efficiently serve isolated dwellings, often spread across tens of thousands of square kilometers, where trenching fiber would be infeasible. These technologies extend the digital frontier rather than replace terrestrial infrastructure. Even with SpaceX’s innovative pace, it seems unlikely that this conclusion will change substantially within the next five years, at the very least.

A European LEO constellation could be feasible within a €200 billion budget: The €200 billion gap identified for full gigabit coverage could, in theory, fund a sovereign European LEO system capable of servicing the “unconnectables.” If Europe adopts leaner, vertically integrated innovation models like SpaceX’s (and avoids legacy procurement inefficiencies), such a constellation could deliver comparable technical performance while bolstering strategic autonomy.

The future of broadband infrastructure in Europe lies in a hybrid strategy. Fiber and mobile networks should continue to serve densely populated areas, while LEO satellites, potentially supplemented by fixed wireless and 5G, offer a viable path to universal coverage. By 2030, a satellite constellation only slightly more capable than Starlink V3 could deliver broadband to virtually all of Europe’s remaining unconnected homes, without undermining the business case for large-scale FTTP networks already in place.

CAUTIONARY NOTE.

While current assessments suggest that a LEO satellite constellation with capabilities on par with or slightly exceeding those anticipated for Starlink V3 could viably serve Europe’s remaining unconnected households by 2030, it is important to acknowledge the speculative nature of these projections. The assumptions are based on publicly available data and technical disclosures. Still, it is challenging to have complete visibility into the precise specifications, performance benchmarks, or deployment strategies of SpaceX’s Starlink satellites, particularly the V3 generation, or, for that matter, Amazon’s Project Kuiper constellation. Much of what is known comes from regulatory filings (e.g., FCC), industry reports and blogs, Reddit, and similar platforms, as well as inferred capabilities. Therefore, while the conclusions drawn here are grounded in credible estimates and modeling, they should be viewed with caution until more comprehensive and independently validated performance data become available.

THE SATELLITE’S SPECS – MOST IS KEPT A “SECRET”, BUT THERE IS SOME LIGHT.

Satellite capacity is not determined by a single metric, but instead emerges from a tightly coupled set of design parameters. Variables such as spectral efficiency, channel bandwidth, polarization, beam count, and reuse factor are interdependent. Knowing a few of them allows us to estimate, bound, or verify others. This is especially valuable when analyzing or validating constellation design, performance targets, or regulatory filings.

For example, consider a satellite that uses 250 MHz channels with 2 polarizations and a spectral efficiency of 5.0 bps/Hz. These inputs directly imply a channel capacity of 1.25 Gbps and a beam capacity of 2.5 Gbps. If the satellite is intended to deliver 100 Gbps of total throughput, as disclosed in related FCC filings, one can immediately deduce that 40 beams are required. If, instead, the satellite’s reuse architecture defines 8 x 250 MHz channels per reuse group with a reuse factor of 5, and each reuse group spans a fixed coverage area, then both the theoretical and practical throughput within that area can be computed, further enabling the estimation of the total number of beams, the required spectrum, and the likely user experience. These dependencies mean that if the number of user channels, the full bandwidth, the channel bandwidth, the number of beams, or the frequency reuse factor is known, it becomes possible to estimate or cross-validate the others. This helps identify design consistency or highlight unrealistic assumptions.
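These relationships are easy to encode. The sketch below reproduces the example above in a few lines of Python, deriving per-channel and per-beam capacity from bandwidth, spectral efficiency, and polarization, and then the implied beam count and reuse-group throughput.

```python
# Cross-validating capacity parameters, using the example figures quoted above.
def channel_capacity_gbps(channel_bw_mhz, spectral_eff_bps_hz, polarizations=1):
    return channel_bw_mhz * 1e6 * spectral_eff_bps_hz * polarizations / 1e9

per_channel_gbps = channel_capacity_gbps(250, 5.0)                # 1.25 Gbps per channel/polarization
per_beam_gbps = channel_capacity_gbps(250, 5.0, polarizations=2)  # 2.5 Gbps per dual-polarized beam

target_total_gbps = 100.0
beams_required = target_total_gbps / per_beam_gbps                # 40 beams

channels_per_group, reuse_factor = 8, 5
group_capacity_gbps = channels_per_group * per_beam_gbps          # 20 Gbps within one reuse area
total_across_groups = group_capacity_gbps * reuse_factor          # 100 Gbps satellite total

print(per_channel_gbps, per_beam_gbps, beams_required, group_capacity_gbps, total_across_groups)
```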

In satellite systems like Starlink, the total available spectrum is limited. This is typically divided into discrete channels, for example, eight 250 MHz channels (as is the case for Starlink’s Ku-band downlink to the user’s terrestrial dish). A key architectural advantage of spot-beam satellites (e.g., with spots that are at least 50 to 80 km wide) is that frequency channels can be reused in multiple spatially separated beams, as long as the beams do not interfere with one another. This is not based on a fixed reuse factor, as seen in terrestrial cellular systems, but on beam isolation, achieved through careful beam shaping, angular separation, and sidelobe control (as also implemented in the above Python code for UK Starlink satellite coverage, albeit in much simpler ways). For instance, one beam covering southern England can use the same frequency channels as another beam covering northern Scotland, because their energy patterns do not overlap significantly at ground level. In a constellation like Starlink’s, where hundreds or even thousands of beams are formed across a satellite footprint, frequency reuse is achieved through simultaneous but non-overlapping spatial beam coverage. The reuse logic is handled dynamically on board or through ground-based scheduling, based on real-time traffic load and beam geometry.

This means that for a given satellite, the total instantaneous throughput is not only a function of spectral efficiency and bandwidth per beam, but also of the number of beams that can simultaneously operate on overlapping frequencies without causing harmful interference. If a satellite has access to 2 GHz of bandwidth and 250 MHz channels, then up to 8 distinct channels can be formed. These channels can be replicated across different beams, allowing many more than 8 beams to be active concurrently, each using one of those 8 channels, as long as they are separated enough in space. This approach allows operators to scale capacity dramatically through dense spatial reuse, rather than relying solely on expanding spectrum allocations. The ability to reuse channels across beams depends on antenna performance, beamwidth, power control, and orbital geometry, rather than a fixed reuse pattern. The same set of channels is reused across non-interfering coverage zones enabled by directional spot beams. Satellite beams can be “stacked on top of each other” up to the number of available channels, or they can be allocated optimally across a coverage area determined by user demand.
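A toy way to see reuse-by-isolation in action is a greedy channel assignment: each beam gets the lowest-numbered channel not already used by another beam within some minimum ground separation. The separation distance below is an arbitrary illustration; real systems schedule dynamically against demand, geometry, and interference, as described above.

```python
# Toy greedy channel assignment: co-channel beams must be spatially separated.
import math

N_CHANNELS = 8                 # e.g., eight 250 MHz channels
MIN_SEPARATION_KM = 150.0      # illustrative isolation distance between co-channel beams

def ground_distance_km(a, b):
    # crude equirectangular approximation, adequate for a toy example
    (lat1, lon1), (lat2, lon2) = a, b
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def assign_channels(beam_centers):
    assignment = {}
    for i, beam in enumerate(beam_centers):
        blocked = {assignment[j] for j in range(i)
                   if ground_distance_km(beam, beam_centers[j]) < MIN_SEPARATION_KM}
        free = [ch for ch in range(N_CHANNELS) if ch not in blocked]
        assignment[i] = free[0] if free else None   # None: beam must be muted or re-steered
    return assignment

beams = [(51.5, -0.1), (52.5, -1.9), (53.5, -2.2), (55.9, -3.2), (51.6, 0.1)]
print(assign_channels(beams))
```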

Detailed specifications of commercial satellites, whether in operation or in the planning phase, are usually not publicly disclosed. However, companies are required to submit technical filings to the U.S. Federal Communications Commission (FCC). These filings include orbital parameters, frequency bands in use, EIRP, and antenna gain contours, as well as estimated capabilities of the satellite and user terminals. The FCC’s approval of SpaceX’s Gen2 constellation, for instance, outlines many of these values and provides a foundation upon which informed estimates of system behavior and performance can be made. The filings are not exhaustive and may omit sensitive performance data, but they serve as authoritative references for bounding what is technically feasible or likely in deployment.

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

FURTHER READINGS.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomy blog.

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future“, Techneconomyblog (March 2024).

NOTE: My “Satellite Coverage Concept Model,” which I have applied to Starlink Direct-2-Dish coverage and Services in the United Kingdom, is not limited to the UK alone but can be straightforwardly generalized to other countries and areas.

Submarine Cable Sensing for Strategic Infrastructure Defense and Arctic Deployment.

A diver approaches a sensing fiber-optic submarine cable beneath the icy waters of the North Atlantic, as a rusting cargo ship floats above and a submarine lurks nearby. The cable’s radiant rings symbolize advanced sensing capabilities, detecting acoustic, seismic, and movement signals. Yet, its exposure also reveals the vulnerability of subsea infrastructure to tampering, espionage, and sabotage, especially in geopolitically tense regions like the Arctic.

WHY WE NEED VISIBILITY INTO SUBMARINE CABLE ACTIVITY.

We can’t protect what we can’t measure. Today, we are mostly blind concerning our global submarine communications networks. We cannot state with absolute certainty whether critical parts of this infrastructure are already compromised by capable hostile state actors ready to press the button at an appropriate time. If the global submarine cable network were to break down, so would the world order as we know it. Submarine cables form the “invisible” backbone of the global digital infrastructure, yet they remain highly vulnerable. Over 95% of intercontinental internet and data traffic traverses subsea cables (intercontinental traffic is on the order of 25% of total internet traffic worldwide), but these critical assets lie largely unguarded on the ocean floor, exposed to environmental events, shipping activities, and increasingly, geopolitical interference.

In 2024 and early 2025, multiple high-profile incidents involving submarine cable damage have occurred, highlighting the fragility of undersea communication infrastructure in an increasingly unstable geopolitical environment. Several disruptions affected strategic submarine cable routes, raising concerns about sabotage, poor seamanship, and hybrid threats, particularly in sensitive maritime corridors (e.g., Baltic Sea, Taiwan Strait, Red Sea, etc.).

As also discussed in my recent article (“What lies beneath“), one of the most prominent cases of subsea cable cuts occurred in November 2024 in the Baltic Sea, where two critical submarine cables, the East-West Interlink between Lithuania and Sweden, and the C-Lion1 cable between Finland and Germany, were damaged in close temporal and spatial proximity. The Chinese cargo vessel Yi Peng 3 was identified as having been in the vicinity during both incidents. During a Chinese-led probe, investigators from Sweden, Germany, Finland, and Denmark boarded the ship in December. By March 2025, European officials expressed growing confidence that the breaks were accidental rather than acts of sabotage. In December 2024, also in the Baltic Sea, the Estlink 2 submarine power cable and two telecommunications cables operated by Elisa were ruptured. The suspected culprit was the Eagle S, an oil tanker believed to be part of Russia’s “shadow fleet”, a group of poorly maintained vessels that emerged after Russia’s invasion of Ukraine to circumvent sanctions and transport goods covertly. These vessels are frequently operated by opportunists with little maritime training or seamanship, posing a growing risk to maritime-based infrastructure.

These recent incidents further emphasize the need for proactive monitoring or sensing tools applied to the submarine cable infrastructure. Today, more than 100 subsea cable outages are logged each year globally. Most are attributed to natural or unintentional human-related causes, including poor seamanship and poorly maintained vessels. Moreover, authorities have noted that, since Russia’s full-scale invasion of Ukraine in 2022, the use of a “ghost fleet” of vessels, often in barely seaworthy condition and operated by underqualified or loosely regulated crews, has grown substantially in scope. These ships, which also appear to be used for hybrid operations or covert missions, operate under minimal oversight, raising the risk of both deliberate interference and catastrophic negligence.

As detailed in my article “What lies beneath“, several particular cable break signatures may be “fingerprints” of hybrid or hostile interference. These include simultaneous localized cuts, unnaturally uniform damage profiles, and activity in geostrategic cable chokepoints, traits that appear atypical of commercial maritime incidents. One notable pattern is the lack of conventional warning signals, e.g., no seismic precursors and no known trawling vessels in the area, combined with rapid phase discontinuities captured in the coherent signal traces of the few sensing-equipped submarine cables we have. Equally concerning is the geopolitical context. The Baltic Sea is a critical artery connecting Northern Europe’s cloud infrastructure. Taiwan’s subsea cables are vital to the global chip supply chain and financial systems. Disrupting these routes can create outsized geopolitical pressure while allowing the hostile actor to maintain plausible deniability.

Modern sensing technologies now offer a pathway to detect and characterize such disturbances. Research by Mazur et al. (OFC 2024) has demonstrated real-time anomaly detection across transatlantic submarine cable systems. Their methodology could spot small mechanical vibrations and sudden cable stresses that precede an optical cable failure. Such sensing systems can be retrofitted onto existing landing stations, enabling authorities or cable operators to issue early alerts for potential sabotage or environmental threats.

Furthermore, continuous monitoring allows real-time threat classification, differentiating between earthquake-triggered phase drift and artificial localized cuts. Combined with AI-enhanced analytics and (near) real-time AIS (Automatic Identification System) information, these sensing systems can serve as a digital tripwire along the seabed, transforming our ability to monitor and defend strategic infrastructure.
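
To make the “digital tripwire” idea concrete, here is a minimal sketch of how a sensing anomaly might be cross-checked against AIS vessel positions. It is illustrative only: the data structures (AisFix, vessels_near_anomaly) and the distance and time thresholds are my own assumptions, the anomaly is assumed to have already been mapped from a position along the cable to latitude/longitude, and no specific AIS feed or sensing product is implied.

```python
# Hedged sketch: correlate a cable-sensing anomaly with nearby AIS traffic.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class AisFix:               # hypothetical, simplified AIS record
    mmsi: str               # vessel identifier
    t: float                # unix time, seconds
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def vessels_near_anomaly(anomaly_t, anomaly_lat, anomaly_lon, ais_fixes,
                         max_km=10.0, max_dt_s=1800.0):
    """Return AIS fixes within max_km and max_dt_s of a sensing anomaly."""
    return [f for f in ais_fixes
            if abs(f.t - anomaly_t) <= max_dt_s
            and haversine_km(anomaly_lat, anomaly_lon, f.lat, f.lon) <= max_km]
```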

Without these capabilities, the subsea cable infrastructure landscape remains an operational blind spot, susceptible to exploitation in the next phase of global competition or geopolitical conflict. As threats evolve and hybrid tactics and actions increase, visibility into what lies beneath is not merely advantageous but essential.

Illustration of a so-called Russian “ghost” vessel (e.g., a bulk carrier) dragging its stern anchor through a subsea optical communications cable. “Ghost vessel” is an informal term for a Russian vessel operating covertly or suspiciously, often without broadcasting its identity or location via the Automatic Identification System (AIS), the global maritime safety protocol that civilian ships must use.

ISLANDS AT RISK: THE FRAGILE NETWORK BENEATH THE WAVES.

Submarine fiber-optic cables form the “invisible” backbone of global connectivity, silently transmitting over 95% of international data traffic beneath the world’s oceans (note: intercontinental data traffic represents ~25% of the worldwide data traffic). These subsea cables are essential for everyday internet access, cloud services, financial transactions (i.e., over 10 billion euros daily), critical infrastructure operations, emergency response coordination, and national security. Despite their importance, they are physically fragile, vulnerable to natural disruptions such as undersea earthquakes, volcanic activity, and ice movement, as well as to human causes like accidental trawling, ship anchor drags, and even deliberate sabotage. A single cut to a key cable can isolate entire regions or nations from the global network, disrupt trade and governance, and slow or sever international communication for days or weeks.

This fragility becomes even more acute when viewed through the lens of island nations and territories. The figure below presents a comparative snapshot of various islands across the globe, illustrating the number of international subsea cable connections each has (in blue bars), overlaid with the population size in millions (in orange). The disparity is striking: densely populated islands such as Taiwan, Sri Lanka, or Madagascar often rely on only a few cables, while smaller territories like Saint Helena or Gotland may have just a single connection to the rest of the world. These islands inherently depend on subsea infrastructure for access to digital services, economic stability, and international communication, yet many remain poorly connected or dangerously exposed to single points of failure. Some of these islands may be less important from a global security, geopolitical, and defense perspective. However, for the inhabitants of those islands, that will of course matter little, and some islands are of critical importance to a safe and secure world order.

The chart below underscores a critical truth. Island connectivity is not just a matter of bandwidth or speed but a matter of resilience. For many of the world’s islands, a break in the cable doesn’t just slow the internet; it severs the lifeline. Every additional cable significantly reduces systemic risk. For example, going from two to three cables can cut expected unavailability by more than 60–80%, and moving from three to four cables supports near-continuous availability, which is now required for modern economies and national security.

The bar chart shows the number of subsea cable connections, while the orange line represents each island’s population (plotted on a log-scale), highlighting disparities between connectivity and population density.

Reducing systemic risk means lowering the chance that a single point of failure, or a small set of failures, can cause a complete system breakdown. In the context of subsea cable infrastructure, systemic risk refers to the vulnerability that arises when a country’s or island’s entire digital connectivity relies on just one or two physical links to the outside world. With only two international submarine cables connecting a given island in parallel, the expected total loss of service amounts to up to ~13 minutes of downtime per year (note: for a single cable, it would be ~2 days per year). This should be compared to the time it may take to repair a submarine cable and bring it back into operation after a cut, which may be weeks or even months, depending on the circumstances and location. Adding a third submarine cable (in parallel to the other two) reduces the maximum expected total loss of service to ~4 seconds per year. The likelihood that all three would be compromised by naturally occurring incidents is very small (on the order of one in ten million). Relying on only two submarine cables for an island’s entire international connectivity, at bandwidth-critical scale, is a high-stakes gamble. While dual-cable redundancy may offer sufficient availability on paper, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access. This represents a technical fragility and a substantial security liability for an island economy and a digitally reliant society.
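
These downtime figures follow from simple parallel-redundancy arithmetic. The sketch below is a minimal illustration, assuming an illustrative single-cable unavailability of about 0.5% (roughly two days of downtime per year) and fully independent failures; real-world correlated failures (a shared landing site, one anchor dragging across several cables) would make the picture worse.

```python
# Minimal sketch of the parallel-redundancy arithmetic behind the figures above.
# Assumptions (mine, for illustration): each cable has ~0.5% unavailability
# (~2 days of downtime per year) and cable failures are fully independent.

MIN_PER_YEAR = 365.25 * 24 * 60

def expected_downtime_minutes(n_cables: int, single_unavailability: float = 0.005) -> float:
    """Expected total-loss downtime per year when all n parallel cables must fail."""
    return (single_unavailability ** n_cables) * MIN_PER_YEAR

for n in range(1, 5):
    print(f"{n} cable(s): ~{expected_downtime_minutes(n):.4g} minutes/year")
# 1 cable : ~2,630 minutes/year (~1.8 days)
# 2 cables: ~13 minutes/year
# 3 cables: ~0.066 minutes/year (~4 seconds)
```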

Suppose one cable is accidentally or deliberately damaged, with little or no redundancy. In that case, the entire system can collapse, cutting off internet access, disrupting communication, and halting financial and governmental operations. Reducing systemic risk involves increasing resilience through redundancy, ensuring the overall system continues functioning even if one or more cables fail. This also means not relying on only one type of connectivity, e.g., subsea cables or satellite. Rather, combinations of different kinds of connectivity are critically important to safeguard continuous connectivity to the outside world from the perspective of an island, even if alternative or backup connectivity does not match the capacity of the primary means of connectivity. Moreover, islands with relatively low populations tend to rely on one central terrestrial switching hub (typically at the main population center), without much meshed connectivity, exposing all communication on the island if such a hub becomes compromised.

Submarine cables are increasingly recognized as strategic targets in a hybrid warfare or full-scale military conflict scenario. Deliberate severance of these cables, particularly in chokepoints, near shore landing zones (i.e., landing stations), or cable branching points, can be a high-impact, low-visibility tactic to cripple communications without overt military action.

Going from two to three (or three to four) subsea cables may offer some strategic buffer. If an attacker compromises one or even two links, the third can preserve some level of connectivity, allowing essential communications, coordination, and early warning systems to remain operational. This may reduce the impact window for disruption and provide authorities time to respond or re-route traffic. However, it is unlikely to make a substantial difference in a conflict scenario, where a capable hostile actor may easily compromise a relatively low number of submarine cable connections. Moreover, if the terrestrial network is exposed to a single point of failure via a central switching hub design, having multiple subsea connections may matter very little in a crisis situation.

And, think about it, there is no absolute guarantee that the world’s critical subsea infrastructure has not already been compromised by hostile actors. In fact, given the strategic importance of submarine cables and the increasing sophistication of state and non-state actors in hybrid warfare, it appears entirely plausible that certain physical and cyber vulnerabilities have already been identified, mapped, or even covertly exploited.

In short, the absence of evidence is not evidence of absence. While major nations and alliances like NATO have increased efforts to monitor and secure subsea infrastructure, the sheer scale and opacity of the undersea environment mean that strategic surprise is still possible (maybe even likely). It is also worth remembering that most submarine cables have historically operated, and still largely operate, in the dark. We rely on their redundancy and robustness, but we largely lack the sensory systems that would allow us to proactively defend or observe them in real time.

This is what makes submarine cable sensing technologies such a strategic frontier today and why resilience, through redundancy, sensing technologies, and international cooperation, is critical. We may not be able to prevent every act of sabotage, but we can reduce the risk of catastrophic failure and improve our ability to detect and respond in real time.

THE LIKELY SUSPECTS – THE CAPABLE HOSTILE ACTOR SEEN FROM A WESTERN PERSPECTIVE.

As observed in the Western context, Russia and China are considered the most capable hostile actors in submarine cable sabotage. China is reportedly advancing its ability to conduct such operations at scale. These developments underscore the growing need for technological defenses and multilateral coordination to safeguard global digital infrastructure.

Several state actors possess the capability and potential intent to compromise or destroy submarine communications networks. Among them, Russia is perhaps the most openly scrutinized. Its specialized naval platforms, such as the Yantar-class intelligence ships and deep-diving submersibles like the AS-12 “Losharik”, can access cables on the ocean floor for tapping or cutting purposes. Western military officials have repeatedly raised concerns about Russia’s activities near undersea infrastructure. For example, NATO has warned of increased Russian naval activity near transatlantic cable routes, viewing this as a serious security risk impacting nearly a billion people across North America and Western Europe.

China is also widely regarded as a capable actor in this domain. The People’s Liberation Army Navy (PLAN) and a vast network of state-linked maritime engineering firms possess sophisticated underwater drones, survey vessels, and cable-laying ships. These assets allow for potential cable mapping, interception, or sabotage operations. Chinese maritime activity around strategic chokepoints such as the South China Sea has raised suspicions of dual-use missions under the guise of oceanographic research.

Furthermore, credible reports and analyses suggest that China is developing methods and technologies that could allow it to compromise subsea cable networks at scale. This includes experimental systems enabling simultaneous disruption or surveillance of multiple cables. According to Newsweek, recent Chinese patents may indicate that China has explored ways to “cut or manipulate undersea cables” as part of its broader strategy for information dominance.

Other states, such as North Korea and Iran, may not possess full deep-sea capabilities but remain threats to regional segments, particularly shallow water cables and landing stations. With its history of asymmetric tactics, North Korea could plausibly disrupt cable links to South Korea or Japan. Meanwhile, Iran may threaten Persian Gulf routes, especially during heightened conflict.

While non-state actors are not typically capable of attacking deep-sea infrastructure directly, they could be used by state proxies or engage in sabotage at cable landing sites. These actors may exploit the relative physical vulnerability of cable infrastructure near shorelines or in countries with less robust monitoring systems.

Finally, it is not unthinkable that NATO countries possess the technical means and operational experience to compromise submarine cables if required. However, their actions are typically constrained by strategic deterrence, international law, and alliance norms. In contrast, Russia and China are perceived as more likely to use these capabilities to project coercive power or achieve geopolitical disruption under a veil of plausible deniability.

WE CAN’T PROTECT WHAT WE CAN’T MEASURE – WHAT IS THE SENSE OF SENSING SUBMARINE CABLES?

In the context of submarine fiber-optic cable connections, it should be clear that we cannot protect this critical infrastructure if we are blind to the environment around it and along the cables themselves.

While traditionally designed for high-capacity telecommunications, submarine optical cables are increasingly recognized as dual-use assets, serving civil and defense purposes. When enhanced with distributed sensing technologies, these cables can act as persistent monitoring platforms, capable of detecting physical disturbances along the cable routes in (near) real time.

From a defense perspective, sensing-enabled subsea cables offer a discreet, infrastructure-integrated solution for maritime situational awareness. Technologies such as Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing can detect anomalies like trawling activity, anchor dragging, undersea vehicle movement, or cable tampering, especially in coastal zones or strategic chokepoints like the GIUK gap or Arctic straits. When paired with AI-driven classification algorithms, these systems can provide early-warning alerts for hybrid threats, such as sabotage or unregistered diver activity near sensitive installations.

For critical infrastructure protection, these technologies play an essential role in real-time monitoring of cable integrity. They can detect:

  • Gradual mechanical strain due to shifting seabed or ocean currents,
  • Seismic disturbances that may precede physical breaks,
  • Ice loading or iceberg impact events in polar regions.

These sensing systems also enable faster fault localization. While they are not likely to prevent a cable from being compromised, whether by accidental impact or deliberate sabotage, they dramatically reduce the time required to identify the problem’s location. In traditional submarine cable operations, pinpointing a break can take days, especially in deep or remote waters. With distributed sensing, operators can localize disturbances within meters along thousands of kilometers of cable, enabling faster dispatch of repair vessels, route reconfiguration, and traffic rerouting.

Moreover, sensing technologies that operate passively or without interrupting telecom traffic, such as SOP sensing or C-OFDR, are particularly well suited for retrofitting onto existing brownfield infrastructure or deployment on dual-use commercial-defense systems. They offer persistent, covert surveillance without consuming bandwidth or disrupting service, an advantage for national security stakeholders seeking scalable, non-invasive monitoring solutions. As such, they are emerging as a critical layer in the defense of underwater communications infrastructure and the broader maritime domain.

We should remember that no matter how advanced our monitoring systems are, they are unlikely to prevent submarine cables from being compromised by natural events like earthquakes and icebergs or unintentional and deliberate human activity such as trawling, anchor strikes, or sabotage. However, the sensing technologies offer the ability to detect and localize problems faster, enabling quicker response and mitigation.

TECHNOLOGY OVERVIEW: SUBMARINE CABLE SENSING.

Modern optical fiber sensing leverages the cable’s natural backscatter phenomena, such as Rayleigh, Brillouin, and Raman effects, to extract environmental data from a subsea communications cable. The physics of these effects is briefly described at the end of this article.

In the following, I will provide a comparative outline of the major sensing technologies in use today or that may be deployed in future greenfield submarine fiber deployments. Each method has trade-offs in spatial or temporal resolution, compatibility with existing infrastructure, cost, and robustness to background noise. We will focus on defense applications in general, as applied to Arctic coastal environments such as those around Greenland. The relevance of each optical cable sensing technology described below to maritime defense will be summarized.

Some of the most promising sensing technologies today are based on the principles of Rayleigh scattering. For most sensing techniques, Rayleigh scattering is crucial in transforming standard optical cables into powerful sensor arrays without necessarily changing the physical cable structure. This makes it particularly valuable for submarine cable applications in the Arctic and strategic defense settings. By analyzing the light that bounces back from within the fiber, these systems can enable (near) real-time monitoring of intrusions or seismic activity over vast distances, spanning thousands of kilometers. Importantly, promising techniques leverage Rayleigh scattering to function effectively even on legacy cable infrastructure, where installing additional reflectors would be impractical or uneconomical. Since Rayleigh-based sensing can be performed passively and non-invasively, it does not interfere with active data traffic, making it ideal for dual-use cables for communication and surveillance purposes. This approach offers a uniquely scalable and resilient way to enhance situational awareness and infrastructure defense in harsh or remote environments like the Arctic.

Before we get started on the various relevant sensing technologies, let us briefly discuss what we mean by a sensing technology’s performance and its sensing capability, that is, how well it can detect, localize, and classify physical disturbances, such as vibration, strain, acoustic pressure, or changes in light polarization, along a fiber-optic cable. The performance is typically judged by parameters like spatial resolution, detection range, sensitivity, signal-to-noise ratio, and the system’s ability to operate in noisy or variable environments. In the context of submarine detection, these disturbances are often caused by acoustic signals generated by vessel propulsion, machinery noise, or pressure waves from movement through the water. While the fiber does not measure sound pressure directly, it can detect the mechanical effects of those acoustic waves, such as tiny vibrations or refractive index changes in the surrounding seabed or cable sheath. Some of the technologies we deploy detect these vibrations as phase shifts in backscattered light; others track subtle polarization changes induced by environmental stress on the subsea optical cables (as a result of an event in the proximity of the cable). A sensing system is considered effective when it can capture and resolve these indirect signatures of underwater activity with enough fidelity to enable actionable interpretation, especially in complex environments like coastal Arctic zones or the deep ocean.

In underwater acoustics, sound is measured in units of decibels relative to 1 micro Pascal, expressed as “dB re 1 µPa”, which defines a standard reference pressure level. The notation “dB re 1 µPa @ 1 m” refers to the sound pressure level of an underwater source, expressed in decibels relative to 1 micro Pascal and measured at a standard distance of one meter from the source. This metric quantifies how loud an object, such as a submarine, diver, or vessel, sounds when observed at close range, and is essential for modeling how sound propagates underwater and estimating detection ranges. In contrast, noise floor measurements use “dB re 1 µPa/√Hz,” which describes the distribution of background acoustic energy across frequencies, normalized per unit bandwidth. While source level describes how powerful a sound is at its origin, noise floor values indicate how easily such a sound could be detected in a given underwater environment.

Measurements are often normalized to bandwidth to assess sound or noise frequency characteristics, using “dB re 1 µPa/√Hz”. For example, stating a noise level of 90 dB re 1 µPa/√Hz in the 10 to 1000 Hz band means that within that frequency range, the acoustic energy is distributed at an average pressure level referenced per square root of Hertz. This normalization allows fair comparison of signals or noise across different sensing bandwidths. It helps determine whether a signal, such as a submarine’s acoustic signature, can be detected above the background noise floor. The effectiveness of a sensing technology is ultimately judged by whether it can resolve these types of signals with sufficient clarity and reliability for the specific use case.
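
To illustrate how these quantities combine in practice, the sketch below applies a deliberately simplified passive detection budget: a source level referenced at 1 m, spherical spreading loss (TL = 20·log10(r)), a flat noise density integrated over the band, and no array or processing gain. All of these are my own simplifying assumptions; real underwater propagation and detection are considerably more complex.

```python
# Hedged back-of-the-envelope detectability check using the notation above.
from math import log10

def band_noise_level(noise_density_db_re_1upa_per_rthz: float, bandwidth_hz: float) -> float:
    """Integrate a flat noise density over a band: NL = density + 10*log10(BW)."""
    return noise_density_db_re_1upa_per_rthz + 10 * log10(bandwidth_hz)

def snr_at_range(source_level_db_re_1upa_at_1m: float, range_m: float,
                 noise_density: float, bandwidth_hz: float) -> float:
    """SNR = SL - TL - NL, with assumed spherical spreading TL = 20*log10(r)."""
    tl = 20 * log10(max(range_m, 1.0))
    return source_level_db_re_1upa_at_1m - tl - band_noise_level(noise_density, bandwidth_hz)

# Example: a quiet source of 120 dB re 1 µPa @ 1 m, an Arctic winter noise floor
# of ~60 dB re 1 µPa/sqrt(Hz), a 10-1000 Hz band, observed at 2 km range.
print(round(snr_at_range(120, 2_000, 60, 990), 1), "dB SNR before any processing gain")
# The strongly negative SNR illustrates why narrowband filtering, long
# integration times, and machine-learning classifiers are essential.
```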

In the mid-latitude Atlantic Ocean, typical noise floor levels range between 85 and 105 dB re 1 µPa/√Hz in the 10 to 1000 Hz frequency band. This environment is shaped by intense shipping traffic, consistent wave action, wind-generated surface noise, and biological sources such as whales. The noise levels are generally higher near busy shipping lanes and during storms, which raises the acoustic background and makes it more challenging to detect subtle events such as diver activity or low-signature submersibles (e.g., a ballistic missile submarine, SSBN). In such settings, sensing techniques must operate with high signal-to-noise-ratio thresholds, often requiring filtering or a focus on specific narrow frequency bands, enhanced by machine-learning applications.

On the other hand, the Arctic coastal environment, such as the waters surrounding Greenland, is markedly quieter than, for example, the Atlantic Ocean. Here, the noise floor typically falls between 70 and 95 dB re 1 µPa/√Hz, and in winter, when sea ice covers the surface, it can drop even lower, to around 60 dB re 1 µPa/√Hz. In these conditions, noise sources are limited to occasional vessel traffic, wind-driven surface activity, and natural phenomena such as glacial calving or ice cracking. The seasonal nature of Arctic noise patterns means that the acoustic environment is especially quiet and stable during winter, creating ideal conditions for detecting faint mechanical disturbances. This quiet background significantly improves the detectability of low-amplitude events, including the movement of stealth submarines, diver-based tampering, or UUV (i.e., unmanned underwater vehicle) activity.

Distributed Acoustic Sensing (DAS) uses phase-sensitive optical time-domain reflectometry (φ-OTDR) to detect acoustic vibrations and dynamic strain in general. Dynamic strain may arise from seismic waves or mechanical impacts along an optical fiber path. DAS allows for structural monitoring at a resolution of ca. 10 meters and, with amplification, over a typical distance of 10 to 100 kilometers (which can be extended with more amplifiers). It is an active sensor technology. DAS can be installed on shorter submarine cables (e.g., less than 100 km), although installing on a brownfield subsea cable is relatively complex. For long submarine cables (e.g., transatlantic), DAS would be greenfield deployed in conjunction with the subsea cable rollout, as retrofitting on an existing fiber installation would be impractical.

Phase-sensitive optical time domain reflectometry is a sensing technique that allows an optical fiber, like those used in subsea cables, to act like a long string of virtual microphones or vibration sensors. The method works by sending short pulses of laser light into the fiber and measuring the tiny reflections that bounce back due to natural imperfections inside the glass. When there is no activity near the cable, the backscattered light has a stable pattern. But when something happens near the cable, like a ship dragging an anchor, seismic shaking, or underwater movement, those vibrations cause tiny changes in the fiber’s shape. This physically stretches or compresses the fiber, changing the phase of the light traveling through it. φ-OTDR is specially designed to be sensitive to these phase changes. What is being detected, then, is not a “sound” per se, but a tiny change in the timing (phase) of the light as it reflects back. These phase shifts happen because mechanical energy from the outside world, like movement, stress, or pressure, slightly changes the length of the fiber or its refractive properties at specific points. φ-OTDR is ideal for detecting vibrations, like footsteps (yes, the technique also works on terra firma), vehicle movement, or anchor dragging. It is best suited for acoustic sensing over relatively long distances with moderate resolution.

So, in simple terms:

  • The “event” is not inside the fiber but in sufficient vicinity to cause a reaction in the fiber.
  • That external event causes micro-bending or stretching of the fiber.
  • The fiber cable’s mechanical deformation changes the phase of light that is then detected.
  • The sensing system uses these changes to pinpoint where along the fiber the event happened, often with meter-scale precision.
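
As a minimal sketch of this localization arithmetic (a hedged illustration; the group index of ~1.468 and the simple phase threshold are my own assumptions standing in for a real φ-OTDR processing chain), the snippet below converts the delay of a disturbed backscatter sample into a position along the fiber.

```python
# Hedged sketch: map the round-trip delay of a disturbed backscatter sample
# to a distance along the fibre (light travels out to the event and back).
C_VACUUM_M_S = 299_792_458.0
GROUP_INDEX = 1.468  # assumed value for standard single-mode fibre

def event_distance_m(round_trip_delay_s: float) -> float:
    """Distance along the fibre corresponding to a given round-trip delay."""
    return C_VACUUM_M_S * round_trip_delay_s / (2 * GROUP_INDEX)

def locate_events(differential_phase, sample_rate_hz: float, threshold_rad: float = 0.5):
    """Return distances (m) of samples whose phase change exceeds the threshold."""
    return [event_distance_m(i / sample_rate_hz)
            for i, dphi in enumerate(differential_phase)
            if abs(dphi) > threshold_rad]

# Example: at a 100 MHz sampling rate, one 10 ns sample spans roughly 1 m of fibre.
print(round(event_distance_m(1e-8), 2), "m per 10 ns of round-trip delay")
```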

DAS has emerged as a powerful tool for transforming optical fibers into real-time acoustic sensor arrays, capable of detecting subtle mechanical disturbances such as vibrations, underwater movement, or seismic waves. While this capability is very attractive for defense and critical infrastructure monitoring, its application across existing long-haul subsea cables, particularly transoceanic systems, is severely constrained. The technology requires dark fibers or at least isolated, unused wavelengths, which are generally unavailable in (older) operational submarine systems already carrying high-capacity data traffic. Moreover, most legacy subsea cables were not designed with DAS compatibility in mind, lacking the bidirectional amplification or optical access points required to maintain sufficient signal integrity for acoustic sensing over long distances.

Retrofitting existing transatlantic or pan-Arctic submarine cables for DAS would be technically complex and, in most scenarios, economically unfeasible. These systems span thousands of kilometers, are deeply buried or armored along parts of their route, and incorporate in-line repeaters that do not support the backscattering reflection needed for DAS. As a result, implementing DAS across such long-haul infrastructure would entail replacing major cable components or deploying parallel sensing fibers, both options that are likely inconsistent with the constraints of an already-deployed system. If this kind of sensing capability is deemed strategically necessary, it may be operationally much less complex and more economical to deploy a greenfield cable with embedded sensing technology, particularly where the existing submarine cables are 10 years old or older.

Despite these limitations, DAS offers significant potential for defense applications over shorter submarine segments, particularly near coastal landing points or within exclusive economic zones. One promising use case involves the Arctic and sub-Arctic regions surrounding Greenland. As geopolitical interest in the Arctic intensifies and ice-free seasons expand, the cables that connect Greenland to Iceland, Canada, and northern Europe will increasingly represent strategic infrastructure. DAS could be deployed along these shorter subsea spans, especially within fjords, around sensitive coastal bases, or in narrow straits, to monitor for hybrid threats such as diver incursions, submersible drones, or anchor dragging from unauthorized vessels. Greenland’s coastal cables often traverse relatively short distances without intermediate amplifiers and with accessible routes, making them more amenable to partial DAS coverage, especially if dark fiber pairs or access points exist at the landing stations.

The technology can be integrated into the infrastructure in a greenfield context, where new submarine cables are being designed and laid out. This includes reserving fiber strands exclusively for sensing, installing bidirectional optical amplifiers compatible with DAS, and incorporating coastal and Arctic-specific surveillance requirements into the architecture. For example, new Arctic subsea cables could be designed with DAS-enabled branches that extend into high-risk zones, allowing for passive real-time monitoring of marine activity without deploying sonar arrays or surface patrol assets (e.g., without actively revealing to, say, a ballistic missile submarine that it has been detected, as would be the case with an active sonar).

DAS also supports geophysical and environmental sensing missions relevant to Arctic defense. When deployed along the Greenlandic shelf or near tectonic fault lines, DAS can contribute to early-warning systems for undersea earthquakes, landslides, or ice-shelf collapse events. These capabilities enhance environmental resilience and strengthen military situational awareness in a region where traditional sensing infrastructure is sparse.

DAS is best suited for detecting mid-to-high frequency acoustic energy, such as propeller cavitation or hull vibrations. However, stealth submarines may not produce strong enough vibrations to be detected unless they operate close to the fiber (e.g., <1 km) or in shallow water where coupling to the seabed is enhanced. Detection is plausible under favorable conditions but uncertain in deep-sea environments. However, in shallow Greenlandic coastal waters, DAS may detect a submarine’s acoustic wake, cavitation onset, or low-frequency hull vibrations, especially if the vessel passes within several hundred meters of the fiber.

Deploying φ-OTDR on brownfield submarine cables requires minimal infrastructure changes, as the sensing system can be installed directly at the landing station using a dedicated or wavelength-isolated fiber. However, its effective sensing range is limited to the segment between the landing station and the first in-line optical amplifier, typically around 80 to 100 kilometers. This limitation exists because standard submarine amplifiers are unidirectional and amplify the forward-traveling signal only. They do not support the return of backscattered light required by φ-OTDR, effectively cutting off sensing beyond the first repeater in brownfield systems. Even in a greenfield deployment, φ-OTDR is fundamentally constrained by weak backscatter, incoherent detection, poor long-distance SNR, and amplifier design, making it a technology mainly for coastal environments.

Coherent Optical Frequency Domain Reflectometry (C-OFDR) employs continuous-wave frequency-chirped laser probe signals and measures how the interference pattern of the reflected light changes (i.e., coherent detection). It offers high resolution (i.e., 100–200 meters) and, for telecom-grade implementations, long-range sensing (i.e., hundreds of kilometers), even over legacy submarine cables without Bragg gratings (i.e., periodic variations of the refractive index of the fiber). It is an active sensor technology. C-OFDR is one of the most promising techniques for high-resolution distributed sensing over long distances (e.g., transatlantic distances), and it can, in fact, be used on existing operational subsea cables without any special modifications to the cable itself, although with some practical considerations on older systems and limitations due to a reduced dynamic range. However, this sensing technology does require coherent detection systems with narrow-linewidth lasers and advanced DSP, which might make brownfield integration complex without significant upgrades. In contrast, greenfield deployments can seamlessly incorporate C-OFDR by leveraging the coherent optical infrastructure already standard in modern long-haul submarine cables.

The C-OFDR technique, like φ-OTDR, also relies on sensing changes in the light’s properties as it is reflected from imperfections in the optical fiber (i.e., Rayleigh backscattering). When something (an “event”) happens near the fiber, like the ground shaking from an earthquake, an anchor hitting the seabed, or a temperature change, the optical fiber experiences microscopic stretching, squeezing, or vibration. These tiny changes affect how the light reflects back; specifically, they change the phase and frequency of the returning signal. C-OFDR uses interferometry to measure these small differences very precisely. It is important to understand that the “event” we talk about is not inside the fiber, but its effects cause changes to the fiber that can be measured by our chosen sensing technique. External forces (like pressure or motion) cause strain or stress in the glass fiber, which changes how the light moves inside. C-OFDR detects those changes and tells you where along the cable they happened, sometimes within a few centimeters.
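
The frequency-to-distance mapping at the heart of (C-)OFDR can be sketched in a few lines. This is a simplified illustration under my own assumptions (an ideal linear chirp and a group index of roughly 1.468); real systems must additionally cope with laser phase noise, chromatic dispersion, and amplifier effects.

```python
# Hedged sketch of the OFDR beat-frequency-to-distance mapping.
C_VACUUM_M_S = 299_792_458.0
GROUP_INDEX = 1.468  # assumed value for standard single-mode fibre

def reflection_distance_m(beat_frequency_hz: float, chirp_rate_hz_per_s: float) -> float:
    """A reflector at distance d produces a beat note f_b = gamma * tau, with
    round-trip delay tau = 2*n*d/c, hence d = f_b * c / (2 * n * gamma)."""
    return beat_frequency_hz * C_VACUUM_M_S / (2 * GROUP_INDEX * chirp_rate_hz_per_s)

# Example: a 1 THz/s chirp and a 100 kHz beat note place the reflection at ~10 km.
print(round(reflection_distance_m(100e3, 1e12) / 1000, 2), "km")
```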

Deploying C-OFDR on brownfield submarine cables is more challenging, as it typically requires more changes to the landing station, such as coherent transceivers with narrow-linewidth lasers and high-speed digital signal processing, which are normally not present in legacy landing stations. Even if such equipment is added at the landing station, sensing may, as with φ-OTDR, be limited to the segment up to the first in-line amplifier unless the system is modified as shown in the work by Mazur et al. Compared to φ-OTDR, however, C-OFDR leverages coherent receivers, DSP, and telecom-grade infrastructure to overcome those barriers, making it a very relevant long-haul subsea cable sensing technology.

An interesting paper using a modified C-OFDR technique, “Continuous Distributed Phase and Polarization Monitoring of Trans-Atlantic Submarine Fiber Optic Cable” by Mazur et al., demonstrates a powerful proof-of-concept for using existing long-haul submarine telecom cables, equipped with more than 70 amplifiers, for real-time environmental sensing without interrupting data transmission. The authors used a prototype system combining a fiber laser, an FPGA (Field-Programmable Gate Array), and a GPU (Graphics Processing Unit) to perform long-range optical frequency domain reflectometry (C-OFDR) over a 6,500 km transatlantic submarine cable. By measuring phase and polarization changes between repeaters, they successfully detected a magnitude 6.4 earthquake near Ferndale, California, showing the seismic wave propagating in real time from the West Coast of the USA, across North America, until it was eventually observed by Mazur et al. in the Atlantic Ocean. Furthermore, they demonstrated deep-sea temperature measurements by analyzing round-trip time variations along the full cable spans. The system operated for over two months without service interruptions, underscoring the feasibility of repurposing submarine cables as large-scale oceanic sensing arrays for geophysical and defense applications. Their system’s ability to monitor deep-sea environmental variations, such as temperature changes, contributes to situational awareness in remote oceanic regions like the Arctic or the Greenland-Iceland-UK (GIUK) Gap, areas of increasing strategic importance. It is worth noting that while the basic structure of the cable (in terms of span length and repeater placement) is standard for long-haul subsea cable systems, what sets this cable apart is the integration of a non-disruptive monitoring system that leverages existing infrastructure for advanced environmental sensing, a capability not found in most subsea systems deployed purely for telecom.

Furthermore, using C-OFDR and polarization-resolved sensing (SOP) without disrupting live telecommunications traffic provides a discreet means of monitoring infrastructure. This is particularly advantageous for covert surveillance of vital undersea routes. Finally, the system’s fine-grained phase and polarization diagnostics have the potential to detect disturbances such as anchor drags, unauthorized vessel movement, or cable tampering, activities that may indicate hybrid threats or espionage. These features position the technology as a promising enabler for real-time intelligence, surveillance, and reconnaissance (ISR) applications over existing subsea infrastructure.

C-OFDR is very sensitive over long distances and, when optimized with narrowband probing, may detect subtle refractive index changes caused by waterborne pressure variations. While more robust than DAS at long range, its ability to resolve weak, broadband submarine noise signatures remains speculative and would likely require AI-based classification. In Greenland, C-OFDR might be able to detect subtle pressure variations or cable stress caused by passing submarines, but only if the cable is close to the source.

Phase-based sensing, to which φ-OTDR belongs, is an active sensing technique that tracks the phase variation of optical signals for precise mechanical event detection. It requires narrow-linewidth lasers and sensitive DSP algorithms. In phase-based sensing, we send very clean, stable light from a narrow-linewidth laser through the fiber cable. We then measure how the phase of that light changes as it travels. These phase shifts are incredibly sensitive to tiny movements, smaller than a wavelength of light. As discussed above, when the fiber is disturbed, even just a little, the light’s phase changes, which is what the system detects. This sensing technology offers a theoretical spatial resolution of 1 meter and is currently expected to be practical over distances of less than 10 kilometers. In general, phase-based sensing is a broader class of fiber-optic sensing methods that detect optical phase changes caused by mechanical, thermal, or acoustic disturbances.
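
To put a rough number on that sensitivity, the sketch below estimates the round-trip optical phase shift produced by a small fiber elongation, using assumed values of a 1550 nm wavelength and a refractive index of about 1.468.

```python
# Hedged sketch: round-trip phase shift from a tiny fibre length change,
# roughly 4*pi*n*dL/lambda for a reflectometric (out-and-back) measurement.
from math import pi

WAVELENGTH_M = 1550e-9   # assumed telecom wavelength
INDEX = 1.468            # assumed refractive index of the fibre

def round_trip_phase_shift_rad(delta_length_m: float) -> float:
    """Phase change seen by a round-trip measurement for a small elongation."""
    return 4 * pi * INDEX * delta_length_m / WAVELENGTH_M

# A 100 nm stretch, far smaller than the 1550 nm wavelength, already produces
# roughly 1.2 radians of phase shift, which is easily resolvable.
print(round(round_trip_phase_shift_rad(100e-9), 2), "rad")
```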

Phase-based sensing technologies detect sub-nanometer variations in the phase of light traveling through an optical fiber, offering exceptional sensitivity to mechanical disturbances such as vibrations or pressure waves. However, its practical application over the existing installed base of submarine cable infrastructure remains extremely limited. Some of the more advanced implementations are largely confined to laboratory settings due to the need for narrow-linewidth lasers, high-coherence probe sources, and low-noise environments. These conditions are difficult to achieve across real-world subsea spans, especially those with optical amplifiers and high traffic loads. These technical demands make retrofitting phase-based sensing onto operational subsea cables impractical, particularly given the complexity of accessing in-line repeaters and the susceptibility of phase measurements to environmental noise. Still, as the technology matures and can be adapted to tolerate noisy and lossy environments, it could enable ultra-fine detection of small-scale events such as underwater cutting tools, diver-induced vibrations, or fiber tampering attempts.

In a defense context, phase-based sensing might one day be used to monitor high-risk cable landings or militarized undersea chokepoints where detecting subtle mechanical signatures could provide an early warning of sabotage or surveillance activity. Its extraordinary resolution could also contribute to low-profile detection of seabed motion near sensitive naval installations. While not yet field-deployable at scale, it represents a promising frontier for future submarine sensing systems in strategic environments, typically in proximity to coastal areas.

Coherent MIMO Distributed Fiber Sensing (DFS) is another cutting-edge active sensing technique belonging to the phase-based sensing family that uses polarization-diverse probing for spatially-resolved sensing on deployed multi-core fibers (MCF), enabling robust, high-resolution environmental mapping. This technology remains currently limited to laboratory environments and controlled testbeds, as the widespread installed base of submarine cables does not use MCF and lacks the transceiver infrastructure required to support coherent MIMO interrogation. Retrofitting existing subsea systems with this capability would require complete replacement of the fiber plant, making it infeasible for legacy infrastructure, but potentially interesting for greenfield deployments.

Despite these limitations, the future application of Coherent MIMO DFS in defense contexts is compelling. Greenfield deployments, such as new Arctic cables or secure naval corridors, could enable real-time acoustic and mechanical activity mapping across multiple parallel cores, offering spatial resolution that rivals or exceeds existing sensing platforms. This level of precision could support the detection and classification of complex underwater threats, including stealth submersibles or distributed tampering attempts. With further development, it might also support wide-area surveillance grids embedded directly into the fiber infrastructure of critical sea lanes or military installations. While not deployable on today’s global cable networks, it represents a next-generation tool for submarine situational awareness in future defense-grade fiber systems.

State of Polarization (SOP) sensing technology detects changes in light polarization, allowing environmental disturbances to a submarine optical cable to be sensed. It can be implemented passively using existing coherent transceivers and thus can be used on existing operational submarine cables. SOP sensing does not offer spatial resolution by default. However, it has a very high temporal sensitivity, at the millisecond level, allowing it to resolve temporally localized SOP anomalies that are often precursors of a structurally compromised submarine cable. SOP sensing provides timely and actionable information for applications like cable break prediction, anomaly detection, and hybrid threat alerts, even without pinpoint spatial resolution; in some cases, the temporal information can be mapped back to the compromised physical location only to within tens of kilometers. SOP sensing can cover thousands of kilometers of a submarine system.

SOP sensing provides path-integrated information about mechanical stress or vibration. While it lacks spatial resolution, it could register anomalous polarization disturbances along Arctic cable routes that coincide with suspected submarine activity. Even globally integrated SOP anomalies may be suspicious in Greenland’s sparse traffic environment, but localizing the source would remain challenging. Combining SOP sensing with C-OFDR is therefore a promising use case: SOP provides fast, passive temporal detection, while C-OFDR (or DAS) delivers spatial resolution and event classification. The combination may offer a more robust and operationally viable architecture for strategic subsea sensing, suitable for civilian and defense applications across existing and future cable systems.

Deploying SOP-based sensing on brownfield submarine cables requires no changes to the cable infrastructure or landing stations. It passively monitors changes in the state of polarization at the transceiver endpoints. However, this method does not provide spatial resolution and cannot localize events along the cable. It also does not rely on backscatter, and therefore its sensing capability is not limited by the presence of amplifiers, unlike φ-OTDR or C-OFDR. The limitation, instead, is that SOP sensing provides only a global, integrated signal over the entire fiber span, making it effective for detecting disturbances but not pinpointing their location.
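
A minimal sketch of how such endpoint SOP monitoring might flag anomalies is shown below. It assumes the coherent receiver already exposes a time series of normalised Stokes vectors, and the rotation-rate threshold is purely illustrative; an operational system would tune it per cable, season, and traffic conditions.

```python
# Hedged sketch: flag fast rotations of the polarisation state on the Poincaré sphere.
import numpy as np

def sop_rotation_rate_rad_s(stokes: np.ndarray, sample_rate_hz: float) -> np.ndarray:
    """Angular speed of the polarisation state between consecutive samples.
    stokes: array of shape (N, 3), each row a Stokes vector (S1, S2, S3)."""
    s = stokes / np.linalg.norm(stokes, axis=1, keepdims=True)
    cos_angle = np.clip(np.sum(s[1:] * s[:-1], axis=1), -1.0, 1.0)
    return np.arccos(cos_angle) * sample_rate_hz

def flag_sop_anomalies(stokes: np.ndarray, sample_rate_hz: float,
                       threshold_rad_s: float = 50.0) -> np.ndarray:
    """Indices where the SOP rotates faster than the (illustrative) threshold,
    a possible precursor of a mechanical disturbance somewhere along the span."""
    rate = sop_rotation_rate_rad_s(stokes, sample_rate_hz)
    return np.where(rate > threshold_rad_s)[0]
```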

Table: Performance characteristics of key optical fiber sensing technologies for subsea applications.
The table summarizes spatial resolution, operational range, minimum detectable sound levels, activation state, and compatibility with existing subsea cable infrastructure. Values reflect current best estimates and lab performance where applicable, highlighting trade-offs in detection sensitivity and deployment feasibility across sensing modalities. Range depends heavily on system design. While traditional C-OFDR typically operates over short ranges (<100 m), advanced variants using telecom-grade coherent receivers may extend reach to 100s of km at lower resolution. This table, as well as the text, considers the telecom-grade variant of C-OFDR.

Beyond the sensing technologies already discussed, such as DAS (including φ-OTDR), C-OFDR, SOP, and Coherent MIMO DFS, several additional, lesser-known sensing modalities can be deployed on or alongside submarine cables. These systems differ in physical mechanisms, deployment feasibility, and sensitivity, and while some remain experimental, others are used in niche environmental or energy-sector applications. Several of these have implications for defense-related detection scenarios, including submarine tracking, sabotage attempts, or unauthorized anchoring, particularly in strategically sensitive Arctic regions like Greenland’s West and East Coasts.

One such system is Brillouin-based distributed sensing, including Brillouin Optical Time Domain Analysis (BOTDA) and Brillouin Optical Time Domain Reflectometry (BOTDR). These methods operate by sending pulses down the fiber and analyzing the Brillouin frequency shift, which varies with temperature and strain. The spatial resolution is typically between 0.5 and 1 meter, and the sensing range can extend to 50 km under optimized conditions. The system’s strength is detecting slow-moving structural changes, such as seafloor deformation, tectonic strain, or sediment pressure buildup. However, because the Brillouin interaction is weak and slow to respond, it is poorly suited for real-time detection of fast or low-amplitude acoustic events like those produced by a stealth submarine or diver. Anchor dragging might be detected, but only if it results in significant, sustained strain in the cable. These systems could be modestly effective in shallow Arctic shelf environments, such as Greenland’s west coast, but they are not viable for real-time defense monitoring.
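
As a hedged illustration of how a Brillouin system turns a measured frequency shift into a strain estimate, the sketch below uses typical textbook coefficients for standard single-mode fiber near 1550 nm (roughly 1 MHz per °C and 0.05 MHz per microstrain); a deployed system would calibrate these for the actual cable and separate temperature from strain with a second measurement.

```python
# Hedged sketch: convert a Brillouin frequency shift into strain.
# Coefficients are assumed, typical values for standard SMF near 1550 nm.
TEMP_COEFF_MHZ_PER_C = 1.0
STRAIN_COEFF_MHZ_PER_UE = 0.05

def strain_ue_from_shift(delta_freq_mhz: float, delta_temp_c: float = 0.0) -> float:
    """Strain (microstrain) for a measured Brillouin shift, after removing a
    temperature contribution assumed known (e.g., from a loose reference fibre)."""
    return (delta_freq_mhz - TEMP_COEFF_MHZ_PER_C * delta_temp_c) / STRAIN_COEFF_MHZ_PER_UE

# Example: a 5 MHz shift with no temperature change corresponds to ~100 microstrain.
print(round(strain_ue_from_shift(5.0), 1), "microstrain")
```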

Another temperature-focused method is Raman-based distributed temperature sensing (DTS). This technique analyzes the ratio of Stokes and anti-Stokes backscatter to detect temperature changes along the fiber, with spatial resolution typically on the order of 1 meter and ranges up to 10–30 km. Raman DTS is widely used in the oil and gas industry for downhole monitoring, but is not optimized for dynamic or mechanical disturbances. It offers little utility in detecting diver activity, submarine motion, or anchor drag unless such events lead to secondary thermal effects. Raman DTS is therefore unsuitable for detecting fast-moving threats like submarines or divers, but it can detect slow thermal anomalies caused by prolonged contact, buried tampering devices, or gradual sediment buildup. Thus, it may serve as a background “health monitor” for defense-relevant subsea critical infrastructures. Its enabling mechanism is Raman scattering, which is even weaker than Rayleigh and Brillouin scattering, and this alone likely makes the technology unsuitable for Arctic defense applications. Moreover, the cold and thermally stable Arctic seabed provides a limited dynamic range for temperature-induced sensing.
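
For completeness, the ratio method behind Raman DTS can be sketched as follows, under idealized assumptions of my own: the anti-Stokes/Stokes ratio follows a simple Boltzmann-factor model, a calibration section of fibre is held at a known temperature, and the Raman shift is taken as roughly 440 cm⁻¹ for silica.

```python
# Hedged sketch: temperature from the anti-Stokes/Stokes ratio, referenced to a
# calibration section at known temperature (wavelength factors cancel out).
from math import log

H = 6.626e-34            # Planck constant, J*s
C = 2.998e8              # speed of light, m/s
KB = 1.381e-23           # Boltzmann constant, J/K
RAMAN_SHIFT_M1 = 4.4e4   # ~440 cm^-1 expressed in m^-1 (assumed for silica)

def temperature_k(ratio: float, ratio_ref: float, t_ref_k: float) -> float:
    """Temperature from the measured ratio, relative to a reference section."""
    delta_e = H * C * RAMAN_SHIFT_M1
    inv_t = 1.0 / t_ref_k - (KB / delta_e) * log(ratio / ratio_ref)
    return 1.0 / inv_t

# Example: a ~1% ratio increase relative to a 2 degC (275.15 K) reference
# corresponds to roughly one degree of warming in this simple model.
print(round(temperature_k(1.01, 1.00, 275.15) - 273.15, 2), "degC")
```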

A more advanced but experimental method is optical frequency comb (OFC)-based sensing, which uses an ultra-stable frequency comb to probe changes in fiber length and strain with sub-picometer resolution. This offers unparalleled spatial granularity (down to millimeters) and could, in theory, detect subtle refractive index changes induced by acoustic coupling or mechanical perturbation. However, range is limited to short spans (<10 km), and implementation is complex and not yet field-viable. This technology might detect micro-vibrations from nearby submersibles or diver-induced strain signatures in a future defense-grade network, especially greenfield deployments in Arctic coastal corridors. The physical mechanism is interferometric phase detection, amplified by comb coherence and time-of-flight mapping. Frequency comb-based techniques could be the foundation for a next-generation submarine cable monitoring system, especially in greenfield defense-focused coastal deployments requiring excellent spatial resolution under variable environmental conditions. Unlike traditional reflectometry or phase sensing, the laser frequency comb should be able to maintain calibrated performance in fluctuating Arctic environments, where salinity and temperature affect refractive index dramatically, and therefore, a key benefit for Greenlandic and Arctic deployments.

Another emerging direction is Integrated Sensing and Communication (ISAC), where linear frequency-modulated sensing signals are embedded directly into the optical communication waveform. This approach avoids dedicated dark fiber and can achieve moderate spatial resolution (~100–500 meters) with ranges of up to 80 km using coherent receivers. ISAC has been proposed for simultaneous data transmission and distributed vibration sensing. In Arctic coastal areas, where telecom capacity may be underutilized and infrastructure redundancy is limited, ISAC could enable non-invasive monitoring of anchor strikes or structural cable disturbances. It may not detect quiet submarines unless direct coupling occurs, but it could potentially flag diver-based sabotage or hybrid threats that cause physical cable contact.
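
A minimal sketch of the matched-filter step that underlies such embedded LFM sensing is shown below. It assumes the probe chirp is known at the receiver and has already been separated from the data payload; it is purely illustrative and not a description of any particular ISAC implementation.

```python
# Hedged sketch: estimate the delay of a known LFM probe via cross-correlation.
import numpy as np

def lfm_chirp(fs: float, duration_s: float, f0: float, f1: float) -> np.ndarray:
    """Baseband linear frequency-modulated chirp sweeping from f0 to f1 (Hz)."""
    t = np.arange(int(fs * duration_s)) / fs
    k = (f1 - f0) / duration_s
    return np.exp(2j * np.pi * (f0 * t + 0.5 * k * t ** 2))

def estimate_delay_s(received: np.ndarray, probe: np.ndarray, fs: float) -> float:
    """Cross-correlate the received signal with the known probe (matched filter)
    and return the lag of the correlation peak as a delay estimate in seconds."""
    corr = np.abs(np.correlate(received, probe, mode="full"))
    lag = int(np.argmax(corr)) - (len(probe) - 1)
    return max(lag, 0) / fs
```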

Lastly, hybrid systems combining external sensor pods, such as tethered hydrophones, magnetometers, or pressure sensors, with submarine cables are deployed in specialized ocean observatories (e.g., NEPTUNE Canada). These use the cable for power and telemetry and offer excellent sensitivity for detecting underwater acoustic and geophysical events. However, they require custom cable interfaces, increased power provisioning, and are not easily retrofitted to commercial or legacy submarine systems. In Arctic settings, such systems could offer unparalleled awareness of glacier calving, seismic activity, or vessel movement in chokepoints like the Kangertittivaq (i.e., Scoresby Sund) or the southern exit of Baffin Bay (i.e., Avannaata Imaa). The main limitation of hybrid systems lies in their cost and the need for local infrastructure support. The economics relative to such systems’ benefits requires careful consideration compared to more conventional maritime sensor architectures.

DEFENSE SCENARIOS OF CRITICAL SUBSEA CABLE INFRASTRUCTURE.

Submarine cable infrastructure is increasingly recognized as a medium for data transmission and a platform for environmental and security monitoring. With the integration of advanced optical sensing technologies, these cables can detect and interpret physical disturbances across vast underwater distances. This capability opens up new opportunities for national defense, situational awareness, and infrastructure resilience, particularly in coastal and Arctic regions where traditional surveillance assets are limited. The following section outlines how different sensing modalities, such as DAS, C-OFDR, SOP, and emerging MIMO DFS, can support key operational objectives ranging from seismic early warning to hybrid threat detection. Each scenario case reflects a unique combination of acoustic signature, environmental setting, and technological suitability.

  • Intrusion Detection: Detect tampering, trawling, or vehicle movement near cables in coastal zones.
  • Seismic Early Warning: Monitor undersea earthquakes with high fidelity, enabling early warning for tsunami-prone regions.
  • Cable Integrity Monitoring: Identify precursor events to fiber breaks and trigger alerts to reroute traffic or dispatch response teams.
  • Hybrid Threat Detection: Monitor signs of hybrid warfare activities such as sabotage or unauthorized seabed operations near strategic cables. This also includes anchor-dragging sounds.
  • Maritime Domain Awareness: Track vessel movement patterns in sensitive maritime zones using vibrations induced along shore-connected cable infrastructure.

Intrusion Detection involving trawling, tampering, or underwater vehicle movement near the cable is best addressed using Distributed Acoustic Sensing (DAS), especially on coastal Arctic subsea cables where environmental noise is lower and mechanical coupling between the cable and the seafloor is stronger. DAS can detect short-range, high-frequency mechanical disturbances from human activity. However, this is more challenging in the open ocean due to poor acoustic coupling and cable burial. Coherent Optical Frequency Domain Reflectometry (C-OFDR) combined with State of Polarization (SOP) sensing offers a more passive and feasible alternative in such environments. C-OFDR can detect strain anomalies and localized pressure effects, while SOP sensing can identify anomalous polarization drift patterns caused by motion or stress, even on live traffic-carrying fibers.

For Seismic Early Warning, phase-based sensing (including both φ-OTDR and C-OFDR) is well suited across coastal and oceanic deployments. These technologies detect low-frequency ground motion with high sensitivity and temporal resolution. Phase-based methods can sense teleseismic activity or tectonic shifts along the cable route in deep ocean environments. The advantage increases in the Arctic coastal zones due to low background noise and shallow deployment, enabling the detection of smaller regional seismic events. Additionally, SOP sensing, while not a primary seismic tool, can detect long-duration cable strain or polarization shifts during large quakes, offering a redundant sensing layer.

Combining C-OFDR and SOP sensing is most effective for Cable Integrity Monitoring, particularly for early detection of fiber stress, micro-bending, or fatigue before a break occurs. SOP sensing works especially well for long-haul ocean cables with live data traffic, where passive, non-intrusive monitoring is essential. C-OFDR is more sensitive to local strain patterns and can precisely locate deteriorating sections. In Arctic coastal cables, this combination enables operators to detect damage from ice scouring, sediment movement, or thermal stress due to permafrost dynamics.

Hybrid Threat Detection benefits most from high-resolution, multi-modal sensing, such as detecting sabotage or seabed tampering by divers or unmanned vehicles. Along coastal regions, including Greenland’s fjords, Coherent MIMO Distributed Fiber Sensing (DFS), although still in its early stages, shows great promise due to its ability to spatially resolve overlapping disturbance signatures across multiple cores or polarizations. DAS may also contribute to near-shore detection if acoustic coupling is sufficient. On ocean cables, SOP sensing fused with AI-based anomaly detection provides a stealthy, always-on layer of hybrid threat monitoring, especially when other modalities (e.g., sonar, patrols) are absent or infeasible.

Finally, DAS is effective along coastal fiber segments for Maritime Domain Awareness, particularly tracking vessel movement in sensitive Arctic corridors or near military installations. It detects the acoustic and vibrational signatures of passing vessels, anchor deployment, or underwater vehicle operation. These signatures can be classified using spectrogram-based AI models to differentiate between fishing boats, cargo vessels, or small submersibles. While unable to localize the event, SOP sensing can flag cumulative disturbances or repetitive mechanical interactions along the fiber. This use case becomes less practical in oceanic settings unless vessel activity occurs near cable landing zones or shallow fiber stretches.
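
As a toy example of the spectrogram-based classification idea mentioned above, the sketch below computes band energies from a single hypothetical DAS channel and applies a crude heuristic label. The sampling rate, frequency bands, and labels are illustrative placeholders, not validated acoustic signatures.

```python
# Hedged sketch: band-energy heuristic on a spectrogram of one DAS channel.
import numpy as np
from scipy.signal import spectrogram

def band_energy(x: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Mean spectral energy of signal x between f_lo and f_hi (Hz)."""
    f, _, sxx = spectrogram(x, fs=fs, nperseg=256)
    band = (f >= f_lo) & (f <= f_hi)
    return float(sxx[band].mean())

def crude_label(x: np.ndarray, fs: float = 1000.0) -> str:
    """Very rough heuristic: low-frequency-dominated energy suggests a large,
    slow vessel; broadband energy suggests a small, fast craft or unknown source."""
    low, high = band_energy(x, fs, 5, 50), band_energy(x, fs, 100, 400)
    return "large/slow vessel (low-frequency dominant)" if low > 3 * high else "small/fast craft or unknown"
```

A real system would of course replace this heuristic with trained models and fuse the output with AIS and other maritime data, but the pipeline shape (channel, spectrogram, feature, label) is the same.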

These scenario considerations are summarized in the table below.

Table: Summary of subsea sensing use cases and corresponding detection performance.
The table outlines representative sound power levels, optimal sensing technologies, environmental suitability, and estimated detection distances for key maritime and defense-related use cases. Detection range is inferred from typical source levels, local noise floors, and sensing system capabilities in Arctic coastal and oceanic environments.

LEGACY SUBSEA SENSING NETWORKS: SONAR SYSTEMS AND THEIR EVOLVING ROLE.

The observant reader might at this point feel (rightly) that I am totally ignoring good old sonar (i.e., sound navigation and ranging), which has been around since World War I and is thus approximately 110 years old as a technology. In the Cold War era, at its height from the 1950s to the 1980s, sonar technology advanced further into the strategic domain. The United States and its allies developed large-scale systems like SOSUS (Sound Surveillance System) and SURTASS (Surveillance Towed Array Sensor System) to detect and monitor the growing fleet of Soviet nuclear submarines. These systems enabled long-range, continuous underwater surveillance, establishing sonar not only as a tactical tool but also as a key component of strategic deterrence and early-warning architectures.

So, let us briefly look at Sonar as a defensive (and offensive) technology.

Undersea sensing has long been a cornerstone of naval strategy and maritime situational awareness; for example, see the account “66 Years of Undersea Surveillance” by Taddiken et al. Throughout the Cold War, the world’s major powers invested heavily in long-range underwater surveillance systems, especially passive and active sonar networks. These systems remain relevant today, providing persistent monitoring for submarine detection, anti-access/area denial operations, and undersea infrastructure protection.

Passive sonar systems detect acoustic signatures emitted by ships, submarines, and underwater seismic activity. These systems rely on the natural propagation of sound through water and are often favored for their stealth since they do not emit signals. Their operation is inherently covert. In contrast, active sonar transmits acoustic pulses and measures reflected signals to detect and range objects that might not produce detectable noise, such as quiet submarines or inert objects on the seafloor.
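
To give a feel for the numbers behind passive detection, the following is a back-of-the-envelope sketch of the passive sonar equation under spherical spreading. The source levels, ambient noise level, directivity index, and detection threshold are generic, textbook-style assumptions rather than figures for any specific platform or array.

```python
# Back-of-the-envelope passive sonar budget (illustrative values only).
import math

def transmission_loss(range_m, alpha_db_per_km=0.06):
    """Spherical spreading plus a small absorption term (low-frequency assumption)."""
    return 20 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0

def passive_snr(source_level_db, range_m, noise_level_db=70.0,
                directivity_index_db=15.0):
    """Passive sonar equation: SNR = SL - TL - (NL - DI)."""
    return source_level_db - transmission_loss(range_m) - (noise_level_db - directivity_index_db)

# Assumed radiated noise of a surface vessel (~150 dB re 1 uPa @ 1 m) versus a
# quiet submarine (~110 dB): where does SNR drop below an assumed 10 dB threshold?
for label, sl in [("noisy surface vessel", 150.0), ("quiet submarine", 110.0)]:
    for r in (1e3, 1e4, 1e5):
        print(f"{label:22s} range {r/1000:6.0f} km -> SNR {passive_snr(sl, r):6.1f} dB")
```

With these assumed values, a noisy surface vessel remains detectable at tens of kilometers, while a quiet submarine may already fall below the threshold at short range, which is precisely why acoustic quieting matters so much in the discussion that follows.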

The most iconic example of a passive sonar network is the U.S. Navy’s Sound Surveillance System (SOSUS), initially deployed in the 1950s. SOSUS comprises a series of hydrophone arrays fixed to the ocean floor and connected by undersea cables to onshore processing stations. While much of SOSUS remains classified, its operational role continues today with mobile and advanced fixed networks under the Integrated Undersea Surveillance System (IUSS). Other nations have developed analogous capabilities, including Russia’s MGK-series networks, China’s emerging Great Undersea Wall system, and France’s SLAMS network. These systems offer broad area acoustic coverage, especially in strategic chokepoints like the GIUK (Greenland-Iceland-UK) gap and the South China Sea.

Despite sonar’s historical and operational value, traditional sonar networks have significant limitations. Passive sonar is susceptible to acoustic masking by oceanic noise and may struggle to detect vessels employing acoustic stealth technologies. Active sonar, while more precise, risks disclosing its location to adversaries due to its emitted signals. Sonar performance is also constrained by water conditions, salinity, temperature gradients, and depth, all of which affect acoustic propagation. In addition, sonar coverage is inherently sparse and highly dependent on the geographical layout of sensor arrays and the underwater topography, and the deployment and maintenance of sonar arrays are logistically complex and costly, often requiring naval support or undersea construction assets. These limitations suggest a decreasing standalone effectiveness of sonar systems in high-resolution detection, particularly as adversaries develop quieter and more agile underwater vehicles.

This table summarizes key sonar technologies used in naval and infrastructure surveillance, highlighting typical unit spacing, effective coverage radius, and operational notes for systems ranging from deep-ocean fixed arrays (SOSUS/IUSS) to mobile and nearshore defense systems.

Think of sonar as a radar for the sea, sensing outward into the subsea environment. Due to sound propagation characteristics (i.e., in water sound travels more than 4 times faster and attenuates very slowly compared to sound waves in air), sonar is an ideal technology for submarine detection and seismic monitoring. In contrast, optical sensing in subsea cables is like a tripwire or seismograph, detecting anything that physically touches, moves, or perturbs the cable along its length.

The emergence of distributed sensing over fiber optics has introduced a transformative approach to undersea and terrestrial monitoring. Distributed Acoustic Sensing (DAS), Distributed Fiber Sensing (DFS), and Coherent Optical Frequency Domain Reflectometry (C-OFDR) leverage the existing footprint of submarine telecommunications infrastructure to detect environmental disturbances, including vibrations, seismic activity, and human interaction with cables, at high spatial and temporal resolution. Unlike traditional sonar, these fiber-based systems do not rely on acoustic wave propagation in water but instead monitor the optical fiber’s phase, strain, or polarization variations. Put very simply, sonar uses acoustics to sense sound waves in water, while fiber-based sensing uses optics and how light travels in an optical fiber. When embedded in submarine cables, such sensing techniques allow for continuous, covert, and high-resolution surveillance of the cable’s immediate environment, including detection of trawler interactions, anchor dragging, subsea landslides, and localized mechanical disturbances. They operate within the optical transmission spectrum without interrupting the core data service.

While sonar systems excel at broad ocean surveillance and object tracking, their coverage is limited to specific regions and depths where arrays are installed. Conversely, fiber-based sensing offers persistent surveillance along entire transoceanic links, albeit restricted to the immediate vicinity of the cable path. Together, these systems should not be seen as competitors but as very much complementary tools. Sonar covers the strategic expanse, while fiber-optic sensing provides fine-grained visibility where infrastructure resides.

This table contrasts traditional active and passive sonar networks with emerging fiber-integrated sensing systems (e.g., DAS, DFS, and C-OFDR) across key operational dimensions, including detection medium, infrastructure, spatial resolution, and security characteristics. It highlights the complementary strengths of each technology for undersea surveillance and strategic infrastructure monitoring.

The future of sonar sensing lies in hybridization and adaptive intelligence. Ongoing research explores networks that combine passive sonar arrays with intelligent edge processing using AI/ML to discriminate between ambient and threat signatures. There is also a push to integrate mobile platforms, such as Unmanned Underwater Vehicles (UUVs), into sonar meshes, expanding spatial coverage dynamically based on threat assessments. Material advances may also lead to miniaturized or modular hydrophone systems that can be deployed ad hoc or embedded into multipurpose seafloor assets. Some navies are exploring Acoustic Vector Sensors (AVS), which can detect the pressure and direction of incoming sound waves, offering a richer data set for tracking and identification. Coupled with improvements in real-time ocean modeling and environmental acoustics, these future sonar systems may offer higher-fidelity detection even in shallow and complex coastal waters where passive sensors are less effective. Moreover, integration with optical fiber systems is an area of active development. Some proposals suggest co-locating acoustic sensors with fiber sensing nodes or utilizing fiber backhaul for real-time sonar telemetry, thereby merging the benefits of both approaches into a coherent undersea surveillance architecture.

THE ARCTIC DEPLOYMENT CONCEPT.

As global power competition extends into the Arctic, military planners and analysts are increasingly concerned about the growing strategic role of Greenland’s coastal waters, particularly in the context of Russian nuclear submarine operations. For decades, Russia has maintained a doctrine of deploying ballistic missile submarines (SSBNs) capable of launching nuclear retaliation strikes from stealth positions in remote ocean zones. Once naturally shielded by persistent sea ice, the Arctic has become more navigable due to climate change, creating new opportunities for submerged access to maritime corridors and concealment zones.

Historically, Russian submarines seeking proximity to U.S. and NATO targets would patrol areas along the Greenland-Iceland-UK (GIUK) gap and the eastern coast of Greenland, using the remoteness and challenging acoustic environment to remain hidden. However, strategic speculation and evolving threat assessments now suggest a westward shift, toward the sparsely monitored Greenlandic West Coast. This region offers even greater stealth potential due to limited surveillance infrastructure, complex fjord geography, and weaker sensor coverage than the traditional GIUK chokepoints. Submarines could strike the U.S. East Coast from these waters in under 15 minutes, leveraging geographic proximity and acoustic ambiguity. Even if the difference in warning time were no more than about 2–4 minutes, depending on launch angle, trajectory, and detection latency, the loss of several minutes of reaction time can matter significantly in the context of strategic warning and nuclear command and control, especially for early-warning systems, evacuation orders, or launch-on-warning decisions.

U.S. and Canadian defense communities have increasingly voiced concern over this evolving threat. U.S. Navy leadership, including Vice Admiral Andrew Lewis, has warned that the U.S. East Coast is “no longer a sanctuary,” underscoring the return of great power maritime competition and the pressing need for situational awareness even in home waters. As Russia modernizes its submarine fleet with quieter propulsion and longer-range missiles, its ability to hide near strategic seams like Greenland becomes a direct vulnerability to North American security.

This emerging risk makes the case for integrating advanced sensing capabilities into subsea cable infrastructure across Greenland and the broader Arctic theatre. Cable-based sensing technologies, such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring, could dramatically enhance NATO’s ability to detect anomalous underwater activity, particularly in the fjords and shallow coastal regions of Greenland’s western seaboard. In a region where traditional sonar and surface surveillance are limited by ice, darkness, and remoteness, the subsea cable system could become an invisible tripwire, transforming Greenland’s digital arteries into dual-use defense assets.

Therefore, advanced sensing technologies should not be treated as optional add-ons but as foundational elements of Greenland’s Arctic defense architecture. Particular attention should go to technologies that work well and are relatively uncomplicated to operationalize on brownfield subsea cable installations. These would offer a critical layer of redundancy, early warning, and environmental insight, capabilities uniquely suited to the high north’s emerging strategic and climatic realities.

The Arctic Deployment Concept outlines a forward-looking strategy to integrate submarine cable sensing technologies into the defense and intelligence infrastructure of the Arctic region, particularly Greenland, as geopolitical tensions and environmental instability intensify. Greenland’s strategic location at the North Atlantic and Arctic Ocean intersection makes it a critical node in transatlantic communications and military situational awareness. As climate change opens new maritime passages and exposes previously ice-locked areas, the region becomes increasingly vulnerable, not only to environmental hazards like shifting ice masses and undersea seismic activity, but also to the growing risks of geopolitical friction, cyber operations, and hybrid threats targeting critical infrastructure.

In this context, sensing-enhanced submarine cables offer a dual-use advantage: they carry data traffic and serve as real-time monitoring assets, effectively transforming passive infrastructure into a distributed sensor network. These capabilities are especially vital in Greenland, where terrestrial sensing is sparse, the weather is extreme, and response times are long due to the remoteness of the terrain. By embedding Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing along cable routes, operators can monitor for ice scouring, tectonic activity, tampering, or submarine presence in near real time.

This chart illustrates the Greenlandic telecommunications provider Tusass’s infrastructure (among other things). Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above; locations are only indicative) provide more than 80% of Greenland’s electricity demand. Greenland’s new international airport became operational in Nuuk in November 2024. Source: the Tusass Annual Report 2023, with some additions and minor edits.

As emphasized in the article “Greenland: Navigating Security and Critical Infrastructure in the Arctic”, Greenland is not only a logistical hub for NATO but also home to increasingly digitalized civilian systems. This dual-use nature of Arctic subsea cables underscores the need for resilient, secure, and monitored communications infrastructure. Given the proximity of Greenland to the GIUK gap, a historic naval choke point between Greenland, Iceland, and the UK, any interruption or undetected breach in subsea connectivity here could undermine both civilian continuity and allied military posture in the region.

Moreover, the cable infrastructure along Greenland’s coastline, connecting remote settlements, research stations, and defense assets, is highly linear and often exposed to physical threats from shifting icebergs, seabed movement, or vessel anchoring. These shallow, coastal environments are ideally suited for sensing deployments, where good coupling between the fiber and the seabed enables effective detection of local activity. Integrating sensing technologies here supports ISR (i.e., Intelligence, Surveillance, and Reconnaissance) and predictive maintenance. It extends domain awareness into remote fjords and ice-prone straits where traditional radar or sonar systems may be ineffective or cost-prohibitive.

The map of Greenland’s telecommunications infrastructure provides a powerful visual framework for understanding how sensing capabilities could be integrated into the nation’s subsea cable system to enhance strategic awareness and defense. The western coastline, where the majority of Greenland’s population resides (~35%) and where the main subsea cable infrastructure runs, offers an ideal geographic setting for deploying cable-integrated sensing technologies. The submarine cable routes from Nanortalik in the south to Upernavik in the north connect critical civilian hubs such as Nuuk, Ilulissat, and Qaqortoq, while simultaneously passing near U.S. military installations like Pituffik Space Base. While essential for digital connectivity, this infrastructure also represents a strategic vulnerability if left unsensed and unprotected.

Given that Russian nuclear-powered ballistic missile submarines (SSBNs) are suspected of operating closer to the Greenlandic coastline, shifting from the historical GIUK gap to potentially less monitored regions along the west coast, Greenland’s cable network could be transformed into an invisible perimeter sensor array. Technologies such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring could be layered onto the existing fiber without disrupting data traffic. These technologies would allow authorities to detect minute vibrations from nearby vessel movement or unauthorized subsea activity, and to monitor for seismic shifts or environmental anomalies like iceberg scouring.

The map above shows the submarine cable backbone, microwave-chain sites, and satellite ground stations. If integrated, these components could act as hybrid communication-and-sensing relay points, particularly in remote locations like Qaanaaq or Tasiilaq, further extending domain awareness into previously unmonitored fjords and inlets. The location of the new international airport in Nuuk, combined with Nuuk’s proximity to hydropower and a local datacenter, also suggests that the capital could serve as a national hub for submarine cable-based surveillance and anomaly detection processing.

Much of this could be operationalized using existing infrastructure with minimal intrusion (at least in the proximity of Greenland’s coastline). Brownfield sensing upgrades, mainly using coherent transceiver-based SOP methods or in-line C-OFDR reflectometry, may be implemented on live cable systems, allowing Greenland’s existing communications network to become a passive tripwire for submarine activity and other hybrid threats. This way, the infrastructure shown on the map could evolve into a dual-use defense asset, vital in securing Greenland’s civilian connectivity and NATO’s northern maritime flank.

POLICY AND OPERATIONAL CONSIDERATIONS.

As discussed previously, today, we are essentially blind to what happens to our submarine infrastructure, which carries over 95% of the world’s intercontinental internet traffic and supports more than 10 trillion euros daily in financial transactions. This incredibly important global submarine communications network was taken for granted for a long time, almost like a deploy-and-forget infrastructure. It is worthwhile to remember that we cannot protect what we cannot measure.

Arctic submarine cable sensing is as much a policy and sourcing question as a technical one. The integration of sensing platforms should follow a modular, standards-aligned approach, supported by international cooperation, robust cybersecurity measures, and operational readiness for Arctic conditions. If implemented strategically, these systems can offer enhanced resilience and a model for dual-use infrastructure governance in the digital age.

As Arctic geostrategic relevance increases due to climate change, geopolitical power rivalry, and the expansion of digital critical infrastructure, submarine cable sensing has emerged as both a technological opportunity and a governance challenge. The deployment of sensing techniques such as State of Polarization (SOP) monitoring and Coherent Optical Frequency Domain Reflectometry (C-OFDR) offers the potential to transform traditionally passive infrastructure into active, real-time monitoring platforms. However, realizing this vision in the Arctic, particularly for Greenlandic and trans-Arctic cable systems, requires a careful approach to policy, interoperability, sourcing, and operational governance.

One of the key operational advantages of SOP-based sensing is that it allows for continuous, passive monitoring of subsea cables without consuming bandwidth or disrupting live traffic​. When analyzed using AI-enhanced models, SOP fluctuations provide a low-impact way to detect seismic activity, cable tampering, or trawling events. This makes SOP a highly viable candidate for brownfield deployments in the Arctic, where live traffic-carrying cables traverse vulnerable and logistically challenging environments. Similarly, C-OFDR, while slightly more complex in deployment, has been demonstrated in real-world conditions on transatlantic cables, offering precise localization of environmental disturbances using coherent interferometry without the need for added reflectors​.
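
As a rough illustration of what AI-assisted SOP monitoring can mean at its simplest, the sketch below flags anomalous polarization activity from a stream of normalized Stokes vectors, using the rotation rate between consecutive samples and a rolling statistical baseline. The window length, threshold, and synthetic data are assumptions for illustration only and do not represent any vendor’s implementation.

```python
# Minimal sketch: flag anomalous State-of-Polarization (SOP) activity from a
# stream of Stokes vectors via the angular rotation rate between consecutive
# samples and a rolling z-score. Window length and threshold are illustrative
# assumptions, not values from any deployed system.
import numpy as np

def sop_rotation_rate(stokes):
    """Angle (radians) between consecutive unit Stokes vectors, one value per step."""
    s = stokes / np.linalg.norm(stokes, axis=1, keepdims=True)
    cosang = np.clip(np.sum(s[1:] * s[:-1], axis=1), -1.0, 1.0)
    return np.arccos(cosang)

def flag_anomalies(rate, window=500, z_thresh=6.0):
    """Rolling mean/std baseline; flag samples whose rate exceeds a z-score threshold."""
    flags = np.zeros_like(rate, dtype=bool)
    for i in range(window, len(rate)):
        base = rate[i - window:i]
        mu, sigma = base.mean(), base.std() + 1e-12
        flags[i] = (rate[i] - mu) / sigma > z_thresh
    return flags

# Synthetic demo: slow polarization drift plus a short mechanical disturbance.
rng = np.random.default_rng(1)
n = 5000
drift = np.cumsum(0.001 * rng.standard_normal((n, 3)), axis=0)
stokes = np.array([1.0, 0.0, 0.0]) + drift
stokes[3000:3050] += 0.2 * rng.standard_normal((50, 3))  # injected disturbance
rate = sop_rotation_rate(stokes)
print("flagged samples:", np.flatnonzero(flag_anomalies(rate))[:10], "...")
```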

From a policy standpoint, Arctic submarine sensing intersects with civil, commercial, and defense domains, making multinational coordination essential. Organizations such as NATO, NORDEFCO (Nordic Defence Cooperation), and the Arctic Council must harmonize protocols for sensor data sharing, event attribution, and incident response. While SOP and C-OFDR generate valuable geophysical and security-relevant data, questions remain about how such data can be lawfully shared across borders, especially when detected anomalies may involve classified infrastructure or foreign-flagged vessels.

Moreover, integration with software-defined networking and centralized control planes can enable rapid traffic rerouting when anomalies are detected, improving resilience against natural or intentional disruptions. This also requires technical readiness in Greenlandic and Nordic telecom systems, many of which are evolving toward open architectures but may still depend on legacy switching hubs vulnerable to single points of failure.

Sensor compatibility and strategic trust must guide the acquisition and sourcing of sensing systems. Vendors like Nokia Bell Labs, which developed AI-based SOP anomaly detection models, have demonstrated in-band sensing on submarine networks without service degradation. A sourcing team may want to ensure that due diligence is done on the foundational models and that their origin has not been compromised by high-risk countries or vendors. I would recommend that sourcing teams follow the European Union’s 5G security framework as guidance in selecting the algorithmic solution, ensuring that no high-risk vendor or country has been involved at any point in the model development, training, or the operational aspects of inference and updates involved in applying such models. By the way, it might be a very good and safe idea to extend this principle to the submarine cable construction and repair industry (just saying!).

When sourcing such systems, governments and operators should prioritize:

  • Proven compatibility with coherent transceiver infrastructure (i.e., brownfield submarine cable installations). Needless to say, solutions should be tested before final sourcing (e.g., via a PoC).
  • Supplier alignment with NATO or Nordic/Arctic security frameworks. At a minimum, guidance should be taken from the EU 5G security framework and its approach to high-risk vendors and countries.
  • Firmware and AI models need clear IP ownership and cybersecurity compliance. Needless to say, the foundational models must originate from trusted companies and markets.
  • Inclusion of post-deployment support in Arctic (and beyond Arctic) operational conditions.

It cannot be emphasized enough that not all sensing systems are equally suitable for long-haul submarine cable stretches, such as transatlantic routes. Different sensing strategies may be required for different parts or spans of the same subsea cable (e.g., the bottom of the Atlantic Ocean vs. coastal areas and their approaches). A hybrid sensing approach is often more effective than a single solution. The physical length, signal attenuation, repeater spacing, and bandwidth constraints inherent to long-haul cables introduce technical limitations that influence which sensing techniques are viable and scalable.

For example, φ-OTDR (phase-sensitive OTDR) and standard DAS techniques, while powerful for acoustic sensing on terrestrial or coastal cables, face significant challenges over ultra-long distances due to signal loss and a diminishing signal-to-noise ratio. These methods typically require access to dark fiber and may struggle to operate effectively across repeatered links or when deployed mid-span across thousands of kilometers without amplification. By contrast, techniques like State of Polarization (SOP) sensing and Coherent Optical Frequency Domain Reflectometry (C-OFDR) have demonstrated strong potential for brownfield integration on transoceanic cables. SOP sensing can operate passively on live, traffic-carrying fibers and has been successfully demonstrated over 6,500 km transatlantic spans without an invasive retrofit. Similarly, C-OFDR, particularly in its in-line coherent implementation, can leverage existing coherent transceivers and loop-back paths to perform long-range distributed sensing across legacy infrastructure.

This leads to the reasonable conclusion that a mix of sensing technologies tailored to cable type, length, environment, and use case is both appropriate and necessary. For example, coastal or Arctic shelf cables may benefit more from high-resolution φ-OTDR/DAS deployments. In contrast, transoceanic cables call for SOP- or C-OFDR-based systems compatible with repeatered, live-traffic environments. This modular, multi-modal approach ensures maximum coverage, resilience, and relevance, especially as sensing is extended across greenfield and brownfield deployments.

Thus, hybrid sensing architectures are emerging as a best practice, with each technique contributing unique strengths toward a comprehensive monitoring and defense capability for critical submarine infrastructure.

Last but not least, cybersecurity and signal integrity protections are critical. Sensor platforms that generate real-time alerts must include spoofing detection, data authentication, and secured telemetry channels to prevent manipulation or false alarms. SOP sensing, for instance, may be vulnerable to polarization spoofing unless validated against multi-parameter baselines, such as concurrent C-OFDR strain signatures or external ISR (i.e., Intelligence, Surveillance, and Reconnaissance) inputs.
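
A minimal sketch of this multi-parameter validation idea, assuming hypothetical alert records and thresholds: an SOP alert is escalated only when a second, independent modality (here a C-OFDR strain anomaly or an external ISR cue) corroborates it in both time and cable position.

```python
# Illustrative sketch of multi-parameter validation: an SOP alert is escalated
# only if a concurrent non-SOP alert (e.g., C-OFDR strain or an ISR cue) falls
# within a short corroboration window. Timestamps, positions, and window sizes
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str          # "SOP", "C-OFDR", "ISR"
    t: float             # seconds since epoch (hypothetical)
    location_km: float   # estimated position along the cable

def corroborated(sop_alert, other_alerts, max_dt=30.0, max_dx_km=5.0):
    """True if any non-SOP alert is close in both time and cable position."""
    return any(a.source != "SOP"
               and abs(a.t - sop_alert.t) <= max_dt
               and abs(a.location_km - sop_alert.location_km) <= max_dx_km
               for a in other_alerts)

sop = Alert("SOP", t=1000.0, location_km=142.0)
others = [Alert("C-OFDR", t=1012.0, location_km=140.5)]
action = "escalate to operator" if corroborated(sop, others) else "log only (possible spoof/false alarm)"
print(action)
```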

CONCLUSION AND RECOMMENDATION.

Submarine cables are indispensable for global connectivity, transmitting over 95% of international internet traffic, yet they remain primarily unmonitored and physically vulnerable. Recent events and geopolitical tensions reveal that hostile actors could target this infrastructure with plausible deniability, especially in regions with low surveillance like the Arctic. As described in this article, enhanced sensing technologies, such as DAS, SOP, and C-OFDR, can provide real-time awareness and threat detection, transforming passive infrastructure into active security assets. This is particularly urgent for islands and Arctic regions like Greenland, where fragile cable networks (in the sense of few independent international connections) represent single points of failure.

Key Considerations:

  • Submarine cables are strategic, yet “blind & deaf” infrastructures.
    Despite carrying the majority of global internet and financial data, most cables lack embedded sensing capabilities, leaving them vulnerable to natural and hybrid threats. This is especially true in the Arctic and island regions with minimal redundancy.
  • Recent hybrid threat patterns reinforce the need for monitoring.
    Cases like the 2024–2025 Baltic and Taiwan cable incidents show patterns (e.g., clean cuts, sudden phase shifts) that may be consistent with deliberate interference. These events demonstrate how undetected tampering can have immediate national and global impacts.
  • The Arctic is both a strategic and environmental hotspot.
    Melting sea ice has made the region more accessible to submarines and sabotage, while Greenland’s cables are often shallow, unprotected, and linked to critical NATO and civilian installations. Integrating sensing capabilities here is urgent.
  • Sensing systems enable early warning and reduce repair times.
    Technologies like SOP and C-OFDR can be applied to existing (brownfield) subsea systems without disrupting live traffic. This allows for anomaly detection, seismic monitoring, and rapid localization of cable faults, cutting response times from days to minutes.
  • Hybrid sensing systems and international cooperation are essential.
    No single sensing technology fits all submarine environments. The most effective strategy for resilience and defense involves combining multiple modalities tailored to cable type, geography, and threat level while ensuring trusted procurement and governance.
  • Relying on only one or two submarine cables for an island’s entire international connectivity at a bandwidth-critical scale is a high-stakes gamble. For example, a dual-cable redundancy may offer sufficient availability on paper. However, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access.
  • Quantity doesn’t matter for capable hostile actors: for a capable hostile actor, whether a country or region has two, three, or a handful of international submarine cables is unlikely to matter in terms of compromising those critical infrastructure assets.

In addition to the key conclusions above, there is a common belief that expanding the number of international submarine cables from two to three or three to four offers meaningful protection against deliberate sabotage by hostile state actors. While intuitively appealing, this notion underestimates a determined adversary’s intent and capability. For a capable actor, targeting an additional one or two cables is unlikely to pose a serious operational challenge. If the goal is disruption or coercion, a capable adversary will likely plan for multi-point compromise from the outset (including landing station considerations).

However, what cannot be overstated is the resilience gained through additional, physically distinct (parallel) cable systems. Moving from two to three truly diverse and independently repairable cables improves system availability by a factor of roughly 200, reducing expected downtime from hours per year to under a minute. Expanding to four cables can reduce expected downtime to mere seconds annually. These figures reflect statistical robustness and operational continuity in the face of failure. Yet availability alone is not enough. Submarine cable repair timelines remain long, stretching from weeks to months, even under favorable conditions. And while natural disruptions are significant, they are no longer our only concern. In today’s geopolitical climate, undersea infrastructure has become a deliberate target in hybrid and kinetic conflict scenarios. The most pressing threat is not that these cables might be compromised, but that they may already be; we are simply unaware. The undersea domain is poorly monitored, poorly defended, and rich in asymmetric leverage.
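
The availability arithmetic behind these orders of magnitude can be sketched as follows, assuming fully independent, diverse cables and a per-cable availability of 99.5% chosen purely for illustration. Under that assumption, each additional diverse cable cuts expected downtime by a factor of 1/(1 - A) = 200; the absolute downtime figures shift with the assumed per-cable availability, but the multiplicative gain per added cable is the essential point.

```python
# Illustrative availability arithmetic for N fully diverse, independently
# repairable cables. The per-cable availability is an assumption chosen for
# illustration, not a measured figure for any specific system.
HOURS_PER_YEAR = 8766  # average, including leap years

def downtime(per_cable_availability, n_cables):
    """Expected annual downtime (hours) when service survives as long as one cable is up."""
    unavailability = (1 - per_cable_availability) ** n_cables
    return unavailability * HOURS_PER_YEAR

A = 0.995  # assumed per-cable availability (long repair times drive this down)
for n in (1, 2, 3, 4):
    h = downtime(A, n)
    print(f"{n} cable(s): ~{h:9.4f} hours/year (~{h * 60:8.2f} minutes/year)")
```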

Submarine cable infrastructure is not just the backbone of global digital connectivity. It is also a strategic asset with profound implications for civil society and national defense. The reliance on subsea cables for internet access, financial transactions, and governmental coordination is absolute. Satellite-based communications networks can only carry an infinitesimal amount of the traffic carried by subsea cable networks. If the global submarine cable network were to break down, so would the world order as we know it. Integrating advanced sensing technologies such as SOP, DAS, and C-OFDR into these networks transforms them from passive conduits into dynamic surveillance and monitoring systems. This dual-use capability enables faster fault detection and enhanced resilience for civilian communication systems, but also supports situational awareness, early-warning detection, and hybrid threat monitoring in contested or strategically sensitive areas like the Arctic. Ensuring submarine cable systems are robust, observable, and secured must therefore be seen as a shared priority, bridging commercial, civil, and military domains.

THE PHYSICS BEHIND SENSING – A BIT OF BACKUP.

Rayleigh Scattering: Imagine shining a flashlight through a long glass tunnel. Even though the glass tunnel looks super smooth, it has tiny bumps and little specks you cannot see. When the light hits those tiny bumps, some of it bounces back, like a ball bouncing off a wall. That bouncing light is called Rayleigh scattering.

Rayleigh scattering is a fundamental optical phenomenon in which light is scattered by small-scale variations in the refractive index of a medium, such as microscopic imperfections or density fluctuations within an optical fiber. It occurs naturally in all standard single-mode fibers and results in a portion of the transmitted light being scattered in all directions, including backward toward the transmitter. The intensity of Rayleigh backscattered light is typically very weak, but it can be detected and analyzed using highly sensitive receivers. The scattering is elastic, meaning there is no change in wavelength between the incident and scattered light.

In distributed fiber optic sensing (DFOS), Rayleigh backscatter forms the basis for several techniques:

  • Distributed Acoustic Sensing (DAS):
    The DAS sensing solution uses phase-sensitive optical time-domain reflectometry (i.e., φ-OTDR) to measure minute changes in the backscattered phase caused by vibrations. These changes indicate environmental disturbances such as seismic waves, intrusions, or cable movement (see the toy sketch after this list).
  • Coherent Optical Frequency Domain Reflectometry (C-OFDR):
    C-OFDR leverages Rayleigh backscatter to measure changes in the fiber over distance with high resolution. By sweeping a narrow-linewidth laser over a frequency range and detecting interference from the backscatter, C-OFDR enables continuous distributed sensing along submarine cables. Unlike earlier methods requiring Bragg gratings, recent innovations allow this technique to work even over legacy subsea cables without them.
  • Coherent Receiver Sensing:
    This technique monitors Rayleigh backscatter and polarization changes using existing telecom equipment’s DSP (digital signal processing) capabilities. This allows for passive sensing with no additional probes, and the sensing does not interfere with data traffic.
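
To illustrate the differential-phase principle these techniques rely on, here is a toy simulation (not a model of any real interrogator): a vibration acting on one gauge section shows up as elevated temporal variability in the differential backscatter phase at that section, which is how a disturbance is localized along the fiber. The geometry, pulse rate, and amplitudes are illustrative assumptions.

```python
# Toy phase-OTDR-style sketch: a time-varying strain at one fiber section shows
# up as a change in the *differential* phase between adjacent gauge sections.
# Geometry, pulse rate, and phase amplitudes are illustrative assumptions.
import numpy as np

n_sections = 200          # gauge sections along the fiber (e.g., 10 m each)
n_pulses = 1000           # interrogation pulses (time axis)
pulse_rate_hz = 1000.0
rng = np.random.default_rng(2)

# Accumulated one-way phase per section: static random component + noise per pulse.
static_phase = rng.uniform(0, 2 * np.pi, n_sections)
noise = 0.01 * rng.standard_normal((n_pulses, n_sections))

# A 25 Hz vibration strains section 120 only.
t = np.arange(n_pulses) / pulse_rate_hz
vibration = 0.3 * np.sin(2 * np.pi * 25 * t)
local_phase = np.tile(static_phase, (n_pulses, 1)) + noise
local_phase[:, 120] += vibration

# Backscatter phase at section k is the cumulative phase up to k;
# differencing adjacent sections recovers the local contribution.
cumulative = np.cumsum(local_phase, axis=1)
differential = np.diff(cumulative, axis=1)     # equals local_phase[:, 1:]
activity = differential.std(axis=0)            # temporal variability per section

print("most active section:", int(np.argmax(activity)) + 1)  # recovers the perturbed section (120)
```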

Brillouin Scattering: Imagine you are talking through a long string tied between two cups, like a string telephone most of us played with as kids (before everyone got a smartphone when they turned 3 years old). Now, picture that the string is not still. It shakes a little, like shivering or wiggling in the wind or the strain of the hands holding the cups. When your voice travels down that string, it bumps into those little wiggles. That bumping makes the sound of your voice change a tiny bit. Brillouin scattering is like that. When light travels through our string (that could be a glass fiber), the tiny wiggles inside the string make the light change direction, and the way that light and cable “wiggles” work together can tell our engineers stories about what happens inside the cable.

Brillouin scattering is a nonlinear optical effect that occurs when light interacts with acoustic (sound) waves within the optical fiber. When a continuous wave or pulsed laser signal travels through the fiber, it can generate small pressure waves due to a phenomenon known as electrostriction. These pressure waves slightly change the optical fiber’s refractive index and act like a moving grating, scattering some of the light backward. This backward-scattered light experiences a frequency shift, known as the Brillouin shift, which is directly related to the temperature and strain in the fiber at the scattering point.
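
The shift-to-measurand conversion can be sketched with typical textbook coefficients for standard single-mode fiber at 1550 nm (roughly 1 MHz per °C and 0.05 MHz per µε; actual coefficients vary by fiber type and must be calibrated). The example also shows why a single Brillouin measurement cannot, on its own, separate temperature from strain.

```python
# Illustrative conversion of a measured Brillouin frequency shift change into
# temperature or strain, using typical textbook coefficients for standard
# single-mode fiber at 1550 nm (values vary by fiber; treat as assumptions).
C_T = 1.0e6    # Hz per degC (~1 MHz/degC temperature coefficient)
C_E = 0.05e6   # Hz per microstrain (~0.05 MHz/ue strain coefficient)

def delta_temperature(delta_shift_hz):
    """Temperature change if the fiber is strain-free (e.g., a loose-tube reference fiber)."""
    return delta_shift_hz / C_T

def delta_strain(delta_shift_hz):
    """Strain change (microstrain) if the temperature is known to be constant."""
    return delta_shift_hz / C_E

# A 5 MHz upward shift could mean ~5 degC of warming OR ~100 microstrain of added
# strain; separating the two requires a second, strain-isolated measurement.
shift = 5e6
print(f"interpreted as temperature only: {delta_temperature(shift):.1f} degC")
print(f"interpreted as strain only:      {delta_strain(shift):.0f} microstrain")
```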

Commercial Brillouin-based systems are technically capable of monitoring subsea communications cables, especially for strain and temperature sensing. However, they are not yet standard in the submarine communications cable industry, and integration typically requires dedicated or dark fibers, as the sensing cannot share the same fiber with active data traffic.

Raman Scattering: Imagine you are shining a flashlight through a glass of water. Most of the light goes straight through, like cars driving down a road without turning. But sometimes, a tiny bit of light bumps into something inside the water, like a little water molecule, and bounces off differently. It’s like the car suddenly makes a tiny turn and changes its color. This little bump and color change is what we call Raman scattering. It is a special effect as it helps scientists figure out what’s inside things, like what water is made of, by looking at how the light changes when it bounces off.

Raman scattering is primarily used in submarine fiber cable sensing for Distributed Temperature Sensing (DTS). This technique exploits the temperature-dependent nature of Raman scattering to measure the temperature along the entire length of an optical fiber, which can be embedded within or run alongside a submarine cable. Raman scattering has several applications in submarine cables. It is used for environmental monitoring by detecting gradual thermal changes caused by ocean currents or geothermal activity. Regarding cable integrity, it can identify hotspots that might indicate electrical faults or compromised insulation in power cables. In Arctic environments, Raman-based Distributed Temperature Sensing (DTS) can help infer changes in surrounding ice or seawater temperatures, aiding in ice detection. Additionally, it supports early warning systems in the energy and offshore sectors by identifying overheating and other thermal anomalies before they lead to critical failures.
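
For the curious, the temperature dependence that Raman DTS exploits comes from the ratio of anti-Stokes to Stokes backscatter intensity. The sketch below uses typical silica-fiber values (a Raman shift of roughly 13.2 THz and Stokes/anti-Stokes wavelengths for a 1550 nm pump) purely as illustrative assumptions; a fielded DTS instrument calibrates this ratio against a reference section of known temperature.

```python
# Illustrative Raman DTS relation: the anti-Stokes/Stokes intensity ratio is
# temperature dependent, which is what a DTS interrogator exploits. The Raman
# shift and wavelengths below are typical silica-fiber textbook values,
# treated here as assumptions.
import math

H = 6.626e-34              # Planck constant (J s)
K_B = 1.381e-23            # Boltzmann constant (J/K)
RAMAN_SHIFT_HZ = 13.2e12   # ~13.2 THz (~440 cm^-1) for silica
LAMBDA_S, LAMBDA_AS = 1663e-9, 1451e-9  # Stokes / anti-Stokes for a 1550 nm pump

def anti_stokes_to_stokes_ratio(temp_k):
    """R(T) = (lambda_S / lambda_AS)^4 * exp(-h * d_nu / (k_B * T))."""
    return (LAMBDA_S / LAMBDA_AS) ** 4 * math.exp(-H * RAMAN_SHIFT_HZ / (K_B * temp_k))

for t_c in (-2, 4, 20):  # e.g., Arctic seawater versus a warming hotspot
    t_k = t_c + 273.15
    print(f"{t_c:>3d} degC -> anti-Stokes/Stokes ratio {anti_stokes_to_stokes_ratio(t_k):.4f}")
```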

However, Raman scattering has notable limitations. Because it is a weak optical effect, DTS systems based on Raman scattering require high-powered lasers and highly sensitive detectors. It is also unsuitable for detecting dynamic events such as vibrations or acoustic signals, which are better sensed using Rayleigh or Brillouin scattering. Furthermore, Raman-based DTS typically offers spatial resolutions of one meter or more and has a slow response time, making it less effective for identifying rapid or short-lived events like submarine activity or tampering.

Commercial Raman-DTS solutions exist and are actively deployed in subsea power cable monitoring. Their use in telecom submarine cables is less common but technically feasible, particularly for infrastructure integrity monitoring rather than data-layer diagnostics.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am furthermore indebted to Andreas Gladisch, VP Emerging Technologies – Deutsche Telekom AG, for sharing his expertise on fiber-optical sensing technologies with me and providing some of the foundational papers on which my article and research have been based. I always come away wiser from our conversations.

Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?

THE POST-TOWER ERA – A FAIRYTALE.

From the bustling streets of New York to the remote highlands of Mongolia, the skyline had visibly changed. Where steel towers and antennas once dominated now stood open spaces and restored natural ecosystems. Forests reclaimed their natural habitats, and birds nested in trees undisturbed by the scarring of tall rural cellular towers. This transformation was not sudden but resulted from decades of progress in satellite technology, growing demand for ubiquitous connectivity, an increasingly urgent need to address the environmental footprint of traditional telecom infrastructure, and the economic need to dramatically reduce the operational expenses tied up in tower infrastructure. By the time the last cell site was decommissioned, society stood at the cusp of a new age of connectivity delivered by LEO satellites covering all of Earth.

The worldwide annual savings in total cost from making terrestrial cellular towers obsolete are estimated at no less than 300 billion euros, and it is expected that moving cellular access to “heaven” will avoid more than 150 million metric tons of CO2 emissions annually. The retirement of all terrestrial cellular networks worldwide has been like eliminating the entire carbon footprint of the Netherlands or Malaysia, and it has led to a dramatic reduction in demand for the sustainable green energy sources that previously powered the global cellular infrastructure.

INTRODUCTION.

Recent postings and a substantial part of the commentary give the impression that we are heading towards a post-tower era where Elon Musk’s Low Earth Orbit (LEO) satellite Starlink network (together with competing options, e.g., AST SpaceMobile and Lynk, and no, I do not see Amazon’s Project Kuiper in this space) will make terrestrially-based tower infrastructure and earth-bound cellular services obsolete.

T-Mobile USA is launching its Direct-to-Cell (D2C) service via SpaceX’s Starlink LEO satellite network. The T-Mobile service is designed to work with existing LTE-compatible smartphones, allowing users to connect to Starlink satellites without needing specialized hardware or smartphone applications.

Since the announcement, posts and media coverage have declared the imminent death of the terrestrial cellular network. When it is pointed out that this may be a premature death sentence for an industry, its telecom operators, and their existing cellular mobile networks, it is also not uncommon to be told off as being too pessimistic and an unbeliever in Musk’s genius vision. Musk has on occasion made it clear that the Starlink D2C service is aimed at texts and voice calls in remote and rural areas, and to be honest, the D2C service currently hinges on 2×5 MHz in T-Mobile’s PCS band, adding constraints to the “broadbandedness” of the service. The fact that the service doesn’t match the best of T-Mobile US’s 5G network quality (e.g., 205+ Mbps downlink) or even get near its 4G speeds should really not bother anyone, as the value of the D2C service is that it is available in remote and rural areas with little to no terrestrial cellular coverage and that you can use your regular cellular device with no need for a costly satellite service and satphone (e.g., Iridium, Thuraya, Globalstar).

While I don’t expect to (or even want to) change people’s beliefs, I do think it would be great to contribute to more knowledge and insights based on facts about what is possible with low-earth orbiting satellites as a terrestrial substitute and what is uninformed or misguided opinion.

The rise of LEO satellites has sparked discussions about the potential obsolescence of terrestrial cellular networks. With advancements in satellite technology and increasing partnerships, such as T-Mobile’s collaboration with SpaceX’s Starlink, proponents envision a future where towers are replaced by ubiquitous connectivity from the heavens. However, the feasibility of LEO satellites achieving service parity with terrestrial networks raises significant technical, economic, and regulatory questions. This article explores the challenges and possibilities of LEO Direct-to-Cell (D2C) networks, shedding light on whether they can genuinely replace ground-based cellular infrastructure or will remain a complementary technology for specific use cases.

WHY DISTANCE MATTERS.

The distance between you (your cellular device) and the base station’s antenna determines your expected service experience in cellular and wireless networks. In general, the farther you are from the base station that serves you, the poorer your connection quality and performance will be, all else being equal. As the distance increases, signal weakening (i.e., path loss) grows rapidly (with the square of the distance in free space, and faster in cluttered environments), reducing signal quality and making it harder for devices to maintain reliable communication. Closer proximity allows for stronger, faster, and more stable connections, while longer distances require more power and advanced technologies like beamforming or repeaters to compensate.

Physics tells us that a signal loses strength (power) with the square of the distance from its source (either the base station transmitter or the consumer device). This applies universally to all electromagnetic waves traveling in free space. Free space means that there are no obstacles, reflections, or scattering; no terrain features, buildings, or atmospheric conditions interfere with the propagating signal.

So, what matters to the Free Space Path Loss (FSPL), that is, the signal loss over a given distance in free space?

  • The signal strength reduces (the path loss increases) with the square of the distance (d) from its source.
  • Path loss increases (i.e., signal strength decreases) with the (square of the) frequency (f). The higher the frequency, the higher the path loss at a given distance from the signal source.
  • A larger transmit antenna aperture reduces the path loss by focusing the transmitted signal (energy) more efficiently. An antenna aperture is an antenna’s “effective area” for capturing or transmitting electromagnetic waves. It is directly proportional to the antenna gain and inversely proportional to the square of the signal frequency (i.e., higher frequency → smaller aperture for a given gain).
  • Higher receiver gain will also reduce the path loss.

$PL_{FS} \; = \; \left( \frac{4 \pi}{c} \right)^2 (d \; f)^2 \; \propto d^2 \; f^2$

$$FSPL_{dB} \; = 10 \; Log_{10} (PL_{FS}) \; = \; 20 \; Log_{10}(d) \; + \; 20 \; Log_{10}(f) \; + \; constant$$

The above equations show a strong dependency on distance; the farther away, the larger the signal loss, and the higher the frequency, the larger the signal loss. Relaxing some of the assumptions leading to the above relationship leads us to the following:

$FSPL_{dB}^{rs} \; = \; 20 \; Log_{10}(d) \; - \; 10 \; Log_{10}(A_t^{eff}) \; - \; 10 \; Log_{10}(G_{r}) \; + \; constant$

The last of the above equations introduces the transmitter’s effective antenna aperture (\(A_t^{eff}\)) and the receiver’s gain (\(G_r\)), telling us that larger apertures reduce path loss as they focus the transmitted energy more efficiently and that higher receiver gain likewise reduces the path loss (i.e., “they hear better”).

It is worth remembering that the transmitter antenna aperture is directly tied to the transmitter gain ($G_t$) when the frequency (f) has been fixed. We have

$A_t^{eff} \; = \; \frac{c^2}{4\pi} \; \frac{1}{f^2} \; G_t \; = \; 0.000585 \; m^2 \; G_t \;$ @ f = 3.5 GHz.

From the above, as an example, it is straightforward to see that the relative path loss difference between the two distances of 550 km (e.g., typical altitude of an LEO satellite) and 2.5 km (typical terrestrial cellular coverage range) is

$\frac{PL_{FS}(550 km)}{PL_{FS}(2.5 km)} \; = \; \left( \frac {550}{2.5}\right)^2 \; = \; 220^2 \; \approx \; 50$ thousand. So if all else were equal (it isn’t, btw!), we would expect the signal loss at a distance of 550 km to be 50 thousand times higher than at 2.5 km. Or, in the electrical engineer’s language, at a distance of 550 km, the loss would be ca. 47 dB higher than at 2.5 km.
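
For completeness, the same ratio can be reproduced directly from the FSPL expression above. Since both links are assumed to use the same carrier frequency, the frequency term cancels out of the difference, and only the distances matter.

```python
# Reproducing the distance-only comparison above with the FSPL expression
# from the text (the common carrier frequency cancels out of the ratio).
import math

def fspl_db(distance_m, freq_hz, c=3.0e8):
    """Free-space path loss in dB: 20 log10(d) + 20 log10(f) + 20 log10(4*pi/c)."""
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

f = 3.5e9  # any common carrier frequency; it cancels in the difference
leo, terrestrial = 550e3, 2.5e3
delta_db = fspl_db(leo, f) - fspl_db(terrestrial, f)
ratio = (leo / terrestrial) ** 2
print(f"linear ratio ≈ {ratio:,.0f}  (≈ {delta_db:.1f} dB)")  # ~48,400, ~46.8 dB
```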

The figure illustrates the difference between (a) terrestrial cellular and (b) satellite coverage. A terrestrial cellular signal typically covers a radius of 0.5 to 5 km. In contrast, a LEO satellite signal travels a substantial distance to reach Earth (e.g., Starlink satellite is at an altitude of 550 km). While the terrestrial signal propagates through the many obstacles it meets on its earthly path, the satellite signal’s propagation path would typically be free-space-like (i.e., no obstacles) until it penetrates buildings or other objects to reach consumer devices. Historically, most satellite-to-Earth communication has relied on outdoor ground stations or dishes where the outdoor antenna on Earth provides LoS to the satellite and will also compensate somewhat for the signal loss due to the distance to the satellite.

Let’s compare a terrestrial 5G 3.5 GHz advanced antenna system (AAS) 2.5 km from a receiver with a LEO satellite system at an altitude of 550 km. Note that I could have chosen a lower frequency, e.g., 800 MHz or the PCS 1900 band. While it would give me some advantages regarding path loss (i.e., $FSPL \; \propto \; f^2$), the available bandwidth is rather smallish and insufficient for state-of-the-art 5G services (imo!). From a free-space path loss perspective, independently of frequency, we need to overcome an almost 50 thousand times relative difference in distance squared (ca. 47 dB difference) in favor of the terrestrial system. In this comparison, it should be understood that the terrestrial and satellite systems use the same carrier frequency (otherwise, one should account for the difference in frequency), and the only difference that matters (for the FSPL) is the difference in distance to the receiver.

Suppose I require that my satellite system has the same signal loss in terms of FSPL as my terrestrial system in order to aim at a comparable quality of service level. In that case, I have several options in terms of satellite enhancements. I could increase transmit power, although that would imply a transmit power 47 dB higher than the terrestrial system, or approximately 48 kW, which is likely impractical for the satellite due to power limitations. Compare this with the current Starlink transmit power of approximately 32 W (45 dBm), ca. 1,500 times lower. Alternatively, I could (in theory!) increase my satellite antenna aperture, leading to a satellite antenna with a diameter of ca. 250 meters, which is enormous compared to current satellite antennas (e.g., Starlink’s ca. 0.05 m² aperture for a single antenna and a total area in the order of 1.6 m² for the Ku/Ka bands). Finally, I could (super theoretically) massively improve my consumer device’s (e.g., smartphone’s) receive gain by 47 dB from today’s range of -2 dBi to +5 dBi. Achieving a 46 dBi gain in a smartphone receiver seems unrealistic due to size, power, and integration constraints. As the target of LEO satellite direct-to-cell services is to support commercially available cellular devices used in terrestrial networks, only the satellite specifications can be optimized.
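
The rough arithmetic behind these options is sketched below. The baseline figures (about 1 W of reference transmit power and about 1 m² of reference aperture) are assumptions chosen only to reproduce the orders of magnitude quoted above, not published Starlink or terrestrial design parameters.

```python
# Rough arithmetic behind the compensation options above. The baseline figures
# (~1 W reference transmit power, ~1 m^2 reference aperture) are assumptions
# chosen only to reproduce the orders of magnitude quoted in the text.
import math

deficit_db = 46.8                      # (550 km / 2.5 km)^2 expressed in dB
factor = 10 ** (deficit_db / 10)       # ~48,000x

# Option 1: brute-force transmit power.
p_ref_w = 1.0                          # assumed reference transmit power
print(f"required transmit power ≈ {p_ref_w * factor / 1e3:.0f} kW")

# Option 2: a larger transmit aperture (antenna gain scales with area at fixed frequency).
a_ref_m2 = 1.0                         # assumed reference aperture
a_needed_m2 = a_ref_m2 * factor
diameter_m = 2 * math.sqrt(a_needed_m2 / math.pi)
print(f"required aperture ≈ {a_needed_m2:,.0f} m² (diameter ≈ {diameter_m:.0f} m)")

# Option 3: ~47 dB more receive gain in the handset, i.e., tens of dBi in a
# smartphone form factor, which is not realistic.
```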

Based on a simple free-space approach, it appears unreasonable that an LEO satellite communication system can provide 5G services at parity with a terrestrial cellular network to normal (unmodified) 5G consumer devices without satellite-optimized modifications. The satellite system’s requirements for parity with a terrestrial communications system are impractical (but not impossible) and, if pursued, would significantly drive up design complexity and cost, likely making such a system highly uneconomical.

At this point, you should ask yourself whether it is reasonable to treat a terrestrial cellular system as if it propagates in a “free-space”-like environment, ignoring obstacles, reflections, and scattering. Is it really okay to presume that terrain features, buildings, or atmospheric conditions do not interfere with the propagation of the terrestrial cellular signal? Of course, the answer is that it is not okay to assume that. With that in mind, let’s see whether it matters much compared to the LEO satellite’s path loss.

TERRESTRIAL CELLULAR PROPAGATION IS NOT HAPPENING IN FREE SPACE, AND NEITHER IS A SATELLITE’S.

The Free-Space Path Loss (FSPL) formula assumes ideal conditions where signals propagate in free space without interference, blockage, or degradation beyond what naturally results from traveling a given distance. However, as we all experience daily, real-world environments introduce additional factors such as obstructions, multipath effects, clutter loss, and environmental conditions, necessitating corrections to the FSPL approach. Moving from one room of our house to another can easily change the cellular quality we experience (e.g., dropped calls, poorer voice quality, lower speed, a change from 5G to 4G or even 2G, or no coverage at all). Driving through a city may also result in ups and downs in the cellular quality we experience. Some of these effects are tabulated below.

Urban environments typically introduce the highest additional losses due to dense buildings, narrow streets, and urban canyons, which significantly obstruct and scatter signals. For example, the Okumura-Hata Urban Model accounts for such obstructions and adds substantial losses to the FSPL, averaging around 30–50 dB, depending on the density and height of buildings.

Suburban environments, on the other hand, are less obstructed than urban areas but still experience moderate clutter losses from trees, houses, and other features. In these areas, corrections based on the Okumura-Hata Suburban Model add approximately 10–20 dB to the FSPL, reflecting the moderate level of signal attenuation caused by vegetation and scattered structures.

Rural environments have the least obstructions, resulting in the lowest additional loss. Corrections based on the Okumura-Hata Rural Model typically add around 5–10 dB to the FSPL. These areas benefit from open landscapes with minimal obstructions, making them ideal for long-range signal propagation.

Non-line-of-sight (NLOS) conditions further increase the path loss, as signals must diffract or scatter to reach the receiver. This effect adds 10–20 dB in suburban and rural areas and 20–40 dB in urban environments, where obstacles are more frequent and severe. Similarly, weather conditions such as rain and foliage contribute to signal attenuation, with rain adding up to 1–5 dB/km at higher frequencies (above 10 GHz) and dense foliage introducing an extra 5–15 dB of loss.

The corrections for these factors can be incorporated into the FSPL formula to provide a more realistic estimation of signal attenuation. By applying these corrections, the FSPL formula can reflect the conditions encountered in terrestrial communication systems across different environments.
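
As a simple illustration, the corrections quoted above can be stacked on top of the FSPL for a 2.5 km terrestrial link at 3.5 GHz. Midpoints of the quoted ranges are used, so this is indicative only and not a calibrated propagation model such as Okumura-Hata.

```python
# Sketch of stacking the indicative clutter/NLOS corrections from the text on
# top of free-space path loss for a 2.5 km terrestrial link at 3.5 GHz.
# Midpoints of the quoted ranges are used; this is not a calibrated model.
import math

def fspl_db(distance_m, freq_hz, c=3.0e8):
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

SCENARIOS = {                      # extra loss in dB (midpoints of the ranges in the text)
    "rural, line of sight":  7,        # 5-10 dB clutter
    "suburban, NLOS":        15 + 15,  # 10-20 dB clutter + 10-20 dB NLOS
    "urban, NLOS":           40 + 30,  # 30-50 dB clutter + 20-40 dB NLOS
}

d, f = 2.5e3, 3.5e9
base = fspl_db(d, f)
for name, extra in SCENARIOS.items():
    print(f"{name:22s}: {base + extra:6.1f} dB  (FSPL {base:.1f} dB + corrections {extra} dB)")
```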

The figure above illustrates the differences and similarities concerning the coverage environment for (a) terrestrial and (b) satellite communication systems. In most instances, the terrestrial signal loses strength as it propagates through the terrestrial environment due to vegetation, terrain variations, urban topology or infrastructure, and weather, and ultimately, as it propagates from the outdoor to the indoor environment, it is reduced further by penetrating, for example, coated windows and outer and inner walls. The combination of distance, obstacles, and material penetration leads to a cumulative reduction in signal strength as the signal propagates through the terrestrial environment. For the satellite, as illustrated in (b), a substantial amount of signal is lost due to the vast distance it has to travel before reaching the consumer. If no outdoor antenna connects with the satellite signal, then the satellite signal will be further reduced as it penetrates roofs, multiple ceilings, multiple floors, and walls.

It is often assumed that a satellite system has a line of sight (LoS) without environmental obstructions in its signal propagation (besides atmospheric ones). The reasoning is not unreasonable, as the satellite is above the consumers of its services, and it is, of course, a correct approach when the consumer has an outdoor satellite receiver (e.g., a dish) in direct LoS with the satellite. Moreover, historically, most satellite-to-Earth communication has relied on outdoor ground stations or outdoor dishes (e.g., placed on roofs or another suitable location), where the outdoor antenna on Earth provides LoS to the satellite’s antenna and also compensates somewhat for the signal loss due to the distance to the satellite.

When considering a satellite direct-to-cell device, we no longer have the luxury of a satellite-optimized advanced Earth-based outdoor antenna to facilitate the communications between the satellite and the consumer device. The satellite signal has to close the connection with a standard cellular device (e.g., smartphone, tablet, …), just like the terrestrial cellular network would have to do.

However, 80% or more of our mobile cellular traffic happens indoors, in our homes, workplaces, and public places. If a satellite system had to replace existing mobile network services, it would also have to provide a service quality similar to what consumers get from the terrestrial cellular network. As shown in the figure above, this includes urban areas where the satellite signal will likely pass through a roof and multiple floors before reaching a consumer. Depending on housing density, buildings (shadowing) may block the satellite signal, resulting in substantial service degradation for consumers suffering from such effects. Even if the satellite signal does not face the same challenges as a terrestrial cellular signal, such as vegetation, terrain variations, and the horizontal dimension of urban topology (e.g., outer and inner walls, coated windows, etc.), the satellite signal would still have to overcome the vertical dimension of urban topology (e.g., roofs, ceilings, floors, etc.) to connect to consumers’ cellular devices.

For terrestrial cellular services, the cellular network’s signal integrity will (always) have a considerable advantage over the satellite signal because of its proximity to the consumer’s cellular device. With respect to distance alone, an LEO satellite at an altitude of 550 km has to overcome a path loss roughly 50 thousand times (about 47 dB) greater than that of a cellular base station antenna 2.5 km away. Overcoming that path-loss penalty places demands on the satellite antenna design that appear very hard to meet with today’s technology (and economics).
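A quick back-of-the-envelope check of that distance penalty, as a minimal Python sketch (distance only; antenna gains and atmospheric effects are ignored):

```python
import math

d_satellite_km = 550.0   # LEO altitude used in the text
d_terrestrial_km = 2.5   # assumed distance to a terrestrial base station

ratio = (d_satellite_km / d_terrestrial_km) ** 2   # free-space loss scales with distance squared
delta_db = 10 * math.log10(ratio)

print(f"{ratio:,.0f}x  (~{delta_db:.0f} dB)")   # ~48,400x, i.e., roughly 47 dB
```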

CHALLENGES SUMMARIZED.

Achieving parity between a Low Earth Orbit (LEO) satellite providing Direct-to-Cell (D2C) services and a terrestrial 5G network involves overcoming significant technical challenges. The disparity arises from fundamental differences in these systems’ environments, particularly in free-space path loss, penetration loss, and power delivery. Terrestrial networks benefit from closer proximity to the consumer, higher antenna density, and lower propagation losses. In contrast, LEO satellites must address far more significant free-space path losses due to the large distances involved and the additional challenges of transmitting signals through the atmosphere and into buildings.

The D2C challenges for LEO satellites are increasingly severe at higher frequencies, such as 3.5 GHz and above. As we have seen above, the free-space path loss increases with the square of the frequency, and penetration losses through common building materials, such as walls and floors, are significantly higher. For an LEO satellite system to achieve indoor parity with terrestrial 5G services at this frequency, it would need to achieve extraordinary levels of effective isotropic radiated power (EIRP), around 65 dBW, and narrow beamwidths of approximately 0.5° to concentrate power on specific service areas. This would require very high onboard power outputs, exceeding 1 kW, and large antenna apertures, around 2 m in diameter, to achieve gains near 55 dBi. These requirements place considerable demands on satellite design, increasing mass, complexity, and cost. Despite these optimizations, indoor service parity at 3.5 GHz remains challenging due to persistent penetration losses of around 20 dB, making this frequency better suited for outdoor or line-of-sight applications.

Achieving a stable beam with the small widths required for a LEO satellite to provide high-performance Direct-to-Cell (D2C) services presents significant challenges. Narrow beam widths, on the order of 0.5° to 1°, are essential to effectively focus the satellite’s power and overcome the high free-space path loss. However, maintaining such precise beams demands advanced satellite antenna technologies, such as high-gain phased arrays or large deployable apertures, which introduce design, manufacturing, and deployment complexities. Moreover, the satellite must continuously track rapidly moving targets on Earth as it orbits at around 7.8 km/s. This requires highly accurate and fast beam-steering systems, often using phased arrays with electronic beamforming, to compensate for the relative motion between the satellite and the consumer. Any misalignment in the beam can result in significant signal degradation or complete loss of service. Additionally, ensuring stable beams under variable conditions, such as atmospheric distortion, satellite vibrations, and thermal expansion in space, adds further layers of technical complexity. These requirements increase the system’s power consumption and cost and impose stringent constraints on satellite design, making it a critical challenge to achieve reliable and efficient D2C connectivity.
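To give a feel for what a 0.5° to 1° beam means on the ground, here is a small geometric sketch (assuming a 550 km altitude and nadir pointing; real footprints widen with the slant angle):

```python
import math

def nadir_footprint_km(altitude_km: float, beamwidth_deg: float) -> float:
    """Approximate diameter of the ground footprint of a nadir-pointing beam."""
    return 2 * altitude_km * math.tan(math.radians(beamwidth_deg / 2))

for bw in (0.5, 1.0):
    print(f"{bw}° beam at 550 km -> ~{nadir_footprint_km(550, bw):.1f} km footprint")
# ~4.8 km for 0.5° and ~9.6 km for 1.0°: the area over which the satellite's
# power is concentrated, and which the beam-steering system must keep locked
# onto while the satellite moves at ~7.8 km/s.
```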

As the operating frequency decreases, the specifications for achieving parity become less stringent. At 1.8 GHz, the free-space path loss and penetration losses are lower, reducing the signal deficit. For a LEO satellite operating at this frequency, a 2.5 m² aperture (1.8 m diameter) antenna and an onboard power output of around 800 W would suffice to deliver EIRP near 60 dBW, bringing outdoor performance close to terrestrial equivalency. Indoor parity, while more achievable than at 3.5 GHz, would still face challenges due to penetration losses of approximately 15 dB. However, the balance between the reduced propagation losses and achievable satellite optimizations makes 1.8 GHz a more practical compromise for mixed indoor and outdoor coverage.

At 800 MHz, the frequency-dependent losses are significantly reduced, making it the most feasible option for LEO satellite systems to achieve parity with terrestrial 5G networks. The free-space path loss decreases further, and penetration losses into buildings are reduced to approximately 10 dB, comparable to what terrestrial systems experience. These characteristics mean that the required specifications for the satellite system are notably relaxed. A 1.5 m² aperture (1.4 m diameter) antenna, combined with a power output of 400 W, would achieve sufficient gain and EIRP (~55 dBW) to deliver robust outdoor coverage and acceptable indoor service quality. Lower frequencies also mitigate the need for extreme beamwidth narrowing, allowing for more flexible service deployment.
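The frequency dependence can be illustrated by combining the free-space formula with the indoor penetration losses quoted above. This sketch assumes a 550 km slant range, so the absolute numbers differ slightly from the rounded figures used elsewhere in the text, but the relative picture is the same:

```python
import math

ALTITUDE_KM = 550.0   # assumed slant range (roughly nadir geometry)

def fspl_db(d_km: float, f_ghz: float) -> float:
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

# Indoor penetration losses roughly as quoted in the text above (dB).
penetration_db = {0.8: 10, 1.8: 15, 3.5: 20}

for f_ghz, pen in penetration_db.items():
    fspl = fspl_db(ALTITUDE_KM, f_ghz)
    print(f"{f_ghz} GHz: FSPL ~{fspl:.1f} dB, indoor total ~{fspl + pen:.1f} dB")
# Lower frequencies win twice: less free-space loss and less penetration loss.
```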

Most consumers’ cellular consumption happens indoors, and these consumers are typically better served by existing 5G cellular broadband networks than by an LEO satellite solution. When considering direct-to-normal-cellular-device services, it would not be practical for an LEO satellite network, even an extensive one, to replace existing terrestrial 5G cellular networks and the services they support today.

This does not mean that LEO satellites cannot be of great utility when connecting to an outdoor Earth-based consumer dish, as is already evident in many remote, rural, and suburban places. The summary table above also shows that LEO satellite D2C services are feasible, without overly demanding modifications, in the lower cellular frequency ranges between 600 MHz and 1800 MHz, at service levels close to those of terrestrial systems, at least in rural areas and for outdoor services in general. In indoor situations, the LEO satellite D2C signal is more likely to be compromised by roof and multiple-floor penetration scenarios to which a terrestrial signal may be less exposed.

WHAT GOES DOWN MUST COME UP.

LEO satellite services that deliver connectivity directly to unmodified mobile cellular devices tend to get us all too focused on the downlink path from the satellite to the device. It is easy to forget that, unless we are delivering a broadcast service, the unmodified cellular device also needs to communicate meaningfully back to the LEO satellite. The challenge for an unmodified cellular device (e.g., smartphone, tablet, etc.) to receive the satellite D2C signal has been explained extensively in the previous section. In the satellite downlink-to-device scenario, we can optimize the design specifications of the LEO satellite to overcome some (or most, depending on the frequency) of the challenges posed by the satellite’s high altitude compared to a terrestrial base station’s distance to the consumer device. In the device direct-uplink-to-satellite direction, we have very little to no flexibility unless we start changing the specifications of the terrestrial device portfolio. Suppose we change the specifications of consumer devices so they communicate better with satellites. In that case, we also change the premise and economics of the (wrong) idea that LEO satellites should be able to completely replace terrestrial cellular networks at service parity with those networks.

Achieving uplink communication from a standard cellular device to an LEO satellite poses significant challenges, especially when attempting to match the performance of a terrestrial 5G network. Cellular devices are designed with limited transmission power, typically in the range of 23–30 dBm (0.2–1 watt), sufficient for short-range communication with terrestrial base stations. However, when the receiving station is a satellite orbiting between 550 and 1,200 kilometers, the transmitted signal encounters substantial free-space path loss. The satellite must, therefore, be capable of detecting and processing extremely weak signals, often below -120 dBm, to maintain a reliable connection.

The free-space path loss in the uplink direction is comparable to that in the downlink, but the challenges are compounded by the cellular device’s limitations. At higher frequencies, such as 3.5 GHz, path loss can exceed 155 dB, while at 1.8 GHz and 800 MHz, it reduces to approximately 149.6 dB and 143.6 dB, respectively. Lower frequencies favor uplink communication because they experience less path loss, enabling better signal propagation over large distances. However, cellular devices typically use omnidirectional antennas with very low gain (0–2 dBi), poorly suited for long-distance communication, placing even greater demands on the satellite’s receiving capabilities.
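Using the handset transmit power and the path-loss figures quoted above, the uplink power arriving at the satellite, before any satellite antenna gain is applied, can be sketched as follows (illustrative only; a 0 dBi device antenna is assumed):

```python
# Received isotropic power at the satellite = device transmit power - path loss.
# Path-loss values are the approximate figures quoted in the text above.
tx_power_dbm = 23.0                      # typical handset transmit power (lower end)
path_loss_db = {"3.5 GHz": 155.0, "1.8 GHz": 149.6, "800 MHz": 143.6}

for band, pl in path_loss_db.items():
    rx_dbm = tx_power_dbm - pl           # before satellite receive antenna gain
    print(f"{band}: ~{rx_dbm:.1f} dBm arriving at the satellite")
# All three land at or below roughly -120 dBm, which the satellite's high-gain
# antenna and low-noise receiver must then lift above the noise floor.
```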

The satellite must compensate for these limitations with highly sensitive receivers and high-gain antennas. Achieving sufficient antenna gain requires large apertures, often exceeding 4 meters in diameter for 800 MHz or 2 meters for 3.5 GHz, increasing the satellite’s size, weight, and complexity. Phased-array antennas or deployable reflectors are often used to achieve the required gain. Still, their implementation is constrained by the physical limitations and costs of launching such systems into orbit. Additionally, the satellite’s receiver must have an exceptionally low noise figure, typically in the range of 1–3 dB, to minimize internal noise and allow the detection of weak uplink signals.
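To see why a low noise figure and a high-gain receive antenna matter together, here is a standard thermal-noise sketch; the 1 MHz channel bandwidth, 2 dB noise figure, and 10 dB target SNR are illustrative assumptions, and a real design must also budget for interference and fading margins:

```python
import math

def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float) -> float:
    """Thermal noise floor: -174 dBm/Hz + 10*log10(bandwidth) + noise figure."""
    return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

rx_isotropic_dbm = -120.6            # 800 MHz uplink figure from the sketch above
floor = noise_floor_dbm(1e6, 2.0)    # ~ -112 dBm for a 1 MHz channel, NF = 2 dB
target_snr_db = 10.0                 # assumed SNR needed for a usable link

required_rx_gain_dbi = (floor + target_snr_db) - rx_isotropic_dbm
print(f"Noise floor ~{floor:.0f} dBm, "
      f"required satellite receive antenna gain ~{required_rx_gain_dbi:.0f} dBi")
```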

Interference is another critical challenge in the uplink path. Unlike terrestrial networks, where signals from individual devices are isolated into small sectors, satellites receive signals over larger geographic areas. This broad coverage makes it difficult to separate and process individual transmissions, particularly in densely populated areas where numerous devices transmit simultaneously. Managing this interference requires sophisticated signal processing capabilities on the satellite, increasing its complexity and power demands.

The motion of LEO satellites introduces additional complications due to the Doppler effect, which causes a shift in the uplink signal frequency. At higher frequencies like 3.5 GHz, these shifts are more pronounced, requiring real-time adjustments to the receiver to compensate. This dynamic frequency management adds another layer of complexity to the satellite’s design and operation.
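The magnitude of the shift follows from the first-order Doppler formula; a small sketch, assuming a worst-case line-of-sight velocity of roughly 7.5 km/s near the horizon:

```python
C = 3.0e8          # speed of light, m/s
v_los = 7.5e3      # assumed worst-case line-of-sight velocity, m/s

for f_ghz in (0.8, 1.8, 3.5):
    doppler_khz = f_ghz * 1e9 * v_los / C / 1e3
    print(f"{f_ghz} GHz: up to ~{doppler_khz:.0f} kHz Doppler shift")
# ~20 kHz at 800 MHz, ~45 kHz at 1.8 GHz, and ~88 kHz at 3.5 GHz, all of which
# the satellite (and waveform) must track and compensate for in real time.
```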

Among the frequencies considered, 3.5 GHz is the most challenging for uplink communication due to high path loss, pronounced Doppler effects, and poor building penetration. Satellites operating at this frequency must achieve extraordinary sensitivity and gain, which is difficult to implement at scale. At 1.8 GHz, the challenges are somewhat reduced as the path loss and Doppler effects are less severe. However, the uplink requires advanced receiver sensitivity and high-gain antennas to approach terrestrial network performance. The most favorable scenario is at 800 MHz, where the lower path loss and better penetration characteristics make uplink communication significantly more feasible. Satellites operating at this frequency require less extreme sensitivity and gain, making it a practical choice for achieving parity with terrestrial 5G networks, especially for outdoor and light indoor coverage.

The uplink, the signal direction from the consumer device to the satellite, poses additional limitations on the usable frequency range. Such systems are realistically interesting from 600 MHz up to a maximum of 1.8 GHz, a range that is already challenging for both uplink and downlink in indoor usage. Service in the lower cellular frequency range is feasible for outdoor usage scenarios in rural and remote areas and for non-challenging indoor environments (e.g., “simple” building topologies).

The premise that LEO satellite D2C services would make terrestrial cellular networks redundant everywhere by offering service parity appears very unlikely, and certainly not with the current generation of LEO satellites being launched. The altitude range of the LEO satellites (300 – 1200 km) and frequency ranges used for most terrestrial cellular services (600 MHz to 5 GHz) make it very challenging and even impractical (for higher cellular frequency ranges) to achieve quality and capacity parity with existing terrestrial cellular networks.

LEO SATELLITE D2C ARCHITECTURE.

A subscriber would realize they have LEO satellite Direct-to-Cell coverage through network signaling and notifications provided by their mobile device and network operator. Using this coverage depends on the integration between the LEO satellite system and the terrestrial cellular network, as well as the subscriber’s device and network settings. Here’s how this process typically works:

When a subscriber moves into an area where traditional terrestrial coverage is unavailable or weak, their mobile device will periodically search for available networks, as it does when trying to maintain connectivity. If the device detects a signal from a LEO satellite providing D2C services, it may indicate “Satellite Coverage” or a similar notification on the device’s screen.

This recognition is possible because the LEO satellite extends the subscriber’s mobile network. The satellite broadcasts system information on the same frequency bands licensed to the subscriber’s terrestrial network operator. The device identifies the network using the Public Land Mobile Network (PLMN) ID, which matches the subscriber’s home network or a partner network in a roaming scenario. The PLMN ID, a fundamental component of both terrestrial and LEO satellite D2C networks, is the identifier that links a mobile consumer to a specific mobile network operator. It enables communication, access rights management, and network interoperability, and it supports services such as voice, text, and data.

The PLMN is also directly connected to the frequency bands used by an operator and any satellite service provider, acting as an extension of the operator’s network. It ensures that devices access the appropriately licensed bands through terrestrial or satellite systems and governs spectrum usage to maintain compliance with regulatory frameworks. Thus, the PLMN links the network identification and frequency allocation, ensuring seamless and lawful operation in terrestrial and satellite contexts.

In an LEO satellite D2C network, the PLMN plays a similar but more complex role, as it must bridge the satellite system with terrestrial mobile networks. The satellite effectively operates as an extension of the terrestrial PLMN, using the same Mobile Country Code (MCC) and Mobile Network Code (MNC) as the consumer’s home network or a roaming partner. This ensures that consumer devices perceive the satellite network as part of their existing subscription, avoiding the need for additional configuration or specialized hardware. When the satellite provides coverage, the PLMN enables the device to authenticate and access services through the operator’s core network, so that consumer authentication, billing, and service provisioning remain consistent across the terrestrial and satellite domains. In cases where multiple terrestrial operators share access to a satellite system, the PLMN facilitates the correct routing of consumer sessions to their respective home networks. This coordination is particularly important in roaming scenarios, where a consumer connected to a satellite in one region may need to access services through their home network located in another region.
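For the technically minded, the PLMN ID is simply the MCC concatenated with the MNC, and the device compares the PLMNs broadcast by a cell (terrestrial or satellite) against its own; a minimal sketch, using hypothetical test-range PLMN values rather than real operator codes:

```python
def parse_plmn(plmn: str, mnc_digits: int = 2) -> tuple[str, str]:
    """Split a PLMN ID into MCC (always 3 digits) and MNC (2 or 3 digits)."""
    return plmn[:3], plmn[3:3 + mnc_digits]

home_plmn = "00101"                      # hypothetical home network (test PLMN range)
broadcast_plmns = ["00101", "00102"]     # PLMNs broadcast by the satellite cell (hypothetical)

# The device treats the satellite cell as its own network if a broadcast PLMN
# matches its home PLMN (or an allowed roaming partner).
usable = home_plmn in broadcast_plmns
print(parse_plmn(home_plmn), "-> satellite cell usable:", usable)
```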

For a subscriber to make use of LEO satellite coverage, the following conditions must be met:

  • Device Compatibility: The subscriber’s mobile device must support satellite connectivity. While many standard devices are compatible with satellite D2C services using terrestrial frequencies, certain features may be required, such as enhanced signal processing or firmware updates. Modern smartphones are increasingly being designed to support these capabilities.
  • Network Integration: The LEO satellite must be integrated with the subscriber’s mobile operator’s core network. This ensures the satellite extends the terrestrial network, maintaining seamless authentication, billing, and service delivery. Consumers can make and receive calls, send texts, or access data services through the satellite link without changing their settings or SIM card.
  • Service Availability: The type of services available over the satellite link depends on the network and satellite capabilities. Initially, services may be limited to text messaging and voice calls, as these require less bandwidth and are easier to support in shared satellite coverage zones. High-speed data services, while possible, may require further advancements in satellite capacity and network integration.
  • Subscription or Permissions: Subscribers must have access to satellite services through their mobile plan. This could be included in their existing plan or offered as an add-on service. In some cases, roaming agreements between the subscriber’s home network and the satellite operator may apply.
  • Emergency Use: In specific scenarios, satellite connectivity may be automatically enabled for emergencies, such as SOS messages, even if the subscriber does not actively use the service for regular communication. This is particularly useful in remote or disaster-affected areas with unavailable terrestrial networks.

Once connected to the satellite, the consumer experience is designed to be seamless. The subscriber can initiate calls, send messages, or access other supported services just as they would under terrestrial coverage. The main differences may include longer latency due to the satellite link and, potentially, lower data speeds or limitations on high-bandwidth activities, depending on the satellite network’s capacity and the number of consumers sharing the satellite beam.

Managing a call on a Direct-to-Cell (D2C) satellite network requires specific mobile network elements in the core network, alongside seamless integration between the satellite provider and the subscriber’s terrestrial network provider. The service’s success depends on how well the satellite system integrates into the terrestrial operator’s architecture, ensuring that standard cellular functions like authentication, session management, and billing are preserved.

In a 5G network, the core network plays a central role in managing calls and data sessions. For a D2C satellite service, key components of the operator’s core network include the Access and Mobility Management Function (AMF), which handles consumer authentication and signaling. The AMF establishes and maintains connectivity for subscribers connecting via the satellite. Additionally, the Session Management Function (SMF) oversees the session context for data services. It ensures compatibility with the IP Multimedia Subsystem (IMS), which manages call control, routing, and handoffs for voice-over-IP communications. The Unified Data Management (UDM) system, another critical core component, stores subscriber profiles, detailing permissions for satellite use, roaming policies, and Quality of Service (QoS) settings.

To enforce network policies and billing, the Policy Control Function (PCF) applies service-level agreements and ensures appropriate charges for satellite usage. For data routing, elements such as the User Plane Function (UPF) direct traffic between the satellite ground stations and the operator’s core network. Additionally, interconnect gateways manage traffic beyond the operator’s network, such as the Internet or another carrier’s network.

The role of the satellite provider in this architecture depends on the integration model. If the satellite system is fully integrated with the terrestrial operator, the satellite primarily acts as an extension of the operator’s radio access network (RAN). In this case, the satellite provider requires ground stations to downlink traffic from the satellites and forward it to the operator’s core network via secure, high-speed connections. The satellite provider handles radio gateway functionality, translating satellite-specific protocols into formats compatible with terrestrial systems. In this scenario, the satellite provider does not need its own core network because the operator’s core handles all call processing, authentication, billing, and session management.

In a standalone model, where the LEO satellite provider operates independently, the satellite system must include its own complete core network. This requires implementing AMF, SMF, UDM, IMS, and UPF, allowing the satellite provider to directly manage subscriber sessions and calls. In this case, interconnect agreements with terrestrial operators would be needed to enable roaming and off-network communication.

Most current D2C solutions, including those proposed by Starlink with T-Mobile or AST SpaceMobile, follow the integrated model. In these cases, the satellite provider relies on the terrestrial operator’s core network, reducing complexity and leveraging existing subscriber management systems. The LEO satellites are primarily responsible for providing RAN functionality and ensuring reliable connectivity to the terrestrial core.

REGULATORY CHALLENGES.

LEO satellite networks offering Direct-to-Cell (D2C) services face substantial regulatory challenges in their efforts to operate within frequency bands already allocated to terrestrial cellular services. These challenges are particularly significant in regions like Europe and the United States, where cellular frequency ranges are tightly regulated and managed by national and regional authorities to ensure interference-free operations and equitable access among service providers.

The cellular frequency spectrum in Europe and the USA is allocated through licensing frameworks that grant exclusive usage rights to mobile network operators (MNOs) for specific frequency bands, often through competitive auctions. For example, in the United States, the Federal Communications Commission (FCC) regulates spectrum usage, while in Europe, national regulatory authorities manage spectrum allocations under the guidelines set by the European Union and CEPT (European Conference of Postal and Telecommunications Administrations). The spectrum currently allocated for cellular services, including low-band (e.g., 600 MHz, 800 MHz), mid-band (e.g., 1.8 GHz, 2.1 GHz), and high-band (e.g., 3.5 GHz), is heavily utilized by terrestrial operators for 4G LTE and 5G networks.

In March 2024, the Federal Communications Commission (FCC) adopted a groundbreaking regulatory framework to facilitate collaborations between satellite operators and terrestrial mobile service providers. This initiative, termed “Supplemental Coverage from Space,” allows satellite operators to use the terrestrial mobile spectrum to offer connectivity directly to consumer handsets and is an essential component of the FCC’s “Single Network Future.” The framework aims to enhance coverage, especially in remote and underserved areas, by integrating satellite and terrestrial networks. In November 2024, the FCC granted SpaceX approval to provide direct-to-cell services via its Starlink satellites. This authorization enables SpaceX to partner with mobile carriers, such as T-Mobile, to extend mobile coverage using satellite technology. The approval includes specific conditions to prevent interference with existing services and to ensure compliance with established regulations. Notably, the FCC also granted SpaceX’s request to provide service to cell phones outside the United States. For non-US operations, Starlink must obtain authorization from the relevant governments. Non-US operations are authorized in various sub-bands between 1429 MHz and 2690 MHz.

In Europe, the regulatory framework for D2C services is under active development. The European Conference of Postal and Telecommunications Administrations (CEPT) is exploring the regulatory and technical aspects of satellite-based D2C communications. This includes understanding connectivity requirements and addressing national licensing issues to facilitate the integration of satellite services with existing mobile networks. Additionally, the European Space Agency (ESA) has initiated feasibility studies on Direct-to-Cell connectivity, collaborating with industry partners to assess the potential and challenges of implementing such services across Europe. These studies aim to inform future regulatory decisions and promote innovation in satellite communications.

For LEO satellite operators to offer D2C services in these regulated bands, they would need to reach agreements with the licensed MNOs that hold the rights to these frequencies. This could take the form of spectrum-sharing agreements or leasing arrangements, wherein the satellite operator obtains permission to use the spectrum for specific purposes, often under strict conditions to avoid interference with terrestrial networks. For example, SpaceX’s collaboration with T-Mobile in the USA involves utilizing T-Mobile’s existing mid-band spectrum (i.e., PCS1900) under a partnership model, enabling satellite-based connectivity without requiring additional spectrum licensing.

In Europe, the situation is more complex due to the fragmented nature of the regulatory environment. Each country manages its spectrum independently, meaning LEO operators must negotiate agreements with individual national MNOs and regulators. This creates significant administrative and logistical hurdles, as the operator must align with diverse licensing conditions, technical requirements, and interference mitigation measures across multiple jurisdictions. Furthermore, any satellite use of the terrestrial spectrum in Europe must comply with European Union directives and ITU (International Telecommunication Union) regulations, prioritizing terrestrial services in these bands.

Interference management is a critical regulatory concern. LEO satellites operating in the same frequency bands as terrestrial networks must implement sophisticated coordination mechanisms to ensure their signals do not disrupt terrestrial operations. This includes dynamic spectrum management, geographic beam shaping, and power control techniques to minimize interference in densely populated areas where terrestrial networks are most active. Regulators in the USA and Europe will likely require detailed technical demonstrations and compliance testing before approving such operations.

Another significant challenge is ensuring equitable access to spectrum resources. MNOs have invested heavily in acquiring and deploying their licensed spectrum, and many may view satellite D2C services as a competitive threat. Regulators would need to establish clear frameworks to balance the rights of terrestrial operators with the potential societal benefits of extending connectivity through satellites, particularly in underserved rural or remote areas.

Beyond regulatory hurdles, LEO satellite operators must collaborate extensively with MNOs to integrate their services effectively. This includes interoperability agreements to ensure seamless handoffs between terrestrial and satellite networks and the development of business models that align incentives for both parties.

TAKEAWAYS.

Direct-to-cell LEO satellite networks face considerable technological hurdles in providing services comparable to those of terrestrial cellular networks.

  • Overcoming free-space path loss and ensuring uplink connectivity from low-power mobile devices with omnidirectional antennas.
  • Cellular devices transmit at low power (typically 23–30 dBm), making it difficult for uplink signals to reach satellites in LEO at 500–1,200 km altitudes.
  • Uplink signals from multiple devices within a satellite beam area can overlap, creating interference that challenges the satellite’s ability to separate and process individual uplink signals.
  • Developing advanced phased-array antennas for satellites, dynamic beam management, and low-latency signal processing to maintain service quality.
  • Managing mobility challenges, including seamless handovers between satellites and beams and mitigating Doppler effects due to the high relative velocity of LEO satellites.
  • The high relative velocity of LEO satellites introduces frequency shifts (i.e., Doppler Effect) that the satellite must compensate for dynamically to maintain signal integrity.
  • Addressing bandwidth limitations and efficiently reusing spectrum while minimizing interference with terrestrial and other satellite networks.
  • Scaling globally may require satellites to carry varied payload configurations to accommodate regional spectrum requirements, increasing technical complexity and deployment expenses.
  • Operating on terrestrial frequencies necessitates dynamic spectrum sharing and interference mitigation strategies, especially in densely populated areas, limiting coverage efficiency and capacity.
  • Managing the frequent replacement of LEO satellites due to their shorter lifespans, which increases operational complexity and cost.

On the regulatory front, integrating D2C satellite services into existing mobile ecosystems is complex. Spectrum licensing is a key issue, as satellite operators must either share frequencies already allocated to terrestrial mobile operators or secure dedicated satellite spectrum.

  • Securing access to shared or dedicated spectrum, particularly negotiating with terrestrial operators to use licensed frequencies.
  • Avoiding interference between satellite and terrestrial networks requires detailed agreements and advanced spectrum management techniques.
  • Navigating fragmented regulatory frameworks in Europe, where national licensing requirements vary significantly.
  • Spectrum Fragmentation: With frequency allocations varying significantly across countries and regions, scaling globally requires navigating diverse and complex spectrum licensing agreements, slowing deployment and increasing administrative costs.
  • Complying with evolving international regulations, including those to be defined at the ITU’s WRC-27 conference.
  • Developing clear standards and agreements for roaming and service integration between satellite operators and terrestrial mobile network providers.
  • The high administrative and operational burden of scaling globally diminishes economic benefits, particularly in regions where terrestrial networks already dominate.
  • While satellites excel in rural or remote areas, they might not meet high traffic demands in urban areas, restricting their ability to scale as a comprehensive alternative to terrestrial networks.

The idea of D2C satellite networks making terrestrial cellular networks obsolete is ambitious but fraught with practical limitations. While LEO satellites offer unparalleled reach in remote and underserved areas, they struggle to match terrestrial networks’ capacity, reliability, and low latency in urban and suburban environments. The high density of base stations in terrestrial networks enables them to handle far greater traffic volumes, especially for data-intensive applications.

  • Coverage advantage: Satellites provide global reach, particularly in remote or underserved regions, where terrestrial networks are cost-prohibitive and often of poor quality or altogether lacking.
  • Capacity limitations: Satellites struggle to match the high-density traffic capacity of terrestrial networks, especially in urban areas.
  • Latency challenges: Satellite latency, though improving, cannot yet compete with the ultra-low latency of terrestrial 5G for time-critical applications.
  • Cost concerns: Deploying and maintaining satellite constellations is expensive, and they still depend on terrestrial core infrastructure (although the savings, if all terrestrial RAN infrastructure could be avoided, would also be very substantial).
  • Complementary role: D2C networks are better suited as an extension to terrestrial networks, filling coverage gaps rather than replacing them entirely.

The regulatory and operational constraints surrounding the use of terrestrial mobile frequencies for D2C services severely limit scalability. This fragmentation makes it difficult to achieve global coverage seamlessly and increases operational and economic inefficiencies. While D2C services hold promise for addressing connectivity gaps in remote areas, their ability to scale as a comprehensive alternative to terrestrial networks is hampered by these challenges. Unless global regulatory harmonization or innovative technical solutions emerge, D2C networks will likely remain a complementary, sub-scale solution rather than a standalone replacement for terrestrial mobile networks.

FURTHER READING.

  1. Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog, (March 2024).
  2. Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog, (January 2024).
  3. Kim K. Larsen, “A Single Network Future.” Techneconomyblog, (March 2024).
  4. T.S. Rappaport, “Wireless Communications – Principles & Practice,” Prentice Hall (1996). In my opinion, it is one of the best graduate textbooks on communications systems. I bought it back in 1999 as a regular hardcover. I have not found it as a Kindle version, but I believe there are sites where a PDF version may be available (e.g., Scribd).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

What Lies Beneath.

On the early morning of November 17, 2024, the Baltic Sea was shrouded in a dense, oppressive fog that clung to the surface like a spectral veil. The air was thick with moisture, and visibility was severely limited, reducing the horizon to a mere shadowy outline. The sea itself was eerily calm. This haunting stillness set the stage for the unforeseen disruption of the submarine cables. This event would send ripples of concern about hybrid warfare far beyond the misty expanse of the Baltic. The quiet depths of the Baltic Sea have become the stage for a high-tech mystery gripping the world. Two critical submarine cables were severed, disrupting communication in a rare and alarming twist.

As Swedish media outlet SVT Nyheter broke the news, suspicions of sabotage began to surface. Adding fuel to the intrigue, a Chinese vessel quickly became the focus of investigators: the ship of interest, Yi Peng 3, had reportedly been near both breakpoints at the critical moments. While submarine cable damage is not uncommon, the simultaneous failure of two cables, separated by distance but broken within the same maritime zone, is perceived as an extraordinarily rare event, which raised suspicions of foul play and of hybrid-warfare actions against Western critical infrastructure.

Against the backdrop of escalating geopolitical tensions, speculation is rife. Could these breaks signal a calculated act of sabotage? As the investigation unfolds, the presence of the Chinese vessel looms large, now lying at anchor in international waters in the Danish Kattegat, turning a routine disruption into a high-stakes drama that may be redefining maritime security in our digital age.

Signe Ravn-Højgaard, Director of the Danish Think Tank for Digital Infrastructure, has been at the forefront, with her timely LinkedIn Posts, delivering near real-time updates that have kept experts and observers alike on edge.

Let’s count to ten, look at what we know so far, and at the same time revisit some subsea cable fundamentals.

WHY DO SUBMARINE CABLES BREAK?

Distinguishing between natural causes, unintended human actions, and deliberate human actions in the context of submarine cable breaks requires analyzing the circumstances and evidence surrounding the incident.

Natural causes generally involve geological or environmental events such as earthquakes, underwater landslides, strong currents, or seabed erosion. In the Arctic, icebergs may scrape the seabed as they drift or ground in shallow waters, potentially dragging across and crushing cables in their path. These causes often coincide with measurable natural phenomena, like seismic activity, seasonal ice, or extreme weather events, in the area of the cable break. According to data from the International Cable Protection Committee (ICPC), ca. 5% of faults are caused by natural phenomena, such as earthquakes, underwater landslides, iceberg drift, or volcanic activity.

The aging of submarine cables adds to their vulnerability. Wear and tear, corrosion, and material degradation due to long-term exposure to seawater can lead to failures, especially in decades-old cables. In some cases, the damage may also stem from improper installation or manufacturing defects, where weak points in the cable structure result in premature failure.

Unintended human actions are characterized by accidental interference with cables, often linked to maritime activities. Examples include ship anchor dragging, fishing vessel trawling, or accidental damage during underwater construction or maintenance. These incidents typically occur in areas of high maritime traffic or during specific operations and lack any indicators of malicious intent. Approximately 40% of subsea cable faults are caused by anchoring and fishing activities, the most common human-induced risks. Another 45% of faults have unspecified causes, which could include a mix of factors. Upwards of 87% of all faults are a result of human intervention.

While necessary, maintenance and repair operations can also introduce risks. Faulty repairs, crossed cables, or mishandling during maintenance can create new vulnerabilities. Underwater construction activities, such as dredging, pipeline installation, or offshore energy projects, may inadvertently damage cables.

Deliberate human actions, by contrast, involve intentional interference with submarine cables and are usually motivated by sabotage, espionage, or geopolitical strategies; they are, by all means, the stuff of the most interesting stories. These cases often feature evidence of targeted activity, such as patterns of damage suggesting deliberate cutting or tampering. Unexplained or covert vessel movements near critical cable routes may also indicate intentional actions. A deliberate action may, of course, be disguised as accidental interference (e.g., anchor dragging or trawling).

Although much focus is on the integrity of the subsea cables themselves, which is natural due to the complexity and time it takes to repair a broken cable, it is wise to remember that landing stations, beach manholes, and associated operational systems are likewise critical components of the submarine cable infrastructure and are vulnerable to deliberate hostile actions as well. Cyber exposure in network management systems, which are often connected to the internet, presents an additional risk, making these systems potential targets for sabotage, espionage, or cyberattacks. Strengthening the physical security of these facilities and enhancing cybersecurity measures are essential to mitigate these risks.

Landing stations and submarine cable cross-connects, or T-junctions, are critical nodes in the global communications infrastructure, making them particularly vulnerable to deliberate attacks. A compromise at a landing station could disrupt multiple cables simultaneously, severing regional or international connectivity. At the same time, an attack on a T-junction could disable critical pathways, bypassing redundancy mechanisms and amplifying the impact of a single failure. These vulnerabilities highlight the need for enhanced physical security, robust monitoring, and advanced cybersecurity measures to safeguard these vital points due to their disproportionate impact if compromised.

Although deliberate human actions are increasingly considered a severe risk with the current geopolitical climate, their frequency and impact are not well-documented in the report. Most known subsea cable incidents remain attributed to accidental causes, with sabotage and espionage considered significant but less quantified threats.

Categorizing cable breaks involves gathering data on the context of the incident, including geographic location, timing, activity logs from nearby vessels, and environmental conditions. Combining this information with forensic analysis of the damaged cable helps determine whether the cause was natural, accidental, or deliberate.

WHY ARE SUBMARINE CABLES CRITICAL INFRASTRUCTURE?

Submarine cables are indispensable to modern society and should be regarded as critical infrastructure because they enable global connectivity and support essential services. These cables carry approximately 95% of international data traffic, forming the backbone of the Internet, financial systems, and communications. Their reliability underpins industries, governments, and economies worldwide, making disruptions highly consequential. For example, the financial sector relies heavily on submarine cables for instantaneous transactions and stock trading, while governments depend on them for secure communications and national security operations. With limited viable alternatives, such as satellites, which lack the bandwidth and speed of submarine cables, these cables are uniquely vital.

Given their importance, submarine cable networks are designed with significant redundancy and safeguards to ensure resilience. Multiple cable routes exist for most major data pathways, ensuring that a single failure does not result in widespread disruptions. For example, transatlantic communications are supported by numerous parallel cables. Regional systems, such as those in Europe and North America, are highly interconnected, offering alternative routes to reroute traffic during outages. Advanced repair capabilities, including specialized cable-laying and repair ships, ensure timely restoration of damaged cables. Additionally, internet service providers and data centers use sophisticated traffic-routing protocols to minimize the impact of localized disruptions. Ownership and maintenance of these cables are often managed by consortia of telecom and technology companies, enhancing their robustness and shared responsibility for maintenance.

It is worth considering, for operators and customers of submarine cables, that using multiple parallel submarine cables drastically improves the overall availability of the network. Assuming, for illustration, 99.9% availability per cable and independent failures, two cables reduce downtime to mere seconds annually (99.9999% availability and a maximum of about 32 seconds of annual downtime), and with three cables it becomes negligible (99.9999999% availability and a maximum of roughly 0.03 seconds of annual downtime). This enhanced reliability ensures that critical services remain uninterrupted even if one cable experiences a failure. Such setups are ideal for organizations or infrastructures that require near-perfect availability. To mitigate the impact of deliberate hostile actions on submarine cable traffic, operators must adopt a geographically strategic approach when designing redundancy and robustness, considering both the physical and logical connectivity and transport.
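The arithmetic behind those availability figures is straightforward; a small sketch, assuming for illustration 99.9% availability per cable and statistically independent failures:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def parallel_availability(per_cable_availability: float, n_cables: int) -> float:
    """Availability of n independent parallel cables (service is up if any one is up)."""
    return 1 - (1 - per_cable_availability) ** n_cables

for n in (1, 2, 3):
    a = parallel_availability(0.999, n)        # 99.9% per cable is an assumption
    downtime_s = (1 - a) * SECONDS_PER_YEAR
    print(f"{n} cable(s): {a:.9f} availability, ~{downtime_s:,.2f} s downtime/year")
# 1 cable -> ~8.8 hours/year, 2 cables -> ~32 s/year, 3 cables -> ~0.03 s/year
```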

While the submarine cable network is inherently robust, users of this infrastructure must adopt proactive measures to safeguard their services and traffic. Organizations should distribute data across multiple cables to mitigate risks from localized outages and invest in cloud-based redundancy with geographically dispersed data centers to ensure continuity. Collaborative monitoring efforts between governments and private companies can help prevent accidental or deliberate damage, while security measures for cable landing stations and undersea routes can reduce vulnerabilities. By acknowledging the strategic importance of submarine cables and implementing such safeguards, users can help ensure the continued resilience of this critical global infrastructure.

1-2 KNOCKOUT!

So what happened underneath the Baltic Sea last weekend (between 17 and 18 November)?

In mid-November 2024, two significant submarine cable disruptions occurred in the Baltic Sea, raising concerns over the security of critical infrastructure in the region. The first incident involved the BCS East-West Interlink cable, which connects Lithuania to Sweden. On November 17, at approximately 10:00 AM local time (08:00 UTC), the damage was detected. The cable runs from Sventoji, Lithuania, to Katthammarsvik on the east coast of the Swedish island of Gotland. Telia Lithuania, a telecommunications company, reported that the cable had been “cut,” leading to substantial communication disruptions between Lithuania and Sweden.

The second disruption occurred the following day, on November 18, around midnight (note: exact time seems to be uncertain), involving the C-Lion1 cable connecting Finland to Germany. The damage was identified off the coast of the Swedish island of Öland. Finnish telecommunications company Cinia Oy reported that the cable had been physically interrupted by an unknown force, resulting in a complete outage of services transmitted via this cable.

The reactions from affected nations have highlighted the seriousness of these events. In Germany, Defense Minister Boris Pistorius stated that the damage appeared to be the result of sabotage, emphasizing the unlikelihood of it being accidental. In Finland, Foreign Minister Elina Valtonen expressed deep concern, stressing the importance of protecting such vital infrastructure. Sweden initiated a formal investigation into the disruptions, with the Swedish Prosecution Authority opening a case under suspicion of sabotage.

The timeline of these events begins on November 17, with the detection of damage to the BCS East-West Interlink cable, followed by the discovery of the severed C-Lion1 cable on November 18. Geographically, both incidents occurred in the Baltic Sea, with the East-West Interlink cable between Lithuania and Sweden and the C-Lion1 cable connecting Finland and Germany. The breaks were specifically detected near the Swedish islands of Gotland and Öland.

These disruptions have led to heightened security measures and widespread investigations in the Baltic region as authorities seek to determine the cause and safeguard critical submarine cable infrastructure. Concerns over potential sabotage have intensified discussions among NATO members and their allies, underscoring the geopolitical implications of such vulnerabilities.

THE SITUATION.

The figure below provides a comprehensive overview of submarine cables in the Baltic Sea and Scandinavia. Most media coverage has shown only the two compromised submarine cables, BCS East-West Interlink (RFS: 1997) and C-Lion1 (RFS: 2016), which may create the impression that those two are the only subsea cables in the Baltic. This is not the case, as shown below. This does not diminish the seriousness of the individual submarine cable breaks but illustrates that alternative routes may be available until the compromised cables have been repaired.

The figure also shows the areas where the two submarine cables appear to have been broken and the approximate timeline for when the cable operators noticed that the cables were compromised. Compared to the BCS East-West Interlink, the media coverage of the C-Lion1 break is somewhat less clear about the exact time and location of the break. This is obviously very important information, as it can be correlated with the position of the vessel of interest that is currently under investigation for causing the two breaks.

It should be noted that the Baltic Sea area has a considerable number of individual submarine cables. A few of those run very near the two broken ones or cross the busy shipping routes that vessels take through the Baltic Sea.

Using the MarineTraffic tracker (note: there are other alternatives; I like this one), I can get an impression of the maritime traffic around the submarine breaks at the approximate time frames when the breaks were discovered. The figure below shows the marine traffic around the BCS East-West Interlink, which runs from Gotland (Sweden) to Sventoji (Lithuania) across the Baltic Sea with a cable length of 218 km.

The route between Gotland and the Baltic States, also known as the Central Baltic Sea, is one of the busiest sea routes in the world, with more than 30 thousand vessels passing through annually. Around the BCS East-West Interlink subsea cable break, ca. 10+ maritime vessels were passing at the time of the break. The only Chinese ship at that time and location was Yi Peng 3 (a bulk carrier), also mentioned in the press a couple of hours ago.

Some hours later, between 23:00 and 01:00 UTC, “Yi Peng 3” was crossing the area of the second cable break, at a time that also seems to match when the C-Lion1 outage was observed. See the figure below, with the red circle pinpointing the Chinese vessel. Again, “Yi Peng 3” was the only Chinese vessel in the area at the possible time of the cable break. It is important to note, as also shown in the figure below, that there were many other ships in the area and in the neighborhood of both the Chinese vessel and the location of the C-Lion1 submarine cable.

Using the MarineTraffic website’s historical data, I have mapped out the “Yi Peng 3” route up through the Baltic Sea to the Russian port of Ust-Luga and back out of the Baltic Sea, including the path and timing of its presence around the two cable breaks, which coincides with the time of the reported outages.

If one examines the Chinese vessel’s speed relative to the other vessels’ speeds, it would appear that “Yi Peng 3” is the only vessel that matches both break locations and both time intervals of the breaks. I would like to emphasize that such an assessment is limited to the data in the MarineTraffic database (that I am using) and may obviously be a coincidence, irrespective of how one judges the likelihood of that. Also, even if the Chinese vessel of interest should be found to have caused the two submarine cable breaks, it may not have been a deliberate act.

“Yi Peng 3’s” current status (2024-11-20 12:41 UTC+1) is that it has stopped at anchor in international waters in the Danish Kattegat (see the figure below). The “Yi Peng 3” seems to have stopped there of its own volition and supposedly not at the order of local authorities.

There are many rumors circulating about the Chinese vessel. It was earlier reported that a Danish pilot was placed on the vessel as of yesterday evening, November 19 (2024). This also agrees with the official event entry and timestamp recorded by MarineTraffic. In the media, this event was initially misconstrued as Danish maritime authorities having taken control of the Chinese vessel, which later appears not to have been the case.

Danish waters, including the Kattegat, are part of a region where licensed pilotage (by a “lods” in Danish) is commonly required or strongly recommended for vessels of specific sizes or types, especially when navigating congested or challenging areas. The presence of a licensed pilot entry in the log reinforces that the vessel’s activities during this phase of its journey align with standard operating procedures.

However, this does not exclude the need for further scrutiny, as other aspects of the vessel’s behavior, such as voluntary stops or deviations from its planned route, should still warrant investigation. If for nothing else, an inquiry should ensure that sufficient information is available for insurance to take effect and compensate the submarine cable owners for the damages and the cost of repairing the cables. If “Yi Peng 3” did not stop its journey due to intervention from the Danish maritime authorities, it may have done so at the request of the protection & indemnity (P&I) insurer that the owner of “Yi Peng 3” should have in place.

WHAT DOES IT TAKE TO CUT A SUBMARINE CABLE?

To break a submarine cable, a ship typically needs to generate significant force. This is often caused by an anchor’s unintentional or accidental deployment while the ship is underway. The ship’s momentum plays a crucial role, determined by its mass and speed. A large, heavily loaded vessel moving at even moderate speeds, such as 6 knots, generates immense kinetic energy. Suppose an anchor is deployed in such conditions. In that case, the combination of drag, weight, and momentum can create concentrated forces capable of damaging or severing a cable.

The anchor’s characteristics are equally critical. A large, sharp anchor with heavy flukes can snag a cable, especially if the cable is exposed on the seabed or poorly buried. As the ship continues to move forward, the dragging anchor might stretch, lift, or pierce the cable’s protective layers. If the ship is in an area with soft seabed sediment like mud or sand, the anchor has a better chance of digging in and generating the necessary tension to break the cable. On harder or rocky seabed, the anchor might skip, but this can still result in abrasion or localized stress on the cable.

The BCS East-West Interlink cable, the first submarine cable to break, connecting Lithuania and Sweden, is laid at depths ranging from approximately 100 to 150 meters. In these depths, the seabed is predominantly composed of soft sediments, including sand and mud, which can shift over time due to currents and sediment deposition. Such conditions can lead to sections of the cable becoming exposed, increasing their vulnerability to external impacts like anchoring. The C-Lion1 cable, the second subsea cable to break, is situated at shallower depths of about 20 to 40 meters. In these areas, the seabed may consist of a combination of soft sediments and harder materials, such as clay or glacial till. The presence of harder substrates can pose challenges for cable burial and protection, potentially leaving segments of the cable exposed and susceptible to damage from external forces.

The vulnerability of the cable is also a factor. Submarine cables are typically armored and buried under 1–2 meters of sediment near shorelines, but in deeper waters, they are often exposed due to technical challenges in burial. An exposed cable is particularly at risk, especially if it is old or has been previously weakened by sediment movement or other physical interactions.

When a submarine cable break occurs, one would typically analyze maritime vessels in the vicinity of the break. A vessel’s AIS signals can provide telltale signs. AIS transmits a vessel’s speed, position, and heading at regular intervals, which can reveal anomalies in its movement. If a ship accidentally deploys its anchor:

  • Speed Changes: The vessel’s speed would begin to decrease unexpectedly as the anchor drags along the seabed, creating resistance. This deceleration might be gradual or abrupt, depending on the seabed type and the tension in the anchor chain. In an extreme case, the speed could drop from cruising speeds (e.g., 6 knots) to near zero as the ship comes to a stop.
  • Position Irregularities: If the anchor snags a cable, the AIS track may show deviations from the expected path. The ship might veer slightly off course or experience irregular movement due to the uneven drag caused by the cable interaction.
  • Stop or Slow Maneuvers: If the anchor creates substantial resistance, the vessel might halt entirely, leaving a stationary position in the AIS record for a prolonged period.

Additionally, position data from the AIS might reveal whether the ship was operating near known submarine cable routes. This is significant because anchoring is typically restricted in these zones, and any AIS data showing activity or stops within these areas would be a red flag. The figure below illustrates Yi Peng 3‘s AIS signal, using available data from MarineTraffic, from the 16th to the 18th of November (2024). It is apparent that there are long time gaps in the AIS transmissions on both the 17th and the 18th, whereas prior to those dates the AIS was transmitted approximately every 2 minutes. Apart from the AIS silence at around 8 AM on the 17th of November, the AIS gaps coincide with significant speed drops over the period, indicating that Yi Peng 3 would have been at or near a standstill.
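A simple way to surface such anomalies is to scan the AIS position reports for unusually long transmission gaps that coincide with sharp speed drops; a minimal sketch over a hypothetical list of reports (the timestamps and speeds below are made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical AIS reports: (UTC timestamp, speed over ground in knots).
reports = [
    (datetime(2024, 11, 17, 7, 50), 7.8),
    (datetime(2024, 11, 17, 7, 52), 7.7),
    (datetime(2024, 11, 17, 9, 40), 0.4),   # long gap ending at near-standstill
    (datetime(2024, 11, 17, 9, 42), 6.1),
]

GAP_THRESHOLD = timedelta(minutes=10)   # normal reporting cadence is ~2 minutes
SLOW_SPEED_KNOTS = 1.0

for (t0, v0), (t1, v1) in zip(reports, reports[1:]):
    gap = t1 - t0
    if gap > GAP_THRESHOLD:
        flag = " (near standstill)" if min(v0, v1) < SLOW_SPEED_KNOTS else ""
        print(f"AIS gap of {gap} ending {t1:%Y-%m-%d %H:%M} UTC{flag}")
```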

Environmental and human factors further compound the situation. Strong currents, storms, or poor visibility might increase the likelihood of accidental anchoring or of a missed restriction. Human error, such as improper navigation or ignoring marked cable zones, can also lead to such incidents. Once the anchor catches the cable, the tension forces can grow until the cable is pulled from its burial or snaps, the latter suddenly reducing the drag and thereby increasing the ship’s stopping distance.

When considering the scenario where the Yi Peng 3, a large bulk carrier with a displacement of approximately 75,169 tons, drops its anchor while traveling at a speed of 6 knots (~3.1 m/s), the stopping dynamics vary significantly depending on whether or not the anchor snags a submarine cable. Using simple mathematical modeling, we can estimate the expected stopping time and distance in both cases, assuming specific conditions for the ship and the cable. The anchor deployment generates a drag force that depends on the seabed conditions (as discussed above) and on whether the anchor catches a submarine cable. When no submarine cable is involved, the drag force generated by the anchor is estimated at 1.5 meganewtons (MN), a typical value for large vessels on soft seabeds (e.g., mud or sand). If the ship’s anchor catches a submarine cable, the resistance force effectively doubles to 3 MN, assuming the cable resists the anchor’s pull consistently until the ship stops or the cable eventually breaks (cables usually do break, as the ship’s kinetic energy is far greater than the energy needed to shear them).

When the anchor drags along the seabed without encountering a cable, the stopping time is approximately 2.5 minutes, and the ship travels roughly 250 meters before coming to a complete stop. This deceleration is driven solely by the drag force of the anchor interacting with the seabed. However, if the anchor catches a submarine cable, the stopping time is reduced to around a minute, and the stopping distance shortens to a little over 100 meters. This reduction occurs because the resistance force doubles, significantly increasing the rate of deceleration. If the cable breaks, the ship might accelerate slightly as the anchor loses the additional drag from the cable, extending the stopping distance compared to a scenario where the cable holds until the ship stops; the ship might also veer slightly off course as the anchor suddenly comes free. Due to the time scale involved, e.g., 1 to 3 minutes, such an event would be difficult to observe in real time, as the AIS transmit cycle could be longer. However, getting from a standstill back to an operating speed of 6 knots would realistically take up to 40 minutes, including anchor recovery, under normal operating conditions. If the anchor has become entangled in the submarine cable, it may take substantially longer to recover it and continue the journey (even if the crew “forgets” to notify the authorities, as they would be obliged to do). In “desperation”, the vessel may abandon the entangled anchor and rely on its other anchor for redundancy (larger vessels typically carry two anchors, a port anchor and a starboard anchor).
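
For what it is worth, the stopping figures above follow from a simple constant-deceleration model (a = F/m, t = v/a, d = v²/2a). Here is a minimal sketch using the displacement, speed, and drag forces assumed above; real stopping behaviour would of course also depend on hull resistance, chain dynamics, and seabed conditions:

```python
KNOT = 0.5144                 # meters per second per knot

mass_kg = 75_169_000          # ~75,169 tonnes displacement
speed_ms = 6 * KNOT           # ~3.1 m/s at 6 knots

def stopping(force_newton):
    """Stopping time (s) and distance (m) under a constant retarding force."""
    decel = force_newton / mass_kg            # a = F / m
    time_s = speed_ms / decel                 # t = v / a
    distance_m = speed_ms ** 2 / (2 * decel)  # d = v^2 / (2a)
    return time_s, distance_m

for label, force in [("anchor dragging on soft seabed", 1.5e6),
                     ("anchor snagging a submarine cable", 3.0e6)]:
    t, d = stopping(force)
    print(f"{label}: ~{t / 60:.1f} min, ~{d:.0f} m")
```

Running this lands close to the rounded figures quoted above (about 2.6 minutes and 240 meters without a cable, about 1.3 minutes and 120 meters with one).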

When a submarine cable breaks during interaction with a ship, it is usually due to excessive tensile forces that exceed the cable’s strength. Conditions such as the ship’s size and speed, the cable’s vulnerability, and the seabed characteristics all contribute to the likelihood of a break. Once the cable snaps, it drastically changes the dynamics of the ship’s deceleration, often leading to increased stopping distances and posing risks to both the cable and the ship’s anchoring equipment. Understanding these dynamics is critical for assessing incidents involving submarine cables and maritime vessels.

If the Yi Peng 3 accidentally dropped its anchor while sailing at 6 knots, it is highly plausible that the anchor could sever the BCS East-West Interlink submarine cable. The ship’s immense kinetic energy (350+ megajoules), combined with the forces exerted by the dragging anchor, far exceeds the energy required to break the cable (roughly 70 kilojoules for a 50 mm thick cable).
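
For completeness, the kinetic-energy comparison behind that statement is simple arithmetic, using the ~75,169-tonne displacement and 6 knots (~3.1 m/s) assumed above:

$E_{kin} \; = \; \frac{1}{2} m v^2 \; = \; \frac{1}{2} \times 75{,}169{,}000 \,\mathrm{kg} \times (3.1 \,\mathrm{m/s})^2 \; \approx \; 3.6 \times 10^8 \,\mathrm{J} \; \approx \; 360 \,\mathrm{MJ}$

That is more than three orders of magnitude above the ~70 kJ shear energy, so it is the cable, rather than the ship, that gives way.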

ACTUAL TRAFFIC IMPACT OF THE BALTIC SEA CABLE CUTS?

The RIPE NCC conducted an analysis using data from RIPE Atlas, a global network of measurement probes, to assess the impact of these cable cuts. The study focused on latency and packet loss between RIPE Atlas anchors in the countries connected by the damaged cables. Their key findings were:

  • BCS East-West Interlink Cut (Sweden-Lithuania): Approximately 20% of the measured paths between Sweden and Lithuania exhibited significant increases in latency following the cable cut. However, no substantial packet loss was detected, indicating that while some routes experienced delays, data transmission remained largely intact.
  • C-Lion1 Cut (Finland-Germany): About 30% of the paths between Finland and Germany showed notable latency increases after the incident. Similar to the BCS cut, there was no significant packet loss observed, suggesting that alternative routing effectively maintained data flow despite the increased delays.

The analysis concluded that the internet demonstrated a degree of resilience by rerouting traffic through alternative paths, mitigating the potential impact of the cable disruptions. As discussed in this article, the RIPE NCC analysis highlights the importance of maintaining and securing multiple connections to ensure robust internet connectivity. It also makes clear that those technically responsible need to consider latency when choosing alternative routes, as some customer applications may be critically sensitive to excessive latency (e.g., payment and certain banking applications, real-time communications such as Zoom, Teams, and Google Meet, financial trading, etc.).
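
As a back-of-the-envelope illustration (this is not the RIPE NCC’s actual pipeline; the round-trip-time samples and the 150 ms application budget below are made up), the before/after comparison described above boils down to a few lines:

```python
from statistics import median

# Hypothetical RTT samples (ms) between two measurement anchors,
# before and after a cable cut; None marks a lost probe packet.
before = [22, 23, 22, 24, 23, None, 22]
after = [41, 43, 40, 42, None, 44, 41]

APP_LATENCY_BUDGET_MS = 150   # assumed threshold for a latency-sensitive application

def summarize(samples):
    """Median RTT of successful probes and the packet-loss fraction."""
    ok = [s for s in samples if s is not None]
    return median(ok), 1 - len(ok) / len(samples)

rtt_before, loss_before = summarize(before)
rtt_after, loss_after = summarize(after)

print(f"median RTT: {rtt_before} ms -> {rtt_after} ms "
      f"(+{100 * (rtt_after - rtt_before) / rtt_before:.0f}%)")
print(f"packet loss: {loss_before:.1%} -> {loss_after:.1%}")
print("within application budget:", rtt_after <= APP_LATENCY_BUDGET_MS)
```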

While media often highlights that security- and intelligence-sensitive information (e.g., diplomatic traffic, defense-related traffic, …) may be compromised in case of a submarine cable cut, it seems to me highly unlikely that such information would rely solely on a single submarine cable connection without backups (e.g., satellites communications, dedicated secure networks, air-gapped systems, route diversity, …) or contingencies. Best practices in network design and operational security prioritize redundancy, especially for sensitive communications.

Anyway, military and diplomatic communications are rarely entrusted solely to submarine cables. High-value networks, like those used by NATO or national defense agencies, integrate (a) high-capacity, low-latency satellite links as failover, (b) secure terrestrial routes, and (c) cross-border fiber agreements with trusted partners.

WHAT IS THE RISK?

Below is a simple example of a risk assessment model, with the rose color illustrating the risk category into which the two sea cables, BCS East-West Interlink and C-Lion1, might fall. This should really be seen as an illustration, and the actual probability ranges may not reflect reality. Fortunately, such incidents are rare, which also means we have only limited data from which to build a more rigorous risk assessment or incident-probability model. In the illustration below, I differentiate between Baseline Risk, which represents the risk of a subsea cable break due to natural causes, including unintentional human-caused breaks, and Sabotage Risk, which represents deliberately caused submarine cable breaks due to actual warfare or hybrid/pseudo-warfare.

The annual occurrence of 100 to 200 cable breaks (out of the ca. 600 cables in service) translates to a break rate of approximately 0.017% to 0.033% per kilometer each year. This low percentage underscores the robustness of the submarine cable infrastructure despite the challenges posed by natural events and human activities.

With the table above, one could, in principle, estimate the likelihood of a cable break due to natural causes and the additional probability of breaks attributed to deliberate actions, thereby forming an overall estimate of the break risk for a particular submarine cable. This might look like the following (or a lot more complex than this;-):

$P_{Baseline} \; = \; \beta_0 \; + \; \beta_1 L \; + \; \beta_2 e^{\alpha A} \; + \; \beta_3 M \; + \; \beta_4 F \; + \; \beta_5 S \; + \; \beta_6 I \; + \; \beta_7 \frac{1}{D} \; + \; \beta_8 \frac{1}{C} \; = \; \beta_0 \; + \; \sum_{i=1}^{n} \beta_i \cdot P_i$

$P_{Sabotage} \; = \; \gamma_1G \; + \; \gamma_2O$

$P_{Cable \; break} \; = \; P_{Baseline} \; + \; P_{Sabotage}$

For the BCS East-West Interlink break, we can make the following high-level assessment of the Baseline risk of a break. The BCS East-West Interlink submarine cable, connecting Sventoji, Lithuania, and Katthammarsvik, Sweden, spans the Baltic Sea, which is characterized by moderate marine traffic and relatively shallow waters.

The Baseline Probability considerations amount to the following:

  • Cable Length: Shorter cables generally have a lower risk of breaks.
  • Cable Age: Older cables, or cables previously weakened by sediment movement, are more prone to failure.
  • Marine Traffic Density: The Baltic Sea experiences moderate marine traffic, which can increase the likelihood of accidental damage from anchors or fishing activities.
  • Fishing Activity: The area has moderate fishing activity, posing a potential risk to submarine cables.
  • Seismic Activity: The Baltic Sea is geologically stable, indicating a low risk from seismic events.
  • Iceberg Activity: The likelihood of an iceberg causing a submarine cable break in the Baltic Sea, particularly in the areas where recent disruptions were observed, is virtually nonexistent.
  • Depth of Cable: The cable lies in relatively shallow waters, making it more susceptible to human activities.
  • Cable Armoring: If the cable is well-armored, it will be more resistant to physical damage.

As an illustration, here are the specifics of the Baseline Risk with assumed β-weights applied to the midpoint probabilities from the table above.

  • Cable Length (L): 0.1 × 0.15 = 0.015
  • Cable Age (A): 0.15 × 0.10 = 0.015
  • Marine Traffic (M): 0.2 × 0.25 = 0.05
  • Fishing Activity (F): 0.175 × 0.15 = 0.02625
  • Seismic Activity (S): 0.075 × 0.02 = 0.0015
  • Iceberg Activity (I): 0 × 0.01 = 0
  • Depth (D): 0.375 × 0.02 = 0.0075
  • Armoring (C): 0.15 × 0.1 = 0.015

Summing these Baseline contributions:

$P_{Baseline} \; = \; 0.015 \; + \; 0.015 \; + \; 0.05 \; + \; 0.02625 \; + \; 0.0015 \; + \; 0 \; + \; 0.0075 \; + \; 0.015 \; \approx \; 0.13$

That is, roughly a 13% (0.060% per km) baseline probability per year of experiencing a cable break from causes that are not deliberate.
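
The same arithmetic in a few lines of code, using the assumed β-weights and midpoint probabilities listed above (with β₀ taken as zero, as in the worked example) and the cable’s ~218 km length:

```python
# Assumed beta-weights and midpoint probabilities from the illustration above.
factors = {
    "cable length":   (0.100, 0.15),
    "cable age":      (0.150, 0.10),
    "marine traffic": (0.200, 0.25),
    "fishing":        (0.175, 0.15),
    "seismic":        (0.075, 0.02),
    "iceberg":        (0.000, 0.01),
    "depth":          (0.375, 0.02),
    "armoring":       (0.150, 0.10),
}

CABLE_LENGTH_KM = 218  # BCS East-West Interlink

# beta_0 (the intercept) is taken as zero in this illustration.
p_baseline = sum(beta * p for beta, p in factors.values())
print(f"P_baseline ~ {p_baseline:.3f} per year "
      f"({100 * p_baseline / CABLE_LENGTH_KM:.3f}% per km)")
```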

Estimated Baseline Probability Range:

Considering all the above factors, and using the minimum and maximum factor probabilities, the baseline probability of a break in the BCS East-West Interlink cable is estimated to be in the low to moderate range, approximately 7.35% (0.034% per km) to 18.7% (0.086% per km) per year. This estimate accounts for the moderate marine and fishing activity, the shallow depth, and the assumption of standard protective measures. Note also that this is below the average per-cable break likelihood of between 17% and 33% (i.e., 100 to 200 breaks per year out of ca. 600 cables).

Given the geopolitical tensions, the cable’s accessible location, and recent incidents, the likelihood of sabotage against the BCS East-West Interlink is moderate to high. Implementing robust security measures and continuous monitoring is essential to mitigate this risk. The available media reporting indicates that monitoring of this sea cable was good; based on the same reporting, the same may not be said of the C-Lion1 submarine cable, owned by Cinia Oy, although that cable is also substantially longer than the BCS one (1,172 vs. 218 km).

The European Union Agency for Cybersecurity (ENISA) published a report in July 2023 titled “Subsea Cables – What is at Stake?”. The ICPC’s (International Cable Protection Committee) categorization of cable faults shows that approximately 40% of subsea cable faults are caused by anchoring and fishing activities, the most common human-induced risks. Another 45% of faults have unspecified causes, which could include a mix of factors. Around 87% of all faults result from human intervention, either through unintentional actions like fishing and anchoring or deliberate malicious activities. A further 4% of faults are due to system failures, attributed to technical defects in cables or equipment, and the remaining 5% are caused by natural phenomena such as earthquakes, underwater landslides, or volcanic activity. These statistics emphasize the predominance of human activities in subsea cable disruptions over natural or technical causes. These insights can be used to calibrate the above risk assessment methodology, although some deconvolution would be necessary to ensure that appropriately regionalized and situational data has been correctly considered.

ADDITIONAL INFORMATION.

Details of the ship of interest, and suspect number one: YI PENG 3 (IMO: 9224984) is a Bulk Carrier built in 2001 and sailing under China’s flag. Her carrying capacity is 75,121 tonnes, and her current draught is reported to be 14.5 meters. Her length is 225 meters, and her width is 32.3 meters. A maritime bulk carrier vessel is designed to transport unpackaged bulk goods in large quantities. These goods, such as grain, coal, ore, cement, salt, or other raw materials, are typically loose and not containerized. Bulk carriers are essential in global trade, particularly for industries transporting raw materials efficiently and economically.

The owner of “Yi Peng 3”, Ningbo Yipeng Shipping Co., Ltd., is a maritime company based at 306 Yanjiang Donglu, Zhenhai District, Ningbo, Zhejiang Province, 315200, China. Ningbo Yipeng Shipping specializes in domestic and international waterway transportation, offering domestic freight forwarding, ship agency, and the wholesale and retail of mineral products. The company owns and operates bulk carrier vessels, including the “YI PENG” (IMO: 9224996), a bulk carrier with a gross tonnage of 40,622 and a deadweight of 75,169 tons, built in 2001. Another vessel, “YI PENG 3” (IMO: 9224984), is also registered under the company’s ownership. Financially, Ningbo Yipeng Shipping reported a total operating income of approximately 78.18 million yuan, with a net profit of about -9.97 million yuan, indicating a loss for the reporting period.

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. Many thanks to Signe Ravn-Højgaard for keeping us updated on the developments over the last few days (in November 2024), and for her general engagement around and passion for critical infrastructure.