"It doesn't matter how beautiful your idea is, it doesn't matter how smart or important you are. If the idea doesn't agree with reality, it's wrong", Richard Feynman (paraphrased)
If Greenland were digitally cut off tomorrow, how much of its public sector would still function? The uncomfortable answer: very little. Not only would the public sector break down; the longer a digital isolation lasted, the more society as a whole would break down with it. This article outlines why it does not have to be this way and suggests remedies and actions that can be taken to minimize the impact of an event in which Greenland is digitally isolated from the rest of the internet for an extended period (e.g., weeks to months).
We may like, or feel tempted, to think of digital infrastructure as neutral plumbing. But as I wrote earlier, “digital infrastructure is no longer just about connectivity, but about sovereignty and resilience.” Greenland today has neither.
A recent Sermitsiaq article by Poul Krarup, “Digital afhængighed af udlandet” (“Digital dependency on foreign countries”), which describes research by the Tænketanken Digital Infrastruktur, laid it bare: the backbone of Greenland’s administration, email, payments, and even municipal services runs on servers and platforms located mainly outside Greenland (and Denmark). Global giants in Europe and the US hold the keys. Greenland doesn’t. My own study of 315 Greenlandic public-sector domains shows just how dramatic this dependency is: over 70% of web/IP hosting is concentrated among just three foreign providers (Microsoft, Google, and Cloudflare). For email exchange (MX), it is even worse: the majority of MX records sit entirely outside Greenland’s control.
So imagine the cable is cut, the satellite links fail, or access to those platforms is revoked. Schools, hospitals, courts, municipalities: how many could still function? How many could even switch on a computer?
This isn’t a thought experiment. It’s a wake-up call.
In my earlier work on Greenland’s critical communications infrastructure, “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”, I have pointed out both the resilience and the fragility of what exists today. Tusass has built and maintained a transport network that keeps the country connected under some of the harshest Arctic conditions. That achievement is remarkable, but it is also costly and economically challenging without external subsidies and long-term public investment. With a population of just 57,000 people, Greenland faces challenges in sustaining this infrastructure on market terms alone.
DIGITAL SOVEREIGNTY.
What do we mean when we use phrases like “the digital sovereignty of Greenland is at stake”? Let’s break down the complex language (for techies like myself). Sovereignty in the classical sense is about control over land, people, and institutions. Digital sovereignty extends this to the virtual space. It is primarily about controlling data, infrastructure, and digital services. As societies digitalize, critical aspects of sovereignty move into the digital sphere, such as:
Infrastructure as territory: Submarine cables, satellites, data centers, and cloud platforms are the digital equivalents of ports, roads, and airports. If you don’t own or control them, you depend on others to move your “digital goods.”
Data as a resource: Just as natural resources are vital to economic sovereignty, data has become the strategic resource of the digital age. Those who store, process, and govern data hold significant power over decision-making and value creation.
Platforms as institutions: Social media, SaaS, and search engines act like global “public squares” and administrative tools. If controlled abroad, they may undermine local political, cultural, or economic authority.
The excellent book by Anu Bradford, “Digital Empires: The Global Battle to Regulate Technology,” describes how the digital world is no longer a neutral, borderless space but is increasingly shaped by the competing influence of three distinct “empires.” The American model is built around the dominance of private platforms, such as Google, Amazon, and Meta, where innovation and market power drive the agenda. The scale and ubiquity of Silicon Valley firms have enabled them to achieve a global reach. In contrast, the Chinese model fuses technological development with state control. Here, digital platforms are integrated into the political system, used not only for economic growth but also for surveillance, censorship, and the consolidation of authority. Between these two poles lies the European model, which has little homegrown platform power but exerts influence through regulation. By setting strict rules on privacy, competition, and online content, Europe has managed to project its legal standards globally, a phenomenon Bradford refers to as the “Brussels effect” (which is used here in a positive sense). Bradford’s analysis highlights the core dilemma for Greenland. Digital sovereignty cannot be achieved in isolation. Instead, it requires navigating between these global forces while ensuring that Greenland retains the capacity to keep its critical systems functioning, its data governed under its own laws, and its society connected even when global infrastructures falter. The question is not which empire to join, but how to engage with them in a way that strengthens Greenland’s ability to determine its own digital future.
In practice, this means that Greenland’s strategy cannot be about copying one of the three empires, but rather about carving out a space of resilience within their shadow. Building a national Internet Exchange Point ensures that local traffic continues to circulate on the island rather than being routed abroad, even when external links fail. Establishing a sovereign GovCloud provides government, healthcare, and emergency services with a secure foundation that is not dependent on distant data centers or foreign jurisdictions. Local caching of software updates, video libraries, and news platforms enables communities to operate in a “local mode” during disruptions, preserving continuity even when global connections are disrupted. These measures do not create independence from the digital empires. Still, they give Greenland the ability to negotiate with them from a position of greater strength, ensuring that participation in the global digital order does not come at the expense of local control or security.
FROM DAILY RESILIENCE TO STRATEGIC FRAGILITY.
I have argued that integrity, robustness, and availability must be the guiding principles for Greenland’s digital backbone, both now and in the future.
Integrity means protecting against foreign influence and cyber threats through stronger cybersecurity, AI support, and autonomous monitoring.
Robustness requires diversifying the backbone with new submarine cables, satellite systems, and dual-use assets that can serve both civil and defense needs.
Availability depends on automation and AI-driven monitoring, combined with autonomous platforms such as UAVs, UUVs, IoT sensors, and distributed acoustic sensing on submarine cables, to keep services running across vast and remote geographies with limited human resources.
The conclusion I drew in my previous work remains applicable today. Greenland must develop local expertise and autonomy so that critical communications are not left vulnerable to outside actors in times of crisis. Dual-use investments are not only about defense; they also bring better services, jobs, and innovation.
Source: Tusass Annual Report 2023 with some additions and minor edits.
The Figure above illustrates the infrastructure of Tusass, Greenland’s incumbent and sole telecommunications provider. Currently, five hydropower plants (shown above, location only indicative) provide more than 80% of Greenland’s electricity demand. Greenland is entering a period of significant infrastructure transformation, with several large projects already underway and others on the horizon. The most visible change is in aviation. Following the opening of the new international airport in Nuuk in 2024, with its 2,200-meter runway capable of receiving direct flights from Europe and North America, attention has turned to Ilulissat, on the northwestern coast of Greenland, and Qaqortoq, the largest town in South Greenland. Ilulissat is being upgraded with its own 2,200-meter runway, a new terminal, and a control tower, while the old 845-meter strip is being converted into an access road. In southern Greenland, a new airport is being built in Qaqortoq, with a 1,500-meter runway scheduled to open around 2026. Once completed, these three airports, Nuuk, Ilulissat, and Qaqortoq, will together handle roughly 80 percent of Greenland’s passenger traffic, reshaping both tourism and domestic connectivity. Smaller projects, such as the planned airport at Ittoqqortoormiit and changes to heliport infrastructure in East Greenland, are also part of this shift, although on a longer horizon.
Beyond air travel, the next decade is likely to bring new developments in maritime infrastructure. There is growing interest in constructing deep-water ports, both to support commercial shipping and to enable the export of minerals from Greenland’s interior. Denmark has already committed around DKK 1.6 billion (approximately USD 250 million) between 2026 and 2029 for a deep-sea port and related coastal infrastructure, with several proposals directly linked to mining ventures. In southern Greenland, for example, the Tanbreez multi-element rare earth project lies within reach of Qaqortoq, and the new airport’s specifications were chosen with freight requirements in mind. Other mineral prospects, ranging from rare earths to nickel and zinc, will require their own supporting infrastructure (roads, power, and port facilities) if the projects transition from exploration to production. The timelines for these mining and port projects are less certain than for the airports, since they depend on market conditions, environmental approvals, and financing. Yet it is clear that the 2025–2035 period will be decisive for Greenland’s economic and strategic trajectory. The combination of new airports, potential deep-water harbors, and the possible opening of significant mining operations would amount to the largest coordinated build-out of Greenlandic infrastructure in decades. Moreover, several submarine cable projects have been mentioned that would strengthen international connectivity to Greenland, as well as the redundancy and robustness of settlement connectivity, in addition to the existing long-haul microwave network connecting all settlements along the west coast from north to south.
And this is precisely why the question of a sudden digital cut-off matters so much. Without integrity, robustness, and availability built into the communications infrastructure, Greenland’s public sector and its critical infrastructure remain dangerously exposed. What looks resilient in daily operation could unravel overnight if the links to the outside world were severed or internal connectivity were compromised. In particular, the dependency on Nuuk is a critical risk.
GREENLAND’s DIGITAL INFRASTRUCTURE BY LAYER.
Let’s peel Greenland’s digital onion, layer by layer.
Greenland’s digital infrastructure broken down by the layers upon which society’s continuous functioning depends. This illustration shows how applications, transport, routing, and interconnect all depend on external connectivity.
Greenland’s digital infrastructure can be understood as a stack of interdependent layers, each of which reveals a set of vulnerabilities. This is illustrated by the Figure above. At the top of the stack lie the applications and services that citizens, businesses, and government rely on every day. These include health IT systems, banking platforms, municipal services, and cloud-based applications. The critical issue is that most of these services are hosted abroad and have no local “island mode.” In practice, this means that if Greenland is digitally cut off, domestic apps and services will fail to function because there is no mechanism to run them independently within the country.
Beneath this sits the physical transport layer, which is the actual hardware that moves data. Greenland is connected internationally by just two subsea cables, routed via Iceland and Canada. A few settlements, such as Tasiilaq, remain entirely dependent on satellite links, while microwave radio chains connect long stretches of the west coast. At the local level, there is some fiber deployment, but it is limited to individual settlements rather than forming part of a national backbone. This creates a transport infrastructure that, while impressive given Greenland’s geography, is inherently fragile. Two cables and a scattering of satellites do not amount to genuine redundancy for a nation. The next layer is IP/TCP transport, where routing comes into play. Here, too, the system is basic. Greenland relies on a limited set of upstream providers with little true diversity or multi-homing. As a result, if one of the subsea cables is cut, large parts of the country’s connectivity collapse, because traffic cannot be seamlessly rerouted through alternative pathways. The resilience that is taken for granted in larger markets is largely absent here.
Finally, at the base of the stack, interconnect and routing expose the structural dependency most clearly. Greenland operates under a single Autonomous System Number (ASN). An ASN is a unique identifier assigned to a network operator (like Tusass) that controls its own routing on the Internet. It allows the network to exchange traffic and routing information with other networks using the Border Gateway Protocol (BGP). In Greenland, there is no domestic internet exchange point (IXP) and no peering between local networks. All traffic must be routed abroad first, whether it is destined for Greenland or beyond. International transit flows through Iceland and Canada via the subsea cables, while geostationary satellite connectivity (GreenSat) through Gran Canaria serves as a capacity-limited fallback, itself connecting back to Greenland via the submarine network. There is no sovereign government cloud, almost no local caching for global platforms, and only a handful of small data centers (being generous with the definition here). The absence of scaled redundancy and local hosting means that virtually all of Greenland’s digital life depends on international connections.
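As a concrete illustration of what an ASN means in practice, here is a minimal sketch, assuming the dnspython package and Team Cymru’s public IP-to-ASN DNS service, that checks which Autonomous System announces a given IP address. The example address is illustrative.

```python
import dns.resolver  # pip install dnspython

def origin_asn(ipv4: str) -> str:
    """Look up which ASN announces an IPv4 address via Team Cymru's DNS service."""
    reversed_octets = ".".join(reversed(ipv4.split(".")))
    answer = dns.resolver.resolve(f"{reversed_octets}.origin.asn.cymru.com", "TXT")
    # The TXT record reads: "ASN | prefix | country code | RIR | allocation date"
    return b"".join(answer[0].strings).decode()

# An address in Tusass's range should map back to its ASN (AS8818, TELE Greenland).
print(origin_asn("88.83.0.1"))
```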
GREENLAND’s DIGITAL LIFE ON A SINGLE THREAD.
Considering the many layers described above, a striking picture emerges: applications, transport, routing, and interconnect are all structured in ways that assume continuous external connectivity. What appears robust on a day-to-day basis can unravel quickly. A single cable cut, upstream outage, or local transmission fault in Greenland does not just slow down the internet; it can paralyze everyday life across almost every sector, because much of the country’s digital backbone relies on external connectivity and fragile local transport.
For the government, the reliance on cloud-hosted systems abroad means that email, document storage, case management, and health IT systems would go dark. Hospitals and clinics could lose access to patient records, lab results, and telemedicine services. Schools would be cut off from digital learning platforms and exam systems that are hosted internationally. Municipalities, which already lean on remote data centers for payroll, social services, and citizen portals, would struggle to process even routine administrative tasks.
In finance, the impact would be immediate. Greenland’s card payment and clearing systems are routed abroad; without connectivity, credit and debit card transactions could no longer be authorized. ATMs would stop functioning. Shops, fuel stations, and essential suppliers would be forced into cash-only operations at best, and even that would depend on whether their local systems can operate in isolation.
The private sector would be equally disrupted. Airlines, shipping companies, and logistics providers all rely on real-time reservation and cargo systems hosted outside Greenland. Tourism, one of the fastest-growing industries, is almost entirely dependent on digital bookings and payments. Mining operations under development would be unable to transmit critical data to foreign partners or markets.
Even at the household level, the effects could be highly disruptive. Messaging apps, social media, and streaming platforms all require constant external connections; they would stop working instantly. Online banking and digital ID services would be unreachable, leaving people unable to pay bills, transfer money, or authenticate themselves for government services. As there are so few local caches or hosting facilities in Greenland, even “local” digital life evaporates once the cables are cut. So we will be back to reading books and paper magazines again.
This means that an outage can cascade well beyond the loss of entertainment or simple inconvenience. It undermines health care, government administration, financial stability, commerce, and basic communication. In practice, the disruption would touch every citizen and every institution almost immediately, with few alternatives in place to keep essential civil services running.
GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: ABOUT THE DATA.
In this inquiry, I have analyzed two pillars of Greenland’s digital presence: web/IP hosting and MX (mail exchange) hosting. These may sound technical, but they are fundamental to understanding the problem. Web/IP hosting determines where Greenland’s websites and online services physically reside, whether inside Greenland’s own infrastructure or abroad in foreign data centers. MX hosting determines where email is routed and processed, and is crucial for the operation of government, business, and everyday communication. Together, these layers form the backbone of a country’s digital sovereignty.
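To make the two measurements concrete, the sketch below, using dnspython with deliberately simplified error handling, resolves where a domain’s web presence (A records) and mail routing (MX records) point:

```python
import dns.resolver  # pip install dnspython

def web_and_mx(domain: str) -> dict:
    """Resolve a domain's web hosts (A records) and mail routes (MX records)."""
    def query(rtype):
        try:
            return [str(r) for r in dns.resolver.resolve(domain, rtype)]
        except Exception:
            return []  # unresolvable or record-less domains are reported as empty
    return {"domain": domain, "web_ips": query("A"), "mx": query("MX")}

for d in ("nanoq.gl", "airgreenland.gl"):
    print(web_and_mx(d))
```

From there, each resolved IP and MX host can be geolocated and attributed to a provider, which is the basis of the country and provider breakdowns that follow.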
What the data shows is sobering. For example, the Government’s own portal nanoq.gl is hosted locally by Tele Greenland (i.e., Tusass GL), but its email is routed through Amazon’s infrastructure abroad. The national airline, airgreenland.gl, also relies on Microsoft’s mail servers in the US and UK. These are not isolated cases. They illustrate the broader pattern of dependence. If hosting and mail flows are predominantly external, then Greenland’s resilience, control, and even lawful access are effectively in the hands of others.
The data from the Greenlandic .gl domain space paints a clear and rather bleak picture of dependency on the outside world. My inquiry covered 315 domains, resolving more than a thousand hosts and IPs and uncovering 548 mail exchangers, which together form a dependency network of 1,359 nodes and 2,237 edges. What emerges is not a story of local sovereignty but of heavy reliance on external hosting, that is, hosting outside Greenland.
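A dependency network of this kind can be assembled along the following lines. This is a sketch under my own naming assumptions, using networkx with illustrative node labels, not the exact pipeline behind the study:

```python
import networkx as nx  # pip install networkx

G = nx.DiGraph()

def add_domain(graph: nx.DiGraph, domain: str, ips: list, mx_hosts: list) -> None:
    """Record one domain's dependencies: edges point at the hosts it relies on."""
    graph.add_node(domain, kind="domain")
    for ip in ips:
        graph.add_edge(domain, ip, kind="web")   # web/IP hosting dependency
    for mx in mx_hosts:
        graph.add_edge(domain, mx, kind="mail")  # mail routing dependency

# Illustrative entries; the real dataset yields 1,359 nodes and 2,237 edges.
add_domain(G, "nanoq.gl", ["88.83.64.10"], ["inbound-smtp.aws.example"])
add_domain(G, "airgreenland.gl", ["104.18.0.1"], ["mail.protection.outlook.example"])
print(G.number_of_nodes(), G.number_of_edges())
```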
When broken down, it becomes clear how much of the Greenlandic namespace is not even in use. Of the 315 domains, only 190 could be resolved to a functioning web or IP host, leaving 125 domains, or about 40 percent, with no active service. For mail exchange, the numbers are even more striking: only 98 domains have MX records, while the remaining 217 domains, nearly 70 percent of the total, appear unable to receive email at all. In other words, the universe of domains we can actually analyze shrinks considerably once you separate the inactive or unused domains from those that carry real digital services.
It is within this smaller, active subset that the pattern of dependency becomes obvious. The majority of the web/IP hosting we can analyze is located outside Greenland, primarily on infrastructure controlled by American companies such as Cloudflare, Microsoft, Google, and Amazon, or through Danish and European resellers. For email, the reliance is even more complete: virtually all MX hosting that exists is foreign, with only two domains fully hosted in Greenland. This means that both Greenland’s web presence and its email flows are overwhelmingly dependent on servers and policies beyond its own borders. The geographic spread of dependencies is extensive, spanning the US, UK, Ireland, Denmark, and the Netherlands, with some entries extending as far afield as China and Panama. This breadth raises uncomfortable questions about oversight, control, and the exposure of critical services to foreign jurisdictions.
Security practices add another layer of concern. Many domains lack the most basic forms of email protection. The Sender Policy Framework (SPF), which instructs mail servers on which IP addresses are authorized to send on behalf of a domain, is inconsistently applied. DomainKeys Identified Mail (DKIM), which uses cryptographic signatures to verify that an email originates from the claimed sender, is also patchy. Most concerning is that Domain-based Message Authentication, Reporting, and Conformance (DMARC), a policy that allows a domain to instruct receiving mail servers on how to handle suspicious emails (for example, reject or quarantine them), is either missing or set to “none” for many critical domains. Without SPF, DKIM, and DMARC properly configured, Greenlandic organizations are wide open to spoofing and phishing, including within government and municipal domains.
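These records are straightforward to audit, since SPF and DMARC both live in ordinary DNS TXT records. A minimal check, using dnspython with intentionally simplistic parsing, might look like this:

```python
import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list:
    """Fetch TXT records for a DNS name, returning [] when none exist."""
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except Exception:
        return []

def email_auth_posture(domain: str) -> dict:
    """Report a domain's SPF record and DMARC policy (very simplified)."""
    spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
    dmarc = [t for t in txt_records(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]
    return {
        "spf": spf[0] if spf else "MISSING",
        "dmarc": dmarc[0] if dmarc else "MISSING",  # a record may still say p=none
    }

print(email_auth_posture("nanoq.gl"))
```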
Taken together, the picture is clear. Greenland’s digital backbone is not in Greenland. Its critical web and mail infrastructure lives elsewhere, often in the hands of hyperscalers far beyond Nuuk’s control. The question practically asks itself: if those external links were cut tomorrow, how much of Greenland’s public sector could still function?
GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: SOME KEY DATA OUT OF A VERY RICH DATASET.
The Figure shows the distribution of Greenlandic (.gl) web/IP domains hosted on a given country’s infrastructure. Note that domains are frequently hosted in multiple countries. However, very few (2!) have an overlap with Greenland.
The chart of Greenland (.gl) Web/IP Infrastructure Hosting by Supporting Country reveals the true geography of Greenland’s digital presence. The data covers 315 Greenlandic domains, of which 190 could be resolved to active web or IP hosts. From these, I built a dependency map showing where in the world these domains are actually served.
The headline finding is stark: 57% of Greenlandic domains depend on infrastructure in the United States. This reflects the dominance of American companies such as Cloudflare, Microsoft, Google, and Amazon, whose services sit in front of or fully host Greenlandic websites. In contrast, only 26% of domains are hosted on infrastructure inside Greenland itself (primarily through Tele Greenland/Tusass). Denmark (19%), the UK (14%), and Ireland (13%) appear as the next layers of dependency, reflecting the role of regional resellers, like One.com/Simply, as well as Microsoft and Google’s European data centers. Germany, France, Canada, and a long tail of other countries contribute smaller shares.
It is worth noting that the validity of this analysis hinges on how the data are treated. Each domain is counted once per country where it has active infrastructure. This means a domain like nanoq.gl (the Greenland Government portal) is counted for both Greenland and its foreign dependency through Amazon’s mail services. However, double-counting with Greenland is extremely rare. Out of the 190 resolvable domains, 73 (38%) are exclusively Greenlandic, 114 (60%) are solely foreign, and only 2 (~1%) are hybrids, split between Greenland and another country. Those two are nanoq.gl and airgreenland.gl, both of which combine a Greenland presence with foreign infrastructure. This is why the Figure above shows percentages that add up to more than 100%: they represent the dependency footprint, the share of Greenlandic domains that touch each country, not a pie chart of mutually exclusive categories. What is most important to note, however, is that the overlap with Greenland is vanishingly small. In practice, Greenlandic domains are either entirely local or entirely foreign. Very few straddle the boundary.
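To make the counting rule explicit, here is a small sketch of the “one count per country per domain” footprint calculation; the input mapping is illustrative:

```python
from collections import Counter

# Illustrative input: each domain mapped to the set of countries hosting it
# (derived from geolocating its web/IP and MX infrastructure).
domain_countries = {
    "nanoq.gl": {"GL", "US"},               # hybrid: local web, Amazon mail abroad
    "airgreenland.gl": {"GL", "US", "GB"},  # the other hybrid in the dataset
    "some-school.gl": {"US"},               # hypothetical foreign-only domain
}

footprint = Counter()
for countries in domain_countries.values():
    footprint.update(countries)  # a domain counts once per country it touches

total = len(domain_countries)
for country, hits in footprint.most_common():
    print(f"{country}: {hits}/{total} domains ({hits / total:.0%})")
# Percentages can sum to more than 100% because hybrid domains count in each country.
```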
The conclusion is sobering. Greenland’s web presence is deeply externalized. With only a quarter of domains hosted locally, and more than half relying on US-controlled infrastructure, the country’s digital backbone is anchored outside its borders. This is not simply a matter of physical location. It is about sovereignty, resilience, and control. The dominance of US, Danish, and UK providers means that Greenland’s citizens, municipalities, and even government services are reliant on infrastructure they do not own and cannot fully control.
Figure shows the distribution of Greenlandic (.gl) domains by the supporting country for the MX (mail exchange) infrastructure. It shows that nearly all email services are routed through foreign providers.
The Figure above of the MX (mail exchange) infrastructure by supporting country reveals an even more pronounced pattern of external reliance compared to the above case for web hosting. From the 315 Greenlandic domains examined, only 98 domains had active MX records. These are the domains that can be analyzed for mail routing and that have been used in the analysis below.
Of all 315 Greenlandic domains, 19% send their mail through US-controlled infrastructure, primarily Microsoft’s Outlook/Exchange services and Google’s Gmail. The United Kingdom (12%), Ireland (9%), and Denmark (8%) follow, reflecting the presence of Microsoft and Google’s European data centers and Danish resellers. France and Australia appear with smaller shares at 2%, and beyond that, the contributions of other countries are negligible. Greenland itself barely registers. Only two domains, accounting for 1% of the total, utilize MX infrastructure hosted within Greenland. The rest rely on servers beyond its borders. This result is consistent with the sovereignty breakdown above: almost all Greenlandic email is foreign-hosted, with just two domains entirely local and one hybrid combining Greenlandic and foreign providers.
Again, the validity of this analysis rests on the same method as the web/IP chart. Each domain is counted once per country where its MX servers are located. Percentages do not add up to 100% because domains may span multiple countries; however, crucially, as with web hosting, double-counting with Greenland is vanishingly rare. In fact, virtually no Greenlandic domains combine local and foreign MX; they are either foreign-only or, in just two cases, local-only.
The story is clear and compelling: Greenland’s email infrastructure is overwhelmingly externalized. Where web hosting still accounts for a quarter of domains within the country, email sovereignty is almost nonexistent. Nearly all communication flows through servers controlled by US, UK, Ireland, or Denmark. The implication is sobering. In the event of disruption, policy disputes, or surveillance demands, Greenland has little autonomous control over its most basic digital communications.
A sector-level view of how Greenland’s web/IP domains are hosted, locally versus externally (i.e., outside Greenland).
This chart provides a sector-level view of how Greenlandic domains are hosted, distinguishing between those resolved locally in Greenland and those hosted outside of Greenland. It is based on the subset of 190 domains for which sufficient web/IP hosting information was available. Importantly, the categorization relies on individual domains, not on companies as entities. A single company or institution may own and operate multiple domains, which are counted separately for the purpose of this analysis. There is also some uncertainty in sector assignment, as many domains have ambiguous names and were categorized using best-fit rules.
The distribution highlights the uneven exercise of digital sovereignty across sectors. In education and finance, the dependency is absolute: 100 percent of domains are hosted externally, with no Greenland-based presence at all. By contrast, 90 percent of government domains are hosted in Greenland, with only 10 percent hosted outside, which is precisely what one would expect from a digital-government sovereignty perspective. Transportation shows a split, with about two-thirds of domains hosted locally and one-third abroad, reflecting a mix of Tele Greenland-hosted (Tusass GL) domains alongside foreign-hosted services, such as airgreenland.gl. According to the available data, energy infrastructure is hosted entirely abroad, underscoring possibly one of the most critical vulnerabilities in the dataset. Telecom domains, unsurprisingly, given Tele Greenland’s role, are entirely local, making telecom the only sector with 100 percent internal hosting. Municipalities present a more positive picture, with three-quarters of domains hosted locally and one-quarter abroad, although this still represents a partial external dependency. Finally, the large and diverse “Other” category, which contains a mix of companies, organizations, and services, is skewed towards foreign hosting (67 percent external, 33 percent local).
Taken together, the results underscore three important points. First, sector-level sovereignty is highly uneven: while telecom, municipal, and government web services retain more local control, most finance, education, and energy domains are overwhelmingly external. Second, local resolution says less than it seems. When a Greenlandic domain resolves to local infrastructure, it indicates that the frontend web hosting, the visible entry point that users connect to, is located within Greenland, typically through Tele Greenland (i.e., Tusass GL). However, this does not automatically mean that the entire service stack is local. Critical back-end components such as databases, authentication services, payment platforms, or integrated cloud applications may still reside abroad. In practice, a locally hosted domain therefore guarantees only that the web interface is served from Greenland, while deeper layers of the service may remain dependent on foreign infrastructure. This distinction is crucial when evaluating genuine digital sovereignty and resilience. Third, the overall pattern is unmistakable: Greenland’s digital presence remains heavily reliant on foreign hosting, with only pockets of local sovereignty.
A sector-level view of the share of locally versus externally (i.e., outside Greenland) MX (mail exchange) hosted Greenlandic domains (.gl).
The Figure above provides a sector-level view of how Greenlandic domains handle their MX (mail exchange) infrastructure, distinguishing between those hosted locally and those that rely on foreign providers. The analysis is based on the subset of 94 domains (out of 315 total) where MX hosting could be clearly resolved. In other words, these are the domains for which sufficient DNS information was available to identify the location of their mail servers. As with the web/IP analysis, it is important to note two caveats: sector classification involves a degree of interpretation, and the results represent individual domains, not individual companies. A single organization may operate multiple domains, some of which are local and others external.
The results are striking. For most sectors, such as education, finance, transport, energy, telecom, and municipalities, the dependence on foreign MX hosting is total: 100 percent of identified domains rely on external providers for email infrastructure. Even critical sectors such as energy and telecom, where one might expect a more substantial local presence, are fully externalized. The government sector presents a mixed picture. Half of the government domains examined utilize local MX hosting, while the other half are tied to foreign providers. This partial local footprint is significant, as it shows that while some government email flows are retained within Greenland, an equally large share is routed through servers abroad. The “other” sector, which includes businesses, NGOs, and various organizations, shows a small local footprint of about 3 percent, with 97 percent hosted externally. Taken together, the Figure paints a more severe picture of dependency than the web/IP hosting analysis.
While web hosting still retained about a quarter of domains locally, in the case of email, nearly everything is external. Even in government, where one might expect strong sovereignty, half of the domains are dependent on foreign MX servers. This distinction is critical. Email is the backbone of communication for both public and private institutions, and the routing of Greenland’s email infrastructure almost entirely abroad highlights a deep vulnerability. Local MX records guarantee only that the entry point for mail handling is in Greenland. They do not necessarily mean that mail storage or filtering remains local, as many services rely on external processing even when the MX server is domestic.
The broader conclusion is clear. Greenland’s sovereignty in digital communications is weakest in email. Across nearly all sectors, external providers control the infrastructure through which communication must pass, leaving Greenland reliant on systems located far outside its borders. However severe this picture may appear in terms of digital sovereignty, it is not altogether surprising: most global email services are provided by U.S.-based hyperscalers such as Microsoft and Google. This reliance on Big Tech is the norm worldwide, but it carries particular implications for Greenland, where dependence on foreign-controlled communication channels further limits digital sovereignty and resilience.
The analysis of the 94 MX hosting entries shows a striking concentration of Greenlandic email infrastructure in the hands of a few large players. Microsoft dominates the picture with 38 entries, accounting for just over 40 percent of all records, while Amazon follows with 20 entries, or around 21 percent. Google, including both Gmail and Google Cloud Platform services, contributes an additional 8 entries, representing approximately 9% of the total. Together, these three U.S. hyperscalers control nearly 70 percent of all Greenlandic MX infrastructure. By contrast, Tele Greenland (Tusass GL) appears in only three cases, equivalent to just 3 percent of the total, highlighting the minimal local footprint. The remaining quarter of the dataset is distributed across a long tail of smaller European and global providers such as Team Blue in Denmark, Hetzner in Germany, OVH and O2Switch in France, Contabo, Telenor, and others. The distribution, however you want to cut it, underscores the near-total reliance on U.S. Big Tech for Greenland’s email services, with only a token share remaining under national control.
Out of 179 total country mentions across the dataset, the United States is by far the most dominant hosting location, appearing in 61 cases, or approximately 34 percent of all country references. The United Kingdom follows with 38 entries (21 percent), Ireland with 28 entries (16 percent), and Denmark with 25 entries (14 percent). France (4 percent) and Australia (3 percent) form a smaller second tier, while Greenland itself appears only three times (2 percent). Germany also accounts for three entries, and all other countries (Austria, Norway, Spain, Czech Republic, Slovakia, Poland, Canada, and Singapore) occur only once each, making them statistically marginal. Examining the structure of services across locations, approximately 30 percent of providers are tied to a single country, while 51 percent span two countries (for example, UK–US or DK–IE). A further 18 percent are spread across three countries, and a single case involved four countries simultaneously. This pattern reflects the use of distributed or redundant MX services across multiple geographies, a characteristic often found in large cloud providers like Microsoft and Amazon.
The key point is that, regardless of whether domains are linked to one, two, or three countries, the United States is present in the overwhelming majority of cases, either alone or in combination with other countries. This confirms that U.S.-based infrastructure underpins the backbone of Greenlandic email hosting, with European locations such as the UK, Ireland, and Denmark acting primarily as secondary anchors rather than true alternatives.
WHAT DOES IT ALL MEAN?
Greenland’s public digital life overwhelmingly runs on infrastructure it does not control. Of 315 .gl domains, only 190 even have active web/IP hosting, and just 98 have resolvable MX (email) records. Within that smaller, “real” subset, most web front-ends are hosted abroad and virtually all email rides on foreign platforms. The dependency is concentrated, with U.S. hyperscalers—Microsoft, Amazon, and Google—accounting for nearly 70% of MX services. The U.S. is also represented in more than a third of all MX hosting locations (often alongside the UK, Ireland, or Denmark). Local email hosting is almost non-existent (two entirely local domains; a few Tele Greenland/Tusass appearances), and even for websites, a Greenlandic front end does not guarantee local back-end data or apps.
That architecture has direct implications for sovereignty and security. If submarine cables, satellites, or upstream policies fail or are restricted, most government, municipal, health, financial, educational, and transportation services would degrade or cease, because their applications, identity systems, storage, payments, and mail are anchored off-island. Daily resilience can mask strategic fragility: the moment international connectivity is severely compromised, Greenland lacks the local “island mode” to sustain critical digital workflows.
This is not surprising. U.S. Big Tech dominates email and cloud apps worldwide. Still, it may pose a uniquely high risk for Greenland, given its small population, sparse infrastructure, and renewed U.S. strategic interest in the region. Dependence on platforms governed by foreign law and policy erodes national leverage in crisis, incident response, and lawful access. It exposes citizens to outages or unilateral changes that are far beyond Nuuk’s control.
The path forward is clear: treat digital sovereignty as critical infrastructure. Prioritize local capabilities where impact is highest (government/municipal core apps, identity, payments, health), build island-mode fallbacks for essential services, expand diversified transport (additional cables, resilient satellite), and mandate basic email security (SPF/DKIM/DMARC) alongside measurable locality targets for hosting and data. Only then can Greenland credibly assure that, even if cut off from the world, it can still serve its people.
CONNECTIVITY AND RESILIENCE: GREENLAND VERSUS OTHER SOVEREIGN ISLANDS.
Sources: Submarine cable counts from TeleGeography/SubmarineNetworks.com; IXPs and ASNs from Internet Society Pulse/PeeringDB and RIR data; GDP and population from IMF/World Bank (2023/2024); internet penetration from ITU and national statistics.
The comparative table shown above highlights Greenland’s position among other sovereign and autonomous islands in terms of digital infrastructure. With two international submarine cables, Greenland shares the same level of cable redundancy as the Faroe Islands, Malta, the Maldives, Seychelles, Cuba, and Fiji. This places it in the middle tier of island connectivity: above small states like Comoros, which rely on a single cable, but far behind island nations such as Cyprus, Ireland, or Singapore, which have built themselves into regional hubs with multiple independent international connections.
Where Greenland diverges is in the absence of an Internet Exchange Point (IXP) and its very limited number of Autonomous Systems (ASNs). Unlike Iceland, which couples four cables with three IXPs and over ninety ASNs, Greenland remains a network periphery. Even smaller states such as Malta, Seychelles, or Mauritius operate IXPs and host more ASNs, giving them greater routing autonomy and resilience.
In terms of internet penetration, Greenland fares relatively well, with a rate of over 90 percent, comparable to other advanced island economies. Yet the country’s GDP base is extremely limited, comparable to the Faroe Islands and Seychelles, which constrains its ability to finance major independent infrastructure projects. This means that resilience is not simply a matter of demand or penetration, but rather a question of policy choices, prioritization, and regional partnerships.
Seen from a helicopter’s perspective, Greenland is neither in the worst nor the best position. It has more resilience than single-cable states such as Comoros or small Pacific nations. Still, it lags far behind peer islands that have deliberately developed multi-cable redundancy, local IXPs, and digital sovereignty strategies. For policymakers, this raises a fundamental challenge: whether to continue relying on the relative stability of existing links, or to actively pursue diversification measures such as a national IXP, additional cable investments, or regional peering agreements. In short, Greenland’s digital sovereignty depends less on raw penetration figures and more on whether its infrastructure choices can elevate it from a peripheral to a more autonomous position in the global network.
HOW TO ELEVATE SOUTH GREENLAND TO A PREFERRED DIGITAL HOST FOR THE WORLD … JUST SAYING, WHY NOT!
At first glance, South Greenland and Iceland share many of the same natural conditions that make Iceland an attractive hub for data centers. Both enjoy a cool North Atlantic climate that allows year-round free cooling, reducing the need for energy-intensive artificial systems. In terms of pure geography and temperature, towns such as Qaqortoq and Narsaq in South Greenland are not markedly different from Reykjavík or Akureyri. From a climatic standpoint, there is no inherent reason why Greenland should not also be a viable location for large-scale hosting facilities.
The divergence begins not with climate but with energy and connectivity. Iceland spent decades developing a robust mix of hydropower and geothermal plants, creating a surplus of cheap renewable electricity that could be marketed to international hyperscale operators. Greenland, while rich in hydropower potential, has only a handful of plants tied to local demand centers, with no national grid and limited surplus capacity. Without investment in larger-scale, interconnected generation, it cannot guarantee the continuous, high-volume power supply that international data centers demand. Connectivity is the other decisive factor. Iceland today is connected to four separate submarine cable systems, linking it to Europe and North America, which gives operators confidence in redundancy and low-latency routes across the Atlantic. South Greenland, by contrast, depends on two branches of the Greenland Connect system, which, while providing diversity to Iceland and Canada, does not offer the same level of route choice or resilience. The result is that Iceland functions as a transatlantic bridge, while Greenland remains an endpoint.
For South Greenland to move closer to Iceland’s position, several changes would be necessary. The most important would be a deliberate policy push to develop surplus renewable energy capacity and make it available for data center operations. In parallel, Greenland would need to pursue further international submarine cables to break its dependence on a single system and create genuine redundancy. Finally, it would need to build up the local digital ecosystem by fostering an Internet Exchange Point and encouraging more networks to establish Autonomous Systems on the island, ensuring that Greenland is not just a transit point but a place where traffic is exchanged and hosted, and, importantly, one that earns money on its own digital infrastructure and sovereignty. South Greenland already shares the climate advantage that underpins Iceland’s success, but climate alone is insufficient. Energy scale, cable diversity, and deliberate policy have been the ingredients that allowed Iceland to transform itself into a digital hub. Without similar moves, Greenland risks remaining a peripheral node rather than evolving into a sovereign center of digital resilience.
A PRACTICAL BLUEPRINT FOR GREENLAND TOWARDS OWNING ITS DIGITAL SOVEREIGNTY.
No single measure eliminates Greenland’s dependency on external infrastructure; some dependencies, such as banking, global SaaS, and international transit, are irreducible. But taken together, the steps described below maximize continuity of essential functions during cable cuts or satellite disruption, improve digital sovereignty, and strengthen bargaining power with global vendors. The trade-off is cost, complexity, and skill requirements, which means Greenland must prioritize where full sovereignty is truly mission-critical (health, emergency, governance) and accept graceful degradation elsewhere (social media, entertainment, SaaS ERP).
A. Keep local traffic local (routing & exchange).
Proposal: Create or strengthen a national IXP in Nuuk, with a secondary node (e.g., Sisimiut or Qaqortoq). Require ISPs, mobile operators, government, and major content/CDNs to peer locally. Add route-server policies with “island-mode” communities to ensure that intra-Greenland routes stay reachable even if upstream transit is lost (a toy sketch of this filtering appears at the end of this subsection). Deploy anycasted recursive DNS and host authoritative DNS for .gl domains on-island, with secondaries abroad.
Pros:
Dramatically reduces the latency, cost, and fragility of local traffic.
Ensures Greenland continues to “see itself” even if cut off internationally.
DNS split-horizon prevents sensitive internal queries from leaking off-island.
Cons:
Needs policy push. Voluntary peering is often insufficient in small markets.
Running redundant IXPs is a fixed cost for a small economy.
CDNs may resist deploying nodes without incentives (e.g., free rack and power).
A natural and technically well-founded objection, especially given Greenland’s monopolistic structure under Tusass, is that an IXP or multiple ASNs might seem redundant. Both content and users reside on the same Tusass network, and intra-Greenland traffic already remains local at Layer 3. Adding an IXP would not change that in practice. Without underlying physical or organizational diversity, an exchange point cannot create redundancy on its own.
However, over the longer term, an IXP can still serve several strategic purposes. It provides a neutral routing and governance layer that enables future decentralization (e.g., government, education, or sectoral ASNs), strengthens “island-mode” resilience by isolating internal routes during disconnection from the global Internet, and supports more flexible traffic management and security policies. Notably, an IXP also offers a trust and independence layer that many third-party providers, such as hyperscalers, CDNs, and data-center networks, typically require before deploying local nodes. Few global operators are willing to peer inside the demarcation of a single national carrier’s network. A neutral IXP provides them with a technical and commercial interface independent of Tusass’s internal routing domain, thereby making on-island caching or edge deployments more feasible in the future. In that sense, the objection accurately reflects today’s technical reality, while the IXP concept anticipates tomorrow’s structural and sovereignty needs, bridging the gap between a functioning monopoly network and a future, more open digital ecosystem.
In practice (and in my opinion), Tusass is the only entity in Greenland with the infrastructure, staff, and technical capacity to operate an IXP. While this challenges the ideal of neutrality, it need not invalidate the concept if the exchange is run on behalf of Naalakkersuisut (the Greenlandic self-governing body) or under a transparent, multi-stakeholder governance model. The key issue is not who operates the IXP, but how it is governed. Suppose Tusass provides the platform while access, routing, and peering policies are openly managed and non-discriminatory. In that case, the IXP can still deliver genuine benefits: local routing continuity, “island-mode” resilience, and a neutral interface that encourages future participation by hyperscalers, CDNs, and sectoral networks.
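To make the “island-mode” community idea concrete, here is a deliberately simplified Python model of the route-server behavior described in the proposal above. The community value (64512, 100) and the prefixes are hypothetical; a real deployment would express this as route-server policy (for example, in BIRD or OpenBGPD), not application code.

```python
# Hypothetical private BGP community tagging routes that originate on-island.
ISLAND = (64512, 100)

routes = [
    {"prefix": "88.83.0.0/19", "communities": [ISLAND]},        # local (Tusass) range
    {"prefix": "0.0.0.0/0",    "communities": [(64512, 900)]},  # default via foreign transit
]

def advertised_routes(table: list, upstream_alive: bool) -> list:
    """Routes the route server keeps advertising to its peers.

    With upstream transit up, everything is exported. With transit down,
    only island-tagged routes survive, so intra-Greenland destinations stay
    reachable while foreign destinations correctly disappear.
    """
    if upstream_alive:
        return table
    return [r for r in table if ISLAND in r["communities"]]

print([r["prefix"] for r in advertised_routes(routes, upstream_alive=False)])
# -> ['88.83.0.0/19']
```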
B. Host public-sector workloads on-island.
Proposal: Stand up a sovereign GovCloud GL in Nuuk (failover in another town, possible West-East redundancy), operated by a Greenlandic entity or tightly contracted partner. Prioritize email, collaboration, case handling, health IT, and emergency comms. Keep critical apps, archives, and MX/journaling on-island even if big SaaS (like M365) is still used abroad.
Pros:
Keeps essential government operations functional in an isolation event.
Reduces legal exposure to extraterritorial laws, such as the U.S. CLOUD Act.
Provides a training ground for local IT and cloud talent.
Cons:
High CapEx + ongoing OpEx; cloud isn’t a one-off investment.
Scarcity of local skills; risk of over-reliance on a few engineers.
Difficult to replicate the breadth of SaaS (ERP, HR, etc.) locally; selective hosting is realistic, full stack is not.
C. Make email & messaging “cable- and satellite-outage proof”.
Proposal: Host primary MX and mailboxes in GovCloud GL with local antispam, journaling, and security. Use off-island secondaries only for queuing (the MX preference mechanics behind this split are sketched after this list). Deploy internal chat/voice/video systems (such as Matrix, XMPP, or local Teams/Zoom gateways) to ensure that intra-Greenland traffic never routes outside the country. Define an “emergency federation mode” to isolate traffic during outages.
Pros:
Ensures communication between government, hospitals, and municipalities continues during outages.
Local queues prevent message loss even if foreign relays are unreachable.
Cons:
Operating robust mail and collaboration platforms locally is a resource-intensive endeavor.
Risk of user pushback if local platforms feel less polished than global SaaS.
The emergency “mode switch” adds operational complexity and must be tested regularly.
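The primary/secondary split in the proposal above rests on ordinary MX preference values: receiving servers try the lowest-preference exchanger first, so an on-island primary plus an off-island queuing secondary is just two DNS records. A minimal sketch using dnspython; the GovCloud hostnames in the comment are hypothetical:

```python
import dns.resolver  # pip install dnspython

def mx_order(domain: str) -> list:
    """Return a domain's mail exchangers sorted by preference (lowest tried first)."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((r.preference, str(r.exchange).rstrip(".")) for r in answers)

# Hypothetical target state for a GovCloud GL domain:
#   [(10, "mx1.govcloud.gl"),           # on-island primary: delivery and storage
#    (50, "queue-mx.partner.example")]  # off-island secondary: queues during outages
print(mx_order("nanoq.gl"))
```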
D. Put the content edge in Greenland.
Proposal: Require or incentivize CDN caches (Akamai, Cloudflare, Netflix, OS mirrors, software update repos, map tiles) to be hosted inside Greenland’s IXP(s). A quick way to audit whether a service is already served on-island is sketched after this list.
Pros:
Improves day-to-day performance and cuts transit bills.
Reduces dependency on subsea cables for routine updates and content.
Keeps basic digital life (video, software, education platforms) usable in isolation.
Cons:
CDNs deploy based on scale; Greenland’s market may be marginal without a subsidy.
Hosting costs (power, cooling, rackspace) must be borne locally.
Only covers cached/static content; dynamic services (banking, SaaS) still break without external connectivity.
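One quick way to audit whether a service is actually served from on-island address space, a rough proxy for the presence of a local cache or edge node, is to test its resolved IP against Greenland’s known allocations. The prefix list below is illustrative and incomplete (88.83.0.0/19 is one Tusass allocation):

```python
import ipaddress
import socket

# Illustrative, incomplete list of Greenlandic address allocations.
GL_PREFIXES = [ipaddress.ip_network("88.83.0.0/19")]

def served_from_greenland(hostname: str) -> bool:
    """True if the hostname resolves into known on-island address space."""
    ip = ipaddress.ip_address(socket.gethostbyname(hostname))
    return any(ip in net for net in GL_PREFIXES)

print(served_from_greenland("nanoq.gl"))  # True only if the front end is served locally
```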
E. Implement it in law & contracts.
Proposal: Mandate data residency for public-sector data; require “island-mode” design in procurement. Systems must demonstrate the ability to authenticate locally, operate offline, maintain usable data, and retain keys under Greenlandic custody. Impose peering obligations for ISPs and major SaaS/CDNs.
Pros:
Creates a predictable baseline for sovereignty across all agencies.
Prevents future procurement lock-in to non-resilient foreign SaaS.
Gives legal backing to technical requirements (IXP, residency, key custody).
Cons:
May raise the costs of IT projects (compliance overhead).
Without strong enforcement, rules risk becoming “checkbox” exercises.
Possible trade friction if foreign vendors see it as protectionist.
F. Strengthen physical resilience.
Proposal: Maintain and upgrade subsea cable capacity (Greenland Connect and Connect North), add diversity (spur/loop and new landings), and maintain long-haul microwave/satellite as a tertiary backup. Pre-engineer quality of service downgrades for graceful degradation.
Pros:
Adds true redundancy. Nothing replaces a working subsea cable.
Tertiary paths (satellite, microwave) keep critical services alive during failures.
Clear QoS downgrades make service loss more predictable and manageable.
Cons:
High (possibly very high) CapEx. New cable segments cost tens to hundreds of millions of euros.
Satellite/microwave backup cannot match the throughput of subsea cables.
International partners may be needed for funding and landing rights.
G. Security & trust.
Proposal: Deploy local PKI and HSMs for the government. Enforce end-to-end encryption. Require local custody of cryptographic keys. Audit vendor remote access and include kill switches. A minimal local-CA sketch follows this list.
Pros:
Prevents data exposure via foreign subpoenas (without Greenland’s knowledge).
Local trust anchors give confidence in sovereignty claims.
Kill switches and audit trails enhance vendor accountability.
Cons:
PKI and HSM management requires very specialized skills.
Without strong governance, there is a risk of “security theatre” rather than real security.
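To give a flavor of what local trust anchors involve, the sketch below uses the Python cryptography package to generate a self-signed root CA of the kind a GovCloud PKI would hold. In production the private key would be generated inside an HSM and never exported, and the names here are hypothetical.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# In a real deployment this key lives inside an HSM and never leaves it.
key = ec.generate_private_key(ec.SECP384R1())

name = x509.Name([
    x509.NameAttribute(NameOID.COUNTRY_NAME, "GL"),
    x509.NameAttribute(NameOID.COMMON_NAME, "Hypothetical GovCloud GL Root CA"),
])
now = datetime.datetime.now(datetime.timezone.utc)

root_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer equals subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=1), critical=True)
    .sign(key, hashes.SHA384())
)
print(root_cert.subject.rfc4514_string())
```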
On-island first as default. A key step for Greenland is to make on-island first the norm so that local-to-local traffic stays local even if Atlantic cables fail. Concretely, stand up a national IXP in Nuuk to keep domestic traffic on the island and anchor CDN caches; build a Greenlandic “GovCloud” to host government email, identity, records, and core apps; and require all public-sector systems to operate in “island mode” (continue basic services offline from the rest of the world). Pair this with local MX, authoritative DNS, secure chat/collaboration, and CDN caches, so essential content and services remain available during outages. Back it with clear procurement rules on data residency and key custody to reduce both outage risk and exposure to foreign laws (e.g., CLOUD Act), acknowledging today’s heavy—if unsurprising—reliance on U.S. hyperscalers (Microsoft, Amazon, Google).
What this changes, and what it doesn’t. These measures don’t aim to sever external ties. They should rebalance them. The goal is graceful degradation that keeps government services, domestic payments, email, DNS, and health communications running on-island, while accepting that global SaaS and card rails will go dark during isolation. Finally, it’s also worth remembering that local caching is only a bridge, not a substitute for global connectivity. In the first days of an outage, caches would keep websites, software updates, and even video libraries available, allowing local email and collaboration tools to continue running smoothly. But as the weeks pass, those caches would inevitably grow stale. News sites, app stores, and streaming platforms would stop refreshing, while critical security updates, certificates, and antivirus definitions would no longer be available, leaving systems exposed to risk. If isolation lasted for months, the impact would be much more profound. Banking and card clearing would be suspended, SaaS-driven ERP systems would break down, and Greenland would slide into a “local only” economy, relying on cash and manual processes. Over time, the social impact would also be felt, with the population cut off from global news, communication, and social platforms. Caching, therefore, buys time, but not independence. It can make an outage manageable in the short term, yet in the long run, Greenland’s economy, security, and society depend on reconnecting to the outside world.
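One of those staleness clocks is easy to quantify: how long until a service’s TLS certificate expires once it can no longer be renewed from the outside. A standard-library-only sketch (the domain is just an example):

```python
import datetime
import socket
import ssl

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Days until the server's TLS certificate expires (unrenewable in isolation)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).days

# With 90-day certificates (e.g., Let's Encrypt), HTTPS services would start
# failing well inside a months-long isolation window.
print(days_until_cert_expiry("nanoq.gl"))
```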
The Bottom line. Full sovereignty is unrealistic for a sparse, widely distributed country, and I don’t think it makes sense to strive for it; it simply appears impractical. In my opinion, partial sovereignty is both achievable and valuable. Make on-island first the default, keep essential public services and domestic comms running during cuts, and interoperate seamlessly when subsea links and satellites are up. This shifts Greenland from its current state of strategic fragility to one of managed resilience, without turning its back on the rest of the internet.
ACKNOWLEDGEMENT.
I want to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article. I would also like to thank Dr. Signe Ravn-Højgaard, from “Tænketanken Digital Infrastruktur”, and the Sermitsiaq article “Digital afhængighed af udlandet” (“Digital dependency on foreign countries”) by Poul Krarup, for inspiring this work, which is also a continuation of my previous research and article titled “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”. I would like to thank Lasse Jarlskov for his insightful comments and constructive feedback on this article. His observations regarding routing, OSI layering, and the practical realities of Greenland’s network architecture were both valid and valuable, helping refine several technical arguments and improve the overall clarity of the analysis.
GLOSSARY.
ASN — Autonomous System Number: A unique identifier assigned to a network operator that controls its own routing on the Internet, enabling the exchange of traffic with other networks using the Border Gateway Protocol (BGP).
BGP — Border Gateway Protocol: The primary routing protocol of the Internet, used by Autonomous Systems to exchange information about which paths data should take across networks.
CDN — Content Delivery Network: A system of distributed servers that cache and deliver content (such as videos, software updates, or websites) closer to users, reducing latency and dependency on international links.
CLOUD Act — Clarifying Lawful Overseas Use of Data Act: A U.S. law that allows American authorities to demand access to data stored abroad by U.S.-based cloud providers, raising sovereignty and privacy concerns for other countries.
DMARC — Domain-based Message Authentication, Reporting and Conformance: An email security protocol that tells receiving servers how to handle messages that fail authentication checks, protecting against spoofing and phishing.
DKIM — DomainKeys Identified Mail: An email authentication method that uses cryptographic signatures to verify that a message has not been altered and truly comes from the claimed sender.
DNS — Domain Name System: The hierarchical system that translates human-readable domain names (like example.gl) into IP addresses that computers use to locate servers.
ERP — Enterprise Resource Planning: A type of integrated software system that organizations use to manage business processes such as finance, supply chain, HR, and operations.
GL — Greenland country-code top-level domain (.gl): The internet country code for Greenland, used for local domain names such as nanoq.gl.
GovCloud — Government Cloud: A sovereign or dedicated cloud infrastructure designed for hosting public-sector applications and data within national jurisdiction.
HSM — Hardware Security Module: A secure physical device that manages cryptographic keys and operations, used to protect sensitive data and digital transactions.
IoT — Internet of Things: A network of physical devices (sensors, appliances, vehicles, etc.) connected to the internet, capable of collecting and exchanging data.
IP — Internet Protocol: The fundamental addressing system of the Internet, enabling data packets to be sent from one computer to another.
ISP — Internet Service Provider: A company or entity that provides customers with access to the internet and related services.
IXP — Internet Exchange Point: A physical infrastructure where networks interconnect directly to exchange internet traffic locally rather than through international transit links.
MX — Mail Exchange (Record): A type of DNS record that specifies the mail servers responsible for receiving email on behalf of a domain.
PKI — Public Key Infrastructure: A framework for managing encryption keys and digital certificates, ensuring secure electronic communications and authentication.
SaaS — Software as a Service: Cloud-based applications delivered over the internet, such as Microsoft 365 or Google Workspace, typically hosted on servers outside the country.
SPF — Sender Policy Framework: An email authentication protocol that defines which mail servers are authorized to send email on behalf of a domain, reducing the risk of forgery.
Tusass — Greenland's national telecommunications provider: Formerly Tele Greenland, responsible for submarine cables, satellite links, and domestic connectivity.
UAV — Unmanned Aerial Vehicle: An aircraft without a human pilot on board, often used for surveillance, monitoring, or communications relay.
UUV — Unmanned Underwater Vehicle: A robotic submarine used for monitoring, surveying, or securing undersea infrastructure such as cables.
It’s 2045. Earth is green again. Free from cellular towers and the terrestrial radiation of yet another G, no longer needed to justify endless telecom upgrades. Humanity has finally transcended its communication needs to the sky, fully served by swarms of Low Earth Orbit (LEO) satellites.
Millions of mobile towers have vanished. No more steel skeletons cluttering skylines and nature in general. In their place: millions of beams from tireless LEO satellites, now whispering directly into our pockets from orbit.
More than 1,200 MHz of once terrestrially-bound cellular spectrum below the C-band had been uplifted to LEO satellites. Nearly 1,500 MHz between 3 and 6 GHz had likewise been liberated from its earthly confines, now aggressively pursued by the buzzing broadband constellations above.
It all works without a single modification to people's beloved mobile devices. Everyone enjoys the same, or better, cellular service than in those wretched days of clinging to terrestrial-based infrastructure.
So, how did this remarkable transformation come about?
THE COVERAGE.
First, let's talk about coverage. The chart below tells the story of orbital ambition through three very grounded curves. On the x-axis, we have the inclination angle, which is the degree to which your satellites are encouraged to tilt away from the equator to perform their job. On the y-axis: how much of the planet (and its people) they're actually covering. The orange line gives us land area coverage. It starts low, as expected: tropical satellites don't care much for Greenland. But as the inclination rises, so does their sense of duty to the extremes (the poles, that is). The yellow line represents population coverage, which grows faster than land, maybe because humans prefer to live near each other (or they like the scenery). By the time you reach ~53° inclination, you're covering about 94% of humanity and 84% of land areas. The dashed white line represents mobile cell coverage, the real estate of telecom towers. A constellation at a 53° inclination would cover nearly 98% of all mobile site infrastructure. It serves as a proxy for economic interest. It closely follows the population curve, but adds just a bit of spice, reflecting urban density and tower sprawl.
This chart illustrates the cumulative global coverage achieved at varying orbital inclination angles for three key metrics: land area (orange), population (yellow), and estimated terrestrial mobile cell sites (dashed white). As inclination increases from equatorial (0°) to polar (90°), the percentage of global land and population coverage rises accordingly. Notably, population coverage reaches approximately 94% at ~53° inclination, a critical threshold for satellite constellations aiming to maximize global user reach without the complexity of polar orbits. The mobile cell coverage curve reflects infrastructure density and aligns closely with population distribution.
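For readers who want to reproduce the shape of that curve, the toy model below integrates population share over the latitude bands reachable at a given inclination. The per-band population shares are illustrative values chosen to roughly match the chart, not census data.

```python
# Toy model of cumulative population coverage vs. orbital inclination.
# A constellation at inclination i (degrees) is assumed to serve latitudes
# |lat| <= i. The population shares per latitude band are ILLUSTRATIVE
# numbers tuned to roughly reproduce the article's curve, not census data.
BAND_SHARE = {  # (abs_lat_min, abs_lat_max): share of world population
    (0, 10): 0.12, (10, 20): 0.17, (20, 30): 0.23, (30, 40): 0.25,
    (40, 50): 0.15, (50, 60): 0.07, (60, 90): 0.01,
}

def population_coverage(inclination_deg: float) -> float:
    covered = 0.0
    for (lo, hi), share in BAND_SHARE.items():
        if inclination_deg >= hi:
            covered += share  # band fully covered
        elif inclination_deg > lo:
            covered += share * (inclination_deg - lo) / (hi - lo)  # partial band
    return covered

for inc in (30, 45, 53, 70, 90):
    print(f"inclination {inc:>2} deg: ~{population_coverage(inc):.0%} of population")
# inclination 53 deg -> ~94%, matching the chart's headline figure
```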
The satellite constellation’s beams have replaced traditional terrestrial cells, providing a one-to-one coverage substitution. They not only replicate coverage in former legacy cellular areas but also extend service to regions that previously lacked connectivity due to low commercial priority from telecom operators. Today, over 3 million beams substitute obsolete mobile cells, delivering comparable service across densely populated areas. An additional 1 million beams have been deployed to cover previously unserved land areas, primarily rural and remote regions, using broader, lower-capacity beams with radii up to 10 kilometers. While these rural beams do not match the density or indoor penetration of urban cellular coverage, they represent a cost-effective means of achieving global service continuity, especially for basic connectivity and outdoor access in sparsely populated zones.
Conclusion? If you want to build a global satellite mobile network, you don't need to orbit the whole planet. Just tilt your constellation enough to touch the crowded parts, and leave the tundra to the poets. However, this was the "original sin" of LEO Direct-to-Cellular satellites.
THE DEMAND.
Although global mobile traffic growth slowed notably after the early 2020s, and the terrestrial telecom industry drifted toward its “end of history” moment, the orbital network above inherited a double burden. Not only did satellite constellations need to deliver continuous, planet-wide coverage, a milestone legacy telecoms had never reached, despite millions of ground sites, but they also had to absorb globally converging traffic demands as billions of users crept steadily toward the throughput mean.
This chart shows the projected DL traffic across a full day (UTC), based on regions where local time falls within the evening Busy Hour window (17:00–22:00) and which are within satellite coverage (minimum elevation ≥ 25°). The BH population is calculated hourly, taking into account time zone alignment and visibility, with a 20% concurrency rate applied. Each active user is assumed to consume 500 Mbps downlink in 2045. The peak reaches over 60,000 Tbps.
This chart shows the uplink traffic demand experienced across a full day (UTC), based on regions under Busy Hour conditions (17:00–22:00 local time) and visible to the satellite constellation (with a minimum elevation angle of 25°). For each UTC hour, the BH population within coverage is calculated using global time zone mapping. Assuming a 20% concurrency rate and an average uplink throughput of 50 Mbps per active user, the total UL traffic is derived. The resulting curve reflects how demand shifts in response to the Earth's rotation beneath the orbital band. The peak reaches over 6,000 Tbps.
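A quick back-of-envelope check ties these charts to the capacity figures quoted later. The 600 million busy-hour population in coverage is inferred from the stated peaks rather than given directly in the text.

```python
# Back-of-envelope check of the busy-hour demand figures quoted in the text.
# The BH-population-in-coverage value is an assumption inferred from the
# article's stated peaks, not a number given directly in the charts.
BH_POP_IN_COVERAGE = 600e6   # people in the 17:00-22:00 window and in view
CONCURRENCY = 0.20           # 20% of them active at once (from the text)
DL_PER_USER_BPS = 500e6      # 500 Mbps downlink per active user
UL_PER_USER_BPS = 50e6       # 50 Mbps uplink per active user

active = BH_POP_IN_COVERAGE * CONCURRENCY
dl_tbps = active * DL_PER_USER_BPS / 1e12
ul_tbps = active * UL_PER_USER_BPS / 1e12
print(f"active users: {active / 1e6:.0f} M")
print(f"peak DL demand: {dl_tbps:,.0f} Tbps")  # ~60,000 Tbps
print(f"peak UL demand: {ul_tbps:,.0f} Tbps")  # ~6,000 Tbps
```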
The radio access uplink architecture relies on low round-trip times for proper scheduling, timing alignment, and HARQ (Hybrid Automatic Repeat Request) feedback cycles. The propagation delay at 350 km yields a round-trip time of about 2.5 to 3 milliseconds, which falls within the bounds of what current specifications can accommodate. This is particularly important for latency-sensitive applications such as voice, video, and interactive services that require low jitter and reliable feedback mechanisms. In contrast, orbits at 550 km or above push latency closer to the edge of what NR protocols can tolerate, which could hinder performance or require non-standard adaptations. The beam geometry also plays a central role. At lower altitudes, satellite beams projected to the ground are inherently smaller. This smaller footprint translates into tighter beam patterns with narrower 3 dB cut-offs, which significantly improves frequency reuse and spatial isolation. These attributes are important for deploying high-capacity networks in densely populated urban environments, where interference and spectrum efficiency are paramount. Narrower beams allow D2C operators to steer coverage toward demand centers while minimizing adjacent-beam interference dynamically. Operating at 350 km is not without drawbacks. The satellite’s ground footprint at this altitude is smaller, meaning that more satellites are required to achieve full Earth coverage. Additionally, satellites at this altitude are exposed to greater atmospheric drag, resulting in shorter orbital lifespans unless they are equipped with more powerful or efficient propulsion systems to maintain altitude. The current design aims for a 5-year orbital lifespan. Despite this, the shorter lifespan has an upside, as it reduces the long-term risks of space debris. Deorbiting occurs naturally and quickly at lower altitudes, making the constellation more sustainable in the long term.
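The latency argument is easy to sanity-check. The sketch below computes propagation round-trip time at the two altitudes discussed; the flat-Earth slant approximation at low elevation is deliberately crude and slightly pessimistic, and it excludes processing and scheduling delays.

```python
# Propagation-delay sanity check for the altitudes discussed above.
# Straight nadir path, plus a crude slant-range approximation at the
# 25-degree minimum elevation angle (ignores Earth curvature, so it is
# slightly pessimistic at low elevations).
import math

C = 299_792_458.0  # speed of light, m/s

def rtt_ms(altitude_km: float, elevation_deg: float = 90.0) -> float:
    slant_m = altitude_km * 1e3 / math.sin(math.radians(elevation_deg))
    return 2 * slant_m / C * 1e3

for alt in (350, 550):
    print(f"{alt} km: nadir RTT {rtt_ms(alt):.2f} ms, "
          f"at 25 deg elevation ~{rtt_ms(alt, 25):.2f} ms")
# 350 km: ~2.3 ms at nadir, growing toward the cell edge, before any
# processing delays -- consistent with the 2.5-3 ms figure in the text.
```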
THE CONSTELLATION.
The satellite-to-cellular infrastructure has now fully matured into a global-scale system capable of delivering mobile broadband services that are not only on par with, but in many regions surpass, the performance of terrestrial cellular networks. At its core lies a constellation of low Earth orbit satellites operating at an altitude of 350 kilometers, engineered to provide seamless, high-quality indoor coverage for both uplink and downlink, even in densely populated urban environments.
To meet the evolving expectations of mobile users, each satellite beam delivers a minimum of 50 Mbps of uplink capacity and 500 Mbps of downlink capacity per user, ensuring full indoor quality even in highly cluttered environments. Uplink transmissions utilize the 600 MHz to 1800 MHz band, providing 1200 MHz of aggregated bandwidth. Downlink channels span 1500 MHz of spectrum, ranging from 2100 MHz to the upper edge of the C-band. At the network’s busiest hour (e.g., around 20:00 local time) across the most densely populated regions south of 53° latitude, the system supports a peak throughput of 60,000 Tbps for downlink and 6,000 Tbps for uplink. To guarantee reliability under real-world utilization, the system is engineered with a 25% capacity overhead, raising the design thresholds to 75,000 Tbps for DL and 7,500 Tbps for UL during peak demand.
Each satellite beam is optimized for high spectral efficiency, leveraging advanced beamforming, adaptive coding, and cutting-edge modulation. Under these conditions, downlink beams deliver 4.5 Gbps, while uplink beams, facing more challenging reception constraints, achieve 1.8 Gbps. Meeting the adjusted peak-hour demand requires approximately 16.7 million active DL beams and 4.2 million UL beams, amounting to over 20.8 million simultaneous beams concentrated over the peak demand region.
Thanks to significant advances in onboard processing and power systems, each satellite now supports up to 5,000 independent beams simultaneously. This capability reduces the number of satellites required to meet regional peak demand to approximately 4,200. These satellites are positioned over a region spanning an estimated 45 million square kilometers, covering the evening-side urban and suburban areas of the Americas, Europe, Africa, and Asia. This configuration yields a beam density of nearly 0.46 beams per square kilometer, equivalent to one active beam for every 2 square kilometers, densely overlaid to provide continuous, per-user, indoor-grade connectivity. In urban cores, beam radii are typically below 1 km, whereas in lower-density suburban and rural areas, the system adjusts by using larger beams without compromising throughput.
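The beam and satellite counts above follow directly from the capacity targets; a few lines of arithmetic reproduce them.

```python
# Reproducing the beam- and satellite-count arithmetic from the text.
demand_dl_tbps = 75_000  # design target incl. 25% overhead
demand_ul_tbps = 7_500
beam_dl_gbps = 4.5       # per-beam DL capacity
beam_ul_gbps = 1.8       # per-beam UL capacity
beams_per_sat = 5_000
region_km2 = 45e6        # evening-side high-demand footprint

dl_beams = demand_dl_tbps * 1e3 / beam_dl_gbps   # ~16.7 million
ul_beams = demand_ul_tbps * 1e3 / beam_ul_gbps   # ~4.2 million
total_beams = dl_beams + ul_beams                # ~20.8 million
sats_over_region = total_beams / beams_per_sat   # ~4,200
print(f"DL beams {dl_beams / 1e6:.1f} M, UL beams {ul_beams / 1e6:.1f} M")
print(f"satellites over region: {sats_over_region:,.0f}")
print(f"beam density: {total_beams / region_km2:.2f} beams/km^2")  # ~0.46
```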
Because peak demand rotates longitudinally with the Earth’s rotation, only a portion of the entire constellation is positioned over this high-demand region at any given time. To ensure 4,200 satellites are always present over the region during peak usage, the total constellation comprises approximately 20,800 satellites, distributed across several hundred orbital planes. These planes are inclined and phased to optimize temporal availability, revisit frequency, and coverage uniformity while minimizing latency and handover complexity.
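The step from 4,200 regional satellites to a 20,800-satellite fleet implies that each spacecraft spends roughly a fifth of its orbit over the demand region. That duty fraction is inferred from the article's own numbers, not stated explicitly.

```python
# Constellation sizing: only a fraction of the fleet is over the evening-side
# demand region at any moment. The ~20% duty fraction is inferred from the
# article's own numbers (4,200 over-region / 20,800 total), not stated.
sats_needed_over_region = 4_200
duty_fraction = 0.20  # assumed share of each orbit spent over the demand region

total_constellation = sats_needed_over_region / duty_fraction
print(f"total satellites: {total_constellation:,.0f}")  # ~21,000 (~20,800 in text)
```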
The resulting Direct-to-Cellular satellite constellation and system of today is among the most ambitious communications infrastructures ever created. With more than 20 million simultaneous beams dynamically allocated across the globe, it has effectively supplanted traditional mobile towers in many regions, delivering reliable, high-speed, indoor-capable broadband connectivity precisely where and when people need it.
When Telcos Said "Not Worth It," Satellites Said "Hold My Beam." In the world of 2045, even the last village at the end of the dirt road streams at 500 Mbps. No tower in sight, just orbiting compassion and economic logic finally aligned.
THE SATELLITE.
The Cellular Device to Satellite Path.
The uplink antennas aboard the Direct-to-Cellular satellites have been specifically engineered to reliably receive indoor-quality transmissions from standard (unmodified) mobile devices operating within the 600 MHz to 1800 MHz band. Each device is expected to deliver a minimum of 50 Mbps uplink throughput, even when used indoors in heavily cluttered urban environments. This performance is made possible through a combination of wideband spectrum utilization, precise beamforming, and extremely sensitive receiving systems in orbit. The satellite uplink system operates across 1200 MHz of aggregated bandwidth (e.g., 60 channels of 20 MHz), spanning the entire upper UHF and lower S-band. Because uplink signals originate from indoor environments, where wall and structural penetration losses can exceed 20 dB, the satellite link budget must compensate for the combined effects of indoor attenuation and free-space propagation at a 350 km orbital altitude. At 600 MHz, which represents the lowest frequency in the UL band, the free-space path loss alone is approximately 139 dB. When this is compounded with indoor clutter and penetration losses, the total attenuation the satellite must overcome reaches approximately 159 dB or more.
Rather than specifying the antenna system at a mid-band average frequency, such as 900 MHz (i.e., the mid-band of the 600 MHz to 1800 MHz range), the system has been conservatively engineered for worst-case performance at 600 MHz. This design philosophy ensures that the antenna will meet or exceed performance requirements across the entire uplink band, with higher frequencies benefiting from naturally improved gain and narrower beamwidths. This choice guarantees that even the least favorable channels, those near 600 MHz, support reliable indoor-grade uplink service at 50 Mbps, with a minimum required SNR of 10 dB to sustain up to 16-QAM modulation. Achieving this level of performance at 600 MHz necessitated a large physical aperture. The uplink receive arrays on these satellites have grown to approximately 700 to 750 m² in area, and are constructed using modular, lightweight phased-array tiles that unfold in orbit. This aperture size enables the satellite to achieve a receive gain of approximately 45 dBi at 600 MHz, which is essential for detecting low-power uplink transmissions with high spectral efficiency, even from users deep indoors and under cluttered conditions.
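Both the path-loss and the aperture-gain figures can be checked with textbook formulas; the sketch below assumes a 725 m² aperture (mid-range of the quoted 700–750 m²) and unit aperture efficiency.

```python
# Link-geometry sanity check for the uplink array: free-space path loss at
# 600 MHz / 350 km, and the gain of a ~725 m^2 aperture (mid-range of the
# 700-750 m^2 quoted above). Standard formulas; unit efficiency assumed.
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(freq_hz: float, dist_m: float) -> float:
    # FSPL = (4*pi*d*f/c)^2, expressed in dB
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

def aperture_gain_dbi(area_m2: float, freq_hz: float, efficiency: float = 1.0) -> float:
    # G = eta * 4*pi*A / lambda^2, expressed in dBi
    lam = C / freq_hz
    return 10 * math.log10(efficiency * 4 * math.pi * area_m2 / lam**2)

print(f"FSPL 600 MHz @ 350 km: {fspl_db(600e6, 350e3):.1f} dB")           # ~138.9 dB
print(f"gain of 725 m^2 @ 600 MHz: {aperture_gain_dbi(725, 600e6):.1f} dBi")  # ~45.6 dBi
```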
Unlike earlier systems, such as AST SpaceMobile’s BlueBird 1, launched in the mid-2020s with an aperture of around 900 m² and challenged by the need to acquire indoor uplink signals, today’s Direct-to-Cellular (D2C) satellites optimize the uplink and downlink arrays separately. This separation allows each aperture to be custom-designed for its frequency and link budget requirements. The uplink arrays incorporate wideband, dual-polarized elements, such as log-periodic or Vivaldi structures, backed by high-dynamic-range low-noise amplifiers and a distributed digital beamforming backend. Assisted by real-time AI beam management, each satellite can simultaneously support and track up to 2,500 uplink beams, dynamically allocating them across the active coverage region.
Despite their size, these receive arrays are designed for compact launch configurations and efficient in-orbit deployment. Technologies such as inflatable booms, rigidizable mesh structures, and ultralight composite materials allow the arrays to unfold into large apertures while maintaining structural stability and minimizing mass. Because these arrays are passive receivers, thermal loads are significantly lower than those of transmit systems. Heat generation is primarily limited to the digital backend and front-end amplification chains, which are distributed across the array surface to facilitate efficient thermal dissipation.
The Satellite to Cellular Device Path.
The downlink communication path aboard Direct-to-Cellular satellites is engineered as a fully independent system, physically and functionally separated from the uplink antenna. This separation reflects a mature architectural philosophy that has been developed over decades of iteration. The downlink and uplink systems serve fundamentally different roles and operate across vastly different frequency bands, each with its own power, thermal, and antenna constraints. The downlink system operates in the frequency range from 2100 MHz up to the upper end of the C-band, typically around 4200 MHz. This is significantly higher than the uplink range, which extends from 600 to 1800 MHz. Due to this disparity in wavelength, a factor of nearly six between the lowest uplink and highest downlink frequencies, a shared aperture is neither practical nor efficient. It is widely accepted today that integrating transmit and receive functions into a single broadband aperture would compromise performance on both ends. Instead, today's satellites utilize a dual-aperture approach, with the downlink antenna system optimized exclusively for high-frequency transmission and the uplink array designed independently for low-frequency reception.
In order to deliver 500 Mbps per user with full indoor coverage, each downlink beam must sustain approximately 4.5 Gbps, accounting for spectral reuse and beam overlap. At an orbital altitude of 350 kilometers, downlink beams must remain narrow, typically covering no more than a 1-kilometer radius in urban zones, to match uplink geometry and maintain beam-level concurrency. The antenna gain required to meet these demands is in the range of 50 to 55 dBi, which the satellites achieve using high-frequency phased arrays with a physical aperture of approximately 100 to 200 m². Because the downlink system is responsible for high-power transmission, the antenna tiles incorporate GaN-based solid-state power amplifiers (SSPAs), which deliver hundreds of watts per panel. This results in an overall effective isotropic radiated power (EIRP) of 50 to 60 dBW per beam, sufficient to reach deep indoor devices even at the upper end of the C-band. The power-intensive nature of the downlink system introduces thermal management challenges (described in the next section), which are addressed by physically isolating the transmit arrays from the receiver surfaces. The downlink and uplink arrays are positioned on opposite sides of the spacecraft bus or thermally decoupled through deployable booms and shielding layers.
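A small sanity check on those downlink numbers: subtracting the antenna gain from the per-beam EIRP gives the implied RF power per beam. The mid-range values below are picked from the ranges quoted above.

```python
# Downlink sanity check: what per-beam RF power do the quoted EIRP and gain
# figures imply? Mid-range values picked from the paragraph above.
gain_dbi = 52.0  # within the 50-55 dBi range quoted
eirp_dbw = 55.0  # within the 50-60 dBW per-beam range quoted

tx_power_dbw = eirp_dbw - gain_dbi
print(f"per-beam RF power: {10 ** (tx_power_dbw / 10):.1f} W")  # ~2 W per beam
# Across 2,500 active DL beams that is on the order of 5 kW of radiated RF,
# consistent with the multi-kilowatt thermal loads described later.
```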
The downlink beamforming is fully digital, allowing real-time adaptation of beam patterns, power levels, and modulation schemes. Each satellite can form and manage up to 2,500 independent downlink beams, which are coordinated with their uplink counterparts to ensure tight spatial and temporal alignment. Advanced AI algorithms help shape beams based on environmental context, usage density, and user motion, thereby further improving indoor delivery performance. The modulation schemes used on the downlink frequently reach 256-QAM and beyond, with spectral efficiencies of six to eight bits per second per Hz in favorable conditions.
The physical deployment of the downlink antenna varies by platform, but most commonly consists of front-facing phased array panels or cylindrical surfaces fitted with azimuthally distributed tiles. These panels can be either fixed or mounted on articulated platforms that allow active directional steering during orbit, depending on the beam coverage strategy, an arrangement also referred to as gimballed.
No Bars? Not on This Planet. In 2045, even the polar bears will have broadband. When satellites replaced cell towers, the Arctic became just another neighborhood in the global gigabit grid.
Satellite System Architecture.
The Direct-to-Cellular satellites have evolved into high-performance, orbital base stations that far surpass the capabilities of early systems, such as AST SpaceMobile’s Bluebird 1 or SpaceX’s Starlink V2 Mini. These satellites are engineered not merely to relay signals, but to deliver full-featured indoor mobile broadband connectivity directly to standard handheld devices, anywhere on Earth, including deep urban cores and rural regions that have been historically underserved by terrestrial infrastructure.
As described earlier, today’s D2C satellite supports up to 5,000 simultaneous beams, enabling real-time uplink and downlink with mobile users across a broad frequency range. The uplink phased array, designed to capture low-power, deep-indoor signals at 600 MHz, occupies approximately 750 m². The DL array, optimized for high-frequency, high-power transmission, spans 150 to 200 m². Unlike early designs, such as Bluebird 1, which used a single, large combined antenna, today’s satellites separate the uplink and downlink arrays to optimize each for performance, thermal behavior, and mechanical deployment. These two systems are typically mounted on opposite sides of the satellite and thermally isolated from one another.
Thermal management is one of the defining challenges of this architecture. While AST’s Bluebird 1 (i.e., from mid-2020s) boasted a large antenna aperture approaching 900 m², its internal systems generated significantly less heat. Bluebird 1 operated with a total power budget of approximately 10 to 12 kilowatts, primarily dedicated to a handful of downlink beams and limited onboard processing. In contrast, today’s D2C satellite requires a continuous power supply of 25 to 35 kilowatts, much of which must be dissipated as heat in orbit. This includes over 10 kilowatts of sustained RF power dissipation from the DL system alone, in addition to thermal loads from the digital beamforming hardware, AI-assisted compute stack, and onboard routing logic. The key difference lies in beam concurrency and onboard intelligence. The satellite manages thousands of simultaneous, high-throughput beams, each dynamically scheduled and modulated using advanced schemes such as 256-QAM and beyond. It must also process real-time uplink signals from cluttered environments, allocate spectral and spatial resources, and make AI-driven decisions about beam shape, handovers, and interference mitigation. All of this requires a compute infrastructure capable of delivering 100 to 500 TOPS (tera-operations per second), distributed across radiation-hardened processors, neural accelerators, and programmable FPGAs. Unlike AST’s Bluebird 1, which offloaded most of its protocol stack to the ground, today’s satellites run much of the 5G core network onboard. This includes RAN scheduling, UE mobility management, and segment-level routing for backhaul and gateway links.
This computational load compounds the satellite’s already intense thermal environment. Passive cooling alone is insufficient. To manage thermal flows, the spacecraft employs large radiator panels located on its outer shell, advanced phase-change materials embedded behind the DL tiles, and liquid loop systems that transfer heat from the RF and compute zones to the radiative surfaces. These thermal systems are intricately zoned and actively managed, preventing the heat from interfering with the sensitive UL receive chains, which require low-noise operation under tightly controlled thermal conditions. The DL and UL arrays are thermally decoupled not just to prevent crosstalk, but to maintain stable performance in opposite thermal regimes: one dominated by high-power transmission, the other by low-noise reception.
To meet its power demands, the satellite utilizes a deployable solar sail array that spans 60 to 80 m². These sails are fitted with ultra-high-efficiency solar cells capable of exceeding 30–35% efficiency. They are mounted on articulated booms that track the sun independently from the satellite's Earth-facing orientation. They provide enough current to sustain continuous operation during daylight periods, while high-capacity batteries, likely based on lithium-sulfur or solid-state chemistry, handle nighttime and eclipse coverage. Compared to the Starlink V2 Mini, which generates around 2.5 to 3.0 kilowatts, and the Bluebird 1, which operates at roughly 10–12 kilowatts, today's system requires nearly three times the generation and five times the thermal rejection capability of the initial satellites of the mid-2020s.
Structurally, the satellite is designed to support this massive infrastructure. It uses a rigid truss core (i.e., lattice structure) with deployable wings for the DL system and a segmented, mesh-based backing for the UL aperture. Propulsion is provided by Hall-effect or ion thrusters, with 50 to 100 kilograms of inert propellant onboard to support three to five years of orbital station-keeping at an altitude of 350 kilometers. This height is chosen for its latency and spatial reuse advantages, but it also imposes continuous drag, requiring persistent thrust.
The AST Bluebird 1 may have appeared physically imposing in its time due to its large antenna, but in thermal, computational, and architectural complexity, today's D2C satellite, 20 years later, far exceeds anything imagined two decades earlier. The heat generated by its massive beam concurrency, onboard processing, and integrated network core makes its thermal management system not only more demanding than Bluebird 1's but also one of the primary limiting factors in the satellite's physical and functional design. This thermal constraint, in turn, shapes the layout of its antennas, compute stack, power system, and propulsion.
Mass and Volume Scaling.
AST's Bluebird 1, launched in the mid-2020s, had a launch mass of approximately 1,500 kilograms. Its headline feature was a 900 m² unfoldable antenna surface, designed to support direct cellular connectivity from space. However, despite its impressive aperture, the system was constrained by limited beam concurrency, modest onboard computing power, and a reliance on terrestrial cores for most network functions. Its mass was dominated by structural elements supporting its large antenna surface and the power and thermal subsystems required to drive a relatively small number of simultaneous links. Bluebird's propulsion was chemical, optimized for initial orbit raising and limited station-keeping, and its stowed volume fit comfortably within standard medium-lift payload fairings. Starlink's V2 Mini, although smaller in physical aperture, featured a more balanced and compact architecture. Weighing roughly 800 kilograms at launch, it was designed around high-throughput broadband rather than direct-to-cellular use. Its phased array antenna surface was closer to 20–25 m², and it was optimized for efficient manufacturing and high-density orbital deployment. The V2 Mini's volume was tightly packed, with solar panels, phased arrays, and propulsion modules folded into a relatively low-profile bus optimized for rapid deployment and low-cost launch stacking. Its onboard compute and thermal systems were scaled to match its more modest power budget, which typically hovered around 2.5 to 3.0 kilowatts.
In contrast, today's satellites occupy an entirely new performance regime. The dry mass of the satellite ranges between 2,500 and 3,500 kilograms, depending on specific configuration, thermal shielding, and structural deployment method. This accounts for its large deployable arrays, high-density digital payload, radiator surfaces, power regulation units, and internal trusses. The wet mass, including onboard fuel reserves for at least 5 years of station-keeping at 350 km altitude, increases by up to 800 kilograms, depending on the propulsion type (e.g., Hall-effect or gridded ion thrusters) and orbital inclination. This brings the total launch mass to approximately 3,000 to 4,500 kilograms, or more than double AST's old Bluebird 1 and roughly five times that of SpaceX's Starlink V2 Mini.
Volume-wise, the satellites require a significantly larger stowed configuration than either AST's Bluebird 1 or SpaceX's Starlink V2 Mini. While both of those earlier systems were designed to fit within traditional launch fairings, Bluebird 1 utilized a folded hinge-based boom structure, and Starlink V2 Mini was optimized for ultra-compact stacking. Today's satellite demands next-generation fairing geometries, such as 5-meter-class launchers or dual-stack configurations. This is driven by the dual-antenna architecture and radiator arrays, which, although cleverly folded during launch, expand dramatically once deployed in orbit. In its operational configuration, the satellite spans tens of meters across its antenna booms and solar sails. The uplink array, built as a lightweight, mesh-backed surface supported by rigidizing frames or telescoping booms, unfolds to a diameter of approximately 30 to 35 meters, substantially larger than Bluebird 1's ~20–25 meter maximum span and far beyond the roughly 10-meter unfolded span of Starlink V2 Mini. The downlink panels, although smaller, are arranged for precise gimballed orientation (i.e., a pivoting mechanism allowing rotation or tilt along one or more axes) and integrated thermal control, which further expands the total deployed volume envelope. The volumetric footprint of today's D2C satellite is not only larger in surface area but also more spatially complex, as its segregated UL and DL arrays, thermal zones, and solar wings must avoid interference while maintaining structural and thermal equilibrium, a marked departure from the simplified flat-pack layout of Starlink V2 Mini and the monolithic boom-deployed design of Bluebird 1.
The increase in dry mass, wet mass, and deployed volume is not a byproduct of inefficiency, but a direct result of the very substantial performance improvements that were required to replace terrestrial mobile towers with orbital systems. Today's D2C satellites deliver an order of magnitude more beam concurrency, spectral efficiency, and per-user performance than their 2020s predecessors. This is reflected in every subsystem, from power generation and antenna design to propulsion, thermal control, and computing. As such, the D2C satellite represents the emergence of a new class of spacecraft altogether: not merely a space-based relay or broadband node, but a full-featured, cloud-integrated orbital RAN platform capable of supporting the global cellular fabric from space.
CAN THE FICTION BECOME A REALITY?
From the perspective of 2025, the vision of a global satellite-based mobile network providing seamless, unmodified indoor connectivity at terrestrial-grade uplink and downlink rates, 50 Mbps up, 500 Mbps down, appears extraordinarily ambitious. The technical description from 2045 outlines a constellation of 20,800 LEO satellites, each capable of supporting 5,000 independent full-duplex beams across massive bandwidths, while integrating onboard processing, AI-driven beam control, and a full 5G core stack. To reach such a mature architecture within two decades demands breakthrough progress across multiple fronts.
The most daunting challenge lies in achieving indoor-grade cellular uplink at frequencies as low as 600 MHz from devices never intended to communicate with satellites. Today, even powerful ground-based towers struggle to achieve sub-1 GHz uplink coverage inside urban buildings. For satellites at an altitude of 350 km, the free-space path loss alone at 600 MHz is approximately 139 dB. When combined with clutter, penetration, and polarization mismatches, the system must close a link budget approaching 160 dB or more, from a smartphone transmitting just 23 dBm (200 mW) or less. No satellite today, including AST SpaceMobile's BlueBird 1, has demonstrated indoor uplink reception at this scale or consistency. To overcome this, the proposed system assumes deployable uplink arrays of 750 m² with gain levels exceeding 45 dBi, supported by thousands of simultaneously steerable receive beams and ultra-low-noise front-end receivers. From a 2025 lens, the mechanical deployment of such arrays, their thermal stability, calibration, and mass management pose nontrivial risks. Today's large phased arrays are still in their infancy in space, and adaptive beam tracking from fast-moving LEO platforms remains unproven at the required scale and beam density.
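To see how tight this budget really is, the sketch below closes the uplink link budget using the figures above plus a standard thermal-noise floor; the 2 dB receiver noise figure is my assumption.

```python
# Closing the indoor uplink budget at 600 MHz, using figures from the text
# plus standard thermal-noise assumptions. The receiver noise figure is an
# assumption, not a number given in the article.
import math

tx_dbm = 23.0          # handset transmit power (200 mW)
path_loss_db = 159.0   # FSPL ~139 dB + ~20 dB indoor penetration/clutter
rx_gain_dbi = 45.0     # satellite uplink array gain
bw_hz = 20e6           # one 20 MHz channel
noise_figure_db = 2.0  # assumed low-noise front end

rx_dbm = tx_dbm - path_loss_db + rx_gain_dbi
noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db
snr_db = rx_dbm - noise_dbm
print(f"received: {rx_dbm:.1f} dBm, noise floor: {noise_dbm:.1f} dBm, "
      f"SNR: {snr_db:.1f} dB")
# ~ -91 dBm received vs ~ -99 dBm noise -> roughly 8 dB of SNR: the 10 dB
# target for 16-QAM is only reachable with beamforming/diversity margin.
```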
Thermal constraints are also vastly more complex than anything currently deployed. Supporting 5,000 simultaneous beams and radiating tens of kilowatts from compact platforms in LEO requires heat rejection systems that go beyond current radiator technology. Passive radiators must be supplemented with phase-change materials, active fluid loops, and zoned thermal isolation to prevent transmit arrays from degrading the performance of sensitive uplink receivers. This represents a significant leap from today’s satellites, such as Starlink V2 Mini (~3 kW) or BlueBird 1 (~10–12 kW), neither of which operates with a comparable beam count, throughput, or antenna scale.
The required onboard compute is another monumental leap. Running thousands of simultaneous digital beams, performing real-time adaptive beamforming, spectrum assignment, HARQ scheduling, and AI-driven interference mitigation, all on-orbit and without ground-side offloading, demands 100–500 TOPS of radiation-hardened compute. This is far beyond anything flying in 2025. Even state-of-the-art military systems rely heavily on ground computing and centralized control. The 2045 vision implies on-orbit autonomy, local decision-making, and embedded 5G/6G core functionality within each spacecraft, a full software-defined network node in orbit. Realizing such a capability requires not only next-gen processors but also significant progress in space-grade AI inference, thermal packaging, and fault tolerance.
On the power front, generating 25–35 kW per satellite in LEO using 60–80 m² solar sails pushes the boundary of photovoltaic technology and array mechanics. High-efficiency solar cells must achieve conversion rates exceeding 30–35%, while battery systems must maintain high discharge capacity even in complete darkness. Space-based power architectures today are not yet built for this level of sustained output and thermal dissipation.
Even if the individual satellite challenges are solved, the constellation architecture presents another towering hurdle. Achieving seamless beam handover, full spatial reuse, and maintaining beam density over demand centers as the Earth rotates demands near-perfect coordination of tens of thousands of satellites across hundreds of planes. No current LEO operator (including SpaceX) manages a constellation of that complexity, beam concurrency, or spatial density. Furthermore, scaling the manufacturing, testing, launch, and in-orbit commissioning of over 20,000 high-performance satellites will require significant cost reductions, increased factory throughput, and new levels of autonomous deployment.
Regulatory and spectrum allocation are equally formidable barriers. The vision entails the massively complex undertaking of a global reallocation of terrestrial mobile spectrum, particularly in the sub-3 GHz bands, to LEO operators. As of 2025, such a reallocation is politically and commercially fraught, with entrenched mobile operators and national regulators unlikely to cede prime bands without extensive negotiation, incentives, and global coordination. The use of 600–1800 MHz from orbit for direct-to-device is not yet globally harmonized (and may never be), and existing terrestrial rights would need to be either vacated or managed via complex sharing schemes.
From a market perspective, widespread device compatibility without modification implies that standard mobile chipsets, RF chains, and antennas evolve to handle Doppler compensation, extended RTT timing budgets, and tighter synchronization tolerances. While this is not insurmountable, it requires updates to 3GPP standards, baseband silicon, and potentially network registration logic, all of which must be implemented without degrading terrestrial service. Although NTN (non-terrestrial networks) support has begun to emerge in 5G standards, the level of transparency and ubiquity envisioned in 2045 is not yet backed by practical deployments.
While the 2045 architecture described so far assumes a single unified constellation delivering seamless global cellular service from orbit, the political and commercial realities of space infrastructure in 2025 strongly suggest a fragmented outcome. It is unlikely that a single actor, public or private, will be permitted, let alone able, to monopolize the global D2C landscape. Instead, the most plausible trajectory is a competitive and geopolitically segmented orbital environment, with at least one major constellation originating from China (note: I think it is quite likely we may see two major ones), another from the United States, a possible second US-based entrant, and potentially a European-led system aimed at securing sovereign connectivity across the continent. This fracturing of the orbital mobile landscape imposes a profound constraint on the economic and technical scalability of the system. The assumption that a single constellation could achieve massive economies of scale, producing, launching, and managing tens of thousands of high-performance satellites with uniform coverage obligations, begins to collapse under the weight of geopolitical segmentation. Each competitor must now shoulder its own development, manufacturing, and deployment costs, with limited ability to amortize those investments over a unified global user base. Moreover, such duplication of infrastructure risks saturating orbital slots and spectrum allocations, while reducing the density advantage that a unified system would otherwise enjoy. Instead of concentrating thousands of active beams over a demand zone with a single coordinated fleet, separate constellations must compete for orbital visibility and spectral access over the same urban centers. The result is likely to be a decline in per-satellite utilization efficiency, particularly in regions of geopolitical overlap or contested regulatory coordination.
2045: One Vision, Many Launch Pads. The dream of global satellite-to-cellular service may shine bright, but it won’t rise from a single constellation. With China, the U.S., and others racing skyward, the economics of universal LEO coverage could fracture into geopolitical silos, making scale, spectrum, and sustainability more contested than ever.
Finally, the commercial viability of any one constellation diminishes when the global scale is eroded. While a monopoly or globally dominant operator could achieve lower per-unit satellite costs, higher average utilization, and broader roaming revenues, a fractured environment reduces ARPU (average revenue per user) and increases the breakeven threshold for each deployment. Satellite throughput that could have been centrally optimized now risks duplication and redundancy, increasing operational overhead and potentially slowing innovation as vendors attempt to differentiate on proprietary terms. In this light, the architecture described earlier must be seen as an idealized vision. This convergence point may never be achieved in pure form unless global policy, spectrum governance, and commercial alliances move toward more integrated outcomes. While the technological challenges of the 2045 D2C system are significant, the fragmentation of market structure and geopolitical alignment may prove an equally formidable barrier to realizing the full systemic potential.
Heavenly Coverage, Hellish Congestion. Even a single mega-constellation turns the sky into premium orbital real estate … and that’s before the neighbors show up with their own fleets. Welcome to the era of broadband traffic … in space.
Despite these barriers, incremental paths forward exist. Demonstration satellites in the late 2020s, followed by regional commercial deployments in the early 2030s, could provide real-world validation. The phased evolution of spectrum use, dual-use handsets, and AI-assisted beam management may mitigate some of the scaling concerns. Regulatory alignment may emerge as rural and unserved regions increasingly depend on space-based access. Ultimately, the achievement of the 2045 architecture relies not only on engineering but also on sustained cross-industry coordination, geopolitical alignment, and commercial viability on a planetary scale. As of 2025, the probability of realizing the complete vision by 2045, in terms of indoor-grade, direct-to-device service via a fully orbital mobile core, is perhaps 40–50%, with a higher probability (~70%) for achieving outdoor-grade or partially integrated hybrid services. The coming decade will reveal whether the industry can fully solve the unique combination of thermal, RF, computational, regulatory, and manufacturing challenges required to replace the terrestrial mobile network with orbital infrastructure.
POSTSCRIPT – THE ECONOMICS.
The Direct-to-Cellular satellite architecture described in this article would reshape not only the technical landscape of mobile communications but also its economic foundation. The very premise of delivering mobile broadband directly from space, bypassing terrestrial towers, fiber backhaul, and urban permitting, undermines one of the most entrenched capital systems of the 20th and early 21st centuries: the mobile infrastructure economy. Once considered irreplaceable, the sprawling ecosystem of rooftop leases, steel towers, field operations, base stations, and fiber rings has been gradually rendered obsolete by a network that floats above geography.
The financial implications of such a shift are enormous. Before the orbital transition described in this article, the global mobile industry invested well over 300 billion USD annually in network CapEx and OpEx, with a large share dedicated to the site infrastructure layer: construction, leasing, energy, security, and upkeep of millions of base stations and their associated land or rooftop assets. Tower companies alone have become multi-billion-dollar REITs (i.e., Real Estate Investment Trusts), profiting from site tenancy and long-term operating contracts. As of the mid-2020s, the global value tied up in the telecom industry's physical infrastructure is estimated to exceed 2.5 to 3 trillion USD, with tower companies like Cellnex and American Tower collectively managing hundreds of billions of dollars in infrastructure assets. An estimated 300–500 billion USD invested in mobile infrastructure represents approximately 0.75% to 1.5% of total global pension assets and accounts for 15% to 30% of pension fund infrastructure investments. This real estate-based infrastructure model defined mobile economics for decades and has generally been regarded as a reasonably safe haven for investors. In contrast, the 2045 D2C model front-loads its capital burden into satellite manufacturing, launch, and orbital operations. Rather than being geographically bound, capital is concentrated into a fleet of orbital base stations, each capable of dynamically serving users across vast and shifting geographies. This not only eliminates the need for millions of distributed cell sites, but also breaks the historical tie between infrastructure deployment and national geography. Coverage no longer scales with trenching crews or urban permitting delays but with orbital plane density and beamforming algorithms.
Yet, such a shift does not necessarily mean lower cost, only different economics. Launching and operating tens of thousands of advanced satellites, each capable of supporting thousands of beams and running onboard compute environments, still requires massive capital outlay and ongoing expenditures in space traffic management, spectrum coordination, ground gateways, and constellation replenishment. The difference lies in utilization and marginal reach. Where terrestrial infrastructure often struggles to achieve ROI in rural or low-income markets, orbital systems serve these zones as part of the same beam budget, with no new towers or trenches required.
Importantly, the 2045 model would likely collapse the mobile value chain. Instead of a multi-layered system of operators, tower owners, fiber wholesalers, and regional contractors, a vertically integrated satellite operator can now deliver the full stack of mobile service from orbit, owning the user relationship end-to-end. This disintermediation has significant implications for revenue distribution and regulatory control, and challenges legacy operators to either adapt or exit.
The scale of economic disruption mirrors the scale of technical ambition. This transformation could rewrite the very economics of connectivity. While the promise of seamless global coverage, zero tower density, and instant-on mobility is compelling, it may also signal the end of mobile telecom as a land-based utility.
If this little science fiction story comes true, and there are many good and bad reasons to doubt it, Telcos may not Ascend to the Sky, but take the Stairway to Heaven.
Graveyard of the Tower Titans. This symbolic illustration captures the end of an era, depicting headstones for legacy telecom giants such as American Tower, Crown Castle, and SBA Communications, as well as the broader REIT (Real Estate Investment Trust) infrastructure model that once underpinned the terrestrial mobile network economy. It serves as a metaphor for the systemic shift brought on by Direct-to-Cellular (D2C) satellite networks. What’s fading is not only the mobile tower itself, but also the vast ancillary industry that has grown around it, including power systems, access rights, fiber-infrastructure, maintenance firms, and leasing intermediaries, as well as the telecom business model that relied on physical, ground-based infrastructure. As the skies take over the signal path, the economic pillars of the old telecom world may no longer stand.
I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.
A diver approaches a sensing fiber-optic submarine cable beneath the icy waters of the North Atlantic, as a rusting cargo ship floats above and a submarine lurks nearby. The cable’s radiant rings symbolize advanced sensing capabilities, detecting acoustic, seismic, and movement signals. Yet, its exposure also reveals the vulnerability of subsea infrastructure to tampering, espionage, and sabotage, especially in geopolitically tense regions like the Arctic.
WHY WE NEED VISIBILITY INTO SUBMARINE CABLE ACTIVITY.
We can't protect what we can't measure. Today, we are mostly blind when it comes to our global submarine communications networks. We cannot state with absolute certainty whether critical parts of this infrastructure are already compromised by capable hostile state actors ready to press the button at an opportune time. If the global submarine cable network were to break down, so would the world order as we know it. Submarine cables form the "invisible" backbone of the global digital infrastructure, yet they remain highly vulnerable. Over 95% of intercontinental internet and data traffic traverses subsea cables (on the order of 25% of total internet traffic worldwide), but these critical assets lie largely unguarded on the ocean floor, exposed to environmental events, shipping activities, and, increasingly, geopolitical interference.
In 2024 and early 2025, multiple high-profile incidents involving submarine cable damage have occurred, highlighting the fragility of undersea communication infrastructure in an increasingly unstable geopolitical environment. Several disruptions affected strategic submarine cable routes, raising concerns about sabotage, poor seamanship, and hybrid threats, particularly in sensitive maritime corridors (e.g., Baltic Sea, Taiwan Strait, Red Sea, etc.).
As also discussed in my recent article ("What lies beneath"), one of the most prominent cases of subsea cable cuts occurred in November 2024 in the Baltic Sea, where two critical submarine cables, the East-West Interlink between Lithuania and Sweden, and the C-Lion1 cable between Finland and Germany, were damaged in close temporal and spatial proximity. The Chinese cargo vessel Yi Peng 3 was identified as having been in the vicinity during both incidents. During a Chinese-led probe, investigators from Sweden, Germany, Finland, and Denmark boarded the ship in early December. By March 2025, European officials expressed growing confidence that the breaks were accidental rather than acts of sabotage. In December 2024, also in the Baltic Sea, the Estlink 2 submarine power cable and two telecommunications cables operated by Elisa were ruptured. The suspected culprit was the Eagle S, an oil tanker believed to be part of Russia's "shadow fleet", a group of poorly maintained vessels that emerged after Russia's invasion of Ukraine to circumvent sanctions and transport goods covertly. These vessels are frequently operated by opportunists with little maritime training or seamanship, posing a growing risk to maritime-based infrastructure.
These recent incidents further emphasize the need for proactive monitoring and sensing tools applied to the submarine cable infrastructure. Today, more than 100 subsea cable outages are logged each year globally. Most are attributed to natural or unintentional human-related causes, including poor seamanship and poorly maintained vessels. Moreover, authorities have noted that, since Russia's full-scale invasion of Ukraine in 2022, the use of a "ghost fleet" of vessels, often in barely seaworthy condition and operated by underqualified or loosely regulated crews, has grown substantially in scope. These ships, which also appear to be used for hybrid operations or covert missions, operate under minimal oversight, raising the risk of both deliberate interference and catastrophic negligence.
As detailed in my article "What lies beneath", several particular cable-break signatures may be "fingerprints" of hybrid or hostile interference. These may include simultaneous localized cuts, unnaturally uniform damage profiles, and activity in geostrategic cable chokepoints, traits that appear atypical of commercial maritime incidents. One notable pattern is the absence of conventional warning signals (e.g., no seismic precursors and no known trawling vessels in the area), combined with rapid phase discontinuities captured in coherent signal traces from the few sensing-equipped submarine cables we have. Equally concerning is the geopolitical context. The Baltic Sea is a critical artery connecting Northern Europe's cloud infrastructure. Taiwan's subsea cables are vital to the global chip supply chain and financial systems. Disrupting these routes can create outsized geopolitical pressure, while allowing the hostile actor to maintain plausible deniability.
Modern sensing technologies now offer a pathway to detect and characterize such disturbances. Research by Mazur et al. (OFC 2024) has demonstrated real-time anomaly detection across transatlantic submarine cable systems. Their methodology could spot small mechanical vibrations and sudden cable stresses that precede an optical cable failure. Such sensing systems can be retrofitted onto existing landing stations, enabling authorities or cable operators to issue early alerts for potential sabotage or environmental threats.
Furthermore, continuous monitoring allows real-time threat classification, differentiating between earthquake-triggered phase drift and artificial localized cuts. Combined with AI-enhanced analytics and (near) real-time AIS (Automatic Identification System) information, these sensing systems can serve as a digital tripwire along the seabed, transforming our ability to monitor and defend strategic infrastructure.
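To illustrate the principle (and only the principle; this is not the method of Mazur et al.), the toy sketch below flags an injected phase discontinuity in a synthetic interferometric trace using a rolling z-score on the phase derivative.

```python
# Conceptual sketch of the kind of anomaly detection described above:
# flag abrupt phase discontinuities in a coherent/interferometric trace
# using a rolling z-score on the sample-to-sample phase derivative.
# Toy example on synthetic data, not the method of Mazur et al.
import numpy as np

rng = np.random.default_rng(0)
phase = np.cumsum(rng.normal(0, 0.01, 10_000))  # slow environmental drift
phase[6_000:] += 3.0                            # injected abrupt event

dphi = np.diff(phase)                           # phase derivative
window = 500
kernel = np.ones(window) / window
mean = np.convolve(dphi, kernel, mode="same")   # rolling mean
std = np.sqrt(np.convolve((dphi - mean) ** 2, kernel, mode="same"))
z = (dphi - mean) / np.maximum(std, 1e-12)      # rolling z-score

alerts = np.flatnonzero(np.abs(z) > 8)
print(f"anomalous samples near index: {alerts[:5]}")  # expect ~6000
```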
Without these capabilities, the subsea cable infrastructure landscape remains an operational blind spot, susceptible to exploitation in the next phase of global competition or geopolitical conflict. As threats evolve and hybrid tactics and actions increase, visibility into what lies beneath is not just advantageous; it is essential.
Illustration of a so-called Russian "ghost" vessel (e.g., a bulk carrier) dragging its stern anchor through a subsea optical communications cable. "Ghost vessel" is an informal term for a ship operating covertly or suspiciously, often without broadcasting its identity or location via the Automatic Identification System (AIS), the global maritime safety protocol that civilian ships must use.
ISLANDS AT RISK: THE FRAGILE NETWORK BENEATH THE WAVES.
Submarine fiber-optic cables form the “invisible” backbone of global connectivity, silently transmitting over 95% of international data traffic beneath the world’s oceans (note: intercontinental data traffic represents ~25% of the worldwide data traffic). These subsea cables are essential for everyday internet access, cloud services, financial transactions (i.e., over 10 billion euros daily), critical infrastructure operations, emergency response coordination, and national security. Despite their importance, they are physically fragile, vulnerable to natural disruptions such as undersea earthquakes, volcanic activity, and ice movement, as well as to human causes like accidental trawling, ship anchor drags, and even deliberate sabotage. A single cut to a key cable can isolate entire regions or nations from the global network, disrupt trade and governance, and slow or sever international communication for days or weeks.
This fragility becomes even more acute when viewed through the lens of island nations and territories. The figure below presents a comparative snapshot of various islands across the globe, illustrating the number of international subsea cable connections each has (blue bars), overlaid with population size in millions (orange). The disparity is striking: densely populated islands such as Taiwan, Sri Lanka, or Madagascar often rely on only a few cables, while smaller territories like Saint Helena or Gotland may have just a single connection to the rest of the world. These islands inherently depend on subsea infrastructure for access to digital services, economic stability, and international communication, yet many remain poorly connected or dangerously exposed to single points of failure. Some of these islands may be less important from a global security, geopolitical, and defense perspective. For their inhabitants, however, that will matter little, and some islands are of critical importance to a safe and secure world order.
The chart below underscores a critical truth. Island connectivity is not just a matter of bandwidth or speed but a matter of resilience. For many of the world’s islands, a break in the cable doesn’t just slow the internet; it severs the lifeline. Every additional cable significantly reduces systemic risk. For example, going from two to three cables can cut expected unavailability by more than 60–80%, and moving from three to four cables supports near-continuous availability, which is now required for modern economies and national security.
The bar chart shows the number of subsea cable connections, while the orange line represents each island’s population (plotted on a log-scale), highlighting disparities between connectivity and population density.
Reducing systemic risk means lowering the chance that a single point of failure, or a small set of failures, can cause a complete system breakdown. In the context of subsea cable infrastructure, systemic risk refers to the vulnerability that arises when a country’s or island’s entire digital connectivity relies on just one or two physical links to the outside world. With only two international submarine cables connecting a given island in parallel, the implied design target is up to ~13 minutes of total service loss per year (note: for a single cable, that would be ~2 days per year). This should be compared to the time it may take to repair a submarine cable and bring it back into operation after a cut, which may take weeks or even months, depending on the circumstances and location. Adding a third submarine cable (parallel to the other two) reduces the maximum expected total loss of service to ~4 seconds per year; the likelihood that all three would be compromised by naturally occurring incidents is very small (roughly one in ten million). Relying on only two submarine cables for an island’s entire international connectivity, at bandwidth-critical scale, is a high-stakes gamble. While dual-cable redundancy may offer sufficient availability on paper, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access. This represents both a technical fragility and a substantial security liability for an island economy and a digitally reliant society.
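For readers who want to reproduce the downtime figures above, the sketch below assumes independent cable failures and a per-cable availability of 99.5% (my assumption, chosen because it reproduces the quoted numbers); the independence assumption is precisely what correlated failures invalidate.

SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_downtime_s(per_cable_availability: float, n_cables: int) -> float:
    """Expected yearly downtime when all N parallel, independently
    failing cables are simultaneously unavailable."""
    unavailability = (1.0 - per_cable_availability) ** n_cables
    return unavailability * SECONDS_PER_YEAR

for n in (1, 2, 3):
    print(f"{n} cable(s): {annual_downtime_s(0.995, n):,.1f} s/year")
# 1 cable(s): 157,680.0 s/year  (~2 days)
# 2 cable(s): 788.4 s/year      (~13 minutes)
# 3 cable(s): 3.9 s/year        (~4 seconds)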
Suppose one cable is accidentally or deliberately damaged, with little or no redundancy. In that case, the entire system can collapse, cutting off internet access, disrupting communication, and halting financial and governmental operations. Reducing systemic risk involves increasing resilience through redundancy, ensuring the overall system continues functioning even if one or more cables fail. This also means not relying on only one type of connectivity, e.g., subsea cables or satellite alone. Combining different kinds of connectivity is critically important to safeguard continuous connectivity to the outside world from an island’s perspective, even if alternative or backup connectivity does not match the capacity of the primary link. Moreover, islands with relatively low populations tend to rely on one central terrestrial switching hub (typically at the main population center), without much meshed connectivity, exposing all communication on the island if that hub becomes compromised.
Submarine cables are increasingly recognized as strategic targets in a hybrid warfare or full-scale military conflict scenario. Deliberate severance of these cables, particularly in chokepoints, near shore landing zones (i.e., landing stations), or cable branching points, can be a high-impact, low-visibility tactic to cripple communications without overt military action.
Going from two to three (or three to four) subsea cables may offer some strategic buffer. If an attacker compromises one or even two links, the third can preserve some level of connectivity, allowing essential communications, coordination, and early warning systems to remain operational. This may reduce the impact window for disruption and provide authorities time to respond or re-route traffic. However, it is unlikely to make a substantial difference in a conflict scenario, where a capable hostile actor may easily compromise a relatively low number of submarine cable connections. Moreover, if the terrestrial network is exposed to a single point of failure via a central switching hub design, having multiple subsea connections may matter very little in a crisis situation.
And, think about it, there is no absolute guarantee that the world’s critical subsea infrastructure has not already been compromised by hostile actors. In fact, given the strategic importance of submarine cables and the increasing sophistication of state and non-state actors in hybrid warfare, it appears entirely plausible that certain physical and cyber vulnerabilities have already been identified, mapped, or even covertly exploited.
In short, the absence of evidence is not evidence of absence. While major nations and alliances like NATO have increased efforts to monitor and secure subsea infrastructure, the sheer scale and opacity of the undersea environment mean that strategic surprise is still possible (maybe even likely). It is also worth remembering that, historically and even today, most submarine cables operate in the dark: we rely on their redundancy and robustness, but we largely lack the sensory systems that would allow us to observe and proactively defend them in real time.
This is what makes submarine cable sensing technologies such a strategic frontier today and why resilience, through redundancy, sensing technologies, and international cooperation, is critical. We may not be able to prevent every act of sabotage, but we can reduce the risk of catastrophic failure and improve our ability to detect and respond in real time.
THE LIKELY SUSPECTS – THE CAPABLE HOSTILE ACTOR SEEN FROM A WESTERN PERSPECTIVE.
As observed in the Western context, Russia and China are considered the most capable hostile actors in submarine cable sabotage. China is reportedly advancing its ability to conduct such operations at scale. These developments underscore the growing need for technological defenses and multilateral coordination to safeguard global digital infrastructure.
Several state actors possess the capability and potential intent to compromise or destroy submarine communications networks. Among them, Russia is perhaps the most openly scrutinized. Its specialized naval platforms, such as the Yantar-class intelligence ships and deep-diving submersibles like the AS-12 “Losharik”, can access cables on the ocean floor for tapping or cutting purposes. Western military officials have repeatedly raised concerns about Russia’s activities near undersea infrastructure. For example, NATO has warned of increased Russian naval activity near transatlantic cable routes, viewing this as a serious security risk impacting nearly a billion people across North America and Western Europe.
China is also widely regarded as a capable actor in this domain. The People’s Liberation Army Navy (PLAN) and a vast network of state-linked maritime engineering firms possess sophisticated underwater drones, survey vessels, and cable-laying ships. These assets allow for potential cable mapping, interception, or sabotage operations. Chinese maritime activity around strategic chokepoints such as the South China Sea has raised suspicions of dual-use missions under the guise of oceanographic research.
Furthermore, credible reports and analyses suggest that China is developing methods and technologies that could allow it to compromise subsea cable networks at scale. This includes experimental systems enabling simultaneous disruption or surveillance of multiple cables. According to Newsweek, recent Chinese patents may indicate that China has explored ways to “cut or manipulate undersea cables” as part of its broader strategy for information dominance.
Other states, such as North Korea and Iran, may not possess full deep-sea capabilities but remain threats to regional segments, particularly shallow water cables and landing stations. With its history of asymmetric tactics, North Korea could plausibly disrupt cable links to South Korea or Japan. Meanwhile, Iran may threaten Persian Gulf routes, especially during heightened conflict.
While non-state actors are not typically capable of attacking deep-sea infrastructure directly, they could be used by state proxies or engage in sabotage at cable landing sites. These actors may exploit the relative physical vulnerability of cable infrastructure near shorelines or in countries with less robust monitoring systems.
Finally, it is not unthinkable that NATO countries possess the technical means and operational experience to compromise submarine cables if required. However, their actions are typically constrained by strategic deterrence, international law, and alliance norms. In contrast, Russia and China are perceived as more likely to use these capabilities to project coercive power or achieve geopolitical disruption under a veil of plausible deniability.
WE CAN’T PROTECT WHAT WE CAN’T MEASURE – WHAT IS THE SENSE OF SENSING SUBMARINE CABLES?
In the context of submarine fiber-optic cable connections, it should be clear that we cannot protect this critical infrastructure if we are blind to the environment around it and along the cables themselves.
While traditionally designed for high-capacity telecommunications, submarine optical cables are increasingly recognized as dual-use assets, serving civil and defense purposes. When enhanced with distributed sensing technologies, these cables can act as persistent monitoring platforms, capable of detecting physical disturbances along the cable routes in (near) real time.
From a defense perspective, sensing-enabled subsea cables offer a discreet, infrastructure-integrated solution for maritime situational awareness. Technologies such as Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing can detect anomalies like trawling activity, anchor dragging, undersea vehicle movement, or cable tampering, especially in coastal zones or strategic chokepoints like the GIUK gap or Arctic straits. When paired with AI-driven classification algorithms, these systems can provide early-warning alerts for hybrid threats, such as sabotage or unregistered diver activity near sensitive installations.
For critical infrastructure protection, these technologies play an essential role in real-time monitoring of cable integrity. They can detect:
Gradual mechanical strain due to shifting seabed or ocean currents,
Seismic disturbances that may precede physical breaks,
Ice loading or iceberg impact events in polar regions.
These sensing systems also enable faster fault localization. While they are not likely to prevent a cable from being compromised, whether by accidental impact or deliberate sabotage, they dramatically reduce the time required to identify the problem’s location. In traditional submarine cable operations, pinpointing a break can take days, especially in deep or remote waters. With distributed sensing, operators can localize disturbances within meters along thousands of kilometers of cable, enabling faster dispatch of repair vessels, route reconfiguration, and traffic rerouting.
Moreover, sensing technologies that operate passively or without interrupting telecom traffic, such as SOP sensing or C-OFDR, are particularly well suited for retrofitting onto existing brownfield infrastructure or deployment on dual-use commercial-defense systems. They offer persistent, covert surveillance without consuming bandwidth or disrupting service, an advantage for national security stakeholders seeking scalable, non-invasive monitoring solutions. As such, they are emerging as a critical layer in the defense of underwater communications infrastructure and the broader maritime domain.
We should remember that no matter how advanced our monitoring systems are, they are unlikely to prevent submarine cables from being compromised by natural events like earthquakes and icebergs or unintentional and deliberate human activity such as trawling, anchor strikes, or sabotage. However, the sensing technologies offer the ability to detect and localize problems faster, enabling quicker response and mitigation.
TECHNOLOGY OVERVIEW: SUBMARINE CABLE SENSING.
Modern optical fiber sensing leverages the cable’s natural backscatter phenomena, such as Rayleigh, Brillouin, and Raman effects, to extract environmental data from a subsea communications cable. The physics of these effects is briefly described at the end of this article.
In the following, I will provide a comparative outline of the major sensing technologies in use today or that may be deployed in future greenfield submarine fiber installations. Each method has trade-offs in spatial or temporal resolution, compatibility with existing infrastructure, cost, and robustness to background noise. We will focus on defense applications, particularly as applied to Arctic coastal environments such as those around Greenland. The relevance of each optical cable sensing technology described below to maritime defense will be summarized.
Some of the most promising sensing technologies today are based on the principles of Rayleigh scattering. For most sensing techniques, Rayleigh scattering is crucial in transforming standard optical cables into powerful sensor arrays without necessarily changing the physical cable structure. This makes it particularly valuable for submarine cable applications in the Arctic and strategic defense settings. By analyzing the light that bounces back from within the fiber, these systems enable (near) real-time monitoring of intrusions or seismic activity over vast distances, spanning thousands of kilometers. Importantly, promising techniques leverage Rayleigh scattering to function effectively even on legacy cable infrastructure, where installing additional reflectors would be impractical or uneconomical. Since Rayleigh-based sensing can be performed passively and non-invasively, it does not interfere with active data traffic, making it ideal for dual-use cables serving both communication and surveillance purposes. This approach offers a uniquely scalable and resilient way to enhance situational awareness and infrastructure defense in harsh or remote environments like the Arctic.
Before we get started on the various relevant sensing technologies, let us briefly discuss what we mean by a sensing technology’s performance and its sensing capability, that is, how well it can detect, localize, and classify physical disturbances, such as vibration, strain, acoustic pressure, or changes in light polarization, along a fiber-optic cable. Performance is typically judged by parameters like spatial resolution, detection range, sensitivity, signal-to-noise ratio, and the system’s ability to operate in noisy or variable environments. In the context of submarine detection, these disturbances are often caused by acoustic signals generated by vessel propulsion, machinery noise, or pressure waves from movement through the water. While the fiber does not measure sound pressure directly, it can detect the mechanical effects of those acoustic waves, such as tiny vibrations or refractive index changes in the surrounding seabed or cable sheath. Some technologies detect these vibrations as phase shifts in backscattered light; others track subtle polarization changes induced by environmental stress on the subsea optical cable (as a result of an event in the proximity of the cable). A sensing system is considered effective when it can capture and resolve these indirect signatures of underwater activity with enough fidelity to enable actionable interpretation, especially in complex environments like coastal Arctic zones or the deep ocean.
In underwater acoustics, sound is measured in units of decibels relative to 1 micro Pascal, expressed as “dB re 1 µPa”, which defines a standard reference pressure level. The notation “dB re 1 µPa @ 1 m” refers to the sound pressure level of an underwater source, expressed in decibels relative to 1 micro Pascal and measured at a standard distance of one meter from the source. This metric quantifies how loud an object, such as a submarine, diver, or vessel, sounds when observed at close range, and is essential for modeling how sound propagates underwater and estimating detection ranges. In contrast, noise floor measurements use “dB re 1 µPa/√Hz,” which describes the distribution of background acoustic energy across frequencies, normalized per unit bandwidth. While source level describes how powerful a sound is at its origin, noise floor values indicate how easily such a sound could be detected in a given underwater environment.
Measurements are often normalized to bandwidth to assess sound or noise frequency characteristics, using “dB re 1 µPa/√Hz”. For example, stating a noise level of 90 dB re 1 µPa/√Hz in the 10 to 1000 Hz band means that within that frequency range, the acoustic energy is distributed at an average pressure level referenced per square root of Hertz. This normalization allows fair comparison of signals or noise across different sensing bandwidths. It helps determine whether a signal, such as a submarine’s acoustic signature, can be detected above the background noise floor. The effectiveness of a sensing technology is ultimately judged by whether it can resolve these types of signals with sufficient clarity and reliability for the specific use case.
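As a worked example of this normalization, here is a minimal snippet converting the flat 90 dB re 1 µPa/√Hz spectral level above into a total band level over 10 to 1000 Hz.

import math

def band_level_db(spectral_db: float, f_lo_hz: float, f_hi_hz: float) -> float:
    """Total band level from a flat spectral density:
    L_band = L_spec + 10 * log10(bandwidth)."""
    return spectral_db + 10.0 * math.log10(f_hi_hz - f_lo_hz)

print(f"{band_level_db(90.0, 10.0, 1000.0):.1f} dB re 1 uPa")  # ~120.0 dB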
In the mid-latitude Atlantic Ocean, typical noise floor levels range between 85 and 105 dB re 1 µPa/√Hz in the 10 to 1000 Hz frequency band. This environment is shaped by intense shipping traffic, consistent wave action, wind-generated surface noise, and biological sources such as whales. Noise levels are generally higher near busy shipping lanes and during storms, which raises the acoustic background and makes it more challenging to detect subtle events such as diver activity or low-signature submersibles (e.g., a ballistic missile submarine, SSBN). In such settings, sensing techniques must operate with high signal-to-noise-ratio thresholds, often requiring filtering, focusing on specific narrow frequency bands, and enhancement by machine-learning applications.
On the other hand, the Arctic coastal environment, such as the waters surrounding Greenland, is markedly quieter than, for example, the Atlantic Ocean. Here, the noise floor typically falls between 70 and 95 dB re 1 µPa/√Hz, and in winter, when sea ice covers the surface, it can drop even lower to around 60 dB. In these conditions, noise sources are limited to occasional vessel traffic, wind-driven surface activity, and natural phenomena such as glacial calving or ice cracking. The seasonal nature of Arctic noise patterns means that the acoustic environment is especially quiet and stable during winter, creating ideal conditions for detecting faint mechanical disturbances. This quiet background significantly improves the detectability of low-amplitude events, including the movement of stealth submarines, diver-based tampering, or UUV (i.e., unmanned underwater vehicles) activity.
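To illustrate how these noise floors translate into detection opportunity, here is a first-order sketch based on the classical passive sonar equation, SL − TL ≥ NL + DT. The 140 dB source level, 10 dB detection threshold, and pure geometric-spreading loss are illustrative assumptions on my part, not measured values for any platform or sensing system.

import math

def passive_detection_range_m(source_db_at_1m: float,
                              noise_spectral_db: float,
                              bandwidth_hz: float,
                              detection_threshold_db: float = 10.0,
                              spreading_coeff: float = 20.0) -> float:
    """First-order range from SL - TL >= NL + DT with geometric
    spreading TL = k * log10(r) (k=20: spherical; absorption ignored).
    The noise floor is given as a spectral level (dB re 1 uPa/sqrt(Hz))
    and integrated over the band."""
    noise_band_db = noise_spectral_db + 10.0 * math.log10(bandwidth_hz)
    signal_excess = source_db_at_1m - noise_band_db - detection_threshold_db
    return 10.0 ** (signal_excess / spreading_coeff)

# Hypothetical 140 dB re 1 uPa @ 1 m broadband source, 10-1000 Hz band:
print(passive_detection_range_m(140, 95, 990))  # Atlantic: ~2 m (buried in noise)
print(passive_detection_range_m(140, 60, 990))  # Arctic winter: ~100 m

The numbers are crude, but they make the point quantitatively: the same source that is hopelessly buried in mid-Atlantic noise becomes detectable at cable-relevant standoff distances in a quiet Arctic winter.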
Distributed Acoustic Sensing (DAS) uses phase-sensitive optical time-domain reflectometry (φ-OTDR) to detect acoustic vibrations and dynamic strain in general. Dynamic strain may arise from seismic waves or mechanical impacts along an optical fiber path. DAS allows for structural monitoring at a resolution of ca. 10 meters over typical distances of 10 to 100 kilometers with amplification (extendable with more amplifiers). It is an active sensor technology. DAS can be installed on shorter submarine cables (e.g., less than 100 km), although installing it on a brownfield subsea cable is relatively complex. For long submarine cables (e.g., transatlantic), DAS would be deployed greenfield in conjunction with the subsea cable rollout, as retrofitting an existing fiber installation would be impractical.
Phase-sensitive optical time domain reflectometry is a sensing technique that allows an optical fiber, like those used in subsea cables, to act like a long string of virtual microphones or vibration sensors. The method works by sending short pulses of laser light into the fiber and measuring the tiny reflections that bounce back due to natural imperfections inside the glass. When there is no activity near the cable, the backscattered light has a stable pattern. But when something happens near the cable, like a ship dragging an anchor, seismic shaking, or underwater movement, those vibrations cause tiny changes in the fiber’s shape. This physically stretches or compresses the fiber, changing the phase of the light traveling through it. φ-OTDR is specially designed to be sensitive to these phase changes. What is being detected, then, is not a “sound” per se, but a tiny change in the timing (phase) of the light as it reflects back. These phase shifts happen because mechanical energy from the outside world, like movement, stress, or pressure, slightly changes the length of the fiber or its refractive properties at specific points. φ-OTDR is ideal for detecting vibrations, like footsteps (yes, the technique also works on terra firma), vehicle movement, or anchor dragging. It is best suited for acoustic sensing over relatively long distances with moderate resolution.
So, in simple terms:
The “event” is not inside the fiber but in sufficient vicinity to cause a reaction in the fiber.
That external event causes micro-bending or stretching of the fiber.
The fiber cable’s mechanical deformation changes the phase of light that is then detected.
The sensing system uses these changes to pinpoint where along the fiber the event happened, often with meter-scale precision (see the sketch below).
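A minimal numeric sketch of this time-of-flight localization, assuming standard single-mode fiber constants (group index ~1.468); the arithmetic, not any vendor’s implementation, is the point.

C = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468     # typical group index of standard single-mode fiber

def event_position_m(round_trip_s: float) -> float:
    """Backscatter travels out and back, so halve the round-trip time."""
    return (C / N_GROUP) * round_trip_s / 2.0

def spatial_resolution_m(pulse_width_s: float) -> float:
    """Resolution is set by the probe pulse length in the fiber."""
    return (C / N_GROUP) * pulse_width_s / 2.0

print(f"{event_position_m(500e-6) / 1e3:.1f} km")  # 500 us round trip -> ~51 km
print(f"{spatial_resolution_m(100e-9):.1f} m")     # 100 ns pulse -> ~10 m

The second line recovers the ~10-meter resolution quoted for DAS above: it follows directly from a typical 100 ns probe pulse.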
DAS has emerged as a powerful tool for transforming optical fibers into real-time acoustic sensor arrays, capable of detecting subtle mechanical disturbances such as vibrations, underwater movement, or seismic waves. While this capability is very attractive for defense and critical infrastructure monitoring, its application across existing long-haul subsea cables, particularly transoceanic systems, is severely constrained. The technology requires dark fibers or at least isolated, unused wavelengths, which are generally unavailable in (older) operational submarine systems already carrying high-capacity data traffic. Moreover, most legacy subsea cables were not designed with DAS compatibility in mind, lacking the bidirectional amplification or optical access points required to maintain sufficient signal integrity for acoustic sensing over long distances.
Retrofitting existing transatlantic or pan-Arctic submarine cables for DAS would be technically complex and, in most scenarios, economically unfeasible. These systems span thousands of kilometers, are deeply buried or armored along parts of their route, and incorporate in-line repeaters that do not support the backscattering reflection needed for DAS. As a result, implementing DAS across such long-haul infrastructure would entail replacing major cable components or deploying parallel sensing fibers, both options likely inconsistent with the constraints of an already-deployed system. If this kind of sensing capability is deemed strategically necessary, it may be operationally much less complex and more economical to deploy a greenfield cable with embedded sensing technology, particularly where the existing submarine cable is 10 years old or older.
Despite these limitations, DAS offers significant potential for defense applications over shorter submarine segments, particularly near coastal landing points or within exclusive economic zones. One promising use case involves the Arctic and sub-Arctic regions surrounding Greenland. As geopolitical interest in the Arctic intensifies and ice-free seasons expand, the cables that connect Greenland to Iceland, Canada, and northern Europe will increasingly represent strategic infrastructure. DAS could be deployed along these shorter subsea spans, especially within fjords, around sensitive coastal bases, or in narrow straits, to monitor for hybrid threats such as diver incursions, submersible drones, or anchor dragging from unauthorized vessels. Greenland’s coastal cables often traverse relatively short distances without intermediate amplifiers and with accessible routes, making them more amenable to partial DAS coverage, especially if dark fiber pairs or access points exist at the landing stations.
The technology can be integrated into the infrastructure in a greenfield context, where new submarine cables are being designed and laid out. This includes reserving fiber strands exclusively for sensing, installing bidirectional optical amplifiers compatible with DAS, and incorporating coastal and Arctic-specific surveillance requirements into the architecture. For example, new Arctic subsea cables could be designed with DAS-enabled branches that extend into high-risk zones, allowing for passive real-time monitoring of marine activity without deploying sonar arrays or surface patrol assets (and, unlike active sonar, without revealing to, say, a ballistic missile submarine that it has been detected).
DAS also supports geophysical and environmental sensing missions relevant to Arctic defense. When deployed along the Greenlandic shelf or near tectonic fault lines, DAS can contribute to early-warning systems for undersea earthquakes, landslides, or ice-shelf collapse events. These capabilities enhance environmental resilience and strengthen military situational awareness in a region where traditional sensing infrastructure is sparse.
DAS is best suited for detecting mid-to-high frequency acoustic energy, such as propeller cavitation or hull vibrations. However, stealth submarines may not produce strong enough vibrations to be detected unless they operate close to the fiber (e.g., <1 km) or in shallow water where coupling to the seabed is enhanced. Detection is plausible under favorable conditions but uncertain in deep-sea environments. However, in shallow Greenlandic coastal waters, DAS may detect a submarine’s acoustic wake, cavitation onset, or low-frequency hull vibrations, especially if the vessel passes within several hundred meters of the fiber.
Deploying φ-OTDR on brownfield submarine cables requires minimal infrastructure changes, as the sensing system can be installed directly at the landing station using a dedicated or wavelength-isolated fiber. However, its effective sensing range is limited to the segment between the landing station and the first in-line optical amplifier, typically around 80 to 100 kilometers. This limitation exists because standard submarine amplifiers are unidirectional and amplify the forward-traveling signal only. They do not support the return of backscattered light required by φ-OTDR, effectively cutting off sensing beyond the first repeater in brownfield systems. Even in a greenfield deployment, φ-OTDR is fundamentally constrained by weak backscatter, incoherent detection, poor long-distance SNR, and amplifier design, making it a technology mainly for coastal environments.
Coherent Optical Frequency Domain Reflectometry (C-OFDR) employs continuous-wave frequency-chirped laser probe signals and measures how the interference pattern of the reflected light changes (i.e., coherent detection). It offers high resolution (i.e., 100–200 meters) and, for telecom-grade implementations, long-range sensing (i.e., hundreds of km), even over legacy submarine cables without Bragg gratings (i.e., periodic variations of the refractive index of the fiber). It is an active sensor technology. C-OFDR is one of the most promising techniques for high-resolution distributed sensing over long distances (e.g., transatlantic distances), and it can, in fact, be used on existing operational subsea cables without any special modifications to the cable itself, although with some practical considerations on older systems and limitations due to a reduced dynamic range. However, this sensing technology does require coherent detection systems with narrow-linewidth lasers and advanced DSP, which might make brownfield integration complex without significant upgrades. In contrast, greenfield deployments can seamlessly incorporate C-OFDR by leveraging the coherent optical infrastructure already standard in modern long-haul submarine cables. The C-OFDR technique, like φ-OTDR, relies on sensing changes in the light’s properties as it is reflected from imperfections in the fiber-optic cable (i.e., Rayleigh backscattering). When something (an “event”) happens near the fiber, like the ground shaking from an earthquake, an anchor hitting the seabed, or a temperature change, the optical fiber experiences microscopic stretching, squeezing, or vibration. These tiny changes affect how the light reflects back; specifically, they change the phase and frequency of the returning signal. C-OFDR uses interferometry to measure these small differences very precisely. It is important to understand that the “event” is not inside the fiber; rather, its effects cause changes to the fiber that can be measured by our chosen sensing technique. External forces (like pressure or motion) cause strain or stress in the glass fiber, which changes how the light moves inside. C-OFDR detects those changes and tells you where along the cable they happened, sometimes within a few centimeters.
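For intuition on how C-OFDR turns frequencies into positions, the sketch below shows the basic chirp arithmetic, assuming a linear frequency sweep and standard fiber constants; telecom-grade systems layer coherent detection and heavy DSP on top of this.

C = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468     # typical group index of standard single-mode fiber

def ofdr_distance_m(beat_hz: float, sweep_rate_hz_per_s: float) -> float:
    """A reflection at distance z beats against the outgoing chirp at
    f_beat = gamma * (2 * n * z / c), so z = c * f_beat / (2 * n * gamma)."""
    return C * beat_hz / (2.0 * N_GROUP * sweep_rate_hz_per_s)

def ofdr_resolution_m(sweep_span_hz: float) -> float:
    """Two-point resolution is set by the optical frequency span:
    dz = c / (2 * n * B)."""
    return C / (2.0 * N_GROUP * sweep_span_hz)

print(f"{ofdr_distance_m(1e6, 1e12) / 1e3:.0f} km")  # 1 MHz beat @ 1 THz/s -> ~102 km
print(f"{ofdr_resolution_m(1e6):.0f} m")             # 1 MHz span -> ~102 m

The second print shows where the 100–200 meter resolution regime comes from: an effective frequency span on the order of 1 MHz yields a two-point resolution of roughly 100 meters.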
Deploying C-OFDR on brownfield submarine cables is more challenging, as it typically requires more changes to the landing station, such as coherent transceivers with narrow-linewidth lasers and high-speed digital signal processing, which are normally not present in legacy landing stations. Even if such equipment is added at the landing station, sensing may, as with φ-OTDR, be limited to the segment up to the first in-line amplifier unless the system is modified as shown in the work by Mazur et al. Compared to φ-OTDR, C-OFDR leverages coherent receivers, DSP, and telecom-grade infrastructure to overcome those barriers, making it a very relevant long-haul subsea cable sensing technology.
An interesting paper using a modified C-OFDR technique, “Continuous Distributed Phase and Polarization Monitoring of Trans-Atlantic Submarine Fiber Optic Cable” by Mazur et al., demonstrates a powerful proof of concept for using existing long-haul submarine telecom cables, equipped with more than 70 amplifiers, for real-time environmental sensing without interrupting data transmission. The authors used a prototype system combining a fiber laser, FPGA (Field-Programmable Gate Array), and GPU (Graphics Processing Unit) to perform long-range optical frequency domain reflectometry (C-OFDR) over a 6,500 km transatlantic submarine cable. By measuring phase and polarization changes between repeaters, they successfully detected a 6.4 magnitude earthquake near Ferndale, California, showing the seismic wave propagating in real time from the US West Coast across North America until it was eventually observed in the Atlantic Ocean. Furthermore, they demonstrated deep-sea temperature measurements by analyzing round-trip time variations along the full cable spans. The system operated for over two months without service interruptions, underscoring the feasibility of repurposing submarine cables as large-scale oceanic sensing arrays for geophysical and defense applications. The system’s ability to monitor deep-sea environmental variations, such as temperature changes, contributes to situational awareness in remote oceanic regions like the Arctic or the Greenland-Iceland-UK (GIUK) Gap, areas of increasing strategic importance. It is worth noting that while the basic structure of the cable (in terms of span length and repeater placement) is standard for long-haul subsea systems, what sets this cable apart is the integration of a non-disruptive monitoring system that leverages existing infrastructure for advanced environmental sensing, a capability not found in most subsea systems deployed purely for telecom.
Furthermore, using C-OFDR and polarization-resolved sensing (SOP) without disrupting live telecommunications traffic provides a discreet means of monitoring infrastructure. This is particularly advantageous for covert surveillance of vital undersea routes. Finally, the system’s fine-grained phase and polarization diagnostics have the potential to detect disturbances such as anchor drags, unauthorized vessel movement, or cable tampering, activities that may indicate hybrid threats or espionage. These features position the technology as a promising enabler for real-time intelligence, surveillance, and reconnaissance (ISR) applications over existing subsea infrastructure.
C-OFDR is very sensitive over long distances and, when optimized with narrowband probing, may detect subtle refractive index changes caused by waterborne pressure variations. While more robust than DAS at long range, its ability to resolve weak, broadband submarine noise signatures remains speculative and would likely require AI-based classification. In Greenland, C-OFDR might be able to detect subtle pressure variations or cable stress caused by passing submarines, but only if the cable is close to the source.
Phase-based sensing, which φ-OTDR belongs to, is an active sensing technique that tracks the phase variation of optical signals for precise mechanical event detection. It requires narrow linewidth lasers and sensitive DSP algorithms. In phase-based sensing, we send very clean, stable light from a narrow-linewidth laser through the fiber cable. We then measure how the phase of that light changes as it travels. These phase shifts are incredibly sensitive to tiny movements, smaller than a wavelength of light. As discussed above, when the fiber is disturbed, even just a little, the light’s phase changes, which is what the system detects. This sensing technology offers a theoretical spatial resolution of 1 meter and is currently expected to be practical over distances less than 10 kilometers. In general, phase-based sensing is a broader class of fiber-optic sensing methods that detect optical phase changes caused by mechanical, thermal, or acoustic disturbances.
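To see why phase-based sensing is so extraordinarily sensitive, consider the back-of-the-envelope calculation below. The photoelastic scaling factor of ~0.78 for silica fiber is a commonly cited textbook value, used here as an assumption; exact numbers depend on the fiber and cable construction.

import math

WAVELENGTH_M = 1550e-9   # standard telecom C-band wavelength
N_GROUP = 1.468          # typical group index of single-mode fiber
XI = 0.78                # photoelastic scaling factor for silica (assumed)

def phase_shift_rad(strain: float, gauge_length_m: float) -> float:
    """Optical phase change over a stretched fiber section:
    dphi = (2 * pi * n * L / lambda) * strain * xi."""
    return 2 * math.pi * N_GROUP * gauge_length_m * strain * XI / WAVELENGTH_M

# A single nanostrain (1e-9) over a 10 m gauge section:
print(f"{phase_shift_rad(1e-9, 10.0):.3f} rad")  # ~0.046 rad, readily measurable

A nanostrain corresponds to stretching a 10-meter section of glass by just 10 nanometers, yet it produces a phase change that modern interferometric receivers resolve comfortably.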
Phase-based sensing technologies detect sub-nanometer variations in the phase of light traveling through an optical fiber, offering exceptional sensitivity to mechanical disturbances such as vibrations or pressure waves. However, its practical application over the existing installed base of submarine cable infrastructure remains extremely limited. Some of the more advanced implementations are largely confined to laboratory settings due to the need for narrow-linewidth lasers, high-coherence probe sources, and low-noise environments. These conditions are difficult to achieve across real-world subsea spans, especially those with optical amplifiers and high traffic loads. These technical demands make retrofitting phase-based sensing onto operational subsea cables impractical, particularly given the complexity of accessing in-line repeaters and the susceptibility of phase measurements to environmental noise. Still, as the technology matures and can be adapted to tolerate noisy and lossy environments, it could enable ultra-fine detection of small-scale events such as underwater cutting tools, diver-induced vibrations, or fiber tampering attempts.
In a defense context, phase-based sensing might one day be used to monitor high-risk cable landings or militarized undersea chokepoints where detecting subtle mechanical signatures could provide an early warning of sabotage or surveillance activity. Its extraordinary resolution could also contribute to low-profile detection of seabed motion near sensitive naval installations. While not yet field-deployable at scale, it represents a promising frontier for future submarine sensing systems in strategic environments, typically in proximity to coastal areas.
Coherent MIMO Distributed Fiber Sensing (DFS) is another cutting-edge active sensing technique belonging to the phase-based sensing family that uses polarization-diverse probing for spatially-resolved sensing on deployed multi-core fibers (MCF), enabling robust, high-resolution environmental mapping. This technology remains currently limited to laboratory environments and controlled testbeds, as the widespread installed base of submarine cables does not use MCF and lacks the transceiver infrastructure required to support coherent MIMO interrogation. Retrofitting existing subsea systems with this capability would require complete replacement of the fiber plant, making it infeasible for legacy infrastructure, but potentially interesting for greenfield deployments.
Despite these limitations, the future application of Coherent MIMO DFS in defense contexts is compelling. Greenfield deployments, such as new Arctic cables or secure naval corridors, could enable real-time acoustic and mechanical activity mapping across multiple parallel cores, offering spatial resolution that rivals or exceeds existing sensing platforms. This level of precision could support the detection and classification of complex underwater threats, including stealth submersibles or distributed tampering attempts. With further development, it might also support wide-area surveillance grids embedded directly into the fiber infrastructure of critical sea lanes or military installations. While not deployable on today’s global cable networks, it represents a next-generation tool for submarine situational awareness in future defense-grade fiber systems.
State of Polarization (SOP) sensing technology detects changes in light polarization caused by environmental disturbances to a submarine optical cable. It can be implemented passively using existing coherent transceivers and thus works on existing operational submarine cables. SOP sensing does not offer spatial resolution by default. However, it has very high temporal sensitivity, on a millisecond level, allowing it to resolve temporally localized SOP anomalies that are often precursors of a structurally compromised submarine cable. SOP sensing provides timely and actionable information for applications like cable-break prediction, anomaly detection, and hybrid-threat alerts, even without pinpoint spatial resolution; in some cases, the temporal signature can be mapped back to the compromised physical location to within tens of kilometers. SOP sensing can cover thousands of kilometers of a submarine system.
SOP sensing provides path-integrated information about mechanical stress or vibration. While it lacks spatial resolution, it could register anomalous polarization disturbances along Arctic cable routes that coincide with suspected submarine activity. Even globally integrated SOP anomalies may be suspicious in Greenland’s sparse traffic environment, but localizing the source would remain challenging. Combined with C-OFDR, however, SOP sensing could offer both a temporal and a spatial picture: SOP provides fast, passive temporal detection, while C-OFDR (or DAS) delivers spatial resolution and event classification. The combination may offer a more robust and operationally viable architecture for strategic subsea sensing, suitable for civilian and defense applications across existing and future cable systems.
Deploying SOP-based sensing on brownfield submarine cables requires no changes to the cable infrastructure or landing stations. It passively monitors changes in the state of polarization at the transceiver endpoints. However, this method does not provide spatial resolution and cannot localize events along the cable. It also does not rely on backscatter, and therefore its sensing capability is not limited by the presence of amplifiers, unlike φ-OTDR or C-OFDR. The limitation, instead, is that SOP sensing provides only a global, integrated signal over the entire fiber span, making it effective for detecting disturbances but not pinpointing their location.
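A minimal sketch of the kind of signal SOP monitoring works with: the angular speed of the polarization state on the Poincaré sphere, computed from the Stokes-vector samples a coherent transceiver already tracks internally. The sampling rate and any alert threshold applied to the output are hypothetical choices.

import numpy as np

def sop_rotation_rate_rad_s(stokes: np.ndarray, fs_hz: float) -> np.ndarray:
    """Angular speed of the state of polarization on the Poincare sphere,
    from a time series of Stokes vectors (shape N x 3) sampled at fs_hz.
    Millisecond-scale spikes in this signal are the temporally localized
    anomalies that SOP monitoring flags as possible mechanical disturbance."""
    s = stokes / np.linalg.norm(stokes, axis=1, keepdims=True)
    cos_angle = np.clip(np.einsum("ij,ij->i", s[1:], s[:-1]), -1.0, 1.0)
    return np.arccos(cos_angle) * fs_hz

Because the output is a single path-integrated time series, it tells you when the cable was disturbed with millisecond resolution, but not where, which is exactly the complementarity with C-OFDR discussed above.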
Table: Performance characteristics of key optical fiber sensing technologies for subsea applications. The table summarizes spatial resolution, operational range, minimum detectable sound levels, activation state, and compatibility with existing subsea cable infrastructure. Values reflect current best estimates and lab performance where applicable, highlighting trade-offs in detection sensitivity and deployment feasibility across sensing modalities. Range depends heavily on system design. While traditional C-OFDR typically operates over short ranges (<100 m), advanced variants using telecom-grade coherent receivers may extend reach to 100s of km at lower resolution. This table, as well as the text, considers the telecom-grade variant of C-OFDR.
Beyond the sensing technologies already discussed, such as DAS (including φ-OTDR), C-OFDR, SOP, and Coherent MIMO DFS, several additional, lesser-known sensing modalities can be deployed on or alongside submarine cables. These systems differ in physical mechanisms, deployment feasibility, and sensitivity, and while some remain experimental, others are used in niche environmental or energy-sector applications. Several of these have implications for defense-related detection scenarios, including submarine tracking, sabotage attempts, or unauthorized anchoring, particularly in strategically sensitive Arctic regions like Greenland’s West and East Coasts.
One such system is Brillouin-based distributed sensing, including Brillouin Optical Time Domain Analysis (BOTDA) and Brillouin Optical Time Domain Reflectometry (BOTDR). These methods operate by sending pulses down the fiber and analyzing the Brillouin frequency shift, which varies with temperature and strain. The spatial resolution is typically between 0.5 and 1 meter, and the sensing range can extend to 50 km under optimized conditions. The system’s strength is detecting slow-moving structural changes, such as seafloor deformation, tectonic strain, or sediment pressure buildup. However, because the Brillouin interaction is weak and slow to respond, it is poorly suited for real-time detection of fast or low-amplitude acoustic events like those produced by a stealth submarine or diver. Anchor dragging might be detected, but only if it results in significant, sustained strain in the cable. These systems could be modestly effective in shallow Arctic shelf environments, such as Greenland’s west coast, but they are not viable for real-time defense monitoring.
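The frequency-shift arithmetic behind Brillouin sensing is simple enough to sketch. The coefficients below are rule-of-thumb values for standard single-mode fiber at 1550 nm and should be treated as illustrative; exact numbers vary by fiber type.

# Rule-of-thumb Brillouin coefficients (illustrative, fiber-dependent).
NU_B_REF_GHZ = 10.8          # Brillouin shift at the reference state
C_TEMP_MHZ_PER_K = 1.0       # ~1 MHz per Kelvin
C_STRAIN_MHZ_PER_UE = 0.05   # ~0.05 MHz per microstrain

def brillouin_shift_ghz(delta_temp_k: float, strain_ue: float) -> float:
    """Predicted Brillouin frequency for a temperature offset (K) and
    axial strain (microstrain) relative to the reference state."""
    delta_mhz = C_TEMP_MHZ_PER_K * delta_temp_k + C_STRAIN_MHZ_PER_UE * strain_ue
    return NU_B_REF_GHZ + delta_mhz / 1e3

# 100 microstrain of sustained anchor-drag loading at constant temperature:
print(f"{brillouin_shift_ghz(0.0, 100.0):.4f} GHz")  # 10.8050 GHz (+5 MHz)

The tiny +5 MHz shift for a substantial 100 microstrain load illustrates the point above: Brillouin systems suit slow, sustained structural changes, not fast, low-amplitude acoustic events.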
Another temperature-focused method is Raman-based distributed temperature sensing (DTS). This technique analyzes the ratio of Stokes and anti-Stokes backscatter to detect temperature changes along the fiber, with spatial resolution typically on the order of 1 meter and ranges up to 10–30 km. Raman DTS is widely used in the oil and gas industry for downhole monitoring, but it is not optimized for dynamic or mechanical disturbances. It offers little utility in detecting fast-moving threats like submarines or divers, or events such as anchor drag, unless these lead to secondary thermal effects. What it can detect are slow thermal anomalies caused by prolonged contact, buried tampering devices, or gradual sediment buildup; thus, it may serve as a background “health monitor” for defense-relevant subsea critical infrastructure. Its enabling mechanism is Raman scattering, which is even weaker than Rayleigh and Brillouin scattering, likely making this sensor technology unsuitable for Arctic defense applications. Moreover, the cold and thermally stable Arctic seabed provides a limited dynamic range for temperature-induced sensing.
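For completeness, here is the idealized Raman DTS temperature inversion from the anti-Stokes/Stokes intensity ratio. It deliberately omits the differential-attenuation correction that field systems must apply, and the ~440 cm⁻¹ Raman shift is the standard silica value, used here as an assumption.

import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K
DNU_M1 = 4.4e4       # Raman shift of silica, ~440 cm^-1 expressed in m^-1

def temperature_k(ratio_as_over_s: float,
                  lambda_s_m: float, lambda_as_m: float) -> float:
    """Invert R(T) = (lambda_s/lambda_as)^4 * exp(-h*c*dnu/(k*T))
    for absolute temperature (idealized, no attenuation correction)."""
    geom = (lambda_s_m / lambda_as_m) ** 4
    return H * C * DNU_M1 / (K * math.log(geom / ratio_as_over_s))

# A ratio of ~0.176 at ~1656/1444 nm inverts to ~277 K (a ~4 C seabed):
print(f"{temperature_k(0.176, 1656e-9, 1444e-9):.0f} K")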
A more advanced but experimental method is optical frequency comb (OFC)-based sensing, which uses an ultra-stable frequency comb to probe changes in fiber length and strain with sub-picometer resolution. This offers unparalleled spatial granularity (down to millimeters) and could, in theory, detect subtle refractive index changes induced by acoustic coupling or mechanical perturbation. However, range is limited to short spans (<10 km), and implementation is complex and not yet field-viable. This technology might detect micro-vibrations from nearby submersibles or diver-induced strain signatures in a future defense-grade network, especially greenfield deployments in Arctic coastal corridors. The physical mechanism is interferometric phase detection, amplified by comb coherence and time-of-flight mapping. Frequency comb-based techniques could be the foundation for a next-generation submarine cable monitoring system, especially in greenfield defense-focused coastal deployments requiring excellent spatial resolution under variable environmental conditions. Unlike traditional reflectometry or phase sensing, a laser frequency comb should be able to maintain calibrated performance in fluctuating Arctic environments, where salinity and temperature affect the refractive index dramatically, a key benefit for Greenlandic and Arctic deployments.
Another emerging direction is Integrated Sensing and Communication (ISAC), where linear frequency-modulated sensing signals are embedded directly into the optical communication waveform. This approach avoids dedicated dark fiber and can achieve moderate spatial resolution (~100–500 meters) with ranges of up to 80 km using coherent receivers. ISAC has been proposed for simultaneous data transmission and distributed vibration sensing. In Arctic coastal areas, where telecom capacity may be underutilized and infrastructure redundancy is limited, ISAC could enable non-invasive monitoring of anchor strikes or structural cable disturbances. It may not detect quiet submarines unless direct coupling occurs, but it could potentially flag diver-based sabotage or hybrid threats that cause physical cable contact.
Lastly, hybrid systems combining external sensor pods, such as tethered hydrophones, magnetometers, or pressure sensors, with submarine cables are deployed in specialized ocean observatories (e.g., NEPTUNE Canada). These use the cable for power and telemetry and offer excellent sensitivity for detecting underwater acoustic and geophysical events. However, they require custom cable interfaces, increased power provisioning, and are not easily retrofitted to commercial or legacy submarine systems. In Arctic settings, such systems could offer unparalleled awareness of glacier calving, seismic activity, or vessel movement in chokepoints like the Kangertittivaq (i.e., Scoresby Sund) or the southern exit of Baffin Bay (i.e., Avannaata Imaa). The main limitation of hybrid systems lies in their cost and the need for local infrastructure support. The economics relative to such systems’ benefits requires careful consideration compared to more conventional maritime sensor architectures.
DEFENSE SCENARIOS OF CRITICAL SUBSEA CABLE INFRASTRUCTURE.
Submarine cable infrastructure is increasingly recognized not only as a medium for data transmission but also as a platform for environmental and security monitoring. With the integration of advanced optical sensing technologies, these cables can detect and interpret physical disturbances across vast underwater distances. This capability opens up new opportunities for national defense, situational awareness, and infrastructure resilience, particularly in coastal and Arctic regions where traditional surveillance assets are limited. The following section outlines how different sensing modalities, such as DAS, C-OFDR, SOP, and emerging MIMO DFS, can support key operational objectives ranging from seismic early warning to hybrid threat detection. Each scenario reflects a unique combination of acoustic signature, environmental setting, and technological suitability.
Intrusion Detection: Detect tampering, trawling, or vehicle movement near cables in coastal zones.
Seismic Early Warning: Monitor undersea earthquakes with high fidelity, enabling early warning for tsunami-prone regions.
Cable Integrity Monitoring: Identify precursor events to fiber breaks and trigger alerts to reroute traffic or dispatch response teams.
Hybrid Threat Detection: Monitor signs of hybrid warfare activities such as sabotage or unauthorized seabed operations near strategic cables. This also includes anchor-dragging sounds.
Maritime Domain Awareness: Track vessel movement patterns in sensitive maritime zones using vibrations induced along shore-connected cable infrastructure.
Intrusion Detection involving trawling, tampering, or underwater vehicle movement near the cable is best addressed using Distributed Acoustic Sensing (DAS), especially on coastal Arctic subsea cables where environmental noise is lower and mechanical coupling between the cable and the seafloor is stronger. DAS can detect short-range, high-frequency mechanical disturbances from human activity. However, this is more challenging in the open ocean due to poor acoustic coupling and cable burial. Coherent Optical Frequency Domain Reflectometry (C-OFDR) combined with State of Polarization (SOP) sensing offers a more passive and feasible alternative in such environments. C-OFDR can detect strain anomalies and localized pressure effects, while SOP sensing can identify anomalous polarization drift patterns caused by motion or stress, even on live traffic-carrying fibers.
For Seismic Early Warning, phase-based sensing (including both φ-OTDR and C-OFDR) is well suited across coastal and oceanic deployments. These technologies detect low-frequency ground motion with high sensitivity and temporal resolution. Phase-based methods can sense teleseismic activity or tectonic shifts along the cable route in deep ocean environments. The advantage increases in the Arctic coastal zones due to low background noise and shallow deployment, enabling the detection of smaller regional seismic events. Additionally, SOP sensing, while not a primary seismic tool, can detect long-duration cable strain or polarization shifts during large quakes, offering a redundant sensing layer.
Combining C-OFDR and SOP sensing is most effective for Cable Integrity Monitoring, particularly for early detection of fiber stress, micro-bending, or fatigue before a break occurs. SOP sensing works especially well for long-haul ocean cables with live data traffic, where passive, non-intrusive monitoring is essential. C-OFDR is more sensitive to local strain patterns and can precisely locate deteriorating sections. In Arctic coastal cables, this combination enables operators to detect damage from ice scouring, sediment movement, or thermal stress due to permafrost dynamics.
Hybrid Threat Detection benefits most from high-resolution, multi-modal sensing, such as detecting sabotage or seabed tampering by divers or unmanned vehicles. Along coastal regions, including Greenland’s fjords, Coherent MIMO Distributed Fiber Sensing (DFS), although still in its early stages, shows great promise due to its ability to spatially resolve overlapping disturbance signatures across multiple cores or polarizations. DAS may also contribute to near-shore detection if acoustic coupling is sufficient. On ocean cables, SOP sensing fused with AI-based anomaly detection provides a stealthy, always-on layer of hybrid threat monitoring, especially when other modalities (e.g., sonar, patrols) are absent or infeasible.
Finally, DAS is effective along coastal fiber segments for Maritime Domain Awareness, particularly tracking vessel movement in sensitive Arctic corridors or near military installations. It detects the acoustic and vibrational signatures of passing vessels, anchor deployment, or underwater vehicle operation. These signatures can be classified using spectrogram-based AI models to differentiate between fishing boats, cargo vessels, or small submersibles. While unable to localize the event, SOP sensing can flag cumulative disturbances or repetitive mechanical interactions along the fiber. This use case becomes less practical in oceanic settings unless vessel activity occurs near cable landing zones or shallow fiber stretches.
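As a toy illustration of the spectrogram-based classification mentioned above, the sketch below computes a spectrogram for a single DAS channel and applies a crude band-energy rule. The frequency bands and 6 dB margins are invented for illustration; an operational system would use models trained on labeled acoustic signatures.

import numpy as np
from scipy import signal

def das_spectrogram(trace: np.ndarray, fs_hz: float):
    """Spectrogram of one DAS channel; vessel classes separate by their
    tonal lines (shaft/blade rates) and broadband cavitation bands.
    Assumes fs_hz >= 2 kHz so the 400-1000 Hz band is resolvable."""
    f, t, sxx = signal.spectrogram(trace, fs=fs_hz, nperseg=1024, noverlap=512)
    return f, t, 10.0 * np.log10(sxx + 1e-12)  # dB scale

def crude_classifier(f: np.ndarray, sxx_db: np.ndarray) -> str:
    """Toy rule-based stand-in for a trained spectrogram model; the
    bands and thresholds here are arbitrary illustrative choices."""
    band = lambda lo, hi: sxx_db[(f >= lo) & (f < hi)].mean()
    if band(5, 50) > band(50, 400) + 6.0:
        return "large vessel (low-frequency tonal dominance)"
    if band(400, 1000) > band(5, 50) + 6.0:
        return "small craft (high-frequency broadband)"
    return "unclassified disturbance"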
These scenario considerations have been summarised in the Table below.
Table: Summary of subsea sensing use cases and corresponding detection performance. The table outlines representative sound power levels, optimal sensing technologies, environmental suitability, and estimated detection distances for key maritime and defense-related use cases. Detection range is inferred from typical source levels, local noise floors, and sensing system capabilities in Arctic coastal and oceanic environments.
LEGACY SUBSEA SENSING NETWORKS: SONAR SYSTEMS AND THEIR EVOLVING ROLE.
The observant reader might at this point feel (rightly) that I am totally ignoring the good old sonar (i.e., sound navigation and ranging), which has been around since World War I and is thus approximately 110 years old as a technology. In the Cold War era, at its height from the 1950s to the 1980s, sonar technology advanced further into the strategic domain. The United States and its allies developed large-scale systems like SOSUS (Sound Surveillance System) and SURTASS (Surveillance Towed Array Sensor System) to detect and monitor the growing fleet of Soviet nuclear submarines. These systems enabled long-range, continuous underwater surveillance, establishing sonar as both a tactical tool and a key component of strategic deterrence and early warning architectures.
So, let us briefly look at Sonar as a defensive (and offensive) technology.
Undersea sensing has long been a cornerstone of naval strategy and maritime situational awareness; for example, see the account “66 Years of Undersea Surveillance” by Taddiken et al. Throughout the Cold War, the world’s major powers invested heavily in long-range underwater surveillance systems, especially passive and active sonar networks. These systems remain relevant today, providing persistent monitoring for submarine detection, anti-access/area denial operations, and undersea infrastructure protection.
Passive sonar systems detect acoustic signatures emitted by ships, submarines, and underwater seismic activity. These systems rely on the natural propagation of sound through water and are often favored for their stealth since they do not emit signals. Their operation is inherently covert. In contrast, active sonar transmits acoustic pulses and measures reflected signals to detect and range objects that might not produce detectable noise, such as quiet submarines or inert objects on the seafloor.
The most iconic example of a passive sonar network is the U.S. Navy’s Sound Surveillance System (SOSUS), initially deployed in the 1950s. SOSUS comprises a series of hydrophone arrays fixed to the ocean floor and connected by undersea cables to onshore processing stations. While much of SOSUS remains classified, its operational role continues today with mobile and advanced fixed networks under the Integrated Undersea Surveillance System (IUSS). Other nations have developed analogous capabilities, including Russia’s MGK-series networks, China’s emerging Great Undersea Wall system, and France’s SLAMS network. These systems offer broad area acoustic coverage, especially in strategic chokepoints like the GIUK (Greenland-Iceland-UK) gap and the South China Sea.
Despite sonar’s historical and operational value, traditional sonar networks have significant limitations. Passive sonar is susceptible to acoustic masking by oceanic noise and may struggle to detect vessels employing acoustic stealth technologies. Active sonar, while more precise, risks disclosing its location to adversaries due to its emitted signals. Sonar performance is further constrained by water conditions, salinity, temperature gradients, and depth, all of which affect acoustic propagation. In addition, sonar coverage is inherently sparse and highly dependent on the geographical layout of sensor arrays and underwater topology, and the deployment and maintenance of sonar arrays are logistically complex and costly, often requiring naval support or undersea construction assets. These limitations suggest a decreasing standalone effectiveness of sonar systems in high-resolution detection, particularly as adversaries develop quieter and more agile underwater vehicles.
This table summarizes key sonar technologies used in naval and infrastructure surveillance, highlighting typical unit spacing, effective coverage radius, and operational notes for systems ranging from deep-ocean fixed arrays (SOSUS/IUSS) to mobile and nearshore defense systems.
Think of sonar as a radar for the sea, sensing outward into the subsea environment. Due to sound propagation characteristics (in water, sound travels more than 4 times faster and attenuates far more slowly than in air), sonar is an ideal technology for submarine detection and seismic monitoring. In contrast, optical sensing in subsea cables is like a tripwire or seismograph, detecting anything that physically touches, moves, or perturbs the cable along its length.
The emergence of distributed sensing over fiber optics has introduced a transformative approach to undersea and terrestrial monitoring. Distributed Acoustic Sensing (DAS), Distributed Fiber Sensing (DFS), and Coherent Optical Frequency Domain Reflectometry (C-OFDR) leverage the existing footprint of submarine telecommunications infrastructure to detect environmental disturbances, including vibrations, seismic activity, and human interaction with cables, at high spatial and temporal resolution. Unlike traditional sonar, these fiber-based systems do not rely on acoustic wave propagation in water but instead monitor variations in the optical fiber’s phase, strain, or polarization. Put very simply: sonar uses acoustics to sense sound waves in water, while fiber-based sensing uses optics and how light travels in an optical fiber. When embedded in submarine cables, such sensing techniques allow for continuous, covert, and high-resolution surveillance of the cable’s immediate environment, including detection of trawler interactions, anchor dragging, subsea landslides, and localized mechanical disturbances. They operate within the optical transmission spectrum without interrupting the core data service.
While sonar systems excel at broad ocean surveillance and object tracking, their coverage is limited to the specific regions and depths where arrays are installed. Conversely, fiber-based sensing offers persistent surveillance along entire transoceanic links, albeit restricted to the immediate vicinity of the cable path. The two should therefore not be seen as competitors but as very much complementary tools: sonar covers the strategic expanse, while fiber-optic sensing provides fine-grained visibility where infrastructure resides.
This table contrasts traditional active and passive sonar networks with emerging fiber-integrated sensing systems (e.g., DAS, DFS, and C-OFDR) across key operational dimensions, including detection medium, infrastructure, spatial resolution, and security characteristics. It highlights the complementary strengths of each technology for undersea surveillance and strategic infrastructure monitoring.
The future of sonar sensing lies in hybridization and adaptive intelligence. Ongoing research explores networks that combine passive sonar arrays with intelligent edge processing using AI/ML to discriminate between ambient and threat signatures. There is also a push to integrate mobile platforms, such as Unmanned Underwater Vehicles (UUVs), into sonar meshes, expanding spatial coverage dynamically based on threat assessments. Material advances may also lead to miniaturized or modular hydrophone systems that can be deployed ad hoc or embedded into multipurpose seafloor assets. Some navies are exploring Acoustic Vector Sensors (AVS), which can detect the pressure and direction of incoming sound waves, offering a richer data set for tracking and identification. Coupled with improvements in real-time ocean modeling and environmental acoustics, these future sonar systems may offer higher-fidelity detection even in shallow and complex coastal waters where passive sensors are less effective. Moreover, integration with optical fiber systems is an area of active development. Some proposals suggest co-locating acoustic sensors with fiber sensing nodes or utilizing fiber backhaul for real-time sonar telemetry, thereby merging the benefits of both approaches into a coherent undersea surveillance architecture.
Historically, Russian submarines seeking proximity to U.S. and NATO targets would patrol areas along the Greenland-Iceland-UK (GIUK) gap and the eastern coast of Greenland, using the remoteness and challenging acoustic environment to remain hidden. However, strategic speculation and evolving threat assessments now suggest a westward shift, toward the sparsely monitored Greenlandic West Coast. This region offers even greater stealth potential due to limited surveillance infrastructure, complex fjord geography, and weaker sensor coverage than the traditional GIUK chokepoints. Submarines could strike the U.S. East Coast from these waters in under 15 minutes, leveraging geographic proximity and acoustic ambiguity. Even if the difference in warning time is only about 2–4 minutes, depending on launch angle, trajectory, and detection latency, the loss of those minutes can matter significantly in the context of strategic warning and nuclear command and control, especially for early-warning systems, evacuation orders, or launch-on-warning decisions.
U.S. and Canadian defense communities have increasingly voiced concern over this evolving threat. U.S. Navy leadership, including Vice Admiral Andrew Lewis, has warned that the U.S. East Coast is “no longer a sanctuary,” underscoring the return of great power maritime competition and the pressing need for situational awareness even in home waters. As Russia modernizes its submarine fleet with quieter propulsion and longer-range missiles, its ability to hide near strategic seams like Greenland becomes a direct vulnerability to North American security.
This emerging risk makes the case for integrating advanced sensing capabilities into subsea cable infrastructure across Greenland and the broader Arctic theatre. Cable-based sensing technologies, such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring, could dramatically enhance NATO’s ability to detect anomalous underwater activity, particularly in the fjords and shallow coastal regions of Greenland’s western seaboard. In a region where traditional sonar and surface surveillance are limited by ice, darkness, and remoteness, the subsea cable system could become an invisible tripwire, transforming Greenland’s digital arteries into dual-use defense assets.
Therefore, advanced sensing technologies should not be treated as optional add-ons but as foundational elements of Greenland’s Arctic defense architecture. Particular attention should go to technologies that work well and are relatively uncomplicated to operationalize on brownfield subsea cable installations. These would offer a critical layer of redundancy, early warning, and environmental insight, capabilities uniquely suited to the high north’s emerging strategic and climatic realities.
The Arctic Deployment Concept outlines a forward-looking strategy to integrate submarine cable sensing technologies into the defense and intelligence infrastructure of the Arctic region, particularly Greenland, as geopolitical tensions and environmental instability intensify. Greenland’s strategic location at the North Atlantic and Arctic Ocean intersection makes it a critical node in transatlantic communications and military situational awareness. As climate change opens new maritime passages and exposes previously ice-locked areas, the region becomes increasingly vulnerable, not only to environmental hazards like shifting ice masses and undersea seismic activity, but also to the growing risks of geopolitical friction, cyber operations, and hybrid threats targeting critical infrastructure.
In this context, sensing-enhanced submarine cables offer a dual-use advantage: they carry data traffic and serve as real-time monitoring assets, effectively transforming passive infrastructure into a distributed sensor network. These capabilities are especially vital in Greenland, where terrestrial sensing is sparse, the weather is extreme, and response times are long due to the remoteness of the terrain. By embedding Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing along cable routes, operators can monitor for ice scouring, tectonic activity, tampering, or submarine presence in near real time.
This chart illustrates the Greenlandic telecommunications provider Tusass’s infrastructure (among other things). Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above; locations are only indicative) provide more than 80% of Greenland’s electricity demand. Greenland’s new international airport became operational in Nuuk in November 2024. Source: the Tusass Annual Report 2023, with some additions and minor edits.
As emphasized in the article “Greenland: Navigating Security and Critical Infrastructure in the Arctic”, Greenland is not only a logistical hub for NATO but also home to increasingly digitalized civilian systems. This dual-use nature of Arctic subsea cables underscores the need for resilient, secure, and monitored communications infrastructure. Given the proximity of Greenland to the GIUK gap, a historic naval choke point between Greenland, Iceland, and the UK, any interruption or undetected breach in subsea connectivity here could undermine both civilian continuity and allied military posture in the region.
Moreover, the cable infrastructure along Greenland’s coastline, connecting remote settlements, research stations, and defense assets, is highly linear and often exposed to physical threats from shifting icebergs, seabed movement, or vessel anchoring. These shallow, coastal environments are ideally suited for sensing deployments, where good coupling between the fiber and the seabed enables effective detection of local activity. Integrating sensing technologies here supports ISR (i.e., Intelligence, Surveillance, and Reconnaissance) and predictive maintenance. It extends domain awareness into remote fjords and ice-prone straits where traditional radar or sonar systems may be ineffective or cost-prohibitive.
The map of Greenland’s telecommunications infrastructure provides a powerful visual framework for understanding how sensing capabilities could be integrated into the nation’s subsea cable system to enhance strategic awareness and defense. The western coastline, where the majority of Greenland’s population resides (Nuuk alone accounts for roughly a third of it) and where the main subsea cable infrastructure runs, offers an ideal geographic setting for deploying cable-integrated sensing technologies. The submarine cable routes from Nanortalik in the south to Upernavik in the north connect critical civilian hubs such as Nuuk, Ilulissat, and Qaqortoq, while simultaneously passing near U.S. military installations like Pituffik Space Base. While essential for digital connectivity, this infrastructure also represents a strategic vulnerability if left unsensed and unprotected.
Given that Russian nuclear-powered ballistic missile submarines (SSBNs) are suspected of operating closer to the Greenlandic coastline, shifting from the historical GIUK gap to potentially less monitored regions along the west, Greenland’s cable network could be transformed into an invisible perimeter sensor array. Technologies such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring could be layered onto the existing fiber without disrupting data traffic. These technologies would allow authorities to detect minute vibrations from nearby vessel movement or unauthorized subsea activity, and to monitor for seismic shifts or environmental anomalies like iceberg scouring.
The map above shows the submarine cable backbone, microwave-chain sites, and satellite ground stations. If integrated, these components could act as hybrid communication-and-sensing relay points, particularly in remote locations like Qaanaaq or Tasiilaq, further extending domain awareness into previously unmonitored fjords and inlets. The location of the new international airport in Nuuk, combined with Nuuk’s proximity to hydropower and a local datacenter, also suggests that the capital could serve as a national hub for submarine cable-based surveillance and anomaly detection processing.
Much of this could be operationalized using existing infrastructure with minimal intrusion (at least in the proximity of Greenland’s coastline). Brownfield sensing upgrades, mainly using coherent transceiver-based SOP methods or in-line C-OFDR reflectometry, may be implemented on live cable systems, allowing Greenland’s existing communications network to become a passive tripwire for submarine activity and other hybrid threats. This way, the infrastructure shown on the map could evolve into a dual-use defense asset, vital in securing Greenland’s civilian connectivity and NATO’s northern maritime flank.
POLICY AND OPERATIONAL CONSIDERATIONS.
As discussed previously, we are today essentially blind to what happens to our submarine infrastructure, which carries over 95% of the world’s intercontinental internet traffic and supports more than 10 trillion euros in financial transactions daily. This incredibly important global submarine communications network was long taken for granted, almost like a deploy-and-forget infrastructure. It is worth remembering that we cannot protect what we cannot measure.
Arctic submarine cable sensing is as much a policy and sourcing question as a technical one. The integration of sensing platforms should follow a modular, standards-aligned approach, supported by international cooperation, robust cybersecurity measures, and operational readiness for Arctic conditions. If implemented strategically, these systems can offer enhanced resilience and a model for dual-use infrastructure governance in the digital age.
As Arctic geostrategic relevance increases due to climate change, geopolitical power rivalry, and the expansion of digital critical infrastructure, submarine cable sensing has emerged as both a technological opportunity and a governance challenge. The deployment of sensing techniques such as State of Polarization (SOP) monitoring and Coherent Optical Frequency Domain Reflectometry (C-OFDR) offers the potential to transform traditionally passive infrastructure into active, real-time monitoring platforms. However, realizing this vision in the Arctic, particularly for Greenlandic and trans-Arctic cable systems, requires a careful approach to policy, interoperability, sourcing, and operational governance.
One of the key operational advantages of SOP-based sensing is that it allows for continuous, passive monitoring of subsea cables without consuming bandwidth or disrupting live traffic. When analyzed using AI-enhanced models, SOP fluctuations provide a low-impact way to detect seismic activity, cable tampering, or trawling events. This makes SOP a highly viable candidate for brownfield deployments in the Arctic, where live traffic-carrying cables traverse vulnerable and logistically challenging environments. Similarly, C-OFDR, while slightly more complex in deployment, has been demonstrated in real-world conditions on transatlantic cables, offering precise localization of environmental disturbances using coherent interferometry without the need for added reflectors.
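To make the SOP approach concrete, below is a minimal sketch of what polarization-based anomaly detection could look like. It is illustrative only: it assumes normalized Stokes-parameter samples are available from a coherent receiver’s DSP, and the window size, threshold, and synthetic data are arbitrary choices rather than values from any deployed system.

```python
import numpy as np

def sop_anomalies(stokes: np.ndarray, window: int = 256, k: float = 5.0):
    """Flag abrupt State-of-Polarization (SOP) activity on a live fiber.

    stokes : (N, 3) array of unit-norm Stokes vectors (s1, s2, s3),
             assumed to be sampled from a coherent receiver's DSP.
    window : rolling-baseline length in samples (illustrative value).
    k      : sigma multiplier for the alarm threshold (illustrative value).
    """
    # Angular rotation between consecutive SOP samples on the Poincare sphere.
    dots = np.clip(np.sum(stokes[1:] * stokes[:-1], axis=1), -1.0, 1.0)
    rot_rate = np.arccos(dots)  # radians per sample interval

    alarms = []
    for i in range(window, len(rot_rate)):
        baseline = rot_rate[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-12
        if rot_rate[i] > mu + k * sigma:  # sudden polarization disturbance
            alarms.append(i)
    return alarms

# Synthetic demo: slow polarization drift plus one sharp mechanical event.
rng = np.random.default_rng(7)
walk = np.cumsum(rng.normal(0.0, 1e-3, size=(5000, 3)), axis=0) + [1.0, 0.0, 0.0]
walk[3000:] += [0.0, 0.3, 0.0]  # simulated anchor-drag-like disturbance
stokes = walk / np.linalg.norm(walk, axis=1, keepdims=True)
print(sop_anomalies(stokes)[:3])  # alarm indices near sample 3000
```

In a production setting, this rolling-statistics step is where the AI-enhanced models mentioned above would sit, classifying event signatures (trawling, seismic, tampering) rather than merely thresholding them.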
From a policy standpoint, Arctic submarine sensing intersects with civil, commercial, and defense domains, making multinational coordination essential. Organizations such as NATO, NORDEFCO (Nordic Defence Cooperation), and the Arctic Council must harmonize protocols for sensor data sharing, event attribution, and incident response. While SOP and C-OFDR generate valuable geophysical and security-relevant data, questions remain about how such data can be lawfully shared across borders, especially when detected anomalies may involve classified infrastructure or foreign-flagged vessels.
Moreover, integration with software-defined networking and centralized control planes can enable rapid traffic rerouting when anomalies are detected, improving resilience against natural or intentional disruptions. This also requires technical readiness in Greenlandic and Nordic telecom systems, many of which are evolving toward open architectures but may still depend on legacy switching hubs vulnerable to single points of failure.
Sensor compatibility and strategic trust must guide the acquisition and sourcing of sensing systems. Vendors like Nokia Bell Labs, which developed AI-based SOP anomaly detection models, have demonstrated in-band sensing on submarine networks without service degradation. A sourcing team should ensure that due diligence is done on the foundational models and that their origin has not been compromised by high-risk countries or vendors. I would recommend that sourcing teams follow the European Union’s 5G security framework as guidance when selecting the algorithmic solution, ensuring that no high-risk vendor or country has been involved at any point in the model’s development, training, or the operational aspects of inference and updates. By the way, it might be a very good and safe idea to extend this principle to the submarine cable construction and repair industry (just saying!).
When sourcing such systems, governments and operators should prioritize:
Proven compatibility with coherent transceiver infrastructure (i.e., brownfield submarine cable installations). Needless to say, solutions should be tested before final sourcing (e.g., via a proof of concept).
Supplier alignment with NATO or Nordic/Arctic security frameworks. At a minimum, guidance should be taken from the EU 5G security framework and its approach to high-risk vendors and countries.
Clear IP ownership and cybersecurity compliance for firmware and AI models; the foundational models must originate from trusted companies and markets.
Inclusion of post-deployment support in Arctic (and beyond Arctic) operational conditions.
It cannot be emphasized enough that not all sensing systems are equally suitable for long-haul submarine cable stretches, such as transatlantic routes. Different sensing strategies may be required for the same subsea cable at different cable parts or spans (e.g., the bottom of the Atlantic Ocean vs. coastal areas and their approaches). A hybrid sensing approach is often more effective than a single solution. The physical length, signal attenuation, repeater spacing, and bandwidth constraints inherent to long-haul cables introduce technical limitations that influence which sensing techniques are viable and scalable.
For example, φ-OTDR (phase-sensitive OTDR) and standard DAS techniques, while powerful for acoustic sensing on terrestrial or coastal cables, face significant challenges over ultra-long distances due to signal loss and a diminishing signal-to-noise ratio. These methods typically require access to dark fiber and may struggle to operate effectively across repeatered links or when deployed mid-span across thousands of kilometers without amplification. By contrast, techniques like State of Polarization (SOP) sensing and Coherent Optical Frequency Domain Reflectometry (C-OFDR) have demonstrated strong potential for brownfield integration on transoceanic cables. SOP sensing can operate passively on live, traffic-carrying fibers and has been successfully demonstrated over 6,500 km transatlantic spans without an invasive retrofit. Similarly, C-OFDR, particularly in its in-line coherent implementation, can leverage existing coherent transceivers and loop-back paths to perform long-range distributed sensing across legacy infrastructure.
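The distance argument is easy to quantify. A back-of-the-envelope sketch, assuming a typical fiber attenuation of roughly 0.2 dB/km at 1550 nm and ignoring splice and connector losses, shows the round-trip budget any backscatter-based method must survive:

```python
# Round-trip attenuation seen by Rayleigh-backscatter methods (phi-OTDR/DAS),
# assuming ~0.2 dB/km fiber loss at 1550 nm (typical for modern fiber).
ALPHA_DB_PER_KM = 0.2

for reach_km in (50, 100, 500, 1000):
    # The probe pulse travels out AND the backscatter travels back: 2x loss.
    round_trip_db = 2 * ALPHA_DB_PER_KM * reach_km
    print(f"{reach_km:>5} km sensing reach -> {round_trip_db:>4.0f} dB round trip")

# ~40 dB at 100 km is workable; ~400 dB at 1,000 km is unrecoverable without
# in-line amplification. This is why phi-OTDR/DAS suits coastal spans, while
# SOP and C-OFDR, which ride the amplified transmission path, scale to
# transoceanic distances.
```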
This leads to the reasonable conclusion that a mix of sensing technologies tailored to cable type, length, environment, and use case is appropriate and necessary. For example, coastal or Arctic shelf cables may benefit more from high-resolution φ-OTDR/DAS deployments. In contrast, transoceanic cables call for SOP- or C-OFDR-based systems compatible with repeatered, live-traffic environments. This modular, multi-modal approach ensures maximum coverage, resilience, and relevance, especially as sensing is extended across greenfield and brownfield deployments.
Thus, hybrid sensing architectures are emerging as a best practice, with each technique contributing unique strengths toward a comprehensive monitoring and defense capability for critical submarine infrastructure.
Last but not least, cybersecurity and signal integrity protections are critical. Sensor platforms that generate real-time alerts must include spoofing detection, data authentication, and secured telemetry channels to prevent manipulation or false alarms. SOP sensing, for instance, may be vulnerable to polarization spoofing unless validated against multi-parameter baselines, such as concurrent C-OFDR strain signatures or external ISR (i.e., Intelligence, Surveillance, and Reconnaissance) inputs.
CONCLUSION AND RECOMMENDATION.
Submarine cables are indispensable for global connectivity, transmitting over 95% of international internet traffic, yet they remain primarily unmonitored and physically vulnerable. Recent events and geopolitical tensions reveal that hostile actors could target this infrastructure with plausible deniability, especially in regions with low surveillance like the Arctic. As described in this article, enhanced sensing technologies, such as DAS, SOP, and C-OFDR, can provide real-time awareness and threat detection, transforming passive infrastructure into active security assets. This is particularly urgent for islands and Arctic regions like Greenland, where fragile cable networks (in the sense of few independent international connections) represent single points of failure.
Key Considerations:
Submarine cables are strategic, yet “blind & deaf” infrastructures. Despite carrying the majority of global internet and financial data, most cables lack embedded sensing capabilities, leaving them vulnerable to natural and hybrid threats. This is especially true in the Arctic and island regions with minimal redundancy.
Recent hybrid threat patterns reinforce the need for monitoring. Cases like the 2024–2025 Baltic and Taiwan cable incidents show patterns (e.g., clean cuts, sudden phase shifts) that may be consistent with deliberate interference. These events demonstrate how undetected tampering can have immediate national and global impacts.
The Arctic is both a strategic and environmental hotspot. Melting sea ice has made the region more accessible to submarines and sabotage, while Greenland’s cables are often shallow, unprotected, and linked to critical NATO and civilian installations. Integrating sensing capabilities here is urgent.
Sensing systems enable early warning and reduce repair times. Technologies like SOP and C-OFDR can be applied to existing (brownfield) subsea systems without disrupting live traffic. This allows for anomaly detection, seismic monitoring, and rapid localization of cable faults, cutting response times from days to minutes.
Hybrid sensing systems and international cooperation are essential. No single sensing technology fits all submarine environments. The most effective strategy for resilience and defense involves combining multiple modalities tailored to cable type, geography, and threat level while ensuring trusted procurement and governance.
Relying on only one or two submarine cables for an island’s entire international connectivity at a bandwidth-critical scale is a high-stakes gamble. For example, a dual-cable redundancy may offer sufficient availability on paper. However, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access.
Quantity doesn’t matter for capable hostile actors: for a capable hostile actor, whether a country or region has two, three, or a handful of international submarine cables is unlikely to matter in terms of compromising those critical infrastructure assets.
In addition to the key conclusions above, there is a common belief that expanding the number of international submarine cables from two to three or three to four offers meaningful protection against deliberate sabotage by hostile state actors. While intuitively appealing, this notion underestimates a determined adversary’s intent and capability. For a capable actor, targeting an additional one or two cables is unlikely to pose a serious operational challenge. If the goal is disruption or coercion, a capable adversary will likely plan for multi-point compromise from the outset (including landing station considerations).
However, what cannot be overstated is the resilience gained through additional, physically distinct (parallel) cable systems. Moving from two to three truly diverse and independently repairable cables improves system availability by a factor of roughly 200, reducing expected downtime from hours per year to under a minute per year. Expanding to four cables can reduce expected downtime to mere seconds annually. These figures reflect statistical robustness and operational continuity in the face of failure. Yet availability alone is not enough. Submarine cable repair timelines remain long, stretching from weeks to months, even under favorable conditions. And while natural disruptions are significant, they are no longer our only concern. In today’s geopolitical climate, undersea infrastructure has become a deliberate target in hybrid and kinetic conflict scenarios. The most pressing threat is not that these cables might be compromised, but that they may already be; we are simply unaware. The undersea domain is poorly monitored, poorly defended, and rich in asymmetric leverage.
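The availability arithmetic behind these factors is easy to reproduce. Below is a minimal sketch assuming each cable is an independent system with 99.5% standalone availability; that per-cable figure is an illustrative assumption, since real values depend on fault rates and on Arctic repair times:

```python
# Expected downtime for N parallel, independent submarine cables.
# Assumption (illustrative): each cable alone is 99.5% available,
# i.e., unavailability u = 0.5%.
MINUTES_PER_YEAR = 365.25 * 24 * 60
u = 0.005

for n_cables in (1, 2, 3, 4):
    downtime_min = (u ** n_cables) * MINUTES_PER_YEAR
    print(f"{n_cables} cable(s): ~{downtime_min:,.2f} min/year expected outage")

# With these assumptions, going from two to three cables cuts expected
# downtime by a factor of 1/u = 200, the order of magnitude quoted above.
# Note this models independent failures only; a deliberate multi-point
# attack or a shared landing station breaks the independence assumption.
```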
Submarine cable infrastructure is not just the backbone of global digital connectivity. It is also a strategic asset with profound implications for civil society and national defense. The reliance on subsea cables for internet access, financial transactions, and governmental coordination is absolute. Satellite-based communications networks can only carry an infinitesimal amount of the traffic carried by subsea cable networks. If the global submarine cable network were to break down, so would the world order as we know it. Integrating advanced sensing technologies such as SOP, DAS, and C-OFDR into these networks transforms them from passive conduits into dynamic surveillance and monitoring systems. This dual-use capability enables faster fault detection and enhanced resilience for civilian communication systems, but also supports situational awareness, early-warning detection, and hybrid threat monitoring in contested or strategically sensitive areas like the Arctic. Ensuring submarine cable systems are robust, observable, and secured must therefore be seen as a shared priority, bridging commercial, civil, and military domains.
THE PHYSICS BEHIND SENSING – A BIT OF BACKUP.
Rayleigh Scattering: Imagine shining a flashlight through a long glass tunnel. Even though the glass tunnel looks super smooth, it has tiny bumps and little specks you cannot see. When the light hits those tiny bumps, some of it bounces back, like a ball bouncing off a wall. That bouncing light is called Rayleigh scattering.
Rayleigh scattering is a fundamental optical phenomenon in which light is scattered by small-scale variations in the refractive index of a medium, such as microscopic imperfections or density fluctuations within an optical fiber. It occurs naturally in all standard single-mode fibers and results in a portion of the transmitted light being scattered in all directions, including backward toward the transmitter. The intensity of Rayleigh backscattered light is typically very weak, but it can be detected and analyzed using highly sensitive receivers. The scattering is elastic, meaning there is no change in wavelength between the incident and scattered light.
In distributed fiber optic sensing (DFOS), Rayleigh backscatter forms the basis for several techniques:
Distributed Acoustic Sensing (DAS): The DAS sensing solution uses phase-sensitive optical time-domain reflectometry (i.e., φ-OTDR) to measure minute changes in the backscattered phase caused by vibrations. These changes indicate environmental disturbances such as seismic waves, intrusions, or cable movement.
Coherent Optical Frequency Domain Reflectometry (C-OFDR): C-OFDR leverages Rayleigh backscatter to measure changes in the fiber over distance with high resolution. By sweeping a narrow-linewidth laser over a frequency range and detecting interference from the backscatter, C-OFDR enables continuous distributed sensing along submarine cables. Unlike earlier methods requiring Bragg gratings, recent innovations allow this technique to work even over legacy subsea cables without them.
Coherent Receiver Sensing: This technique monitors Rayleigh backscatter and polarization changes using existing telecom equipment’s DSP (digital signal processing) capabilities. This allows for passive sensing with no additional probes, and the sensing does not interfere with data traffic.
Brillouin Scattering: Imagine you are talking through a long string tied between two cups, like a string telephone most of us played with as kids (before everyone got a smartphone when they turned 3 years old). Now, picture that the string is not still. It shakes a little, like shivering or wiggling in the wind or the strain of the hands holding the cups. When your voice travels down that string, it bumps into those little wiggles. That bumping makes the sound of your voice change a tiny bit. Brillouin scattering is like that. When light travels through our string (that could be a glass fiber), the tiny wiggles inside the string make the light change direction, and the way that light and cable “wiggles” work together can tell our engineers stories about what happens inside the cable.
Brillouin scattering is a nonlinear optical effect that occurs when light interacts with acoustic (sound) waves within the optical fiber. When a continuous wave or pulsed laser signal travels through the fiber, it can generate small pressure waves due to a phenomenon known as electrostriction. These pressure waves slightly change the optical fiber’s refractive index and act like a moving grating, scattering some of the light backward. This backward-scattered light experiences a frequency shift, known as the Brillouin shift, which is directly related to the temperature and strain in the fiber at the scattering point.
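For reference, the Brillouin shift follows directly from the fiber’s acoustic properties. With n the refractive index, $V_a$ the acoustic velocity in silica, and $\lambda$ the optical wavelength (the values below are typical textbook figures for standard single-mode fiber, not measurements from any particular cable):

$\nu_B \; = \; \frac{2 \, n \, V_a}{\lambda} \; \approx \; \frac{2 \times 1.45 \times 5960\,\text{m/s}}{1550\,\text{nm}} \; \approx \; 11\,\text{GHz}$

The shift moves by roughly 1 MHz per °C of temperature change and per ~20 µε of strain, which is precisely what makes it usable as a distributed thermometer and strain gauge.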
Commercial Brillouin-based systems are technically capable of monitoring subsea communications cables, especially for strain and temperature sensing. However, they are not yet standard in the submarine communications cable industry, and integration typically requires dedicated or dark fibers, as the sensing cannot share the same fiber with active data traffic.
Raman Scattering: Imagine you are shining a flashlight through a glass of water. Most of the light goes straight through, like cars driving down a road without turning. But sometimes, a tiny bit of light bumps into something inside the water, like a little water molecule, and bounces off differently. It’s like the car suddenly makes a tiny turn and changes its color. This little bump and color change is what we call Raman scattering. It is a special effect as it helps scientists figure out what’s inside things, like what water is made of, by looking at how the light changes when it bounces off.
Raman scattering is primarily used in submarine fiber cable sensing for Distributed Temperature Sensing (DTS). This technique exploits the temperature-dependent nature of Raman scattering to measure the temperature along the entire length of an optical fiber, which can be embedded within or run alongside a submarine cable. Raman scattering has several applications in submarine cables. It is used for environmental monitoring by detecting gradual thermal changes caused by ocean currents or geothermal activity. Regarding cable integrity, it can identify hotspots that might indicate electrical faults or compromised insulation in power cables. In Arctic environments, Raman-based Distributed Temperature Sensing (DTS) can help infer changes in surrounding ice or seawater temperatures, aiding in ice detection. Additionally, it supports early warning systems in the energy and offshore sectors by identifying overheating and other thermal anomalies before they lead to critical failures.
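The underlying measurement in Raman DTS is the ratio of anti-Stokes to Stokes backscatter intensities, which depends on the absolute temperature $T$ roughly as

$\frac{I_{AS}}{I_{S}} \; \propto \; \left(\frac{\lambda_{S}}{\lambda_{AS}}\right)^{4} \exp\!\left(-\frac{h \, \Delta\nu}{k_B \, T}\right)$

where $\Delta\nu$ is the Raman shift (about 13.2 THz in silica), $h$ is Planck’s constant, and $k_B$ is Boltzmann’s constant. The temperature-sensitive anti-Stokes channel is intrinsically weak, which is one reason Raman DTS needs high launch power and long averaging times, as noted below.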
However, Raman scattering has notable limitations. Because it is a weak optical effect, DTS systems based on Raman scattering require high-powered lasers and highly sensitive detectors. It is also unsuitable for detecting dynamic events such as vibrations or acoustic signals, which are better sensed using Rayleigh or Brillouin scattering. Furthermore, Raman-based DTS typically offers spatial resolutions of one meter or more and has a slow response time, making it less effective for identifying rapid or short-lived events like submarine activity or tampering.
Commercial Raman-DTS solutions exist and are actively deployed in subsea power cable monitoring. Their use in telecom submarine cables is less common but technically feasible, particularly for infrastructure integrity monitoring rather than data-layer diagnostics.
FURTHER READING.
The Global Submarine Cable Map is a free and regularly updated resource based on TeleGeography data. It is, imo, a very useful tool for visualizing where submarine cables are located.
MarineTraffic is a global maritime tracking platform that uses Automatic Identification System (AIS) data to provide real-time and historical location information for commercial and private vessels worldwide. It can monitor ship movements near submarine cable routes and landing sites, enabling analysts to identify unusual behavior, loitering patterns, or the presence of non-reporting vessels in sensitive maritime zones, which is important for detecting potential risks to undersea infrastructure. I tend to use this in combination with the Submarine Cable Map above.
Larsen, K.K., “What Lies Beneath”, (2024), Techneconomyblog.com. The article highlights the vulnerability of submarine cables and proposes using Automatic Identification System (AIS) data to trace vessels near the site and time of a cable break. By analyzing ship trajectories, investigators can identify potentially responsible vessels, distinguishing between accidental damage and deliberate interference. It also introduces a dual-risk framework: a baseline risk score to assess natural or accidental causes, and a sabotage risk score based on vessel behavior, cable location, and geopolitical context. This method enhances attribution, supports early warnings, and protects critical subsea infrastructure.
Larsen, K.K., “Greenland: Navigating Security and Critical Infrastructure in the Arctic“, (2024), Techneconomyblog.com. The article explores Greenland’s growing strategic role in Arctic security and infrastructure resilience amid rising geopolitical tensions. It emphasizes the need for robust digital and energy infrastructure, including resilient submarine cables and satellite links, to support civilian connectivity and defense readiness. The piece also highlights Greenland’s value as a technological gateway between North America and Europe, underscoring its relevance in future-proofing NATO and allied operations in the High North.
Seismology (SM) Division of the European Geosciences Union (EGU), “What is Distributed Acoustic Sensing“, (2023), Blog. A good introductory, hands-on article that explains what DAS is about and why its versatility and scalability make it a powerful tool for real-time, large-scale sensing without the need for additional hardware deployment (at short to medium distances, <100 km).
Guerrier, S. et al., “Introducing coherent MIMO sensing, a fading-resilient, polarization-independent approach to φ-OTDR“, (2020), Optics Express, 28(14), 21081–21094. This work introduces the concept of Coherent MIMO Distributed Fiber Sensing. It demonstrates how dual-polarization probing with coherent detection works. It lays the groundwork for future high-performance sensing over optical fibers, with clear implications for subsea and defense infrastructure monitoring.
Niklès, M., Thévenaz, L., and Robert, P. A., “Simple distributed fiber sensor based on Brillouin gain spectrum analysis“, (1996), Optics Letters, 21(10), 758–760. This work introduces the concept of distributed sensing using the Brillouin gain spectrum, laying the groundwork for Brillouin Optical Time Domain Analysis. It demonstrates how the Brillouin frequency shift can map strain and temperature along standard optical fibers with meter-scale resolution and kilometer-scale range.
Dai, M., et al., “A Survey on Integrated Sensing, Communication, and Computing Networks for Smart Oceans”, J. Sens. Actuator Netw. (2022). This article explores the rapidly advancing field of ISAC, which combines radar and communication technologies. It comprehensively reviews key enablers, signal processing techniques, system architectures, and application scenarios as they apply to maritime domains (as well as other non-maritime domains).
Coddington, I. et al., “Dual-comb spectroscopy“, (2016), Optica, 3(4), 414–426. This work presents the foundational principles of dual-comb spectroscopy, a technique that uses two optical frequency combs to perform high-resolution, broadband, and rapid measurements without moving parts. It demonstrates how this method enables precise, real-time sensing of distance, spectral properties, and refractive index changes, with applications in metrology, ranging, and environmental monitoring. For subsea cable defense applications, the Coddington paper provides the theoretical basis for how frequency combs can achieve sub-millimeter resolution in long optical paths (e.g., submarine cables) under various mechanical forces in an underwater environment.
I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am furthermore indebted to Andreas Gladisch, VP Emerging Technologies – Deutsche Telekom AG, for sharing his expertise on fiber-optical sensing technologies with me and providing some of the foundational papers on which my article and research have been based. I always come away wiser from our conversations.
From the bustling streets of New York to the remote highlands of Mongolia, the skyline had visibly changed. Where steel towers and antennas once dominated now stood open spaces and restored natural ecosystems. Forests reclaimed their natural habitats, and birds nested in trees, no longer scared off by towering rural cellular masts. This transformation was not sudden but resulted from decades of progress in satellite technology, growing demand for ubiquitous connectivity, an increasingly urgent need to address the environmental footprint of traditional telecom infrastructures, and the economic need to dramatically reduce the operational expenses tied up in tower infrastructure. By the time the last cell site was decommissioned, society stood at the cusp of a new age of connectivity by LEO satellites covering all of Earth.
The worldwide annual savings in total cost from making terrestrial cellular towers obsolete are estimated to amount to at least 300 billion euros, and it is expected that moving cellular access to “heaven” will avoid more than 150 million metric tons of CO2 emissions annually. The retirement of all terrestrial cellular networks worldwide has been like eliminating the entire carbon footprint of the Netherlands or Malaysia, and it has dramatically reduced the demand on the sustainable green energy sources that previously powered the global cellular infrastructure.
INTRODUCTION.
Recent postings and a substantial part of the commentary give the impression that we are heading towards a post-tower era in which Elon Musk’s Low Earth Orbit (LEO) Starlink satellite network (together with competing options, e.g., AST SpaceMobile and Lynk; and no, I do not see Amazon’s Project Kuiper in this space) will make terrestrial tower infrastructure and earth-bound cellular services obsolete.
Since the announcement, posts and media coverage have declared the imminent death of the terrestrial cellular network. When it is pointed out that this may be a premature death sentence for an industry, its telecom operators, and their existing cellular mobile networks, it is not uncommon to be told off as too pessimistic and an unbeliever in Musk’s genius vision. Musk has on occasion made it clear that the Starlink D2C service is aimed at texts and voice calls in remote and rural areas, and, to be honest, the D2C service currently hinges on 2×5 MHz in T-Mobile’s PCS band, constraining the “broadbandedness” of the service. The fact that the service doesn’t match the best of T-Mobile US’s 5G network quality (e.g., 205+ Mbps downlink) or even get near its 4G speeds should really not bother anyone, as the value of the D2C service is that it is available in remote and rural areas with little to no terrestrial cellular coverage and that you can use your regular cellular device, with no need for a costly satellite service and satphone (e.g., Iridium, Thuraya, Globalstar).
While I don’t expect to (or even want to) change people’s beliefs, I do think it would be great to contribute to more knowledge and insights based on facts about what is possible with low-earth orbiting satellites as a terrestrial substitute and what is uninformed or misguided opinion.
The rise of LEO satellites has sparked discussions about the potential obsolescence of terrestrial cellular networks. With advancements in satellite technology and increasing partnerships, such as T-Mobile’s collaboration with SpaceX’s Starlink, proponents envision a future where towers are replaced by ubiquitous connectivity from the heavens. However, the feasibility of LEO satellites achieving service parity with terrestrial networks raises significant technical, economic, and regulatory questions. This article explores the challenges and possibilities of LEO Direct-to-Cell (D2C) networks, shedding light on whether they can genuinely replace ground-based cellular infrastructure or will remain a complementary technology for specific use cases.
WHY DISTANCE MATTERS.
The distance between you (your cellular device) and the base station’s antenna determines your expected service experience in cellular and wireless networks. The farther you are from the base station that serves you, the poorer, in general, your connection quality and performance will be, everything else being equal. As the distance increases, signal weakening (i.e., path loss) grows rapidly, with the square of the distance in free space, reducing signal quality and making it harder for devices to maintain reliable communication. Closer proximity allows for stronger, faster, and more stable connections, while longer distances require more power and advanced technologies like beamforming or repeaters to compensate.
Physics tells us that a signal loses strength (or power) with the square of the distance from its source (either the base station transmitter or the consumer device). This applies universally to all electromagnetic waves traveling in free space. Free space means that there are no obstacles, reflections, or scattering; no terrain features, buildings, or atmospheric conditions interfere with the propagating signal.
So, what matters to the Free Space Path Loss (FSPL), i.e., the signal strength lost over a given distance in free space? In its standard form,

$PL_{FS}(d, f) \; = \; \left( \frac{4 \pi \, d \, f}{c} \right)^2 \quad \Leftrightarrow \quad PL_{FS}\,[\text{dB}] \; = \; 20\log_{10}(d) \, + \, 20\log_{10}(f) \, + \, 20\log_{10}\!\left(\frac{4\pi}{c}\right)$

with d the distance from the source, f the carrier frequency, and c the speed of light. In words:
The signal strength reduces (the path loss increases) with the square of the distance (d) from its source.
Path loss increases (i.e., signal strength decreases) with the (square of the) frequency (f). The higher the frequency, the higher the path loss at a given distance from the signal source.
A larger transmit antenna aperture reduces the path loss by focusing the transmitted signal (energy) more efficiently. An antenna aperture is an antenna’s “effective area” for capturing or transmitting electromagnetic waves. It depends directly on the antenna gain and inversely on the square of the signal frequency (i.e., higher frequency → smaller aperture for a given gain).
Higher receiver gain will also reduce the path loss.
The above equations show a strong dependency on distance (the farther away, the larger the signal loss) and on frequency (the higher the frequency, the larger the signal loss). Relaxing some of the assumptions leading to the above relationship leads us to the following:

$PL \; = \; \frac{4 \pi \, d^2}{A_t^{eff} \, G_r}$
The last of the above equations introduces the transmitter’s effective antenna aperture ($A_t^{eff}$) and the receiver’s gain ($G_r$), telling us that larger apertures reduce path loss as they focus the transmitted energy more efficiently and that higher receiver gain likewise reduces the path loss (i.e., “they hear better”).
It is worth remembering that the transmitter antenna aperture is directly tied to the transmitter gain ($G_t$) once the frequency (f) has been fixed. We have

$A_t^{eff} \; = \; \frac{G_t \, \lambda^2}{4 \pi} \; = \; \frac{G_t \, c^2}{4 \pi f^2}$
From the above, as an example, it is straightforward to see that the relative path loss difference between the two distances of 550 km (e.g., the typical altitude of an LEO satellite) and 2.5 km (a typical terrestrial cellular coverage range) is

$\frac{PL_{FS}(550\,\text{km})}{PL_{FS}(2.5\,\text{km})} \; = \; \left( \frac{550}{2.5}\right)^2 \; = \; 220^2 \; \approx \; 50 \; \text{thousand}$

So if all else was equal (it isn’t, btw!), we would expect the signal loss at a distance of 550 km to be 50 thousand times higher than at 2.5 km. Or, in the electrical engineer’s language, at a distance of 550 km, the loss would be 47 dB higher than at 2.5 km.
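For readers who want to check the arithmetic, here is a tiny sketch; the 1.9 GHz carrier is an illustrative choice, and the frequency cancels out of the ratio anyway:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f = 1.9e9                     # PCS-band carrier, an illustrative choice
leo_m, cell_m = 550e3, 2.5e3  # LEO altitude vs terrestrial cell range

delta_db = fspl_db(leo_m, f) - fspl_db(cell_m, f)  # frequency cancels out
print(f"Extra loss at 550 km vs 2.5 km: {delta_db:.1f} dB "
      f"(x{10 ** (delta_db / 10):,.0f} linear)")
# -> ~46.8 dB, i.e., roughly 48,400x: the '~50 thousand' figure above.
```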
The figure illustrates the difference between (a) terrestrial cellular and (b) satellite coverage. A terrestrial cellular signal typically covers a radius of 0.5 to 5 km. In contrast, a LEO satellite signal travels a substantial distance to reach Earth (e.g., Starlink satellite is at an altitude of 550 km). While the terrestrial signal propagates through the many obstacles it meets on its earthly path, the satellite signal’s propagation path would typically be free-space-like (i.e., no obstacles) until it penetrates buildings or other objects to reach consumer devices. Historically, most satellite-to-Earth communication has relied on outdoor ground stations or dishes where the outdoor antenna on Earth provides LoS to the satellite and will also compensate somewhat for the signal loss due to the distance to the satellite.
Let’s compare a terrestrial 5G 3.5 GHz advanced antenna system (AAS) 2.5 km from a receiver with a LEO satellite system at an altitude of 550 km. Note that I could have chosen a lower frequency, e.g., 800 MHz or the PCS 1900 band. While it would give me some advantages regarding path loss (i.e., $FSPL \; \propto \; f^2$), the available bandwidth is rather smallish and insufficient for state-of-the-art 5G services (imo!). From a free-space path loss perspective, independently of frequency, we need to overcome an almost 50-thousand-times relative difference in distance squared (ca. 47 dB) in favor of the terrestrial system. In this comparison, it should be understood that the terrestrial and satellite systems use the same carrier frequency (otherwise, one should account for the difference in frequency), and the only difference that matters (for the FSPL) is the difference in distance to the receiver.
Suppose I require that my satellite system has the same signal loss in terms of FSPL as my terrestrial system, to aim at a comparable quality-of-service level. In that case, I have several options in terms of satellite enhancements. I could increase transmit power, although that would imply a transmit power 47 dB higher than the terrestrial system’s, or approximately 48 kW, which is likely impractical for the satellite due to power limitations. Compare this with the current Starlink transmit power of approximately 32 W (45 dBm), ca. 1,500 times lower. Alternatively, I could (in theory!) increase my satellite antenna aperture, leading to a satellite antenna with a diameter of ca. 250 meters, which is enormous compared to current satellite antennas (e.g., Starlink’s ca. 0.05 m² aperture for a single antenna and a total area in the order of 1.6 m² for the Ku/Ka bands). Finally, I could (super theoretically) also massively improve my consumer device’s (e.g., smartphone’s) receive gain, by 47 dB from today’s range of -2 dBi to +5 dBi; achieving ~45 dBi of gain in a smartphone receiver seems unrealistic due to size, power, and integration constraints. As the target of LEO satellite direct-to-cell services is to support commercially available cellular devices used in terrestrial networks, only the satellite specifications can be optimized.
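These three options can be sanity-checked in a few lines. The sketch below is built on the numbers quoted above, all taken as working assumptions: the 47 dB deficit, the ~1 W-class terrestrial reference transmit power implied by the 48 kW figure, and Starlink’s ~1.6 m² total aperture:

```python
import math

DEFICIT_DB = 47.0                 # extra free-space loss, 550 km vs 2.5 km
factor = 10 ** (DEFICIT_DB / 10)  # ~50,000x in linear terms

# Option 1: brute-force transmit power (assumes a ~1 W terrestrial reference).
print(f"Tx power: ~{factor / 1e3:.0f} kW (vs ~32 W on Starlink today)")

# Option 2: grow the aperture by the same factor, starting from ~1.6 m^2.
area_m2 = 1.6 * factor
diameter_m = 2 * math.sqrt(area_m2 / math.pi)
print(f"Aperture: ~{area_m2 / 1e3:.0f} thousand m^2, ~{diameter_m:.0f} m "
      "across (same order as the ~250 m quoted above)")

# Option 3: +47 dB of receive gain on the handset side.
print(f"Handset gain: {-2 + DEFICIT_DB:.0f} to {5 + DEFICIT_DB:.0f} dBi "
      "needed, vs -2 to +5 dBi today")
```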
Based on a simple free-space approach, it appears unreasonable that an LEO satellite communication system can provide 5G services at parity with a terrestrial cellular network to normal (unmodified) 5G consumer devices without satellite-optimized modifications. The satellite system’s requirements for parity with a terrestrial communications system are impractical (but not impossible) and, if pursued, would significantly drive up design complexity and cost, likely making such a system highly uneconomical.
At this point, you should ask yourself whether it is reasonable to assume that a terrestrial cellular communication system propagates as if its environment were free-space-like, with obstacles, reflections, and scattering ignored. Is it really okay to presume that terrain features, buildings, or atmospheric conditions do not interfere with the propagation of the terrestrial cellular signal? Of course, the answer is that it is not okay to assume that. With this in mind, let’s see whether it matters much compared to the LEO satellite path loss.
TERRESTRIAL CELLULAR PROPAGATION IS NOT HAPPENING IN FREE SPACE, AND NEITHER IS A SATELLITE’S.
The Free-Space Path Loss (FSPL) formula assumes ideal conditions where signals propagate in free space without interference, blockage, or degradation beyond what naturally accrues from traveling a given distance. However, as we all experience daily, real-world environments introduce additional factors such as obstructions, multipath effects, clutter loss, and environmental conditions, necessitating corrections to the FSPL approach. Moving from one room of our house to another can easily change the cellular quality we experience (e.g., dropped calls, poorer voice quality, lower speed, falling back from 5G to 4G or even 2G, or no coverage at all). Driving through a city may also result in ups and downs in the cellular quality we experience. Some of these effects are tabulated below.
Urban environments typically introduce the highest additional losses due to dense buildings, narrow streets, and urban canyons, which significantly obstruct and scatter signals. For example, the Okumura-Hata Urban Model accounts for such obstructions and adds substantial losses to the FSPL, averaging around 30–50 dB, depending on the density and height of buildings.
Suburban environments, on the other hand, are less obstructed than urban areas but still experience moderate clutter losses from trees, houses, and other features. In these areas, corrections based on the Okumura-Hata Suburban Model add approximately 10–20 dB to the FSPL, reflecting the moderate level of signal attenuation caused by vegetation and scattered structures.
Rural environments have the least obstructions, resulting in the lowest additional loss. Corrections based on the Okumura-Hata Rural Model typically add around 5–10 dB to the FSPL. These areas benefit from open landscapes with minimal obstructions, making them ideal for long-range signal propagation.
Non-line-of-sight (NLOS) conditions additionally increase the path loss, as signals must diffract or scatter to reach the receiver. This effect adds 10–20 dB in suburban and rural areas and 20–40 dB in urban environments, where obstacles are more frequent and severe. Similarly, weather conditions such as rain and foliage contribute to signal attenuation, with rain adding up to 1–5 dB/km at higher frequencies (above 10 GHz) and dense foliage introducing an extra 5–15 dB of loss.
The corrections for these factors can be incorporated into the FSPL formula to provide a more realistic estimation of signal attenuation. By applying these corrections, the FSPL formula can reflect the conditions encountered in terrestrial communication systems across different environments.
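A simple additive model illustrates how these corrections stack on top of the free-space term. The dB values below are mid-range picks from the paragraphs above, and the function is a toy, not a calibrated Okumura-Hata implementation:

```python
import math

def fspl_db(d_m: float, f_hz: float, c: float = 299_792_458.0) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

# Mid-range clutter and NLOS additions (dB) from the text above, illustrative.
CLUTTER_DB = {"urban": 40, "suburban": 15, "rural": 7}
NLOS_DB = {"urban": 30, "suburban": 15, "rural": 15}

def effective_loss_db(d_m, f_hz, env="urban", nlos=True, foliage_db=0.0):
    loss = fspl_db(d_m, f_hz) + CLUTTER_DB[env]
    if nlos:
        loss += NLOS_DB[env]
    return loss + foliage_db

f = 1.9e9
print(f"Free space, 2.5 km: {fspl_db(2.5e3, f):.0f} dB")
print(f"Urban NLOS, 2.5 km: {effective_loss_db(2.5e3, f, 'urban'):.0f} dB")
print(f"Rural LoS,  2.5 km: {effective_loss_db(2.5e3, f, 'rural', nlos=False):.0f} dB")
```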
The figure above illustrates the differences and similarities in the coverage environment for (a) terrestrial and (b) satellite communication systems. The terrestrial signal environment, in most instances, attenuates the signal as it propagates through vegetation, terrain variations, urban topology and infrastructure, and weather; ultimately, as the signal passes from the outdoor to the indoor environment, it is reduced further as it penetrates, for example, coated windows and outer and inner walls. The combination of distance, obstacles, and material penetration leads to a cumulative reduction in signal strength. For the satellite, as illustrated in (b), a substantial amount of signal is lost over the vast distance it travels before reaching the consumer. If no outdoor antenna connects with the satellite signal, the signal is further reduced as it penetrates roofs, multiple ceilings, multiple floors, and walls.
It is often assumed that a satellite system has line of sight (LoS) without environmental obstructions in its signal propagation (besides atmospheric ones). The reasoning is not unreasonable, as the satellite sits above the consumers of its services, and it is of course correct when the consumer has an outdoor satellite receiver (e.g., a dish placed on a roof or another suitable location) in direct LoS with the satellite, as has historically been the norm for satellite-to-Earth communication.
When considering a satellite direct-to-cell device, we no longer have the luxury of a satellite-optimized advanced Earth-based outdoor antenna to facilitate the communications between the satellite and the consumer device. The satellite signal has to close the connection with a standard cellular device (e.g., smartphone, tablet, …), just like the terrestrial cellular network would have to do.
However, 80% or more of our mobile cellular traffic happens indoors, in our homes, workplaces, and public places. If a satellite system had to replace existing mobile network services, it would also have to provide a service quality similar to what consumers get from the terrestrial cellular network. As shown in the above figure, this involves urban areas where the satellite signal will likely pass through a roof and multiple floors before reaching a consumer. Depending on housing density, buildings (shadowing) may block the satellite signal, resulting in substantial service degradation for the consumers affected. Even if the satellite signal does not face the same challenges as a terrestrial cellular signal, such as vegetation, terrain variations, and the horizontal dimension of urban topology (e.g., outer and inner walls, coated windows, ...), the satellite signal still has to overcome the vertical dimension of urban topologies (e.g., roofs, ceilings, floors, etc.) to connect to consumers’ cellular devices.
For terrestrial cellular services, the cellular network’s signal integrity will (always) have a considerable advantage over the satellite signal because of the proximity to the consumer’s cellular device. With respect to distance alone, an LEO satellite at an altitude of 550 km has to overcome a 50-thousand-fold (or 47 dB) higher path loss than a cellular base station antenna 2.5 km away. Overcoming that path loss penalty adds considerable challenges to the antenna design, challenges that seem far from what is possible with today’s technology (and economics).
CHALLENGES SUMMARIZED.
Achieving parity between a Low Earth Orbit (LEO) satellite providing Direct-to-Cell (D2C) services and a terrestrial 5G network involves overcoming significant technical challenges. The disparity arises from fundamental differences in these systems’ environments, particularly in free-space path loss, penetration loss, and power delivery. Terrestrial networks benefit from closer proximity to the consumer, higher antenna density, and lower propagation losses. In contrast, LEO satellites must address far more significant free-space path losses due to the large distances involved and the additional challenges of transmitting signals through the atmosphere and into buildings.
The D2C challenges for LEO satellites are increasingly severe at higher frequencies, such as 3.5 GHz and above. As we have seen above, the free-space path loss increases with the square of the frequency, and penetration losses through common building materials, such as walls and floors, are significantly higher. For an LEO satellite system to achieve indoor parity with terrestrial 5G services at this frequency, it would need to achieve extraordinary levels of effective isotropic radiated power (EIRP), around 65 dBW, and narrow beamwidths of approximately 0.5° to concentrate power on specific service areas. This would require very high onboard power outputs, exceeding 1 kW, and large antenna apertures, around 2 m in diameter, to achieve gains near 35 dBi. These requirements place considerable demands on satellite design, increasing mass, complexity, and cost. Despite these optimizations, indoor service parity at 3.5 GHz remains challenging due to persistent penetration losses of around 20 dB, making this frequency better suited for outdoor or line-of-sight applications.
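As a sanity check on these numbers, the gain of a circular aperture and the resulting EIRP can be estimated in a few lines of Python (a minimal sketch, assuming an idealized parabolic aperture with an efficiency of 0.6; a real phased array would differ in detail):

import math

C = 299_792_458.0  # speed of light, m/s

def aperture_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    # Gain of a circular aperture: G = efficiency * (pi * D / lambda)^2
    lam = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

def eirp_dbw(tx_power_w, gain_dbi):
    # EIRP in dBW = transmit power in dBW + antenna gain in dBi
    return 10 * math.log10(tx_power_w) + gain_dbi

gain = aperture_gain_dbi(diameter_m=2.0, freq_hz=3.5e9)
print(f"Gain: {gain:.1f} dBi, EIRP: {eirp_dbw(1000, gain):.1f} dBW")
# -> Gain: ~35 dBi, EIRP: ~65 dBW for a 2 m aperture radiating 1 kW at 3.5 GHz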
Achieving a stable beam with the small widths required for a LEO satellite to provide high-performance Direct-to-Cell (D2C) services presents significant challenges. Narrow beam widths, on the order of 0.5° to 1°, are essential to effectively focus the satellite’s power and overcome the high free-space path loss. However, maintaining such precise beams demands advanced satellite antenna technologies, such as high-gain phased arrays or large deployable apertures, which introduce design, manufacturing, and deployment complexities. Moreover, the satellite must continuously track rapidly moving targets on Earth as it orbits around 7.8 km/s. This requires highly accurate and fast beam-steering systems, often using phased arrays with electronic beamforming, to compensate for the relative motion between the satellite and the consumer. Any misalignment in the beam can result in significant signal degradation or complete loss of service. Additionally, ensuring stable beams under variable conditions, such as atmospheric distortion, satellite vibrations, and thermal expansion in space, adds further layers of technical complexity. These requirements increase the system’s power consumption and cost and impose stringent constraints on satellite design, making it a critical challenge to achieve reliable and efficient D2C connectivity.
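To get a feel for what a 0.5° to 1° beam means on the ground, consider this small sketch (assuming a nadir-pointing beam and a flat-Earth approximation, which is adequate at these geometries):

import math

def footprint_diameter_km(altitude_km, beamwidth_deg):
    # Diameter of the beam footprint directly below the satellite.
    return 2 * altitude_km * math.tan(math.radians(beamwidth_deg / 2))

for bw_deg in (0.5, 1.0):
    d_km = footprint_diameter_km(550.0, bw_deg)
    print(f"{bw_deg} deg beam at 550 km -> ~{d_km:.1f} km footprint")
# -> ~4.8 km and ~9.6 km footprints, which must be steered continuously
#    to track ground targets while the satellite moves at ~7.8 km/s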
As the operating frequency decreases, the specifications for achieving parity become less stringent. At 1.8 GHz, the free-space path loss and penetration losses are lower, reducing the signal deficit. For an LEO satellite operating at this frequency, a 2.5 m² aperture (1.8 m diameter) antenna and an onboard power output of around 800 W would suffice to deliver an EIRP near 60 dBW, bringing outdoor performance close to terrestrial equivalency. Indoor parity, while more achievable than at 3.5 GHz, would still face challenges due to penetration losses of approximately 15 dB. However, the balance between the reduced propagation losses and achievable satellite optimizations makes 1.8 GHz a more practical compromise for mixed indoor and outdoor coverage.
At 800 MHz, the frequency-dependent losses are significantly reduced, making it the most feasible option for LEO satellite systems to achieve parity with terrestrial 5G networks. The free-space path loss decreases further, and penetration losses into buildings are reduced to approximately 10 dB, comparable to what terrestrial systems experience. These characteristics mean that the required specifications for the satellite system are notably relaxed. A 1.5 m² aperture (1.4 m diameter) antenna, combined with a power output of 400 W, would achieve sufficient gain and EIRP (~55 dBW) to deliver robust outdoor coverage and acceptable indoor service quality. Lower frequencies also mitigate the need for extreme beamwidth narrowing, allowing for more flexible service deployment.
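The frequency comparison across the three bands can be condensed into a short sketch (assuming a 550 km slant range and the indicative indoor penetration losses quoted above; the exact figures shift with the assumed altitude and geometry):

import math

def fspl_db(distance_km, freq_ghz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

penetration_db = {0.8: 10.0, 1.8: 15.0, 3.5: 20.0}  # indoor loss assumptions

for f_ghz, pen_db in penetration_db.items():
    loss = fspl_db(550.0, f_ghz)
    print(f"{f_ghz} GHz: FSPL {loss:.1f} dB + penetration {pen_db:.0f} dB "
          f"= {loss + pen_db:.1f} dB total")
# Lower frequencies win on both terms, which is why the 600-1800 MHz
# range is the practical sweet spot for D2C services.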
Most consumers’ cellular consumption happens indoors. Compared to an LEO satellite solution, these consumers are typically better served by existing 5G cellular broadband networks. For direct-to-normal-cellular-device services, it would not be practical for an LEO satellite network, even an extensive one, to replace existing 5G terrestrial cellular networks and the services they support today.
This does not mean that LEO satellites cannot be of great utility when connecting to an outdoor Earth-based consumer dish, as is already evident in many remote, rural, and suburban places. The summary table above also shows that LEO satellite D2C services are feasible, without overly challenging modifications, in the lower cellular frequency range between 600 MHz and 1800 MHz, at service levels close to those of terrestrial systems, at least in rural areas and for outdoor services in general. In indoor situations, the LEO satellite D2C signal is more likely to be compromised by roof and multiple-floor penetration scenarios to which a terrestrial signal may be less exposed.
WHAT GOES DOWN MUST COME UP.
LEO satellite services that deliver connectivity directly to unmodified mobile cellular devices get us all too focused on the downlink path from the satellite to the device. It seems easy to forget that unless we only deliver a broadcast service, the unmodified cellular device also needs to communicate meaningfully back to the LEO satellite. The challenge for an unmodified cellular device (e.g., smartphone or tablet) to receive the satellite D2C signal was explained extensively in the previous section. In the satellite downlink-to-device scenario, we can optimize the design specifications of the LEO satellite to overcome some (or most, depending on the frequency) of the challenges posed by the satellite’s high altitude compared to a terrestrial base station’s distance to the consumer device. In the device direct-uplink-to-satellite direction, we have little to no flexibility unless we start changing the specifications of the terrestrial device portfolio. And if we change the specifications of consumer devices to communicate better with satellites, we also change the premise and economics of the (wrong) idea that LEO satellites should be able to completely replace terrestrial cellular networks at service parity.
Achieving uplink communication from a standard cellular device to an LEO satellite poses significant challenges, especially when attempting to match the performance of a terrestrial 5G network. Cellular devices are designed with limited transmission power, typically in the range of 23–30 dBm (0.2–1 watt), sufficient for short-range communication with terrestrial base stations. However, when the receiving station is a satellite orbiting between 550 and 1,200 kilometers, the transmitted signal encounters substantial free-space path loss. The satellite must, therefore, be capable of detecting and processing extremely weak signals, often below -120 dBm, to maintain a reliable connection.
The free-space path loss in the uplink direction is comparable to that in the downlink, but the challenges are compounded by the cellular device’s limitations. At higher frequencies, such as 3.5 GHz, path loss can exceed 155 dB, while at 1.8 GHz and 800 MHz, it reduces to approximately 149.6 dB and 143.6 dB, respectively. Lower frequencies favor uplink communication because they experience less path loss, enabling better signal propagation over large distances. However, cellular devices typically use omnidirectional antennas with very low gain (0–2 dBi), poorly suited for long-distance communication, placing even greater demands on the satellite’s receiving capabilities.
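A minimal uplink budget illustrates the problem (a sketch assuming a 23 dBm handset with a 0 dBi omnidirectional antenna and the approximate path losses quoted above):

tx_power_dbm = 23.0     # typical handset transmit power (0.2 W)
device_gain_dbi = 0.0   # omnidirectional handset antenna

path_loss_db = {"3.5 GHz": 155.0, "1.8 GHz": 149.6, "800 MHz": 143.6}

for band, loss_db in path_loss_db.items():
    rx_dbm = tx_power_dbm + device_gain_dbi - loss_db
    print(f"{band}: ~{rx_dbm:.1f} dBm arriving at the satellite "
          f"(before satellite antenna gain)")
# -> roughly -132 to -121 dBm: the satellite's high-gain antenna and
#    low-noise receiver must close the remaining gap.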
The satellite must compensate for these limitations with highly sensitive receivers and high-gain antennas. Achieving sufficient antenna gain requires large apertures, often exceeding 4 meters in diameter for 800 MHz or 2 meters for 3.5 GHz, increasing the satellite’s size, weight, and complexity. Phased-array antennas or deployable reflectors are often used to achieve the required gain. Still, their implementation is constrained by the physical limitations and costs of launching such systems into orbit. Additionally, the satellite’s receiver must have an exceptionally low noise figure, typically in the range of 1–3 dB, to minimize internal noise and allow the detection of weak uplink signals.
Interference is another critical challenge in the uplink path. Unlike terrestrial networks, where signals from individual devices are isolated into small sectors, satellites receive signals over larger geographic areas. This broad coverage makes it difficult to separate and process individual transmissions, particularly in densely populated areas where numerous devices transmit simultaneously. Managing this interference requires sophisticated signal processing capabilities on the satellite, increasing its complexity and power demands.
The motion of LEO satellites introduces additional complications due to the Doppler effect, which causes a shift in the uplink signal frequency. At higher frequencies like 3.5 GHz, these shifts are more pronounced, requiring real-time adjustments to the receiver to compensate. This dynamic frequency management adds another layer of complexity to the satellite’s design and operation.
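The order of magnitude of these shifts is easy to estimate (a sketch assuming the worst case, where the full orbital velocity of ~7.8 km/s appears as radial velocity; real pass geometry reduces it):

def doppler_shift_hz(radial_velocity_ms, freq_hz):
    # First-order Doppler shift: f_d = (v / c) * f
    return radial_velocity_ms / 299_792_458.0 * freq_hz

v_ms = 7_800.0  # LEO orbital speed, upper bound on the radial velocity
for f_hz in (0.8e9, 1.8e9, 3.5e9):
    shift_khz = doppler_shift_hz(v_ms, f_hz) / 1e3
    print(f"{f_hz / 1e9:.1f} GHz: up to ~{shift_khz:.0f} kHz shift")
# -> ~21 kHz at 0.8 GHz, ~47 kHz at 1.8 GHz, ~91 kHz at 3.5 GHz,
#    sweeping rapidly from positive to negative during each pass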
Among the frequencies considered, 3.5 GHz is the most challenging for uplink communication due to high path loss, pronounced Doppler effects, and poor building penetration. Satellites operating at this frequency must achieve extraordinary sensitivity and gain, which is difficult to implement at scale. At 1.8 GHz, the challenges are somewhat reduced as the path loss and Doppler effects are less severe. However, the uplink requires advanced receiver sensitivity and high-gain antennas to approach terrestrial network performance. The most favorable scenario is at 800 MHz, where the lower path loss and better penetration characteristics make uplink communication significantly more feasible. Satellites operating at this frequency require less extreme sensitivity and gain, making it a practical choice for achieving parity with terrestrial 5G networks, especially for outdoor and light indoor coverage.
Uplink, the consumer-device-to-satellite signal direction, poses additional limitations on the usable frequency range. Such systems are mainly of interest from 600 MHz up to a maximum of 1.8 GHz, a range that is already challenging for uplink and downlink in indoor usage. Service in the lower cellular frequency range is feasible for outdoor usage scenarios in rural and remote areas and for non-challenging indoor environments (e.g., “simple” building topologies).
The premise that LEO satellite D2C services would make terrestrial cellular networks redundant everywhere by offering service parity appears very unlikely, and certainly not with the current generation of LEO satellites being launched. The altitude range of the LEO satellites (300 – 1200 km) and frequency ranges used for most terrestrial cellular services (600 MHz to 5 GHz) make it very challenging and even impractical (for higher cellular frequency ranges) to achieve quality and capacity parity with existing terrestrial cellular networks.
LEO SATELLITE D2C ARCHITECTURE.
A subscriber would realize they have LEO satellite Direct-to-Cell coverage through network signaling and notifications provided by their mobile device and network operator. Using this coverage depends on the integration between the LEO satellite system and the terrestrial cellular network, as well as the subscriber’s device and network settings. Here’s how this process typically works:
When a subscriber moves into an area where traditional terrestrial coverage is unavailable or weak, their mobile device will periodically search for available networks, as it does when trying to maintain connectivity. If the device detects a signal from a LEO satellite providing D2C services, it may indicate “Satellite Coverage” or a similar notification on the device’s screen.
This recognition is possible because the LEO satellite extends the subscriber’s mobile network. The satellite broadcasts system information on the same frequency bands licensed to the subscriber’s terrestrial network operator. The device identifies the network using the Public Land Mobile Network (PLMN) ID, which matches the subscriber’s home network or a partner network in a roaming scenario. The PLMN is a fundamental component of both terrestrial and LEO satellite D2C networks: it is the identifier that links a mobile consumer to a specific mobile network operator, and it enables communication, access-rights management, network interoperability, and supporting services such as voice, text, and data.
The PLMN is also directly connected to the frequency bands used by an operator and any satellite service provider, acting as an extension of the operator’s network. It ensures that devices access the appropriately licensed bands through terrestrial or satellite systems and governs spectrum usage to maintain compliance with regulatory frameworks. Thus, the PLMN links the network identification and frequency allocation, ensuring seamless and lawful operation in terrestrial and satellite contexts.
In an LEO satellite D2C network, the PLMN plays a similar but more complex role, as it must bridge the satellite system with terrestrial mobile networks. The satellite effectively operates as an extension of the terrestrial PLMN, using the same Mobile Country Code (MCC) and Mobile Network Code (MNC) as the consumer’s home network or a roaming partner. This ensures that consumer devices perceive the satellite network as part of their existing subscription, avoiding the need for additional configuration or specialized hardware. When the satellite provides coverage, the PLMN enables the device to authenticate and access services through the operator’s core network, so that consumer authentication, billing, and service provisioning remain consistent across the terrestrial and satellite domains. In cases where multiple terrestrial operators share access to a satellite system, the PLMN facilitates the correct routing of consumer sessions to their respective home networks. This coordination is particularly important in roaming scenarios, where a consumer connected to a satellite in one region may need to access services through their home network located in another region.
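Conceptually, the PLMN ID is simply the concatenation of the MCC and MNC, and the device compares the broadcast ID against what is on its SIM. A tiny sketch (the operator codes used here are illustrative only):

from dataclasses import dataclass

@dataclass(frozen=True)
class Plmn:
    mcc: str  # Mobile Country Code, 3 digits
    mnc: str  # Mobile Network Code, 2 or 3 digits

    @property
    def plmn_id(self) -> str:
        return self.mcc + self.mnc

home_network = Plmn(mcc="310", mnc="260")    # illustrative US operator
satellite_cell = Plmn(mcc="310", mnc="260")  # satellite broadcasts same ID

# The device treats the satellite cell as its home network because the
# broadcast PLMN ID matches; no new SIM or manual configuration is needed.
print(satellite_cell.plmn_id == home_network.plmn_id)  # -> True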
For a subscriber to make use of LEO satellite coverage, the following conditions must be met:
Device Compatibility: The subscriber’s mobile device must support satellite connectivity. While many standard devices are compatible with satellite D2C services using terrestrial frequencies, certain features may be required, such as enhanced signal processing or firmware updates. Modern smartphones are increasingly being designed to support these capabilities.
Network Integration: The LEO satellite must be integrated with the subscriber’s mobile operator’s core network. This ensures the satellite extends the terrestrial network, maintaining seamless authentication, billing, and service delivery. Consumers can make and receive calls, send texts, or access data services through the satellite link without changing their settings or SIM card.
Service Availability: The type of services available over the satellite link depends on the network and satellite capabilities. Initially, services may be limited to text messaging and voice calls, as these require less bandwidth and are easier to support in shared satellite coverage zones. High-speed data services, while possible, may require further advancements in satellite capacity and network integration.
Subscription or Permissions: Subscribers must have access to satellite services through their mobile plan. This could be included in their existing plan or offered as an add-on service. In some cases, roaming agreements between the subscriber’s home network and the satellite operator may apply.
Emergency Use: In specific scenarios, satellite connectivity may be automatically enabled for emergencies, such as SOS messages, even if the subscriber does not actively use the service for regular communication. This is particularly useful in remote or disaster-affected areas with unavailable terrestrial networks.
Once connected to the satellite, the consumer experience is designed to be seamless. The subscriber can initiate calls, send messages, or access other supported services just as they would under terrestrial coverage. The main differences may include longer latency due to the satellite link and, potentially, lower data speeds or limitations on high-bandwidth activities, depending on the satellite network’s capacity and the number of consumers sharing the satellite beam.
Managing a call on a Direct-to-Cell (D2C) satellite network requires specific mobile network elements in the core network, alongside seamless integration between the satellite provider and the subscriber’s terrestrial network provider. The service’s success depends on how well the satellite system integrates into the terrestrial operator’s architecture, ensuring that standard cellular functions like authentication, session management, and billing are preserved.
In a 5G network, the core network plays a central role in managing calls and data sessions. For a D2C satellite service, key components of the operator’s core network include the Access and Mobility Management Function (AMF), which handles consumer authentication and signaling. The AMF establishes and maintains connectivity for subscribers connecting via the satellite. Additionally, the Session Management Function (SMF) oversees the session context for data services. It ensures compatibility with the IP Multimedia Subsystem (IMS), which manages call control, routing, and handoffs for voice-over-IP communications. The Unified Data Management (UDM) system, another critical core component, stores subscriber profiles, detailing permissions for satellite use, roaming policies, and Quality of Service (QoS) settings.
To enforce network policies and billing, the Policy Control Function (PCF) applies service-level agreements and ensures appropriate charges for satellite usage. For data routing, elements such as the User Plane Function (UPF) direct traffic between the satellite ground stations and the operator’s core network. Additionally, interconnect gateways manage traffic beyond the operator’s network, such as the Internet or another carrier’s network.
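In condensed form, the division of labor in the operator’s core for a D2C session might be sketched as follows (the function names follow 3GPP; the one-line responsibilities are simplified):

# Simplified map of 5G core functions involved in a D2C session under
# the integrated model, where the satellite extends the operator's RAN.
core_functions = {
    "AMF": "authenticates the device and manages mobility and signaling",
    "SMF": "establishes and manages the data session context",
    "UDM": "stores the subscriber profile, incl. satellite permissions",
    "PCF": "applies policy and charging rules for satellite usage",
    "UPF": "routes user traffic between ground station and the internet",
    "IMS": "handles voice call control, routing, and handoffs",
}

# A D2C voice call traverses, roughly: satellite RAN -> ground station
# -> AMF (registration) -> SMF/UPF (session) -> IMS (call control).
for nf, role in core_functions.items():
    print(f"{nf}: {role}")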
The role of the satellite provider in this architecture depends on the integration model. If the satellite system is fully integrated with the terrestrial operator, the satellite primarily acts as an extension of the operator’s radio access network (RAN). In this case, the satellite provider requires ground stations to downlink traffic from the satellites and forward it to the operator’s core network via secure, high-speed connections. The satellite provider handles radio gateway functionality, translating satellite-specific protocols into formats compatible with terrestrial systems. In this scenario, the satellite provider does not need its own core network because the operator’s core handles all call processing, authentication, billing, and session management.
In a standalone model, where the LEO satellite provider operates independently, the satellite system must include its own complete core network. This requires implementing AMF, SMF, UDM, IMS, and UPF, allowing the satellite provider to directly manage subscriber sessions and calls. In this case, interconnect agreements with terrestrial operators would be needed to enable roaming and off-network communication.
Most current D2C solutions, including those proposed by Starlink with T-Mobile or AST SpaceMobile, follow the integrated model. In these cases, the satellite provider relies on the terrestrial operator’s core network, reducing complexity and leveraging existing subscriber management systems. The LEO satellites are primarily responsible for providing RAN functionality and ensuring reliable connectivity to the terrestrial core.
REGULATORY CHALLENGES.
LEO satellite networks offering Direct-to-Cell (D2C) services face substantial regulatory challenges in their efforts to operate within frequency bands already allocated to terrestrial cellular services. These challenges are particularly significant in regions like Europe and the United States, where cellular frequency ranges are tightly regulated and managed by national and regional authorities to ensure interference-free operations and equitable access among service providers.
The cellular frequency spectrum in Europe and the USA is allocated through licensing frameworks that grant exclusive usage rights to mobile network operators (MNOs) for specific frequency bands, often through competitive auctions. For example, in the United States, the Federal Communications Commission (FCC) regulates spectrum usage, while in Europe, national regulatory authorities manage spectrum allocations under the guidelines set by the European Union and CEPT (European Conference of Postal and Telecommunications Administrations). The spectrum currently allocated for cellular services, including low-band (e.g., 600 MHz, 800 MHz), mid-band (e.g., 1.8 GHz, 2.1 GHz), and high-band (e.g., 3.5 GHz), is heavily utilized by terrestrial operators for 4G LTE and 5G networks.
In March 2024, the Federal Communications Commission (FCC) adopted a groundbreaking regulatory framework to facilitate collaborations between satellite operators and terrestrial mobile service providers. This initiative, termed “Supplemental Coverage from Space,” allows satellite operators to use the terrestrial mobile spectrum to offer connectivity directly to consumer handsets and is an essential component of the FCC’s “Single Network Future.” The framework aims to enhance coverage, especially in remote and underserved areas, by integrating satellite and terrestrial networks. In November 2024, the FCC granted SpaceX approval to provide direct-to-cell services via its Starlink satellites. This authorization enables SpaceX to partner with mobile carriers, such as T-Mobile, to extend mobile coverage using satellite technology. The approval includes specific conditions to prevent interference with existing services and to ensure compliance with established regulations. Notably, the FCC also granted SpaceX’s request to provide service to cell phones outside the United States. For non-US operations, Starlink must obtain authorization from the relevant governments. Non-US operations are authorized in various sub-bands between 1429 MHz and 2690 MHz.
In Europe, the regulatory framework for D2C services is under active development. The European Conference of Postal and Telecommunications Administrations (CEPT) is exploring the regulatory and technical aspects of satellite-based D2C communications. This includes understanding connectivity requirements and addressing national licensing issues to facilitate the integration of satellite services with existing mobile networks. Additionally, the European Space Agency (ESA) has initiated feasibility studies on Direct-to-Cell connectivity, collaborating with industry partners to assess the potential and challenges of implementing such services across Europe. These studies aim to inform future regulatory decisions and promote innovation in satellite communications.
For LEO satellite operators to offer D2C services in these regulated bands, they need to reach agreements with the licensed MNOs that hold the rights to these frequencies. This could take the form of spectrum-sharing agreements or leasing arrangements, wherein the satellite operator obtains permission to use the spectrum for specific purposes, often under strict conditions to avoid interference with terrestrial networks. For example, SpaceX’s collaboration with T-Mobile in the USA involves utilizing T-Mobile’s existing mid-band spectrum (i.e., PCS1900) under a partnership model, enabling satellite-based connectivity without requiring additional spectrum licensing.
In Europe, the situation is more complex due to the fragmented nature of the regulatory environment. Each country manages its spectrum independently, meaning LEO operators must negotiate agreements with individual national MNOs and regulators. This creates significant administrative and logistical hurdles, as the operator must align with diverse licensing conditions, technical requirements, and interference mitigation measures across multiple jurisdictions. Furthermore, any satellite use of the terrestrial spectrum in Europe must comply with European Union directives and ITU (International Telecommunication Union) regulations, prioritizing terrestrial services in these bands.
Interference management is a critical regulatory concern. LEO satellites operating in the same frequency bands as terrestrial networks must implement sophisticated coordination mechanisms to ensure their signals do not disrupt terrestrial operations. This includes dynamic spectrum management, geographic beam shaping, and power control techniques to minimize interference in densely populated areas where terrestrial networks are most active. Regulators in the USA and Europe will likely require detailed technical demonstrations and compliance testing before approving such operations.
Another significant challenge is ensuring equitable access to spectrum resources. MNOs have invested heavily in acquiring and deploying their licensed spectrum, and many may view satellite D2C services as a competitive threat. Regulators would need to establish clear frameworks to balance the rights of terrestrial operators with the potential societal benefits of extending connectivity through satellites, particularly in underserved rural or remote areas.
Beyond regulatory hurdles, LEO satellite operators must collaborate extensively with MNOs to integrate their services effectively. This includes interoperability agreements to ensure seamless handoffs between terrestrial and satellite networks and the development of business models that align incentives for both parties.
TAKEAWAYS.
Direct-to-cell LEO satellite networks face considerable technology hurdles in providing services comparable to terrestrial cellular networks.
Overcoming free-space path loss and ensuring uplink connectivity from low-power mobile devices with omnidirectional antennas.
Cellular devices transmit at low power (typically 23–30 dBm), making it difficult for uplink signals to reach satellites in LEO at 500–1,200 km altitudes.
Uplink signals from multiple devices within a satellite beam area can overlap, creating interference that challenges the satellite’s ability to separate and process individual uplink signals.
Developing advanced phased-array antennas for satellites, dynamic beam management, and low-latency signal processing to maintain service quality.
Managing mobility challenges, including seamless handovers between satellites and beams and mitigating Doppler effects due to the high relative velocity of LEO satellites.
The high relative velocity of LEO satellites introduces frequency shifts (i.e., Doppler Effect) that the satellite must compensate for dynamically to maintain signal integrity.
Addressing bandwidth limitations and efficiently reusing spectrum while minimizing interference with terrestrial and other satellite networks.
Scaling globally may require satellites to carry varied payload configurations to accommodate regional spectrum requirements, increasing technical complexity and deployment expenses.
Operating on terrestrial frequencies necessitates dynamic spectrum sharing and interference mitigation strategies, especially in densely populated areas, limiting coverage efficiency and capacity.
The need for frequent replacement of LEO satellites due to their shorter lifespans increases operational complexity and cost.
On the regulatory front, integrating D2C satellite services into existing mobile ecosystems is complex. Spectrum licensing is a key issue, as satellite operators must either share frequencies already allocated to terrestrial mobile operators or secure dedicated satellite spectrum.
Securing access to shared or dedicated spectrum, particularly negotiating with terrestrial operators to use licensed frequencies.
Avoiding interference between satellite and terrestrial networks requires detailed agreements and advanced spectrum management techniques.
Navigating fragmented regulatory frameworks in Europe, where national licensing requirements vary significantly.
Spectrum Fragmentation: With frequency allocations varying significantly across countries and regions, scaling globally requires navigating diverse and complex spectrum licensing agreements, slowing deployment and increasing administrative costs.
Complying with evolving international regulations, including those to be defined at the ITU’s WRC-27 conference.
Developing clear standards and agreements for roaming and service integration between satellite operators and terrestrial mobile network providers.
The high administrative and operational burden of scaling globally diminishes economic benefits, particularly in regions where terrestrial networks already dominate.
While satellites excel in rural or remote areas, they might not meet high traffic demands in urban areas, restricting their ability to scale as a comprehensive alternative to terrestrial networks.
The idea of D2C satellite networks making terrestrial cellular networks obsolete is ambitious but fraught with practical limitations. While LEO satellites offer unparalleled reach in remote and underserved areas, they struggle to match terrestrial networks’ capacity, reliability, and low latency in urban and suburban environments. The high density of base stations in terrestrial networks enables them to handle far greater traffic volumes, especially for data-intensive applications.
Coverage advantage: Satellites provide global reach, particularly in remote or underserved regions, where terrestrial networks are cost-prohibitive and often of poor quality or altogether lacking.
Capacity limitations: Satellites struggle to match the high-density traffic capacity of terrestrial networks, especially in urban areas.
Latency challenges: Satellite latency, though improving, cannot yet compete with the ultra-low latency of terrestrial 5G for time-critical applications.
Cost concerns: Deploying and maintaining satellite constellations is expensive, and they still depend on terrestrial core infrastructure (although the savings would also be very substantial if all terrestrial RAN infrastructure could be avoided).
Complementary role: D2C networks are better suited as an extension to terrestrial networks, filling coverage gaps rather than replacing them entirely.
The regulatory and operational constraints surrounding the use of terrestrial mobile frequencies for D2C services severely limit scalability. The resulting fragmentation makes it difficult to achieve seamless global coverage and increases operational and economic inefficiencies. While D2C services hold promise for addressing connectivity gaps in remote areas, their ability to scale as a comprehensive alternative to terrestrial networks is hampered by these challenges. Unless global regulatory harmonization or innovative technical solutions emerge, D2C networks will likely remain a complementary, sub-scale solution rather than a standalone replacement for terrestrial mobile networks.
T.S. Rappaport, “Wireless Communications – Principles & Practice,” Prentice Hall (1996). In my opinion, it is one of the best graduate textbooks on communications systems. I bought it back in 1999 as a regular hardcover. I have not found it as a Kindle version, but I believe there are sites where a PDF version may be available (e.g., Scribd).
ACKNOWLEDGEMENT.
I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.
On the early morning of November 17, 2024, the Baltic Sea was shrouded in a dense, oppressive fog that clung to the surface like a spectral veil. The air was thick with moisture, and visibility was severely limited, reducing the horizon to a mere shadowy outline. The sea itself was eerily calm. This haunting stillness set the stage for the unforeseen disruption of the submarine cables. This event would send ripples of concern about hybrid warfare far beyond the misty expanse of the Baltic. The quiet depths of the Baltic Sea have become the stage for a high-tech mystery gripping the world. Two critical submarine cables were severed, disrupting communication in a rare and alarming twist.
As Swedish media outlet SVT Nyheter broke the news, suspicions of sabotage began to surface. Adding fuel to the intrigue, a Chinese vessel quickly became the focus of investigators: the ship of interest, Yi Peng 3, had reportedly been near both breakpoints at the critical moments. While submarine cable damage is not uncommon, the near-simultaneous failure of two cables, separated by distance but broken within the same maritime zone, is an event of perceived extraordinary rarity that raised suspicions of foul play and hybrid-warfare actions against Western critical infrastructure.
Against the backdrop of escalating geopolitical tensions, speculation is rife. Could these breaks signal a calculated act of sabotage? As the investigation unfolds, the presence of the Chinese vessel looms large, now lying at anchor in international waters in the Danish Kattegat, turning a routine disruption into a high-stakes drama that may be redefining maritime security in our digital age.
Signe Ravn-Højgaard, Director of the Danish Think Tank for Digital Infrastructure, has been at the forefront, with her timely LinkedIn Posts, delivering near real-time updates that have kept experts and observers alike on edge.
Let’s count to ten, look at what we know so far, and revisit some subsea cable fundamentals along the way.
WHY DO SUBMARINE CABLES BREAK?
Distinguishing between natural causes, unintended human actions, and deliberate human actions in the context of submarine cable breaks requires analyzing the circumstances and evidence surrounding the incident.
Natural causes generally involve geological or environmental events such as earthquakes, underwater landslides, strong currents, or seabed erosion. In the Arctic, icebergs may scrape the seabed as they drift or ground in shallow waters, potentially dragging and crushing cables in their path. These causes often coincide with measurable natural phenomena like seismic activity, seasonal ice, or extreme weather events in the area of the cable break. According to data from the International Cable Protection Committee (ICPC), ca. 5% of faults are caused by natural phenomena, such as earthquakes, underwater landslides, iceberg drift, or volcanic activity.
The aging of submarine cables adds to their vulnerability. Wear and tear, corrosion, and material degradation due to long-term exposure to seawater can lead to failures, especially in decades-old cables. In some cases, the damage may also stem from improper installation or manufacturing defects, where weak points in the cable structure result in premature failure.
Unintended human actions are characterized by accidental interference with cables, often linked to maritime activities. Examples include ship anchor dragging, fishing vessel trawling, or accidental damage during underwater construction or maintenance. These incidents typically occur in areas of high maritime traffic or during specific operations and lack any indicators of malicious intent. Approximately 40% of subsea cable faults are caused by anchoring and fishing activities, the most common human-induced risks. Another 45% of faults have unspecified causes, which could include a mix of factors. Upwards of 87% of all faults are a result of human intervention.
While necessary, maintenance and repair operations can also introduce risks. Faulty repairs, crossed cables, or mishandling during maintenance can create new vulnerabilities. Underwater construction activities, such as dredging, pipeline installation, or offshore energy projects, may inadvertently damage cables.
Deliberate human actions, by contrast (and admittedly the stuff of the most interesting stories), involve intentional interference with submarine cables and are usually motivated by sabotage, espionage, or geopolitical strategy. These cases often feature evidence of targeted activity, such as patterns of damage suggesting deliberate cutting or tampering. Unexplained or covert vessel movements near critical cable routes may also indicate intentional actions. A deliberate action may, of course, be disguised as accidental interference (e.g., anchor dragging or trawling).
Although much focus is on the integrity of the subsea cables themselves, which is natural due to the complexity and time it takes to repair a broken cable, it is wise to remember that landing stations, beach manholes, and associated operational systems are likewise critical components of the submarine cable infrastructure and are vulnerable to deliberate hostile actions as well. Cyber exposure in network management systems, which are often connected to the internet, presents an additional risk, making these systems potential targets for sabotage, espionage, or cyberattacks. Strengthening the physical security of these facilities and enhancing cybersecurity measures are essential to mitigate these risks.
Landing stations and submarine cable cross-connects, or T-junctions, are critical nodes in the global communications infrastructure, making them particularly vulnerable to deliberate attacks. A compromise at a landing station could disrupt multiple cables simultaneously, severing regional or international connectivity. At the same time, an attack on a T-junction could disable critical pathways, bypassing redundancy mechanisms and amplifying the impact of a single failure. These vulnerabilities highlight the need for enhanced physical security, robust monitoring, and advanced cybersecurity measures to safeguard these vital points due to their disproportional impact if compromised.
Although deliberate human actions are increasingly considered a severe risk in the current geopolitical climate, their frequency and impact are not well documented in the reported statistics. Most known subsea cable incidents remain attributed to accidental causes, with sabotage and espionage considered significant but less quantified threats.
Categorizing cable breaks involves gathering data on the context of the incident, including geographic location, timing, activity logs from nearby vessels, and environmental conditions. Combining this information with forensic analysis of the damaged cable helps determine whether the cause was natural, accidental, or deliberate.
WHY ARE SUBMARINE CABLES CRITICAL INFRASTRUCTURE?
Submarine cables are indispensable to modern society and should be regarded as critical infrastructure because they enable global connectivity and support essential services. These cables carry approximately 95% of international data traffic, forming the backbone of the Internet, financial systems, and communications. Their reliability underpins industries, governments, and economies worldwide, making disruptions highly consequential. For example, the financial sector relies heavily on submarine cables for instantaneous transactions and stock trading, while governments depend on them for secure communications and national security operations. With limited viable alternatives, such as satellites, which lack the bandwidth and speed of submarine cables, these cables are uniquely vital.
Despite their importance, submarine cable networks are designed with significant redundancy and safeguards to ensure resilience. Multiple cable routes exist for most major data pathways, ensuring that a single failure does not result in widespread disruptions. For example, transatlantic communications are supported by numerous parallel cables. Regional systems, such as those in Europe and North America, are highly interconnected, offering alternative routes to reroute traffic during outages. Advanced repair capabilities, including specialized cable-laying and repair ships, ensure timely restoration of damaged cables. Additionally, internet service providers and data centers use sophisticated traffic-routing protocols to minimize the impact of localized disruptions. Ownership and maintenance of these cables are often managed by consortia of telecom and technology companies, enhancing their robustness and shared responsibility for maintenance.
It is worth considering, for operators and customers of submarine cables, that using multiple parallel submarine cables drastically improves the overall availability of the network. With two cables, downtime is reduced to mere seconds annually (99.9999% availability and a maximum of ~32 seconds of annual downtime), and with three cables, it becomes negligible (99.9999999% availability and a maximum of ~0.03 seconds of annual downtime). This enhanced reliability ensures that critical services remain uninterrupted even if one cable experiences a failure. Such setups are ideal for organizations or infrastructures that require near-perfect availability. To mitigate the impact of deliberate hostile actions on submarine cable traffic, operators must adopt a geographically strategic approach when designing for redundancy and robustness, considering both the physical and logical connectivity and transport.
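These figures follow directly from assuming each cable is independently available 99.9% of the time; a minimal sketch:

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def parallel_availability(per_cable_availability, n_cables):
    # The path is only down if all parallel cables fail simultaneously.
    return 1 - (1 - per_cable_availability) ** n_cables

for n in (1, 2, 3):
    a = parallel_availability(0.999, n)  # assume 99.9% per cable
    downtime_s = (1 - a) * SECONDS_PER_YEAR
    print(f"{n} cable(s): ~{downtime_s:,.2f} s downtime per year")
# -> ~8.8 hours with one cable, ~32 s with two, ~0.03 s with three,
#    assuming statistically independent failures; shared routes, landing
#    stations, or deliberate attacks break that independence assumption.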
While the submarine cable network is inherently robust, users of this infrastructure must adopt proactive measures to safeguard their services and traffic. Organizations should distribute data across multiple cables to mitigate risks from localized outages and invest in cloud-based redundancy with geographically dispersed data centers to ensure continuity. Collaborative monitoring efforts between governments and private companies can help prevent accidental or deliberate damage, while security measures for cable landing stations and undersea routes can reduce vulnerabilities. By acknowledging the strategic importance of submarine cables and implementing such safeguards, users can help ensure the continued resilience of this critical global infrastructure.
1-2 KNOCKOUT!
So what happened underneath the Baltic Sea last weekend (between 17 and 18 November)?
In mid-November 2024, two significant submarine cable disruptions occurred in the Baltic Sea, raising concerns over the security of critical infrastructure in the region. The first incident involved the BCS East-West Interlink cable, which connects Lithuania to Sweden. On November 17, at approximately 10:00 AM local time (08:00 UTC), the damage was detected. The cable runs from Sventoji, Lithuania, to Katthammarsvik on the east coast of the Swedish island of Gotland. Telia Lithuania, a telecommunications company, reported that the cable had been “cut,” leading to substantial communication disruptions between Lithuania and Sweden.
The second disruption occurred the following day, on November 18, around midnight (note: exact time seems to be uncertain), involving the C-Lion1 cable connecting Finland to Germany. The damage was identified off the coast of the Swedish island of Öland. Finnish telecommunications company Cinia Oy reported that the cable had been physically interrupted by an unknown force, resulting in a complete outage of services transmitted via this cable.
The timeline of these events begins on November 17, with the detection of damage to the BCS East-West Interlink cable, followed by the discovery of the severed C-Lion1 cable on November 18. Geographically, both incidents occurred in the Baltic Sea, with the East-West Interlink cable between Lithuania and Sweden and the C-Lion1 cable connecting Finland and Germany. The breaks were specifically detected near the Swedish islands of Gotland and Öland.
These disruptions have led to heightened security measures and widespread investigations in the Baltic region as authorities seek to determine the cause and safeguard critical submarine cable infrastructure. Concerns over potential sabotage have intensified discussions among NATO members and their allies, underscoring the geopolitical implications of such vulnerabilities.
THE SITUATION.
The figure below provides a comprehensive overview of submarine cables in the Baltic Sea and Scandinavia. In most media coverage, only the two compromised submarine cables, BCS East-West Interlink (RFS: 1997) and C-Lion1 (RFS: 2016), have been shown, which may create the impression that those two are the only subsea cables in the Baltic. This is not the case, as shown below. This does not diminish the seriousness of the individual submarine cable breaks but illustrates that alternative routes may be available until the compromised cables have been repaired.
The figure also shows the areas where the two submarine cables appear to have been broken and the approximate timeline for when the cable operators noticed that the cables were compromised. Compared to the BCS East-West Interlink, the media coverage of the C-Lion1 break is less clear about the exact time and location of the break. This is obviously very important information, as it can be correlated with the position of the vessel of interest currently under investigation for causing the two breaks.
It should be noted that the Baltic Sea area has a considerable number of individual submarine cables. A few of those run very near the two broken ones or cross the busy shipping routes vessels take through the Baltic Sea.
Using the MarineTraffic tracker (note: there are alternatives; I like this one), I can get an impression of the maritime traffic around the submarine cable breaks in the approximate time frames when the breaks were discovered. The figure below shows the marine traffic around the BCS East-West Interlink from Gotland (Sweden) to Sventoji (Lithuania) across the Baltic Sea, with a cable length of 218 km.
The route between Gotland and the Baltic States, also known as the Central Baltic Sea, is one of the busiest sea routes in the world, with more than 30 thousand vessels passing through annually. Around the BCS East-West Interlink subsea cable break, ca. 10+ maritime vessels were passing at the time of the cable break. The only Chinese ship at that time and location was Yi Peng 3 (a bulk carrier), also mentioned in the press a couple of hours ago.
Some hours later, between 23:00 and 01:00 UTC, Yi Peng 3 was crossing the area of the second cable break, at what also appears to be the time the C-Lion1 outage was observed. See the figure below, with the red circle pinpointing the Chinese vessel. Again, Yi Peng 3 was the only Chinese vessel in the area at the possible time of the cable break. It is important to note, as also shown in the figure below, that there were many other ships in the area and neighborhood of the Chinese vessel and the location of the C-Lion1 submarine cable.
Using the MarineTraffic website’s historical data, I have mapped out Yi Peng 3’s route up through the Baltic Sea to the Russian port of Ust-Luga and back out of the Baltic Sea, including the path and timing of its presence around the two cable breaks, which coincides with the times of the reported outages.
If one examines the Chinese vessel’s speed relative to the other vessels’ speeds, it would appear that Yi Peng 3 is the only vessel that matches both break locations and both time intervals. I would like to emphasize that such an assessment is limited to the data in the MarineTraffic database (that I am using) and may obviously be a coincidence, irrespective of how one judges the likelihood of that. Also, even if the Chinese vessel of interest should be found to have caused the two submarine cable breaks, it may not have been a deliberate act.
Yi Peng 3’s current status (2024-11-20 12:41 UTC+1) is that it has stopped at anchor in the Kattegat (see the figure below). Yi Peng 3 appears to have stopped in international waters of its own volition and supposedly not at the order of local authorities.
There are many rumors circulating about the Chinese vessel. It was earlier reported that a Danish pilot was placed on the vessel on the evening of November 19 (2024). This also agrees with the official event entry and timestamp as recorded by MarineTraffic. In the media, this event was misconstrued as Danish maritime authorities having taken control of the Chinese vessel. That, however, later appeared not to have been the case.
Danish waters, including the Kattegat, are part of a region where licensed pilotage (by a “lods” in Danish) is commonly required or strongly recommended for vessels of specific sizes or types, especially when navigating congested or challenging areas. The presence of a licensed-pilot entry in the log reinforces that the vessel’s activities during this phase of its journey align with standard operating procedures.
However, this does not exclude the need for further scrutiny, as other aspects of the vessel’s behavior, such as voluntary stops or deviations from its planned route, still warrant investigation. If for nothing else, an inquiry should ensure sufficient information is available for insurance to take effect and compensate the submarine cable owners for the damages and the cost of repairing the cables. If Yi Peng 3 did not stop its journey due to intervention from the Danish maritime authority, then it may have been at the request of the protection & indemnity insurance company that the owner of Yi Peng 3 should have in place.
WHAT DOES IT TAKE TO CUT A SUBMARINE CABLE?
To break a submarine cable, a ship typically needs to generate significant force. This is often caused by the unintentional or accidental deployment of an anchor while the ship is underway. The ship’s momentum plays a crucial role, determined by its mass and speed. A large, heavily loaded vessel moving at even moderate speeds, such as 6 knots, carries immense kinetic energy. If an anchor is deployed in such conditions, the combination of drag, weight, and momentum can create concentrated forces capable of damaging or severing a cable.
The anchor’s characteristics are equally critical. A large, sharp anchor with heavy flukes can snag a cable, especially if the cable is exposed on the seabed or poorly buried. As the ship continues to move forward, the dragging anchor might stretch, lift, or pierce the cable’s protective layers. If the ship is in an area with soft seabed sediment like mud or sand, the anchor has a better chance of digging in and generating the necessary tension to break the cable. On harder or rocky seabed, the anchor might skip, but this can still result in abrasion or localized stress on the cable.
The BCS East-West Interlink cable, the first submarine cable to break, connecting Lithuania and Sweden, is laid at depths ranging from approximately 100 to 150 meters. In these depths, the seabed is predominantly composed of soft sediments, including sand and mud, which can shift over time due to currents and sediment deposition. Such conditions can lead to sections of the cable becoming exposed, increasing their vulnerability to external impacts like anchoring. The C-Lion1 cable, the second subsea cable to break, is situated at shallower depths of about 20 to 40 meters. In these areas, the seabed may consist of a combination of soft sediments and harder materials, such as clay or glacial till. The presence of harder substrates can pose challenges for cable burial and protection, potentially leaving segments of the cable exposed and susceptible to damage from external forces.
The vulnerability of the cable is also a factor. Submarine cables are typically armored and buried under 1–2 meters of sediment near shorelines, but in deeper waters, they are often exposed due to technical challenges in burial. An exposed cable is particularly at risk, especially if it is old or has been previously weakened by sediment movement or other physical interactions.
When a submarine cable break occurs, one would typically analyze the maritime vessels in the vicinity of the break. A vessel’s Automatic Identification System (AIS) signals can provide telltale signs. AIS transmits a vessel’s speed, position, and heading at regular intervals, which can reveal anomalies in its movement. If a ship accidentally deploys its anchor:
Speed Changes: The vessel’s speed would begin to decrease unexpectedly as the anchor drags along the seabed, creating resistance. This deceleration might be gradual or abrupt, depending on the seabed type and the tension in the anchor chain. In an extreme case, the speed could drop from cruising speeds (e.g., 6 knots) to near zero as the ship comes to a stop.
Position Irregularities: If the anchor snags a cable, the AIS track may show deviations from the expected path. The ship might veer slightly off course or experience irregular movement due to the uneven drag caused by the cable interaction.
Stop or Slow Maneuvers: If the anchor creates substantial resistance, the vessel might halt entirely, leaving a stationary position in the AIS record for a prolonged period.
Additionally, AIS position data might reveal whether the ship was operating near known submarine cable routes. This is significant because anchoring is typically restricted in these zones, and any AIS data showing activity or stops within these areas would be a red flag. The figure below illustrates Yi Peng 3’s AIS signal, using available data from MarineTraffic, between the 16th and 18th of November (2024). It is apparent that there are long gaps in the AIS transmissions on both the 17th and the 18th, while prior to those dates, the AIS transmitted approximately every 2 minutes. Apart from the AIS silence at around 8 AM on the 17th of November, the AIS silences coincide with significant speed drops, indicating that Yi Peng 3 would have been at or near standstill.
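A minimal sketch of how such AIS anomalies could be flagged programmatically (the samples below are hypothetical; real AIS feeds carry many more fields):

from datetime import datetime, timedelta

# Hypothetical AIS samples: (timestamp UTC, speed over ground in knots).
track = [
    (datetime(2024, 11, 17, 7, 50), 6.1),
    (datetime(2024, 11, 17, 7, 52), 6.0),
    (datetime(2024, 11, 17, 9, 40), 0.4),  # long gap + near standstill
    (datetime(2024, 11, 17, 9, 42), 0.3),
]

MAX_GAP = timedelta(minutes=10)  # normal cadence here was ~2 minutes
SPEED_DROP_KNOTS = 4.0           # flag sudden large decelerations

for (t0, v0), (t1, v1) in zip(track, track[1:]):
    if t1 - t0 > MAX_GAP:
        print(f"AIS silence of {t1 - t0} ending {t1:%H:%M} UTC")
    if v0 - v1 > SPEED_DROP_KNOTS:
        print(f"Speed drop {v0:.1f} -> {v1:.1f} kn around {t1:%H:%M} UTC")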
Environmental and human factors further compound the situation. Strong currents, storms, or poor visibility might increase the likelihood of accidental anchoring or a missed restriction. Human error, such as improper navigation or ignoring marked cable zones, can also lead to such incidents. Once the anchor catches the cable, the tension forces grow until the cable either snaps or is pulled from its burial; if the cable snaps, the sudden loss of resistance increases the ship’s stopping distance.
When considering the scenario where Yi Peng 3, a large bulk carrier with a displacement of approximately 75,169 tons, drops its anchor while traveling at a speed of 6 knots (~3.1 m/s), the stopping dynamics vary significantly depending on whether or not the anchor snags a submarine cable. Using simple mathematical modeling, we can estimate the expected stopping time and distance in both cases, assuming specific conditions for the ship and the cable. The anchor deployment generates a drag force that depends on the seabed conditions (as discussed above) and on whether the anchor catches a submarine cable. When no submarine cable is involved, the drag force generated by the anchor is estimated at 1.5 meganewtons (MN), a typical value for large vessels in soft seabed conditions (e.g., mud or sand). If the ship’s anchor catches a submarine cable, the resistance force effectively doubles to 3 MN, assuming the cable resists the anchor’s pull consistently until the ship stops or the cable eventually breaks (cables usually do break, as the ship’s kinetic energy is far greater than the energy needed to shear a submarine cable).
When the anchor drags along the seabed without encountering a cable, the stopping time is approximately 2.5 minutes, and the ship travels ca. 250 meters before coming to a complete stop. This deceleration is driven solely by the drag force of the anchor interacting with the seabed. However, if the anchor catches a submarine cable, the stopping time is reduced to around a minute, and the stopping distance shortens to ca. 100+ meters. This reduction occurs because the resistance force doubles, significantly increasing the rate of deceleration. If the cable breaks, the ship might accelerate slightly as the anchor loses the additional drag from the cable, extending the stopping distance compared to a scenario where the cable holds until the ship stops. The ship might also veer slightly off course if the anchor suddenly comes free. Due to the short time scale involved, e.g., 1 to 3 minutes, such an event would be difficult to observe in real time, as the AIS transmit cycle could be longer. However, getting from standstill back to an operating speed of 6 knots would realistically take up to 40 minutes, including anchor recovery, under normal operating conditions. If the anchor has become entangled in the submarine cable, it may take substantially longer to recover it and continue the journey (even if the crew “forgets” to notify the authorities, as they would be obliged to do). In desperation, the vessel may abandon the fouled anchor and rely on its other anchor for redundancy (larger vessels typically have two anchors, a port anchor and a starboard anchor).
When a submarine cable breaks during interaction with a ship, it is usually due to excessive tensile forces that exceed the cable’s strength. Conditions such as the ship’s size and speed, the cable’s vulnerability, and the seabed characteristics all contribute to the likelihood of a break. Once the cable snaps, it drastically changes the dynamics of the ship’s deceleration, often leading to increased stopping distances and posing risks to both the cable and the ship’s anchoring equipment. Understanding these dynamics is critical for assessing incidents involving submarine cables and maritime vessels.
If the Yi Peng 3 accidentally dropped its anchor while sailing at 6 knots, it is highly plausible that the anchor could sever the BCS East-West Interlink submarine cable. The ship's immense kinetic energy (350+ megajoules), combined with the forces exerted by the dragging anchor, far exceeds the energy required to break the cable (70+ kilojoules for a 50 mm thick cable).
ACTUAL TRAFFIC IMPACT OF THE BALTIC SEA CABLE CUTS?
BCS East-West Interlink Cut (Sweden-Lithuania): Approximately 20% of the measured paths between Sweden and Lithuania exhibited significant increases in latency following the cable cut. However, no substantial packet loss was detected, indicating that while some routes experienced delays, data transmission remained largely intact.
C-Lion1 Cut (Finland-Germany): About 30% of the paths between Finland and Germany showed notable latency increases after the incident. Similar to the BCS cut, there was no significant packet loss observed, suggesting that alternative routing effectively maintained data flow despite the increased delays.
The analysis concluded that the internet demonstrated a degree of resilience by rerouting traffic through alternative paths, mitigating the potential impact of the cable disruptions. As discussed in this article, the RIPE NCC analysis highlights the importance of maintaining and securing multiple connections to ensure robust internet connectivity. In those considerations, it is also clear that the technically responsible need to consider latency in their choice of alternative routes, as some customer applications may be critically sensitive to excessive latency (e.g., payment and certain banking applications, real-time communications such as Zoom, Teams, and Google Meet, financial trading, …).
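To illustrate the kind of path-level analysis referred to above, here is a minimal sketch, assuming you have raw RTT probe samples (in milliseconds) for a path before and after an event, with lost probes recorded as None; the sample values are hypothetical:

```python
# Compare median latency and packet loss for a path before/after an event.
from statistics import median

def summarize(samples):
    received = [s for s in samples if s is not None]
    loss = 1 - len(received) / len(samples)
    return median(received), loss

before = [18.2, 18.4, 18.1, 18.3, None, 18.2]   # hypothetical probes
after  = [31.5, 31.9, 32.0, None, 31.7, 31.6]

m_before, loss_before = summarize(before)
m_after, loss_after = summarize(after)
print(f"Median RTT: {m_before:.1f} -> {m_after:.1f} ms "
      f"(+{m_after - m_before:.1f} ms)")
print(f"Packet loss: {loss_before:.0%} -> {loss_after:.0%}")
```

This captures the two signals discussed above: a shift in median latency with little or no change in packet loss points to rerouting over a longer path rather than a loss of connectivity.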
While media often highlights that security- and intelligence-sensitive information (e.g., diplomatic traffic, defense-related traffic, …) may be compromised in case of a submarine cable cut, it seems to me highly unlikely that such information would rely solely on a single submarine cable connection without backups (e.g., satellites communications, dedicated secure networks, air-gapped systems, route diversity, …) or contingencies. Best practices in network design and operational security prioritize redundancy, especially for sensitive communications.
In any case, military and diplomatic communications are rarely entrusted solely to submarine cables. High-value networks, like those used by NATO or national defense agencies, integrate (a) high-capacity, low-latency satellite links as failover, (b) secure terrestrial routes, and (c) cross-border fiber agreements with trusted partners.
WHAT IS THE RISK?
Below is a simple example of a risk assessment model, with the rose color indicating the risk category into which the two sea cables, BCS East-West Interlink and C-Lion1, might fall. This should really be seen as an illustration, and the actual probability ranges may not reflect reality. Luckily, we have only a little data that could be used to build a more rigorous risk assessment or incident probability model. In the illustration below, I differentiate between Baseline Risk, which represents the risk of a subsea cable break due to natural causes, including unintentional human-caused breaks, and Sabotage Risk, which represents deliberately caused submarine cable breaks due to actual warfare or hybrid/pseudo warfare.
The annual occurrence of 100 to 200 cable breaks (out of ca. 600 cables in service) translates to a break rate of approximately 0.017% to 0.033% per kilometer each year (implying a global installed base on the order of 600,000 km of cable). This low percentage underscores the robustness of the submarine cable infrastructure despite the challenges posed by natural events and human activities.
With the table above, one could, in principle, estimate the likelihood of a cable break due to natural causes and the additional probability of cable breaks attributed to deliberate actions, thereby forming an overall estimate of the break risk for a particular submarine cable. This might look like this (or a lot more complex than this ;-):
For the BCS East-West Interlink break, we can make the following high-level assessment of the Baseline risk of a break. The BCS East-West Interlink submarine cable, connecting Sventoji, Lithuania, and Katthammarsvik, Sweden, spans the Baltic Sea, which is characterized by moderate marine traffic and relatively shallow waters.
The Baseline Probability considerations amount to:
Cable Length: Shorter cables generally have a lower risk of breaks.
Marine Traffic Density: The Baltic Sea experiences moderate marine traffic, which can increase the likelihood of accidental damage from anchors or fishing activities.
Fishing Activity: The area has moderate fishing activity, posing a potential risk to submarine cables.
Seismic Activity: The Baltic Sea is geologically stable, indicating a low risk from seismic events.
Iceberg Activity: The likelihood of an iceberg causing a submarine cable break in the Baltic Sea, particularly in the areas where recent disruptions were observed, is virtually nonexistent.
Depth of Cable: The cable lies in relatively shallow waters, making it more susceptible to human activities.
Cable Armoring: If the cable is well-armored, it will be more resistant to physical damage.
As an illustration, here are the specifics of the Baseline Risk with assumed ß-weights, using the midpoint probabilities from the table above.
This works out to a 13% (0.060% per km) baseline probability per year of experiencing a cable break from non-deliberate causes.
Estimated Baseline Probability Range:
Considering all the above factors, and using the minimum and maximum factor probabilities, the baseline probability of a break in the BCS East-West Interlink cable is estimated to be in the low to moderate range, approximately 7.35% (0.034% per km) to 18.7% (0.086% per km) per year. This estimation accounts for the moderate marine and fishing activities, the shallow depth, and the assumption of standard protective measures. Also, note that this is below the average per-cable break likelihood of between 17% and 33% (i.e., 100 to 200 breaks per year across ca. 600 cables).
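As a toy illustration of how such a baseline estimate could be computed, here is a minimal sketch, assuming a linear per-km break rate built from ß-weighted factor contributions. The factor rates and ß-weights below are placeholders chosen so that the weighted sum lands near the 0.060% per km midpoint above; they are not the actual table values:

```python
# Illustrative baseline-risk sketch: per-km annual break rate assembled
# from ß-weighted factor contributions, scaled linearly by cable length.
# All factor values are hypothetical placeholders.

cable_length_km = 218   # BCS East-West Interlink

# (per-km annual rate contribution, ß-weight) per factor
factors = {
    "marine_traffic": (0.00030, 1.0),
    "fishing":        (0.00020, 1.0),
    "shallow_depth":  (0.00015, 1.0),
    "seismic":        (0.00001, 1.0),
    "armoring":       (-0.00006, 1.0),   # protection reduces the rate
}

rate_per_km = sum(rate * beta for rate, beta in factors.values())
annual_prob = rate_per_km * cable_length_km   # linear approximation

print(f"Per-km rate: {rate_per_km:.4%} per km per year")
print(f"Annual baseline break probability: {annual_prob:.1%}")
```

With a linear model like this, the annual probability simply scales with cable length, which is also why a much longer cable such as C-Lion1 would come out with a higher baseline risk under the same per-km rate.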
Given the geopolitical tensions, the cable's accessible location, and recent incidents, the likelihood of sabotage for the BCS East-West Interlink is moderate to high. Implementing robust security measures and continuous monitoring is essential to mitigate this risk. The available media information indicates that the monitoring of this sea cable was good; the same may not be said of the C-Lion1 submarine cable, owned by Cinia Oy, although this cable is also substantially longer than the BCS one (1,172 vs. 218 km).
The European Union Agency for Cybersecurity (ENISA) published a report back in July 2023 titled "Subsea Cables – What is at Stake?". The ICPC's (International Cable Protection Committee) categorization of cable faults shows that approximately 40% of subsea cable faults are caused by anchoring and fishing activities, the most common human-induced risks. Another 45% of faults have unspecified causes, which could include a mix of factors. Around 87% of all faults result from human intervention, either through unintentional actions like fishing and anchoring or deliberate malicious activities. On the other hand, 4% of faults are due to system failures, attributed to technical defects in cables or equipment. Lastly, 5% of faults are caused by natural phenomena, such as earthquakes, underwater landslides, or volcanic activity. These statistics emphasize the predominance of human activities in subsea cable disruptions over natural or technical causes. These insights can calibrate the above risk assessment methodology, although some deconvolution would be necessary to ensure that appropriately regionalized and situational data has been correctly considered.
ADDITIONAL INFORMATION.
Details of the ship of interest, and suspect number one: YI PENG 3 (IMO: 9224984) is a Bulk Carrier built in 2001 and sailing under China’s flag. Her carrying capacity is 75,121 tonnes, and her current draught is reported to be 14.5 meters. Her length is 225 meters, and her width is 32.3 meters. A maritime bulk carrier vessel is designed to transport unpackaged bulk goods in large quantities. These goods, such as grain, coal, ore, cement, salt, or other raw materials, are typically loose and not containerized. Bulk carriers are essential in global trade, particularly for industries transporting raw materials efficiently and economically.
The owner of "Yi Peng 3", Ningbo Yipeng Shipping Co., Ltd., is a maritime company based in Ningbo, Zhejiang Province, China. The company is located at 306, Yanjiang Donglu, Zhenhai District, Ningbo, Zhejiang, 315200, China. Ningbo Yipeng Shipping specializes in domestic and international waterway transportation, offering domestic freight forwarding, ship agency, and the wholesale and retail of mineral products. The company owns and operates bulk carrier vessels, including the "YI PENG" (IMO: 9224996), a bulk carrier with a gross tonnage of 40,622 and a deadweight of 75,169 tons, built in 2001. Another vessel, "YI PENG 3" (IMO: 9224984), is also registered under the company's ownership. Financially, Ningbo Yipeng Shipping reported a total operating income of approximately 78.18 million yuan, with a net profit of about -9.97 million yuan, indicating a loss for the reporting period.
ACKNOWLEDGEMENT.
I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. Many thanks to Signe Ravn-Højgaard for keeping us updated on the developments over the last few days (in November 2024) and for her general engagement around and passion for critical infrastructure.
THE NATURE OF TELECOM CAPEX – A 2024 UPDATE.
Over the last three years, I have extensively covered the details of the Western European telecom sector's capital expense levels and the drivers behind telecom companies' capital investments. These accounts can be found in "The Nature of Telecom Capex—a 2023 Update" from 2023 and my initial article from 2022. This new version, "The Nature of Telecom Capex – a 2024 Update", differs from the 2022 and 2023 issues in that it focuses on the near-future Capex demands from 2024 to 2030 and what we may expect of our industry's capital spending over the next seven years.
For Western Europe, Capex levels in 2023 were lower than in 2022, a relatively rare but not unique occurrence that led many industry analysts to conclude the "End of Capex" had arrived and that, from now on, "Capex will surely decline." The compelling and logical explanations were also evident, pointing out that "data traffic (growth) is in decline", "overproduction of bandwidth", "5G is not what it was heralded to be", "no interest in 6G", "capital is too expensive", and so forth. These "End of Capex" conclusions were often made on either aggregated data or selected data, depending on the availability of data.
Having worked on Capex planning and budgeting since the early 2000s for one of the biggest telecom companies in Europe, Deutsche Telekom AG, building what has been described as best-practice Capex models, my outlook is slightly less "optimistic" about the decline and "end" of Capex spending by the industry. Indeed, for those expecting that a telco's capital planning is only impacted by hyper-rational insights glued to real-world tangibles and driven by clear strategic business objectives, I beg you to modify that belief somewhat.
Figure 1 illustrates the actual telecom Capex development for Western Europe between 2017 and 2023, with projected growth from 2024 (with the first two quarters' actual Capex levels) to 2026 represented by the orange-colored dashed lines, corresponding to an annual increase of ca. 500 million euros. The light dashed line illustrates the annual baseline Capex level before 5G and fiber deployment acceleration. The light solid line shows the corresponding telco Capex-to-Revenue development, including an assessment for 2024 to 2026. Source: New Street Research European Quarterly Review, covering 15 Western European countries (see references at the end of the blog) and 56+ telcos from 2017 to 2024, with 2024 covering the year's first two quarters.
Western Europe’s telecommunications Capex fell between 2022 and 2023 for the first time in some years, from the peak of 51 billion euros in 2022. The overall development from 2017 to 2023 is illustrated below, including a projected Capex development covering 2024 to 2026 using each Telco’s revenue projections as a simple driver for the expected Capex level (i.e., inherently assuming that the planned Capex level is correlated to the anticipated, or targeted, revenue of the subsequent year).
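As a toy illustration of that projection logic, here is a minimal sketch, assuming each telco's planned Capex tracks its anticipated revenue through a fixed Capex-to-Revenue ratio; all figures below are hypothetical:

```python
# Revenue-driven Capex projection: assume a fixed Capex-to-Revenue ratio
# per telco and apply it to the revenue forecast. Figures are hypothetical.

capex_2023 = 2.0        # bn EUR, illustrative telco
revenue_2023 = 11.0     # bn EUR
ratio = capex_2023 / revenue_2023

revenue_forecast = {2024: 11.2, 2025: 11.5, 2026: 11.8}  # hypothetical
for year, rev in revenue_forecast.items():
    print(f"{year}: projected Capex ~{ratio * rev:.2f} bn EUR")
```

Summing such per-telco projections across all 56+ operators yields the aggregate Western European trajectory shown in Figure 1.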
The reduction in Capex between 2022 and 2023 comes from 29 out of 56 telcos reducing their Capex level in 2023 compared to 2022. In 8 out of 15 countries, telco Capex levels decreased by ca. 2.3 billion euros compared to their 2022 levels. Likewise, 7 countries together spent approximately 650 million euros more than their 2022 levels. If we compare the 1st and 2nd half of 2023 with 2022, there was an unprecedented Capex reduction in the 2nd half of 2023 compared to any other year from 2017 to 2023. It really gives the impression that many (at least 36 out of 56) telcos put their foot on the brake in 2023. 29 of those 36 telcos braked their spending in the last half of 2023 and ended the year with overall lower spending than in 2022. Of the 8 countries with a lower Capex spend in 2023, the UK, France, Italy, and Spain make up more than 80%. Of the countries with a higher Capex in 2023, Germany, the Netherlands, Belgium, and Austria make up more than 80%.
For a few of the countries with lower Capex levels in 2023, one could argue that they have more or less finished their 5G rollout and have such high fiber-to-the-home penetration that additional fiber deployment is mainly overbuild and of a substantially smaller scale than in the past (e.g., France, Norway, Spain, Portugal, Denmark, and Sweden). For other countries with a lower investment level than the previous year, such as the UK, Italy, and Greece, 2022 and 2023 saw substantial consolidation activity in their markets (e.g., Vodafone UK & CK Hutchison's Three, Wind Hellas rolled into Nova in Greece, …). In fact, Spain (e.g., Masmovil), Norway (e.g., Ice Group), and Denmark (e.g., Telia DK) also experienced consolidation activities, which generally lower companies' spending levels initially. One would expect, as is to some extent visible in the first half of 2024, that countries that spent less due to consolidation activities would increase their Capex levels in the following two to three years after an initial replanning period.
WESTERN EUROPE – THE BIG CAPEX OVERVIEW.
Figure 2 shows, on a country level, the 5-year average Capex spend (over the period 2019 to 2023) and the Capex in 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).
When attempting to understand telco Capex, or any Capex with a "built-in" cyclicity, one really should look at more than one or two years. Figure 2 above compares the average Capex spend over the period 2019 to 2023 with the Capex spend in 2023. The five-year Capex average captures the initial stages of 5G deployment in Europe, 5G deployment in general, COVID capacity investments (in fixed networks), the acceleration of fiber rollout in many European countries (e.g., Germany, UK, Netherlands, …), the financial (inflationary) crisis of increasingly costly capital, and so forth. In my opinion, 2023 is a reflection of the 2021-2022 financial crisis and of the fact that most of the 5G needed to cover current market demand has been deployed. As we have seen before, telco investments are often 12 to 18 months out of sync with financial crisis years, and from that perspective, it is also not surprising that 2023 might be a lower Capex year than the past. Although, as is also evident from Figure 2, only 5 countries had a lower Capex level in 2023 than their previous 5-year average.
Figure 3 illustrates the Capex development over the last 5 years, from 2019 to 2023, with green marking years where the subsequent year had a higher Capex level and red marking years where the subsequent year had a lower Capex level. From a Western European perspective, only 2023 had a lower Capex level than the previous year (compared to the last 5 years). Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).
Capex-to-Revenue ratios in the telco industry are prone to some uncertainty. This is particularly the case when individual telcos are compared. In general, I recommend making comparisons over a given period of time, such as three- or five-year periods, as this averages out some of the natural variation between telcos and countries (e.g., one country or telco may have started its 5G deployment earlier than others). Even that approach has to be taken with some caution, as some telcos may fully incur Capex for fiber deployments while others make wholesale agreements with open fibercos (for example) and only incur last-mile access or connection Capex. Although of smaller relative Capex scale nowadays, telcos increasingly have towercos managing and building the passive infrastructure for their cell site demand. Some may still fully build their own cell sites, incurring proportionally higher Capex per new site deployed, which of course may lead to structural Capex differences between such telcos. With these cautionary remarks in mind, I believe that Capex-to-Revenue ratios do provide a means of comparing countries or telcos and do give a picture of the capital investment intensity relative to market performance. A country comparison of the 5-year (2019 to 2023) average Capex-to-Revenue ratio is illustrated in Figure 4 below for the 15 markets considered in this blog.
Figure 4 shows, on a country level, the 5-year average Capex-to-Revenue ratios over the period 2019 to 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).
Comparing Capex per capita and Capex as a percentage of GDP may offer insights into how capital investments are prioritized in relation to population size and economic output. These two metrics could highlight different aspects of investment strategies, providing a more comprehensive understanding of national economic priorities and critical infrastructure development levels. Such a comparison is shown in Figure 5 below.
Capex per capita, shown on the left-hand side of Figure 5, measures the average amount of investment allocated to each person within a country. This metric is particularly useful for understanding the intensity of investment relative to the population, indicating how much infrastructure, technology, or other capital resources are being made available on a per-person basis. A higher Capex per capita suggests significant investment in areas like public services, infrastructure, or economic development, which could improve quality of life or boost productivity. Comparing this measure across countries helps identify disparities in investment levels, revealing which nations are placing greater emphasis on infrastructure development or economic expansion. For example, a country with a high Capex per capita likely prioritizes public goods such as transportation, energy, or digital infrastructure, potentially leading to better economic outcomes and higher living standards over time. The 5-year average Capex level does show a strong positive linear relationship with country population (R² = 0.9318, chart not shown), suggesting that ca. 93% of the variation in Capex can be explained by the variation in population. The trend implies that as the population increases, Capex also tends to increase, likely reflecting higher investment needs to accommodate larger populations. It should be noted that a country's surface area is not a significant factor influencing Capex. While some countries with larger land areas might exhibit a higher Capex level, the overall trend is not strong.
Capex as a percentage of GDP, shown on the right-hand side of Figure 5, measures the proportion of a country's economic output devoted to capital investment. This ratio provides context for understanding investment levels relative to the size of the economy, showing how much emphasis is placed on growth and development. A higher Capex-to-GDP ratio can indicate an aggressive investment strategy, commonly seen in developing economies or countries undergoing significant infrastructure expansion. Conversely, a lower ratio might suggest efficient capital allocation or, in some cases, underinvestment that could constrain future economic growth. This metric helps assess the sustainability of investment levels and reflects economic priorities. For instance, a high Capex-to-GDP ratio in a developed country could indicate a focus on upgrading existing infrastructure, whereas in a developing economy, it may signify efforts to close infrastructure gaps, modernization efforts (e.g., optical fiber replacing copper as part of the fixed broadband transformation), and accelerating growth. The 5-year average Capex level does show a strong positive linear relationship with country GDP (R² = 0.9389, chart not shown), suggesting that ca. 94% of the variation in Capex can be explained by the variation in country GDP. While a few data points show some deviation from this trend, the overall fit is very strong, reinforcing the notion that larger economies generally allocate more resources to capital investments.
The insights gained from both Capex per capita and Capex as a percentage of GDP are complementary, providing a fuller picture of a country’s investment strategy. While Capex per capita reflects individual investment levels, Capex as a percentage of GDP reveals the scale of investment in relation to the overall economy. For example, a country with high Capex per capita but a low Capex-to-GDP ratio (e.g., Denmark, Norway, …) may have a wealthy population where individual investment levels are significant, but the size of the economy is such that these investments constitute a relatively small portion of total economic activity. Conversely, a country with a high Capex-to-GDP ratio but low Capex per capita (e.g., Greece) may be dedicating a substantial portion of its economic resources to infrastructure in an effort to drive growth, even if the per-person investment remains modest.
Figure 5 illustrates two charts that compare the average capital expenditures over the 5-year period from 2019 to 2023. The left chart shows Capex per capita in euros, with Switzerland leading at 230 euros, while Spain has the lowest at 75 euros. The right chart depicts Capex as a percentage of GDP, where Greece tops the list at 0.47%, and Sweden is at the bottom with 0.16%. These metrics provide insights into how different countries allocate investments relative to their population size and economic output, revealing varying levels of investment intensity and economic priorities. It should be noted that Capex levels are strongly correlated with both the size of the population and the size of the economy as measured by GDP. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).
FORWARD TO THE PAST.
Almost 15 years ago, I gave a presentation at the "4G World China" conference in Beijing titled "Economics of 4G Introduction in Growth Markets". The idea was that a mobile operator's capital demand would cycle between 8% (minimum) and 13% (maximum), usually with one replacement cycle before migrating to the next-generation radio access technology. This insight was backed by best-practice capital demand models considering market strategy and growth Capex drivers, and it also involved the insights of many expert discussions.
Figure 6 illustrates my expectations of how Capex would develop before, during, and after LTE deployment in Western Europe. Source: "Economics of 4G Introduction in Growth Markets" at "4G World China", 2011.
For the careful observer, you will see that I expected, back in 2011, the typical Capex maintenance cycle in Western European markets between infrastructure and technology modernization periods to be no more than 8% and that Capex in the maintenance years would be 30% lower than required in the peak periods. I have yet to see a mobile operation with such a low capital intensity unless they effectively share their radio access network and/or by cost-structure “magic” (i.e., cost transformation), move typical mobile Capex items to Opex (by sourcing or optimizing the cost structure between fixed and mobile business units).
I retrospectively underestimated the industry's willingness to continue increasing capital investments in existing networks, often ignoring the obvious optimization possibilities between their fixed and mobile broadband networks (due to organizational politics) and, of course, what was and still is a major contagious industrial affliction: "Metus Crescendi Exponentialis" (i.e., the fear of exponential growth, aka the opportunity to spend increasingly large amounts of Capex). From 2000 to today, the Western European Capex-to-Revenue ratio has been approximately between 11% and 21%, and it has been growing since around 2012 (see details in "The Nature of Telecom Capex—a 2023 Update").
CAPEX DEVELOPMENT FROM 2024 TO 2026.
From Figure 1 above, it should be no surprise that I do not expect Capex to continue to decline substantially over the next couple of years, as it did between 2022 and 2023. In fact, I anticipate that 2024 will be around the level of 2023, after which we will experience modest annual increases of 600 to 700 million euros. Countries with high 5G and Fiber-to-the-Home (FTTH) coverage (e.g., France, Netherlands, Norway, Spain, Portugal, Denmark, and Sweden) will likely keep their Capex levels, possibly with modest declines of single-digit percentage points. Countries such as Germany, the UK, Austria, Belgium, and Greece are still European laggards in terms of FTTH coverage, being far below the 80+% of other Western European countries such as France, Spain, Portugal, the Netherlands, Denmark, Sweden, and Norway. Such countries may be expected to continue to increase their Capex as they close the FTTH coverage gap. Here, it is worth remembering that several fiber acquisition strategies aiming at connecting homes with fiber result in a lower Capex than if a telco aims to build all the required fiber infrastructure itself.
Consolidation Capex.
Telecom companies tend to scale back Capex during consolidation due to uncertainty, the desire to avoid redundancy, and the need to preserve cash. However, after regulatory approval and the deal’s closing, Capex typically rises as the company embarks on network integration, system migration, and infrastructure upgrades necessary to realize the merger’s benefits. This post-merger increase in Capex is crucial for achieving operational synergies, enhancing network performance, and maintaining a competitive edge in the telecom market.
If we look at the period 2021 to 2024, we have had the following consolidation and acquisition examples:
UK: In May 2021, Virgin Media and the O2 (Telefonica) UK merger was approved. They announced the intention to consolidate on May 7th, 2020.
UK: Vodafone UK and Three UK announced their intention to merge in June 2023. The final decision is expected by the end of 2024.
Spain: Orange and MasMovil announced their intent to consolidate in July 2023. Merger approval was given in February 2024, with conditions imposed on the deal requiring MasMovil to divest frequency spectrum.
Italy: The potential merger between Telecom Italia (TIM) and Open Fiber was first discussed in 2020, when the idea emerged to create a national fiber network in Italy by merging TIM's fixed access unit, FiberCop, with Open Fiber. A Memorandum of Understanding was signed in May 2022.
Greece: Wind Hellas acquisition by United Group (Nova) was announced in August 2021 and finalized in January 2022 (with EU approval in December 2021).
Denmark: Norlys’s acquisition of Telia Denmark was first announced on April 25, 2023, and approved by the Danish competition authority in February 2024.
Thus, we should also expect that the bigger in-market consolidations may, in the short term (the next 2+ years), lead to increased Capex spending during the consolidation phase, after which Capex (& Opex) synergies hopefully kick in. Typically, it takes a minimum of two budgetary cycles before this would be expected to be observed. Consolidation Capex usually amounts to a couple of percentage points of total consolidated revenue, with some other bigger items being postponed to the tail end of a consolidation unless they are synergetic with the required integration.
The High-risk Supplier Challenge to Western Europe's Telcos.
When assessing whether Capex will increase or decrease over the next few years (e.g., up to 2030), we cannot ignore the substantial Capex amounts associated with replacing high-risk suppliers (e.g., Huawei, ZTE) in Western European telecom networks. Today, the impact is mainly on mobile critical infrastructure, which is "limited" to core networks and 5G radio access networks (although some EU member states may have extended the reach beyond purely 5G). The impact will grow substantially if (or when?) the current European Commission's 5G Toolbox (legal) framework (i.e., "The EU Toolbox for 5G Security") is extended to all broadband network infrastructure (e.g., optical and IP transport network infrastructure, non-mobile backend networking & IT systems) and possibly beyond, to also address the Optical Network Terminal (ONT) and Customer Premise Equipment (note: ONTs can be integrated into the CPE or, alternatively, separated from the CPE but installed at the customer's premises). To an extent, it is thought-provoking that the EU emphasis has only been on 5G-associated critical infrastructure rather than the vast and ongoing investment in fiber-optical, next-generation fixed broadband networks across all European Union member states (and beyond). This appears particularly puzzling given that the European Union has subsidized these new fiber-optical networks by up to 50%, that fixed-broadband traffic is 8 to 10 times that of mobile traffic, and that all mobile (and wireless) traffic passes through the fixed broadband network and the associated local as well as global internet critical infrastructure.
As far back as 2013, the European Parliament raised some concerns about the degree of involvement (market share) of Chinese companies in the EU’s telecommunications sector. It should be remembered that in 2013, Europe’s sentiment was generally positive and optimistic toward collaboration with China, as evidenced by the European Commission’s report “EU-China 2020 Strategic Agenda for Cooperation” (2013). Historically, the development of the EU’s 5G Toolbox for Security was the result of a series of events from about 2008 (after the financial crisis) to 2019 (and to today), characterized by growing awareness in Europe of China’s strategic ambitions, the expansion of the BRI (Belt and Road Initiative, 2013), DSR (Digital Silk Road, an important part of BRI 2.0, 2015), and China’s National Intelligence Law (2017) requiring Chinese companies to cooperate with the Chinese Government on intelligence matters, as well as several high-profile cybersecurity incidents (e.g., APT, Operation Cloud Hopper, …), and increased scrutiny of Chinese technology providers and their influence on critical communications infrastructure across pretty much the whole of Europe. These factors collectively drove the EU to adopt a more cautious and coordinated approach to addressing security risks in the context of 5G and beyond.
Figure 7 illustrates Western society's, including Western Europe's, concern about the Chinese technology presence in its digital infrastructure. A substantial "hidden" capital expense (security debt) is tied to Western telcos' telecom infrastructures, mobile and fixed.
The European Commission’s 2023 second report on the implementation of the EU 5G cybersecurity toolbox offers an in-depth examination of the risks posed by high-risk suppliers, focusing on Chinese-origin infrastructure, such as equipment from Huawei and ZTE. The report outlines the various stages of implementation across EU Member States and provides recommendations on how to mitigate risks associated with Chinese infrastructure. It considers 5G and fixed broadband networks, including Customer Premise Equipment (CPE) devices like modems and routers placed at customer sites.
The EU Commission defines a high-risk supplier in the context of 5G cybersecurity based on several objective criteria to reduce security threats in telecom networks. A supplier may be classified as high-risk if it originates from a non-EU country with strong governmental ties or interference, particularly if its legal and political systems lack democratic safeguards, security protections, or data protection agreements with the EU. Suppliers susceptible to governmental control in such countries pose a higher risk.
A supplier’s ability to maintain a reliable and uninterrupted supply chain is also critical. A supplier may be considered high-risk if it is deemed vulnerable in delivering essential telecom components or ensuring consistent service. Corporate governance is another important aspect. Suppliers with opaque ownership structures or unclear separation from state influence are more likely to be classified as high-risk due to the increased potential for external control or lack of transparency.
A supplier’s cybersecurity practices also play a significant role. If the quality of the supplier’s products and its ability to implement security measures across operations are considered inadequate, this may raise concerns. In some cases, country-specific factors, such as intelligence assessments from national security agencies or evidence of offensive cyber capabilities, might heighten the risk associated with a particular supplier.
Furthermore, suppliers linked to criminal activities or intelligence-gathering operations undermining the EU’s security interests may also be considered high-risk.
To summarize what may make a telecom supplier a high-risk supplier:
Of non-EU origin.
Strong governmental ties.
The country of origin lacks democratic safeguards.
The country of origin lacks security protection or data protection agreements with the EU.
Associated supply chain risks of interruption.
Opaque ownership structure.
Unclear separation from state influence.
Ability to independently implement security measures shielding infrastructure from interference (e.g., sabotage, espionage, …).
These criteria are applied to ensure that telecom operators, and eventually any business with critical infrastructure, become independent of a single supplier, especially those that pose a higher risk to the security and stability of critical infrastructure.
Figure 8 above summarizes the current European legislative framework addressing high-risk suppliers in critical infrastructure, with an initial focus on 5G infrastructure and networks.
Regarding 5G infrastructure, the EU report reiterates the urgency for EU Member States to immediately implement restrictions on high-risk suppliers. The EU policy highlights the risks of state interference and cybersecurity vulnerabilities posed by the close ties between Chinese companies like Huawei and ZTE and the Chinese government. Following groundwork dating back to the 2008 EU Directive on Critical Infrastructure Protection (EPCIP), the EU's Digital Single Market Strategy (2015), the (first) Network and Information Security (NIS) Directive (2016), and early European concern about 5G's societal impact and cybersecurity exposure (2015 – 2017), the EU toolbox published in January 2020 is designed to address these risks by urging Member States to adopt a coordinated approach. As of 2023, a second EU report had been published on the member states' progress in implementing the EU Toolbox for 5G Cybersecurity. While many Member States have established legal frameworks that give national authorities the power to assess supplier risks, only 10 have fully imposed restrictions on high-risk suppliers in their 5G networks. The report criticizes the slow pace of action in some countries, which increases the EU's collective exposure to security threats.
Germany, which in absolute numbers has one of the largest Chinese RAN deployments in Western Europe, has been singled out for its apparent reluctance to address the high-risk supplier challenge over the last couple of years (see also notes in "Further Readings" at the back of this blog). Germany introduced its regulation on Chinese high-risk suppliers in July 2024 through a combination of its Telekommunikationsgesetz (TKG, the German Telecommunications Act) and IT-Sicherheitsgesetz 2.0 (IT Security Act 2.0). The German government announced that starting in 2026, it will ban critical components from Huawei and ZTE in its 5G networks due to national security concerns. This decision aligns Germany with other European countries working to limit reliance on high-risk suppliers. Germany has been slower in implementing such measures than others in the EU, but the regulation marks a significant step towards strengthening its telecom infrastructure security. Light Reading has estimated that a German Huawei ban would cost €2.5B and take years for German telcos to implement. This estimate seems very optimistic and would certainly require very substantial discounts from the supplier chosen as a replacement; for Telekom Deutschland, for example, a swap would cover ca. 50+% of its ca. 38+ thousand sites, and it is difficult for me to believe that that kind of economy would apply to all telcos in Western Europe with high-risk suppliers. I also believe the estimate ignores decommissioning costs and changes to the backend O&M systems. I expect telco operators will try to push the replacement timeline until most of their high-risk supplier infrastructure is written off and ripe for modernization, which for Germany would most likely happen after 2026. One way or another, we should expect an increase in mobile Capex spending towards the end of the decade as the German operators swap out their Chinese RAN suppliers (which may be only a small part of their capital spend if the ban is extended beyond 5G).
The European Commission recommends that restrictions cover critical and highly sensitive assets, such as the Radio Access Network (RAN) and core network functions, and urges member states to define transition periods to phase out existing equipment from high-risk suppliers. The transition periods, however, must be short enough to avoid prolonging dependency on these suppliers. Notably, the report calls for an immediate halt to installing new equipment from high-risk vendors, ensuring that ongoing deployment does not undermine EU security.
When it comes to fixed broadband services, the report extends its concerns beyond 5G. It stresses that many Member States are also taking steps to ensure that the fixed network infrastructure is not reliant on high-risk suppliers. Fourteen (14) member states have either implemented or plan to restrict Chinese-origin equipment in their fixed networks. Furthermore, nine (9) countries have adopted technology-neutral legislation, meaning the restrictions apply across all types of networks, not just 5G. This implies that Chinese-origin infrastructure, including transport network components, will eventually face the same scrutiny and restrictions as 5G networks. While the report does not explicitly call for a total ban on all Chinese-origin equipment, it stresses the need for detailed assessments of supplier risks and restrictions where necessary based on these assessments.
While the EU’s “5G Security Toolbox” focuses on 5G networks, Denmark’s approach, the “Danish Investment Screening Act,” which took effect on the 1st of July 2021, goes much further by addressing the security of fixed broadband, 4G, and transport networks. This broad regulatory focus helps Denmark ensure the security of its entire communications ecosystem, recognizing that vulnerabilities in older or supporting networks could still pose serious risks. A clear example of Denmark’s comprehensive approach to telecommunications security beyond 5G is when the Danish Center for Cybersikkerhed (CFCS) required TDC Net to remove Chinese DWDM equipment from its optical transport network. TDC Net claimed that the consequence of the CFCS requirement would result in substantial costs to TDC Net that they had not considered in their budgets. CFCS has regulatory and legal authority within Denmark, particularly in relation to national cybersecurity. CFCS is part of the Danish Defense Intelligence Service, which places it under the Ministry of Defense. Denmark’s regulatory framework is not only one of the sharpest implementations of the EU’s 5G Toolkit but also one of the most extensive in protecting its national telecom infrastructure across multiple layers and generations of technology. The Danish approach could be a strong candidate to serve as a blueprint for expanded EU regulation beyond 5G high-risk suppliers and thus become applicable to fixed broadband and transport networks, resulting in substantial additional Capex towards the end of the decade.
While not singled out as a unique risk category, customer premises equipment (CPE) from high-risk suppliers is mentioned in the context of broader network security measures. Some Member States have indicated plans to ensure that CPE is subject to strict procurement standards, potentially using EU-wide certification schemes to vet the security of such devices. CPE may be included in future security measures if it presents a significant risk to the network. Many CPEs have been integrated with the optical network terminal, or ONT, which is architecturally a part of the fixed broadband infrastructure, serving as a demarcation point between the fiber optic network and the customer's internal network. Thus, ONTs are highly likely to be considered and included in any high-risk supplier limitations that may come soon. Any CPE replacement program would, on its own, likely involve considerable Capex and cost for operators and their customers in general. The CPE quantum for the European Union (including the UK, cheeky, I know) is between 200 and 250 million CPEs, including various types of CPE devices, such as routers, modems, ONTs, and other network equipment deployed for residential and commercial users. It is estimated that 30% to 40% of these CPEs may be linked to high-risk Chinese suppliers. The financial impact of a systematic CPE replacement program in the EU (including the UK) could be between 5 and 8 billion euros in capital expenses, ignoring the huge operational costs of executing such a replacement program.
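The back-of-the-envelope arithmetic behind that range can be made explicit. The per-unit replacement cost below is an implied assumption I chose to land within the stated 5 to 8 billion euro range, not a sourced figure:

```python
# Back-of-the-envelope CPE replacement Capex estimate.
# Installed base and high-risk share are from the text; the per-unit
# cost is a hypothetical, implied assumption.

cpe_installed = (200e6, 250e6)        # EU incl. UK
high_risk_share = (0.30, 0.40)
cost_per_unit_eur = 80                # implied, hypothetical

low = cpe_installed[0] * high_risk_share[0] * cost_per_unit_eur
high = cpe_installed[1] * high_risk_share[1] * cost_per_unit_eur
print(f"Replacement Capex: {low/1e9:.1f} to {high/1e9:.1f} bn EUR")
```

This roughly reproduces the stated range and makes clear how sensitive the estimate is to the assumed per-unit cost, which would also need to cover logistics and installation.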
The Data Growth Slowdown – An Opportunity for Lower Capex?
How do we identify whether a growth dynamic, such as data growth, is exponential or self-limiting?
Exponential growth dynamics have the same (percentage) growth rate indefinitely. Self-limiting growth dynamics, or s-curve behavior, will have a declining growth rate. Natural systems are generally self-limiting, although they might exhibit exponential growth over a short term, typically in the initial growth phase. So, if you are in doubt (which you should not be), calculate the growth rate of your growth dynamics from the beginning until now. If that growth rate is constant (over several time intervals), your dynamics are exponential in nature (at least over the period you looked at); if not … well, your growth process is most likely self-limiting.
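Here is a minimal sketch of that test, using two short hypothetical series:

```python
# Compute year-over-year growth rates: roughly constant rates suggest
# exponential growth; steadily declining rates suggest self-limiting
# (S-curve) growth. Both series are hypothetical.

def yoy_growth(series):
    return [cur / prev - 1 for prev, cur in zip(series, series[1:])]

exponential = [1, 1.3, 1.69, 2.20, 2.86]     # constant ~30% growth
self_limiting = [1, 1.5, 2.0, 2.4, 2.6]      # decelerating growth

for name, s in [("exponential", exponential),
                ("self-limiting", self_limiting)]:
    rates = ", ".join(f"{g:.0%}" for g in yoy_growth(s))
    print(f"{name}: {rates}")
```

The first series returns a flat ~30% growth rate every year (exponential); the second shows growth rates falling from 50% toward single digits (self-limiting).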
Telco Capex increases, and telco Capex decreases. Capex is cyclic in nature, although increasing over time. Most European markets will have access to 550 to 650 MHz of downlink spectrum below 4 GHz, depending on SDL deployment levels. Assuming 4 (1) Mbps per DL (UL) MHz per sector effective spectral efficiency, 10 traffic hours per day, and ca. 350 to 400 thousand mobile sites (3 sectors each) across Western Europe, the carrying mobile capacity in bytes is on the order of 140 exabytes (EB) per month (note: had I chosen 2 and 0.5 Mbps per MHz per sector, the carrying capacity would be ca. 70 EB/month). It is clear that this carrying-capacity limit will continue to increase with software releases, innovation, advanced antenna deployment with higher-order MIMO, and migration from older radio access technologies to the newest (increasing the effective spectral efficiency).
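As a back-of-the-envelope sketch of that carrying-capacity estimate, the following uses the midpoints of the assumptions stated above. Note that a straight multiplication of those midpoints gives a considerably higher theoretical ceiling, so I add an illustrative effective-loading factor (my own assumption, not stated in the text) to land near the ~140 EB/month figure:

```python
# Back-of-the-envelope Western European mobile carrying capacity.
# Spectrum, spectral efficiency, site count, and traffic hours follow
# the stated assumptions (midpoints); the utilization factor is my own
# added assumption, since the theoretical ceiling is only partly usable.

dl_spectrum_mhz = 600          # 550-650 MHz DL spectrum, midpoint
spectral_eff = 4.0             # Mbps per DL MHz per sector (effective)
sectors_per_site = 3
sites = 375_000                # 350k-400k mobile sites, midpoint
traffic_hours_per_day = 10
days_per_month = 30
utilization = 0.4              # illustrative effective loading (assumed)

site_throughput_gbps = dl_spectrum_mhz * spectral_eff * sectors_per_site / 1e3
seconds = traffic_hours_per_day * 3600 * days_per_month
# Gbit -> EB: divide by 8 bits per byte and 1e9 GB per EB
eb_per_month = site_throughput_gbps * sites * seconds * utilization / 8 / 1e9

print(f"Per-site busy-hour throughput: {site_throughput_gbps:.1f} Gbps")
print(f"Carrying capacity: ~{eb_per_month:.0f} EB/month")
```

With these inputs, the sketch lands at roughly 146 EB/month, in the ballpark of the ~140 EB figure above.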
According to the Ericsson Mobility Visualizer, Western Europe saw a mobile data demand of 11 EB per month in 2023 (see figure below). The demand for mobile data in 2023 was thus roughly a tenth of the (conservatively) estimated carrying capacity of the underlying mobile networks.
Figure 9 illustrates the actual demanded data volume in EB per month. I have often observed that when planners estimate their budgetary demand for capacity expansions, they take the current YoY growth rate and apply it to the future (assuming their growth dynamics are geometrical). I call this the "Naive Expectations" assumption (fallacy), which obviously leads to overprovisioning of network capacity and less efficient use of Capex, as opposed to the "Informed Expectations" approach based on more realistic S-curve growth dynamics. I have rarely seen the "Naive Expectations" fallacy challenged by CFOs or the non-technical leadership responsible for telco budgets and economic health. Although not a transparent approach, it is a "great" way to add a "bit" of Capex cushion for other Capex uncertainties.
It should be noted that the Ericsson data treats traffic generated by fixed wireless access (FWA) separately (which, by the way, makes sense). Thus, the 11 EB for 2023 does not include FWA traffic. Ericsson only has a global forecast for FWA traffic starting from 2023 (note: it is not clear whether 2023 is actual or estimated FWA traffic). To get an impression of the long-term impact of FWA traffic, we can apply the same S-curve approach as the one used for mobile data traffic above, in line with what I call the "Informed Expectations" approach. Even with the FWA traffic, it is difficult to see a situation that, on average (at least), would pose any challenge to existing mobile networks. In particular, the carrying capacity can easily be increased by deploying more advanced antennas (e.g., higher-order MIMO), and, in general, it is expected to improve with each forthcoming software release.
Figure 10 above uses Ericsson's Mobility Visualizer data for Western Europe's mobile and fixed wireless access (FWA) traffic. It gives us an idea of the total traffic expectations if the current usage dynamics continue. Ericsson only provides a global FWA forecast from 2023 to 2029; I have assumed WEU takes its proportional mobile share of the FWA traffic. Note: For the period up to and including 2023, it seems a bit rich in its FWA expectations, imo.
So, by all means, the latest and greatest mobile networks are, without much doubt, in most places, over-dimensioned from the perspective of their carrying bytes potential, the volumetric capacity, and what is demanded in terms of data volume. They also appear to remain so for a very long time unless the current demand dynamics fundamentally change (which is, of course, always a possibility, as we have seen historically).
However, whether customers get their volumetric demand satisfied is ultimately a reflection of quality in terms of bits per second (a much more fundamental unit than volume). Thus, the throughput, or speed, should be good enough for customers to enjoy their consumption unhindered, which, as a consequence, generates the bytes that most telco executives have told themselves they understand and like to base their pricing on (and which, judging by my experience outside Europe, more often than not they maybe really don't get). It is not uncommon that operators with complex volumetric pricing become more obsessed with data volume than with optimum quality (which might, in fact, generate even more volume). The figure below is a snapshot from August 2024 of the median speeds customers enjoy in mobile as well as fixed broadband networks in Western Europe. In most cases in Europe, customers today enjoy substantially faster fixed-broadband services than they would get in mobile networks. One should expect this to change how telcos (at least integrated telcos) design and plan their mobile networks and, consequently, maybe dramatically reduce the amount of mobile Capex we spend. There is little evidence that this is happening yet. However, I do anticipate, most likely naively, that the telco industry will revise how mobile networks are architected, designed, and built with 6G.
Figure 11 shows that apart from one Western European country (Greece, also a fixed broadband laggard), all other markets have superior fixed broadband downlink speeds compared to what mobile networks can deliver. Note that the speed measurement data is based on the median statistic. Source: Speedtest Global Index, August 2024.
A Crisis of Too Much of a “Good” Thing?
Analysys Mason recently (July 2024) published a report titled “A Crisis of Overproduction in Bandwidth Means that Telecoms Capex Will Inevitably Fall.” The report explores the evolving dynamics of capital expenditure (Capex) in the telecom industry, highlighting that the industry is facing a turning point. The report argues that the telecom sector has reached a phase of bandwidth overproduction, where the infrastructure built to deliver data has far exceeded demand, leading to a natural decline in Capex over the coming years.
According to the Analysys Mason report, global Capex in the telecom sector has already peaked, with two significant investment surges behind it: the rollout of 5G networks in mobile infrastructure and substantial investments in fiber-to-the-premises (FTTP) networks. Both of these infrastructure developments were seen as essential for future-proofing networks, but now that the peaks in these investments have passed, Capex is expected to fall. The report predicts that by 2030, the Capex intensity (the proportion of revenue spent on capital investments) will drop from around 20% to 12%. This reduction is due to the shift from building new infrastructure to optimizing and maintaining existing networks.
The main messages that I take away from the Analysys Mason report are the following:
Overproduction of bandwidth: Telecom operators have invested heavily in building their networks. However, demand for data and bandwidth is no longer growing at the exponential rates seen in previous years.
Shifting Capex Trends: The telecom industry is experiencing two peaks: one in mobile spending due to the initial 5G coverage rollout and another in fixed broadband due to fiber deployments. Now that these peaks have passed, Capex is expected to decline.
Impact of lower data growth: The stagnation in mobile and fixed data demand, combined with the overproduction of mobile and fixed bandwidth, makes further large-scale investment in network expansion unnecessary.
My take on Analysys Mason’s conclusions is that with the cyclic nature of Telco investments, it is natural to expect that Capex will go up and down. That Capex will cycle between 20% (peak deployment phase) and 12% (maintenance phase) seems very agreeable. However, I would expect that the maintenance level would continue to increase as time goes by unless we fundamentally change how we approach mobile investments.
Given that network capacity is built up at the beginning of a new technology cycle (e.g., 5G NR, or GPON, XG-PON, and XGS-PON-based FTTH), it is not surprising that the amount of available capacity appears substantial. I would not call it a bandwidth overproduction crisis (although I agree that the overhead of provisioned carrying capacity compared to demand expectations seems historically high); it is a manifestation of the technologies we have developed and deployed today. Under 5G NR real-world conditions, users could see peak DL speeds ranging from 200 Mbps to 1 Gbps, with median 5G DL speeds of 100+ Mbps. The lower end of this range applies in areas with fewer available resources (e.g., less spectrum, fewer MIMO streams), while the higher end reflects better conditions, such as when a user is close to the cell tower with optimal signal conditions. The quality of fiber-connected households with current GPON and XG-PON technology would be sustainable at 1 to 10 Gbps downstream to the in-home ONT/CPE. However, the in-home quality experienced over WiFi depends a lot on how the WiFi network has been deployed and how many concurrent users there are at any given time. As backhaul and backbone transmission solutions for mobile and fixed access will be modern and fiber-based, there is no reason to believe that user demand should be limited in any way (anytime soon), given that a well-optimized, modern fiber-optic network should be able to reach up to 100 Tbps (e.g., 10 EB per month with 10 traffic hours per day).
Germany, the UK, Belgium, and a few smaller Western countries will continue their fiber deployment for some years to bring their fiber coverage up to the level of countries such as France, Spain, Portugal, and the Netherlands. It is difficult to believe that these countries would not continue to invest substantial money to raise their fiber coverage from the current low levels. Countries with less than 60% fiber-to-the-home coverage account for 50+% of the overall Western European Capex level.
The fact that the telco industry would eventually experience lower growth rates should not surprise anyone. That has been in the cards since growth began. The figure below takes actual mobile data from Ericsson's Mobility Visualizer and applies simple S-curve growth dynamics to those data, which actually do a very good job of accounting for the observed behavior. A geometrical (exponential) growth model, while possibly accounting for the early stages of technology adoption and the resulting data growth, is not a reasonable model to apply here and is not supported by the actual data.
Figure 12 provides the actual exabytes (EB) per month with a fitted S-curve extrapolated beyond 2023. The S-curve is described by the Data Demand Limit (Ls), the Growth Rate (k), and the Inflection Year (T0), where growth transitions from acceleration to deceleration. Source: Ericsson Mobility Visualizer resource.
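For the curious, here is a minimal sketch of such a logistic fit, using the parameterization from the figure caption, L(t) = Ls / (1 + exp(−k·(t − T0))); the data points are hypothetical stand-ins for the Ericsson series, not the actual values:

```python
# Fit a logistic (S-curve) to a monthly traffic series and recover the
# demand limit Ls, growth rate k, and inflection year T0.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, Ls, k, T0):
    return Ls / (1 + np.exp(-k * (t - T0)))

# Hypothetical stand-in data (EB/month), shaped like the actual series
years = np.array([2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023])
eb_per_month = np.array([1.2, 1.9, 2.9, 4.2, 5.8, 7.6, 9.4, 11.0])

(Ls, k, T0), _ = curve_fit(logistic, years, eb_per_month,
                           p0=[30, 0.4, 2025])
print(f"Demand limit Ls ~{Ls:.0f} EB/month, inflection year ~{T0:.0f}")
```

Once fitted, extrapolating the same function beyond the last data point yields the dashed forecast curve, and T0 directly gives the year in which growth flips from acceleration to deceleration.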
The growth dynamics, applied to the market data shown in the figure above, indicate that in Western Europe and the CEE (Central and Eastern Europe), the inflection point should be expected around 2025. This is the year when growth rates begin to decline. In Western Europe (and CEE), we would expect the growth rate to fall below 10% by 2030, assuming no fundamental changes to the growth dynamics occur. The inflection point for the North American markets (i.e., the USA and Canada) is around 2033, and it is expected to happen a bit earlier (2030) for Asia. Based on the current growth dynamics, North America will experience growth rates below 10% by 2036; for Asia, this is expected around 2033. How could FWA traffic growth change these results? The overall behavior would not change. The inflection point may happen later, and thus the onset of slower growth rates, and the time when we would expect a growth rate below 10%, would shift out by a couple of years after the inflection year.
Let us, just for fun (usually the best reason), construct a counterfactual situation. Let us assume that data growth continues to follow geometric (exponential) growth indefinitely without reaching a saturation point or encountering any constraints (e.g., resource limits, user behavior limitations). The premise is that user demand for mobile and fixed-line data will continue to grow at a constant, compounding rate. For mobile data growth, we take the 27% YoY growth of 2023 and apply this growth rate in our geometric growth model. Thus, demand would double roughly every 3 years.
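The doubling time follows directly from the compounding arithmetic:

```python
# Doubling time under the counterfactual 27% YoY geometric growth.
import math

cagr = 0.27
doubling_years = math.log(2) / math.log(1 + cagr)
print(f"Demand doubles every ~{doubling_years:.1f} years")  # ~2.9 years
```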
If telecom data usage continued to grow geometrically, the implications would (obviously) be profound:
Exponential network demand: Operators would face exponentially increasing demand on their networks, requiring constant and massive investments in capacity to handle growing traffic. Once we reach the limits of the carrying capacity of the network, we have three years (with a CAGR of 27%) until demand has doubled. Obviously, any spectrum position would quickly become insufficient, and massive investments in new infrastructure (more mobile sites and more fiber) would be needed. Capacity would become the growth-limiting factor.
Costs: The capital expenditures (Capex) required to keep pace with geometric growth would skyrocket. Operators would have to continually upgrade or replace network equipment, expand physical infrastructure, and acquire additional spectrum to support the growing data loads. This would lead to unsustainable business models unless prices for services rose dramatically, making such a growth scenario unaffordable for consumers, and long before that for the operators themselves.
Environmental and Physical Limits: The physical infrastructure necessary to support geometric growth (cell towers, fiber optic cables, data centers) would also have environmental consequences, such as increased energy consumption and carbon emissions. Additionally, telecom providers would face the law of diminishing returns as building out and maintaining these networks becomes less economically feasible over time.
Consumer Experience: The geometric growth model assumes that user behavior will continue to change dramatically. Consumers would need to find new ways to utilize vast amounts of bandwidth beyond streaming and current data-heavy applications. Continuous innovation in data-hungry applications would be necessary to keep up with the increased data usage.
The counterfactual argument shows that geometric growth, while useful for the early stages of data expansion, becomes unrealistic as it leads to unsustainable economic, physical, and environmental demands. The observed S-curve growth is more appropriate for describing mobile data demand because it accounts for saturation, the limits of user behavior, and the constraints of telecom infrastructure investment.
Back to Analysys Mason’s expected, and quite reasonable, consequence of the (progressively) lower data growth: large-scale investment would become unnecessary.
While the assertion is reasonable, as said, mobile obsolescence hits the industry every 5 to 7 years, regardless of whether there is a new radio access technology (RAT) to take over. I don’t think this will change, though the industry may come to spend much more annually on software and less on hardware modernization during obsolescence transformations. Since I suspect that the software will impose increasingly demanding requirements on the underlying hardware (whether on-prem or in the cloud), modernization investments in the hardware part would continue to be substantial. This is not even considering the euphoria that may surround the next-generation RAT (e.g., 6G).
The economic and useful life of fixed broadband fiber infrastructure is much longer than that of mobile infrastructure. The same applies to the optical transmission equipment used for access, aggregation, and backbone (although its life is not as long as that of the optical fiber itself). Additionally, fiber-based fixed broadband networks are operationally (much) more efficient than their mobile counterparts, which hints at the need to re-architect and redesign how mobile networks are built, as they are no longer needed inside customer dwellings (where fiber and WiFi can serve the demand). Overall, it is not unreasonable to expect that fixed broadband modernization investments will occur less frequently than for mobile networks.
Is Enough Customer Bandwidth a Thing?
Is there an optimum level of bandwidth, in bits per second, at which a customer is fully served, beyond which it does not matter whether the network could provide far more speed or quality?
For example, for most mobile devices, phones, and tablets, much more than 10 Mbps for streaming would not make much of a viewing difference for the typical customer. Given the assumptions about eyesight and typical viewing distances, more than 90% of people would not notice an improvement in viewing experience on a mobile phone or tablet beyond 1080p resolution. Increasing the resolution beyond that point, such as to 1440p (Quad HD) or 4K, would likely not provide a noticeably better experience for most users, as their visual acuity limits their ability to discern finer details on small screens. This means the focus for improving mobile and tablet displays shifts from resolution to other factors like color accuracy, brightness, and contrast rather than chasing higher pixel counts. This optimization strategy should not necessarily result in higher bandwidth requirements, although moving to higher color depth or more brightness / dynamic range (e.g., HDR vs SDR) would lead to a moderate increase in the required data rates.
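The acuity argument can be checked with a little trigonometry. The sketch below uses my own illustrative assumptions: a 6.1-inch handset with 1080 pixels across a ~64 mm panel, a 30 cm viewing distance, and the common rule of thumb that ~20/20 vision resolves about 1 arcminute, i.e., ~60 pixels per degree is “enough”:

```python
import math

# Does 1080p already exceed typical visual acuity on a phone? (illustrative numbers)
pixel_pitch_mm = 64 / 1080            # ~0.059 mm per pixel across the panel width
viewing_distance_mm = 300             # ~30 cm typical handheld viewing distance

# Angle subtended by one pixel, then pixels per degree of visual field.
pixel_angle_deg = math.degrees(2 * math.atan(pixel_pitch_mm / (2 * viewing_distance_mm)))
ppd = 1 / pixel_angle_deg
print(f"{ppd:.0f} pixels/degree vs ~60 needed")  # ~88 -> beyond most users' acuity
```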
A throughput between 50 and 100 Mbps for fixed broadband TV streaming currently provides an optimum viewing experience. Of course, a fixed broadband household may have many concurrent bandwidth demands that would justify a 1 Gbps fiber to the home or maybe even 10 Gbps downstream to serve the whole household at an optimum experience at any time.
Figure 13 provides the data rate ranges for a streaming format, device type, and typical screen size. The data rate required for streaming video content is determined by various factors, including video resolution, frame rate, compression, and screen size. The data rate calculation (in Mbps) for different streaming formats follows a process that involves estimating the amount of data required to encode each frame and multiplying by the frame rate and compression efficiency. The methodology can be found in many places. See also my blog “5G Economics – An Introduction (Chapter 1)” from Dec. 2016.
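A minimal sketch of that calculation follows; the ~0.5% codec compression factor is a rough placeholder of mine, as real encoders vary with codec and content:

```python
def stream_mbps(width, height, fps=30, bits_per_px=24, compression=0.005):
    """Rough streaming bitrate: raw frame bits x frame rate x codec compression factor.
    A compression factor of ~0.5% of raw is a ballpark for modern codecs; tune per codec."""
    raw_bps = width * height * bits_per_px * fps
    return raw_bps * compression / 1e6

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(name, f"{stream_mbps(w, h):.1f} Mbps")
# -> roughly 3 Mbps (720p), 7.5 Mbps (1080p), 30 Mbps (4K), in line with Figure 13's ranges
```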
Let’s move into high-end and fully immersive virtual reality experiences. The user bandwidth requirement may exceed 100 Mbps and could even require a sustainable Gbps delivered to the user device to provide an optimum experience. However, jitter and latency may prevent such full-immersion, high-end VR experiences from being fully optimal over mobile or fixed networks with long distances to the supporting (edge) data centers and cloud servers where the related application resides. In my opinion, this kind of ultra-high-end specialized service might be better run exclusively on location.
Size Matters.
I once had a CFO who was adamant that an organization’s size on its own would drive a certain amount of Capex. I would, at times, argue that an organization’s size should depend on the number of activities required to support customers (or, more generally, the number of revenue-generating units (RGUs) a company has or expects to have) and the revenue those generate. In my logic at the time, the larger a country in terms of surface area, population, and households, the more Capex-related activities would be required, thus also resulting in the need for a bigger organization. If you have more RGUs, it might also not be too surprising that the organization would be bigger.
Since then, I have scratched my head many times when looking at country characteristics, RGUs, and revenues, asking how they can justify the size of a given Telco organization, knowing that there are other Telcos out there that spend the same or more Capex with a substantially smaller organization (also after considering differences in sourcing strategies). I have never been with an organization that did not feel pressured work-wise and believe it was too lightly staffed, irrespective of its size and of the Capex and activities under management.
Figure 14 illustrates the correlation between the Capex and the number of FTEs in a Telco organization. It should be noted that the upper right point results in a very good correlation of 0.75. Without this point, the correlation would be around 0.25. Note that sourcing does have a minor effect on the correlation.
The above figure illustrates a strong correlation between Capex and the number of people in a Telco organization. However, the correlation would be weaker without the upper right data point. In the data shown here, you will find no correlation between FTEs and a country’s size, such as population or surface area, which is also the case for Capex. There is a weak correlation between FTEs and RGU and a stronger correlation with Revenues. Capex, in general, is very strongly correlated with Revenues. The best multi-linear regression model, chosen by p-value, is a model where Capex relates to FTEs and RGUs. For a Telco with 1000 employees and 1 million RGUs, approximately 50% of the Capex could be explained by the number of FTEs. Of course, in the analysis above, we must remember that correlation does not imply causation. You will have telcos that, in most Capex driver aspects, should be reasonably similar in their investment profiles over time, except the telco with the largest organization will consistently invest more in Capex. While I think this is, in particular, an incumbent vs challenger issue, it is a much broader issue in our industry.
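To make the model form concrete, here is a minimal sketch of such a multi-linear fit on synthetic data. The coefficients are my own placeholders, constructed so the FTE share comes out near the ~50% mentioned above; this is not the actual telco sample:

```python
import numpy as np

# Sketch of the multi-linear model form: Capex ~ FTEs + RGUs (synthetic data).
rng = np.random.default_rng(7)
n = 30
ftes = rng.uniform(500, 5_000, n)                          # employees
rgus = rng.uniform(0.5e6, 10e6, n)                         # revenue-generating units
capex = 1e5 * ftes + 100 * rgus + rng.normal(0, 5e7, n)    # EUR per year, with noise

X = np.column_stack([np.ones(n), ftes, rgus])
beta, *_ = np.linalg.lstsq(X, capex, rcond=None)           # [intercept, EUR/FTE, EUR/RGU]

# Share of predicted Capex attributable to FTEs for a telco with 1,000 FTEs and 1M RGUs:
fte_term, rgu_term = beta[1] * 1_000, beta[2] * 1e6
print(f"FTE share = {fte_term / (fte_term + rgu_term):.0%}")  # ~50% by construction here
```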
Having spent most of my 20+ year career in Telecom involved in Capex planning and budgeting, it is clear to me that the size of an organization plays a role in the size of a Capex budget. Intuitively, this should not be too surprising. If the Capex is lower than the capacity of your organization, you may have to lay off people, with the risk of being short of resources later as you cycle through modernization or a new technology introduction. On the other hand, if the Capex needs are substantially larger than what the organization can cope with, including any sourcing agreements in place, it does not make much sense to ask for more than can be managed with the resources available (apart from it being sub-optimal for cash-flow optimization).
Telco companies that have both fixed and mobile broadband infrastructure in their portfolio, but with poorly optimized organizations and strict demarcation lines between people working on fixed broadband and on mobile broadband, will in general have much worse Capex efficiencies than fully fixed-mobile converged organizations (not to mention suffering from poorer operational efficiencies and work practices compared to integrated organizations). Here, the size of, for example, a mobile organization will drive behavior that would rather spend above and beyond Capex on its Radio Access Network infrastructure than use clever and proven solutions (e.g., Opanga’s RAIN) to optimize quality and capacity needs across the mobile network.
In general, the resistance to smarter solutions and clever ideas that may save Capex (and/or Opex) manifests itself in manifold behaviors that I have observed over my 25+ year career (and some I might even have adopted on occasion … but shhhh;-).
Budget heuristics:
Size doesn’t matter paradigm: Irrespective of size, my organization will always be busy and understaffed.
The Goldilocks Fallacy: My organization’s size and structure will determine its optimum Capex spending profile, allowing it to stay busy (and understaffed).
Tangible Bias: A hardware (infrastructure-based) solution is better and more visible than a software solution. I feel more comfortable with my organization being busy with hardware.
The Sunk Cost Fallacy: I don’t trust (allegedly) clever software solutions that may lower or postpone my Capex needs and, by that, reduce the need for people in my organization.
Budget Maximization Tendency: My organization’s importance, and my self-importance, are measured by how much Capex I have in my budget. I will resist giving part of my budget away to others.
Status Quo Bias: I will resist innovation that may reduce my Capex budget, even if it may also help reduce my Opex.
Job Protectionism: I resist innovation that may result in a more effective organization, i.e., fewer FTEs.
Capacity Comfort Syndrome: The more physical capacity I build into my network, the more we can relax. Our goal is a “Zero Worry Network.”
The Fear Factor: Leadership is “easy to scare” into approving more capacity Capex when presented with the “if-not” consequences (e.g., losing best-network awards, poorer customer experience, …).
The Budget Inertia: Return on Investment (ROI) prioritization is rarely considered (rigorously), particularly after a budget has been released.
A warning: although each of the above is observable in real life, the reader should be aware that there is also a fair amount of deliberate ironic provocation in these heuristics.
We should never underestimate that, within companies, two things make you important (including self-important and self-worthy): (1) the size of your organization, and (2) the amount of money, your budget size, that your organization has to be busy with.
Any innovation that may lower an organization’s size and budget will be met with resistance from that organization.
The Balancing Act of Capex to Opex Transformations.
Telco cost structures and Capex have evolved significantly due to accounting changes, valuation strategies, technological advancements, and economic pressures. While shifts like IFRS (International Financial Reporting Standards), issued by the International Accounting Standards Board (IASB), have altered how costs are reported and managed, changes in business strategies, such as cell site spin-offs, cloud migrations, and the transition to software-defined networks, have reshaped Capex allocations somewhat. At the same time, economic crises and competitive pressures have influenced Telcos to continually reassess their capital investments, balancing the need to optimize value, innovation, and growth with financial diligence.
One of the most significant drivers of change has been the shift in accounting standards, particularly with the introduction of IFRS16, which replaced the older GAAP-based approaches. Under IFRS16, nearly all leases are now recognized on the balance sheet as right-of-use assets and corresponding liabilities. This change has particularly impacted Telcos, which often engage in long-term leases for cell sites, network infrastructure, and equipment. Previously, under GAAP (Generally Accepted Accounting Principles), many leases were treated as operating leases, keeping them off the balance sheet, and their associated costs were considered operational expenditures (Opex). Now, under IFRS16, these leases are capitalized, leading to an increase in reported Capex as assets and liabilities grow to reflect the leased infrastructure. This shift has redefined how Telcos manage and report their Capex, as what was previously categorized as leasing costs now appears as capital investments, altering key financial metrics like EBITDA and debt ratios that would appear stronger post-IFRS16.
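A stylized before/after illustration of this IFRS16 effect, with hypothetical numbers of my own:

```python
# Stylized IFRS16 illustration (hypothetical numbers, EUR M per year): a 10/yr
# site lease moves from Opex (operating lease) to a capitalized right-of-use asset.
revenue, other_opex, annual_lease = 100.0, 55.0, 10.0

ebitda_pre_ifrs16 = revenue - other_opex - annual_lease  # lease expensed as Opex
ebitda_post_ifrs16 = revenue - other_opex                # lease now sits below EBITDA
print(ebitda_pre_ifrs16, ebitda_post_ifrs16)             # 35.0 vs 45.0
# EBITDA "improves" with no change in cash flows; the lease cost reappears as
# depreciation of the right-of-use asset plus interest on the lease liability.
```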
Simultaneously, valuation strategies and financial priorities have driven significant shifts in Telco Capex. Telecom companies have increasingly focused on enhancing metrics such as EBITDA and capital efficiency, leading them to adopt strategies to reduce heavy capital investments. One such strategy is the cell site spin-off, where Telcos sell off their tower and infrastructure assets to specialized independent companies or create separate entities that manage these assets. These spin-offs have allowed Telcos to reduce the Capex tied to maintaining physical assets, replacing it with leasing arrangements, which shift costs towards operational expenses. As a result, Capex related to infrastructure declines, freeing up resources for investments in other areas such as technology upgrades, customer services, and digital transformation. The spun-off infrastructures often result in significant cash inflows from sales. The telcos can then use this cash to improve their balance sheets by reducing debt, reinvesting in new technologies, or distributing higher dividends to shareholders. However, this shift may also reduce control over critical network infrastructure and create long-term lease obligations, resulting in substantial operational expenses as telcos will have to pay the rental costs on the spun-off infrastructure, increasing Opex pressure. I regularly see analysts using the tower spin-off as an argument for why telcos’ Capex figures are no longer wholly comparable with past capital spending, as the passive part of the cell site build used to be a substantial share of mobile site Capex, up to 50% to 60% for a standard site build and beyond that for special sites. I believe that as not many new cell sites are being built any longer, and certainly not as many as in the 90s and 2000s, this effect is very minor on the overall Capex. Most new sites are built at a maintenance level, covering new residential or white-spot areas.
When considering mobile network evolution and the impact of higher frequencies, it is important not to default to the assumption that more cell sites will always be necessary. If all things are equal, the coverage cell range of a high carrier frequency would be shorter (often much shorter) than the coverage range at a lower frequency. However, all things are not equal. This misconception arises from a classical coverage approach, where the frequency spectrum is radiated evenly across the entire cell area. However, modern cellular networks employ advanced technologies such as beamforming, which allows for more precise and efficient distribution of radio energy. Beamforming concentrates signal power in specific directions rather than thinly spreading it across a wide area, effectively increasing reach and signal quality without additional sites. Furthermore, the support for asymmetric downlink (higher) and uplink (lower) carrier frequencies allows for high-quality service downlink and uplink in situations where the uplink might be challenged at higher frequencies.
Moreover, many mobile networks today have already been densified to accommodate coverage needs and capacity demands. This densification often occurred when spectrum resources were scarce, and the solution was to add more sites for improved performance rather than simply increasing coverage. As newer frequency bands become available, networks can leverage beamforming and existing densification efforts to meet coverage and capacity requirements without necessarily expanding the number of cell sites. Thus, the focus should be optimizing the deployment of advanced technologies like beamforming and Massive MIMO rather than increasing the site count by default. In many cases, densified networks are already equipped to handle higher frequencies, making additional sites unnecessary for coverage alone.
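A stylized link-budget view may help make this concrete. The free-space and array-gain formulas below are idealized textbook expressions of my own choosing; real deployments add penetration losses, antenna patterns, and uplink constraints:

```python
import math

# Why higher carrier frequencies need not force site densification (stylized view):
# free-space path loss grows with 20*log10(f), while beamforming array gain offsets it.
f_low_hz, f_high_hz = 700e6, 3.5e9
fspl_penalty_db = 20 * math.log10(f_high_hz / f_low_hz)
print(f"Path-loss penalty at 3.5 GHz vs 700 MHz: {fspl_penalty_db:.1f} dB")  # ~14 dB

for elements in (8, 32, 64):  # idealized array gain ~ 10*log10(N)
    print(f"{elements}-element array gain = {10 * math.log10(elements):.1f} dB")
# A 64-element Massive MIMO panel (~18 dB, idealized) more than covers the ~14 dB
# frequency penalty in this simplified picture, supporting the argument above.
```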
The migration to public cloud solutions from, for example, Amazon’s AWS or Microsoft Azure is another factor influencing the Capex of Telcos. Historically, telecom companies relied on significant upfront Capex to build and maintain their own data centers or switching locations (as they were once called, when they mainly housed the big proprietary legacy telco switching infrastructure), network operations centers, and (monolithic) IT infrastructure. However, with the rise of cloud computing, Telcos are increasingly migrating to cloud-based solutions, reducing the need for large-scale physical infrastructure investments. This shift from hardware to cloud services changes the composition of Capex as the need for extensive data center investments declines and more flexible, subscription-based cloud services are adopted. Although Capex for physical infrastructure decreases, there is a shift towards Opex as Telcos pay for cloud services on a usage basis.
Further, the transition to software-defined networks (SDNs) and software-centric telecom solutions has transformed the nature of Telco Capex. In the past, Telcos heavily depended on proprietary hardware for network management, which required substantial Capex to purchase and maintain physical equipment. However, with the advancement of virtualization and SDNs, telcos have shifted away from hardware-intensive solutions to more software-driven architectures. This transition reduces the need for continuous Capex on physical assets like routers, switches, and servers and increases investment in software development, licensing, and cloud-based platforms. The software-centric model allows, in theory, Telcos to innovate faster and reduce long-term infrastructure costs.
The Role of Capex in Financial Statements.
Capital expenditures play a critical role in shaping a telecommunications company’s financial health, influencing its income statement, balance sheet, and cash flow statements in various ways. At the same time, Telcos establish financial guardrails to manage the impact of Capex spending on dividends, liquidity, and future cash needs.
In the income statement (see Figure 15 below), Capex does not appear directly as an expense when it is incurred. Instead, it is capitalized on the balance sheet and then expensed over time through depreciation (for tangible assets) or amortization (for intangible assets). This gradual recognition of the capital expenditure leads to higher depreciation or amortization charges over future periods, reducing the company’s net income. While the immediate impact of Capex is not seen on the income statement, the long-term effects can improve revenue when investments enhance capacity and quality, as with technological upgrades like 5G infrastructure. However, these benefits are offset by the fact that depreciation lowers profitability in the short term (as net profit is lowered). The last couple of radio access technology (RAT) generations have, in general, caused an increase in telcos’ operational expenses (i.e., Opex) as more cell sites are required, heavier site configurations are implemented (e.g., multi-band antennas, massive MIMO antennas), and energy consumption has increased in absolute terms. Although every new generation has become relatively more energy efficient in terms of kWh/GB, energy consumption in absolute terms has grown, and that matters for the income statement and the incurred operational expenses.
Figure 15 illustrates the typical income statement one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex has an influence, although Capex is not directly stated in the income statement. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.
On the balance sheet (see Figure 16 below), Capex increases the value of a company’s fixed assets, typically recorded as property, plant, and equipment (PP&E). As new assets are added, the company’s overall asset base grows. However, this is balanced by the accumulation of depreciation, which gradually reduces the book value of these assets over time. How Capex is financed also affects the company’s liabilities or equity. If debt is used to finance Capex, the company’s liabilities increase; if equity financing is used, shareholders’ equity increases. The balance sheet, together with the Depreciation & Amortization (D&A) typically given in the income statement, can help us estimate the amount of Capex a Telco has spent. The capital expense, typically not directly reported in a company’s financial statements, can be estimated by adding the year-over-year changes in PP&E and Intangible Assets to the D&A.
Figure 16 illustrates the balance sheet one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex may have an influence. Knowing the Depreciation & Amortization (D&A), typically shown in the Income Statement, the change in PP&E and Intangible Assets (between two subsequent years) will provide an estimate of the current year’s Capex. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.
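The estimation recipe described above can be written as a small helper; the input numbers are hypothetical:

```python
def capex_estimate(ppe_now, ppe_prev, intang_now, intang_prev, d_and_a):
    """Estimate Capex from the balance sheet and income statement, as described above:
    Capex = delta PP&E + delta Intangible Assets + Depreciation & Amortization."""
    return (ppe_now - ppe_prev) + (intang_now - intang_prev) + d_and_a

# Hypothetical figures (EUR M): the asset base grows slightly while D&A runs at 950.
print(capex_estimate(ppe_now=5200, ppe_prev=5000,
                     intang_now=1210, intang_prev=1160, d_and_a=950))
# -> 1200, i.e., a 20% Capex-to-revenue ratio if revenue were 6000
```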
In the cash flow statement, Capex appears as an outflow under the category of cash flows from investing activities, representing the company’s spending on long-term assets. In the short term, this creates a significant reduction in cash. However, well-planned Capex to enhance infrastructure or expand capacity can lead to higher operating cash flows in the future. If Capex is funded through debt or equity issuance, the inflow of funds will be reflected under cash flows from financing activities.
Figure 17 illustrates the cash flow statement one may find in a telco’s annual report or official financial statements (here with a bit more detail than would usually be provided). We would typically capture 70+% of a Telco’s Capex level by looking at the “Net Cash Flow Used in Investing Activities”, unless we are offered Purchases of Tangible and Intangible Assets directly. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.
To ensure Capex does not overly strain the company’s financial health or limit returns to shareholders, Telcos put financial guardrails in place. Regarding dividends, many companies set specific dividend payout ratios, ensuring that a portion of earnings or free cash flow is consistently returned to shareholders. This practice balances returning value to shareholders with retaining sufficient earnings to fund operations and investments. It is also not unusual for Telcos to commit a given dividend level to shareholders, which as a consequence may place a limit on Capex spending or result in Capex tasking within a given planning period, as management must balance cash outflows between shareholder returns and strategic investments. This may lead to prioritizing essential projects, delaying less critical investments, or seeking alternative financing to maintain both Capex and dividend commitments. Additionally, Telcos often use dividend coverage ratios to ensure they can sustain dividend payouts even during periods of heavy capital expenditure.
Some telcos have chosen not to commit dividends to shareholders in order to maximize Capex investments, aiming to reinvest profits into the business to drive long-term growth and create higher shareholder value. This strategy prioritizes network expansion, technological upgrades, and new market opportunities over immediate cash returns, allowing the company to maintain financial flexibility and pursue strategic objectives more aggressively. When a telco decides to start paying dividends, it may indicate that management believes there are fewer high-value investment opportunities that can deliver returns above the company’s cost of capital. The decision to pay dividends often reflects the view that shareholders may derive greater value from the cash than the company could generate by reinvesting it. Often, it signals a shift to a higher degree of maturity (e.g., corporate- or market-wise) from having been a growth-focused company (i.e., the Telco has passed the inflection point of growth). An example of maturity, and maybe less about growth opportunities, is the case of T-Mobile USA, which in 2024 announced that it would start to pay a dividend for the first time in its history, targeting an approximately 10 percent annual increase per share (note: Deutsche Telekom AG gained ownership in 2001; the company was founded in 1994).
Liquidity management is another consideration. Companies monitor their liquidity through current or quick ratios to ensure they can meet short-term obligations without cutting dividends or pausing important Capex projects. To provide an additional safety net, Telcos often maintain cash reserves or access to credit lines to handle immediate financial needs without disrupting long-term investment plans.
Regarding debt management, Telcos must carefully balance using debt to finance Capex. Companies often track their debt-to-equity ratio to avoid over-leveraging, which can lead to higher interest expenses and reduced financial flexibility. Another common metric is net debt to EBITDA, which ensures that debt levels remain manageable concerning the company’s earnings. To avoid breaching agreements with lenders, Telcos often operate under covenants that limit the amount they can spend on Capex without negatively affecting their ability to service debt or pay dividends.
Telcos also plan long-term cash flow to ensure Capex investments align with future financial needs. Many companies establish a capital allocation framework that prioritizes projects with the highest returns, ensuring that investments in infrastructure or technology do not jeopardize future cash flow. Free cash flow (FCF) is a particularly important metric in this context, as it represents the amount of cash available after covering operating expenses and Capex. A positive FCF ensures the company can meet future cash needs while returning value to shareholders through dividends or share buybacks.
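A minimal sketch of such a guardrail screen, with all figures hypothetical and the FCF proxy deliberately simplified:

```python
# Simple guardrail screen of the kind described above (hypothetical figures, EUR M).
ebitda, capex, interest_tax_wc = 2_100.0, 1_200.0, 400.0
net_debt, dividends = 5_200.0, 350.0

fcf = ebitda - capex - interest_tax_wc        # simplified free-cash-flow proxy
print("FCF:", fcf)                                        # 500 -> the dividend is funded
print("Dividend cover:", round(fcf / dividends, 2))       # > 1x keeps the payout sustainable
print("Net debt / EBITDA:", round(net_debt / ebitda, 2))  # e.g., keep below ~2.5-3.0x
```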
Capex budgeting and prioritization are also essential tools for managing large investments. Companies assess the expected return on investment (ROI) and the payback period for Capex projects, ensuring that capital is allocated efficiently. Projects with assumed high strategic value, such as 5G infrastructure upgrades, household fiber coverage, or strategic fiber overbuild, are often prioritized for their potential to drive long-term revenue growth. Monitoring the Capex-to-sales ratio helps ensure that capital investments are aligned with revenue growth, preventing over-investment in infrastructure that may not yield sufficient returns.
CAPEX EXPECTATIONS 2024 to 2026.
Considering all 54 telcos in the pool of the New Street Research quarterly review (ignoring MasMovil and WindHellas, which are in the process of being integrated), each with their individual as well as country-level “peculiarities” (e.g., state of 5G deployment, fiber-optic coverage, fiber uptake, merger-related integration Capex, general revenue trends, …), it is possible to get a directional idea of how Capex will develop for each individual telco as well as of the overall trend. This is illustrated in the figure below at a Western European level.
I expect that we will not see a Capex reduction in 2024, supported by how Capex in the third and fourth quarters usually behaves compared to the first two quarters, and due to integration and transformation Capex carrying over from 2023 into 2024, possibly with a tail end later in the year. I expect most telcos will cut back on new mobile investments, even if some might start ripping out radio access infrastructure from Chinese suppliers. However, I also believe that telcos will try to delay replacement to 2026 to 2028, when the first round of 5G modernization activities would be expected (and would even be overdue for some countries).
While 5G networks have made significant advancements, the rollout of 5G SA remains limited. By the end of 2023, only five of the 39 markets analyzed by the GSMA had reached near-complete adoption of 5G SA networks; 17 markets had yet to launch 5G SA at all. One of the primary barriers is the high cost of investment required to build the necessary infrastructure. The expansion and densification of 5G networks, such as installing more base stations, are essential to support 5G SA. According to the GSMA, many operators face financial hurdles, as returns in many markets have been flat, with any increase mainly due to inflationary price corrections rather than incremental or new usage. I suspect that telcos may also be more conservative (and perhaps more realistic) in assessing the real economic potential of the features enabled by migrating to 5G SA, e.g., advanced network slicing, ultra-low latency, and massive IoT capabilities, in comparison with the capital investments and efforts they would need to incur. I should point out that any core network investments supporting 5G SA would not be expected to have a visible impact on telcos’ Capex budgets, as these would be expected to amount to less than 10% of the mobile Capex.
Figure 18 shows the 2022 status of homes covered by fiber in 16 Western European countries, as well as the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical regional/city basis). The percentages (yellow) above the chart show each country’s share of total 2022 Western European Capex; e.g., Germany’s share of the 2022 Capex was 18%, with ca. 19% of all German households covered with fiber. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).
In 2022, a bit more than 50% of all Western European households were covered by fiber (see Figure 18 above), which amounts to approximately 85 million households with fiber coverage. This also leaves approximately 80 million households without fiber reach. Almost 60% of households without fiber coverage are in Germany (38%) and the UK (21%). Both Germany and the UK contributed about 40% of the total Western European Capex spend in 2022.
Moreover, I expect there are still Western European markets where the Capex priority is increasing fiber-optic household coverage. In 2022, there was a peak in new households covered by fiber in Western Europe (see Figure 19 below), with 13+ million households covered according to the European Commission’s report “Broadband Coverage in Europe 2013-2022“. Germany (a fiber laggard) and the UK, which account for more than 35% of Western European Capex, are expected to continue to invest substantially in fiber coverage until the end of the decade. As Figure 19 below illustrates, there is still a substantial amount of Capex required to close the fixed broadband coverage gap in some Western European countries.
Figure 19 illustrates the number of households covered by fiber (homes passed) and the number of millions of new households covered in a year. The period from 2017 to 2022 is based on actuals. The period from 2023 to 2026 is forecasted for new households covered based on the last 5-year average deployment or the maximum speed over the last 5 years (Urban: e.g., DE, IT, NL, UK,…) with deceleration as coverage reaches 95% for urban areas and 80% for rural (note: may be optimistic for some countries). The fiber deployment model differentiates between Urban and Rural areas. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).
I should point out that I am not assuming telcos will be required over the next couple of years to swap out Chinese suppliers beyond the scope of the European Commission’s “EU 5G Toolkit for Security” framework, which mainly focuses on 5G mobile networks, eventually including the radio access network. It should be kept in mind that there is a relatively big share of high-risk suppliers within Western European (actually, most European Union member states’) fixed broadband networks (e.g., core routers & switches, SBCs, OLTs/ONTs, MSAPs) which, if subjected to “5G Toolkit for Security”-like regulation, such as is in effect in Denmark (i.e., “The Danish Investment Screening Act”), would result in a substantial increase in telcos’ fixed capital spend. We may see some Western European telcos commence replacement programs as equipment becomes obsolete (or near obsolete), and I would expect fixed broadband Capex to remain relatively high for telcos in Western Europe even beyond 2026.
Thus, overall, I think it is not unrealistic to anticipate a decrease in Capex over the next 3 years. Contrary to some analysts’ expectations, however, I do not see the lower Capex level as persistent, but rather as what one should expect for the reasons given above in this blog.
Figure 20 illustrates the pace and financial requirements for fiber-to-the-premises (FTTP) deployment across the EU, emphasizing the significant challenges ahead. Germany needs the highest number of households passed per week and the largest investments at €32.9 billion to reach 80% household coverage by 2031. The total investment required to reach 80% household fiber coverage by 2031 is estimated at over €110 billion, with most of this funding allocated to urban areas. Despite progress, more than 57% of Western European households still lack fiber coverage as of 2022. Achieving this goal will require maintaining the current pace of deployment and overcoming historical performance limitations. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).
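The pace arithmetic behind such figures is simple enough to sketch. The household count and starting coverage below are placeholders of mine in the spirit of Figure 20, not the report’s values:

```python
from datetime import date

# Required fiber deployment pace for an 80% coverage target by 2031 (sketch).
households = 41e6                        # a large market, roughly Germany-sized
covered_now, target_2031 = 0.19, 0.80    # starting coverage vs target share
weeks_left = (date(2031, 1, 1) - date(2024, 1, 1)).days / 7

homes_to_pass = households * (target_2031 - covered_now)
print(f"{homes_to_pass / weeks_left:,.0f} homes passed per week")  # ~68,000 per week
```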
CAPEX EXPECTATIONS TOWARDS 2030.
Taking the above Capex forecasting approach, based on the individual 54 Western European telcos in the New Street Research Quarterly review, it is relatively straightforward, but not per se very accurate, to extend to 2030, as shown in the figure below.
It is worth mentioning that predicting Capex over such a relatively long period of ten years is prone to a high degree of uncertainty and can really only be done with relatively high reliability if very detailed information is available on each telco’s long- and short-term strategy as well as its economic outlook. In my experience working with very detailed bottom-up Capex models covering a five-and-beyond-year horizon (which is not the approach I have used here, simply because the information required for such an exercise not to be futile is lacking), such forecasting is already prone to a relatively high degree of uncertainty even with all the information, a solid strategic outlook, and reasonable assumptions up front.
Figure 21 illustrates Western Europe’s projected capital expenditure (Capex) development from 2020 to 2030. The slight increase in Capex towards 2030 is primarily driven by the modernization of 5G radio access networks (RAN), which could potentially incorporate 6G capabilities and further deploy 5G Standalone (SA) networks. Additionally, there is a focus on swapping out high-risk suppliers in the mobile domain and completing heavy fiber household coverage in the remaining laggard countries. The scenario in which the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G mobile domain, is extended to fixed broadband networks has not been factored into the model represented here. The percentages on the chart represent the overall Capex to Total Revenue ratio development over the period.
The figure shows the capital expenditure trends in Western Europe from 2020 to 2030, with projections indicating a steady investment curve (remember that this is the aggregation of 54 Western European telcos’ Capex development over the period).
A noticeable rise in Capex towards 2030 can be attributed to several key factors, primarily the modernization of 5G Radio Access Networks (RAN). This modernization effort will likely include upgrades to the current 5G infrastructure and potential integration of 6G (or renamed 5G SA) capabilities as Europe prepares for the next generation of mobile technology, which I still believe is an unavoidable direction. Additionally, deploying or expanding 5G Standalone (SA) networks, which offer more advanced features such as network slicing and ultra-low latency, will further drive investments.
Another significant factor contributing to the increased Capex is the planned replacement of high-risk suppliers in the mobile domain. Countries across Western Europe are expected to phase out network equipment from suppliers deemed risky for national security, aligning with broader EU efforts to ensure a secure telecommunications infrastructure. I expect a very strong push from some member state regulators and the European Commission to finish the replacement by 2027/2028. I also expect impacted telcos (of a certain size) to push back and attempt to time a high-risk supplier swap out with their regular mobile infrastructure obsolescence program and introduction of 6G in their networks towards and after 2030.
Figure 22 shows the projections for 2023 and 2030 for the number of homes covered by fiber in Western European countries and the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical regional/city basis). Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).
Simultaneously, Western Europe is expected to complete the extensive rollout of fiber-to-the-home (FTTH) networks, as illustrated by Figure 20 above, particularly in countries lagging behind in fiber deployment, such as Germany, the UK, Belgium, Austria, and Greece. These EU member states will likely have finished covering the majority of households (80+%) with high-speed fiber by the end of the decade. On this topic, we should remember that telcos are using various fiber deployment models that minimize (and optimize) their capital investment levels. By 2030, I would expect almost 80% of all Western European households to be covered with fiber, and thus most consumers and businesses will have easy access to gigabit services to their homes by then (and for most countries long before 2030). Germany is still expected to be the Western European fiber laggard by 2030, accounting for an increased share, 50+%, of the Western European households not covered by fiber (note: in 2022, this share was 38%). Most other countries will have reached and exceeded 80% fiber household coverage.
It is also important to note that my Capex model does not assume the extension of the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G domain, to fixed broadband networks. If the legal framework were to be applied to the fixed broadband sector as well, an event I consider very likely, forcing the removal of high-risk suppliers from fiber broadband networks, Capex requirements would likely increase significantly beyond the projections represented in my assessment, with the last years of the decade focused on high-risk supplier replacement in Western European telcos’ fixed broadband transport and IP networks. While I do not see a medium-to-high risk that all CPEs would be included in a high-risk supplier ban, I do believe that telcos may be required to replace the installed base of CPEs with the ONT integrated. If a high-risk supplier ban were to include the ONT, there would be several implications.
Any CPEs that use components from the banned supplier would need to be replaced or retrofitted to ensure compliance. This would require swapping the integrated CPE/ONT units for separate CPE and ONT devices from approved suppliers, which could add to installation costs and increase deployment time. Service providers would also need to reassess their network equipment supply chain, ensuring that new ONTs and CPEs meet regulatory standards for security and compliance. Moreover, replacing equipment could potentially disrupt existing service, necessitating careful planning to manage the transition without major outages for customers. This situation would likely also require updates to the network configuration, as replacing an integrated CPE/ONT device could involve reconfiguring customer devices to work seamlessly with the new setup. I believe it is very likely that telcos will eventually offer fixed broadband services, including CPEs and home gateways, that are free of high-risk suppliers end-2-end (e.g., for B2B and public institutions such as defense and other critically sensitive areas). This may extend to requirements that employees working in or with sensitive areas need a certificate of a high-risk-supplier-free end-2-end fixed broadband connection to be allowed to work from home or receive any job-related information (this could extend to mobile devices as well). Again, substantial Capex (and maybe a fair amount of time as well) would be required to reach such a high-risk supplier reduction.
AN ALTERNATE REALITY.
I am unsure whether William Webb’s idea of “The End of Telecoms History” (I really recommend you get his book) will have the same profound impact as Francis Fukuyama’s marvelously thought-provoking book “The End of History and the Last Man“ or be more “right” than Fukuyama’s book. However, I think it may be an oversimplification of his ideas to say that he has been proven wrong. The world of Man may have proven more resistant to “boredom” than the book assumed (as Fukuyama conceded in subsequent writing). Nevertheless, I do not believe history can be over unless the history makers and writers are all gone (which may happen sooner rather than later). History may have long and “boring” periods where little new and disruptive things happen. Still, historically, something so far has always disrupted the hiatus of history, followed by a quieter period (e.g., Pax Romana, European Feudalism, Ming Dynasty, 19th century’s European balance of power, …). The nature of history is cyclic. Stability and disruption are not opposing forces but part of an ongoing dynamic. I don’t think telecommunication would be that different. Parts of what we define as telecom may reach a natural end and settle until it is disrupted again; for example, the fixed telephony services on copper lines were disrupted by emerging mobile technologies driven by radio access technology innovation back in the 90s and until today. Or, like circuit-switched voice-centric technologies, which have been replaced by data-centric packet-switched technologies, putting an “end” to the classical voice-based business model of the incumbent telecommunication corporations.
At some point in the not-so-distant future (2030-2040), all Western European households will be covered by optical fiber and have a fiber-optic access connection with indoor services being served by ultra-WiFi coverage (remember approx. 80% of mobile consumption happens indoors). Mobile broadband networks have by then been redesigned to mainly provide outdoor coverage in urban and suburban areas. These are being modernized at minimum 10-year cycles as the need for innovation is relatively minor and more focused on energy efficiency and CO2 footprint reductions. Direct-to-cell (D2C) LEO satellite or stratospheric drone constellations utilizing a cellular spectrum above 1800 MHz serve outdoor coverage of rural regions, as opposed to the current D2C use of low-frequency bands such as 600 – 800 MHz (as higher frequency bands are occupied terrestrially and difficult to coordinate with LEO Satellite D2C providers). Let’s dream that the telco IT landscape, Core, transport, and routing networks will be fully converged (i.e., no fixed silo, no mobile silo) and autonomous network operations deal with most technical issues, including planning and optimization.
In this alternate reality, you pay for and get a broadband service enabled by a fully integrated broadband network. Not a mobile service served by a mobile broadband network (including own mobile backhaul, mobile aggregation, mobile backbone, and mobile core), and, not a fixed service served by a fixed broadband network different from the mobile infrastructure.
Given the Western European countries addressed in this report (i.e., see details in Further Reading #1), we would need to cover a surface area of 3.6 million square kilometers. To ensure outdoor coverage in urban areas and road networks, we may not need more than about 50,000 cell sites compared to today’s 300 – 400 thousand. If the cellular infrastructure is shared, the effective number of sites that are paid in full would be substantially lower than that.
The required mobile Capex ballpark estimate would be a fifth (including its share of related fixed support investment, e.g., IT, Core, Transport, Switching, Routing, Product development, etc.) of what it otherwise would be if we continue “The Mobile History” as it has been running up to today.
In this “Alternate Reality,” instead of having a mobile Capex level of about 10% of the total fixed and mobile revenue (~15+% of mobile service revenues), we would be down to between 2% and 3% of the total telecom revenues (assuming revenue remains reasonably flat at the 2023 level). The fixed investment level would be relatively low, household coverage would be finished, and most households would be connected. If we use figures for fixed broadband Capex without substantial fiber deployment, that level should not be much higher than 5% of the total revenue. Thus, instead of today’s persistent level of 18% – 20% of total telecom revenues, in our “Alternate Reality,” Capex would not exceed 10%. And just imagine what such a change would do to the operational cost structure.
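Making that arithmetic explicit, indexing total revenue to 100 and using the percentages from the paragraph above:

```python
# The "Alternate Reality" Capex arithmetic from the paragraph above, made explicit.
total_revenue = 100.0                  # index; assumed flat at the 2023 level
capex_today = 0.19 * total_revenue     # today's persistent 18-20% Capex-to-revenue
alt_mobile = 0.025 * total_revenue     # 2-3% for the outdoor-only mobile layer
alt_fixed = 0.05 * total_revenue       # ~5% fixed, once fiber coverage is complete
print(capex_today, alt_mobile + alt_fixed)  # 19.0 vs 7.5 -> comfortably below 10%
```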
Obviously, this fictive (and speculative) reality would be “The End of Mobile History.”
It would be an “End to Big Capex” and a stop to spending mobile Capex like there is no (better fixed broadband) tomorrow.
This is an end-reflection of where the current mobile network development may be heading unless the industry gets better at optimizing and prioritizing between mobile and fixed broadband. Re-architecting the fundamental design paradigms of mobile network design, plan, and build is required, including an urgent reset of current 6G thinking.
ACKNOWLEDGEMENT.
I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this Blog. There should be no doubt that without the support of Russell Waller (New Street Research), this blog would not have been possible. Thank you so much for providing the financial telco data for Western Europe that lays the ground for much of the Capex analysis in this article. This blog has also been published in telecomanalysis.net with some minor changes and updates.
FURTHER READING.
New Street Research covers the following countries in their Quarterly report: Austria, Belgium, Denmark, Finland, France, Germany, Greece, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. Across those 15 countries, ca. 56 telcos are covered.
Rupert Wood, “A crisis of overproduction in bandwidth means that telecoms capex will inevitably fall,” Analysys Mason (July 2024). A rather costly (for mortals & their budgets, at least) report called “The end of big capex: new strategic options for the telecoms industry” allegedly demonstrates the crisis.
Danish Investment Screening Act, “Particularly sensitive sectors and activities,” Danish Business Authority, (July 2021). Note that the “Danish Investment Screening Act” is closely aligned with broader European Union (EU) frameworks and initiatives to safeguard critical infrastructure from high-risk foreign suppliers. The Act reflects Denmark’s effort to implement national and EU-level policies to protect sensitive sectors from foreign investments that could pose security risks, particularly in critical infrastructure such as telecommunications, energy, and defense.
German press on high-risk suppliers in German telecommunications networks: “Zeit für den Abschied von Huawei, sagt Innenministerin Faeser” (Handelsblatt, August 18, 2023), “Deutsche Telekom und Huawei: Warum die Abhängigkeit bleibt” (Die Welt, September 7, 2023), “Telekom-Netz: Kritik an schleppendem Rückzug von Huawei-Komponenten” (Der Spiegel, September 20, 2023), “Faeser verschiebt Huawei-Bann und stößt auf heftige Kritik” (Handelsblatt, July 18, 2024), “Huawei-Verbot in 5G-Netzen: Deutschland verschärft, aber langsam” (Tagesschau, July 15, 2024), and “Langsame Fortschritte: Deutschland und das Huawei-Dilemma” (Der Spiegel, September 21, 2024), and many, many others.
Kim Kyllesbech Larsen, “Capacity planning in mobile data networks experiencing exponential growth in demand” (April 2012). See slide 5, showing that 50% of all data traffic is generated in 1 cell, 80% of data traffic is carried in up to 3 cells, and only 20% of traffic can be regarded as truly mobile. The presentation has been viewed more than 19 thousand times.
Opanga, “The RAIN AI Platform”, provides a cognitive AI-based solution that addresses (1) network optimization, lowering Capex demand and increasing customer experience; (2) energy reduction above and beyond existing supplier solutions, leading to further Opex efficiencies; and (3) network intelligence, using AI to better manage network data at a much higher resolution than is possible with classical dashboards applied to technology-driven data lakes.
The securitization of the Arctic involves key players such as Greenland (The Polar Bear), Denmark, the USA (The Eagle), Russia (The Brown Bear), and China (The Red Dragon), each with strategic interests in the region. Greenland’s location and resources make it central to geopolitical competition, with Denmark ensuring its sovereignty and security. Greenland’s primary allies are Denmark, the USA, and NATO member countries, which support its security and sovereignty. Unfriendly actors assessed to be potential threats include Russia, due to its military expansion in the Arctic, and China, due to its strategic economic ambitions and influence in the region. The primary threats to Greenland include military tensions, sovereignty challenges, environmental risks, resource exploitation, and economic dependence. Addressing these threats requires a balanced, cooperative approach to ensure regional stability and sustainability.
Cold winds cut like knives, Mountains rise in solitude, Life persists in ice. (Aqqaluk Lynge, “Harsh Embrace” ).
I have been designing, planning, building, and operating telecommunications networks across diverse environmental conditions, ranging from varied geographies to extreme climates. I sort of told myself that I most likely had seen it all. However (and luckily), the more I consider the complexities involved in establishing robust and highly reliable communication networks in Greenland, the more I realize the uniqueness and often extreme challenges involved with building & maintaining communications infrastructures there. The Greenlandic telecommunications incumbent Tusass has successfully built a resilient and dependable transport network that connects nearly every settlement in Greenland, no matter how small. They manage and maintain this network amidst some of the most severe environmental conditions on the planet. The staff of Tusass is fully committed to ensuring connectivity for these remote communities, recognizing that any service disruption can have severe repercussions for those living there.
As an independent board member of Tusass Greenland since 2022, I have witnessed Tusass’s dedication, passion, and understanding of the importance of improving and maintaining their network and connections for the well-being of all Greenlandic communities. To be clear, the opinions I express in this post are solely my own and do not necessarily reflect the views or opinions of Tusass. I believe that my opinions have been shaped by my Tusass and Greenlandic experience, by working closely with Tusass as an independent board member, and by a deep respect for Tusass and its employees. All information that I am using in this post is publicly available through annual reports (of Tusass) or, in general, publicly available on the internet.
Figure 1 illustrates a coastal telecommunications site supporting Tusass’s microwave long-haul transport network along the Greenlandic west coast. Courtesy: Tusass A/S (Greenland).
Greenland’s strategic location, its natural resources, environmental significance, and broader geopolitical context make it a geopolitically critical country. Thus, protecting and investing in Greenland’s critical infrastructure is obviously important, not only from a national and geopolitical security perspective but also with respect to the economic development and stability of Greenland and the Arctic region. If a butterfly’s movements can cause a hurricane, imagine what an angry “polar bear” will do to the global weather and climate. The melting ice caps are enabling new shipping routes and making natural resources much more accessible, and they may also raise the stakes for regional security. For example, through China’s Polar Silk Road initiative, China seeks to establish (or at least claim) a foothold in the Arctic in order to increase its trade routes and access to resources. This is also reflected in its 2018 declaration stating that China sees itself as a “Near-Arctic State” and concluding that China is one of the continental states closest to the Arctic Circle. Russia, which is a real neighboring country to the Arctic region and Circle, has also increased its military presence and economic activities in the Arctic. Recently, Russia has made claims in the Arctic to areas that overlap with what Denmark and Canada see as their natural territories, aiming to secure its northern borders and exploit the region’s resources. Russia has also added new military bases and has conducted large-scale maneuvers along its own Arctic coastline. The potential threats from increased Russian and Chinese Arctic activities pose significant security concerns. Identifying and articulating possible threat scenarios to the Arctic region involving potential hostile actors may indeed justify extraordinary measures and also highlights the need for urgent and substantial investments in and attention to Greenland’s critical infrastructure.
In this article, I focus on which key technologies should be considered, why, and how those technologies could be implemented in a larger overarching security and defense architecture aimed at enhancing the safety and security of Greenland:
Leapfrog Quality of Critical Infrastructure: Strengthening the existing critical communications infrastructure should be a priority. With Tusass, this is the case in terms of increasing the existing transport network’s reliability and availability by adding new submarine cables and satellite backbone services and the associated satellite infrastructure. However, the backbone of the Tusass economy is a population of 57 thousand. The investments required to quantum leap the robustness of the existing critical infrastructure, as well as to deploy many of the technologies discussed in this post, will not have a positive business case or a reasonable return on investment within a short period (e.g., a couple of years) if approached in the way that is standard practice for most private corporations around the world. External subsidies will be required. The benefit evaluation would need to be considered over the long term, more in line with big public infrastructure projects. Most of the critical infrastructure and technology investments discussed are based on particular geopolitical assumptions and serve as risk-mitigating measures with substantial civil upside if we maintain a dual-use philosophy as a boundary condition for those investments. Overall, I believe a positive case is better made from the perspective of the possible losses of not making these investments than from the typical gain or growth case expected of a commercial investment.
Smart Infrastructure Development: Focus on building smart infrastructure, integrating sensor networks (e.g., DAS on submarine cables), and AI-driven automation for critical systems like communication networks, transportation, and energy management to improve resilience and operational efficiency. As discussed in this post, Tusass already has a strong communications network that should underpin any work on enhancing the Greenlandic defense architecture. Moreover, Tusass is expert in building and operating critical communications infrastructure in the Arctic. This is critical know-how that should be heavily relied upon in what is to come.
Automated Surveillance and Monitoring Systems: Invest in advanced automated surveillance technologies, such as aquatic and aerial drones, satellite-based monitoring (SIGINT and IMINT), and IoT sensors, to enhance real-time monitoring and protection of Greenland.
Autonomous Defense Systems: Deploy autonomous systems, including unmanned aerial vehicles (UAVs) and unmanned underwater vehicles (UUVs), to strengthen defense capabilities and ensure rapid response to potential threats in the Arctic region. These systems should be the backbone of ad-hoc private network deployments serving both defense and civilian use cases.
Cybersecurity and AI Integration: Implement robust cybersecurity measures and integrate artificial intelligence to protect critical infrastructure and ensure secure, reliable communication networks supporting both military and civilian applications in Greenland.
Dual-Use Infrastructure: Prioritize investments in infrastructure solutions that can serve both military and civilian purposes, such as communication networks and transportation facilities, to maximize benefits and resilience.
Local Economic and Social Benefits: Ensure that defense investments support local economic development by creating new job opportunities and improving essential services in Greenland.
I believe that Greenland needs to build solid, Greenlandic-centered know-how on a foundational level around autonomous and automated systems. To get there, Greenland will need close and strong alliances aligned with the aim of achieving a greater degree of independence through clever use of the latest technologies available. Such local expertise will be essential to reduce the dependency on external support (e.g., from Denmark and Allies) and to ensure that Greenland can maintain operational capabilities independently, particularly during a security crisis. Automation, enabled by digitization and AI-enabled system architectures, would be key to managing and monitoring Greenland’s remote and inaccessible geography and resources efficiently and securely, minimizing the need for extensive human intervention. Leveraging autonomous defense and surveillance technologies and stepping up in digital maturity is an important path to compensating for Greenland’s small population. Additionally, implementing automated systems that are robust with respect to both hardware AND software will allow Greenland to protect and maintain its critical infrastructure and services, mitigating the risks associated with (too much) reliance on Denmark or allies during a crisis, when such resources may be scarce or impractical to move to Greenland in time.
Figure 2 A view from Tusass HQ over Nuuk, Greenland. Courtesy: Tusass A/S (Greenland).
GREENLAND – A CONCISE INTRODUCTION.
Greenland, or Kalaallit Nunaat as it is called in Greenlandic, is the world’s largest island, with a surface area of about 2.2 million square kilometers, roughly 80% of which is covered by ice. It is an autonomous territory of Denmark with a population of approximately 57 thousand. Its surface area is comparable to that of Alaska (1.7 million km2) or Saudi Arabia (2.2 million km2). The population is scattered in smaller settlements along the western coastlines, where the climate is milder and more hospitable. Greenland’s extensive coastline measures ca. 44 thousand kilometers and is one of the most remote and sparsely populated coastlines in the world, in contrast to more densely populated and developed coastlines like those of the United States. The remoteness of Greenland’s coastline is further emphasized by a lack of civil infrastructure: there are no connecting roads between settlements, and most (if not all) travel between communities relies on maritime or air transport.
Greenland’s coastline presents several unique security challenges due to its particularities, such as its vast length, rugged terrain, harsh climate, and limited population. These factors make Greenland challenging to monitor and protect effectively, which is critical for several reasons:
The vast and inaccessible terrain.
Harsh climate and weather conditions.
Sparse population and limited infrastructure.
Maritime and resource security challenges.
Communications technology challenges.
Geopolitical significance.
The capital and largest city is Nuuk, located on the southwestern coast. With a population of approximately 18 thousand, or 30+% of the total, Nuuk is Greenland’s administrative and economic center, offering modern amenities and serving as the hub for the island’s limited transportation network. Sisimiut, north of Nuuk on the western coast, is the second-largest town in Greenland, with a population of around 5,500. Sisimiut is known for its fishing industry and serves as a base for much of Greenlandic tourism and outdoor activities.
On the remote and inhospitable eastern coast, Tasiilaq is the largest town in the Ammassalik area, with a population of a little less than 2,000. It is relatively isolated compared to the western settlements and is known for its breathtaking natural scenery and opportunities for adventure tourism (check out https://visitgreenland.com/ for much more information). In the far north, on the west coast, we have Qaanaaq (also known as Thule), one of the world’s most northern towns, with a population of ca. 600. Located near Qaanaaq is the Pituffik Space Base, the United States’ northernmost military base, established in 1951 and a key component of NATO’s early warning and missile defense systems. The USA has had a military presence in Greenland since the early days of World War II, a presence that was strengthened during the Cold War. The base also plays an important role in monitoring Arctic airspace and supporting aviation operations in the region.
As of 2023, Greenland has approximately 56 inhabited settlements. I am using the word “settlement” as an all-inclusive term covering communities with populations from tens of thousands (Nuuk) down to hundreds or fewer. With few exceptions, there are no settlements with connecting roads or any other overland transportation connections to other settlements. All transportation of people and goods between the different settlements is by plane or helicopter (provided by Air Greenland) or by sea (e.g., Royal Arctic Line, RAL).
Greenland is rich in natural resources. Apart from water (for hydropower), this includes significant mining, oil, and gas reserves. These natural resources are largely untapped and present substantial opportunities for economic development (and temptation for friendly as well as unfriendly actors). Greenland is believed to have one of the world’s largest deposits of rare earth elements (although by far not comparable to China’s), extremely valuable as an alternative to reliance on China and critical for various high-tech applications, including electronics (e.g., your smartphone), renewable energy technologies (e.g., wind turbines and EVs), and defense systems. Graphite and platinum are also present in Greenland and are important in many industrial processes. Some estimates indicate that northeast Greenland’s waters could hold large reserves of (yet) undiscovered oil and gas. Other areas are likewise believed to contain substantial hydrocarbon reserves. However, Greenland’s Arctic environment presents severe exploration and extraction challenges, such as extreme cold, ice cover, and remoteness, that have so far made it very costly and complicated to extract these natural resources. With global warming, the economic and practical barriers to exploitation are continuously falling.
FROM STRATEGIC OUTPOST TO ARCTIC STRONGHOLD: THE EVOLVING SECURITY SIGNIFICANCE OF GREENLAND.
Figure 3 illustrates Greenland’s reliance on and the importance of critical communications infrastructure connecting local communities as well as bridging the rest of the world and the internet. Courtesy: DALL-E.
From a security perspective, Greenland’s significance has evolved considerably since the Second World War. During World War II, its importance was primarily based on its location as a midway point between North America and Europe, serving as a refueling and weather station for allied aircraft crossing the Atlantic to and from Europe. Additionally, its remote geographical location combined with its harsh climate provided a “safe haven” for monitoring and early warning installations.
During the Cold War era, Greenland’s importance grew (again) due to its proximity to the Soviet Union (and Russia today). Greenland became a key site for early warning radar systems and an integral part of the North American Aerospace Defense Command (NORAD) network designed to detect Soviet bombers and missiles heading toward North America. In 1951, the USA-controlled Thule Air Base (today called Pituffik Space Base) was constructed in northwest Greenland to host long-range bombers and provide an advanced position (from a USA perspective) for early warning and missile defense systems.
As global tensions eased in the post-Cold War period, Greenland’s strategic status diminished somewhat. However, its status is now changing again due to Russia’s increased aggression in Europe (and geopolitically) and a more assertive China with expressed interest in the Arctic. The Arctic ice is melting due to climate change, making new maritime routes possible, such as the Northern Sea Route, and making Arctic resources more accessible. Thus, we now observe an increased interest from global powers in the Arctic region. And as was the case during the Cold War period (maybe with much higher stakes), Greenland has become strategically critical for monitoring and controlling these emerging routes, and the Arctic in general, particularly given the observed increased activity and interest from Russia and China.
Greenland’s position in the North Atlantic, bridging the gap between North America and Europe, has become a crucial spot for monitoring and controlling the transatlantic routes. Greenland is part of the so-called Greenland-Iceland-UK (GIUK) Gap. This gap is a critical “chokepoint” for controlling naval and submarine operations, as was evident during the Second World War (e.g., read up on the Battle of the Atlantic). Controlling the Gap increases the security of maritime and air traffic between the continents. Thus, Greenland has again become a key component in defense strategies and threat scenarios envisioned and studied by NATO (and the USA).
GREENLAND’S GEOPOLITICAL ROLE.
Greenland’s recent significance in the Arctic should not be underestimated. It arises, in particular, from climate change and the resulting melting ice caps, which have enabled, and will continue to enable, new shipping routes and potential (easier) access to Greenland’s untapped natural resources.
Greenland hosts critical military and surveillance assets, including early warning radar installations as well as air & naval bases. These defense assets actively contribute to global security and are integral to NATO’s missile defense and early warning systems. They provide data for monitoring potential missile threats and other aerial activities in the North Atlantic and Arctic regions. Greenland’s air and naval bases also support specialized military operations, providing logistical hubs for allied forces operating in the Arctic and North Atlantic.
From a security perspective, control of Greenland is not only about monitoring and defense. It is also about deterring potential threats from potentially hostile actors. It allows for effective monitoring and defense of the Arctic and North Atlantic regions, enabling the detection and tracking of submarines, ships, and aircraft. Such capabilities enhance situational awareness and operational readiness, but more importantly, they send a message to potential adversaries (e.g., ones maybe unaware, as unlikely as it may be, of the deficiencies of Danish Arctic patrol ships). The ability to project power and maintain a military presence in this area is necessary for deterring potential adversaries and protecting the critical communications infrastructure (e.g., submarine cables), maritime routes, and airspace.
The strategic location of Greenland is key to global security dynamics. Ensuring Greenland’s security and stability is also essential for maintaining control over critical transatlantic routes, monitoring Arctic activities, and protecting against potential threats from hostile actors. This makes Greenland a cornerstone of the defense infrastructure and an essential area for geopolitical strategy in the North Atlantic and Arctic regions.
INFRASTRUCTURE RECOMMENDATIONS.
Recent research has focused on Greenland in the context of Arctic security (see “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze” by M. Jacobsen et al.). The work emphasizes the importance of maintaining and enhancing surveillance and early warning systems. Greenland is advised to invest in advanced radar systems and satellite monitoring capabilities. These systems are relevant for detecting potential threats and providing timely information, ensuring national and regional security. I should point out the traditional academic use of the word “securitization,” particularly from the Copenhagen School, which refers to framing an issue as an existential threat requiring extraordinary measures. Thus, securitization is the process by which topics are framed as matters of security that should be addressed with urgency and exceptional measures.
The research work furthermore underscores the Greenlandic need for additional strategic infrastructure development, such as enhancing or building new airport facilities and the associated infrastructure. This would, for example, include expanding and upgrading existing airports to improve connectivity within Greenland and with external partners (e.g., as is happening with the new airport in Nuuk). Such developments would also support economic activities, emergency response, and defense operations. Thus, it combines civic and military applications in what could be defined as dual-purpose infrastructure programs.
The above-mentioned research argues for the need to develop advanced communication systems, Signals Intelligence (SIGINT), and Image Intelligence (IMINT) gathering technologies based on satellite- and aerial-based platforms. These wide-area coverage platforms are critical to Greenland due to its vast and remote areas, where traditional communication networks may be insufficient or impractical. Satellite communication systems such as GEO, MEO, and LEO (and combinations thereof), and stratospheric high-altitude platform systems (HAPS) are relevant for maintaining robust surveillance, facilitating rapid emergency response, and ensuring effective coordination of security as well as search & rescue operations.
Expanding broadband internet access across Greenland is also a key recommendation (and is already in progress today). This involves improving the availability and reliability of connectivity through additional submarine cables and new satellite internet services, ensuring that even the most remote communities have reliable broadband internet connectivity. All communities need to be connected with broadband internet access, which enables economic development, improves quality of life in general, and integrates remote areas into national and global networks. These communication infrastructure improvements are important for civilian and military purposes, ensuring that Greenland can effectively manage its security challenges and leverage new economic opportunities for its communities. It is my personal opinion that, since most communities or settlements are already connected to the wider internet, the priority should be to improve the redundancy, availability, and reliability of the existing critical communications infrastructure. With that also comes more quality in the form of higher internet speeds.
The applicability of at least some of the specific securitization recommendations for Greenland, as outlined in Marc Jacobsen’s “Greenland in Arctic Security: (De)securitization Dynamics Under Climatic Thaw and Geopolitical Freeze,” may be somewhat impractical given the unique characteristics of Greenland, with its vast area and very small population. Quite a few recommendations (in my opinion), even if in place “today or tomorrow,” would require a critical scale of expertise, human, and industrial capital that Greenland does not have available on its own (and also is unlikely to have in the future). Thus, some of the recommendations depend on such resources being delivered from outside Greenland, posing inherent risks to their availability in a crisis (assuming that such capacity would even be available under normal circumstances). This dependency on external actors, particularly Danish and international investors, complicates Greenland’s ability to independently implement policies recommended by the securitization framework. It could lead to conflicts between local priorities and the interests of external stakeholders, particularly in a time of a clear and present security crisis (e.g., Russia attempting to expand west above and beyond Ukraine).
Also, as a result of Greenland’s small population, there will be a limited pool of available local personnel with the needed skills to draw upon for implementing and maintaining many of the recommendations in “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze”. Training and deploying enough high-tech skilled individuals to cover Greenland’s vast territory and technology needs is a very complex challenge given the limited human resources and the difficulty of getting external high-tech resources to Greenland.
I believe Greenland should focus on establishing a comprehensive security strategy that minimizes its dependency on its natural allies and external actors in general. The dual-use approach should be integral to such a security strategy, where technology investments serve civil and defense purposes whenever possible. This approach ensures that Greenlandic society benefits directly from investments in building a robust security framework. I will come back to the various technologies that may be relevant in achieving more independence and less reliance on the external actors that are so prevalent in Greenland today.
HOW CRITICAL IS CRITICAL INFRASTRUCTURE TO GREENLAND.
Communications infrastructure is seen as critical in Greenland. It has to provide a reliable, good-quality service despite Greenland having some of the most unfavorable environmental conditions in which to build and operate communications networks. Greenland is characterized by vast distances between relatively small, isolated communities. This makes effective communication essential for bridging those gaps, allowing people to stay connected with each other as well as with the outside world, irrespective of weather or geography. The lack of a comprehensive road network and the reliance on sea and air travel further emphasize the importance of reliable and available telecommunications services, ensuring timely communication and coordination across the country.
Telecommunications infrastructure is a cornerstone of economic development in Greenland (as it has been elsewhere). It is about efficient internet and telephony services and their role in business operations, e-commerce activities, and international market connections. These aspects are important for the economic growth, education, and diversification of the many Greenlandic communities. The burgeoning tourism industry will also depend on (maybe even demand) robust communication networks to serve tourists, ensure their safety in remote areas, and promote tourism activities in general. This illustrates very firmly that the communications infrastructure is critical (should there be any doubts).
Telecommunications infrastructure also enables distance learning in education and health services, providing people in remote areas with access to high-quality education that otherwise would not be possible (e.g., Coursera, Udemy Academy, …). Telemedicine has obvious benefits for healthcare services that are often limited in remote regions. It allows residents to receive remote medical consultations and services (e.g., by video conferencing) without the need for long-distance and time-consuming travel that may often aggravate a patient’s condition. Emergency response and public safety are other critical areas in which the communications infrastructure plays a crucial role. Greenland’s harsh and unpredictable weather can lead to severe storms, avalanches, and ice-related incidents. It is therefore important to have a reliable communication network that allows for timely warnings, supports rescue operations & coordination, and underpins public safety. Moreover, maritime safety also depends on a robust communication infrastructure, enabling reliable communication between ships and coastal stations.
A strong communication network can significantly enhance social connectivity and help maintain social ties among families and communities across Greenland, thus reducing the feeling of isolation and supporting social cohesion within communities as well as between settlements. Telecommunications can also facilitate sharing and preserving the Greenlandic culture and language through digital media (e.g., Tusass Music), online platforms, and social networks (e.g., Facebook, used by ca. 85% of the eligible population; that number is ca. 67% in Denmark).
For a government and its administration, maintaining effective and reliable communication is essential for well-functioning public services and administration. It should facilitate coordination between different levels of government and remote administration. Additionally, environmental monitoring and research benefit greatly from a reliable and available communication infrastructure. Greenland’s unique environment attracts scientific research, and robust communication networks are essential for supporting data transmission (in general), coordination of research activities, and environmental monitoring. Greenland’s role in global climate change studies should also be supported by communication networks that provide the means of sharing essential climate data collected from remote research stations.
Last but not least, a well-protected (i.e., redundant) and highly available communications infrastructure is a cornerstone of any national defense or emergency response. If it is well functioning, the critical communications infrastructure will support the seamless operation of military and civilian coordination, protect against cyber threats, and ensure public confidence during a crisis (natural or man-made). The importance of investing in and maintaining such critical infrastructure cannot be overstated. It plays a critical role in a nation’s overall security and resilience.
TUSASS: THE BACKBONE OF GREENLAND’S CRITICAL COMMUNICATIONS INFRASTRUCTURE.
Tusass is the primary telecommunications provider in Greenland. It operates a comprehensive telecom network that includes submarine cables with 5 landing stations in Greenland, very long microwave (MW) radio chains (i.e., long-haul backbone transmission links) with MW backhaul branches to settlements along the way, and broadband satellite connections to deliver telephony, internet, and other communication services across the country. The company is wholly owned by the Government of Greenland (Naalakkersuisut), positioning Tusass as the critical company responsible for the nation’s communications infrastructure. Tusass faces unique challenges due to the vast, remote, and rugged terrain. Extreme weather conditions make it difficult, often impossible, to work outside for at least 3 – 4 months a year. This complicates the deployment and maintenance of any infrastructure in general and a communications network in particular. The regulatory framework mandates that Tusass fulfills a so-called Public Service Obligation, or PSO. This requires Tusass to provide essential telecommunications services to all of Greenland, even the most isolated communities, and to continue to invest heavily in expanding and enhancing its critical infrastructure, providing reliable and high-quality services to all residents throughout Greenland.
Tusass is the main and, in most areas, the only telecommunications provider in Greenland. The company holds a dominant market position, providing essential services such as fixed-line telephony, mobile networks, and internet services. The Greenlandic market for internet and data connections was liberalized in 2015. The liberalization allowed private Internet Service Providers (ISPs) to purchase wholesale connections from Tusass and resell them. Despite liberalization, Tusass remains the dominant force in Greenland’s telecommunications sector. Tusass’s market position can be attributed to its extensive communications infrastructure and its government ownership. With a population of 57 thousand and Greenland’s vast geographical size, it would be highly uneconomical, and very challenging in terms of human resources, to have duplicate competing physical communications infrastructures and support organizations in Greenland. Not to mention that it would take many years before an alternative telco infrastructure matching what is already in place could be up and running. Thus, while there are smaller niche service providers, Tusass effectively operates as Greenland’s sole telecom provider.
Figure 4 Illustrates one of many of Tusass’s long-haul microwave sites along Greenland’s west coast. Accessible only by helicopter. Courtesy: Tusass A/S (Greenland).
CURRENT STATE OF CRITICAL COMMUNICATIONS INFRASTRUCTURE.
The illustration below provides an overview of some of the major and critical infrastructures available in Greenland, with a focus on the communications infrastructure provided by Tusass, such as submarine cables, microwave (MW) radio chains, and satellite ground stations, which together connect Greenland and give all of Greenland access to the internet.
Figure 5 illustrates the Greenlandic telecommunications provider Tusass infrastructure. Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above, location only indicative) provide more than 80% of Greenland’s electricity demand. A new international airport is expected to be operational in Nuuk from November 2024. Source: from Tusass Annual Report 2023 with some additions and minor edits.
From south of Nanortalik up to above Upernavik on the west coast, Tusass has a 1,700+ km long microwave radio chain connecting all settlements along Greenland’s west coast from south to north, supported by 67 microwave (MW) radio sites. Thus, there is microwave radio equipment roughly every 25 km, ensuring very high performance and availability of connectivity to the many settlements along the west coast. This setup is called a long-haul microwave chain, using a series of MW radio relay stations to transmit data over long distances (e.g., up to thousands of kilometers). The harsh climate with heavy rain, snow, and icing makes it very challenging to operate high-frequency, high-bandwidth microwave links (hence the short distances between the radio chain sites). The MW radio sites are mainly located on remote peaks in the harsh and unforgiving coastal landscape (ensuring line-of-sight), making helicopters the only means of accessing these locations for maintenance and fueling. The field engineers here are pretty much superheroes, maintaining the critical communications infrastructure of Greenland and understanding its life-and-death implications for all the remote communities if it breaks down (with the additional danger of meeting a very hungry polar bear or being stuck for several days at a location due to poor weather preventing the helicopter from picking the engineers up again).
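As a sanity check on why these sites sit on peaks, and why hops of ca. 25 km are comfortable, here is a minimal Python sketch of the standard radio-horizon approximation (the effective-earth-radius model with k = 4/3; the 30 m antenna heights are purely hypothetical, not Tusass site data):

```python
import math

EARTH_RADIUS_KM = 6371.0

def radio_horizon_km(antenna_height_m: float, k_factor: float = 4 / 3) -> float:
    """Approximate radio horizon for one antenna, using the effective
    earth radius model (k = 4/3 is the common assumption for standard
    atmospheric refraction)."""
    r_eff_km = k_factor * EARTH_RADIUS_KM
    # d = sqrt(2 * R_eff * h), with the antenna height converted from m to km
    return math.sqrt(2 * r_eff_km * antenna_height_m / 1000.0)

def max_line_of_sight_km(h1_m: float, h2_m: float) -> float:
    """Maximum line-of-sight hop length between two antennas."""
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

# Two hypothetical 30 m masts on flat ground already give ~45 km of
# line-of-sight, well above the ~25 km average hop of the chain; siting
# the radios on mountain peaks extends the horizon much further.
print(f"{max_line_of_sight_km(30, 30):.0f} km")
```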
Figure 6 illustrates a typical housing for field service staff when on site visits. As the weather can change very rapidly in Greenland it is not uncommon that field service staff have to wait for many days before they can be picked up again by the helicopter. Courtesy: Tusass A/S (Greenland).
Greenland relies on the “Greenland Connect” submarine cable to connect to the rest of the world and the wider internet with modern-day throughput. The submarine cable connecting Greenland to Canada and Iceland runs from Newfoundland and Labrador in Canada to Nuuk and continues from Qaqortoq in Greenland to land in Iceland (which connects further to Copenhagen and the wider internet). Tusass, furthermore, has deployed submarine cables between 5 of the major Greenlandic settlements, including Nuuk, up the west coast and down to the south (i.e., Qaqortoq). The submarine cables provide some level of redundancy, increased availability, and substantial capacity & quality augmentation to the long-haul MW chain that carries the traffic from surrounding settlements. The submarine cables are critical and essential for the modernization and digitalization of Greenland. However, there are only two main submarine broadband cable connection points to and from Greenland: the Canada – Nuuk and Qaqortoq – Iceland submarine connections. From a security perspective, this poses substantial and unique risks to Greenland, and its role and impact need to be considered in any work on critical infrastructure strategy. If both international submarine cables were compromised, intentionally or otherwise, it would become challenging, if not impossible, to sustain today’s communications demand. Most traffic would have to be supported by existing satellite capacity, which is substantially lower than what the existing submarine cables can support, leaving the capacity mainly for mission-critical communications and allowing little spare capacity for consumer and non-critical business communication needs. This said, as long as the Greenlandic submarine cables, terrestrial transport, and switching infrastructure are functional, it would be possible, internally to Greenland, to maintain a semblance of internet services and communication between connected settlements, given modern-day network design thinking.
Moreover, while the submarine cables along the west coast offer redundancy to the land-based long-haul transport solution, there are substantial risks to settlements and their populations where the long-haul MW solution is the only means of supporting remote Greenlandic communities. Given Greenland’s unique geographic and climate challenges, it is not only very costly but also time-consuming to reduce the risk of disruption to the existing, less redundant critical infrastructure already in place (e.g., above Aasiaat, north of the Arctic Circle).
Using satellites is an additional dimension, and part of the connectivity toolkit, that can be used to improve the redundancy and availability of the land- and water-based critical communications infrastructures. However, the drawback of satellite systems is that they generally are bandwidth/throughput limited and have longer signal delays (latency and round-trip time) than terrestrial communications systems. These issues could pose some limitations on how well some services can be supported or will function, and would require a versatile traffic management & prioritization system in case the satellite solution were the only means of connecting a relatively high-traffic area (e.g., Tasiilaq) used to ground-based broadband transport with substantially more available bandwidth than is accessible via the satellite solution. Particularly for geostationary (GEO) satellite services, with the satellite located at an altitude of ca. 36 thousand kilometers, the data traffic flow needs to be carefully optimized in order to function well despite the substantial latency experienced on such connections, which at the very best is 239 milliseconds and in practice might be closer to twice that or more. This poses significant challenges, particularly to TCP/IP data flows on such response-time-challenged connections and to applications sensitive to short round-trip times.
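The latency figures above follow directly from geometry. A minimal sketch, ignoring processing, queuing, and ground-segment delays (so real-world numbers will be somewhat higher):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(altitude_km: float) -> float:
    """Best-case one-way delay (ground -> satellite -> ground),
    assuming the satellite sits directly overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1_000

for name, altitude_km in [("GEO", 35_786), ("LEO", 550)]:
    one_way = one_way_delay_ms(altitude_km)
    print(f"{name}: {one_way:.1f} ms one-way, {2 * one_way:.1f} ms RTT")

# GEO: 238.7 ms one-way, 477.5 ms RTT  (the "239 ms at the very best")
# LEO:   3.7 ms one-way,   7.3 ms RTT  (practical LEO RTTs are higher,
#        due to inter-satellite routing and the ground segment)
```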
Optimizing and stabilizing TCP/IP data flows over GEO satellite connections requires a multi-faceted approach involving enhancements to the TCP protocol (e.g., window scaling, SACK, TCP Hybla, …), the use of hybrid and proxy solutions, application-layer adjustments, error correction mechanisms, Quality of Service (QoS) and traffic shaping, DNS optimizations, and continuous network monitoring. Combining these strategies makes it possible to mitigate some of the inherent challenges of high-latency satellite links and ensure more effective and efficient IP flows and better utilization of the available satellite link bandwidth. Optimizing control signals and latency-sensitive data flows over GEO and LEO satellite connections may also substantially reduce the sensitivity to the prohibitively long delays experienced on GEO connections, by using the lower-latency LEO connection (RTT < ~ 50 ms @ 500 km altitude) or, if available, a better alternative such as a long-haul microwave link or submarine connection.
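To see why window scaling and similar TCP enhancements matter so much here, consider the bandwidth-delay product of a GEO link. The link speed and RTT below are hypothetical illustration values, not Tusass figures:

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: the number of bytes that must be
    'in flight' to keep the link fully utilized."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

# Hypothetical 50 Mbps GEO link with an effective RTT of 550 ms:
print(f"BDP: {bdp_bytes(50, 550) / 1e6:.1f} MB")  # ~3.4 MB window needed

# Without TCP window scaling, the receive window is capped at 64 KB,
# which limits throughput to window / RTT regardless of link speed:
capped_mbps = 64e3 * 8 / 0.55 / 1e6
print(f"Max throughput with a 64 KB window: {capped_mbps:.1f} Mbps")  # ~0.9 Mbps
```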
Tusass, in collaboration with the Spanish satellite company Hispasat, makes use of the Greenland geostationary satellite, Greensat. Tusass signed an agreement with Hispasat to lease space capacity (800 MHz @ Ku-band) on the Amazonas Nexus satellite until the end of its lifetime (i.e., 2038+/-). Greensat was taken into operation in the last quarter of 2023 (note: it was launched in February 2023), providing services to the satellite-only settlement areas around Qaanaaq, the northernmost settlement on the west coast of Greenland, and Tasiilaq and Ittoqqortoormiit (north of Tasiilaq) on the remote east coast. All mobile and fixed traffic from a satellite-only area is routed to a satellite ground station that is connected to the geostationary satellite (see the illustration below). The satellite’s primary mission is to provide broadband services to areas that, due to geography & climate and cost, are impractical to connect by submarine cable or long-haul microwave links. The Greensat satellite closes the connection to the rest of the world and the internet via a ground station on Gran Canaria. It also connects to Greenland via submarine cables in Nuuk (via Canada and Qaqortoq).
Figure 7 The image shows a large geostationary satellite ground-station antenna located in Greenland’s cold and remote area. The antenna’s primary purpose is to facilitate communication with geostationary satellites 36 thousand kilometers away, transmitting and receiving data. It may support various services such as Internet, television broadcasting, weather monitoring, and emergency communications. The components are (1) a parabolic reflector (dish), (2) a feed horn and receiver, (3) a mount and support structure, (4) control and monitoring systems, and (5) a radome (not shown on the picture) which is a structural, weatherproof enclosure that protects the antenna from environmental elements without interfering with the electromagnetic signals it transmits and receives. The LEO satellite ground stations are much smaller as the distance between the ground and the low-earth satellite is much smaller, i.e., ca. 350 – 650 km, resulting in less challenging receive and transmit conditions (compared to the connection to a geostationary satellite).
In addition, Tusass also makes use of UK-based OneWeb (Eutelsat) LEO satellite backhaul services at several locations, where an area’s fixed and mobile traffic is routed to a point of presence connected to a satellite ground station, which connects via a OneWeb satellite to the central switching center in Nuuk (itself connected to another ground station).
CRITICAL PROPERTIES FOR RELIABLE AND SECURE TRANSPORT NETWORKS.
A physical transport network comprises many tangible components, such as cables, routers, and switches, which form an interconnected system capable of transmitting data. The network is designed and planned according to a given expected coverage, use, and level of targeted quality (e.g., speed, latency, priority, and security). Moreover, we are also concerned about such a network’s availability as well as its reliability. We design the physical and logical (i.e., related to higher levels of the OSI stack than the physical) network according to a given target availability, that is, the minimum number of hours in a year the network should be operational and available to our customers. You will see availability given as a percentage of the total hours in a year (e.g., 8,760 hours in a normal year and 8,784 hours in a leap year). So an availability of 99.9% means that we target a minimum operational time of our network of 8,751 hours, or, alternatively, accept a maximum of ca. 9 hours of downtime. The reliability of a network refers to the probability that the network will continue to function without failure for a given period. For example, say you have a mean time between failures (MTBF) of 8,750 hours and you want to figure out the likelihood of operating without failure for 4,380 hours (half a year); you find that there is a ca. 60% chance of operating without a failure (or 40% that a failure may occur within the next 6 months). For critical infrastructure, the availability and reliability metrics are very important to consider in any design and planning process.
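Both metrics are easy to reproduce. A minimal sketch using the same numbers as above, and assuming (as the MTBF formulation implies) a constant failure rate, i.e., an exponential reliability model:

```python
import math

HOURS_PER_YEAR = 8_760  # non-leap year

def max_downtime_hours(availability: float) -> float:
    """Maximum yearly downtime consistent with an availability target."""
    return HOURS_PER_YEAR * (1.0 - availability)

def reliability(mtbf_hours: float, mission_hours: float) -> float:
    """Probability of no failure over `mission_hours`, assuming a
    constant failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

print(f"99.9% availability -> max {max_downtime_hours(0.999):.1f} h downtime/year")
print(f"R(half a year | MTBF = 8,750 h) = {reliability(8_750, 4_380):.0%}")
# -> max 8.8 h downtime/year, and a ~60% chance of a failure-free half year
```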
In contrast to the physical network depiction, a network graph representation abstracts the physical transport network into a mathematical model where graph nodes (or vertices) represent the network’s many components and edges (or links) represent the physical and logical connections between those components. Modeling the physical (and logical) network in this way allows designers and planners to study in detail a network’s robustness against many types of disruption, as well as its general functioning and performance.
Suppose we are using a graph approach in our design of a critical communications network. We then need to carefully consider various graph properties critical for the network’s robustness, security, reliability, and efficiency. To achieve this, one must strive for resilience and fault tolerance by designing for increased redundancy and availability involving multiple paths, edges, or connections between nodes, preventing single points of failure (SPoF). This involves creating a network where the number of independent paths between any two nodes is maximized (often subject to economic and feasibility boundary conditions). An optimal average node degree should also be a design criterion, as a higher node degree enhances the graph’s, and thus the underlying network’s, resilience, avoiding increased vulnerability.
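As a small illustration of the independent-paths criterion, the number of node-disjoint paths between two nodes can be computed directly with networkx; the toy topology below is hypothetical, not the Tusass graph:

```python
import networkx as nx

# Toy topology: a chain A-B-C-D with one redundant edge A-C.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")])

# Node connectivity = number of node-disjoint paths between two nodes;
# a value of 1 means a single point of failure (SPoF) sits in between.
print(nx.node_connectivity(G, "A", "C"))  # 2 -> no SPoF between A and C
print(nx.node_connectivity(G, "A", "D"))  # 1 -> node C is a SPoF for A-D
```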
Scalability is a crucial network property. This is best achieved through a hierarchical structure (or topology) that allows for efficient network management as the network expands. Modularity, another graph KPI, ensures that the network can integrate new nodes and edges without major reconfigurations, supporting civilian expansion and military operations or dual-purpose operations. To meet low-latency and high-throughput requirements, shortest-path routing algorithms should be applied to minimize the latency or round-trip time (and thus increase throughput). Moreover, bandwidth management should be implemented, allowing the network to handle large data volumes in a prioritized manner (if required). This also ensures that the network can accommodate peak loads and prioritize critical communication when the network is compromised.
Security is a paramount property of any communications network. In today’s environment with many real and dangerous cyber threats, it may be one of the most important topics to consider. Each node and link (or edge) in a network requires robust defenses against cyber threats. In our design, we need to think about encryption, authentication, intrusion, and anomaly detection systems. Network segmentation will help isolate critical defense communications from civilian traffic, preventing breaches from compromising the entire network. Survivability is enhanced by minimizing the Network Diameter, a graph property. A low (or lower) network diameter ensures that a network can quickly reroute traffic in case of failures and is an important design element for robustness against targeted attacks and random failures.
Likewise, interoperability is essential for seamless integration between civilian and military communication systems. Flexible protocols and specifications (e.g., Open API) enable different types of traffic and varying security requirements. These frameworks provide the structure, tools, and best practices needed to build and maintain secure communication systems, thereby protecting against the cyber threats of today and those expected in the future. Efficiency is achieved through effective load balancing (e.g., on a logical as well as a physical level) to distribute traffic evenly across the network, prevent bottlenecks, optimize performance, and design for energy-efficient operations, particularly in remote or harsh environments or in case a part of the network has been compromised.
To support both civilian services and defense operations, accessibility and high availability are very important design requirements for a network with extensive, large-scale coverage, including in very remote areas. Incorporating redundant communication links, such as satellite, fiber optic, and wireless, is a design choice that allows for high availability even under adverse and disruptive conditions. It makes good sense in an environment such as Greenland to ensure that long-haul microwave links have a given level of redundancy, whether by satellite backhaul, submarine cable, or additional MW capacity. While we always strive for our designs to be cost-effective, this may be a challenge if the circumstances dictate that the best redundancy (availability) solution is non-terrestrial (e.g., by satellite or submarine means). However, efficiency should be addressed by optimizing resource allocation to balance cost with performance, ensuring civil and defense needs are met without excessive expenditure, and sharing infrastructure where feasible to reduce costs while maintaining security through logical separation.
Ultra-secure transport networks are designed to meet stringent reliability, resilience, and security requirements. These types of networks are critical for civil and defense applications, ensuring continuous operation and protection against various threats. The important graph properties that such networks should exhibit include high connectivity, redundancy, low diameter, high node degree, network segmentation, robustness to attacks, scalability, efficient load balancing, geographical diversity, and adaptive routing.
High connectivity ensures multiple independent paths between any pair of nodes in the network, which is crucial for a communication network’s resilience and fault tolerance. This allows the network to maintain functionality even if several nodes or links fail, making it capable of withstanding targeted attacks or random failures without significant performance degradation. Redundancy, which involves having multiple backup paths and nodes, enhances fault tolerance and high availability by providing alternative routes for data transmission if primary paths fail. Redundancy also applies to critical network components such as switches, routers, and communication links, ensuring no or uncritical single point of failure.
A low diameter (the diameter being the longest shortest path between any two nodes) ensures data can travel quickly across the network, minimizing latency. This is especially important in time-sensitive applications. A high node degree, meaning nodes are connected to many other nodes, increases the network’s robustness and allows for multiple paths for data to traverse, contributing to security and availability. However, it is essential to manage the trade-off between a high node degree and the complexity of the network.
Network segmentation and compartmentalization will enhance security by limiting the impact of breaches or failures on a small part of the network. This is of particular importance when having a dual-use network design. Network segmentation divides the network into multiple smaller subnetworks. Each segment may have its own security and access control policies. Network compartmentalization involves designing isolated environments where, for example, data and functionalities are separated based on their criticality and sensitivity (this is, in general, a logical separation). Both strategies help contain cyber threats as well as prevent them from spreading across an entire network. Moreover, it also allows for a more granular control over network traffic and access. With this consideration, we should have a network that is robust against various types of attacks, including both physical and cyber attacks, by using secure protocols, encryption, authentication mechanisms, and intrusion detection systems. The aim of the network topology should be to minimize the impact of potential attacks on critical network nodes and links.
In a country such as Greenland, with settlements spread out over a very long distance and supported by very long and exposed transmission links (e.g., long-haul microwave links), geographical diversity is an essential design consideration that allows us to protect the functioning of services against localized disasters or failures. Typically, this involves distributing switching and management nodes, including data centers, across different geographic locations, ensuring that a failure in one area or with a main transport link does not disrupt the major parts of a network. This is particularly important for disaster recovery and business continuity. Finally, the network should support adaptive and dynamic routing protocols that can quickly respond to changes in the network topology, such as node failures or changes in traffic patterns. Such protocols will enhance the network’s resilience by automatically finding the best real-time data transmission paths.
TUSASS NETWORK AS A GRAPH.
Real maps, such as the Greenland map shown below on the left side of Figure 8, provide valuable geographical context and are essential for understanding the physical layout and extent of, for example, a transport network. A graph representation, as shown on the right side of Figure 8, on the other hand, offers a powerful and complementary perspective on the real-world network topology. It can emphasize the structural properties (and qualities) without them disappearing in geographical details that are often not relevant to the network’s functioning (if designed appropriately). A graph can contain many layers of network information, describing pretty much the whole network stack if required (e.g., from physical transport up through IP, TCP/IP, and to the application layers). It also supports many types of advanced analysis, design scenarios, and different types of simulations. A graph representation of a communications network is an invaluable tool for network design, planning, troubleshooting, analysis, and management.
Thus, the network graph approach offers several benefits for planning and operations. Firstly, the approach can often visualize the network’s topology better than a geographical map. It facilitates the understanding of various network (and graph) relationships and interconnections between the various network components. Secondly, graph algorithms can be applied to the network graph to support the analysis of its characteristics, such as availability and redundancy scores, connectivity in general, shortest paths, and so forth. This kind of analysis helps us identify critical nodes or links that may be sensitive to network and service disruption. It can also help significantly in maintaining and optimizing a network’s operation.
So, analyzing our communication network’s graph representation makes it possible to identify potential weaknesses in the physical transport network, such as single points of failure (SPoF), bottlenecks, or areas with limited or weak redundancy. These identified weaknesses can then be addressed to enhance the network’s resilience, e.g., improving our network’s redundancy and availability, and thus its overall reliability.
Figure 8 The chart above shows on the left side the topology of the (real) transport network of Tusass, with the reference points being the Greenlandic settlements it connects. It should be noted that the actual transport network is slightly different, as there are more hops between settlements than shown here. On the right side is a graph representation of the Tusass transport network shown on the left. The network graph represents the transport network on the west coast, north- and southbound. There are three main connection categories: (black dashed line) Microwave (MW), (orange dashed line) Submarine Cable, and (blue solid line) Satellite, of which there are both GEO and LEO arrangements. The size of a node, or settlement, represents the size of its population, which is also why Nuuk has the largest circle. The graph has been drawn with the Kamada-Kawai layout, which is particularly useful for small to medium graphs, providing a reasonable, intuitive visualization of the structural relationship between nodes.
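For readers who want to reproduce this kind of figure, a minimal networkx sketch is shown below. The settlements, edges, and node sizes form a heavily simplified, partly hypothetical subset used purely for illustration (the populations are indicative only):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical, heavily simplified subset of the west-coast topology;
# the "kind" attribute mimics the three connection categories of Figure 8.
G = nx.Graph()
G.add_edges_from([
    ("Qaqortoq", "Nuuk", {"kind": "submarine"}),
    ("Nuuk", "Maniitsoq", {"kind": "microwave"}),
    ("Maniitsoq", "Sisimiut", {"kind": "microwave"}),
    ("Sisimiut", "Aasiaat", {"kind": "microwave"}),
    ("Nuuk", "Aasiaat", {"kind": "submarine"}),
    ("Nuuk", "Tasiilaq", {"kind": "satellite"}),
])

# Indicative populations only, used to scale node sizes as in Figure 8.
population = {"Nuuk": 18_000, "Sisimiut": 5_500, "Aasiaat": 3_000,
              "Qaqortoq": 3_000, "Maniitsoq": 2_500, "Tasiilaq": 2_000}

pos = nx.kamada_kawai_layout(G)  # the layout used for Figure 8
nx.draw(G, pos, with_labels=True, font_size=8,
        node_size=[population[n] / 20 for n in G.nodes])
plt.show()
```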
In the following, it is important to understand that due to Greenland’s specific conditions, such as weather and geography, building a transport network that is robust in terms of reliability and redundancy will always be challenging, particularly when relying on the standard toolbox for designing, planning, and creating such networks. Geographical challenges should here be understood to include the resulting lack of civil infrastructure connecting settlements … such as the lack of a road network.
The Table below provides key performance indicators (KPIs) for the Greenlandic (Tusass) transport network graph, as illustrated in Figure 8 above. It represents various aspects of the transport network’s structure and connectivity. This graph consists of 93 vertices (e.g., settlements and other connection points, such as long-haul MW radio sites) and 101 edges (transport connections), and it is fully connected, meaning all nodes are reachable within the network. There is only one subgraph, indicating no isolated segments as expected.
The Average Path Length suggests that it takes on average 39 steps to travel between any two nodes. This is a relatively high number, which may indicate a less efficient network. The Diameter of a network is defined as the longest shortest path between any two nodes. It can be shown that the value of the diameter lies between the value of the radius and twice that value (and not higher;-). The diameter is found to be 32, indicating a quite high maximum distance between the most distant nodes. This suggests that the network has a quite extensive reach, as is also obvious from the various illustrations of the transport network above (Figure 8) and below (Figures 11 & 12). Apart from the fact that such a high diameter may indicate potential inefficiencies, a large diameter can also mean that, in worst-case scenarios, such as a compromised link or connectivity issues in general, communication between some nodes involves many steps (or hops), potentially leading to higher latency and slower data transmission. Related to the Diameter, the network Radius is the minimum eccentricity of any node, which is the shortest path from the most central node to the farthest node. Here, we find the radius to be 16, which means that even the most centrally located node is relatively far from some other nodes in the network, something that is also very obvious from the various illustrations of the transport network. This emphasizes that the network has nodes that are significantly far apart. Without sufficient redundancy in place, such a transport network may be more sensitive to disruption of connectivity.
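These distance KPIs are straightforward to compute on any connected graph. A minimal sketch on a toy chain-like graph standing in for the real 93-node topology:

```python
import networkx as nx

# Toy chain A-B-C-D-E with one shortcut A-C, mimicking a sparse,
# elongated topology like the west-coast chain.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "C")])

print("radius:  ", nx.radius(G))                                  # 2
print("diameter:", nx.diameter(G))                                # 3
print("avg path:", round(nx.average_shortest_path_length(G), 2))  # 1.7
# The relation quoted above always holds:
assert nx.radius(G) <= nx.diameter(G) <= 2 * nx.radius(G)
```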
From the perspective of redundancy, a large diameter and radius may imply that the network has fewer alternative paths between distant nodes (i.e., a lower redundancy score). This is, for example, the case between the northern point of Kullorsuaq and Aasiaat. Aasiaat is the first settlement (from the North) to be connected both by microwave and submarine cable and thus has an alternative connectivity solution to the long-haul microwave chain. If a critical node or link fails, the alternative path latency might be considerably longer than the compromised connectivity, such as would be the case with the alternative connectivity being satellite-based, leading to inefficiencies and possible reduced performance. This can also suggest potential capacity bottlenecks where specific paths are heavily relied upon without having enough capacity to act as the sole connectivity for a given transmission path. Thus, the vulnerability of the network to failures increases, resulting in reduced performance for customers in the affected area.
We find a Graph Density of 0.024. This value indicates a sparse network with relatively few connections compared to the number of possible connections. The Clustering Coefficient is 0.014, indicating that there are very few tightly-knit groups of nodes (again easily confirmed by visual inspection of the graph itself; see the various figures). The Average Betweenness (ca. 423) measures how often nodes act as bridges along the shortest path between other nodes, indicating a significant central node (i.e., Nuuk).
The Average Closeness of 0.0003 and the Average Eigenvector Centrality of 0.105 provide insights into settlements’ influence and accessibility within the transport network. The Average Closeness measures how close, on average, nodes are to each other. A high value indicates that nodes (or settlements) are close to each other, meaning that the information (e.g., user data, signaling) being transported over the network spreads quickly and efficiently; not surprisingly, the opposite is the case for a low average value. For our Tusass network, the average closeness is very low and suggests that the network may face challenges in accessibility and efficiency, with nodes (settlements) being relatively far from one another. This will typically have an impact on the speed and effectiveness of communication across the network. The Average Eigenvector Centrality measures the overall importance (or influence) of nodes within a network. The term eigenvector is a mathematical concept from linear algebra that represents the stable state of the network and provides insights into the structure of the graph and thus the network. For our Tusass network, the average eigenvector value is (very) low and indicates a distribution of influence across several nodes, which may actually prevent reliance on a single point of failure; in general, such structures are thought to enhance a network’s resilience and redundancy. An Average Degree of ca. 2 means that each node has about 2 connections on average, indicating a hierarchical network structure with fewer direct connections and a somewhat low level of redundancy, consistent with what can be observed from the various illustrations shown in this post. This does indicate that our network may be more vulnerable to disruptions and failures and may have a relatively high latency (thus, a high round-trip time).
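All the KPIs discussed in the last two paragraphs have direct networkx equivalents. On the same toy graph as before (the real values quoted above come, of course, from the full 93-node graph):

```python
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "C")])

def avg(d: dict) -> float:
    """Average of a node -> value mapping."""
    return sum(d.values()) / len(d)

print("density:       ", round(nx.density(G), 3))
print("clustering:    ", round(nx.average_clustering(G), 3))
print("betweenness:   ", round(avg(nx.betweenness_centrality(G, normalized=False)), 2))
print("closeness:     ", round(avg(nx.closeness_centrality(G)), 3))
print("eigenvector:   ", round(avg(nx.eigenvector_centrality(G)), 3))
print("average degree:", round(avg(dict(G.degree())), 2))
```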
Say that, for some reason, the connection to Ilulissat, a settlement north of Aasiaat on the west coast with a little under 5 thousand people, is disrupted due to a connectivity issue between Ilulissat and Qasigiannguit, a neighboring settlement of ca. a thousand people. Today, this would disconnect ca. 11 thousand people, or ca. 20% of Tusass’s customer base, from communications services, as all settlements north of Ilulissat would likewise be disconnected: they rely on the broken connection to transport their data towards Nuuk and onwards to the internet via the submarine cables out of Greenland. In the terminology of the network graph, a broken connection (or edge, as it is called in graph theory) that breaks the network into two (or more) disconnected parts is called a Bridge. Thus, the connection between Ilulissat and Qasigiannguit is a bridge: if it breaks, it disconnects the northern part of the long-haul microwave network above Ilulissat. Similarly, if Ilulissat, as a central switching hub, were disrupted, it would disconnect the upper northern network from the network south of Ilulissat, and we would call Ilulissat an Articulation Point. For example, a submarine cable between Aasiaat and Ilulissat would provide redundancy for this particular event, mitigating a disruption of the microwave long-haul network between Ilulissat and Aasiaat that would otherwise disconnect at least 20% of the population from communications services.
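The bridge argument can be made concrete with a few lines of NetworkX. The mini-topology and population figures below are placeholders chosen to mirror the narrative, not Tusass’s actual numbers:

```python
# Hypothetical mini-topology and placeholder populations, purely to
# illustrate the bridge example above; not Tusass's real figures.
import networkx as nx

G = nx.Graph([
    ("Nuuk", "Aasiaat"), ("Aasiaat", "Qasigiannguit"),
    ("Qasigiannguit", "Ilulissat"), ("Ilulissat", "Uummannaq"),
    ("Uummannaq", "Kullorsuaq"),
])
population = {"Ilulissat": 4700, "Uummannaq": 1400, "Kullorsuaq": 450}  # placeholders

# The Qasigiannguit-Ilulissat edge is a bridge: removing it splits the graph.
bridges = {frozenset(e) for e in nx.bridges(G)}
assert frozenset({"Qasigiannguit", "Ilulissat"}) in bridges

G.remove_edge("Qasigiannguit", "Ilulissat")
cut_off = nx.node_connected_component(G, "Ilulissat")  # the northern component
print(sorted(cut_off), sum(population.get(n, 0) for n in cut_off))
```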
The transport network has 44 Articulation Points and 57 Bridges, highlighting vulnerabilities where node or link failures could disconnect major parts of the network and thus disrupt services. A Modularity of 0.65 suggests a moderately strong community structure, with the network divided into 8 such communities (see the figure below).
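Counting articulation points, bridges, and communities follows the same pattern. The sketch below uses a simple stand-in graph, and greedy modularity maximization as one of several possible community-detection methods (the text does not state which algorithm was actually used):

```python
# Counting the structural-vulnerability KPIs with NetworkX on a stand-in graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.path_graph(10)  # stand-in; the real analysis uses the 93-node settlement graph

aps = list(nx.articulation_points(G))   # nodes whose removal disconnects the graph
brs = list(nx.bridges(G))               # edges whose removal disconnects the graph
comms = greedy_modularity_communities(G)

print(f"{len(aps)} articulation points, {len(brs)} bridges")
print(f"{len(comms)} communities, modularity = {modularity(G, comms):.2f}")
```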
Figure 9 In network analysis, a “natural” community (or cluster) is a group of nodes that are more densely connected to each other than to nodes outside the group; natural communities are denser subgraphs within a larger network. Identifying such communities helps in understanding the structure and function of the network. The above analysis of how Tusass’s transport network connects the various settlements illustrates quite well the various categories of connectivity (e.g., long-haul microwave only, submarine cable redundancy, satellite redundancy, etc.) in the communications network of Tusass.
A Throughput (or total Degree) of 202 indicates the network’s overall capacity for data transmission. The degree of a node is its number of direct connections; in our transport network, a settlement’s degree indicates how many direct connections it has to other settlements, and the total degree is the sum over all nodes. A higher degree implies better connectivity and potentially higher resilience and redundancy. In a fully connected network with 93 nodes, the total degree would be 93 multiplied by 92, which equals 8,556. Therefore, a value of 202 is quite low in comparison, indicating that the network is far from fully connected, which would anyway be unusual for a transport network of this size. Our transport network is relatively sparse, resulting in a low total degree and suggesting that few direct paths exist between nodes. This may potentially also mean less overall network redundancy. In the case of a node or link failure, there might be fewer alternative routes, which, as a consequence, can impact network reliability and resilience. Lower degree values can also indicate limited capacity for data transmission between nodes, potentially leading to congestion or bottlenecks if certain paths become over-utilized. This can, of course, affect the efficiency and speed of data transfer within the network as traffic congestion levels increase.
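As a quick arithmetic sanity check on the figures quoted above (93 settlements, 101 connections):

```python
# Arithmetic check with the figures quoted in the text.
n_nodes, n_edges = 93, 101                 # settlements and transport connections
total_degree = 2 * n_edges                 # = 202, the total "Throughput (or Degree)"
complete_total = n_nodes * (n_nodes - 1)   # = 8,556 in a fully meshed network
print(total_degree, complete_total)
print(total_degree / complete_total)       # ~0.024, the Graph Density quoted earlier
```

Note that the ratio of the two values reproduces the Graph Density of 0.024 reported above, which is a useful internal consistency check.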
The KPIs, shown in Table 1 below, collectively indicate that our Greenlandic transport network has several critical points and connections that could affect redundancy and availability, particularly if they become compromised or experience outages. The high number of articulation points and bridges indicates possible design weaknesses, while the low density and average degree suggest a limited level of redundancy. In fact, Tusass has, over several years, improved its transport network resilience, focusing on increasing the level of redundancy and reducing critical single points of failure. However, the changes and additions are costly and, due to the environmental conditions of Greenland, also time-consuming, as fewer working days are available for outdoor civil work projects.
Table 1 illustrates the most important graph KPIs, also described in the text above and below, associated with the graph representation of the Tusass transport network as captured by settlement connectivity (approximating, but not one-to-one with, the actual transport network).
In graph theory, an articulation point (see Figure 10 below) is a node that, if removed from the network, would split the network into disconnected parts. In our story, an articulation point would be one of our Greenlandic settlements. These points are thus important in maintaining network connectivity and mark the places in the network where alternative redundancy schemes might serve well. Therefore, creating additional redundancy in the network’s routing paths and implementing alternative connections will mitigate the impact of a failure at an articulation point, ensuring continued operations in case of a disruption. Basically, the more redundancy a network has, the fewer articulation points it will have; see also the illustration below.
Figure 10 The figure above illustrates the redundancy and availability of 3 simple undirected graphs with 4 nodes. The first graph is fully connected, with no articulation points or bridges, resulting in a redundancy and availability score of 100%; I can remove any node or connection from the graph, and the remainder will stay fully connected. The second graph, which is partly connected, has one articulation point and one bridge, leading to a redundancy and availability score of 75%. If I remove the third node or the connection between Node 3 and Node 4, I end up with a disconnected Node 4 and a graph broken into two parts (e.g., if Node 3 is removed, we have the two sub-graphs {1,2} and {4}). The third graph, also partly connected, contains two articulation points and three bridges, resulting in a redundancy score of 0% and an availability score of 50%. Articulation points and bridges are highlighted in red to emphasize their critical roles in graph connectivity. Note: An articulation point is a node whose removal disconnects the graph, and a bridge is an edge whose removal disconnects the graph.
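The three scores in Figure 10 can be verified directly, using the availability and redundancy definitions given further below in the text; a minimal sketch:

```python
# Verifying the three 4-node examples from Figure 10.
import networkx as nx

def scores(G: nx.Graph) -> tuple[float, float]:
    """Availability = share of nodes that are not articulation points;
    Redundancy = share of edges that are not bridges (formulas given below)."""
    aps = set(nx.articulation_points(G))
    brs = list(nx.bridges(G))
    availability = (G.number_of_nodes() - len(aps)) / G.number_of_nodes()
    redundancy = (G.number_of_edges() - len(brs)) / G.number_of_edges()
    return availability, redundancy

print(scores(nx.complete_graph(4)))                        # (1.0, 1.0)   -> 100% / 100%
print(scores(nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])))  # (0.75, 0.75) -> 75% / 75%
print(scores(nx.path_graph(4)))                            # (0.5, 0.0)   -> 50% / 0%
```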
Careful consideration of articulation points is crucial in preventing network partitioning, where removing a single node can disconnect the overall network into multiple sub-segments. The connectivity between different segments is obviously critical for continuous data flow and service availability. Often, design and planning requirements dictate that if a network is broken into parts due to various disruption scenarios, these parts must remain functional and continue to provide a service, possibly with reduced performance. Network designers make use of different strategies, such as increasing the physical redundancy of the transmission network as well as applying higher-level routing mechanisms, such as multipath routing and diverse routing paths. Moreover, optimizing the placement of articulation points and routing paths (i.e., how traffic flows through the communications network) also maximizes resource utilization and may ensure optimal network performance and service availability for an operator’s customers.
Figure 11 illustrates the many articulation points of our Greenlandic settlements, represented as red stars in the graph of the Greenlandic transport network. Removing an articulation point (a critical node) would partition the graph into multiple disconnected components and may lead to severe service interruptions.
In graph theory, a bridge is a network connection (or edge) whose removal would split the graph into multiple disconnected components. This type of connection is obviously critical for maintaining connectivity and facilitating communication between different network parts. In real life, with real networks, network designers generally spend considerable time ensuring that such critical connections (i.e., so-called bridges) do not have a disproportionate impact on network availability, for example, by building alternative (i.e., redundant) connections or by ensuring that a compromised bridge would have a minimal impact in terms of the number of customers affected.
For our transport network in Greenland, the long-haul microwave transport network is overall less sensitive to disruption at the settlement level, as the underlying topology resembles a long, high-capacity spine with reasonable redundancy built in, and with branches of MW radios connecting the spine to particular settlements. Thus, in most cases in this analysis, the long-haul MW radio site in proximity to a given settlement is the actual articulation point (not the settlement itself). The Nuuk data center, a central switching hub, is, by definition, an articulation point of very high criticality.
As discussed above and shown below (Figure 12), in the context of our transport network, bridges may play a crucial role in network resilience and fault tolerance. In our story, bridges represent the transport connections between Greenlandic settlements and the core network back in Nuuk (i.e., the master network node). In our representation, a bridge can, for example, be (1) a microwave connection, (2) a submarine cable connection, or (3) a satellite connection provided by Tusass’s geostationary satellite capacity (e.g., Greensat) or by the low-earth-orbit OneWeb constellation. By identifying and managing bridges, network designers can mitigate the impact of link failures and disruptions, ensuring continuous operation and availability of services. Moreover, keeping network bridges in mind and minimizing them when planning a transport network will significantly reduce the risk of customer-affecting outages and keep the impact of transport disruptions and the subsequent network partitioning to a minimum.
Figure 12 illustrates the many (edge) bridges and transport connections present in the graph of the Greenlandic transport network. Removing a bridge would split the network (graph) into multiple disconnected components, leading to network fragmentation and parts that may no longer sustain services. The above picture is common for long microwave chains with many hops (the connections themselves). The remedy is to make shorter hops, as Tusass is doing, and to ensure that the connection itself is redundant equipment-wise (e.g., if one radio fails, another takes over). However, such a network would remain sensitive to any disruption of the MW site location and the large MW dish antenna.
Network designers should deploy redundancy mechanisms that minimize the risk of disruptive impact from compromised articulation points and bridges. They have several options to choose from, such as multipath routing (e.g., ring topologies), link aggregation, and diverse routing paths, all of which enhance redundancy and availability. These mechanisms help minimize the impact of bridge failures and improve overall network availability by increasing the level of network redundancy on both the physical and logical levels. Moreover, optimizing the placement of bridges and routing paths in a transport network will maximize resource utilization and ensure optimal network performance and service availability.
Knowing a given network’s Articulation Points and Bridges allows us to define an Availability and a Redundancy Score that we can use to evaluate and optimize a network’s robustness and reliability. Some examples of these concepts for simpler graphs (i.e., 4 nodes) are shown in Figure 10 above. In the context of the Greenland transport network used here, these metrics can help us understand how resilient the network is to failures.
The Availability Score measures the proportion of nodes that are not articulation points and thus captures the network’s exposure to service disruption should a critical node fail. As a reminder, an articulation point, or cut-vertex, is a node that, when removed, increases the number of components of the network and thus potentially the number of disconnected parts. The availability score is calculated as the total number of settlements minus the number of articulation points, divided by the total number of settlements, e.g., (93 − 44) / 93. In this context, a higher availability score indicates a more robust network where fewer nodes are critical points of failure. Suppose we get a score that is close to one; this indicates that most nodes are not articulation points, suggesting that the network can sustain multiple node failures without significant loss of connectivity (see Figure 10 for a relatively simple illustration of this).
The Redundancy Score measures the proportion of connections that are not bridges, i.e., connections whose failure would not on its own partition the network. When a bridge is compromised or removed, the number of disconnected network parts increases. The redundancy score is calculated as the total number of transport connections (edges) minus the number of bridges, divided by the total number of transport connections, e.g., (101 − 57) / 101. Thus, a higher redundancy score indicates a more resilient network where fewer edges are critical points of failure. A redundancy score close to 100% would indicate that most of our (transport) connections cannot be categorized as bridges, suggesting that our network can sustain multiple connectivity failures without a significant loss of overall connectivity and severe service interruption.
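With the counts quoted above (93 settlements, 44 articulation points, 101 connections, 57 bridges), the two Greenland-scale scores follow directly:

```python
# Reproducing the Greenland-scale scores from the counts given in the text.
nodes, articulation_points = 93, 44
edges, bridges = 101, 57
print(f"Availability score: {(nodes - articulation_points) / nodes:.0%}")  # 53%
print(f"Redundancy score:   {(edges - bridges) / edges:.0%}")              # 44%
```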
Having more switching centers, or central hubs, can significantly enhance a communications network’s resilience, availability, and redundancy. It also reduces the consequences and impact of disruption to critical bridges in the network. Moreover, by distributing traffic, isolating failures, and providing multiple paths for data transmission, these central hubs may ensure continuous service to our customers and improve the overall network performance. In my opinion, implementing strategies to support multiple switching centers is essential for maintaining a robust and reliable communications infrastructure capable of withstanding various disruptions and enabling scaling to meet any future demands.
For our Greenlandic transport network shown above, we find an Availability Score of 53% and a Redundancy Score of 44%. While the scores may appear to be on the low side, we need to keep in mind that we are in Greenland, with a population of 57 thousand mainly distributed along the west coast (from south to north) across 50+ settlements, with 30%+ living in Nuuk. Tusass’s communications network connects pretty much all settlements in Greenland, spanning approximately 3,500+ km along the west coast (comparable to the distance from the northern tip of Norway all the way down to the southernmost point of Sicily), irrespective of the number of people living in them. This is also a very clear desire, expectation, and direction given by the Greenlandic administration (i.e., via the universal service obligation imposed on Tusass). The Tusass transport network is not designed with strict financial KPIs in mind, such as the requirement that a given connection to a settlement must have a positive return on investment within a few years (the prevalent norm in our industry). The transport network of Tusass has been designed to connect all communities of Greenland at an adequate level of quality and availability, prioritizing coverage of the Greenlandic population (and the settlements they live in) over whether or not it makes hard financial sense. Tusass’s network is continuously upgraded and expanded as the demand for more advanced broadband services increases (as it does anywhere else in the world).
CRITICAL TECHNOLOGIES RELEVANT TO GREENLAND AND THE WIDER ARCTIC.
Greenland’s strategic location in the Arctic and its untapped natural resources, such as rare earth elements, oil, and gas, have increasingly drawn the attention of major global powers like the United States, Russia, and China. The melting Arctic ice due to climate change is opening new shipping routes and making these resources more accessible, escalating the geopolitical competition in the region.
Greenland must establish a defense and security strategy that minimizes its dependency on natural allies and external actors, mitigating situations where these may not be available or may lack the resources to commit to Greenland. An integral part of such a security strategy should be a dual-use (civil and defense) requirement whenever possible, ensuring that Greenlandic society gets an immediate and sustainable return on the investments made in establishing a solid security framework.
5G technology offers significant advancements over previous generations of wireless networks, particularly in terms of private networking, speed, reliability, and latency across a variety of coverage platforms, e.g., (normal fixed) terrestrial antennas, vehicle-based (i.e., Cell on Wheels), balloon-based, drone-based, LEO-satellite based. This makes 5G ideal for setting up ad-hoc mobile coverage areas for military and critical civil applications. One of the key capabilities of 5G that supports these use cases is network slicing, which allows for the creation of dedicated virtual networks optimized for specific requirements.
Telia Norway has conducted trials together with the Norwegian Armed Forces to demonstrate the use of 5G for military applications (note: I think this is one of the best examples of an operator-defense collaboration on deployment innovation, and it applies directly to Arctic conditions). These trials included setting up ad-hoc 5G networks to support various military scenarios (including in an Arctic-like climate). The key findings demonstrated the ability to provide high-speed, low-latency communications in challenging environments, supporting real-time situational awareness and secure communications for military personnel. Ericsson has also partnered with the UK Ministry of Defence to trial 5G applications for military use. These trials focused on using 5G to support secure communications, enhance situational awareness, and enable the use of autonomous systems in military operations. NATO has conducted exercises incorporating 5G technology to evaluate its potential for improving command and control, situational awareness, and logistics in multi-national military operations. These exercises have shown the potential of 5G to enhance interoperability and coordination among allied forces. It is a very meaningful dual-use technology.
5G private networks offer a dedicated and secure network environment for specific organizations or use cases, which can be particularly beneficial in the Arctic and Greenland. These private networks can provide reliable communication and data transfer in remote and harsh environments, supporting military and civil applications. For instance, in Greenland, 5G private networks can enhance communication for scientific research stations, ensuring that data from environmental monitoring and climate research is transmitted securely and efficiently. They can also support critical infrastructure, such as power grids and transportation networks, by providing a reliable communication backbone. Moreover, in Greenland, the existing public telecommunications network could be designed in such a way that it essentially operates as a “private” network if the transmission lines connecting settlements were compromised (e.g., due to natural or unnatural causes), possibly with only a “thin” LEO satellite connection out of the settlement.
5G provides ultra-fast data speeds and low latency, enabling (near) real-time communication and data processing. This is crucial for military operations and emergency response scenarios where timely information is vital. Network slicing allows a single physical 5G network to be divided into multiple virtual networks, each tailored to specific applications or user groups. This ensures that critical communications are prioritized and reliable even during network congestion. It should be considered that, for Greenland, the transport network (e.g., the long-haul microwave network, routing choices, and satellite connections) might limit how fast the ultra-fast data speeds can become and may, at least along some transport routes, limit the round-trip-time performance (e.g., GEO satellite connections).
5G Enhanced Mobile Broadband (eMBB) provides high-speed internet access to support applications such as video streaming, augmented reality (AR), and virtual reality (VR) for situational awareness and training. Massive Machine-Type Communications (mMTC) supports a large number of IoT devices for monitoring and controlling equipment, sensors, and vehicles in both military and civil scenarios. Ultra-Reliable Low-Latency Communications (URLLC) ensures dependable and timely communication for critical applications such as command and control systems as well as unmanned and autonomous communication platforms (e.g., terrestrial, aerial, and underwater drones). I should note that designing defense and secure systems around ultra-low latency (< 10 ms) requirements would be a mistake, as such latencies cannot be guaranteed under all scenarios. The ultra-reliability (and availability) of transport connectivity is the critical challenge, as it ensures that a given system has sufficient autonomy; the ultra-low latency of a given connection is much less critical.
For military (defense) applications, 5G can be rapidly deployed in the field using portable base stations to create a mobile (private) network. This is particularly useful in remote or hostile environments where traditional infrastructure is unavailable or has been compromised. Network slicing can create a secure, dedicated network for military operations. This ensures that sensitive data and communications are protected from interception and jamming. The low latency of 5G supports (near) real-time video feeds from drones, body cameras, and other surveillance equipment, enhancing situational awareness and decision-making in combat or reconnaissance missions.
Figure 13 The hierarchical coverage architecture shown above is relevant for military or, for example, search and rescue operations in remote areas like Greenland (or the Arctic in general), integrating multiple technological layers to ensure robust communication and surveillance. LEO satellites provide extensive broadband and SIGINT & IMINT coverage, supported by GEO satellites for stable links and data processing through ground stations. High Altitude Platforms (HAPs) offer 5G, IMINT, and SIGINT coverage at mid-altitudes, enhancing communication reach and resolution. The HAP system offers an extremely mobile and versatile platform for civil and defense scenarios. An ad-hoc private 5G network on the ground ensures secure, real-time communication for tactical operations. This multi-layered architecture is crucial for maintaining connectivity and operational efficiency in Greenland’s harsh and remote environments. The multi-layered communications network integrates IOT networks that may have been deployed in the past or in a specific mission context.
In critical civil applications, 5G can provide reliable communication networks for first responders during natural disasters or large-scale emergencies. Network slicing ensures that emergency services have priority access to the network, enabling efficient coordination and response. 5G can support the rapid deployment of communication networks in disaster-stricken areas, ensuring that affected populations can access critical services and information. Network slicing can allocate dedicated resources for smart city applications, such as traffic management, public safety, and environmental monitoring, ensuring that these services remain operational even during peak usage times. For Greenland, ensuring 5G availability would thus mean covering coastal settlements, and possibly the coast outside settlements, at a lower frequency range (e.g., 600–900 MHz), prioritizing 5G coverage over 5G enhanced mobile broadband (i.e., any coverage at a high coverage probability is better than no coverage with certainty).
Besides 5G, what other technologies would be of importance in a Greenland Technology Strategy as it relates to its security, ensuring that its investments and efforts also return benefits to its society (e.g., a dual-use priority)? Some candidates:
It would be advisable to increase the number of community networks within the overall network that can continue functioning if cut off from the main communications network, so that communications services in smaller and remote settlements depend less on one or a very few central control and management hubs. This requires, at the level of a local settlement or a grouping of settlements: self-healing, remote (as opposed to central-hub) management, distributed databases, a regional data center (typically a few racks), edge computing, local DNS, CDNs and content hosting, a satellite connection, … Most telecom infrastructure manufacturers today have network-in-a-box solutions that allow for such designs. Such solutions enable private 5G networks to function in isolation from a public PLMN and fixed transport network.
It is essential to develop a (very) highly available and redundant digital transport infrastructure, leveraging the existing topology strengthened by additional submarine cables (less critical than some of the other means of connectivity), increased transport-ring and higher-redundancy topologies, and multi-level satellite connections (GEO, MEO & LEO, with supplier redundancy) with more satellite ground gateways in Greenland (e.g., avoiding “off-Greenland” traffic routing). In addition, a remotely controlled stratospheric drone platform could provide additional connectivity redundancy at very high broadband speeds and low latencies.
Satellite backhaul solutions, operating, for example, from Low Earth Orbit (LEO), as shown in the figure below, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly long-haul transport networks for carrying traffic away from remote populated areas. Satellite backhauls not only offer a substantially better financial solution for enhancing internet connectivity to remote areas but are often the only viable solution for connectivity. The satellite backhaul solution is an important part of the toolkit for improving the redundancy and availability of, in particular, very long and extensive long-haul microwave transport networks through remote areas (e.g., Greenland’s rugged and frequently hostile coastal areas), where increasing the level of availability and redundancy with terrestrial means may be impractical (due to environmental factors) or incredibly costly.

LEO satellites provide several security advantages over GEO satellites when considering resistance to hostile attempts to disrupt satellite communications. One significant factor is the altitude at which LEO satellites operate, between 500 and 2,000 kilometers, compared to GEO satellites, which are positioned approximately 36,000 kilometers above the equator. The lower altitude makes LEO satellites less vulnerable to long-range anti-satellite (ASAT) missiles.

LEO satellite networks are usually composed of large constellations with many satellites, often numbering in the dozens to hundreds. This extensive constellation provides some redundancy, meaning the network can still function effectively if some satellites are “taken out.” In contrast, GEO satellites are typically much fewer in number, and each satellite covers a much larger area, so losing even one GEO satellite will have a significant impact.

Another advantage of LEO satellites is their rapid movement across the sky relative to the Earth’s surface, completing an orbit in about 90 to 120 minutes. This constant movement makes it more challenging for hostile actors to track and target individual satellites for extended periods. In comparison, GEO satellites remain stationary relative to a fixed point on Earth, making them easier to locate and target. LEO satellites’ lower altitude also results in lower latency than GEO satellites, which benefits secure, time-sensitive communications and makes them less susceptible to interception and jamming due to the reduced time delay. However, any security architecture for critical transport infrastructure should not rely on only one type of satellite configuration.

Both GEO and LEO satellites have their purpose and benefits. Moreover, a hierarchical multi-dimensional topology, including stratospheric drones and even autonomous underwater vehicles, is worth considering when designing a critical communications architecture. It is also worth remembering that public satellite networks may offer a much higher degree of communications redundancy and availability than defense-specific constellations. However, for SIGINT & IMINT collection, the defense-specific satellite constellations are likely much more advanced (unfortunately, they are also not as numerous as their civilian “cousins”).
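To put the LEO-versus-GEO latency difference in perspective, here is a back-of-the-envelope calculation of the physical minimum round-trip times for the altitudes quoted above, assuming straight up-and-down paths at the speed of light (real routing, queuing, and processing push actual figures higher):

```python
# Back-of-the-envelope propagation floors for the altitudes quoted in the text.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km: float) -> float:
    # user -> satellite -> gateway, and the same way back: 4 altitude traversals
    return 4 * altitude_km / C_KM_S * 1000

for name, alt in [("LEO @ 550 km", 550), ("LEO @ 2,000 km", 2000), ("GEO @ 35,786 km", 35786)]:
    print(f"{name}: >= {min_rtt_ms(alt):.0f} ms")  # ~7 ms, ~27 ms, ~477 ms
```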
That said, a stratospheric aerial platform (e.g., a HAP) would be substantially more powerful for IMINT, and possibly also for some SIGINT tasks, than a defense-specific satellite solution (and/or less costly and more versatile).
Figure 14 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb, as well as Starlink with its so-called “Community Gateway” (i.e., using the Ka-band). It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: The indicated data speeds and frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) illustrate the network’s capabilities.
Establish collaboration and agreements with LEO direct-to-cellular-device satellite providers (there are many more than Starlink (US) around, e.g., AST SpaceMobile (US), Lynk Mobile (US), Sateliot (Spain), …) that would offer cellular services across Greenland. A possible concern is the degree to which such systems can be relied upon in a crisis, as they are controlled by external commercial companies operating satellites outside the control and influence of Greenlandic interests. For more details about LEO satellites, see my recent article “The Next Frontier: LEO Satellites for Internet Services.”
Figure 15 illustrates LEO satellite direct-to-device communication in remote areas without terrestrial communications infrastructure, where satellites are the only means of communication for a normal mobile device or a classical satellite phone. Courtesy: DALL-E.
Establish an unmanned (remotely operated) stratospheric High Altitude Platform System (HAPS) (i.e., an advanced drone-based platform) or Unmanned Aerial Vehicles (UAVs) over Greenland (or the Arctic region) with payload-agnostic capabilities. This could easily be run out of existing Greenlandic ground-based aviation infrastructure (e.g., Kangerlussuaq, Nuuk, or many other community airports across Greenland). Such a platform could eventually become autonomous or require little human intervention. The high-altitude platform could support mission-critical ad-hoc networking for civil and defense applications (over Greenland). Such a multi-purpose platform can be used for IMINT and SIGINT (i.e., for both civil & defense) and for civil communication purposes, including establishing connectivity to the ground-based transport network in case of disruptions. Lastly, a HAPS may also permanently offer high-quality, high-capacity 5G mobile services or act as a private ultra-secure 5G network in an ad-hoc mission-specific scenario. For a detailed account of stratospheric drones and how they compare with low-earth satellites, see my recent article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?”.

Stratospheric drones, which operate in the stratosphere at altitudes of around 20 to 50 kilometers, offer several security advantages over traditional satellite communications and submarine communication cables, especially from a Greenlandic perspective. These drones are less accessible and harder to target due to their altitude, which places them out of reach of most ground-based anti-aircraft systems and well above the range of most manned aircraft. This makes them less vulnerable to hostile actions compared to satellites, which can be targeted by anti-satellite (ASAT) missiles, or submarine cables, which can be physically cut or damaged by underwater operations. The drones would stay over Greenlandic, or NATO, territory, while by nature, design, and purpose, submarine communications cables and satellites generally extend far beyond the territory of Greenland.

The mobility and flexibility of stratospheric drones allow them to be quickly repositioned as needed, making it difficult for adversaries to consistently target them. Unlike satellites that follow predictable orbits or submarine cables with fixed routes, these drones can change their location dynamically to respond to threats or optimize their coverage. This is particularly advantageous for Greenland, whose vast and harsh environment makes maintaining and protecting fixed communication infrastructure challenging.

Deploying a fleet of stratospheric drones provides redundancy and scalability. If one drone is compromised or taken out of service, others can fill the gap, ensuring continuous communication coverage. This distributed approach reduces the risk of a single point of failure, which is more pronounced with individual satellites or single submarine cables. For Greenland, this means a more reliable and resilient communication network that can adapt to disruptions.

Stratospheric drones can be rapidly deployed and recovered, making them easier platforms to maintain and upgrade as needed compared to, for example, satellite-based platforms and even terrestrially deployed networks. This quick deployment capability is crucial for Greenland, where harsh weather conditions can complicate the maintenance and repair of fixed infrastructure.
Unlike satellites, which require expensive and complex launches, or submarine cables, which involve extensive underwater laying and maintenance efforts, drones offer a more flexible and manageable solution.

Drones can also establish secure, line-of-sight communication links that are less susceptible to interception and jamming. Operating closer to the ground than satellites allows the use of higher frequencies and narrower beams that are more difficult to jam. Additionally, drones can employ advanced encryption and frequency-hopping techniques to further secure their communications, ensuring that sensitive data remains protected. Stratospheric drones can also be equipped with advanced surveillance and countermeasure technologies to detect and respond to threats. For instance, they can carry sensors to monitor the electromagnetic spectrum for jamming attempts and deploy countermeasures to mitigate these threats. This proactive defense capability enhances their security profile compared to passive communication infrastructure like satellites or cables.

From a Greenlandic perspective, stratospheric drones offer significant advantages. They can be deployed over specific areas of interest, providing targeted communication coverage for remote or strategically important regions. This is particularly useful for covering Greenland’s vast and sparsely populated areas. Modern stratospheric drones are designed to support multi-dimensional payloads, or, as it might also be called, to be payload agnostic (e.g., SIGINT & IMINT equipment, 5G base stations and advanced antennas, laser communication systems, …), and to stay operational for extended periods, ranging from weeks to months, ensuring sustained communication coverage without the need for frequent replacements or maintenance.

Last but not least, Greenland may be an ideal and safe testing ground due to its vast, remote, and thinly populated regions.
Figure 16 illustrates a Non-Terrestrial Network consisting of a stratospheric High Altitude Platform (HAP) drone-based constellation providing terrestrial cellular broadband services to mobile users on their normal 5G terminal equipment, which may range from smartphones and tablets to civil and military IOT networks and devices. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. One could assign three HAPs to cover a given area to deliver very high-availability services to a rural area. The operating altitude of a HAP constellation is between 10 and 50 km, with an optimum of around 20 km. It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to contemplate having the gNB (the full 5G radio node) entirely in the stratospheric drone, allowing easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.
Unmanned Underwater Vehicles (UUVs), also known as Autonomous Underwater Vehicles (AUVs), are obvious systems to deploy for underwater surveillance & monitoring and may also serve obvious dual-use purposes (e.g., fisheries & resource management, iceberg tracking and navigation, and coastal defense and infrastructure protection, such as for submarine cables). Depending on the mission parameters and type of AUV, the range spans from up to 100 kilometers (e.g., REMUS100) to thousands of kilometers (e.g., SeaBed2030), with an operational time (endurance) from a maximum of 24 hours (e.g., REMUS100, Bluefin-21), to multiple days (e.g., Boeing Echo Voyager), to several months (SeaBed2030). A subset of this kind of underwater solution would be swarm-like AUV constellations. See Figure 17 below for an illustration.
Increase RD&T (Research, Development & Trials) on the Arctic Internet of Things (A-IOT) (note: this requires some level of coverage, at minimum satellite) for civil, defense/military (e.g., Military IOT or M-IOT), and dual-use applications, such as surveillance & reconnaissance, environmental monitoring, infrastructure security, etc. (note: IOTs are not only for terrestrial use cases but are also highly interesting for aquatic applications in combination with AUVs/UUVs). Military IoT refers to integrating IoT technologies tailored explicitly for military applications. These devices enhance operational efficiency, improve situational awareness, and support decision-making processes in various military contexts. Military IoT encompasses various connected devices, sensors, and systems that collect, transmit, and analyze data to support defense and security operations. In the vast and remote regions of Greenland and the Arctic, military IoT devices can be deployed for continuous surveillance and reconnaissance. This includes using drones, such as advanced HAPS, equipped with cameras and sensors to monitor borders, track the movements of ships and aircraft, and detect any unauthorized activities. Military IoT sensors can also monitor Arctic environmental conditions, tracking changes in ice thickness, weather patterns, and sea levels. Such data is crucial for planning and executing military operations in the challenging Arctic environment but is also of tremendous value to Greenlandic society. The importance of dual-use cases, civil and defense, cannot be overstated; here are some examples:

– Infrastructure Monitoring and Maintenance. Military use case: IoT sensors can be deployed to monitor the structural integrity of military installations, such as bases and airstrips, ensuring they remain operational and safe for use. These sensors can detect stress, wear, and potential damage due to extreme weather conditions, and the same IoT devices and networks can also be deployed for perimeter defense and monitoring. Civil use case: The same technology can be applied to civilian infrastructure, including roads, bridges, and public buildings. Continuous monitoring can help maintain these civil infrastructures by providing early warnings about potential failures, thus preventing accidents and ensuring public safety.

– Secure Communication Networks. Military use case: Military IoT devices can establish secure communication networks in remote areas, ensuring that military units can maintain reliable and secure communications even in the Arctic’s harsh conditions. This is critical for coordinating operations and responding to threats. Civil use case: In civilian contexts, these communication networks can enhance connectivity in remote Greenlandic communities, providing essential services such as emergency communications, internet access, and telemedicine. This helps bridge the digital divide and improve residents’ quality of life.

– Environmental Monitoring and Maritime Safety. Military use case: Military IoT devices, such as underwater sensor networks and buoys, can be deployed to monitor sea conditions, ice movements, and potential maritime threats. These devices can provide real-time data critical for naval operations, ensuring safe navigation and strategic planning. Civil use case: The same sensors and buoys can be used for civilian purposes, such as ensuring the safety of commercial shipping lanes, fishing operations, and maritime travel. Real-time monitoring of sea conditions and icebergs can prevent maritime accidents and enhance the safety of maritime activities.

– Fisheries Management and Surveillance. Military use case: IoT devices can monitor and patrol Greenlandic waters for illegal fishing activities and unauthorized maritime incursions. Drones and underwater sensors can track vessel movements, ensuring that military forces can respond to potential security threats. Civil use case: These monitoring systems can support fisheries management by tracking fish populations and movements, helping to enforce sustainable fishing practices and prevent overfishing. This data is important for the local economy, which relies heavily on fishing.
Implement Distributed Acoustic Sensing (DAS) on submarine cables. DAS utilizes existing fiber-optic cables, such as those used for telecommunications, to detect and monitor acoustic signals in the underwater environment. This innovative technology leverages the sensitivity of fiber-optic cables to vibrations and sound waves, allowing for the detection of various underwater activities. It could also be integrated with the AUV- and A-IOT-based sensor systems. It should be noted that jamming a DAS system is considerably more complex than jamming traditional radio-frequency (RF) or wireless communication systems. DAS’s significant security and defense advantages might justify deploying more submarine cables around Greenland. This investment is compelling because of the enhanced surveillance and security, improved connectivity, and strategic and economic benefits it brings. By leveraging DAS technology, Greenland could strengthen its national security, support economic development, and maintain its strategic importance in the Arctic region.
Greenland should widely embrace the deployment of autonomous systems and technologies based on artificial intelligence (AI). AI is a technology that could compensate for the challenges of a vast geography, a hostile climate, and a small population. This may, by far, be one of the most critical components of a practical security strategy for Greenland. Gaining experience with autonomous systems in a Greenlandic and Arctic setting should be prioritized. Collaboration & knowledge exchange with Canadian and American universities should be structurally explored, as well as with other larger (friendly) countries with Arctic interests (e.g., Norway, Iceland, …).
Last but not least, cybersecurity is an essential, even foundational, component of the securitization of Greenland and the wider Arctic, addressing the protection of critical infrastructure, the integrity of surveillance and monitoring systems, and the defense against geopolitical cyber threats. The present state and level of maturity of cybersecurity and defense (against cyber threats) related to Greenland’s critical infrastructure has to improve substantially. Prioritizing cybersecurity may have to come at the expense of other critical activities due to the limited resources with relevant expertise available to businesses in Greenland. Today, international collaboration is essential for Greenland to develop strong cyber defense capabilities, ensure secure communication networks, and implement effective incident response plans. However, it is crucial for Greenland’s security that the cybersecurity architecture is tailor-made to the particularities of Greenland and allows Greenland to operate independently should friendly actors and allies not be in a position to provide assistance.
Figure 17 Above illustrates an Unmanned Underwater Vehicle (UUV) near the coast of Greenland inspecting a submarine cable. The UUV is a robotic device that operates underwater without a human onboard, controlled either autonomously or remotely. In and around Greenland’s coastline, UUVs may serve both defense and civilian purposes. For defense, they can patrol for submarines, monitor underwater traffic, and detect potential threats, enhancing maritime security. Civilian applications include search & rescue missions and scientific research, where UUVs map the seabed, study marine life, and monitor environmental changes, which is crucial for understanding climate change impacts. Additionally, they inspect underwater infrastructure, such as submarine cables, ensuring their integrity and functionality. UUVs’ versatility makes them invaluable for comprehensive underwater exploration and security along Greenland’s long coastline. Integrated defense architectures may combine UUVs, Distributed Acoustic Sensing (DAS) networks deployed on submarine cables, and cognitive AI-based closed-loop security solutions (e.g., autonomous operation). Courtesy: DALL-E.
How do we frame some of the above recommendations in the context of securitization in the academic sense of the word, aligned with the Copenhagen School (as I understand it)? I will structure this as the “Securitizing Actor(s),” the “Extraordinary Measures Required,” and the “Geopolitical Implications”:
Example 1: Improving Communications Networks as a Security Priority.
Securitizing Actor(s): Greenland’s government, possibly supported by Denmark and international allies (e.g., the USA’s Pituffik Space Base on Greenland), frames the lack of highly available and reliable communication networks as an existential threat to national security, economic development, and stability, including the ability to defend Greenland effectively during a global threat or crisis.
Extraordinary Measures Required: Greenland can invest in advanced digital communication technologies to address the threat. This includes upgrading infrastructure such as fiber-optic cables, satellite communication systems, stratospheric high-altitude platforms (HAPs) with IMINT, SIGINT, and broadband communications payloads, and 5G wireless networks to ensure they are reliable and can handle increased data traffic. Implementing advanced cybersecurity measures to protect these networks from cyber threats is also crucial. Additionally, investments in broadband expansion to remote areas ensure comprehensive coverage and connectivity.
Geopolitical Implications: By framing the reliability and availability of digital communications networks as a national security issue, Greenland ensures that significant resources are allocated to upgrade and maintain these critical infrastructures. Greenland may also attract European Union investments to leapfrog its critical communications infrastructure. This improves Greenland’s day-to-day communication and economic activities and enhances its strategic importance by ensuring secure and efficient information flow. Reliable digital networks are essential for attracting international investments, supporting digital economies, and maintaining social cohesion.
Example 2: Geopolitical Competition in the Arctic
Securitizing Actor(s): The Greenland government, aligned with Danish and international allies’ interests, views increasing Russian and Chinese activities in the Arctic as a direct threat to Greenland’s sovereignty and security.
Extraordinary Measures Required: In response, Greenland can adopt advanced surveillance and defense technologies, such as Distributed Acoustic Sensing (DAS) systems to monitor underwater activities and Unmanned Aerial & Underwater Vehicles (UAVs & UUVs) for continuous aerial and underwater surveillance. Additionally, deploying advanced communication networks, including satellite-based systems, ensures secure and reliable information flow.
Geopolitical Implications: By framing foreign powers’ increased activities as a security threat (e.g., Russia and China), Greenland can attract NATO and European Union investments and support for deploying cutting-edge surveillance and defense technologies. This enhances Greenland’s security infrastructure, deters potential adversaries, and solidifies its strategic importance within the alliance.
Example 3: Cybersecurity as a National Security Priority.
Securitizing Actor(s): Greenland, aligned with its allies, frames the potential for cyber-attacks on critical infrastructure (such as power grids, communication networks, and military installations) as an existential threat to national security.
Extraordinary Measures Required: To address this threat, Greenland can invest in state-of-the-art cybersecurity technologies, including artificial intelligence-driven threat detection systems, encrypted communication channels, and comprehensive incident response frameworks. Establishing partnerships with global cybersecurity firms and participating in international cybersecurity exercises can also be part of the strategy.
Geopolitical Implications: By securitizing cybersecurity, Greenland ensures that significant resources are allocated to protect its digital infrastructure. This safeguards its critical systems and enhances its attractiveness as a secure location for international investments, reinforcing its geopolitical stability and economic growth.
Example 4: Arctic IoT and Dual-Use Military IoT Networks as a Security Priority.
Securitizing Actor(s): Greenland’s government, supported by Denmark and international allies, frames the lack of Arctic IoT and dual-use military IoT networks as an existential threat to national security, economic development, and environmental monitoring.
Extraordinary Measures Required: Greenland can invest in deploying Arctic IoT and dual-use military IoT networks to address the threat. These networks involve a comprehensive system of interconnected sensors, devices, and communication technologies designed to operate in the harsh Arctic environment. This includes deploying sensors for environmental monitoring, enhancing surveillance capabilities, and improving communication and data-sharing across military and civilian applications.
Geopolitical Implications: By framing the lack of Arctic IoT and dual-use military IoT networks as a national security issue, Greenland ensures that significant resources are allocated to develop and maintain these advanced technological infrastructures. This improves situational awareness and operational efficiency and enhances Greenland’s strategic importance by providing real-time data and robust monitoring capabilities. Reliable IoT networks are essential for protecting critical infrastructure, supporting economic activities, and maintaining environmental and national security.
THE DANISH DEFENSE & SECURITY AGREEMENT COVERING THE PERIOD 2024 TO 2033.
Recently, Denmark approved its new defense and security agreement for the period 2024-2033, which strongly emphasizes Denmark’s strategic reorientation in response to the new geopolitical realities. A key element in the Danish commitment to NATO’s goals is a spending level approaching, and possibly exceeding, 2% of GDP on defense by 2030. It is not 2% for the sake of 2%; there really is a lot to be done, and as soon as possible. The agreement entails significant financial investments totaling approximately 190 billion DKK (ca. 25+ billion euros) over the next ten years to quantum-leap defense capabilities and critical infrastructure.
The defense agreement emphasizes the importance of enhancing security in the Arctic region, including, of course, Greenland. Thus, Greenland’s strategic significance in the current geopolitical landscape is recognized, particularly in light of Russian activities and expressed Chinese intentions (e.g., re: the “Polar Silk Road”). The agreement aims to strengthen surveillance, sovereignty enforcement, and collaboration with NATO in the Arctic. As such, we should expect investments in improved surveillance capabilities that strengthen the enforcement of Greenland’s sovereignty, ensuring that Greenland and Denmark, together with their allies, can effectively monitor and protect their Arctic territories. The defense agreement stresses the importance of supporting NATO’s mission in the Arctic region, contributing to collective defense and deterrence efforts.
What I very much like in the new defense agreement is the expressed focus on dual-use infrastructure investments that benefit both Greenland’s defense (& military) and civilian sectors. This includes upgrading existing facilities and enhancing operational capabilities in the Arctic that allow a rapid response to security threats. The agreement ensures that defense investments also bring economic and social benefits to Greenlandic society, consistent with a dual-use philosophy. For this to become a reality, it will require close collaboration with local authorities, businesses, and research institutions to support the local economy and create new job opportunities (as well as a local emphasis on relevant education, ensuring that such investments are locally sustainable and do not rely on an “army” of Danes and others of non-Greenlandic origin).
The defense agreement unsurprisingly expresses a strong commitment to enhancing cybersecurity measures as well as addressing hybrid threats in Greenland. This reflects the broader security challenges of the required new technology introduction, the present cyber-maturity level, and, of course, the current (and expected future) geopolitical tensions. The architects behind the agreement have also realized that there is a significant need to improve recruitment, retention, and appropriate training within the defense forces, ensuring that personnel are well-prepared to operate in the Arctic environment in general and in Greenland in particular.
It is great to see that the Danish “Defense and Security Agreement” for 2024-2033 reflects the principles of securitization by framing Greenland’s security as an existential matter and justifying substantial investments and strategic initiatives in response. The agreement focuses on enhancing critical infrastructure, surveillance platforms, and international cooperation, while ensuring that the benefits to the local economy align with the concept of securitization. That is, it seeks to ensure that Greenland is well-prepared to address current and future security challenges and anticipated threats in the Arctic region.
The agreement underscores the importance of advanced surveillance systems, such as satellite-based monitoring and sophisticated radar systems. These technologies are deemed important for maintaining situational awareness and ensuring the security of Denmark’s territories, including Greenland and the Arctic region in general. Enhanced surveillance capabilities are essential for detecting and tracking potential threats and for improving response times and effectiveness. Such capabilities are also important for search and rescue and many other civilian use cases, consistent with the intention that technologies applied for defense purposes have dual-use capabilities and can also serve civilian purposes.
There are more cyber threats than ever before, and they are becoming increasingly sophisticated with the advance of AI and digitization in general. So it is not surprising that cybersecurity technologies are also an important topic in the agreement. The increasing threat of cyber attacks against critical infrastructure, often initiated by hostile state actors, necessitates a robust cybersecurity defense to protect that infrastructure and the sensitive information it typically contains. This includes implementing advanced encryption, intrusion detection systems, and secure communication networks to safeguard against cyber threats.
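To make “advanced encryption” a bit more concrete, below is a minimal Python sketch of authenticated encryption (AES-GCM) using the widely available cryptography library; this is the kind of primitive that secure communication between, say, a remote radio site and an operations center could build on. The site label and payload are purely illustrative, and the sketch shows the primitive only, not any specific system referenced in the agreement.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Authenticated encryption: confidentiality plus tamper detection.
key = AESGCM.generate_key(bit_length=256)  # shared secret (key distribution not shown)
aead = AESGCM(key)

nonce = os.urandom(12)                     # must never repeat for the same key
payload = b"telemetry: link degraded"      # hypothetical message
header = b"site-42"                        # hypothetical label; authenticated, not encrypted

ciphertext = aead.encrypt(nonce, payload, header)
# Any bit-flip in ciphertext or header makes decrypt raise InvalidTag.
assert aead.decrypt(nonce, ciphertext, header) == payload
```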
The defense agreement also highlights the importance of having access to unmanned systems, or drones. Several examples of such systems are discussed in some detail above, and more can be found in my more extensive article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?“. Two categories of drones may be interesting. One is the remotely controlled version, typically operated from an operations center at a distance from the actual unmanned platform. The other is the autonomous (or semi-autonomous) version, enabled by AI and many integrated sensors to operate independently of direct human control, or at least largely without real-time human intervention. Unmanned Vehicles (UVs) and Autonomous Vehicles (AVs) are typically associated with underwater (UUV/AUV) or aerial (UAV/AAV) platforms. This kind of technology provides versatile, very flexible surveillance, reconnaissance, and defense platforms that do not rely on a large staff of experts to operate. They are particularly valuable in the Arctic region, where harsh environmental conditions can limit the effectiveness of manned missions.
The development and deployment of dual-use technologies are also emphasized in the agreement. These technologies, which have both civilian and military applications, are necessary for maximizing the return on investment in defense infrastructure. It may also, at the moment, be easier to find funding if it is defense-related. Technology examples include advancements in satellite communications and broadband networks, which enhance both military capabilities and civilian connectivity; how those various communications technologies can seamlessly integrate with one another is particularly important.
Furthermore, artificial intelligence (AI) has been identified as a transformative technology for defense and security. While AI is often referred to as a singular technology, it is actually an umbrella term encompassing a broad spectrum of frameworks, tools, and techniques that share a common basis: models trained on large (or very large) datasets to offer predictive capabilities of increasing sophistication. This leads to the expectation that, for example, AI-driven analytics and decision-making applications will enhance operational efficiency and, not unimportantly, the quality of real-time decision-making in the field (expectations that may be somewhat optimistic, at least for now). AI-enabled defense platforms and applications are likely to improve threat detection as well as support strategic planning. As long as the risk of false outcomes is acceptable, such systems will enrich defense capabilities and provide significant advantages in managing complex and highly dynamic security environments and time-critical threat scenarios.
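As a toy illustration of both the promise and the false-outcome trade-off, the sketch below trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on hypothetical “normal” link telemetry and scores new observations. The features and numbers are invented; the point is only that such models flag statistical deviations and therefore inevitably produce some false positives and negatives.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-link telemetry: [throughput_mbps, latency_ms]
normal_traffic = rng.normal(loc=[200.0, 40.0], scale=[20.0, 5.0], size=(500, 2))

# contamination sets the expected anomaly share, i.e., the false-alarm budget
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

observations = np.array([
    [195.0, 42.0],   # looks like business as usual
    [30.0, 300.0],   # throughput collapse plus latency spike: worth a look
])
print(detector.predict(observations))   # 1 = normal, -1 = flagged as anomalous
```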
Lastly, the agreement stresses the need for advanced logistics and supply chain technologies. Efficient logistics are critical for sustaining military operations and ensuring the timely delivery of equipment and supplies. Automation, real-time tracking, and predictive analytics in logistics management can significantly improve the resilience and responsiveness of defense operations.
AT THIS POINT IN MY GREENLANDIC JOURNEY.
In my career, I have designed, planned, built, and operated telecommunications networks in many places under vastly different environmental conditions (e.g., geography and climate). The more I think about building robust and highly reliable communication networks in Greenland, including all the IT & compute enablers required, the more I appreciate how challenging and different it is to do so there. Tusass has built a robust and reliable transport network connecting nearly all settlements in Greenland, down to the smallest. Tusass operates and maintains this network under some of the harshest environmental conditions in the world, with incredible dedication to all those settlements that depend on being connected to the outside world and where a compromised connection may have dire consequences for the community.
Figure 18 A coastal radio site in Greenland, illustrating one of the frequent issues of critical infrastructure being covered by ice and snow. Courtesy: Tusass A/S (Greenland).
Comparing Tusass’s capital spending level in Greenland with the averages of other Western European countries, we find that Tusass does not invest a significantly larger share of its revenue than the telco industry’s country averages across Western Europe. In fact, its 5-year average Capex-to-Revenue ratio is close to the Western European country average (19% over the period 2019 to 2023). In terms of capital investment per revenue-generating unit (RGU), however, Tusass has the highest level at 18.7 euros per RGU per month (5-year average, 2019 to 2023), compared with an average of 6.6 euros per RGU per month across several Western European markets, as shown in the chart below. This difference is not surprising given Greenland’s small population compared to the countries in the comparison. Tusass’s capital investments also vary much more from year to year than those of other countries, because a substantially smaller population must bear the burden of financing big capital-intensive projects, such as the deployment of new submarine cables (typically 30 to 50 thousand euros per km), new satellite connections (normally 10+ million euros, depending on the asset arrangement), RAN modernization (e.g., 5G), and so forth. For example, the average absolute capital spend was 14.0±1.5 million euros between 2019 and 2022, while 2023 came to almost 40 million euros (a little less than 4% of Denmark’s annual defense and security budget) due, according to Tusass’s annual report, to RAN modernization (e.g., 5G), satellite (e.g., Greensat), and submarine cable investments (initial seabed investigation). All these investments bring better quality through higher reliability, integrity, and availability of Greenland’s critical communications infrastructure, even though there is not a large population (e.g., millions) over which to spread such substantial investments.
Figure 19 In a Western European context, Greenland does not, on average, invest substantially more in telecom infrastructure relative to its revenues and revenue-generating units (i.e., its customer service subscriptions), despite having a very low population of about 57 thousand and an area of 2.2 million square kilometers, larger than Alaska and only about a third smaller than India. The chart shows the country-average Capex-to-Revenue ratio and the Capex in euros per RGU per month over the last 5 years (2019 to 2023) for Greenland (source: Tusass annual reports) and Western Europe (using data from New Street Research).
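For readers who want to reproduce the kind of metrics shown in Figure 19, here is a minimal sketch of the two ratios. The inputs are illustrative assumptions loosely calibrated to the figures quoted above; in particular, the RGU count and revenue are my assumptions, not published Tusass numbers.

```python
# Capex metrics as used above; inputs are illustrative assumptions.
annual_capex_eur = 14_000_000    # ~average absolute capex 2019-2022 (from the text)
annual_revenue_eur = 74_000_000  # assumed, chosen to yield a ~19% ratio
rgus = 62_000                    # revenue-generating units: an assumption

capex_to_revenue = annual_capex_eur / annual_revenue_eur
capex_per_rgu_month = annual_capex_eur / (rgus * 12)

print(f"Capex/Revenue: {capex_to_revenue:.0%}")                   # ~19%
print(f"Capex per RGU per month: {capex_per_rgu_month:.1f} EUR")  # ~18.8 EUR
```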
The capital investments required to leapfrog Greenland’s communications network availability and redundancy scores beyond 70% (versus 53% and 44%, respectively, in 2023) would be very substantial, requiring additional microwave connections (including redesigns), submarine cables, new satellite arrangements, and new ground stations (e.g., in or serving settlements with more than 1,000 inhabitants).
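Such availability and redundancy scores can be derived from a graph-theoretical view of the network (see the Deo reference toward the end of this article). Below is a minimal sketch of that style of analysis on an invented toy topology; the actual scoring behind the numbers above is more elaborate, but the intuition is the same: links whose loss splits the network (bridges) and single points of failure (articulation points) cap any redundancy score.

```python
import networkx as nx

# Toy topology: a daisy chain of coastal microwave hops, plus one
# submarine-cable shortcut forming a partial ring. Names are illustrative.
G = nx.Graph()
chain = ["Qaqortoq", "Paamiut", "Nuuk", "Maniitsoq", "Sisimiut", "Aasiaat"]
G.add_edges_from(zip(chain, chain[1:]))   # microwave chain
G.add_edge("Nuuk", "Sisimiut")            # submarine cable closing a ring

bridges = list(nx.bridges(G))                 # links whose failure splits the network
cut_nodes = list(nx.articulation_points(G))   # single points of failure

edge_redundancy = 1 - len(bridges) / G.number_of_edges()
node_redundancy = 1 - len(cut_nodes) / G.number_of_nodes()
print(f"bridges: {bridges}")
print(f"edge redundancy: {edge_redundancy:.0%}, node redundancy: {node_redundancy:.0%}")
```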
Such investments would serve the interests of Greenlandic society as well as those of Denmark and NATO by boosting the defense and security of Greenland, consistent with all the relevant parties’ expressed intent of securitization of Greenland. The capital investments required to further leapfrog the safety, availability, and reliability of the critical communications infrastructure, above and beyond current plans, would be far higher than previous capital spending levels by Tusass (and Greenland) and unlikely to be economically viable by conventional business financial metrics (e.g., net present value NPV > 0 and internal rate of return IRR above a given hurdle rate). The investment needs to be seen as geopolitically relevant for the security & safety of Greenland and, with a strong focus on dual-use technologies, as beneficial to Greenlandic society.
Even with unlimited funding and financing to enhance Greenland’s safety and security, the challenging weather conditions and the limited availability of skilled resources mean that it will take considerable time to successfully complete such an extensive program. Designing, planning, and building a solid defense and security architecture meaningful to Greenlandic conditions will take time. That said, I am also convinced that pieces of the puzzle already operational today will be important to any future work.
Figure 20 An aerial view of one of Tusass’s west coast sites supporting coastal radio as well as hosting one of the many long-haul microwave sites along the west coast of Greenland. Courtesy: Tusass A/S (Greenland).
RECOMMENDATIONS.
A multifaceted approach is essential to ensure that Greenland’s strategic and infrastructure development aligns with its unique geographical and geopolitical context.
Firstly, Greenland should prioritize the development of dual-use critical infrastructure, and the supporting architectures, that can serve both civilian and defense (& military) purposes. Examples include expanding and upgrading airport facilities (e.g., as is happening with the new airport in Nuuk), enhancing broadband internet access (e.g., as Tusass is doing by adding more submarine cables and satellite coverage), and developing advanced integrated communication platforms such as satellite-based systems and unmanned aerial systems (UAS), including payload-agnostic stratospheric high-altitude platforms (HAPs). Such dual-use infrastructure platforms could bolster national security. Moreover, they could support economic activities, improve community connectivity, and enhance the quality of life for Greenland’s residents irrespective of where they live. There is little doubt that securing funding from international allies (e.g., the European Union, NATO, …) and public-private partnerships will be crucial in financing these projects, while ensuring that civil and defense needs are met efficiently and in the right balance.
Additionally, it is important to invest in critical enablers such as advanced monitoring and surveillance technologies for security & safety. Greenland should in particular focus on satellite monitoring, Distributed Acoustic Sensing (DAS) on its submarine cables, and unmanned vehicles for underwater and aerial applications (e.g., UUVs & UAVs). Such systems will enable more comprehensive monitoring of activities around and over Greenland, allowing Greenland to secure its maritime routes and protect its natural resources (among other things). Enhanced surveillance capabilities will also provide multi-dimensional real-time data for national security, environmental monitoring, and disaster response scenarios. Collaboration with NATO and other international partners should focus on sharing technology know-how, expertise in general, and intelligence, ensuring that Greenland’s surveillance capabilities are on par with global standards.
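To give a flavor of what DAS-based monitoring does, the toy sketch below flags a stretch of simulated fiber strain data whose vibration energy jumps well above the quiet baseline, as, for example, an anchor dragging near a submarine cable might cause. Real DAS processing is far more sophisticated; all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
strain = rng.normal(0.0, 1.0, size=1000)  # quiet background strain-rate samples
strain[600:620] += 8.0                    # injected disturbance (e.g., anchor drag)

# Short-window vibration energy, then a robust (median-based) alarm threshold.
window = 20
energy = np.convolve(strain**2, np.ones(window) / window, mode="same")
threshold = 10 * np.median(energy)

alarms = np.flatnonzero(energy > threshold)
print(f"disturbance detected around samples {alarms.min()}-{alarms.max()}")
```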
Tusass’s transport network connecting (almost) all of Greenland’s settlements is an essential and critical asset for Greenland. It should be the backbone for any dual-use enhancement serving civil as well as defense scenarios. Adding more submarine cables and satellite connections are important (ongoing) parts of those enhancements and will substantially increase network availability and resilience, hardening it against disruptions of both natural and man-made kinds. However, increasing the communications network’s ability to function, fully or even partly, when parts of it are cut off from the few main switching centers should also be considered; a simple way to reason about this is sketched below. With today’s technologies, this may also be affordable, and it would fit well with Tusass’s multi-dimensional connectivity strategy using terrestrial means (e.g., microwave connections), submarine cables, and satellites.
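A back-of-the-envelope way to reason about such “islanded” operation is to remove a main switching hub from a simple graph model of the network and see which settlement clusters can still reach each other locally. The topology below, and the choice of Nuuk as the hypothetical hub, are illustrative only.

```python
import networkx as nx

# Same illustrative toy topology as in the redundancy sketch above.
G = nx.Graph([("Qaqortoq", "Paamiut"), ("Paamiut", "Nuuk"),
              ("Nuuk", "Maniitsoq"), ("Maniitsoq", "Sisimiut"),
              ("Nuuk", "Sisimiut"), ("Sisimiut", "Aasiaat")])

G.remove_node("Nuuk")  # the hypothetical main switching center goes dark
islands = list(nx.connected_components(G))
# Each island could keep local services running if it hosts them locally.
for island in islands:
    print(sorted(island))
```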
Last but not least, considering Greenland’s limited human resources, the technologies and advanced platforms implemented must have a large degree of autonomy and self-reliance. This will likely only be achieved through solid partnerships and strong alliances with Denmark and other natural allies, including the Nordic countries in and near the Arctic Circle (e.g., Iceland, the Faroe Islands, Norway, Sweden, and Finland) as well as the USA and Canada. In particular, Norway has recent experience with the dual use of ad-hoc and private 5G networking for defense applications. UUVs and UAVs, integrated with DAS and satellite constellations, could be operated jointly within the Arctic Circle. Developing and implementing advanced AI-based technologies should be a priority, and such collaborations could also make these advanced technologies much more affordable than if they served only one country. These technologies can compensate for the sparse population and the vast geographical challenges that Greenland and the larger Arctic Circle pose, providing efficient and effective solutions for infrastructure management, surveillance, and economic development. Achieving a very high degree of autonomous operation of the multi-dimensional technology landscape required to leapfrog the security of Greenland, Greenlandic society, and its critical infrastructure would be essential for Greenland to be self-reliant and less dependent on substantial external resources that may be difficult to guarantee in times of crisis.
By focusing on these recommendations, Greenland can enhance its strategic importance, improve its critical infrastructure resilience, and ensure sustainable economic growth while maintaining its unique environmental heritage.
Being a field technician in Greenland poses occupational hazards that are unknown in most other places. Apart from the harsh weather and the remoteness of many infrastructure locations, field engineers have on many occasions encountered hungry polar bears in the field. The polar bear is a very dangerous predator, always on the lookout for its next protein-rich meal.
Trym Eiterjord, “What the 14th Five-Year Plan says about China’s Arctic Interests”, The Arctic Institute, (November 2023). The link also includes references to several other articles related to the China-Arctic relationship from the Arctic Institute China Series 2023.
Deo, Narsingh, “Graph Theory with Applications to Engineering and Computer Science”, Dover Publications. This book is a reasonably accessible starting point for learning more about graphs. If the topic is new to you, I recommend the GeeksforGeeks “Introduction to Graph Data Structure” (April 2024), which provides a quick intro to the world of graphs.
The State Council Information Office of the People’s Republic of China, “China’s Arctic Policy”, (January 2018).
ACKNOWLEDGEMENT.
I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am incredibly thankful to Tusass for providing many of the great pictures used in this post, illustrating the (good weather!) conditions that Tusass field technicians face while working tirelessly on the critical communications infrastructure throughout Greenland. While the pictures shown in this post are truly beautiful and breathtaking, the weather is unforgiving, frequently stranding field workers for days at some of those remote site locations. Add to this the additional danger of a hungry polar bear that will go to great lengths to get its weekly protein intake.