If Greenland were digitally disconnected tomorrow, how much of its public sector could still operate?

The uncomfortable answer: very little. Not only would the public sector break down; the longer a digital isolation lasted, the more likely society as a whole would break down with it. This article outlines why it does not have to be this way and suggests remedies and actions that can minimize the impact of an event in which Greenland is digitally isolated from the rest of the internet for an extended period (e.g., weeks to months).

We may be tempted to think of digital infrastructure as neutral plumbing. But as I wrote earlier, “digital infrastructure is no longer just about connectivity, but about sovereignty and resilience.” Greenland today has neither.

A recent Sermitsiaq article by Poul Krarup on Greenland’s “Digital Afhængighed af Udlandet” (“Digital Dependence on Foreign Countries”), describing research by the Tænketanken Digital Infrastruktur, laid it bare: the backbone of Greenland’s administration, email, payments, and even municipal services runs on servers and platforms located mainly outside Greenland (and Denmark). Global giants in Europe and the US hold the keys. Greenland doesn’t. My own study of 315 Greenlandic public-sector domains shows just how dramatic this dependency is: over 70% of web/IP hosting is concentrated among just three foreign providers, Microsoft, Google, and Cloudflare. For email exchanges (MX), it is even worse: the majority of MX records sit entirely outside Greenland’s control.

So imagine the cable is cut, the satellite links fail, or access to those platforms is revoked. Think of schools, hospitals, courts, and municipalities: how many could still function? How many could even switch on a computer?

This isn’t a thought experiment. It’s a wake-up call.

In my earlier work on Greenland’s critical communications infrastructure, “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”, I have pointed out both the resilience and the fragility of what exists today. Tusass has built and maintained a transport network that keeps the country connected under some of the harshest Arctic conditions. That achievement is remarkable, but it is also costly and economically challenging without external subsidies and long-term public investment. With a population of just 57,000 people, Greenland faces challenges in sustaining this infrastructure on market terms alone.

DIGITAL SOVEREIGNTY.

What do we mean when we say that “the digital sovereignty of Greenland is at stake”? Let’s unpack the language. Sovereignty in the classical sense is about control over land, people, and institutions. Digital sovereignty extends this to the virtual space: control over data, infrastructure, and digital services. As societies digitalize, critical aspects of sovereignty move into the digital sphere, such as:

  • Infrastructure as territory: Submarine cables, satellites, data centers, and cloud platforms are the digital equivalents of ports, roads, and airports. If you don’t own or control them, you depend on others to move your “digital goods.”
  • Data as a resource: Just as natural resources are vital to economic sovereignty, data has become the strategic resource of the digital age. Those who store, process, and govern data hold significant power over decision-making and value creation.
  • Platforms as institutions: Social media, SaaS, and search engines act like global “public squares” and administrative tools. If controlled abroad, they may undermine local political, cultural, or economic authority.

The excellent book by Anu Bradford, “Digital Empires: The Global Battle to Regulate Technology,” describes how the digital world is no longer a neutral, borderless space but is increasingly shaped by the competing influence of three distinct “empires.” The American model is built around the dominance of private platforms, such as Google, Amazon, and Meta, where innovation and market power drive the agenda. The scale and ubiquity of Silicon Valley firms have enabled them to achieve a global reach. In contrast, the Chinese model fuses technological development with state control. Here, digital platforms are integrated into the political system, used not only for economic growth but also for surveillance, censorship, and the consolidation of authority. Between these two poles lies the European model, which has little homegrown platform power but exerts influence through regulation. By setting strict rules on privacy, competition, and online content, Europe has managed to project its legal standards globally, a phenomenon Bradford refers to as the “Brussels effect” (which is used here in a positive sense). Bradford’s analysis highlights the core dilemma for Greenland. Digital sovereignty cannot be achieved in isolation. Instead, it requires navigating between these global forces while ensuring that Greenland retains the capacity to keep its critical systems functioning, its data governed under its own laws, and its society connected even when global infrastructures falter. The question is not which empire to join, but how to engage with them in a way that strengthens Greenland’s ability to determine its own digital future.

In practice, this means that Greenland’s strategy cannot be about copying one of the three empires, but rather about carving out a space of resilience within their shadow. Building a national Internet Exchange Point ensures that local traffic continues to circulate on the island rather than being routed abroad, even when external links fail. Establishing a sovereign GovCloud provides government, healthcare, and emergency services with a secure foundation that is not dependent on distant data centers or foreign jurisdictions. Local caching of software updates, video libraries, and news platforms enables communities to operate in a “local mode,” preserving continuity even when global connections are severed. These measures do not create independence from the digital empires. Still, they give Greenland the ability to negotiate with them from a position of greater strength, ensuring that participation in the global digital order does not come at the expense of local control or security.

FROM DAILY RESILIENCE TO STRATEGIC FRAGILITY.

I have argued that integrity, robustness, and availability must be the guiding principles for Greenland’s digital backbone, both now and in the future.

  • Integrity means protecting against foreign influence and cyber threats through stronger cybersecurity, AI support, and autonomous monitoring.
  • Robustness requires diversifying the backbone with new submarine cables, satellite systems, and dual-use assets that can serve both civil and defense needs.
  • Availability depends on automation and AI-driven monitoring, combined with autonomous platforms such as UAVs, UUVs, IoT sensors, and distributed acoustic sensing on submarine cables, to keep services running across vast and remote geographies with limited human resources.

The conclusion I drew in my previous work remains applicable today. Greenland must develop local expertise and autonomy so that critical communications are not left vulnerable to outside actors in times of crisis. Dual-use investments are not only about defense; they also bring better services, jobs, and innovation.

Source: Tusass Annual Report 2023 with some additions and minor edits.

The Figure above illustrates the infrastructure of Tusass, Greenland’s incumbent and sole telecommunications provider. Currently, five hydropower plants (shown above; locations indicative only) supply more than 80% of Greenland’s electricity demand. Greenland is entering a period of significant infrastructure transformation, with several large projects already underway and others on the horizon. The most visible change is in aviation. Following the opening of the new international airport in Nuuk in 2024, with its 2,200-meter runway capable of receiving direct flights from Europe and North America, attention has turned to Ilulissat, on the northwest coast, and Qaqortoq. Ilulissat is being upgraded with its own 2,200-meter runway, a new terminal, and a control tower, while the old 845-meter strip is being converted into an access road. In southern Greenland, a new airport is being built in Qaqortoq, the largest town in the south, with a 1,500-meter runway scheduled to open around 2026. Once completed, these three airports, Nuuk, Ilulissat, and Qaqortoq, will together handle roughly 80 percent of Greenland’s passenger traffic, reshaping both tourism and domestic connectivity. Smaller projects, such as the planned airport at Ittoqqortoormiit and changes to heliport infrastructure in East Greenland, are also part of this shift, although on a longer horizon.

Beyond air travel, the next decade is likely to bring new developments in maritime infrastructure. There is growing interest in constructing deep-water ports, both to support commercial shipping and to enable the export of minerals from Greenland’s interior. Denmark has already committed around DKK 1.6 billion (approximately USD 250 million) between 2026 and 2029 for a deep-sea port and related coastal infrastructure, with several proposals directly linked to mining ventures. In southern Greenland, for example, the Tanbreez multi-element rare earth project lies within reach of Qaqortoq, and the new airport’s specifications were chosen with freight requirements in mind. Other mineral prospects, ranging from rare earths to nickel and zinc, will require their own supporting infrastructure (roads, power, and port facilities) if the projects transition from exploration to production. The timelines for these mining and port projects are less certain than for the airports, since they depend on market conditions, environmental approvals, and financing. Yet it is clear that the 2025–2035 period will be decisive for Greenland’s economic and strategic trajectory. The combination of new airports, potential deep-water harbors, and the possible opening of significant mining operations would amount to the largest coordinated build-out of Greenlandic infrastructure in decades. Moreover, several submarine cable projects have been proposed that would strengthen international connectivity to Greenland and improve the redundancy and robustness of settlement connectivity, complementing the existing long-haul microwave network that connects all settlements along the west coast from north to south.

And this is precisely why the question of a sudden digital cut-off matters so much. Without integrity, robustness, and availability built into the communications infrastructure, Greenland’s public sector and its critical infrastructure remain dangerously exposed. What looks resilient in daily operation could unravel overnight if the links to the outside world were severed or internal connectivity were compromised. In particular, the dependency on Nuuk is a critical risk.

GREENLAND’S DIGITAL INFRASTRUCTURE BY LAYER.

Let’s peel Greenland’s digital onion, layer by layer.

Greenland’s digital infrastructure, broken down into the layers upon which society’s continuous functioning depends. The illustration shows how applications, transport, routing, and interconnect all depend on external connectivity.

Greenland’s digital infrastructure can be understood as a stack of interdependent layers, each of which reveals a set of vulnerabilities. This is illustrated by the Figure above. At the top of the stack lie the applications and services that citizens, businesses, and government rely on every day. These include health IT systems, banking platforms, municipal services, and cloud-based applications. The critical issue is that most of these services are hosted abroad and have no local “island mode.” In practice, this means that if Greenland is digitally cut off, domestic apps and services will fail to function because there is no mechanism to run them independently within the country.

Beneath this sits the physical transport layer: the actual hardware that moves data. Greenland is connected internationally by just two subsea cables, routed via Iceland and Canada. A few settlements, such as Tasiilaq, remain entirely dependent on satellite links, while microwave radio chains connect long stretches of the west coast. At the local level, there is some fiber deployment, but it is limited to individual settlements rather than forming part of a national backbone. The result is a transport infrastructure that, while impressive given Greenland’s geography, is inherently fragile. Two cables and a scattering of satellites do not amount to genuine redundancy for a nation.

The next layer is TCP/IP transport, where routing comes into play. Here, too, the system is basic. Greenland relies on a limited set of upstream providers with little true diversity or multi-homing. As a result, if one of the subsea cables is cut, large parts of the country’s connectivity collapse, because traffic cannot be seamlessly rerouted through alternative pathways. The resilience that is taken for granted in larger markets is largely absent here.

Finally, at the base of the stack, interconnect and routing expose the structural dependency most clearly. Greenland operates under a single Autonomous System Number (ASN). An ASN is a unique identifier assigned to a network operator (like Tusass) that controls its own routing on the internet; it allows the network to exchange traffic and routing information with other networks using the Border Gateway Protocol (BGP). In Greenland, there is no domestic internet exchange point (IXP) and no peering between local networks. All traffic must be routed abroad first, whether it is destined for Greenland or beyond. International transit flows through Iceland and Canada via the subsea cables, with geostationary GreenSat satellite connectivity through Gran Canaria as a capacity-limited fallback that connects back to Greenland via the submarine network. There is no sovereign government cloud, almost no local caching for global platforms, and only a handful of small data centers (being generous with the definition here). The absence of scaled redundancy and local hosting means that virtually all of Greenland’s digital life depends on international connections.
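To make the routing dependency concrete, here is a minimal Python sketch of BGP’s core path-selection idea, preferring the shortest AS path. The ASNs and paths are hypothetical illustrations, not Tusass’s actual routing table:

```python
# Illustrative sketch (hypothetical ASNs, not real routing data): BGP route
# selection reduced to its simplest rule, preferring the shortest AS path.

def best_route(routes):
    """Pick the route with the shortest AS path (a core BGP tie-breaker)."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Hypothetical routes toward a prefix announced by a single Greenlandic ASN:
routes_normal = [
    {"via": "Iceland cable", "as_path": [64501, 64500]},         # short path
    {"via": "Canada cable", "as_path": [64502, 64510, 64500]},   # longer path
]

print(best_route(routes_normal)["via"])  # the Iceland path wins

# If the Iceland cable is cut, only one path remains; beyond it, there is
# nothing left for BGP to fail over to:
routes_cut = [r for r in routes_normal if r["via"] != "Iceland cable"]
print(best_route(routes_cut)["via"])
```

The point of the sketch is structural: with only two upstream paths and a single ASN, losing one link leaves BGP a single remaining route, and losing both leaves nothing at all.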

GREENLAND’S DIGITAL LIFE ON A SINGLE THREAD.

Considering the layers described above, a striking picture emerges: applications, transport, routing, and interconnect are all structured in ways that assume continuous external connectivity. What appears robust on a day-to-day basis can unravel quickly. A single cable cut, upstream outage, or local transmission fault in Greenland does not just slow down the internet; it can paralyze everyday life across almost every sector, because much of the country’s digital backbone relies on external connectivity and fragile local transport.

For the government, the reliance on cloud-hosted systems abroad means that email, document storage, case management, and health IT systems would go dark. Hospitals and clinics could lose access to patient records, lab results, and telemedicine services. Schools would be cut off from digital learning platforms and exam systems that are hosted internationally. Municipalities, which already lean on remote data centers for payroll, social services, and citizen portals, would struggle to process even routine administrative tasks.

In finance, the impact would be immediate. Greenland’s card payment and clearing systems are routed abroad; without connectivity, credit and debit card transactions could no longer be authorized. ATMs would stop functioning. Shops, fuel stations, and essential suppliers would be forced into cash-only operations at best, and even that would depend on whether their local systems can operate in isolation.

The private sector would be equally disrupted. Airlines, shipping companies, and logistics providers all rely on real-time reservation and cargo systems hosted outside Greenland. Tourism, one of the fastest-growing industries, is almost entirely dependent on digital bookings and payments. Mining operations under development would be unable to transmit critical data to foreign partners or markets.

Even at the household level, the effects would be highly disruptive. Messaging apps, social media, and streaming platforms all require constant external connections; they would stop working instantly. Online banking and digital ID services would be unreachable, leaving people unable to pay bills, transfer money, or authenticate themselves for government services. Because there are so few local caches or hosting facilities in Greenland, even “local” digital life evaporates once the cables are cut. So we will be back to reading books and paper magazines again.

This means that an outage can cascade well beyond the loss of entertainment or simple inconvenience. It undermines health care, government administration, financial stability, commerce, and basic communication. In practice, the disruption would touch every citizen and every institution almost immediately, with few alternatives in place to keep essential civil services running.

GREENLAND’S DIGITAL INFRASTRUCTURE EXPOSURE: ABOUT THE DATA.

In this inquiry, I have analyzed two pillars of Greenland’s digital presence: web/IP hosting and MX (mail exchange) hosting. These may sound technical, but they are fundamental to understanding where control actually lies. Web/IP hosting determines where Greenland’s websites and online services physically reside, whether inside Greenland’s own infrastructure or abroad in foreign data centers. MX hosting determines where email is routed and processed, and is crucial for the operation of government, business, and everyday communication. Together, these layers form the backbone of a country’s digital sovereignty.

What the data shows is sobering. For example, the Government’s own portal nanoq.gl is hosted locally by Tele Greenland (i.e., Tusass GL), but its email is routed through Amazon’s infrastructure abroad. The national airline, airgreenland.gl, also relies on Microsoft’s mail servers in the US and UK. These are not isolated cases. They illustrate the broader pattern of dependence. If hosting and mail flows are predominantly external, then Greenland’s resilience, control, and even lawful access are effectively in the hands of others.

The data from the Greenlandic .gl domain space paints a clear and rather bleak picture of dependency and reliance on the outside world. My inquiry covered 315 domains, resolving more than a thousand hosts and IPs and uncovering 548 mail exchangers, which together form a dependency network of 1,359 nodes and 2,237 edges. What emerges is not a story of local sovereignty but of heavy reliance on external, that is, outside Greenland, hosting.
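As a rough illustration of how such a dependency network is assembled, the sketch below builds a small domain-to-host-to-country graph from hypothetical DNS records. The domains, hosts, and countries are invented stand-ins; the real study resolved live records for all 315 domains:

```python
# Sketch: assemble a domain -> host -> country dependency graph.
# The records below are hypothetical stand-ins for real DNS lookups.

records = [
    # (domain, host it resolves to or MX it uses, hosting country)
    ("example-a.gl", "web.tusass.gl",      "GL"),
    ("example-a.gl", "mx.outlook.com",     "US"),
    ("example-b.gl", "cdn.cloudflare.net", "US"),
    ("example-b.gl", "mx.google.com",      "US"),
]

nodes, edges = set(), set()
for domain, host, country in records:
    nodes.update([domain, host, country])
    edges.add((domain, host))    # domain depends on host
    edges.add((host, country))   # host sits in a jurisdiction

print(f"{len(nodes)} nodes, {len(edges)} edges")
```

Scaled up to 315 domains, more than a thousand hosts and IPs, and 548 mail exchangers, this is how the 1,359-node, 2,237-edge network in the study takes shape.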

When broken down, it becomes clear how much of the Greenlandic namespace is not even in use. Of the 315 domains, only 190 could be resolved to a functioning web or IP host, leaving 125 domains, about 40 percent, with no active service. For mail exchange, the numbers are even more striking: only 98 domains have MX records, while 217 domains, nearly 70 percent of the total, apparently cannot be used for email. In other words, the universe of domains we can actually analyze shrinks considerably once the inactive or unused domains are separated from those that carry real digital services.

It is within this smaller, active subset that the pattern of dependency becomes obvious. The majority of the web/IP hosting we can analyze is located outside Greenland, primarily on infrastructure controlled by American companies such as Cloudflare, Microsoft, Google, and Amazon, or through Danish and European resellers. For email, the reliance is even more complete: virtually all MX hosting that exists is foreign, with only two domains fully hosted in Greenland. This means that both Greenland’s web presence and its email flows are overwhelmingly dependent on servers and policies beyond its own borders. The geographic spread of dependencies is extensive, spanning the US, UK, Ireland, Denmark, and the Netherlands, with some entries extending as far afield as China and Panama. This breadth raises uncomfortable questions about oversight, control, and the exposure of critical services to foreign jurisdictions.

Security practices add another layer of concern. Many domains lack the most basic forms of email protection. The Sender Policy Framework (SPF), which instructs mail servers on which IP addresses are authorized to send on behalf of a domain, is inconsistently applied. DomainKeys Identified Mail (DKIM), which uses cryptographic signatures to verify that an email originates from the claimed sender, is also patchy. Most concerning is that Domain-based Message Authentication, Reporting, and Conformance (DMARC), a policy that allows a domain to instruct receiving mail servers on how to handle suspicious emails (for example, reject or quarantine them), is either missing or set to “none” for many critical domains. Without SPF, DKIM, and DMARC properly configured, Greenlandic organizations are wide open to spoofing and phishing, including within government and municipal domains.
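As an illustration, a minimal check of DMARC policies can be sketched as follows. The record strings are invented examples, not actual .gl records; a real audit would fetch each domain’s `_dmarc` TXT record via DNS:

```python
# Sketch: flag missing or permissive DMARC policies from TXT record strings.
# The example records are illustrative, not pulled from real .gl domains.

def dmarc_policy(txt_record):
    """Extract the p= policy from a DMARC record, or None if absent/invalid."""
    if not txt_record or not txt_record.lower().startswith("v=dmarc1"):
        return None
    for tag in txt_record.split(";"):
        key, _, value = tag.strip().partition("=")
        if key.lower() == "p":
            return value.strip().lower()
    return None

def is_protected(txt_record):
    """Only 'quarantine' or 'reject' instruct receivers to act on spoofing."""
    return dmarc_policy(txt_record) in ("quarantine", "reject")

print(is_protected("v=DMARC1; p=reject; rua=mailto:postmaster@example.gl"))
print(is_protected("v=DMARC1; p=none"))   # policy present but toothless
print(is_protected(None))                 # no DMARC record at all
```

A domain with `p=none`, like one with no record at all, tells receiving mail servers nothing actionable, which is exactly the gap observed across many critical Greenlandic domains.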

Taken together, the picture is clear. Greenland’s digital backbone is not in Greenland. Its critical web and mail infrastructure lives elsewhere, often in the hands of hyperscalers far beyond Nuuk’s control. The question practically asks itself: if those external links were cut tomorrow, how much of Greenland’s public sector could still function?

GREENLAND’S DIGITAL INFRASTRUCTURE EXPOSURE: SOME KEY DATA OUT OF A VERY RICH DATASET.

The Figure shows the distribution of Greenlandic (.gl) web/IP domains hosted on a given country’s infrastructure. Note that domains are frequently hosted in multiple countries. However, very few (2!) have an overlap with Greenland.

The chart of Greenland (.gl) Web/IP Infrastructure Hosting by Supporting Country reveals the true geography of Greenland’s digital presence. The data covers 315 Greenlandic domains, of which 190 could be resolved to active web or IP hosts. From these, I built a dependency map showing where in the world these domains are actually served.

The headline finding is stark: 57% of Greenlandic domains depend on infrastructure in the United States. This reflects the dominance of American companies such as Cloudflare, Microsoft, Google, and Amazon, whose services sit in front of or fully host Greenlandic websites. In contrast, only 26% of domains are hosted on infrastructure inside Greenland itself (primarily through Tele Greenland/Tusass). Denmark (19%), the UK (14%), and Ireland (13%) appear as the next layers of dependency, reflecting the role of regional resellers, like One.com/Simply, as well as Microsoft and Google’s European data centers. Germany, France, Canada, and a long tail of other countries contribute smaller shares.

It is worth noting that the validity of this analysis hinges on how the data are treated. Each domain is counted once per country where it has active infrastructure. This means a domain like nanoq.gl (the Greenland Government portal) is counted for both Greenland and its foreign dependency through Amazon’s mail services. However, double-counting with Greenland is extremely rare. Out of the 190 resolvable domains, 73 (38%) are exclusively Greenlandic, 114 (60%) are solely foreign, and only 2 (~1%) are hybrids, split between Greenland and another country. Those two are nanoq.gl and airgreenland.gl, both of which combine a Greenland presence with foreign infrastructure. This is why the Figure above shows percentages that add up to more than 100%: they represent the dependency footprint, the share of Greenlandic domains that touch each country, not a pie chart of mutually exclusive categories. What is most important to note, however, is that the overlap with Greenland is vanishingly small. In practice, Greenlandic domains are either entirely local or entirely foreign. Very few straddle the boundary.
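The counting rule behind the footprint percentages can be sketched in a few lines of Python. The domains and country sets below are hypothetical stand-ins for the real dataset:

```python
# Sketch of the "dependency footprint" counting rule: each domain is counted
# once per country where it has active infrastructure, so the percentages can
# legitimately sum to more than 100%. The data below is hypothetical.

from collections import Counter

domain_countries = {
    "example-a.gl": {"GL", "US"},  # hybrid: local frontend, US mail
    "example-b.gl": {"US", "IE"},  # foreign-only, spanning two countries
    "example-c.gl": {"GL"},        # local-only
    "example-d.gl": {"US"},        # foreign-only
}

footprint = Counter()
for countries in domain_countries.values():
    footprint.update(countries)    # one count per country per domain

total = len(domain_countries)
for country, n in footprint.most_common():
    print(f"{country}: {100 * n / total:.0f}%")  # US: 75%, GL: 50%, IE: 25%
```

Here the shares sum to 150 percent, for the same reason the real chart exceeds 100 percent: hybrid and multi-country domains are counted in every country they touch.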

The conclusion is sobering. Greenland’s web presence is deeply externalized. With only a quarter of domains hosted locally, and more than half relying on US-controlled infrastructure, the country’s digital backbone is anchored outside its borders. This is not simply a matter of physical location. It is about sovereignty, resilience, and control. The dominance of US, Danish, and UK providers means that Greenland’s citizens, municipalities, and even government services are reliant on infrastructure they do not own and cannot fully control.

Figure shows the distribution of Greenlandic (.gl) domains by the supporting country for the MX (mail exchange) infrastructure. It shows that nearly all email services are routed through foreign providers.

The Figure above of the MX (mail exchange) infrastructure by supporting country reveals an even more pronounced pattern of external reliance than the web hosting case above. Of the 315 Greenlandic domains examined, only 98 had active MX records. These are the domains that can be analyzed for mail routing and that are used in the analysis below.

Of all Greenlandic domains, 19% send their mail through US-controlled infrastructure, primarily Microsoft’s Outlook/Exchange services and Google’s Gmail. The United Kingdom (12%), Ireland (9%), and Denmark (8%) follow, reflecting the presence of Microsoft and Google’s European data centers and Danish resellers. France and Australia appear with smaller shares at 2%, and beyond that, the contributions of other countries are negligible. Greenland itself barely registers: only two domains, accounting for 1% of the total, use MX infrastructure hosted within Greenland. The rest rely on servers beyond its borders. This result is consistent with the sovereignty breakdown above: almost all Greenlandic email is foreign-hosted, with just two domains entirely local and one hybrid combining Greenlandic and foreign providers.

Again, the validity of this analysis rests on the same method as the web/IP chart. Each domain is counted once per country where its MX servers are located. Percentages do not add up to 100% because domains may span multiple countries; however, crucially, as with web hosting, double-counting with Greenland is vanishingly rare. In fact, virtually no Greenlandic domains combine local and foreign MX; they are either foreign-only or, in just two cases, local-only.

The story is clear and compelling: Greenland’s email infrastructure is overwhelmingly externalized. Whereas web hosting still keeps about a quarter of domains within the country, email sovereignty is almost nonexistent. Nearly all communication flows through servers controlled in the US, UK, Ireland, or Denmark. The implication is sobering: in the event of disruption, policy disputes, or surveillance demands, Greenland has little autonomous control over its most basic digital communications.

A sector-level view of how Greenland’s web/IP domains are hosted, locally versus externally (outside Greenland).

This chart provides a sector-level view of how Greenlandic domains are hosted, distinguishing between those resolved locally in Greenland and those hosted outside of Greenland. It is based on the subset of 190 domains for which sufficient web/IP hosting information was available. Importantly, the categorization relies on individual domains, not on companies as entities. A single company or institution may own and operate multiple domains, which are counted separately for the purpose of this analysis. There is also some uncertainty in sector assignment, as many domains have ambiguous names and were categorized using best-fit rules.
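The sector-level split can be sketched as follows, using hypothetical domains and sector labels rather than the actual dataset:

```python
# Sketch of the sector-level analysis: classify each resolved domain as
# locally or externally hosted, then compute per-sector local shares.
# Domains and sector assignments below are hypothetical.

from collections import defaultdict

domains = [
    # (domain, sector, hosted_locally)
    ("school-x.gl",   "education",  False),
    ("bank-y.gl",     "finance",    False),
    ("ministry-z.gl", "government", True),
    ("agency-w.gl",   "government", True),
    ("portal-v.gl",   "government", False),
]

by_sector = defaultdict(lambda: [0, 0])  # sector -> [local, external]
for _, sector, local in domains:
    by_sector[sector][0 if local else 1] += 1

for sector, (local, external) in sorted(by_sector.items()):
    share = 100 * local / (local + external)
    print(f"{sector}: {share:.0f}% local")
```

Note the unit of analysis: each domain counts once, so an organization with several domains contributes several data points, exactly as in the study.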

The distribution highlights the uneven exercise of digital sovereignty across sectors. In education and finance, the dependency is absolute: 100 percent of domains are hosted externally, with no Greenland-based presence at all. By contrast, 90 percent of government domains are hosted in Greenland, with only 10 percent hosted outside; from a digital-government sovereignty perspective, this is what one would expect. Transportation shows a split, with about two-thirds of domains hosted locally and one-third abroad, reflecting a mix of Tele Greenland-hosted (Tusass GL) domains alongside foreign-hosted services such as airgreenland.gl. According to the available data, energy infrastructure is hosted entirely abroad, underscoring possibly one of the most critical vulnerabilities in the dataset. Telecom domains, unsurprisingly given Tele Greenland’s role, are entirely local, making telecom the only sector with 100 percent internal hosting. Municipalities present a more positive picture, with three-quarters of domains hosted locally and one-quarter abroad, although this still represents a partial external dependency. Finally, the large and diverse “Other” category, which contains a mix of companies, organizations, and services, is skewed towards foreign hosting (67 percent external, 33 percent local).

Taken together, the results underscore two important points. First, sector-level sovereignty is highly uneven. While telecom, municipal, and government web services retain more local control, most finance, education, and energy domains are overwhelmingly external. Second, local resolution says less than it seems. When a Greenlandic domain resolves to local infrastructure, it indicates that the frontend web hosting, the visible entry point that users connect to, is located within Greenland, typically through Tele Greenland (i.e., Tusass GL). However, this does not automatically mean that the entire service stack is local. Critical back-end components such as databases, authentication services, payment platforms, or integrated cloud applications may still reside abroad. In practice, a locally hosted domain guarantees only that the web interface is served from Greenland, while deeper layers of the service may remain dependent on foreign infrastructure. This distinction is crucial when evaluating genuine digital sovereignty and resilience. Either way, the overall pattern is unmistakable: Greenland’s digital presence remains heavily reliant on foreign hosting, with only pockets of local sovereignty.

A sector-level view of the share of locally versus externally (i.e., outside Greenland) MX (mail exchange) hosted Greenlandic domains (.gl).

The Figure above provides a sector-level view of how Greenlandic domains handle their MX (mail exchange) infrastructure, distinguishing between those hosted locally and those that rely on foreign providers. The analysis is based on the subset of 94 domains (out of 315 total) where MX hosting could be clearly resolved. In other words, these are the domains for which sufficient DNS information was available to identify the location of their mail servers. As with the web/IP analysis, it is important to note two caveats: sector classification involves a degree of interpretation, and the results represent individual domains, not individual companies. A single organization may operate multiple domains, some of which are local and others external.

The results are striking. For most sectors, including education, finance, transport, energy, telecom, and municipalities, the dependence on foreign MX hosting is total: 100 percent of identified domains rely on external providers for email infrastructure. Even critical sectors such as energy and telecom, where one might expect a more substantial local presence, are fully externalized. The government sector presents a mixed picture. Half of the government domains examined use local MX hosting, while the other half are tied to foreign providers. This partial local footprint is significant: it shows that while some government email flows are retained within Greenland, an equally large share is routed through servers abroad. The “other” sector, which includes businesses, NGOs, and various organizations, shows a small local footprint of about 3 percent, with 97 percent hosted externally. Taken together, the Figure paints a more severe picture of dependency than the web/IP hosting analysis.

While web hosting still retained about a quarter of domains locally, in the case of email, nearly everything is external. Even in government, where one might expect strong sovereignty, half of the domains are dependent on foreign MX servers. This distinction is critical. Email is the backbone of communication for both public and private institutions, and the routing of Greenland’s email infrastructure almost entirely abroad highlights a deep vulnerability. Local MX records guarantee only that the entry point for mail handling is in Greenland. They do not necessarily mean that mail storage or filtering remains local, as many services rely on external processing even when the MX server is domestic.

The broader conclusion is clear. Greenland’s sovereignty in digital communications is weakest in email. Across nearly all sectors, external providers control the infrastructure through which communication must pass, leaving Greenland reliant on systems located far outside its borders. However severe this picture may look for digital sovereignty, it is not altogether surprising: most global email services are provided by U.S.-based hyperscalers such as Microsoft and Google. This reliance on Big Tech is the norm worldwide, but it carries particular implications for Greenland, where dependence on foreign-controlled communication channels further limits digital sovereignty and resilience.

The analysis of the 94 MX hosting entries shows a striking concentration of Greenlandic email infrastructure in the hands of a few large players. Microsoft dominates the picture with 38 entries, accounting for just over 40 percent of all records, while Amazon follows with 20 entries, or around 21 percent. Google, including both Gmail and Google Cloud Platform services, contributes an additional 8 entries, representing approximately 9 percent of the total. Together, these three U.S. hyperscalers control nearly 70 percent of all Greenlandic MX infrastructure. By contrast, Tele Greenland (Tusass GL) appears in only three cases, equivalent to just 3 percent of the total, highlighting the minimal local footprint. The remaining quarter of the dataset is distributed across a long tail of smaller European and global providers such as Team Blue in Denmark, Hetzner in Germany, OVH and O2Switch in France, Contabo, Telenor, and others. The distribution, however you want to cut it, underscores the near-total reliance on U.S. Big Tech for Greenland’s email services, with only a token share remaining under national control.
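
The provider attribution behind these shares can be sketched as a suffix match over resolved MX hostnames. The suffix map and sample records below are illustrative stand-ins, not the study's actual classification rules or dataset (those live in the author's GitHub repository):

```python
from collections import Counter

# Illustrative suffix -> provider map (an assumption for this sketch;
# the real analysis may classify providers differently).
PROVIDER_SUFFIXES = {
    ".protection.outlook.com": "Microsoft",
    ".google.com": "Google",
    ".amazonaws.com": "Amazon",
    ".tusass.gl": "Tusass (Tele Greenland)",
}

def classify_mx(hostname):
    """Map a resolved MX hostname to a provider via suffix matching."""
    host = hostname.rstrip(".").lower()
    for suffix, provider in PROVIDER_SUFFIXES.items():
        if host.endswith(suffix):
            return provider
    return "Other"

def provider_shares(mx_hosts):
    """Return provider -> share of all MX records (fractions summing to 1)."""
    counts = Counter(classify_mx(h) for h in mx_hosts)
    total = sum(counts.values())
    return {provider: n / total for provider, n in counts.items()}

# Toy sample of resolved MX hostnames (invented for illustration)
sample = [
    "example-gl.mail.protection.outlook.com.",
    "alt1.aspmx.l.google.com.",
    "inbound-smtp.eu-west-1.amazonaws.com.",
    "mx01.tusass.gl.",
]
```

Run over a full set of resolved MX records, this kind of pass yields the concentration figures reported above (e.g., Microsoft at roughly 40 percent of entries).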

Out of 179 total country mentions across the dataset, the United States is by far the most dominant hosting location, appearing in 61 cases, or approximately 34 percent of all country references. The United Kingdom follows with 38 entries (21 percent), Ireland with 28 entries (16 percent), and Denmark with 25 entries (14 percent). France (4 percent) and Australia (3 percent) form a smaller second tier, while Greenland itself appears only three times (2 percent). Germany also accounts for three entries, and all other countries (Austria, Norway, Spain, Czech Republic, Slovakia, Poland, Canada, and Singapore) occur only once each, making them statistically marginal. Examining the structure of services across locations, approximately 30 percent of providers are tied to a single country, while 51 percent span two countries (for example, UK–US or DK–IE). A further 18 percent are spread across three countries, and a single case involved four countries simultaneously. This pattern reflects the use of distributed or redundant MX services across multiple geographies, a characteristic often found in large cloud providers like Microsoft and Amazon.

The key point is that, regardless of whether domains are linked to one, two, or three countries, the United States is present in the overwhelming majority of cases, either alone or in combination with other countries. This confirms that U.S.-based infrastructure underpins the backbone of Greenlandic email hosting, with European locations such as the UK, Ireland, and Denmark acting primarily as secondary anchors rather than true alternatives.

WHAT DOES IT ALL MEAN?

Greenland’s public digital life overwhelmingly runs on infrastructure it does not control. Of 315 .gl domains, only 190 even have active web/IP hosting, and just 98 have resolvable MX (email) records. Within that smaller, “real” subset, most web front-ends are hosted abroad and virtually all email rides on foreign platforms. The dependency is concentrated, with U.S. hyperscalers—Microsoft, Amazon, and Google—accounting for nearly 70% of MX services. The U.S. is also represented in more than a third of all MX hosting locations (often alongside the UK, Ireland, or Denmark). Local email hosting is almost non-existent (two entirely local domains; a few Tele Greenland/Tusass appearances), and even for websites, a Greenlandic front end does not guarantee local back-end data or apps.

That architecture has direct implications for sovereignty and security. If submarine cables, satellites, or upstream policies fail or are restricted, most government, municipal, health, financial, educational, and transportation services would degrade or cease, because their applications, identity systems, storage, payments, and mail are anchored off-island. Daily resilience can mask strategic fragility: the moment international connectivity is severely compromised, Greenland lacks the local “island mode” to sustain critical digital workflows.

This is not surprising. U.S. Big Tech dominates email and cloud apps worldwide. Still, it may pose a uniquely high risk for Greenland, given its small population, sparse infrastructure, and renewed U.S. strategic interest in the region. Dependence on platforms governed by foreign law and policy erodes national leverage in crisis, incident response, and lawful access. It exposes citizens to outages or unilateral changes that are far beyond Nuuk’s control.

The path forward is clear: treat digital sovereignty as critical infrastructure. Prioritize local capabilities where impact is highest (government/municipal core apps, identity, payments, health), build island-mode fallbacks for essential services, expand diversified transport (additional cables, resilient satellite), and mandate basic email security (SPF/DKIM/DMARC) alongside measurable locality targets for hosting and data. Only then can Greenland credibly assure that, even if cut off from the world, it can still serve its people.
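
The basic email-security baseline mentioned above (SPF/DKIM/DMARC) can be partly audited from DNS TXT records. A minimal sketch of parsing SPF and DMARC policies follows; the record strings are illustrative examples, and a real audit would fetch them via DNS TXT lookups:

```python
def spf_policy(txt):
    """Return the SPF 'all' qualifier of a TXT record, or None if not SPF."""
    if not txt.startswith("v=spf1"):
        return None
    for term in txt.split():
        # The 'all' mechanism may carry a qualifier: -all, ~all, ?all, +all
        if term.lstrip("+-~?") == "all":
            q = term[0] if term[0] in "-~?" else "+"
            return {"-": "hardfail", "~": "softfail", "?": "neutral", "+": "pass"}[q]
    return "none"  # SPF record with no explicit 'all' mechanism

def dmarc_policy(txt):
    """Return the DMARC p= policy (none/quarantine/reject), or None if not DMARC."""
    if not txt.lower().startswith("v=dmarc1"):
        return None
    tags = dict(kv.strip().split("=", 1) for kv in txt.split(";") if "=" in kv)
    return tags.get("p", "").strip() or None
```

A locality target could then be expressed as: every public-sector domain must return "hardfail" for SPF and "reject" (or at least "quarantine") for DMARC.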

CONNECTIVITY AND RESILIENCE: GREENLAND VERSUS OTHER SOVEREIGN ISLANDS.

Sources: Submarine cable counts from TeleGeography/SubmarineNetworks.com; IXPs and ASNs from Internet Society Pulse/PeeringDB and RIR data; GDP and population from IMF/World Bank (2023/2024); internet penetration from ITU and national statistics.

The comparative table shown above highlights Greenland’s position among other sovereign and autonomous islands in terms of digital infrastructure. With two international submarine cables, Greenland shares the same level of cable redundancy as the Faroe Islands, Malta, the Maldives, Seychelles, Cuba, and Fiji. This places it in the middle tier of island connectivity: above small states like Comoros, which rely on a single cable, but far behind island nations such as Cyprus, Ireland, or Singapore, which have built themselves into regional hubs with multiple independent international connections.

Where Greenland diverges is in the absence of an Internet Exchange Point (IXP) and its very limited number of Autonomous Systems (ASNs). Unlike Iceland, which couples four cables with three IXPs and over ninety ASNs, Greenland remains a network periphery. Even smaller states such as Malta, Seychelles, or Mauritius operate IXPs and host more ASNs, giving them greater routing autonomy and resilience.

In terms of internet penetration, Greenland fares relatively well, with a rate of over 90 percent, comparable to other advanced island economies. Yet the country’s GDP base is extremely limited, comparable to the Faroe Islands and Seychelles, which constrains its ability to finance major independent infrastructure projects. This means that resilience is not simply a matter of demand or penetration, but rather a question of policy choices, prioritization, and regional partnerships.

Seen from a helicopter’s perspective, Greenland is neither in the worst nor the best position. It has more resilience than single-cable states such as Comoros or small Pacific nations. Still, it lags far behind peer islands that have deliberately developed multi-cable redundancy, local IXPs, and digital sovereignty strategies. For policymakers, this raises a fundamental challenge: whether to continue relying on the relative stability of existing links, or to actively pursue diversification measures such as a national IXP, additional cable investments, or regional peering agreements. In short, Greenland’s digital sovereignty depends less on raw penetration figures and more on whether its infrastructure choices can elevate it from a peripheral to a more autonomous position in the global network.

HOW TO ELEVATE SOUTH GREENLAND TO A PREFERRED DIGITAL HOST FOR THE WORLD … JUST SAYING, WHY NOT!

At first glance, South Greenland and Iceland share many of the same natural conditions that make Iceland an attractive hub for data centers. Both enjoy a cool North Atlantic climate that allows year-round free cooling, reducing the need for energy-intensive artificial systems. In terms of pure geography and temperature, towns such as Qaqortoq and Narsaq in South Greenland are not markedly different from Reykjavík or Akureyri. From a climatic standpoint, there is no inherent reason why Greenland should not also be a viable location for large-scale hosting facilities.

The divergence begins not with climate but with energy and connectivity. Iceland spent decades developing a robust mix of hydropower and geothermal plants, creating a surplus of cheap renewable electricity that could be marketed to international hyperscale operators. Greenland, while rich in hydropower potential, has only a handful of plants tied to local demand centers, with no national grid and limited surplus capacity. Without investment in larger-scale, interconnected generation, it cannot guarantee the continuous, high-volume power supply that international data centers demand. Connectivity is the other decisive factor. Iceland today is connected to four separate submarine cable systems, linking it to Europe and North America, which gives operators confidence in redundancy and low-latency routes across the Atlantic. South Greenland, by contrast, depends on two branches of the Greenland Connect system, which, while providing diversity to Iceland and Canada, does not offer the same level of route choice or resilience. The result is that Iceland functions as a transatlantic bridge, while Greenland remains an endpoint.

For South Greenland to move closer to Iceland’s position, several changes would be necessary. The most important would be a deliberate policy push to develop surplus renewable energy capacity and make it available for export into data center operations. Parallel to this, Greenland would need to pursue further international submarine cables to break its dependence on a single system and create genuine redundancy. Finally, it would need to build up the local digital ecosystem by fostering an Internet Exchange Point and encouraging more networks to establish Autonomous Systems on the island, ensuring that Greenland is not just a transit point but a place where traffic is exchanged and hosted, and, importantly, monetizing its own digital infrastructure and sovereignty. South Greenland already shares the climate advantage that underpins Iceland’s success, but climate alone is insufficient. Energy scale, cable diversity, and deliberate policy have been the ingredients that have allowed Iceland to transform itself into a digital hub. Without similar moves, Greenland risks remaining a peripheral node rather than evolving into a sovereign center of digital resilience.

A PRACTICAL BLUEPRINT FOR GREENLAND TO OWN ITS DIGITAL SOVEREIGNTY.

No single measure eliminates Greenland’s external dependencies; some of them, such as international banking, global SaaS, and international transit, are irreducible. But taken together, the steps described below maximize continuity of essential functions during cable cuts or satellite disruption, improve digital sovereignty, and strengthen bargaining power with global vendors. The trade-off is cost, complexity, and skill requirements, which means Greenland must prioritize where full sovereignty is truly mission-critical (health, emergency, governance) and accept graceful degradation elsewhere (social media, entertainment, SaaS ERP).

A. Keep local traffic local (routing & exchange).

Proposal: Create or strengthen a national IXP in Nuuk, with a secondary node (e.g., Sisimiut or Qaqortoq). Require ISPs, mobile operators, government, and major content/CDNs to peer locally. Add route-server policies with “island-mode” communities to ensure that intra-Greenland routes stay reachable even if upstream transit is lost. Deploy anycasted recursive DNS and host authoritative DNS for .gl domains on-island, with secondaries abroad.

Pros:

  • Dramatically reduces the latency, cost, and fragility of local traffic.
  • Ensures Greenland continues to “see itself” even if cut off internationally.
  • DNS split-horizon prevents sensitive internal queries from leaking off-island.

Cons:

  • Needs policy push. Voluntary peering is often insufficient in small markets.
  • Running redundant IXPs is a fixed cost for a small economy.
  • CDNs may resist deploying nodes without incentives (e.g., free rack and power).

A natural and technically well-founded reaction, especially given Greenland’s monopolistic structure under Tusass, is that an IXP or multiple ASNs might seem redundant. Both content and users reside on the same Tusass network, and intra-Greenland traffic already remains local at Layer 3. Adding an IXP would not change that in practice. Without underlying physical or organizational diversity, an exchange point cannot create redundancy on its own.

However, over the longer term, an IXP can still serve several strategic purposes. It provides a neutral routing and governance layer that enables future decentralization (e.g., government, education, or sectoral ASNs), strengthens “island-mode” resilience by isolating internal routes during disconnection from the global Internet, and supports more flexible traffic management and security policies. Notably, an IXP also offers a trust and independence layer that many third-party providers, such as hyperscalers, CDNs, and data-center networks, typically require before deploying local nodes. Few global operators are willing to peer inside the demarcation of a single national carrier’s network. A neutral IXP provides them with a technical and commercial interface independent of Tusass’s internal routing domain, thereby making on-island caching or edge deployments more feasible in the future. In that sense, while the objection accurately reflects today’s technical reality, the IXP concept anticipates tomorrow’s structural and sovereignty needs, bridging the gap between a functioning monopoly network and a future, more open digital ecosystem.

In practice (and in my opinion), Tusass is the only entity in Greenland with the infrastructure, staff, and technical capacity to operate an IXP. While this challenges the ideal of neutrality, it need not invalidate the concept if the exchange is run on behalf of Naalakkersuisut (the Greenlandic self-governing body) or under a transparent, multi-stakeholder governance model. The key issue is not who operates the IXP, but how it is governed. If Tusass provides the platform while access, routing, and peering policies are openly managed and non-discriminatory, the IXP can still deliver genuine benefits: local routing continuity, “island-mode” resilience, and a neutral interface that encourages future participation by hyperscalers, CDNs, and sectoral networks.
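
The “island-mode” community idea in Proposal A can be illustrated with a toy routing model. All community values and prefixes below are invented for illustration (drawn from documentation-reserved ranges); in practice this logic would live in route-server policy (e.g., BIRD or OpenBGPD filters), not Python:

```python
# Hypothetical BGP community marking routes originated on-island.
# ASN 64500 is from the documentation-reserved range; a real deployment
# would pick its own community scheme.
ISLAND = (64500, 1)

# Toy routing table: prefixes are from RFC 5737 documentation ranges.
routes = [
    {"prefix": "203.0.113.0/24",  "communities": {ISLAND}},  # on-island origin
    {"prefix": "198.51.100.0/24", "communities": set()},     # foreign-learned route
    {"prefix": "192.0.2.0/24",    "communities": {ISLAND}},  # on-island origin
]

def island_mode_filter(routes):
    """In island mode, keep only routes tagged as on-island so that
    intra-Greenland destinations stay reachable when upstream transit is lost."""
    return [r for r in routes if ISLAND in r["communities"]]
```

The point of the sketch: if every on-island origin is tagged at ingress, switching to island mode is a single policy flip rather than an emergency renumbering exercise.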

B. Host public-sector workloads on-island.

Proposal: Stand up a sovereign GovCloud GL in Nuuk (failover in another town, possible West-East redundancy), operated by a Greenlandic entity or tightly contracted partner. Prioritize email, collaboration, case handling, health IT, and emergency comms. Keep critical apps, archives, and MX/journaling on-island even if big SaaS (like M365) is still used abroad.

Pros:

  • Keeps essential government operations functional in an isolation event.
  • Reduces legal exposure to extraterritorial laws, such as the U.S. CLOUD Act.
  • Provides a training ground for local IT and cloud talent.

Cons:

  • High CapEx + ongoing OpEx; cloud isn’t a one-off investment.
  • Scarcity of local skills; risk of over-reliance on a few engineers.
  • Difficult to replicate the breadth of SaaS (ERP, HR, etc.) locally; selective hosting is realistic, full stack is not.

C. Make email & messaging “cable- and satellite-outage proof”.

Proposal: Host primary MX and mailboxes in GovCloud GL with local antispam, journaling, and security. Use off-island secondaries only for queuing. Deploy internal chat/voice/video systems (such as Matrix, XMPP, or local Teams/Zoom gateways) to ensure that intra-Greenland traffic never routes outside the country. Define an “emergency federation mode” to isolate traffic during outages.

Pros:

  • Ensures communication between government, hospitals, and municipalities continues during outages.
  • Local queues prevent message loss even if foreign relays are unreachable.
  • Pre-tested emergency federation builds institutional muscle memory.

Cons:

  • Operating robust mail and collaboration platforms locally is a resource-intensive endeavor.
  • Risk of user pushback if local platforms feel less polished than global SaaS.
  • The emergency “mode switch” adds operational complexity and must be tested regularly.
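
Behind the local-primary / foreign-secondary design in Proposal C lies ordinary MX preference handling: sending servers try the record with the lowest preference value first. A minimal sketch, with invented hostnames:

```python
def delivery_order(mx_records):
    """Order MX hosts by preference; lower values are tried first (standard
    SMTP behavior), so the on-island primary receives mail whenever reachable."""
    return [host for _, host in sorted(mx_records)]

# Invented example: on-island primary, off-island secondary used only
# for store-and-forward queuing during local outages.
mx_records = [
    (20, "queue.relay.example.net"),  # off-island secondary (queuing only)
    (10, "mx1.govcloud.example.gl"),  # on-island primary (hypothetical name)
]
```

During an isolation event the off-island secondary becomes unreachable from inside Greenland, but intra-Greenland mail still flows because the preferred MX is local.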

D. Put the content edge in Greenland.

Proposal: Require or incentivize CDN caches (Akamai, Cloudflare, Netflix, OS mirrors, software update repos, map tiles) to be hosted inside Greenland’s IXP(s).

Pros:

  • Improves day-to-day performance and cuts transit bills.
  • Reduces dependency on subsea cables for routine updates and content.
  • Keeps basic digital life (video, software, education platforms) usable in isolation.

Cons:

  • CDNs deploy based on scale; Greenland’s market may be marginal without a subsidy.
  • Hosting costs (power, cooling, rackspace) must be borne locally.
  • Only covers cached/static content; dynamic services (banking, SaaS) still break without external connectivity.

E. Write it into law & contracts.

Proposal: Mandate data residency for public-sector data; require “island-mode” design in procurement. Systems must demonstrate the ability to authenticate locally, operate offline, maintain usable data, and retain keys under Greenlandic custody. Impose peering obligations for ISPs and major SaaS/CDNs.

Pros:

  • Creates a predictable baseline for sovereignty across all agencies.
  • Prevents future procurement lock-in to non-resilient foreign SaaS.
  • Gives legal backing to technical requirements (IXP, residency, key custody).

Cons:

  • May raise the costs of IT projects (compliance overhead).
  • Without strong enforcement, rules risk becoming “checkbox” exercises.
  • Possible trade friction if foreign vendors see it as protectionist.

F. Strengthen physical resilience.

Proposal: Maintain and upgrade subsea cable capacity (Greenland Connect and Connect North), add diversity (spur/loop and new landings), and maintain long-haul microwave/satellite as a tertiary backup. Pre-engineer quality of service downgrades for graceful degradation.

Pros:

  • Adds true redundancy. Nothing replaces a working subsea cable.
  • Tertiary paths (satellite, microwave) keep critical services alive during failures.
  • Clear QoS downgrades make service loss more predictable and manageable.

Cons:

  • High (possibly very high) CapEx. New cable segments cost tens to hundreds of millions of euros.
  • Satellite/microwave backup cannot match the throughput of subsea cables.
  • International partners may be needed for funding and landing rights.

G. Security & trust.

Proposal: Deploy local PKI and HSMs for the government. Enforce end-to-end encryption. Require local custody of cryptographic keys. Audit vendor remote access and include kill switches.

Pros:

  • Prevents data exposure via foreign subpoenas (without Greenland’s knowledge).
  • Local trust anchors give confidence in sovereignty claims.
  • Kill switches and audit trails enhance vendor accountability.

Cons:

  • PKI and HSM management requires very specialized skills.
  • Adds operational overhead (key lifecycle, audits, incident response).
  • Without strong governance, there is a risk of “security theatre” rather than real security.
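
The custody principle behind local PKI and HSMs, that signing keys are generated and used on-island and never exported, can be sketched with a standard-library toy. This is only a design illustration; a real deployment would use hardware-backed keys and asymmetric certificates rather than this HMAC stand-in:

```python
import hashlib
import hmac
import secrets

class LocalSigner:
    """Toy signer: the key never leaves this object, mimicking HSM-style
    custody. Real HSMs enforce non-exportability in hardware; here it is
    only a design illustration."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # generated locally, never exported

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(self.sign(message), signature)
```

The design point: because verification requires the custodian, a foreign subpoena against a cloud vendor cannot produce valid signatures or decrypt custody-bound material without Greenland's participation.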

On-island first as default. A key step for Greenland is to make on-island first the norm so that local-to-local traffic stays local even if Atlantic cables fail. Concretely, stand up a national IXP in Nuuk to keep domestic traffic on the island and anchor CDN caches; build a Greenlandic “GovCloud” to host government email, identity, records, and core apps; and require all public-sector systems to operate in “island mode” (continue basic services offline from the rest of the world). Pair this with local MX, authoritative DNS, secure chat/collaboration, and CDN caches, so essential content and services remain available during outages. Back it with clear procurement rules on data residency and key custody to reduce both outage risk and exposure to foreign laws (e.g., CLOUD Act), acknowledging today’s heavy—if unsurprising—reliance on U.S. hyperscalers (Microsoft, Amazon, Google).

What this changes, and what it doesn’t. These measures don’t aim to sever external ties. They should rebalance them. The goal is graceful degradation that keeps government services, domestic payments, email, DNS, and health communications running on-island, while accepting that global SaaS and card rails will go dark during isolation. Finally, it’s also worth remembering that local caching is only a bridge, not a substitute for global connectivity. In the first days of an outage, caches would keep websites, software updates, and even video libraries available, allowing local email and collaboration tools to continue running smoothly. But as the weeks pass, those caches would inevitably grow stale. News sites, app stores, and streaming platforms would stop refreshing, while critical security updates, certificates, and antivirus definitions would no longer be available, leaving systems exposed to risk. If isolation lasted for months, the impact would be much more profound. Banking and card clearing would be suspended, SaaS-driven ERP systems would break down, and Greenland would slide into a “local only” economy, relying on cash and manual processes. Over time, the social impact would also be felt, with the population cut off from global news, communication, and social platforms. Caching, therefore, buys time, but not independence. It can make an outage manageable in the short term, yet in the long run, Greenland’s economy, security, and society depend on reconnecting to the outside world.

The Bottom line. Full sovereignty is unrealistic for a sparse, widely distributed country, and I don’t think it makes sense to strive for it. In my opinion, partial sovereignty is both achievable and valuable. Make on-island first the default, keep essential public services and domestic comms running during cuts, and interoperate seamlessly when subsea links and satellites are up. This shifts Greenland from its current state of strategic fragility to one of managed resilience, without cutting itself off from the rest of the internet.

ACKNOWLEDGEMENT.

I want to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article. I would also like to thank Dr. Signe Ravn-Højgaard, from “Tænketanken Digital Infrastruktur”, and the Sermitsiaq article “Digital afhængighed af udlandet” (“Digital dependency on foreign countries”) by Poul Krarup, for inspiring this work, which is also a continuation of my previous research and article titled “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”. I would like to thank Lasse Jarlskov for his insightful comments and constructive feedback on this article. His observations regarding routing, OSI layering, and the practical realities of Greenland’s network architecture were both valid and valuable, helping refine several technical arguments and improve the overall clarity of the analysis.

CODE AND DATASETS.

The Python code and datasets used in the analysis are available on my public GitHub: https://github.com/drkklarsen/greenland_digital_infrastructure_mapping (the code is still work in progress, but it is functional and will generate similar data as analyzed in this article).

ABBREVIATION LIST.

ASN — Autonomous System Number: A unique identifier assigned to a network operator that controls its own routing on the Internet, enabling the exchange of traffic with other networks using the Border Gateway Protocol (BGP).

BGP — Border Gateway Protocol: The primary routing protocol of the Internet, used by Autonomous Systems to exchange information about which paths data should take across networks.

CDN — Content Delivery Network: A system of distributed servers that cache and deliver content (such as videos, software updates, or websites) closer to users, reducing latency and dependency on international links.

CLOUD Act — Clarifying Lawful Overseas Use of Data Act: A U.S. law that allows American authorities to demand access to data stored abroad by U.S.-based cloud providers, raising sovereignty and privacy concerns for other countries.

DMARC — Domain-based Message Authentication, Reporting and Conformance: An email security protocol that tells receiving servers how to handle messages that fail authentication checks, protecting against spoofing and phishing.

DKIM — DomainKeys Identified Mail: An email authentication method that uses cryptographic signatures to verify that a message has not been altered and truly comes from the claimed sender.

DNS — Domain Name System: The hierarchical system that translates human-readable domain names (like example.gl) into IP addresses that computers use to locate servers.

ERP — Enterprise Resource Planning: A type of integrated software system that organizations use to manage business processes such as finance, supply chain, HR, and operations.

GL — Greenland (country code top-level domain, .gl): The internet country code for Greenland, used for local domain names such as nanoq.gl.

GovCloud — Government Cloud: A sovereign or dedicated cloud infrastructure designed for hosting public-sector applications and data within national jurisdiction.

HSM — Hardware Security Module: A secure physical device that manages cryptographic keys and operations, used to protect sensitive data and digital transactions.

IoT — Internet of Things: A network of physical devices (sensors, appliances, vehicles, etc.) connected to the internet, capable of collecting and exchanging data.

IP — Internet Protocol: The fundamental addressing system of the Internet, enabling data packets to be sent from one computer to another.

ISP — Internet Service Provider: A company or entity that provides customers with access to the internet and related services.

IXP — Internet Exchange Point: A physical infrastructure where networks interconnect directly to exchange internet traffic locally rather than through international transit links.

MX — Mail Exchange (Record): A type of DNS record that specifies the mail servers responsible for receiving email on behalf of a domain.

PKI — Public Key Infrastructure: A framework for managing encryption keys and digital certificates, ensuring secure electronic communications and authentication.

SaaS — Software as a Service: Cloud-based applications delivered over the internet, such as Microsoft 365 or Google Workspace, usually hosted on servers outside the country.

SPF — Sender Policy Framework: An email authentication protocol that defines which mail servers are authorized to send email on behalf of a domain, reducing the risk of forgery.

Tusass — The national telecommunications provider of Greenland, formerly Tele Greenland, responsible for submarine cables, satellite links, and domestic connectivity.

UAV — Unmanned Aerial Vehicle: An aircraft without a human pilot on board, often used for surveillance, monitoring, or communications relay.

UUV — Unmanned Underwater Vehicle: A robotic submarine used for monitoring, surveying, or securing undersea infrastructure such as cables.

The Telco Ascension to the Sky.

It’s 2045. Earth is green again. Free from cellular towers and the terrestrial radiation of yet another G, no longer needed to justify endless telecom upgrades. Humanity has finally transcended its communication needs to the sky, fully served by swarms of Low Earth Orbit (LEO) satellites.

Millions of mobile towers have vanished. No more steel skeletons cluttering skylines and nature in general. In their place: millions of beams from tireless LEO satellites, now whispering directly into our pockets from orbit.

More than 1,200 MHz of once terrestrially-bound cellular spectrum below the C-band had been uplifted to LEO satellites. Nearly 1,500 MHz between 3 and 6 GHz had likewise been liberated from its earthly confines, now aggressively pursued by the buzzing broadband constellations above.

It all works without a single modification to people’s beloved mobile devices. Everyone enjoys the same, or better, cellular service than in those wretched days of clinging to terrestrial-based infrastructure.

So, how did this remarkable transformation come about?

THE COVERAGE.

First, let’s talk about coverage. The chart below tells the story of orbital ambition through three very grounded curves. On the x-axis, we have the inclination angle, which is the degree to which your satellites are encouraged to tilt away from the equator to perform their job. On the y-axis: how much of the planet (and its people) they’re actually covering. The orange line gives us land area coverage. It starts low, as expected; tropical satellites don’t care much for Greenland. But as the inclination rises, so does their sense of duty to the extremes (the poles, that is). The yellow line represents population coverage, which grows faster than land, maybe because humans prefer to live near each other (or they like the scenery). By the time you reach ~53° inclination, you’re covering about 94% of humanity and 84% of land areas. The dashed white line represents mobile cell coverage, the real estate of telecom towers. A constellation at a 53° inclination would cover nearly 98% of all mobile site infrastructure. It serves as a proxy for economic interest. It closely follows the population curve, but adds just a bit of spice, reflecting urban density and tower sprawl.

This chart illustrates the cumulative global coverage achieved at varying orbital inclination angles for three key metrics: land area (orange), population (yellow), and estimated terrestrial mobile cell sites (dashed white). As inclination increases from equatorial (0°) to polar (90°), the percentage of global land and population coverage rises accordingly. Notably, population coverage reaches approximately 94% at ~53° inclination, a critical threshold for satellite constellations aiming to maximize global user reach without the complexity of polar orbits. The mobile cell coverage curve reflects infrastructure density and aligns closely with population distribution.
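
The general shape of those curves follows from spherical geometry: on a uniform sphere, the fraction of surface area lying between latitudes ±i is sin(i), so coverage climbs steeply at low inclinations and flattens toward the poles. A minimal sketch of that idealized model (real land and population curves deviate because both are unevenly distributed across latitudes):

```python
import math

def surface_fraction(inclination_deg):
    """Fraction of a uniform sphere's surface between latitudes +/- inclination.
    A spherical zone between -L and +L has area 2*pi*R*h with h = 2*R*sin(L),
    i.e., 4*pi*R^2*sin(L); dividing by the total 4*pi*R^2 leaves sin(L)."""
    return math.sin(math.radians(inclination_deg))
```

At 53° this gives about 0.80, a bit below the ~84 percent land-coverage figure cited above; the gap is consistent with land mass being concentrated in the northern mid-latitudes rather than spread uniformly.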

The satellite constellation’s beams have replaced traditional terrestrial cells, providing a one-to-one coverage substitution. They not only replicate coverage in former legacy cellular areas but also extend service to regions that previously lacked connectivity due to low commercial priority from telecom operators. Today, over 3 million beams substitute obsolete mobile cells, delivering comparable service across densely populated areas. An additional 1 million beams have been deployed to cover previously unserved land areas, primarily rural and remote regions, using broader, lower-capacity beams with radii up to 10 kilometers. While these rural beams do not match the density or indoor penetration of urban cellular coverage, they represent a cost-effective means of achieving global service continuity, especially for basic connectivity and outdoor access in sparsely populated zones.

Conclusion? If you want to build a global satellite mobile network, you don’t need to orbit the whole planet. Just tilt your constellation enough to touch the crowded parts, and leave the tundra to the poets. This, however, was the “original sin” of LEO Direct-to-Cellular satellites.

THE DEMAND.

Although global mobile traffic growth slowed notably after the early 2020s, and the terrestrial telecom industry drifted toward its “end of history” moment, the orbital network above inherited a double burden. Not only did satellite constellations need to deliver continuous, planet-wide coverage (a milestone legacy telecoms never reached, despite millions of ground sites), but they also had to absorb globally converging traffic demands as billions of users crept steadily toward the throughput mean.

This chart shows the projected DL traffic across a full day (UTC), based on regions where local time falls within the evening Busy Hour window (17:00–22:00) and that are within satellite coverage (minimum elevation ≥ 25°). The BH population is calculated hourly, taking into account time zone alignment and visibility, with a 20% concurrency rate applied. Each active user is assumed to consume 500 Mbps downlink in 2045. The peak, reaching over 60,000 Tbps, follows the evening Busy Hour as it sweeps westward with the Earth’s rotation.

This chart shows the uplink traffic demand experienced across a full day (UTC), based on regions under Busy Hour conditions (17:00–22:00 local time) and visible to the satellite constellation (with a minimum elevation angle of 25°). For each UTC hour, the BH population within coverage is calculated using global time zone mapping. Assuming a 20% concurrency rate and an average uplink throughput of 50 Mbps per active user, the total UL traffic is derived. The resulting curve reflects how demand shifts in response to the Earth’s rotation beneath the orbital band. The peak, reaching over 6,000 Tbps, mirrors the downlink curve at one-tenth the scale.
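The demand model in these captions reduces to a one-line calculation. A minimal sketch in Python: the 600 million busy-hour users is a hypothetical input chosen to reproduce the stated 60,000 Tbps downlink peak, while the 20% concurrency and per-user rates come from the captions.

```python
def busy_hour_traffic_tbps(bh_population: float,
                           concurrency: float,
                           per_user_bps: float) -> float:
    """Aggregate busy-hour traffic, in Tbps, for the population currently
    inside the evening window and within satellite coverage."""
    return bh_population * concurrency * per_user_bps / 1e12

# Assumptions: 20% concurrency, 500 Mbps DL / 50 Mbps UL per active user.
# 600 million people in the busy-hour window is a hypothetical figure.
dl_peak = busy_hour_traffic_tbps(600e6, 0.20, 500e6)  # ~60,000 Tbps
ul_peak = busy_hour_traffic_tbps(600e6, 0.20, 50e6)   # ~6,000 Tbps
```

The same function with the 50 Mbps uplink rate reproduces the 6,000 Tbps UL peak, one-tenth of the downlink figure.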

The radio access uplink architecture relies on low round-trip times for proper scheduling, timing alignment, and HARQ (Hybrid Automatic Repeat Request) feedback cycles. The propagation delay at 350 km yields a round-trip time of about 2.5 to 3 milliseconds, which falls within the bounds of what current specifications can accommodate. This is particularly important for latency-sensitive applications such as voice, video, and interactive services that require low jitter and reliable feedback mechanisms. In contrast, orbits at 550 km or above push latency closer to the edge of what NR protocols can tolerate, which could hinder performance or require non-standard adaptations.

The beam geometry also plays a central role. At lower altitudes, satellite beams projected to the ground are inherently smaller. This smaller footprint translates into tighter beam patterns with narrower 3 dB cut-offs, which significantly improves frequency reuse and spatial isolation. These attributes are important for deploying high-capacity networks in densely populated urban environments, where interference and spectrum efficiency are paramount. Narrower beams allow D2C operators to dynamically steer coverage toward demand centers while minimizing adjacent-beam interference.

Operating at 350 km is not without drawbacks. The satellite’s ground footprint at this altitude is smaller, meaning that more satellites are required to achieve full Earth coverage. Additionally, satellites at this altitude are exposed to greater atmospheric drag, resulting in shorter orbital lifespans unless they are equipped with more powerful or efficient propulsion systems to maintain altitude. The current design aims for a 5-year orbital lifespan. Despite this, the shorter lifespan has an upside, as it reduces the long-term risks of space debris. Deorbiting occurs naturally and quickly at lower altitudes, making the constellation more sustainable in the long term.
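The latency claim can be checked with simple orbital geometry. A sketch computing propagation-only RTT (no processing or scheduling delay included); the slant-range formula assumes a spherical Earth:

```python
from math import cos, radians, sin, sqrt

C_KM_S = 299_792.458   # speed of light, km/s
R_EARTH_KM = 6_371.0   # mean Earth radius, km

def slant_range_km(altitude_km: float, elevation_deg: float) -> float:
    """Ground-to-satellite distance for a given elevation angle."""
    e = radians(elevation_deg)
    r_sat = R_EARTH_KM + altitude_km
    return sqrt(r_sat**2 - (R_EARTH_KM * cos(e))**2) - R_EARTH_KM * sin(e)

def rtt_ms(altitude_km: float, elevation_deg: float = 90.0) -> float:
    """Two-way propagation delay in milliseconds."""
    return 2 * slant_range_km(altitude_km, elevation_deg) / C_KM_S * 1e3

# At zenith, 350 km gives ~2.3 ms RTT versus ~3.7 ms at 550 km; slant
# paths at the 25-degree minimum elevation stretch the delay further.
```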

THE CONSTELLATION.

The satellite-to-cellular infrastructure has now fully matured into a global-scale system capable of delivering mobile broadband services that are not only on par with, but in many regions surpass, the performance of terrestrial cellular networks. At its core lies a constellation of low Earth orbit satellites operating at an altitude of 350 kilometers, engineered to provide seamless, high-quality indoor coverage for both uplink and downlink, even in dense urban environments.

To meet the evolving expectations of mobile users, each satellite beam delivers a minimum of 50 Mbps of uplink capacity and 500 Mbps of downlink capacity per user, ensuring full indoor quality even in highly cluttered environments. Uplink transmissions utilize the 600 MHz to 1800 MHz band, providing 1200 MHz of aggregated bandwidth. Downlink channels span 1500 MHz of spectrum, ranging from 2100 MHz to the upper edge of the C-band. At the network’s busiest hour (e.g., around 20:00 local time) across the most densely populated regions south of 53° latitude, the system supports a peak throughput of 60,000 Tbps for downlink and 6,000 Tbps for uplink. To guarantee reliability under real-world utilization, the system is engineered with a 25% capacity overhead, raising the design thresholds to 75,000 Tbps for DL and 7,500 Tbps for UL during peak demand.

Each satellite beam is optimized for high spectral efficiency, leveraging advanced beamforming, adaptive coding, and cutting-edge modulation. Under these conditions, downlink beams deliver 4.5 Gbps, while uplink beams, facing more challenging reception constraints, achieve 1.8 Gbps. Meeting the adjusted peak-hour demand requires approximately 16.7 million active DL beams and 4.2 million UL beams, amounting to over 20.8 million simultaneous beams concentrated over the peak demand region.

Thanks to significant advances in onboard processing and power systems, each satellite now supports up to 5,000 independent beams simultaneously. This capability reduces the number of satellites required to meet regional peak demand to approximately 4,200. These satellites are positioned over a region spanning an estimated 45 million square kilometers, covering the evening-side urban and suburban areas of the Americas, Europe, Africa, and Asia. This configuration yields a beam density of nearly 0.46 beams per square kilometer, equivalent to one active beam for every 2 square kilometers, densely overlaid to provide continuous, per-user, indoor-grade connectivity. In urban cores, beam radii are typically below 1 km, whereas in lower-density suburban and rural areas, the system adjusts by using larger beams without compromising throughput.
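The arithmetic behind these beam and satellite counts can be reproduced directly from the stated figures. A sketch (all inputs are the text’s numbers; the resulting 4,167 rounds to the quoted ~4,200):

```python
from math import ceil

# Design inputs from the text (the 25% overhead is already in the demand).
DL_DEMAND_TBPS = 75_000
UL_DEMAND_TBPS = 7_500
DL_BEAM_GBPS = 4.5       # throughput per downlink beam
UL_BEAM_GBPS = 1.8       # throughput per uplink beam
BEAMS_PER_SAT = 5_000
REGION_KM2 = 45e6        # evening-side high-demand region

dl_beams = DL_DEMAND_TBPS * 1e3 / DL_BEAM_GBPS   # ~16.7 million (Tbps -> Gbps)
ul_beams = UL_DEMAND_TBPS * 1e3 / UL_BEAM_GBPS   # ~4.2 million
total_beams = dl_beams + ul_beams                # ~20.8 million
satellites = ceil(total_beams / BEAMS_PER_SAT)   # ~4,200 over the region
beam_density = total_beams / REGION_KM2          # ~0.46 beams per km^2
```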

Because peak demand rotates longitudinally with the Earth’s rotation, only a portion of the entire constellation is positioned over this high-demand region at any given time. To ensure 4,200 satellites are always present over the region during peak usage, the total constellation comprises approximately 20,800 satellites, distributed across several hundred orbital planes. These planes are inclined and phased to optimize temporal availability, revisit frequency, and coverage uniformity while minimizing latency and handover complexity.

The resulting Direct-to-Cellular satellite constellation and system of today is among the most ambitious communications infrastructures ever created. With more than 20 million simultaneous beams dynamically allocated across the globe, it has effectively supplanted traditional mobile towers in many regions, delivering reliable, high-speed, indoor-capable broadband connectivity precisely where and when people need it.

When Telcos Said ‘Not Worth It,’ Satellites Said ‘Hold My Beam.’ In the world of 2045, even the last village at the end of the dirt road streams at 500 Mbps. No tower in sight, just orbiting compassion and economic logic finally aligned.

THE SATELLITE.

The Cellular Device to Satellite Path.

The uplink antennas aboard the Direct-to-Cellular satellites have been specifically engineered to reliably receive indoor-quality transmissions from standard (unmodified) mobile devices operating within the 600 MHz to 1800 MHz band. Each device is expected to deliver a minimum of 50 Mbps uplink throughput, even when used indoors in heavily cluttered urban environments. This performance is made possible through a combination of wideband spectrum utilization, precise beamforming, and extremely sensitive receiving systems in orbit. The satellite uplink system operates across 1200 MHz of aggregated bandwidth (e.g., 60 channels of 20 MHz), spanning the entire upper UHF and lower S-band. Because uplink signals originate from indoor environments, where wall and structural penetration losses can exceed 20 dB, the satellite link budget must compensate for the combined effects of indoor attenuation and free-space propagation at a 350 km orbital altitude. At 600 MHz, which represents the lowest frequency in the UL band, the free-space path loss alone is approximately 139 dB. When this is compounded with indoor clutter and penetration losses, the total attenuation the satellite must overcome reaches approximately 159 dB or more.
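As a cross-check, the textbook free-space path-loss formula gives the loss at 350 km and 600 MHz directly (zenith case; slant paths at low elevation add several more dB):

```python
from math import log10, pi

C_M_S = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20 * log10(4 * pi * d * f / c)."""
    return 20 * log10(4 * pi * distance_m * freq_hz / C_M_S)

zenith_loss = fspl_db(350e3, 600e6)  # ~138.9 dB straight down
total_loss = zenith_loss + 20        # plus >20 dB indoor penetration loss
```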

Rather than specifying the antenna system at a mid-band average frequency, such as 900 MHz (i.e., the mid-band of the 600 MHz to 1800 MHz range), the system has been conservatively engineered for worst-case performance at 600 MHz. This design philosophy ensures that the antenna will meet or exceed performance requirements across the entire uplink band, with higher frequencies benefiting from naturally improved gain and narrower beamwidths. This choice guarantees that even the least favorable channels, those near 600 MHz, support reliable indoor-grade uplink service at 50 Mbps, with a minimum required SNR of 10 dB to sustain up to 16-QAM modulation. Achieving this level of performance at 600 MHz necessitated a large physical aperture. The uplink receive arrays on these satellites have grown to approximately 700 to 750 m² in area, and are constructed using modular, lightweight phased-array tiles that unfold in orbit. This aperture size enables the satellite to achieve a receive gain of approximately 45 dBi at 600 MHz, which is essential for detecting low-power uplink transmissions with high spectral efficiency, even from users deep indoors and under cluttered conditions.
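The ~45 dBi figure is consistent with the standard aperture-gain relation. A sketch, assuming unit aperture efficiency (real arrays lose a few dB) and taking 725 m² as the midpoint of the stated 700–750 m² range:

```python
from math import log10, pi

C_M_S = 299_792_458.0  # speed of light, m/s

def aperture_gain_dbi(area_m2: float, freq_hz: float,
                      efficiency: float = 1.0) -> float:
    """Aperture gain G = eta * 4 * pi * A / lambda^2, expressed in dBi."""
    wavelength = C_M_S / freq_hz
    return 10 * log10(efficiency * 4 * pi * area_m2 / wavelength**2)

# A ~725 m^2 aperture at 600 MHz (wavelength ~0.5 m) gives ~45.6 dBi
# at unit efficiency, in line with the ~45 dBi quoted in the text.
gain = aperture_gain_dbi(725, 600e6)
```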

Unlike earlier systems, such as AST SpaceMobile’s BlueBird 1, launched in the mid-2020s with an aperture of around 900 m² and challenged by the need to acquire indoor uplink signals, today’s Direct-to-Cellular (D2C) satellites optimize the uplink and downlink arrays separately. This separation allows each aperture to be custom-designed for its frequency and link budget requirements. The uplink arrays incorporate wideband, dual-polarized elements, such as log-periodic or Vivaldi structures, backed by high-dynamic-range low-noise amplifiers and a distributed digital beamforming backend. Assisted by real-time AI beam management, each satellite can simultaneously support and track up to 2,500 uplink beams, dynamically allocating them across the active coverage region.

Despite their size, these receive arrays are designed for compact launch configurations and efficient in-orbit deployment. Technologies such as inflatable booms, rigidizable mesh structures, and ultralight composite materials allow the arrays to unfold into large apertures while maintaining structural stability and minimizing mass. Because these arrays are passive receivers, thermal loads are significantly lower than those of transmit systems. Heat generation is primarily limited to the digital backend and front-end amplification chains, which are distributed across the array surface to facilitate efficient thermal dissipation.

The Satellite to Cellular Device Path.

The downlink communication path aboard Direct-to-Cellular satellites is engineered as a fully independent system, physically and functionally separated from the uplink antenna. This separation reflects a mature architectural philosophy that has been developed over decades of iteration. The downlink and uplink systems serve fundamentally different roles and operate across vastly different frequency bands, each with its own power, thermal, and antenna constraints. The downlink system operates in the frequency range from 2100 MHz up to the upper end of the C-band, typically around 4200 MHz. This is significantly higher than the uplink range, which extends from 600 to 1800 MHz. Due to this disparity in wavelength, a factor of seven between the lowest uplink and highest downlink frequencies, a shared aperture is neither practical nor efficient. It is widely accepted today that integrating transmit and receive functions into a single broadband aperture would compromise performance on both ends. Instead, today’s satellites utilize a dual-aperture approach, with the downlink antenna system optimized exclusively for high-frequency transmission and the uplink array designed independently for low-frequency reception.

In order to deliver 500 Mbps per user with full indoor coverage, each downlink beam must sustain approximately 4.5 Gbps, accounting for spectral reuse and beam overlap. At an orbital altitude of 350 kilometers, downlink beams must remain narrow, typically covering no more than a 1-kilometer radius in urban zones, to match uplink geometry and maintain beam-level concurrency. The antenna gain required to meet these demands is in the range of 50 to 55 dBi, which the satellites achieve using high-frequency phased arrays with a physical aperture of approximately 100 to 200 m². Because the downlink system is responsible for high-power transmission, the antenna tiles incorporate GaN-based solid-state power amplifiers (SSPAs), which deliver hundreds of watts per panel. This results in an overall effective isotropic radiated power (EIRP) of 50 to 60 dBW per beam, sufficient to reach deep indoor devices even at the upper end of the C-band. The power-intensive nature of the downlink system introduces thermal management challenges (described in the next section), which are addressed by physically isolating the transmit arrays from the receiver surfaces. The downlink and uplink arrays are positioned on opposite sides of the spacecraft bus or thermally decoupled through deployable booms and shielding layers.
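The per-beam EIRP figure follows from transmit power and antenna gain. A sketch, where the 1–3 W of RF power per beam is an assumed split of the panels’ “hundreds of watts” across many simultaneous beams:

```python
from math import log10

def eirp_dbw(tx_power_w: float, antenna_gain_dbi: float) -> float:
    """Effective isotropic radiated power: power in dBW plus gain in dBi."""
    return 10 * log10(tx_power_w) + antenna_gain_dbi

# Assumed 1-3 W per beam behind 50-55 dBi of gain lands inside the
# stated 50-60 dBW per-beam EIRP range.
low = eirp_dbw(1.0, 50.0)    # 50.0 dBW
high = eirp_dbw(3.0, 55.0)   # ~59.8 dBW
```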

The downlink beamforming is fully digital, allowing real-time adaptation of beam patterns, power levels, and modulation schemes. Each satellite can form and manage up to 2,500 independent downlink beams, which are coordinated with their uplink counterparts to ensure tight spatial and temporal alignment. Advanced AI algorithms help shape beams based on environmental context, usage density, and user motion, thereby further improving indoor delivery performance. The modulation schemes used on the downlink frequently reach 256-QAM and beyond, with spectral efficiencies of six to eight bits per second per Hz in favorable conditions.
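These modulation figures translate directly into per-beam bandwidth, which is why aggressive spatial reuse matters: a single 4.5 Gbps beam occupies hundreds of MHz. A quick sketch using the text’s beam rate and spectral efficiencies:

```python
from math import log2

def beam_bandwidth_mhz(beam_rate_gbps: float, spectral_eff: float) -> float:
    """Occupied bandwidth a beam needs at a given spectral efficiency (b/s/Hz)."""
    return beam_rate_gbps * 1e3 / spectral_eff  # Gbps -> Mbps -> MHz

bits_per_symbol = log2(256)             # 256-QAM carries 8 bits per symbol
bw_at_6 = beam_bandwidth_mhz(4.5, 6.0)  # 750 MHz per beam
bw_at_8 = beam_bandwidth_mhz(4.5, 8.0)  # 562.5 MHz per beam
```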

The physical deployment of the downlink antenna varies by platform, but most commonly consists of front-facing phased array panels or cylindrical surfaces fitted with azimuthally distributed tiles. These panels can be either fixed or mounted on articulated platforms that allow active directional steering during orbit, depending on the beam coverage strategy, an arrangement also called gimballed.

No Bars? Not on This Planet. In 2045, even the polar bears will have broadband. When satellites replaced cell towers, the Arctic became just another neighborhood in the global gigabit grid.

Satellite System Architecture.

The Direct-to-Cellular satellites have evolved into high-performance, orbital base stations that far surpass the capabilities of early systems, such as AST SpaceMobile’s Bluebird 1 or SpaceX’s Starlink V2 Mini. These satellites are engineered not merely to relay signals, but to deliver full-featured indoor mobile broadband connectivity directly to standard handheld devices, anywhere on Earth, including deep urban cores and rural regions that have been historically underserved by terrestrial infrastructure.

As described earlier, today’s D2C satellite supports up to 5,000 simultaneous beams, enabling real-time uplink and downlink with mobile users across a broad frequency range. The uplink phased array, designed to capture low-power, deep-indoor signals at 600 MHz, occupies approximately 750 m². The DL array, optimized for high-frequency, high-power transmission, spans 150 to 200 m². Unlike early designs, such as Bluebird 1, which used a single, large combined antenna, today’s satellites separate the uplink and downlink arrays to optimize each for performance, thermal behavior, and mechanical deployment. These two systems are typically mounted on opposite sides of the satellite and thermally isolated from one another.

Thermal management is one of the defining challenges of this architecture. While AST’s Bluebird 1 (from the mid-2020s) boasted a large antenna aperture approaching 900 m², its internal systems generated significantly less heat. Bluebird 1 operated with a total power budget of approximately 10 to 12 kilowatts, primarily dedicated to a handful of downlink beams and limited onboard processing. In contrast, today’s D2C satellite requires a continuous power supply of 25 to 35 kilowatts, much of which must be dissipated as heat in orbit. This includes over 10 kilowatts of sustained RF power dissipation from the DL system alone, in addition to thermal loads from the digital beamforming hardware, AI-assisted compute stack, and onboard routing logic.

The key difference lies in beam concurrency and onboard intelligence. The satellite manages thousands of simultaneous, high-throughput beams, each dynamically scheduled and modulated using advanced schemes such as 256-QAM and beyond. It must also process real-time uplink signals from cluttered environments, allocate spectral and spatial resources, and make AI-driven decisions about beam shape, handovers, and interference mitigation. All of this requires a compute infrastructure capable of delivering 100 to 500 TOPS (tera-operations per second), distributed across radiation-hardened processors, neural accelerators, and programmable FPGAs.

Unlike AST’s Bluebird 1, which offloaded most of its protocol stack to the ground, today’s satellites run much of the 5G core network onboard. This includes RAN scheduling, UE mobility management, and segment-level routing for backhaul and gateway links.

This computational load compounds the satellite’s already intense thermal environment. Passive cooling alone is insufficient. To manage thermal flows, the spacecraft employs large radiator panels located on its outer shell, advanced phase-change materials embedded behind the DL tiles, and liquid loop systems that transfer heat from the RF and compute zones to the radiative surfaces. These thermal systems are intricately zoned and actively managed, preventing the heat from interfering with the sensitive UL receive chains, which require low-noise operation under tightly controlled thermal conditions. The DL and UL arrays are thermally decoupled not just to prevent crosstalk, but to maintain stable performance in opposite thermal regimes: one dominated by high-power transmission, the other by low-noise reception.

To meet its power demands, the satellite utilizes a deployable solar sail array that spans 60 to 80 m². These sails are fitted with ultra-high-efficiency solar cells exceeding 30–35% conversion efficiency, mounted on articulated booms that track the sun independently of the satellite’s Earth-facing orientation. They provide enough current to sustain continuous operation during daylight periods, while high-capacity batteries, likely based on lithium-sulfur or solid-state chemistry, handle nighttime and eclipse coverage. Compared to the Starlink V2 Mini, which generates around 2.5 to 3.0 kilowatts, and the Bluebird 1, which operates at roughly 10–12 kilowatts, today’s system requires nearly three times the generation and five times the thermal rejection capability of the initial satellites of the mid-2020s.
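The power budget can be sanity-checked against solar irradiance. A sketch using the stated array sizes and cell efficiencies, assuming normal incidence and no degradation margin:

```python
SOLAR_CONSTANT_W_M2 = 1361.0  # solar irradiance near Earth, W/m^2

def array_output_kw(area_m2: float, cell_efficiency: float) -> float:
    """Electrical output of a sun-tracking array at normal incidence."""
    return area_m2 * SOLAR_CONSTANT_W_M2 * cell_efficiency / 1e3

low = array_output_kw(60, 0.30)    # ~24.5 kW
high = array_output_kw(80, 0.35)   # ~38.1 kW
```

The resulting 24–38 kW band brackets the 25–35 kW continuous supply quoted above.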

Structurally, the satellite is designed to support this massive infrastructure. It uses a rigid truss core (i.e., lattice structure) with deployable wings for the DL system and a segmented, mesh-based backing for the UL aperture. Propulsion is provided by Hall-effect or ion thrusters, with 50 to 100 kilograms of inert propellant onboard to support three to five years of orbital station-keeping at an altitude of 350 kilometers. This height is chosen for its latency and spatial reuse advantages, but it also imposes continuous drag, requiring persistent thrust.

The AST Bluebird 1 may have appeared physically imposing in its time due to its large antenna, but in thermal, computational, and architectural complexity, today’s D2C satellite, 20 years later, far exceeds anything imagined two decades earlier. The heat generated by its massive beam concurrency, onboard processing, and integrated network core makes its thermal management system not only more demanding than Bluebird 1’s but also one of the primary limiting factors in the satellite’s physical and functional design. This thermal constraint, in turn, shapes the layout of its antennas, compute stack, power system, and propulsion.

Mass and Volume Scaling.

AST’s Bluebird 1, launched in the mid-2020s, had a launch mass of approximately 1,500 kilograms. Its headline feature was a 900 m² unfoldable antenna surface, designed to support direct cellular connectivity from space. However, despite its impressive aperture, the system was constrained by limited beam concurrency, modest onboard computing power, and a reliance on terrestrial cores for most network functions. The bulk of its mass was dominated by structural elements supporting its large antenna surface and the power and thermal subsystems required to drive a relatively small number of simultaneous links. Bluebird’s propulsion was chemical, optimized for initial orbit raising and limited station-keeping, and its stowed volume fit comfortably within standard medium-lift payload fairings.

Starlink’s V2 Mini, although smaller in physical aperture, featured a more balanced and compact architecture. Weighing roughly 800 kilograms at launch, it was designed around high-throughput broadband rather than direct-to-cellular use. Its phased array antenna surface was closer to 20–25 m², and it was optimized for efficient manufacturing and high-density orbital deployment. The V2 Mini’s volume was tightly packed, with solar panels, phased arrays, and propulsion modules folded into a relatively low-profile bus optimized for rapid deployment and low-cost launch stacking. Its onboard compute and thermal systems were scaled to match its more modest power budget, which typically hovered around 2.5 to 3.0 kilowatts.

In contrast, today’s satellites occupy an entirely new performance regime. The dry mass of the satellite ranges between 2,500 and 3,500 kilograms, depending on specific configuration, thermal shielding, and structural deployment method. This accounts for its large deployable arrays, high-density digital payload, radiator surfaces, power regulation units, and internal trusses. The wet mass, including onboard fuel reserves for at least 5 years of station-keeping at 350 km altitude, increases by up to 800 kilograms, depending on the propulsion type (e.g., Hall-effect or gridded ion thrusters) and orbital inclination. This brings the total launch mass to approximately 3,000 to 4,500 kilograms, or more than double AST’s old Bluebird 1 and roughly five times that of SpaceX’s Starlink V2 Mini.
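The mass ratios check out against the stated figures. A sketch, taking the midpoint of the 3,000–4,500 kg launch-mass range as an assumed representative value:

```python
# Launch masses from the text, in kilograms.
BLUEBIRD_1_KG = 1_500
STARLINK_V2_MINI_KG = 800
D2C_KG = (3_000 + 4_500) / 2   # midpoint of the stated range (an assumption)

vs_bluebird = D2C_KG / BLUEBIRD_1_KG        # 2.5x -> "more than double"
vs_v2_mini = D2C_KG / STARLINK_V2_MINI_KG   # ~4.7x -> "roughly five times"
```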

Volume-wise, the satellites require a significantly larger stowed configuration than either AST’s Bluebird 1 or SpaceX’s Starlink V2 Mini. While both of those earlier systems were designed to fit within traditional launch fairings, Bluebird 1 utilized a folded hinge-based boom structure, and the Starlink V2 Mini was optimized for ultra-compact stacking. Today’s satellite demands next-generation fairing geometries, such as 5-meter-class launchers or dual-stack configurations. This is driven by the dual-antenna architecture and radiator arrays, which, although cleverly folded during launch, expand dramatically once deployed in orbit. In its operational configuration, the satellite spans tens of meters across its antenna booms and solar sails. The uplink array, built as a lightweight, mesh-backed surface supported by rigidizing frames or telescoping booms, unfolds to a diameter of approximately 30 to 35 meters, substantially larger than Bluebird 1’s ~20–25 meter maximum span and far beyond the roughly 10-meter unfolded span of Starlink V2 Mini. The downlink panels, although smaller, are arranged for precise gimballed orientation (i.e., a pivoting mechanism allowing rotation or tilt along one or more axes) and integrated thermal control, which further expands the total deployed volume envelope. The volumetric footprint of today’s D2C satellite is not only larger in surface area but also more spatially complex, as its segregated UL and DL arrays, thermal zones, and solar wings must avoid interference while maintaining structural and thermal equilibrium, in contrast to the simplified flat-pack layout of the Starlink V2 Mini and the monolithic boom-deployed design of Bluebird 1.

The increase in dry mass, wet mass, and deployed volume is not a byproduct of inefficiency, but a direct result of the very substantial performance improvements required to replace terrestrial mobile towers with orbital systems. Today’s D2C satellites deliver an order of magnitude more beam concurrency, spectral efficiency, and per-user performance than their 2020s predecessors. This is reflected in every subsystem, from power generation and antenna design to propulsion, thermal control, and computing. As such, the design represents the emergence of a new class of satellite altogether: not merely a space-based relay or broadband node, but a full-featured, cloud-integrated orbital RAN platform capable of supporting the global cellular fabric from space.

CAN THE FICTION BECOME A REALITY?

From the perspective of 2025, the vision of a global satellite-based mobile network providing seamless, unmodified indoor connectivity at terrestrial-grade uplink and downlink rates, 50 Mbps up, 500 Mbps down, appears extraordinarily ambitious. The technical description from 2045 outlines a constellation of 20,800 LEO satellites, each capable of supporting 5,000 independent full-duplex beams across massive bandwidths, while integrating onboard processing, AI-driven beam control, and a full 5G core stack. To reach such a mature architecture within two decades demands breakthrough progress across multiple fronts.

The most daunting challenge lies in achieving indoor-grade cellular uplink at frequencies as low as 600 MHz from devices never intended to communicate with satellites. Today, even powerful ground-based towers struggle to achieve sub-1 GHz uplink coverage inside urban buildings. For satellites at an altitude of 350 km, the free-space path loss alone at 600 MHz is approximately 139 dB. When combined with clutter, penetration, and polarization mismatches, the system must close a link budget approaching 160 dB or more, from a smartphone transmitting just 23 dBm (200 mW) or less. No satellite today, including AST SpaceMobile’s BlueBird 1, has demonstrated indoor uplink reception at this scale or consistency. To overcome this, the proposed system assumes deployable uplink arrays of 750 m² with gain levels exceeding 45 dBi, supported by hundreds of simultaneously steerable receive beams and ultra-low-noise front-end receivers. From a 2025 lens, the mechanical deployment of such arrays, their thermal stability, calibration, and mass management pose nontrivial risks. Today’s large phased arrays are still in their infancy in space, and adaptive beam tracking from fast-moving LEO platforms remains unproven at the required scale and beam density.

Thermal constraints are also vastly more complex than anything currently deployed. Supporting 5,000 simultaneous beams and radiating tens of kilowatts from compact platforms in LEO requires heat rejection systems that go beyond current radiator technology. Passive radiators must be supplemented with phase-change materials, active fluid loops, and zoned thermal isolation to prevent transmit arrays from degrading the performance of sensitive uplink receivers. This represents a significant leap from today’s satellites, such as Starlink V2 Mini (~3 kW) or BlueBird 1 (~10–12 kW), neither of which operates with a comparable beam count, throughput, or antenna scale.

The required onboard compute is another monumental leap. Running thousands of simultaneous digital beams, performing real-time adaptive beamforming, spectrum assignment, HARQ scheduling, and AI-driven interference mitigation, all on-orbit and without ground-side offloading, demands 100–500 TOPS of radiation-hardened compute. This is far beyond anything flying in 2025. Even state-of-the-art military systems rely heavily on ground computing and centralized control. The 2045 vision implies on-orbit autonomy, local decision-making, and embedded 5G/6G core functionality within each spacecraft, a full software-defined network node in orbit. Realizing such a capability requires not only next-gen processors but also significant progress in space-grade AI inference, thermal packaging, and fault tolerance.

On the power front, generating 25–35 kW per satellite in LEO using 60–80 m² solar sails pushes the boundary of photovoltaic technology and array mechanics. High-efficiency solar cells must achieve conversion rates exceeding 30–35%, while battery systems must maintain high discharge capacity even in complete darkness. Space-based power architectures today are not yet built for this level of sustained output and thermal dissipation.

Even if the individual satellite challenges are solved, the constellation architecture presents another towering hurdle. Achieving seamless beam handover, full spatial reuse, and maintaining beam density over demand centers as the Earth rotates demands near-perfect coordination of tens of thousands of satellites across hundreds of planes. No current LEO operator (including SpaceX) manages a constellation of that complexity, beam concurrency, or spatial density. Furthermore, scaling the manufacturing, testing, launch, and in-orbit commissioning of over 20,000 high-performance satellites will require significant cost reductions, increased factory throughput, and new levels of autonomous deployment.

Regulatory and spectrum allocation are equally formidable barriers. The vision entails the massively complex undertaking of a global reallocation of terrestrial mobile spectrum, particularly in the sub-3 GHz bands, to LEO operators. As of 2025, such a reallocation is politically and commercially fraught, with entrenched mobile operators and national regulators unlikely to cede prime bands without extensive negotiation, incentives, and global coordination. The use of 600–1800 MHz from orbit for direct-to-device is not yet globally harmonized (and may never be), and existing terrestrial rights would need to be either vacated or managed via complex sharing schemes.

From a market perspective, widespread device compatibility without modification implies that standard mobile chipsets, RF chains, and antennas evolve to handle Doppler compensation, extended RTT timing budgets, and tighter synchronization tolerances. While this is not insurmountable, it requires updates to 3GPP standards, baseband silicon, and potentially network registration logic, all of which must be implemented without degrading terrestrial service. Although NTN (non-terrestrial networks) support has begun to emerge in 5G standards, the level of transparency and ubiquity envisioned in 2045 is not yet backed by practical deployments.

While the 2045 architecture described so far assumes a single unified constellation delivering seamless global cellular service from orbit, the political and commercial realities of space infrastructure in 2025 strongly suggest a fragmented outcome. It is unlikely that a single actor, public or private, will be permitted, let alone able, to monopolize the global D2C landscape. Instead, the most plausible trajectory is a competitive and geopolitically segmented orbital environment, with at least one major constellation originating from China (note: I think it is quite likely we may see two major ones), another from the United States, a possible second US-based entrant, and potentially a European-led system aimed at securing sovereign connectivity across the continent. This fracturing of the orbital mobile landscape imposes a profound constraint on the economic and technical scalability of the system. The assumption that a single constellation could achieve massive economies of scale, producing, launching, and managing tens of thousands of high-performance satellites with uniform coverage obligations, begins to collapse under the weight of geopolitical segmentation. Each competitor must now shoulder its own development, manufacturing, and deployment costs, with limited ability to amortize those investments over a unified global user base. Moreover, such duplication of infrastructure risks saturating orbital slots and spectrum allocations, while reducing the density advantage that a unified system would otherwise enjoy. Instead of concentrating thousands of active beams over a demand zone with a single coordinated fleet, separate constellations must compete for orbital visibility and spectral access over the same urban centers. The result is likely to be a decline in per-satellite utilization efficiency, particularly in regions of geopolitical overlap or contested regulatory coordination.

2045: One Vision, Many Launch Pads. The dream of global satellite-to-cellular service may shine bright, but it won’t rise from a single constellation. With China, the U.S., and others racing skyward, the economics of universal LEO coverage could fracture into geopolitical silos, making scale, spectrum, and sustainability more contested than ever.

Finally, the commercial viability of any one constellation diminishes when the global scale is eroded. While a monopoly or globally dominant operator could achieve lower per-unit satellite costs, higher average utilization, and broader roaming revenues, a fractured environment reduces ARPU (average revenue per user) and increases the breakeven threshold for each deployment. Satellite throughput that could have been centrally optimized now risks duplication and redundancy, increasing operational overhead and potentially slowing innovation as vendors attempt to differentiate on proprietary terms. In this light, the architecture described earlier must be seen as an idealized vision. This convergence point may never be achieved in pure form unless global policy, spectrum governance, and commercial alliances move toward more integrated outcomes. While the technological challenges of the 2045 D2C system are significant, the fragmentation of market structure and geopolitical alignment may prove an equally formidable barrier to realizing the full systemic potential.

Heavenly Coverage, Hellish Congestion. Even a single mega-constellation turns the sky into premium orbital real estate … and that’s before the neighbors show up with their own fleets. Welcome to the era of broadband traffic … in space.

Despite these barriers, incremental paths forward exist. Demonstration satellites in the late 2020s, followed by regional commercial deployments in the early 2030s, could provide real-world validation. The phased evolution of spectrum use, dual-use handsets, and AI-assisted beam management may mitigate some of the scaling concerns. Regulatory alignment may emerge as rural and unserved regions increasingly depend on space-based access. Ultimately, the achievement of the 2045 architecture relies not only on engineering but also on sustained cross-industry coordination, geopolitical alignment, and commercial viability on a planetary scale. As of 2025, the probability of realizing the complete vision by 2045, in terms of indoor-grade, direct-to-device service via a fully orbital mobile core, is perhaps 40–50%, with a higher probability (~70%) for achieving outdoor-grade or partially integrated hybrid services. The coming decade will reveal whether the industry can fully solve the unique combination of thermal, RF, computational, regulatory, and manufacturing challenges required to replace the terrestrial mobile network with orbital infrastructure.

POSTSCRIPT – THE ECONOMICS.

The Direct-to-Cellular satellite architecture described in this article would reshape not only the technical landscape of mobile communications but also its economic foundation. The very premise of delivering mobile broadband directly from space, bypassing terrestrial towers, fiber backhaul, and urban permitting, undermines one of the most entrenched capital systems of the 20th and early 21st centuries: the mobile infrastructure economy. Once considered irreplaceable, the sprawling ecosystem of rooftop leases, steel towers, field operations, base stations, and fiber rings has been gradually rendered obsolete by a network that floats above geography.

The financial implications of such a shift are enormous. Before the orbital transition described in this article, the global mobile industry invested well over 300 billion USD annually in network CapEx and OpEx, with a large share dedicated to the site infrastructure layer: construction, leasing, energy, security, and upkeep of millions of base stations and their associated land or rooftop assets. Tower companies alone have become multi-billion-dollar REITs (i.e., Real Estate Investment Trusts), profiting from site tenancy and long-term operating contracts. As of the mid-2020s, the global value tied up in the telecom industry's physical infrastructure is estimated to exceed 2.5 to 3 trillion USD, with tower companies like Cellnex and American Tower collectively managing hundreds of billions of dollars in infrastructure assets. An estimated 300–500 billion USD invested in mobile infrastructure represents approximately 0.75% to 1.5% of total global pension assets and accounts for 15% to 30% of pension fund infrastructure investments. This real-estate-based infrastructure model defined mobile economics for decades and has generally been regarded as a reasonably safe haven for investors. In contrast, the 2045 D2C model front-loads its capital burden into satellite manufacturing, launch, and orbital operations. Rather than being geographically bound, capital is concentrated into a fleet of orbital base stations, each capable of dynamically serving users across vast and shifting geographies. This not only eliminates the need for millions of distributed cell sites, but also breaks the historical tie between infrastructure deployment and national geography. Coverage no longer scales with trenching crews or urban permitting delays but with orbital plane density and beamforming algorithms.
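
For what it's worth, the quoted percentage shares can be back-solved into the total pools they imply. The helper below is purely illustrative arithmetic; the invested-capital range and shares come from the text, while the implied totals are derived here, not sourced:

```python
# Back-solving the totals implied by the quoted shares. The invested-capital
# range and the percentage shares are from the text; the implied pool sizes
# are derived, not sourced figures.
def implied_total_busd(amount_busd: float, share: float) -> float:
    """Total pool (in billion USD) implied by an amount and its share of it."""
    return amount_busd / share

# 300–500 bn USD at 0.75%–1.5% of global pension assets:
lo = implied_total_busd(500, 0.015)    # roughly $33 trillion
hi = implied_total_busd(300, 0.0075)   # roughly $40 trillion
# 300–500 bn USD at 15%–30% of pension infrastructure allocations:
infra_lo = implied_total_busd(500, 0.30)
infra_hi = implied_total_busd(300, 0.15)
print(f"Implied global pension assets: {lo:,.0f}-{hi:,.0f} bn USD")
print(f"Implied infrastructure allocations: {infra_lo:,.0f}-{infra_hi:,.0f} bn USD")
```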

Yet, such a shift does not necessarily mean lower cost, only different economics. Launching and operating tens of thousands of advanced satellites, each capable of supporting thousands of beams and running onboard compute environments, still requires massive capital outlay and ongoing expenditures in space traffic management, spectrum coordination, ground gateways, and constellation replenishment. The difference lies in utilization and marginal reach. Where terrestrial infrastructure often struggles to achieve ROI in rural or low-income markets, orbital systems serve these zones as part of the same beam budget, with no new towers or trenches required.

Importantly, the 2045 model would likely collapse the mobile value chain. Instead of a multi-layered system of operators, tower owners, fiber wholesalers, and regional contractors, a vertically integrated satellite operator can now deliver the full stack of mobile service from orbit, owning the user relationship end-to-end. This disintermediation has significant implications for revenue distribution and regulatory control, and challenges legacy operators to either adapt or exit.

The scale of economic disruption mirrors the scale of technical ambition. This transformation could rewrite the very economics of connectivity. While the promise of seamless global coverage, zero tower density, and instant-on mobility is compelling, it may also signal the end of mobile telecom as a land-based utility.

If this little science fiction story comes true, and there are many good and bad reasons to doubt it, Telcos may not Ascend to the Sky, but take the Stairway to Heaven.

Graveyard of the Tower Titans. This symbolic illustration captures the end of an era, depicting headstones for legacy telecom giants such as American Tower, Crown Castle, and SBA Communications, as well as the broader REIT (Real Estate Investment Trust) infrastructure model that once underpinned the terrestrial mobile network economy. It serves as a metaphor for the systemic shift brought on by Direct-to-Cellular (D2C) satellite networks. What’s fading is not only the mobile tower itself, but also the vast ancillary industry that has grown around it, including power systems, access rights, fiber-infrastructure, maintenance firms, and leasing intermediaries, as well as the telecom business model that relied on physical, ground-based infrastructure. As the skies take over the signal path, the economic pillars of the old telecom world may no longer stand.

FURTHER READING.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomyblog.

Kim K. Larsen, “Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?“ Techneconomyblog (April 2025).

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future“, Techneconomyblog (March 2024).

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?

Is LEO satellite broadband a cost-effective and capable option for rural areas of Europe? Given that most seem to agree that LEO satellites will not replace mobile broadband networks, it seems only natural to ask whether LEO satellites might help meet the EU Commission's Digital Decade Policy Programme (DDPP) 2030 goal of having all EU households (HH) covered by gigabit connections delivered by so-called very high-capacity networks, including gigabit-capable fiber-optic and 5G networks, by 2030 (i.e., focusing only on the digital infrastructure pillar of the DDPP).

As of 2023, more than €80 billion had been allocated in national broadband strategies and through EU funding instruments, including the Connecting Europe Facility and the Recovery and Resilience Facility. However, based on current deployment trajectories and cost structures, an additional €120 billion or more is expected to be needed to close the remaining connectivity gap for the approximately 15.5 million rural homes without a gigabit option in 2023. This brings the total investment requirement to over €200 billion. The shortfall is most acute in rural and hard-to-reach regions where network deployment is significantly more expensive. In these areas, connecting a single household with high-speed broadband infrastructure, especially via FTTP, can easily exceed €10,000 in public subsidy, given the long distances and low density of premises. It would be a very "cheap" alternative for Europe if a non-EU-based (i.e., US) satellite constellation could close even part of the gigabit coverage gap. However, given some of the current geopolitical factors, 200 billion euros could enable Europe to establish its own large LEO satellite constellation if it can match (or outperform) the unit economics of SpaceX, rather than those of its IRIS² satellite program.

In this article, my analysis focuses on direct-to-dish low Earth orbit (LEO) satellites with expected capabilities comparable to, or exceeding, those projected for SpaceX’s Starlink V3, which is anticipated to deliver up to 1 Terabit per second of total downlink capacity. For such satellites to represent a credible alternative to terrestrial gigabit connectivity, several thousand would need to be in operation. This would allow overlapping coverage areas, increasing effective throughput to household outdoor dishes across sparsely populated regions. Reaching such a scale may take years, even under optimistic deployment scenarios, highlighting the importance of aligning policy timelines with technological maturity.

GIGABITS IN THE EU – WHERE ARE WE, AND WHERE DO WE THINK WE WILL GO?

  • In 2023, Fibre-to-the-Premises (FTTP) rural HH coverage was ca. 52%. For the EU28, this means that approximately 16 million rural homes lack fiber coverage.
  • By 2030, projected FTTP deployment in the EU28 will result in household coverage reaching almost 85% of all rural homes (under so-called BaU conditions), leaving approximately 5.5 million households without it.
  • Due to inferior economics, it is estimated that approximately 10% to 15% of European households are “unconnectable” by FTTP (although not necessarily by FWA or broadband mobile in general).
  • EC estimated (in 2023) that over 80 billion euros in subsidies have been allocated in national budgets, with an additional 120 billion euros required to close the gigabit ambition gap by 2030 (e.g., over 10,000 euros per remaining rural household in 2023).

So, there is a considerable number of so-called “unconnectable” households within the European Union (i.e., EU28). These are, for example, isolated dwellings away from inhabited areas (e.g., settlements, villages, towns, and cities). They often lack the most basic fixed communications infrastructure, although some may have old copper lines or only relatively poor mobile coverage.

The figure below illustrates the actual state of FTTP deployment in rural households in 2023 (orange bars) as well as a Rural deployment scenario that extends FTTP deployment to 2030, using the maximum of the previous year’s deployment level and the average of the last three years’ deployment levels. Any level above 80% grows by 1% pa (arbitrarily chosen). The data source for the above is “Digital Decade 2024: Broadband Coverage in Europe 2023” by the European Commission. The FTTP pace has been chosen individually for suburban and rural areas to match the expectations expressed in the reports for 2030.
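
The extrapolation rule described above can be sketched in a few lines. Note that my reading of the rule (each year's pace is the larger of last year's pace and the three-year average pace, fixed at 1 percentage point per year once coverage exceeds 80%) is an interpretation, and the historical series below is illustrative, not the Commission's data:

```python
# Sketch of the rural FTTP extrapolation rule described in the text: the
# annual deployment pace is the max of last year's pace and the average pace
# of the last three years; above 80% coverage, growth is 1 pp per year.
# The starting series is illustrative, not the Commission's actual data.
def project_coverage(history: list, until_years: int) -> list:
    """Extend a coverage series (in %) by `until_years` annual steps."""
    cov = list(history)
    for _ in range(until_years):
        paces = [cov[i] - cov[i - 1] for i in range(1, len(cov))]
        if cov[-1] > 80.0:
            pace = 1.0  # arbitrary 1% p.a. above 80%, per the text
        else:
            last3 = paces[-3:]
            pace = max(paces[-1], sum(last3) / len(last3))
        cov.append(min(cov[-1] + pace, 100.0))
    return cov

# Illustrative rural FTTP coverage 2020-2023 (%), projected to 2030:
print(project_coverage([38.0, 43.0, 47.5, 52.0], 7))
```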

ARE LEO DIRECT-TO-DISH (D2D) SATELLITES A CREDIBLE ALTERNATIVE FOR THE “UNCONNECTABLES”?

  • For Europe, a non-EU-based (i.e., US-based) satellite constellation could be a very cost-effective alternative to closing the gigabit coverage gap.
  • Megabit connectivity (e.g., up to 100+ Mbps) is already available today with SpaceX Starlink LEO satellites in rural areas with poor broadband alternatives.
  • The SpaceX Starlink V2 satellite can provide approximately 100 Gbps (V1.5 ~ 20+ Gbps), and its V3 is expected to deliver 1,000 Gbps within the satellite’s coverage area, with a maximum coverage radius of over 500 km.
  • The V3 may have 320 beams (or more), each providing approximately ~3 Gbps (i.e., 320 x 3 Gbps is ca. 1 Tbps). With a frequency re-use factor of 40, 25 Gbps can be supplied within a unique coverage area. With “adjacent” satellites (off-nadir), the capacity within a unique coverage area can be enhanced by additional beams that overlap the primary satellite (nadir).
  • With an estimated EU28 “unconnectable” household density of approximately 1.5 per square kilometer, a single unique coverage area of 15,000 square kilometers would contain more than 20,000 households, sharing the roughly 20–25 Gbps of satellite capacity available within that area.
  • At a peak-hour user concurrency of 15% and a per-user demand of 1 Gbps, the backhaul demand would reach 3 terabits per second (Tbps). This means we have an oversubscription ratio of approximately 3:1, which must be met by a single 1 Tbps satellite, or could be served by three overlapping satellites.
  • This assumes a 100% take-up rate of the unconnectable HHs and that each would select a 1 Gbps service (assuming such would be available). In rural areas, the take-up rate may not be significantly higher than 60%, and not all households will require a 1 Gbps service.
  • This also assumes that there are no alternatives to LEO satellite direct-to-dish service, which seems unlikely for at least some of the 20,000 “unconnectable” households. Given the typical 5G coverage conditions associated with the frequency spectrum license conditions, one might hope for some decent 5G coverage; alas, it is unlikely to be gigabit in deep rural and isolated areas.
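
The peak-hour arithmetic in the bullets above can be reproduced directly; all inputs are the figures quoted in the bullets (with the household count rounded to the ~20,000 stated):

```python
# Peak-hour backhaul demand for the "unconnectable" coverage area described
# in the bullets: ~1.5 HH/km² over 15,000 km², rounded to the ~20,000
# households quoted in the text.
households = 20_000
concurrency = 0.15       # 15% of households active at peak hour
per_user_gbps = 1.0      # each active user demands 1 Gbps

demand_gbps = households * concurrency * per_user_gbps
satellite_gbps = 1_000   # one Starlink-V3-class satellite (~1 Tbps)

oversubscription = demand_gbps / satellite_gbps
print(f"Peak demand: {demand_gbps / 1000:.1f} Tbps, "
      f"oversubscription vs one satellite: {oversubscription:.0f}:1")
```

Three terabits per second of peak demand against one 1 Tbps satellite is the 3:1 oversubscription in the bullets, or alternatively three overlapping satellites with no oversubscription.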

For example, consider the Starlink V1.5 satellite, which has a total capacity of approximately 25 Gbps, comprising 32 beams that each deliver 800 Mbps, including dual polarization, to a ground-based user dish. It can provide a maximum of 6.4 Gbps over a minimum area of ca. 6,000 km² at nadir, with an Earth-based dish directly beneath the satellite. If the coverage area is situated in a UK rural area, for example, we would expect to find, on average, 150,000 rural households, using an average of 25 rural homes per km². If a household demands 100 Mbps at peak, only about 60 households can be online at full load concurrently per area. With 10% concurrency, this implies a total of 600 households per area out of 150,000 homes. Thus, 1 in 250 households could be allowed to subscribe to a Starlink V1.5 if the target is 100 Mbps per home and a concurrency factor of 10% within the coverage area. This is equivalent to stating that the oversubscription ratio is 250:1, and it reflects the tension between available satellite capacity and theoretical rural demand density. In rural UK areas, the household density is too high relative to the available beam capacity to allow universal subscription at 100 Mbps unless more satellites provide overlapping service.

For a V1.5 satellite, we can support four regions (i.e., frequency reuse groups), each with a maximum throughput of 6.4 Gbps. Thus, the satellite can support a total of 2,400 households (i.e., 4 x 600) with a peak demand of 100 Mbps and a concurrency rate of 10%. As other satellites (off-nadir) can support the primary satellite, some areas' demand may be served by two to three different satellites, providing a multiplier effect that increases the capacity offered. The Starlink V2 satellite is reportedly capable of supporting up to 100 Gbps in total (approximately four times that of V1.5), while the V3 will support up to 1 Tbps, or 40 times that of V1.5. The number of beams, and consequently the number of independent frequency groups, as well as spectral efficiency, are expected to improve over V1.5, all factors that will enhance the overall capacity of the newer Starlink satellite generations.
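
The V1.5 worked example above reduces to one small function. The inputs are the figures quoted in the text; note that the text rounds 64 concurrent users down to 60, giving 600 rather than 640 subscribers per area:

```python
# The Starlink V1.5 worked example, parameterized. All inputs are the
# figures quoted in the text; the function is just the arithmetic.
# Note: the text rounds 64 concurrent users down to ~60 (hence 600/2,400).
def subscribers_per_coverage_area(area_capacity_gbps: float,
                                  peak_rate_mbps: float,
                                  concurrency: float) -> int:
    """Households that can subscribe within one unique coverage area."""
    concurrent_users = area_capacity_gbps * 1000 / peak_rate_mbps
    return round(concurrent_users / concurrency)

# V1.5: 6.4 Gbps per unique area, 100 Mbps peak demand, 10% concurrency:
per_area = subscribers_per_coverage_area(6.4, 100, 0.10)
total = per_area * 4  # four frequency-reuse groups per satellite
print(per_area, total)
```

The same function applied to V2 (~100 Gbps) or V3 (~1 Tbps) totals shows the roughly 4x and 40x capacity multipliers mentioned above.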

By 2030, the EU28 rural areas are expected to achieve nearly 85% FTTP coverage under business-as-usual deployment scenarios. This would leave approximately 5.5 million households, referred to as “unconnectables,” without direct access to high-speed fiber. These households are typically isolated, located in sparsely populated or geographically challenging regions where the economics of fiber deployment become prohibitive. Although there may be alternative broadband options, such as FWA, 5G mobile coverage, or copper, it is unlikely that such “unconnectable” homes would sustainably have a gigabit connection.

This may be where LEO satellite constellations enter the picture as a possible alternative to deploying fiber optic cables in uneconomical areas, such as those that are unconnectable. The anticipated capabilities of Starlink’s third-generation (V3) satellites, offering approximately 1 Tbps of total downlink capacity with advanced beamforming and frequency reuse, already make them a viable candidate for servicing low-density rural areas, assuming reasonable traffic models similar to those of an Internet Service Provider (ISP). With modest overlapping coverage from two or three such satellites, these systems could deliver gigabit-class service to tens of thousands of dispersed households without (much) oversubscription, even assuming relatively high concurrency and usage.

Considering this, there seems little doubt that a LEO constellation only slightly more capable than SpaceX’s Starlink V3 satellite could fully support the broadband needs of the remaining unconnected European households expected by 2030. This also aligns well with the technical and economic strengths of LEO satellites: they are ideally suited for delivering high-capacity service to regions where population density is too low to justify terrestrial infrastructure, yet where digital inclusion remains equally essential.

LOW-EARTH ORBIT SATELLITES DIRECT-TO-DISRUPTION.

In my blog post “Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?”, I provided some straightforward reasons why LEO satellites with direct-to-unmodified-smartphone capabilities (e.g., Lynk Global, AST SpaceMobile) would not make existing cellular networks obsolete. They would be of most value in remote or very rural areas with no cellular coverage (as explained very nicely by Lynk Global), offering a connection alternative to satellite phones such as Iridium, and thus complementing existing terrestrial cellular networks. Despite the hype, we should not expect a direct disruption of regular terrestrial cellular networks by LEO satellite D2C providers.

Of course, the question could also be asked whether LEO satellites directed to an outdoor (terrestrial) dish could threaten existing fiber optic networks, the business case, and the value proposition. After all, the SpaceX Starlink V3 satellite, not yet operational, is expected to support 1 Terabit per second (Tbps) over a coverage area of several thousand kilometers in diameter. It is no doubt an amazing technological achievement for SpaceX to have achieved a 10x leap in throughput from its present generation V2 (~100 Gbps).

However, while a V3-like satellite may offer an (impressive) total capacity of 1 Tbps, this capacity is not uniformly available across its entire footprint. It is distributed across multiple beams, potentially 256 or more, each with a bandwidth of approximately 4 Gbps (i.e., 1 Tbps / 256 beams). With a frequency reuse factor of, for example, 5, the effective usable capacity per unique coverage area becomes a fraction of the satellite’s total throughput. This means that within any given beam footprint, the satellite can only support a limited number of concurrent users at high bandwidth levels.

As a result, such a satellite cannot support more than roughly a thousand households with concurrent 1 Gbps demand in any single area (or, alternatively, about 10,000 households with 100 Mbps concurrent demand). This level of support would be equivalent to a small FTTP (sub)network serving no more than 20,000 households at a 50% uptake rate (i.e., 10,000 connected homes) and assuming a concurrency of 10%. A deployment of this scale would typically be confined to a localized, dense urban or peri-urban area, rather than the vast rural regions that LEO systems are expected to serve.
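
A minimal sketch of the V3-class beam arithmetic discussed above, using only the figures quoted in the text (1 Tbps, 256 beams, a reuse factor of 5, 10% concurrency, 50% uptake):

```python
# Starlink-V3-class beam arithmetic from the text: 1 Tbps split across 256
# beams, a frequency reuse factor of 5 limiting each unique coverage area,
# then the FTTP-equivalence figures (10% concurrency, 50% uptake).
total_gbps = 1000.0            # ~1 Tbps total downlink capacity
beams = 256
reuse_factor = 5

per_beam_gbps = total_gbps / beams         # ~3.9 Gbps per beam
per_area_gbps = total_gbps / reuse_factor  # capacity per unique coverage area

concurrent_1gbps = total_gbps / 1.0        # ~1,000 concurrent 1 Gbps users
connected = concurrent_1gbps / 0.10        # 10% concurrency -> ~10,000 subs
homes_passed = connected / 0.50            # 50% uptake -> ~20,000 homes passed

print(per_beam_gbps, per_area_gbps, connected, homes_passed)
```

This is the sense in which one V3-class satellite is "equivalent" to a small FTTP subnetwork of roughly 20,000 homes passed, as stated above.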

In contrast, a single Starlink V3-like satellite would cover a vast region, capable of supporting similar or greater numbers of users, including those in remote, low-density areas that FTTP cannot economically reach. The satellite solution described here is thus not designed to densify urban broadband, but rather to reach rural, remote, and low-density areas where laying fiber is logistically or economically impractical. Therefore, such satellites and conventional large-scale fiber networks are not in direct competition: satellites cannot match fiber’s density, scale, or cost-efficiency in high-demand areas. Instead, they complement fiber infrastructure and reinforce the case for hybrid infrastructure strategies, in which fiber serves the dense core and LEO satellites extend the digital frontier.

However, terrestrial providers must closely monitor their FTTP deployment economics and refrain from extending too far into deep rural areas beyond a certain household density, a threshold which is likely to increase over time as satellite capabilities improve. The premise of this blog is that capable LEO satellites by 2030 could serve unconnected households that are unlikely to have any commercial viability for terrestrial fiber and have no other gigabit coverage option. Within the EU28, this represents approximately 5.5 million remote households. A Starlink V3-like 1 Tbps satellite could provide a gigabit service (occasionally) to those households and certainly hundreds of megabits per second per isolated household. Moreover, it is likely that over time, more capable satellites will be launched, with SpaceX being the most likely candidate for such an endeavor if it maintains its current pace of innovation. Such satellites will likely become increasingly interesting for household densities above 2 households per square kilometer. However, where an FTTP network has already been deployed, it seems unlikely that satellite broadband would render the terrestrial infrastructure obsolete, as long as the fiber service is priced competitively against the satellite alternative.

LEO satellite direct-to-dish (D2D) based broadband networks may be a credible and economical alternative to deploying fiber in low-density rural households. The density boundary of viable substitution for a fiber connection with a gigabit satellite D2D connection may shift inward (from deep rural, low-density household areas). This reinforces the case for hybrid infrastructure strategies, in which fiber serves the denser regions and LEO satellites extend the digital frontier to remote and rural areas.

THE USUAL SUSPECT – THE PUN INTENDED.

By 2030, SpaceX’s Starlink will operate one of the world’s most extensive low Earth orbit (LEO) satellite constellations. As of early 2025, the company has launched more than 6,000 satellites into orbit; however, most of these, including V1, V1.5, and V2, are expected to cease operation by 2030. Industry estimates suggest that Starlink could have between 15,000 and 20,000 operational satellites by the end of the decade, which I anticipate to be mainly V3 and possibly a share of V4. This projection depends largely on successfully scaling SpaceX’s Starship launch vehicle, which is designed to deploy 60 or more next-generation V3 satellites per mission at the planned cadence. However, it is essential to note that while SpaceX has filed applications with the International Telecommunication Union (ITU) and obtained FCC authorization for up to 12,000 satellites, the frequently cited figure of 42,000 satellites includes additional satellites that are currently proposed but not yet fully authorized.

The figure above, based on an idea of John Strand of Strand Consult, provides an illustrative comparison of the rapid innovation and manufacturing cycles of SpaceX LEO satellites versus the slower progression of traditional satellite development and spectrum policy processes, highlighting the growing gap between technological advancement and regulatory adaptation. This is one of the biggest challenges that regulatory institutions and policy regimes face today.

Amazon’s Project Kuiper has a much smaller planned constellation. The Federal Communications Commission (FCC) has authorized Amazon to deploy 3,236 satellites under its initial phase, with a deadline requiring that at least 1,600 be launched and operational by July 2026. Amazon began launching test satellites in 2024 and aims to roll out its service in late 2025 or early 2026. On April 28, 2025, Amazon launched its first 27 operational satellites for Project Kuiper aboard a United Launch Alliance Atlas (ULA) V rocket from Cape Canaveral, Florida. This marks the beginning of Amazon’s deployment of its planned 3,236-satellite constellation aimed at providing global broadband internet coverage. Though Amazon has hinted at potential expansion beyond its authorized count, any Phase 2 remains speculative and unapproved. If such an expansion were pursued and granted, the constellation could eventually grow to 6,000 satellites, although no formal filings have yet been made to support the higher amount.

China is rapidly advancing its low Earth orbit (LEO) satellite capabilities, positioning itself as a formidable competitor to SpaceX’s Starlink by 2030. Two major Chinese LEO satellite programs are at the forefront of this effort: the Guowang (13,000) and Qianfan (15,000) constellations. So, by 2030, it is reasonable to expect that China will field a national LEO satellite broadband system with thousands of operational satellites, focused not just on domestic coverage but also on extending strategic connectivity to Belt and Road Initiative (BRI) countries, as well as regions in Africa, Asia, and South America. Unlike SpaceX’s commercially driven approach, China’s system is likely to be closely integrated with state objectives, combining broadband access with surveillance, positioning, and secure communication functionality. While it remains unclear whether China will match SpaceX’s pace of deployment or technological performance by 2030, its LEO ambitions are unequivocally driven by geopolitical considerations. They will likely shape European spectrum policy and infrastructure resilience planning in the years ahead. Guowang and Qianfan are emblematic of China’s dual-use strategy, which involves developing technologies for both civilian and military applications. This approach is part of China’s broader Military-Civil Fusion policy, which seeks to integrate civilian technological advancements into military capabilities. The dual-use nature of these satellite constellations raises concerns about their potential military applications, including surveillance and communication support for the People’s Liberation Army.

AN ILLUSTRATION OF COVERAGE – UNITED KINGDOM.

It takes approximately 172 Starlink beams to cover the United Kingdom, with 8 to 10 satellites overhead simultaneously. Persistent UK coverage therefore requires a constellation on the order of 150 satellites spread across appropriate orbits. Starlink’s 53° inclination orbital shell is optimized for mid-latitude regions, providing frequent satellite passes and dense, overlapping beam coverage over areas like southern England and central Europe. This results in higher throughput and more consistent connectivity with fewer satellites. In contrast, regions north of 53°N, such as northern England and Scotland, lie outside this optimal zone and depend on higher-inclination shells (70° and 97.6°), which have fewer satellites and wider, less efficient beams. As a result, coverage in these northern areas is less dense, with lower signal quality and increased latency.
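As a rough sanity check on these numbers, a back-of-envelope tiling estimate can be sketched in a few lines of Python. The UK land area (~244,000 km²) and the ~21 km effective beam radius are my own illustrative assumptions, not official Starlink figures, and the packing is idealized (gap-free, non-overlapping):

```python
import math

# Back-of-envelope: how many beams does it take to tile the UK?
# Assumptions (illustrative, not official): UK area ~244,000 km^2,
# effective beam footprint radius ~21 km, ideal gap-free packing.
UK_AREA_KM2 = 244_000
BEAM_RADIUS_KM = 21.0

beam_area = math.pi * BEAM_RADIUS_KM ** 2          # ~1,385 km^2 per beam
beams_needed = math.ceil(UK_AREA_KM2 / beam_area)  # idealized tiling count

print(f"Beam footprint area: {beam_area:.0f} km^2")
print(f"Beams for full UK coverage: ~{beams_needed}")
```

With these assumed inputs the estimate lands in the same ballpark as the ~172 beams quoted above; a different beam radius assumption shifts the count quadratically.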

For this blog, I developed a Python script of fewer than 600 lines (it’s a physicist’s code, so unlikely to be super efficient) to simulate and analyze Starlink’s satellite coverage and throughput over the United Kingdom using real orbital data. By integrating satellite propagation, beam modeling, and geographic visualization, it enables a detailed assessment of regional performance from current Starlink deployments across multiple orbital shells. Its primary purpose is to assess how the currently deployed Starlink constellation performs over UK territory by modeling where satellites pass, how their beams are steered, and how often any given area receives coverage. The simulation draws live TLE (Two-Line Element) data from Celestrak, a well-established source for satellite orbital elements. Using the Skyfield library, the code propagates the positions of active Starlink satellites over a 72-hour period, sampling every 5 minutes to track their subpoints across the United Kingdom. There is no limitation on the duration or sampling interval. Choosing a more extended simulation period, such as 72 hours, provides a more statistically robust and temporally representative view of satellite coverage by averaging out orbital phasing artifacts and short-term gaps. It ensures that all satellites complete multiple orbits, allowing for more uniform sampling of ground tracks and beam coverage, especially from shells with lower satellite densities, such as the 70° and 97.6° inclinations. This results in smoother, more realistic estimates of average signal density and throughput across the entire region.
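To illustrate the propagation step without requiring Skyfield or live TLE data, here is a heavily simplified, standard-library-only sketch of a single satellite’s subpoint track, assuming a circular orbit and a spherical, uniformly rotating Earth. The actual script uses Skyfield with live Celestrak TLEs, which handles the real orbital mechanics properly; this is just the geometric idea:

```python
import math

def subpoint(t_min, inclination_deg=53.0, altitude_km=550.0, raan_deg=0.0):
    """Approximate (lat, lon) of a satellite subpoint at time t (minutes),
    for a circular orbit over a spherical, uniformly rotating Earth.
    A crude stand-in for Skyfield's TLE-based propagation."""
    RE = 6371.0                 # Earth radius, km
    MU = 398600.4418            # Earth's GM, km^3/s^2
    a = RE + altitude_km
    period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60.0  # ~95.6 min
    u = 2 * math.pi * (t_min / period_min)     # argument of latitude
    i = math.radians(inclination_deg)
    lat = math.degrees(math.asin(math.sin(i) * math.sin(u)))
    # Longitude: in-plane motion projected onto the equator, minus
    # Earth's rotation (24 h solar-day approximation).
    lon_orbit = math.degrees(math.atan2(math.cos(i) * math.sin(u), math.cos(u)))
    lon = (raan_deg + lon_orbit - 360.0 * t_min / (24 * 60)) % 360.0
    return lat, lon - 360.0 if lon > 180.0 else lon

# Sample the ground track every 5 minutes over 72 hours, as in the blog's
# simulation. The subpoint latitude never exceeds the inclination.
track = [subpoint(t) for t in range(0, 72 * 60, 5)]
peak_lat = max(lat for lat, _ in track)
print(f"Max subpoint latitude: {peak_lat:.1f} deg")
```

This makes the 53°N coverage boundary visible directly: a 53°-inclined satellite simply never passes north of 53° latitude, so Scotland only ever sees such satellites low on the horizon.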

Each satellite is classified into one of three orbital shells based on inclination angle: 53°, 70°, and 97.6°. These shells are simulated separately and collectively to understand their individual and combined contributions to UK coverage. The 53° shell dominates service in the southern part of the UK, characterized by its tight orbital band and high satellite density (see the Table below). The 70° shell supplements coverage in northern regions, while the 97.6° polar shell offers sparse but critical high-latitude support, particularly in Scotland and surrounding waters. The simulation assumes several (critical) parameters for each satellite type, including the number of beams per satellite, the average beam radius, and the estimated throughput per beam. These assumptions reflect engineering estimates and publicly available Starlink performance information, but are deliberately simplified to produce regional-level coverage and throughput estimates, rather than user-specific predictions. The simulation does not account for actual user terminal distribution, congestion, or inter-satellite link (ISL) performance, focusing instead on geographic signal and capacity potential.
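The shell classification step can be sketched as follows. The per-shell beam counts, radii, and per-beam throughputs below are illustrative placeholders for the kind of assumptions the simulation takes as input, not the exact values used in my script:

```python
# Classify a satellite into one of the three shells by its TLE inclination,
# and attach per-shell modeling assumptions. The numbers are illustrative
# estimates, not official SpaceX figures.
SHELLS = {
    "53.0": dict(inclination=53.0, beams=24, beam_radius_km=21.0,
                 mbps_per_beam=800),   # dense shell, mostly V1.5 satellites
    "70.0": dict(inclination=70.0, beams=24, beam_radius_km=25.0,
                 mbps_per_beam=700),   # sparser northern supplement
    "97.6": dict(inclination=97.6, beams=24, beam_radius_km=30.0,
                 mbps_per_beam=600),   # sparse polar shell
}

def classify_shell(inclination_deg, tol=2.0):
    """Return the shell key whose inclination is closest, within tol degrees."""
    key = min(SHELLS, key=lambda k: abs(SHELLS[k]["inclination"] - inclination_deg))
    if abs(SHELLS[key]["inclination"] - inclination_deg) <= tol:
        return key
    return None  # satellite belongs to a shell we do not model

print(classify_shell(53.2))   # -> "53.0"
print(classify_shell(97.7))   # -> "97.6"
```

In the real script, the inclination comes straight from each satellite’s TLE, and the attached parameters then drive the beam footprint and throughput modeling for that satellite.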

These parameters were used to infer beam footprints and assign realistic signal density and throughput values across the UK landmass. The satellite type was inferred from its shell (e.g., most 53° shell satellites are currently V1.5), and beam properties were adjusted accordingly.

The table above presents the core beam modeling parameters and satellite-specific assumptions used in the Starlink simulation over the United Kingdom. It includes general values for beam steering behavior, such as Gaussian spread, steering limits, city-targeting probabilities, and beam spacing constraints, as well as performance characteristics tied to specific satellite generations to the extent they are known (e.g., Starlink V1.5, V2 Mini, and V2 Full). These assumptions govern the placement of beams on the Earth’s surface and the capacity each beam can deliver. For instance, the City Exclusion Radius of 0.25 degrees corresponds to a ~25 km buffer around urban centers, where beam placement is probabilistically discouraged. Similarly, the beam radius and throughput per beam values align with known design specifications submitted by SpaceX to the U.S. Federal Communications Commission (FCC), particularly for Starlink’s V1.5 and V2 satellites. The table above also defines overlap rules, specifying the maximum number of beams that can overlap in a region and the maximum number of satellites that can contribute beams to a given point. This helps ensure that simulations reflect realistic network constraints rather than theoretical maxima.
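The city-exclusion rule can be sketched as a simple probabilistic filter on candidate beam centers. The ~25 km buffer comes from the table; the two-city list and the 80% rejection probability are illustrative assumptions of mine, not the simulation’s exact values:

```python
import math
import random

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on a spherical Earth."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

CITIES = [("London", 51.507, -0.128), ("Birmingham", 52.480, -1.903)]
EXCLUSION_KM = 25.0    # ~0.25 deg buffer around urban centers (from the table)
DISCOURAGE_P = 0.8     # assumed probability of rejecting a beam inside a buffer

def accept_beam(lat, lon, rng):
    """Probabilistically discourage beam centers inside city buffers."""
    for _, clat, clon in CITIES:
        if haversine_km(lat, lon, clat, clon) < EXCLUSION_KM:
            return rng.random() > DISCOURAGE_P   # accept only ~20% of the time
    return True

rng = random.Random(42)
print(accept_beam(54.0, -2.0, rng))   # rural placement -> True
```

A candidate beam center in open countryside is always accepted, while one landing inside a city buffer survives only a minority of the time, which is what produces the “discouraged, not forbidden” urban placement behavior described above.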

Overall, the developed code offers a geographically and physically grounded simulation of how the existing Starlink network performs over the UK. It helps explain observed disparities in coverage and throughput by visualizing the contribution of each shell and satellite generation. This modeling approach enables planners and researchers to quantify satellite coverage performance at national and regional scales, providing insight into current service levels as well as a basis for exploring future constellation evolution, which is not discussed here.

The figure illustrates a 72-hour time-averaged Starlink coverage density over the UK. The asymmetric signal strength pattern reflects the orbital geometry of Starlink’s 53° inclination shell, which concentrates satellite coverage over southern and central England. Northern areas receive less frequent coverage due to fewer satellite passes and reduced beam density at higher latitudes.

This image above presents the Starlink Average Coverage Density over the United Kingdom, a result from a 72-hour simulation using real satellite orbital data from Celestrak. It illustrates the mean signal exposure across the UK, where color intensity reflects the frequency and density of satellite beam illumination at each location.

At the center of the image, a bright yellow core indicating the highest signal strength is clearly visible over the English Midlands, covering cities such as Birmingham, Leicester, and Bristol. The signal strength gradually declines outward in a concentric pattern—from orange to purple—as one moves northward into Scotland, west toward Northern Ireland, or eastward along the English coast. While southern cities, such as London, Southampton, and Plymouth, fall within high-coverage zones, northern cities, including Glasgow and Edinburgh, lie in significantly weaker regions. The decline in signal intensity is especially apparent beyond the 56°N latitude. This pattern is entirely consistent with what we know about the structure of the Starlink satellite constellation. The dominant contributor to coverage in this region is the 53° inclination shell, which contains 3,848 satellites spread across 36 orbital planes. This shell is designed to provide dense, continuous coverage to heavily populated mid-latitude regions, such as the southern United Kingdom, continental Europe, and the continental United States. However, its orbital geometry restricts it to a latitudinal range that ends near 53 to 54°N. As a result, southern and central England benefit from frequent satellite passes and tightly packed overlapping beams, while the northern parts of the UK do not. Particularly, Scotland lies at or beyond the shell’s effective coverage boundary.

The simulation may indicate how Starlink’s design prioritizes population density and market reach. Northern England receives only partial benefit, while Scotland and Northern Ireland fall almost entirely outside the core coverage of the 53° shell. Although some coverage in these areas is provided by higher-inclination shells (specifically, the 70° shell with 420 satellites and the 97.6° polar shell with 227 satellites), these are sparser in both the number of satellites and the orbital planes. Their beams may also be broader and thus less focused, resulting in lower average signal strength in high-latitude regions.

So, why does the coverage not form textbook-neat hexagonal cells, uniform across the UK? The simple answer is that real-world satellite constellations don’t behave like the static, idealized diagrams of hexagonal beam tiling often used in textbooks or promotional materials. What you’re seeing in the image is a time-averaged simulation of Starlink’s actual coverage over the UK, reflecting the dynamic and complex nature of low Earth orbit (LEO) systems like Starlink’s. Unlike geostationary satellites, LEO satellites orbit the Earth roughly every 90 minutes and move rapidly across the sky. Each satellite only covers a specific area for a short period before passing out of view over the horizon. This movement causes beam coverage to constantly shift, meaning that any given spot on the ground is covered by different satellites at different times. While individual satellites may emit beams arranged in a roughly hexagonal pattern, these patterns move, rotate, and deform continuously as the satellite passes overhead. The beams also vary in shape and strength depending on their angle relative to the Earth’s surface, becoming elongated and weaker when projected off-nadir, i.e., when the satellite is not directly overhead. Another key reason lies in the structure of Starlink’s orbital configuration. Most of the UK’s coverage comes from satellites in the 53° inclination shell, which is optimized for mid-latitude regions. As a result, southern England receives significantly denser and more frequent coverage than Scotland or Northern Ireland, which are closer to or beyond the edge of this shell’s optimal zone. Satellites serving higher latitudes originate from less densely populated orbital shells at 70° and 97.6°, which result in fewer passes and wider, less efficient beams.
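The off-nadir elongation effect follows directly from spherical geometry. A minimal sketch, assuming a 550 km shell, a spherical Earth, and a small beamwidth (so the footprint scales simply with slant range and ground incidence):

```python
import math

RE, H = 6371.0, 550.0   # Earth radius and assumed Starlink shell altitude, km

def off_nadir_geometry(theta_deg):
    """For a satellite boresight pointed theta degrees off nadir, return the
    zenith angle at the ground point, the slant range, and the beam's
    elongation factor relative to a nadir-pointing beam."""
    theta = math.radians(theta_deg)
    sin_z = (RE + H) / RE * math.sin(theta)   # law of sines in the O-S-P triangle
    z = math.asin(min(1.0, sin_z))            # zenith angle at the ground point
    lam = z - theta                           # Earth central angle
    slant = RE * math.sin(lam) / math.sin(theta) if theta > 0 else H
    # Footprint grows ~ (slant/H) from range, plus an extra 1/cos(z)
    # stretch along the beam direction from the oblique incidence.
    elongation = (slant / H) / math.cos(z)
    return math.degrees(z), slant, elongation

for th in (0, 20, 40):
    z, d, e = off_nadir_geometry(th)
    print(f"off-nadir {th:2d} deg: zenith {z:5.1f} deg, "
          f"slant {d:6.1f} km, elongation x{e:.2f}")
```

At nadir the factor is 1 by construction; by 40° off-nadir the slant range and oblique incidence together roughly double the footprint’s long axis, which is why edge-of-coverage beams over Scotland are wider and weaker than beams landing directly under the 53° shell.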

The above heatmap does not illustrate a snapshot of beam locations at a specific time, but rather an averaged representation of how often each part of the UK was covered over a simulation period. This type of averaging smooths out the moment-to-moment beam structure, revealing broader patterns of coverage density instead. That’s why we see a soft gradient from intense yellow in the Midlands, where overlapping beams pass more frequently, to deep purple in northern regions, where passes are less common and less centered.

The figure illustrates an idealized hexagonal beam coverage footprint over the UK. For visual clarity, only a subset of hexagons is shown filled with signal intensity (yellow core to purple edge), to illustrate a textbook-like uniform tiling. In reality, satellite beams from LEO constellations, such as Starlink, are dynamic, moving, and often non-uniform due to orbital motion, beam steering, and geographic coverage constraints.

The two charts below provide a visual confirmation of the spatial coverage dynamics behind the Starlink signal strength distribution over the United Kingdom. Both are based on a 72-hour simulation using real Starlink satellite data obtained from Celestrak, and they accurately reflect the operational beam footprints and orbital tracks of currently active satellites over the United Kingdom.

This figure illustrates time-averaged Starlink coverage density over the UK with beam footprints (left) and satellite ground tracks (right) by orbital shell. The high density of beams and tracks from the 53° shell over southern UK leads to stronger and more consistent coverage. At the same time, northern regions receive fewer, more widely spaced passes from higher-inclination shells (70° and 97.6°), resulting in lower aggregate signal strength.

The first chart displays the beam footprints (i.e., the left side chart above) of Starlink satellites across the UK, color-coded by orbital shell: cyan for the 53° shell, green for the 70° shell, and magenta for the 97.6° polar shell. The concentration of cyan beam circles in southern and central England vividly demonstrates the dominance of the 53° shell in this region. These beams are tightly packed and frequent, explaining the high signal coverage in the earlier signal strength heatmap. In contrast, northern England and Scotland are primarily served by green and magenta beams, which are sparser and cover larger areas — a clear indication of the lower beam density from the higher-inclination shells.

The second chart illustrates the satellite ground tracks (i.e., the right side chart above) over the same period and geographic area. Again, the saturation of cyan lines in the southern UK underscores the intensive pass frequency of satellites in the 53° inclined shell. As one moves north of approximately 53°N, these tracks vanish almost entirely, and only the green (70° shell) and magenta (97.6° shell) paths remain. These higher inclination tracks cross through Scotland and Northern Ireland, but with less spatial and temporal density, which supports the observed decline in average signal strength in those areas.

Together, these two charts provide spatial and orbital validation of the signal strength results. They confirm that the stronger signal levels seen in southern England stem directly from the concentrated beam targeting and denser satellite presence of the 53° shell. Meanwhile, the higher-latitude regions rely on less saturated shells, resulting in lower signal availability and throughput. This outcome is not theoretical — it reflects the live state of the Starlink constellation today.

The figure illustrates the estimated average Starlink throughput across the United Kingdom over a 72-hour window. Throughput is highest over southern and central England due to dense satellite traffic from the 53° orbital shell, which provides overlapping beam coverage and short revisit times. Northern regions experience reduced throughput from sparser satellite passes and less concentrated beam coverage.

The above chart shows the estimated average throughput of Starlink Direct-2-Dish across the United Kingdom, simulated over 72 hours using real orbital data from Celestrak. The values are expressed in Megabits per second (Mbps) and are presented as a heatmap, where higher throughput regions are shown in yellow and green, and lower values fade into blue and purple. The simulation incorporates actual satellite positions and coverage behavior from the three operational inclination shells currently providing Starlink service to the UK. Consistent with the signal strength, beam footprint density, and orbital track density, the best quality and the greatest supplied capacity are available south of 53°N latitude.

The strongest throughput is concentrated in a horizontal band stretching from Birmingham through London to the southeast, as well as westward into Bristol and south Wales. In this region, the estimated average throughput peaks at over 3,000 Mbps, which can support more than 30 concurrent customers each demanding 100 Mbps within the coverage area or up to 600 households with an oversubscription rate of 1 to 20. This aligns closely with the signal strength and beam density maps also generated in this simulation and is driven by the dense satellite traffic of the 53° inclination shell. These satellites pass frequently over southern and central England, where their beams overlap tightly and revisit times are short. The availability of multiple beams from different satellites at nearly all times drives up the aggregate throughput experienced at ground level. Throughput falls off sharply beyond approximately 54°N. In Scotland and Northern Ireland, values typically stay well below 1,000 Mbps. This reduction directly reflects the sparser presence of higher-latitude satellites from the 70° and 97.6° shells, which are fewer in number and more widely spaced, resulting in lower revisit frequencies and broader, less concentrated beams. The throughput map thus offers a performance-level confirmation of the underlying orbital dynamics and coverage limitations seen in the satellite and beam footprint charts.
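The household arithmetic in this paragraph is simple enough to verify directly, using the text’s own figures (3,000 Mbps of time-averaged area capacity, 100 Mbps per concurrent customer, 1:20 oversubscription):

```python
# Worked example from the text: area capacity vs. servable households.
area_capacity_mbps = 3_000   # peak time-averaged capacity over the area
per_user_mbps = 100          # demand per concurrent customer
oversubscription = 20        # 1:20 contention ratio

concurrent_users = area_capacity_mbps // per_user_mbps   # simultaneous users
households = concurrent_users * oversubscription         # homes servable

print(f"Concurrent 100 Mbps users: {concurrent_users}")
print(f"Households at 1:{oversubscription} contention: {households}")
```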

While the above map estimates throughput in realistic terms, it is essential to understand why it does not reflect the theoretical maximum performance implied by Starlink’s physical layer capabilities. For example, a Starlink V1.5 satellite supports eight user downlink channels, each with 250 MHz of bandwidth, which in theory amounts to a total of 2 GHz of spectrum. Similarly, if one assumes 24 beams, each capable of delivering 800 Mbps, that would suggest a satellite capacity in the range of approximately 19–20 Gbps. However, these peak figures assume an ideal case with full spectrum reuse and optimized traffic shaping. In practice, the estimated average throughput shown here is the result of modeling real beam overlap and steering constraints, satellite pass timing, ground coverage limits, and the fact that not all beams are always active or directed toward the same location. Moreover, local beam capacity is shared among users and dynamically managed by the constellation. Therefore, the chart reflects a realistic, time-weighted throughput for a given geographic location, not a per-satellite or per-user maximum. It captures the outcome of many beams intermittently contributing to service across 72 hours, modulated by orbital density and beam placement strategy, rather than theoretical peak link rates.
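The theoretical peak figures quoted above can be reproduced in a few lines. These are the text’s assumed values for a V1.5-class satellite, not official SpaceX specifications:

```python
# Idealized peak-capacity check for a Starlink V1.5-class satellite,
# using the assumptions quoted in the text.
channels = 8
channel_bw_mhz = 250
total_spectrum_ghz = channels * channel_bw_mhz / 1000   # total user spectrum

beams = 24
mbps_per_beam = 800
peak_gbps = beams * mbps_per_beam / 1000                # ideal-case capacity

print(f"Total user downlink spectrum: {total_spectrum_ghz:.1f} GHz")
print(f"Idealized satellite capacity: {peak_gbps:.1f} Gbps")
```

The gap between this ~19 Gbps ideal and the time-weighted map values is exactly the point of the paragraph: beams are not all active, not all pointed at the same area, and their capacity is shared and dynamically managed.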

A valuable next step in advancing the simulation model would be the integration of empirical user experience data across the UK footprint. If datasets such as comprehensive Ookla performance measurements (e.g., Starlink-specific download and upload speeds, latency, and jitter) were available with sufficient geographic granularity, the current Python model could be calibrated and validated against real-world conditions. Such data would enable the adjustment of beam throughput assumptions, satellite visibility estimates, and regional weighting factors to better reflect the actual service quality experienced by users. This would enhance the model’s predictive power, not only in representing average signal and throughput coverage, but also in identifying potential bottlenecks, underserved areas, or mismatches between orbital density and demand.

It is also important to note that this work relies on a set of simplified heuristics for beam steering, which are designed to make the simulation both tractable and transparent. In this model, beams are steered within a fixed angular distance from each satellite’s subpoint, with probabilistic biases against cities and simple exclusion zones (i.e., I operate with an exclusion radius of approximately 25 km or more). However, in reality, Starlink’s beam steering logic is expected to be substantially more advanced, employing dynamic optimization algorithms that account for real-time demand, user terminal locations, traffic load balancing, and satellite-to-satellite coordination via laser inter-satellite links. Starlink has the crucial (and obvious) operational advantage of knowing exactly where its customers are, allowing it to direct capacity where it is needed most, avoid congestion (to an extent), and dynamically adapt coverage strategies. This level of real-time awareness and adaptive control is not replicated in this analysis, which assumes no knowledge of actual user distribution and treats all geographic areas equally.

As such, the current Python code provides a first-order geographic approximation of Starlink coverage and capacity potential, not a reflection of the full complexity and intelligence of SpaceX’s actual network management. Nonetheless, it offers a valuable structural framework that, if calibrated with empirical data, could evolve into a much more powerful tool for performance prediction and service planning.

Median Starlink download speeds in the United Kingdom, as reported by Ookla, from Q4 2022 to Q4 2024, indicate a general decline through 2023 and early 2024, followed by a slight recovery in late 2024. Source: Ookla.com.

The decline in real-world median user speeds, observed in the chart above, particularly from Q4 2023 to Q3 2024, may reflect increasing congestion and uneven coverage relative to demand, especially in areas outside the dense beam zones of the 53° inclination shell. This trend supports the simulation’s findings: while orbital geometry enables strong average coverage in the southern UK, northern regions rely on less frequent satellite passes from higher-inclination shells, which limits performance. The recovery of the median speed in Q4 2024 could be indicative of new satellite deployments (e.g., more V2 Minis or V2 Fulls) beginning to ease capacity constraints, something future simulation extensions could model by incorporating launch timelines and constellation updates.

The figure illustrates a European-based dual-use Low Earth Orbit (LEO) satellite constellation providing broadband connectivity to Europe’s millions of “unconnectables” by 2030 on a secure and strategic infrastructure platform covering Europe, North Africa, and the Middle East.

THE 200 BILLION EUROS QUESTION – IS THERE A PATH TO A EUROPEAN SPACE INDEPENDENCE?

Let’s start with the answer! Yes!

Is €200 billion, the estimated amount required to close the EU28 gigabit gap between 2023 and 2030, likely to enable Europe to build its own LEO satellite constellation, and potentially one that is more secure, inclusive, and strategically aligned with its values and geopolitical objectives? In comparison, the European Union’s IRIS² (Infrastructure for Resilience, Interconnectivity and Security by Satellite) program has been allocated a total budget of more than €10 billion, aiming to build 264 LEO satellites (at 1,200 km) and 18 MEO satellites (at 8,000 km), mainly by the European “Primes” (i.e., the usual “suspects” of legacy defense contractors), by 2030. For that amount, we should even be able to afford a dedicated European stratospheric drone program for real-world use cases, as opposed to, for example, Airbus’s (AALTO) fragile Zephyr platform, which, in my opinion, is more an impressive showcase of an innovative, sustainable (solar-driven) aerial platform than a practical, robust, high-performance communications platform.

A significant portion of this budget should be dedicated to designing, manufacturing, and launching a European-based satellite constellation. If Europe could match SpaceX’s satellite unit cost, rather than that of IRIS² (whose price tag appears rooted in legacy satellite-platform thinking), it could launch a very substantial number of EU-based LEO satellites for €200 billion (and obviously also for a lot less). Such a count would easily match SpaceX’s long-term plans and vastly surpass the satellites authorized under Kuiper’s first phase. To support such a constellation, Europe must invest heavily in launch infrastructure, either by scaling up the Ariane program or by developing a reusable European launch system, mirroring and improving upon the capabilities of SpaceX’s Starship. This would reduce long-term launch costs, boost autonomy, and ensure deployment scalability over the decade. Equally essential would be establishing a robust ground segment covering a European-wide ground station network, edge nodes, optical interconnects, and satellite laser communication capabilities.

Unlike Starlink, which benefits from SpaceX’s vertical integration, and Kuiper, which is backed by Amazon’s capital and logistics empire, a European initiative would rely heavily on strong multinational coordination. With 200 billion euros, possibly less if the usual suspects (i.e., the “Primes”) are managed accordingly, Europe could close the technology gap rapidly, secure digital sovereignty, and ensure that it is not dependent on foreign providers for critical broadband infrastructure, particularly for rural areas, government services, and defense.

Could this be done by 2030? Doubtful, unless Europe can match SpaceX’s impressive pace of innovation: at a minimum, the three years (2015–2018) it took SpaceX to achieve routine Falcon 9 booster recovery and reuse, and the four years (2015–2019) it took to go from concept to the first operational V1 satellite launch. Elon has shown it is possible.

KEY TAKEAWAYS.

LEO satellite direct-to-dish broadband, when strategically deployed in underserved and hard-to-reach areas, should be seen not as a competitor to terrestrial networks but as a strategic complement. It provides a practical, scalable, and cost-effective means to close the final connectivity gap, one that terrestrial networks alone are unlikely to bridge economically. In sparsely populated rural zones, where fiber deployment becomes prohibitively expensive, LEO satellites may render new rollouts obsolete. In these cases, satellite broadband is not just an alternative. It may be essential. Moreover, it can also serve as a resilient backup in areas where rural fiber is already deployed, especially in regions lacking physical network redundancy. Rather than undermining terrestrial infrastructure, LEO extends its reach, reinforcing the case for hybrid connectivity models central to achieving EU-wide digital reach by 2030.

Instead of continuing to subsidize costly last-mile fiber in uneconomical areas, European policy should reallocate a portion of this funding toward the development of a sovereign European Low-Earth Orbit (LEO) satellite constellation. A mere 200 billion euros, or even less, would go a very long way in securing such a program. Such an investment would not only connect the remaining “unconnectables” more efficiently but also strengthen Europe’s digital sovereignty, infrastructure resilience, and strategic autonomy. A European LEO system should support dual-use applications, serving both civilian broadband access and the European defense architecture, thereby enhancing secure communications, redundancy, and situational awareness in remote regions. In a hybrid connectivity model, satellite broadband plays a dual role: as a primary solution in hard-to-reach zones and as a high-availability backup where terrestrial access exists, reinforcing a layered, future-proof infrastructure aligned with the EU’s 2030 Digital Decade objectives and evolving security imperatives.

Non-European dependence poses strategic trade-offs: The rise of LEO broadband providers such as SpaceX’s Starlink and China’s state-aligned Guowang and Qianfan underscores Europe’s limited indigenous capacity in the Low Earth Orbit (LEO) space. While non-EU options may offer faster and cheaper rural connectivity, reliance on foreign infrastructure raises concerns about sovereignty, data governance, and security, especially amid growing geopolitical tensions.

LEO satellites, especially those similar to or more capable than Starlink V3, can technically support the connectivity needs of Europe’s 2030s “unconnectable” (rural) households. Due to geography or economic constraints, these homes are unlikely to be reached by FTTP even under the most ambitious business-as-usual scenarios. A constellation of high-capacity satellites could serve these households with gigabit-class connections, especially when factoring in user concurrency and reasonable uptake rates.

The economics of FTTP deployment sharply deteriorate in very low-density rural regions, reinforcing the need for alternative technologies. By 2030, up to 5.5 million EU28 households are projected to remain beyond the economic viability of FTTP, down from 15.5 million rural homes in 2023. The European Commission has estimated that closing the gigabit gap from 2023 to 2030 requires around €200 billion. LEO satellite broadband may be a more cost-effective alternative, particularly with direct-to-dish architecture, at least for the share of unconnectable homes.

While LEO satellite networks offer transformative potential for deep rural coverage, they do not pose a threat to existing FTTP deployments. A Starlink V3 satellite, despite its 1 Tbps capacity, can serve the equivalent of a small fiber network, about 1,000 homes at 1 Gbps under full concurrency, or roughly 20,000 homes with 50% uptake and 10% busy-hour concurrency. FTTP remains significantly more efficient and scalable in denser areas. Satellites are not designed to compete with fiber in urban or suburban regions, but rather to complement it in places where fiber is uneconomical or otherwise unviable.
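The V3 dimensioning in this paragraph works out as follows, taking the text’s 1 Tbps per-satellite figure at face value:

```python
# Starlink V3-class dimensioning: how many homes does 1 Tbps serve?
capacity_gbps = 1_000        # ~1 Tbps per satellite (per the text)
rate_gbps = 1.0              # gigabit-class service per home

# Worst case: every home pulls 1 Gbps simultaneously.
full_concurrency_homes = int(capacity_gbps / rate_gbps)

# Realistic case: 10% of subscribers active in the busy hour,
# and only 50% of homes passed actually take the service.
busy_hour_concurrency = 0.10
uptake = 0.50
active_subscribers = capacity_gbps / (rate_gbps * busy_hour_concurrency)
homes_passed = active_subscribers / uptake

print(f"Homes at full concurrency: {full_concurrency_homes}")
print(f"Subscribers at 10% busy-hour concurrency: {active_subscribers:.0f}")
print(f"Homes passed at 50% uptake: {homes_passed:.0f}")
```

The same arithmetic scales linearly: halving the per-home rate or the concurrency assumption doubles the servable footprint, which is why satellite economics hinge so heavily on contention assumptions.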

The technical attributes of LEO satellites make them ideally suited for sparse, low-density environments. Their broad coverage area and increasingly sophisticated beamforming and frequency reuse capabilities allow them to efficiently serve isolated dwellings, often spread across tens of thousands of square kilometers, where trenching fiber would be infeasible. These technologies extend the digital frontier rather than replace terrestrial infrastructure. Even with SpaceX’s innovative pace, it seems unlikely that this conclusion will change substantially within the next five years, at the very least.

A European LEO constellation could be feasible within a €200 billion budget: The €200 billion gap identified for full gigabit coverage could, in theory, fund a sovereign European LEO system capable of servicing the “unconnectables.” If Europe adopts leaner, vertically integrated innovation models like SpaceX’s (and avoids legacy procurement inefficiencies), such a constellation could deliver comparable technical performance while bolstering strategic autonomy.

The future of broadband infrastructure in Europe lies in a hybrid strategy. Fiber and mobile networks should continue to serve densely populated areas, while LEO satellites, potentially supplemented by fixed wireless and 5G, offer a viable path to universal coverage. By 2030, a satellite constellation only slightly more capable than Starlink V3 could deliver broadband to virtually all of Europe’s remaining unconnected homes, without undermining the business case for large-scale FTTP networks already in place.

CAUTIONARY NOTE.

While current assessments suggest that a LEO satellite constellation with capabilities on par with or slightly exceeding those anticipated for Starlink V3 could viably serve Europe’s remaining unconnected households by 2030, it is important to acknowledge the speculative nature of these projections. The assumptions are based on publicly available data and technical disclosures. Still, it is challenging to have complete visibility into the precise specifications, performance benchmarks, or deployment strategies of SpaceX’s Starlink satellites, particularly the V3 generation, or, for that matter, Amazon’s Project Kuiper constellation. Much of what is known comes from regulatory filings (e.g., FCC), industry reports and blogs, Reddit, and similar platforms, as well as inferred capabilities. Therefore, while the conclusions drawn here are grounded in credible estimates and modeling, they should be viewed with caution until more comprehensive and independently validated performance data become available.

THE SATELLITE’S SPECS – MOST IS KEPT A “SECRET”, BUT THERE IS SOME LIGHT.

Satellite capacity is not determined by a single metric, but instead emerges from a tightly coupled set of design parameters. Variables such as spectral efficiency, channel bandwidth, polarization, beam count, and reuse factor are interdependent. Knowing a few of them allows us to estimate, bound, or verify others. This is especially valuable when analyzing or validating constellation design, performance targets, or regulatory filings.

For example, consider a satellite that uses 250 MHz channels with 2 polarizations and a spectral efficiency of 5.0 bps/Hz. These inputs directly imply a channel capacity of 1.25 Gbps and a beam capacity of 2.5 Gbps. If the satellite is intended to deliver 100 Gbps of total throughput, as disclosed in related FCC filings, one can immediately deduce that 40 beams are required. If, instead, the satellite’s reuse architecture defines 8 x 250 MHz channels per reuse group with a reuse factor of 5, and each reuse group spans a fixed coverage area, then both the theoretical and practical throughput within that area can be computed, further enabling the estimation of the total number of beams, the required spectrum, and the likely user experience. These dependencies mean that if the number of user channels, the full bandwidth, the channel bandwidth, the number of beams, or the frequency reuse factor is known, it becomes possible to estimate or cross-validate the others. This helps confirm design consistency or highlight unrealistic assumptions.
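The arithmetic above is simple enough to sketch in a few lines of Python. The 250 MHz channels, dual polarization, and 5.0 bps/Hz spectral efficiency are the illustrative figures from the worked example, not confirmed satellite specifications:

```python
# Illustrative capacity arithmetic for a spot-beam satellite.
# Input figures mirror the worked example in the text; they are
# assumptions, not confirmed Starlink V3 specifications.

def channel_capacity_gbps(channel_mhz, spectral_eff_bps_hz):
    """Capacity of one channel on one polarization, in Gbps."""
    return channel_mhz * 1e6 * spectral_eff_bps_hz / 1e9

def beam_capacity_gbps(channel_mhz, polarizations, spectral_eff_bps_hz):
    """Capacity of one beam using one channel on each polarization."""
    return polarizations * channel_capacity_gbps(channel_mhz, spectral_eff_bps_hz)

def beams_required(total_gbps, beam_gbps):
    """Number of beams needed to reach a target aggregate throughput."""
    return total_gbps / beam_gbps

chan = channel_capacity_gbps(250, 5.0)   # 1.25 Gbps per channel
beam = beam_capacity_gbps(250, 2, 5.0)   # 2.5 Gbps per beam
n_beams = beams_required(100, beam)      # 40 beams for a 100 Gbps satellite
```

Working backwards the same way, a disclosed total throughput plus a known channel plan immediately bounds the beam count, which is the cross-validation trick described above.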

In satellite systems like Starlink, the total available spectrum is limited. This is typically divided into discrete channels, for example, eight 250 MHz channels (as is the case for Starlink’s Ku-band downlink to the user’s terrestrial dish). A key architectural advantage of spot-beam satellites (e.g., with spots that are at least 50 to 80 km wide) is that frequency channels can be reused in multiple spatially separated beams, as long as the beams do not interfere with one another. This is not based on a fixed reuse factor, as seen in terrestrial cellular systems, but on beam isolation, achieved through careful beam shaping, angular separation, and sidelobe control (as also implemented in the above Python code for UK Starlink satellite coverage, albeit in much simpler ways). For instance, one beam covering southern England can use the same frequency channels as another beam covering northern Scotland, because their energy patterns do not overlap significantly at ground level. In a constellation like Starlink’s, where hundreds or even thousands of beams are formed across a satellite footprint, frequency reuse is achieved through simultaneous but non-overlapping spatial beam coverage. The reuse logic is handled dynamically on board or through ground-based scheduling, based on real-time traffic load and beam geometry.

This means that for a given satellite, the total instantaneous throughput is not only a function of spectral efficiency and bandwidth per beam, but also of the number of beams that can simultaneously operate on overlapping frequencies without causing harmful interference. If a satellite has access to 2 GHz of bandwidth and 250 MHz channels, then up to 8 distinct channels can be formed. These channels can be replicated across different beams, allowing many more than 8 beams to be active concurrently, each using one of those 8 channels, as long as they are separated enough in space. This approach allows operators to scale capacity dramatically through dense spatial reuse, rather than relying solely on expanding spectrum allocations. The ability to reuse channels across beams depends on antenna performance, beamwidth, power control, and orbital geometry, rather than a fixed reuse pattern. The same set of channels is reused across non-interfering coverage zones enabled by directional spot beams. Satellite beams can be “stacked on top of each other” up to the number of available channels, or they can be allocated optimally across a coverage area determined by user demand.
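A minimal sketch of this spatial-reuse logic, assuming the illustrative 2 GHz pool, 250 MHz channels, dual polarization, 5.0 bps/Hz, and an effective reuse of 5 (all assumptions for illustration, not disclosed Starlink figures):

```python
# Spatial frequency reuse sketch: a fixed spectrum pool is divided into
# channels, and each channel may be active in several spatially isolated
# beams at once. All input figures are illustrative assumptions.

def n_channels(total_bw_mhz, channel_mhz):
    """How many distinct channels the spectrum pool supports."""
    return total_bw_mhz // channel_mhz

def aggregate_gbps(total_bw_mhz, channel_mhz, reuse, polarizations, se_bps_hz):
    """Aggregate throughput when every channel is reused `reuse` times
    across non-interfering beams, on both polarizations."""
    chans = n_channels(total_bw_mhz, channel_mhz)
    concurrent_beams = chans * reuse
    return concurrent_beams * polarizations * channel_mhz * 1e6 * se_bps_hz / 1e9

channels = n_channels(2000, 250)                  # 8 channels in 2 GHz
capacity = aggregate_gbps(2000, 250, 5, 2, 5.0)   # 40 concurrent beams
```

The point the code makes explicit: capacity scales with the number of concurrently active, spatially isolated beams, not with the 8-channel spectrum pool itself.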

Detailed specifications of commercial satellites, whether in operation or in the planning phase, are usually not publicly disclosed. However, companies are required to submit technical filings to the U.S. Federal Communications Commission (FCC). These filings include orbital parameters, frequency bands in use, EIRP, and antenna gain contours, as well as estimated capabilities of the satellite and user terminals. The FCC’s approval of SpaceX’s Gen2 constellation, for instance, outlines many of these values and provides a foundation upon which informed estimates of system behavior and performance can be made. The filings are not exhaustive and may omit sensitive performance data, but they serve as authoritative references for bounding what is technically feasible or likely in deployment.

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

FURTHER READINGS.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomy blog.

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future”, Techneconomyblog (March 2024).

NOTE: My “Satellite Coverage Concept Model,” which I have applied to Starlink Direct-2-Dish coverage and Services in the United Kingdom, is not limited to the UK alone but can be straightforwardly generalized to other countries and areas.

Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?

THE POST-TOWER ERA – A FAIRYTALE.

From the bustling streets of New York to the remote highlands of Mongolia, the skyline had visibly changed. Where steel towers and antennas once dominated now stood open spaces and restored natural ecosystems. Forests reclaimed their natural habitats, and birds nested in trees undisturbed by the scarring of tall rural cellular towers. This transformation was not sudden but resulted from decades of progress in satellite technology, growing demand for ubiquitous connectivity, an increasingly urgent need to address the environmental footprint of traditional telecom infrastructures, and the economic need to dramatically reduce operational expenses tied up in tower infrastructure. By the time the last cell site was decommissioned, society stood at the cusp of a new age of connectivity by LEO satellites covering all of Earth.

The annual worldwide savings from making terrestrial cellular towers obsolete are estimated to amount to at least 300 billion euros in total cost, and moving cellular access to “heaven” is expected to avoid more than 150 million metric tons of CO2 emissions annually. Retiring all terrestrial cellular networks worldwide has been like eliminating the entire carbon footprint of the Netherlands or Malaysia, and it has dramatically reduced the demand on the green energy sources that previously powered the global cellular infrastructure.

INTRODUCTION.

Recent postings and a substantial part of commentary give the impression that we are heading towards a post-tower era in which Elon Musk’s Low Earth Orbit (LEO) satellite Starlink network (together with competing options, e.g., AST SpaceMobile and Lynk; and no, I do not see Amazon’s Project Kuiper in this space) will make terrestrially-based tower infrastructure and earth-bound cellular services obsolete.

T-Mobile USA is launching its Direct-to-Cell (D2C) service via SpaceX’s Starlink LEO satellite network. The T-Mobile service is designed to work with existing LTE-compatible smartphones, allowing users to connect to Starlink satellites without needing specialized hardware or smartphone applications.

Since the announcement, posts and media coverage have declared the imminent death of the terrestrial cellular network. Point out that this may be a premature death sentence for an industry, its operators, and their existing cellular mobile networks, and it is not uncommon to be told off as too pessimistic and an unbeliever in Musk’s genius vision. Musk has on occasion made it clear that the Starlink D2C service is aimed at texts and voice calls in remote and rural areas, and, to be honest, the D2C service currently hinges on 2×5 MHz in T-Mobile’s PCS band, which constrains the “broadbandedness” of the service. That the service doesn’t match the best of T-Mobile US’s 5G network quality (e.g., 205+ Mbps downlink), or even approach its 4G speeds, should really not bother anyone: the value of the D2C service is that it is available in remote and rural areas with little to no terrestrial cellular coverage, and that you can use your regular cellular device with no need for a costly satellite service and satphone (e.g., Iridium, Thuraya, Globalstar).

While I don’t expect to (or even want to) change people’s beliefs, I do think it would be great to contribute to more knowledge and insights based on facts about what is possible with low-earth orbiting satellites as a terrestrial substitute and what is uninformed or misguided opinion.

The rise of LEO satellites has sparked discussions about the potential obsolescence of terrestrial cellular networks. With advancements in satellite technology and increasing partnerships, such as T-Mobile’s collaboration with SpaceX’s Starlink, proponents envision a future where towers are replaced by ubiquitous connectivity from the heavens. However, the feasibility of LEO satellites achieving service parity with terrestrial networks raises significant technical, economic, and regulatory questions. This article explores the challenges and possibilities of LEO Direct-to-Cell (D2C) networks, shedding light on whether they can genuinely replace ground-based cellular infrastructure or will remain a complementary technology for specific use cases.

WHY DISTANCE MATTERS.

The distance between you (your cellular device) and the base station’s antenna determines your expected service experience in cellular and wireless networks. In general, the farther you are from the base station that serves you, the poorer your connection quality and performance will be, everything else being equal. As the distance increases, signal weakening (i.e., path loss) grows with the square of the distance, reducing signal quality and making it harder for devices to maintain reliable communication. Closer proximity allows for stronger, faster, and more stable connections, while longer distances require more power and advanced technologies like beamforming or repeaters to compensate.

Physics tells us that a signal loses strength (or power) with the square of the distance from its source (either the base station transmitter or the consumer device). This applies universally to all electromagnetic waves traveling in free space. Free space means that there are no obstacles, reflections, or scattering; no terrain features, buildings, or atmospheric conditions interfere with the signal’s propagation.

So, what matters to the Free Space Path Loss (FSPL)? That is the signal strength over a given distance in free space:

  • The signal strength reduces (the path loss increases) with the square of the distance (d) from its source.
  • Path loss increases (i.e., signal strength decreases) with the (square of the) frequency (f). The higher the frequency, the higher the path loss at a given distance from the signal source.
  • A larger transmit antenna aperture reduces the path loss by focusing the transmitted signal (energy) more efficiently. An antenna aperture is an antenna’s “effective area” that captures or transmits electromagnetic waves. It depends directly on antenna gain and inversely on the square of the signal frequency (i.e., higher frequency → smaller aperture).
  • Higher receiver gain will also reduce the path loss.

$PL_{FS} \; = \; \left( \frac{4 \pi}{c} \right)^2 (d \; f)^2 \; \propto d^2 \; f^2$

$$FSPL_{dB} \; = 10 \; Log_{10} (PL_{FS}) \; = \; 20 \; Log_{10}(d) \; + \; 20 \; Log_{10}(f) \; + \; constant$$

The above equations show a strong dependency on distance; the farther away, the larger the signal loss, and the higher the frequency, the larger the signal loss. Relaxing some of the assumptions leading to the above relationship leads us to the following:

$FSPL_{dB}^{rs} \; = \; 20 \; Log_{10}(d) \; - \; 10 \; Log_{10}(A_t^{eff}) \; - \; 10 \; Log_{10}(G_{r}) \; + \; constant$

The last of the above equations introduces the transmitter’s effective antenna aperture (\(A_t^{eff}\)) and the receiver’s gain (\(G_r\)), telling us that larger apertures reduce path loss as they focus the transmitted energy more efficiently and that higher receiver gain likewise reduces the path loss (i.e., “they hear better”).

It is worth remembering that the transmitter antenna aperture is directly tied to the transmitter gain ($G_t$) when the frequency (f) has been fixed. We have

$A_t^{eff} \; = \; \frac{c^2}{4\pi} \; \frac{1}{f^2} \; G_t \; = \; 0.000585 \; m^2 \; G_t \;$ @ f = 3.5 GHz.
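The constant in the aperture–gain relation above is easy to verify numerically. Nothing in this sketch is satellite-specific; it is just the identity $A_t^{eff} = \frac{c^2}{4\pi f^2} G_t$ evaluated at 3.5 GHz:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def eff_aperture_m2(gain_linear, freq_hz):
    """Effective antenna aperture: A_eff = (c^2 / (4*pi)) * G / f^2."""
    return (C**2 / (4 * math.pi)) * gain_linear / freq_hz**2

# Per unit of (linear) gain at 3.5 GHz this reproduces the ~0.000585 m^2
# constant quoted in the text (which rounds c to 3e8 m/s).
a_unit = eff_aperture_m2(1.0, 3.5e9)
```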

From the above, as an example, it is straightforward to see that the relative path loss difference between the two distances of 550 km (e.g., typical altitude of an LEO satellite) and 2.5 km (typical terrestrial cellular coverage range) is

$\frac{PL_{FS}(550 km)}{PL_{FS}(2.5 km)} \; = \; \left( \frac {550}{2.5}\right)^2 \; = \; 220^2 \; \approx \; 50$ thousand. So if all else were equal (it isn’t, btw!), we would expect the signal loss at a distance of 550 km to be 50 thousand times higher than at 2.5 km. Or, in the electrical engineer’s language, at a distance of 550 km, the loss would be 47 dB higher than at 2.5 km.
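The same comparison in code; a minimal sketch of the FSPL relations above, using the 550 km and 2.5 km distances from the example:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def fspl_delta_db(d1_m, d2_m):
    """Relative path-loss difference between two distances.
    Frequency cancels out, so only the distance ratio matters."""
    return 20 * math.log10(d1_m / d2_m)

ratio = (550 / 2.5) ** 2              # 220^2 = 48,400x, i.e. ~"50 thousand"
delta = fspl_delta_db(550e3, 2.5e3)   # ~46.8 dB, the ~47 dB quoted above
sat_loss = fspl_db(550e3, 3.5e9)      # ~158 dB at 3.5 GHz, 550 km
```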

The figure illustrates the difference between (a) terrestrial cellular and (b) satellite coverage. A terrestrial cellular signal typically covers a radius of 0.5 to 5 km. In contrast, a LEO satellite signal travels a substantial distance to reach Earth (e.g., Starlink satellite is at an altitude of 550 km). While the terrestrial signal propagates through the many obstacles it meets on its earthly path, the satellite signal’s propagation path would typically be free-space-like (i.e., no obstacles) until it penetrates buildings or other objects to reach consumer devices. Historically, most satellite-to-Earth communication has relied on outdoor ground stations or dishes where the outdoor antenna on Earth provides LoS to the satellite and will also compensate somewhat for the signal loss due to the distance to the satellite.

Let’s compare a terrestrial 5G 3.5 GHz advanced antenna system (AAS) 2.5 km from a receiver with a LEO satellite system at an altitude of 550 km. Note I could have chosen a lower frequency, e.g., 800 MHz or the PCS 1900 band. While it would give me some advantages regarding path loss (i.e., $FSPL \; \propto \; f^2$), the available bandwidth is rather smallish and insufficient for state-of-the-art 5G services (imo!). From a free-space path loss perspective, independently of frequency, we need to overcome an almost 50-thousand-fold relative difference in distance squared (ca. 47 dB) in favor of the terrestrial system. In this comparison, it should be understood that the terrestrial and satellite systems use the same carrier frequency (otherwise, one should account for the difference in frequency), and the only difference that matters (for the FSPL) is the difference in distance to the receiver.

Suppose I require that my satellite system has the same signal loss in terms of FSPL as my terrestrial system, aiming at a comparable quality of service level. In that case, I have several options in terms of satellite enhancements. I could increase transmit power, although that would require 47 dB more transmit power than the terrestrial system, or approximately 48 kW, which is likely impractical for the satellite due to power limitations. Compare this with the current Starlink transmit power of approximately 32 W (45 dBm), ca. 1,500 times lower. Alternatively, I could (in theory!) increase my satellite antenna aperture, leading to a satellite antenna with a diameter of ca. 250 meters, which is enormous compared to current satellite antennas (e.g., Starlink’s ca. 0.05 m2 aperture for a single antenna and a total area in the order of 1.6 m2 for the Ku/Ka bands). Finally, I could (super theoretically) massively improve my consumer device’s (e.g., smartphone’s) receive gain by 47 dB over today’s range of -2 dBi to +5 dBi. Achieving such a gain (ca. 46 dBi) in a smartphone receiver seems unrealistic due to size, power, and integration constraints. As LEO satellite direct-to-cell services target commercially available cellular devices used terrestrially, only the satellite specifications can be optimized.
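These options can be bounded with simple dB arithmetic. The sketch below assumes a roughly 1 W terrestrial reference transmit power per beam, an assumption inferred from the ~48 kW figure above, and is illustrative only:

```python
# Bounding the ~47 dB link-budget gap with dB arithmetic.
# ASSUMPTION: ~1 W terrestrial reference transmit power per beam,
# inferred from the article's ~48 kW figure; illustrative only.

def db_to_linear(db):
    """Convert a dB value to a linear power ratio."""
    return 10 ** (db / 10)

GAP_DB = 47            # path-loss penalty of 550 km vs 2.5 km (from the text)
TERRESTRIAL_TX_W = 1.0  # assumed reference power per beam
STARLINK_TX_W = 32.0    # ~45 dBm, from the text

required_tx_w = TERRESTRIAL_TX_W * db_to_linear(GAP_DB)  # ~50 kW
shortfall = required_tx_w / STARLINK_TX_W                # ~1,500x today's power
```

The ~1,500x shortfall against today’s ~32 W is why brute-force power is not a realistic way to close the gap, leaving aperture and device gain, both of which run into their own hard limits.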

Based on a simple free-space approach, it appears unreasonable that an LEO satellite communication system can provide 5G services at parity with a terrestrial cellular network to normal (unmodified) 5G consumer devices without satellite-optimized modifications. The satellite system’s requirements for parity with a terrestrial communications system are impractical (but not impossible) and, if pursued, would significantly drive up design complexity and cost, likely making such a system highly uneconomical.

At this point, you should ask yourself whether it is reasonable to assume that a terrestrial cellular communication system propagates as if its environment were free-space-like, with obstacles, reflections, and scattering ignored. Is it really okay to presume that terrain features, buildings, or atmospheric conditions do not interfere with the propagation of the terrestrial cellular signal? Of course, the answer is that it is not okay to assume that. With that in mind, let’s see whether it matters much compared to the LEO satellite path loss.

TERRESTRIAL CELLULAR PROPAGATION IS NOT HAPPENING IN FREE SPACE, AND NEITHER IS A SATELLITE’S.

The Free-Space Path Loss (FSPL) formula assumes ideal conditions where signals propagate in free space without interference, blockage, or degradation, besides what would naturally be by traveling a given distance. However, as we all experience daily, real-world environments introduce additional factors such as obstructions, multipath effects, clutter loss, and environmental conditions, necessitating corrections to the FSPL approach. Moving from one room of our house to another can easily change the cellular quality and our experience (e.g., dropped calls, poorer voice quality, lower speed, changing from using 5G to 4G or even to 2G, no coverage at all). Driving through a city may also result in ups and downs with respect to the cellular quality we experience. Some of these effects are tabulated below.

Urban environments typically introduce the highest additional losses due to dense buildings, narrow streets, and urban canyons, which significantly obstruct and scatter signals. For example, the Okumura-Hata Urban Model accounts for such obstructions and adds substantial losses to the FSPL, averaging around 30–50 dB, depending on the density and height of buildings.

Suburban environments, on the other hand, are less obstructed than urban areas but still experience moderate clutter losses from trees, houses, and other features. In these areas, corrections based on the Okumura-Hata Suburban Model add approximately 10–20 dB to the FSPL, reflecting the moderate level of signal attenuation caused by vegetation and scattered structures.

Rural environments have the least obstructions, resulting in the lowest additional loss. Corrections based on the Okumura-Hata Rural Model typically add around 5–10 dB to the FSPL. These areas benefit from open landscapes with minimal obstructions, making them ideal for long-range signal propagation.

Non-line-of-sight (NLOS) conditions additionally increase the path loss, as signals must diffract or scatter to reach the receiver. This effect adds 10–20 dB in suburban and rural areas and 20–40 dB in urban environments, where obstacles are more frequent and severe. Similarly, weather conditions such as rain and foliage contribute to signal attenuation, with rain adding up to 1–5 dB/km at higher frequencies (above 10 GHz) and dense foliage introducing an extra 5–15 dB of loss.

The corrections for these factors can be incorporated into the FSPL formula to provide a more realistic estimation of signal attenuation. By applying these corrections, the FSPL formula can reflect the conditions encountered in terrestrial communication systems across different environments.
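A rough sketch of how such corrections can be bolted onto the FSPL formula. The clutter and NLOS values below are midpoints of the ranges quoted above; this is an illustration, not a calibrated propagation model:

```python
import math

# Mid-range corrections from the ranges quoted in the text (Okumura-Hata
# style clutter, plus NLOS penalties), in dB on top of free-space loss.
# Rough illustrative values, not a calibrated propagation model.
CLUTTER_DB = {"urban": 40.0, "suburban": 15.0, "rural": 7.5}
NLOS_DB = {"urban": 30.0, "suburban": 15.0, "rural": 15.0}

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def corrected_path_loss_db(distance_m, freq_hz, environment, nlos=False):
    """FSPL plus the environment (and optionally NLOS) corrections."""
    loss = fspl_db(distance_m, freq_hz) + CLUTTER_DB[environment]
    if nlos:
        loss += NLOS_DB[environment]
    return loss

free_space = fspl_db(2.5e3, 3.5e9)                             # ~111 dB
urban = corrected_path_loss_db(2.5e3, 3.5e9, "urban", nlos=True)
rural = corrected_path_loss_db(2.5e3, 3.5e9, "rural")
```

Even with the heaviest urban NLOS penalty added, the terrestrial total at 2.5 km stays well below the satellite’s free-space loss over 550 km, which is the comparison the next section develops.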

The figure above illustrates the differences and similarities in the coverage environment for (a) terrestrial and (b) satellite communication systems. The terrestrial signal, in most instances, loses strength as it propagates through the terrestrial environment due to vegetation, terrain variations, urban topology or infrastructure, and weather; and as it passes from the outdoor to the indoor environment, it is reduced further as it penetrates, for example, coated windows and outer and inner walls. The combination of distance, obstacles, and material penetration leads to a cumulative reduction in signal strength. For the satellite, as illustrated in (b), a substantial amount of signal is lost over the vast distance it must travel before reaching the consumer. If no outdoor antenna connects with the satellite signal, the signal is further reduced as it penetrates roofs, multiple ceilings, multiple floors, and walls.

It is often assumed that a satellite system has a line of sight (LoS) without environmental obstructions in its signal propagation (besides atmospheric ones). The reasoning is not unreasonable, as the satellite sits overhead the consumers of its services, and it is, of course, a correct approach when the consumer has an outdoor satellite receiver (e.g., a dish) in direct LoS with the satellite. Moreover, historically, most satellite-to-Earth communication has relied on outdoor ground stations or outdoor dishes (e.g., placed on roofs or other suitable locations) where the outdoor antenna on Earth provides LoS to the satellite’s antenna, also compensating somewhat for the signal loss due to the distance to the satellite.

When considering a satellite direct-to-cell device, we no longer have the luxury of a satellite-optimized advanced Earth-based outdoor antenna to facilitate the communications between the satellite and the consumer device. The satellite signal has to close the connection with a standard cellular device (e.g., smartphone, tablet, …), just like the terrestrial cellular network would have to do.

However, 80% or more of our mobile cellular traffic happens indoors: in our homes, workplaces, and public places. If a satellite system were to replace existing mobile network services, it would have to provide consumers with a service quality similar to that of the terrestrial cellular network. As shown in the above figure, in urban areas this involves the satellite signal likely passing through a roof and multiple floors before reaching a consumer. Depending on housing density, buildings may block (shadow) the satellite signal, resulting in substantial service degradation for the consumers affected. Even if the satellite signal does not face the same challenges as a terrestrial cellular signal, such as vegetation, terrain variations, and the horizontal dimension of urban topology (e.g., outer and inner walls, coated windows, …), it would still have to overcome the vertical dimension of urban topologies (e.g., roofs, ceilings, floors, etc.) to connect to consumers’ cellular devices.

For terrestrial cellular services, the network’s signal integrity will (always) have a considerable advantage over the satellite signal because of the proximity to the consumer’s cellular device. With respect to distance alone, an LEO satellite at an altitude of 550 km has to overcome a 50-thousand-fold (or 47 dB) path-loss penalty compared to a cellular base station antenna 2.5 km away. Overcoming that penalty places demands on the antenna design that seem highly challenging to meet and far from what is possible with today’s technology (and economics).

CHALLENGES SUMMARIZED.

Achieving parity between a Low Earth Orbit (LEO) satellite providing Direct-to-Cell (D2C) services and a terrestrial 5G network involves overcoming significant technical challenges. The disparity arises from fundamental differences in these systems’ environments, particularly in free-space path loss, penetration loss, and power delivery. Terrestrial networks benefit from closer proximity to the consumer, higher antenna density, and lower propagation losses. In contrast, LEO satellites must address far more significant free-space path losses due to the large distances involved and the additional challenges of transmitting signals through the atmosphere and into buildings.

The D2C challenges for LEO satellites are increasingly severe at higher frequencies, such as 3.5 GHz and above. As we have seen above, the free-space path loss increases with the square of the frequency, and penetration losses through common building materials, such as walls and floors, are significantly higher. For an LEO satellite system to achieve indoor parity with terrestrial 5G services at this frequency, it would need to achieve extraordinary levels of effective isotropic radiated power (EIRP), around 65 dBW, and narrow beamwidths of approximately 0.5° to concentrate power on specific service areas. This would require very high onboard power outputs, exceeding 1 kW, and large antenna apertures, around 2 m in diameter, to achieve gains near 55 dBi. These requirements place considerable demands on satellite design, increasing mass, complexity, and cost. Despite these optimizations, indoor service parity at 3.5 GHz remains challenging due to persistent penetration losses of around 20 dB, making this frequency better suited for outdoor or line-of-sight applications.

Achieving a stable beam with the small widths required for a LEO satellite to provide high-performance Direct-to-Cell (D2C) services presents significant challenges. Narrow beam widths, on the order of 0.5° to 1°, are essential to effectively focus the satellite’s power and overcome the high free-space path loss. However, maintaining such precise beams demands advanced satellite antenna technologies, such as high-gain phased arrays or large deployable apertures, which introduce design, manufacturing, and deployment complexities. Moreover, the satellite must continuously track rapidly moving targets on Earth as it orbits around 7.8 km/s. This requires highly accurate and fast beam-steering systems, often using phased arrays with electronic beamforming, to compensate for the relative motion between the satellite and the consumer. Any misalignment in the beam can result in significant signal degradation or complete loss of service. Additionally, ensuring stable beams under variable conditions, such as atmospheric distortion, satellite vibrations, and thermal expansion in space, adds further layers of technical complexity. These requirements increase the system’s power consumption and cost and impose stringent constraints on satellite design, making it a critical challenge to achieve reliable and efficient D2C connectivity.

As the operating frequency decreases, the specifications for achieving parity become less stringent. At 1.8 GHz, the free-space path loss and penetration losses are lower, reducing the signal deficit. For a LEO satellite operating at this frequency, a 2.5 m² aperture (1.8 m diameter) antenna and an onboard power output of around 800 W would suffice to deliver EIRP near 60 dBW, bringing outdoor performance close to terrestrial equivalency. Indoor parity, while more achievable than 3.5 GHz, would still face challenges due to penetration losses of approximately 15 dB. However, the balance between the reduced propagation losses and achievable satellite optimizations makes 1.8 GHz a more practical compromise for mixed indoor and outdoor coverage.

At 800 MHz, the frequency-dependent losses are significantly reduced, making it the most feasible option for LEO satellite systems to achieve parity with terrestrial 5G networks. The free-space path loss decreases further, and penetration losses into buildings are reduced to approximately 10 dB, comparable to what terrestrial systems experience. These characteristics mean that the required specifications for the satellite system are notably relaxed. A 1.5 m² aperture (1.4 m diameter) antenna, combined with a power output of 400 W, would achieve sufficient gain and EIRP (~55 dBW) to deliver robust outdoor coverage and acceptable indoor service quality. Lower frequencies also mitigate the need for extreme beamwidth narrowing, allowing for more flexible service deployment.

Most consumers’ cellular consumption happens indoors, and these consumers are typically better served by existing 5G cellular broadband networks than by an LEO satellite solution. For direct-to-normal-cellular-device service, it would not be practical for an LEO satellite network, even an extensive one, to replace existing 5G terrestrial-based cellular networks and the services these support today.

This does not mean that LEO satellites cannot be of great utility when connecting to an outdoor Earth-based consumer dish, as is already evident in many remote, rural, and suburban places. The summary table above also shows that LEO satellite D2C services are feasible, without too-challenging modifications, in the lower cellular frequency ranges between 600 MHz and 1800 MHz, at service levels close to those of terrestrial systems, at least in rural areas and for outdoor services in general. Indoors, the LEO satellite D2C signal is more likely to be compromised by roof and multiple-floor penetration scenarios to which a terrestrial signal may be less exposed.

WHAT GOES DOWN MUST COME UP.

LEO satellite services delivered directly to unmodified mobile cellular devices get us all too focused on the downlink path from the satellite to the device. It seems easy to forget that unless you deliver a broadcast service, the unmodified cellular device also needs to communicate meaningfully back to the LEO satellite. The challenge for an unmodified cellular device (e.g., smartphone, tablet, etc.) to receive the satellite D2C signal has been explained extensively in the previous section. In the satellite downlink-to-device scenario, we can optimize the LEO satellite’s design specifications to overcome some (or most, depending on the frequency) of the challenges posed by its high altitude (compared to a terrestrial base station’s distance to the consumer device). In the device direct-uplink-to-satellite scenario, we have very little to no flexibility unless we start changing the specifications of the terrestrial device portfolio. And if we change consumer device specifications to communicate better with satellites, we also change the premise and economics of the (wrong) idea that LEO satellites should be able to completely replace terrestrial cellular networks at service parity.

Achieving uplink communication from a standard cellular device to an LEO satellite poses significant challenges, especially when attempting to match the performance of a terrestrial 5G network. Cellular devices are designed with limited transmission power, typically in the range of 23–30 dBm (0.2–1 watt), sufficient for short-range communication with terrestrial base stations. However, when the receiving station is a satellite orbiting between 550 and 1,200 kilometers, the transmitted signal encounters substantial free-space path loss. The satellite must, therefore, be capable of detecting and processing extremely weak signals, often below -120 dBm, to maintain a reliable connection.

The free-space path loss in the uplink direction is comparable to that in the downlink, but the challenges are compounded by the cellular device’s limitations. At higher frequencies, such as 3.5 GHz, path loss can exceed 155 dB, while at 1.8 GHz and 800 MHz, it reduces to approximately 149.6 dB and 143.6 dB, respectively. Lower frequencies favor uplink communication because they experience less path loss, enabling better signal propagation over large distances. However, cellular devices typically use omnidirectional antennas with very low gain (0–2 dBi), poorly suited for long-distance communication, placing even greater demands on the satellite’s receiving capabilities.
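The path-loss figures above can be sanity-checked with the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.45 (d in km, f in MHz), combined into a minimal uplink budget. The sketch below is illustrative only: the 550 km slant range, the 30 dBi satellite receive gain, and the omission of atmospheric, polarization, and body losses are my assumptions, not figures from the text, and the exact values depend on the assumed slant range and elevation angle.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

def rx_power_dbm(tx_dbm: float, tx_gain_dbi: float, rx_gain_dbi: float,
                 distance_km: float, freq_mhz: float) -> float:
    """Idealized uplink budget: handset power plus antenna gains minus FSPL."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# Handset: 23 dBm with a ~0 dBi omnidirectional antenna; satellite at 550 km
# with an assumed 30 dBi receive gain (illustrative values only).
for f in (800, 1800, 3500):
    p = rx_power_dbm(23, 0, 30, 550, f)
    print(f"{f} MHz: FSPL = {fspl_db(550, f):.1f} dB, Rx at satellite = {p:.1f} dBm")
```

The frequency dependence is the point here: moving from 800 MHz to 3.5 GHz costs roughly 12.8 dB of extra path loss, which the satellite must recover through additional receive gain or sensitivity.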

The satellite must compensate for these limitations with highly sensitive receivers and high-gain antennas. Achieving sufficient antenna gain requires large apertures, often exceeding 4 meters in diameter for 800 MHz or 2 meters for 3.5 GHz, increasing the satellite’s size, weight, and complexity. Phased-array antennas or deployable reflectors are often used to achieve the required gain. Still, their implementation is constrained by the physical limitations and costs of launching such systems into orbit. Additionally, the satellite’s receiver must have an exceptionally low noise figure, typically in the range of 1–3 dB, to minimize internal noise and allow the detection of weak uplink signals.
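As a rough illustration of why such large apertures are needed, the gain of a parabolic aperture can be approximated as G = η·(πD/λ)². The 0.6 aperture efficiency below is a typical textbook assumption, not a figure from the text, and real phased arrays follow somewhat different sizing rules.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dish_gain_dbi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Approximate gain of a parabolic aperture: G = eta * (pi * D / lambda)^2."""
    lam = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / lam) ** 2)

def diameter_for_gain_m(gain_dbi: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Inverse relation: aperture diameter needed to reach a target gain."""
    lam = C / freq_hz
    g_lin = 10 ** (gain_dbi / 10)
    return lam / math.pi * math.sqrt(g_lin / efficiency)

print(f"4 m aperture @ 800 MHz: {dish_gain_dbi(4, 800e6):.1f} dBi")
print(f"2 m aperture @ 3.5 GHz: {dish_gain_dbi(2, 3.5e9):.1f} dBi")
```

Note the asymmetry: because λ shrinks with frequency, a 2 m aperture at 3.5 GHz already out-gains a 4 m aperture at 800 MHz, yet the low band still wins overall because its path loss is so much lower.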

Interference is another critical challenge in the uplink path. Unlike terrestrial networks, where signals from individual devices are isolated into small sectors, satellites receive signals over larger geographic areas. This broad coverage makes it difficult to separate and process individual transmissions, particularly in densely populated areas where numerous devices transmit simultaneously. Managing this interference requires sophisticated signal processing capabilities on the satellite, increasing its complexity and power demands.

The motion of LEO satellites introduces additional complications due to the Doppler effect, which causes a shift in the uplink signal frequency. At higher frequencies like 3.5 GHz, these shifts are more pronounced, requiring real-time adjustments to the receiver to compensate. This dynamic frequency management adds another layer of complexity to the satellite’s design and operation.
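The scale of the Doppler problem is easy to estimate: the worst-case shift is f·v/c, with v the satellite’s orbital speed (~7.6 km/s for a ~550 km orbit; both numbers are my illustrative assumptions, and the actual shift depends on the instantaneous radial velocity toward the device).

```python
C = 299_792_458.0  # speed of light, m/s

def max_doppler_hz(freq_hz: float, sat_speed_ms: float = 7_600.0) -> float:
    """Upper bound on the Doppler shift, assuming the full orbital
    velocity is radial (real geometry gives a smaller, varying shift)."""
    return freq_hz * sat_speed_ms / C

for f in (800e6, 1.8e9, 3.5e9):
    print(f"{f/1e9:.1f} GHz: worst-case Doppler ≈ ±{max_doppler_hz(f)/1e3:.1f} kHz")
```

At 3.5 GHz the bound is roughly ±89 kHz versus roughly ±20 kHz at 800 MHz, which is why the higher band demands faster, more precise receiver compensation.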

Among the frequencies considered, 3.5 GHz is the most challenging for uplink communication due to high path loss, pronounced Doppler effects, and poor building penetration. Satellites operating at this frequency must achieve extraordinary sensitivity and gain, which is difficult to implement at scale. At 1.8 GHz, the challenges are somewhat reduced as the path loss and Doppler effects are less severe. However, the uplink requires advanced receiver sensitivity and high-gain antennas to approach terrestrial network performance. The most favorable scenario is at 800 MHz, where the lower path loss and better penetration characteristics make uplink communication significantly more feasible. Satellites operating at this frequency require less extreme sensitivity and gain, making it a practical choice for achieving parity with terrestrial 5G networks, especially for outdoor and light indoor coverage.

The uplink, i.e., the consumer-device-to-satellite signal direction, imposes additional limitations on the usable frequency range. Such systems are mainly interesting from 600 MHz up to a maximum of 1.8 GHz, a range that is already challenging for uplink and downlink in indoor usage. Service in the lower cellular frequency range is feasible for outdoor usage scenarios in rural and remote areas and for non-challenging indoor environments (e.g., “simple” building topologies).

The premise that LEO satellite D2C services would make terrestrial cellular networks redundant everywhere by offering service parity appears very unlikely, certainly not with the current generation of LEO satellites being launched. The altitude range of the LEO satellites (300–1,200 km) and the frequency ranges used for most terrestrial cellular services (600 MHz to 5 GHz) make it very challenging, and even impractical (for the higher cellular frequency ranges), to achieve quality and capacity parity with existing terrestrial cellular networks.

LEO SATELLITE D2C ARCHITECTURE.

A subscriber would realize they have LEO satellite Direct-to-Cell coverage through network signaling and notifications provided by their mobile device and network operator. Using this coverage depends on the integration between the LEO satellite system and the terrestrial cellular network, as well as the subscriber’s device and network settings. Here’s how this process typically works:

When a subscriber moves into an area where traditional terrestrial coverage is unavailable or weak, their mobile device will periodically search for available networks, as it does when trying to maintain connectivity. If the device detects a signal from a LEO satellite providing D2C services, it may indicate “Satellite Coverage” or a similar notification on the device’s screen.

This recognition is possible because the LEO satellite extends the subscriber’s mobile network. The satellite broadcasts system information on the same frequency bands licensed to the subscriber’s terrestrial network operator. The device identifies the network using the Public Land Mobile Network (PLMN) ID, which matches the subscriber’s home network or a partner network in a roaming scenario. The PLMN ID is a fundamental component of both terrestrial and LEO satellite D2C networks: it is the identifier that links a mobile consumer to a specific mobile network operator. It enables communication, access-rights management, and network interoperability, and supports services such as voice, text, and data.

The PLMN is also directly connected to the frequency bands used by an operator and any satellite service provider, acting as an extension of the operator’s network. It ensures that devices access the appropriately licensed bands through terrestrial or satellite systems and governs spectrum usage to maintain compliance with regulatory frameworks. Thus, the PLMN links the network identification and frequency allocation, ensuring seamless and lawful operation in terrestrial and satellite contexts.

In an LEO satellite D2C network, the PLMN plays a similar but more complex role, as it must bridge the satellite system with terrestrial mobile networks. The satellite effectively operates as an extension of the terrestrial PLMN, using the same Mobile Country Code (MCC) and Mobile Network Code (MNC) as the consumer’s home network or a roaming partner. This ensures that consumer devices perceive the satellite network as part of their existing subscription, avoiding the need for additional configuration or specialized hardware. When the satellite provides coverage, the PLMN enables the device to authenticate and access services through the operator’s core network, ensuring consistency with terrestrial operations. Consumer authentication, billing, and service provisioning thus remain consistent across the terrestrial and satellite domains. In cases where multiple terrestrial operators share access to a satellite system, the PLMN facilitates the correct routing of consumer sessions to their respective home networks. This coordination is particularly important in roaming scenarios, where a consumer connected to a satellite in one region may need to access services through their home network located in another region.
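To make the PLMN mechanics concrete, here is a toy sketch of how a device might split a broadcast PLMN ID into MCC and MNC and decide whether to camp on the satellite cell. The real procedures are defined by the 3GPP network-selection specifications; the PLMN value 26201 and the helper names here are purely illustrative.

```python
from typing import NamedTuple

class Plmn(NamedTuple):
    mcc: str  # Mobile Country Code, always 3 digits
    mnc: str  # Mobile Network Code, 2 or 3 digits

def parse_plmn(plmn_id: str, mnc_digits: int = 2) -> Plmn:
    """Split a broadcast PLMN ID into MCC and MNC.
    (In a real network, the MNC length is signaled in system information.)"""
    if len(plmn_id) != 3 + mnc_digits:
        raise ValueError("PLMN ID length does not match the declared MNC length")
    return Plmn(plmn_id[:3], plmn_id[3:])

def device_accepts(broadcast: Plmn, home: Plmn, roaming_partners: set) -> bool:
    """Camp on the cell if it belongs to the home PLMN or a known roaming partner."""
    return broadcast == home or broadcast in roaming_partners

home = parse_plmn("26201")  # hypothetical home network
sat = parse_plmn("26201")   # the satellite broadcasts the same PLMN ID
print(device_accepts(sat, home, set()))  # the device treats the satellite as its own network
```

Because the satellite advertises the operator’s own PLMN ID, the selection logic on the device is unchanged; this is precisely what makes the satellite look like just another cell of the home network.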

For a subscriber to make use of LEO satellite coverage, the following conditions must be met:

  • Device Compatibility: The subscriber’s mobile device must support satellite connectivity. While many standard devices are compatible with satellite D2C services using terrestrial frequencies, certain features may be required, such as enhanced signal processing or firmware updates. Modern smartphones are increasingly being designed to support these capabilities.
  • Network Integration: The LEO satellite must be integrated with the subscriber’s mobile operator’s core network. This ensures the satellite extends the terrestrial network, maintaining seamless authentication, billing, and service delivery. Consumers can make and receive calls, send texts, or access data services through the satellite link without changing their settings or SIM card.
  • Service Availability: The type of services available over the satellite link depends on the network and satellite capabilities. Initially, services may be limited to text messaging and voice calls, as these require less bandwidth and are easier to support in shared satellite coverage zones. High-speed data services, while possible, may require further advancements in satellite capacity and network integration.
  • Subscription or Permissions: Subscribers must have access to satellite services through their mobile plan. This could be included in their existing plan or offered as an add-on service. In some cases, roaming agreements between the subscriber’s home network and the satellite operator may apply.
  • Emergency Use: In specific scenarios, satellite connectivity may be automatically enabled for emergencies, such as SOS messages, even if the subscriber does not actively use the service for regular communication. This is particularly useful in remote or disaster-affected areas with unavailable terrestrial networks.
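The conditions above can be summarized as a simple eligibility check. This is only a schematic restatement of the bullet list (the field and function names are mine), with emergency use bypassing the subscription requirement as described:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    device_supports_satellite: bool          # Device Compatibility
    operator_integrated_with_satellite: bool  # Network Integration
    plan_includes_satellite: bool             # Subscription or Permissions
    emergency_call: bool = False              # Emergency Use

def can_use_d2c(sub: Subscriber) -> bool:
    """Device support and network integration are hard requirements;
    emergency traffic may bypass the subscription check."""
    if not (sub.device_supports_satellite and sub.operator_integrated_with_satellite):
        return False
    return sub.plan_includes_satellite or sub.emergency_call
```

Service availability (text first, then voice, then data) then determines what an eligible subscriber can actually do over the link.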

Once connected to the satellite, the consumer experience is designed to be seamless. The subscriber can initiate calls, send messages, or access other supported services just as they would under terrestrial coverage. The main differences may include longer latency due to the satellite link and, potentially, lower data speeds or limitations on high-bandwidth activities, depending on the satellite network’s capacity and the number of consumers sharing the satellite beam.

Managing a call on a Direct-to-Cell (D2C) satellite network requires specific mobile network elements in the core network, alongside seamless integration between the satellite provider and the subscriber’s terrestrial network provider. The service’s success depends on how well the satellite system integrates into the terrestrial operator’s architecture, ensuring that standard cellular functions like authentication, session management, and billing are preserved.

In a 5G network, the core network plays a central role in managing calls and data sessions. For a D2C satellite service, key components of the operator’s core network include the Access and Mobility Management Function (AMF), which handles consumer authentication and signaling. The AMF establishes and maintains connectivity for subscribers connecting via the satellite. Additionally, the Session Management Function (SMF) oversees the session context for data services. It ensures compatibility with the IP Multimedia Subsystem (IMS), which manages call control, routing, and handoffs for voice-over-IP communications. The Unified Data Management (UDM) system, another critical core component, stores subscriber profiles, detailing permissions for satellite use, roaming policies, and Quality of Service (QoS) settings.

To enforce network policies and billing, the Policy Control Function (PCF) applies service-level agreements and ensures appropriate charges for satellite usage. For data routing, elements such as the User Plane Function (UPF) direct traffic between the satellite ground stations and the operator’s core network. Additionally, interconnect gateways manage traffic beyond the operator’s network, such as the Internet or another carrier’s network.
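To tie the core-network roles together, the sketch below walks through an illustrative and much-simplified order of events when a D2C device attaches and starts a session. It is a narrative sketch of the architecture described above, not an implementation of any 3GPP procedure.

```python
def d2c_session_setup(log: list) -> None:
    """Illustrative sequence of events for a D2C attach and session start."""
    log.append("Device detects the satellite cell (operator PLMN) and requests registration")
    log.append("AMF authenticates the subscriber against the UDM profile (satellite access allowed?)")
    log.append("PCF supplies the QoS and charging policy applicable to satellite access")
    log.append("SMF establishes the session context and selects a UPF")
    log.append("UPF routes user traffic: satellite -> ground station -> core -> internet/IMS")

events = []
d2c_session_setup(events)
for step in events:
    print(step)
```

The key architectural point is that everything after the first step is identical to a terrestrial attach: the satellite and its ground station simply replace the radio access leg.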

The role of the satellite provider in this architecture depends on the integration model. If the satellite system is fully integrated with the terrestrial operator, the satellite primarily acts as an extension of the operator’s radio access network (RAN). In this case, the satellite provider requires ground stations to downlink traffic from the satellites and forward it to the operator’s core network via secure, high-speed connections. The satellite provider handles radio gateway functionality, translating satellite-specific protocols into formats compatible with terrestrial systems. In this scenario, the satellite provider does not need its own core network because the operator’s core handles all call processing, authentication, billing, and session management.

In a standalone model, where the LEO satellite provider operates independently, the satellite system must include its own complete core network. This requires implementing AMF, SMF, UDM, IMS, and UPF, allowing the satellite provider to directly manage subscriber sessions and calls. In this case, interconnect agreements with terrestrial operators would be needed to enable roaming and off-network communication.

Most current D2C solutions, including those proposed by Starlink with T-Mobile or AST SpaceMobile, follow the integrated model. In these cases, the satellite provider relies on the terrestrial operator’s core network, reducing complexity and leveraging existing subscriber management systems. The LEO satellites are primarily responsible for providing RAN functionality and ensuring reliable connectivity to the terrestrial core.

REGULATORY CHALLENGES.

LEO satellite networks offering Direct-to-Cell (D2C) services face substantial regulatory challenges in their efforts to operate within frequency bands already allocated to terrestrial cellular services. These challenges are particularly significant in regions like Europe and the United States, where cellular frequency ranges are tightly regulated and managed by national and regional authorities to ensure interference-free operations and equitable access among service providers.

The cellular frequency spectrum in Europe and the USA is allocated through licensing frameworks that grant exclusive usage rights to mobile network operators (MNOs) for specific frequency bands, often through competitive auctions. For example, in the United States, the Federal Communications Commission (FCC) regulates spectrum usage, while in Europe, national regulatory authorities manage spectrum allocations under the guidelines set by the European Union and CEPT (European Conference of Postal and Telecommunications Administrations). The spectrum currently allocated for cellular services, including low-band (e.g., 600 MHz, 800 MHz), mid-band (e.g., 1.8 GHz, 2.1 GHz), and high-band (e.g., 3.5 GHz), is heavily utilized by terrestrial operators for 4G LTE and 5G networks.

In March 2024, the Federal Communications Commission (FCC) adopted a groundbreaking regulatory framework to facilitate collaborations between satellite operators and terrestrial mobile service providers. This initiative, termed “Supplemental Coverage from Space,” allows satellite operators to use the terrestrial mobile spectrum to offer connectivity directly to consumer handsets and is an essential component of the FCC’s “Single Network Future.” The framework aims to enhance coverage, especially in remote and underserved areas, by integrating satellite and terrestrial networks. In November 2024, the FCC granted SpaceX approval to provide direct-to-cell services via its Starlink satellites. This authorization enables SpaceX to partner with mobile carriers, such as T-Mobile, to extend mobile coverage using satellite technology. The approval includes specific conditions to prevent interference with existing services and to ensure compliance with established regulations. Notably, the FCC also granted SpaceX’s request to provide service to cell phones outside the United States. For non-US operations, Starlink must obtain authorization from the relevant governments. Non-US operations are authorized in various sub-bands between 1429 MHz and 2690 MHz.

In Europe, the regulatory framework for D2C services is under active development. The European Conference of Postal and Telecommunications Administrations (CEPT) is exploring the regulatory and technical aspects of satellite-based D2C communications. This includes understanding connectivity requirements and addressing national licensing issues to facilitate the integration of satellite services with existing mobile networks. Additionally, the European Space Agency (ESA) has initiated feasibility studies on Direct-to-Cell connectivity, collaborating with industry partners to assess the potential and challenges of implementing such services across Europe. These studies aim to inform future regulatory decisions and promote innovation in satellite communications.

For LEO satellite operators to offer D2C services in these regulated bands, they would need to reach agreements with the licensed MNOs with the rights to these frequencies. This could take the form of spectrum-sharing agreements or leasing arrangements, wherein the satellite operator obtains permission to use the spectrum for specific purposes, often under strict conditions to avoid interference with terrestrial networks. For example, SpaceX’s collaboration with T-Mobile in the USA involves utilizing T-Mobile’s existing mid-band spectrum (i.e., PCS1900) under a partnership model, enabling satellite-based connectivity without requiring additional spectrum licensing.

In Europe, the situation is more complex due to the fragmented nature of the regulatory environment. Each country manages its spectrum independently, meaning LEO operators must negotiate agreements with individual national MNOs and regulators. This creates significant administrative and logistical hurdles, as the operator must align with diverse licensing conditions, technical requirements, and interference mitigation measures across multiple jurisdictions. Furthermore, any satellite use of the terrestrial spectrum in Europe must comply with European Union directives and ITU (International Telecommunication Union) regulations, prioritizing terrestrial services in these bands.

Interference management is a critical regulatory concern. LEO satellites operating in the same frequency bands as terrestrial networks must implement sophisticated coordination mechanisms to ensure their signals do not disrupt terrestrial operations. This includes dynamic spectrum management, geographic beam shaping, and power control techniques to minimize interference in densely populated areas where terrestrial networks are most active. Regulators in the USA and Europe will likely require detailed technical demonstrations and compliance testing before approving such operations.

Another significant challenge is ensuring equitable access to spectrum resources. MNOs have invested heavily in acquiring and deploying their licensed spectrum, and many may view satellite D2C services as a competitive threat. Regulators would need to establish clear frameworks to balance the rights of terrestrial operators with the potential societal benefits of extending connectivity through satellites, particularly in underserved rural or remote areas.

Beyond regulatory hurdles, LEO satellite operators must collaborate extensively with MNOs to integrate their services effectively. This includes interoperability agreements to ensure seamless handoffs between terrestrial and satellite networks and the development of business models that align incentives for both parties.

TAKEAWAYS.

Direct-to-cell LEO satellite networks face considerable technology hurdles in providing services comparable to terrestrial cellular networks.

  • Overcoming free-space path loss and ensuring uplink connectivity from low-power mobile devices with omnidirectional antennas.
  • Cellular devices transmit at low power (typically 23–30 dBm), making it difficult for uplink signals to reach satellites in LEO at 500–1,200 km altitudes.
  • Uplink signals from multiple devices within a satellite beam area can overlap, creating interference that challenges the satellite’s ability to separate and process individual uplink signals.
  • Developing advanced phased-array antennas for satellites, dynamic beam management, and low-latency signal processing to maintain service quality.
  • Managing mobility challenges, including seamless handovers between satellites and beams and mitigating Doppler effects due to the high relative velocity of LEO satellites.
  • The high relative velocity of LEO satellites introduces frequency shifts (i.e., Doppler Effect) that the satellite must compensate for dynamically to maintain signal integrity.
  • Addressing bandwidth limitations and efficiently reusing spectrum while minimizing interference with terrestrial and other satellite networks.
  • Scaling globally may require satellites to carry varied payload configurations to accommodate regional spectrum requirements, increasing technical complexity and deployment expenses.
  • Operating on terrestrial frequencies necessitates dynamic spectrum sharing and interference mitigation strategies, especially in densely populated areas, limiting coverage efficiency and capacity.
  • The frequent replacement of LEO satellites due to their shorter lifespans increases operational complexity and cost.

On the regulatory front, integrating D2C satellite services into existing mobile ecosystems is complex. Spectrum licensing is a key issue, as satellite operators must either share frequencies already allocated to terrestrial mobile operators or secure dedicated satellite spectrum.

  • Securing access to shared or dedicated spectrum, particularly negotiating with terrestrial operators to use licensed frequencies.
  • Avoiding interference between satellite and terrestrial networks requires detailed agreements and advanced spectrum management techniques.
  • Navigating fragmented regulatory frameworks in Europe, where national licensing requirements vary significantly.
  • Spectrum Fragmentation: With frequency allocations varying significantly across countries and regions, scaling globally requires navigating diverse and complex spectrum licensing agreements, slowing deployment and increasing administrative costs.
  • Complying with evolving international regulations, including those to be defined at the ITU’s WRC-27 conference.
  • Developing clear standards and agreements for roaming and service integration between satellite operators and terrestrial mobile network providers.
  • The high administrative and operational burden of scaling globally diminishes economic benefits, particularly in regions where terrestrial networks already dominate.
  • While satellites excel in rural or remote areas, they might not meet high traffic demands in urban areas, restricting their ability to scale as a comprehensive alternative to terrestrial networks.

The idea of D2C satellite networks making terrestrial cellular networks obsolete is ambitious but fraught with practical limitations. While LEO satellites offer unparalleled reach in remote and underserved areas, they struggle to match terrestrial networks’ capacity, reliability, and low latency in urban and suburban environments. The high density of base stations in terrestrial networks enables them to handle far greater traffic volumes, especially for data-intensive applications.

  • Coverage advantage: Satellites provide global reach, particularly in remote or underserved regions, where terrestrial networks are cost-prohibitive and often of poor quality or altogether lacking.
  • Capacity limitations: Satellites struggle to match the high-density traffic capacity of terrestrial networks, especially in urban areas.
  • Latency challenges: Satellite latency, though improving, cannot yet compete with the ultra-low latency of terrestrial 5G for time-critical applications.
  • Cost concerns: Deploying and maintaining satellite constellations is expensive, and they still depend on terrestrial core infrastructure (although the savings, if all terrestrial RAN infrastructure could be avoided, would also be very substantial).
  • Complementary role: D2C networks are better suited as an extension to terrestrial networks, filling coverage gaps rather than replacing them entirely.

The regulatory and operational constraints surrounding using terrestrial mobile frequencies for D2C services severely limit scalability. This fragmentation makes it difficult to achieve global coverage seamlessly and increases operational and economic inefficiencies. While D2C services hold promise for addressing connectivity gaps in remote areas, their ability to scale as a comprehensive alternative to terrestrial networks is hampered by these challenges. Unless global regulatory harmonization or innovative technical solutions emerge, D2C networks will likely remain a complementary, sub-scale solution rather than a standalone replacement for terrestrial mobile networks.

FURTHER READING.

  1. Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services,” Techneconomyblog (March 2024).
  2. Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).
  3. Kim K. Larsen, “A Single Network Future,” Techneconomyblog (March 2024).
  4. T.S. Rappaport, “Wireless Communications – Principles & Practice,” Prentice Hall (1996). In my opinion, it is one of the best graduate textbooks on communications systems. I bought it back in 1999 as a regular hardcover. I have not found it as a Kindle version, but I believe there are sites where a PDF version may be available (e.g., Scribd).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction.

The securitization of the Arctic involves key players such as Greenland (The Polar Bear), Denmark, the USA (The Eagle), Russia (The Brown Bear), and China (The Red Dragon), each with strategic interests in the region. Greenland’s location and resources make it central to geopolitical competition, with Denmark ensuring its sovereignty and security. Greenland’s primary allies are Denmark, the USA, and NATO member countries, which support its security and sovereignty. Unfriendly actors assessed to be potential threats include Russia, due to its military expansion in the Arctic, and China, due to its strategic economic ambitions and influence in the region. The primary threats to Greenland include military tensions, sovereignty challenges, environmental risks, resource exploitation, and economic dependence. Addressing these threats requires a balanced, cooperative approach to ensure regional stability and sustainability.

Cold winds cut like knives, Mountains rise in solitude, Life persists in ice. (Aqqaluk Lynge, “Harsh Embrace” ).

I have been designing, planning, building, and operating telecommunications networks across diverse environmental conditions, ranging from varied geographies to extreme climates. I sort of told myself that I most likely had seen it all. However (and luckily), the more I consider the complexities involved in establishing robust and highly reliable communication networks in Greenland, the more I realize the uniqueness and often extreme challenges involved with building & maintaining communications infrastructures there. The Greenlandic telecommunications incumbent Tusass has successfully built a resilient and dependable transport network that connects nearly every settlement in Greenland, no matter how small. They manage and maintain this network amidst some of the most severe environmental conditions on the planet. The staff of Tusass is fully committed to ensuring connectivity for these remote communities, recognizing that any service disruption can have severe repercussions for those living there.

As an independent board member of Tusass Greenland since 2022, I have witnessed Tusass’s dedication, passion, and understanding of the importance of improving and maintaining their network and connections for the well-being of all Greenlandic communities. To be clear, the opinions I express in this post are solely my own and do not necessarily reflect the views or opinions of Tusass. I believe that my opinions have been shaped by my Tusass and Greenlandic experience, by working closely with Tusass as an independent board member, and by a deep respect for Tusass and its employees. All information that I am using in this post is publicly available through annual reports (of Tusass) or, in general, publicly available on the internet.

Figure 1 Illustrating a coastal telecommunications site supporting the microwave long-haul transport network of Tusass up along the Greenlandic west coast. Courtesy: Tusass A/S (Greenland).

Greenland’s strategic location, its natural resources, environmental significance, and broader geopolitical context make it a geopolitically critical country. Thus, protecting and investing in Greenland’s critical infrastructure is obviously important, not only from a national and geopolitical security perspective but also with respect to the economic development and stability of Greenland and the Arctic region. If a butterfly’s movements can cause a hurricane, imagine what an angry “polar bear” will do to the global weather and climate. The melting ice caps are enabling new shipping routes and making natural resources much more accessible, and they may also raise the stakes for regional security. For example, with China’s Polar Silk Road initiative, China seeks to establish (or at least claim) a foothold in the Arctic in order to increase its trade routes and access to resources. This is also reflected in China’s 2018 declaration, which states that China sees itself as a “Near-Arctic State” and concludes that China is one of the continental states closest to the Arctic Circle. Russia, an actual neighboring country to the Arctic region and Circle, has also increased its military presence and economic activities in the Arctic. Recently, Russia has made claims in the Arctic to areas that overlap with what Denmark and Canada see as their natural territories, aiming to secure its northern borders and exploit the region’s resources. Russia has also added new military bases and has conducted large-scale maneuvers along its own Arctic coastline. The potential threats from increased Russian and Chinese Arctic activities pose significant security concerns. Identifying and articulating possible threat scenarios to the Arctic region involving potential hostile actors may indeed justify extraordinary measures, and it also highlights the need for urgent and substantial investments in and attention to Greenland’s critical infrastructure.

In this article, I focus very much on what key technologies should be considered, why specific technologies should be considered, and how those technologies could be implemented in a larger overarching security and defense architecture driving towards enhancing the safety and security of Greenland:

  • Leapfrog Quality of Critical Infrastructure: Strengthening the existing critical communications infrastructure should be a priority. With Tusass, this is the case in terms of increasing the existing transport network’s reliability and availability by adding new submarine cables and satellite backbone services and the associated satellite infrastructure. However, the backbone of the Tusass economy is a population of 57 thousand. The investments required to quantum leap the robustness of the existing critical infrastructure, as well as deploying many of the technologies discussed in this post, will not have a positive business case or a reasonable return on investment within a short period (e.g., a couple of years) if approached in the way that is standard practice for most private corporations around the world. External subsidies will be required. The benefit evaluation would need to be considered over the long term, more in line with big public infrastructure projects. Most of the critical infrastructure and technology investments discussed are based on particular geopolitical assumptions and serve as risk-mitigating measures with substantial civil upside if we maintain a dual-use philosophy as a boundary condition for those investments. Overall, I believe that a positive case might be made from the perspective of the possible loss of not making them rather than a typical gain or growth case expected if an investment is made.
  • Smart Infrastructure Development: Focus on building smart infrastructure, integrating sensor networks (e.g., DAS on submarine cables), and AI-driven automation for critical systems like communication networks, transportation, and energy management to improve resilience and operational efficiency. As discussed in this post, Tusass already has a strong communications network that should underpin any work on enhancing the Greenlandic defense architecture. Moreover, Tusass has deep expertise in building and operating critical communications infrastructure in the Arctic. This is critical know-how that should be heavily relied upon in what is to come.
  • Automated Surveillance and Monitoring Systems: Invest in advanced automated surveillance technologies, such as aquatic and aerial drones, satellite-based monitoring (SIGINT and IMINT), and IoT sensors, to enhance real-time monitoring and protection of Greenland.
  • Autonomous Defense Systems: Deploy autonomous systems, including unmanned aerial vehicles (UAVs) and unmanned underwater vehicles (UUVs), to strengthen defense capabilities and ensure rapid response to potential threats in the Arctic region. These systems should be the backbone of ad-hoc private network deployments serving both defense and civilian use cases.
  • Cybersecurity and AI Integration: Implement robust cybersecurity measures and integrate artificial intelligence to protect critical infrastructure and ensure secure, reliable communication networks supporting both military and civilian applications in Greenland.
  • Dual-Use Infrastructure: Prioritize investments in infrastructure solutions that can serve both military and civilian purposes, such as communication networks and transportation facilities, to maximize benefits and resilience.
  • Local Economic and Social Benefits: Ensure that defense investments support local economic development by creating new job opportunities and improving essential services in Greenland.

I believe that Greenland needs to build solid, Greenlandic-centered know-how at a foundational level around autonomous and automated systems. To get there, Greenland will need close and strong alliances that are aligned with the aim of achieving a greater degree of independence through clever use of the latest available technologies. Such local expertise will be essential in order to reduce the dependency on external support (e.g., from Denmark and Allies) and to ensure that Greenland can maintain operational capabilities independently, particularly during a security crisis. Automation, enabled by digitization and AI-enabled system architectures, would be key to managing and monitoring Greenland’s remote and inaccessible geography and resources efficiently and securely, minimizing the need for extensive human intervention. Leveraging autonomous defense and surveillance technologies and stepping up in digital maturity is an important path to compensating for Greenland’s small population. Additionally, implementing automated systems that are robust in terms of both hardware AND software will allow Greenland to protect and maintain its critical infrastructure and services, mitigating the risks associated with (too much) reliance on Denmark or allies during a crisis, when such resources may be scarce or impractical to move to Greenland in time.

Figure 2 A view from Tusass HQ over Nuuk, Greenland. Courtesy: Tusass A/S (Greenland).

GREENLAND – A CONCISE INTRODUCTION.

Greenland, or Kalaallit Nunaat as it is called in Greenlandic, is the world’s largest island, with a surface area of about 2.2 million square kilometers, of which ca. 80% is covered by ice. It is an autonomous territory of Denmark with a population of approximately 57 thousand. Its surface area is comparable to that of Alaska (1.7 million km2) or Saudi Arabia (2.2 million km2). The population is scattered in smaller settlements along the western coastlines, where the climate is milder and more hospitable. Greenland’s extensive coastline measures ca. 44 thousand kilometers and is one of the most remote and sparsely populated coastlines in the world. This remoteness contrasts with more densely populated and developed coastlines, such as those of the United States, and is further emphasized by a lack of civil infrastructure. There are no connecting roads between settlements, and most (if not all) travel between communities relies on maritime or air transport.

Greenland’s coastline presents several unique security challenges due to its particularities, such as its vast length, rugged terrain, harsh climate, and limited population. These factors make Greenland challenging to monitor and protect effectively, which is critical for several reasons:

  • The vast and inaccessible terrain.
  • Harsh climate and weather conditions.
  • Sparse population and limited infrastructure.
  • Maritime and resource security challenges.
  • Communications technology challenges.
  • Geopolitical significance.

The capital and largest city is Nuuk, located on the southwestern coast. With a population of approximately 18+ thousand, or 30+% of the total, Nuuk is Greenland’s administrative and economic center, offering modern amenities and serving as the hub for the island’s limited transportation network. Sisimiut, north of Nuuk on the western coast, is the second-largest town in Greenland, with a population of around 5,500. Sisimiut is known for its fishing industry and serves as a base for much of Greenland’s tourism and outdoor activities.

On the remote and inhospitable eastern coast, Tasiilaq is the largest town in the Ammassalik area, with a population of a little less than 2,000. It is relatively isolated compared to the western settlements and is known for its breathtaking natural scenery and opportunities for adventure tourism (check out https://visitgreenland.com/ for much more information). In the far north, on the west coast, we have Qaanaaq (also known as Thule), one of the world’s northernmost towns, with a population of ca. 600. Located near Qaanaaq is the so-called Pituffik Space Base, the United States’ northernmost military base, established in 1951 and a key component of NATO’s early warning and missile defense systems. The USA has had a military presence in Greenland since the early days of World War II, a presence that was strengthened during the Cold War. The base also plays an important role in monitoring Arctic airspace and supporting the region’s aviation operations.

As of 2023, Greenland has approximately 56 inhabited settlements. I am using the word “settlement” as an all-inclusive term covering communities ranging from tens of thousands of inhabitants (Nuuk) down to hundreds or fewer. With few exceptions, there are no settlements with connecting roads or any other overland transportation connections to other settlements. All transportation of people and goods between the different settlements is by plane or helicopter (provided by Air Greenland) or by sea (e.g., Royal Arctic Line, RAL).

Greenland is rich in natural resources. Apart from water (for hydropower), these include significant mining, oil, and gas reserves. These natural resources are largely untapped and present substantial opportunities for economic development (and temptation for friendly as well as unfriendly actors). Greenland is believed to have one of the world’s largest deposits of rare earth elements (although by far not comparable to China’s), extremely valuable as an alternative to the reliance on China and critical for various high-tech applications, including electronics (e.g., your smartphone), renewable energy technologies (e.g., wind turbines and EVs), and defense systems. Graphite and platinum are also present in Greenland and are important in many industrial processes. Some estimates indicate that northeast Greenland’s waters could hold large reserves of (as yet) undiscovered oil and gas. Other areas are likewise believed to contain substantial hydrocarbon reserves. However, Greenland’s Arctic environment presents severe exploration and extraction challenges, such as extreme cold, ice cover, and remoteness, that so far have made it very costly and complicated to extract its natural resources. With global warming, the economic and practical barriers to exploitation are continuously being lowered.

FROM STRATEGIC OUTPOST TO ARCTIC STRONGHOLD: THE EVOLVING SECURITY SIGNIFICANCE OF GREENLAND.

Figure 3 illustrates Greenland’s reliance on and the importance of critical communications infrastructure connecting local communities as well as bridging the rest of the world and the internet. Courtesy: DALL-E.

From a security perspective, Greenland’s role has evolved significantly since the Second World War. During World War II, its importance was primarily based on its location as a midway point between North America and Europe, serving as a refueling and weather station for Allied aircraft crossing the Atlantic to and from Europe. Additionally, its remote geographical location combined with its harsh climate provided a “safe haven” for monitoring and early warning installations.

During the Cold War era, Greenland’s importance grew (again) due to its proximity to the Soviet Union (and Russia today). Greenland became a key site for early warning radar systems and an integral part of the North American Aerospace Defense Command (NORAD) network designed to detect Soviet bombers and missiles heading toward North America. In 1951, the USA-controlled Thule Air Base (today called Pituffik Space Base) in northwest Greenland was constructed to host long-range bombers and to provide an advanced position (from a USA perspective) for early warning and missile defense systems.

As global tensions eased in the post-Cold War period, Greenland’s strategic status diminished somewhat. However, its status is now changing again due to Russia’s increased aggression in Europe (and geopolitically) and a more assertive China with an expressed interest in the Arctic. The Arctic ice is melting due to climate change, making new maritime routes, such as the Northern Sea Route, possible and Arctic resources more accessible. Thus, we now observe an increased interest from global powers in the Arctic region. And as was the case during the Cold War period (maybe with much higher stakes), Greenland has become strategically critical for monitoring and controlling these emerging routes, and the Arctic in general, particularly given the observed increased activity and interest from Russia and China.

Greenland’s position in the North Atlantic, bridging the gap between North America and Europe, makes it a crucial spot for monitoring and controlling the transatlantic routes. Greenland is part of the so-called Greenland-Iceland-UK (GIUK) Gap, a critical “chokepoint” for controlling naval and submarine operations, as was evident during the Second World War (e.g., read up on the Battle of the Atlantic). Controlling the Gap increases the security of maritime and air traffic between the continents. Thus, Greenland has again become a key component in defense strategies and threat scenarios envisioned and studied by NATO (and the USA).

GREENLAND’S GEOPOLITICAL ROLE.

Greenland’s recent significance in the Arctic should not be underestimated. It arises, in particular, from climate change and, as a result, melting ice caps that have and will enable new shipping routes and potential (easier) access to Greenland’s untapped natural resources.

Greenland hosts critical military and surveillance assets, including early warning radar installations as well as air & naval bases. These defense assets actively contribute to global security and are integral to NATO’s missile defense and early warning systems. They provide data for monitoring potential missile threats and other aerial activities in the North Atlantic and Arctic regions. Greenland’s air and naval bases also support specialized military operations, providing logistical hubs for allied forces operating in the Arctic and North Atlantic.

From a security perspective, control of Greenland is not only about monitoring and defense; it is also about deterring potential threats from hostile actors. It allows for effective monitoring and defense of the Arctic and North Atlantic regions, enabling the detection and tracking of submarines, ships, and aircraft. Such capabilities enhance situational awareness and operational readiness, but more importantly, they send a message to potential adversaries (who may, as unlikely as it seems, be unaware of the deficiencies of the Danish Arctic patrol ships). The ability to project power and maintain a military presence in this area is necessary for deterring potential adversaries and protecting the critical communications infrastructure (e.g., submarine cables), maritime routes, and airspace.

Greenland’s strategic location makes it a key contributor to global security dynamics. Ensuring Greenland’s security and stability is also essential for maintaining control over critical transatlantic routes, monitoring Arctic activities, and protecting against potential threats from hostile actors, making Greenland a cornerstone of the defense infrastructure and an essential area for geopolitical strategy in the North Atlantic and Arctic regions.

INFRASTRUCTURE RECOMMENDATIONS.

Recent research has focused on Greenland in the context of Arctic security (see “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze” by M. Jacobsen et al.). The work emphasizes the importance of maintaining and enhancing surveillance and early warning systems. Greenland is advised to invest in advanced radar systems and satellite monitoring capabilities. These systems are relevant for detecting potential threats and providing timely information, ensuring national and regional security. Here I should point out the traditional academic use of the word “securitization,” particularly in the Copenhagen School, which refers to framing an issue as an existential threat requiring extraordinary measures. Thus, securitization is the process by which topics are framed as matters of security that should be addressed with urgency and exceptional measures.

The research work furthermore underscores the Greenlandic need for additional strategic infrastructure development, such as enhancing or building new airport facilities and the associated infrastructure. This would, for example, include expanding and upgrading existing airports to improve connectivity within Greenland and with external partners (e.g., as is happening with the new airport in Nuuk). Such developments would also support economic activities, emergency response, and defense operations. Thus, it combines civil and military applications in what could be defined as dual-purpose infrastructure programs.

The above-mentioned research argues for the need to develop advanced communication systems, Signals Intelligence (SIGINT), and Image Intelligence (IMINT) gathering technologies based on satellite- and aerial-based platforms. These wide-area coverage platforms are critical to Greenland due to its vast and remote areas, where traditional communication networks may be insufficient or impractical. Satellite communication systems such as GEO, MEO, and LEO (and combinations thereof), and stratospheric high-altitude platform systems (HAPS) are relevant for maintaining robust surveillance, facilitating rapid emergency response, and ensuring effective coordination of security as well as search & rescue operations.

Expanding broadband internet access across Greenland is also a key recommendation (and is already in progress today). This involves improving the availability and reliability of connectivity via additional submarine cables and new satellite internet services, ensuring that even the most remote communities have reliable broadband internet connectivity. Connecting all communities to broadband internet enables economic development, improves the quality of life in general, and integrates remote areas into national and global networks. These communication infrastructure improvements are important for civilian and military purposes, ensuring that Greenland can effectively manage its security challenges and leverage new economic opportunities for its communities. It is my personal opinion that, since most communities and settlements are already connected to the wider internet, the priority should be to improve the redundancy, availability, and reliability of the existing critical communications infrastructure. With that also comes more quality in the form of higher internet speeds.

The applicability of at least some of the specific securitization recommendations for Greenland, as outlined in Marc Jacobsen’s “Greenland in Arctic Security: (De)securitization Dynamics Under Climatic Thaw and Geopolitical Freeze,” may be somewhat impractical given the unique characteristics of Greenland, with its vast area and very small population. Quite a few recommendations (in my opinion), even if in place “today or tomorrow,” would require a critical scale of expertise and of human and industrial capital that Greenland does not have available on its own (and is also unlikely to have in the future). Thus, some of the recommendations depend on such resources being delivered from outside Greenland, posing inherent risks to their availability in a crisis (assuming such capacity would even be available under normal circumstances). This dependency on external actors, particularly Danish and international investors, complicates Greenland’s ability to independently implement policies recommended by the securitization framework. It could lead to conflicts between local priorities and the interests of external stakeholders, particularly in a time of a clear and present security crisis (e.g., Russia attempting to expand west above and beyond Ukraine).

Also, as a result of Greenland’s small population, there will be a limited pool of local personnel with the needed skills to draw upon for implementing and maintaining many of the recommendations in “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze”. Training and deploying enough high-tech skilled individuals to cover Greenland’s vast territory and technology needs is a very complex challenge, given the limited human resources and the difficulty of attracting external high-tech resources to Greenland.

I believe Greenland should focus on establishing a comprehensive security strategy that minimizes its dependency on its natural allies and external actors in general. The dual-use approach should be integral to such a security strategy, where technology investments serve civil and defense purposes whenever possible. This approach ensures that Greenlandic society benefits directly from investments in building a robust security framework. I will come back to the various technologies that may be relevant in achieving more independence and less reliance on the external actors that are so prevalent in Greenland today.

HOW CRITICAL IS CRITICAL INFRASTRUCTURE TO GREENLAND?

Communications infrastructure is seen as critical in Greenland. It has to provide a reliable, good-quality service despite Greenland having some of the most unfavorable environmental conditions in which to build and operate communications networks. Greenland is characterized by vast distances between relatively small, isolated communities, which makes effective communication essential for bridging those gaps, allowing people to stay connected with each other as well as with the outside world, irrespective of weather or geography. The lack of a comprehensive road network and the reliance on sea and air travel further emphasize the importance of reliable and available telecommunications services, ensuring timely communication and coordination across the country.

Telecommunications infrastructure is a cornerstone of economic development in Greenland (as it has been elsewhere). It is about efficient internet and telephony services and their role in business operations, e-commerce activities, and international market connections. These aspects are important for the economic growth, education, and diversification of the many Greenlandic communities. The burgeoning tourism industry will also depend on (maybe even demand) robust communication networks to serve those tourists, ensure their safety in remote areas, and promote tourism activities in general. This illustrates very firmly that the communications infrastructure is critical (should there be any doubt).

Telecommunications infrastructure also enables distance learning in education and health services, providing people in remote areas with access to high-quality education that otherwise would not be possible (e.g., Coursera, Udemy, …). Telemedicine has obvious benefits for healthcare services that are often limited in remote regions. It allows residents to receive remote medical consultations and services (e.g., by video conferencing) without the need for long-distance and time-consuming travel that may often aggravate a patient’s condition. Emergency response and public safety are other critical areas in which the communications infrastructure plays a crucial role. Greenland’s harsh and unpredictable weather can lead to severe storms, avalanches, and ice-related incidents. It is therefore important to have a reliable communication network that allows for timely warnings, supports rescue operations & coordination, and safeguards public safety. Moreover, maritime safety also depends on a robust communication infrastructure, enabling reliable communication between ships and coastal stations.

A strong communication network can significantly enhance social connectivity and help maintain social ties among families and communities across Greenland, thereby reducing the feeling of isolation and supporting social cohesion within communities as well as between settlements. Telecommunications can also facilitate sharing and preserving the Greenlandic culture and language through digital media (e.g., Tusass Music), online platforms, and social networks (e.g., Facebook, used by ca. 85% of the eligible population in Greenland versus ca. 67% in Denmark).

For a government, maintaining effective and reliable communication is essential for well-functioning public services and administration. It should facilitate coordination between different levels of government and enable remote administration. Additionally, environmental monitoring and research benefit greatly from a reliable and available communication infrastructure. Greenland’s unique environment attracts scientific research, and robust communication networks are essential for supporting data transmission (in general), coordination of research activities, and environmental monitoring. Greenland’s role in global climate change studies should also be supported by communication networks that provide the means of sharing essential climate data collected from remote research stations.

Last but not least: a well-protected (i.e., redundant) and highly available communications infrastructure is a cornerstone of any national defense or emergency response. If well functioning, the critical communications infrastructure will support the seamless operation of military and civilian coordination, protect against cyber threats, and ensure public confidence during a crisis (natural or man-made). The importance of investing in and maintaining such critical infrastructure cannot be overstated. It plays a critical role in a nation’s overall security and resilience.

TUSASS: THE BACKBONE OF GREENLAND’S CRITICAL COMMUNICATIONS INFRASTRUCTURE.

Tusass is the primary telecommunications provider in Greenland. It operates a comprehensive telecom network that includes submarine cables with 5 landing stations in Greenland, very long microwave (MW) radio chains (i.e., long-haul backbone transmission links) with MW backhaul branches to settlements along the way, and broadband satellite connections to deliver telephony, internet, and other communication services across the country. The company is wholly owned by the Government of Greenland (Naalakkersuisut), positioning Tusass as a critical company responsible for the nation’s communications infrastructure. Tusass faces unique challenges due to the vast, remote, and rugged terrain. Extreme weather conditions make it difficult, often impossible, to work outside for at least 3 – 4 months a year, which complicates the deployment and maintenance of any infrastructure in general and of a communications network in particular. The regulatory framework mandates that Tusass fulfill a so-called Public Service Obligation, or PSO, requiring Tusass to provide essential telecommunications services to all of Greenland, even the most isolated communities. This in turn requires Tusass to continue to invest heavily in expanding and enhancing its critical infrastructure, providing reliable and high-quality services to all residents throughout Greenland.

Tusass is the main and, in most areas, the only telecommunications provider in Greenland. The company holds a dominant market position, providing essential services such as fixed-line telephony, mobile networks, and internet services. The Greenlandic market for internet and data connections was liberalized in 2015. The liberalization allowed private Internet Service Providers (ISPs) to purchase wholesale connections from Tusass and resell them. Despite liberalization, Tusass remains the dominant force in Greenland’s telecommunications sector. Tusass’s market position can be attributed to its extensive communications infrastructure and its government ownership. With a population of 57 thousand and its vast geographical size, it would be highly uneconomical and, human-resource-wise, very challenging to have duplicate competing physical communications infrastructures and support organizations in Greenland. Not to mention that it would take many years before an alternative telco infrastructure could be up and running, matching what is already in place. Thus, while there are smaller niche service providers, Tusass effectively operates as Greenland’s sole telecom provider.

Figure 4 illustrates one of Tusass’s many long-haul microwave sites along Greenland’s west coast, accessible only by helicopter. Courtesy: Tusass A/S (Greenland).

CURRENT STATE OF CRITICAL COMMUNICATIONS INFRASTRUCTURE.

The illustration below provides an overview of some of the major critical infrastructures in Greenland, with a focus on the communications infrastructure provided by Tusass, such as submarine cables, microwave (MW) radio chains, and satellite ground stations, which together connect Greenland and give all of Greenland access to the internet.

Figure 5 illustrates the Greenlandic telecommunications provider Tusass infrastructure. Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above, location only indicative) provide more than 80% of Greenland’s electricity demand. A new international airport is expected to be operational in Nuuk from November 2024. Source: from Tusass Annual Report 2023 with some additions and minor edits.

From Nanortalik in the south up to above Upernavik on the west coast, Tusass operates a 1,700+ km long microwave radio chain connecting the settlements along Greenland’s west coast from south to north, supported by 67 microwave (MW) radio sites. Thus, there is microwave radio equipment located roughly every 25 km, ensuring very high performance and availability of connectivity for the many settlements along the west coast. This setup is called a long-haul microwave chain, which uses a series of MW radio relay stations to transmit data over long distances (e.g., up to thousands of kilometers). The harsh climate, with heavy rain, snow, and icing, makes it very challenging to operate high-frequency, high-bandwidth microwave links (hence the short distances between the radio chain sites). The MW radio sites are mainly located on remote peaks in the harsh and unforgiving coastal landscape (ensuring line-of-sight), making helicopters the only means of accessing these locations for maintenance and fueling. The field engineers here are pretty much superheroes, maintaining the critical communications infrastructure of Greenland and understanding its life-and-death implications for all the remote communities if it breaks down (with the added danger of meeting a very hungry polar bear, or of being stuck for several days at a location because poor weather prevents the helicopter from picking the engineers up again).
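To get a feel for why per-hop reliability matters so much on a long serial relay chain like this, here is a minimal back-of-envelope sketch. The per-hop availability figure is purely an illustrative assumption (not a Tusass number); real links are engineered against rain, fade, and icing margins:

```python
# Rough availability model of a serial microwave relay chain:
# the chain works only if every hop works, so availabilities multiply.
hops = 66                     # 67 sites -> 66 radio hops (illustrative)
hop_availability = 0.99995    # assumed per-hop availability, NOT a Tusass figure

chain_availability = hop_availability ** hops
downtime_hours_per_year = (1 - chain_availability) * 365 * 24

print(f"end-to-end availability: {chain_availability:.4%}")
print(f"expected downtime: {downtime_hours_per_year:.1f} h/year")
```

Even with each hop at "four and a half nines," the full chain lands near 99.67%, i.e., tens of hours of expected outage per year somewhere along the route, which is why redundancy (submarine cables, satellite) matters.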

Figure 6 illustrates typical housing for field service staff during site visits. As the weather can change very rapidly in Greenland, it is not uncommon for field service staff to have to wait many days before they can be picked up again by helicopter. Courtesy: Tusass A/S (Greenland).

Greenland relies on the “Greenland Connect” submarine cable to connect to the rest of the world and the wider internet at modern-day throughput. The cable runs from Newfoundland and Labrador in Canada to Nuuk, and continues from Qaqortoq in Greenland to Iceland (which connects further to Copenhagen and the wider internet). Tusass has furthermore deployed submarine cables between 5 of the major Greenlandic settlements, including Nuuk, up the west coast and down to the south (i.e., Qaqortoq). The submarine cables provide some level of redundancy, increased availability, and substantial capacity & quality augmentation to the long-haul MW chain that carries the traffic from surrounding settlements. The submarine cables are critical and essential for the modernization and digitalization of Greenland. However, there are only two main international submarine broadband connection points, the Canada – Nuuk and Qaqortoq – Iceland connections, to and from Greenland. From a security perspective, this poses substantial and unique risks to Greenland, and its role and impact need to be considered in any work on critical infrastructure strategy. If both international submarine cables were compromised, intentionally or otherwise, it would become challenging, if at all possible, to sustain today’s communications demand. Most traffic would have to be carried by existing satellite capacity, which is substantially lower than what the submarine cables support, leaving the capacity mainly for mission-critical communications and allowing little spare capacity for consumer and non-critical business communication needs. That said, as long as the Greenlandic submarine cables, terrestrial transport, and switching infrastructure remain functional, it would be possible, internally within Greenland, to maintain a semblance of internet services and communication between connected settlements using modern network design thinking.
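The prioritization logic described above, mission-critical traffic first, consumer traffic on whatever is left, can be sketched as a simple strict-priority allocation. All capacities and demand figures below are hypothetical illustrations, not actual Tusass numbers:

```python
# Hypothetical triage of traffic classes onto scarce satellite capacity
# if both international submarine cables were lost. Strict priority:
# each class is served in full before the next class gets anything.
satellite_capacity_mbps = 800          # assumed residual satellite capacity
demand_mbps = {                        # assumed demand, in priority order
    "emergency_services": 50,
    "government_health": 150,
    "business_critical": 300,
    "consumer_internet": 4000,
}

remaining = satellite_capacity_mbps
allocation = {}
for traffic_class, demand in demand_mbps.items():
    allocation[traffic_class] = min(demand, remaining)
    remaining -= allocation[traffic_class]

print(allocation)
```

With these illustrative numbers, the three critical classes are served in full while consumer internet is throttled from 4,000 Mbps of demand down to the 300 Mbps that remain, which is exactly the kind of outcome the text anticipates for a cable-loss scenario.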

Moreover, while the submarine cables along the west coast offer redundancy to the land-based long-haul transport solution, there are substantial risks to settlements and populations where the long-haul MW solution is the only means of supporting remote Greenlandic communities. Given Greenland’s unique geographic and climate challenges, it is not only very costly but also time-consuming to reduce the risk of disruption to the existing, less redundant critical infrastructure already in place (e.g., above Aasiaat, north of the Arctic Circle).

Using satellites is an additional dimension, and part of the connectivity toolkit, that can improve the redundancy and availability of the land- and sea-based critical communications infrastructures. However, the drawback of satellite systems is that they are generally bandwidth/throughput limited and have longer signal delays (latency and round-trip time) than terrestrial communications systems. These issues could pose limitations on how well some services can be supported and would require a versatile traffic management & prioritization system in case the satellite solution were the only means of connecting a relatively high-traffic area (e.g., Tasiilaq) accustomed to ground-based broadband transport with substantially more bandwidth than the satellite solution can offer. Particularly for geostationary (GEO) satellite services, with the satellite located at ca. 36 thousand kilometers altitude, the data traffic flow needs to be carefully optimized in order to function well despite the substantial latency experienced on such connections, which at the very best is ca. 239 milliseconds and in practice might be closer to twice that or more. This poses significant challenges, in particular to TCP/IP data flows on such response-time-challenged connections and to applications sensitive to short round-trip times.
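The 239 ms figure follows directly from geometry and the speed of light; a minimal sketch of the calculation (idealized: satellite directly overhead, no processing or queuing delay):

```python
# Minimum propagation delay for a geostationary "bent-pipe" satellite link.
C_KM_PER_S = 299_792       # speed of light in vacuum
GEO_ALTITUDE_KM = 35_786   # geostationary orbit altitude above the equator

# One traversal is ground -> satellite -> ground (up + down).
one_traversal_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
# A full round trip (request + response) crosses the link twice.
min_rtt_ms = 2 * one_traversal_ms

print(f"one traversal: {one_traversal_ms:.0f} ms")  # ~239 ms, theoretical best
print(f"minimum RTT:   {min_rtt_ms:.0f} ms")        # ~477 ms before any
                                                    # processing or queuing
```

Real-world round-trip times are higher still (off-nadir ground stations, ground-segment routing, queuing), which is why the text notes that in practice the latency is often closer to twice the theoretical best.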

Optimizing and stabilizing TCP/IP data flows over GEO satellite connections requires a multi-faceted approach involving enhancements to the TCP protocol (e.g., window scaling, SACK, TCP Hybla, …), the use of hybrid and proxy solutions, application-layer adjustments, error correction mechanisms, Quality of Service (QoS) and traffic shaping, DNS optimizations, and continuous network monitoring. Combining these strategies makes it possible to mitigate some of the inherent challenges of high-latency satellite links and ensure more effective and efficient IP flows and better utilization of the available satellite link bandwidth. Offloading control signals and latency-sensitive data flows to a lower-latency LEO connection (RTT < ~50 ms at ca. 500 km altitude), or, where available, to a long-haul microwave link or submarine connection, may also substantially reduce the sensitivity to the prohibitively long delays experienced on GEO connections.
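To see why window scaling matters so much on GEO links, consider the bandwidth-delay product (BDP): the amount of data that must be “in flight” to keep the link busy. The link rate and RTT values below are my own illustrative assumptions, not Tusass figures:

```python
# Bandwidth-delay product (BDP): how much unacknowledged data must be in
# flight to fill a high-latency link. All link parameters are assumptions.
def bdp_bytes(link_mbps: float, rtt_ms: float) -> float:
    """BDP in bytes for a given link rate (Mbit/s) and round-trip time (ms)."""
    return link_mbps * 1e6 / 8 * (rtt_ms / 1000)

CLASSIC_TCP_WINDOW = 64 * 1024   # 64 KiB cap without TCP window scaling

geo_bdp = bdp_bytes(link_mbps=50, rtt_ms=600)   # GEO: ~3.75 MB in flight
leo_bdp = bdp_bytes(link_mbps=50, rtt_ms=50)    # LEO: ~312 KB in flight

print(f"GEO BDP: {geo_bdp / 1e6:.2f} MB")
print(f"LEO BDP: {leo_bdp / 1e3:.0f} KB")
# Without window scaling, throughput is capped at window / RTT:
print(f"64 KiB window @ 600 ms RTT caps at "
      f"{CLASSIC_TCP_WINDOW * 8 / 0.6 / 1e6:.2f} Mbit/s")
```

With a 600 ms round trip, a classic unscaled 64 KiB TCP window caps a flow at under 1 Mbit/s regardless of the raw link capacity, which is precisely the gap that window scaling, proxies, and protocol tuning are meant to close.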

Tusass, in collaboration with the Spanish satellite company Hispasat, makes use of the Greenland geostationary satellite, Greensat. Tusass signed an agreement with Hispasat to lease capacity (800 MHz in the Ku-band) on the Amazonas Nexus satellite until the end of its lifetime (i.e., ca. 2038). Greensat was taken into operation in the last quarter of 2023 (it was launched in February 2023), providing services to the satellite-only settlement areas around Qaanaaq, the northernmost settlement on the west coast of Greenland, and Tasiilaq and Ittoqqortoormiit (north of Tasiilaq) on the remote east coast. All mobile and fixed traffic from a satellite-only area is routed to a satellite ground station that is connected to the geostationary satellite (see the illustration below). The satellite’s primary mission is to provide broadband services to areas that, due to geography, climate, and cost, are impractical to connect by submarine cable or long-haul microwave links. Greensat closes the connection to the rest of the world and the internet via a ground station on Gran Canaria. It also connects back to Greenland via the submarine cables in Nuuk (via Canada and Qaqortoq).

Figure 7 The image shows a large geostationary satellite ground-station antenna located in Greenland’s cold and remote environment. The antenna’s primary purpose is to facilitate communication with geostationary satellites some 36,000 km away, transmitting and receiving data. It may support various services such as internet, television broadcasting, weather monitoring, and emergency communications. The components are (1) a parabolic reflector (dish), (2) a feed horn and receiver, (3) a mount and support structure, (4) control and monitoring systems, and (5) a radome (not shown in the picture), which is a structural, weatherproof enclosure that protects the antenna from environmental elements without interfering with the electromagnetic signals it transmits and receives. LEO satellite ground stations are much smaller, as the distance to a low-earth-orbit satellite is much shorter, i.e., ca. 350–650 km, resulting in less challenging receive and transmit conditions (compared to a connection to a geostationary satellite).

In addition, Tusass makes use of LEO satellite backhaul services from UK-based OneWeb (Eutelsat) at several locations: an area’s fixed and mobile traffic is routed to a point of presence connected to a satellite ground station, which connects via a OneWeb satellite to the central switching center in Nuuk (itself connected to another ground station).

CRITICAL PROPERTIES FOR RELIABLE AND SECURE TRANSPORT NETWORKS.

A physical transport network comprises many tangible components, such as cables, routers, and switches, which form an interconnected system capable of transmitting data. The network is designed and planned according to an expected coverage, use, and targeted quality level (e.g., speed, latency, priority, and security). Moreover, we are also concerned with such a network’s availability as well as its reliability. We design the physical and logical network (logical meaning the levels of the OSI stack above the physical) according to a given target availability, that is, how many hours in a year the network should, at a minimum, be operational and available to our customers. Availability is usually given as a percentage of the total hours in a year (e.g., 8,760 hours in a normal year and 8,784 hours in a leap year). So an availability of 99.9% means that we target a minimum operational time of roughly 8,751 hours, or, alternatively, accept a maximum of ca. 9 hours of downtime per year. The reliability of a network refers to the probability that the network will continue to function without failure for a given period. For example, with a mean time between failures (MTBF) of 8,750 hours, the likelihood of operating without failure for 4,380 hours (half a year) is ca. 60% (or, equivalently, there is a ca. 40% chance that a failure occurs within the next 6 months). For critical infrastructure, the availability and reliability metrics are very important to consider in any design and planning process.
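The two worked examples in the paragraph above can be verified with a few lines of Python, assuming (as is conventional) an exponential failure model for the reliability calculation:

```python
import math

HOURS_PER_YEAR = 8_760  # non-leap year

def max_downtime_hours(availability: float, hours: int = HOURS_PER_YEAR) -> float:
    """Maximum tolerated downtime per year for a given availability target."""
    return hours * (1 - availability)

def reliability(mtbf_hours: float, mission_hours: float) -> float:
    """Probability of surviving `mission_hours` without failure, assuming an
    exponential failure model (constant failure rate = 1 / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

print(f"99.9%  -> {max_downtime_hours(0.999):.1f} h downtime/year")   # ~8.8 h
print(f"99.99% -> {max_downtime_hours(0.9999):.1f} h downtime/year")  # ~0.9 h
# MTBF 8,750 h over half a year (4,380 h): ~60% chance of no failure
print(f"R(4380 h) = {reliability(8750, 4380):.2f}")
```

Note how one extra “nine” of availability cuts the allowed downtime by a factor of ten, which is why availability targets drive redundancy investments so strongly.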

In contrast to the physical network depiction, a network graph representation abstracts the physical transport network into a mathematical model where graph nodes (or vertices) represent the network’s many components and edges (or links) represent the physical and logical connections between those components. Modeling the physical (and logical) network this way allows designers and planners to study in detail a network’s robustness against many types of disruptions, as well as its general functioning and performance.

Suppose we use a graph approach in the design of a critical communications network. We then need to carefully consider various graph properties critical for the network’s robustness, security, reliability, and efficiency. To achieve this, one must strive for resilience and fault tolerance by designing for increased redundancy and availability, involving multiple paths, edges, or connections between nodes and preventing single points of failure (SPoF). This means creating a network where the number of independent paths between any two nodes is maximized (often subject to economic and feasibility constraints). An optimal average node degree should also be a design criterion: a higher node degree enhances the graph’s, and thus the underlying network’s, resilience and avoids increased vulnerability.

Scalability is a crucial network property. It is best achieved through a hierarchical structure (or topology) that allows for efficient network management as the network expands. Modularity, another graph KPI, ensures that the network can integrate new nodes and edges without major reconfigurations, supporting civilian expansion, military operations, or dual-purpose operations. To meet low-latency and high-throughput requirements, shortest-path routing algorithms should be applied to minimize latency and round-trip time (and thus increase throughput). Moreover, bandwidth management should be implemented, allowing the network to handle large data volumes in a prioritized manner (if required). This also ensures that the network can accommodate peak loads and prioritize critical communication when the network is compromised.
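As a small illustration of shortest-path routing on an unweighted topology, a breadth-first search finds the fewest-hop path between two nodes. The settlement names and links below are purely illustrative and do not reflect the actual Tusass topology:

```python
from collections import deque

def shortest_path(adj: dict, src: str, dst: str) -> list:
    """Breadth-first search: fewest-hop path in an unweighted topology."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:                     # reconstruct path back to src
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in adj[node]:
            if nbr not in prev:             # first visit = shortest hop count
                prev[nbr] = node
                frontier.append(nbr)
    return []  # unreachable

# Toy topology loosely inspired by the west-coast spine (links are fictitious).
topology = {
    "Nuuk":      ["Maniitsoq"],
    "Maniitsoq": ["Nuuk", "Sisimiut"],
    "Sisimiut":  ["Maniitsoq", "Aasiaat"],
    "Aasiaat":   ["Sisimiut", "Ilulissat"],
    "Ilulissat": ["Aasiaat"],
}
print(shortest_path(topology, "Nuuk", "Ilulissat"))
```

Real routing protocols work with link weights (capacity, delay, cost) rather than plain hop counts, but the principle of always steering traffic along a shortest path is the same.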

Security is a paramount property of any communications network; in today’s environment of many real and dangerous cyber threats, it may be the most important topic to consider. Each node and link (or edge) in a network requires robust defenses against cyber threats. In our design, we need to think about encryption, authentication, and intrusion and anomaly detection systems. Network segmentation will help isolate critical defense communications from civilian traffic, preventing breaches from compromising the entire network. Survivability is enhanced by minimizing the network diameter, a graph property. A low (or lower) network diameter helps a network quickly reroute traffic in case of failures and is an important design element for robustness against targeted attacks and random failures.

Likewise, interoperability is essential for seamless integration between civilian and military communication systems. Flexible protocols and specifications (e.g., open APIs) enable support for different types of traffic and varying security requirements; such frameworks provide the structure, tools, and best practices needed to build and maintain secure communication systems, thereby protecting against the various cyber threats we face today and expect in the future. Efficiency is achieved through effective load balancing (on a logical as well as a physical level) to distribute traffic evenly across the network, prevent bottlenecks, and optimize performance, and by designing for energy-efficient operations, particularly in remote or harsh environments or when part of the network has been compromised.

To support both civilian services and defense operations, accessibility and high availability are very important design requirements for a network with extensive, large-scale coverage, including very remote areas. Incorporating redundant communication links, such as satellite, fiber optic, and wireless, allows for high availability even under adverse and disruptive conditions. In an environment such as Greenland, it makes good sense to ensure that long-haul microwave links have a given level of redundancy, whether by satellite backhaul, submarine cable, or additional MW links. While we always strive for cost-effective designs, this may be a challenge if circumstances dictate that the best redundancy (availability) solution is non-terrestrial (e.g., satellite or submarine). Efficiency should be addressed by optimizing resource allocation to balance cost with performance, ensuring civil and defense needs are met without excessive expenditure, and by sharing infrastructure where feasible to reduce costs while maintaining security through logical separation.

Ultra-secure transport networks are designed to meet stringent reliability, resilience, and security requirements. These types of networks are critical for civil and defense applications, ensuring continuous operation and protection against various threats. The important graph properties that such networks should exhibit include high connectivity, redundancy, low diameter, high node degree, network segmentation, robustness to attacks, scalability, efficient load balancing, geographical diversity, and adaptive routing.

High connectivity ensures multiple independent paths between any pair of nodes in the network, which is crucial for a communication network’s resilience and fault tolerance. This allows the network to maintain functionality even if several nodes or links fail, making it capable of withstanding targeted attacks or random failures without significant performance degradation. Redundancy, which involves having multiple backup paths and nodes, enhances fault tolerance and high availability by providing alternative routes for data transmission if primary paths fail. Redundancy also applies to critical network components such as switches, routers, and communication links, ensuring that no single point of failure is critical.

A low diameter, where the diameter is the longest shortest path between any two nodes, ensures data can travel quickly across the network, minimizing latency. This is especially important for time-sensitive applications. A high node degree, meaning nodes are connected to many other nodes, increases the network’s robustness and provides multiple paths for data to traverse, contributing to security and availability. However, it is essential to manage the trade-off between a high node degree and the complexity of the network.

Network segmentation and compartmentalization enhance security by limiting the impact of breaches or failures to a small part of the network. This is of particular importance for a dual-use network design. Network segmentation divides the network into multiple smaller subnetworks, where each segment may have its own security and access control policies. Network compartmentalization involves designing isolated environments where, for example, data and functionalities are separated based on their criticality and sensitivity (in general, a logical separation). Both strategies help contain cyber threats and prevent them from spreading across the entire network. They also allow for more granular control over network traffic and access. With these considerations, we should arrive at a network that is robust against various types of attacks, both physical and cyber, using secure protocols, encryption, authentication mechanisms, and intrusion detection systems. The aim of the network topology should be to minimize the impact of potential attacks on critical network nodes and links.

In a country such as Greenland, with settlements spread out over very long distances and supported by very long and exposed transmission links (e.g., long-haul microwave links), geographical diversity is an essential design consideration that allows us to protect the functioning of services against localized disasters or failures. Typically, this involves distributing switching and management nodes, including data centers, across different geographic locations, ensuring that a failure in one area or on a main transport link does not disrupt major parts of the network. This is particularly important for disaster recovery and business continuity. Finally, the network should support adaptive and dynamic routing protocols that can quickly respond to changes in the network topology, such as node failures or changes in traffic patterns. Such protocols enhance the network’s resilience by automatically finding the best data transmission paths in real time.

TUSASS NETWORK AS A GRAPH.

Real maps, such as the Greenland map shown below on the left side of Figure 8, provide valuable geographical context and are essential for understanding the physical layout and extent of, for example, a transport network. A graph representation, as shown on the right side of Figure 8, offers a powerful and complementary perspective on the real-world network topology. It can emphasize the structural properties (and qualities) without letting them disappear into geographical details that often are not relevant to the network’s functioning (if designed appropriately). A graph can contain many layers of network information and can, if required, describe pretty much the whole network stack (e.g., from physical transport up through IP and TCP to the application layers). It also supports many types of advanced analysis, design scenarios, and different types of simulations. A graph representation of a communications network is an invaluable tool for network design, planning, troubleshooting, analysis, and management.

Thus, the network graph approach offers several benefits for planning and operations. Firstly, it can often visualize the network’s topology better than a geographical map, facilitating the understanding of the relationships and interconnections between the various network components. Secondly, graph algorithms can be applied to the network graph to support the analysis of its characteristics, such as availability and redundancy scores, connectivity in general, shortest paths, and so forth. This kind of analysis helps us identify critical nodes or links that may be sensitive to network and service disruption. It can also help significantly in maintaining and optimizing a network’s operation.

So, analyzing our communication network’s graph representation makes it possible to identify potential weaknesses in the physical transport network, such as single points of failure (SPoF), bottlenecks, or areas with limited or weak redundancy. These identified weaknesses can then be addressed to enhance the network’s resilience, e.g., by improving the network’s redundancy and availability and thus its overall reliability.

Figure 8 The chart above shows, on the left, the topology of the (real) Tusass transport network with reference to the Greenlandic settlements it connects. Note that the actual transport network is slightly different, as there are more hops between settlements than shown here. On the right is a graph representation of the Tusass transport network shown on the left. The network graph represents the transport network on the west coast, north- and southbound. There are three main connection categories: microwave (MW, black dashed line), submarine cable (orange dashed line), and satellite (blue solid line), of which there are a GEO and a LEO arrangement. The size of a node (settlement) represents the size of its population, which is why Nuuk has the largest circle. The graph has been drawn with the Kamada-Kawai layout, which is particularly useful for small to medium graphs, providing a reasonably intuitive visualization of the structural relationships between nodes.

In the following, it is important to understand that due to Greenland’s specific conditions, such as weather and geography, building a transport network that is robust in terms of reliability and redundancy will always be challenging, particularly when relying on the standard toolbox for designing, planning, and creating such networks. The geographical challenges should be understood to include the resulting lack of civil infrastructure connecting settlements, such as the lack of a road network.

The table below provides key performance indicators (KPIs) for the Greenlandic (Tusass) transport network graph, as illustrated in Figure 8 above. It represents various aspects of the transport network’s structure and connectivity. The graph consists of 93 vertices (e.g., settlements and other connection points, such as long-haul MW radio sites) and 101 edges (transport connections), and it is fully connected, meaning all nodes are reachable within the network. There is only one subgraph, indicating no isolated segments, as expected.

The Average Path Length suggests that it takes, on average, 39 steps to travel between any two nodes. This is a relatively high number, which may indicate a less efficient network. The Diameter of a network is defined as the longest shortest path between any two nodes; it can be shown that the diameter lies between the radius and twice the radius (and not higher). The diameter is found to be 32, indicating a quite high maximum distance between the most distant nodes. This suggests that the network has a quite extensive reach, as is also obvious from the various illustrations of the transport network above (Figure 8) and below (Figures 11 & 12). Apart from indicating potential inefficiencies, a large diameter can also mean that, in worst-case scenarios such as a compromised link or connectivity issues in general, communication between some nodes involves many steps (or hops), potentially leading to higher latency and slower data transmission. Related to the diameter, the network Radius is the minimum eccentricity of any node, i.e., the shortest path from the most central node to the node farthest from it. Here, we find the radius to be 16, which means that even the most centrally located node is relatively far from some other nodes in the network, something that is also very obvious from the various illustrations of the transport network. This emphasizes that the network has nodes that are significantly far apart. Without sufficient redundancy in place, such a transport network may be more sensitive to disruptions of connectivity.
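The relationship radius ≤ diameter ≤ 2 × radius, and the eccentricity-based definitions used above, can be checked on a toy graph. The sketch below uses a simple 7-node chain (an extreme “spine”, loosely resembling a long microwave chain) rather than the real Tusass graph:

```python
from collections import deque
from itertools import combinations

def distances_from(adj, src):
    """Hop counts from `src` to every reachable node (BFS)."""
    dist, frontier = {src: 0}, deque([src])
    while frontier:
        node = frontier.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                frontier.append(nbr)
    return dist

def graph_metrics(adj):
    """Diameter (max eccentricity), radius (min eccentricity), avg path length."""
    ecc = {n: max(distances_from(adj, n).values()) for n in adj}
    diameter, radius = max(ecc.values()), min(ecc.values())
    pairs = len(adj) * (len(adj) - 1) / 2
    apl = sum(distances_from(adj, a)[b] for a, b in combinations(adj, 2)) / pairs
    return diameter, radius, apl

# A 7-node chain: nodes 0..6 connected in a line.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
d, r, apl = graph_metrics(chain)
print(f"diameter={d}, radius={r}, avg path length={apl:.2f}")
assert r <= d <= 2 * r   # holds for every connected graph
```

For the chain, the diameter is 6 (end to end), the radius is 3 (from the middle node), and the diameter sits exactly at the 2 × radius upper bound, which is characteristic of chain-like, spine topologies with little meshing.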

From the perspective of redundancy, a large diameter and radius may imply that the network has fewer alternative paths between distant nodes (i.e., a lower redundancy score). This is, for example, the case between the northern point of Kullorsuaq and Aasiaat. Aasiaat is the first settlement (from the north) to be connected both by microwave and submarine cable and thus has an alternative connectivity solution to the long-haul microwave chain. If a critical node or link fails, the latency over the alternative path might be considerably longer than over the compromised connection, as would be the case if the alternative connectivity were satellite-based, leading to inefficiencies and possibly reduced performance. This can also point to potential capacity bottlenecks, where specific paths are heavily relied upon without having enough capacity to act as the sole connectivity for a given transmission path. Thus, the network’s vulnerability to failures increases, resulting in reduced performance for customers in the affected area.

We find a Graph Density of 0.024. This value indicates a sparse network with relatively few connections compared to the number of possible connections. The Clustering Coefficient of 0.014 indicates that there are very few tightly knit groups of nodes (again easily confirmed by visual inspection of the graph itself; see the various figures). The Average Betweenness (ca. 423) measures how often nodes act as bridges along the shortest paths between other nodes and points to a significant central node (i.e., Nuuk).
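The density and average degree follow directly from the node and edge counts given earlier (93 vertices, 101 edges), and can be reproduced in two lines:

```python
# Reproduce the headline KPIs from the node/edge counts given in the text.
N, E = 93, 101                    # vertices and edges of the Tusass graph

density = 2 * E / (N * (N - 1))   # share of all possible links actually present
avg_degree = 2 * E / N            # each edge contributes to two node degrees

print(f"density        = {density:.3f}")     # sparse network
print(f"average degree = {avg_degree:.2f}")  # ~2 connections per node
```

Both values match the figures quoted in the text (density ~0.024, average degree ~2), confirming the picture of a sparse, chain-like topology.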

The Average Closeness of 0.0003 and the Average Eigenvector Centrality of 0.105 provide insights into settlements’ influence and accessibility within the transport network. The Average Closeness measures how close, on average, nodes are to each other. A high value indicates that nodes (or settlements) are close to each other, meaning that the information (e.g., user data, signaling) transported over the network spreads quickly and efficiently; not surprisingly, the opposite holds for a low average value. For our Tusass network, the average closeness is very low and suggests that the network may face challenges in accessibility and efficiency, with nodes (settlements) relatively far from one another. This will typically impact the speed and effectiveness of communication across the network. The Average Eigenvector Centrality measures the overall importance (or influence) of nodes within a network. The term eigenvector is a mathematical concept from linear algebra; here it represents the stable state of the network and provides insights into the structure of the graph and thus the network. For our Tusass network, the average eigenvector centrality is (very) low and indicates a distribution of influence across several nodes, which may actually prevent reliance on a single point of failure; in general, such structures are thought to enhance a network’s resilience and redundancy. An Average Degree of ca. 2 means that each node has about 2 connections on average, indicating a hierarchical network structure with few direct connections and a somewhat low level of redundancy, consistent with what can be observed in the various illustrations shown in this post. This does indicate that our network may be more vulnerable to disruptions and failures and may have a relatively high latency (and thus a high round-trip time).

Say that, for some reason, the connection to Ilulissat, a settlement north of Aasiaat on the west coast with a little under 5 thousand people, is disrupted due to a connectivity issue between Ilulissat and Qasigiannguit, a neighboring settlement of ca. a thousand people. Today, this would disconnect ca. 11 thousand people, or ca. 20% of Tusass’s customer base, from communications services, as all settlements north of Ilulissat would likewise be disconnected: they rely on the broken connection to transport their data towards Nuuk, the internet, and the submarine cables out of Greenland. In the terminology of the network graph, a broken connection (or edge, as it is called in graph theory) that breaks the network into two (or more) disconnected parts is called a Bridge. Thus, the connection between Ilulissat and Qasigiannguit is a bridge: if it is broken, the northern part of the long-haul microwave network above Ilulissat is disconnected. Similarly, if Ilulissat were a central switching hub and it were disrupted, it would disconnect the upper northern network from the network south of Ilulissat, and we would call Ilulissat an Articulation Point. A submarine cable between Aasiaat and Ilulissat, for example, would provide redundancy for this particular event, mitigating a disruption of the microwave long-haul network between Ilulissat and Aasiaat that would otherwise disconnect at least 20% of the population from communications services.
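The disconnection scenario above is easy to demonstrate on a simplified topology: remove the bridge edge and check reachability from Nuuk with a breadth-first search. The spine below is a deliberately simplified sketch, not the actual hop-by-hop Tusass network:

```python
from collections import deque

def reachable(adj, src):
    """All nodes reachable from `src` (BFS over an undirected topology)."""
    seen, frontier = {src}, deque([src])
    while frontier:
        for nbr in adj[frontier.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return seen

def without_edge(adj, a, b):
    """Copy of the topology with the undirected edge a-b removed."""
    return {n: [m for m in nbrs if {n, m} != {a, b}] for n, nbrs in adj.items()}

# Simplified west-coast spine (hops and links are illustrative only).
spine = {
    "Nuuk": ["Aasiaat"],
    "Aasiaat": ["Nuuk", "Qasigiannguit"],
    "Qasigiannguit": ["Aasiaat", "Ilulissat"],
    "Ilulissat": ["Qasigiannguit", "Uummannaq"],
    "Uummannaq": ["Ilulissat", "Upernavik"],
    "Upernavik": ["Uummannaq"],
}
cut = without_edge(spine, "Qasigiannguit", "Ilulissat")
stranded = set(spine) - reachable(cut, "Nuuk")
print(sorted(stranded))   # everything north of the broken link
```

Cutting the single Qasigiannguit–Ilulissat edge strands Ilulissat and everything north of it, which is exactly the bridge behavior described in the text.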

The transport network has 44 Articulation Points and 57 Bridges, highlighting vulnerabilities where node or link failures could significantly disrupt the connectivity between parts of the network, disconnecting major parts of the network and thus disrupting services. A Modularity of 0.65 suggests a moderately high presence of distinct communities, with the network divided into 8 such communities (see Figure below).

Figure 9 In network analysis, a “natural” community (or cluster) is a group of nodes that are more densely connected to each other than to nodes outside the group; natural communities are denser subgraphs within a larger network. Identifying such communities helps in understanding the structure and function of the network. The above analysis of how Tusass’s transport network connects the various settlements illustrates quite well the various categories of connectivity (e.g., long-haul microwave only, submarine cable redundancy, satellite redundancy, etc.) in the communications network of Tusass.

A Throughput (or total Degree) of 202 indicates the network’s overall capacity for data transmission; it is the sum of all nodes’ connections (twice the number of edges). In a transport network, a settlement’s degree indicates how many direct connections it has to other settlements, and a higher degree implies better connectivity and potentially higher resilience and redundancy. In a fully connected network with 93 nodes, the total degree would be 93 multiplied by 92, which equals 8,556. A value of 202 is therefore quite low in comparison, indicating that the network is far from fully connected, which in any case would be unusual for a transport network of this size. Our transport network is relatively sparse, resulting in a lower total degree and suggesting that fewer direct paths exist between nodes. This may also mean less overall network redundancy: in the case of a node or link failure, there might be fewer alternative routes, which can impact network reliability and resilience. Lower degree values can also indicate limited capacity for data transmission between nodes, potentially leading to congestion or bottlenecks if certain paths become over-utilized. This can then affect the efficiency and speed of data transfer within the network as traffic congestion levels increase.

The KPIs shown in Table 1 below collectively indicate that our Greenlandic transport network has several critical points and connections that could affect redundancy and availability, particularly if they become compromised or experience outages. The high number of articulation points and bridges indicates possible design weaknesses, with the low density and average degree suggesting a limited level of redundancy. In fact, Tusass has, over several years, improved its transport network resilience, focusing on increasing the level of redundancy and reducing critical single points of failure. However, such changes and additions are costly and, due to the environmental conditions of Greenland, also time-consuming, as fewer working days are available for outdoor civil works projects.

Table 1 lists the most important graph KPIs, also described in the text above and below, associated with the graph representation of the Tusass transport network as represented by the settlement connectivity (approximating, but not one-to-one with, the actual transport network).

In graph theory, an articulation point (see Figure 10 below) is a node that, if removed from the network, would split the network into disconnected parts. In our story, an articulation point would be one of our Greenlandic settlements. Such points are thus important for maintaining network connectivity and mark places in the network where alternative redundancy schemes might serve well. Creating additional redundancy in the network’s routing paths and implementing alternative connections will mitigate the impact of a failure of an articulation point, ensuring continued operations in case of a disruption. Basically, the more redundancy a network has, the fewer articulation points it will have; see also the illustration below.

Figure 10 The figure above illustrates the redundancy and availability of 3 simple undirected graphs with 4 nodes. The first graph is fully connected, with no articulation points or bridges, resulting in a redundancy and availability score of 100%: I can remove any node or connection from the graph, and the remainder will stay fully connected. The second graph, which is partly connected, has one articulation point and one bridge, leading to a redundancy and availability score of 75%. If I remove the third node, or the connection between Node 3 and Node 4, I end up with a disconnected Node 4 and a graph broken into two parts (e.g., if Node 3 is removed, we have the two sub-graphs {1, 2} and {4}). The third graph, also partly connected, contains two articulation points and three bridges, resulting in a redundancy score of 0% and an availability score of 50%. Articulation points and bridges are highlighted in red to emphasize their critical roles in graph connectivity. Note: an articulation point is a node whose removal disconnects the graph, and a bridge is an edge whose removal disconnects the graph.
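The counts in the caption above can be verified with a standard depth-first-search (Tarjan-style) computation of articulation points and bridges. The three 4-node graphs are reconstructed here as I read them from the caption (the exact edges of the second graph are my assumption: a triangle {1, 2, 3} with a tail to Node 4):

```python
def cut_points_and_bridges(adj):
    """Tarjan's DFS: articulation points and bridges of an undirected graph."""
    disc, low, aps, bridges, t = {}, {}, set(), set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = t[0]; t[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                         # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:                                 # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:              # no back edge past u-v
                    bridges.add(frozenset((u, v)))
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)                    # u separates v's subtree
        if parent is None and children > 1:
            aps.add(u)                            # root with >1 DFS subtree

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return aps, bridges

# The three 4-node graphs from Figure 10 (second graph's edges assumed).
full  = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}  # complete
part  = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}              # triangle + tail
chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}                    # simple path

for name, g in [("full", full), ("partial", part), ("chain", chain)]:
    aps, bridges = cut_points_and_bridges(g)
    print(f"{name}: {len(aps)} articulation point(s), {len(bridges)} bridge(s)")
```

The output matches the caption: 0/0 for the complete graph, 1 articulation point and 1 bridge for the triangle-with-tail, and 2 articulation points with 3 bridges for the chain.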

Careful consideration of articulation points is crucial in preventing network partitioning, where removing a single node can disconnect the overall network into multiple sub-segments. The connectivity between different segments is obviously critical for continuous data flow and service availability. Often, design and planning requirements dictate that if a network is broken into parts due to various disruption scenarios, these parts should remain functional and continue to provide service, possibly with reduced performance. Network designers use different strategies, such as increasing the physical redundancy of the transmission network as well as applying routing mechanisms at a higher level, such as multipath routing and diverse routing paths. Moreover, optimizing the placement of articulation points and routing paths (i.e., how traffic flows through the communications network) also maximizes resource utilization and may ensure optimal network performance and service availability for an operator’s customers.

Figure 11 illustrates the many articulation points of our Greenlandic settlements, represented as red stars in the graph of the Greenlandic transport network. Removing an articulation point (a critical node) would partition the graph into multiple disconnected components and may lead to severe service interruption.

In graph theory, a bridge is a network connection (or edge) whose removal would split the graph into multiple disconnected components. Such connections are obviously critical for maintaining connectivity and facilitating communication between different parts of the network. In real life, with real networks, designers generally spend considerable time ensuring that such critical connections (so-called bridges) do not have a disproportionate impact on network availability, for example by building alternative (redundant) connections or by ensuring that a compromised bridge affects a minimum number of customers.

For our transport network in Greenland, the long-haul microwave transport network is overall less sensitive to disruption at the settlement level, as the underlying topology resembles a long, high-capacity spine with reasonable built-in redundancy, from which branches of MW radios connect to the individual settlements. Thus, in most cases in this analysis, the long-haul MW radio site in proximity to a given settlement is the actual articulation point (not the settlement itself). The Nuuk data center, a central switching hub, is, by definition, an articulation point of very high criticality.

As discussed above and shown below (Figure 12), in the context of our transport network, bridges play a crucial role in network resilience and fault tolerance. In our case, bridges represent the transport connections between Greenlandic settlements and the core network back in Nuuk (i.e., the master network node). In our representation, a bridge can, for example, be (1) a microwave connection, (2) a submarine cable connection, or (3) a satellite connection provided by Tusass’s geostationary satellite (e.g., Greensat) or by the low-earth-orbit OneWeb constellation. By identifying and managing bridges, network designers can mitigate the impact of link failures and disruptions, ensuring continuous operation and availability of services. Moreover, keeping network bridges in mind, and minimizing their number when planning a transport network, significantly reduces the risk of customer-affecting outages and keeps the impact of transport disruption, and the subsequent network partitioning, to a minimum.

Figure 12 illustrates the many bridges (edges) and transport connections present in the graph of the Greenlandic transport network. Removing a bridge would split the network (graph) into multiple disconnected components, leading to network fragmentation and parts that may no longer sustain services. The picture above is common for long microwave chains with many hops (the connections themselves). The remedy is to make shorter hops, as Tusass is doing, and to ensure that each connection is redundant equipment-wise (e.g., if one radio fails, another takes over). However, such a network would remain sensitive to any disruption of the MW site location and the large MW dish antenna.
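The bridge concept lends itself to the same DFS low-link treatment as articulation points. The sketch below, on an invented ring-plus-spur topology, shows why ring segments are not bridges while a spur to a settlement is:

```python
def find_bridges(adj):
    """Find bridge edges of an undirected graph {node: set(neighbours)}
    using the standard DFS low-link method."""
    disc, low, out = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge closes a cycle
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # (u, v) is a bridge if v's subtree cannot reach u or above.
                if low[v] > disc[u]:
                    out.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return out


# Toy ring Nuuk-A-B-Nuuk with a single spur to settlement S1 (illustrative):
edges = [("Nuuk", "A"), ("A", "B"), ("B", "Nuuk"), ("B", "S1")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

print(find_bridges(adj))  # only the spur edge to S1 is a bridge
```

Note that this graph model collapses parallel radio links into a single edge; equipment-level redundancy on one hop would need per-link identifiers to be captured.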

Network designers should deploy redundancy mechanisms that minimize the disruptive impact of compromised articulation points and bridges. They have several options, such as multipath routing (e.g., ring topologies), link aggregation, and diverse routing paths, to enhance redundancy and availability. These mechanisms help minimize the impact of bridge failures and improve overall network availability by increasing network redundancy at both the physical and logical levels. Moreover, optimizing the placement of bridges and routing paths in a transport network maximizes resource utilization and helps ensure optimal network performance and service availability.

Knowing a given network’s articulation points and bridges allows us to define an Availability Score and a Redundancy Score that we can use to evaluate and optimize the network’s robustness and reliability. Some examples of these concepts for simpler graphs (i.e., 4 nodes) are also shown in Figure 10 above. In the context of the Greenland transport network used here, these metrics help us understand how resilient the network is to failures.

The Availability Score measures the proportion of nodes that are not articulation points and thus do not threaten the network’s overall availability if compromised. The score captures the risk exposure to service disruption in case of a disconnection. As a reminder, an articulation point, or cut-vertex, is a node whose removal increases the number of components of the network and thus, potentially, the number of disconnected parts. The availability score is calculated as the total number of settlements (e.g., 93) minus the number of articulation points (e.g., 44), divided by the total number of settlements (e.g., 93). A higher availability score indicates a more robust network in which fewer nodes are critical points of failure. A score close to one indicates that most nodes are not articulation points, suggesting that the network can sustain multiple node failures without significant loss of connectivity (see Figure 10 for a relatively simple illustration of this).

The Redundancy Score measures the proportion of connections that are not bridges and thus would not cause severe service disruption to our customers if compromised. When a bridge is compromised or removed, the number of network parts increases. The redundancy score is calculated as the total number of transport connections (edges, e.g., 101) minus the number of bridges (e.g., 57), divided by the total number of transport connections (e.g., 101). A higher redundancy score indicates a more resilient network in which fewer edges are critical points of failure. A redundancy score close to 100% would indicate that most of our (transport) connections cannot be categorized as bridges, suggesting that the network can sustain multiple connectivity failures without a significant loss of overall connectivity or a severe service interruption.
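Using the counts quoted above, the two scores reduce to simple ratios (the settlement, edge, articulation-point, and bridge counts below are the example figures from the text):

```python
# Counts quoted in the text for the Greenlandic transport graph
total_settlements = 93       # nodes in the graph
num_articulation_points = 44
total_connections = 101      # edges in the graph
num_bridges = 57

availability_score = (total_settlements - num_articulation_points) / total_settlements
redundancy_score = (total_connections - num_bridges) / total_connections

print(f"Availability score: {availability_score:.0%}")  # → 53%
print(f"Redundancy score:   {redundancy_score:.0%}")    # → 44%
```

These are exactly the 53% and 44% figures reported for the network later in the text.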

Having more switching centers, or central hubs, can significantly enhance a communications network’s resilience, availability, and redundancy. It also reduces the consequences and impact of disruption to critical bridges in the network. Moreover, by distributing traffic, isolating failures, and providing multiple paths for data transmission, these central hubs can ensure continuous service to our customers and improve overall network performance. In my opinion, implementing strategies to support multiple switching centers is essential for maintaining a robust and reliable communications infrastructure capable of withstanding various disruptions and of scaling to meet future demands.

For our Greenlandic transport network shown above, we find an Availability Score of 53% and a Redundancy Score of 44%. While these scores may appear on the low side, we need to keep in mind that we are in Greenland, with a population of 57 thousand mainly distributed along the west coast (from south to north) in about 50+ settlements, with 30%+ living in Nuuk. Tusass’s communications network connects pretty much all settlements in Greenland, covering approximately 3,500+ km of the west coast (comparable to the distance from the top of Norway all the way down to the southernmost point of Sicily), irrespective of the number of people living in them. This is also a very clear desire, expectation, and direction given by the Greenlandic administration (i.e., via the universal service obligation imposed on Tusass). The Tusass transport network is not designed with strict financial KPIs in mind, nor with the requirement that a given connection to a settlement must have a positive return on investment within a few years (as is the prevalent norm in our industry). It has been designed to connect all communities of Greenland at an adequate level of quality and availability, prioritizing coverage of the Greenlandic population (and the settlements they live in) over whether or not it makes hard financial sense. Tusass’s network is continuously upgraded and expanded as the demand for more advanced broadband services increases (as it does anywhere else in the world).

CRITICAL TECHNOLOGIES RELEVANT TO GREENLAND AND THE WIDER ARCTIC.

Greenland’s strategic location in the Arctic and its untapped natural resources, such as rare earth elements, oil, and gas, have increasingly drawn the attention of major global powers like the United States, Russia, and China. The melting Arctic ice due to climate change is opening new shipping routes and making these resources more accessible, escalating the geopolitical competition in the region.

Greenland must establish a defense and security strategy that minimizes its dependency on its natural allies and external actors, mitigating a situation where these may not be available or may lack the resources to commit to Greenland. An integral part of such a security strategy should be a dual-use (civil and defense) requirement whenever possible, ensuring that Greenlandic society gets an immediate and sustainable return on the investments made in establishing a solid security framework.

5G technology offers significant advancements over previous generations of wireless networks, particularly in terms of private networking, speed, reliability, and latency, across a variety of coverage platforms, e.g., (normal fixed) terrestrial antennas, vehicle-based (i.e., Cell on Wheels), balloon-based, drone-based, and LEO-satellite-based. This makes 5G ideal for setting up ad-hoc mobile coverage areas for military and critical civil applications. One of the key capabilities of 5G that supports these use cases is network slicing, which allows the creation of dedicated virtual networks optimized for specific requirements.

Telia Norway has conducted trials together with the Norwegian Armed Forces to demonstrate the use of 5G for military applications (note: I think this is one of the best examples of an operator-defense collaboration on deployment innovation, and it applies directly to Arctic conditions). These trials included setting up ad-hoc 5G networks to support various military scenarios (including in an Arctic-like climate). The key findings demonstrated the ability to provide high-speed, low-latency communications in challenging environments, supporting real-time situational awareness and secure communications for military personnel. Ericsson has also partnered with the UK Ministry of Defence to trial 5G applications for military use. These trials focused on using 5G to support secure communications, enhance situational awareness, and enable the use of autonomous systems in military operations. NATO has conducted exercises incorporating 5G technology to evaluate its potential for improving command and control, situational awareness, and logistics in multi-national military operations. These exercises have shown the potential of 5G to enhance interoperability and coordination among allied forces. It is a very meaningful dual-use technology.

5G private networks offer a dedicated and secure network environment for specific organizations or use cases, which can be particularly beneficial in the Arctic and Greenland. These private networks can provide reliable communication and data transfer in remote and harsh environments, supporting military and civil applications. For instance, in Greenland, 5G private networks can enhance communication for scientific research stations, ensuring that data from environmental monitoring and climate research is transmitted securely and efficiently. They can also support critical infrastructure, such as power grids and transportation networks, by providing a reliable communication backbone. Moreover, in Greenland, the existing public telecommunications network could be designed so that it essentially operates as a “private” network, possibly with a “thin” LEO satellite connection out of the settlement, in case the transmission lines connecting settlements are compromised (e.g., due to natural or unnatural causes).

5G provides ultra-fast data speeds and low latency, enabling (near) real-time communication and data processing. This is crucial for military operations and emergency response scenarios where timely information is vital. Network slicing allows a single physical 5G network to be divided into multiple virtual networks, each tailored to specific applications or user groups. This ensures that critical communications are prioritized and reliable even during network congestion. It should be considered, though, that for Greenland, the transport network (e.g., the long-haul microwave network, routing choices, and satellite connections) might limit how fast the ultra-fast data speeds can become and may, at least along some transport routes, limit round-trip-time performance (e.g., GEO satellite connections).

5G Enhanced Mobile Broadband (eMBB) provides high-speed internet access to support applications such as video streaming, augmented reality (AR), and virtual reality (VR) for situational awareness and training. Massive Machine-Type Communications (mMTC) supports a large number of IoT devices for monitoring and controlling equipment, sensors, and vehicles in both military and civil scenarios. Ultra-Reliable (Low-Latency) Communications (URLLC) ensures dependable and timely communication for critical applications such as command and control systems, as well as unmanned and autonomous communication platforms (e.g., terrestrial, aerial, and underwater drones). I should note that designing defense and secure systems around ultra-low-latency (< 10 ms) requirements would be a mistake, as such latencies cannot be guaranteed under all scenarios. The ultra-reliability (and availability) of transport connectivity is the critical challenge, as it ensures that a given system has sufficient autonomy; ultra-low latency on a given connection is much less critical.

For military (defense) applications, 5G can be rapidly deployed in the field using portable base stations to create a mobile (private) network. This is particularly useful in remote or hostile environments where traditional infrastructure is unavailable or has been compromised. Network slicing can create a secure, dedicated network for military operations. This ensures that sensitive data and communications are protected from interception and jamming. The low latency of 5G supports (near) real-time video feeds from drones, body cameras, and other surveillance equipment, enhancing situational awareness and decision-making in combat or reconnaissance missions.

Figure 13 The hierarchical coverage architecture shown above is relevant for military or, for example, search and rescue operations in remote areas like Greenland (or the Arctic in general), integrating multiple technological layers to ensure robust communication and surveillance. LEO satellites provide extensive broadband and SIGINT & IMINT coverage, supported by GEO satellites for stable links and data processing through ground stations. High Altitude Platforms (HAPs) offer 5G, IMINT, and SIGINT coverage at mid-altitudes, enhancing communication reach and resolution. The HAP system offers an extremely mobile and versatile platform for civil and defense scenarios. An ad-hoc private 5G network on the ground ensures secure, real-time communication for tactical operations. This multi-layered architecture is crucial for maintaining connectivity and operational efficiency in Greenland’s harsh and remote environments. The multi-layered communications network integrates IoT networks that may have been deployed in the past or in a specific mission context.

In critical civil applications, 5G can provide reliable communication networks for first responders during natural disasters or large-scale emergencies. Network slicing ensures that emergency services have priority access to the network, enabling efficient coordination and response. 5G can support the rapid deployment of communication networks in disaster-stricken areas, ensuring that affected populations can access critical services and information. Network slicing can allocate dedicated resources for smart city applications, such as traffic management, public safety, and environmental monitoring, ensuring that these services remain operational even during peak usage times. For Greenland, 5G availability would thus be ensured through coastal settlements, and possibly coastal coverage (outside settlements) at a lower frequency range (e.g., 600–900 MHz), prioritizing 5G coverage over 5G enhanced mobile broadband (i.e., any coverage at a high coverage probability is better than no coverage with certainty).

Besides 5G, which other technologies would be of importance in a Greenland technology strategy as it relates to its security, while ensuring that its investments and efforts also return benefits to its society (e.g., a dual-use priority):

  • It would be advisable to increase the number of community networks within the overall network that can continue functioning if cut off from the main communications network, so that communications services in smaller and remote settlements depend less on one or a very few central control and management hubs. This requires, at the level of a local settlement or a grouping of settlements: self-healing, remote (as opposed to central-hub) management, distributed databases, a regional data center (typically a few racks), edge computing, local DNS, CDNs and content hosting, a satellite connection, … Most telecom infrastructure vendors today offer network-in-a-box solutions that allow for such designs. These solutions enable private 5G networks to function in isolation from the public PLMN and fixed transport network.
  • It is essential to develop a (very) highly available and redundant digital transport infrastructure, leveraging the existing topology strengthened by additional submarine cables (less critical than some of the other means of connectivity), increased transport-ring and higher-redundancy topologies, and multi-level satellite connections (GEO, MEO & LEO, with supplier redundancy) with more satellite ground gateways on Greenland (e.g., avoiding “off-Greenland” traffic routing). In addition, a remotely controlled stratospheric drone platform could provide further connectivity redundancy at very high broadband speeds and low latencies.
  • Satellite backhaul solutions, operating, for example, from Low Earth Orbit (LEO), as shown in the Figure below, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly long-haul transport networks for carrying traffic away from remote populated areas. Satellite backhauls not only offer a substantially better financial solution for enhancing internet connectivity to remote areas but are often the only viable solution for connectivity. The satellite backhaul is an important part of the toolkit for improving the redundancy and availability of, in particular, very long and extensive long-haul microwave transport networks through remote areas (e.g., Greenland’s rugged and frequently hostile coastal areas) where increasing the level of availability and redundancy with terrestrial means may be impractical (due to environmental factors) or incredibly costly.
    – LEO satellites provide several security advantages over GEO satellites when considering resistance to hostile attempts to disrupt satellite communications. One significant factor is the altitude at which LEO satellites operate, between 500 and 2,000 kilometers, compared to GEO satellites, which are positioned approximately 36,000 kilometers above the equator. The lower altitude makes LEO satellites less vulnerable to long-range anti-satellite (ASAT) missiles.
    – LEO satellite networks are usually composed of large constellations, often numbering in the hundreds to thousands of satellites. This extensive constellation provides considerable redundancy, meaning the network can still function effectively if some satellites are “taken out.” In contrast, GEO satellites are typically much fewer in number, and each satellite covers a much larger area, so losing even one GEO satellite will have a significant impact.
    – Another advantage of LEO satellites is their rapid movement across the sky relative to the Earth’s surface, completing an orbit in about 90 to 120 minutes. This constant movement makes it more challenging for hostile actors to track and target individual satellites for extended periods. In comparison, GEO satellites remain stationary relative to a fixed point on Earth, making them easier to locate and target.
    – LEO satellites’ lower altitude also results in lower latency than GEO satellites. This benefits secure, time-sensitive communications, which are also less susceptible to interception and jamming due to the reduced time delay. However, any security architecture for the critical transport infrastructure should not rely on only one type of satellite configuration.
    – Both GEO and LEO satellites have their purpose and benefits. Moreover, a hierarchical multi-dimensional topology, including stratospheric drones and even autonomous underwater vehicles, is worth considering when designing a critical communications architecture. It is also worth remembering that public satellite networks may offer a much higher degree of communications redundancy and availability than defense-specific constellations. However, for SIGINT & IMINT collection, defense-specific satellite constellations are likely much more advanced (unfortunately, they are also not as numerous as their civilian “cousins”). This said, a stratospheric aerial platform (e.g., HAP) would be substantially more powerful for IMINT, and possibly also for some SIGINT tasks (and/or less costly and more versatile), than a defense-specific satellite solution.
Figure 14 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb as well as Starlink with its so-called “Community Gateway” (i.e., using the Ka-band). It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: The indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds illustrate the network’s capabilities.
Figure 15 illustrates LEO satellite direct-to-device communication in remote areas without terrestrial communications infrastructure, where satellites are the only means of communication for a normal mobile device or a classical satellite phone. Courtesy: DALL-E.
  • Establish an unmanned (remotely operated) stratospheric High Altitude Platform System (HAPS) (i.e., an advanced drone-based platform) or Unmanned Aerial Vehicles (UAVs) over Greenland (or the Arctic region) with payload-agnostic capabilities. This could easily be run out of existing Greenlandic ground-based aviation infrastructure (e.g., Kangerlussuaq, Nuuk, or many other community airports across Greenland). The platform could eventually become autonomous or require little human intervention. It could support mission-critical ad-hoc networking for civil and defense applications (over Greenland). Such a multi-purpose platform can be used for IMINT and SIGINT (i.e., both civil & defense) and for civil communications, including establishing connectivity to the ground-based transport network in case of disruptions. Lastly, a HAPS may also permanently offer high-quality, high-capacity 5G mobile services or act as a private, ultra-secure 5G network in an ad-hoc mission-specific scenario. For a detailed account of stratospheric drones and how they compare with low-earth satellites, see my recent article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?”.
    Stratospheric drones, which operate in the stratosphere at altitudes of around 20 to 50 kilometers, offer several security advantages over traditional satellite communications and submarine communication cables, especially from a Greenlandic perspective. These drones are less accessible and harder to target due to their altitude, which places them out of reach of most ground-based anti-aircraft systems and well above the range of most manned aircraft. This makes them less vulnerable to hostile actions than satellites, which can be targeted by anti-satellite (ASAT) missiles, or submarine cables, which can be physically cut or damaged by underwater operations. The drones would stay over Greenlandic, or NATO, territory, while submarine communications cables and satellites, by nature, design, and purpose, generally extend far beyond the territory of Greenland.
    – The mobility and flexibility of stratospheric drones allow them to be quickly repositioned as needed, making it difficult for adversaries to consistently target them. Unlike satellites that follow predictable orbits or submarine cables with fixed routes, these drones can change their location dynamically to respond to threats or optimize their coverage. This is particularly advantageous for Greenland, whose vast and harsh environment makes maintaining and protecting fixed communication infrastructure challenging.
    – Deploying a fleet of stratospheric drones provides redundancy and scalability. If one drone is compromised or taken out of service, others can fill the gap, ensuring continuous communication coverage. This distributed approach reduces the risk of a single point of failure, which is more pronounced with individual satellites or single submarine cables. For Greenland, this means a more reliable and resilient communication network that can adapt to disruptions.
    – Stratospheric drones can be rapidly deployed and recovered, making them easier to maintain and upgrade as needed than, for example, satellite-based platforms and even terrestrially deployed networks. This quick deployment capability is crucial for Greenland, where harsh weather conditions can complicate the maintenance and repair of fixed infrastructure. Unlike satellites, which require expensive and complex launches, or submarine cables, which involve extensive underwater laying and maintenance efforts, drones offer a more flexible and manageable solution.
    – Drones can also establish secure, line-of-sight communication links that are less susceptible to interception and jamming. Operating closer to the ground than satellites allows the use of higher frequencies and narrower beams, which are more difficult to jam. Additionally, drones can employ advanced encryption and frequency-hopping techniques to further secure their communications, ensuring that sensitive data remains protected. Stratospheric drones can also be equipped with advanced surveillance and countermeasure technologies to detect and respond to threats. For instance, they can carry sensors to monitor the electromagnetic spectrum for jamming attempts and deploy countermeasures to mitigate these threats. This proactive defense capability enhances their security profile compared to passive communication infrastructure like satellites or cables.
    – From a Greenlandic perspective, stratospheric drones offer significant advantages. They can be deployed over specific areas of interest, providing targeted communication coverage for remote or strategically important regions. This is particularly useful for covering Greenland’s vast and sparsely populated areas. Modern stratospheric drones are designed to support multi-dimensional payloads, or, as it might also be called, to be payload-agnostic (e.g., SIGINT & IMINT equipment, 5G base station and advanced antenna, laser communication systems, …), and to stay operational for extended periods, ranging from weeks to months, ensuring sustained communication coverage without the need for frequent replacements or maintenance.
    – Last but not least, Greenland may be an ideal safe testing ground due to its vast, remote and thinly populated regions.
Figure 16 illustrates a Non-Terrestrial Network consisting of a stratospheric High Altitude Platform (HAP) drone-based constellation providing terrestrial cellular broadband services to mobile users on their normal 5G terminal equipment, which may range from smartphones and tablets to civil and military IoT networks and devices. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. One could assign three HAPs to cover a given area to deliver very high-availability services to a rural area. The operating altitude of a HAP constellation is between 10 and 50 km, with an optimum of around 20 km. It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to contemplate having the gNB (the full 5G radio node) entirely in the stratospheric drone, allowing easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.
  • Unmanned Underwater Vehicles (UUVs), also known as Autonomous Underwater Vehicles (AUVs), are obvious systems to deploy for underwater surveillance & monitoring, and they may also serve obvious dual-use purposes (e.g., fisheries & resource management, iceberg tracking and navigation, coastal defense, and infrastructure protection such as for submarine cables). Depending on the mission parameters and the type of AUV, the range spans from up to 100 kilometers (e.g., REMUS 100) to thousands of kilometers (e.g., SeaBed2030), with an operational time (endurance) ranging from a maximum of 24 hours (e.g., REMUS 100, Bluefin-21), to multiple days (e.g., Boeing Echo Voyager), to several months (SeaBed2030). A subset of this kind of underwater solution would be swarm-like AUV constellations. See Figure 17 below for an illustration.
  • Increase RD&T (Research, Development & Trials) on the Arctic Internet of Things (A-IoT) (note: this requires some level of coverage, at minimum satellite) for civil, defense/military (i.e., Military IoT or M-IoT), and dual-use applications, such as surveillance & reconnaissance, environmental monitoring, infrastructure security, etc. (note: IoT is not only for terrestrial use cases but is also highly interesting for aquatic applications in combination with AUVs/UUVs). Military IoT refers to the integration of IoT technologies tailored explicitly for military applications. These devices enhance operational efficiency, improve situational awareness, and support decision-making processes in various military contexts. Military IoT encompasses various connected devices, sensors, and systems that collect, transmit, and analyze data to support defense and security operations. In the vast and remote regions of Greenland and the Arctic, military IoT devices can be deployed for continuous surveillance and reconnaissance. This includes using drones, such as advanced HAPS, equipped with cameras and sensors to monitor borders, track the movements of ships and aircraft, and detect any unauthorized activities. Military IoT sensors can also monitor Arctic environmental conditions, tracking ice thickness changes, weather patterns, and sea levels. Such data is crucial for planning and executing military operations in the challenging Arctic environment but is also of tremendous value to Greenlandic society. The importance of dual-use cases, civil and defense, cannot be overstated; here are some examples:
    Infrastructure Monitoring and Maintenance: (Military Use Case) IoT sensors can be deployed to monitor the structural integrity of military installations, such as bases and airstrips, ensuring they remain operational and safe for use. These sensors can detect stress, wear, and potential damage due to extreme weather conditions. These IoT devices and networks can also be deployed for perimeter defense and monitoring. (Civil Use Case) The same technology can be applied to civilian infrastructure, including roads, bridges, and public buildings. Continuous monitoring can help maintain these civil infrastructures by providing early warnings about potential failures, thus preventing accidents and ensuring public safety.
    Secure Communication Networks: (Military Use Case) Military IoT devices can establish secure communication networks in remote areas, ensuring that military units can maintain reliable and secure communications even in the Arctic’s harsh conditions. This is critical for coordinating operations and responding to threats. (Civil Use Case) In civilian contexts, these communication networks can enhance connectivity in remote Greenlandic communities, providing essential services such as emergency communications, internet access, and telemedicine. This helps bridge the digital divide and improve residents’ quality of life.
    Environmental Monitoring and Maritime Safety: (Military Use Case) Military IoT devices, such as underwater sensor networks and buoys, can be deployed to monitor sea conditions, ice movements, and potential maritime threats. These devices can provide real-time data critical for naval operations, ensuring safe navigation and strategic planning. (Civil Use Case) The same sensors and buoys can be used for civilian purposes, such as ensuring the safety of commercial shipping lanes, fishing operations, and maritime travel. Real-time monitoring of sea conditions and icebergs can prevent maritime accidents and enhance the safety of maritime activities.
    Fisheries Management and Surveillance: (Military Use Case) IoT devices can monitor and patrol Greenlandic waters for illegal fishing activities and unauthorized maritime incursions. Drones and underwater sensors can track vessel movements, ensuring that military forces can respond to potential security threats. (Civil Use Case) These monitoring systems can support fisheries management by tracking fish populations and movements, helping to enforce sustainable fishing practices and prevent overfishing. This data is important for the local economy, which heavily relies on fishing.
  • Implement Distributed Acoustic Sensing (DAS) on submarine cables. DAS utilizes existing fiber-optic cables, such as those used for telecommunications, to detect and monitor acoustic signals in the underwater environment. This innovative technology leverages the sensitivity of fiber-optic cables to vibrations and sound waves, allowing for the detection of various underwater activities. This could also be integrated with the AUV and A-IOTs-based sensor systems. It should be noted that jamming a DAS system is considerably more complex than jamming traditional radio-frequency (RF) or wireless communication systems. DAS’s significant security and defense advantages might justify deploying more submarine cables around Greenland. This investment is compelling because of enhanced surveillance and security, improved connectivity, and strategic and economic benefits. By leveraging DAS technology, Greenland could strengthen its national security, support economic development, and maintain its strategic importance in the Arctic region.
  • Greenland should widely embrace autonomous systems deployment and technologies based on artificial intelligence (AI). AI is a technology that could compensate for the challenges of having a vast geography, a hostile climate, and a small population. This may, by far, be one of the most critical components of a practical security strategy for Greenland. Getting experience with autonomous systems in a Greenlandic and Arctic setting should be prioritized. Collaboration & knowledge exchange with Canadian and American universities should be structurally explored, as well as other larger (friendly) countries with Arctic interests (e.g., Norway, Iceland, …).
  • Last but not least, cybersecurity is an essential, even foundational, component of the securitization of Greenland and the wider Arctic, addressing the protection of critical infrastructure, the integrity of surveillance and monitoring systems, and the defense against geopolitical cyber threats. The present state and maturity level of cybersecurity and defense (against cyber threats) related to Greenland’s critical infrastructure have to improve substantially. Prioritizing cybersecurity may have to come at the expense of other critical activities due to the limited resources with relevant expertise available to businesses in Greenland. Today, international collaboration is essential for Greenland to develop strong cyber defense capabilities, ensure secure communication networks, and implement effective incident response plans. However, it is vital for Greenland’s security that a cybersecurity architecture is tailor-made to the particularities of Greenland and allows Greenland to operate independently should friendly actors and allies not be in a position to provide assistance.
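To make the DAS idea above slightly more concrete, here is a toy sketch of a short-term/long-term average (STA/LTA) trigger, a classic detector in seismo-acoustic monitoring, applied to a synthetic strain-rate-like trace with one injected disturbance. Everything here is invented for illustration: the sampling, the window lengths, the threshold, and the signal itself; a real DAS interrogator produces far richer, spatially resolved data.

```python
# Illustrative sketch only: a toy detector for acoustic disturbances in a
# DAS-like signal, using a short-term/long-term average (STA/LTA) energy
# trigger. All numbers (window lengths, threshold, synthetic event) are
# hypothetical placeholders.
import random

def sta_lta_trigger(signal, sta_len=5, lta_len=50, threshold=4.0):
    """Return indices where short-term energy exceeds long-term energy
    by `threshold`, flagging a possible disturbance on the fibre."""
    triggers = []
    for i in range(lta_len, len(signal)):
        sta = sum(x * x for x in signal[i - sta_len:i]) / sta_len
        lta = sum(x * x for x in signal[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Synthetic quiet background with one injected "event" (e.g., an anchor
# strike near the cable) between samples 300 and 320.
random.seed(42)
trace = [random.gauss(0.0, 1.0) for _ in range(500)]
for i in range(300, 320):
    trace[i] += 15.0

hits = sta_lta_trigger(trace)
print("first trigger near sample", hits[0] if hits else None)
```

In a real deployment, triggers like these would be correlated across many fibre channels to localize the disturbance along the cable, which is what makes DAS attractive for perimeter and cable monitoring.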
Figure 17 Above illustrates an Unmanned Underwater Vehicle (UUV) near the coast of Greenland inspecting a submarine cable. The UUV is a robotic device that operates underwater without a human onboard, controlled either autonomously or remotely. In and around Greenland’s coastline, UUVs may serve both defense and civilian purposes. For defense, they can patrol for submarines, monitor underwater traffic, and detect potential threats, enhancing maritime security. Civilian applications include search & rescue missions and scientific research, where UUVs map the seabed, study marine life, and monitor environmental changes, which is crucial for understanding climate change impacts. Additionally, they inspect underwater infrastructure like submarine cables, ensuring their integrity and functionality. UUVs’ versatility makes them invaluable for comprehensive underwater exploration and security along Greenland’s long coastline. Integrated defense architectures may combine the UUV, Distributed Acoustic Sensor (DAS) networks deployed at submarine cables, and cognitive AI-based closed-loop security solutions (e.g., autonomous operation). Courtesy: DALL-E.

How do we frame some of the above recommendations within the context of securitization in the academic sense of the word, aligned with the Copenhagen School (as I understand it)? I will structure this around the “Securitizing Actor(s),” the “Extraordinary Measures Required,” and the “Geopolitical Implications”:

Example 1: Improving Communication Networks as a Security Priority.

Securitizing Actor(s): Greenland’s government, possibly supported by Denmark and international allies (e.g., the USA’s Pituffik Space Base in Greenland), frames the lack of highly available and reliable communication networks as an existential threat to national security, economic development, and stability, including the ability to defend Greenland effectively during a global threat or crisis.

Extraordinary Measures Required: Greenland can invest in advanced digital communication technologies to address the threat. This includes upgrading infrastructure such as fiber-optic cables, satellite communication systems, stratospheric high-altitude platforms (HAPs) with IMINT, SIGINT, and broadband communications payloads, and 5G wireless networks to ensure they are reliable and can handle increased data traffic. Implementing advanced cybersecurity measures to protect these networks from cyber threats is also crucial. Additionally, investments in broadband expansion to remote areas ensure comprehensive coverage and connectivity.

Geopolitical Implications: By framing the reliability and availability of digital communications networks as a national security issue, Greenland ensures that significant resources are allocated to upgrade and maintain these critical infrastructures. Greenland may also attract European Union investments to leapfrog its critical communications infrastructure. This improves Greenland’s day-to-day communications and economic activities and enhances its strategic importance by ensuring a secure and efficient information flow. Reliable digital networks are essential for attracting international investments, supporting digital economies, and maintaining social cohesion.

Example 2: Geopolitical Competition in the Arctic

Securitizing Actor(s): The Greenland government, aligned with Danish and international allies’ interests, views the increasing presence of Russian and Chinese activities in the Arctic as a direct threat to Greenland’s sovereignty and security.

Extraordinary Measures Required: In response, Greenland can adopt advanced surveillance and defense technologies, such as Distributed Acoustic Sensing (DAS) systems to monitor underwater activities and Unmanned Aerial & Underwater Vehicles (UAVs & UUVs) for continuous aerial surveillance. Additionally, deploying advanced communication networks, including satellite-based systems, ensures secure and reliable information flow.

Geopolitical Implications: By framing foreign powers’ (e.g., Russia’s and China’s) increased activities as a security threat, Greenland can attract NATO and European Union investments and support for deploying cutting-edge surveillance and defense technologies. This enhances Greenland’s security infrastructure, deters potential adversaries, and solidifies its strategic importance within the alliance.

Example 3: Cybersecurity as a National Security Priority.

Securitizing Actor(s): Greenland, aligned with its allies, frames the potential for cyber-attacks on critical infrastructure (such as power grids, communication networks, and military installations) as an existential threat to national security.

Extraordinary Measures Required: To address this threat, Greenland can invest in state-of-the-art cybersecurity technologies, including artificial intelligence-driven threat detection systems, encrypted communication channels, and comprehensive incident response frameworks. Establishing partnerships with global cybersecurity firms and participating in international cybersecurity exercises can also be part of the strategy.

Geopolitical Implications: By securitizing cybersecurity, Greenland ensures that significant resources are allocated to protect its digital infrastructure. This safeguards its critical systems and enhances its attractiveness as a secure location for international investments, reinforcing its geopolitical stability and economic growth.

Example 4: Arctic IoT and Dual-Use Military IoT Networks as a Security Priority.

Securitizing Actor(s): Greenland’s government, supported by Denmark and international allies, frames the lack of Arctic IoT and dual-use military IoT networks as an existential threat to national security, economic development, and environmental monitoring.

Extraordinary Measures Required: Greenland can invest in deploying Arctic IoT and dual-use military IoT networks to address the threat. These networks involve a comprehensive system of interconnected sensors, devices, and communication technologies designed to operate in the harsh Arctic environment. This includes deploying sensors for environmental monitoring, enhancing surveillance capabilities, and improving communication and data-sharing across military and civilian applications.

Geopolitical Implications: By framing the lack of Arctic IoT and dual-use military IoT networks as a national security issue, Greenland ensures that significant resources are allocated to develop and maintain these advanced technological infrastructures. This improves situational awareness and operational efficiency and enhances Greenland’s strategic importance by providing real-time data and robust monitoring capabilities. Reliable IoT networks are essential for protecting critical infrastructure, supporting economic activities, and maintaining environmental and national security.

THE DANISH DEFENSE & SECURITY AGREEMENT COVERING THE PERIOD 2024 TO 2033.

Recently, Denmark approved its new defense and security agreement for the period 2024-2033. It strongly emphasizes Denmark’s strategic reorientation in response to the new geopolitical realities. A key element in the Danish commitment to NATO’s goals is a spending level approaching, and possibly exceeding, 2% of GDP on defense by 2030. It is not 2% for the sake of 2%. There really is a lot to be done, and as soon as possible. The agreement entails significant financial investments, totaling approximately 190 billion DKK (ca. 25+ billion euros) over the next ten years, to leapfrog defense capabilities and critical infrastructure.

The defense agreement emphasizes the importance of enhancing security in the Arctic region, including, of course, Greenland. Thus, Greenland’s strategic significance in the current geopolitical landscape is recognized, particularly in light of Russian activities and Chinese expressed intentions (e.g., re: the “Polar Silk Road”). The agreement aims to strengthen surveillance, sovereignty enforcement, and collaboration with NATO in the Arctic. As such, we should expect investments in improved surveillance capabilities that would strengthen the enforcement of Greenland’s sovereignty, ensuring that Greenland and Denmark, together with their allies, can effectively monitor and protect their Arctic territories. The defense agreement stresses the importance of supporting NATO’s mission in the Arctic region, contributing to collective defense and deterrence efforts.

What I very much like in the new defense agreement is the expressed focus on dual-use infrastructure investments that benefit Greenland’s defense (& military) and civilian sectors. This includes upgrading existing facilities and enhancing operational capabilities in the Arctic that allow a rapid response to security threats. The agreement ensures that defense investments also bring economic and social benefits to Greenlandic society, consistent with a dual-use philosophy. For this to become a reality, it will require close collaboration with local authorities, businesses, and research institutions to support the local economy and create new job opportunities, as well as a local emphasis on relevant education, ensuring that such investments are locally sustainable and not reliant on an “army” of Danes and others of non-Greenlandic origin.

The defense agreement unsurprisingly expresses a strong commitment to enhancing cybersecurity measures as well as addressing hybrid threats in Greenland. This reflects the broader security challenges of the new technology introduction required, the present cyber-maturity level, and, of course, the current (and future expected) geopolitical tensions. The architects behind the agreement have also realized that there is a big need to improve recruitment, retention, and appropriate training within the defense forces, ensuring that personnel are well-prepared to operate in the Arctic environment in general and in Greenland in particular.

It is great to see that the Danish “Defense and Security Agreement” for 2024-2033 reflects the principles of securitization by framing Greenland’s security as an existential matter and justifying substantial investments and strategic initiatives in response. The agreement focuses on enhancing critical infrastructure, surveillance platforms, and international cooperation while ensuring benefits to the local economy, in line with the concept of securitization. The aim is to ensure that Greenland is well-prepared to address current and future security challenges and anticipated threats in the Arctic region.

The agreement underscores the importance of advanced surveillance systems, such as satellite-based monitoring and sophisticated radar systems. These technologies are deemed important for maintaining situational awareness and ensuring the security of Denmark’s territories, including Greenland and the Arctic region in general. Enhanced surveillance capabilities are essential for detecting and tracking potential threats, improving both response times and effectiveness. Moreover, such capabilities are also important for search and rescue and many other civilian use cases, consistent with the intention that technologies applied for defense purposes have dual-use capabilities and can also serve civilian purposes.

There are more cyber threats than ever before. These threats are getting increasingly sophisticated with the advance of AI and digitization in general. So, it is not surprising that cybersecurity technologies are also an important topic in the agreement. The increasing threat of cyber attacks, particularly against critical infrastructure and often initiated by hostile state actors, necessitates a robust cybersecurity defense in order to protect our critical infrastructure and the sensitive information it typically contains. This includes implementing advanced encryption, intrusion detection systems, and secure communication networks to safeguard against cyber threats.
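As a small, concrete illustration of one of the secure-communication building blocks mentioned above, the sketch below uses Python’s standard-library hmac module to integrity-protect a telemetry message. The key and message are placeholders of my choosing; a real deployment would add encryption proper, key management, and replay protection on top of this.

```python
# Minimal sketch: integrity protection for telemetry messages with
# HMAC-SHA256 (Python stdlib only). The key and message below are
# hypothetical placeholders, not part of any real system.
import hmac
import hashlib

KEY = b"placeholder-shared-secret"  # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time tag comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"ice-thickness=1.42m;site=west-coast-07"
tag = sign(msg)
print(verify(msg, tag))                # True: untampered message
print(verify(msg + b"tampered", tag))  # False: modification detected
```

The point is not the specific primitive but the design choice: even the simplest remote sensor link benefits from authenticated messages, so that a disrupted or spoofed feed is detected rather than silently trusted.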

The defense agreement also highlights the importance of having access to unmanned systems, or drones. Quite a few examples of such systems are discussed in some detail above and can be found in my more extensive article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?”. There are two categories of drones that may be interesting. One is the unmanned version, typically remotely controlled from an operations center at a distance from the actual unmanned platform. The other is the autonomous (or semi-autonomous) version, enabled by AI and many integrated sensors to operate independently of direct human control, or at least largely without real-time human intervention. Unmanned Vehicles (UVs) and Autonomous Vehicles (AVs) are typically associated with underwater (UUV/AUV) or aerial (UAV/AAV) platforms. This kind of technology provides versatile, very flexible surveillance & reconnaissance and defense platforms that are not reliant on a large staff of experts to operate. They are particularly valuable in the Arctic region, where harsh environmental conditions can limit the effectiveness of manned missions.

The development and deployment of dual-use technologies are also emphasized in the agreement. These technologies, which have both civilian and military applications, are necessary for maximizing the return on investment in defense infrastructure. It may also, at the moment, be easier to find funding if it is defense-related. Technology examples include advancements in satellite communications and broadband networks, enhancing both military capabilities and civilian connectivity; how these various communications technologies can seamlessly integrate with one another is particularly important.

Furthermore, artificial intelligence (AI) has been identified as a transformative technology for defense and security. While AI is often referred to as a singular technology, it is actually an umbrella term that encompasses a broad spectrum of frameworks, tools, and techniques with a common basis in models trained on large (or very large) sets of data to offer predictive capabilities of increasing sophistication. This leads to the expectation that, for example, AI-driven analytics and decision-making applications will enhance operational efficiency and, not unimportantly, the quality of real-time decision-making in the field (expectations that may be somewhat optimistic, at least for now). AI-enabled defense platforms and applications are likely to result in improved threat detection as well as better support for strategic planning. As long as the risk of false outcomes is acceptable, such systems will enrich defense capabilities and provide significant advantages in managing complex and highly dynamic security environments and time-critical threat scenarios.

Lastly, the agreement stresses the need for advanced logistics and supply chain technologies. Efficient logistics are critical for sustaining military operations and ensuring the timely delivery of equipment and supplies. Automation, real-time tracking, and predictive analytics in logistics management can significantly improve the resilience and responsiveness of defense operations.

AT THIS POINT IN MY GREENLANDIC JOURNEY.

In my career, I have designed, planned, built, and operated telecommunications networks in many places under vastly different environmental conditions (e.g., geography and climate). The more I think about building robust and highly reliable communication networks in Greenland, including all the IT & compute enablers required, the more I appreciate how challenging and different it is to do so in Greenland. Tusass has built a robust and reliable transport network connecting nearly all settlements in Greenland down to the smallest size. Tusass operates and maintains this network under some of the harshest environmental conditions in the world, with an incredible dedication to all those settlements that depend on being connected to the outside world and where a compromised connection may have dire consequences for the unconnected community.

Figure 18 Shows a coastal radio site in Greenland. It illustrates one of the frequent issues of critical infrastructure being covered by ice as well as snow. Courtesy: Tusass A/S (Greenland).

Comparing the capital spending level of Tusass in Greenland with the averages of other Western European countries, we find that Tusass does not invest significantly more of its revenue than the telco industry’s country averages elsewhere in Western Europe. In fact, its 5-year average Capex-to-Revenue ratio is close to the Western European country average (19% over the period 2019 to 2023). In terms of capital investments relative to revenue-generating units (RGUs), however, Tusass has the highest level, at 18.7 euros per RGU per month (5-year average, 2019 to 2023), compared with an average of 6.6 euros per RGU per month across several Western European markets, as shown in the chart below. This difference is not surprising when considering the population of Greenland compared to the populations of the countries in the comparison. Capital investments for Tusass also show much larger year-to-year variation than in other countries, because there is a substantially smaller population to bear the burden of financing big capital-intensive projects, such as the deployment of new submarine cables (typically 30 to 50 thousand euros per km), new satellite connections (normally 10+ million euros depending on the asset arrangement), RAN modernization (e.g., 5G), and so forth. For example, the average absolute capital spend was 14.0±1.5 million euros between 2019 and 2022, while 2023 came to almost 40 million euros (a little less than 4% of the annual defense and security budget of Denmark) due to, according to the Tusass annual report, RAN modernization (e.g., 5G), satellite (e.g., Greensat), and submarine cable investments (initial seabed investigation).
All these investments bring better quality through higher reliability, integrity, and availability of Greenland’s critical communications infrastructure, even though there is not a large population (e.g., millions) to spread these substantial investments over.
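For readers who want to reproduce this kind of benchmarking on their own data, the two capital-intensity metrics used above can be computed as follows. The input figures below are illustrative placeholders I made up for the example, not Tusass’s actual numbers.

```python
# Toy computation of the two capital-intensity metrics used in the
# comparison above. Inputs are illustrative placeholders, not actuals.

def capex_to_revenue(capex_eur: float, revenue_eur: float) -> float:
    """Capex as a share of revenue over the same period."""
    return capex_eur / revenue_eur

def capex_per_rgu_month(capex_eur: float, rgus: int, months: int = 12) -> float:
    """Capital spend per revenue-generating unit (subscription) per month."""
    return capex_eur / (rgus * months)

# Hypothetical annual figures for a small Arctic operator:
capex, revenue, rgus = 15_000_000, 80_000_000, 70_000

print(f"Capex/Revenue: {capex_to_revenue(capex, revenue):.1%}")          # 18.8%
print(f"Capex per RGU per month: {capex_per_rgu_month(capex, rgus):.1f} EUR")
```

The second metric is the one that exposes Greenland’s structural disadvantage: a fixed lump of infrastructure spend divided over tens of thousands of subscriptions rather than millions.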

Figure 19 In a Western European context, Greenland does not, on average, invest substantially more in telecom infrastructure relative to its revenues and revenue-generating units (i.e., its customer service subscriptions), despite having a very small population of about 57 thousand and an area of 2.2 million square kilometers, larger than Alaska and only about 33% smaller than India. The chart shows the average Capex-to-Revenue ratio and the Capex in euros per RGU per month over the last 5 years (2019 to 2023) for Greenland (source: Tusass annual reports) and Western Europe (using data from New Street Research).

The capital investments required to leapfrog Greenland’s communications network availability and redundancy scores beyond 70% (versus 53% and 44%, respectively, in 2023) would be very substantial, requiring additional microwave connections (including redesigns), submarine cables, new satellite arrangements, and new ground stations (e.g., in settlements with more than 1,000 inhabitants).
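To give a feel for what a redundancy score of this kind measures, here is a toy sketch in the spirit of the graph-theoretic view referenced in the Further Reading list: the fraction of single-link failures that leave all settlements connected. The five-node topology below is invented for illustration and is not Tusass’s actual network.

```python
# Toy link-redundancy score for a transport network: the share of single
# link failures after which every node can still reach every other node.
# The topology below is invented for illustration only.

def connected_after_removal(edges, nodes, removed):
    """Iterative graph traversal: is the graph still connected with one
    edge removed (in either direction)?"""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if (u, v) != removed and (v, u) != removed:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == len(nodes)

nodes = ["A", "B", "C", "D", "E"]
# A coastal chain A-B-C-D-E plus one protection link (A-C) in the north;
# the southern links C-D and D-E remain single points of failure.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "C")]

surviving = sum(connected_after_removal(edges, nodes, e) for e in edges)
print(f"redundancy score: {surviving / len(edges):.0%}")  # 60%
```

In this toy case, three of five link cuts are survivable (60%); adding one more protection link in the south would bring the score to 100%, which is exactly the kind of marginal-investment question the real analysis has to answer at submarine-cable prices.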

Those investments would serve the interests of Greenlandic society as well as those of Denmark and NATO in terms of boosting the defense and security of Greenland, which is also consistent with all the relevant parties’ expressed intent of securitization of Greenland. The capital investments required to further leapfrog the safety, availability, and reliability of the critical communications infrastructure, above and beyond the current plans, would be far higher than previous capital spending levels by Tusass (and Greenland) and unlikely to be economically viable using conventional business financial metrics (e.g., net present value NPV > 0 and internal rate of return IRR > a given hurdle rate). The investment needs to be seen as geopolitically relevant for the security & safety of Greenland and, with a strong focus on dual-use technologies, as beneficial to Greenlandic society.

Even with unlimited funding and financing to enhance Greenland’s safety and security, the challenging weather conditions and limited availability of skilled resources mean that it will take considerable time to complete such an extensive program successfully. Designing, planning, and building a solid defense and security architecture meaningful to Greenlandic conditions will take time. I am, however, also convinced that pieces of the puzzle are already operational today that will be important for any future work.

Figure 20 An aerial view of one of Tusass’s west coast sites supporting coastal radio as well as hosting one of the many long-haul microwave sites along the west coast of Greenland. Courtesy: Tusass A/S (Greenland).

RECOMMENDATIONS.

A multifaceted approach is essential to ensure that Greenland’s strategic and infrastructure development aligns with its unique geographical and geopolitical context.

Firstly, Greenland should prioritize the development of dual-use critical infrastructure, and the supporting architectures, that can serve both civilian and defense (& military) purposes. Examples include expanding and upgrading airport facilities (e.g., as is happening with the new airport in Nuuk), enhancing broadband internet access (e.g., as Tusass is doing by adding more submarine cables and satellite coverage), and developing advanced integrated communication platforms like satellite-based and unmanned aerial systems (UAS), such as payload-agnostic stratospheric high-altitude platforms (HAPs). Such dual-use infrastructure platforms could bolster national security. Moreover, they could support economic activities, improve community connectivity, and enhance the quality of life for Greenland’s residents irrespective of where they live. There is little doubt that securing funding from international allies (e.g., the European Union, NATO, …) and through public-private partnerships will be crucial in financing these projects, while ensuring that civil and defense needs are met efficiently and with the right balance.

Additionally, it is important to invest in critical enablers like advanced monitoring and surveillance technologies for security & safety. Greenland should in particular focus on satellite monitoring, Distributed Acoustic Sensing (DAS) on its submarine cables, and unmanned vehicles for underwater and aerial applications (e.g., UUVs & UAVs). Such systems will enable more comprehensive monitoring of activities around and above Greenland, allowing Greenland to secure its maritime routes and protect its natural resources (among other things). Enhanced surveillance capabilities will also provide multi-dimensional real-time data for national security, environmental monitoring, and disaster response scenarios. Collaboration with NATO and other international partners should focus on sharing technology know-how, expertise in general, and intelligence, ensuring that Greenland’s surveillance capabilities are on par with global standards.

Tusass’s transport network connecting (almost) all of Greenland’s settlements is an essential and critical asset for Greenland. It should be the backbone for any dual-use enhancement serving civil as well as defense scenarios. Adding additional submarine cables and more satellite connections are important (ongoing) parts of those enhancements and will substantially increase the network’s availability, resilience, and hardening against disruptions of both natural and man-made kinds. However, increasing the communications network’s ability to fully, or even partly, function when parts of it are cut off from the few main switching centers is something that could be considered. With today’s technologies, this might also be affordable and would fit well with Tusass’s multi-dimensional connectivity strategy using terrestrial means (e.g., microwave connections), submarine cables, and satellites.

Last but not least, considering Greenland’s limited human resources, the technologies and advanced platforms implemented must have a large degree of autonomy and self-reliance. This will likely only be achieved with solid partnerships and strong alliances with Denmark and other natural allies, including the Nordic countries in and near the Arctic Circle (e.g., Iceland, the Faroe Islands, Norway, Sweden, and Finland) as well as the USA and Canada. In particular, Norway has recent experience with the dual use of ad-hoc and private 5G networking for defense applications. Joint operation of UUVs and UAVs integrated with DAS and satellite constellations could be established within the Arctic Circle. Developing and implementing advanced AI-based technologies should be a priority. Such collaborations could also make these advanced technologies much more affordable than if they served only one country. These technologies can compensate for the sparse population and vast geographical challenges that Greenland and the larger Arctic Circle pose, providing efficient and effective solutions for infrastructure management, surveillance, and economic development. Achieving a very high degree of autonomous operation of the multi-dimensional technology landscape required for leapfrogging the security of Greenland, Greenlandic society, and its critical infrastructure would be essential for Greenland to be self-reliant and less dependent on substantial external resources that may be difficult to guarantee in times of crisis.

By focusing on these recommendations, Greenland can enhance its strategic importance, improve its critical infrastructure resilience, and ensure sustainable economic growth while maintaining its unique environmental heritage.

Being a field technician in Greenland poses occupational hazards that are unknown in most other places. Apart from the harsh weather and the remoteness of many infrastructure locations, field engineers have on many occasions encountered hungry polar bears in the field. The polar bear is a very dangerous predator, always on the lookout for its next protein-rich meal.

FURTHER READING.

  1. Tusass Annual Reports 2023 (more reports can be found here).
  2. Naalakkersuisut / Government of Greenland Ministry for Statehood and Foreign Affairs, “Greenland in the World — Nothing about us without us: Greenland’s Foreign, Security, and Defense Policy 2024-2033 – an Arctic Strategy.” (February 2024). The Danish title of this Document (also published in Greenlandic as the first language): “Grønland i Verden — Intet om os, uden os: Grønlands udenrigs-, sikkerheds- og forsvarspolitiske strategi for 2024-2033 — en Arktisk Strategi”.
  3. Martin Brum, “Greenland’s first security strategy looks west as the Arctic heats up.” Arctic Business Journal (February 2024).
  4. Marc Jacobsen, Ole Wæver, and Ulrik Pram Gad, “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze.” (2024), University of Michigan Press. See also the video associated with the book launch. It’s not the best quality (sound/video), but if you just listen and follow the slides offline, it is actually really interesting.
  5. Michael Paul and Göran Swistek, “Russia in the Arctic: Development Plans, Military Potential, and Conflict Prevention,” SWP (Stiftung Wissenschaft und Politik) Research Paper, (February 2022). Some great maps are provided that clearly visualize the Arctic – Russia relationships.
  6. Marc Lanteigne, “The Rise (and Fall?) of the Polar Silk Road.” The Diplomat, (August 2022).
  7. Trym Eiterjord, “What the 14th Five-Year Plan says about China’s Arctic Interests”, The Arctic Institute, (November 2023). The link also includes references to several other articles related to the China-Arctic relationship from the Arctic Institute China Series 2023.
  8. Barry Buzan, Ole Wæver, and Jaap de Wilde, “Security: A New Framework for Analysis”, (1998), Lynne Rienner Publishers Inc.
  9. Kim Kyllesbech Larsen, The Next Frontier: LEO Satellites for Internet Services. | techneconomyblog, (March 2024).
  10. Kim Kyllesbech Larsen, Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies? | techneconomyblog, (January 2024).
  11. Deo, Narsingh. “Graph Theory with Applications to Engineering and Computer Science,” Dover Publications. This book is a reasonably accessible starting point for learning more about graphs. If this is new to you, I recommend starting with the Geeks for Geeks “Introduction to Graph Data Structure” (April 2024), which provides a quick intro to the world of graphs.
  12. Mike Dano, “Pentagon puts 5G at center of US military’s communications future”, Light Reading (December 2020).
  13. Juan Pedro Tomas, “Telia to develop private 5G for Norway’s Armed Forces”, RCR Wireless (June 2022).
  14. Iain Morris, “Telia is building 5G cell towers for the battlefield”, Light Reading (June 2023).
  15. Saleem Khawaja, “How military uses of the IoT for defense applications are expanding”, Army Technology (March 2023).
  16. Mary Lee, James Dimarogonas, Edward Geist, Shane Manuel, Ryan A. Schwankhart, Bryce Downing, “Opportunities and Risks of 5G Military Use in Europe”, RAND (March 2023).
  17. Mike Dano, “NATO soldiers test new 5G tech“, Light Reading (October 2023).
  18. NATO publication, “5G Technology: Nokia Meets with NATO Allied Command Transformation to Discuss Military Applications”, (May 2024).
  19. Michael Hill, “NATO tests AI’s ability to protect critical infrastructure against cyberattacks” (January 2023).
  20. Forsvarsministeriet, Danmark, “Dansk forsvar og sikkerhed 2024-2033.” (June 2023): Danish Defense & Security Agreement (Part I).
  21. Forsvarsministeriet, Denmark, “Anden delaftale under forsvarsforliget 2024-2033“, (April 2024): Danish Defense & Security Agreement (Part II).
  22. The State Council Information Office of the People’s Republic of China, “China’s Arctic Policy”, (January 2018).

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am incredibly thankful to Tusass for providing many of the great pictures used in this post, which illustrate the (good weather!) conditions that Tusass field technicians face while working tirelessly on the critical communications infrastructure throughout Greenland. While the pictures shown in this post are beautiful and breathtaking, the weather is unforgiving, frequently stranding field workers for days at some of those remote site locations. Add to this picture the additional danger of a hungry polar bear that will go to great lengths to get its weekly protein intake.

A Single Network Future.

How to think about a single network future? What does it entail, and what is it good for?

Well, imagine a world where your mobile device, unchanged and unmodified, connects to the nearest cell tower and to satellites orbiting Earth, ensuring customers are always best connected and get the best service, irrespective of where they are. Satellite-based supplementary coverage (from space) seeks to deliver on this vision by leveraging superior coverage economics, a larger footprint than is feasible with terrestrial networks and better latency than geostationary satellite solutions, to bring connectivity directly to unmodified consumer handsets (e.g., smartphones, tablets, IoT devices), enhance emergency communication, and foster advancements in space-based technologies. The single network future does not only require certain technological developments, such as the 3GPP Non-Terrestrial Network standardization efforts (e.g., Release 17 and forward). We also need regulatory spectrum policy to change, allowing today’s terrestrially and regulatorily bounded cellular frequency spectrum to be re-used by satellite operators, providing the same mobile service under satellite coverage in areas without terrestrial communications infrastructure as mobile customers enjoy within the normal terrestrial cellular network.

It is estimated that roughly 2.9 billion people, a bit under 40% of the world’s population, have never used the internet (as of 2023). That roughly 60% of the world’s population have access to the internet while 40% do not is the digital divide: a massive gap most pronounced in developing countries, rural and remote areas, and among older populations and economically disadvantaged groups. Most of the 2.9 billion on the wrong side of the divide live in areas lacking the terrestrial technology infrastructure that would readily facilitate access to the internet. The communications infrastructure is missing because it may be impractical or uneconomical to deploy, including the difficulty of monetizing it and yielding a positive return on investment over a relatively short period. Satellites that are allowed by regulatory means to re-use terrestrial cellular spectrum for supplementary (to terrestrial) coverage can largely solve the digital-divide challenge, as long as affordable mobile devices and services are available to the unconnected.

This blog explores some of the details of the, in my opinion, forward-thinking FCC Supplementary Coverage from Space (SCS) framework and its vision of a Single Network in which mobile cellular communication is not limited to terra firma but is supplemented and enhanced by satellites, ensuring connectivity everywhere.

SUPPLEMENTARY COVERAGE FROM SPACE.

The Federal Communications Commission (FCC) recently published a new regulatory framework (“Report & Order and Further Notice of Proposed Rulemaking”) designed to facilitate the integration of satellite and terrestrial networks to provide Supplemental Coverage from Space (SCS), marking a significant development toward achieving ubiquitous connectivity. In the following, I will use the terms “SCS framework” and “SCS initiative” to refer to the FCC’s regulatory framework. The SCS initiative, which, to my knowledge, is the first of its kind globally, aims to allow satellite operators and terrestrial service providers to collaborate, leveraging spectrum previously allocated exclusively for terrestrial services to extend connectivity directly to consumer handsets, so-called satellite direct-to-device (D2D), especially in remote, unserved, and underserved areas. The proposal is expected to enhance emergency communication availability, foster advancements in space-based technologies, and promote the innovative and efficient use of spectrum resources.

The “Report and Order” formalizes a spectrum-use framework, adopting a secondary mobile-satellite service (MSS) allocation in specific frequency bands devoid of primary non-flexible-use legacy incumbents, both federal and non-federal. Let us break this down in a bit more informal language. The FCC proposes to designate certain parts of the radio frequency spectrum (see below) for mobile-satellite services on a “secondary” basis. In spectrum management, an allocation is deemed “secondary” when it allows for the operation of a service without causing interference to the “primary” services in the same band. This means that the supplementary satellite service, deemed secondary, must accept interference from primary services without claiming protection. Moreover, this only applies to locations that are devoid of existing “primary” spectrum users (i.e., incumbents) in a given frequency band, covering both non-federal and federal primary uses.

The setup encourages collaboration and permits supplemental coverage from space (SCS) in designated bands where terrestrial licensees, holding all licenses for a channel throughout a geographically independent area (GIA), lease access to their terrestrial spectrum rights to a satellite operator. Furthermore, the framework establishes entry criteria for satellite operators to apply for or modify an existing “part 25” space station license for SCS operations; that is, the regulatory requirements established by the FCC governing the licensing and operation of satellite communications in the United States. The framework also outlines a licensing-by-rule approach for terrestrial devices acting as SCS earth stations, referring to a regulatory and technological framework where conventional consumer devices, such as smartphones or tablets, are equipped to communicate directly with satellites (after all, we are talking about direct-to-device).

The above picture showcases a moment in the remote Arizona desert where an individual receives a signal from a Low-Earth Orbit (LEO) satellite directly to his or her smartphone. The remote area has no terrestrial cellular coverage, and supplementary coverage from space is the only way for individuals with a subscription to access their cellular services or make a distress call, apart from using a costly satellite phone service. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and possibly limited available SCS spectrum bandwidth.

Additionally, the Further Notice of Proposed Rulemaking seeks further commentary on aspects such as 911 service provision and the protection of radio astronomy, indicating the FCC’s consistent commitment to refining and expanding the SCS framework responsibly. This commitment ensures that the framework will continue to evolve, adapting to new challenges and opportunities and providing a solid foundation for future developments.

BALANCING THE AIRWAVES IN THE USA.

Two agencies manage the frequency spectrum in the US: the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA). They collaboratively manage and coordinate frequency spectrum use and re-use for satellites, among other applications, within the United States. This partnership is important for maintaining a balanced approach to spectrum management that supports federal and non-federal needs, ensuring that satellite communications and other services can operate effectively without causing harmful interference to each other.

The Federal Communications Commission, the FCC for short, is an independent agency that exclusively regulates all non-Federal spectrum use across the United States. The FCC allocates spectrum licenses for commercial use, typically through spectrum auctions. New or re-purposed commercial spectrum is typically reclaimed from other uses, both federal and existing commercial ones. Spectrum can be re-purposed either because newer, more spectrally efficient technologies become available (e.g., the transition from analog to digital broadcasting) or because it becomes viable to shift operation to other spectrum bands with less commercial value (and, of course, without jeopardizing existing operations). It is also possible for spectrum previously reserved for exclusive federal use (e.g., military applications, fixed satellite uses, etc.) to be shared, as is the case with the Citizens Broadband Radio Service (CBRS), which allows non-federal parties access to 150 MHz in the 3.5 GHz band (i.e., band 48). However, it has recently been concluded that (centralized) dynamic spectrum sharing only works in certain use cases and is associated with considerable implementation complexities. Having multiple parties with possibly vastly different requirements co-exist within a given band is a work in progress and may not be consistent with the commercial spectrum operation required for high-quality broadband cellular service.

Alongside the FCC, the National Telecommunications and Information Administration (NTIA) plays a crucial role in US spectrum management. The NTIA is the sole authority responsible for authorizing Federal spectrum use. It also serves as the principal adviser on telecommunications policies to the President of the United States, coordinating the views of the Executive Branch. The NTIA manages a significant portion of the spectrum, approximately 2,398 MHz (69%), within the range of 225 MHz to 3.7 GHz, known as the ‘beachfront spectrum’. Of the total 3,475 MHz, 591 MHz (17%) is exclusively for Federal use, and 1,807 MHz (52%) is shared or coordinated between Federal and non-Federal entities. This leaves 1,077 MHz (31%) for exclusive commercial use, which falls under the management of the FCC.
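The spectrum shares quoted above are easy to verify with a little arithmetic. A minimal sketch using only the figures from the text (225 MHz to 3.7 GHz spans the 3,475 MHz of “beachfront spectrum”):

```python
# Sanity check of the 'beachfront spectrum' shares quoted in the text.
total_mhz = 3700 - 225                      # 3,475 MHz between 225 MHz and 3.7 GHz
federal_exclusive = 591                     # MHz, exclusive Federal use
shared = 1807                               # MHz, shared/coordinated Federal and non-Federal
ntia_managed = federal_exclusive + shared   # spectrum the NTIA has a hand in
commercial_exclusive = total_mhz - ntia_managed

print(f"NTIA-managed: {ntia_managed} MHz ({ntia_managed / total_mhz:.0%})")
print(f"Exclusive Federal: {federal_exclusive} MHz ({federal_exclusive / total_mhz:.0%})")
print(f"Shared: {shared} MHz ({shared / total_mhz:.0%})")
print(f"Exclusive commercial: {commercial_exclusive} MHz ({commercial_exclusive / total_mhz:.0%})")
```

Running this reproduces the 2,398 MHz (69%), 591 MHz (17%), 1,807 MHz (52%), and 1,077 MHz (31%) figures above.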

NTIA, in collaboration with the FCC, has been instrumental in freeing up substantial C-band spectrum, 480 MHz in total, of which 100 MHz is conditioned on prioritized sharing (i.e., Auction 105), for commercial and shared use. This spectrum has subsequently been auctioned off over the last three years, raising USD 109 billion. In US Dollars (USD) per MHz per population count (pop), this comes to, on average, ca. USD 0.68 per MHz-pop for the US C-band auctions, compared to USD 0.13 per MHz-pop in European C-band auctions and USD 0.23 per MHz-pop in APAC auctions. It should be remembered that United States exclusive-use spectrum licenses can be regarded as an indefinite-lived intangible asset, while European spectrum rights expire after between 10 and 20 years. This may explain a big part of the difference between US spectrum pricing and that of Europe and Asia.
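The per-MHz-pop figure follows directly from the auction proceeds and bandwidth quoted above. A quick sketch; note that the US population of roughly 331 million is my own assumption (the text does not state which population count was used, which explains the small rounding difference from the quoted USD 0.68):

```python
# Rough reproduction of the USD-per-MHz-pop figure for the US C-band auctions.
proceeds_usd = 109e9     # total raised across the US C-band auctions (from the text)
bandwidth_mhz = 480      # C-band spectrum freed up and auctioned (from the text)
us_population = 331e6    # assumed US population count (pop)

price_per_mhz_pop = proceeds_usd / (bandwidth_mhz * us_population)
print(f"~USD {price_per_mhz_pop:.2f} per MHz-pop")
```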

The FCC and the NTIA jointly manage all the radio spectrum in the United States, licensed (e.g., cellular mobile frequencies, TV signals) and unlicensed (e.g., WiFi, microwave ovens). The NTIA oversees spectrum use for Federal purposes, while the FCC is responsible for non-Federal use. In addition to its role in auctioning spectrum licenses, the FCC is also authorized to redistribute licenses. This authority allows the FCC to play a vital role in ensuring efficient spectrum use and adapting to changing needs.

THE SINGLE NETWORK.

The Supplementary Coverage from Space (SCS) framework creates an enabling regulatory environment for satellite operators to provide mobile broadband services to unmodified mobile devices (i.e., D2D services), such as smartphones and other terrestrial cellular devices, in rural and remote areas where no or only scarce terrestrial infrastructure exists. By leveraging SCS, terrestrial cellular broadband services will be enhanced, and the combination may result in a unified network that ensures continuous and ubiquitous access to communication services, overcoming geographical and environmental challenges. This is the inception of the Single Network: a network that can provide seamless connectivity across diverse environments, including remote, unserved, and underserved areas.

The above picture illustrates the idea behind the FCC’s SCS framework and “Single Network” on a high level. In this example, an LEO satellite provides direct-to-device (D2D) supplementary coverage in rural and remote areas, using an advanced phase-array antenna, to unmodified user equipment (e.g., smartphone, tablet, cellular-IoT, …) in the same frequency band (i.e., f1,sat) owned and used by a terrestrial operator operating a cellular network (f1). The LEO satellite operator must partner with the terrestrial spectrum owner to manage and coordinate the frequency re-use in areas where the frequency owner (i.e., mobile/cellular operator) does not have the terrestrial-based infrastructure to deliver a service to its customers (i.e., typically remote, rural areas where terrestrial infrastructure is impractical and uneconomic to deploy). The satellite operator has to avoid geographical regions where the frequency (e.g., f1) is used by the spectrum owner, typically in urban, suburban, and rural areas (where terrestrial cellular infrastructure has already been deployed and service offered).

How does the FCC’s “Single Network” differ from the 3GPP Non-Terrestrial Network (NTN) standardization? Simply put, the “Single Network” is a regulatory framework that paves the way for satellite operators to re-use terrestrial cellular spectrum on their non-terrestrial (satellite-based) networks. The 3GPP NTN standardization initiatives, e.g., Releases 16, 17, and 18+, are a technical effort to incorporate satellite communication systems within the 5G network architecture. In short, the following 3GPP releases describe how NTN should function with terrestrial 5G networks:

  • Release 15 laid the groundwork for 5G New Radio (NR) and started to consider the broader picture of integrating non-terrestrial networks with terrestrial 5G networks. It marks the beginning of discussions on how to accommodate NTNs within the 5G framework, focusing on study items rather than specific NTN standards.
  • Release 16 took significant steps toward defining NTN by including study items and work items specifically aimed at understanding and specifying the adjustments needed for NR to support communication with devices served by NTNs. Release 16 focuses on identifying modifications to the NR protocol and architecture to accommodate the unique characteristics of satellite communication, such as higher latency and different mobility characteristics compared to terrestrial networks.
  • Release 17 brought further advancements in NTN specifications, aiming to integrate specific technical solutions and standards for NTNs within the 5G architecture. This effort includes detailed specifications for supporting direct connectivity between 5G devices and satellites, covering aspects like signal timing, frequency bands, and protocol adaptations to handle the distinct challenges posed by satellite communication, such as the Doppler effect and signal delay.
  • Release 18 and beyond will continue to evolve the standards to enhance NTN support, addressing emerging requirements and incorporating feedback from early implementations. These efforts include refining and expanding NTN capabilities to support a broader range of applications and services, improving integration with terrestrial networks, and enhancing performance and reliability.

The NTN architecture ensures (should ensure) that satellite communications systems can seamlessly integrate into 5G networks, supporting direct communication between satellites and standard mobile devices. This integration idea includes adapting 5G protocols and technologies to accommodate the unique characteristics of satellite communication, such as higher latency and different signal propagation conditions. The NTN standardization aims to expand the reach of 5G services to global scales, including maritime, aerial, and sparsely populated land areas, thereby aligning with the broader goal of universal service coverage.

The FCC’s vision of a “single network” and the 3GPP NTN standardization both aim to integrate satellite and terrestrial networks to extend connectivity, albeit from slightly different angles. The FCC’s concept provides a regulatory and policy framework to enable such integration across different network types and service providers, focusing on the broad goal of universal connectivity, and lays the foundation for satellite and terrestrial cellular network services to be delivered to the same unmodified device portfolio. In contrast, 3GPP’s NTN standardization provides the technical specifications and protocols required to realize that vision in practice, particularly within next-generation (5G) networks. Together, they are highly synergistic, addressing the regulatory and technical challenges of creating a seamlessly connected world.

Depicting a moment in the Colorado mountains, a hiker receives supplementary coverage from a Low Earth Orbit (LEO) satellite directly to their (unmodified) smartphone. The remote area has no terrestrial cellular coverage. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and possibly limited available SCS spectrum bandwidth.

SINGLE NETWORK VS SATELLITE ATC

The FCC’s Single Network vision and the Supplemental Coverage from Space (SCS) concept, akin to the Satellite Ancillary Terrestrial Component (ATC) architectural concept (an area in which I spent a significant portion of my career, operationalizing and then defending it … a different story, though), share a common goal of merging satellite and terrestrial networks to fortify connectivity. These strategies, driven by the desire to enhance the reach and reliability of communication services, particularly in underserved regions, hold the promise of expanded service coverage.

The Single Network and SCS initiatives broadly focus on comprehensively integrating satellite services with terrestrial infrastructures, aiming to directly connect satellite systems with standard consumer devices across various services and frequency bands. This expansive approach seeks to ensure ubiquitous connectivity, significantly closing the coverage gaps in current network deployments. Conversely, the Satellite ATC concept is more narrowly tailored, concentrating on using terrestrial base stations to complement and enhance satellite mobile services. This method explicitly addresses the need for improved signal availability and service reliability in urban or obstructed areas by integrating terrestrial components within the satellite network framework.

Although the Single Network and Satellite ATC share goals, the paths to achieving them diverge significantly in application, regulatory considerations, and technical execution. The SCS concept, for instance, involves navigating regulatory challenges associated with direct-to-device satellite communications, including the complexities of spectrum sharing and ensuring the harmonious coexistence of satellite and terrestrial services, underscoring the intricate regulatory and technical hurdles in this field.

The distinction between the two concepts lies in their technological and implementation specifics, regulatory backdrop, and focus areas. While both aim to weave together the strengths of satellite and terrestrial technologies, the Single Network and SCS framework envisions a more holistic integration of connectivity solutions, contrasting with the ATC’s targeted approach to augmenting satellite services with terrestrial network support. This illustrates the evolving landscape of communication networks, where the convergence of diverse technologies opens new avenues for achieving seamless and widespread connectivity.

THE RELATED SCS FREQUENCIES & SPECTRUM.

The FCC has designated the following frequency bands, and the total bandwidth associated with each, for Supplemental Coverage from Space (SCS):

  • 70 MHz @ 600 MHz Band
  • 96 MHz @ 700 MHz Band
  • 50 MHz @ 800 MHz Band
  • 130 MHz @ Broadband PCS
  • 10 MHz @ AWS-H Block

The above comprises a total frequency bandwidth of 356 MHz, currently used for terrestrial cellular services across the USA. According to the FCC, these frequency bands can also be used for satellite direct-to-device SCS services to normal mobile devices without built-in satellite transceiver functionality. Of course, this is subject to the spectrum owners’ approval and to contractual and commercial arrangements.
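The bandwidth figures in the list above can be tallied directly:

```python
# Quick tally of the SCS-designated bandwidth per band, as listed in the text.
scs_bands_mhz = {
    "600 MHz Band": 70,
    "700 MHz Band": 96,
    "800 MHz Band": 50,
    "Broadband PCS": 130,
    "AWS-H Block": 10,
}

total_mhz = sum(scs_bands_mhz.values())
print(f"Total SCS-designated bandwidth: {total_mhz} MHz")  # 356 MHz
```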

Moreover, the 758-769/788-799 MHz band, licensed to the First Responder Network Authority (FirstNet), is also eligible for SCS under the established framework. This frequency band has been selected to enhance connectivity in remote, unserved, and underserved areas by facilitating collaborations between satellite and terrestrial networks within these specific frequency ranges.

SpaceX recently reported a peak download speed of 17 Mb/s from a satellite directly to an unmodified Samsung Android phone using 2×5 MHz of T-Mobile USA’s PCS spectrum (i.e., the G-block). The speed corresponds to a downlink spectral efficiency of ~3.4 Mbps/MHz/beam, which is pretty impressive. Using this as rough guidance for the ~350 MHz, we should expect an approximate download speed of ca. 600 Mbps (@ 175 MHz downlink) per satellite beam. As satellite antenna technology improves, we should expect spectral efficiency to increase as well, resulting in increasing downlink throughput.
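The extrapolation above can be sketched in a few lines. Note that the 50/50 split of the ~350 MHz into downlink and uplink halves is an assumption made for illustration, mirroring the 2×5 MHz paired-spectrum arrangement of the test:

```python
# Derive spectral efficiency from SpaceX's reported direct-to-device test and
# extrapolate to the full SCS-designated spectrum (rough, illustrative only).
reported_peak_mbps = 17.0   # SpaceX test result (from the text)
downlink_mhz = 5.0          # downlink half of the 2x5 MHz PCS G-block

spectral_eff = reported_peak_mbps / downlink_mhz       # ~3.4 Mbps/MHz/beam

scs_total_mhz = 350.0
scs_downlink_mhz = scs_total_mhz / 2                   # assumed 50/50 downlink/uplink split
est_beam_throughput = spectral_eff * scs_downlink_mhz
print(f"Spectral efficiency: {spectral_eff:.1f} Mbps/MHz/beam")
print(f"Estimated throughput: ~{est_beam_throughput:.0f} Mbps per satellite beam")
```

At ~595 Mbps per beam, this lands at the “ca. 600 Mbps” figure quoted above.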

SCS INFANCY, BUT ALIVE AND KICKING.

In the FCC’s framework on the Supplemental Coverage from Space (SCS), the partnership between SpaceX and T-Mobile is described as a collaborative effort where SpaceX would utilize a block of T-Mobile’s mid-band Personal Communications Services (PCS G-Block) spectrum across a nationwide footprint. This initiative aims to provide service to T-Mobile’s subscribers in rural and remote locations, thereby addressing coverage gaps in T-Mobile’s terrestrial network. The FCC has facilitated this collaboration by allowing SpaceX and T-Mobile to deploy and test their proposed SCS system while their pending applications and the FCC’s proceedings continue.

Specifically, SpaceX has been authorized (by FCC’s Space Bureau) to deploy a modified version of its second-generation (2nd generation) Starlink satellites with SCS-capable antennas that can operate in specific frequencies. FCC authorized experimental testing on terrestrial locations for SpaceX and T-Mobile to progress with their SCS system, although SpaceX’s requests for broader authority remain under consideration by the FCC.

Lynk Global has partnered with mobile network operators (MNOs) outside the United States to allow the MNOs’ customers to send texts using Lynk’s satellite network. In 2022, the FCC authorized Lynk’s request to operate a non-geostationary satellite orbit (NGSO) satellite system (e.g., Low-Earth Orbit, Medium Earth Orbit, or Highly-Elliptical Orbit) intended for text message communications in locations outside the United States and in countries where Lynk has obtained agreements with MNOs and the required local regulatory approval. Lynk aims to deploy ten mobile-satellite service (MSS) satellites as part of a “cellular-based satellite communications network” operating on cellular frequencies globally in the 617-960 MHz band (i.e., within the UHF band), targeting international markets only.

Lynk has announced contracts with more than 30 MNOs (the full list is not published) covering over 50 countries for Lynk’s “satellite-direct-to-standard-mobile-phone system,” which provides emergency alerts and two-way Short Message Service (SMS) messaging. Lynk has three LEO satellites in orbit as of March 2023, with 50 additional satellites planned for the end of 2024, and plans to eventually expand the constellation to up to 5,000 satellites, substantially broadening its geographic coverage and service capabilities. Lynk recently claimed to have achieved, in Hawaii, repeated successful downlink speeds above 10 Mbps with several mass-market unmodified smartphones (10+ Mbps indicates a spectral efficiency of 2+ Mbps/MHz/beam). Lynk Mobile has also recently (July 2023) demonstrated, as a proof of concept, phone calls via its LEO satellite between two unmodified smartphones (see the YouTube link).

AST SpaceMobile is also notable for its partnerships with several MNOs, including AT&T and Vodafone, to develop its direct-to-device, or satellite-to-smartphone, service. Overall, AST SpaceMobile has announced that it has entered into “more than 40 agreements and understandings with mobile network operators globally” (e.g., AT&T, Vodafone, Rakuten, Orange, Telefonica, TIM, MTN, Ooredoo, …). In 2020, AST filed applications with the FCC seeking U.S. market access for gateway links in the V-band for its SpaceMobile satellite system, which is planned to consist of 243 LEO satellites. AST clarified that its operation in the United States would collaborate with terrestrial licensee partners without seeking to operate independently on terrestrial frequencies.

AST SpaceMobile’s BlueWalker 3 (BW3) LEO satellite with its 64-square-meter phased array. Source: AST SpaceMobile.

AST SpaceMobile’s satellite antenna design marks a pioneering step in satellite communications: AST recently deployed the largest commercial phased-array antenna into Low Earth Orbit (LEO). On September 10, 2022, AST SpaceMobile launched its prototype direct-to-device testbed satellite, BlueWalker 3 (BW3). This mission marked a significant step forward in the company’s efforts to test and validate its technology for providing direct-to-cellphone communication via an LEO satellite network. The launch of BW3 aimed to demonstrate the capabilities of its large phased-array antenna, a critical component of AST’s targeted global broadband service.

The BW3’s phased-array antenna, with a surface area of 64 square meters, is technologically quite advanced (actually, I find it very beautiful and can’t wait to see the real thing for their commercial constellation) and designed for dynamic beamforming, as one would expect for a state-of-the-art direct-to-device satellite. The BlueWalker 3, a proof-of-concept design, supports a frequency range of 100 MHz in the UHF band, with 5 MHz channels and an expected spectral efficiency of 3 Mbps/MHz/channel. This capability is crucial for establishing direct-to-device communications, as it allows the satellite to concentrate its signals on specific geographic areas or directly on mobile devices, enhancing the quality of coverage and minimizing potential interference with terrestrial networks. AST SpaceMobile is expected to launch the first 5 of its 243 LEO satellites, named BlueBirds, on SpaceX’s Falcon 9 in the 2nd quarter of 2024. The first 5 will be similar to the BW3 design, including the phased-array antenna. Subsequent AST satellites are expected to be larger, with a substantially up-scaled phased-array antenna supporting an even larger frequency span covering most of the UHF band and supporting 40 MHz channels with peak download speeds of 120 Mbps (using their estimated 3 Mbps/MHz/channel).
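The per-channel download speeds implied by AST’s figures can be reproduced with the spectral-efficiency estimate quoted above; a minimal sketch:

```python
# Per-channel download speeds implied by AST's estimated spectral efficiency.
spectral_eff = 3.0         # Mbps/MHz/channel, AST's estimate (from the text)

bw3_channel_mhz = 5        # BlueWalker 3 proof-of-concept channel width
bluebird_channel_mhz = 40  # up-scaled channel width on later satellites

print(f"BW3: ~{spectral_eff * bw3_channel_mhz:.0f} Mbps per channel")
print(f"Later satellites: ~{spectral_eff * bluebird_channel_mhz:.0f} Mbps per channel")
```

The 40 MHz channel case reproduces the 120 Mbps peak download speed mentioned above.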

These examples underscore the ongoing efforts and potential of satellite service providers like Starlink/SpaceX, Lynk Global, and AST SpaceMobile within the evolving SCS framework. They highlight the collaborative approach between satellite operators and terrestrial service providers to achieve ubiquitous connectivity directly to unmodified cellular consumer handsets.

PRACTICAL PREREQUISITES.

In general, the satellite operator would need a terrestrial frequency license owner willing to lease out its spectrum for services in areas where that spectrum has not been deployed on its network infrastructure or where the license holder has no infrastructure deployed. And, of course, a terrestrial communication service provider owning spectrum and interested in extending services to remote areas would need a satellite operator to provide direct-to-device services to its customers. Eventually, terrestrial operators might see an economic benefit in decommissioning uneconomical rural terrestrial infrastructure and providing satellite broadband cellular services instead. This may be particularly interesting in low-density rural and remote areas supported today by a terrestrial communications infrastructure.

Under the SCS framework, terrestrial spectrum owners can make leasing arrangements with satellite operators. These agreements would allow satellite services to utilize the terrestrial cellular spectrum for direct satellite communication with devices, effectively filling coverage gaps with satellite signals. This kind of arrangement could be similar to the one between T-Mobile USA and StarLink to offer cellular services in the absence of T-Mobile cellular infrastructure, e.g., mainly remote and rural areas.

As the regulatory body for non-federal frequencies, the FCC delineates a regulatory environment that specifies the conditions under which the spectrum can be shared or used by terrestrial and satellite services, minimizing the risk of harmful interference (which both parties should be interested in anyway). This includes setting technical standards and identifying suitable frequency bands supporting dual use. The overarching goal is to bolster the reach and reliability of cellular networks in remote areas, enhancing service availability.

For terrestrial cellular networks and spectrum owners, this means adhering to FCC regulations that govern these new leasing arrangements and the technical criteria designed to protect incumbent services from interference. The process involves meticulous planning and, if necessary, implementing measures to mitigate interference, ensuring that the integration of satellite and terrestrial networks proceeds smoothly.

Moreover, the SCS framework should spur innovation and allow network operators to broaden their service offerings into areas where they are not present today. This could include new applications, from emergency communications facilitated by satellite connectivity to IoT deployments and broadband access in underserved locations.

Picture a moment somewhere in the Arctic (e.g., Greenland): an eco-tourist receives supplementary coverage from a Low Earth Orbit (LEO) satellite directly on their (unmodified) smartphone. The remote area has no terrestrial cellular coverage. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and the possibly limited SCS spectrum bandwidth available. Several regulatory, business, and operational details must be in place for the above service to work.

TECHNICAL PREREQUISITES FOR DELIVERING SATELLITE SCS SERVICES.

Satellite constellations providing D2D services are naturally targeting supplementary coverage of geographical areas where no terrestrial cellular services are present at the target frequency bands used by the satellite operator.

Once the satellite operator has gained access to terrestrial cellular spectrum for its supplementary-coverage direct-to-device service, a range of satellite technical requirements must either already be in place for an existing constellation (though that would require some degree of foresight) or a new satellite would need to be designed consistent with the frequency band and range, the targeted radio access technology such as LTE or 5G (assuming the ambition eventually goes beyond messaging), and the device portfolio that the service aims to support (e.g., smartphones, tablets, IoT devices, …). In general, I would assume that existing satellite constellations do not automatically support SCS services they were not designed for upfront. Designing for SCS from the start would make economic sense if a spectrum arrangement already exists between the satellite operator and the terrestrial cellular spectrum owner.

Direct-to-device LEO satellites connect directly to unmodified mobile devices such as smartphones, tablets, or other personal devices. This necessitates a design that can accommodate the low-power signals and small antennas typically found on consumer devices. Therefore, these satellites often incorporate advanced beamforming capabilities through phased array antennas to focus signals precisely on specific geographic locations, enhancing signal strength and reliability for individual users. Moreover, the transceiver electronics must be highly sensitive and capable of handling simultaneous connections, each potentially requiring different levels of service quality. As the satellite provides services over remote and sparsely populated areas, at least initially, there is no need for high-capacity designs of the kind that would require terrestrial-cellular-like coverage areas and large frequency bandwidths. The satellites are designed to operate in frequency bands compatible with terrestrial consumer devices, necessitating coordination and compliance with regulatory standards beyond what traditional satellite services require.

Implementing satellite-based SCS successfully hinges on complying with many fairly sophisticated technical requirements, such as phased array antenna design and transceiver electronics, enabling direct communication with consumer devices on the ground. The phased array antenna, a cornerstone of this architecture, must possess advanced beamforming capabilities, allowing it to dynamically focus and steer its signal beams towards specific geographic areas or even moving targets on the Earth’s surface. This flexibility is essential for maximizing the coverage and quality of the communication link with individual devices, which might be spread across diverse and often challenging terrains. The antenna design needs to be wideband and highly efficient to handle the broad spectrum of frequencies designated for SCS operations, ensuring compatibility with the communication standards used by consumer devices (e.g., 4G LTE, 5G).

An illustration of a LEO satellite with a phased array antenna providing direct-to-smartphone connectivity at an 850 MHz carrier frequency. For all practical purposes, the antenna beamforming at LEO altitude can be considered far-field. Thus, the electromagnetic fields behave as planar waves, and the antenna array becomes more straightforward to design and its performance easier to manage (e.g., beam steering at very high accuracy).

Designing phased array antennas for satellite-based direct-to-device services, envisioned by the SCS framework, requires considering various technical design parameters to ensure the system’s optimal performance and efficiency. These antennas are crucial for effective direct-to-device communication, encompassing multiple technical and practical considerations.

The SCS frequency band not only determines the operational range of the antenna but also its ability to communicate effectively with ground-based devices through the Earth’s atmosphere; in this respect, lower frequencies are better than higher frequencies. The frequency, or frequencies, significantly influences the overall design of the antenna, affecting everything from its physical dimensions to the materials used in its construction. The spacing and configuration of the antenna elements are carefully planned to prevent interference while maximizing coverage and connectivity efficiency. Typically, element spacing is kept around half the operating wavelength, and the configuration involves choosing linear, planar, or circular arrays.
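The half-wavelength rule of thumb pins down the element spacing once the carrier frequency is chosen. A minimal sketch, using the 850 MHz carrier from the illustration above (the frequency is illustrative, not a specific operator's band plan):

```python
# Half-wavelength element spacing for a phased array: the standard rule
# of thumb to avoid grating lobes. 850 MHz matches the article's
# illustration and is used here purely as an example.

C = 299_792_458.0  # speed of light, m/s

def element_spacing_m(freq_hz: float) -> float:
    """Approximate element spacing: half the operating wavelength."""
    wavelength = C / freq_hz
    return wavelength / 2

spacing = element_spacing_m(850e6)
print(f"{spacing * 100:.1f} cm")  # roughly 17.6 cm at 850 MHz
```

Lower frequencies (longer wavelengths) therefore force wider element spacing, which is one reason UHF-band direct-to-device antennas, like BlueWalker 3's, end up physically large.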

Beamforming capabilities are at the heart of the phased array design, allowing for the precise direction of communication beams toward targeted areas on the ground. This necessitates advanced signal processing to dynamically adjust signal phases and amplitudes, enabling the system to focus its beams, compensate for the satellite’s movement, and handle numerous connections.
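The phase adjustment for a simple linear array can be made concrete: steering the main beam off boresight requires a fixed phase step between adjacent elements. A sketch under textbook assumptions (λ/2 spacing, the illustrative 850 MHz carrier from above):

```python
import math

# Per-element phase gradient needed to steer a linear phased array's
# main beam off boresight: delta_phi = 2*pi*d*sin(theta)/lambda.
# Standard array theory; the frequency and angle are illustrative.

C = 299_792_458.0  # speed of light, m/s

def steering_phase_deg(freq_hz: float, spacing_m: float, steer_deg: float) -> float:
    """Phase difference (degrees) between adjacent elements to steer
    the main beam by steer_deg from boresight."""
    wavelength = C / freq_hz
    phase_rad = 2 * math.pi * spacing_m / wavelength * math.sin(math.radians(steer_deg))
    return math.degrees(phase_rad)

wavelength = C / 850e6
# With lambda/2 spacing, a 30 degree steer needs a 90 degree phase step
print(steering_phase_deg(850e6, wavelength / 2, 30.0))
```

In the satellite, this phase gradient must be recomputed continuously as the spacecraft sweeps over the coverage area, which is what "compensating for the satellite's movement" amounts to in practice.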

The antenna’s polarization strategy is chosen to enhance signal reception and minimize interference. Dual (e.g., horizontal & vertical) or circular (e.g., right- or left-hand) polarization ensures compatibility with a wide range of devices as well as more efficient spectrum use. Polarization refers to the orientation of the electromagnetic waves transmitted or received by an antenna. In satellite communications, polarization is used to differentiate between signals and increase the capacity of the communication link without requiring additional frequency bandwidth.

Physical constraints of size, weight, and form factor are also critical, dictated by the satellite’s design and launch parameters, including the launch cost. The antenna must be compact and lightweight to fit within the satellite’s structure and comply with launch weight limitations, impacting the satellite’s overall design and deployment mechanisms.

Beyond the antenna, the transceiver electronics within the satellite play an important role. These must be capable of handling high-throughput data to accommodate simultaneous connections, each demanding reliable and quality service. Sensitivity is another critical factor, as the electronics need to detect and process the relatively weak signals sent by consumer-grade devices, which possess much less power than traditional ground stations. Moreover, given the energy constraints inherent in satellite platforms, these transceiver systems must efficiently manage the power to maintain optimal operation over long durations as it directly relates to the satellite’s life span.

Operational success also depends on the satellite’s compliance with regulatory standards, particularly frequency use and signal interference. Achieving this requires a deep integration of technology and regulatory strategy, ensuring that the satellite’s operations do not disrupt existing services and align with global communication protocols.

CONCERNS.

The FCC’s Supplemental Coverage from Space (SCS) framework has been met with both anticipation and critique, reflecting diverse stakeholder interests and concerns. While the framework aims to enhance connectivity by integrating satellite and terrestrial networks, several critiques and concerns have been raised:

Interference concerns: A primary critique revolves around potential interference with existing terrestrial services. Stakeholders worry that SCS operations might disrupt the current users, including terrestrial mobile networks and other satellite services. A significant challenge is ensuring that SCS services coexist harmoniously with these incumbent services without causing harmful interference.

Certification of terrestrial mobile devices: The FCC requires that terrestrial mobile devices be certified for SCS. The concerns expressed have been multifaceted, reflecting the complexities of integrating satellite communication capabilities into standard consumer mobile devices. These concerns, as highlighted in the FCC’s SCS framework, revolve around technical, regulatory, and practical aspects. As 3GPP NTN standardization is considering changes to mobile devices that would enhance direct connectivity between device and satellite, it may, at least for devices developed for NTN communication, make sense to certify those.

Spectrum allocation and management: Spectrum allocation for SCS poses another concern, particularly the repurposing of spectrum bands previously dedicated to other uses. Critics argue that spectrum reallocation must be carefully managed to avoid disadvantaging existing services or limiting future innovation in those bands.

Regulatory and licensing framework: The complexity of the regulatory and licensing framework for SCS services has also been a point of contention. Critics suggest that the framework could be burdensome for new entrants or more minor players, potentially stifling innovation and competition in the satellite and telecommunications industries.

Technical and operational challenges: The technical requirements for SCS, including the need for advanced phased array antennas and the integration of satellite systems with terrestrial networks, pose significant challenges. Concerns about the feasibility and cost of developing and deploying the necessary technology at scale have been raised.

Market and economic impacts: There are concerns about the SCS framework’s economic implications, particularly its impact on existing market dynamics. Critics worry that the framework might favor certain players or technologies, potentially leading to market consolidation or barriers to entry for innovative solutions.

Environmental and space traffic management: The increased deployment of satellites for SCS services raises concerns about space debris and the sustainability of space activities. Critics emphasize the need for robust space traffic management and debris mitigation strategies to ensure the long-term viability of space operations.

Global coordination and equity: The global nature of satellite communications underscores the need for international coordination and equitable access to SCS services. Critics point out the importance of ensuring that the benefits of SCS extend to all regions, particularly those currently underserved by telecommunications infrastructure.

FURTHER READING.

  1. FCC-CIRC2403-03, Report and Order and further notice of proposed rulemaking, related to the following context: “Single Network Future: Supplemental Coverage from Space” (February 2024).
  2. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks.”, Wiley (2024). A recommended reading for deep diving into NTN networks of satellites, typically the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  3. Kim Kyllesbech Larsen, The Next Frontier: LEO Satellites for Internet Services. | techneconomyblog, (March 2024).
  4. Kim Kyllesbech Larsen, Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies? | techneconomyblog, (January 2024).
  5. Kim Kyllesbech Larsen, Spectrum in the USA – An overview of Today and a new Tomorrow. | techneconomyblog, (May 2023).
  6. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is also quite good: Starlink.
  7. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  8. Professor Emil Björnson, “Basics of Antennas and Beamforming”, (2019). Provides a high-level understanding of what beamforming is in relatively simple terms.
  9. Professor Emil Björnson, “Physically Large Antenna Arrays: When the Near-Field Becomes Far-Reaching”, (2022). Provides a high-level understanding of phased arrays and how they work, in relatively simple terms, with lots of simple illustrations. I also recommend checking Prof. Björnson’s “Reconfigurable intelligent surfaces: Myths and realities” (2020).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Jon Brodkin, “Google and AT&T invest in Starlink rival for satellite-to-smartphone service”, Ars Technica (January 2024). There is a very nice picture of AST’s 64-square-meter BlueWalker 3 phased array antenna (i.e., with a total supported bandwidth of 100 MHz, with channels of 5 MHz and a theoretical spectral efficiency of 3 Mbps/MHz/channel).
  12. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. Up-to-date rocket launch schedule and launch details can be found here: https://www.rocketlaunch.live/

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

The Next Frontier: LEO Satellites for Internet Services.

THE SPACE RACE IS ON.

If all current commercial satellite plans were to be realized within the next decade, we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half that number realized by StarLink’s Low Earth Orbit (LEO) constellation over the last couple of years (i.e., since 2018).

While the “Arms Race” during the Cold War was “a thing” mainly between the USA and the former Soviet Union, the Space Race will, in my opinion, be “battled out” between the commercial interests of the West and the political interests of China (as illustrated in Figure 1 below). The current numbers strongly indicate that Europe, Canada, the Middle East, Africa, and APAC (minus China) will likely and largely be left on the sideline to watch the US and China impose, in theory, a “duopoly” in LEO satellite-based services. However, in practice, it will be a near-monopoly when considering security concerns between the West and the (re-defined) Eastern bloc.

Figure 1 Illustrates my thesis that we will see a Space Race over the next 10 years between one (or very few) commercial LEO constellations, represented by a Falcon-9-like design (for maybe too obvious reasons), and a Chinese state-owned satellite constellation. (Courtesy: DALL-E).

As of the end of 2023, more than 50% of launched and planned commercial LEO satellites are USA-based. Of those, the largest fraction is accounted for by the US-based StarLink constellation (~75%). More than 30% are launched or planned by Chinese companies, headed by the state-owned Guo Wang constellation, rivaling Elon Musk’s Starlink in ambition and scale. Europe comes in at a distant number 3 with about 8% of the total number of fixed internet satellites. Apart from being disappointed, alas, not surprised, by the European track record, it is somewhat more baffling that there are so few Indian satellite constellations, and no African ones at all, given the obvious benefits such satellites could bring to India and the African continent.

India is a leading satellite nation with a proud tradition of innovative satellite designs and manufacturing and a solid track record of satellite launches. However, regarding commercial LEO constellations, India has yet to seize the opportunity. Having previously worked on the economics and operationalization of a satellite ATC (i.e., a satellite service with an ancillary terrestrial component) internet service across India, it is mind-blowing (imo) how much economic opportunity there is in replacing the vast terrestrial cellular infrastructure in rural India with satellite. Not to mention the quantum leap in communication broadband services resilience and availability that could be provided. According to the StarLink coverage map, the regulatory approval in India for allowing StarLink (US) services is still pending. In the meantime, Eutelsat’s OneWeb (EU) received regulatory approval in late 2023 for its satellite internet service over India in collaboration with Bharti Enterprises (India), which is also the largest shareholder in the recently formed Eutelsat Group with 21.2%. Moreover, Jio’s JioSpaceFiber satellite internet services were launched in several Indian states at the end of 2023, using the SES (EU) MEO O3b mPower satellite constellation. Despite the clear satellite know-how and capital available, there appears to be little activity in Indian-based LEO satellite development to take up the competition with international operators.

The African continent is attracting all the major LEO satellite constellations, such as StarLink (US), OneWeb (EU), Amazon Kuiper (US), and Telesat Lightspeed (CAN). However, getting regulatory approval for their satellite-based internet services is a complex, time-consuming, and challenging process across Africa’s 54 recognized sovereign countries. I would expect the Chinese-based satellite constellations (e.g., Guo Wang) to gain ground here as well, due to the strong ties between China and several African nations.

This article is not about SpaceX’s StarLink satellite constellation, although StarLink is mentioned a lot and used as an example. Recently, at the Mobile World Congress 2024 in Barcelona, talking to satellite operators (but not StarLink) providing fixed broadband satellite services, we joked about how long into a meeting we could go before SpaceX and StarLink would be mentioned (~5 minutes was the record, I think).

This article is about the key enablers (frequencies, frequency bandwidth, antenna design, …) that make up an LEO satellite service, the LEO satellite itself, the kind of services one should expect from it, and its limitations.

There is no doubt that LEO satellites of today have an essential mission: delivering broadband internet to rural and remote areas with little or no terrestrial cellular or fixed infrastructure to provide internet services. Satellites can offer broadband internet to remote areas with little population density and a population spread out reasonably uniformly over a large area. A LEO satellite constellation is not (in general) a substitute for an existing terrestrial communications infrastructure. Still, it can enhance it by increasing service availability and being an important remedy for business continuity in remote rural areas. Satellite systems are capacity-limited as they serve vast areas, typically with limited spectral resources and capacity per unit area.

In comparison, we have much smaller coverage areas with demand-matched spectral resources in a terrestrial cellular network. It is also easier to increase capacity in a terrestrial cellular system by adding more sectors or increasing the number of sites in an area that requires such investments. Adding more cells, and thus increasing the system capacity, to satellite coverage requires a new generation of satellites with more advanced antenna designs, typically by increasing the number of phased-array beams and more complex modulation and coding mechanisms that boost the spectral efficiency, leading to increased capacity and quality for the services rendered to the ground. Increasing the system capacity of a cellular communications system by increasing the number of cells (i.e., cell splitting) works the same in satellite systems as it does for a terrestrial cellular system.

So, on average, LEO satellite internet services to individual customers (or households), such as those offered by StarLink, are excellent for remote, lowly populated areas with a nicely spread-out population. If we de-average this statement, however, within the satellite coverage area we may have towns and settlements where, locally, the population density can be fairly large despite being very small over the larger footprint covered by the satellite. As the capacity and quality of the satellite are a shared resource, serving towns and settlements of a certain size may not be the best approach to providing a sustainable and good customer experience, as the satellite resources exhaust rapidly in such scenarios. Here, a hybrid architecture is of much better use: provide all customers in a town or settlement with the best service possible by leveraging the existing terrestrial communications infrastructure, cellular as well as fixed, combined with a satellite backhaul broadband connection between a satellite ground gateway and the broadband internet satellite. This is offered by several satellite broadband providers (from GEO, MEO, and LEO orbits) and has the beauty of not being limited to one provider. Unfortunately, this particular finesse is often overlooked in the awe of the massive scale of the StarLink constellation.

AND SO IT STARTS.

When I compared the economics of stratospheric drone-based cellular coverage with that of LEO satellites and terrestrial-based cellular networks in my previous article, “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, it was clear that even if LEO satellites are costly to establish, they provide a substantial cost advantage over cellular coverage in rural and remote areas that are either sparsely covered or not covered at all. Although the existing LEO satellite constellations have limited capacity compared to a terrestrial cellular network and would perform rather poorly over densely populated areas (e.g., urban and suburban areas), they can offer very decent fixed-wireless-access-like broadband services in rural and remote areas at speeds exceeding even 100 Mbps, as shown by the Starlink constellation. Even if the provided speed and capacity are likely to be substantially lower than what a terrestrial cellular network could offer, it often provides the missing (internet) link. Anything larger than nothing remains infinitely better.

Low Earth Orbit (LEO) satellites represent the next frontier in (novel) communication network architectures, what we in modern lingo would call non-terrestrial networks (NTN), with the ability to combine both mobile and fixed broadband services, enhancing and substituting terrestrial networks. Orbiting significantly closer to Earth than their Geostationary Orbit (GEO) counterparts at 36 thousand kilometers, typically at altitudes between 300 and 2,000 kilometers, LEO satellites offer substantially reduced latency, higher bandwidth capabilities, and a more direct line of sight to receivers on the ground. This makes LEO satellites an obvious and integral component of non-terrestrial networks, which aim to extend the reach of existing fixed and mobile broadband services, particularly in rural, un- and under-served, or inaccessible regions, and to act as a high-availability element of terrestrial communications networks in the event of natural disasters (flooding, earthquakes, …) or military conflict, in which the terrestrial networks are taken out of operation.

Another key advantage of LEO satellites is that the likelihood of a line-of-sight (LoS) to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the signal propagation from a LEO satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based cellular mobile network do not apply to our satellite, with signal propagation largely being determined by the distance between the satellite and the ground (see Figure 2).
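Because the link approximates free space, the dominant loss is captured by the textbook free-space path loss formula. A sketch with illustrative values (550 km altitude at nadir, the 850 MHz carrier used earlier; a real link budget would add elevation-angle slant range, atmospheric and polarization losses):

```python
import math

# Free-space path loss (FSPL) from a LEO satellite straight down to the
# ground, illustrating why distance dominates the link budget.
# FSPL(dB) = 20*log10(4*pi*d*f/c). Altitude and frequency are examples.

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for distance d and frequency f."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

loss = fspl_db(550e3, 850e6)
print(f"{loss:.1f} dB")  # ~145.8 dB at nadir from 550 km
```

This is why a consumer handset, with its low transmit power and small antenna, needs the satellite's large, highly sensitive phased array on the other end of the link.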

Figure 2 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a Low Earth Orbit (LEO) Satellite. The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality primarily being impacted by distance as it approximates free space propagation with signal attenuation mainly determined by the Line-of-Sight (LoS) distance from antenna to Earth. This situation is very different for a terrestrial-based cellular tower with its radiated signal being substantially compromised by environmental factors.

Low Earth Orbit (LEO) satellites, compared to GEO and MEO-based higher-altitude satellite systems, in general, have simpler designs and smaller sizes, weights, and volumes. Their design and architecture are not just a function of technological trends but also a manifestation of their operational environment. The (relative) simplicity of LEO satellites also allows for more standardized production, allowing for off-the-shelf components and modular designs that can be manufactured in larger quantities, such as the case with CubeSats standard and SmallSats in general. The lower altitude of LEO satellites translates to a reduced distance from the launch site to the operational orbit, which inherently affects the economics of satellite launches. This proximity to Earth means that the energy required to propel a satellite into LEO is significantly less than needed to reach Geostationary Earth Orbit (GEO), resulting in lower launch costs.
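The energy gap between LEO and GEO can be sketched with the standard two-body formulas: the circular orbit speed at LEO altitude versus the extra delta-v of a Hohmann transfer up to GEO. This is a textbook idealization (it ignores launch losses, atmospheric drag, and inclination changes), with a 550 km LEO chosen to match the examples in this article:

```python
import math

# Rough energetics of LEO vs. GEO: circular orbit speed at 550 km and
# the extra delta-v of an idealized Hohmann transfer up to GEO.

MU = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0  # mean Earth radius, km

def circular_speed(r_km: float) -> float:
    """Circular orbital speed (km/s) at radius r from Earth's center."""
    return math.sqrt(MU / r_km)

def hohmann_delta_v(r1_km: float, r2_km: float) -> float:
    """Total delta-v (km/s) for a Hohmann transfer between circular orbits."""
    a_t = (r1_km + r2_km) / 2  # semi-major axis of the transfer ellipse
    dv1 = math.sqrt(MU * (2 / r1_km - 1 / a_t)) - circular_speed(r1_km)
    dv2 = circular_speed(r2_km) - math.sqrt(MU * (2 / r2_km - 1 / a_t))
    return dv1 + dv2

r_leo = R_EARTH + 550  # 550 km LEO
r_geo = 42_164.0       # GEO orbital radius, km

print(f"LEO circular speed: {circular_speed(r_leo):.2f} km/s")
print(f"Extra delta-v LEO -> GEO: {hohmann_delta_v(r_leo, r_geo):.2f} km/s")
```

Roughly 3.8 km/s of additional delta-v on top of reaching LEO is why GEO launches cost substantially more per kilogram.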

The advent of LEO satellite constellations marks an important shift in how we approach global connectivity. With the potential to provide ubiquitous internet coverage in rural and remote places with little or no terrestrial communications infrastructure, satellites are increasingly being positioned as vital elements in global communication. LEO satellites, as well as stratospheric drones, have the ability to provide economical internet access, as addressed in my previous article, in remote areas and play a significant role in disaster relief efforts. For example, when terrestrial communication networks are disrupted after a natural disaster, LEO satellites can quickly re-establish communication links to normal cellular devices or ad-hoc Earth-based satellite systems, enabling efficient coordination of rescue and relief operations. Furthermore, they offer a resilient network backbone that complements terrestrial infrastructure.

The Internet of Things (IoT) benefits from the capabilities of LEO satellites, particularly in areas with little or no existing terrestrial communications network. IoT devices often operate in remote or mobile environments, from sensors in agricultural fields to trackers across shipping routes. LEO satellites provide reliable connectivity to IoT networks, facilitating many applications, such as non- and near-real-time monitoring of environmental data, seamless asset tracking over transcontinental journeys, and rapid deployment of smart devices in smart city infrastructures. As an example, let us look at the minimum requirements for establishing a LEO satellite constellation that can gather IoT measurements. At an altitude of 550 km, the satellite would take ca. 1.5 hours to return to a given point on its orbit. Earth rotates (see also below), which requires us to deploy several orbital planes to ensure continuous coverage throughout the 24 hours of a day (assuming this is required). Depending on the satellite antenna design, the target coverage area, and how often a measurement is required, a satellite constellation to support an IoT business may not require much more than 20 (lower measurement frequency) to 60 (higher measurement frequency, but far from real-time data collection) LEO satellites (@ 550 km).
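The "ca. 1.5 hours" orbit time follows from Kepler's third law for a circular orbit. A minimal check of that figure (standard constants; the exact value at 550 km comes out at about 95.5 minutes):

```python
import math

# Orbital period of a circular LEO from Kepler's third law:
# T = 2*pi*sqrt(a^3 / mu), with a the orbital radius.

MU = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0  # mean Earth radius, km

def orbital_period_min(altitude_km: float) -> float:
    """Period (minutes) of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

print(f"{orbital_period_min(550):.1f} minutes")  # ~95.5 min, ca. 1.5-1.6 h
```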

For defense purposes, LEO satellite systems present unique advantages. Their lower orbits allow for high-resolution imagery and rapid data collection, which are crucial for surveillance, reconnaissance, and operational awareness. As typically more LEO satellites will be required, compared to a GEO satellite, such systems also offer a higher degree of redundancy in case of anti-satellite (ASAT) warfare scenarios. When integrated with civilian applications, military use cases can leverage the robust commercial infrastructure for communication and geolocation services, enhancing capabilities while distributing the system’s visibility and potential targets.

Standalone military LEO satellites are engineered for specific defense needs. These may include hardened systems for secure communication, resistance to jamming, and interception. For instance, they can be equipped with advanced encryption algorithms to ensure secure transmission of sensitive military data. They also carry tailored payloads for electronic warfare, signal intelligence, and tactical communications. For example, they can host sensors for detecting and locating enemy radar and communication systems, providing a significant advantage in electronic warfare. As the line between civilian and military space applications blurs, dual-use LEO satellite systems are emerging, capable of serving civilian broadband and specialized military requirements. It should be pointed out that there are also military applications, such as signal gathering, that may not be compatible with civil communications use cases.

In a military conflict, the distributed architecture and lower altitude of LEO constellations may offer some advantages regarding resilience and targetability compared to GEO- and MEO-based satellites. Their larger numbers (i.e., 10s to 1000s) compared to GEO, and the potential for quicker orbital resupply, can make them less susceptible to a complete system takedown. However, their lower altitudes could make them accessible to various ASAT technologies, including ground-based missiles or space-based kinetic interceptors.

It is not uncommon to encounter academic researchers and commentators who give the impression that LEO satellites could replace existing terrestrial-based infrastructures and solve all terrestrial communications issues known to man. That is (of course) not the case. Often, such statements appear to be based on an incomplete understanding of the capacity limitations of satellite systems. Because satellites provide excellent coverage with very large terrestrial footprints, the satellite capacity is shared over very large areas. For example, consider a LEO satellite at 550 km altitude. The satellite footprint, or coverage area (aka ground swath), is the area on the Earth’s surface over which the satellite can establish a direct line of sight. In our example, the satellite footprint would be ca. 5,500 kilometers in diameter, equivalent to a coverage area of ca. 23 million square kilometers, more than twice that of the USA (or China or Canada). Before you get too excited, the satellite antenna will typically restrict the surface area the satellite actually covers. The extent of the observable world seen at any given moment by the satellite antenna is defined as the Field of View (FoV) and can vary from a few degrees (narrow beams, small coverage area) to 40 degrees or higher (wide beams, large coverage areas). At a FoV of 20 degrees, the antenna footprint would be ca. 2,400 kilometers in diameter, equivalent to a coverage area of ca. 5 million square kilometers.
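The back-of-the-envelope footprint geometry above can be checked with a few lines of Python. This is a sketch only: it uses the line-of-sight horizon distance and a flat-disc area approximation, and it ignores the antenna FoV restriction discussed in the text.

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def horizon_footprint(altitude_km: float) -> tuple[float, float]:
    """Return (footprint diameter in km, coverage area in million km^2)
    for the line-of-sight horizon of a satellite at the given altitude.
    Flat-disc approximation for the area, matching the rough numbers
    quoted in the text."""
    # Slant distance from the sub-satellite point to the horizon
    d_horizon = math.sqrt(2 * R_EARTH_KM * altitude_km + altitude_km ** 2)
    diameter = 2 * d_horizon
    area_mkm2 = math.pi * d_horizon ** 2 / 1e6
    return diameter, area_mkm2

diameter, area = horizon_footprint(550.0)
print(f"Footprint diameter: ~{diameter:,.0f} km")  # ~5,400 km, i.e., "ca. 5,500 km"
print(f"Coverage area: ~{area:.0f} million km^2")  # ~23 million km^2
```

Running it reproduces the ca. 5,500 km diameter and ca. 23 million km² coverage area used in the example.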

In comparison, for a FoV of 0.8 degrees, the antenna footprint would only be 100 kilometers. If our satellite has a 16-beam capability, that would translate into a coverage diameter of ca. 24 km per beam. For the StarLink system, based on the Ku-band (13 GHz) and a cell downlink (satellite-to-Earth) capacity of ca. 680 Mbps (in 250 MHz), we would have ca. 2 Mbps per km2 of unit coverage area. In comparison, a terrestrial rural cellular site with 85 MHz (downlink, base station antenna to customer terminal) would deliver 10+ Mbps per km2 of unit coverage area.
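The capacity-density arithmetic can be sketched as follows. The 680 Mbps cell capacity and 24 km beam diameter are the illustrative figures from the text, not measured StarLink values.

```python
import math

def area_capacity_density(cell_capacity_mbps: float, beam_diameter_km: float) -> float:
    """Capacity per unit coverage area (Mbps/km^2) for one satellite beam,
    treating the beam footprint as a circular disc."""
    beam_area_km2 = math.pi * (beam_diameter_km / 2) ** 2
    return cell_capacity_mbps / beam_area_km2

# Illustrative: ~680 Mbps downlink per cell (Ku-band, 250 MHz) over a ~24 km beam
density = area_capacity_density(680.0, 24.0)
print(f"~{density:.1f} Mbps per km^2")  # ~1.5 Mbps/km^2, the "ca. 2" order of magnitude
```

The result (~1.5 Mbps/km²) is the same order of magnitude as the ca. 2 Mbps/km² quoted above, and roughly an order below the 10+ Mbps/km² of a rural cellular site.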

It is always good to keep in mind that “a satellite’s mission is not to replace terrestrial communications infrastructures but to supplement and enhance them,” and, furthermore, that “satellites offer the missing (internet) link in areas where no terrestrial communications infrastructure is present.” Satellites offer superior coverage to any terrestrial communications infrastructure. Satellites’ limitations lie in providing capacity, and quality, at population scale, as well as in supporting applications and access technologies that require very short latencies (e.g., smaller than 10 ms).

In the following, I will focus on the terrestrial cellular coverage and services that LEO satellites can provide. By the end of this blog, I hope to have given you (the reader) a reasonable understanding of how terrestrial coverage, capacity, and quality work in a (LEO) satellite system, and an impression of the key parameters we can add to a satellite to improve them.

EARTH ROTATES, AND SO DO SATELLITES.

Before getting into the details of low earth orbit satellites, let us briefly get a couple of basic topics off the table. Skipping this part may be a good option if you are already in the know about satellites. Or maybe carry on and get a good laugh at those terra firma cellular folks who forgot about the rotation of Earth 😉

From an altitude and orbit (around Earth) perspective, you may have heard of two types of satellites: The GEO and the LEO satellites. Geostationary (GEO) satellites are positioned in a geostationary orbit at ~36 thousand kilometers above Earth. That the satellite is geostationary means it rotates with the Earth and appears stationary from the ground, requiring only one satellite to maintain constant coverage over an area that can be up to one-third of Earth’s surface. Low Earth Orbit (LEO) satellites are positioned at an altitude between 300 to 2000 kilometers above Earth and move relative to the Earth’s surface at high speeds, requiring a network or constellation to ensure continuous coverage of a particular area.

I have experienced that terrestrial cellular folks (like myself), when first thinking about satellite coverage, have some intuitive issues with it. We are not used to our antennas moving away from the targeted coverage area, nor to our targeted coverage area moving away from our antenna. The geometry and dynamics of terrestrial cellular coverage are simpler than they are for satellite-based coverage. For LEO satellite network planners, it is not rocket science (pun intended) that the satellites move around in their designated orbits over Earth at orbital speeds of ca. 7 to 8 km per second. Thus, at an altitude of 500 km, a LEO satellite orbits Earth approximately every 1.5 hours. Earth, thankfully, rotates. Compared to its GEO satellite “cousin,” the LEO satellite is not “stationary” from the perspective of the ground. Thus, as Earth rotates, the targeted coverage area moves away from the coverage provided by the orbiting satellite.
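For readers who want to verify the orbital numbers, here is a minimal sketch using standard circular-orbit mechanics (Earth's gravitational parameter and mean radius are textbook constants):

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2, Earth's gravitational parameter
R_EARTH_KM = 6371.0      # mean Earth radius

def orbital_speed_kms(altitude_km: float) -> float:
    """Circular-orbit speed at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH_KM + altitude_km))

def orbital_period_min(altitude_km: float) -> float:
    """Circular-orbit period in minutes: T = 2*pi*r / v."""
    r = R_EARTH_KM + altitude_km
    return 2 * math.pi * r / orbital_speed_kms(altitude_km) / 60

print(f"LEO @ 500 km: {orbital_speed_kms(500):.1f} km/s, "
      f"period {orbital_period_min(500):.0f} min")  # ~7.6 km/s, ~94 min (~1.5 h)

# The geostationary altitude follows from requiring a one-sidereal-day period
T_SIDEREAL_S = 86_164.1
r_geo = (MU_EARTH * T_SIDEREAL_S ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"GEO altitude: ~{r_geo - R_EARTH_KM:,.0f} km")  # ~35,800 km, i.e., "~36 thousand"
```

This reproduces both the ca. 1.5-hour LEO orbit at 500 km and the ~36,000 km GEO altitude mentioned earlier.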

We need several satellites in the same orbit, and several orbits (i.e., orbital planes), to provide continuous satellite coverage of a target area. This is very different from terrestrial cellular coverage of a given area (needless to say).

WHAT LEO SATELLITES BRING TO THE GROUND.

Anything is infinitely more than nothing. The Low Earth Orbit satellite brings the possibility of internet connectivity where there previously was nothing, either because too few potential customers spread out over a large area made terrestrial-based services hugely uneconomical, or because the environment is too hostile to build normal terrestrial networks within reasonable economics.

Figure 3 illustrates a low Earth satellite constellation providing internet to rural and remote areas as a way to solve part of the digital divide challenge in terms of availability. Obviously, the affordability is likely to remain a challenge unless subsidized by customers who can afford satellite services in other places where availability is more of a convenience question. (Courtesy: DALL-E)

The LEO satellites represent a transformative shift in internet connectivity, providing advantages over traditional cellular and fixed broadband networks, particularly for global access, speed, and deployment capabilities. As described in “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, LEO satellite constellations, or networks, may also be significantly more economical than equivalent cellular networks in rural and remote areas where the economics of coverage by satellite, as depicted in the above Figure 3, is by far better than by traditional terrestrial cellular means.

One of the foremost benefits of LEO satellites is their ability to offer global coverage with reasonable broadband and latency performance that is difficult to match with GEO and MEO satellites. The geostationary satellite obviously also offers global broadband coverage, with a unit coverage area much more extensive than that of a LEO satellite, but it cannot offer very low latency services, and it is more difficult for it to provide high data rates (in comparison to a LEO satellite). LEO satellites can reach the most remote and rural areas of the world, places where laying cables or setting up cell towers is impractical. This is a crucial step in delivering communications services where none exist today, ensuring that underserved populations and regions gain access to internet connectivity.

Another significant advantage is the reduction in latency that LEO satellites provide. Since they orbit much closer to Earth, typically at an altitude between 350 to 700 km, compared to their geostationary counterparts that are at 36 thousand kilometers altitude, the time it takes for a communications signal to travel between the user and the satellite is significantly reduced. This lower latency is crucial for enhancing the user experience in real-time applications such as video calls and online gaming, making these activities more enjoyable and responsive.
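The latency argument can be made concrete with a best-case propagation calculation. This is a sketch: it counts only the straight up-and-down light travel time at nadir; real round-trip times add slant range, gateway hops, queuing, and processing delays.

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Best-case round-trip propagation delay in ms for a signal sent
    straight up to the satellite and back: 2 * altitude / c."""
    return 2 * altitude_km / C_KM_S * 1000

print(f"LEO @ 550 km:    ~{min_rtt_ms(550):.1f} ms")   # ~3.7 ms
print(f"GEO @ 35,786 km: ~{min_rtt_ms(35_786):.0f} ms")  # ~239 ms
```

The ~4 ms versus ~240 ms gap matches the RTT figures quoted later in this article and explains why only LEO can serve latency-sensitive real-time applications well.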

An inherent benefit of satellite constellations is their ability for quick deployment. They can be deployed rapidly in space, offering a quicker solution to achieving widespread internet coverage than the time-consuming and often challenging process of laying cables or erecting terrestrial infrastructure. Moreover, the network can easily be expanded by adding more satellites, allowing it to dynamically meet changing demand without extensive modifications on the ground.

LEO satellite networks are inherently scalable. By launching additional satellites, they can accommodate growing internet usage demands, ensuring that the network remains efficient and capable of serving more users over time without significant changes to ground infrastructure.

Furthermore, these satellite networks offer resilience and reliability. With multiple satellites in orbit, the network can maintain connectivity even if one satellite fails or is obstructed, providing a level of redundancy that makes the network less susceptible to outages. This ensures consistent performance across different geographical areas, unlike terrestrial networks that may suffer from physical damage or maintenance issues.

Another critical advantage is (relative) cost-effectiveness compared to a terrestrial-based cellular network. In remote or hard-to-reach areas, deploying satellites can be more economical than the high expenses associated with extending terrestrial broadband infrastructure. As satellite production and launch costs continue to decrease, the economics of LEO satellite internet become increasingly competitive, potentially reducing the cost for end-users.

LEO satellites offer a promising solution to some of the limitations of traditional connectivity methods. By overcoming geographical, infrastructural, and economic barriers, LEO satellite technology has the potential to not just complement but effectively substitute terrestrial-based cellular and fixed broadband services, especially in areas where such services are inadequate or non-existent.

Figure 4 below provides an overview of LEO satellite coverage with fixed broadband services offered to customers in the Ku band, with a Ka backhaul link to ground station gateways (GWs) that connect to, for example, the internet. Having inter-satellite communications (e.g., via laser links such as those used by Starlink satellites as of satellite version 1.5) allows for substantially fewer ground-station gateways. Inter-satellite laser links between intra-plane satellites are a distinct advantage in ensuring coverage for rural and remote areas where it might be difficult, very costly, or impractical to have a satellite ground station GW to connect to due to the lack of local internet infrastructure.

Figure 4 In general, a satellite is required to have LoS to its ground station gateway (GW); in other words, the GW needs to be within the coverage footprint of the satellite. For LEO satellites, which are at low altitudes, between 300 and 2,000 km, and thus have a much smaller footprint than MEO and GEO satellites, this would result in a need for a substantial number of ground stations. This is depicted in (a) above. With inter-satellite laser links (ISLLs), e.g., those implemented by Starlink, it is possible to reduce the number of ground station gateways significantly, which is particularly helpful in rural and very remote areas. These laser links enable direct communication between satellites in orbit, which enhances the network’s performance, reliability, and global reach.

Inter-satellite laser links (ISLLs), also called Optical Inter-Satellite Links (OISLs), are an advanced communication technology utilized by satellite constellations, such as Starlink, to facilitate high-speed, secure data transmission directly between satellites. Inter-satellite laser links are today (primarily) designed for intra-plane communication within satellite constellations, enabling data transfer between satellites that share the same orbital plane. This is due to the relatively stable geometries and predictable distances between satellites in the same orbit, which facilitate maintaining the line-of-sight connections necessary for laser communications. ISLLs mark a significant departure from the traditional reliance on ground stations for inter-satellite communication, and as such they offer many benefits, including the ability to transmit data at speeds comparable to fiber-optic cables. Additionally, ISLLs enable satellite constellations to deliver seamless coverage across the entire planet, including over oceans and polar regions where ground station infrastructure is limited or non-existent. The technology also inherently enhances the security of data transmissions, thanks to the focused nature of laser beams, which are difficult to intercept.

However, the deployment of ISLLs is not without challenges. The technology requires a clear line of sight between satellites, which can be affected by their orbital positions, necessitating precise control mechanisms. Moreover, the theoretical limit to the number of satellites linked in a daisy chain is influenced by several factors, including the satellite’s power capabilities, the network architecture, and the need to maintain clear lines of sight. High-power laser systems also demand considerable energy, impacting the satellite’s power budget and requiring efficient management to balance operational needs. The complexity and cost of developing such sophisticated laser communication systems, combined with very precise pointing mechanisms and sensitive detectors, can be quite challenging and need to be carefully weighed against building satellite ground stations.

Cross-plane ISLL transmission, or the ability to communicate between satellites in different orbital planes, presents additional technical challenges, as it is highly demanding to maintain a stable line of sight between satellites moving in different orbital planes. However, the potential for ISLLs to support cross-plane links is recognized as a valuable capability for creating a fully interconnected satellite constellation. The development and incorporation of cross-plane ISLL capabilities into satellites is an area of active research and development. Such capabilities would reduce the reliance on ground stations and significantly increase the resilience of satellite constellations. I see this as a next-generation topic, together with many other important developments described at the end of this blog. However, the power consumption of the ISLL is a point of concern that needs careful attention, as it will impact many other aspects of satellite operation.

THE DIGITAL DIVIDE.

The digital divide refers to the “internet haves and have-nots,” or “the gap between individuals who have access to modern information and communication technology (ICT),” such as the internet, computers, and smartphones, and those who do not. This divide can be due to various factors, including economic, geographic, age, and educational barriers. Essentially, as illustrated in Figure 5, it is the difference between the “digitally connected” and the “digitally disconnected.”

The significance of the digital divide is considerable, impacting billions of people worldwide. It is estimated that a little less than 40% of the world’s population, or roughly 2.9 billion people, had never used the internet (as of 2023). This gap is most pronounced in developing countries, rural areas, and among older populations and economically disadvantaged groups.

The digital divide affects individuals’ ability to access information, education, and job opportunities and impacts their ability to participate in digital economies and the modern social life that the rest of us (i.e., the other side of the divide or the privileged 60%) have become used to. Bridging this divide is crucial for ensuring equitable access to technology and its benefits, fostering social and economic inclusion, and supporting global development goals.

Figure 5 illustrates the digital divide, that is, the gap between individuals with access to modern information and communication technology (ICT), such as the internet, computers, and smartphones, and those who do not have access. (Courtesy: DALL-E)

CHALLENGES WITH LEO SATELLITE SOLUTIONS.

Low-Earth-orbit satellites offer compelling advantages for global internet connectivity, yet they are not without challenges and disadvantages when considered as substitutes for cellular and fixed broadband services. These drawbacks underscore the complexities and limitations of deploying LEO satellite technology globally.

The capital investment required and the ongoing costs associated with designing, manufacturing, launching, and maintaining a constellation of LEO satellites are substantial. Despite technological advancements and increased competition driving costs down, the financial barrier to entry remains high. Compared to their geostationary counterparts, the relatively short lifespan of LEO satellites necessitates frequent replacements, further adding to operational expenses.

While LEO satellites offer significantly reduced latency (round trip times, RTT ~ 4 ms) compared to geostationary satellites (RTT ~ 240 ms), they may still face latency and bandwidth limitations, especially as the number of users on the satellite network increases. This can lead to reduced service quality during peak usage times, highlighting the potential for congestion and bandwidth constraints. This is also the reason why the main business model of LEO satellite constellations is primarily to address coverage and needs in rural and remote locations. Alternatively, the LEO satellite business model focuses on low-bandwidth needs such as texting, voice messaging, and low-bandwidth Internet of Things (IoT) services.
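The congestion point can be illustrated with trivial arithmetic: a beam's capacity is shared among its simultaneously active users. The 680 Mbps figure is the illustrative Ku-band cell capacity used earlier in this article, and the activity factor is a hypothetical modeling knob, not an operator specification.

```python
def per_user_mbps(cell_capacity_mbps: float, active_users: int,
                  activity_factor: float = 1.0) -> float:
    """Average throughput per simultaneously active user in one beam.
    activity_factor < 1 models users not all transmitting at once."""
    busy = max(1, round(active_users * activity_factor))
    return cell_capacity_mbps / busy

# Illustrative: a ~680 Mbps beam shared by a growing active user base
for n in (10, 100, 500):
    print(f"{n:4d} active users -> ~{per_user_mbps(680, n):.1f} Mbps each")
```

Per-user throughput falls linearly with the number of active users in the beam, which is why the business model skews toward sparsely populated areas and low-bandwidth IoT or messaging services.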

Navigating the regulatory and spectrum management landscape presents another challenge for LEO satellite operators. Securing spectrum rights and preventing signal interference requires coordination across multiple jurisdictions, which can complicate deployment efforts and increase the complexity of operations.

The environmental and space traffic concerns associated with deploying large numbers of satellites are significant. The potential for space debris and the sustainability of low Earth orbits are critical issues, with collisions posing risks to other satellites and space missions. Additionally, the environmental impact of frequent rocket launches raises further concerns.

FIXED-WIRELESS ACCESS (FWA) BASED LEO SATELLITE SOLUTIONS.

Using the NewSpace Index database, updated December 2023, there are currently 6,463 internet satellites launched, of which 5,650 (~87%) are from StarLink, and 40,000+ satellites planned for launch, of which SpaceX’s Starlink accounts for 11,908 (~30%). More than 45% of the satellites launched and planned support multi-application use cases, thus internet together with, for example, IoT (~4%) and/or Direct-2-Device (D2D, ~39%). The D2D share is due to StarLink’s plans to provide services to mobile terminals with their latest satellite constellation. The first six StarLink v2 satellites with direct-to-cellular capability were successfully launched on January 2nd, 2024. Some care should be taken with the D2D share in the StarLink numbers, as it does not consider the different form factors of the version 2 satellite, which do not all include D2D capabilities.

Most LEO satellites, helped by the sheer quantity of StarLink satellites, operational and planned, support satellite fixed broadband internet services. It is worth noting that the Chinese Guo Wang constellation ranks second in terms of planned LEO satellites, with almost 13,000 planned, rivaling the StarLink constellation. After StarLink and Guo Wang are counted, only 34%, or ca. 16,000 internet satellites, are left in the planning pool across 30+ satellite companies. While StarLink is privately owned (by Elon Musk), the Guo Wang (國網 ~ “The state network”) constellation is led by China SatNet, created by SASAC (China’s State-Owned Assets Supervision and Administration Commission). SASAC oversees China’s biggest state-owned enterprises. I expect that such a LEO satellite constellation, which would be the second biggest LEO constellation as planned by Guo Wang, and controlled by the Chinese State, would be of considerable concern to the West due to the possibility of dual use (i.e., civil & military) of such a constellation.
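The quoted shares follow directly from the NewSpace Index figures cited above; this quick check uses 40,000 as a lower bound for the "40,000+" planned satellites, so the planned-share figure is approximate.

```python
# Sanity-check the quoted shares from the NewSpace Index snapshot (Dec 2023)
launched_total = 6_463      # internet satellites launched
launched_starlink = 5_650   # of which StarLink
planned_total = 40_000      # lower bound for "40,000+" planned
planned_starlink = 11_908   # StarLink planned

print(f"StarLink share of launched: {launched_starlink / launched_total:.0%}")  # ~87%
print(f"StarLink share of planned:  {planned_starlink / planned_total:.0%}")    # ~30%
```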

StarLink coverage as of March 2024 (see StarLink’s availability map) does not include Russia, China, Iran, Iraq, Afghanistan, Venezuela, and Cuba (ca. 20% of Earth’s total land surface area). There are still quite a few countries in Africa and South-East Asia, including India, where regulatory approval remains pending.

Figure 6 NewSpace Index data of commercial satellite constellations in terms of total number of launched and planned (top) per company (or constellation name) and (bottom) per country.

While the term FWA, fixed wireless access, is not traditionally used to describe satellite internet services, the broadband services offered by LEO satellites can be considered a form of “wireless access” since they also provide connectivity without cables or fiber. In essence, LEO satellite broadband is a complementary service to traditional FWA, extending wireless broadband access to locations beyond the reach of terrestrial networks. In the following, I will continue to use the term FWA for the fixed broadband LEO satellite services provided to individual customers, including SMEs. As some of the LEO satellite businesses eventually also might provide direct-to-device (D2D) services to normal terrestrial mobile devices, either on their own acquired cellular spectrum or in partnership with terrestrial cellular operators, the LEO satellite operation (or business architecture) becomes much closer to terrestrial cellular operations.

Figure 7 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services, such as Fixed Wireless Access, to individual terrestrial users (e.g., Starlink, Kuiper, OneWeb, …). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of a LEO satellite constellation is between 300 and 2,000 km, with most aiming for 450 to 550 km altitude. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal (UT) antenna dynamically orients itself toward the satellite with the best line-of-sight (in terms of signal quality) within the UT’s field-of-view (FoV). The FoV is not shown in the picture above so as not to overcomplicate the illustration.

Low Earth Orbit (LEO) satellite services like Starlink have emerged to provide fixed broadband internet to individual consumers and small to medium-sized enterprises (SMEs), often targeting rural and remote areas where no other broadband solutions are available, or where only poor legacy copper- or coax-based infrastructure exists. These services deploy constellations of satellites orbiting close to Earth to offer high-speed internet with the significant advantage of reaching rural and remote areas where traditional ground-based infrastructure is absent or economically unfeasible.

One of the most significant benefits of LEO satellite broadband is the ability to deliver connectivity with lower latency compared to traditional satellite internet delivered by geosynchronous satellites, enhancing the user experience for real-time applications. The rapid deployment capability of these services also means that areas in dire need of internet access can be connected much quicker than waiting for ground infrastructure development. Additionally, satellite broadband’s reliability is less affected by terrestrial challenges, such as natural disasters that can disrupt other forms of connectivity.

The satellite service comes with its challenges. The cost of user equipment, such as satellite dishes, can be a barrier for some users, as can the installation process of the terrestrial satellite dish required to establish the connection to the satellite. Moreover, services might be limited by data caps or experience slower speeds after reaching certain usage thresholds, which can be a drawback for users with high data demands. Weather conditions can also impact the signal quality, particularly at the higher frequencies used by the satellite, albeit to a lesser extent than for geostationary satellite services. However, the target areas where the fixed broadband satellite service is most suited are rural and remote areas that have no terrestrial broadband infrastructure (terrestrial cellular broadband or wired broadband such as coax or fiber).

Beyond Starlink, other providers are venturing into the LEO satellite broadband market. OneWeb is actively developing a constellation to offer internet services worldwide, focusing on communities that are currently underserved by broadband. Telesat Lightspeed is also gearing up to provide broadband services, emphasizing the delivery of high-quality internet to the enterprise and government sectors.

Other LEO satellite businesses, such as AST SpaceMobile and Lynk Mobile, are taking a unique approach by aiming to connect standard mobile phones directly to their satellite network, extending cellular coverage beyond the reach of traditional cell towers. More about that in the section below (see “New Kids on the Block – Direct-to-Devices LEO satellites”).

I have been asked why I appear somewhat dismissive of Amazon’s Project Kuiper in a previous version of this article, in particular compared to StarLink (I guess). The expressed mission is to “provide broadband services to unserved and underserved consumers, businesses in the United States, …” (FCC 20-102). Project Kuiper plans a broadband constellation of 3,236 microsatellites at 3 altitudes (i.e., orbital shells) around 600 km, providing fixed broadband services in the Ka-band (i.e., ~17–30 GHz). In its US-based FCC (Federal Communications Commission) filing, and in the subsequent FCC authorization, it is clear that the Kuiper constellation primarily targets contiguous coverage of the USA (but mentions that services cannot be provided in the majority of Alaska … funny, I thought that was a good definition of an underserved, remote, and scarcely populated area?). Amazon has committed to launch 50% (1,618 satellites) of its committed satellite constellation before July 2026 (until now, 2+ have been launched) and the remaining 50% before July 2029. There is, however, far less detail on the Kuiper satellite design than is available, for example, for the various versions of the StarLink satellites. Given that Kuiper will operate in the Ka-band, there may be more frequency bandwidth allocated per beam than is possible in the StarLink satellites using the Ku-band for customer device connectivity. However, the Ka-band is at a higher frequency, which may result in more compromised signal propagation. In my opinion, based on the information from the FCC submissions and correspondence, the Kuiper constellation appears less ambitious than StarLink’s vision, mission, and tangible commitment in terms of aggressive launches, a very high level of innovation, and iterative development of their platform and capabilities in general. This may, of course, change over time as more information becomes available on Amazon’s Project Kuiper.

FWA-based LEO satellite solutions – takeaway:

  • LoS-based and free-space-like signal propagation allows high-frequency signals (i.e., high throughput, capacity, and quality) to provide near-ideal performance only impacted by the distance between the antenna and the ground terminal. Something that is, in general, not possible for a terrestrial-based cellular infrastructure.
  • Provides satellite fixed broadband internet connectivity typically using the Ku-band in geographically isolated locations where terrestrial broadband infrastructure is limited or non-existent.
  • Lower latency (and round trip time) compared to MEO and GEO satellite internet solutions.
  • Current systems are designed to provide broadband internet services in scarcely populated areas and underserved (or unserved) regions where traditional terrestrial-based communications infrastructures are highly uneconomical and/or impractical to deploy.
  • As shown in my previous article (i.e., “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”), LEO satellite networks may be an economically interesting alternative to terrestrial rural cellular networks in countries with large, scarcely populated rural areas requiring tens of thousands of cellular sites to cover. Hybrid models with LEO satellite FWA-like coverage to individuals in rural areas, and with satellite backhaul to major settlements and towns, should be considered in large geographies.
  • Resilience to terrestrial disruptions is a key advantage. It ensures functionality even when ground-based infrastructure is disrupted, which is an essential element for maintaining the Business Continuity of an operator’s telecommunications services. In particular, hierarchical architectures with, for example, GEO satellite, LEO satellite, and Earth-based transport infrastructure will result in very highly reliable network operations (possibly approaching ultra-high availability, although not with service parity).
  • Current systems are inherently capacity-limited due to their vast coverage areas (i.e., lower performance per unit coverage area). In the peak demand period, they will typically perform worse than terrestrial-based cellular networks (e.g., LTE or 5G).
  • In regions where modern terrestrial cellular and fixed broadband services are already established, satellite broadband may face challenges competing with these potentially cheaper, faster, and more reliable services, which are underpinned by the terrestrial communications infrastructure.
  • It is susceptible to weather conditions, such as heavy rain or snow, which can degrade signal quality. This may impact system capacity and quality, resulting in inconsistent customer experience throughout the year.
  • Must navigate complex regulatory environments in each country, which can affect service availability and lead to delays in service rollout.
  • Depending on the altitude, LEO satellites are typically replaced on a 5- to 7-year cycle due to atmospheric drag (which increases as altitude decreases; thus, the lower the altitude, the shorter a satellite’s life). This ultimately means that any improvements in system capacity and quality will take time to be thoroughly enjoyed by all customers.
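The replacement-cycle bullet has a direct operational consequence: sustaining a constellation requires continuous launches. A sketch, using a hypothetical 5,000-satellite constellation as an example:

```python
def replacements_per_year(constellation_size: int, lifetime_years: float) -> float:
    """Steady-state number of satellites that must be launched per year
    just to sustain a constellation with the given average lifetime."""
    return constellation_size / lifetime_years

# Illustrative: a 5,000-satellite LEO constellation on the 5- to 7-year
# replacement cycle mentioned above.
for life in (5, 7):
    print(f"{life}-year lifetime -> ~{replacements_per_year(5_000, life):,.0f} satellites/yr")
```

At a 5-year lifetime, that is on the order of a thousand replacement satellites per year, which helps explain why launch cadence and cost dominate LEO constellation economics.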

SATELLITE BACKHAUL SOLUTIONS.

Figure 8 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb, as well as StarLink with their so-called “Community Gateway”. It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GWs), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: The indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds illustrate the network’s capabilities.

LEO satellites providing backhaul connectivity, as shown in Figure 8 above, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly the long-haul transport networks needed to carry traffic away from remote populated areas. Satellite backhauls not only offer a substantially better financial solution for enhancing internet connectivity to remote areas but are often the only viable solution for connectivity.

Take, for example, Greenland. The world’s largest non-continental island, the size of Western Europe, is characterized by its sparse population and a distinct pattern of settlements unconnected by road, mainly along the west coast (plus a couple of settlements on the east coast), shaped by its vast ice sheet and rugged terrain. With a population of around 56 thousand, primarily concentrated on the west coast, Greenland’s demographic distribution is spread over some 50+ settlements and about 20 towns. Nuuk, the capital, is the island’s most populous city, housing over 18 thousand residents and serving as the administrative, economic, and cultural hub. Terrestrial cellular networks serve the settlements’ and towns’ communication and internet service needs, with traffic carried back to the central switching centers by long-haul microwave links, sea cables, and satellite broadband connectivity. Several settlements’ connectivity needs can only be served by satellite backhaul, e.g., those on the east coast, such as Tasiilaq with ca. 2,000 inhabitants and Ittoqqortoormiit (an awesome name!) with around 400 inhabitants. LEO satellite backhaul solutions serving satellite-only communities, such as those operated and offered by OneWeb (Eutelsat), could provide a backhaul transport solution that matches FWA latency specifications due to better round-trip-time performance than a GEO satellite backhaul solution.

It should also be clear that remote satellite-only settlements and towns may have communications service needs and demand that a localized 4G (or 5G) terrestrial cellular network with a satellite backhaul can serve much better than, for example, individual ad-hoc connectivity solutions such as Starlink. When the area’s total bandwidth demand exceeds the capacity of an FWA satellite service, a localized terrestrial network solution with a satellite backhaul is, in general, better.

LEO satellites offer significantly reduced latency compared to their geostationary counterparts due to their closer proximity to the Earth. This reduction in delay is essential for a wide range of real-time applications and services, from meeting modern radio access (e.g., 4G and 5G) requirements, VoIP, and online gaming to critical financial transactions, enhancing the user experience and broadening the scope of possible services and business.

Among the leading LEO satellite constellations providing backhaul solutions today are SpaceX’s Starlink (via their community gateway), aiming to deliver high-speed internet globally with a preference for direct-to-consumer connectivity; OneWeb, focusing on internet services for businesses and communities in remote areas; Telesat’s Lightspeed, designed to offer secure and reliable connectivity; and Amazon’s Project Kuiper, which plans to deploy thousands of satellites to provide broadband to unserved and underserved communities worldwide.

Satellite backhaul solutions – takeaway:

  • Satellite-backhaul solutions are excellent, cost-effective solutions for providing an existing isolated cellular (and fixed access) network with high-bandwidth connectivity to the internet (such as in remote and deep rural areas).
  • LEO satellites can reduce the need for extensive and very costly ground-based infrastructure by serving as a backhaul solution. For some areas, such as Greenland, the Sahara, or the Brazilian rainforest, it may not be practical or economical to connect by terrestrial-based transmission (e.g., long-haul microwave links or backbone & backhaul fiber) to remote settlements or towns.
  • A LEO-based backhaul solution supports applications and radio access technologies requiring a lower round trip time (RTT < 50 ms) than is possible with a GEO-based satellite backhaul. The achievable RTT, however, will depend on where the LEO satellite ground gateway connects to the internet service provider.
  • The collaborative nature of a satellite-backhaul solution allows the terrestrial operator to focus on and have full control of all its customers’ network experiences, as well as optimize the traffic within its own network infrastructure.
  • LEO satellite backhaul solutions can significantly boost network resilience and availability, providing a secure and reliable connectivity solution.
  • Satellite-backhaul solutions require local ground-based satellite transmission capabilities (e.g., a satellite ground station).
  • The operator should consider that at a certain threshold of low population density, direct-to-consumer satellite services like Starlink might be more economical than constructing a local telecom network that relies on satellite backhaul (see above section on “Fixed Wireless Access (FWA) based LEO satellite solutions”).
  • Satellite backhaul providers require regulatory permits to offer backhaul services. These permits are necessary for several reasons, including the use of radio frequency spectrum, operation of satellite ground stations, and provision of telecommunications services within various jurisdictions.
  • Satellite lifetime in orbit is between 5 and 7 years, depending on the LEO altitude; a MEO satellite (2,000 to 36,000 km altitude) lasts between 10 and 20 years, with GEO at the upper end. This also dictates the modernization and upgrade cycle, as well as the timing of your ROI investment case and refinancing needs.

NEW KIDS ON THE BLOCK – DIRECT-TO-DEVICE LEO SATELLITES.

A recent X-exchange (from March 2nd):

Elon Musk: “SpaceX just achieved peak download speed of 17 Mb/s from a satellite direct to unmodified Samsung Android Phone.” (note: the speed corresponds to a spectral efficiency of ~3.4 Mbps/MHz/beam).

Reply from user: “That’s incredible … Fixed wireless networks need to be looking over their shoulders?”

Elon Musk: “No, because this is the current peak speed per beam and the beams are large, so this system is only effective where there is no existing cellular service. This service works in partnership with wireless providers, like what @SpaceX and @TMobile announced.”
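The spectral-efficiency note above is just the peak rate divided by the beam bandwidth; using the 5 MHz of T-Mobile PCS spectrum mentioned later in the article, the arithmetic works out as:

```python
# Spectral efficiency implied by the quoted peak: 17 Mb/s delivered in a
# 5 MHz slice of PCS spectrum (both values from the article).
peak_rate_mbps = 17.0
bandwidth_mhz = 5.0

spectral_efficiency = peak_rate_mbps / bandwidth_mhz  # Mbps/MHz == bit/s/Hz
print(f"{spectral_efficiency:.1f} bit/s/Hz per beam")  # 3.4 bit/s/Hz per beam
```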

Figure 9 illustrates LEO satellite direct-to-device communication in a remote area without any terrestrial communications infrastructure, the satellite being the only means of communication, whether via a normal mobile device or a classical satphone. (Courtesy: DALL-E).

Low Earth Orbit (LEO) Satellite Direct-to-Device technology enables direct communication between satellites in orbit and standard mobile devices, such as smartphones and tablets, without requiring additional specialized hardware. This technology promises to extend connectivity to remote, rural, and underserved areas globally, where traditional cellular network infrastructure is absent or economically unfeasible to deploy. The system can offer lower-latency communication by leveraging LEO satellites, which orbit closer to Earth than geostationary satellites, making it more practical for everyday use. The round trip time (RTT), the time it takes for the signal to travel from the satellite to the mobile device and back, is ca. 4 milliseconds for a LEO satellite at 550 km, compared to ca. 240 milliseconds for a geosynchronous satellite (at 36 thousand kilometers altitude).
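The quoted round-trip times follow directly from the propagation distance; here is a minimal sketch, assuming the satellite passes directly overhead (slant range = altitude) and ignoring all processing and queuing delays:

```python
# Back-of-the-envelope RTT for the satellite-to-device hop.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def satellite_rtt_ms(altitude_km: float) -> float:
    """Signal up and back down: RTT = 2 * altitude / c, in milliseconds."""
    return 2 * altitude_km / C_KM_PER_S * 1_000

print(f"LEO @ 550 km:    {satellite_rtt_ms(550):.1f} ms")     # ~3.7 ms
print(f"GEO @ 36,000 km: {satellite_rtt_ms(36_000):.0f} ms")  # ~240 ms
```

The real slant range grows as the satellite moves toward the horizon, so these are best-case numbers for the radio hop alone.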

The key advantage of a satellite in low Earth orbit is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it is, in general, very low. In other words, the cellular signal propagation from a LEO satellite closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial mobile network do not apply to our satellite. In simpler terms, the signal propagation directly from the satellite to the mobile device is less compromised than it typically would be from a terrestrial cellular tower to the same mobile device. The difference between free-space propagation, which considers only distance and frequency, and terrestrial propagation models, which quantify all the gains and losses experienced by a terrestrial cellular signal, is very substantial and in favor of free-space propagation. As our Earth-bound cellular intuition of signal propagation often gets in the way of understanding the signal propagation from a satellite (or an antenna in the sky in general), I recommend writing down the math using the free-space propagation loss formula and comparing it with terrestrial cellular link budget models, such as the COST 231-Hata model (relatively simple) or the more recent 3GPP TR 38.901 model (complex). In rural and suburban areas, depending on the environment, indoor coverage may be marginally worse, fairly similar, or even better than from a terrestrial cell tower at a distance. This applies to both the uplink and downlink communications channel between the mobile device and the LEO satellite, and it is also the reason why higher frequencies (with more frequency bandwidth available) can work better on LEO satellites than in a terrestrial cellular network.
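Here is a sketch of that exercise: free-space path loss for a 550 km overhead LEO link versus the COST 231-Hata urban model for a 5 km terrestrial link, both at 1900 MHz. The 5 km cell radius and the 30 m / 1.5 m antenna heights are illustrative assumptions, not figures from the text:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: only distance and frequency matter."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

def cost231_hata_db(distance_km: float, freq_mhz: float,
                    h_base_m: float = 30.0, h_mobile_m: float = 1.5) -> float:
    """COST 231-Hata model (medium city), valid roughly 1500-2000 MHz, 1-20 km."""
    a_hm = ((1.1 * math.log10(freq_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(freq_mhz) - 0.8))  # mobile antenna correction
    return (46.3 + 33.9 * math.log10(freq_mhz) - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(distance_km))

# 550 km straight up to a LEO satellite vs. 5 km across an urban area:
print(f"Free space, 550 km @ 1900 MHz:   {fspl_db(550, 1900):.1f} dB")        # ~152.8 dB
print(f"COST 231-Hata, 5 km @ 1900 MHz:  {cost231_hata_db(5, 1900):.1f} dB")  # ~161.6 dB
```

Under these assumptions the 5 km urban terrestrial path loses more signal than the 550 km free-space satellite path, which is exactly the counter-intuitive point made above.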

However, despite its potential to dramatically expand coverage (after all, that is what satellites do), LEO satellite direct-to-device technology is not a replacement for terrestrial cellular services and communications infrastructure, for several reasons: (a) Although the spectral efficiency can be excellent, the frequency bandwidth (in MHz) and data speeds (in Mbps) available through satellite connections are typically lower than those provided by ground-based cellular networks, limiting its use for high-bandwidth applications. (b) Satellite-based D2D services are, in general, capacity-limited and might not handle the higher user densities typical of urban areas as efficiently as terrestrial networks, which are designed to accommodate large numbers of users through dense deployment of cell towers. (c) Environmental factors like buildings or bad weather can impact the reliability and quality of satellite communications more significantly than terrestrial services. (d) A satellite D2D service requires regulatory approval (per country), as the D2D frequency will typically be licensed for terrestrial cellular services and will have to be coordinated and managed with any terrestrial use to avoid service degradation (or disruption) for customers of terrestrial cellular networks using the same frequency. The satellites will have to be able to switch off their D2D service when covering jurisdictions that have not granted approval or where the relevant frequency/frequencies are in terrestrial use.

Using the NewSpace Index database, updated December 2023, there are currently more than 8,000 Direct-to-Device (D2D), or Direct-to-Cell (D2C), satellites planned for launch, with SpaceX’s Starlink v2 accounting for 7,500. The rest, 795 satellites, are distributed across 6 other satellite operators (e.g., AST SpaceMobile, Sateliot (Spain), Inmarsat (HEO orbit), Lynk, …). Looking at satellites designed for IoT connectivity, we get 5,302 in total, with 4,739 (not including Starlink) still planned, distributed over 50+ satellite operators. The average IoT satellite constellation, including what is currently planned, is ~95 satellites, with the majority targeted for LEO. The satellite operators included in the 50+ count have confirmed funding of at least US$2 billion (half of the operators have confirmed funding without a disclosed amount). About 2,937 satellites (435 launched) are planned to serve IoT markets only (note: this seems a bit excessive to me). Swarm Technologies, a SpaceX subsidiary, ranks number 1 in terms of both launched and planned satellites, having launched at least 189 CubeSats (both 0.25U and 1U types) with an additional 150 planned. The second-ranked IoT-only operator is Orbcomm, with 51 satellites launched and an additional 52 planned. The remaining IoT-specific satellite operators have launched 5 satellites on average and plan, on average, to launch 55 (over 42 constellations).

There are also 3 satellite operators (i.e., Chinese-based Galaxy Space: 1,000 LEO sats; US-based Mangata Networks: 791 MEO/HEO sats; and US-based Omnispace: 200 LEO(?) sats) that have planned a total of ~2,000 satellites to support 5G applications with their satellite solutions, and one operator (i.e., Hanwha Systems) has planned 2,000 LEO satellites for 6G.

The emergence of LEO satellite direct-to-device (D2D) services, as depicted in Figure 10 below, is at the forefront of satellite communication innovations, offering a direct line of connectivity between devices that bypasses the need for traditional ground-based cellular network infrastructure (e.g., cell towers). This approach benefits from the relatively short distance of hundreds of kilometers between LEO satellites and the Earth, reducing communication latency and broadening bandwidth capabilities compared to their geostationary counterparts. One of the key advantages of LEO D2D services is their ability to provide global coverage with an extensive number of satellites, i.e., in the 100s to 1,000s depending on the targeted quality of service, ensuring that even the most remote and underserved areas have access to reliable communication channels. They are also critical for disaster resilience, maintaining communications when terrestrial networks fail due to emergencies or natural disasters.

Figure 10 This schematic presents the network architecture for satellite-based direct-to-device (D2D) communication facilitated by Low Earth Orbit (LEO) satellites, exemplified by collaborations like Starlink and T-Mobile US, Lynk Mobile, and AST SpaceMobile. It illustrates how satellites in LEO enable direct connectivity between user equipment (UE), such as standard mobile devices and IoT (Internet of Things) devices, using terrestrial cellular frequencies and VHF/UHF bands. The system also shows inter-satellite links operating in the Ka-band for seamless network integration, with satellite gateways (GW) linking the space-based network to ground infrastructure, including Points of Presence (PoP) and Internet Exchange Points (IXP), which connect to the wider internet (WWW). This architecture supports innovative services like Omnispace and Astrocast, offering LEO satellite IoT connectivity. The network could be particularly crucial for defense and special operations in remote and challenging environments, such as deserts or the Arctic regions of Greenland, where terrestrial networks are unavailable. As shown here, using regular terrestrial cellular frequencies in both the downlink (~300 MHz to 7 GHz) and uplink (900 MHz or lower to 2.1 GHz) ensures robust and versatile communication capabilities in diverse operational contexts.

While the majority of the 5,000+ satellite Starlink constellation operates at 13 GHz (Ku-band), at the beginning of 2024 SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, provides texting capabilities across the USA in areas with no or poor existing cellular coverage. This is fairly similar to services presently offered in comparable coverage situations by, for example, AST SpaceMobile, OmniSpace, and Lynk Global LEO satellite services, with reported maximum downlink speeds approaching 20 Mbps. So-called Direct-to-Device, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years and continue to increase the supported user speeds (i.e., utilized terrestrial cellular spectrum) and system capacity in terms of smaller coverage areas and a higher number of satellite beams.

Table 1 below provides an overview of the top 13 LEO satellite constellations targeting (fixed) internet services (e.g., Ku band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell, D2C) services. The data has been compiled from the NewSpace Index website, with data as of 31st of December 2023. The top-constellation rank is based on the number of satellites launched by the end of 2023. Two additional D2C (or D2D) LEO satellite constellations are planned for 2024–2025: one is SpaceX’s Starlink 2nd generation, launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets; the other is Inmarsat’s Orchestra satellite constellation, based on L-band for mobile terrestrial services and Ka-band for fixed broadband services. One new constellation (Mangata Networks, see also the NewSpace constellation information) targets 5G services, with two 5G constellations already launched: Galaxy Space (Yinhe) has launched 8 LEO satellites, with 1,000 planned using Q- and V-bands (i.e., not a D2D cellular 5G service), and OmniSpace has launched two satellites and appears to have planned a total of 200. Moreover, there is currently one planned constellation targeting 6G, by the South Korean Hanwha Group (a bit premature, but interesting to follow nevertheless), with 2,000 6G LEO satellites planned.

Most currently launched and planned satellite constellations offering (or planning to provide) Direct-to-Cell services, including IoT and M2M, are designed for low-frequency-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

Table 1 An overview of the Top-14 LEO satellite constellations targeting (fixed) internet services (e.g., Ku band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell) services. The data has been compiled from the NewSpace Index website, with data as of 31st of December 2023.

The deployment of LEO D2D services also navigates a complicated regulatory landscape, with the need for harmonized spectrum allocation across different regions. Managing interference with terrestrial cellular networks and other satellite operations is another interesting challenge albeit complex aspect, requiring sophisticated solutions to ensure signal integrity. Moreover, despite the cost-effectiveness of LEO satellites in terms of launch and operation, establishing a full-fledged network for D2D services demands substantial initial investment, covering satellite development, launch, and the setup of supporting ground infrastructure.

LEO satellites with D2D-based capabilities – takeaway:

  • Provides lower-bandwidth services (e.g., GPRS/EDGE/HSDPA-like) where no existing terrestrial cellular service is present.
  • (Re-)use of the terrestrial cellular spectrum on the satellite.
  • D2D-based satellite services may become crucial in business continuity scenarios, providing redundancy and increased service availability to existing terrestrial cellular networks. This is particularly essential as a remedy for emergency response personnel in case terrestrial networks are not functional. Capacity is limited (due to the small assigned frequency bandwidth) over a large coverage area serving rural and remote areas with little or no cellular infrastructure.
  • Securing regulatory approval for satellite services over independent jurisdictions is a complex and critical task for any operator looking to provide global or regional satellite-based communications. The satellite operator may have to switch off transmission over jurisdictions where no permission has been granted.
  • If the spectrum is also deployed on the ground, satellite use of it must be managed and coordinated (due to interference) with the terrestrial cellular networks.
  • Requires lightly utilized or unutilized cellular spectrum in the terrestrial operator’s spectrum portfolio.
  • D2D-based communications require a more complex and sophisticated satellite design, including the satellite antenna, resulting in higher manufacturing and launch costs.
  • The IoT-only commercial satellite constellation “space” is crowded, with a total of 44 constellations (note: a few operators have several constellations). I assume that many of those plans will eventually not be realized. Note that SpaceX’s Swarm Technologies is leading in terms of total numbers (per the NewSpace Index database) and will remain a leader through the sheer number of satellites once its plan has been realized. I expect we will see a Chinese constellation in this space as well, unless the capability is built into the Guo Wang constellation.
  • Satellite lifetime in orbit is between 5 and 7 years, depending on the altitude. This timeline also dictates the modernization and upgrade cycle, as well as the timing of your ROI investment and refinancing needs.
  • Today’s D2D satellite systems are frequency-bandwidth limited. However, if so designed, satellites could provide a frequency asymmetric satellite-to-device connection. For instance, the downlink from the satellite to the device could utilize a high frequency (not used in the targeted rural or remote area) and a larger bandwidth, while the uplink communication between the terrestrial device and the LEO satellite could use a sufficiently lower frequency and smaller frequency bandwidth.

MAKERS OF SATELLITES.

In the rapidly evolving space industry, a diverse array of companies specializes in manufacturing satellites for Low Earth Orbit (LEO), ranging from small CubeSats to larger satellites for constellations similar to those used by OneWeb (UK) and Starlink (USA). Among these, smaller companies like NanoAvionics (Lithuania) and Tyvak Nano-Satellite Systems (USA) have carved out niches by focusing on modular and cost-efficient small satellite platforms typically below 25 kg. NanoAvionics is renowned for its flexible mission support, offering everything from design to operation services for CubeSats (e.g., 1U, 3U, 6U) and larger small satellites (100+ kg). Similarly, Tyvak excels in providing custom-made solutions for nano-satellites and CubeSats, catering to specific mission needs with a comprehensive suite of services, including design, manufacturing, and testing.

UK-based Surrey Satellite Technology Limited (SSTL) stands out for its innovative approach to small, cost-effective satellites for various applications, achieving the desired system performance, reliability, and mission objectives at a lower cost than traditional satellite projects, which easily run into hundreds of millions of USD. SSTL’s commitment to delivering satellites that balance performance and budget has made it a popular satellite manufacturer globally.

On the larger end of the spectrum, companies like SpaceX (USA) and Thales Alenia Space (France-Italy) are making significant strides in satellite manufacturing at scale. SpaceX has ventured beyond its foundational launch services to produce thousands of small satellites (250+ kg) for its Starlink broadband constellation, which comprises 5,700+ LEO satellites, showcasing mass satellite production. Thales Alenia Space offers reliable satellite platforms and payload integration services for LEO constellation projects.

With their extensive expertise in aerospace and defense, Lockheed Martin Space (USA) and Northrop Grumman (USA) produce various satellite systems suitable for commercial, military, and scientific missions. Their ability to support large-scale satellite constellation projects from design to launch demonstrates high expertise and reliability. Similarly, aerospace giants Airbus Defense and Space (EU) and Boeing Defense, Space & Security (USA) offer comprehensive satellite solutions, including designing and manufacturing small satellites for LEO. Their involvement in high-profile projects highlights their capacity to deliver advanced satellite systems for a wide range of use cases.

Together, these companies, from smaller specialized firms to global aerospace leaders, play crucial roles in the satellite manufacturing industry. They enable a wide array of LEO missions, catering to the burgeoning demand for satellite services across telecommunications, Earth observation, and beyond, thus facilitating access to space for diverse clients and applications.

ECONOMICS.

Before going into details, let’s spend some time on an example illustrating the basic components required for building a satellite and getting it to launch. Here, I point to a super cool alternative to the above-mentioned companies, the USA-based startup Apex, co-founded by CTO Max Benassi (ex-SpaceX and Astra) and CEO Ian Cinnamon. To get an impression of the macro-components of a satellite system, I recommend checking out the Apex webpage and “playing” with their satellite configurator. The basic package comes at a price tag of USD 3.2 million and a 9-month delivery window. It includes a 100 kg satellite bus platform, a power system, a communication system based on X-band (8–12 GHz), and a guidance, navigation, and control package. The basic package does not include a solar array drive assembly (SADA), which plays a critical role in the operation of satellites by ensuring that the satellite’s solar panels are optimally oriented toward the Sun; adding the SADA costs an additional USD 500 thousand. Also, the propulsion mechanism (e.g., chemical or electric; in general, there are more possibilities) is not provided (+ USD 450 thousand), nor are any services included (e.g., payload & launch vehicle integration and testing, USD 575 thousand). Including SADA, propulsion, and services, Apex will have a satellite launch-ready for close to USD 4.8 million.
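Summing the configurator line items quoted above makes the total explicit (prices as quoted in the text; they may have changed on the configurator since):

```python
# Tallying the Apex example configuration (figures as quoted in the text).
apex_config_usd = {
    "base package (100 kg bus, power, X-band comms, GNC)": 3_200_000,
    "solar array drive assembly (SADA)": 500_000,
    "propulsion": 450_000,
    "services (payload & launch vehicle integration, testing)": 575_000,
}
total = sum(apex_config_usd.values())
print(f"Launch-ready satellite (excl. payload and launch): USD {total:,}")
# -> USD 4,725,000, i.e., close to USD 4.8 million
```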

However, we are not done. The above solution still needs to include the so-called payload, which relates to the equipment or instruments required to perform the LEO satellite mission (e.g., broadband communications services), the actual satellite launch itself, and the operational aspects of a successful post-launch (i.e., ground infrastructure and operation center(s)).

Let’s take SpaceX’s Starlink satellite as an example to illustrate mission and payload more clearly. The Starlink satellite’s primary mission is to provide fixed-wireless-access broadband internet to an Earth-based fixed antenna. The Starlink payload primarily consists of advanced broadband internet transmission equipment designed to provide high-speed internet access across the globe. This includes phased-array antennas for communication with user terminals on the ground, high-frequency radio transceivers to facilitate data transmission, and inter-satellite links allowing satellites to communicate in orbit, enhancing network coverage and data throughput.

The economic aspects of launching a Low Earth Orbit (LEO) satellite project span a broad spectrum of costs, from the initial concept phase to deployment and operational management. These projects commence with research and development, where significant investments are made in design, engineering, and the iterative process of prototyping and testing to ensure the satellite meets its intended performance and reliability standards in harsh space conditions (e.g., vacuum, extreme temperature variations, radiation, solar flares, high-velocity impacts with micrometeoroids and man-made space debris, erosion, …).

Manufacturing the satellite involves additional expenses, including procuring high-quality components that can withstand space conditions and assembling and integrating the satellite bus with its mission-specific payload. Ensuring the highest quality standards throughout this process is crucial to minimizing the risk of in-orbit failure, which can substantially increase project costs. The payload should be seen as the heart of the satellite’s mission. It could be a set of scientific instruments for measuring atmospheric data, optical sensors for imaging, transponders for communication, or any other equipment designed to fulfill the satellite’s specific objectives. The payload will vary greatly depending on the mission, whether for Earth observation, scientific research, navigation, or telecommunications.

Of course, there are many other types and more affordable options for LEO satellites than a Starlink-like one (although we should also not ignore the achievements of SpaceX and learn from them as much as possible). As seen from Table 1, we have a range of substantially smaller satellite types or form factors. The 1U (i.e., one unit) CubeSat is a satellite with a form factor of 10 x 10 x 11.35 cm that weighs no more than 1.33 kilograms. A rough cost range for manufacturing a 1U CubeSat could be from USD 50 to 100+ thousand, depending on mission complexity and payload components (e.g., commercial off-the-shelf or application- or mission-specific design). The range covers the costs associated with the satellite’s design, components, assembly, testing, and initial integration efforts. It does not, however, include other significant costs associated with satellite missions, such as launch services, ground station operations, mission control, and insurance, which are likely to (significantly) increase the total project cost. Furthermore, we have additional form factors: the 3U CubeSat (10 x 10 x 34.05 cm, <4 kg), with manufacturing costs in the range of USD 100 to 500+ thousand; the 6U CubeSat (20 x 10 x 34 cm, <12 kg), which can carry more complex payload solutions than the smaller 1U and 3U, with manufacturing costs in the range of USD 200 thousand to USD 1+ million; and 12U satellites (20 x 20 x 34 cm, <24 kg), which again support complex payload solutions and will in general be significantly more expensive to manufacture.
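As a compact recap of the form factors just described (dimensions, mass limits, and rough manufacturing cost ranges are the figures from the text; launch, ground segment, operations, and insurance are excluded):

```python
# CubeSat form factors and rough manufacturing cost ranges (from the text).
cubesat_specs = {
    #      ((width, depth, height) cm, max kg, (low, high) manufacturing USD)
    "1U": ((10, 10, 11.35), 1.33, (50_000, 100_000)),
    "3U": ((10, 10, 34.05), 4.0, (100_000, 500_000)),
    "6U": ((20, 10, 34.0), 12.0, (200_000, 1_000_000)),
}
for name, ((w, d, h), max_kg, (lo_usd, hi_usd)) in cubesat_specs.items():
    print(f"{name}: {w} x {d} x {h} cm, <{max_kg} kg, USD {lo_usd:,}-{hi_usd:,}+")
```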

Securing a launch vehicle is one of the most significant expenditures in a satellite project. This cost not only includes the price of the rocket and launch itself but also encompasses integration, pre-launch services, and satellite transportation to the launch site. Beyond the launch, establishing and maintaining the ground segment infrastructure, such as ground stations and a mission control center, is essential for successful satellite communication and operation. These facilities enable ongoing tracking, telemetry, and command operations, as well as the processing and management of the data collected by the satellite.

The SpaceX Falcon rocket is used extensively by other satellite businesses (see Table 1 above) as well as by SpaceX for their own Starlink constellation network. The rocket has a payload capability of ca. 23 thousand kg and a volume handling capacity of approximately 300 cubic meters. SpaceX has launched around 60 Starlink satellites per Falcon 9 mission for the first-generation satellites. The launch cost per 1st-generation satellite would then be around USD 1 million, using the previously quoted USD 62 million (2018 figure) for a Falcon 9 launch. The second-generation Starlink satellites are substantially more advanced than the 1st generation. They are also heavier, weighing around a thousand kilograms. A Falcon 9 would only be able to launch around 20 generation-2 satellites (considering only the weight limitation), while a Falcon Heavy could lift ca. 60 2nd-gen. satellites, but at a higher price point of USD 90 million (2018 figure). Thus, the launch cost per satellite would be between USD 1.5 million using Falcon Heavy and USD 3.1 million using Falcon 9. Although the launch costs are based on 2018 price figures, the efficiency gained from re-use may have kept costs level or reduced them further, particularly with Falcon Heavy.
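The per-satellite arithmetic above, written out explicitly (2018 list prices and per-launch satellite counts as quoted in the text):

```python
# Launch cost per Starlink satellite: launch price divided by satellites carried.
def cost_per_satellite(launch_price_usd: float, satellites_per_launch: int) -> float:
    return launch_price_usd / satellites_per_launch

print(cost_per_satellite(62e6, 60))  # gen-1 on Falcon 9: ~USD 1.03 million
print(cost_per_satellite(62e6, 20))  # gen-2 on Falcon 9: USD 3.1 million
print(cost_per_satellite(90e6, 60))  # gen-2 on Falcon Heavy: USD 1.5 million
```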

Satellite businesses looking to launch small volumes of satellites, such as CubeSats, have a variety of strategies at their disposal to manage launch costs effectively. One widely adopted approach is participating in rideshare missions, where the expenses of a single launch vehicle are shared among multiple payloads, substantially reducing the cost for each operator. This method is particularly attractive due to its cost efficiency and the regularity of missions offered by, for example, SpaceX. Prices for rideshare missions can range from a few thousand dollars for very small payloads (like CubeSats) to several hundred thousand dollars for larger small satellites; SpaceX, for example, advertises rideshare prices starting at USD 1 million for payloads up to 200 kg. Alternatively, dedicated small-launcher services cater specifically to the needs of small satellite operators, offering more tailored launch options in terms of timing and desired orbit. Launch services from companies such as Rocket Lab (USA) and Astra (USA) provide flexibility that rideshare missions might not, although at a slightly higher cost. However, these costs remain significantly lower than arranging a dedicated launch on a larger vehicle. For example, Rocket Lab’s Electron rocket, specializing in launching small satellites, offers dedicated launches with prices starting around USD 7 million for the entire launch vehicle carrying up to 300 kg. Astra has reported prices in the range of USD 2.5 million for a dedicated LEO launch with their (discontinued) Rocket 3, with payloads of up to 150 kg. The cost for individual small satellites will depend on their share of the payload mass and the specific mission requirements.

Satellite ground stations, which consist of arrays of phased-array antennas, are critical for managing the satellite constellation, routing internet traffic, and providing users with access to the satellite network. These stations are strategically located to maximize coverage and minimize latency, ensuring that at least one ground station is within line of sight of the satellites as they orbit the Earth. As of mid-2023, Starlink operated around 150 ground stations worldwide (also called Starlink Gateways), with 64 live and an additional 33 planned in the USA. The cost of constructing a ground station would be between USD 300 thousand and half a million, not including the physical access point, also called the point-of-presence (PoP), and the transport infrastructure connecting the PoP (and gateway) to the internet exchange where we find the internet service providers (ISPs) and the content delivery networks (CDNs). The PoP may add another USD 100 to 200 thousand to the ground infrastructure unit cost. The transport cost from the gateway to the internet exchange can vary a lot depending on the gateway’s location.

Insurance is a critical component of the financial planning for a satellite project, covering risks associated with both the launch phase and the satellite’s operational period in orbit. These insurances generally run at between 5% and 20% of the total project cost, depending on the satellite value, the track record of the launch vehicle, mission complexity, and mission duration (typically 5 to 7 years for a LEO satellite at 500 km). Insurance can be broken up into launch insurance and insurance covering the satellite once it is in orbit.

Operational costs, the Opex, include the day-to-day expenses of running the satellite, from staffing and technical support to ground station usage fees.

Regulatory and licensing fees, including frequency allocation and orbital slot registration, ensure the satellite operates without interfering with other space assets. Finally, at the end of the satellite’s operational life, costs associated with safely deorbiting the satellite are incurred to comply with space debris mitigation guidelines and ensure a responsible conclusion to the mission.

The total cost of an LEO satellite project can vary widely, influenced by the satellite’s complexity, mission goals, and lifespan. Effective project management and strategic decision-making are crucial to navigating these expenses, optimizing the project’s budget, and achieving mission success.

Figure 11 illustrates an LEO CubeSat orbiting above the Earth, capturing the satellite’s compact design and its role in modern space exploration and technology demonstration. Note that the CubeSat design comes in several standardized dimensions, with the reference design, also called 1U, being roughly one thousandth of a cubic meter (a 10 cm cube) and weighing no more than 1.33 kg. More advanced CubeSats would typically be 6U or larger.

CubeSats (e.g., 1U, 3U, 6U, 12U):

  • Manufacturing Cost: Ranges from USD 50,000 for a simple 1U CubeSat to over USD 1 million for more complex missions supported by a 6U (or larger) CubeSat with advanced payloads (a 12U may amount to several million US dollars).
  • Launch Cost: This can vary significantly depending on the launch provider and the rideshare opportunities, ranging from a few thousand dollars for a 1U CubeSat on a rideshare mission to several million dollars for a dedicated launch of larger CubeSats or small satellites.
  • Operational Costs: Ground station services, mission control, and data handling can add tens to hundreds of thousands of dollars annually, depending on the mission’s complexity and duration.

Small Satellites (25 kg up to 500 kg):

  • Manufacturing Cost: Ranges from USD 500,000 to over 10 million, depending on the satellite’s size, complexity, and payload requirements.
  • Launch Cost: While rideshare missions can reduce costs, dedicated launches for small satellites can range from USD 10 million to 62 million (e.g., Falcon 9) and beyond (e.g., USD 90 million for Falcon Heavy).
  • Operational Costs: These are similar to CubeSats, but potentially higher due to the satellite’s larger size and more complex mission requirements, reaching several hundred thousand to over a million dollars annually.

The range for the total project cost of a LEO satellite:

Given these considerations, the total cost range for a LEO satellite project can vary from as low as a few hundred thousand dollars for a simple CubeSat project utilizing rideshare opportunities and minimal operational requirements to hundreds of millions of dollars for more complex small satellite missions requiring dedicated launches and extensive operational support.
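As a back-of-the-envelope aid, the cost components discussed above can be combined into a rough total-cost model. The sketch below is illustrative only: for simplicity it applies the 5–20% insurance rate to manufacturing plus launch cost rather than to the full project cost, and the two example missions use the low and high ends of the ranges quoted above.

```python
# A rough LEO project cost model combining the line items discussed above:
# manufacturing, launch, insurance, and annual operations over the mission
# lifetime. All inputs are illustrative, taken from the ranges in the text.

def leo_project_cost(manufacturing_usd: float, launch_usd: float,
                     annual_opex_usd: float, mission_years: int,
                     insurance_rate: float = 0.10) -> float:
    """Total cost of ownership for a single LEO satellite mission.

    Simplification: insurance is applied to manufacturing + launch only.
    """
    base = manufacturing_usd + launch_usd + annual_opex_usd * mission_years
    insurance = insurance_rate * (manufacturing_usd + launch_usd)
    return base + insurance

# Low end: a simple 1U CubeSat on a rideshare, short mission
cubesat = leo_project_cost(50_000, 10_000, 30_000, 2, insurance_rate=0.05)

# High end: a small satellite on a dedicated Falcon 9, 7-year mission
smallsat = leo_project_cost(10e6, 62e6, 1e6, 7, insurance_rate=0.15)

print(f"1U CubeSat mission: ~USD {cubesat / 1e3:.0f} thousand")
print(f"Small-sat mission:  ~USD {smallsat / 1e6:.1f} million")
```

The two results land at roughly USD 120 thousand and USD 90 million respectively, matching the "few hundred thousand to hundreds of millions" span stated above.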

It is important to note that these are rough estimates, and the actual cost can vary based on specific mission requirements, technological advancements, and market conditions.

CAPACITY AND QUALITY

Figure 12 Satellite-based cellular capacity (or quality), measured by the unit or total throughput in Mbps, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/beam) times the number of satellite beams resulting in cells on the ground.

The overall capacity, and quality, of a satellite communication system, given in Mbps, is at a high level the product of three key factors: (i) the amount of frequency bandwidth in MHz allocated to the satellite operation, multiplied by (ii) the effective spectral efficiency in Mbps per MHz over a unit satellite-beam coverage area, multiplied by (iii) the number of satellite beams that provide the resulting terrestrial cell coverage. In other words:

Satellite Capacity (in Mbps) =
Frequency Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Beam ×
Number of Beams (or Cells)

Consider a satellite system supporting 8 beams (and thus an equivalent number of terrestrial coverage cells), each with 250 MHz allocated within the same spectral frequency range. Each beam can efficiently support ca. 680 Mbps, achieved with an antenna setup that effectively provides a spectral efficiency of ~2.7 Mbps/MHz/cell (or beam) in the downlink (i.e., from the satellite to the ground). Moreover, the satellite will typically have another frequency and antenna configuration that establishes a robust connection to the ground station, which connects to the internet via, for example, third-party internet service providers. The 680 Mbps is then shared among the users within the satellite beam’s coverage; e.g., with 100 customers demanding a service, the speed each would experience on average would be around 7 Mbps. This may not seem very impressive compared to the cellular speeds we are used to getting from an LTE or 5G terrestrial cellular service. However, such speeds are, of course, much better than having no means of connecting to the internet at all.
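The worked example can be reproduced directly from the capacity formula above (the per-beam figure is quoted in the text as "ca. 680 Mbps" after rounding):

```python
# Satellite capacity = bandwidth (MHz) x spectral efficiency (Mbps/MHz/beam)
# x number of beams, applied to the 8-beam example in the text.

def satellite_capacity_mbps(bandwidth_mhz: float,
                            spectral_eff_mbps_per_mhz: float,
                            beams: int):
    """Return (per-beam throughput, total system throughput) in Mbps."""
    per_beam = bandwidth_mhz * spectral_eff_mbps_per_mhz
    return per_beam, per_beam * beams

per_beam, total = satellite_capacity_mbps(250, 2.7, 8)
print(f"Per beam: {per_beam:.0f} Mbps, system total: {total:.0f} Mbps")

# 100 active users sharing one beam
print(f"Average per user: {per_beam / 100:.2f} Mbps")
```
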

Higher frequencies (i.e., in the GHz range) used to provide terrestrial cellular broadband services are in general quite sensitive to the terrestrial environment and non-LoS propagation. It is a basic principle of physics that signal propagation characteristics, including the range and penetration capability of an electromagnetic wave, are inversely related to its frequency. Vegetation and terrain become increasingly critical factors in higher-frequency propagation and the resulting quality of coverage. For example, trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength. Terrain often includes varied topographies such as housing, hills, valleys, and flat plains, each affecting signal reach differently. For instance, built-up, hilly, or mountainous areas may cause signal shadowing and reflection, while flat terrain offers less obstruction, enabling signals to travel further. Cellular mobile operators tend to like high frequencies (GHz) for cellular broadband services because substantially more system throughput, in bits per second, becomes available to deliver to demanding customers than at frequencies in the MHz range; as can be observed in Figure 12 above, frequency bandwidth is a multiplier for satellite capacity and quality. At the same time, operators tend to “dislike” higher frequencies because their poorer propagation conditions in terrestrially based cellular networks result in the need for increased site densification at a significant incremental capital expense.

The key advantage of a LEO satellite is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the cellular signal propagation from a satellite closely approximates that of free space. Thus, the various environmental signal-loss factors we must consider for a standard terrestrial mobile network do not apply to our satellite, which only has to overcome the distance from the satellite antenna to the ground.
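That remaining distance can be quantified with the standard free-space path loss (FSPL) formula. The altitude and frequency below are illustrative assumptions (a typical ~550 km LEO orbit and a 12 GHz Ku-band downlink), not figures from the text; the comparison with GEO shows why LEO enjoys a large link-budget advantage.

```python
import math

# Free-space path loss (FSPL), the dominant loss term for a LEO satellite
# with a clear line-of-sight to the ground. Inputs here are illustrative:
# ~550 km is a typical LEO altitude, 12 GHz a Ku-band downlink frequency.

def fspl_db(distance_km: float, frequency_ghz: float) -> float:
    """FSPL in dB = 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_ghz) + 92.45

print(f"LEO,  550 km @ 12 GHz:  {fspl_db(550, 12):.1f} dB")
print(f"GEO, 35786 km @ 12 GHz: {fspl_db(35786, 12):.1f} dB")
```

The ~36 dB difference between the two orbits is a factor of several thousand in received power, which is why LEO constellations can serve small user terminals that a GEO link of the same power budget could not.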

Let us first look at the frequency component of the satellite capacity and quality formula above:

FREQUENCY SPECTRUM FOR SATELLITES.

The satellite frequency spectrum encompasses a range of electromagnetic frequencies allocated specifically for satellite communication. These frequencies are divided into bands, commonly known as L-band (e.g., mobile broadband), S-band (e.g., mobile broadband), C-band, X-band (e.g., mainly used by military), Ku-band (e.g., fixed broadband), Ka-band (e.g., fixed broadband), and V-band. Each serves different satellite applications due to its distinct propagation characteristics and capabilities. The spectrum bandwidth used by satellites refers to the width of the frequency range that a satellite system is licensed to use for transmitting and receiving signals.

Careful management of satellite spectrum bandwidth is critical to prevent interference with terrestrial communications systems. Since both satellite and terrestrial systems can operate on similar frequency ranges, there is a potential for crossover interference, which can degrade the performance of both systems. This is particularly important for bands like C-band and Ku-band, which are also used for terrestrial cellular networks and other applications like broadcasting.

Using the same spectrum for both satellite and terrestrial cellular coverage within the same geographical area is challenging due to the risk of interference. Satellites transmit signals over vast areas, and if those signals are on the same frequency as terrestrial cellular systems, they can overpower the local ground-based signals, causing reception issues for users on the ground. Conversely, the uplink signals from terrestrial sources can interfere with the satellite’s ability to receive communications from its service area.

Regulatory bodies such as the International Telecommunication Union (ITU) are crucial in mitigating these interference issues. They coordinate the allocation of frequency bands and establish regulations that govern their use. This includes defining geographical zones where certain frequencies may be used exclusively for either terrestrial or satellite services, as well as setting limits on signal power levels to minimize the chance of interference. Additionally, technology solutions like advanced filtering, beam shaping, and polarization techniques are employed to further isolate satellite communications from terrestrial systems, ensuring that both may coexist and operate effectively without mutual disruption.

The International Telecommunication Union (ITU) has designated several frequency bands for Fixed Satellite Services (FSS) and Mobile Satellite Services (MSS) that can be used by satellites operating in Low Earth Orbit (LEO). The specific bands allocated for FSS and MSS are determined by the ITU’s Radio Regulations, which are periodically updated to reflect the evolving needs of global telecommunications and to address emerging technologies. Here are some of the key frequency bands commonly considered for FSS and MSS with LEO satellites:

V-Band 40 GHz to 75 GHz (microwave frequency range).
The V-band is appealing for Low Earth Orbit (LEO) satellite constellations designed to provide global broadband internet access. LEO satellites can benefit from the V-band’s capacity to support high data rates, which is essential for serving densely populated areas and delivering competitive internet speeds. The reduced path loss at lower altitudes, compared to GEO, also makes the V-band a viable option for LEO satellites. Due to its higher frequencies, the V-band is also significantly more sensitive to atmospheric attenuation (e.g., oxygen absorption around 60 GHz), including rain fade, which is likely to affect signal integrity. This necessitates the development of advanced technologies for adaptive coding and modulation, power amplification, and beamforming to ensure reliable communication under various weather conditions. Several LEO satellite operators have expressed an interest in operationalizing the V-band in their constellations (e.g., Starlink, OneWeb, Kuiper, Lightspeed). This band should be regarded as an emergent LEO frequency band.

Ka-Band 17.7 GHz to 20.2 GHz (Downlink) & 27.5 GHz to 30.0 GHz (Uplink).
The Ka-band offers higher bandwidths, enabling greater data throughput than lower bands. Not surprisingly, this band is favored by high-throughput satellite solutions and is widely used for fixed satellite services (FSS), making it ideal for high-speed internet services. However, it is more susceptible to absorption and scattering by atmospheric particles, including raindrops and snowflakes, which weakens the signal strength by the time it reaches the receiver. To mitigate rain-fade effects in the Ka-band, satellite and ground equipment must be designed with higher link margins, incorporating more powerful transmitters and more sensitive receivers. Additionally, adaptive modulation and coding techniques can be employed to adjust the signal dynamically in response to changing weather conditions. Overall, the system is more costly and is therefore primarily used for satellite-to-ground-station communications and high-performance satellite backhaul solutions.

For example, Starlink and OneWeb use the Ka-band to connect to satellite Earth gateways and points-of-presence, which connect to the internet exchange and the wider internet. It is worth noting that the terrestrial 5G band n257 (26.5 to 29.5 GHz) falls within the Ka-band’s uplink frequency range. Furthermore, SES’s mPOWER satellites, operating in Medium Earth Orbit (MEO), operate exclusively in this band, providing internet backhaul services.

Ku-Band 12.75 GHz to 13.25 GHz (Downlink) & 14.0 GHz to 14.5 GHz (Uplink).
The Ku-band is widely used for fixed satellite services (FSS) due to its balance between bandwidth availability and susceptibility to rain fade. It is suitable for broadband services, TV broadcasting, and backhaul connections. For example, Starlink and OneWeb satellites use this band to provide broadband services to Earth-based customer terminals.

X-Band 7.25 GHz to 7.75 GHz (Downlink) & 7.9 GHz to 8.4 GHz (Uplink).
The use of the X-band in satellite applications is governed by international agreements and national regulations to prevent interference between different services and to ensure efficient use of the spectrum. The X-band is extensively used for secure military satellite communications, offering advantages like high data rates and relative resilience to jamming and eavesdropping. It supports a wide range of military applications, including mobile command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) operations. Most defense-oriented satellites operate in geostationary orbit, ensuring constant coverage of specific geographic areas (e.g., the Airbus Skynet constellations, Spain’s XTAR-EUR, and France’s Syracuse satellites). Most European LEO defense satellites, used primarily for reconnaissance, are fairly old, with more than 15 years since the first launch, and limited in number (i.e., <10). The most recent European LEO system is the French-based Multinational Space-based Imaging System (MUSIS) with its Composante Spatiale Optique (CSO), whose first components were launched in 2018. Few commercial satellites utilize the X-band.

C-Band 3.7 GHz to 4.2 GHz (Downlink) & 5.925 GHz to 6.425 GHz (Uplink)
C-band is less susceptible to rain fade and is traditionally used for satellite TV broadcasting, maritime, and aviation communications. However, parts of the C-band are also being repurposed for terrestrial 5G networks in some regions, leading to potential conflicts and the need for careful coordination. The C-band is primarily used in geostationary orbit (GEO) rather than Low Earth Orbit (LEO), due to the historical allocation of C-band for fixed satellite services (FSS) and its favorable propagation characteristics. I haven’t really come across any LEO constellation using the C-band. GEO FSS satellite operators using this band extensively include SES (Luxembourg), Intelsat (Luxembourg/USA), Eutelsat (France), and Inmarsat (UK).

S-Band 2.0 GHz to 4.0 GHz
S-band is used for various applications, including mobile communications, weather radar, and some types of broadband services. It offers a good compromise between bandwidth and resistance to atmospheric absorption. Both Omnispace (USA) and Globalstar (USA) operate LEO satellites in this band. Omnispace is also interesting as they have expressed intent to have LEO satellites supporting 5G services in band n257 (26.5 to 29.5 GHz), which falls within the uplink of the Ka-band.

L-Band 1.0 GHz to 2.0 GHz
L-band is less commonly used for fixed satellite services but is notable for its use in mobile satellite services (MSS), satellite phone communications, and GPS. It provides good coverage and penetration characteristics. Both Lynk Mobile (USA), offering Direct-2-Device, IoT, and M2M services, and Astrocast (Switzerland), with their IoT/M2M services, are examples of LEO satellite businesses operating in this band.

UHF 300 MHz to 3.0 GHz
The UHF band is more widely used for satellite communications, including mobile satellite services (MSS), satellite radio, and some types of broadband data services. It is favored for its relatively good propagation characteristics, including the ability to penetrate buildings and foliage. For example, Fossa Systems LEO pico-satellites (i.e., 1p form-factor) use this frequency for their IoT and M2M communications services.

VHF 30 MHz to 300 MHz

The VHF band is less commonly used in satellite communications for commercial broadband services. Still, it is important for applications such as satellite telemetry, tracking, and control (TT&C) operations and amateur satellite communications. Its use is often limited due to the lower bandwidth available and the higher susceptibility to interference from terrestrial sources. Swarm Technologies (USA, a SpaceX subsidiary) uses 137-138 MHz (downlink) and 148-150 MHz (uplink), although it appears that they have stopped taking new devices onto their network. Orbcomm (USA) is another example of a satellite service provider using the VHF band for IoT and M2M communications. There is very limited capacity in this band due to the many other existing use cases, and LEO satellite companies appear to plan to upgrade to the UHF band or to piggyback on direct-2-cell (or direct-2-device) satellite solutions, enabling LEO satellite communications with 3GPP-compatible IoT and M2M devices.
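The band survey above can be condensed into a simple lookup helper. The ranges follow the figures in the text, with downlink and uplink sub-ranges collapsed into one overall span per band for brevity; the spans overlap, as the designations themselves do (e.g., L-band and S-band sit inside the UHF range).

```python
# The frequency bands discussed above as a lookup table (spans in GHz,
# downlink/uplink sub-ranges collapsed into one overall span per band).

SATELLITE_BANDS_GHZ = {
    "VHF": (0.03, 0.3),    # TT&C, amateur, legacy IoT/M2M
    "UHF": (0.3, 3.0),     # MSS, satellite radio, IoT/M2M
    "L":   (1.0, 2.0),     # MSS, satellite phones, GPS
    "S":   (2.0, 4.0),     # mobile comms, weather radar
    "C":   (3.7, 6.425),   # GEO FSS, TV broadcasting
    "X":   (7.25, 8.4),    # mainly military/government
    "Ku":  (12.75, 14.5),  # FSS broadband, user terminals
    "Ka":  (17.7, 30.0),   # high-throughput FSS, gateways
    "V":   (40.0, 75.0),   # emergent LEO broadband
}

def bands_containing(frequency_ghz: float) -> list[str]:
    """Return the names of all bands whose span covers the given frequency."""
    return [name for name, (lo, hi) in SATELLITE_BANDS_GHZ.items()
            if lo <= frequency_ghz <= hi]

# 28 GHz (within the 5G n257 range) falls inside the Ka-band span
print(bands_containing(28.0))   # ['Ka']
print(bands_containing(2.5))    # ['UHF', 'S']
```
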

SATELLITE ANTENNAS.

Satellites operating in Geostationary Earth Orbit (GEO), Medium Earth Orbit (MEO), and Low Earth Orbit (LEO) utilize a variety of antenna types tailored to their specific missions, which range from communication and navigation to observation (e.g., signal intelligence). The selection of an antenna is influenced by the satellite’s applications, the characteristics of its orbit, and the coverage area required.

Antenna technology is intrinsically linked to spectral efficiency in satellite communications systems, and of course in any other wireless system. Antenna design influences how effectively a communication system can transmit and receive signals within a given frequency band, which is the essence of spectral efficiency (i.e., how much information per unit time, in bits per second, can I squeeze through my communications channel).

Thus, advancements in antenna technology are fundamental to improving spectral efficiency, making it a key area of research and development in the quest for more capable and efficient communication systems.

Parabolic dish antennas are prevalent for GEO satellites due to their high gain and narrow beam width, making them ideal for broadcasting and fixed satellite services. These antennas focus a tight beam on specific areas on Earth, enabling strong and direct signals essential for television, internet, and communication services. Horn antennas, while simpler, are sometimes used as feeds for larger parabolic antennas or for telemetry, tracking, and command functions due to their reliability. Additionally, phased array antennas are becoming more common in GEO satellites for their ability to steer beams electronically, offering flexibility in coverage and the capability to handle multiple beams and frequencies simultaneously.

Phased-array antennas are indispensable for MEO satellites, such as those used in navigation systems like GPS (USA), BeiDou (China), Galileo (Europe), or GLONASS (Russia). These satellite constellations cover large areas of the Earth’s surface and can adjust beam directions dynamically, a critical feature given the satellites’ movement relative to the Earth. Patch antennas are also widely used in MEO satellites, especially for mobile communication constellations, due to their compact and low-profile design, making them suitable for mobile voice and data communications.

Phased-array antennas are very important for LEO satellite use cases as well, including broadband communication constellations like Starlink and OneWeb. Their (fast) beam-steering capabilities are essential for maintaining continuous communication with ground stations and user terminals as the satellites quickly traverse the sky. Phased-array antennas also allow coverage to be optimized with both narrow and wider fields of view (from the perspective of the satellite antenna), letting the satellite operator trade off cell capacity against cell coverage.

Simpler dipole antennas are employed for more straightforward data relay and telemetry purposes in smaller satellites and CubeSats, where space and power constraints are significant factors. Reflectarray antennas, which offer a mix of high gain and beam-steering capability, are used in specific LEO satellites for communication and observation applications (e.g., signal intelligence gathering), combining features of both parabolic and phased-array antennas.

Mission-specific requirements drive the choice of antenna for a satellite. For example, GEO satellites often use high-gain, narrowly focused antennas due to their fixed position relative to the Earth, while MEO and LEO satellites, which move relatively close to the Earth’s surface, require antennas capable of maintaining stable connections with moving ground terminals or covering large geographical areas.

Advanced antenna technologies such as beamforming, phased arrays, and Multiple-Input Multiple-Output (MIMO) antenna configurations are crucial in managing and utilizing the spectrum more efficiently. They enable precise targeting of radio waves, minimizing interference and optimizing bandwidth usage. This direct control over the transmission path and signal shape allows more data (bits) to be sent and received within the same spectral space, effectively increasing the communication channel’s capacity. In particular, MIMO antenna configurations and advanced beamforming have enabled terrestrial mobile cellular access technologies (e.g., LTE and 5G) to leap the effective spectral efficiency, broadband speed, and capacity orders of magnitude beyond the older 2G and 3G technologies. Similar principles are being deployed today in modern advanced communications satellite antennas, providing increased capacity and quality within the cellular coverage area provided by the satellite beam.

Moreover, antenna technology developments like polarization and frequency reuse directly impact a satellite system’s ability to maximize spectral resources. Allowing simultaneous transmissions on the same frequency through different polarizations or spatial separation effectively doubles the capacity without needing additional spectrum.

WHERE DO WE END UP.

If all current commercial satellite plans were realized, within the next decade we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half accounted for by Starlink’s LEO constellation. Imagine the increase in the amount of space debris circling Earth within the next 10 years. This will likely pose a substantial increase in operational risk for new space missions and will have to be addressed urgently.

Over the next decade, we may have at least two major LEO satellite constellations: one from Starlink, with in excess of 12 thousand satellites, and one from China, the Guo Wang (the state network), likewise with around 12 thousand LEO satellites. One is a global constellation from an American commercial company; the other is a worldwide constellation representing the Chinese state. It would not be too surprising if, by 2034, the two constellations divide the Earth between them, with one part serviced by the commercial constellation (e.g., North America, Europe, parts of the Middle East, some of APAC including India, possibly parts of Africa) and another part served by the Chinese-controlled LEO constellation, providing satellite broadband service to China, Russia, significant parts of Africa, and parts of APAC.

Over the next decade, satellite services will undergo transformative advancements, reshaping the architecture of global communication infrastructures and significantly impacting various sectors, including broadband internet, global navigation, Earth observation, and beyond. As these services evolve, we should anticipate major leaps in satellite technology, driven by innovation in propulsion systems, miniaturization, advancements in onboard processing capabilities, increasing use of AI and machine learning to leapfrog satellites’ operational efficiency and performance, breakthroughs in materials science reducing weight and increasing packing density, leapfrogs in antenna technology, and, last but not least, much more efficient use of the radio frequency spectrum. Moreover, we will see breakthrough innovations that allow better co-existence and autonomous collaboration in frequency-spectrum utilization between non-terrestrial and terrestrial networks, reducing the need for much of today’s regulatory bureaucracy, which might in any case be replaced by decentralized autonomous organizations (DAOs) and smart contracts. This development will be essential as satellite constellations are integrated into 5G and 6G network architectures as the non-terrestrial cellular access component. This particular topic, like many in this article, is worth a whole new article on its own.

I expect that over the next 10 years we will see electronically steerable phased-array antennas as a notable advancement, offering increased agility and efficiency in beamforming and signal direction. Their ability to swiftly adjust beams for optimal coverage and connectivity without physical movement makes them perfect for the dynamic nature of Low Earth Orbit (LEO) satellite constellations. This technology will become increasingly cost-effective and energy-efficient, enabling widespread deployment across various satellite platforms (not only LEO designs). Advances in phased-array antenna technology will facilitate a substantial increase in satellite system capacity by increasing the number of beams, allowing variation in beam size (possibly down to the level of an individual customer ground station), and supporting multi-band operations within the same antenna.

Another promising development is the integration of metamaterials in antenna design, which will lead to more compact, flexible, and lightweight antennas. The science of metamaterials is super interesting: it involves manufacturing artificial materials with properties not found in naturally occurring materials, whose unique electromagnetic behaviors arise from their internal structure. Metamaterial antennas are going to offer superior performance, including better signal control and reduced interference, which is crucial for maintaining high-quality broadband connections. These materials are also important for substantially reducing the weight of the satellite antenna while boosting its performance. Thus, the technology will also help bring satellite launch costs down dramatically.

Although massive MIMO antennas are primarily associated with terrestrial networks, I would also expect this technology to find applications in satellite broadband systems. Satellite systems, just like ground-based cellular networks, can significantly increase their capacity and efficiency by utilizing many antenna elements to communicate simultaneously with multiple ground terminals. This could be particularly transformative for next-generation satellite networks, supporting higher data rates and accommodating more users. The technology will increase the capacity and quality of satellite systems dramatically, as it has done in terrestrial cellular networks.

Furthermore, advancements in onboard processing capabilities will allow satellites to perform more complex signal processing tasks directly in space, reducing latency and improving the efficiency of data transmission. Coupled with AI and machine learning algorithms, future satellite antennas could dynamically optimize their operational parameters in real-time, adapting to changes in the network environment and user demand.

Additionally, research into quantum antenna technology may offer breakthroughs in satellite communication, providing unprecedented levels of sensitivity and bandwidth efficiency. Although still early, quantum antennas could revolutionize signal reception and transmission in satellite broadband systems. In the context of LEO satellite systems, I am particularly excited about utilizing the Rydberg effect to enhance system sensitivity, which could lead to massive improvements. The heightened sensitivity of Rydberg atoms to electromagnetic fields could be harnessed to develop ultra-sensitive detectors for radio frequency (RF) signals. Such detectors could surpass traditional semiconductor-based devices in sensitivity and selectivity, enabling satellite systems to detect weaker signals, improve signal-to-noise ratios, and even operate effectively over greater distances or with less power. Furthermore, space could be a near-ideal environment for operationalizing Rydberg antennas and communications systems: it has a near-perfect vacuum, very low temperatures (in Earth’s shadow at least, or with proper thermal management), is relatively free of electromagnetic radiation (compared to Earth), and offers a micro-gravity environment that may facilitate long-range “communications” between Rydberg atoms. This particular topic may be further out than “just” a decade from now, although it may also be with satellites that we see the first promising results of this technology.

One key area of development will be the integration of LEO satellite networks with terrestrial 5G and emerging 6G networks, marking a significant step in the evolution of Non-Terrestrial Network (NTN) architectures. This integration promises to deliver seamless, high-speed connectivity across the globe, including in remote and rural areas previously underserved by traditional broadband infrastructure. By complementing terrestrial networks, LEO satellites will help achieve ubiquitous wireless coverage, facilitating a wide range of applications and use cases from high-definition video streaming to real-time IoT data collection.

The convergence of LEO satellite services with 5G and 6G will also spur innovation in network management and orchestration. Advanced techniques for managing interference, optimizing handovers between terrestrial and non-terrestrial networks, and efficiently allocating spectral resources will be crucial. It would be odd not to mention it here, so artificial intelligence and machine learning algorithms will, of course, support these efforts, enabling dynamic network adaptation to changing conditions and demands.
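A toy version of the handover problem mentioned above can be sketched as a hysteresis rule: switch links only when the other link is better by a margin, so a fast-moving satellite beam does not cause ping-ponging. The threshold value is hypothetical; real NTN handover decisions also weigh elevation angle, timing advance, and satellite ephemeris data.

```python
def choose_link(terrestrial_rsrp_dbm, satellite_rsrp_dbm,
                current="terrestrial", hysteresis_db=3.0):
    """Toy handover rule between a terrestrial cell and a LEO satellite
    beam, based on received signal power (RSRP). The hysteresis margin
    prevents rapid back-and-forth switching when the two links are
    nearly equal in strength."""
    if current == "terrestrial":
        if satellite_rsrp_dbm > terrestrial_rsrp_dbm + hysteresis_db:
            return "satellite"
        return "terrestrial"
    # Currently on the satellite link: same rule, mirrored.
    if terrestrial_rsrp_dbm > satellite_rsrp_dbm + hysteresis_db:
        return "terrestrial"
    return "satellite"
```

In practice this decision logic is exactly the kind of parameter (margins, timers, triggers) that the AI/ML-driven orchestration discussed above would tune dynamically per cell, per beam, and per user.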

Moreover, the next decade will likely see significant improvements in the environmental sustainability of LEO satellite operations. Innovations in satellite design and materials, along with more efficient launch vehicles and end-of-life deorbiting strategies, will help mitigate the challenges of space debris and ensure the long-term viability of LEO satellite constellations.

In the realm of global connectivity, LEO satellites will have bridged the digital divide, offering affordable and accessible internet services to the billions of people worldwide who remain unconnected today. As of 2023, an estimated 3 billion people, almost 40% of the world's population, have never used the internet. In the next decade, it must be our ambition that LEO satellite networks bring this number down to very near zero. This will have profound implications for education, healthcare, economic development, and global collaboration.

FURTHER READING.

  1. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks”, Wiley (2024). Recommended reading for a deep dive into NTN networks of satellites, typically of the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  2. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  3. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  4. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is quite good as well: Starlink.
  5. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023). This link includes a post from Elon Musk commenting on the cost involved in manufacturing the Starlink satellite and the cost of launching SpaceX’s Falcon 9 rocket.
  6. Michael Baylor, “With Block 5, SpaceX to increase launch cadence and lower prices.”, nasaspaceflight.com (May, 2018).
  7. Gwynne Shotwell, TED Talk from May 2018. She quotes here a total of USD 10 billion as a target for the 12,000 satellite network. This is just an amazing visionary talk/discussion about what may happen by 2028 (in 4-5 years ;-).
  8. Juliana Suess, “Guo Wang: China’s Answer to Starlink?”, (May 2023).
  9. Makena Young & Akhil Thadani, “Low Orbit, High Stakes, All-In on the LEO Broadband Competition.”, Center for Strategic & International Studies CSIS, (Dec. 2022).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  12. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. Ambition to have the world’s first global 5G non-terrestrial network. Initial support 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far, only 2 satellites have been launched.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  15. A.K. Singh, M.P. Abegaonkar, and S.K. Koul, “Metamaterials for Antenna Applications”, CRC Press (September 2021).
  16. T.L. Marzetta, E.G. Larsson, H. Yang, and H.Q. Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (November 2016).
  17. G.Y. Slepyan, S. Vlasenko, and D. Mogilevtsev, “Quantum Antennas”, arXiv:2206.14065v2, (June 2022).
  18. R. Huntley, “Quantum Rydberg Receiver Shakes Up RF Fundamentals”, EE Times, (January 2022).
  19. Y. Du, N. Cong, X. Wei, X. Zhang, W. Lou, J. He, and R. Yang, “Realization of multiband communications using different Rydberg final states”, AIP Advances, (June 2022). Demonstrating the applicability of the Rydberg effect in digital transceivers in the Ku and Ka bands.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.