If Greenland were digitally disconnected tomorrow, how much of its public sector could still operate?

The uncomfortable answer: very little. Not only would the public sector break down; society as a whole would likely follow, the longer a digital isolation remained in effect. This article outlines why it does not have to be this way and suggests remedies and actions that can minimize the impact of an event in which Greenland is digitally isolated from the rest of the internet for an extended period (e.g., weeks to months).

We may like, or feel tempted, to think of digital infrastructure as neutral plumbing. But as I wrote earlier, “digital infrastructure is no longer just about connectivity, but about sovereignty and resilience.” Greenland today has neither.

A recent Sermitsiaq article by Poul Krarup on Greenland’s “Digital Afhængighed af Udlandet” (“digital dependence on foreign countries”), describing research by the Tænketanken Digital Infrastruktur (Think Tank for Digital Infrastructure), laid it bare: the backbone of Greenland’s administration, email, payments, and even municipal services runs on servers and platforms located mainly outside Greenland (and Denmark). Global giants in Europe and the US hold the keys; Greenland doesn’t. My own study of 315 Greenlandic public-sector domains shows just how dramatic this dependency is: over 70% of web/IP hosting is concentrated among just three foreign providers, Microsoft, Google, and Cloudflare. For email exchange (MX), it is even worse: the majority of MX records sit entirely outside Greenland’s control.

So imagine the cable is cut, the satellite links fail, or access to those platforms is revoked. Schools, hospitals, courts, municipalities: how many could still function? How many could even switch on a computer?

This isn’t a thought experiment. It’s a wake-up call.

In my earlier work on Greenland’s critical communications infrastructure, “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”, I have pointed out both the resilience and the fragility of what exists today. Tusass has built and maintained a transport network that keeps the country connected under some of the harshest Arctic conditions. That achievement is remarkable, but it is also costly and economically challenging without external subsidies and long-term public investment. With a population of just 57,000 people, Greenland faces challenges in sustaining this infrastructure on market terms alone.

DIGITAL SOVEREIGNTY.

What do we mean when we use phrases like “the digital sovereignty of Greenland is at stake”? Let’s break down the complex language (for techies like myself). Sovereignty in the classical sense is about control over land, people, and institutions. Digital sovereignty extends this to the virtual space: it is primarily about controlling data, infrastructure, and digital services. As societies digitalize, critical aspects of sovereignty move into the digital sphere:

  • Infrastructure as territory: Submarine cables, satellites, data centers, and cloud platforms are the digital equivalents of ports, roads, and airports. If you don’t own or control them, you depend on others to move your “digital goods.”
  • Data as a resource: Just as natural resources are vital to economic sovereignty, data has become the strategic resource of the digital age. Those who store, process, and govern data hold significant power over decision-making and value creation.
  • Platforms as institutions: Social media, SaaS, and search engines act like global “public squares” and administrative tools. If controlled abroad, they may undermine local political, cultural, or economic authority.

The excellent book by Anu Bradford, “Digital Empires: The Global Battle to Regulate Technology,” describes how the digital world is no longer a neutral, borderless space but is increasingly shaped by the competing influence of three distinct “empires.” The American model is built around the dominance of private platforms, such as Google, Amazon, and Meta, where innovation and market power drive the agenda. The scale and ubiquity of Silicon Valley firms have enabled them to achieve a global reach. In contrast, the Chinese model fuses technological development with state control. Here, digital platforms are integrated into the political system, used not only for economic growth but also for surveillance, censorship, and the consolidation of authority. Between these two poles lies the European model, which has little homegrown platform power but exerts influence through regulation. By setting strict rules on privacy, competition, and online content, Europe has managed to project its legal standards globally, a phenomenon Bradford refers to as the “Brussels effect” (which is used here in a positive sense). Bradford’s analysis highlights the core dilemma for Greenland. Digital sovereignty cannot be achieved in isolation. Instead, it requires navigating between these global forces while ensuring that Greenland retains the capacity to keep its critical systems functioning, its data governed under its own laws, and its society connected even when global infrastructures falter. The question is not which empire to join, but how to engage with them in a way that strengthens Greenland’s ability to determine its own digital future.

In practice, this means that Greenland’s strategy cannot be about copying one of the three empires, but rather about carving out a space of resilience within their shadow. Building a national Internet Exchange Point ensures that local traffic continues to circulate on the island rather than being routed abroad, even when external links fail. Establishing a sovereign GovCloud provides government, healthcare, and emergency services with a secure foundation that is not dependent on distant data centers or foreign jurisdictions. Local caching of software updates, video libraries, and news platforms enables communities to operate in a “local mode” during outages, preserving continuity even when global connections are disrupted. These measures do not create independence from the digital empires. Still, they give Greenland the ability to negotiate with them from a position of greater strength, ensuring that participation in the global digital order does not come at the expense of local control or security.

FROM DAILY RESILIENCE TO STRATEGIC FRAGILITY.

I have argued that integrity, robustness, and availability must be the guiding principles for Greenland’s digital backbone, both now and in the future.

  • Integrity means protecting against foreign influence and cyber threats through stronger cybersecurity, AI support, and autonomous monitoring.
  • Robustness requires diversifying the backbone with new submarine cables, satellite systems, and dual-use assets that can serve both civil and defense needs.
  • Availability depends on automation and AI-driven monitoring, combined with autonomous platforms such as UAVs, UUVs, IoT sensors, and distributed acoustic sensing on submarine cables, to keep services running across vast and remote geographies with limited human resources.

The conclusion I drew in my previous work remains applicable today. Greenland must develop local expertise and autonomy so that critical communications are not left vulnerable to outside actors in times of crisis. Dual-use investments are not only about defense; they also bring better services, jobs, and innovation.

Source: Tusass Annual Report 2023 with some additions and minor edits.

The Figure above illustrates the infrastructure of Tusass, Greenland’s incumbent and sole telecommunications provider. Currently, five hydropower plants (shown above; locations are indicative only) cover more than 80% of Greenland’s electricity demand. Greenland is entering a period of significant infrastructure transformation, with several large projects already underway and others on the horizon. The most visible change is in aviation. Following the opening of the new international airport in Nuuk in 2024, with its 2,200-meter runway capable of receiving direct flights from Europe and North America, attention has turned to Ilulissat, on the northwest coast, and Qaqortoq, the largest town in South Greenland. Ilulissat is being upgraded with its own 2,200-meter runway, a new terminal, and a control tower, while the old 845-meter strip is being converted into an access road. In southern Greenland, a new airport is being built in Qaqortoq, with a 1,500-meter runway scheduled to open around 2026. Once completed, these three airports, Nuuk, Ilulissat, and Qaqortoq, will together handle roughly 80 percent of Greenland’s passenger traffic, reshaping both tourism and domestic connectivity. Smaller projects, such as the planned airport at Ittoqqortoormiit and changes to heliport infrastructure in East Greenland, are also part of this shift, although on a longer horizon.

Beyond air travel, the next decade is likely to bring new developments in maritime infrastructure. There is growing interest in constructing deep-water ports, both to support commercial shipping and to enable the export of minerals from Greenland’s interior. Denmark has already committed around DKK 1.6 billion (approximately USD 250 million) between 2026 and 2029 for a deep-sea port and related coastal infrastructure, with several proposals directly linked to mining ventures. In southern Greenland, for example, the Tanbreez multi-element rare earth project lies within reach of Qaqortoq, and the new airport’s specifications were chosen with freight requirements in mind. Other mineral prospects, ranging from rare earths to nickel and zinc, will require their own supporting infrastructure (roads, power, and port facilities) if these projects transition from exploration to production. The timelines for these mining and port projects are less certain than for the airports, since they depend on market conditions, environmental approvals, and financing. Yet it is clear that the 2025–2035 period will be decisive for Greenland’s economic and strategic trajectory. The combination of new airports, potential deep-water harbors, and the possible opening of significant mining operations would amount to the largest coordinated build-out of Greenlandic infrastructure in decades. Moreover, several submarine cable projects have been proposed that would strengthen international connectivity to Greenland, as well as the redundancy and robustness of settlement connectivity, in addition to the existing long-haul microwave network connecting all settlements along the west coast from north to south.

And this is precisely why the question of a sudden digital cut-off matters so much. Without integrity, robustness, and availability built into the communications infrastructure, Greenland’s public sector and its critical infrastructure remain dangerously exposed. What looks resilient in daily operation could unravel overnight if the links to the outside world were severed or internal connectivity were compromised. In particular, the dependency on Nuuk is a critical risk.

GREENLAND’s DIGITAL INFRASTRUCTURE BY LAYER.

Let’s peel back Greenland’s digital infrastructure layer by layer.

Greenland’s digital infrastructure, broken down by the layers upon which society’s continuous functioning depends. The illustration shows how applications, transport, routing, and interconnect all depend on external connectivity.

Greenland’s digital infrastructure can be understood as a stack of interdependent layers, each of which reveals a set of vulnerabilities. This is illustrated by the Figure above. At the top of the stack lie the applications and services that citizens, businesses, and government rely on every day. These include health IT systems, banking platforms, municipal services, and cloud-based applications. The critical issue is that most of these services are hosted abroad and have no local “island mode.” In practice, this means that if Greenland is digitally cut off, domestic apps and services will fail to function because there is no mechanism to run them independently within the country.

Beneath this sits the physical transport layer, which is the actual hardware that moves data. Greenland is connected internationally by just two subsea cables, routed via Iceland and Canada. A few settlements, such as Tasiilaq, remain entirely dependent on satellite links, while microwave radio chains connect long stretches of the west coast. At the local level, there is some fiber deployment, but it is limited to individual settlements rather than forming part of a national backbone. This creates a transport infrastructure that, while impressive given Greenland’s geography, is inherently fragile. Two cables and a scattering of satellites do not amount to genuine redundancy for a nation. The next layer is IP/TCP transport, where routing comes into play. Here, too, the system is basic. Greenland relies on a limited set of upstream providers with little true diversity or multi-homing. As a result, if one of the subsea cables is cut, large parts of the country’s connectivity collapse, because traffic cannot be seamlessly rerouted through alternative pathways. The resilience that is taken for granted in larger markets is largely absent here.

Finally, at the base of the stack, interconnect and routing expose the structural dependency most clearly. Greenland operates under a single Autonomous System Number (ASN). An ASN is a unique identifier assigned to a network operator (like Tusass) that controls its own routing on the Internet; it allows the network to exchange traffic and routing information with other networks using the Border Gateway Protocol (BGP). In Greenland, there is no domestic internet exchange point (IXP) or peering between local networks. All traffic must be routed abroad first, whether it is destined for Greenland or beyond. International transit flows through Iceland and Canada via the subsea cables, and, as a limited-capacity fallback, via geostationary GreenSat satellite connectivity through Gran Canaria, which connects back to Greenland over the submarine network. There is no sovereign government cloud, almost no local caching for global platforms, and only a handful of small data centers (being generous with the definition here). The absence of scaled redundancy and local hosting means that virtually all of Greenland’s digital life depends on international connections.
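For readers who want to probe this themselves: one practical way to map an IP address to the ASN announcing it is Team Cymru’s DNS-based IP-to-ASN service. The Python sketch below only constructs the query name; the actual TXT lookup would require a resolver library such as dnspython, and the helper name is my own.

```python
# Sketch: building a Team Cymru IP-to-ASN DNS query name for an IPv4 address.
# A TXT lookup of the resulting name (with a DNS library such as dnspython)
# returns the origin ASN announcing the covering prefix.

import ipaddress

def cymru_origin_name(ip: str) -> str:
    """Reverse the octets of an IPv4 address and append the Cymru origin zone."""
    addr = ipaddress.IPv4Address(ip)  # validates the address
    rev = ".".join(reversed(str(addr).split(".")))
    return f"{rev}.origin.asn.cymru.com"

print(cymru_origin_name("203.0.113.10"))  # 10.113.0.203.origin.asn.cymru.com
```

Running the TXT query itself is a one-liner with dnspython, but it needs network access, so it is left out of the sketch.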

GREENLAND’s DIGITAL LIFE ON A SINGLE THREAD.

Considering the many layers described above, a striking picture emerges: applications, transport, routing, and interconnect are all structured in ways that assume continuous external connectivity. What appears robust on a day-to-day basis can unravel quickly. A single cable cut, upstream outage, or local transmission fault in Greenland does not just slow down the internet; it can paralyze everyday life across almost every sector, because much of the country’s digital backbone relies on external connectivity and fragile local transport.

For the government, the reliance on cloud-hosted systems abroad means that email, document storage, case management, and health IT systems would go dark. Hospitals and clinics could lose access to patient records, lab results, and telemedicine services. Schools would be cut off from digital learning platforms and exam systems hosted internationally. Municipalities, which already lean on remote data centers for payroll, social services, and citizen portals, would struggle to process even routine administrative tasks.

In finance, the impact would be immediate. Greenland’s card payment and clearing systems are routed abroad; without connectivity, credit and debit card transactions could no longer be authorized. ATMs would stop functioning. Shops, fuel stations, and essential suppliers would be forced into cash-only operations at best, and even that would depend on whether their local systems can operate in isolation.

The private sector would be equally disrupted. Airlines, shipping companies, and logistics providers all rely on real-time reservation and cargo systems hosted outside Greenland. Tourism, one of the fastest-growing industries, is almost entirely dependent on digital bookings and payments. Mining operations under development would be unable to transmit critical data to foreign partners or markets.

Even at the household level, the effects could be highly disruptive. Messaging apps, social media, and streaming platforms all require constant external connections; they would stop working instantly. Online banking and digital ID services would be unreachable, leaving people unable to pay bills, transfer money, or authenticate themselves for government services. With so few local caches or hosting facilities in Greenland, even “local” digital life evaporates once the cables are cut. We would be back to reading books and paper magazines.

This means that an outage can cascade well beyond the loss of entertainment or simple inconvenience. It undermines health care, government administration, financial stability, commerce, and basic communication. In practice, the disruption would touch every citizen and every institution almost immediately, with few alternatives in place to keep essential civil services running.

GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: ABOUT THE DATA.

In this inquiry, I have primarily analyzed two pillars of Greenland’s digital presence: web/IP hosting and MX (mail exchange) hosting. These may sound technical, but they are fundamental to understanding the country’s exposure. Web/IP hosting determines where Greenland’s websites and online services physically reside, whether inside Greenland’s own infrastructure or abroad in foreign data centers. MX hosting determines where email is routed and processed, and is crucial for the operation of government, business, and everyday communication. Together, these layers form the backbone of a country’s digital sovereignty.
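To make the MX side concrete, here is a minimal Python sketch of the kind of check involved. The MX answer strings and the `.gl`-suffix heuristic are illustrative assumptions, not the study’s actual method; in practice the answers would come from a DNS query (e.g., dnspython’s `dns.resolver.resolve(domain, "MX")`).

```python
# Sketch: deciding whether a domain's mail handling is local from its MX
# records. MX answers have the form "<preference> <exchange>". The sample
# strings and the .gl-suffix heuristic are hypothetical illustrations.

LOCAL_SUFFIXES = (".gl",)  # assumption: a .gl exchange counts as "local"

def parse_mx(answer: str):
    """Split an MX answer into (preference, exchange-hostname)."""
    pref, exchange = answer.split(maxsplit=1)
    return int(pref), exchange.rstrip(".")

def mail_is_local(answers):
    """True only if every MX exchange falls under a local suffix."""
    exchanges = [parse_mx(a)[1] for a in answers]
    return all(x.endswith(LOCAL_SUFFIXES) for x in exchanges) if exchanges else False

print(mail_is_local(["10 mail.tusass.gl."]))                       # True
print(mail_is_local(["0 nanoq-gl.mail.protection.outlook.com."]))  # False
```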

What the data shows is sobering. For example, the Government’s own portal nanoq.gl is hosted locally by Tele Greenland (i.e., Tusass GL), but its email is routed through Amazon’s infrastructure abroad. The national airline, airgreenland.gl, also relies on Microsoft’s mail servers in the US and UK. These are not isolated cases. They illustrate the broader pattern of dependence. If hosting and mail flows are predominantly external, then Greenland’s resilience, control, and even lawful access are effectively in the hands of others.

The data from the Greenlandic .gl domain space paints a clear and rather bleak picture of dependency on the outside world. My inquiry covered 315 domains, resolving more than a thousand hosts and IPs and uncovering 548 mail exchangers, which together form a dependency network of 1,359 nodes and 2,237 edges. What emerges is not a story of local sovereignty but of heavy reliance on hosting outside Greenland.
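As an illustration of how such a dependency network is assembled, the Python sketch below builds a tiny domain → host → IP graph and counts its nodes and edges. The sample data is entirely hypothetical; the real study resolved 315 domains into the 1,359-node network described above.

```python
# Sketch: building a domain -> host -> IP dependency network and counting
# its nodes and edges. All sample data below is hypothetical.

# domain -> resolved hostnames (e.g., CNAME targets, MX exchanges)
hosts = {
    "example-a.gl": ["web.example-a.gl", "mx1.mailhost.example"],
    "example-b.gl": ["cdn.provider.example"],
}
# hostname -> IP addresses it resolves to
ips = {
    "web.example-a.gl": ["203.0.113.10"],
    "mx1.mailhost.example": ["198.51.100.5"],
    "cdn.provider.example": ["198.51.100.5", "192.0.2.7"],
}

nodes, edges = set(), set()
for domain, hostlist in hosts.items():
    nodes.add(domain)
    for host in hostlist:
        nodes.add(host)
        edges.add((domain, host))      # domain depends on host
        for ip in ips.get(host, []):
            nodes.add(ip)
            edges.add((host, ip))      # host depends on IP

print(len(nodes), len(edges))  # 8 7
```

Note how shared infrastructure (here, two hosts resolving to the same IP) makes the graph smaller than the raw record count, which is exactly why the study reports nodes and edges rather than raw DNS answers.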

When broken down, it becomes clear how much of the Greenlandic namespace is not even in use. Of the 315 domains, only 190 could be resolved to a functioning web or IP host, leaving 125 domains, or about 40 percent, with no active service. For mail exchange, the numbers are even more striking: only 98 domains have MX records, while 217 domains, nearly 70 percent of the total, appear unusable for email. In other words, the universe of domains we can actually analyze shrinks considerably once the inactive or unused domains are separated from those that carry real digital services.

It is within this smaller, active subset that the pattern of dependency becomes obvious. The majority of the web/IP hosting we can analyze is located outside Greenland, primarily on infrastructure controlled by American companies such as Cloudflare, Microsoft, Google, and Amazon, or through Danish and European resellers. For email, the reliance is even more complete: virtually all MX hosting that exists is foreign, with only two domains fully hosted in Greenland. This means that both Greenland’s web presence and its email flows are overwhelmingly dependent on servers and policies beyond its own borders. The geographic spread of dependencies is extensive, spanning the US, UK, Ireland, Denmark, and the Netherlands, with some entries extending as far afield as China and Panama. This breadth raises uncomfortable questions about oversight, control, and the exposure of critical services to foreign jurisdictions.

Security practices add another layer of concern. Many domains lack the most basic forms of email protection. The Sender Policy Framework (SPF), which instructs mail servers on which IP addresses are authorized to send on behalf of a domain, is inconsistently applied. DomainKeys Identified Mail (DKIM), which uses cryptographic signatures to verify that an email originates from the claimed sender, is also patchy. Most concerning is that Domain-based Message Authentication, Reporting, and Conformance (DMARC), a policy that allows a domain to instruct receiving mail servers on how to handle suspicious emails (for example, reject or quarantine them), is either missing or set to “none” for many critical domains. Without SPF, DKIM, and DMARC properly configured, Greenlandic organizations are wide open to spoofing and phishing, including within government and municipal domains.
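The checks themselves are straightforward. The sketch below shows how SPF presence and the DMARC `p=` policy can be read out of DNS TXT records; the record strings are hypothetical examples, and a real check would query the domain’s TXT records and those of `_dmarc.<domain>`.

```python
# Sketch: classifying SPF and DMARC posture from DNS TXT record strings.
# The sample records below are hypothetical.

def spf_present(txt_records):
    """True if any TXT record is an SPF policy (starts with 'v=spf1')."""
    return any(r.lower().startswith("v=spf1") for r in txt_records)

def dmarc_policy(txt_records):
    """Return the DMARC p= policy ('none', 'quarantine', 'reject') or None."""
    for r in txt_records:
        if r.lower().startswith("v=dmarc1"):
            for tag in r.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip().lower()
    return None

# Hypothetical sample: SPF present, but DMARC policy left at 'none'
txt = ["v=spf1 include:spf.protection.outlook.com -all"]
dmarc_txt = ["v=DMARC1; p=none; rua=mailto:reports@example.gl"]

print(spf_present(txt), dmarc_policy(dmarc_txt))  # True none
```

A domain in the `p=none` state illustrates exactly the problem described above: receiving servers are told to take no action against spoofed mail, so the record exists on paper but offers no enforcement.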

Taken together, the picture is clear. Greenland’s digital backbone is not in Greenland. Its critical web and mail infrastructure lives elsewhere, often in the hands of hyperscalers far beyond Nuuk’s control. The question practically asks itself: if those external links were cut tomorrow, how much of Greenland’s public sector could still function?

GREENLAND’s DIGITAL INFRASTRUCTURE EXPOSURE: SOME KEY DATA OUT OF A VERY RICH DATASET.

The Figure shows the distribution of Greenlandic (.gl) web/IP domains hosted on a given country’s infrastructure. Note that domains are frequently hosted in multiple countries. However, very few (2!) have an overlap with Greenland.

The chart of Greenland (.gl) Web/IP Infrastructure Hosting by Supporting Country reveals the true geography of Greenland’s digital presence. The data covers 315 Greenlandic domains, of which 190 could be resolved to active web or IP hosts. From these, I built a dependency map showing where in the world these domains are actually served.

The headline finding is stark: 57% of Greenlandic domains depend on infrastructure in the United States. This reflects the dominance of American companies such as Cloudflare, Microsoft, Google, and Amazon, whose services sit in front of or fully host Greenlandic websites. In contrast, only 26% of domains are hosted on infrastructure inside Greenland itself (primarily through Tele Greenland/Tusass). Denmark (19%), the UK (14%), and Ireland (13%) appear as the next layers of dependency, reflecting the role of regional resellers, like One.com/Simply, as well as Microsoft and Google’s European data centers. Germany, France, Canada, and a long tail of other countries contribute smaller shares.

It is worth noting that the validity of this analysis hinges on how the data are treated. Each domain is counted once per country where it has active infrastructure. This means a domain like nanoq.gl (the Greenland Government portal) is counted for both Greenland and its foreign dependency through Amazon’s mail services. However, double-counting with Greenland is extremely rare. Out of the 190 resolvable domains, 73 (38%) are exclusively Greenlandic, 114 (60%) are solely foreign, and only 2 (~1%) are hybrids split between Greenland and another country. Those two are nanoq.gl and airgreenland.gl, both of which combine a Greenland presence with foreign infrastructure. This is why the Figure above shows percentages that add up to more than 100%: they represent the dependency footprint, the share of Greenlandic domains that touch each country, not a pie chart of mutually exclusive categories. What is most important to note, however, is that the overlap with Greenland is vanishingly small. In practice, Greenlandic domains are either entirely local or entirely foreign; very few straddle the boundary.
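The counting method can be expressed compactly. The Python sketch below (with hypothetical domains) computes the per-country dependency footprint, whose shares may sum to more than 100%, and classifies each domain as local-only, foreign-only, or hybrid.

```python
# Sketch: per-country "dependency footprint" counting. A domain is counted
# once for every country hosting part of its infrastructure, so shares can
# sum to more than 100%. The domain-to-country data below is hypothetical.

from collections import Counter

domain_countries = {
    "local-only.gl":   {"GL"},
    "foreign-only.gl": {"US", "DK"},
    "hybrid.gl":       {"GL", "US"},  # like nanoq.gl: local web, foreign mail
}

footprint = Counter()
for countries in domain_countries.values():
    for country in countries:
        footprint[country] += 1

def classify(countries):
    """Label a domain by where its infrastructure sits."""
    if countries == {"GL"}:
        return "local-only"
    return "hybrid" if "GL" in countries else "foreign-only"

shares = {c: n / len(domain_countries) for c, n in footprint.items()}
labels = {d: classify(cs) for d, cs in domain_countries.items()}
print(shares)  # these fractions sum to more than 1.0
print(labels)
```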

The conclusion is sobering. Greenland’s web presence is deeply externalized. With only a quarter of domains hosted locally, and more than half relying on US-controlled infrastructure, the country’s digital backbone is anchored outside its borders. This is not simply a matter of physical location. It is about sovereignty, resilience, and control. The dominance of US, Danish, and UK providers means that Greenland’s citizens, municipalities, and even government services are reliant on infrastructure they do not own and cannot fully control.

Figure shows the distribution of Greenlandic (.gl) domains by the supporting country for the MX (mail exchange) infrastructure. It shows that nearly all email services are routed through foreign providers.

The Figure above of the MX (mail exchange) infrastructure by supporting country reveals an even more pronounced pattern of external reliance than in the case of web hosting. From the 315 Greenlandic domains examined, only 98 domains had active MX records. These are the domains that can be analyzed for mail routing and were used in the analysis below.

Among them, 19% of all Greenlandic domains send their mail through US-controlled infrastructure, primarily Microsoft’s Outlook/Exchange services and Google’s Gmail. The United Kingdom (12%), Ireland (9%), and Denmark (8%) follow, reflecting the presence of Microsoft and Google’s European data centers and Danish resellers. France and Australia appear with smaller shares at 2%, and beyond that, the contributions of other countries are negligible. Greenland itself barely registers. Only two domains, accounting for 1% of the total, utilize MX infrastructure hosted within Greenland. The rest rely on servers beyond its borders. This result is consistent with our sovereignty breakdown: almost all Greenlandic email is foreign-hosted, with just two domains entirely local and one hybrid combining Greenlandic and foreign providers.

Again, the validity of this analysis rests on the same method as the web/IP chart. Each domain is counted once per country where its MX servers are located. Percentages do not add up to 100% because domains may span multiple countries; however, crucially, as with web hosting, double-counting with Greenland is vanishingly rare. In fact, virtually no Greenlandic domains combine local and foreign MX; they are either foreign-only or, in just two cases, local-only.

The story is clear and compelling: Greenland’s email infrastructure is overwhelmingly externalized. Whereas web hosting still keeps about a quarter of domains within the country, email sovereignty is almost nonexistent. Nearly all communication flows through servers controlled in the US, UK, Ireland, or Denmark. The implication is sobering. In the event of disruption, policy disputes, or surveillance demands, Greenland has little autonomous control over its most basic digital communications.

A sector-level view of how Greenland’s web/IP domains are hosted, locally versus externally (outside Greenland).

This chart provides a sector-level view of how Greenlandic domains are hosted, distinguishing between those resolved locally in Greenland and those hosted outside of Greenland. It is based on the subset of 190 domains for which sufficient web/IP hosting information was available. Importantly, the categorization relies on individual domains, not on companies as entities. A single company or institution may own and operate multiple domains, which are counted separately for the purpose of this analysis. There is also some uncertainty in sector assignment, as many domains have ambiguous names and were categorized using best-fit rules.

The distribution highlights the uneven exercise of digital sovereignty across sectors. In education and finance, the dependency is absolute: 100 percent of domains are hosted externally, with no Greenland-based presence at all. Government domains fare far better: 90 percent are hosted in Greenland and only 10 percent outside, which is exactly what one would expect from a digital-government sovereignty perspective. Transportation shows a split, with about two-thirds of domains hosted locally and one-third abroad, reflecting a mix of Tele Greenland-hosted (Tusass GL) domains alongside foreign-hosted services such as airgreenland.gl. According to the available data, energy infrastructure is hosted entirely abroad, underscoring possibly one of the most critical vulnerabilities in the dataset. By contrast, telecom domains, unsurprisingly given Tele Greenland’s role, are entirely local, making telecom the only sector with 100 percent internal hosting. Municipalities present a more positive picture, with three-quarters of domains hosted locally and one-quarter abroad, although this still represents a partial external dependency. Finally, the large and diverse “Other” category, which contains a mix of companies, organizations, and services, is skewed towards foreign hosting (67 percent external, 33 percent local).
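The sector shares above reduce to a simple tally. The sketch below (with hypothetical per-domain classifications, not the study’s data) computes local versus external percentages per sector.

```python
# Sketch: sector-level local vs. external hosting shares.
# The (sector, is_local) pairs below are hypothetical stand-ins for the
# per-domain classifications described in the text.

from collections import defaultdict

domains = [
    ("telecom", True), ("telecom", True),
    ("government", True), ("government", True), ("government", False),
    ("finance", False), ("education", False),
]

totals = defaultdict(lambda: [0, 0])  # sector -> [local, external]
for sector, is_local in domains:
    totals[sector][0 if is_local else 1] += 1

for sector, (local, external) in sorted(totals.items()):
    share = 100 * local / (local + external)
    print(f"{sector}: {share:.0f}% local, {100 - share:.0f}% external")
```

One caveat mirrored from the analysis: the unit here is the domain, not the organization, so a single institution operating several domains is counted several times.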

Taken together, the results underscore three important points. First, sector-level sovereignty is highly uneven. While telecom, municipal, and Governmental web services retain more local control, most finance, education, and energy domains are overwhelmingly external. We should keep in mind that when a Greenlandic domain resolves to local infrastructure, it indicates that the frontend web hosting, the visible entry point that users connect to, is located within Greenland, typically through Tele Greenland (i.e., Tusass GL). However, this does not automatically mean that the entire service stack is local. Critical back-end components such as databases, authentication services, payment platforms, or integrated cloud applications may still reside abroad. In practice, a locally hosted domain therefore guarantees only that the web interface is served from Greenland, while deeper layers of the service may remain dependent on foreign infrastructure. This distinction is crucial when evaluating genuine digital sovereignty and resilience. However, the overall pattern is unmistakable. Greenland’s digital presence remains heavily reliant on foreign hosting, with only pockets of local sovereignty.

A sector-level view of the share of locally versus externally (i.e., outside Greenland) MX (mail exchange) hosted Greenlandic domains (.gl).

The Figure above provides a sector-level view of how Greenlandic domains handle their MX (mail exchange) infrastructure, distinguishing between those hosted locally and those that rely on foreign providers. The analysis is based on the subset of 94 domains (out of 315 total) where MX hosting could be clearly resolved. In other words, these are the domains for which sufficient DNS information was available to identify the location of their mail servers. As with the web/IP analysis, it is important to note two caveats: sector classification involves a degree of interpretation, and the results represent individual domains, not individual companies. A single organization may operate multiple domains, some of which are local and others external.

The results are striking. For most sectors, such as education, finance, transport, energy, telecom, and municipalities, the dependence on foreign MX hosting is total: 100 percent of identified domains rely on external providers for email infrastructure. Even critical sectors such as energy and telecom, where one might expect a more substantial local presence, are fully externalized. The government sector presents a mixed picture. Half of the government domains examined use local MX hosting, while the other half are tied to foreign providers. This partial local footprint is significant, as it shows that while some government email flows are retained within Greenland, an equally large share is routed through servers abroad. The “Other” sector, which includes businesses, NGOs, and various organizations, shows a small local footprint of about 3 percent, with 97 percent hosted externally. Taken together, the Figure paints a more severe picture of dependency than the web/IP hosting analysis.

While web hosting still retained about a quarter of domains locally, in the case of email, nearly everything is external. Even in government, where one might expect strong sovereignty, half of the domains depend on foreign MX servers. This distinction is critical. Email is the backbone of communication for both public and private institutions, and the fact that Greenland’s email infrastructure routes almost entirely through servers abroad exposes a deep vulnerability. Local MX records guarantee only that the entry point for mail handling is in Greenland. They do not necessarily mean that mail storage or filtering remains local, as many services rely on external processing even when the MX server is domestic.

The broader conclusion is clear. Greenland’s sovereignty in digital communications is weakest in email. Across nearly all sectors, external providers control the infrastructure through which communication must pass, leaving Greenland reliant on systems located far outside its borders. However severe the picture painted here may appear in terms of digital sovereignty, it is not altogether surprising: most global email services are provided by U.S.-based hyperscalers such as Microsoft and Google. This reliance on Big Tech is the norm worldwide, but it carries particular implications for Greenland, where dependence on foreign-controlled communication channels further limits digital sovereignty and resilience.

The analysis of the 94 MX hosting entries shows a striking concentration of Greenlandic email infrastructure in the hands of a few large players. Microsoft dominates the picture with 38 entries, accounting for just over 40 percent of all records, while Amazon follows with 20 entries, or around 21 percent. Google, including both Gmail and Google Cloud Platform services, contributes an additional 8 entries, representing approximately 9 percent of the total. Together, these three U.S. hyperscalers control nearly 70 percent of all Greenlandic MX infrastructure. By contrast, Tele Greenland (Tusass GL) appears in only three cases, equivalent to just 3 percent of the total, highlighting the minimal local footprint. The remaining quarter of the dataset is distributed across a long tail of smaller European and global providers such as Team Blue in Denmark, Hetzner in Germany, OVH and O2Switch in France, Contabo, Telenor, and others. The distribution, however you want to cut it, underscores the near-total reliance on U.S. Big Tech for Greenland’s email services, with only a token share remaining under national control.
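To make the methodology concrete, the provider concentration above can be reproduced with a small classification step: map each resolved MX hostname to a provider bucket by substring matching, then compute shares. This is a sketch in the spirit of the analysis code on my GitHub, not the code itself; the patterns and the local hostnames (`tusass.gl`, `tele.gl`) are illustrative assumptions.

```python
from collections import Counter

# Illustrative provider patterns; the study's actual classification rules
# may differ. Hostnames are matched case-insensitively by substring.
PROVIDER_PATTERNS = {
    "microsoft": ("outlook.com", "office365"),
    "amazon": ("amazonaws.com", "awsapps.com"),
    "google": ("google.com", "googlemail.com", "gmail"),
    "tusass_local": ("tusass.gl", "tele.gl"),  # hypothetical local hostnames
}

def classify_mx(hostname: str) -> str:
    """Map an MX hostname to a provider bucket ('other' if unmatched)."""
    h = hostname.lower().rstrip(".")
    for provider, patterns in PROVIDER_PATTERNS.items():
        if any(p in h for p in patterns):
            return provider
    return "other"

def provider_shares(mx_hosts: list) -> dict:
    """Percentage share of each provider bucket across a list of MX hosts."""
    counts = Counter(classify_mx(h) for h in mx_hosts)
    total = sum(counts.values())
    return {p: round(100 * n / total, 1) for p, n in counts.items()}
```

Running `provider_shares` over the 94 resolved MX hostnames is what yields the roughly 40/21/9 percent split reported above.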

Out of 179 total country mentions across the dataset, the United States is by far the most dominant hosting location, appearing in 61 cases, or approximately 34 percent of all country references. The United Kingdom follows with 38 entries (21 percent), Ireland with 28 entries (16 percent), and Denmark with 25 entries (14 percent). France (4 percent) and Australia (3 percent) form a smaller second tier, while Greenland itself appears only three times (2 percent). Germany also accounts for three entries, and all other countries (Austria, Norway, Spain, Czech Republic, Slovakia, Poland, Canada, and Singapore) occur only once each, making them statistically marginal. Examining the structure of services across locations, approximately 30 percent of providers are tied to a single country, while 51 percent span two countries (for example, UK–US or DK–IE). A further 18 percent are spread across three countries, and a single case involved four countries simultaneously. This pattern reflects the use of distributed or redundant MX services across multiple geographies, a characteristic often found in large cloud providers like Microsoft and Amazon.
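The country-level figures above follow the same pattern of simple counting: tally every country mention, then look at how many distinct countries each domain's MX set spans. The sketch below uses a five-entry toy dataset purely for illustration; the real dataset has 94 resolved MX entries and 179 country mentions.

```python
from collections import Counter

# Each entry lists the country codes observed for one domain's MX hosting.
# Toy data for illustration only, not the study's dataset.
mx_countries = [
    ["US"], ["GB", "US"], ["DK", "IE"], ["US", "GB", "IE"], ["GL"],
]

# Total country mentions across all entries (a country counted once per entry).
country_mentions = Counter(c for entry in mx_countries for c in entry)

# How many entries span one, two, or three countries.
spread = Counter(len(set(entry)) for entry in mx_countries)

share_single = 100 * spread[1] / len(mx_countries)  # % tied to one country
```

Applied to the full dataset, this is the computation behind the 34 percent U.S. share and the 30/51/18 percent single/two/three-country split reported above.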

The key point is that, regardless of whether domains are linked to one, two, or three countries, the United States is present in the overwhelming majority of cases, either alone or in combination with other countries. This confirms that U.S.-based infrastructure underpins the backbone of Greenlandic email hosting, with European locations such as the UK, Ireland, and Denmark acting primarily as secondary anchors rather than true alternatives.

WHAT DOES IT ALL MEAN?

Greenland’s public digital life overwhelmingly runs on infrastructure it does not control. Of 315 .gl domains, only 190 even have active web/IP hosting, and just 98 have resolvable MX (email) records. Within that smaller, “real” subset, most web front-ends are hosted abroad and virtually all email rides on foreign platforms. The dependency is concentrated, with U.S. hyperscalers—Microsoft, Amazon, and Google—accounting for nearly 70% of MX services. The U.S. is also represented in more than a third of all MX hosting locations (often alongside the UK, Ireland, or Denmark). Local email hosting is almost non-existent (two entirely local domains; a few Tele Greenland/Tusass appearances), and even for websites, a Greenlandic front end does not guarantee local back-end data or apps.

That architecture has direct implications for sovereignty and security. If submarine cables, satellites, or upstream policies fail or are restricted, most government, municipal, health, financial, educational, and transportation services would degrade or cease, because their applications, identity systems, storage, payments, and mail are anchored off-island. Daily resilience can mask strategic fragility: the moment international connectivity is severely compromised, Greenland lacks the local “island mode” to sustain critical digital workflows.

This is not surprising. U.S. Big Tech dominates email and cloud apps worldwide. Still, it may pose a uniquely high risk for Greenland, given its small population, sparse infrastructure, and renewed U.S. strategic interest in the region. Dependence on platforms governed by foreign law and policy erodes national leverage in crisis, incident response, and lawful access. It exposes citizens to outages or unilateral changes that are far beyond Nuuk’s control.

The path forward is clear: treat digital sovereignty as critical infrastructure. Prioritize local capabilities where impact is highest (government/municipal core apps, identity, payments, health), build island-mode fallbacks for essential services, expand diversified transport (additional cables, resilient satellite), and mandate basic email security (SPF/DKIM/DMARC) alongside measurable locality targets for hosting and data. Only then can Greenland credibly assure that, even if cut off from the world, it can still serve its people.
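Operationally, "mandating basic email security" starts with checking whether a domain publishes SPF and DMARC records at all, and which DMARC policy it enforces. Below is a minimal sketch of the parsing side only; in practice the TXT records would first be fetched from DNS (e.g., for `example.gl` and `_dmarc.example.gl`), and real-world records have more tags than this handles.

```python
from typing import Optional

def spf_present(txt_records: list) -> bool:
    """True if any TXT record declares an SPF policy (starts with v=spf1)."""
    return any(r.strip().lower().startswith("v=spf1") for r in txt_records)

def dmarc_policy(txt_record: str) -> Optional[str]:
    """Extract the p= policy (none/quarantine/reject) from a DMARC record."""
    if not txt_record.strip().lower().startswith("v=dmarc1"):
        return None
    # DMARC records are semicolon-separated tag=value pairs.
    for tag in txt_record.split(";"):
        key, _, value = tag.strip().partition("=")
        if key.lower() == "p":
            return value.strip().lower()
    return None
```

A locality target could then be audited by running such checks across all .gl public-sector domains and flagging those with no SPF record or a DMARC policy of `none`.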

CONNECTIVITY AND RESILIENCE: GREENLAND VERSUS OTHER SOVEREIGN ISLANDS.

Sources: Submarine cable counts from TeleGeography/SubmarineNetworks.com; IXPs and ASNs from Internet Society Pulse/PeeringDB and RIR data; GDP and population from IMF/World Bank (2023/2024); internet penetration from ITU and national statistics.

The comparative table shown above highlights Greenland’s position among other sovereign and autonomous islands in terms of digital infrastructure. With two international submarine cables, Greenland shares the same level of cable redundancy as the Faroe Islands, Malta, the Maldives, Seychelles, Cuba, and Fiji. This places it in the middle tier of island connectivity: above small states like Comoros, which rely on a single cable, but far behind island nations such as Cyprus, Ireland, or Singapore, which have built themselves into regional hubs with multiple independent international connections.

Where Greenland diverges is in the absence of an Internet Exchange Point (IXP) and its very limited number of Autonomous Systems (ASNs). Unlike Iceland, which couples four cables with three IXPs and over ninety ASNs, Greenland remains a network periphery. Even smaller states such as Malta, Seychelles, or Mauritius operate IXPs and host more ASNs, giving them greater routing autonomy and resilience.

In terms of internet penetration, Greenland fares relatively well, with a rate of over 90 percent, comparable to other advanced island economies. Yet the country’s GDP base is extremely limited, comparable to the Faroe Islands and Seychelles, which constrains its ability to finance major independent infrastructure projects. This means that resilience is not simply a matter of demand or penetration, but rather a question of policy choices, prioritization, and regional partnerships.

Seen from a helicopter’s perspective, Greenland is neither in the worst nor the best position. It has more resilience than single-cable states such as Comoros or small Pacific nations. Still, it lags far behind peer islands that have deliberately developed multi-cable redundancy, local IXPs, and digital sovereignty strategies. For policymakers, this raises a fundamental challenge: whether to continue relying on the relative stability of existing links, or to actively pursue diversification measures such as a national IXP, additional cable investments, or regional peering agreements. In short, Greenland’s digital sovereignty depends less on raw penetration figures and more on whether its infrastructure choices can elevate it from a peripheral to a more autonomous position in the global network.

HOW TO ELEVATE SOUTH GREENLAND TO A PREFERRED DIGITAL HOST FOR THE WORLD … JUST SAYING, WHY NOT!

At first glance, South Greenland and Iceland share many of the same natural conditions that make Iceland an attractive hub for data centers. Both enjoy a cool North Atlantic climate that allows year-round free cooling, reducing the need for energy-intensive artificial systems. In terms of pure geography and temperature, towns such as Qaqortoq and Narsaq in South Greenland are not markedly different from Reykjavík or Akureyri. From a climatic standpoint, there is no inherent reason why Greenland should not also be a viable location for large-scale hosting facilities.

The divergence begins not with climate but with energy and connectivity. Iceland spent decades developing a robust mix of hydropower and geothermal plants, creating a surplus of cheap renewable electricity that could be marketed to international hyperscale operators. Greenland, while rich in hydropower potential, has only a handful of plants tied to local demand centers, with no national grid and limited surplus capacity. Without investment in larger-scale, interconnected generation, it cannot guarantee the continuous, high-volume power supply that international data centers demand. Connectivity is the other decisive factor. Iceland today is connected to four separate submarine cable systems, linking it to Europe and North America, which gives operators confidence in redundancy and low-latency routes across the Atlantic. South Greenland, by contrast, depends on two branches of the Greenland Connect system, which, while providing diversity to Iceland and Canada, does not offer the same level of route choice or resilience. The result is that Iceland functions as a transatlantic bridge, while Greenland remains an endpoint.

For South Greenland to move closer to Iceland’s position, several changes would be necessary. The most important would be a deliberate policy push to develop surplus renewable energy capacity and make it available for export into data center operations. Parallel to this, Greenland would need to pursue further international submarine cables to break its dependence on a single system and create genuine redundancy. Finally, it would need to build up the local digital ecosystem by fostering an Internet Exchange Point and encouraging more networks to establish Autonomous Systems on the island, ensuring that Greenland is not just a transit point but a place where traffic is exchanged and hosted, and, importantly, where it earns money on its own digital infrastructure and sovereignty. South Greenland already shares the climate advantage that underpins Iceland’s success, but climate alone is insufficient. Energy scale, cable diversity, and deliberate policy are the ingredients that have allowed Iceland to transform itself into a digital hub. Without similar moves, Greenland risks remaining a peripheral node rather than evolving into a sovereign center of digital resilience.

A PRACTICAL BLUEPRINT FOR GREENLAND TOWARDS OWNING ITS DIGITAL SOVEREIGNTY.

No single measure eliminates Greenland’s external dependencies; some of them, such as international banking, global SaaS, and international transit, are irreducible. But taken together, the steps described below maximize continuity of essential functions during cable cuts or satellite disruption, improve digital sovereignty, and strengthen bargaining power with global vendors. The trade-off is cost, complexity, and skill requirements, which means Greenland must prioritize where full sovereignty is truly mission-critical (health, emergency, governance) and accept graceful degradation elsewhere (social media, entertainment, SaaS ERP).

A. Keep local traffic local (routing & exchange).

Proposal: Create or strengthen a national IXP in Nuuk, with a secondary node (e.g., Sisimiut or Qaqortoq). Require ISPs, mobile operators, government, and major content/CDNs to peer locally. Add route-server policies with “island-mode” communities to ensure that intra-Greenland routes stay reachable even if upstream transit is lost. Deploy anycasted recursive DNS and host authoritative DNS for .gl domains on-island, with secondaries abroad.

Pros:

  • Dramatically reduces the latency, cost, and fragility of local traffic.
  • Ensures Greenland continues to “see itself” even if cut off internationally.
  • DNS split-horizon prevents sensitive internal queries from leaking off-island.

Cons:

  • Needs policy push. Voluntary peering is often insufficient in small markets.
  • Running redundant IXPs is a fixed cost for a small economy.
  • CDNs may resist deploying nodes without incentives (e.g., free rack and power).

A natural and technically well-founded objection, especially given Greenland’s monopolistic structure under Tusass, is that an IXP or multiple ASNs might be redundant. Both content and users reside on the same Tusass network, and intra-Greenland traffic already remains local at Layer 3. Adding an IXP would not change that in practice. Without underlying physical or organizational diversity, an exchange point cannot create redundancy on its own.

However, over the longer term, an IXP can still serve several strategic purposes. It provides a neutral routing and governance layer that enables future decentralization (e.g., government, education, or sectoral ASNs), strengthens “island-mode” resilience by isolating internal routes during disconnection from the global Internet, and supports more flexible traffic management and security policies. Notably, an IXP also offers a trust and independence layer that many third-party providers, such as hyperscalers, CDNs, and data-center networks, typically require before deploying local nodes. Few global operators are willing to peer inside the demarcation of a single national carrier’s network. A neutral IXP provides them with a technical and commercial interface independent of Tusass’s internal routing domain, thereby making on-island caching or edge deployments more feasible in the future. In that sense, while the objection accurately reflects today’s technical reality, the IXP concept anticipates tomorrow’s structural and sovereignty needs, bridging the gap between a functioning monopoly network and a future, more open digital ecosystem.

In practice (and in my opinion), Tusass is the only entity in Greenland with the infrastructure, staff, and technical capacity to operate an IXP. While this challenges the ideal of neutrality, it need not invalidate the concept if the exchange is run on behalf of Naalakkersuisut (the Greenlandic self-governing body) or under a transparent, multi-stakeholder governance model. The key issue is not who operates the IXP, but how it is governed. If Tusass provides the platform while access, routing, and peering policies are openly managed and non-discriminatory, the IXP can still deliver genuine benefits: local routing continuity, “island-mode” resilience, and a neutral interface that encourages future participation by hyperscalers, CDNs, and sectoral networks.
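The “island-mode communities” idea from proposal A can be illustrated with a toy route-server model: routes learned at the exchange carry a BGP community tag marking them as intra-Greenland, and when international transit is lost, only tagged prefixes continue to be advertised, so local destinations stay reachable. The community value 65000:100 and the documentation prefixes below are assumptions for illustration, not an actual Tusass or IXP configuration.

```python
# Hypothetical community marking intra-Greenland routes at the route server.
ISLAND_COMMUNITY = (65000, 100)

# Toy routing table using RFC 5737 documentation prefixes.
routes = [
    {"prefix": "192.0.2.0/24",    "communities": {(65000, 100)}},  # local
    {"prefix": "198.51.100.0/24", "communities": {(65000, 100)}},  # local
    {"prefix": "203.0.113.0/24",  "communities": set()},           # via transit
]

def advertised_routes(routes, island_mode: bool):
    """Routes the route server keeps advertising to its peers.

    In normal operation everything is advertised; in island mode only
    prefixes tagged with the island community survive, so intra-Greenland
    destinations remain reachable even when upstream transit is lost.
    """
    if not island_mode:
        return routes
    return [r for r in routes if ISLAND_COMMUNITY in r["communities"]]
```

In a real deployment this policy would live in the route server's filter configuration rather than application code, but the logic, tag local routes and fail closed around them, is the same.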

B. Host public-sector workloads on-island.

Proposal: Stand up a sovereign GovCloud GL in Nuuk (failover in another town, possible West-East redundancy), operated by a Greenlandic entity or tightly contracted partner. Prioritize email, collaboration, case handling, health IT, and emergency comms. Keep critical apps, archives, and MX/journaling on-island even if big SaaS (like M365) is still used abroad.

Pros:

  • Keeps essential government operations functional in an isolation event.
  • Reduces legal exposure to extraterritorial laws, such as the U.S. CLOUD Act.
  • Provides a training ground for local IT and cloud talent.

Cons:

  • High CapEx + ongoing OpEx; cloud isn’t a one-off investment.
  • Scarcity of local skills; risk of over-reliance on a few engineers.
  • Difficult to replicate the breadth of SaaS (ERP, HR, etc.) locally; selective hosting is realistic, full stack is not.

C. Make email & messaging “cable- and satellite-outage proof”.

Proposal: Host primary MX and mailboxes in GovCloud GL with local antispam, journaling, and security. Use off-island secondaries only for queuing. Deploy internal chat/voice/video systems (such as Matrix, XMPP, or local Teams/Zoom gateways) to ensure that intra-Greenland traffic never routes outside the country. Define an “emergency federation mode” to isolate traffic during outages.

Pros:

  • Ensures communication between government, hospitals, and municipalities continues during outages.
  • Local queues prevent message loss even if foreign relays are unreachable.
  • Pre-tested emergency federation builds institutional muscle memory.

Cons:

  • Operating robust mail and collaboration platforms locally is a resource-intensive endeavor.
  • Risk of user pushback if local platforms feel less polished than global SaaS.
  • The emergency “mode switch” adds operational complexity and must be tested regularly.
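The local-primary / off-island-secondary arrangement in proposal C maps directly onto MX preference values: sending servers try the lowest preference number first, so a locally hosted primary handles mail in normal operation, while a foreign relay only queues messages when Greenland is unreachable from outside. A minimal sketch with hypothetical hostnames:

```python
# Hypothetical MX records for a GovCloud GL domain. Lower preference wins:
# the on-island host is tried first; the off-island relay only queues mail
# when the island cannot be reached from abroad.
mx_records = [
    {"host": "mx2.backup-relay.example.net", "preference": 50},  # off-island queue
    {"host": "mx1.govcloud.example.gl", "preference": 10},       # local primary
]

def delivery_order(records):
    """Order in which senders try MX hosts: ascending preference (RFC 5321)."""
    return [r["host"] for r in sorted(records, key=lambda r: r["preference"])]
```

The important design point is that the off-island secondary should queue and forward only, with mailboxes, journaling, and filtering kept on the local primary, so a cable cut degrades inbound delivery from abroad without touching intra-Greenland mail flow.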

D. Put the content edge in Greenland.

Proposal: Require or incentivize CDN caches (Akamai, Cloudflare, Netflix, OS mirrors, software update repos, map tiles) to be hosted inside Greenland’s IXP(s).

Pros:

  • Improves day-to-day performance and cuts transit bills.
  • Reduces dependency on subsea cables for routine updates and content.
  • Keeps basic digital life (video, software, education platforms) usable in isolation.

Cons:

  • CDNs deploy based on scale; Greenland’s market may be marginal without a subsidy.
  • Hosting costs (power, cooling, rackspace) must be borne locally.
  • Only covers cached/static content; dynamic services (banking, SaaS) still break without external connectivity.

E. Implement into law & contracts.

Proposal: Mandate data residency for public-sector data; require “island-mode” design in procurement. Systems must demonstrate the ability to authenticate locally, operate offline, maintain usable data, and retain keys under Greenlandic custody. Impose peering obligations for ISPs and major SaaS/CDNs.

Pros:

  • Creates a predictable baseline for sovereignty across all agencies.
  • Prevents future procurement lock-in to non-resilient foreign SaaS.
  • Gives legal backing to technical requirements (IXP, residency, key custody).

Cons:

  • May raise the costs of IT projects (compliance overhead).
  • Without strong enforcement, rules risk becoming “checkbox” exercises.
  • Possible trade friction if foreign vendors see it as protectionist.

F. Strengthen physical resilience.

Proposal: Maintain and upgrade subsea cable capacity (Greenland Connect and Connect North), add diversity (spur/loop and new landings), and maintain long-haul microwave/satellite as a tertiary backup. Pre-engineer quality of service downgrades for graceful degradation.

Pros:

  • Adds true redundancy. Nothing replaces a working subsea cable.
  • Tertiary paths (satellite, microwave) keep critical services alive during failures.
  • Clear QoS downgrades make service loss more predictable and manageable.

Cons:

  • High (possibly very high) CapEx. New cable segments cost tens to hundreds of millions of euros.
  • Satellite/microwave backup cannot match the throughput of subsea cables.
  • International partners may be needed for funding and landing rights.

G. Security & trust.

Proposal: Deploy local PKI and HSMs for the government. Enforce end-to-end encryption. Require local custody of cryptographic keys. Audit vendor remote access and include kill switches.

Pros:

  • Prevents data exposure via foreign subpoenas (without Greenland’s knowledge).
  • Local trust anchors give confidence in sovereignty claims.
  • Kill switches and audit trails enhance vendor accountability.

Cons:

  • PKI and HSM management requires very specialized skills.
  • Adds operational overhead (key lifecycle, audits, incident response).
  • Without strong governance, there is a risk of “security theatre” rather than real security.

On-island first as default. A key step for Greenland is to make on-island first the norm so that local-to-local traffic stays local even if Atlantic cables fail. Concretely, stand up a national IXP in Nuuk to keep domestic traffic on the island and anchor CDN caches; build a Greenlandic “GovCloud” to host government email, identity, records, and core apps; and require all public-sector systems to operate in “island mode” (continue basic services offline from the rest of the world). Pair this with local MX, authoritative DNS, secure chat/collaboration, and CDN caches, so essential content and services remain available during outages. Back it with clear procurement rules on data residency and key custody to reduce both outage risk and exposure to foreign laws (e.g., CLOUD Act), acknowledging today’s heavy—if unsurprising—reliance on U.S. hyperscalers (Microsoft, Amazon, Google).

What this changes, and what it doesn’t. These measures don’t aim to sever external ties. They should rebalance them. The goal is graceful degradation that keeps government services, domestic payments, email, DNS, and health communications running on-island, while accepting that global SaaS and card rails will go dark during isolation. Finally, it’s also worth remembering that local caching is only a bridge, not a substitute for global connectivity. In the first days of an outage, caches would keep websites, software updates, and even video libraries available, allowing local email and collaboration tools to continue running smoothly. But as the weeks pass, those caches would inevitably grow stale. News sites, app stores, and streaming platforms would stop refreshing, while critical security updates, certificates, and antivirus definitions would no longer be available, leaving systems exposed to risk. If isolation lasted for months, the impact would be much more profound. Banking and card clearing would be suspended, SaaS-driven ERP systems would break down, and Greenland would slide into a “local only” economy, relying on cash and manual processes. Over time, the social impact would also be felt, with the population cut off from global news, communication, and social platforms. Caching, therefore, buys time, but not independence. It can make an outage manageable in the short term, yet in the long run, Greenland’s economy, security, and society depend on reconnecting to the outside world.

The Bottom line. Full sovereignty is unrealistic for a sparse, widely distributed country, and I don’t think it makes sense to strive for it; it is simply impractical. In my opinion, partial sovereignty is both achievable and valuable. Make on-island first the default, keep essential public services and domestic comms running during cuts, and interoperate seamlessly when subsea links and satellites are up. This shifts Greenland from its current state of strategic fragility to one of managed resilience, without cutting itself off from the rest of the internet.

ACKNOWLEDGEMENT.

I want to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article. I would also like to thank Dr. Signe Ravn-Højgaard, from “Tænketanken Digital Infrastruktur”, and the Sermitsiaq article “Digital afhængighed af udlandet” (“Digital dependency on foreign countries”) by Poul Krarup, for inspiring this work, which is also a continuation of my previous research and article titled “Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction”. I would like to thank Lasse Jarlskov for his insightful comments and constructive feedback on this article. His observations regarding routing, OSI layering, and the practical realities of Greenland’s network architecture were both valid and valuable, helping refine several technical arguments and improve the overall clarity of the analysis.

CODE AND DATASETS.

The Python code and datasets used in the analysis are available on my public GitHub: https://github.com/drkklarsen/greenland_digital_infrastructure_mapping (the code is still a work in progress, but it is functional and will generate data similar to that analyzed in this article).

ABBREVIATION LIST.

ASN — Autonomous System Number: A unique identifier assigned to a network operator that controls its own routing on the Internet, enabling the exchange of traffic with other networks using the Border Gateway Protocol (BGP).

BGP — Border Gateway Protocol: The primary routing protocol of the Internet, used by Autonomous Systems to exchange information about which paths data should take across networks.

CDN — Content Delivery Network: A system of distributed servers that cache and deliver content (such as videos, software updates, or websites) closer to users, reducing latency and dependency on international links.

CLOUD Act — Clarifying Lawful Overseas Use of Data Act: A U.S. law that allows American authorities to demand access to data stored abroad by U.S.-based cloud providers, raising sovereignty and privacy concerns for other countries.

DMARC — Domain-based Message Authentication, Reporting and Conformance: An email security protocol that tells receiving servers how to handle messages that fail authentication checks, protecting against spoofing and phishing.

DKIM — DomainKeys Identified Mail: An email authentication method that uses cryptographic signatures to verify that a message has not been altered and truly comes from the claimed sender.

DNS — Domain Name System: The hierarchical system that translates human-readable domain names (like example.gl) into IP addresses that computers use to locate servers.

ERP — Enterprise Resource Planning: A type of integrated software system that organizations use to manage business processes such as finance, supply chain, HR, and operations.

GL — Greenland (country code top-level domain, .gl): The internet country code for Greenland, used for local domain names such as nanoq.gl.

GovCloud — Government Cloud: A sovereign or dedicated cloud infrastructure designed for hosting public-sector applications and data within national jurisdiction.

HSM — Hardware Security Module: A secure physical device that manages cryptographic keys and operations, used to protect sensitive data and digital transactions.

IoT — Internet of Things: A network of physical devices (sensors, appliances, vehicles, etc.) connected to the internet, capable of collecting and exchanging data.

IP — Internet Protocol: The fundamental addressing system of the Internet, enabling data packets to be sent from one computer to another.

ISP — Internet Service Provider: A company or entity that provides customers with access to the internet and related services.

IXP — Internet Exchange Point: A physical infrastructure where networks interconnect directly to exchange internet traffic locally rather than through international transit links.

MX — Mail Exchange (Record): A type of DNS record that specifies the mail servers responsible for receiving email on behalf of a domain.

PKI — Public Key Infrastructure: A framework for managing encryption keys and digital certificates, ensuring secure electronic communications and authentication.

SaaS — Software as a Service: Cloud-based applications delivered over the internet, such as Microsoft 365 or Google Workspace, usually hosted on servers outside the country.

SPF — Sender Policy Framework: An email authentication protocol that defines which mail servers are authorized to send email on behalf of a domain, reducing the risk of forgery.

Tusass — Greenland’s national telecommunications provider: Formerly Tele Greenland, responsible for submarine cables, satellite links, and domestic connectivity.

UAV — Unmanned Aerial Vehicle: An aircraft without a human pilot on board, often used for surveillance, monitoring, or communications relay.

UUV — Unmanned Underwater Vehicle: A robotic submarine used for monitoring, surveying, or securing undersea infrastructure such as cables.

Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?

Is LEO satellite broadband a cost-effective and capable option for rural areas of Europe? Given that most seem to agree that LEO satellites will not replace mobile broadband networks, it seems only natural to ask whether they might help the EU Commission’s Digital Decade Policy Programme (DDPP) reach its 2030 goal: all EU households (HH) covered by gigabit connections delivered by so-called very high-capacity networks, including gigabit-capable fiber-optic and 5G networks (i.e., focusing only on the digital infrastructure pillar of the DDPP).

As of 2023, more than €80 billion had been allocated in national broadband strategies and through EU funding instruments, including the Connecting Europe Facility and the Recovery and Resilience Facility. However, based on current deployment trajectories and cost structures, an additional €120 billion or more is expected to be needed to close the remaining connectivity gap for the approximately 15.5 million rural homes without a gigabit option in 2023, bringing the total investment requirement to over €200 billion. The shortfall is most acute in rural and hard-to-reach regions, where network deployment is significantly more expensive. In these areas, connecting a single household with high-speed broadband infrastructure, especially via FTTP, can easily exceed €10,000 in public subsidy, given the long distances and low density of premises. It would be a very “cheap” alternative for Europe if a non-EU-based (i.e., US) satellite constellation could close even part of the gigabit coverage gap. However, given current geopolitical factors, €200 billion could instead enable Europe to establish its own large LEO satellite constellation, provided it can match (or outperform) SpaceX’s unit economics rather than those of its current IRIS² satellite program.
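A quick back-of-envelope check of these figures, averages only, since actual per-premise costs vary enormously (which is exactly why individual hard-to-reach homes can exceed €10,000):

```python
# Sanity check of the EU gigabit funding figures cited above.
allocated_eur = 80e9        # subsidies already allocated as of 2023
additional_eur = 120e9      # estimated remaining gap to 2030
rural_homes_2023 = 15.5e6   # rural homes without a gigabit option (2023)

total_eur = allocated_eur + additional_eur            # total requirement
avg_subsidy_per_home = additional_eur / rural_homes_2023  # average of the gap

print(f"Total: €{total_eur/1e9:.0f}B, average gap per home: €{avg_subsidy_per_home:,.0f}")
```

The €120 billion gap spread evenly over 15.5 million homes averages roughly €7,700 each, which is consistent with the hardest-to-reach premises costing well above €10,000 while denser rural clusters cost far less.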

In this article, my analysis focuses on direct-to-dish low Earth orbit (LEO) satellites with expected capabilities comparable to, or exceeding, those projected for SpaceX’s Starlink V3, which is anticipated to deliver up to 1 Terabit per second of total downlink capacity. For such satellites to represent a credible alternative to terrestrial gigabit connectivity, several thousand would need to be in operation. This would allow overlapping coverage areas, increasing effective throughput to household outdoor dishes across sparsely populated regions. Reaching such a scale may take years, even under optimistic deployment scenarios, highlighting the importance of aligning policy timelines with technological maturity.
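To see why "several thousand" satellites are needed, a rough sizing sketch helps. Every parameter below is an assumption for illustration (satellites simultaneously serving Europe, the fraction of capacity steerable to rural areas, the contention ratio for a gigabit tier); none of it is a Starlink specification beyond the 1 Tbps figure cited above.

```python
# Rough, assumption-driven sizing of a V3-class LEO constellation over Europe.
sat_capacity_bps = 1e12   # cited: up to 1 Tbps downlink per satellite
sats_over_region = 40     # ASSUMED satellites simultaneously serving Europe
usable_fraction = 0.3     # ASSUMED share of beam capacity over rural areas
oversubscription = 20     # ASSUMED contention ratio for a 1 Gbps tier

effective_bps = sat_capacity_bps * sats_over_region * usable_fraction
households_at_gigabit = effective_bps * oversubscription / 1e9  # 1 Gbps each
```

Under these assumptions, 40 satellites over Europe support only a couple of hundred thousand households at a contended gigabit tier, orders of magnitude below the millions of “unconnectable” homes, which is why a constellation of several thousand satellites, with overlapping coverage, is the relevant scale.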

GIGABITS IN THE EU – WHERE ARE WE, AND WHERE DO WE THINK WE WILL GO?

  • In 2023, Fibre-to-the-Premises (FTTP) rural HH coverage was ca. 52%. For the EU28, this means that approximately 16 million rural homes lack fiber coverage.
  • By 2030, projected FTTP deployment in the EU28 will result in household coverage reaching almost 85% of all rural homes (under so-called BaU conditions), leaving approximately 5.5 million households without it.
  • Due to inferior economics, it is estimated that approximately 10% to 15% of European households are “unconnectable” by FTTP (although not necessarily by FWA or broadband mobile in general).
  • The EC estimated (in 2023) that over 80 billion euros in subsidies had been allocated in national budgets, with an additional 120 billion euros required to close the gigabit ambition gap by 2030 (i.e., over 10,000 euros per remaining rural household in 2023).

So, there is a considerable number of so-called “unconnectable” households within the European Union (i.e., EU28). These are, for example, isolated dwellings away from inhabited areas (e.g., settlements, villages, towns, and cities). They often lack the most basic fixed communications infrastructure, although some may have old copper lines or only relatively poor mobile coverage.

The figure below illustrates the actual state of FTTP deployment in rural households in 2023 (orange bars) as well as a rural deployment scenario that extends FTTP deployment to 2030, using, for each year, the maximum of the previous year’s deployment level and the average of the last three years’ deployment levels. Above 80% coverage, deployment grows by 1% per annum (an arbitrarily chosen taper). The data source is “Digital Decade 2024: Broadband Coverage in Europe 2023” by the European Commission. The FTTP pace has been chosen individually for suburban and rural areas to match the expectations expressed in the reports for 2030.

ARE LEO DIRECT-TO-DISH (D2D) SATELLITES A CREDIBLE ALTERNATIVE FOR THE “UNCONNECTABLES”?

  • For Europe, a non-EU-based (i.e., US-based) satellite constellation could be a very cost-effective alternative to closing the gigabit coverage gap.
  • Megabit connectivity (e.g., up to 100+ Mbps) is already available today with SpaceX Starlink LEO satellites in rural areas with poor broadband alternatives.
  • The SpaceX Starlink V2 satellite can provide approximately 100 Gbps (V1.5 ~ 20+ Gbps), and its V3 is expected to deliver 1,000 Gbps within the satellite’s coverage area, with a maximum coverage radius of over 500 km.
  • The V3 may have 320 beams (or more), each providing approximately ~3 Gbps (i.e., 320 x 3 Gbps is ca. 1 Tbps). With a frequency re-use factor of 40, 25 Gbps can be supplied within a unique coverage area. With “adjacent” satellites (off-nadir), the capacity within a unique coverage area can be enhanced by additional beams that overlap the primary satellite (nadir).
  • With an estimated EU28 “unconnectable” household density of approximately 1.5 per square kilometer, a single satellite coverage area of 15,000 square kilometers would contain more than 20,000 households, sharing the roughly 20–25 Gbps of unique-area capacity.
  • At a peak-hour user concurrency of 15% and a per-user demand of 1 Gbps, the aggregate downlink demand would reach 3 terabits per second (Tbps). Relative to a single 1 Tbps satellite, this is an oversubscription ratio of approximately 3:1; alternatively, the demand could be served by three overlapping satellites.
  • This assumes a 100% take-up rate of the unconnectable HHs and that each would select a 1 Gbps service (assuming such would be available). In rural areas, the take-up rate may not be significantly higher than 60%, and not all households will require a 1 Gbps service.
  • This also assumes that there are no alternatives to LEO satellite direct-to-dish service, which seems unlikely for at least some of the 20,000 “unconnectable” households. Given the 5G coverage obligations typically attached to spectrum licenses, one might hope for some decent 5G coverage; alas, it is unlikely to be gigabit-class in deep rural and isolated areas.
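The dimensioning in the bullets above can be reproduced with a few lines of back-of-the-envelope arithmetic. The beam count, per-beam throughput, re-use factor, and household figures are the assumptions stated above, not official Starlink specifications:

```python
# Back-of-the-envelope dimensioning for a Starlink V3-like satellite,
# using the assumptions from the bullets above (not official figures).

beams = 320                # assumed number of beams per satellite
gbps_per_beam = 3.0        # assumed throughput per beam (Gbps)
reuse_factor = 40          # assumed frequency re-use factor

total_capacity = beams * gbps_per_beam                 # ~960 Gbps, i.e. ca. 1 Tbps
unique_area_capacity = total_capacity / reuse_factor   # 24 Gbps (text quotes ~25,
                                                       # rounding total capacity to 1 Tbps)

households = 20_000        # "unconnectable" homes in a 15,000 km^2 coverage area
concurrency = 0.15         # assumed peak-hour concurrency
demand_gbps = 1.0          # assumed per-user demand (Gbps)

peak_demand = households * concurrency * demand_gbps   # 3,000 Gbps = 3 Tbps
oversubscription = peak_demand / 1_000                 # vs. a single 1 Tbps satellite

print(f"Total capacity:       {total_capacity:.0f} Gbps")
print(f"Unique-area capacity: {unique_area_capacity:.0f} Gbps")
print(f"Peak demand:          {peak_demand:.0f} Gbps")
print(f"Oversubscription:     {oversubscription:.0f}:1 -> ~3 overlapping satellites")
```

The 3:1 result is what motivates the "three overlapping satellites" remark above; with a 60% take-up and sub-gigabit demand, the ratio drops well below that.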

For example, consider the Starlink LEO satellite V1.5, which has a total capacity of approximately 25 Gbps, comprising 32 beams that deliver 800 Mbps per beam (with dual polarization) to a ground-based user dish. It can provide a maximum of 6.4 Gbps over a minimum area of ca. 6,000 km² at nadir, with an Earth-based dish directly beneath the satellite. If the coverage area is situated in a UK-based rural area, for example, we would expect to find, on average, 150,000 rural households, assuming an average of 25 rural homes per km². If a household demands 100 Mbps at peak, only around 60 households can be online at full load concurrently per area. With 10% concurrency, this implies a total of 600 households per area out of 150,000 homes. Thus, 1 in 250 households could be allowed to subscribe to a Starlink V1.5 if the target is 100 Mbps per home and a concurrency factor of 10% within the coverage area. This is equivalent to stating that the oversubscription ratio is 250:1, and it reflects the tension between available satellite capacity and theoretical rural demand density. In rural UK areas, the household density is too high relative to capacity to allow universal subscription at 100 Mbps unless more satellites provide overlapping service.

For a V1.5 satellite, we can support four regions (i.e., frequency reuse groups), each with a maximum throughput of 6.4 Gbps. Thus, the satellite can support a total of 2,400 households (i.e., 4 x 600) with a peak demand of 100 Mbps and a concurrency rate of 10%. As other satellites (off-nadir) can support the primary satellite, some areas’ demand may be served by two to three different satellites, providing a multiplier effect that increases the capacity on offer.

The Starlink V2 satellite is reportedly capable of supporting up to a total of 100 Gbps (approximately four times that of V1.5), while the V3 will support up to 1 Tbps, which is 40 times that of V1.5. The number of beams and, consequently, the number of independent frequency groups, as well as spectral efficiency, are expected to improve over V1.5, all factors that will enhance the total capacity of the newer Starlink satellite generations.
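The V1.5 worked example can be checked with the same kind of arithmetic. Note that exact division gives 64 concurrent users and 640 subscribers per area; the text rounds these to 60 and 600, which is why it arrives at 250:1 rather than the exact ~234:1:

```python
# V1.5 worked example; all inputs are the text's assumptions.

beams = 32
mbps_per_beam = 800
reuse_groups = 4                        # independent frequency re-use groups

beams_per_area = beams // reuse_groups                 # 8 beams per unique area
area_capacity_mbps = beams_per_area * mbps_per_beam    # 6,400 Mbps at nadir

demand_mbps = 100                       # per-household peak demand
concurrency = 0.10                      # assumed peak concurrency

concurrent_users = area_capacity_mbps // demand_mbps        # 64 (text rounds to ~60)
subscribers_per_area = round(concurrent_users / concurrency)  # 640 (text: ~600)

rural_households = 150_000              # 6,000 km^2 x 25 homes/km^2
oversubscription = rural_households / subscribers_per_area  # ~234:1 (text: 250:1)

satellite_total = reuse_groups * subscribers_per_area  # 2,560 (text, rounded: 2,400)
print(f"{concurrent_users} concurrent users, {subscribers_per_area} subscribers per area")
print(f"Oversubscription ~{oversubscription:.0f}:1; {satellite_total} homes per satellite")
```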

By 2030, the EU28 rural areas are expected to achieve nearly 85% FTTP coverage under business-as-usual deployment scenarios. This would leave approximately 5.5 million households, referred to as “unconnectables,” without direct access to high-speed fiber. These households are typically isolated, located in sparsely populated or geographically challenging regions, where the economics of fiber deployment become prohibitive. Although there may be alternative broadband options, such as FWA, 5G mobile coverage, or copper, it is unlikely that such “unconnectable” homes would sustainably have a gigabit connection.

This may be where LEO satellite constellations enter the picture as a possible alternative to deploying fiber optic cables in uneconomical areas, such as those that are unconnectable. The anticipated capabilities of Starlink’s third-generation (V3) satellites, offering approximately 1 Tbps of total downlink capacity with advanced beamforming and frequency reuse, would make them a viable candidate for servicing low-density rural areas, assuming reasonable traffic models similar to those of an Internet Service Provider (ISP). With modest overlapping coverage from two or three such satellites, these systems could deliver gigabit-class service to tens of thousands of dispersed households without (much) oversubscription, even assuming relatively high concurrency and usage.

Considering this, an LEO constellation only slightly more capable than SpaceX’s Starlink V3 satellite appears able to fully support the broadband needs of the remaining unconnected European households expected by 2030. This also aligns well with the technical and economic strengths of LEO satellites: they are ideally suited for delivering high-capacity service to regions where population density is too low to justify terrestrial infrastructure, yet digital inclusion remains equally essential.

LOW-EARTH ORBIT SATELLITES DIRECT-TO-DISRUPTION.

In my blog “Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?”, I provided some straightforward reasons why LEO satellites with direct-to-unmodified-smartphone capabilities (e.g., Lynk Global, AST SpaceMobile) will not make existing cellular networks obsolete. Such services are of most value in remote or very rural areas with no cellular coverage (as explained very nicely by Lynk Global), offering an alternative to satellite phones such as Iridium, and are thus complementary to existing terrestrial cellular networks. Despite the hype, we should therefore not expect a direct disruption of regular terrestrial cellular networks by LEO satellite D2C providers.

Of course, the question could also be asked whether LEO satellite direct-to-dish services, delivered to an outdoor (terrestrial) dish, could threaten existing fiber optic networks, their business case, and their value proposition. After all, the SpaceX Starlink V3 satellite, not yet operational, is expected to support 1 Terabit per second (Tbps) over a coverage area of several thousand kilometers in diameter. Achieving a 10x leap in throughput from the present generation V2 (~100 Gbps) would no doubt be an amazing technological feat for SpaceX.

However, while a V3-like satellite may offer an (impressive) total capacity of 1 Tbps, this capacity is not uniformly available across its entire footprint. It is distributed across multiple beams, potentially 256 or more, each with a bandwidth of approximately 4 Gbps (i.e., 1 Tbps / 256 beams). With a frequency reuse factor of, for example, 5, the effective usable capacity per unique coverage area becomes a fraction of the satellite’s total throughput. This means that within any given beam footprint, the satellite can only support a limited number of concurrent users at high bandwidth levels.

As a result, such a satellite cannot support more than roughly a thousand households with concurrent 1 Gbps demand in any single area (or, alternatively, about 10,000 households with 100 Mbps concurrent demand). This level of support would be equivalent to a small FTTP (sub)network serving no more than 20,000 households at a 50% uptake rate (i.e., 10,000 connected homes) and assuming a concurrency of 10%. A deployment of this scale would typically be confined to a localized, dense urban or peri-urban area, rather than the vast rural regions that LEO systems are expected to serve.
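The FTTP equivalence in the paragraph above follows from straightforward concurrency arithmetic; the uptake and concurrency values are the text’s assumptions:

```python
# How large an FTTP (sub)network does one 1 Tbps satellite correspond to,
# using the text's uptake and concurrency assumptions?

satellite_gbps = 1_000
concurrent_1g = satellite_gbps / 1.0     # ~1,000 homes concurrently at 1 Gbps
concurrent_100m = satellite_gbps / 0.1   # ~10,000 homes concurrently at 100 Mbps

concurrency = 0.10                       # assumed concurrency
uptake = 0.50                            # assumed take-up rate

connected = concurrent_1g / concurrency  # 10,000 connected homes
homes_passed = connected / uptake        # 20,000 homes-passed FTTP subnetwork

print(f"{concurrent_1g:.0f} concurrent @ 1 Gbps "
      f"~= an FTTP network passing {homes_passed:.0f} homes")
```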

In contrast, a single Starlink V3-like satellite would cover a vast region, capable of supporting similar or greater numbers of users, including those in remote, low-density areas that FTTP cannot economically reach. The satellite solution described here is thus not designed to densify urban broadband, but rather to reach rural, remote, and low-density areas where laying fiber is logistically or economically impractical. Therefore, such satellites and conventional large-scale fiber networks are not in direct competition, as satellites cannot match fiber’s density, scale, or cost-efficiency in high-demand areas. Instead, satellite service complements fiber infrastructure and reinforces the case for hybrid infrastructure strategies, in which fiber serves the dense core and LEO satellites extend the digital frontier.

However, terrestrial providers must closely monitor their FTTP deployment economics and refrain from extending too far into deep rural areas beyond a certain household density, a threshold that is likely to rise over time as satellite capabilities improve. The premise of this blog is that capable LEO satellites by 2030 could serve unconnected households that are unlikely to have any commercial viability for terrestrial fiber and have no other gigabit coverage option. Within the EU28, this represents approximately 5.5 million remote households. A Starlink V3-like 1 Tbps satellite could provide a gigabit service (occasionally) to those households, and certainly hundreds of megabits per second per isolated household. Moreover, it is likely that over time, more capable satellites will be launched, with SpaceX being the most likely candidate for such an endeavor if it maintains its current pace of innovation. Such satellites will likely become increasingly interesting for household densities above 2 households per square kilometer. However, where an FTTP network has already been deployed, it seems unlikely that satellite broadband would render the terrestrial infrastructure obsolete, as long as the fiber service is priced competitively with the satellite alternative.

LEO satellite direct-to-dish (D2D) based broadband networks may be a credible and economical alternative to deploying fiber to low-density rural households. Over time, the household-density boundary at which a gigabit satellite D2D connection becomes a viable substitute for a fiber connection may shift inward, from deep rural, low-density areas toward somewhat denser ones. This reinforces the case for hybrid infrastructure strategies, in which fiber serves the denser regions and LEO satellites extend the digital frontier to remote and rural areas.

THE USUAL SUSPECT – THE PUN INTENDED.

By 2030, SpaceX’s Starlink will operate one of the world’s most extensive low Earth orbit (LEO) satellite constellations. As of early 2025, the company has launched more than 6,000 satellites into orbit; however, most of these, including V1, V1.5, and V2, are expected to cease operation by 2030. Industry estimates suggest that Starlink could have between 15,000 and 20,000 operational satellites by the end of the decade, which I anticipate to be mainly V3 and possibly a share of V4. This projection depends largely on successfully scaling SpaceX’s Starship launch vehicle, which is designed to deploy 60 or more next-generation V3 satellites per mission at the current cadence. However, it is essential to note that while SpaceX has filed applications with the International Telecommunication Union (ITU) and obtained FCC authorization for up to 12,000 satellites, the frequently cited figure of 42,000 satellites includes additional satellites that are currently proposed but not yet fully authorized.

The figure above, based on an idea by John Strand of Strand Consult, provides an illustrative comparison of the rapid innovation and manufacturing cycles of SpaceX LEO satellites versus the slower progression of traditional satellite development and spectrum policy processes, highlighting the growing gap between technological advancement and regulatory adaptation. This is one of the biggest challenges that regulatory institutions and policy regimes face today.

Amazon’s Project Kuiper has a much smaller planned constellation. The Federal Communications Commission (FCC) has authorized Amazon to deploy 3,236 satellites under its initial phase, with a deadline requiring that at least 1,600 be launched and operational by July 2026. Amazon began launching test satellites in 2024 and aims to roll out its service in late 2025 or early 2026. On April 28, 2025, Amazon launched its first 27 operational satellites for Project Kuiper aboard a United Launch Alliance (ULA) Atlas V rocket from Cape Canaveral, Florida. This marks the beginning of Amazon’s deployment of its planned 3,236-satellite constellation aimed at providing global broadband internet coverage. Though Amazon has hinted at potential expansion beyond its authorized count, any Phase 2 remains speculative and unapproved. If such an expansion were pursued and granted, the constellation could eventually grow to 6,000 satellites, although no formal filings have yet been made to support the higher number.

China is rapidly advancing its low Earth orbit (LEO) satellite capabilities, positioning itself as a formidable competitor to SpaceX’s Starlink by 2030. Two major Chinese LEO satellite programs are at the forefront of this effort: the Guowang (13,000) and Qianfan (15,000) constellations. So, by 2030, it is reasonable to expect that China will field a national LEO satellite broadband system with thousands of operational satellites, focused not just on domestic coverage but also on extending strategic connectivity to Belt and Road Initiative (BRI) countries, as well as regions in Africa, Asia, and South America. Unlike SpaceX’s commercially driven approach, China’s system is likely to be closely integrated with state objectives, combining broadband access with surveillance, positioning, and secure communication functionality. While it remains unclear whether China will match SpaceX’s pace of deployment or technological performance by 2030, its LEO ambitions are unequivocally driven by geopolitical considerations. They will likely shape European spectrum policy and infrastructure resilience planning in the years ahead. Guowang and Qianfan are emblematic of China’s dual-use strategy, which involves developing technologies for both civilian and military applications. This approach is part of China’s broader Military-Civil Fusion policy, which seeks to integrate civilian technological advancements into military capabilities. The dual-use nature of these satellite constellations raises concerns about their potential military applications, including surveillance and communication support for the People’s Liberation Army.

AN ILLUSTRATION OF COVERAGE – UNITED KINGDOM.

It takes approximately 172 Starlink beams to cover the United Kingdom, with 8 to 10 satellites overhead simultaneously. Maintaining such persistent UK coverage requires a constellation on the order of 150 satellites across appropriate orbits. Starlink’s 53° inclination orbital shell is optimized for mid-latitude regions, providing frequent satellite passes and dense, overlapping beam coverage over areas like southern England and central Europe. This results in higher throughput and more consistent connectivity with fewer satellites. In contrast, regions north of 53°N, such as northern England and Scotland, lie outside this optimal zone and depend on higher-inclination shells (70° and 97.6°), which have fewer satellites and wider, less efficient beams. As a result, coverage in these northern areas is less dense, with lower signal quality and increased latency.

For this blog, I developed a Python script of fewer than 600 lines of code (it’s a physicist’s code, so unlikely to be super efficient) to simulate and analyze Starlink’s satellite coverage and throughput over the United Kingdom using real orbital data. By integrating satellite propagation, beam modeling, and geographic visualization, it enables a detailed assessment of regional performance from current Starlink deployments across multiple orbital shells. Its primary purpose is to assess how the currently deployed Starlink constellation performs over UK territory by modeling where satellites pass, how their beams are steered, and how often any given area receives coverage.

The simulation draws live TLE (Two-Line Element) data from Celestrak, a well-established source for satellite orbital elements. Using the Skyfield library, the code propagates the positions of active Starlink satellites over a 72-hour period, sampling every 5 minutes to track their subpoints across the United Kingdom. There is no limitation on the duration or sampling time. Choosing a more extended simulation period, such as 72 hours, provides a more statistically robust and temporally representative view of satellite coverage by averaging out orbital phasing artifacts and short-term gaps. It ensures that all satellites complete multiple orbits, allowing for more uniform sampling of ground tracks and beam coverage, especially from shells with lower satellite densities, such as the 70° and 97.6° inclinations. This results in smoother, more realistic estimates of average signal density and throughput across the entire region.
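The shell-classification step of such a simulation can be illustrated without the full Skyfield pipeline. In the standard TLE format, the orbital inclination (in degrees) occupies columns 9–16 of line 2, so assigning a satellite to the nearest Starlink shell is a one-liner. The TLE line-2 strings below are format-correct illustrations only, not live Celestrak data:

```python
# Classify satellites into Starlink orbital shells from a TLE line-2 string.
# In the TLE format, the inclination in degrees sits at columns 9-16
# (0-indexed slice 8:16) of line 2.

DEG = "\N{DEGREE SIGN}"

def shell_from_tle_line2(line2: str) -> str:
    """Return the label of the Starlink shell nearest to the TLE's inclination."""
    inclination = float(line2[8:16])
    shells = (53.0, 70.0, 97.6)
    nearest = min(shells, key=lambda s: abs(s - inclination))
    return f"{nearest:g}{DEG} shell"

# Format-correct, illustrative TLE line-2 strings (not real orbital data):
samples = [
    "2 44714  53.0551  55.6263 0001487  84.6560 275.4592 15.06404632275034",
    "2 99001  70.0012 123.4567 0001200  10.0000 350.0000 14.98000000 10001",
    "2 99002  97.6005 200.1234 0001100  20.0000 340.0000 15.05000000 10002",
]
for line in samples:
    print(shell_from_tle_line2(line))
```

In the actual script, the same classification is applied to every active Starlink satellite after loading live TLEs, and each shell is then propagated and accumulated separately.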

Each satellite is classified into one of three orbital shells based on inclination angle: 53°, 70°, and 97.6°. These shells are simulated separately and collectively to understand their individual and combined contributions to UK coverage. The 53° shell dominates service in the southern part of the UK, characterized by its tight orbital band and high satellite density (see the Table below). The 70° shell supplements coverage in northern regions, while the 97.6° polar shell offers sparse but critical high-latitude support, particularly in Scotland and surrounding waters. The simulation assumes several (critical) parameters for each satellite type, including the number of beams per satellite, the average beam radius, and the estimated throughput per beam. These assumptions reflect engineering estimates and publicly available Starlink performance information, but are deliberately simplified to produce regional-level coverage and throughput estimates, rather than user-specific predictions. The simulation does not account for actual user terminal distribution, congestion, or inter-satellite link (ISL) performance, focusing instead on geographic signal and capacity potential.

These parameters were used to infer beam footprints and assign realistic signal density and throughput values across the UK landmass. The satellite type was inferred from its shell (e.g., most 53° shell satellites are currently V1.5), and beam properties were adjusted accordingly.

The table above presents the core beam modeling parameters and satellite-specific assumptions used in the Starlink simulation over the United Kingdom. It includes general values for beam steering behavior, such as Gaussian spread, steering limits, city-targeting probabilities, and beam spacing constraints, as well as performance characteristics tied to specific satellite generations to the extent it is known (e.g., Starlink V1.5, V2 Mini, and V2 Full). These assumptions govern the placement of beams on the Earth’s surface and the capacity each beam can deliver. For instance, the City Exclusion Radius of 0.25 degrees corresponds to a ~25 km buffer around urban centers, where beam placement is probabilistically discouraged. Similarly, the beam radius and throughput per beam values align with known design specifications submitted by SpaceX to the U.S. Federal Communications Commission (FCC), particularly for Starlink’s V1.5 and V2 satellites. The table above also defines overlap rules, specifying the maximum number of beams that can overlap in a region and the maximum number of satellites that can contribute beams to a given point. This helps ensure that simulations reflect realistic network constraints rather than theoretical maxima.

Overall, the developed code offers a geographically and physically grounded simulation of how the existing Starlink network performs over the UK. It helps explain observed disparities in coverage and throughput by visualizing the contribution of each shell and satellite generation. This modeling approach enables planners and researchers to quantify satellite coverage performance at national and regional scales, providing insight into current service levels as well as a basis for exploring future constellation evolution, which is not discussed here.

The figure illustrates a 72-hour time-averaged Starlink coverage density over the UK. The asymmetric signal strength pattern reflects the orbital geometry of Starlink’s 53° inclination shell, which concentrates satellite coverage over southern and central England. Northern areas receive less frequent coverage due to fewer satellite passes and reduced beam density at higher latitudes.

This image above presents the Starlink Average Coverage Density over the United Kingdom, a result from a 72-hour simulation using real satellite orbital data from Celestrak. It illustrates the mean signal exposure across the UK, where color intensity reflects the frequency and density of satellite beam illumination at each location.

At the center of the image, a bright yellow core indicating the highest signal strength is clearly visible over the English Midlands, covering cities such as Birmingham, Leicester, and Bristol. The signal strength gradually declines outward in a concentric pattern—from orange to purple—as one moves northward into Scotland, west toward Northern Ireland, or eastward along the English coast. While southern cities, such as London, Southampton, and Plymouth, fall within high-coverage zones, northern cities, including Glasgow and Edinburgh, lie in significantly weaker regions. The decline in signal intensity is especially apparent beyond the 56°N latitude. This pattern is entirely consistent with what we know about the structure of the Starlink satellite constellation. The dominant contributor to coverage in this region is the 53° inclination shell, which contains 3,848 satellites spread across 36 orbital planes. This shell is designed to provide dense, continuous coverage to heavily populated mid-latitude regions, such as the southern United Kingdom, continental Europe, and the continental United States. However, its orbital geometry restricts it to a latitudinal range that ends near 53 to 54°N. As a result, southern and central England benefit from frequent satellite passes and tightly packed overlapping beams, while the northern parts of the UK do not. Scotland, in particular, lies at or beyond the shell’s effective coverage boundary.
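The coverage edge of the 53° shell can be sanity-checked with simple spherical geometry: a shell covers latitudes up to roughly its inclination plus the angular radius of a beam footprint, which follows from orbital altitude and the user terminal’s minimum elevation mask. The 550 km altitude and 40° elevation mask below are illustrative assumptions, not Starlink specifications:

```python
import math

# Angular radius of a satellite's coverage footprint on a spherical Earth:
#   lambda = arccos( (Re / (Re + h)) * cos(el) ) - el
R_EARTH_KM = 6371.0
altitude_km = 550.0        # assumed shell altitude
min_elevation_deg = 40.0   # assumed user-terminal elevation mask

el = math.radians(min_elevation_deg)
ang_radius = math.acos((R_EARTH_KM / (R_EARTH_KM + altitude_km)) * math.cos(el)) - el
ang_radius_deg = math.degrees(ang_radius)          # ~5.2 deg
footprint_km = ang_radius * R_EARTH_KM             # ~570 km footprint radius

inclination_deg = 53.0
max_covered_lat = inclination_deg + ang_radius_deg  # hard geometric edge, ~58 deg N

print(f"Footprint radius: {ang_radius_deg:.1f} deg (~{footprint_km:.0f} km)")
print(f"53 deg shell reaches up to ~{max_covered_lat:.1f} deg N at this mask")
```

Note that ~58°N is the hard geometric limit under these assumptions; effective coverage density already degrades a few degrees earlier, consistent with the weakening seen beyond 53–56°N in the heatmap.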

The simulation may indicate how Starlink’s design prioritizes population density and market reach. Northern England receives only partial benefit, while Scotland and Northern Ireland fall almost entirely outside the core coverage of the 53° shell. Although some coverage in these areas is provided by higher inclination shells (specifically, the 70° shell with 420 satellites and the 97.6° polar shell with 227 satellites), these are sparser in both the number of satellites and the orbital planes. Their beams may also be broader and thus less focused, resulting in lower average signal strength in high-latitude regions.

So, why is the coverage not made up of textbook-nice hexagonal cells with uniform coverage across the UK? The simple answer is that real-world satellite constellations don’t behave like the static, idealized diagrams of hexagonal beam tiling often used in textbooks or promotional materials. What you’re seeing in the image is a time-averaged simulation of Starlink’s actual coverage over the UK, reflecting the dynamic and complex nature of low Earth orbit (LEO) systems like Starlink’s.

Unlike geostationary satellites, LEO satellites orbit the Earth roughly every 90 minutes and move rapidly across the sky. Each satellite only covers a specific area for a short period before passing out of view over the horizon. This movement causes beam coverage to constantly shift, meaning that any given spot on the ground is covered by different satellites at different times. While individual satellites may emit beams arranged in a roughly hexagonal pattern, these patterns move, rotate, and deform continuously as the satellite passes overhead. The beams also vary in shape and strength depending on their angle relative to the Earth’s surface, becoming elongated and weaker when projected off-nadir, i.e., when the satellite is not directly overhead.

Another key reason lies in the structure of Starlink’s orbital configuration. Most of the UK’s coverage comes from satellites in the 53° inclination shell, which is optimized for mid-latitude regions. As a result, southern England receives significantly denser and more frequent coverage than Scotland or Northern Ireland, which are closer to or beyond the edge of this shell’s optimal zone. Satellites serving higher latitudes originate from the less densely populated orbital shells at 70° and 97.6°, which result in fewer passes and wider, less efficient beams.

The above heatmap does not illustrate a snapshot of beam locations at a specific time, but rather an averaged representation of how often each part of the UK was covered over a simulation period. This type of averaging smooths out the moment-to-moment beam structure, revealing broader patterns of coverage density instead. That’s why we see a soft gradient from intense yellow in the Midlands, where overlapping beams pass more frequently, to deep purple in northern regions, where passes are less common and less centered.

The figure illustrates an idealized hexagonal beam coverage footprint over the UK. For visual clarity, only a subset of hexagons is shown filled with signal intensity (yellow core to purple edge), to illustrate a textbook-like uniform tiling. In reality, satellite beams from LEO constellations, such as Starlink, are dynamic, moving, and often non-uniform due to orbital motion, beam steering, and geographic coverage constraints.

The two charts below provide a visual confirmation of the spatial coverage dynamics behind the Starlink signal strength distribution over the United Kingdom. Both are based on a 72-hour simulation using real Starlink satellite data obtained from Celestrak, and they accurately reflect the operational beam footprints and orbital tracks of currently active satellites over the United Kingdom.

This figure illustrates time-averaged Starlink coverage density over the UK with beam footprints (left) and satellite ground tracks (right) by orbital shell. The high density of beams and tracks from the 53° shell over southern UK leads to stronger and more consistent coverage. At the same time, northern regions receive fewer, more widely spaced passes from higher-inclination shells (70° and 97.6°), resulting in lower aggregate signal strength.

The first chart displays the beam footprints (i.e., the left side chart above) of Starlink satellites across the UK, color-coded by orbital shell: cyan for the 53° shell, green for the 70° shell, and magenta for the 97° polar shell. The concentration of cyan beam circles in southern and central England vividly demonstrates the dominance of the 53° shell in this region. These beams are tightly packed and frequent, explaining the high signal coverage in the earlier signal strength heatmap. In contrast, northern England and Scotland are primarily served by green and magenta beams, which are more sparse and cover larger areas — a clear indication of the lower beam density from the higher-inclination shells.

The second chart illustrates the satellite ground tracks (i.e., the right side chart above) over the same period and geographic area. Again, the saturation of cyan lines in the southern UK underscores the intensive pass frequency of satellites in the 53° inclined shell. As one moves north of approximately 53°N, these tracks vanish almost entirely, and only the green (70° shell) and magenta (97° shell) paths remain. These higher inclination tracks cross through Scotland and Northern Ireland, but with less spatial and temporal density, which supports the observed decline in average signal strength in those areas.

Together, these two charts provide spatial and orbital validation of the signal strength results. They confirm that the stronger signal levels seen in southern England stem directly from the concentrated beam targeting and denser satellite presence of the 53° shell. Meanwhile, the higher-latitude regions rely on less saturated shells, resulting in lower signal availability and throughput. This outcome is not theoretical — it reflects the live state of the Starlink constellation today.

The figure illustrates the estimated average Starlink throughput across the United Kingdom over a 72-hour window. Throughput is highest over southern and central England due to dense satellite traffic from the 53° orbital shell, which provides overlapping beam coverage and short revisit times. Northern regions experience reduced throughput from sparser satellite passes and less concentrated beam coverage.

The above chart shows the estimated average throughput of Starlink direct-to-dish across the United Kingdom, simulated over 72 hours using real orbital data from Celestrak. The values are expressed in Megabits per second (Mbps) and are presented as a heatmap, where higher throughput regions are shown in yellow and green, and lower values fade into blue and purple. The simulation incorporates actual satellite positions and coverage behavior from the three operational inclination shells currently providing Starlink service to the UK. Consistent with the signal strength, beam footprint density, and orbital track density, the best quality and most supplied capacity are available south of 53°N latitude.

The strongest throughput is concentrated in a horizontal band stretching from Birmingham through London to the southeast, as well as westward into Bristol and south Wales. In this region, the estimated average throughput peaks at over 3,000 Mbps, which can support more than 30 concurrent customers each demanding 100 Mbps within the coverage area or up to 600 households with an oversubscription rate of 1 to 20. This aligns closely with the signal strength and beam density maps also generated in this simulation and is driven by the dense satellite traffic of the 53° inclination shell. These satellites pass frequently over southern and central England, where their beams overlap tightly and revisit times are short. The availability of multiple beams from different satellites at nearly all times drives up the aggregate throughput experienced at ground level. Throughput falls off sharply beyond approximately 54°N. In Scotland and Northern Ireland, values typically stay well below 1,000 Mbps. This reduction directly reflects the sparser presence of higher-latitude satellites from the 70° and 97.6° shells, which are fewer in number and more widely spaced, resulting in lower revisit frequencies and broader, less concentrated beams. The throughput map thus offers a performance-level confirmation of the underlying orbital dynamics and coverage limitations seen in the satellite and beam footprint charts.
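As a quick sanity check, the concurrency arithmetic above can be reproduced in a few lines of Python. The 3,000 Mbps area throughput, 100 Mbps per-user demand, and 1:20 oversubscription ratio are the figures from the text; the helper function itself is just illustrative:

```python
def supported_households(area_throughput_mbps: float,
                         demand_mbps: float = 100.0,
                         oversubscription: int = 20) -> tuple[int, int]:
    """Return (concurrent_users, oversubscribed_households) for a coverage area.

    concurrent_users: how many simultaneous users at `demand_mbps` the area
    throughput can carry; households: the same capacity spread over an
    oversubscription ratio of 1:`oversubscription`.
    """
    concurrent = int(area_throughput_mbps // demand_mbps)
    return concurrent, concurrent * oversubscription

concurrent, households = supported_households(3_000)
print(concurrent, households)  # 30 concurrent 100 Mbps users, 600 households at 1:20
```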

While the above map estimates throughput in realistic terms, it is essential to understand why it does not reflect the theoretical maximum performance implied by Starlink’s physical layer capabilities. For example, a Starlink V1.5 satellite supports eight user downlink channels, each with 250 MHz of bandwidth, which in theory amounts to a total of 2 GHz of spectrum. Similarly, if one assumes 24 beams, each capable of delivering 800 Mbps, that would suggest a satellite capacity in the range of approximately 19–20 Gbps. However, these peak figures assume an ideal case with full spectrum reuse and optimized traffic shaping. In practice, the estimated average throughput shown here is the result of modeling real beam overlap and steering constraints, satellite pass timing, ground coverage limits, and the fact that not all beams are always active or directed toward the same location. Moreover, local beam capacity is shared among users and dynamically managed by the constellation. Therefore, the chart reflects a realistic, time-weighted throughput for a given geographic location, not a per-satellite or per-user maximum. It captures the outcome of many beams intermittently contributing to service across 72 hours, modulated by orbital density and beam placement strategy, rather than theoretical peak link rates.
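The peak figures quoted above follow from simple multiplication. A minimal sketch, where the beam count and per-beam rate are the assumed values stated in the text rather than disclosed specifications:

```python
# Idealized peak-capacity figures for a Starlink V1.5-class satellite,
# as discussed in the text. These are theoretical maxima, not the
# realistic, time-weighted throughput shown in the map.
channels = 8                 # user downlink channels
channel_bw_mhz = 250.0       # bandwidth per channel
total_spectrum_ghz = channels * channel_bw_mhz / 1000.0   # 2.0 GHz of spectrum

beams = 24                   # assumed beam count
beam_capacity_mbps = 800.0   # assumed per-beam rate
peak_gbps = beams * beam_capacity_mbps / 1000.0           # ~19.2 Gbps, i.e., the 19-20 Gbps range
print(total_spectrum_ghz, peak_gbps)
```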

A valuable next step in advancing the simulation model would be the integration of empirical user experience data across the UK footprint. If datasets such as comprehensive Ookla performance measurements (e.g., Starlink-specific download and upload speeds, latency, and jitter) were available with sufficient geographic granularity, the current Python model could be calibrated and validated against real-world conditions. Such data would enable the adjustment of beam throughput assumptions, satellite visibility estimates, and regional weighting factors to better reflect the actual service quality experienced by users. This would enhance the model’s predictive power, not only in representing average signal and throughput coverage, but also in identifying potential bottlenecks, underserved areas, or mismatches between orbital density and demand.

It is also important to note that this work relies on a set of simplified heuristics for beam steering, which are designed to make the simulation both tractable and transparent. In this model, beams are steered within a fixed angular distance from each satellite’s subpoint, with probabilistic biases against cities and simple exclusion zones (i.e., I operate with an exclusion radius of approximately 25 km or more). However, in reality, Starlink’s beam steering logic is expected to be substantially more advanced, employing dynamic optimization algorithms that account for real-time demand, user terminal locations, traffic load balancing, and satellite-satellite coordination via laser interlinks. Starlink has the crucial (and obvious) operational advantage of knowing exactly where its customers are, allowing it to direct capacity where it is needed most, avoid congestion (to an extent), and dynamically adapt coverage strategies. This level of real-time awareness and adaptive control is not replicated in this analysis, which assumes no knowledge of actual user distribution and treats all geographic areas equally.
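For illustration, the kind of steering heuristic described above can be sketched as follows. This is a simplified stand-in, not the actual simulation code: the steering range and exclusion radius are assumptions, and the probabilistic city weighting mentioned in the text is omitted for brevity:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def admissible(target, subpoint, placed_beams,
               max_steer_km=800.0, exclusion_km=25.0):
    """Heuristic beam-placement test: the target must lie within steering
    range of the satellite subpoint and outside the exclusion radius of
    every already-placed beam centre (both radii are illustrative)."""
    if haversine_km(*target, *subpoint) > max_steer_km:
        return False
    return all(haversine_km(*target, *b) >= exclusion_km for b in placed_beams)

# Example: satellite subpoint over the Midlands, one beam already near London.
print(admissible((53.5, -2.2), (52.5, -1.9), [(51.5, -0.1)]))  # True
```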

As such, the current Python code provides a first-order geographic approximation of Starlink coverage and capacity potential, not a reflection of the full complexity and intelligence of SpaceX’s actual network management. Nonetheless, it offers a valuable structural framework that, if calibrated with empirical data, could evolve into a much more powerful tool for performance prediction and service planning.

Median Starlink download speeds in the United Kingdom, as reported by Ookla, from Q4 2022 to Q4 2024, indicate a general decline through 2023 and early 2024, followed by a slight recovery in late 2024. Source: Ookla.com.

The decline in real-world median user speeds, observed in the chart above, particularly from Q4 2023 to Q3 2024, may reflect increasing congestion and uneven coverage relative to demand, especially in areas outside the dense beam zones of the 53° inclination shell. This trend supports the simulation’s findings: while orbital geometry enables strong average coverage in the southern UK, northern regions rely on less frequent satellite passes from higher-inclination shells, which limits performance. The recovery of the median speed in Q4 2024 could be indicative of new satellite deployments (e.g., more V2 Minis or V2 Fulls) beginning to ease capacity constraints, something future simulation extensions could model by incorporating launch timelines and constellation updates.

Illustrates a European-based dual-use Low Earth Orbit (LEO) satellite constellation providing broadband connectivity to Europe’s millions of unconnectables by 2030 on a secure and strategic infrastructure platform covering Europe, North Africa, and the Middle East.

THE 200 BILLION EUROS QUESTION – IS THERE A PATH TO EUROPEAN SPACE INDEPENDENCE?

Let’s start with the answer! Yes!

Is €200 billion, the estimated amount required to close the EU28 gigabit gap between 2023 and 2030, enough for Europe to build its own LEO satellite constellation, and potentially one that is more secure, inclusive, and strategically aligned with its values and geopolitical objectives? In comparison, the European Union’s IRIS² (Infrastructure for Resilience, Interconnectivity and Security by Satellite) program has been allocated a total budget of more than 10 billion euros, aimed at building 264 LEO satellites (at 1,200 km) and 18 MEO satellites (at 8,000 km), mainly by the European “Primes” (i.e., the usual “suspects” among legacy defense contractors), by 2030. For that amount, we should even be able to afford a dedicated European stratospheric drone program for real-world use cases, as opposed to, for example, Airbus’s (AALTO) fragile Zephyr platform, which, in my opinion, is more an impressive showcase of an innovative, sustainable (solar-driven) aerial platform than a practical, robust, high-performance communications platform.

A significant portion of this budget should be dedicated to designing, manufacturing, and launching a European-based satellite constellation. If Europe could match SpaceX’s satellite cost price, rather than that of IRIS² (whose unit price tag appears to be rooted in legacy satellite platform thinking), it could launch a very substantial number of EU-based LEO satellites for 200 billion euros, and obviously also for a lot less. Such a fleet would easily match the number in SpaceX’s long-term plans and vastly surpass the satellites authorized under Kuiper’s first phase. To support such a constellation, Europe must invest heavily in launch infrastructure. While Ariane 6 remains in development, the budget could be leveraged to scale up the Ariane program or to develop a reusable European launch system, mirroring and improving upon the capabilities of SpaceX’s Starship. This would reduce long-term launch costs, boost autonomy, and ensure deployment scalability over the decade. Equally essential would be a robust ground segment, covering a European-wide ground station network, edge nodes, optical interconnects, and satellite laser communication capabilities.
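To put the budget in perspective, here is a purely illustrative calculation of fleet size versus all-in unit cost (build plus launch). The unit-cost figures below are hypothetical placeholders chosen to span a plausible range, not disclosed SpaceX or IRIS² numbers:

```python
# Illustrative only: how many satellites a €200 bn budget could buy at
# different all-in unit costs. All unit costs are hypothetical.
BUDGET_EUR = 200e9

for label, unit_cost_eur in [
    ("lean, SpaceX-like unit cost", 1e6),   # assumed ~EUR 1M per satellite
    ("mid-range unit cost", 10e6),          # assumed ~EUR 10M per satellite
    ("legacy-prime unit cost", 40e6),       # assumed ~EUR 40M per satellite
]:
    print(f"{label}: ~{BUDGET_EUR / unit_cost_eur:,.0f} satellites")
```

Even at the most pessimistic placeholder cost, the resulting fleet would be an order of magnitude larger than the 282 satellites currently planned under IRIS².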

Unlike Starlink, which benefits from SpaceX’s vertical integration, and Kuiper, which is backed by Amazon’s capital and logistics empire, a European initiative would rely heavily on strong multinational coordination. With 200 billion euros, possibly less if the usual suspects (i.e., the “Primes”) are managed accordingly, Europe could close the technology gap rapidly, secure digital sovereignty, and ensure that it is not dependent on foreign providers for critical broadband infrastructure, particularly for rural areas, government services, and defense.

Could this be done by 2030? Doubtful, unless Europe can match SpaceX’s impressive pace of innovation. That would mean at least matching the 3 years (2015–2018) it took SpaceX to achieve a fully reusable Falcon 9 system and the 4 years (2015–2019) it took to go from concept to the first operational V1 satellite launch. Elon has shown it is possible.

KEY TAKEAWAYS.

LEO satellite direct-to-dish broadband, when strategically deployed in underserved and hard-to-reach areas, should be seen not as a competitor to terrestrial networks but as a strategic complement. It provides a practical, scalable, and cost-effective means to close the final connectivity gap, one that terrestrial networks alone are unlikely to bridge economically. In sparsely populated rural zones, where fiber deployment becomes prohibitively expensive, LEO satellites may render new rollouts obsolete. In these cases, satellite broadband is not just an alternative. It may be essential. Moreover, it can also serve as a resilient backup in areas where rural fiber is already deployed, especially in regions lacking physical network redundancy. Rather than undermining terrestrial infrastructure, LEO extends its reach, reinforcing the case for hybrid connectivity models central to achieving EU-wide digital reach by 2030.

Instead of continuing to subsidize costly last-mile fiber in uneconomical areas, European policy should reallocate a portion of this funding toward the development of a sovereign European Low-Earth Orbit (LEO) satellite constellation. A mere 200 billion euros, or even less, would go a very long way in securing such a program. Such an investment would not only connect the remaining “unconnectables” more efficiently but also strengthen Europe’s digital sovereignty, infrastructure resilience, and strategic autonomy. A European LEO system should support dual-use applications, serving both civilian broadband access and the European defense architecture, thereby enhancing secure communications, redundancy, and situational awareness in remote regions. In a hybrid connectivity model, satellite broadband plays a dual role: as a primary solution in hard-to-reach zones and as a high-availability backup where terrestrial access exists, reinforcing a layered, future-proof infrastructure aligned with the EU’s 2030 Digital Decade objectives and evolving security imperatives.

Non-European dependence poses strategic trade-offs: The rise of LEO broadband providers such as SpaceX and China’s state-aligned Guowang and Qianfan underscores Europe’s limited indigenous capacity in the Low Earth Orbit (LEO) space. While non-EU options may offer faster and cheaper rural connectivity, reliance on foreign infrastructure raises concerns about sovereignty, data governance, and security, especially amid growing geopolitical tensions.

LEO satellites, especially those similar to, or more capable than, Starlink V3, can technically support the connectivity needs of Europe’s 2030s “unconnectable” (rural) households. Due to geography or economic constraints, these homes are unlikely to be reached by FTTP even under the most ambitious business-as-usual scenarios. A constellation of high-capacity satellites could serve these households with gigabit-class connections, especially when factoring in user concurrency and reasonable uptake rates.

The economics of FTTP deployment sharply deteriorate in very low-density rural regions, reinforcing the need for alternative technologies. By 2030, up to 5.5 million EU28 households are projected to remain beyond the economic viability of FTTP, down from 15.5 million rural homes in 2023. The European Commission has estimated that closing the gigabit gap from 2023 to 2030 requires around €200 billion. LEO satellite broadband may be a more cost-effective alternative, particularly with direct-to-dish architecture, at least for the share of unconnectable homes.

While LEO satellite networks offer transformative potential for deep rural coverage, they do not pose a threat to existing FTTP deployments. A Starlink V3 satellite, despite its 1 Tbps capacity, can serve the equivalent of a small fiber network, about 1,000 homes at 1 Gbps under full concurrency, or roughly 20,000 homes with 50% uptake and 10% busy-hour concurrency. FTTP remains significantly more efficient and scalable in denser areas. Satellites are not designed to compete with fiber in urban or suburban regions, but rather to complement it in places where fiber is uneconomical or otherwise unviable.
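The household figures above follow from straightforward dimensioning arithmetic. A minimal sketch using the capacity, uptake, and busy-hour assumptions stated in the text:

```python
# Dimensioning check for a Starlink V3-class satellite, using the
# assumptions from the text (1 Tbps capacity is the commonly cited
# figure, not a confirmed specification).
sat_capacity_gbps = 1_000          # ~1 Tbps per satellite
service_rate_gbps = 1              # gigabit-class service per home

full_concurrency_homes = sat_capacity_gbps // service_rate_gbps   # 1,000 homes at 1 Gbps each
uptake = 0.50                      # 50% of passed homes subscribe
busy_hour = 0.10                   # 10% of subscribers active at peak
passed_homes = full_concurrency_homes / (uptake * busy_hour)      # ~20,000 homes passed
print(full_concurrency_homes, round(passed_homes))
```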

The technical attributes of LEO satellites make them ideally suited for sparse, low-density environments. Their broad coverage area and increasingly sophisticated beamforming and frequency reuse capabilities allow them to efficiently serve isolated dwellings, often spread across tens of thousands of square kilometers, where trenching fiber would be infeasible. These technologies extend the digital frontier rather than replace terrestrial infrastructure. Even with SpaceX’s innovative pace, it seems unlikely that this conclusion will change substantially within the next five years, at the very least.

A European LEO constellation could be feasible within a €200 billion budget: The €200 billion gap identified for full gigabit coverage could, in theory, fund a sovereign European LEO system capable of servicing the “unconnectables.” If Europe adopts leaner, vertically integrated innovation models like SpaceX (and avoids legacy procurement inefficiencies), such a constellation could deliver comparable technical performance while bolstering strategic autonomy.

The future of broadband infrastructure in Europe lies in a hybrid strategy. Fiber and mobile networks should continue to serve densely populated areas, while LEO satellites, potentially supplemented by fixed wireless and 5G, offer a viable path to universal coverage. By 2030, a satellite constellation only slightly more capable than Starlink V3 could deliver broadband to virtually all of Europe’s remaining unconnected homes, without undermining the business case for large-scale FTTP networks already in place.

CAUTIONARY NOTE.

While current assessments suggest that a LEO satellite constellation with capabilities on par with or slightly exceeding those anticipated for Starlink V3 could viably serve Europe’s remaining unconnected households by 2030, it is important to acknowledge the speculative nature of these projections. The assumptions are based on publicly available data and technical disclosures. Still, it is challenging to have complete visibility into the precise specifications, performance benchmarks, or deployment strategies of SpaceX’s Starlink satellites, particularly the V3 generation, or, for that matter, Amazon’s Project Kuiper constellation. Much of what is known comes from regulatory filings (e.g., FCC), industry reports and blogs, Reddit, and similar platforms, as well as inferred capabilities. Therefore, while the conclusions drawn here are grounded in credible estimates and modeling, they should be viewed with caution until more comprehensive and independently validated performance data become available.

THE SATELLITE’S SPECS – MOST IS KEPT A “SECRET”, BUT THERE IS SOME LIGHT.

Satellite capacity is not determined by a single metric, but instead emerges from a tightly coupled set of design parameters. Variables such as spectral efficiency, channel bandwidth, polarization, beam count, and reuse factor are interdependent. Knowing a few of them allows us to estimate, bound, or verify others. This is especially valuable when analyzing or validating constellation design, performance targets, or regulatory filings.

For example, consider a satellite that uses 250 MHz channels with 2 polarizations and a spectral efficiency of 5.0 bps/Hz. These inputs directly imply a channel capacity of 1.25 Gbps and a beam capacity of 2.5 Gbps. If the satellite is intended to deliver 100 Gbps of total throughput, as disclosed in related FCC filings, one can immediately deduce that 40 beams are required. If, instead, the satellite’s reuse architecture defines 8 x 250 MHz channels per reuse group with a reuse factor of 5, and each reuse group spans a fixed coverage area, then both the theoretical and practical throughput within that area can be computed, further enabling the estimation of the total number of beams, the required spectrum, and the likely user experience. These dependencies mean that if the number of user channels, full bandwidth, channel bandwidth, number of beams, or frequency reuse factor is known, it becomes possible to estimate or cross-validate the others. This helps identify design consistency or highlight unrealistic assumptions.
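These dependencies are easy to encode. A small sketch of the cross-validation logic, using the example numbers from the paragraph above (250 MHz channels, 2 polarizations, 5.0 bps/Hz, 100 Gbps target):

```python
import math

def beam_capacity_gbps(channel_mhz=250.0, polarizations=2, se_bps_hz=5.0):
    """Capacity of one beam = channel bandwidth x polarizations x spectral efficiency."""
    channel_gbps = channel_mhz * 1e6 * se_bps_hz / 1e9   # 1.25 Gbps per channel/polarization
    return channel_gbps * polarizations                  # 2.5 Gbps per dual-polarized beam

def beams_needed(total_gbps=100.0, **kwargs):
    """Beams required to reach a disclosed total-throughput target."""
    return math.ceil(total_gbps / beam_capacity_gbps(**kwargs))

print(beam_capacity_gbps())   # 2.5 Gbps per beam
print(beams_needed(100.0))    # 40 beams for a 100 Gbps satellite
```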

In satellite systems like Starlink, the total available spectrum is limited. This is typically divided into discrete channels, for example, eight 250 MHz channels (as is the case for Starlink’s Ku-band downlink to the user’s terrestrial dish). A key architectural advantage of spot-beam satellites (e.g., with spots that are at least 50 to 80 km wide) is that frequency channels can be reused in multiple spatially separated beams, as long as the beams do not interfere with one another. This is not based on a fixed reuse factor, as seen in terrestrial cellular systems, but on beam isolation, achieved through careful beam shaping, angular separation, and sidelobe control (as also implemented in the above Python code for UK Starlink satellite coverage, albeit in much simpler ways). For instance, one beam covering southern England can use the same frequency channels as another beam covering northern Scotland, because their energy patterns do not overlap significantly at ground level. In a constellation like Starlink’s, where hundreds or even thousands of beams are formed across a satellite footprint, frequency reuse is achieved through simultaneous but non-overlapping spatial beam coverage. The reuse logic is handled dynamically on board or through ground-based scheduling, based on real-time traffic load and beam geometry.

This means that for a given satellite, the total instantaneous throughput is not only a function of spectral efficiency and bandwidth per beam, but also of the number of beams that can simultaneously operate on overlapping frequencies without causing harmful interference. If a satellite has access to 2 GHz of bandwidth and 250 MHz channels, then up to 8 distinct channels can be formed. These channels can be replicated across different beams, allowing many more than 8 beams to be active concurrently, each using one of those 8 channels, as long as they are separated enough in space. This approach allows operators to scale capacity dramatically through dense spatial reuse, rather than relying solely on expanding spectrum allocations. The ability to reuse channels across beams depends on antenna performance, beamwidth, power control, and orbital geometry, rather than a fixed reuse pattern. The same set of channels is reused across non-interfering coverage zones enabled by directional spot beams. Satellite beams can be “stacked on top of each other” up to the number of available channels, or they can be allocated optimally across a coverage area determined by user demand.
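A minimal numeric illustration of the reuse effect described above. The reuse count per channel is an assumed figure; in practice it varies dynamically with beam geometry and interference constraints:

```python
# Spatial channel reuse: the same 8 channels serve many beams, as long
# as co-channel beams are spatially isolated at ground level.
channels = 8                   # distinct 250 MHz channels in 2 GHz of spectrum
channel_gbps = 1.25            # e.g., 250 MHz at 5 bps/Hz, single polarization
reuse_per_channel = 5          # assumed number of spatially isolated co-channel beams

active_beams = channels * reuse_per_channel   # 40 concurrent beams from only 8 channels
aggregate_gbps = active_beams * channel_gbps  # 50 Gbps across the footprint
print(active_beams, aggregate_gbps)
```

The point of the sketch is that capacity scales with the spatial reuse count, not with the channel count alone: doubling `reuse_per_channel` doubles the aggregate throughput without any new spectrum.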

Detailed specifications of commercial satellites, whether in operation or in the planning phase, are usually not publicly disclosed. However, companies are required to submit technical filings to the U.S. Federal Communications Commission (FCC). These filings include orbital parameters, frequency bands in use, EIRP, and antenna gain contours, as well as estimated capabilities of the satellite and user terminals. The FCC’s approval of SpaceX’s Gen2 constellation, for instance, outlines many of these values and provides a foundation upon which informed estimates of system behavior and performance can be made. The filings are not exhaustive and may omit sensitive performance data, but they serve as authoritative references for bounding what is technically feasible or likely in deployment.

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

FURTHER READINGS.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomy blog.

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future”, Techneconomyblog (March 2024).

NOTE: My “Satellite Coverage Concept Model,” which I have applied to Starlink Direct-2-Dish coverage and Services in the United Kingdom, is not limited to the UK alone but can be straightforwardly generalized to other countries and areas.

The Nature of Telecom Capex – a 2024 Update.

Part of this blog has also been published in Telecom Analysis under the title “Navigating the Future of Telecom Capex: Western Europe’s Telecom Investment 2024 to 2030.” Some of the material has been updated to reflect the latest available data in certain areas (e.g., fiber deployment in Western Europe).

Over the last three years, I have extensively covered the details of the Western European telecom sector’s capital expense levels and the drivers behind telecom companies’ capital investments. These accounts can be found in “The Nature of Telecom Capex – a 2023 Update” from 2023 and in my initial article from 2022. This new version, “The Nature of Telecom Capex – a 2024 Update,” differs from the 2022 and 2023 editions in that it focuses on the near-future Capex demands from 2024 to 2030 and on what we may expect of our industry’s capital spending over the next 7 years.

For Western Europe, Capex levels in 2023 were lower than in 2022, a relatively rare but not unique occurrence that led many industry analysts to conclude the “End of Capex” and that, from now on, “Capex will surely decline.” The compelling and seemingly logical explanations were also evident, pointing out that “data traffic (growth) is in decline,” “there is an overproduction of bandwidth,” “5G is not what it was heralded to be,” “there is no interest in 6G,” “capital is too expensive,” and so forth. These “End of Capex” conclusions were often based on either aggregated or selected data, depending on data availability.

Having worked on Capex planning and budgeting since the early 2000s for one of the biggest telecom companies in Europe, Deutsche Telekom AG, building what has been described as best-practice Capex models, my outlook is somewhat less “optimistic” about the decline and “End” of Capex spending by the Industry. Indeed, for those expecting that a Telco’s capital planning is impacted only by hyper-rational insights glued to real-world tangibles and driven by clear strategic business objectives, I beg you to modify that belief somewhat.

Figure 1 illustrates the actual telecom Capex development for Western Europe between 2017 and 2023, with projected growth from 2024 (with the first two quarters’ actual Capex levels) to 2026, represented by the orange-colored dashed lines. The light dashed line illustrates the annual baseline Capex level before 5G and fiber deployment acceleration. The light solid line shows the corresponding Telco Capex to Revenue development, including an assessment for 2024 to 2026, with an annual increase of ca. 500 million euros. Source: New Street Research European Quarterly Review, covering 15 Western European countries (see references at the end of the blog) and 56+ telcos from 2017 to 2024, with 2024 covering the year’s first two quarters.

Western Europe’s telecommunications Capex fell between 2022 and 2023 for the first time in some years, from a peak of 51 billion euros in 2022. The overall development from 2017 to 2023 is illustrated in Figure 1, including a projected Capex development covering 2024 to 2026 that uses each Telco’s revenue projections as a simple driver for the expected Capex level (i.e., inherently assuming that the planned Capex level is correlated with the anticipated, or targeted, revenue of the subsequent year).

The reduction in Capex between 2022 and 2023 comes from 29 out of 56 Telcos reducing their Capex level in 2023 compared to 2022. In 8 out of 15 countries, Telco Capex levels decreased by ca. 2.3 billion euros compared to 2022. Likewise, 7 countries together spent approximately 650 million euros more than their 2022 levels. If we compare the 1st and 2nd halves of 2023 with 2022, the 2nd half of 2023 saw an unprecedented Capex reduction compared to any other year from 2017 to 2023. It really gives the impression that many Telcos (at least 36 out of 56) put their foot on the brake in 2023; 29 of those 36 cut their spending in the last half of 2023 and ended the year with an overall lower spend than in 2022. Of the 8 countries with a lower Capex spend in 2023, the UK, France, Italy, and Spain make up more than 80%. Of the countries with a higher Capex in 2023, Germany, the Netherlands, Belgium, and Austria make up more than 80%.

For a few of the countries with lower Capex levels in 2023, one could argue that they have more or less finished their 5G rollout and have such high fiber-to-the-home penetration levels that further fiber deployment amounts to overbuild and is of a substantially smaller scale than in the past (e.g., France, Norway, Spain, Portugal, Denmark, and Sweden). For other countries with a lower investment level than the previous year, such as the UK, Italy, and Greece, 2022 and 2023 saw substantial consolidation activity in the markets (e.g., Vodafone UK & CK Hutchison 3, Wind Hellas folded into Nova in Greece, …). In fact, Spain (e.g., Masmovil), Norway (e.g., Ice Group), and Denmark (e.g., Telia DK) also experienced consolidation activities, which generally lower companies’ spending levels initially. One would expect, as is to some extent visible in the first half of 2024, that countries spending less due to consolidation activities will increase their Capex levels over the following two to three years after an initial replanning period.

WESTERN EUROPE – THE BIG CAPEX OVERVIEW.

Figure 2 shows, on a country level, the 5-year average Capex spend (over the period 2019 to 2023) and the Capex spend in 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

When attempting to understand Telco Capex, or any Capex with a “built-in” cyclicality, one really should look at more than one or two years. Figure 2 above compares the average Capex spend over the period 2019 to 2023 with the Capex spend in 2023. The five-year Capex average captures the initial stages of 5G deployment in Europe, 5G deployment in general, COVID capacity investments (in fixed networks), the acceleration of fiber rollout in many European countries (e.g., Germany, the UK, the Netherlands, …), the financial (inflationary) crisis of increasingly costly capital, and so forth. In my opinion, 2023 is a reflection of the 2021–2022 financial crisis and of the fact that most of the 5G needed to cover current market demand has been deployed. As we have seen before, Telco investments are often 12 to 18 months out of sync with financial crisis years, so from that perspective it is also not surprising that 2023 might be a lower Capex year than in the past. Although, as is also evident from Figure 2, only 5 countries had a lower Capex level in 2023 than their previous 5-year average level.

Figure 3 illustrates the Capex development over the last 5 years, from 2019 to 2023, with green marking years where the subsequent year had a higher Capex level and red marking years where the subsequent year had a lower Capex level. From a Western European perspective, only 2023 had a lower Capex level than the previous year (over the last 5 years). Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

Using Capex-to-Revenue ratios of the Telco industry is prone to some uncertainty. This is particularly the case when individual Telcos are compared. In general, I recommend making comparisons over a given period of time, such as 3- or 5-year periods, as this averages out some of the natural variation between Telcos and countries (e.g., one country or Telco may have started its 5G deployment earlier than others). Even that approach has to be taken with some caution, as some Telcos may fully incur Capex for fiber deployments while others may make wholesale agreements with open Fibercos (for example) and only incur last-mile access or connection Capex. Although of smaller relative Capex scale nowadays, Telcos increasingly have Towercos managing and building the passive infrastructure for their cell site demand. Some may still fully build their own cell sites, incurring proportionally higher Capex per new site deployed, which of course may lead to structural Capex differences between such Telcos. With these cautionary remarks in mind, I believe that Capex-to-Revenue ratios do provide a means of comparing countries or Telcos and do give a picture of capital investment intensity relative to market performance. A country comparison of the 5-year (2019 to 2023) average Capex-to-Revenue ratio is illustrated in Figure 4 below for the 15 markets considered in this blog.

Figure 4 shows, on a country level, the 5-year average Capex-to-Revenue ratios over the period 2019 to 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

Comparing Capex per capita and Capex as a percentage of GDP may offer insights into how capital investments are prioritized in relation to population size and economic output. These two metrics highlight different aspects of investment strategies, providing a more comprehensive understanding of national economic priorities and critical infrastructure development levels. Such a comparison is shown in Figure 5 below.

Capex per capita, shown on the left-hand side of Figure 5, measures the average amount of investment allocated to each person within a country. This metric is particularly useful for understanding the intensity of investment relative to the population, indicating how much infrastructure, technology, or other capital resources are being made available on a per-person basis. A higher Capex per capita suggests significant investment in areas like public services, infrastructure, or economic development, which could improve quality of life or boost productivity. Comparing this measure across countries helps identify disparities in investment levels, revealing which nations are placing greater emphasis on infrastructure development or economic expansion. For example, a country with a high Capex per capita likely prioritizes public goods such as transportation, energy, or digital infrastructure, potentially leading to better economic outcomes and higher living standards over time. The 5-year average Capex level shows a strong positive linear relationship with a country’s population (R² = 0.9318, chart not shown), suggesting that ca. 93% of the variation in Capex can be explained by the variation in population. The trend implies that as the population increases, Capex also tends to increase, likely reflecting the higher investment needed to accommodate larger populations. It should be noted that a country’s surface area is not a significant factor influencing Capex. While some countries with larger land areas might exhibit a higher Capex level, the overall trend is not strong.

Capex as a percentage of GDP, shown on the right-hand side of Figure 5, measures the proportion of a country's economic output devoted to capital investment. This ratio provides context for understanding investment levels relative to the size of the economy, showing how much emphasis is placed on growth and development. A higher Capex-to-GDP ratio can indicate an aggressive investment strategy, commonly seen in developing economies or countries undergoing significant infrastructure expansion. Conversely, a lower ratio might suggest efficient capital allocation or, in some cases, underinvestment that could constrain future economic growth. This metric helps assess the sustainability of investment levels and reflects economic priorities. For instance, a high Capex-to-GDP ratio in a developed country could indicate a focus on upgrading existing infrastructure, whereas in a developing economy, it may signify efforts to close infrastructure gaps, modernization efforts (e.g., optical fiber replacing copper infrastructure per fixed broadband transformation), and accelerating growth. The 5-year average Capex level does show a strong positive linear relationship with country GDP (R² = 0.9389, chart not shown), suggesting that ca. 94% of the variation in Capex can be explained by the variation in GDP. While a few data points show some deviation from this trend, the overall fit is very strong, reinforcing the notion that larger economies generally allocate more resources to capital investments.
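As a minimal sketch of the kind of correlation analysis behind the R² figures quoted above (illustrative, made-up numbers, not the New Street Research data):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y = a*x + b."""
    a, b = np.polyfit(x, y, 1)                # slope and intercept
    ss_res = np.sum((y - (a * x + b)) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Made-up illustration: population (millions) vs. 5-year average Capex (billions EUR).
population = np.array([5.9, 10.5, 17.5, 47.5, 67.0, 83.2])
capex = np.array([0.9, 1.7, 2.6, 6.8, 10.1, 12.5])
r2 = r_squared(population, capex)  # close to 1 for a near-linear relationship
```

An R² near 0.93, as reported above for the real data, would mean the fitted line leaves only about 7% of the Capex variation unexplained by population.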

The insights gained from both Capex per capita and Capex as a percentage of GDP are complementary, providing a fuller picture of a country’s investment strategy. While Capex per capita reflects individual investment levels, Capex as a percentage of GDP reveals the scale of investment in relation to the overall economy. For example, a country with high Capex per capita but a low Capex-to-GDP ratio (e.g., Denmark, Norway, …) may have a wealthy population where individual investment levels are significant, but the size of the economy is such that these investments constitute a relatively small portion of total economic activity. Conversely, a country with a high Capex-to-GDP ratio but low Capex per capita (e.g., Greece) may be dedicating a substantial portion of its economic resources to infrastructure in an effort to drive growth, even if the per-person investment remains modest.

Figure 5 illustrates two charts that compare the average capital expenditures over a 5-year period from 2019 to 2023. The left chart shows Capex per capita in euros, with Switzerland leading at 230 euros, while Spain has the lowest at 75 euros. The right chart depicts Capex as a percentage of GDP, where Greece tops the list at 0.47%, and Sweden is at the bottom with 0.16%. These metrics provide insights into how different countries allocate investments relative to their population size and economic output, revealing varying levels of investment intensity and economic priorities. It should be noted that Capex levels are strongly correlated with both the size of the population and the size of the economy as measured by GDP. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

FORWARD TO THE PAST.

Almost 15 years ago, I gave a presentation at the "4G World China" conference in Beijing titled "Economics of 4G Introduction in Growth Markets". The idea was that a mobile operator's capital demand would cycle between 8% (minimum) and 13% (maximum), usually with one replacement cycle before migrating to the next-generation radio access technology. This insight was backed up by best-practice capital demand models considering market strategy and growth Capex drivers. It also drew on the insights of many expert discussions.

Figure 6 illustrates my expectations of how Capex would relate before, during, and after LTE deployment in Western Europe. Source: “Economics of 4G Introduction in Growth Markets” at “4G World China”, 2011.

The careful observer will see that, back in 2011, I expected the typical Capex maintenance cycle in Western European markets between infrastructure and technology modernization periods to be no more than 8%, with Capex in the maintenance years some 30% lower than required in the peak periods. I have yet to see a mobile operator with such a low capital intensity unless it effectively shares its radio access network and/or, by cost-structure "magic" (i.e., cost transformation), moves typical mobile Capex items to Opex (through sourcing or by optimizing the cost structure between fixed and mobile business units).

I retrospectively underestimated the industry's willingness to continue increasing capital investments in existing networks, often ignoring the obvious optimization possibilities between their fixed and mobile broadband networks (due to organizational politics) and, of course, what has been and still is a major contagious industrial affliction: "Metus Crescendi Exponentialis" (i.e., the fear of exponential growth, a.k.a. the license to spend ever-increasing amounts of Capex). From 2000 to today, the Western European Capex-to-Revenue ratio has been approximately between 11% and 21%, and it has been growing since around 2012 (see details in "The Nature of Telecom Capex—a 2023 Update").

CAPEX DEVELOPMENT FROM 2024 TO 2026.

From the above Figure 1, it should be no surprise that I do not expect Capex to continue to decline substantially over the next couple of years, as it did between 2022 and 2023. In fact, I anticipate that 2024 will be around the level of 2023, after which we will see modest annual increases of 600 to 700 million euros. Countries with high 5G and Fiber-to-the-Home (FTTH) coverage (e.g., France, the Netherlands, Norway, Spain, Portugal, Denmark, and Sweden) will likely keep their Capex levels, possibly with modest declines of single-digit percentage points. Countries such as Germany, the UK, Austria, Belgium, and Greece are still European laggards in terms of FTTH coverage, far below the 80+% of other Western European countries such as France, Spain, Portugal, the Netherlands, Denmark, Sweden, and Norway. These countries may be expected to continue to increase their Capex as they close the FTTH coverage gap. Here, it is worth remembering that several fiber acquisition strategies aimed at connecting homes with fiber result in a lower Capex than if a Telco aims to build all the required fiber infrastructure itself.

Consolidation Capex.

Telecom companies tend to scale back Capex during consolidation due to uncertainty, the desire to avoid redundancy, and the need to preserve cash. However, after regulatory approval and the deal’s closing, Capex typically rises as the company embarks on network integration, system migration, and infrastructure upgrades necessary to realize the merger’s benefits. This post-merger increase in Capex is crucial for achieving operational synergies, enhancing network performance, and maintaining a competitive edge in the telecom market.

If we look at the period 2021 to 2024, we have had the following consolidation and acquisition examples:

  • UK: In May 2021, Virgin Media and the O2 (Telefonica) UK merger was approved. They announced the intention to consolidate on May 7th, 2020.
  • UK: Vodafone UK and Three UK announced their intention to merge in June 2023. The final decision is expected by the end of 2024.
  • Spain: Orange and MasMovil announced their intent to consolidate in July 2023. Merger approval was given in February 2024. Conditions were imposed on the deal requiring MasMovil to divest frequency spectrum.
  • Italy: The potential merger between Telecom Italia (TIM) and Open Fiber was first discussed in 2020, when the idea emerged to create a national fiber network in Italy by merging TIM's fixed access unit, FiberCop, with Open Fiber. A Memorandum of Understanding was signed in May 2022.
  • Greece: Wind Hellas acquisition by United Group (Nova) was announced in August 2021 and finalized in January 2022 (with EU approval in December 2021).
  • Denmark: Norlys’s acquisition of Telia Denmark was first announced on April 25, 2023, and approved by the Danish competition authority in February 2024.

Thus, we should also expect that the bigger in-market consolidations may, in the short term (the next 2+ years), lead to increased Capex spending during the consolidation phase, after which Capex (& Opex) synergies hopefully kick in. Typically, a minimum of two budgetary cycles would be expected before this is observed. Consolidation Capex usually amounts to a couple of percentage points of total consolidated revenue, with some other bigger items being postponed to the tail end of a consolidation unless they are synergetic with the required integration.

The High-risk Supplier Challenge to Western Europe's Telcos.

When assessing whether Capex will increase or decrease over the next few years (e.g., up to 2030), we cannot ignore the substantial Capex amounts associated with replacing high-risk suppliers (e.g., Huawei, ZTE) in Western European telecom networks. Today, the impact is mainly on mobile critical infrastructure, "limited" to core networks and 5G radio access networks (although some EU member states may have extended the reach beyond purely 5G). The Capex impact would grow considerably if (or when?) the current European Commission 5G Toolbox (legal) framework (i.e., "The EU Toolbox for 5G Security") is extended to all broadband network infrastructure (e.g., optical and IP transport network infrastructure, non-mobile backend networking & IT systems) and possibly beyond, to also address Optical Network Terminals (ONTs) and Customer Premises Equipment (note: an ONT can be integrated into the CPE or installed separately at the customer's premises). To an extent, it is thought-provoking that the EU emphasis has been only on 5G-associated critical infrastructure rather than on the vast, ongoing investment in fiber-optical, next-generation fixed broadband networks across all European Union member states (and beyond). This may appear particularly puzzling given that the European Union has subsidized these new fiber-optical networks by up to 50%, that fixed-broadband traffic is 8 to 10 times that of mobile traffic, and that all mobile (and wireless) traffic passes through the fixed broadband network and its associated local as well as global internet critical infrastructure.

As far back as 2013, the European Parliament raised some concerns about the degree of involvement (market share) of Chinese companies in the EU’s telecommunications sector. It should be remembered that in 2013, Europe’s sentiment was generally positive and optimistic toward collaboration with China, as evidenced by the European Commission’s report “EU-China 2020 Strategic Agenda for Cooperation” (2013). Historically, the development of the EU’s 5G Toolbox for Security was the result of a series of events from about 2008 (after the financial crisis) to 2019 (and to today), characterized by growing awareness in Europe of China’s strategic ambitions, the expansion of the BRI (Belt and Road Initiative, 2013), DSR (Digital Silk Road, an important part of BRI 2.0, 2015), and China’s National Intelligence Law (2017) requiring Chinese companies to cooperate with the Chinese Government on intelligence matters, as well as several high-profile cybersecurity incidents (e.g., APT, Operation Cloud Hopper, …), and increased scrutiny of Chinese technology providers and their influence on critical communications infrastructure across pretty much the whole of Europe. These factors collectively drove the EU to adopt a more cautious and coordinated approach to addressing security risks in the context of 5G and beyond.

Figure 7 illustrates Western society's, including Western Europe's, concern about the Chinese technology presence in its digital infrastructure. A substantial "hidden" capital expense (security debt) is tied to Western Telcos' telecom infrastructures, both mobile and fixed.

The European Commission's 2023 second report on the implementation of the EU 5G cybersecurity toolbox offers an in-depth examination of the risks posed by high-risk suppliers, focusing on Chinese-origin infrastructure, such as equipment from Huawei and ZTE. The report outlines the various stages of implementation across EU Member States and provides recommendations on how to mitigate risks associated with Chinese infrastructure. It considers 5G and fixed broadband networks, including Customer Premises Equipment (CPE) devices like modems and routers placed at customer sites.

The EU Commission defines a high-risk supplier in the context of 5G cybersecurity based on several objective criteria to reduce security threats in telecom networks. A supplier may be classified as high-risk if it originates from a non-EU country with strong governmental ties or interference, particularly if its legal and political systems lack democratic safeguards, security protections, or data protection agreements with the EU. Suppliers susceptible to governmental control in such countries pose a higher risk.

A supplier’s ability to maintain a reliable and uninterrupted supply chain is also critical. A supplier may be considered high-risk if it is deemed vulnerable in delivering essential telecom components or ensuring consistent service. Corporate governance is another important aspect. Suppliers with opaque ownership structures or unclear separation from state influence are more likely to be classified as high-risk due to the increased potential for external control or lack of transparency.

A supplier’s cybersecurity practices also play a significant role. If the quality of the supplier’s products and its ability to implement security measures across operations are considered inadequate, this may raise concerns. In some cases, country-specific factors, such as intelligence assessments from national security agencies or evidence of offensive cyber capabilities, might heighten the risk associated with a particular supplier.

Furthermore, suppliers linked to criminal activities or intelligence-gathering operations undermining the EU’s security interests may also be considered high-risk.

To summarize what may make a telecom supplier a high-risk supplier:

  • Of non-EU origin.
  • Strong governmental ties.
  • The country of origin lacks democratic safeguards.
  • The country of origin lacks security protection or data protection agreements with the EU.
  • Associated supply chain risks of interruption.
  • Opaque ownership structure.
  • Unclear separation from state influence.
  • An inadequate ability to independently implement security measures shielding infrastructure from interference (e.g., sabotage, espionage, …).

These criteria are applied to ensure that telecom operators, and eventually any business with critical infrastructure, become independent of a single supplier, especially those that pose a higher risk to the security and stability of critical infrastructure.

Figure 8 above summarizes the current European legislative framework addressing high-risk suppliers in critical infrastructure, with an initial focus on 5G infrastructure and networks.

Regarding 5G infrastructure, the EU report reiterates the urgency for EU Member States to immediately implement restrictions on high-risk suppliers. The EU policy highlights the risks of state interference and cybersecurity vulnerabilities posed by the close ties between Chinese companies like Huawei and ZTE and the Chinese government. Following groundwork dating back to the 2008 EU Directive on Critical Infrastructure Protection (EPCIP), the EU's Digital Single Market Strategy (2015), the (first) Network and Information Security (NIS) Directive (2016), and early European concern about 5G societal impact and exposure to cybersecurity (2015 – 2017), the EU toolbox published in January 2020 is designed to address these risks by urging Member States to adopt a coordinated approach. As of 2023, a second EU report was published on the Member States' progress in implementing the EU Toolbox for 5G Cybersecurity. While many Member States have established legal frameworks that give national authorities the power to assess supplier risks, only 10 have fully imposed restrictions on high-risk suppliers in their 5G networks. The report criticizes the slow pace of action in some countries, which increases the EU's collective exposure to security threats.

Germany, having one of the largest Chinese RAN deployments in Western Europe in absolute numbers, has been singled out in the last couple of years for its apparent reluctance to address the high-risk supplier challenge (see also notes in "Further Readings" at the back of this blog). Germany introduced its regulation on Chinese high-risk suppliers in July 2024 through a combination of its Telekommunikationsgesetz (TKG) and IT-Sicherheitsgesetz 2.0. The German government announced that, starting in 2026, it will ban critical components from Huawei and ZTE in its 5G networks due to national security concerns. This decision aligns Germany with other European countries working to limit reliance on high-risk suppliers. Germany has been slower than others in the EU to implement such measures, but the regulation marks a significant step towards strengthening its telecom infrastructure security. Light Reading has estimated that a German Huawei ban would cost €2.5B and take years for German telcos. This estimate seems very optimistic and would certainly require very substantial discounts from whichever supplier is chosen as the replacement. For Telekom Deutschland alone, a swap would cover ca. 50+% of its ca. 38+ thousand sites, and it is difficult for me to believe that that kind of economy would apply to all telcos in Western Europe with high-risk suppliers. I also believe the estimate ignores decommissioning costs and changes to the backend O&M systems. I expect telco operators will try to push the replacement timeline until most of their high-risk supplier infrastructure is written off and ripe for modernization, which for Germany would most likely happen after 2026. One way or another, we should expect an increase in mobile Capex spending towards the end of the decade as the German operators swap out their Chinese RAN suppliers (which may only be a small part of their capital spend if the ban is extended beyond 5G).
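A back-of-envelope sketch of the swap economics; the site count and high-risk share are roughly as cited above for Telekom Deutschland, while the all-in per-site swap cost is purely my assumption for illustration:

```python
def swap_cost_meur(sites, high_risk_share, cost_per_site_keur):
    """Back-of-envelope RAN swap cost in millions of euros (MEUR).
    All inputs are illustrative assumptions, not sourced figures."""
    return sites * high_risk_share * cost_per_site_keur / 1000.0

# Hypothetical: ~38,000 sites, ~50% on a high-risk supplier, and an assumed
# all-in swap cost of 100-150 kEUR per site (new radio, rollout, decommissioning).
low = swap_cost_meur(38_000, 0.5, 100)    # 1,900 MEUR
high = swap_cost_meur(38_000, 0.5, 150)   # 2,850 MEUR
```

On these assumed inputs, a single large operator's swap alone lands in the €1.9–2.9B range, which is why a €2.5B figure covering all German telcos looks optimistic to me.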

The European Commission recommends that restrictions cover critical and highly sensitive assets, such as the Radio Access Network (RAN) and core network functions, and urges member states to define transition periods to phase out existing equipment from high-risk suppliers. The transition periods, however, must be short enough to avoid prolonging dependency on these suppliers. Notably, the report calls for an immediate halt to installing new equipment from high-risk vendors, ensuring that ongoing deployment does not undermine EU security.

When it comes to fixed broadband services, the report extends its concerns beyond 5G. It stresses that many Member States are also taking steps to ensure that the fixed network infrastructure is not reliant on high-risk suppliers. Fourteen (14) member states have either implemented or plan to restrict Chinese-origin equipment in their fixed networks. Furthermore, nine (9) countries have adopted technology-neutral legislation, meaning the restrictions apply across all types of networks, not just 5G. This implies that Chinese-origin infrastructure, including transport network components, will eventually face the same scrutiny and restrictions as 5G networks. While the report does not explicitly call for a total ban on all Chinese-origin equipment, it stresses the need for detailed assessments of supplier risks and restrictions where necessary based on these assessments.

While the EU’s “5G Security Toolbox” focuses on 5G networks, Denmark’s approach, the “Danish Investment Screening Act,” which took effect on the 1st of July 2021, goes much further by addressing the security of fixed broadband, 4G, and transport networks. This broad regulatory focus helps Denmark ensure the security of its entire communications ecosystem, recognizing that vulnerabilities in older or supporting networks could still pose serious risks. A clear example of Denmark’s comprehensive approach to telecommunications security beyond 5G is when the Danish Center for Cybersikkerhed (CFCS) required TDC Net to remove Chinese DWDM equipment from its optical transport network. TDC Net claimed that the consequence of the CFCS requirement would result in substantial costs to TDC Net that they had not considered in their budgets. CFCS has regulatory and legal authority within Denmark, particularly in relation to national cybersecurity. CFCS is part of the Danish Defense Intelligence Service, which places it under the Ministry of Defense. Denmark’s regulatory framework is not only one of the sharpest implementations of the EU’s 5G Toolkit but also one of the most extensive in protecting its national telecom infrastructure across multiple layers and generations of technology. The Danish approach could be a strong candidate to serve as a blueprint for expanded EU regulation beyond 5G high-risk suppliers and thus become applicable to fixed broadband and transport networks, resulting in substantial additional Capex towards the end of the decade.

While not singled out as a unique risk category, customer premises equipment (CPE) from high-risk suppliers is mentioned in the context of broader network security measures. Some Member States have indicated plans to ensure that CPE is subject to strict procurement standards, potentially using EU-wide certification schemes to vet the security of such devices. CPE may be included in future security measures if it presents a significant risk to the network. Many CPEs have been integrated with the optical network terminal, or ONT, which is architecturally a part of the fixed broadband infrastructure, serving as a demarcation point between the fiber optic network and the customer's internal network. Thus, ONTs are highly likely to be considered and included in any high-risk supplier limitations that may come soon. Any CPE replacement program would in itself likely be associated with considerable Capex and cost for operators and their customers in general. The CPE quantum for the European Union (including the UK, cheeky, I know) is between 200 and 250 million CPEs, covering various types of CPE devices, such as routers, modems, ONTs, and other network equipment deployed for residential and commercial users. It is estimated that 30% to 40% of these CPEs may be linked to high-risk Chinese suppliers. The financial impact of a systematic CPE replacement program in the EU (including the UK) could be between 5 and 8 billion euros in capital expenses, ignoring the huge operational costs of executing such a replacement program.
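The arithmetic behind the 5–8 billion euro range can be sketched as follows; the ~80 euros per device is my assumed unit cost (hardware plus basic logistics), chosen because it roughly reproduces the range quoted above:

```python
def cpe_replacement_capex_beur(total_cpes_millions, high_risk_share, unit_cost_eur):
    """Capex (billions EUR) to replace high-risk CPEs. Unit cost is an assumption."""
    return total_cpes_millions * 1e6 * high_risk_share * unit_cost_eur / 1e9

# Ranges from the text: 200-250 million CPEs, of which 30-40% potentially high-risk.
low = cpe_replacement_capex_beur(200, 0.30, 80)    # ~4.8 BEUR
high = cpe_replacement_capex_beur(250, 0.40, 80)   # ~8.0 BEUR
```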

The Data Growth Slowdown – An Opportunity for Lower Capex?

How do we identify whether a growth dynamic, such as data growth, is exponential or self-limiting?

Exponential growth dynamics have the same (percentage) growth rate indefinitely. Self-limiting growth dynamics, or s-curve behavior, will have a declining growth rate. Natural systems are generally self-limiting, although they might exhibit exponential growth over a short term, typically in the initial growth phase. So, if you are in doubt (which you should not be), calculate the growth rate of your growth dynamics from the beginning until now. If that growth rate is constant (over several time intervals), your dynamics are exponential in nature (at least over the period you looked at); if not … well, your growth process is most likely self-limiting.
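The diagnostic above can be sketched in a few lines of Python (with synthetic series for illustration): compute period-over-period growth rates and check whether they stay constant or decline.

```python
import numpy as np

def period_growth_rates(series):
    """Year-over-year growth rates: s[t+1]/s[t] - 1."""
    s = np.asarray(series, dtype=float)
    return s[1:] / s[:-1] - 1.0

years = np.arange(12)
# Exponential process: +50% every year, indefinitely.
exponential = 1.0 * 1.5 ** years
# Self-limiting (logistic) process: looks similar early on, but growth decelerates.
logistic = 100.0 / (1.0 + np.exp(-0.8 * (years - 6)))

exp_rates = period_growth_rates(exponential)  # constant at 0.5
log_rates = period_growth_rates(logistic)     # strictly declining
```

A constant rate series signals exponential dynamics over the window examined; a steadily falling rate signals S-curve behavior.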

Telco Capex increases, and Telco Capex decreases. Capex is, in nature, cyclic, although increasing over time. Most European markets will have access to 550 to 650 MHz downlink spectrum depending on SDL deployment levels below 4 GHz. Assuming 4 (1) Mbps per DL (UL) MHz per sector effective spectral efficiency, 10 traffic hours per day, and ca. 350 to 400 thousand mobile sites (3 sectors each) across Western Europe, the carrying mobile capacity in Bytes is in the order of 140 Exa Bytes (EB) per Month (note: if I had chosen 2 and 0.5 Mbps per MHz per sector, carrying capacity would be ca. 70 EB/Month). It is clear that this carrying capacity limit will continue to increase with software releases, innovation, advanced antenna deployment with higher order MiMo, and migration from older radio access technologies to the newest (increasing the effective spectral efficiency).
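A minimal sketch of the carrying-capacity arithmetic behind the figures above. Note that plugging the mid-point inputs in verbatim yields a raw figure well above ~140 EB/month, so an effective utilization/derating factor (here ~0.4, my assumption; the text does not state one explicitly) appears to be implied:

```python
def carrying_capacity_eb_per_month(dl_mhz, mbps_per_mhz_sector, sectors, sites,
                                   traffic_hours_per_day, days=30, utilization=1.0):
    """Monthly downlink carrying capacity in exabytes (EB)."""
    total_bps = dl_mhz * mbps_per_mhz_sector * 1e6 * sectors * sites * utilization
    seconds_per_month = traffic_hours_per_day * 3600 * days
    return total_bps * seconds_per_month / 8 / 1e18  # bits -> bytes -> EB

# Mid-point inputs from the text: 600 MHz DL, 4 Mbps/MHz/sector, 3 sectors,
# 375,000 sites, 10 traffic hours per day.
raw = carrying_capacity_eb_per_month(600, 4, 3, 375_000, 10)              # ~364 EB
derated = carrying_capacity_eb_per_month(600, 4, 3, 375_000, 10,
                                         utilization=0.4)                 # ~146 EB
```

Either way, the conclusion is the same: the carrying capacity sits roughly an order of magnitude above the ~11 EB/month of actual demand.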

According to Ericsson Mobility Visualizer, Western Europe saw a mobile data demand per month of 11 EB in 2023 (see Figure below). The demand for mobile data in 2023 was almost 10 times lower than the (conservatively) estimated carrying capacity of the underlying mobile networks.

Figure 9 illustrates the actual demanded data volume in EB per month. I have often observed that when planners estimate their budgetary demand for capacity expansions, they take the current YoY growth rate and apply it to the future (assuming their growth dynamics are geometric). I call this the "Naive Expectations" assumption (fallacy), which obviously leads to overprovisioning of network capacity and less efficient use of Capex, as opposed to the "Informed Expectations" approach based on the more realistic S-curve growth dynamics. I have rarely seen the "Naive Expectations" fallacy challenged by CFOs or the non-technical leadership responsible for Telco budgets and economic health. Although not a transparent approach, it is a "great" way to add a "bit" of Capex cushion for other Capex uncertainties.
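"Naive" versus "Informed" expectations can be contrasted in a small sketch; all parameter values here are illustrative assumptions (starting point loosely inspired by the 11 EB/month figure above, demand limit and growth rate invented), not fitted to the Ericsson data:

```python
import numpy as np

def naive_forecast(latest, yoy_rate, years):
    """'Naive Expectations': compound the current YoY growth rate forward."""
    return latest * (1.0 + yoy_rate) ** np.arange(1, years + 1)

def s_curve_forecast(Ls, k, T0, start_year, years):
    """'Informed Expectations': logistic with demand limit Ls, rate k, inflection T0."""
    t = np.arange(start_year + 1, start_year + years + 1)
    return Ls / (1.0 + np.exp(-k * (t - T0)))

# Illustrative assumptions: 11 EB/month in 2023, ~25% YoY growth carried forward,
# versus an assumed 40 EB/month demand limit with inflection around 2025.
naive = naive_forecast(11.0, 0.25, 7)                   # 2024..2030
informed = s_curve_forecast(40.0, 0.35, 2025, 2023, 7)  # 2024..2030
```

The naive path overshoots the S-curve path by an ever-wider margin towards the end of the horizon, which is exactly the overprovisioning mechanism described above.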

It should be noted that the Ericsson data treats traffic generated by fixed wireless access (FWA) separately (which, by the way, makes sense). Thus, the 11 EB for 2023 does not include FWA traffic. Ericsson only has a global forecast for FWA traffic starting from 2023 (note: it is not clear whether 2023 is actual FWA traffic or estimated). To get an impression of the long-term impact of FWA traffic, we can apply the same S-curve approach as the one used for mobile data traffic above, according to what I call the “Informed expectations” approach. Even with the FWA traffic, it is difficult to see a situation that, on average (at least), would pose any challenge to existing mobile networks. Particularly, the carrying capacity can easily be increased by deploying more advanced antennas (e.g., higher order MiMo), and, in general, it is expected to improve with each new software release forthcoming.

Figure 10 above uses Ericsson’s Mobile Visualizer data for Western Europe’s mobile and fixed wireless access (FWA) traffic. It gives us an idea of the total traffic expectations if the current usage dynamics continue. Ericsson only provides a global FWA forecast from 2023 to 2029. I have assumed WEU takes its proportional mobile share of the FWA traffic. Note: For the period up to and including 2023, it seems a bit rich in its FWA expectations, imo.

So, by most measures, the latest and greatest mobile networks are, without much doubt, in most places over-dimensioned in terms of their volumetric carrying capacity relative to the data volume actually demanded. They also appear likely to remain so for a very long time unless the current demand dynamics fundamentally change (which is, of course, always a possibility, as we have seen historically).

However, that our customers get their volumetric demand satisfied is ultimately a reflection of quality in terms of bits per second (a much more fundamental unit than volume). The throughput, or speed, should be good enough for customers to enjoy their consumption unhindered; that consumption, in turn, generates the Bytes that most Telco executives have told themselves they understand and like to base their pricing on (and which, judging by my experience outside Europe, more often than not they really don't get). It is not uncommon for operators with complex volumetric pricing to become more obsessed with data volume than with optimum quality (which might, in fact, generate even more volume). The figure below is a snapshot from August 2024 of the median speeds customers enjoy in mobile as well as fixed broadband networks in Western Europe. In most of Europe, customers today enjoy substantially faster fixed-broadband services than they would get in mobile networks. One would expect this to change how Telcos (at least integrated Telcos) design and plan their mobile networks and, consequently, perhaps dramatically reduce the amount of mobile Capex spent. There is little evidence of this happening yet. However, I do anticipate, most likely naively, that the Telco industry will revise how mobile networks are architected, designed, and built with 6G.

Figure 11 shows that apart from one Western European country (Greece, also a fixed broadband laggard), all other markets have superior fixed broadband downlink speeds compared to what mobile networks can deliver. Note that the speed measurement data is based on the median statistic. Source: Speedtest Global Index, August 2024.

A Crisis of Too Much of a “Good” Thing?

Analysys Mason recently (July 2024) published a report titled “A Crisis of Overproduction in Bandwidth Means that Telecoms Capex Will Inevitably Fall.” The report explores the evolving dynamics of capital expenditure (Capex) in the telecom industry, highlighting that the industry is facing a turning point. The report argues that the telecom sector has reached a phase of bandwidth overproduction, where the infrastructure built to deliver data has far exceeded demand, leading to a natural decline in Capex over the coming years.

According to the Analysys Mason report, global Capex in the telecom sector has already peaked, with two significant investment surges behind it: the rollout of 5G networks in mobile infrastructure and substantial investments in fiber-to-the-premises (FTTP) networks. Both of these infrastructure developments were seen as essential for future-proofing networks, but now that the peaks in these investments have passed, Capex is expected to fall. The report predicts that by 2030, the Capex intensity (the proportion of revenue spent on capital investments) will drop from around 20% to 12%. This reduction is due to the shift from building new infrastructure to optimizing and maintaining existing networks.

The main messages that I take away from the Analysys Mason report are the following:

  • Overproduction of bandwidth: Telecom operators have invested heavily in building their networks. However, demand for data and bandwidth is no longer growing at the exponential rates seen in previous years.
  • Shifting Capex Trends: The telecom industry has passed two investment peaks: one in mobile spending due to the initial 5G coverage rollout and another in fixed broadband due to fiber deployments. With these peaks behind it, Capex is expected to decline.
  • Impact of lower data growth: The stagnation in mobile and fixed data demand, combined with the overproduction of mobile and fixed bandwidth, makes further large-scale investment in network expansion unnecessary.

My take on Analysys Mason’s conclusions is that with the cyclic nature of Telco investments, it is natural to expect that Capex will go up and down. That Capex will cycle between 20% (peak deployment phase) and 12% (maintenance phase) seems very agreeable. However, I would expect that the maintenance level would continue to increase as time goes by unless we fundamentally change how we approach mobile investments.

Given that network capacity is built up at the beginning of a new technology cycle (e.g., 5G NR, or GPON-, XG-PON-, and XGS-PON-based FTTH), it is also not surprising that the amount of available capacity appears substantial. I would not call it a bandwidth overproduction crisis (although I agree that the overhead of provisioned carrying capacity compared to demand expectations seems historically high); it is a manifestation of the technologies we have developed and deployed today. Under real-world 5G NR conditions, users could see peak DL speeds ranging from 200 Mbps to 1 Gbps, with median 5G DL speeds of 100+ Mbps. The lower end of this range applies in areas with fewer available resources (e.g., less spectrum, fewer MIMO streams), while the higher end reflects better conditions, such as a user close to the cell tower with optimal signal conditions. The quality of fiber-connected households at current GPON and XG-PON technology would be sustainable at 1 to 10 Gbps downstream to the in-home ONT/CPE. However, the in-home quality experienced over WiFi depends a lot on how the WiFi network has been deployed and how many concurrent users there are at any given time. As backhaul and backbone transmission solutions for mobile and fixed access will be modern and fiber-based, there is no reason to believe that user demand should be limited in any way (anytime soon), given that a well-optimized, modern fiber-optic network should be able to reach up to 100 Tbps (e.g., 10+ EB per month with 10 traffic hours per day).

Germany, the UK, Belgium, and a few smaller Western countries will continue their fiber deployment for some years to bring their fiber coverage up to the level of countries such as France, Spain, Portugal, and the Netherlands. It is difficult to believe that these countries would not continue to invest substantial amounts to raise their fiber coverage from current low levels. Countries with less than 60% fiber-to-the-home coverage account for a 50+% share of the overall Western European Capex level.

The fact that the Telco industry would eventually experience lower growth rates should not surprise anyone. That has been in the cards since growth began. The figure below takes actual mobile data from Ericsson’s Mobile Visualizer. It applies simple S-curve growth dynamics to those data, which actually does a very good job of accounting for the observed behavior. A geometric growth model (i.e., exponential growth dynamics), while possibly accounting for the early stages of technology adoption and the resulting data growth, is not a reasonable model to apply here and is not supported by the actual data.

Figure 12 provides the actual monthly Exabytes (EB) with a fitted S-curve extrapolated beyond 2023. The S-curve is described by the Data Demand Limit (Ls), Growth Rate (k), and the Inflection Year (T0), where growth transitions from acceleration to deceleration. Source: Ericsson Mobile Visualizer resource.

The growth dynamic, applied to the data we extract from the markets shown in the above Figure, indicates that in Western Europe and the CEE (Central Eastern Europe), the inflection point should be expected around 2025. This is the year when the growth rates begin to decline. In Western Europe (and CEE), we would expect the growth rate to fall below 10% by 2030, assuming that no fundamental changes to the growth dynamic occur. The inflection point for the North American markets (i.e., the USA and Canada) is around 2033; for Asia, it is expected a bit earlier (2030). Based on the current growth dynamics, North America will experience growth rates below 10% by 2036. For Asia, this event is expected to take place around 2033. How could FWA traffic growth change these results? The overall behavior would not change. The inflection point may happen later, thus delaying the onset of slower growth rates, and the time when we would expect a growth rate lower than 10% would shift to a couple of years after the inflection year.
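The S-curve behind Figure 12 is a plain logistic function of the three named parameters. A minimal sketch of how the inflection dynamics produce a sub-10% growth year (the parameter values Ls, k, T0 below are made up for illustration, not the fitted market values):

```python
import math

def s_curve(t, Ls, k, T0):
    """Logistic demand model: Ls = demand limit (EB/month),
    k = growth rate, T0 = inflection year (acceleration -> deceleration)."""
    return Ls / (1.0 + math.exp(-k * (t - T0)))

# Illustrative parameters only, not the fitted Western European values.
Ls, k, T0 = 16.0, 0.45, 2025.0

# Year-over-year growth declines monotonically as the curve saturates.
yoy = {t: s_curve(t + 1, Ls, k, T0) / s_curve(t, Ls, k, T0) - 1.0
       for t in range(2023, 2035)}

# First year in which YoY growth drops below 10%.
below_10 = min(t for t, g in yoy.items() if g < 0.10)
```

With these illustrative parameters, the sub-10% growth year lands roughly three years after the inflection year, consistent with the pattern described above.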

Let us just for fun (usually the best reason) construct a counterfactual situation. Let us assume that data growth continues to follow geometric (exponential) growth indefinitely without reaching a saturation point or encountering any constraints (e.g., resource limits, user behavior limitations). The premise is that user demand for mobile and fixed-line data will continue to grow at a constant rate, i.e., accelerating in absolute terms. For mobile data growth, we take the 27% YoY growth of 2023 and apply this rate in our geometric growth model. Thus, roughly every 3 years, demand would double.
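The doubling time implied by a constant 27% YoY growth follows directly from the compounding arithmetic:

```python
import math

cagr = 0.27  # the 2023 YoY mobile data growth used in the counterfactual

# Years to double, and to grow tenfold, under constant compounding.
doubling_time = math.log(2) / math.log(1 + cagr)   # ~2.9 years
tenfold_time = math.log(10) / math.log(1 + cagr)   # ~9.6 years

# Demand multiplier after a decade of uninterrupted 27% growth.
ten_year_multiple = (1 + cagr) ** 10               # ~10.9x
```

In other words, under this counterfactual a network would need to carry roughly eleven times today’s traffic within ten years.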

If telecom data usage continued to grow geometrically, the implications would (obviously) be profound:

  • Exponential network demand: Operators would face exponentially increasing demand on their networks, requiring constant and massive investments in capacity to handle growing traffic. Once we reach the limits of the carrying capacity of the network, we have three years (with a CAGR of 27%) until demand has doubled. Obviously, any spectrum position would quickly become insufficient, meaning that massive investments in new infrastructure (more mobile sites and more fiber) would be needed. Capacity would become the growth-limiting factor.
  • Costs: The capital expenditures (Capex) required to keep pace with geometric growth would skyrocket. Operators would have to continually upgrade or replace network equipment, expand physical infrastructure, and acquire additional spectrum to support the growing data loads. This would lead to unsustainable business models unless prices for services rose dramatically, making such growth scenarios unaffordable for consumers, and long before that, for the operators themselves.
  • Environmental and Physical Limits: The physical infrastructure necessary to support geometric growth (cell towers, fiber optic cables, data centers) would also have environmental consequences, such as increased energy consumption and carbon emissions. Additionally, telecom providers would face the law of diminishing returns as building out and maintaining these networks becomes less economically feasible over time.
  • Consumer Experience: The geometric growth model assumes that user behavior will continue to change dramatically. Consumers would need to find new ways to utilize vast amounts of bandwidth beyond streaming and current data-heavy applications. Continuous innovation in data-hungry applications would be necessary to keep up with the increased data usage.

The counterfactual argument shows that geometric growth, while useful for the early stages of data expansion, becomes unrealistic as it leads to unsustainable economic, physical, and environmental demands. The observed S-curve growth is more appropriate for describing mobile data demand because it accounts for saturation, the limits of user behavior, and the constraints of telecom infrastructure investment.

Back to Analysys Mason’s expected, and quite reasonable, consequence of the (progressively) lower data growth: large-scale investment would become unnecessary.

While the assertion is reasonable, mobile obsolescence, as said, hits the industry every 5 to 7 years, regardless of whether there is a new radio access technology (RAT) to take over. I don’t think this will change, although the industry may come to spend much more on software annually than previously and less on hardware modernization during obsolescence transformations. Moreover, I suspect that the software will impose increasingly harder requirements on the underlying hardware (whether on-prem or in the cloud), so modernization investments in the hardware part would continue to be substantial. This is not even considering the euphoria that may come around the next-generation RAT (e.g., 6G).

The economical and useful life of fixed broadband fiber infrastructure is much longer than that of mobile infrastructure. The optical transmission equipment used for access, aggregation, and backbone is likewise long-lived (although not as long as the optical fiber itself). Additionally, fiber-based fixed broadband networks are operationally (much) more efficient than their mobile counterparts, alluding to the need to re-architect and redesign how they are built, as they are no longer needed inside customer dwellings. Overall, it is not unreasonable to expect that fixed broadband modernization investments will occur less frequently than for mobile networks.

Is Enough Customer Bandwidth a Thing?

Is there an optimum level of bandwidth, in bits per second, at which a customer is fully served, beyond which it does not matter whether the network could provide far more speed or quality?

For example, for most mobile devices, phones, and tablets, much more than 10 Mbps for streaming would not make much of a viewing difference for the typical customer. Given the assumptions about eyesight and typical viewing distances, more than 90% of people would not notice an improvement in viewing experience on a mobile phone or tablet beyond 1080p resolution. Increasing the resolution beyond that point, such as to 1440p (Quad HD) or 4K, would likely not provide a noticeably better experience for most users, as their visual acuity limits their ability to discern finer details on small screens. This means the focus for improving mobile and tablet displays shifts from resolution to other factors like color accuracy, brightness, and contrast rather than chasing higher pixel counts. Such an optimization strategy should not necessarily result in higher bandwidth requirements, although moving to higher color depth or more brightness / dynamic range (e.g., HDR vs SDR) would lead to a moderate increase in the required data rates.

A throughput between 50 and 100 Mbps for fixed broadband TV streaming currently provides an optimum viewing experience. Of course, a fixed broadband household may have many concurrent bandwidth demands that would justify a 1 Gbps fiber to the home or maybe even 10 Gbps downstream to serve the whole household at an optimum experience at any time.

Figure 13 provides the data rate ranges for a streaming format, device type, and typical screen size. The data rate required for streaming video content is determined by various factors, including video resolution, frame rate, compression, and screen size. The data rate calculation (in Mbps) for different streaming formats follows a process that involves estimating the amount of data required to encode each frame and multiplying by the frame rate and compression efficiency. The methodology can be found in many places. See also my blog “5G Economics – An Introduction (Chapter 1)” from Dec. 2016.
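The per-frame methodology behind Figure 13 can be sketched as follows. The compression ratios below are rough assumptions for modern codecs (roughly H.264/HEVC class), not measured values:

```python
def stream_mbps(width, height, fps=30, bit_depth=8, chroma_factor=1.5,
                compression=100):
    """Rough streaming data-rate estimate (Mbps): raw bits per frame
    (pixels x bits-per-pixel, with 4:2:0 chroma subsampling adding ~1.5x
    the luma bit depth) times frame rate, divided by an assumed codec
    compression ratio."""
    raw_bps = width * height * bit_depth * chroma_factor * fps
    return raw_bps / compression / 1e6

fhd = stream_mbps(1920, 1080)  # 1080p SDR at 30 fps, ~7.5 Mbps
uhd = stream_mbps(3840, 2160, fps=60, bit_depth=10,
                  compression=200)  # 4K HDR with a more efficient codec
```

These illustrative figures land in the same ranges as typical streaming-service recommendations (a handful of Mbps for 1080p, a few tens of Mbps for 4K), which is the point of the methodology.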

Let’s move into high-end and fully immersive virtual reality experiences. The user bandwidth requirement may exceed 100 Mbps and possibly even require a Gbps sustainable bandwidth delivered to the user device to provide an optimum experience. However, jitter and latency performance may not make such full immersion or high-end VR experiences fully optimal over mobile or fixed networks with long distances to the supporting (edge) data centers and cloud servers where the related application may reside. In my opinion, this kind of ultra-high-end specialized service might be better run exclusively on location.

Size Matters.

I once had a CFO who was adamant that an organization’s size on its own would drive a certain amount of Capex. I would, at times, argue that an organization’s size should depend on the number of activities required to support customers (or, more generally, the number of revenue-generating units (RGUs) your company has or expects to have) and the revenue those generate. In my logic at the time, the larger a country in terms of surface area, population, and households, the more Capex-related activities would be required, thus also resulting in the need for a bigger organization. If you have more RGUs, it might also not be too surprising that the organization would be bigger.

Since then, I have scratched my head many times when looking at country characteristics, RGUs, and revenues, asking how they can justify a given size of Telco organization, knowing that there are other Telcos out there that spend the same or more Capex with a substantially smaller organization (also after considering differences in sourcing strategies). I have never been with an organization that, irrespective of its size, did not feel pressured work-wise and believe it was too lightly staffed to operate, irrespective of the Capex and activities under management.

Figure 14 illustrates the correlation between Capex and the number of FTEs in a Telco organization. It should be noted that the upper-right point is responsible for the very good correlation of 0.75; without this point, the correlation would be around 0.25. Note that sourcing has only a minor effect on the correlation.

The above figure illustrates a strong correlation between Capex and the number of people in a Telco organization. However, the correlation would be weaker without the upper right data point. In the data shown here, you will find no correlation between FTEs and a country’s size, such as population or surface area, which is also the case for Capex. There is a weak correlation between FTEs and RGU and a stronger correlation with Revenues. Capex, in general, is very strongly correlated with Revenues. The best multi-linear regression model, chosen by p-value, is a model where Capex relates to FTEs and RGUs. For a Telco with 1000 employees and 1 million RGUs, approximately 50% of the Capex could be explained by the number of FTEs. Of course, in the analysis above, we must remember that correlation does not imply causation. You will have telcos that, in most Capex driver aspects, should be reasonably similar in their investment profiles over time, except the telco with the largest organization will consistently invest more in Capex. While I think this is, in particular, an incumbent vs challenger issue, it is a much broader issue in our industry.
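A multi-linear model of the form described above, Capex as a function of FTEs and RGUs, can be sketched with ordinary least squares on synthetic data. The coefficients and sample below are invented for illustration and are not the dataset behind Figure 14:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic telco sample (illustrative only): FTEs in thousands, RGUs in millions.
ftes = rng.uniform(0.5, 10.0, 40)
rgus = rng.uniform(0.5, 20.0, 40)
# Assumed underlying relation (MEUR) plus noise -- purely for the sketch.
capex = 40.0 * ftes + 15.0 * rgus + rng.normal(0.0, 20.0, 40)

# Ordinary least squares: Capex = b0 + b1*FTEs + b2*RGUs.
X = np.column_stack([np.ones_like(ftes), ftes, rgus])
b0, b1, b2 = np.linalg.lstsq(X, capex, rcond=None)[0]
```

On real data one would, as the text notes, also test the model by p-value and keep the caveat that a good fit is correlation, not causation.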

Having spent most of my 20+ year career in Telecom involved in Capex planning and budgeting, it is clear to me that the size of an organization plays a role in the size of a Capex budget. Intuitively, this should not be too surprising. Suppose the Capex is lower than the capacity of your organization. In that case, you may have to lay off people, with the risk that you might be short of resources in the future as you cycle through modernization or a new technology introduction. On the other hand, if the Capex needs are substantially larger than the organization can cope with, including any sourcing agreements in place, it may not make much sense to ask for more than what can be managed with the resources available (apart from it being sub-optimal for cash-flow optimization).

Telco companies that have fixed and mobile broadband infrastructure in their portfolio, with organizations that are poorly optimized and with strict demarcation lines between people working on fixed broadband and mobile broadband, will, in general, have much worse Capex efficiency than fully fixed-mobile-converged organizations (not to mention suffering from poorer operational efficiencies and work practices compared to integrated organizations). Here, the size of, for example, a mobile organization will drive behavior that favors spending above and beyond on Radio Access Network infrastructure rather than using smarter, proven solutions (e.g., Opanga’s RAIN) to optimize quality and capacity needs across the mobile network.

In general, the resistance to smarter solutions and clever ideas that may save Capex (and/or Opex) manifests in a manifold of behaviors that I have observed over my 25+ year career (and some I might even have adopted on occasion … but shhhh;-).

Budget heuristics:

  • 𝗦𝗶𝘇𝗲 𝗱𝗼𝗲𝘀𝗻'𝘁 𝗺𝗮𝘁𝘁𝗲𝗿 𝗽𝗮𝗿𝗮𝗱𝗶𝗴𝗺: Irrespective of size, my organization will always be busy and understaffed.
  • 𝗧𝗵𝗲 𝗚𝗼𝗹𝗱𝗶𝗹𝗼𝗰𝗸𝘀 𝗙𝗮𝗹𝗹𝗮𝗰𝘆: My organization’s size and structure will determine its optimum Capex spending profile, allowing it to stay busy (and understaffed).
  • 𝗧𝗮𝗻𝗴𝗶𝗯𝗹𝗲 𝗕𝗶𝗮𝘀: A hardware (infrastructure-based) solution is better and more visible than a software solution. I feel more comfortable with my organization being busy with hardware.
  • 𝗧𝗵𝗲 𝗦𝘂𝗻𝗸 𝗖𝗼𝘀𝘁 𝗙𝗮𝗹𝗹𝗮𝗰𝘆: I don’t trust (allegedly) clever software solutions that may lower or postpone my Capex needs and, by that, reduce the need for people in my organization.
  • 𝗕𝘂𝗱𝗴𝗲𝘁 𝗠𝗮𝘅𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗧𝗲𝗻𝗱𝗲𝗻𝗰𝘆: My organization’s importance and my self-importance are measured by how much Capex I have in my budget. I will resist giving part of my budget away to others.
  • 𝗦𝘁𝗮𝘁𝘂𝘀 𝗤𝘂𝗼 𝗕𝗶𝗮𝘀: I will resist innovation that may reduce my Capex budget, even if it may also help reduce my Opex.
  • 𝗝𝗼𝗯 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗼𝗻𝗶𝘀𝗺: I resist innovation that may result in a more effective organization, i.e., fewer FTEs.
  • 𝗖𝗮𝗽𝗮𝗰𝗶𝘁𝘆 𝗖𝗼𝗺𝗳𝗼𝗿𝘁 𝗦𝘆𝗻𝗱𝗿𝗼𝗺𝗲: The more physical capacity I build into my network, the more we can relax. Our goal is a “Zero Worry Network.”
  • 𝗧𝗵𝗲 𝗙𝗲𝗮𝗿 𝗙𝗮𝗰𝘁𝗼𝗿: The leadership is “easy to scare” when arguing for more capacity Capex as opposed to the “if-not” consequences (e.g., losing best-network awards, poorer customer experience, …).
  • 𝗧𝗵𝗲 𝗕𝘂𝗱𝗴𝗲𝘁 𝗜𝗻𝗲𝗿𝘁𝗶𝗮: Return on Investment (ROI) prioritization is rarely considered (rigorously), particularly after a budget has been released.

𝗔 𝘄𝗮𝗿𝗻𝗶𝗻𝗴: although each of these is observable in real life, the reader should be aware that there is also a fair amount of deliberately ironic provocation in the above heuristics.

We should never underestimate that within companies, two things make you important (including self-important and self-worthy): (1) the size of your organization, and (2) the size of the budget you have for your organization to be busy with.

Any innovation that may lower an organization’s size and budget will be met with resistance from that organization.

The Balancing Act of Capex to Opex Transformations.

Telco cost structures and Capex have evolved significantly due to accounting changes, valuation strategies, technological advancements, and economic pressures. While shifts like IFRS (International Financial Reporting Standards), issued by the International Accounting Standards Board (IASB), have altered how costs are reported and managed, changes in business strategies, such as cell site spin-offs, cloud migrations, and the transition to software-defined networks, have reshaped Capex allocations somewhat. At the same time, economic crises and competitive pressures have influenced Telcos to continually reassess their capital investments, balancing the need to optimize value, innovation, and growth with financial diligence.

One of the most significant drivers of change has been the shift in accounting standards, particularly with the introduction of IFRS16, which replaced the older GAAP-based approaches. Under IFRS16, nearly all leases are now recognized on the balance sheet as right-of-use assets and corresponding liabilities. This change has particularly impacted Telcos, which often engage in long-term leases for cell sites, network infrastructure, and equipment. Previously, under GAAP (Generally Accepted Accounting Principles), many leases were treated as operating leases, keeping them off the balance sheet, and their associated costs were considered operational expenditures (Opex). Now, under IFRS16, these leases are capitalized, leading to an increase in reported Capex as assets and liabilities grow to reflect the leased infrastructure. This shift has redefined how Telcos manage and report their Capex, as what was previously categorized as leasing costs now appears as capital investments, altering key financial metrics like EBITDA and debt ratios that would appear stronger post-IFRS16.

Simultaneously, valuation strategies and financial priorities have driven significant shifts in Telco Capex. Telecom companies have increasingly focused on enhancing metrics such as EBITDA and capital efficiency, leading them to adopt strategies to reduce heavy capital investments. One such strategy is the cell site spin-off, where Telcos sell off their tower and infrastructure assets to specialized independent companies or create separate entities that manage these assets. These spin-offs have allowed Telcos to reduce the Capex tied to maintaining physical assets, replacing it with leasing arrangements, which shift costs towards operational expenses. As a result, Capex related to infrastructure declines, freeing up resources for investments in other areas such as technology upgrades, customer services, and digital transformation. The spun-off infrastructures often result in significant cash inflows from sales. The telcos can then use this cash to improve their balance sheets by reducing debt, reinvesting in new technologies, or distributing higher dividends to shareholders. However, this shift may also reduce control over critical network infrastructure and create long-term lease obligations, resulting in substantial operational expenses as telcos will have to pay rental costs on the spun-off infrastructure, increasing Opex pressure. I regularly see analysts using the tower spin-off as an argument for why telcos’ Capex requirements are no longer wholly trustworthy, particularly in comparison with past capital spending, as the passive part of a cell-site build used to be a substantial share of mobile site Capex, up to 50% to 60% for a standard site build and beyond that for special sites. I believe that, as not many new cell sites are being built any longer, and certainly not as many as in the 90s and 2000s, this effect is very minor on the overall Capex. Most new sites are built at a maintenance level, covering new residential or white-spot areas.

When considering mobile network evolution and the impact of higher frequencies, it is important not to default to the assumption that more cell sites will always be necessary. If all things are equal, the coverage cell range of a high carrier frequency would be shorter (often much shorter) than the coverage range at a lower frequency. However, all things are not equal. This misconception arises from a classical coverage approach, where the frequency spectrum is radiated evenly across the entire cell area. However, modern cellular networks employ advanced technologies such as beamforming, which allows for more precise and efficient distribution of radio energy. Beamforming concentrates signal power in specific directions rather than thinly spreading it across a wide area, effectively increasing reach and signal quality without additional sites. Furthermore, the support for asymmetric downlink (higher) and uplink (lower) carrier frequencies allows for high-quality service downlink and uplink in situations where the uplink might be challenged at higher frequencies.

Moreover, many mobile networks today have already been densified to accommodate coverage needs and capacity demands. This densification often occurred when spectrum resources were scarce, and the solution was to add more sites for improved performance rather than simply increasing coverage. As newer frequency bands become available, networks can leverage beamforming and existing densification efforts to meet coverage and capacity requirements without necessarily expanding the number of cell sites. Thus, the focus should be optimizing the deployment of advanced technologies like beamforming and Massive MIMO rather than increasing the site count by default. In many cases, densified networks are already equipped to handle higher frequencies, making additional sites unnecessary for coverage alone.

The migration to public cloud solutions from, for example, Amazon’s AWS or Microsoft Azure is another factor influencing the Capex of Telcos. Historically, telecom companies relied on significant upfront Capex to build and maintain their own data centers or switching locations (as they were once called, when they were occupied mainly by big proprietary legacy telco switching infrastructure), network operations centers, and monolithic IT infrastructure. However, with the rise of cloud computing, Telcos are increasingly migrating to cloud-based solutions, reducing the need for large-scale physical infrastructure investments. This shift from hardware to cloud services changes the composition of Capex as the need for extensive data center investments declines, and more flexible, subscription-based cloud services are adopted. Although Capex for physical infrastructure decreases, there is a shift towards Opex as Telcos pay for cloud services on a usage basis.

Further, the transition to software-defined networks (SDNs) and software-centric telecom solutions has transformed the nature of Telco Capex. In the past, Telcos heavily depended on proprietary hardware for network management, which required substantial Capex to purchase and maintain physical equipment. However, with the advancement of virtualization and SDNs, telcos have shifted away from hardware-intensive solutions to more software-driven architectures. This transition reduces the need for continuous Capex on physical assets like routers, switches, and servers and increases investment in software development, licensing, and cloud-based platforms. The software-centric model allows, in theory, Telcos to innovate faster and reduce long-term infrastructure costs.

The Role of Capex in Financial Statements.

Capital expenditures play a critical role in shaping a telecommunications company’s financial health, influencing its income statement, balance sheet, and cash flow statements in various ways. At the same time, Telcos establish financial guardrails to manage the impact of Capex spending on dividends, liquidity, and future cash needs.

In the income statement (see Figure 15 below), Capex does not appear directly as an expense when it is incurred. Instead, it is capitalized on the balance sheet and then expensed over time through depreciation (for tangible assets) or amortization (for intangible assets). This gradual recognition of the Capex expenditure leads to higher depreciation or amortization charges over future periods, reducing the company’s net income. While the immediate impact of Capex is not seen on the income statement, the long-term effects can improve revenue when investments enhance capacity and quality, as with technological upgrades like 5G infrastructure. However, these benefits are offset by the fact that depreciation lowers profitability in the short term (as the net profit is lowered). The last couple of radio access technology (RAT) generations have, in general, caused an increase in telcos’ operational expenses (i.e., Opex), as more cell sites are required, heavier site configurations are implemented (e.g., multi-band antennas, massive MIMO antennas), and energy consumption has increased in absolute terms. Despite every new generation having become relatively more energy efficient in terms of kWh/GB, in absolute terms this is not the case, and that matters for the income statement and the incurred operational expenses.

Figure 15 illustrates the typical income statement one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex may have an influence, although Capex will not be directly stated in the Income Statement. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.
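The illustrative assumptions behind the statement (35% EBITDA margin, 20% Capex-to-Revenue, 22% tax) can be traced through in a few lines, under the simplifying steady-state assumption that annual D&A roughly equals annual Capex:

```python
def mini_income_statement(revenue, ebitda_margin=0.35, capex_ratio=0.20,
                          tax_rate=0.22):
    """Trace Capex's indirect income-statement impact via D&A.
    Steady-state assumption: annual D&A ~ annual Capex."""
    ebitda = revenue * ebitda_margin
    d_and_a = revenue * capex_ratio   # Capex appears here, not as a direct expense
    ebit = ebitda - d_and_a
    tax = max(ebit, 0.0) * tax_rate
    net_income = ebit - tax
    return {"EBITDA": ebitda, "D&A": d_and_a, "EBIT": ebit,
            "Tax": tax, "Net income": net_income}

stmt = mini_income_statement(1000.0)
# Revenue 1000 -> EBITDA 350, D&A 200, EBIT 150, Tax 33, Net income 117
```

The sketch makes the mechanism explicit: a higher Capex-to-Revenue ratio raises D&A and directly lowers net income, even though Capex itself never appears as a line in the income statement.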

On the balance sheet (see Figure 16 below), Capex increases the value of a company’s fixed assets, typically recorded as property, plant, and equipment (PP&E). As new assets are added, the company’s overall asset base grows. However, this is balanced by the accumulation of depreciation, which gradually reduces the book value of these assets over time. How Capex is financed also affects the company’s liabilities or equity. If debt is used to finance Capex, the company’s liabilities increase; if equity financing is used, shareholders’ equity increases. The balance sheet, together with the Depreciation & Amortization (D&A) typically given in the income statement, can help us estimate the amount of Capex a Telco has spent. The capital expense, typically not directly reported in a company’s financial statements, can be estimated by adding the year-over-year changes in PP&E and Intangible Assets to the D&A.

Figure 16 illustrates the balance sheet one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex may have an influence. Knowing the Depreciation & Amortization (D&A), typically shown in the Income Statement, the change in PP&E and Intangible Assets (between two subsequent years) will provide an estimate of the current year’s Capex. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.
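The indirect estimate described in the caption reduces to one line of arithmetic. The figures below are invented for illustration, not taken from any real telco:

```python
def estimate_capex(ppe, ppe_prev, intangibles, intangibles_prev, d_and_a):
    """Indirect Capex estimate from published statements:
    Capex ~= (change in PP&E) + (change in intangible assets) + D&A."""
    return (ppe - ppe_prev) + (intangibles - intangibles_prev) + d_and_a

# Illustrative balance-sheet figures (MEUR) for two subsequent years.
capex_estimate = estimate_capex(ppe=5200, ppe_prev=5000,
                                intangibles=1550, intangibles_prev=1500,
                                d_and_a=950)
# 200 + 50 + 950 = 1200 MEUR
```

Note that this is an approximation: asset disposals, impairments, and acquisitions would distort the estimate and would need to be adjusted for where material.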

In the cash flow statement, Capex appears as an outflow under the category of cash flows from investing activities, representing the company’s spending on long-term assets. In the short term, this creates a significant reduction in cash. However, well-planned Capex to enhance infrastructure or expand capacity can lead to higher operating cash flows in the future. If Capex is funded through debt or equity issuance, the inflow of funds will be reflected under cash flows from financing activities.

Figure 17 illustrates the Cash Flow Statement one may find in a telco’s annual report or official financial statements (it might have a bit more detail than would usually be provided). We would typically capture 70+% of a Telco’s Capex level by looking at the “Net Cash Flow Used in Investing Activities”, unless we are offered the Purchases of Tangible and Intangible Assets directly. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-Revenue ratio, and a tax rate of 22%.

To ensure Capex does not overly strain the company’s financial health or limit returns to shareholders, Telcos put financial guardrails in place. Regarding dividends, many companies set specific dividend payout ratios, ensuring that a portion of earnings or free cash flow is consistently returned to shareholders. This practice balances returning value to shareholders with retaining sufficient earnings to fund operations and investments. It is also not unusual that Telcos commit a given dividend level to shareholders, which, as a consequence, may place a limit on Capex spending or result in Capex tasking within a given planning period, as management must balance cash outflows between shareholder returns and strategic investments. This may lead to prioritizing essential projects, delaying less critical investments, or seeking alternative financing to maintain both Capex and dividend commitments. Additionally, Telcos often use dividend coverage ratios to ensure they can sustain dividend payouts even during periods of heavy capital expenditure.

Some telcos have chosen not to commit dividends to shareholders in order to maximize Capex investments, aiming to reinvest profits into the business to drive long-term growth and create higher shareholder value. This strategy prioritizes network expansion, technological upgrades, and new market opportunities over immediate cash returns, allowing the company to maintain financial flexibility and pursue strategic objectives more aggressively. When a telco decides to start paying dividends, it may indicate that management believes there are fewer high-value investment opportunities that can deliver returns above the company’s cost of capital. The decision to pay dividends often reflects the view that shareholders may derive greater value from the cash than the company could generate by reinvesting it. Often it signals a shift to a higher degree of maturity (e.g., corporate- or market-wise) from having been a growth-focused company (i.e., the Telco has passed the inflection point of growth). An example of maturity, and maybe less about growth opportunities, is the case of T-Mobile USA, which in 2024 announced that it would start to pay dividends for the first time in its history, targeting an increase of about 10 percent annually per share (note: Deutsche Telekom AG gained ownership in 2001; the company was founded in 1994).

Liquidity management is another consideration. Companies monitor their liquidity through current or quick ratios to ensure they can meet short-term obligations without cutting dividends or pausing important Capex projects. To provide an additional safety net, Telcos often maintain cash reserves or access to credit lines to handle immediate financial needs without disrupting long-term investment plans.

Regarding debt management, Telcos must carefully balance using debt to finance Capex. Companies often track their debt-to-equity ratio to avoid over-leveraging, which can lead to higher interest expenses and reduced financial flexibility. Another common metric is net debt to EBITDA, which ensures that debt levels remain manageable concerning the company’s earnings. To avoid breaching agreements with lenders, Telcos often operate under covenants that limit the amount they can spend on Capex without negatively affecting their ability to service debt or pay dividends.

Telcos also plan long-term cash flow to ensure Capex investments align with future financial needs. Many companies establish a capital allocation framework that prioritizes projects with the highest returns, ensuring that investments in infrastructure or technology do not jeopardize future cash flow. Free cash flow (FCF) is a particularly important metric in this context, as it represents the amount of cash available after covering operating expenses and Capex. A positive FCF ensures the company can meet future cash needs while returning value to shareholders through dividends or share buybacks.
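The guardrail metrics above, FCF and its interplay with dividend commitments and leverage, can be combined into a small check. The thresholds and figures below are illustrative assumptions, not industry standards:

```python
def free_cash_flow(operating_cash_flow, capex):
    """FCF: cash left after operations and capital spending."""
    return operating_cash_flow - capex

def guardrails_ok(operating_cash_flow, capex, dividends, net_debt, ebitda,
                  min_dividend_cover=1.2, max_leverage=3.0):
    """Checks two common guardrails: FCF dividend cover and
    net-debt-to-EBITDA, against assumed (illustrative) thresholds."""
    fcf = free_cash_flow(operating_cash_flow, capex)
    dividend_cover = fcf / dividends
    leverage = net_debt / ebitda
    return dividend_cover >= min_dividend_cover and leverage <= max_leverage

# Illustrative telco (MEUR): FCF = 3500 - 2000 = 1500,
# dividend cover 1.5x, leverage 9100 / 3500 = 2.6x.
ok = guardrails_ok(operating_cash_flow=3500, capex=2000, dividends=1000,
                   net_debt=9100, ebitda=3500)
```

In practice, such a check would sit inside a planning model: a Capex scenario that pushes dividend cover below the committed level, or leverage above covenant limits, is the kind of plan that gets tasked back.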

Capex budgeting and prioritization are also essential tools for managing large investments. Companies assess the expected return on investment (ROI) and the payback period for Capex projects, ensuring that capital is allocated efficiently. Projects with assumed high strategic value, such as 5G infrastructure upgrades, household fiber coverage, or strategic fiber overbuild, are often prioritized for their potential to drive long-term revenue growth. Monitoring the Capex-to-sales ratio helps ensure that capital investments are aligned with revenue growth, preventing over-investment in infrastructure that may not yield sufficient returns.
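The simple payback-period and Capex-to-sales checks mentioned above can be sketched as follows (all figures are hypothetical):

```python
# Two simple Capex screening metrics. Figures are hypothetical (EUR m).

def payback_period_years(initial_capex: float, annual_net_cash_inflow: float) -> float:
    """Years until cumulative cash inflows repay the initial investment
    (undiscounted; a full ROI/DCF view would refine this)."""
    return initial_capex / annual_net_cash_inflow

def capex_to_sales(capex: float, revenue: float) -> float:
    """Capital intensity: the share of revenue reinvested as Capex."""
    return capex / revenue

# e.g., a 500m fiber build generating 100m/year pays back in 5 years,
payback = payback_period_years(500.0, 100.0)     # 5.0
# while 3,500m of Capex on 20,000m of revenue gives 17.5% capital intensity.
intensity = capex_to_sales(3_500.0, 20_000.0)    # 0.175

print(f"Payback: {payback:.0f} years, Capex/sales: {intensity:.1%}")
```

The 17.5% example deliberately sits inside the 18–20% range the article quotes for Western European telcos.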

CAPEX EXPECTATIONS 2024 to 2026.

Considering all 54 telcos in the pool of the New Street Research Quarterly review (ignoring MasMovil and WindHellas, which are in the process of being integrated), each with its individual as well as country “peculiarities” (e.g., state of 5G deployment, fiber-optic coverage, fiber uptake, merger-driven integration Capex, general revenue trends, …), it is possible to get a directional idea of how Capex will develop for each individual telco as well as of the overall trend. This is illustrated in the Figure below on a Western European level.

I expect that we will not see a Capex reduction in 2024, supported by how Capex in the third and fourth quarters usually behaves compared to the first two quarters, and due to integration and transformation Capex that will carry over from 2023 into 2024, possibly with a tail end later in the year. I expect most telcos will cut back on new mobile investments, even if some might start ripping out radio access infrastructure from Chinese suppliers. However, I also believe that telcos will try to delay replacement to 2026-2028, when the first round of 5G modernization activities would be expected (and would even be overdue for some countries).

While 5G networks have made significant advancements, the rollout of 5G SA remains limited. By the end of 2023, only five of the 39 markets analyzed by GSMA had reached near-complete adoption of 5G SA networks; 17 markets had yet to launch 5G SA at all. One of the primary barriers is the high cost of investment required to build the necessary infrastructure. The expansion and densification of 5G networks, such as installing more base stations, are essential to support 5G SA. According to GSMA, many operators are facing financial hurdles, as returns in many markets have been flat, and any increase is mainly due to inflationary price corrections rather than incremental or new usage. I suspect that telcos may also be more conservative (and possibly more realistic) in assessing the real economic potential of the features enabled by migrating to 5G SA, e.g., advanced network slicing, ultra-low latency, and massive IoT capabilities, in comparison with the capital investments and efforts they would need to incur. I should point out that any core network investments supporting 5G SA would not be expected to have a visible impact on telcos’ Capex budgets, as they would be expected to amount to less than 10% of the mobile Capex.

Figure 18 shows the 2022 status of homes covered by fiber in 16 Western European countries, as well as the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical region/city basis). The percentages (in yellow) above the chart show each country’s share of total 2022 Western European Capex; e.g., Germany’s share of the 2022 Capex was 18%, with ca. 19% of all German households covered by fiber. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

In 2022, a bit more than 50% of all Western European households were covered by fiber (see Figure 18 above), which amounts to approximately 85 million households with fiber coverage. This also leaves approximately 80 million households without fiber reach. Almost 60% of households without fiber coverage are in Germany (38%) and the UK (21%). Both Germany and the UK contributed about 40% of the total Western European Capex spend in 2022.

Moreover, I expect there are still Western European markets where the Capex priority is increasing fiber-optic household coverage. In 2022, there was a peak in new households covered by fiber in Western Europe (see Figure 19 below), with 13+ million households covered according to the European Commission’s report “Broadband Coverage in Europe 2013-2022“. Germany (a fiber laggard) and the UK, which together account for more than 35% of the Western European Capex, are expected to continue to invest substantially in fiber coverage until the end of the decade. As Figure 19 below illustrates, there is still a substantial amount of Capex required to close the fixed broadband coverage gap in some Western European countries.

Figure 19 illustrates the number of households covered by fiber (homes passed) and the number (in millions) of newly covered households per year. The period from 2017 to 2022 is based on actuals. For 2023 to 2026, new households covered are forecast based on the last 5-year average deployment pace, or the maximum pace over the last 5 years (urban areas in, e.g., DE, IT, NL, UK, …), with deceleration as coverage reaches 95% in urban areas and 80% in rural areas (note: this may be optimistic for some countries). The fiber deployment model differentiates between urban and rural areas. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).
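To make the deceleration logic in the caption concrete, here is a toy version of such a forecast. This is my own simplification, not the actual model behind Figure 19: deployment continues at the recent run rate and slows down linearly inside the last 20 percentage points below the coverage ceiling (95% urban, 80% rural).

```python
# Toy fiber-coverage forecast with deceleration near a coverage ceiling.
# coverage and run_rate are fractions of total households; ceiling is,
# e.g., 0.95 for urban and 0.80 for rural areas. The 20-percentage-point
# slowdown band is an assumption for illustration.

def forecast_coverage(coverage, run_rate, ceiling, years):
    path = []
    for _ in range(years):
        headroom = max(ceiling - coverage, 0.0)
        # Full deployment speed while far from the ceiling; linear
        # slowdown inside the last 20 pp, never overshooting the ceiling.
        speed = min(1.0, headroom / 0.20)
        coverage += min(run_rate * speed, headroom)
        path.append(round(coverage, 3))
    return path

# Urban example: 60% covered, ~8 pp/year recent run rate, 95% ceiling.
print(forecast_coverage(0.60, 0.08, 0.95, 4))  # flattens out below 0.95
```

The shape matters more than the exact numbers: annual additions peak, then taper as the remaining households become harder (more rural, more expensive) to reach.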

I should point out that I am not assuming telcos would be required, over the next couple of years, to swap out Chinese suppliers outside the scope of the European Commission’s “The EU 5G Toolkit for Security” framework, which mainly focuses on 5G mobile networks, eventually including the radio access network. It should be kept in mind that there is a relatively big share of high-risk suppliers within Western European fixed broadband networks (actually in those of most European Union member states), e.g., core routers & switches, SBCs, OLTs/ONTs, and MSAPs. If these were subjected to “5G Toolkit for Security”-like regulation, such as is in effect in Denmark (i.e., “The Danish Investment Screening Act”), it would result in a substantial increase in telcos’ fixed capital spend. We may see some Western European telcos commence replacement programs as equipment becomes obsolete (or near obsolete), and I would expect fixed broadband Capex to remain relatively high for telcos in Western Europe even beyond 2026.

Thus, overall, I think it is not unrealistic to anticipate a decrease in Capex over the next three years. However, contrary to some analysts’ expectations, I do not see the lower Capex level as persistent, for the reasons given above in this blog.

Figure 20 illustrates the pace and financial requirements for fiber-to-the-premises (FTTP) deployment across the EU, emphasizing the significant challenges ahead. Germany needs the highest number of households passed per week and the largest investments at €32.9 billion to reach 80% household coverage by 2031. The total investment required to reach 80% household fiber coverage by 2031 is estimated at over €110 billion, with most of this funding allocated to urban areas. Despite progress, more than 57% of Western European households still lack fiber coverage as of 2022. Achieving this goal will require maintaining the current pace of deployment and overcoming historical performance limitations. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

CAPEX EXPECTATIONS TOWARDS 2030.

Taking the above Capex forecasting approach, based on the 54 individual Western European telcos in the New Street Research Quarterly review, it is relatively straightforward, though not necessarily very accurate, to extend the forecast to 2030, as shown in the figure below.

It is worth mentioning that predicting Capex over such a relatively long period of ten years is prone to a high degree of uncertainty, and it can really only be done with high reliability if very detailed information is available on each telco’s short- and long-term strategy as well as its economic outlook. In my experience from working with very detailed bottom-up Capex models covering a five-year-and-beyond horizon (which is not the approach I have used here, simply because the information required for such an exercise not to be futile is lacking), such forecasting is already prone to a relatively high degree of uncertainty even with all the information, a solid strategic outlook, and reasonable assumptions up front.

Figure 21 illustrates Western Europe’s projected capital expenditure (Capex) development from 2020 to 2030. The slight increase in Capex towards 2030 is primarily driven by the modernization of 5G radio access networks (RAN), which could potentially incorporate 6G capabilities and further deploy 5G Standalone (SA) networks. Additionally, there is a focus on swapping out high-risk suppliers in the mobile domain and completing heavy fiber household coverage in the remaining laggard countries. A scenario in which the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G mobile domain, is extended to fixed broadband networks has not been factored into the current model. The percentages on the chart represent the development of the overall Capex-to-total-revenue ratio over the period.

The chart shows the capital expenditure trends in Western Europe from 2020 to 2030, with projections indicating a steady investment curve (remember that this is the aggregation of 54 Western European telcos’ Capex development over the period).

A noticeable rise in Capex towards 2030 can be attributed to several key factors, primarily the modernization of 5G Radio Access Networks (RAN). This modernization effort will likely include upgrades to the current 5G infrastructure and potential integration of 6G (or renamed 5G SA) capabilities as Europe prepares for the next generation of mobile technology, which I still believe is an unavoidable direction. Additionally, deploying or expanding 5G Standalone (SA) networks, which offer more advanced features such as network slicing and ultra-low latency, will further drive investments.

Another significant factor contributing to the increased Capex is the planned replacement of high-risk suppliers in the mobile domain. Countries across Western Europe are expected to phase out network equipment from suppliers deemed risky for national security, aligning with broader EU efforts to ensure a secure telecommunications infrastructure. I expect a very strong push from some member state regulators and the European Commission to finish the replacement by 2027/2028. I also expect impacted telcos (of a certain size) to push back and attempt to time a high-risk supplier swap-out with their regular mobile infrastructure obsolescence programs and the introduction of 6G in their networks towards and after 2030.

Figure 22 shows the 2023 and 2030 projections for the number of homes covered by fiber in Western European countries and the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical region/city basis). Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

Simultaneously, Western Europe is expected to complete the extensive rollout of fiber-to-the-home (FTTH) networks, as illustrated by Figure 20 above, particularly in countries lagging behind in fiber deployment, such as Germany, the UK, Belgium, Austria, and Greece. These EU member states will likely have finished covering the majority of households (80+%) with high-speed fiber by the end of the decade. On this topic, we should remember that telcos are using various fiber deployment models that minimize (and optimize) their capital investment levels. By 2030, I would expect that almost 80% of all Western European households will be covered with fiber, and thus most consumers and businesses will have easy access to gigabit services to their homes by then (and for most countries long before 2030). Germany is still expected to be the Western European fiber laggard by 2030, with an increased share, 50+%, of all uncovered Western European households being German (note: in 2022, this share was 38%). Most other countries will have reached and exceeded 80% fiber household coverage.

It is also important to note that my Capex model does not assume the extension of the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G domain, to fixed broadband networks. If the legal framework were also applied to the fixed broadband sector, an event that I see as very likely, forcing the removal of high-risk suppliers from fiber broadband networks, Capex requirements would likely increase significantly beyond the projections in my assessment, with the last years of the decade focused on high-risk supplier replacement in Western European telcos’ fixed broadband transport and IP networks. I don’t see a medium-to-high risk that all CPEs would be included in a high-risk supplier ban. However, I do believe that telcos may be required to replace installed CPEs that have the ONT integrated. If a high-risk supplier ban were to include the ONT, there would be several implications.

Any CPEs that use components from the banned supplier would need to be replaced or retrofitted to ensure compliance. This would require swapping the integrated CPE/ONT units for separate CPE and ONT devices from approved suppliers, which could add to installation costs and increase deployment time. Service providers would also need to reassess their network equipment supply chain, ensuring that new ONTs and CPEs meet regulatory standards for security and compliance. Moreover, replacing equipment could potentially disrupt existing service, necessitating careful planning to manage the transition without major outages for customers. This situation would likely also require updates to the network configuration, as replacing an integrated CPE/ONT device could involve reconfiguring customer devices to work seamlessly with the new setup. I believe it is very likely that telcos eventually will offer fixed broadband service, including CPEs and home gateways, that are free of high-risk suppliers end-2-end (e.g., for B2B and public institutions, e.g., defense and other critically sensitive areas). This may extend to requirements that employees working in or with sensitive areas will need a certificate of high-risk supplier-free end-2-end fixed broadband connection to be allowed to work from home or receive any job-related information (this could extend to mobile devices as well). Again, substantial Capex (and maybe a fair amount of time as well) would be required to reach such a high-risk supplier reduction.

AN ALTERNATE REALITY.

I am unsure whether William Webb’s idea of “The End of Telecoms History” (I really recommend you get his book) will have the same profound impact as Francis Fukuyama’s marvelously thought-provoking book “The End of History and the Last Man,” or whether it will be more “right” than Fukuyama’s book. However, I think it may be an oversimplification of his ideas to say that he has been proven wrong. The world of Man may have proven more resistant to “boredom” than the book assumed (as Fukuyama conceded in subsequent writing). Nevertheless, I do not believe history can be over unless the history makers and writers are all gone (which may happen sooner rather than later). History may have long and “boring” periods where little new and disruptive happens. Still, historically, something has so far always disrupted the hiatus of history, followed by a quieter period (e.g., Pax Romana, European feudalism, the Ming Dynasty, the 19th century’s European balance of power, …). The nature of history is cyclic. Stability and disruption are not opposing forces but part of an ongoing dynamic. I don’t think telecommunications would be that different. Parts of what we define as telecom may reach a natural end and settle until disrupted again; for example, fixed telephony services on copper lines were disrupted by emerging mobile technologies driven by radio access innovation from the 90s until today. Or, like circuit-switched voice-centric technologies, which have been replaced by data-centric packet-switched technologies, putting an “end” to the classical voice-based business model of the incumbent telecommunications corporations.

At some point in the not-so-distant future (2030-2040), all Western European households will be covered by optical fiber and have a fiber-optic access connection with indoor services being served by ultra-WiFi coverage (remember approx. 80% of mobile consumption happens indoors). Mobile broadband networks have by then been redesigned to mainly provide outdoor coverage in urban and suburban areas. These are being modernized at minimum 10-year cycles as the need for innovation is relatively minor and more focused on energy efficiency and CO2 footprint reductions. Direct-to-cell (D2C) LEO satellite or stratospheric drone constellations utilizing a cellular spectrum above 1800 MHz serve outdoor coverage of rural regions, as opposed to the current D2C use of low-frequency bands such as 600 – 800 MHz (as higher frequency bands are occupied terrestrially and difficult to coordinate with LEO Satellite D2C providers). Let’s dream that the telco IT landscape, Core, transport, and routing networks will be fully converged (i.e., no fixed silo, no mobile silo) and autonomous network operations deal with most technical issues, including planning and optimization.

In this alternate reality, you pay for and get a broadband service enabled by a fully integrated broadband network. Not a mobile service served by a mobile broadband network (including own mobile backhaul, mobile aggregation, mobile backbone, and mobile core), and, not a fixed service served by a fixed broadband network different from the mobile infrastructure.

Given the Western European countries addressed in this report (i.e., see details in Further Reading #1), we would need to cover a surface area of 3.6 million square kilometers. To ensure outdoor coverage in urban areas and road networks, we may not need more than about 50,000 cell sites compared to today’s 300 – 400 thousand. If the cellular infrastructure is shared, the effective number of sites that are paid in full would be substantially lower than that.
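A quick back-of-envelope check on these figures (my own arithmetic, assuming, unrealistically, a uniform spread of sites across the whole surface area rather than the urban-and-roads focus described above):

```python
import math

area_km2 = 3.6e6        # Western European surface area quoted in the text
sites = 50_000          # hypothesized outdoor-only site count
todays_sites = 350_000  # midpoint of today's 300-400 thousand range

km2_per_site = area_km2 / sites                # 72 km2 per site
radius_km = math.sqrt(km2_per_site / math.pi)  # ~4.8 km equivalent cell radius
reduction = 1 - sites / todays_sites           # ~86% fewer sites than today

print(f"{km2_per_site:.0f} km2/site, ~{radius_km:.1f} km radius, "
      f"{reduction:.0%} fewer sites")
```

An equivalent radius of roughly 5 km is plausible for low-band outdoor-only coverage, which is what makes the 50,000-site figure at least directionally defensible.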

The required mobile Capex ballpark estimate would be a fifth (including its share of related fixed support investment, e.g., IT, Core, Transport, Switching, Routing, Product development, etc.) of what it otherwise would be if we continue “The Mobile History” as it has been running up to today.

In this “Alternate Reality,” instead of having a mobile Capex level of about 10% of the total fixed and mobile revenue (~15+% of mobile service revenues), we would be down to between 2% and 3% of total telecom revenues (assuming revenue remains reasonably flat at the 2023 level). The fixed investment level would also be relatively low, as household coverage would be finished and most households connected. If we use fixed broadband Capex levels without substantial fiber deployment, that level should not be much higher than 5% of total revenue. Thus, instead of today’s persistent level of 18% – 20% of total telecom revenues, Capex in our “Alternate Reality” would not exceed 10%. And just imagine what such a change would do to the operational cost structure.
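The arithmetic behind these percentages can be laid out explicitly, using the article's own round figures and the lower end (2%) of the mobile range implied by "a fifth" of today's level:

```python
# Capex as a share of total telecom revenue: today vs. the "Alternate Reality".
# Percentages are the round figures used in the text above.

today_mobile_capex = 0.10   # mobile Capex ~10% of total revenue today
today_total_capex = 0.19    # midpoint of today's 18-20% range
alt_mobile_capex = today_mobile_capex / 5    # "a fifth" of today's level -> 2%
alt_fixed_capex = 0.05      # fixed Capex without substantial fiber builds
alt_total_capex = alt_mobile_capex + alt_fixed_capex  # 7%, under the 10% bound

print(f"Alternate Reality Capex/revenue: {alt_total_capex:.0%} "
      f"versus ~{today_total_capex:.0%} today")
```

Even at the upper end of the mobile range (3%), the total stays at 8%, comfortably below the "would not exceed 10%" claim.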

Obviously, this fictive (and speculative) reality would be “The End of Mobile History.”

It would be an “End to Big Capex” and a stop to spending mobile Capex like there is no (better fixed broadband) tomorrow.

This is an end-reflection of where current mobile network development may be heading unless the industry gets better at optimizing and prioritizing between mobile and fixed broadband. A re-architecting of the fundamental paradigms of mobile network design, planning, and build is required, including an urgent reset of current 6G thinking.

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. There should be no doubt that without the support of Russell Waller (New Street Research), this blog would not have been possible. Thank you so much for providing the financial telco data for Western Europe that lays the groundwork for much of the Capex analysis in this article. This blog has also been published on telecomanalysis.net with some minor changes and updates.

FURTHER READING.

  1. New Street Research covers the following countries in their Quarterly report: Austria, Belgium, Denmark, Finland, France, Germany, Greece, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. Across those 15 countries, ca. 56 telcos are covered.
  2. Kim Kyllesbech Larsen, “Navigating the Future of Telecom Capex: Western Europe’s Telecom Investment 2024 to 2030,” telecomanalysis.net, (October 2024).
  3. Kim Kyllesbech Larsen, “The Nature of Telecom Capex – a 2023 Update”, techneconomyblog.com, (July 2023).
  4. Kim Kyllesbech Larsen, “The Nature of Telecom Capex,” techneconomyblog.com, (July 2022).
  5. Rupert Wood, “A crisis of overproduction in bandwidth means that telecoms capex will inevitably fall,” Analysys Mason (July 2024). A rather costly (for mortals & their budgets, at least) report called “The end of big capex: new strategic options for the telecoms industry” allegedly demonstrates the crisis.
  6. European Commission, “Cybersecurity of 5G networks – EU Toolbox of risk mitigating measures”, (January 2020).
  7. European Commission, “The EU Toolbox for 5G Security”, (2020).
  8. European Commission, “5G security: Member States report on progress on implementing the EU toolbox and strengthening safety measures”, (July 2020). It also includes a link to the actual Member States progress report on 5G Security.
  9. European Commission, “Second report on the implementation of the EU 5G cybersecurity toolbox”, (June 2023).
  10. Danish Investment Screening Act, “Particularly sensitive sectors and activities,” Danish Business Authority, (July 2021). Note that the “Danish Investment Screening Act” is closely aligned with broader European Union (EU) frameworks and initiatives to safeguard critical infrastructure from high-risk foreign suppliers. The Act reflects Denmark’s effort to implement national and EU-level policies to protect sensitive sectors from foreign investments that could pose security risks, particularly in critical infrastructure such as telecommunications, energy, and defense.
  11. Cynthia Kroet, “Eleven EU countries took 5G security measures to ban Huawei, ZTE”, Euro News, (August 2024).
  12. Michael Stenvei, “Historisk indgreb: TDC tvinges til at droppe Huawei-aftale”, Finans.dk, (May 2023).
  13. Mathieu Pollet, “Time to cut back on Huawei, German minister tells telecoms giants,” Politico (August 2023).
  14. German press on high-risk suppliers in German telecommunications networks: “Zeit für den Abschied von Huawei, sagt Innenministerin Faeser” (Handelsblatt, August 18, 2023), “Deutsche Telekom und Huawei: Warum die Abhängigkeit bleibt” (Die Welt, September 7, 2023), “Telekom-Netz: Kritik an schleppendem Rückzug von Huawei-Komponenten” (Der Spiegel, September 20, 2023), “Faeser verschiebt Huawei-Bann und stößt auf heftige Kritik” (Handelsblatt, July 18, 2024), “Huawei-Verbot in 5G-Netzen: Deutschland verschärft, aber langsam” (Tagesschau, July 15, 2024), and “Langsame Fortschritte: Deutschland und das Huawei-Dilemma” (Der Spiegel, September 21, 2024) and many many others.
  15. Iain Morris, “German Huawei ban to cost €2.5B and take years, no thanks to EU”, Light Reading (May 2023).
  16. Alexander Martin, “EU states told to restrict Huawei and ZTE from 5G networks ‘without delay’”, The Record, (June 2023).
  17. Strand Consult, “Understanding the Market for 4G RAN in Europe: Share of Chinese and Non-Chinese Vendors – in 102 Mobile Networks”, (2020).
  18. Strand Consult, “The Market for 5G RAN in Europe: Share of Chinese and Non-Chinese Vendors in 31 European Countries”, (2023).
  19. William Webb, “The End of Telecoms History,” Kindle, (June 2024).
  20. GSMA, “The State of 5G 2024 – Introducing the GSMA Intelligence 5G Connectivity Index”, (February 2024).
  21. Speedtest.com, “Speedtest Global Index”, (August 2024).
  22. Ericsson Mobility Visualizer – Mobile Data Traffic.
  23. Kim Kyllesbech Larsen, “5G Economics – An Introduction (Chapter 1)”, techneconomyblog.com, (December 2016).
  24. Kim Kyllesbech Larsen, “Capacity planning in mobile data networks experiencing exponential growth in demand” (April 2012). See slide 5, showing that 50% of all data traffic is generated in 1 cell, 80% of data traffic is carried in up to 3 cells, and only 20% of traffic can be regarded as truly mobile. The presentation has been viewed more than 19 thousand times.
  25. Tom Copeland, Tim Koller, Jack Murrin, “Valuation – Measuring and Managing the Valuation of Companies,” John Wiley & Sons, (3rd edition, 2000). There are newer editions on Amazon.com today (e.g., 7th by now).
  26. Dean Bubley, “The 6G vision needs a Reset” (October 2024).
  27. Geoff Hollingworth, “Why 6G Reset and why I support”, (October 2024).
  28. Opanga, “The RAIN AI Platform”, provides a cognitive AI-based solution that addresses (1) network optimization, lowering Capex demand and increasing the customer experience, (2) energy reduction above and beyond existing supplier solutions, leading to further Opex efficiencies, and (3) network intelligence, using AI to manage network data at a much higher resolution than is possible with classical dashboards applied to technology-driven data lakes.

The Next Frontier: LEO Satellites for Internet Services.

THE SPACE RACE IS ON.

If all current commercial satellite plans were to be realized within the next decade, we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half that number realized by StarLink’s Low Earth Orbit (LEO) constellation over the last couple of years (i.e., since 2018).

While the “Arms Race” during the Cold War was mainly “a thing” between the USA and the former Soviet Union, the Space Race will, in my opinion, be “battled out” between the commercial interests of the West and the political interests of China (as illustrated in Figure 1 below). The current numbers strongly indicate that Europe, Canada, the Middle East, Africa, and APAC (minus China) will largely be left on the sidelines, watching the US and China impose, in theory, a “duopoly” in LEO satellite-based services. In practice, however, it will be a near-monopoly when considering the security concerns between the West and the (re-defined) Eastern bloc.

Figure 1 illustrates my thesis that, over the next 10 years, we will see a Space Race between one (or very few) commercial LEO constellations, represented by a Falcon 9-like design (for maybe too obvious reasons), and a Chinese state-owned satellite constellation. (Courtesy: DALL-E).

As of the end of 2023, more than 50% of launched and planned commercial LEO satellites were US-based. Of those, the largest fraction is accounted for by the StarLink constellation (~75%). More than 30% are launched or planned by Chinese companies, headed by the state-owned Guo Wang constellation, which rivals Elon Musk’s Starlink in ambition and scale. Europe comes in a distant third with about 8% of the total of fixed internet satellites. Apart from being disappointed, alas not surprised, by the European track record, it is somewhat more baffling that there are so few Indian, and no African, satellite constellations, given the obvious benefits such satellites could bring to India and the African continent.

India is a leading satellite nation with a proud tradition of innovative satellite design and manufacturing and a solid track record of satellite launches. However, regarding commercial LEO constellations, India has yet to seize some obvious opportunities. Having previously worked on the economics and operationalization of a satellite ATC (i.e., a satellite service with an ancillary terrestrial component) internet service across India, I find it mind-blowing (imo) how much economic opportunity there is in replacing the vast terrestrial cellular infrastructure in rural India with satellite, not to mention the quantum leap in broadband service resilience and availability that could be provided. According to the StarLink coverage map, regulatory approval in India for StarLink (US) services is still pending. In the meantime, Eutelsat’s OneWeb (EU) received regulatory approval in late 2023 for its satellite internet service over India in collaboration with Bharti Enterprises (India), which is also the largest shareholder in the recently formed Eutelsat Group with 21.2%. Moreover, Jio’s JioSpaceFiber satellite internet services were launched in several Indian states at the end of 2023, using the SES (EU) MEO O3b mPOWER satellite constellation. Despite the clear satellite know-how and capital available, there appears to be little activity in Indian-based LEO satellite development taking up the competition with the international operators.

The African continent is attracting all the major LEO satellite constellations, such as StarLink (US), OneWeb (EU), Amazon Kuiper (US), and Telesat Lightspeed (CAN). However, getting regulatory approval for their satellite-based internet services is a complex, time-consuming, and challenging process across Africa’s 54 recognized sovereign countries. I would expect the Chinese-based satellite constellations (e.g., Guo Wang) to gain ground here as well, due to the strong ties between China and several African nations.

This article is not about SpaceX’s StarLink satellite constellation, although StarLink is mentioned a lot and used as an example. Recently, at the Mobile World Congress 2024 in Barcelona, talking to satellite operators (but not StarLink) providing fixed broadband satellite services, we joked about how long into a meeting we could go before SpaceX and StarLink would be mentioned (~5 minutes was the record, I think).

This article is about the key enablers (frequencies, frequency bandwidth, antenna design, …) that make up an LEO satellite service, the LEO satellite itself, the kind of services one should expect from it, and its limitations.

There is no doubt that LEO satellites of today have an essential mission: delivering broadband internet to rural and remote areas with little or no terrestrial cellular or fixed infrastructure to provide internet services. Satellites can offer broadband internet to remote areas with little population density and a population spread out reasonably uniformly over a large area. A LEO satellite constellation is not (in general) a substitute for an existing terrestrial communications infrastructure. Still, it can enhance it by increasing service availability and being an important remedy for business continuity in remote rural areas. Satellite systems are capacity-limited as they serve vast areas, typically with limited spectral resources and capacity per unit area.

In comparison, a terrestrial cellular network has much smaller coverage areas with demand-matched spectral resources. It is also easier to increase capacity in a terrestrial cellular system by adding more sectors, or more sites, in an area that warrants such investments. Adding more cells, and thus more system capacity, to satellite coverage requires a new generation of satellites with more advanced antenna designs, typically increasing the number of phased-array beams and using more complex modulation and coding schemes that boost spectral efficiency, leading to increased capacity and quality for the services rendered to the ground. The principle of increasing system capacity by increasing the number of cells (i.e., cell splitting) works the same in a satellite system as it does in a terrestrial cellular system; only the practicalities differ.

So, on average, LEO satellite internet services to individual customers (or households), such as those offered by StarLink, are excellent for remote, sparsely populated areas with a nicely spread-out population. Let us de-average this statement. Within the satellite coverage area, we may have towns and settlements where the local population density can be fairly high, even if it is very low over the larger footprint covered by the satellite. As the capacity and quality of the satellite is a shared resource, serving towns and settlements beyond a certain size directly may not be the best approach to a sustainable and good customer experience, as the satellite resources exhaust rapidly in such scenarios. Here, a hybrid architecture is of much better use: provide all customers in a town or settlement with the best service possible by leveraging the existing terrestrial communications infrastructure, cellular as well as fixed, combined with a satellite backhaul broadband connection between a satellite ground gateway and the broadband internet satellite. This is offered by several satellite broadband providers (from GEO, MEO, and LEO orbits) and has the beauty of not being limited to one provider. Unfortunately, this particular finesse is often overshadowed by the awe at the massive scale of the StarLink constellation.

AND SO IT STARTS.

When I compared the economics of stratospheric drone-based cellular coverage with that of LEO satellites and terrestrial-based cellular networks in my previous article, “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, it was clear that even if LEO satellites are costly to establish, they provide a substantial cost advantage over cellular coverage in rural and remote areas that are either scarcely covered or not covered at all. Although the existing LEO satellite constellations have limited capacity compared to a terrestrial cellular network and would perform rather poorly over densely populated areas (e.g., urban and suburban areas), they can offer very decent fixed-wireless-access-like broadband services in rural and remote areas at speeds exceeding even 100 Mbps, as shown by the Starlink constellation. Even if the provided speed and capacity are likely to be substantially lower than what a terrestrial cellular network could offer, they often provide the missing (internet) link. Anything larger than nothing remains infinitely better.

Low Earth Orbit (LEO) satellites represent the next frontier in (novel) communication network architectures, what we in modern lingo would call non-terrestrial networks (NTN), with the ability to combine both mobile and fixed broadband services, enhancing and substituting terrestrial networks. LEO satellites orbit significantly closer to Earth than their Geostationary Orbit (GEO) counterparts at 36 thousand kilometers. Typically positioned at altitudes between 300 and 2,000 kilometers, LEO satellites offer substantially reduced latency, higher bandwidth capabilities, and a more direct line of sight to receivers on the ground. This makes LEO satellites an obvious and integral component of non-terrestrial networks, which aim to extend the reach of existing fixed and mobile broadband services, particularly in rural, un- and under-served, or inaccessible regions, and to act as a high-availability element of terrestrial communications networks in the event of natural disasters (flooding, earthquakes, …) or military conflict, in which terrestrial networks are taken out of operation.

Another key advantage of LEO satellites is that the likelihood of a line-of-sight (LoS) to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the signal propagation from a LEO satellite closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial-based cellular mobile network do not apply to our satellite, with signal propagation largely being determined by the distance between the satellite and the ground (see Figure 2).

Figure 2 illustrates the difference between terrestrial cellular coverage from a cell tower and coverage from a Low Earth Orbit (LEO) satellite. The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which are primarily determined by distance, as the path approximates free-space propagation with signal attenuation mainly set by the Line-of-Sight (LoS) distance from antenna to Earth. The situation is very different for a terrestrial cellular tower, whose radiated signal is substantially compromised by environmental factors.

Low Earth Orbit (LEO) satellites, compared to GEO and MEO-based higher-altitude satellite systems, in general have simpler designs and smaller sizes, weights, and volumes. Their design and architecture are not just a function of technological trends but also a manifestation of their operational environment. The (relative) simplicity of LEO satellites also allows for more standardized production, with off-the-shelf components and modular designs that can be manufactured in larger quantities, as is the case with the CubeSat standard and SmallSats in general. The lower altitude of LEO satellites translates to a reduced distance from the launch site to the operational orbit, which inherently affects the economics of satellite launches. This proximity to Earth means that the energy required to propel a satellite into LEO is significantly less than that needed to reach Geostationary Earth Orbit (GEO), resulting in lower launch costs.

The advent of LEO satellite constellations marks an important shift in how we approach global connectivity. With the potential to provide ubiquitous internet coverage in rural and remote places with little or no terrestrial communications infrastructure, satellites are increasingly being positioned as vital elements in global communication. LEO satellites, as well as stratospheric drones, can provide economical internet access in remote areas, as addressed in my previous article, and play a significant role in disaster relief efforts. For example, when terrestrial communication networks are disrupted after a natural disaster, LEO satellites can quickly re-establish communication links to normal cellular devices or ad-hoc earth-based satellite systems, enabling efficient coordination of rescue and relief operations. Furthermore, they offer a resilient network backbone that complements terrestrial infrastructure.

The Internet of Things (IoT) benefits from the capabilities of LEO satellites, particularly in areas with little or no existing terrestrial communications networks. IoT devices often operate in remote or mobile environments, from sensors in agricultural fields to trackers across shipping routes. LEO satellites provide reliable connectivity to IoT networks, facilitating many applications, such as non- and near-real-time monitoring of environmental data, seamless asset tracking over transcontinental journeys, and rapid deployment of smart devices in smart city infrastructures. As an example, let us look at the minimum requirements for establishing a LEO satellite constellation that can gather IoT measurements. At an altitude of 550 km, the satellite takes ca. 1.5 hours to return to a given point on its orbit. Earth rotates (see also below), which requires us to deploy several orbital planes to ensure continuous coverage throughout the 24 hours of a day (assuming this is required). Depending on the satellite antenna design, the target coverage area, and how often a measurement is required, a satellite constellation supporting an IoT business may not require much more than 20 (lower measurement frequency) to 60 (higher measurement frequency, but far from real-time data collection) LEO satellites (@ 550 km).
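The “ca. 1.5 hours” orbital period, and the reason Earth’s rotation forces us into several orbital planes, can be checked with a quick spherical-Earth sketch (Kepler’s third law for a circular orbit; the 550 km altitude is the example from the text, the constants are standard values):

```python
import math

MU = 398_600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0   # mean Earth radius, km

def orbital_period_s(altitude_km: float) -> float:
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(r^3 / mu)."""
    r = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(r**3 / MU)

T = orbital_period_s(550.0)
print(f"Orbital period at 550 km: {T/60:.0f} min (~{T/3600:.1f} h)")

# While the satellite completes one orbit, Earth rotates underneath it,
# shifting the ground track westward by (T / 24 h) * 360 degrees.
shift_deg = T / 86_400 * 360
shift_km = shift_deg / 360 * 2 * math.pi * R_EARTH  # at the equator
print(f"Ground-track shift per orbit: {shift_deg:.1f} deg (~{shift_km:,.0f} km at the equator)")
```

The ~24-degree westward drift per orbit is why a single orbital plane cannot keep revisiting the same spot around the clock, and why the constellation size then depends on antenna swath and required measurement frequency.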

For defense purposes, LEO satellite systems present unique advantages. Their lower orbits allow for high-resolution imagery and rapid data collection, which are crucial for surveillance, reconnaissance, and operational awareness. As typically more LEO satellites will be required, compared to a GEO satellite, such systems also offer a higher degree of redundancy in case of anti-satellite (ASAT) warfare scenarios. When integrated with civilian applications, military use cases can leverage the robust commercial infrastructure for communication and geolocation services, enhancing capabilities while distributing the system’s visibility and potential targets.

Standalone military LEO satellites are engineered for specific defense needs. These may include hardened systems for secure communication and resistance to jamming and interception. For instance, they can be equipped with advanced encryption algorithms to ensure secure transmission of sensitive military data. They also carry tailored payloads for electronic warfare, signal intelligence, and tactical communications. For example, they can host sensors for detecting and locating enemy radar and communication systems, providing a significant advantage in electronic warfare. As the line between civilian and military space applications blurs, dual-use LEO satellite systems are emerging, capable of serving civilian broadband and specialized military requirements. It should be pointed out that there are also military applications, such as signal gathering, that may not be compatible with civil communications use cases.

In a military conflict, the distributed architecture and lower altitude of LEO constellations may offer some advantages regarding resilience and targetability compared to GEO and MEO-based satellites. Their more significant numbers (i.e., 10s to 1000s) compared to GEO, and the potential for quicker orbital resupply can make them less susceptible to complete system takedown. However, their lower altitudes could make them accessible to various ASAT technologies, including ground-based missiles or space-based kinetic interceptors.

It is not uncommon to encounter academic researchers and commentators who give the impression that LEO satellites could replace existing terrestrial infrastructures and solve all terrestrial communications issues known to man. That is (of course) not the case. Often, such statements appear to be based on an incomplete understanding of the capacity limitations of satellite systems. Due to satellites’ excellent coverage with very large terrestrial footprints, the satellite capacity is shared over very large areas. For example, consider a LEO satellite at 550 km altitude. The satellite footprint, or coverage area (aka ground swath), is the area on the Earth’s surface over which the satellite can establish a direct line of sight. In our example, the footprint diameter would be ca. 5,500 kilometers, equivalent to an area of ca. 23 million square kilometers, more than twice that of the USA (or China or Canada). Before you get too excited, the satellite antenna will typically restrict the surface area the satellite covers. The extent of the observable world seen at any given moment by the satellite antenna is defined as the Field of View (FoV) and can vary from a few degrees (narrow beams, small coverage area) to 40 degrees or higher (wide beams, large coverage areas). At a FoV of 20 degrees, the antenna footprint diameter would be ca. 2,400 kilometers, equivalent to a coverage area of ca. 5 million square kilometers.
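The ca. 5,500 km / 23 million km² figures can be sanity-checked with a simple spherical-Earth calculation of the horizon-limited footprint. The sketch below lands slightly lower (ca. 5,100 km and 20 million km²) because it measures the swath along Earth’s curved surface rather than as a flat disc, but it is the same ballpark, and still more than twice the area of the USA:

```python
import math

R = 6_371.0   # mean Earth radius, km
h = 550.0     # satellite altitude, km (example from the text)

# Horizon-limited footprint: the largest Earth-central half-angle with
# direct line of sight satisfies cos(lambda) = R / (R + h).
lam = math.acos(R / (R + h))          # half-angle of the spherical cap, rad

swath_km = 2 * R * lam                              # great-circle footprint diameter
cap_area = 2 * math.pi * R**2 * (1 - math.cos(lam)) # spherical-cap area

print(f"Max footprint diameter: ~{swath_km:,.0f} km")
print(f"Max footprint area:     ~{cap_area/1e6:.0f} million km^2")
```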

In comparison, for a FoV of 0.8 degrees, the antenna footprint would only be 100 kilometers. If our satellite has a 16-beam capability, that translates into a coverage diameter of 24 km per beam. For the StarLink system, based on the Ku-band (13 GHz) and a per-beam downlink (satellite-to-Earth) capacity of ca. 680 Mbps (in 250 MHz), we would have ca. 2 Mbps per km² of coverage area. In comparison, a terrestrial rural cellular site with 85 MHz (downlink, base station antenna to customer terminal) would deliver 10+ Mbps per km² of coverage area.
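The per-beam arithmetic behind the ca. 2 Mbps per km² figure is straightforward (beam diameter and capacity taken from the text; the exact result is ca. 1.5 Mbps per km², which the text rounds up):

```python
import math

# Figures from the text (Starlink Ku-band downlink example):
beam_diameter_km = 24.0     # per-beam coverage diameter
beam_capacity_mbps = 680.0  # downlink capacity per beam in 250 MHz

beam_area_km2 = math.pi * (beam_diameter_km / 2) ** 2
density = beam_capacity_mbps / beam_area_km2

print(f"Beam area:        {beam_area_km2:.0f} km^2")
print(f"Capacity density: ~{density:.1f} Mbps per km^2")
```

The terrestrial 10+ Mbps per km² comparison depends on additional assumptions (site range, sectorization, spectral efficiency) that the text does not spell out, so it is not reproduced here.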

It is always good to keep in mind that “the satellites’ mission is not to replace terrestrial communications infrastructures but to supplement and enhance them,” and furthermore, that “satellites offer the missing (internet) link in areas where no terrestrial communications infrastructure is present.” Satellites offer superior coverage to any terrestrial communications infrastructure. Satellites’ limitations lie in providing capacity and quality at population scale, and in supporting applications and access technologies requiring very short latencies (e.g., below 10 ms).

In the following, I will focus on the terrestrial coverage and services that LEO satellites can provide. By the end of this blog, I hope to have given you (the reader) a reasonable understanding of how coverage, capacity, and quality work in a (LEO) satellite system, and an impression of the key satellite parameters we can tune to improve them.

EARTH ROTATES, AND SO DO SATELLITES.

Before getting into the details of low earth orbit satellites, let us briefly get a couple of basic topics off the table. Skipping this part may be a good option if you are already in the know about satellites. Or maybe carry on and get a good laugh at those terra firma cellular folks who forgot about the rotation of Earth 😉

From an altitude and orbit (around Earth) perspective, you may have heard of two types of satellites: GEO and LEO satellites. Geostationary (GEO) satellites are positioned in a geostationary orbit at ~36 thousand kilometers above Earth. That the satellite is geostationary means it rotates with the Earth and appears stationary from the ground, requiring only one satellite to maintain constant coverage over an area that can be up to one-third of Earth’s surface. Low Earth Orbit (LEO) satellites are positioned at altitudes between 300 and 2,000 kilometers above Earth and move relative to the Earth’s surface at high speed, requiring a network, or constellation, to ensure continuous coverage of a particular area.

I have experienced that terrestrial cellular folks (like myself), when first thinking about satellite coverage, have some intuitive issues with it. We are not used to our antennas moving away from the targeted coverage area, nor to our targeted coverage area moving away from our antenna. The geometry and dynamics of terrestrial cellular coverage are simpler than those of satellite-based coverage. For LEO satellite network planners, it is not rocket science (pun intended) that the satellites move around in their designated orbits over Earth at orbital speeds of ca. 7 to 8 km per second. Thus, at an altitude of 500 km, a LEO satellite orbits Earth approximately every 1.5 hours. Earth, thankfully, rotates. Compared to its GEO satellite “cousin,” the LEO satellite is not “stationary” from the perspective of the ground. Thus, as Earth rotates, the targeted coverage area moves away from the coverage provided by the orbiting satellite.
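The orbital speed of a circular orbit follows directly from v = √(μ/r); a quick check across typical LEO altitudes (standard constants assumed) confirms the ca. 7 to 8 km/s range:

```python
import math

MU = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0  # mean Earth radius, km

def orbital_speed_kms(altitude_km: float) -> float:
    """Circular-orbit speed: v = sqrt(mu / r), with r the orbital radius."""
    return math.sqrt(MU / (R_EARTH + altitude_km))

for alt in (300, 550, 2000):
    print(f"{alt:>5} km altitude: {orbital_speed_kms(alt):.2f} km/s")
```

Note that the speed decreases slightly with altitude, while the orbital period grows, since the circumference of the orbit grows faster than the speed drops.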

We need several satellites in the same orbit, and several orbits (i.e., orbital planes), to provide continuous satellite coverage of a target area. This is very different from terrestrial cellular coverage of a given area (needless to say).

WHAT LEO SATELLITES BRING TO THE GROUND.

Anything is infinitely more than nothing. The Low Earth Orbit satellite brings the possibility of internet connectivity where there previously was nothing, either because too few potential customers spread out over a large area made terrestrial-based services hugely uneconomical, or because the environment is too hostile to build normal terrestrial networks within reasonable economics.

Figure 3 illustrates a low Earth orbit satellite constellation providing internet to rural and remote areas as a way to solve part of the digital divide challenge in terms of availability. Affordability is likely to remain a challenge unless subsidized by customers who can afford satellite services in places where availability is more a question of convenience. (Courtesy: DALL-E)

The LEO satellites represent a transformative shift in internet connectivity, providing advantages over traditional cellular and fixed broadband networks, particularly for global access, speed, and deployment capabilities. As described in “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, LEO satellite constellations, or networks, may also be significantly more economical than equivalent cellular networks in rural and remote areas where the economics of coverage by satellite, as depicted in the above Figure 3, is by far better than by traditional terrestrial cellular means.

One of the foremost benefits of LEO satellites is their ability to offer global coverage with reasonable broadband and latency performance that is difficult to match with GEO and MEO satellites. A geostationary satellite obviously also offers global broadband coverage, with a unit coverage area much more extensive than that of a LEO satellite, but it cannot offer very low latency services, and it is more difficult to provide high data rates (in comparison to a LEO satellite). LEO satellites can reach the most remote and rural areas of the world, places where laying cables or setting up cell towers is impractical. This is a crucial step in delivering communications services where none exist today, ensuring that underserved populations and regions gain access to internet connectivity.

Another significant advantage is the reduction in latency that LEO satellites provide. Since they orbit much closer to Earth, typically at altitudes between 350 and 700 km, compared to their geostationary counterparts at 36 thousand kilometers, the time it takes for a communications signal to travel between the user and the satellite is significantly reduced. This lower latency is crucial for enhancing the user experience in real-time applications such as video calls and online gaming, making these activities more enjoyable and responsive.
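The latency advantage can be made concrete with a minimal propagation-delay sketch (idealized nadir path at the speed of light, propagation only; real round-trip times add bent-pipe or inter-satellite routing, processing, and queuing delays):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Lower bound on user <-> satellite round-trip time:
    signal travels straight up and back down at light speed."""
    return 2 * altitude_km / C * 1000

for name, alt in [("LEO @ 550 km", 550), ("GEO @ 35,786 km", 35_786)]:
    print(f"{name}: >= {min_rtt_ms(alt):.1f} ms")
```

These lower bounds (~4 ms for LEO versus ~240 ms for GEO) match the round-trip figures quoted later in the article and explain why GEO links feel sluggish for interactive applications.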

An inherent benefit of satellite constellations is their ability for quick deployment. They can be deployed rapidly in space, offering a quicker solution to achieving widespread internet coverage than the time-consuming and often challenging process of laying cables or erecting terrestrial infrastructure. Moreover, the network can easily be expanded by adding more satellites, allowing it to dynamically meet changing demand without extensive modifications on the ground.

LEO satellite networks are inherently scalable. By launching additional satellites, they can accommodate growing internet usage demands, ensuring that the network remains efficient and capable of serving more users over time without significant changes to ground infrastructure.

Furthermore, these satellite networks offer resilience and reliability. With multiple satellites in orbit, the network can maintain connectivity even if one satellite fails or is obstructed, providing a level of redundancy that makes the network less susceptible to outages. This ensures consistent performance across different geographical areas, unlike terrestrial networks that may suffer from physical damage or maintenance issues.

Another critical advantage is (relative) cost-effectiveness compared to a terrestrial-based cellular network. In remote or hard-to-reach areas, deploying satellites can be more economical than the high expenses associated with extending terrestrial broadband infrastructure. As satellite production and launch costs continue to decrease, the economics of LEO satellite internet become increasingly competitive, potentially reducing the cost for end-users.

LEO satellites offer a promising solution to some of the limitations of traditional connectivity methods. By overcoming geographical, infrastructural, and economic barriers, LEO satellite technology has the potential to not just complement but effectively substitute terrestrial-based cellular and fixed broadband services, especially in areas where such services are inadequate or non-existent.

Figure 4 below provides an overview of LEO satellite coverage with fixed broadband services offered to customers in the Ku band and a Ka-band backhaul link to ground station gateways (GWs) that connect to, for example, the internet. Having inter-satellite communications (e.g., via laser links, such as those used by Starlink satellites from satellite version 1.5 onwards) allows for substantially fewer ground-station gateways. Inter-satellite laser links between intra-plane satellites are a distinct advantage in ensuring coverage of rural and remote areas where it might be difficult, very costly, or impractical to have a satellite ground station GW to connect to due to the lack of global internet infrastructure.

Figure 4 In general, a satellite is required to have LoS to its ground station gateway (GW); in other words, the GW needs to be within the coverage footprint of the satellite. For LEO satellites, which are at low altitudes between 300 and 2,000 km and thus have a much smaller footprint than MEO and GEO satellites, this would result in a need for a substantial number of ground stations. This is depicted in (a) above. With inter-satellite laser links (ISLLs), e.g., those implemented by Starlink, it is possible to reduce the number of ground station gateways significantly, which is particularly helpful in rural and very remote areas. These laser links enable direct communication between satellites in orbit, which enhances the network’s performance, reliability, and global reach.

Inter-satellite laser links (ISLLs), also called optical inter-satellite links (OISLs), are an advanced communication technology utilized by satellite constellations, such as Starlink, to facilitate high-speed, secure data transmission directly between satellites. Inter-satellite laser links are today (primarily) designed for intra-plane communication within satellite constellations, enabling data transfer between satellites that share the same orbital plane. This is due to the relatively stable geometries and predictable distances between satellites in the same orbit, which facilitate maintaining the line-of-sight connections necessary for laser communications. ISLLs mark a significant departure from the traditional reliance on ground stations for inter-satellite communication, and they offer many benefits, including the ability to transmit data at speeds comparable to fiber-optic cables. Additionally, ISLLs enable satellite constellations to deliver seamless coverage across the entire planet, including over oceans and polar regions where ground station infrastructure is limited or non-existent. The technology also inherently enhances the security of data transmissions, thanks to the focused nature of laser beams, which are difficult to intercept.

However, the deployment of ISLLs is not without challenges. The technology requires a clear line of sight between satellites, which can be affected by their orbital positions, necessitating precise control mechanisms. Moreover, the theoretical limit to the number of satellites linked in a daisy chain is influenced by several factors, including the satellite’s power capabilities, the network architecture, and the need to maintain clear lines of sight. High-power laser systems also demand considerable energy, impacting the satellite’s power budget and requiring efficient management to balance operational needs. The complexity and cost of developing such sophisticated laser communication systems, combined with very precise pointing mechanisms and sensitive detectors, can be quite challenging and need to be carefully weighed against building satellite ground stations.

Cross-plane ISLL transmission, i.e., the ability to communicate between satellites in different orbital planes, presents additional technical challenges, as it is highly demanding to maintain a stable line of sight between satellites moving in different orbital planes. However, the potential for ISLLs to support cross-plane links is recognized as a valuable capability for creating a fully interconnected satellite constellation. The development and incorporation of cross-plane ISLL capabilities into satellites is an area of active research and development. Such capabilities would reduce the reliance on ground stations and significantly increase the resilience of satellite constellations. I see this as a next-generation topic, together with many other important developments described at the end of this blog. However, the power consumption of the ISLL is a point of concern that needs careful attention, as it will impact many other aspects of satellite operation.

THE DIGITAL DIVIDE.

The digital divide refers to the “internet haves and have-nots,” or “the gap between individuals who have access to modern information and communication technology (ICT),” such as the internet, computers, and smartphones, and those who do not. This divide can be due to various factors, including economic, geographic, age, and educational barriers. Essentially, as illustrated in Figure 5, it is the difference between the “digitally connected” and the “digitally disconnected.”

The significance of the digital divide is considerable, impacting billions of people worldwide. It is estimated that a little less than 40% of the world’s population, or roughly 2.9 billion people, had never used the internet (as of 2023). This gap is most pronounced in developing countries, rural areas, and among older populations and economically disadvantaged groups.

The digital divide affects individuals’ ability to access information, education, and job opportunities and impacts their ability to participate in digital economies and the modern social life that the rest of us (i.e., the other side of the divide or the privileged 60%) have become used to. Bridging this divide is crucial for ensuring equitable access to technology and its benefits, fostering social and economic inclusion, and supporting global development goals.

Figure 5 illustrates the digital divide, that is, the gap between individuals with access to modern information and communication technology (ICT), such as the internet, computers, and smartphones, and those who do not have access. (Courtesy: DALL-E)

CHALLENGES WITH LEO SATELLITE SOLUTIONS.

Low-Earth-orbit satellites offer compelling advantages for global internet connectivity, yet they are not without challenges and disadvantages when considered as substitutes for cellular and fixed broadband services. These drawbacks underscore the complexities and limitations of deploying LEO satellite technology globally.

The capital investment required and the ongoing costs associated with designing, manufacturing, launching, and maintaining a constellation of LEO satellites are substantial. Despite technological advancements and increased competition driving costs down, the financial barrier to entry remains high. Compared to their geostationary counterparts, the relatively short lifespan of LEO satellites necessitates frequent replacements, further adding to operational expenses.

While LEO satellites offer significantly reduced latency (round trip times, RTT ~ 4 ms) compared to geostationary satellites (RTT ~ 240 ms), they may still face latency and bandwidth limitations, especially as the number of users on the satellite network increases. This can lead to reduced service quality during peak usage times, highlighting the potential for congestion and bandwidth constraints. This is also the reason why the main business model of LEO satellite constellations primarily addresses coverage needs in rural and remote locations. Alternatively, the LEO satellite business model focuses on low-bandwidth needs such as texting, voice messaging, and low-bandwidth Internet of Things (IoT) services.

Navigating the regulatory and spectrum management landscape presents another challenge for LEO satellite operators. Securing spectrum rights and preventing signal interference requires coordination across multiple jurisdictions, which can complicate deployment efforts and increase the complexity of operations.

The environmental and space traffic concerns associated with deploying large numbers of satellites are significant. The potential for space debris and the sustainability of low Earth orbits are critical issues, with collisions posing risks to other satellites and space missions. Additionally, the environmental impact of frequent rocket launches raises further concerns.

FIXED-WIRELESS ACCESS (FWA) BASED LEO SATELLITE SOLUTIONS.

Using the NewSpace Index database, updated December 2023, 6,463 internet satellites have been launched to date, of which 5,650 (~87%) are from StarLink, and 40,000+ satellites are planned for launch, with SpaceX’s Starlink accounting for 11,908 of those planned (~30%). More than 45% of the satellites launched and planned support multi-application use cases, i.e., internet together with, for example, IoT (~4%) and/or Direct-2-Device (D2D, ~39%). The large D2D share is due to StarLink’s plans to provide services to mobile terminals with their latest satellite constellation. The first six StarLink v2 satellites with direct-to-cellular capability were successfully launched on January 2nd, 2024. Some care should be taken with the D2D share in the StarLink numbers, as it does not consider the different form factors of the version 2 satellite, which do not all include D2D capabilities.

Most LEO satellites, helped by the sheer quantity of StarLink satellites, operational and planned, support satellite fixed broadband internet services. It is worth noting that the Chinese Guo Wang constellation ranks second in terms of planned LEO satellites, with almost 13,000 planned, rivaling the StarLink constellation. After StarLink and Guo Wang are counted, only about 34%, or ca. 16,000, internet satellites remain in the planning pool across 30+ satellite companies. While StarLink is privately owned (by Elon Musk), the Guo Wang (國網 ~ “The state network”) constellation is led by China SatNet, created by SASAC (China’s State-Owned Assets Supervision and Administration Commission), which oversees China’s biggest state-owned enterprises. I expect that such a constellation, which, as planned by Guo Wang and controlled by the Chinese state, would be the second biggest LEO constellation, would be of considerable concern to the West due to the possibility of dual use (i.e., civil & military).

StarLink coverage as of March 2024 (see StarLink’s availability map) does not provide services in Russia, China, Iran, Iraq, Afghanistan, Venezuela, and Cuba (20% of Earth’s total land base surface area). There are still quite a few countries in Africa and South-East Asia, including India, where regulatory approval remains pending.

Figure 6 NewSpace Index data of commercial satellite constellations in terms of total number of launched and planned (top) per company (or constellation name) and (bottom) per country.

While the term FWA, fixed wireless access, is not traditionally used to describe satellite internet services, the broadband services offered by LEO satellites can be considered a form of “wireless access” since they also provide connectivity without cables or fiber. In essence, LEO satellite broadband is a complementary service to traditional FWA, extending wireless broadband access to locations beyond the reach of terrestrial networks. In the following, I will continue to use the term FWA for the fixed broadband LEO satellite services provided to individual customers, including SMEs. As some of the LEO satellite businesses might eventually also provide direct-to-device (D2D) services to normal terrestrial mobile devices, either on their own acquired cellular spectrum or in partnership with terrestrial cellular operators, the LEO satellite operation (or business architecture) becomes much closer to terrestrial cellular operations.

Figure 7 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services, such as Fixed Wireless Access, to individual terrestrial users (e.g., Starlink, Kuiper, OneWeb,…). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of an LEO satellite constellation is between 300 and 2,000 km, with most aiming for 450 to 550 km. It is assumed that the satellites are interconnected, e.g., by laser links. The User Terminal (UT) antenna dynamically orients itself toward the satellite with the best line-of-sight (in terms of signal quality) within the UT’s field-of-view (FoV). The FoV has not been shown in the picture above so as not to overcomplicate the illustration.

Low Earth Orbit (LEO) satellite services like Starlink have emerged to provide fixed broadband internet to individual consumers and small to medium-sized enterprises (SMEs), targeting rural and remote areas where often no other broadband solutions are available, or where only poor legacy copper- or coax-based infrastructure exists. These services deploy constellations of satellites orbiting close to Earth to offer high-speed internet, with the significant advantage of reaching areas where traditional ground-based infrastructure is absent or economically unfeasible.

One of the most significant benefits of LEO satellite broadband is the ability to deliver connectivity with lower latency compared to traditional satellite internet delivered by geosynchronous satellites, enhancing the user experience for real-time applications. The rapid deployment capability of these services also means that areas in dire need of internet access can be connected much more quickly than by waiting for ground infrastructure development. Additionally, satellite broadband’s reliability is less affected by terrestrial challenges, such as natural disasters, that can disrupt other forms of connectivity.

The satellite service comes with its challenges. The cost of user equipment, such as satellite dishes, can be a barrier for some users, as can the installation process for the terrestrial satellite dish required to establish the connection to the satellite. Moreover, services might be limited by data caps or experience slower speeds after reaching certain usage thresholds, which can be a drawback for users with high data demands. Weather conditions can also impact the signal quality, particularly at the higher frequencies used by the satellite, albeit to a lesser extent than for geostationary satellite services. However, the target areas where the fixed broadband satellite service is most suited are rural and remote areas that either have no terrestrial broadband infrastructure (terrestrial cellular broadband or wired broadband such as coax or fiber) or where such infrastructure is poor.

Beyond Starlink, other providers are venturing into the LEO satellite broadband market. OneWeb is actively developing a constellation to offer internet services worldwide, focusing on communities that are currently underserved by broadband. Telesat Lightspeed is also gearing up to provide broadband services, emphasizing the delivery of high-quality internet to the enterprise and government sectors.

Other LEO satellite businesses, such as AST SpaceMobile and Lynk Mobile, are taking a unique approach by aiming to connect standard mobile phones directly to their satellite network, extending cellular coverage beyond the reach of traditional cell towers. More about that in the section below (see “New Kids on the Block – Direct-to-Devices LEO satellites”).

I have been asked why I appeared somewhat dismissive of Amazon’s Project Kuiper in a previous version of this article, particularly compared to Starlink (I guess). The expressed mission is to “provide broadband services to unserved and underserved consumers, businesses in the United States, …” (FCC 20-102). Project Kuiper plans a broadband constellation of 3,236 microsatellites at 3 altitudes (i.e., orbital shells) around 600 km, providing fixed broadband services in the Ka-band (i.e., ~17–30 GHz). From its US-based FCC (Federal Communications Commission) filing and the subsequent FCC authorization, it is clear that the Kuiper constellation primarily targets contiguous coverage of the USA (but mentions that services cannot be provided in the majority of Alaska … funny, I thought that was a good definition of an underserved, remote, and sparsely populated area?). Amazon has committed to launch 50% (1,618 satellites) of its committed satellite constellation before July 2026 (so far, 2+ have been launched) and the remaining 50% before July 2029. There are, however, far fewer details on the Kuiper satellite design than, for example, are available for the various versions of the Starlink satellites. Given that Kuiper will operate in the Ka-band, there may be more frequency bandwidth allocated per beam than is possible in the Starlink satellites, which use the Ku-band for customer device connectivity. However, the Ka-band is at a higher frequency, which may result in more compromised signal propagation. In my opinion, based on the information from the FCC submissions and correspondence, the Kuiper constellation appears less ambitious than Starlink’s vision, mission, and tangible commitment in terms of aggressive launch cadence, a very high level of innovation, and iterative development of their platform and capabilities in general. This may, of course, change over time as more information becomes available on Amazon’s Project Kuiper.

FWA-based LEO satellite solutions – takeaway:

  • LoS-based and free-space-like signal propagation allows high-frequency signals (i.e., high throughput, capacity, and quality) to provide near-ideal performance only impacted by the distance between the antenna and the ground terminal. Something that is, in general, not possible for a terrestrial-based cellular infrastructure.
  • Provides satellite fixed broadband internet connectivity typically using the Ku-band in geographically isolated locations where terrestrial broadband infrastructure is limited or non-existent.
  • Lower latency (and round trip time) compared to MEO and GEO satellite internet solutions.
  • Current systems are designed to provide broadband internet services in sparsely populated areas and underserved (or unserved) regions where traditional terrestrial-based communications infrastructures are highly uneconomical and/or impractical to deploy.
  • As shown in my previous article (i.e., “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”), LEO satellite networks may be an economically interesting alternative to terrestrial rural cellular networks in countries with large, sparsely populated rural areas that would require tens of thousands of cellular sites to cover. Hybrid models, with LEO satellite FWA-like coverage for individuals in rural areas and satellite backhaul to major settlements and towns, should be considered in large geographies.
  • Resilience to terrestrial disruptions is a key advantage. It ensures functionality even when ground-based infrastructure is disrupted, which is an essential element for maintaining the business continuity of an operator’s telecommunications services. In particular, hierarchical architectures with, for example, GEO satellite, LEO satellite, and Earth-based transport infrastructure will result in very highly reliable network operations (possibly approaching ultra-high availability, although not with service parity).
  • Current systems are inherently capacity-limited due to their vast coverage areas (i.e., lower performance per unit coverage area). In peak demand periods, they will typically perform worse than terrestrial-based cellular networks (e.g., LTE or 5G).
  • In regions where modern terrestrial cellular and fixed broadband services are already established, satellite broadband may face challenges competing with these potentially cheaper, faster, and more reliable services, which are underpinned by the terrestrial communications infrastructure.
  • It is susceptible to weather conditions, such as heavy rain or snow, which can degrade signal quality. This may impact system capacity and quality, resulting in inconsistent customer experience throughout the year.
  • Must navigate complex regulatory environments in each country, which can affect service availability and lead to delays in service rollout.
  • Depending on the altitude, LEO satellites are typically replaced on a 5- to 7-year cycle due to atmospheric drag (which increases as altitude decreases; thus, the lower the altitude, the shorter a satellite’s life). This ultimately means that any improvements in system capacity and quality will take time to reach all customers.

SATELLITE BACKHAUL SOLUTIONS.

Figure 8 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb, as well as Starlink with their so-called “Community Gateway”. It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: The indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds illustrate the network’s capabilities.

LEO satellites providing backhaul connectivity, such as shown in Figure 8 above, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly the long-haul transport networks needed to carry traffic away from remote populated areas. Satellite backhaul does not only offer a substantially better financial solution for enhancing internet connectivity to remote areas; it is often the only viable option.

Take, for example, Greenland. The world’s largest non-continental island, the size of Western Europe, is characterized by its sparse population and distinct settlement pattern, unconnected by roads and mainly along the west coast (with a couple of settlements on the east coast), shaped largely by its vast ice sheet and rugged terrain. With a population of around 56+ thousand, primarily concentrated on the west coast, Greenland’s demographic distribution is spread over ca. 50+ settlements and about 20 towns. Nuuk, the capital, is the island’s most populous city, housing over 18+ thousand residents and serving as the administrative, economic, and cultural hub. Terrestrial cellular networks serve the settlements’ and towns’ communication and internet service needs, with the traffic carried back to the central switching centers by long-haul microwave links, sea cables, and satellite broadband connectivity. Several settlements’ connectivity needs can only be served by satellite backhaul, e.g., settlements on the east coast such as Tasiilaq, with ca. 2,000 inhabitants, and Ittoqqortoormiit (an awesome name!), with around 400+ inhabitants. LEO satellite backhaul solutions serving satellite-only communities, such as those operated and offered by OneWeb (Eutelsat), could provide a backhaul transport solution matching FWA latency specifications due to better round-trip-time performance than that of a GEO satellite backhaul solution.

It should also be clear that remote satellite-only settlements and towns may have communications service needs and demand that a localized 4G (or 5G) terrestrial cellular network with satellite backhaul can serve much better than, for example, individual ad hoc connectivity solutions from Starlink. When the area’s total bandwidth demand exceeds the capacity of an FWA satellite service, a localized terrestrial network solution with a satellite backhaul is, in general, the better choice.

LEO satellites offer significantly reduced latency compared to their geostationary counterparts due to their closer proximity to Earth. This reduction in delay is essential for a wide range of real-time applications and services, from adhering to modern radio access (e.g., 4G and 5G) requirements, VoIP, and online gaming to critical financial transactions, enhancing the user experience and broadening the scope of possible services and businesses.

Among the leading LEO satellite constellations providing backhaul solutions today are SpaceX’s Starlink (via their community gateway), aiming to deliver high-speed internet globally with a preference for direct-to-consumer connectivity; OneWeb, focusing on internet services for businesses and communities in remote areas; Telesat’s Lightspeed, designed to offer secure and reliable connectivity; and Amazon’s Project Kuiper, which plans to deploy thousands of satellites to provide broadband to unserved and underserved communities worldwide.

Satellite backhaul solutions – takeaway:

  • Satellite-backhaul solutions are an excellent, cost-effective way of providing an existing isolated cellular (and fixed access) network with high-bandwidth connectivity to the internet (such as in remote and deep rural areas).
  • LEO satellites can reduce the need for extensive and very costly ground-based infrastructure by serving as a backhaul solution. For some areas, such as Greenland, the Sahara, or the Brazilian rainforest, it may not be practical or economical to connect by terrestrial-based transmission (e.g., long-haul microwave links or backbone & backhaul fiber) to remote settlements or towns.
  • An LEO-based backhaul solution supports applications and radio access technologies requiring a much lower round trip time (RTT < 50 ms) than is possible with a GEO-based satellite backhaul. However, the achievable RTT will also depend on where the LEO satellite ground gateway connects to the internet service provider.
  • The collaborative nature of a satellite-backhaul solution allows the terrestrial operator to focus on and have full control of all its customers’ network experiences, as well as optimize the traffic within its own network infrastructure.
  • LEO satellite backhaul solutions can significantly boost network resilience and availability, providing a secure and reliable connectivity solution.
  • Satellite-backhaul solutions require local ground-based satellite transmission capabilities (e.g., a satellite ground station).
  • The operator should consider that at a certain threshold of low population density, direct-to-consumer satellite services like Starlink might be more economical than constructing a local telecom network that relies on satellite backhaul (see above section on “Fixed Wireless Access (FWA) based LEO satellite solutions”).
  • Satellite backhaul providers require regulatory permits to offer backhaul services. These permits are necessary for several reasons, including the use of radio frequency spectrum, operation of satellite ground stations, and provision of telecommunications services within various jurisdictions.
  • The satellite lifetime in orbit is between 5 and 7 years, depending on the LEO altitude. MEO satellites (2,000 to 36,000 km altitude) last between 10 and 20 years, with GEO satellites at the upper end. This also dictates the modernization and upgrade cycle, as well as the timing of your ROI investment case and refinancing needs.

NEW KIDS ON THE BLOCK – DIRECT-TO-DEVICE LEO SATELLITES.

A recent X exchange (from March 2nd, 2024):

Elon Musk: “SpaceX just achieved peak download speed of 17 Mb/s from a satellite direct to unmodified Samsung Android Phone.” (note: this speed corresponds to a spectral efficiency of ~3.4 Mbps/MHz/beam).

Reply from user: “That’s incredible … Fixed wireless networks need to be looking over their shoulders?”

Elon Musk: “No, because this is the current peak speed per beam and the beams are large, so this system is only effective where there is no existing cellular service. This service works in partnership with wireless providers, like what @SpaceX and @TMobile announced.”
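
The spectral-efficiency note above can be reproduced with one line of arithmetic. Note that the 5 MHz channel bandwidth is an assumption on my part, taken from the T-Mobile USA PCS allocation mentioned later in this article:

```python
# Back-of-envelope check of the ~3.4 Mbps/MHz/beam note above.
# Assumption: the 17 Mb/s peak was achieved over the 5 MHz of T-Mobile
# USA PCS spectrum referenced later in the article.

peak_speed_mbps = 17.0  # peak downlink speed per beam (from the X post)
bandwidth_mhz = 5.0     # assumed channel bandwidth

spectral_efficiency = peak_speed_mbps / bandwidth_mhz
print(f"~{spectral_efficiency:.1f} Mbps/MHz per beam")  # ~3.4 Mbps/MHz per beam
```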

Figure 9 Illustrating LEO satellite direct-to-device communication in a remote area without any terrestrial communications infrastructure. The satellite is the only means of communication, whether via a normal mobile device or a classical satphone. (Courtesy: DALL-E).

Low Earth Orbit (LEO) Satellite Direct-to-Device technology enables direct communication between satellites in orbit and standard mobile devices, such as smartphones and tablets, without requiring additional specialized hardware. This technology promises to extend connectivity to remote, rural, and underserved areas globally, where traditional cellular network infrastructure is absent or economically unfeasible to deploy. The system can offer lower-latency communication by leveraging LEO satellites, which orbit closer to Earth than geostationary satellites, making it more practical for everyday use. The round trip time (RTT), the time it takes for the signal to travel from the satellite to the mobile device and back, is ca. 4 milliseconds for an LEO satellite at 550 km, compared to ca. 240 milliseconds for a geosynchronous satellite (at 36 thousand kilometers altitude).
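
The quoted RTT figures are straightforward to verify from the propagation delay alone. A minimal sketch, ignoring processing and queuing delays and assuming the satellite is directly overhead (slant-range geometry would add a little):

```python
# Propagation-only round trip time (RTT) for a satellite directly overhead.

C = 299_792.458  # speed of light in vacuum, km/s

def rtt_ms(altitude_km: float) -> float:
    """Two-way propagation delay, device -> satellite -> device, in ms."""
    return 2 * altitude_km / C * 1_000

print(f"LEO @ 550 km:    {rtt_ms(550):6.1f} ms")     # ~3.7 ms
print(f"GEO @ 35,786 km: {rtt_ms(35_786):6.1f} ms")  # ~238.7 ms
```

The real-world RTT experienced by an application will be higher, since it also includes the hops from the satellite gateway to the internet service provider.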

The key advantage of a satellite in low Earth orbit is that the likelihood of a line-of-sight to a point on the ground is very high, compared to the generally low likelihood of establishing a line-of-sight in terrestrial cellular coverage. In other words, the cellular signal propagation from an LEO satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite. In simpler terms, the signal propagation directly from the satellite to the mobile device is less compromised than it typically would be from a terrestrial cellular tower to the same mobile device. The difference between free-space propagation, which considers only distance and frequency, and terrestrial signal propagation models, which quantify all the gains and losses experienced by a terrestrial cellular signal, is very substantial and in favor of free-space propagation. As our Earth-bound cellular intuition of signal propagation often gets in the way of understanding the signal propagation from a satellite (or an antenna in the sky in general), I recommend writing down the math using the free-space propagation loss formula and comparing it with terrestrial cellular link budget models, such as the COST 231-Hata model (relatively simple) or the more recent 3GPP TR 38.901 model (complex). In rural and suburban areas, depending on the environment, indoor coverage may be marginally worse, fairly similar, or even better than from a terrestrial cell tower at a distance. This applies to both the uplink and downlink communications channel between the mobile device and the LEO satellite, and it is also the reason why higher frequencies (with larger frequency bandwidths available) can work better on LEO satellites than in a terrestrial cellular network.
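
To make the recommendation above concrete, here is a minimal sketch comparing free-space path loss from an LEO satellite with the COST 231-Hata suburban path loss from a terrestrial tower. The parameter choices (1900 MHz, 30 m base station antenna, 1.5 m handset, suburban correction factor) are my own illustrative assumptions:

```python
# Free-space path loss (FSPL) from an LEO satellite vs. COST 231-Hata
# suburban path loss from a terrestrial tower. Parameters are illustrative.
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss: only distance and frequency matter."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def cost231_hata_db(d_km: float, f_mhz: float,
                    h_base_m: float = 30.0, h_mobile_m: float = 1.5,
                    c_db: float = 0.0) -> float:
    """COST 231-Hata (valid ~1500-2000 MHz); c_db = 0 suburban, 3 metropolitan."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
           - (1.56 * math.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
            + c_db)

f = 1900.0  # MHz, PCS band as used by the Starlink/T-Mobile D2D service
print(f"FSPL, LEO satellite at 550 km: {fspl_db(550, f):5.1f} dB")   # ~152.8 dB
print(f"COST 231-Hata, tower at 5 km:  {cost231_hata_db(5, f):5.1f} dB")  # ~161.6 dB
# The satellite path, 110x longer, still has ~9 dB LESS loss than the
# cluttered terrestrial path, illustrating the free-space advantage.
```

With these assumptions, the overhead satellite at 550 km presents roughly 9 dB less path loss than a suburban macro tower only 5 km away, which is the counter-intuitive point the paragraph above makes.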

However, despite its potential to dramatically expand coverage, after all that is what satellites do, LEO Satellite Direct-to-Device technology is not a replacement for terrestrial cellular services and terrestrial communications infrastructures for several reasons: (a) Although the spectral efficiency can be excellent, the frequency bandwidth (in MHz) and data speeds (in Mbps) available through satellite connections are typically lower than those provided by ground-based cellular networks, limiting its use for high-bandwidth applications. (b) Satellite-based D2D services are, in general, capacity-limited and might not be able to handle the higher user densities typical of urban areas as efficiently as terrestrial networks, which are designed to accommodate large numbers of users through dense deployment of cell towers. (c) Environmental factors like buildings or bad weather can impact satellite communications’ reliability and quality more significantly than terrestrial services. (d) A satellite D2D service requires regulatory approval (per country), as the D2D frequency will typically be licensed for terrestrial cellular services and will have to be coordinated and managed with any terrestrial use to avoid service degradation (or disruption) for customers using the same frequency on terrestrial cellular networks. The satellites must be able to switch off their D2D service when covering jurisdictions that have not granted approval or where the relevant frequencies are in terrestrial use.

Using the NewSpace Index database, updated December 2023, there are currently more than 8,000 Direct-to-Device (D2D), or Direct-2-Cell (D2C), satellites planned for launch, with SpaceX’s Starlink v2 having 7,500 planned. The rest, 795 satellites, are distributed across 6 other satellite operators (e.g., AST SpaceMobile, Sateliot (Spain), Inmarsat (HEO orbit), Lynk, …). If we look at satellites designed for IoT connectivity, we get 5,302 in total, with 4,739 (not including Starlink) still planned, distributed over 50+ satellite operators. The average IoT satellite constellation, including what is currently planned, is ~95 satellites, with the majority targeted for LEO. The satellite operators included in the 50+ count have confirmed funding of at least US$2 billion in total (half of the operators only have funding confirmed without an amount). About 2,937 satellites (435 launched) are planned to serve only IoT markets (note: I think this seems a bit excessive). Swarm Technologies, a SpaceX subsidiary, ranks number 1 in terms of both launched and planned satellites, having launched at least 189 CubeSats (e.g., both 0.25U and 1U types) with an additional 150 planned. The second-ranked IoT-only operator is Orbcomm, with 51 satellites launched and an additional 52 planned. The remaining IoT-specific satellite operators have launched on average 5 satellites each and plan on average 55 (across 42 constellations).

There are also 3 satellite operators (i.e., Chinese-based Galaxy Space: 1,000 LEO-sats; US-based Mangata Networks: 791 MEO/HEO-sats, and US-based Omnispace: 200 LEO?-sats) that have planned a total of 2,000 satellites to support 5G applications with their satellite solutions and one operator (i.e., Hanwha Systems) has planned 2,000 LEO satellites for 6G.

The emergence of LEO satellite direct-to-device (D2D) services, as depicted in Figure 10 below, is at the forefront of satellite communication innovations, offering a direct line of connectivity between devices that bypasses the need for traditional ground-based cellular network infrastructure (e.g., cell towers). This approach benefits from the relatively short distance of hundreds of kilometers between LEO satellites and the Earth, reducing communication latency and broadening bandwidth capabilities compared to their geostationary counterparts. One of the key advantages of LEO D2D services is their ability to provide global coverage with an extensive number of satellites, i.e., in the 100s to 1,000s depending on the targeted quality of service, ensuring that even the most remote and underserved areas have access to reliable communication channels. They are also critical for disaster resilience, maintaining communications when terrestrial networks fail due to emergencies or natural disasters.

Figure 10 This schematic presents the network architecture for satellite-based direct-to-device (D2D) communication facilitated by Low Earth Orbit (LEO) satellites, exemplified by collaborations like Starlink and T-Mobile US, Lynk Mobile, and AST SpaceMobile. It illustrates how satellites in LEO enable direct connectivity between user equipment (UE), such as standard mobile devices and IoT (Internet of Things) devices, using terrestrial cellular frequencies and VHF/UHF bands. The system also shows inter-satellite links operating in the Ka-band for seamless network integration, with satellite gateways (GW) linking the space-based network to ground infrastructure, including Points of Presence (PoP) and Internet Exchange Points (IXP), which connect to the wider internet (WWW). This architecture supports innovative services like Omnispace and Astrocast, offering LEO satellite IoT connectivity. The network could be particularly crucial for defense and special operations in remote and challenging environments, such as deserts or the Arctic regions of Greenland, where terrestrial networks are unavailable. As shown here, using regular terrestrial cellular frequencies in both downlink (~300 MHz to 7 GHz) and uplink (900 MHz or lower to 2.1 GHz) ensures robust and versatile communication capabilities in diverse operational contexts.

While the majority of the 5,000+ satellite Starlink constellation operates at ~13 GHz (Ku-band), at the beginning of 2024 SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., a smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, provides texting capabilities across the USA in areas with no or poor existing cellular coverage. This is fairly similar to services presently offered in similar coverage areas by, for example, AST SpaceMobile, OmniSpace, and Lynk Global LEO satellite services, with reported maximum downlink speeds approaching 20 Mbps. The so-called Direct-2-Device segment, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years, continuing to increase the supported user speeds (i.e., the utilized terrestrial cellular spectrum) and the system capacity through smaller coverage areas and a higher number of satellite beams.

Table 1 below provides an overview of the top 13 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell, D2C) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023. The constellation rank is based on the number of satellites launched by the end of 2023. Two additional Direct-2-Cell (D2C, or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024–2025. One is SpaceX’s Starlink 2nd generation, launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other D2D (D2C) service is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks; see also the NewSpace constellation information) targets 5G services. Two 5G constellations have already launched satellites: Galaxy Space (Yinhe) has launched 8 LEO satellites, with 1,000 planned, using the Q- and V-bands (i.e., not a D2D cellular 5G service), and OmniSpace has launched two satellites and appears to have planned a total of 200. Moreover, there is currently one planned constellation targeting 6G by the South Korean Hanwha Group (a bit premature, but interesting to follow nevertheless), with 2,000 6G (LEO) satellites planned.

Most currently launched and planned satellite constellations offering (or planning to provide) Direct-2-Cell services, including IoT and M2M, are designed for services with limited frequency bandwidth that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

Table 1 An overview of the Top-14 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023.

The deployment of LEO D2D services also navigates a complicated regulatory landscape, with the need for harmonized spectrum allocation across different regions. Managing interference with terrestrial cellular networks and other satellite operations is another interesting, albeit complex, challenge, requiring sophisticated solutions to ensure signal integrity. Moreover, despite the cost-effectiveness of LEO satellites in terms of launch and operation, establishing a full-fledged network for D2D services demands substantial initial investment, covering satellite development, launch, and the setup of supporting ground infrastructure.

LEO satellites with D2D-based capabilities – takeaway:

  • Provides lower-bandwidth services (e.g., GPRS/EDGE/HSDPA-like) where no existing terrestrial cellular service is present.
  • (Re-)use on Satellite of the terrestrial cellular spectrum.
  • D2D-based satellite services may become crucial in business continuity scenarios, providing redundancy and increased service availability to existing terrestrial cellular networks. This is particularly essential as a remedy for emergency response personnel in case terrestrial networks are not functional.
  • Limited capacity (due to the small assigned frequency bandwidth) over a large coverage area serving rural and remote areas with little or no cellular infrastructure.
  • Securing regulatory approval for satellite services over independent jurisdictions is a complex and critical task for any operator looking to provide global or regional satellite-based communications. The satellite operator may have to switch off transmission over jurisdictions where no permission has been granted.
  • If the spectrum is also deployed on the ground, satellite use of it must be managed and coordinated (due to interference) with the terrestrial cellular networks.
  • Requires lightly utilized or non-utilized cellular spectrum in the terrestrial operator’s spectrum portfolio.
  • D2D-based communications require a more complex and sophisticated satellite design, including the satellite antenna, resulting in higher manufacturing and launch costs.
  • The IoT-only commercial satellite constellation “space” is crowded, with a total of 44 constellations (note: a few operators have several constellations). I assume that many of those plans will eventually not be realized. Note that SpaceX’s Swarm Technologies is leading in total numbers (per the NewSpace Index database) and will remain a leader due to the sheer number of satellites once its plan has been realized. I expect we will see a Chinese constellation in this space as well, unless the capability is built into the Guo Wang constellation.
  • The Satellite life-time in orbit is between 5 to 7 years depending on the altitude. This timeline also dictates the modernization and upgrade cycle as well as timing of your ROI investment and refinancing needs.
  • Today’s D2D satellite systems are frequency-bandwidth limited. However, if so designed, satellites could provide a frequency asymmetric satellite-to-device connection. For instance, the downlink from the satellite to the device could utilize a high frequency (not used in the targeted rural or remote area) and a larger bandwidth, while the uplink communication between the terrestrial device and the LEO satellite could use a sufficiently lower frequency and smaller frequency bandwidth.

MAKERS OF SATELLITES.

In the rapidly evolving space industry, a diverse array of companies specializes in manufacturing satellites for Low Earth Orbit (LEO), ranging from small CubeSats to larger satellites for constellations similar to those used by OneWeb (UK) and Starlink (USA). Among these, smaller companies like NanoAvionics (Lithuania) and Tyvak Nano-Satellite Systems (USA) have carved out niches by focusing on modular and cost-efficient small satellite platforms typically below 25 kg. NanoAvionics is renowned for its flexible mission support, offering everything from design to operation services for CubeSats (e.g., 1U, 3U, 6U) and larger small satellites (100+ kg). Similarly, Tyvak excels in providing custom-made solutions for nano-satellites and CubeSats, catering to specific mission needs with a comprehensive suite of services, including design, manufacturing, and testing.

UK-based Surrey Satellite Technology Limited (SSTL) stands out for its innovative approach to small, cost-effective satellites for various applications, achieving the desired system performance, reliability, and mission objectives at a lower cost than traditional satellite projects, which easily run into the hundreds of millions of US dollars. SSTL’s commitment to delivering satellites that balance performance and budget has made it a popular satellite manufacturer globally.

On the larger end of the spectrum, companies like SpaceX (USA) and Thales Alenia Space (France-Italy) are making significant strides in satellite manufacturing at scale. SpaceX has ventured beyond its foundational launch services to produce thousands of small satellites (250+ kg) for its Starlink broadband constellation, which comprises 5,700+ LEO satellites, showcasing mass satellite production. Thales Alenia Space offers reliable satellite platforms and payload integration services for LEO constellation projects.

With their extensive expertise in aerospace and defense, Lockheed Martin Space (USA) and Northrop Grumman (USA) produce various satellite systems suitable for commercial, military, and scientific missions. Their ability to support large-scale satellite constellation projects from design to launch demonstrates high expertise and reliability. Similarly, aerospace giants Airbus Defense and Space (EU) and Boeing Defense, Space & Security (USA) offer comprehensive satellite solutions, including designing and manufacturing small satellites for LEO. Their involvement in high-profile projects highlights their capacity to deliver advanced satellite systems for a wide range of use cases.

Together, these companies, from smaller specialized firms to global aerospace leaders, play crucial roles in the satellite manufacturing industry. They enable a wide array of LEO missions, catering to the burgeoning demand for satellite services across telecommunications, Earth observation, and beyond, thus facilitating access to space for diverse clients and applications.

ECONOMICS.

Before going into details, let’s spend some time on an example illustrating the basic components required for building a satellite and getting it to launch. Here, I point at a super cool alternative to the above-mentioned companies, the USA-based startup Apex, co-founded by CTO Max Benassi (ex-SpaceX and Astra) and CEO Ian Cinnamon. To get an impression of the macro-components of a satellite system, I recommend checking out the Apex webpage and “playing” with their satellite configurator. The basic package comes at a price tag of USD 3.2 million and a 9-month delivery window. It includes a 100 kg satellite bus platform, a power system, a communication system based on X-band (8 – 12 GHz), and a guidance, navigation, and control package. The basic package does not include a solar array drive assembly (SADA), which plays a critical role in the operation of satellites by ensuring that the satellite’s solar panels are optimally oriented toward the Sun; adding the SADA costs an additional USD 500 thousand. The propulsion mechanism (e.g., chemical or electric; in general, there are more possibilities) is not provided either (+ USD 450 thousand), nor are any services included (e.g., payload & launch vehicle integration and testing, USD 575 thousand). Including the SADA, propulsion, and services, Apex will have a satellite launch-ready for close to USD 4.8 million.
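As a sanity check on the arithmetic above, here is a minimal cost build-up sketch. The figures are the indicative configurator prices quoted in the text, not current list prices:

```python
# Indicative cost build-up for the Apex example (figures as quoted above, in USD).
BASE_PACKAGE = 3_200_000   # 100 kg bus, power system, X-band comms, GNC package
SADA         = 500_000     # solar array drive assembly
PROPULSION   = 450_000     # e.g., chemical or electric
SERVICES     = 575_000     # payload & launch vehicle integration, testing

launch_ready_cost = BASE_PACKAGE + SADA + PROPULSION + SERVICES
print(f"Launch-ready satellite (excl. payload & launch): USD {launch_ready_cost:,}")
# i.e., USD 4,725,000 — "close to USD 4.8 million"
```

Note that this total still excludes the payload, the launch itself, and post-launch operations, as discussed next.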

However, we are not done. The above solution does not yet include the so-called payload, which relates to the equipment or instruments required to perform the LEO satellite mission (e.g., broadband communications services), the actual satellite launch itself, or the operational aspects of a successful post-launch (i.e., ground infrastructure and operation center(s)).

Let’s take SpaceX’s Starlink satellite as an example to illustrate mission and payload more clearly. The Starlink satellite’s primary mission is to provide fixed-wireless-access broadband internet to an Earth-based fixed antenna (the user terminal). The Starlink payload primarily consists of advanced broadband internet transmission equipment designed to provide high-speed internet access across the globe. This includes phased-array antennas for communication with user terminals on the ground, high-frequency radio transceivers to facilitate data transmission, and inter-satellite links allowing satellites to communicate in orbit, enhancing network coverage and data throughput.

The economic aspects of launching a Low Earth Orbit (LEO) satellite project span a broad spectrum of costs, from the initial concept phase to deployment and operational management. These projects commence with research and development, where significant investments are made in design, engineering, and the iterative process of prototyping and testing to ensure the satellite meets its intended performance and reliability standards in harsh space conditions (e.g., vacuum, extreme temperature variations, radiation, solar flares, high-velocity impacts with micrometeoroids and man-made space debris, erosion, …).

Manufacturing the satellite involves additional expenses, including procuring high-quality components that can withstand space conditions and assembling and integrating the satellite bus with its mission-specific payload. Ensuring the highest quality standards throughout this process is crucial to minimizing the risk of in-orbit failure, which can substantially increase project costs. The payload should be seen as the heart of the satellite’s mission. It could be a set of scientific instruments for measuring atmospheric data, optical sensors for imaging, transponders for communication, or any other equipment designed to fulfill the satellite’s specific objectives. The payload will vary greatly depending on the mission, whether for Earth observation, scientific research, navigation, or telecommunications.

Of course, there are many other types of, and more affordable options for, LEO satellites than a Starlink-like one (although we should not ignore the achievements of SpaceX, and should learn from them as much as possible). As seen from Table 1, we have a range of substantially smaller satellite types, or form factors. The 1U (i.e., one unit) CubeSat is a satellite with a form factor of 10x10x11.35 cm that weighs no more than 1.33 kilograms. A rough cost range for manufacturing a 1U CubeSat would be USD 50 to 100+ thousand, depending on mission complexity and payload components (e.g., commercial off-the-shelf or application- or mission-specific design). This range covers the costs associated with the satellite’s design, components, assembly, testing, and initial integration efforts. It does not, however, include other significant costs associated with satellite missions, such as launch services, ground station operations, mission control, and insurance, which are likely to (significantly) increase the total project cost. Furthermore, we have additional form factors, such as the 3U CubeSat (10x10x34.05 cm, <4 kg), with a manufacturing cost in the range of USD 100 to 500+ thousand; the 6U CubeSat (20x10x34 cm, <12 kg), which can carry more complex payload solutions than the smaller 1U and 3U, with a manufacturing cost in the range of USD 200 thousand to USD 1+ million; and the 12U satellite (20x20x34 cm, <24 kg), which again supports complex payload solutions and in general will be significantly more expensive to manufacture.
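The form factors and manufacturing cost ranges above can be collected in a small lookup table (a sketch only; the dimensions and the indicative cost ranges are those quoted in the text, with the open-ended upper bounds treated as simple numbers):

```python
# CubeSat form factors with indicative manufacturing cost ranges (USD), as quoted above.
# Each entry: (dimensions in cm, max mass in kg, (cost_low, cost_high) in USD).
cubesats = {
    "1U": ((10, 10, 11.35), 1.33, (50_000, 100_000)),
    "3U": ((10, 10, 34.05), 4.0,  (100_000, 500_000)),
    "6U": ((20, 10, 34.0),  12.0, (200_000, 1_000_000)),
}

for form_factor, (dims, max_mass_kg, (lo, hi)) in cubesats.items():
    w, d, h = dims
    print(f"{form_factor}: {w}x{d}x{h} cm, <{max_mass_kg} kg, ~USD {lo:,}-{hi:,}+")
```

Remember that these figures exclude launch, ground segment, operations, and insurance, which can dominate the total project cost.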

Securing a launch vehicle is one of the most significant expenditures in a satellite project. This cost not only includes the price of the rocket and launch itself but also encompasses integration, pre-launch services, and satellite transportation to the launch site. Beyond the launch, establishing and maintaining the ground segment infrastructure, such as ground stations and a mission control center, is essential for successful satellite communication and operation. These facilities enable ongoing tracking, telemetry, and command operations, as well as the processing and management of the data collected by the satellite.

The SpaceX Falcon 9 rocket is used extensively by other satellite businesses (see Table 1 above) as well as by SpaceX for its own Starlink constellation network. The rocket has a payload capability of ca. 23 thousand kg and a volume handling capacity of approximately 300 cubic meters. SpaceX has launched around 60 Starlink satellites per Falcon 9 mission for the first-generation satellites. The launch cost per 1st-generation satellite would then be around USD 1 million, using the previously quoted USD 62 million (2018 figure) for a Falcon 9 launch. The second-generation Starlink satellites are substantially more advanced than the 1st generation. They are also heavier, weighing around a thousand kilograms. A Falcon 9 would only be able to launch around 20 generation-2 satellites (considering only the weight limitation), while a Falcon Heavy could lift ca. 60 2nd-gen. satellites, although at a higher price point of USD 90 million (2018 figure). Thus, the launch cost per satellite would be between USD 1.5 million using Falcon Heavy and USD 3.1 million using Falcon 9. Although the launch cost is based on price figures from 2018, the efficiency gained from re-use may have kept the cost at that level or reduced it further, particularly with Falcon Heavy.
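The per-satellite launch cost arithmetic above can be sketched as follows (2018 launch prices and approximate satellites-per-launch counts as quoted in the text):

```python
def launch_cost_per_satellite(launch_price_usd: float, satellites_per_launch: int) -> float:
    """Naive per-satellite launch cost: the total launch price split evenly."""
    return launch_price_usd / satellites_per_launch

# Gen-1 Starlink: ~60 satellites per Falcon 9 launch at USD 62M (2018 figure)
gen1_f9 = launch_cost_per_satellite(62e6, 60)   # ~USD 1.0M per satellite
# Gen-2 Starlink (~1,000 kg each): ~20 per Falcon 9, or ~60 per Falcon Heavy (USD 90M)
gen2_f9 = launch_cost_per_satellite(62e6, 20)   # ~USD 3.1M per satellite
gen2_fh = launch_cost_per_satellite(90e6, 60)   # USD 1.5M per satellite

print(f"Gen-1 on Falcon 9:   USD {gen1_f9 / 1e6:.1f}M")
print(f"Gen-2 on Falcon 9:   USD {gen2_f9 / 1e6:.1f}M")
print(f"Gen-2 on Falcon Heavy: USD {gen2_fh / 1e6:.1f}M")
```

The even split is of course a simplification: real missions also price in integration, transport to the launch site, and pre-launch services.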

Satellite businesses looking to launch small volumes of satellites, such as CubeSats, have a variety of strategies at their disposal to manage launch costs effectively. One widely adopted approach is participating in rideshare missions, where the expenses of a single launch vehicle are shared among multiple payloads, substantially reducing the cost for each operator. This method is particularly attractive due to its cost efficiency and the regularity of missions offered by, for example, SpaceX. Prices for rideshare missions can start from as low as a few thousand dollars for very small payloads (like CubeSats) to several hundred thousand dollars for larger small satellites. For example, SpaceX advertises rideshare prices starting at $1 million for payloads up to 200 kg. Alternatively, dedicated small launcher services cater specifically to the needs of small satellite operators, offering more tailored launch options in terms of timing and desired orbit. Companies such as Rocket Lab (USA) and Astra (USA) have emerged with dedicated launch services, providing flexibility that rideshare missions might not, although at a slightly higher cost. However, these costs remain significantly lower than arranging a dedicated launch on a larger vehicle. For example, Rocket Lab’s Electron rocket, specializing in launching small satellites, offers dedicated launches with prices starting around USD 7 million for the entire launch vehicle carrying up to 300 kg. Astra has reported prices in the range of USD 2.5 million for a dedicated LEO launch with their (discontinued) Rocket 3 with payloads of up to 150 kg. The cost for individual small satellites will depend on their share of the payload mass and the specific mission requirements.

Satellite ground stations, which consist of arrays of phased-array antennas, are critical for managing the satellite constellation, routing internet traffic, and providing users with access to the satellite network. These stations are strategically located to maximize coverage and minimize latency, ensuring that at least one ground station is within the line of sight of satellites as they orbit the Earth. As of mid-2023, Starlink operated around 150 ground stations worldwide (also called Starlink Gateways), with 64 live and an additional 33 planned in the USA. The cost of constructing a ground station would be between USD 300 thousand and half a million, not including the physical access point, also called the point-of-presence (PoP), and the transport infrastructure connecting the PoP (and gateway) to the internet exchange, where we find the internet service providers (ISPs) and the content delivery networks (CDNs). The PoP may add another USD 100 to 200 thousand to the ground infrastructure unit cost. The transport cost from the gateway to the internet exchange can vary a lot depending on the gateway’s location.

Insurance is a critical component of the financial planning for a satellite project, covering risks associated with both the launch phase and the satellite’s operational period in orbit. This insurance generally runs at between 5% and 20% of the total project cost, depending on the satellite value, the track record of the launch vehicle, mission complexity, duration (i.e., typically 5 – 7 years for a LEO satellite at 500 km), and so forth. Insurance can be broken up into launch insurance and insurance covering the satellite once it is in orbit.

Operational costs, the Opex, include the day-to-day expenses of running the satellite, from staffing and technical support to ground station usage fees.

Regulatory and licensing fees, including frequency allocation and orbital slot registration, ensure the satellite operates without interfering with other space assets. Finally, at the end of the satellite’s operational life, costs associated with safely deorbiting the satellite are incurred to comply with space debris mitigation guidelines and ensure a responsible conclusion to the mission.

The total cost of an LEO satellite project can vary widely, influenced by the satellite’s complexity, mission goals, and lifespan. Effective project management and strategic decision-making are crucial to navigating these expenses, optimizing the project’s budget, and achieving mission success.

Figure 11 illustrates an LEO CubeSat orbiting above the Earth, capturing the satellite’s compact design and its role in modern space exploration and technology demonstration. Note that the CubeSat design comes in several standardized dimensions, with the reference design, also called 1U, being almost 1 thousandth of a cubic meter and weighing less than 1.33 kg. More advanced CubeSat satellites would typically be 6U or higher.

CubeSats (e.g., 1U, 3U, 6U, 12U):

  • Manufacturing Cost: Ranges from USD 50,000 for a simple 1U CubeSat to over USD 1 million for more complex missions supported by a 6U (or higher) CubeSat with advanced payloads (and a 12U may amount to several million US dollars).
  • Launch Cost: This can vary significantly depending on the launch provider and the rideshare opportunities, ranging from a few thousand dollars for a 1U CubeSat on a rideshare mission to several million dollars for a dedicated launch of larger CubeSats or small satellites.
  • Operational Costs: Ground station services, mission control, and data handling can add tens to hundreds of thousands of dollars annually, depending on the mission’s complexity and duration.

Small Satellites (25 kg up to 500 kg):

  • Manufacturing Cost: Ranges from USD 500,000 to over 10 million, depending on the satellite’s size, complexity, and payload requirements.
  • Launch Cost: While rideshare missions can reduce costs, dedicated launches for small satellites can range from USD 10 million to 62 million (e.g., Falcon 9) and beyond (e.g., USD 90 million for Falcon Heavy).
  • Operational Costs: These are similar to CubeSats, but potentially higher due to the satellite’s larger size and more complex mission requirements, reaching several hundred thousand to over a million dollars annually.

The range for the total project cost of a LEO satellite:

Given these considerations, the total cost range for a LEO satellite project can vary from as low as a few hundred thousand dollars for a simple CubeSat project utilizing rideshare opportunities and minimal operational requirements to hundreds of millions of dollars for more complex small satellite missions requiring dedicated launches and extensive operational support.

It is important to note that these are rough estimates, and the actual cost can vary based on specific mission requirements, technological advancements, and market conditions.
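Pulling the cost components above together, a rough back-of-the-envelope estimator can be sketched. This is illustrative only: the function and all example figures are placeholders to be replaced with mission-specific numbers, and the insurance share uses the 5–20% range quoted earlier:

```python
def leo_project_cost(manufacturing: float, launch: float,
                     annual_opex: float, years: int,
                     insurance_rate: float = 0.10) -> float:
    """Very rough LEO project cost sketch: capex (manufacturing + launch)
    plus opex over the mission life, grossed up by an insurance share
    (5-20% of project cost, per the range quoted above)."""
    base = manufacturing + launch + annual_opex * years
    return base * (1 + insurance_rate)

# Example: a 6U CubeSat on a rideshare with a 5-year mission (placeholder figures).
total = leo_project_cost(manufacturing=800_000, launch=300_000,
                         annual_opex=150_000, years=5, insurance_rate=0.10)
print(f"Rough total project cost: USD {total:,.0f}")
```

Such a sketch mainly helps make explicit which of the cost drivers (manufacturing, launch, operations, insurance) dominates a given mission profile.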

CAPACITY AND QUALITY

Figure 12: Satellite-based cellular capacity, or quality, measured by the unit or total throughput in Mbps, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of satellite beams resulting in cells on the ground.

The overall capacity and quality of satellite communication systems, given in Mbps, is, at a high level, the product of three key factors: (i) the amount of frequency bandwidth in MHz allocated to the satellite operations, multiplied by (ii) the effective spectral efficiency in Mbps per MHz over a unit satellite-beam coverage area, multiplied by (iii) the number of satellite beams that provide the resulting terrestrial cell coverage. In other words:

Satellite Capacity (in Mbps) =
Frequency Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Beam ×
Number of Beams (or Cells)

Consider a satellite system supporting 8 beams (and thus an equivalent number of terrestrial coverage cells), each with 250 MHz allocated within the same spectral frequency range; such a system can efficiently support ca. 680 Mbps per beam. This is achieved with an antenna setup that effectively provides a spectral efficiency of ~2.7 Mbps/MHz/cell (or beam) in the downlink (i.e., from the satellite to the ground). Moreover, the satellite will typically have another frequency and antenna configuration that establishes a robust connection to the ground station, which connects to the internet via, for example, third-party internet service providers. The 680 Mbps is then shared among the users within the satellite beam coverage; e.g., if you have 100 customers demanding a service, the speed each would experience on average would be around 7 Mbps. This may not seem very impressive compared to the cellular speeds we are used to getting with an LTE or 5G terrestrial cellular service. However, such speeds are, of course, much better than having no means of connecting to the internet.
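The worked example above, expressed with the capacity formula (the ~2.7 Mbps/MHz/beam effective spectral efficiency and 100-user sharing are the figures assumed in the text):

```python
def satellite_capacity_mbps(bandwidth_mhz: float, spectral_eff: float, beams: int) -> float:
    """Capacity (Mbps) = bandwidth (MHz) x effective spectral efficiency
    (Mbps/MHz/beam) x number of beams (cells on the ground)."""
    return bandwidth_mhz * spectral_eff * beams

per_beam = satellite_capacity_mbps(250, 2.7, 1)   # ~675 Mbps, i.e., "ca. 680 Mbps"
total    = satellite_capacity_mbps(250, 2.7, 8)   # full 8-beam system
per_user = per_beam / 100                         # 100 active users sharing one beam

print(f"Per beam: {per_beam:.0f} Mbps, system total: {total:.0f} Mbps, "
      f"average per user: {per_user:.1f} Mbps")
```

The formula also makes the trade-offs visible: doubling the allocated bandwidth, the spectral efficiency, or the beam count each doubles the system capacity, but the three levers carry very different costs (spectrum licensing, antenna sophistication, and satellite/beamforming complexity, respectively).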

Higher frequencies (i.e., in the GHz range) used to provide terrestrial cellular broadband services are in general quite sensitive to the terrestrial environment and non-LoS propagation. It is a basic principle of physics that signal propagation characteristics, including the range and penetration capabilities of electromagnetic waves, are inversely related to frequency. Vegetation and terrain become increasingly critical factors in higher-frequency propagation and the resulting quality of coverage. For example, trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength. Terrains often include varied topographies such as housing, hills, valleys, and flat plains, each affecting signal reach differently. For instance, housing and hilly or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further. Cellular mobile operators tend to like high frequencies (GHz) for cellular broadband services, as it is possible to get substantially more system throughput, in bits per second, available to deliver to demanding customers than at frequencies in the MHz range. As can be observed in Figure 12 above, the frequency bandwidth is a multiplier for the satellite capacity and quality. At the same time, cellular mobile operators tend to “dislike” higher frequencies because of their poorer propagation conditions in terrestrially based cellular networks, resulting in the need for increased site densification at a significant incremental capital expense.

The key advantage of a LEO satellite is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the cellular signal propagation from a satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite, which only has to overcome the distance from the satellite antenna to the ground.
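Because a LEO link is close to free-space, the dominant loss is simply distance. The standard free-space path loss formula (FSPL, in dB, with distance in km and frequency in MHz) gives a feel for what the satellite link budget must overcome. This is a textbook formula, not taken from the article, and the 550 km altitude and 12 GHz (Ku-band downlink) figures are illustrative assumptions:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

# A LEO satellite straight overhead at 550 km, Ku-band downlink at 12 GHz.
loss = fspl_db(550, 12_000)
print(f"FSPL at 550 km / 12 GHz: {loss:.1f} dB")  # ~168.8 dB
```

For a satellite low on the horizon the slant range, and hence the loss, is larger; but unlike terrestrial links there are no buildings, foliage, or terrain terms to add on top.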

Let us first look at the frequency component of the satellite capacity and quality formula above:

FREQUENCY SPECTRUM FOR SATELLITES.

The satellite frequency spectrum encompasses a range of electromagnetic frequencies allocated specifically for satellite communication. These frequencies are divided into bands, commonly known as L-band (e.g., mobile broadband), S-band (e.g., mobile broadband), C-band, X-band (e.g., mainly used by military), Ku-band (e.g., fixed broadband), Ka-band (e.g., fixed broadband), and V-band. Each serves different satellite applications due to its distinct propagation characteristics and capabilities. The spectrum bandwidth used by satellites refers to the width of the frequency range that a satellite system is licensed to use for transmitting and receiving signals.

Careful management of satellite spectrum bandwidth is critical to prevent interference with terrestrial communications systems. Since both satellite and terrestrial systems can operate on similar frequency ranges, there is a potential for crossover interference, which can degrade the performance of both systems. This is particularly important for bands like C-band and Ku-band, which are also used for terrestrial cellular networks and other applications like broadcasting.

Using the same spectrum for both satellite and terrestrial cellular coverage within the same geographical area is challenging due to the risk of interference. Satellites transmit signals over vast areas, and if those signals are on the same frequency as terrestrial cellular systems, they can overpower the local ground-based signals, causing reception issues for users on the ground. Conversely, the uplink signals from terrestrial sources can interfere with the satellite’s ability to receive communications from its service area.

Regulatory bodies such as the International Telecommunication Union (ITU) are crucial in mitigating these interference issues. They coordinate the allocation of frequency bands and establish regulations that govern their use. This includes defining geographical zones where certain frequencies may be used exclusively for either terrestrial or satellite services, as well as setting limits on signal power levels to minimize the chance of interference. Additionally, technology solutions like advanced filtering, beam shaping, and polarization techniques are employed to further isolate satellite communications from terrestrial systems, ensuring that both may coexist and operate effectively without mutual disruption.

The International Telecommunication Union (ITU) has designated several frequency bands for Fixed Satellite Services (FSS) and Mobile Satellite Services (MSS) that can be used by satellites operating in Low Earth Orbit (LEO). The specific bands allocated for satellite services, FSS and MSS, are determined by the ITU’s Radio Regulations, which are periodically updated to reflect global telecommunication’s evolving needs and address emerging technologies. Here are some of the key frequency bands commonly considered for FSS and MSS with LEO satellites:

V-Band 40 GHz to 75 GHz (microwave frequency range).
The V-band is appealing for Low Earth Orbit (LEO) satellite constellations designed to provide global broadband internet access. LEO satellites can benefit from the V-band’s capacity to support high data rates, which is essential for serving densely populated areas and delivering competitive internet speeds. The reduced path loss at lower altitudes, compared to GEO, also makes the V-band a viable option for LEO satellites. Due to its higher frequencies, the V-band is also significantly more sensitive to atmospheric attenuation (e.g., oxygen absorption around 60 GHz), including rain fade, which is likely to affect signal integrity. This necessitates the development of advanced technologies for adaptive coding and modulation, power amplification, and beamforming to ensure reliable communication under various weather conditions. Several LEO satellite operators have expressed an interest in operationalizing the V-band in their satellite constellations (e.g., Starlink, OneWeb, Kuiper, Lightspeed). This band should be regarded as an emergent LEO frequency band.

Ka-Band 17.7 GHz to 20.2 GHz (Downlink) & 27.5 GHz to 30.0 GHz (Uplink).
The Ka-band offers higher bandwidths, enabling greater data throughput than lower bands. Not surprisingly, this band is favored by high-throughput satellite solutions and is widely used for fixed satellite services (FSS), making it ideal for high-speed internet services. However, it is more susceptible to absorption and scattering by atmospheric particles, including raindrops and snowflakes, which weaken the signal strength by the time it reaches the receiver. To mitigate rain fade effects in the Ka-band, satellite and ground equipment must be designed with higher link margins, incorporating more powerful transmitters and more sensitive receivers. Additionally, adaptive modulation and coding techniques can be employed to adjust the signal dynamically in response to changing weather conditions. Overall, the system is more costly and, therefore, primarily used for satellite-to-Earth ground station communications and high-performance satellite backhaul solutions.

For example, Starlink and OneWeb use the Ka-band to connect to satellite Earth gateways and points-of-presence, which connect to internet exchanges and the wider internet. It is worth noticing that the terrestrial 5G band n257 (26.5 to 29.5 GHz) falls within the Ka-band’s uplink frequency range. Furthermore, SES’s mPower satellites, operating in Medium Earth Orbit (MEO), operate exclusively in this band, providing internet backhaul services.

Ku-Band 12.75 GHz to 13.25 GHz (Downlink) & 14.0 GHz to 14.5 GHz (Uplink).
The Ku-band is widely used for satellite communications, including fixed satellite services (FSS), due to its balance between bandwidth availability and susceptibility to rain fade. It is suitable for broadband services, TV broadcasting, and backhaul connections. For example, Starlink and OneWeb satellites use this band to provide broadband services to Earth-based customer terminals.

X-Band 7.25 GHz to 7.75 GHz (Downlink) & 7.9 GHz to 8.4 GHz (Uplink).
The X-band in satellite applications is governed by international agreements and national regulations to prevent interference between different services and to ensure the efficient use of the spectrum. The X-band is extensively used for secure military satellite communications, offering advantages like high data rates and relative resilience to jamming and eavesdropping. It supports a wide range of military applications, including mobile command, control, communications, computer, intelligence, surveillance, and reconnaissance (i.e., C4ISR) operations. Most defense-oriented satellites operate at geostationary orbit, ensuring constant coverage of specific geographic areas (e.g., Airbus Skynet constellations, Spain’s XTAR-EUR, and France’s Syracuse satellites). Most European LEO defense satellites, used primarily for reconnaissance, are fairly old, with more than 15 years since the first launch, and are limited in numbers (i.e., <10). The most recent European LEO satellite system is the French-based Multinational Space-based Imaging System (MUSIS) and Composante Spatiale Optique (CSO), where the first CSO components were launched in 2018. There are few commercial satellites utilizing the X-band.

C-Band 3.7 GHz to 4.2 GHz (Downlink) & 5.925 GHz to 6.425 GHz (Uplink)
C-band is less susceptible to rain fade and is traditionally used for satellite TV broadcasting, maritime, and aviation communications. However, parts of the C-band are also being repurposed for terrestrial 5G networks in some regions, leading to potential conflicts and the need for careful coordination. The C-band is primarily used in geostationary orbit (GEO) rather than Low Earth Orbit (LEO), due to the historical allocation of C-band for fixed satellite services (FSS) and its favorable propagation characteristics. I haven’t really come across any LEO constellation using the C-band. GEO FSS satellite operators using this band extensively include SES (Luxembourg), Intelsat (Luxembourg/USA), Eutelsat (France), and Inmarsat (UK).

S-Band 2.0 GHz to 4.0 GHz
S-band is used for various applications, including mobile communications, weather radar, and some types of broadband services. It offers a good compromise between bandwidth and resistance to atmospheric absorption. Both Omnispace (USA) and Globalstar (USA) LEO satellites operate in this band. Omnispace is also interesting, as they have expressed intent to have LEO satellites supporting 5G services in the 3GPP band n256, which falls within the S-band.

L-Band 1.0 GHz to 2.0 GHz
L-band is less commonly used for fixed satellite services but is notable for its use in mobile satellite services (MSS), satellite phone communications, and GPS. It provides good coverage and penetration characteristics. Both Lynk Mobile (USA), offering Direct-2-Device, IoT, and M2M services, and Astrocast (Switzerland), with their IoT/M2M services, are examples of LEO satellite businesses operating in this band.

UHF 300 MHz to 3.0 GHz
The UHF band is more widely used than VHF for satellite communications, including mobile satellite services (MSS), satellite radio, and some types of broadband data services. It is favored for its relatively good propagation characteristics, including the ability to penetrate buildings and foliage. For example, Fossa Systems’ LEO pico-satellites (i.e., 1P form factor) use this frequency band for their IoT and M2M communications services.

VHF 30 MHz to 300 MHz

The VHF band is less commonly used in satellite communications for commercial broadband services. Still, it is important for applications such as satellite telemetry, tracking, and control (TT&C) operations and amateur satellite communications. Its use is often limited due to the lower bandwidth available and the higher susceptibility to interference from terrestrial sources. Swarm Technologies (USA, a SpaceX subsidiary) uses 137–138 MHz (downlink) and 148–150 MHz (uplink); however, it appears that they have stopped taking new devices onto their network. Orbcomm (USA) is another example of a satellite service provider using the VHF band for IoT and M2M communications. There is very limited capacity in this band due to many other existing use cases, and LEO satellite companies appear to plan to upgrade to the UHF band or to piggyback on direct-2-cell (or direct-2-device) satellite solutions, enabling LEO satellite communications with 3GPP-compatible IoT and M2M devices.

SATELLITE ANTENNAS.

Satellites operating in Geostationary Earth Orbit (GEO), Medium Earth Orbit (MEO), and Low Earth Orbit (LEO) utilize a variety of antenna types tailored to their specific missions, which range from communication and navigation to observation (e.g., signal intelligence). The selection of an antenna is influenced by the satellite’s application, the characteristics of its orbit, and the required coverage area.

Antenna technology is intrinsically linked to spectral efficiency in satellite communications systems, as in any other wireless system. Antenna design influences how effectively a communication system can transmit and receive signals within a given frequency band, which is the essence of spectral efficiency (i.e., how much information per unit time, in bits per second, can be squeezed through the communications channel).

Thus, advancements in antenna technology are fundamental to improving spectral efficiency, making it a key area of research and development in the quest for more capable and efficient communication systems.
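To make the spectral-efficiency notion concrete, a minimal sketch of the Shannon capacity bound, C = B·log2(1 + SNR), which sets the ceiling on bits per second a channel of bandwidth B can carry. The bandwidth and SNR values below are purely illustrative.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Upper bound on channel throughput per Shannon's theorem."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 10 MHz channel at 20 dB SNR: roughly 6.7 bits/s/Hz, i.e. a ~66.6 Mbps ceiling.
cap = shannon_capacity_bps(10e6, 20.0)
print(f"{cap / 1e6:.1f} Mbps, {cap / 10e6:.2f} bits/s/Hz")
```

Better antennas raise the effective SNR (and, with multiple beams or streams, the number of parallel channels), which is exactly how antenna advances translate into spectral efficiency.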

Parabolic dish antennas are prevalent for GEO satellites due to their high gain and narrow beam width, making them ideal for broadcasting and fixed satellite services. These antennas focus a tight beam on specific areas on Earth, enabling strong and direct signals essential for television, internet, and communication services. Horn antennas, while simpler, are sometimes used as feeds for larger parabolic antennas or for telemetry, tracking, and command functions due to their reliability. Additionally, phased array antennas are becoming more common in GEO satellites for their ability to steer beams electronically, offering flexibility in coverage and the capability to handle multiple beams and frequencies simultaneously.

Phased-array antennas are indispensable for MEO satellites, such as those used in navigation systems like GPS (USA), BeiDou (China), Galileo (European), or GLONASS (Russian). These satellite constellations cover large areas of the Earth’s surface and can adjust beam directions dynamically, a critical feature given the satellites’ movement relative to the Earth. Patch antennas are also widely used in MEO satellites, especially for mobile communication constellations, due to their compact and low-profile design, making them suitable for mobile voice and data communications.

Phased-array antennas are equally important for LEO satellite use cases, which include broadband communication constellations like Starlink and OneWeb. Their (fast) beam-steering capabilities are essential for maintaining continuous communication with ground stations and user terminals as the satellites quickly traverse the sky. Phased arrays also allow coverage to be optimized with both narrow and wide fields of view (from the perspective of the satellite antenna), letting the satellite operator trade off cell capacity against cell coverage.

Simpler dipole antennas are employed for more straightforward data relay and telemetry purposes in smaller satellites and CubeSats, where space and power constraints are significant factors. Reflectarray antennas, which offer a mix of high gain and beam-steering capability, are used in specific LEO satellites for communication and observation applications (e.g., for signal intelligence gathering), combining features of both parabolic and phased-array antennas.

Mission-specific requirements drive the choice of antenna for a satellite. For example, GEO satellites often use high-gain, narrowly focused antennas due to their fixed position relative to the Earth, while MEO and LEO satellites, which move relatively closer to the Earth’s surface, require antennas capable of maintaining stable connections with moving ground terminals or covering large geographical areas.

Advanced antenna technologies such as beamforming, phased arrays, and Multiple Input Multiple Output (MIMO) antenna configurations are crucial in managing and utilizing the spectrum more efficiently. They enable precise targeting of radio waves, minimizing interference, and optimizing bandwidth usage. This direct control over the transmission path and signal shape allows more data (bits) to be sent and received within the same spectral space, effectively increasing the communication channel’s capacity. In particular, MIMO antenna configurations and advanced antenna beamforming have enabled terrestrial mobile cellular access technologies (e.g., LTE and 5G) to lift effective spectral efficiency, broadband speed, and capacity orders of magnitude above and beyond the older 2G and 3G technologies. Similar principles are being deployed today in modern advanced communications satellite antennas, providing increased capacity and quality within the cellular coverage area provided by the satellite beam.
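The MIMO capacity gain can be sketched in one line: under idealized rich-scattering conditions, capacity scales roughly with min(Nt, Nr) parallel spatial streams. This is an illustrative upper bound, not a channel model; the bandwidth and SNR figures are assumptions.

```python
import math

def mimo_capacity_bps(bandwidth_hz, snr_db, n_tx, n_rx):
    """Idealized MIMO capacity: min(Nt, Nr) spatial streams, each at full SNR.

    A deliberate simplification (upper bound); real gains depend on the channel.
    """
    streams = min(n_tx, n_rx)
    snr_linear = 10 ** (snr_db / 10)
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

siso = mimo_capacity_bps(20e6, 15, 1, 1)    # single-antenna baseline
mimo4 = mimo_capacity_bps(20e6, 15, 4, 4)   # 4x4 MIMO
print(mimo4 / siso)  # 4.0: four spatial streams, four times the ceiling
```

This is the mechanism behind the order-of-magnitude jump from SISO-era 2G/3G to MIMO-rich LTE and 5G, and the same lever now being pulled in satellite antenna design.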

Moreover, antenna technology developments such as polarization reuse and frequency reuse directly impact a satellite system’s ability to maximize spectral resources. Allowing simultaneous transmissions on the same frequency through different polarizations or spatial separations effectively doubles the capacity without needing additional spectrum.

WHERE DO WE END UP.

If all current commercial satellite plans were realized, within the next decade we would have more than 65 thousand satellites circling Earth, possibly substantially more. Today, that number is less than 10 thousand, with more than half realized by Starlink’s LEO constellation. Imagine the increase in the amount of space debris circling Earth within the next 10 years. This will likely pose a substantial increase in operational risk for new space missions and will have to be addressed urgently.

Over the next decade, we may have at least two major LEO satellite constellations: one from Starlink, with an excess of 12 thousand satellites, and one from China, the Guo Wang, the state network, likewise with 12 thousand LEO satellites. One is a global constellation from an American commercial company; the other is a worldwide constellation representing the Chinese state. It would not be too surprising if, by 2034, the two constellations divide Earth between them: one part serviced by the commercial constellation (e.g., North America, Europe, parts of the Middle East, some of APAC including India, possibly parts of Africa), and another served by the Chinese-controlled LEO constellation providing satellite broadband to China, Russia, significant parts of Africa, and parts of APAC.

Over the next decade, satellite services will undergo transformative advancements, reshaping the architecture of global communication infrastructures and significantly impacting various sectors, including broadband internet, global navigation, Earth observation, and beyond. As these services evolve, we should anticipate major leaps in satellite technologies, driven by innovation in propulsion systems, miniaturization of technology, advancements in onboard processing capabilities, increasing use of AI and machine learning leapfrogging satellites’ operational efficiency and performance, breakthroughs in material science reducing weight and increasing packing density, leapfrogs in antenna technology, and, last but not least, much more efficient use of the radio frequency spectrum. Moreover, we will see breakthrough innovations that allow better co-existence and autonomous collaboration in frequency spectrum utilization between non-terrestrial and terrestrial networks, reducing the need for regulatory bureaucracy, which might anyway be replaced by decentralized autonomous organizations (DAOs) and smart contracts. This development will be essential as satellite constellations are integrated into 5G and 6G network architectures as the non-terrestrial cellular access component. This particular topic, like many in this article, is worth a whole new article on its own.

I expect that over the next 10 years we will see electronically steerable phased-array antennas emerge as a notable advancement, offering increased agility and efficiency in beamforming and signal direction. Their ability to swiftly adjust beams for optimal coverage and connectivity without physical movement makes them perfect for the dynamic nature of Low Earth Orbit (LEO) satellite constellations. This technology will become increasingly cost-effective and energy-efficient, enabling widespread deployment across various satellite platforms (not only LEO designs). Advances in phased-array antenna technology will facilitate a substantial increase in satellite system capacity by increasing the number of beams, varying beam size (possibly down to the level of a single customer ground station), and supporting multi-band operations within the same antenna.

Another promising development is the integration of metamaterials in antenna design, which will lead to more compact, flexible, and lightweight antennas. The science of metamaterials is fascinating: it concerns manufacturing artificial materials whose internal structure gives rise to electromagnetic properties not found in naturally occurring materials. Metamaterial antennas are going to offer superior performance, including better signal control and reduced interference, which is crucial for maintaining high-quality broadband connections. These materials are also important for substantially reducing the weight of the satellite antenna while boosting its performance. Thus, the technology will also help bring satellite launch costs down dramatically.

Although MIMO antennas are primarily associated with terrestrial networks, I would also expect massive MIMO technology to find applications in satellite broadband systems. Satellite systems, just like ground-based cellular networks, can significantly increase their capacity and efficiency by utilizing many antenna elements to simultaneously communicate with multiple ground terminals. This could be particularly transformative for next-generation satellite networks, supporting higher data rates and accommodating more users. The technology will increase the capacity and quality of the satellite system dramatically, as it has done in terrestrial cellular networks.

Furthermore, advancements in onboard processing capabilities will allow satellites to perform more complex signal processing tasks directly in space, reducing latency and improving the efficiency of data transmission. Coupled with AI and machine learning algorithms, future satellite antennas could dynamically optimize their operational parameters in real-time, adapting to changes in the network environment and user demand.

Additionally, research into quantum antenna technology may offer breakthroughs in satellite communication, providing unprecedented levels of sensitivity and bandwidth efficiency. Although still in its early days, quantum antennas could revolutionize signal reception and transmission in satellite broadband systems. In the context of LEO satellite systems, I am particularly excited about utilizing the Rydberg effect to enhance system sensitivity, which could lead to massive improvements. The heightened sensitivity of Rydberg atoms to electromagnetic fields could be harnessed to develop ultra-sensitive detectors for radio frequency (RF) signals. Such detectors could surpass the performance of traditional semiconductor-based devices in terms of sensitivity and selectivity, enabling satellite systems to detect weaker signals, improve signal-to-noise ratios, and even operate effectively over greater distances or with less power. Furthermore, space could be the near-ideal environment for operationalizing Rydberg antenna and communications systems, as space has a near-perfect vacuum, very low temperatures (at least in Earth’s shadow, or with proper thermal management), is relatively free of electromagnetic radiation (compared to Earth), and offers a micro-gravity environment that may facilitate long-range “communications” between Rydberg atoms. This particular topic may be further out than “just” a decade from now, although it may also be with satellites that we see the first promising results of this technology.

One key area of development will be the integration of LEO satellite networks with terrestrial 5G and emerging 6G networks, marking a significant step in the evolution of Non-Terrestrial Network (NTN) architectures. This integration promises to deliver seamless, high-speed connectivity across the globe, including in remote and rural areas previously underserved by traditional broadband infrastructure. By complementing terrestrial networks, LEO satellites will help achieve ubiquitous wireless coverage, facilitating a wide range of applications and use cases from high-definition video streaming to real-time IoT data collection.

The convergence of LEO satellite services with 5G and 6G will also spur network management and orchestration innovation. Advanced techniques for managing interference, optimizing handovers between terrestrial and non-terrestrial networks, and efficiently allocating spectral resources will be crucial. It would be odd not to mention it here, so artificial intelligence and machine learning algorithms will, of course, support these efforts, enabling dynamic network adaptation to changing conditions and demands.

Moreover, the next decade will likely see significant improvements in the environmental sustainability of LEO satellite operations. Innovations in satellite design and materials, along with more efficient launch vehicles and end-of-life deorbiting strategies, will help mitigate the challenges of space debris and ensure the long-term viability of LEO satellite constellations.

In the realm of global connectivity, LEO satellites should bridge the digital divide, offering affordable and accessible internet services to the billions of people worldwide who are unconnected today. As of 2023, an estimated 3 billion people, almost 40% of the world’s population, have never used the internet. Over the next decade, it must be our ambition that LEO satellite networks bring this number down to very near zero. This will have profound implications for education, healthcare, economic development, and global collaboration.

FURTHER READING.

  1. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks.”, Wiley (2024). A recommended reading for deep diving into NTN networks of satellites, typically the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  2. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  3. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  4. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is quite good as well: Starlink.
  5. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023). This link includes a post from Elon Musk commenting on the cost involved in manufacturing the Starlink satellite and the cost of launching SpaceX’s Falcon 9 rocket.
  6. Michael Baylor, “With Block 5, SpaceX to increase launch cadence and lower prices.”, nasaspaceflight.com (May, 2018).
  7. Gwynne Shotwell, TED Talk from May 2018. She quotes here a total of USD 10 billion as a target for the 12,000 satellite network. This is just an amazing visionary talk/discussion about what may happen by 2028 (in 4-5 years ;-).
  8. Juliana Suess, “Guo Wang: China’s Answer to Starlink?”, (May 2023).
  9. Makena Young & Akhil Thadani, “Low Orbit, High Stakes, All-In on the LEO Broadband Competition.”, Center for Strategic & International Studies CSIS, (Dec. 2022).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  12. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. Ambition to have the world’s first global 5G non-terrestrial network. Initial support 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far, only 2 satellites have been launched.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  15. A.K. Singh, M.P. Abegaonkar, and S.K. Koul, “Metamaterials for Antenna Applications”, CRC Press (September 2021).
  16. T.L. Marzetta, E.G. Larsson, H. Yang, and H.Q. Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (November 2016).
  17. G.Y. Slepyan, S. Vlasenko, and D. Mogilevtsev, “Quantum Antennas”, arXiv:2206.14065v2, (June 2022).
  18. R. Huntley, “Quantum Rydberg Receiver Shakes Up RF Fundamentals”, EE Times, (January 2022).
  19. Y. Du, N. Cong, X. Wei, X. Zhang, W. Lou, J. He, and R. Yang, “Realization of multiband communications using different Rydberg final states”, AIP Advances, (June 2022). Demonstrating the applicability of the Rydberg effect in digital transceivers in the Ku and Ka bands.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?

“From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost effective than establishing extraterrestrial infrastructures.”

This article, in a different and somewhat shorter format, has also been published by New Street Research under the title “Stratospheric drones: A game changer for rural networks?”. You will need to register with New Street Research to get access.

As a mobile cellular industry expert and techno-economist, the first time I was presented with the concept of stratospheric drones, I felt butterflies in my belly. That tingling feeling of seeing something that could hugely disrupt how mobile cellular networks are designed and built. Imagine getting rid of the profitability-challenged rural cellular networks (i.e., the towers, the energy consumption, the capital infrastructure investments) and, at the same time, offering much better quality to customers in rural areas than is possible with the existing cellular networks we have deployed there. A technology that could fundamentally change the industry’s mobile cellular cost structure for the better, at a quantum leap in quality, and, in general, provide economical broadband services to the unconnected at a fraction of the cost of our traditional ways of building terrestrial cellular coverage.

Back in 2015, I got involved with Deutsche Telekom AG Group Technology, under the leadership of Bruno Jacobfeuerborn, in working out the detailed operational plans, deployment strategies, and, of course, the business case as well as the general economics of building a stratospheric cellular coverage platform from scratch with the UK-based Stratospheric Platforms Ltd [2], in which Deutsche Telekom is an investor. The investment thesis rested on the way we expected the stratospheric high-altitude platform to make a large part of mobile operators’ terrestrial rural cellular networks obsolete, and on how it might strengthen mobile operator footprints in countries where rural and remote coverage was either very weak or non-existent (e.g., the USA, an important market for Deutsche Telekom AG).

At the time, our thought was to have an operational stratospheric coverage platform by 2025, 10 years after kicking off the program, with more than 100 high-altitude platforms serving the rural areas of a major Western European country. Reality, as it often is with genuinely disruptive ideas, is unforgiving. Getting to deployment and operation of a high-altitude platform at scale is still some years out due to the lack of maturity of the flight platform, including regulatory approvals for operating a HAP network at scale, extending the operating window of the flight platform, fueling, technology challenges with the advanced antenna system, being allowed to deploy terrestrial cellular spectrum above terra firma, etc. Many of these challenges are progressing well, although slowly.

Globally, various companies are actively working on developing stratospheric drones to enhance cellular coverage. These include aerospace and defense giants like Airbus, advancing its Zephyr drone, and BAE Systems, collaborating with Prismatic on their PHASA-35 UAV. One of the most exciting HAPS companies focusing on world-leading high-altitude aircraft that I have come across during my planning work on operationalizing a stratospheric cellular coverage platform is the German company Leichtwerk AG, which has its hydrogen-fueled StratoStreamer close to production-ready and a solar-powered platform under development. Telecom companies like Deutsche Telekom AG and BT Group are experimenting with hydrogen-powered drones in partnership with Stratospheric Platforms Limited. Through its subsidiary HAPSMobile, SoftBank is also a significant player with its Sunglider project. Additionally, entities like China Aerospace Science and Technology Corporation and Cambridge Consultants contribute to this field by co-developing enabling technologies (e.g., advanced phased-array antennas, fuel technologies, material science, …) critical for the success and deployability of high-altitude platforms at scale, aiming to improve connectivity in rural, remote, and underserved areas.

The work on integrating High Altitude Platform (HAP) networks with terrestrial cellular systems involves significant coordination with international regulatory bodies like the International Telecommunication Union Radiocommunication Sector (ITU-R) and the World Radiocommunication Conference (WRC). This process is crucial for securing permission to reuse terrestrial cellular spectrum in the stratosphere. Key focus areas include negotiating the allocation and management of frequency bands for HAP systems, ensuring they don’t interfere with terrestrial networks. These efforts are vital for successfully deploying and operating HAP systems, enabling them to provide enhanced connectivity globally, especially in rural regions where terrestrial cellular frequencies are already in use, and in remote and underserved regions. At the latest WRC-2023 conference, SoftBank successfully gained approval within the Asia-Pacific region to use mobile spectrum bands for stratospheric drone-based mobile broadband cellular services.

Most mobile operators have at least 50% of their cellular network infrastructure assets in rural areas. While necessary for providing the coverage that mobile customers have come to expect everywhere, these sites carry only a fraction of the total mobile traffic. Individually, rural sites have poor financial returns due to their proportional operational and capital expenses.

In general, cellular network Opex takes up between 50% and 60% of Technology Opex, and at least 50% of that can be attributed to maintaining and operating the rural part of the radio access network. Capex is more cyclical than Opex due to, for example, the modernization of radio access technology. Nevertheless, over a typical modernization cycle (5 to 7 years), the rural network demands a slightly smaller but broadly similar share of Capex as of Opex. Typically, the Opex share of the rural cellular network may be around 10% of corporate Opex, and its associated total cost is between 12% and 15% of total expenses.
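A quick back-of-envelope check of how these shares compose. The technology-opex share of corporate opex (~36%) is my assumption for illustration; it is not stated in the text.

```python
# Composing the opex shares; tech_share_of_corporate is an assumed value.
tech_share_of_corporate = 0.36   # assumption, not from the text
network_share_of_tech   = 0.55   # midpoint of "between 50% and 60%"
rural_share_of_network  = 0.50   # "at least 50%"

rural_share_of_corporate = (tech_share_of_corporate
                            * network_share_of_tech
                            * rural_share_of_network)
print(f"{rural_share_of_corporate:.1%}")  # ~9.9%, consistent with "around 10%"
```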

The global telecom tower market size in 2023 is estimated at 26+ billion euros, ca. 2.5% of total telecom turnover, with projected growth of 3.3% CAGR from now to 2030. The top 10 tower management companies manage close to 1 million towers worldwide for mobile CSPs. Although many mobile operators have chosen to spin off their passive site infrastructure, some have yet to spin off their cellular infrastructure to one of the many tower management companies, captive or independent, such as American Tower (224,019+ towers), Cellnex Telecom (112,737+ towers), Vantage Towers (46,100+ towers), GD Towers (41,600+ towers), etc.

IMAGINE.

Focusing on the low- or no-profitable rural cellular coverage.

Imagine an alternative coverage technology to the terrestrial cellular networks all mobile operators use today, one that would allow them to do without the costly, low-profitability rural cellular network they maintain to satisfy their customers’ expectations of high-quality ubiquitous cellular coverage.

For the alternative technology to be attractive, it would need to deliver at least the same quality and capacity as the existing terrestrial-based cellular coverage for substantially better economics.

If a mobile operator with a 40% EBITDA margin did not need its rural cellular network, it could improve its margin by a sustainable 5 percentage points and increase its cash generation in relative terms by 50% (i.e., from 0.2×Revenue to 0.3×Revenue), assuming a capex-to-revenue ratio of 20% before implementing the technology, reduced to 15% after, due to avoided modernization and capacity investments in rural areas.
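The margin arithmetic can be verified directly, with revenue normalized to 1.0 and cash generation approximated as EBITDA minus capex (the simplification used in the claim above):

```python
# Before: 40% EBITDA margin, capex at 20% of revenue.
revenue = 1.0
ebitda_before, capex_before = 0.40 * revenue, 0.20 * revenue
cash_before = ebitda_before - capex_before          # 0.2 x revenue

# After retiring the rural network: +5pp margin, capex down to 15% of revenue.
ebitda_after, capex_after = 0.45 * revenue, 0.15 * revenue
cash_after = ebitda_after - capex_after             # 0.3 x revenue

print(f"{cash_before:.2f} -> {cash_after:.2f}, "
      f"relative gain {cash_after / cash_before - 1:.0%}")  # 0.20 -> 0.30, relative gain 50%
```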

Imagine that the alternative technology would provide a better cellular quality to the consumer for a quantum leap reduction of the associated cost structure compared to today’s cellular networks.

Such an alternative coverage technology might also impact the global tower companies’ absolute level of sustainable tower revenues, with a substantial proportion of revenue related to rural site infrastructure being at risk.

Figure 1 An example of an unmanned autonomous stratospheric coverage platform. Source: Cambridge Consultants presentation (see reference [2]) based on their work with Stratospheric Platforms Ltd (SPL) and SPL’s innovative high-altitude coverage platform.

TERRESTRIAL CELLULAR RURAL COVERAGE – A MATTER OF POOR ECONOMICS.

When considering the quality we experience in a terrestrial cellular network, a comprehensive understanding of various environmental and physical factors is crucial to predicting signal quality accurately. These factors generally work against cellular signal propagation, limiting both how far the signal can reach from the transmitting cellular tower and the quality (e.g., signal strength) a customer can experience from a cellular service.

Firstly, the terrain plays a significant role. Rural landscapes often include varied topographies such as hills, valleys, and flat plains, each affecting signal reach differently. For instance, hilly or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further.

At higher frequencies (i.e., above 1 GHz), vegetation becomes an increasingly critical factor to consider. Trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength.

The height and placement of transmitting and receiving antennas are also vital considerations. In rural areas, where there are fewer tall buildings, the height of the antenna can have a pronounced effect on the line of sight and, consequently, on the signal coverage and quality. Elevated antennas mitigate the impact of terrain and vegetation to some extent.

Furthermore, the lower density of buildings in rural areas means fewer reflections and less multipath interference than in urban environments. However, larger structures, such as farm buildings or industrial facilities, must be factored in, as they can obstruct or reflect signals.

Finally, the distance between the transmitter and receiver is fundamental to signal propagation. With typically fewer cell towers spread over larger distances, understanding how signal strength diminishes with distance is critical to ensuring reliable coverage at a high quality, such as high cellular throughput, as the mobile customer expects.
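As a rough illustration of that distance dependence, the log-distance path-loss model often used in macro-cell planning: PL(d) = PL(d0) + 10·n·log10(d/d0), where the exponent n captures how much faster than free space the signal decays. The reference loss and exponent below are assumed values for a rural macro environment, not measurements.

```python
import math

def path_loss_db(d_km, pl_ref_db=100.0, d_ref_km=1.0, exponent=3.5):
    """Log-distance path loss; exponent and reference loss are assumptions."""
    return pl_ref_db + 10 * exponent * math.log10(d_km / d_ref_km)

# Doubling the distance at n=3.5 costs about 10.5 dB of signal strength,
# versus only 6 dB in free space (n=2) -- the terrestrial penalty in action.
delta = path_loss_db(2.0) - path_loss_db(1.0)
print(f"{delta:.1f} dB per doubling of distance")
```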

The typical way for a cellular operator to mitigate the environmental and physical factors that inevitably result in loss of signal strength and reduced cellular quality (i.e., sub-standard cellular speed) is to build more sites and thus incur increasing Capex and Opex in areas that in general will have poor economical payback associated with any cellular assets. Thus, such investments make an already poor economic situation even worse as the rural cellular network generally would have very low utilization.

Figure 2 Cellular capacity or quality, measured by unit or total throughput, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of cells or capacity units deployed. When considering the effective spectral efficiency, one needs to account for the possible “boost” that a higher-order MIMO or Advanced Antenna System delivers over and above a Single In Single Out (SISO) antenna.

As our alternative technology would also need to provide at least the same quality and capacity, it is worth exploring what can be expected in terms of rural terrestrial capacity. In general, the cellular capacity (and quality) can be written as (also shown in Figure 2 above):

Throughput (in Mbps) =
Spectral Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Cell ×
Number of Cells

We need to keep in mind an additional important factor when considering quality and capacity: the higher the operational frequency, the smaller the coverage radius (all else being equal). Typically, we can improve the radius at higher frequencies by utilizing advanced antenna beamforming, that is, by concentrating the radiated power per unit coverage area. This is why you will often hear that the 3.6 GHz downlink coverage radius is similar to that of 1800 MHz (or PCS). In that comparison, all else is not equal: the 1800 MHz radiated power is spread out over the whole coverage area, while the 3.6 GHz (or C-band in general) solution makes use of beamforming, where the transmitted energy density is high, allowing the signal to reach the customer at a range that would not be possible if the 3.6 GHz radiated power were spread out over the cell like in the 1800 MHz example.

As an example, take an average Western European rural 5G site with all cellular bands between 700 and 2100 MHz activated. The site will have a total of 85 MHz DL and 75 MHz UL, with the 10 MHz difference between DL and UL due to band 38 Supplementary Downlink (SDL) being operational on the site. In our example, we will be optimistic and assume an effective spectral efficiency of 2 Mbps per MHz per cell (averaged over all bands and antenna configurations), which would indicate a fair number of 4×4 and 8×8 MIMO antenna systems deployed. Thus, the unit throughput we would expect the terrestrial rural cell to supply is 170 Mbps (i.e., 85 MHz × 2.0 Mbps/MHz/Cell). With a rural cell coverage radius between 2 and 3 km, we then have an average throughput per square kilometer of ca. 9 Mbps/km2. Due to the low demand and the large bandwidth available per active customer, DL speeds exceeding 100+ Mbps should be relatively easy to sustain with 5G standalone, with uplink speeds being more compromised due to the larger coverage areas. Obviously, rural quality can be improved further by deploying advanced antenna systems and increasing the share of higher-order MIMO antennas, as well as by increasing rural site density. However, as already pointed out, this would not be an economically reasonable approach.
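The worked example above follows directly from the throughput formula in Figure 2; a short sketch reproducing the numbers (using the midpoint of the 2-3 km radius range):

```python
import math

# Inputs from the rural-site example above.
bandwidth_mhz = 85          # total DL spectrum across the 700-2100 MHz bands
spectral_eff = 2.0          # Mbps/MHz/cell, optimistic average
cell_radius_km = 2.5        # midpoint of the 2-3 km range

site_throughput_mbps = bandwidth_mhz * spectral_eff      # 85 x 2.0 = 170 Mbps
cell_area_km2 = math.pi * cell_radius_km ** 2            # ~19.6 km^2
density = site_throughput_mbps / cell_area_km2           # throughput per km^2

print(f"{site_throughput_mbps:.0f} Mbps per cell, "
      f"{density:.0f} Mbps/km^2")  # 170 Mbps per cell, 9 Mbps/km^2
```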

THE ADVANTAGE OF SEEING FROM ABOVE.

Figure 3 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a stratospheric drone or high-altitude platform (“Antenna-in-the-Sky”). The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which become primarily a function of distance, as the link approximates free-space propagation. The situation is very different for a terrestrial cellular tower, whose radiated signal is substantially impacted by the environment as well as physical factors.

It may sound silly to talk about an alternative coverage technology that could replace the cellular tower infrastructure that today is critical for providing mobile broadband coverage to, for example, rural areas. Yet that is exactly the question: what alternative coverage technologies should we consider?

If, instead of relying on terrestrial-based tower infrastructure, we could move the cellular antenna and possibly the radio node itself to the sky, we would have a situation where most points of the ground would be in the line of sight to the “antenna-in-the-sky.” The antenna in the sky idea is a game changer in terms of coverage itself compared to conventional terrestrial cellular coverage, where environmental and physical factors dramatically reduce signal propagation and signal quality.

The key advantage of an antenna in the sky (AIS) is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the cellular signal propagation from an AIS closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our antenna in the sky.

Over the last ten years, several technology candidates have emerged for our antenna-in-the-sky solution, aiming to provide terrestrial broadband services as a substitute for, or enhancement of, terrestrial mobile and fixed broadband services. In the following, I will describe two distinct types of antenna-in-the-sky solutions: (a) Low Earth Orbit (LEO) satellites, operating between 500 and 2,000 km above Earth, that provide terrestrial broadband services such as we know from Starlink (SpaceX), OneWeb (Eutelsat Group), and Kuiper (Amazon), and (b) so-called High Altitude Platforms (HAPS), operating at altitudes between 15 and 30 km (i.e., in the stratosphere). Such platforms are still in the research and trial stages but are very promising technologies to substitute or enhance rural broadband network services. The HAP is designed to be unmanned, highly autonomous, and ultimately operational in the stratosphere for an extended period (weeks to months), fueled by green hydrogen and possibly solar power. The high-altitude platform is thus also an unmanned aerial vehicle (UAV), although I will use the terms stratospheric drone and HAP interchangeably in the following.

Low Earth Orbit (LEO) satellites and High Altitude Platforms (HAPs) represent two distinct approaches to providing high-altitude communication and observation services. LEO satellites, operating between 500 km and 2,000 km above the Earth, orbit the planet, offering broad global coverage. The LEO satellite platform is ideal for applications like satellite broadband internet, Earth observation, and global positioning systems. However, deploying and maintaining these satellites involves complex, costly space missions and sophisticated ground control, although, as SpaceX has demonstrated with the Starlink LEO satellite fixed broadband platform, the unit economics of the satellites improve significantly with scale (i.e., the number of satellites), particularly when the launch cost is included.

Figure 4 illustrates a non-terrestrial network architecture consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users. Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service, with the satellites interconnected. The user terminal (UT) dynamically aligns itself toward the best-quality connection provided by the satellites within the UT’s field of vision.

Figure 4 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users (e.g., Starlink, Kuiper, OneWeb, …). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of a LEO satellite constellation is between 300 and 2,000 km. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal antenna (UT) dynamically orients itself toward the best line-of-sight (in terms of signal quality) to a satellite within the UT’s field-of-view (FoV). The FoV has not been shown in the picture above so as not to overcomplicate the illustration. It should be noted that, just like with the drone, it is possible to integrate the complete gNB on the LEO satellite. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

On the other hand, HAPs, such as unmanned (autonomous) stratospheric drones, operate at altitudes of approximately 15 km to 30 km in the stratosphere. Unlike LEO satellites, the stratospheric drone can hover or move slowly over specific areas, often remaining quasi-stationary relative to the Earth’s surface. This characteristic makes them more suitable for localized coverage tasks like regional broadband, surveillance, and environmental monitoring. The deployment and maintenance of stratospheric drones are managed from the Earth’s surface and do not require space launch capabilities. Furthermore, enhancing and upgrading the HAPs is straightforward, as they will regularly be on the ground for fueling and maintenance. Such upgrades are not possible with an operational LEO satellite solution, where any upgrade must wait for a subsequent generation and a new launch.

Figure 5 illustrates the high-level network architecture of an unmanned autonomous stratospheric drone-based constellation providing terrestrial cellular broadband services to terrestrial mobile users delivered to their normal 5G terminal equipment. Each hexagon represents a beam arising from the phased-array antenna integrated into the drone’s wingspan. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The drone-based non-terrestrial network is drawn consistent with the architectural radio access network (RAN) elements from Open RAN, e.g., Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU). It should be noted that the whole 5G gNB (the 5G NodeB), including the CU, could be integrated into the stratospheric drone, and in fact, so could the 5G standalone (SA) packet core, enabling full private mobile 5G networks for defense and disaster scenarios or providing coverage in very remote areas with little possibility of ground-based infrastructure (e.g., the arctic region, or desert and mountainous areas).

Figure 5 illustrates a Non-Terrestrial Network consisting of a stratospheric High Altitude Platform (HAP) drone-based constellation providing terrestrial Cellular broadband services to terrestrial mobile users delivered to their normal 5G terminal equipment. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The operating altitude of a HAP constellation is between 10 to 50 km with an optimum of around 20 km. It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to contemplate having the gNB (full 5G radio node) in the stratospheric drone entirely, which would allow easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

The unique advantages of the HAP operating in the stratosphere are: (1) the altitude is well suited to providing wide-area cellular coverage of near-ideal quality, above and beyond what is possible with conventional terrestrial cellular coverage, because the line-of-sight likelihood is very high and far fewer environmental and physical obstructions degrade signal propagation and quality; and (2) the stratosphere is characterized by more stable atmospheric conditions than the troposphere below it. This stability allows the stratospheric drone to maintain a consistent position and altitude with less energy expenditure. The stratosphere also offers more consistent and direct sunlight exposure for a solar-powered HAP, with less atmospheric attenuation. Moreover, due to the thinner atmosphere at stratospheric altitudes, the drone experiences lower air resistance (drag), increasing its energy efficiency and therefore its operational airtime.

Figure 6 illustrates Leichtwerk AG’s StratoStreamer HAP design, which is near-production-ready. Leichtwerk AG works closely with EASA toward the type certificate that would make it possible to operationalize a drone constellation in Europe. The StratoStreamer has a wingspan of 65 meters and can carry a payload of 100+ kg. Courtesy: Leichtwerk AG.

Each of these solutions has its unique advantages and limitations. LEO satellites provide extensive coverage but come with higher operational complexities and costs. HAPs offer more focused coverage and are easier to manage, but they lack the global reach of LEO satellites. The choice between these two depends on the specific requirements of the intended application, including coverage area, budget, and infrastructure capabilities.

In an era where digital connectivity is indispensable, stratospheric drones could emerge as a game-changing technology. These unmanned (autonomous) drones, operating in the stratosphere, offer unique operational and economic advantages over terrestrial networks and are even seen as competitive alternatives to low earth orbit (LEO) satellite networks like Starlink or OneWeb.

STRATOSPHERIC DRONES VS TERRESTRIAL NETWORKS.

Stratospheric drones, positioned much closer to the Earth’s surface than satellites, provide distinct signal-strength and latency benefits. The HAP’s vantage point in the stratosphere (around 20 km above the Earth) ensures a high probability of line-of-sight with terrestrial user devices, mitigating the adverse effects of terrain obstacles that frequently challenge ground-based networks. This capability is particularly beneficial in rural areas in general and in mountainous or densely forested areas, where conventional cellular towers struggle to provide consistent coverage.

Why the stratosphere? The stratosphere is the layer of Earth’s atmosphere located above the troposphere, which is the layer where weather occurs. The stratosphere is generally characterized by stable, dry conditions with very little water vapor and minimal horizontal winds. It is also home to the ozone layer, which absorbs and filters out most of the Sun’s harmful ultraviolet radiation. It is also above the altitude of commercial air traffic, which typically flies at altitudes ranging from approximately 9 to 12 kilometers (30,000 to 40,000 feet). These conditions (in addition to those mentioned above) make operating a stratospheric platform very advantageous.

Figure 6 illustrates the coverage fundamentals of (a) a terrestrial cellular radio network, with the signal strength and quality degrading increasingly as one moves away from the antenna, and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High-Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal and quality from a terrestrial cellular site, which is influenced by its environment and physical factors and by the fact that LoS is much less likely in a conventional terrestrial cellular network. It is worth keeping in mind that the coverage scenarios where a stratospheric drone or a low earth orbit satellite may particularly excel are rural areas and outdoor coverage in denser urban areas. In urban areas, the clutter, or environmental features and objects, makes line-of-sight more challenging, impacting the strength and quality of the radio signals.

Figure 6 The chart above illustrates the coverage fundamentals of (a) a terrestrial cellular radio network with the signal strength and quality degrading increasingly as one moves away from the antenna and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal & quality from a terrestrial cellular site that is influenced by its environment and physical factors and the fact that LoS is much less likely in a conventional terrestrial cellular network.

From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost-effective than establishing extensive terrestrial infrastructure, especially in remote or rural areas. The setup and operational costs of cellular towers, including land acquisition, construction, and maintenance, are substantially higher compared to the deployment of stratospheric drones. These aerial platforms, once airborne, can cover vast geographical areas, potentially rendering numerous terrestrial towers redundant. At an operating height of 20 km, one would expect a coverage radius ranging from 20 km up to 500 km, depending on the antenna system, application, and business model (e.g., terrestrial broadband services, surveillance, environmental monitoring, …).

The stratospheric drone-based coverage platform, and by platform, I mean the complete infrastructure that will replace the terrestrial cellular network, will consist of unmanned autonomous drones with a considerable wingspan (e.g., 747-like, ca. 69 meters). For example, the European (German) Leichtwerk StratoStreamer has a wingspan of 65 meters and a wing area of 197 square meters with a payload of 120+ kg (note: in comparison, a Boeing 747 has ca. 500+ m2 of wing area, but its payload is obviously much, much higher, in the range of 50 to 60 metric tons). Leichtwerk AG works closely with the European Union Aviation Safety Agency (EASA) in order to achieve the EASA type certificate that would allow the HAPS to integrate into civil airspace (see ref. [34] for what that means).

An advanced antenna system is positioned under the wings (or the belly) of the drone. I will assume that the coverage radius provided by a single drone is 50 km, though it can dynamically be made smaller or larger depending on the coverage scenario and use case. The drone-based advanced antenna system breaks up the coverage area (ca. 6,500+ square kilometers) into 400 patches (a number that can be increased substantially), averaging approx. 16 km2 per patch with a radius of ca. 2.5 km. Due to its near-ideal cellular link budget, the effective spectral efficiency is expected to be initially around 6 Mbps per MHz per cell. Additionally, the drone does not have the same spectrum limitations as a rural terrestrial site and would be able to support frequency bands in the downlink from ~900 MHz up to 3.9 GHz (and possibly higher, although likely with different antenna designs). Due to the HAP altitude, the Earth-to-HAP uplink signal will be limited to a lower frequency spectrum to ensure that good signal quality is received at the stratospheric antenna; it is prudent to assume a limit of 2.1 GHz to possibly 2.6 GHz. All this is under the assumption that the stratospheric drone operator has obtained regulatory approval for operating the terrestrial cellular spectrum from their coverage platform. It should be noted that today, cellular frequency spectrum approved for terrestrial use cannot be used at altitude unless regulatory permission has been given (more on this later).

Let’s look at an example. We would need ca. 46 drones to cover the whole of Germany with the above-assumed specifications. Furthermore, if we take the average spectrum portfolio of the three main German operators, the stratospheric drone could operate with up to 145 MHz in downlink and at least 55 MHz in uplink (i.e., limiting UL to include 2.1 GHz). Using the HAP DL spectral efficiency and coverage area, we get a throughput density of 70+ Mbps/km2 and an effective rural cell throughput of 870 Mbps. In terrestrial-based cellular coverage, the contribution to quality at higher frequencies degrades rapidly as a function of the distance to the antenna. This is not the case for HAP-based coverage due to its near-ideal signal propagation.
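The fleet-sizing and per-cell throughput figures in this example can be sketched as follows. This assumes an idealized circular footprint per drone with no overlap (the article's ~6,500 km2 figure corresponds to a hexagon inscribed in the same 50 km radius); all inputs are the article's assumptions, not verified data:

```python
import math

# Assumed figures from the Germany example (illustrative):
germany_area_km2 = 357_600    # approx. land area of Germany
drone_radius_km = 50          # assumed single-drone coverage radius
dl_spectrum_mhz = 145         # pooled DL spectrum of the three operators
hap_spectral_eff = 6.0        # Mbps per MHz per cell (near-ideal link)

drone_area = math.pi * drone_radius_km ** 2       # ~7,854 km^2 per drone
fleet = math.ceil(germany_area_km2 / drone_area)  # ideal tiling, no overlap
cell_throughput = dl_spectrum_mhz * hap_spectral_eff  # Mbps per beam/patch

print(f"Drones needed (ideal tiling): {fleet}")
print(f"Per-patch throughput:         {cell_throughput:.0f} Mbps")
```

A real constellation would need overlap for handover and redundancy, which is why the article later budgets 100 drones rather than 46.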

In comparison, the three incumbent German operators have on average ca. 30±4k sites per operator, with an average terrestrial coverage area of 12 km2 and a coverage radius of ca. 2.0 km (i.e., smaller in cities, ~1.3 km, larger in rural areas, ~2.7 km). Assume that the annual cost of ownership related only to the passive part of a site is 20+ thousand euros and that 50% of the 30k sites (expect a higher number) would be redundant as the rural coverage would be replaced by stratospheric drones. Such a site reduction would conservatively lead to a minimum gross monetary saving of 300 million euros annually (not considering the cost of the alternative coverage technology).

In our example, the question is whether we can operate a stratospheric drone-based platform covering rural Germany for less than 300 million euros yearly. Let’s examine this question. Say the stratospheric drone price is 1 million euros per piece (similar to the current Starlink satellite price, excluding the launch cost, which would add another 1.1 million euros to the satellite cost). For redundancy and availability purposes, we assume we need 100 stratospheric drones to cover rural Germany, allowing the decommissioning of in the region of 15 thousand rural terrestrial sites. The decommissioning cost and the right timing of tower contract terminations need to be considered: due to standard long-term contracts, it may take 5 (optimistic) to 10+ (realistic) years before the rural network termination could be completed. Many telecom businesses that have spun off their passive site infrastructure have done so in mutual captivity with the tower management company and may have committed to very “sticky” contracts with very little flexibility for site termination at scale (e.g., 2% annually allowed over the total portfolio).

We have a capital expense of 100 million euros for the stratospheric drones. We also have to establish the support infrastructure (e.g., ground stations, airfield suitability rework, development, …) and consider operational expenses. A ballpark figure would be around 100 million euros of Capex for establishing the supporting infrastructure and another 30 million euros in annual operational expenses. Steady-state Capex should be at most 20 million euros per year. In our example, the terrestrial rural network would have cost 3 billion euros, mainly Opex, over ten years, compared to 700 million euros, a little less than half of it Opex, for the stratospheric drone-based platform (not considering inflation).
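The ten-year comparison above can be tallied in a few lines. All figures are the rough assumptions stated in the text (per-site passive cost, drone unit price, support Capex, and Opex guesstimates), not actuals:

```python
# Back-of-the-envelope 10-year cost comparison (assumptions from the text).
years = 10

# Terrestrial rural network (passive-site cost only, mostly Opex):
sites = 15_000
cost_per_site_per_year_eur = 20_000
terrestrial_total = sites * cost_per_site_per_year_eur * years

# Stratospheric drone platform:
drone_capex = 100 * 1_000_000      # 100 drones at ~1M EUR each
support_capex = 100_000_000        # ground stations, airfields, development
steady_capex = 20_000_000 * years  # ongoing Capex, at most 20M EUR/year
opex = 30_000_000 * years          # annual operational expenses

drone_total = drone_capex + support_capex + steady_capex + opex

print(f"Terrestrial, 10y:    {terrestrial_total/1e9:.1f} bn EUR")
print(f"Drone platform, 10y: {drone_total/1e6:.0f} m EUR")
print(f"Opex share (drone):  {opex/drone_total:.0%}")  # a bit under half
```

The 3.0 bn versus 0.7 bn euro outcome is what drives the article's conclusion; the gap shrinks if contract "stickiness" delays site decommissioning.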

The economics of a stratospheric unmanned and autonomous drone-based coverage platform should be superior to those of the current terrestrial cellular coverage platform. As the stratospheric coverage platform scales and more stratospheric drones are deployed, the unit price is also likely to fall accordingly.

SPECTRUM USAGE RIGHTS: YET ANOTHER CRITICAL PIECE.

It should be emphasized that the deployment of cellular frequency spectrum in stratospheric and LEO satellite contexts is governed by a combination of technical feasibility, regulatory frameworks, coordination to prevent interference, and operational needs. The ITU, along with national regulatory bodies, plays a central role in deciding the operational possibilities and balancing the needs and concerns of various stakeholders, including satellite operators, terrestrial network providers, and other spectrum users. Today, there are many restrictions and direct regulatory prohibitions on repurposing terrestrially assigned cellular frequencies for non-terrestrial purposes.

The World Radiocommunication Conference (WRC) plays a pivotal role in managing the global radio-frequency spectrum and satellite orbits. Its decisions directly impact the development and deployment of various radiocommunication services worldwide, ensuring their efficient operation and preventing interference across borders. The WRC’s work is fundamental to the smooth functioning of global communication networks, from television and radio broadcasting to cellular networks and satellite-based services. The WRC is typically held every three to four years, with the latest one, WRC-23, held in Dubai at the end of 2023; reference [13] provides the provisional final acts of WRC-23 (December 2023). In a landmark recommendation, WRC-23 relaxed the terrestrial-only conditions for the 698 to 960 MHz, 1.71 to 2.17 GHz, and 2.5 to 2.69 GHz frequency bands so that they also apply to high-altitude platform station (HAPS) base stations (“antennas-in-the-sky”). It should be noted that the exact frequency band ranges and conditions differ slightly depending on which of the three ITU-R regions (as well as exceptions for particular countries within a region) the system will be deployed in. Also, HAPS systems do not enjoy protection or priority over the existing terrestrial use of those frequency bands. It is important to note that the WRC-23 recommendation only applies to coverage platforms (i.e., HAPS) in the range from 20 to 50 km altitude. The WRC-23 frequency-band relaxation does not apply to satellite operation. With the recognized importance of non-terrestrial networks and the current standardization efforts (e.g., towards 6G), it is expected that the fairly restrictive regime on terrestrial cellular spectrum may be relaxed further to also allow mobile terrestrial spectrum to be used in “antenna-in-the-sky” coverage platforms.
Nevertheless, HAPS and terrestrial use of cellular frequency spectrum will have to be coordinated to avoid interference and resulting capacity and quality degradation.

SoftBank announced recently (i.e., 28 December 2023 [11]), after deliberations at WRC-23, that they had successfully gained approval within the Asia-Pacific region (i.e., ITU-R Region 3) to use mobile spectrum bands, namely 700–900 MHz, 1.7 GHz, and 2.5 GHz, for stratospheric drone-based mobile broadband cellular services (see also ref. [13]). As a result of this decision, operators in different countries and regions will be able to choose spectrum with greater flexibility when they introduce HAPS-based mobile broadband communication services, thereby enabling seamless usage with existing smartphones and other devices.

Another example of re-using terrestrially licensed cellular spectrum above ground is SpaceX’s direct-to-cell-capable 2nd-generation Starlink satellites.

On January 2nd, 2024, SpaceX launched its new generation of Starlink satellites with direct-to-cell capabilities, able to connect directly to a regular mobile cellular phone (e.g., a smartphone). The new direct-to-cell Starlink satellites use a T-Mobile US terrestrially licensed cellular frequency band (i.e., 2×5 MHz of Band 25, the PCS G-block) and will work, according to T-Mobile US, with most of their existing mobile phones. The initial direct-to-cell commercial plans will only support low-bandwidth text messaging and no voice or more bandwidth-heavy applications (e.g., streaming). Expectations are that the direct-to-cell system will deliver up to 18.3 Mbps (3.66 Mbps/MHz/cell) downlink and up to 7.2 Mbps (1.44 Mbps/MHz/cell) uplink over a channel bandwidth of 5 MHz (maximum).
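The quoted peak rates follow directly from the channel bandwidth and the per-cell spectral efficiencies, as a quick check shows (the spectral-efficiency values are the expectations quoted above, not measurements):

```python
# Direct-to-cell peak throughput implied by the quoted spectral efficiencies.
channel_mhz = 5.0   # PCS G-block: 2x5 MHz (Band 25)
dl_se = 3.66        # Mbps/MHz/cell, downlink (quoted expectation)
ul_se = 1.44        # Mbps/MHz/cell, uplink (quoted expectation)

dl_peak = channel_mhz * dl_se  # ~18.3 Mbps downlink
ul_peak = channel_mhz * ul_se  # ~7.2 Mbps uplink

print(f"DL peak: {dl_peak:.1f} Mbps, UL peak: {ul_peak:.1f} Mbps")
```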

Given that terrestrial 4G LTE systems struggle with such performance, it will be super interesting to see what the actual performance of the direct-to-cell satellite constellation will be.

COMPARISON WITH LEO SATELLITE BROADBAND NETWORKS.

When juxtaposed with LEO satellite networks such as Starlink (SpaceX), OneWeb (Eutelsat Group), or Kuiper (Amazon), stratospheric drones offer several advantages. Firstly, the stratospheric drone’s proximity to the Earth’s surface (ca. 20 km, versus 300 to 2,000 km for LEO satellites) results in lower latency, a critical factor for real-time applications. While LEO satellites, like those used by Starlink, have reduced latency (ca. 3 ms round-trip time) compared to traditional geostationary satellites (ca. 240 ms round-trip time), stratospheric drones can provide even quicker response times (on the order of a tenth of a millisecond round-trip), making the stratospheric drone substantially more beneficial for applications such as emergency services, telemedicine, and high-speed internet services.
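A propagation-only lower bound on round-trip time makes the altitude comparison concrete. This sketch ignores processing, queuing, and slant-path geometry (a real user is rarely directly below the platform), so actual latencies are higher:

```python
C_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

def min_rtt_ms(altitude_km: float) -> float:
    """Lower bound on round-trip time: straight up-and-down propagation
    only, ignoring processing, queuing, and slant-path geometry."""
    return 2 * altitude_km / C_KM_PER_MS

for name, alt_km in [("HAP (20 km)", 20),
                     ("LEO (550 km)", 550),
                     ("GEO (35,786 km)", 35_786)]:
    print(f"{name}: {min_rtt_ms(alt_km):.2f} ms")
```

The ~0.13 ms (HAP), ~3.7 ms (LEO), and ~239 ms (GEO) lower bounds line up with the round-trip figures quoted in the text.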

A stratospheric platform operating at 20 km altitude and targeting surveillance would, all else being equal, be 25 times better at resolving objects on the ground than a LEO satellite operating at 500 km altitude. The global aerial imaging market is expected to exceed 7 billion euros by 2030, with a CAGR of 14.2% from 2021. The flexibility of the stratospheric drone platform allows for combining cellular broadband services with a wide range of advanced aerial imaging services. Again, it is advantageous that the stratospheric drone regularly returns to Earth for fueling, maintenance, and technology upgrades and enhancements. This is not possible with a LEO satellite platform.

Moreover, the deployment and maintenance of stratospheric drones are, in theory, less complex and costly than launching and maintaining a constellation of satellites. While Starlink and similar projects require significant upfront investment for satellite manufacturing and rocket launches, stratospheric drones can be deployed at a fraction of the cost, making them a more economically viable option for many applications.

The Starlink LEO satellite constellation currently is the most comprehensive satellite (fixed) broadband coverage service. As of November 2023, Starlink had more than 5,000 satellites in low orbit (i.e., ca. 550 km altitude), and an additional 7,000+ are planned to be deployed, with a total target of 12+ thousand satellites. The current generation of Starlink satellites has three downlink phased-array antennas and one uplink phase-array antenna. This specification translates into 48 beams downlink (satellite to ground) and 16 beams uplink (ground to satellite). Each Starlink beam covers approx. 2,800 km2 with a coverage range of ca. 30 km, over which a 250 MHz downlink channel (in the Ku band) has been assigned. According to Portillo et al. [14], the spectral efficiency is estimated to be 2.7 Mbps per MHz, providing a total throughput of a maximum of 675 Mbps in the coverage area or a throughput density of ca. 0.24 Mbps per km2.
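The quoted per-beam numbers can be reproduced from the stated channel bandwidth, the spectral-efficiency estimate from Portillo et al. [14], and a circular beam footprint of 30 km radius:

```python
import math

# Starlink per-beam figures as quoted in the text:
beam_channel_mhz = 250   # DL channel assigned per beam (Ku band)
spectral_eff = 2.7       # Mbps per MHz (Portillo et al. estimate)
beam_radius_km = 30      # quoted beam coverage range

beam_throughput = beam_channel_mhz * spectral_eff  # ~675 Mbps per beam
beam_area = math.pi * beam_radius_km ** 2          # ~2,830 km^2 per beam
density = beam_throughput / beam_area              # ~0.24 Mbps per km^2

print(f"Beam throughput:    {beam_throughput:.0f} Mbps")
print(f"Beam area:          {beam_area:.0f} km^2")
print(f"Throughput density: {density:.2f} Mbps/km^2")
```

Note how this 0.24 Mbps/km2 compares with the ~9 Mbps/km2 of the rural terrestrial cell earlier: the satellite spreads its capacity over a far larger footprint.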

According to the latest Q2-2023 Ookla speed test, it was found that “among the 27 European countries that were surveyed, Starlink had median download speeds greater than 100 Mbps in 14 countries, greater than 90 Mbps in 20 countries, and greater than 80 in 24 countries, with only three countries failing to reach 70 Mbps” (see reference [18]). Of course, the actual customer experience will depend on the number of concurrent users demanding resources from the LEO satellite, as well as weather conditions, proximity of other users, etc. Starlink itself seems to have set an upper limit of 220 Mbps download speed for its so-called priority service plan and otherwise 100 Mbps (see [19] below). Quite impressive performance if no other broadband alternatives are available.

According to Elon Musk, SpaceX aims to reduce each Starlink satellite’s cost to less than one million euros, although the unit price will depend on the design, capabilities, and production volume. The launch cost using the SpaceX Falcon 9 launch vehicle starts at around 57 million euros; thus, with ca. 50 satellites per launch, the launch adds a cost of ca. 1.1 million euros per satellite. As of September 2023, SpaceX operates 150 ground stations (“Starlink Gateways”) globally that connect the satellite network with the internet and ground operations. At Starlink’s operational altitude, the estimated satellite lifetime is between 5 and 7 years due to orbital decay, fuel and propulsion system exhaustion, and component durability. Thus, a LEO satellite business must plan for satellite replacement cycles. This situation differs greatly from the stratospheric drone-based operation, where the vehicles can be continuously maintained and upgraded. Thus, they are significantly more durable, with an expected useful lifetime exceeding ten years and possibly even 20 years of operational use.

Let’s consider our example of Germany and what it would take to provide a LEO satellite coverage service targeting rural areas. It is important to understand that a LEO satellite travels at very high speed (ca. 27 thousand km per hour, or ~7.6 km/s) and thus completes an orbit around Earth in 90 to 120 minutes (depending on the satellite’s altitude). It is even more important to remember that Earth rotates on its axis (i.e., 24 hours for a full rotation), so the targeted coverage area will have moved relative to a given satellite orbit (easily by several hundred to thousands of kilometers). Thus, to ensure continuous satellite broadband coverage of the same area on Earth, we need a certain number of satellites in a particular orbit and several orbits. We would need at least 210 satellites to provide continuous coverage of Germany. Most of the time, most satellites would not cover Germany, and the operational satellite utilization will be very low unless areas outside Germany are also being serviced.
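The orbital period and speed behind these figures follow from Kepler's third law for a circular orbit, as this sketch shows (standard physical constants; the 550 km input is the Starlink-like altitude mentioned earlier):

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371.0        # km, mean Earth radius

def circular_orbit(altitude_km: float):
    """Period (minutes) and ground speed (km/h) of a circular orbit,
    from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_km                      # semi-major axis, km
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    speed_kmh = 2 * math.pi * a / period_s * 3600  # orbital circumference / period
    return period_s / 60, speed_kmh

period_min, speed_kmh = circular_orbit(550)  # Starlink-like altitude
print(f"Period: {period_min:.1f} min, speed: {speed_kmh:.0f} km/h")
```

At 550 km this gives a period of roughly 95 minutes and a speed of roughly 27,000 km/h, consistent with the 90 to 120 minute range quoted for LEO altitudes.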

Economically, using the Starlink numbers above as a guide, we incur a capital expense of upwards of 450 million euros to realize a satellite constellation that could cover Germany. Let’s also assume that the LEO satellite broadband operator (e.g., Starlink) must build and launch 20 satellites annually to maintain its constellation, incurring an additional Capex of ca. 40+ million euros annually. This amount does not account for the Capex required to build the ground network and the operations center; let’s say all the rest requires an additional 10 million euros of Capex, including miscellaneous items going forward. The technology-related operational expenses should be low, at most 30 million euros annually (a guesstimate!) and likely less. So, covering Germany with a LEO broadband satellite platform would cost ca. 1.3 billion euros over ten years. Although substantially more costly than our stratospheric drone platform, it is still less costly than running a rural terrestrial mobile broadband network.

Despite comparing favorably economically to the terrestrial cellular network, it is highly unlikely to make operational and economic sense for a single operator to finance such a network; it would probably only make sense if shared between the telecom operators in a country, and even more so across multiple countries or states (e.g., the European Union, the United States, the PRC, …).

Despite the implied silliness of a single mobile operator deploying a satellite constellation for a single Western European country (even a fairly large one), the above example serves two purposes: (1) it illustrates how economically inefficient rural mobile networks are, such that even a fairly expansive satellite constellation could be more favorable (keep in mind that most countries have 3 or 4 such networks), and (2) it shows that sharing the economics of a LEO satellite constellation over a larger areal footprint may make such a strategy very attractive economically for operators.

Due to the path loss at 550 km (LEO) being substantially higher than at 20 km (stratosphere), all else being equal, the signal quality of the stratospheric broadband drone would be significantly better than that of the LEO satellite. However, designing the LEO satellite with more powerful transmitters and sensitive receivers can compensate for the factor of almost 30 in altitude difference to a certain extent. Clearly, the latency performance of the LEO satellite constellation would be inferior to that of the stratospheric drone-based platform due to the significantly higher operating altitude.
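The altitude penalty above can be quantified with the free-space path loss, which grows as 20·log10 of distance. A minimal sketch, assuming nadir distances (i.e., a user directly below the platform) and an illustrative 2 GHz cellular band:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light (m/s)
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f_hz = 2e9  # illustrative cellular frequency
delta_db = fspl_db(550_000, f_hz) - fspl_db(20_000, f_hz)
print(f"LEO vs stratosphere path-loss penalty: {delta_db:.1f} dB")  # ≈ 28.8 dB
```

The frequency cancels in the difference: the ~27.5× distance ratio alone costs the LEO satellite roughly 29 dB of link budget relative to the stratospheric drone, which is what the more powerful transmitters and more sensitive receivers must compensate for.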

It is, however, the capacity rather than the shared cost that could be the stumbling block for LEOs. For a rural cellular network or a stratospheric drone platform, the MNOs effectively have “control” over the Capex of the network, whether it be the RAN element of a terrestrial network or the cost of the whole drone network (even if, in the future, the latter might become a shared cost).

However, for the LEO constellation, we think the economics of a single MNO building a LEO constellation, even for its own market, is almost entirely out of the question (i.e., a multiple-€bn Capex outlay). Hence, in this situation, the MNOs will rely on a global LEO provider (e.g., Starlink or AST SpaceMobile) and will “lend” their spectrum to that provider in their respective geography in order to provide service. Like the HAPs, this will also require further regulatory approvals in order to free up terrestrial spectrum for satellites in rural areas.

We do not yet have visibility of the payments the LEOs will require, so this could potentially be an even lower-cost alternative to rural networks. But as we show below, we think the real limitation for LEOs may not be the shared capacity rental cost, but that there simply won’t be enough capacity available to replicate what a terrestrial network can offer today.

In contrast, the stratospheric drone-based platform provides near-ideal cellular performance to the consumer, close to the theoretical peak performance of a terrestrial cellular network. It should be emphasized that the theoretical peak cellular performance is typically only experienced, if at all, by consumers very near the terrestrial cellular antenna in a near free-space propagation environment, a very rare occurrence for the vast majority of mobile consumers.

Figure 7 summarizes the above comparison between a rural terrestrial cellular network and non-terrestrial cellular networks such as LEO satellites and stratospheric drones.

Figure 7 Illustrating a comparison between terrestrial cellular coverage with stratospheric drone-based (“Antenna-in-the-sky”) cellular coverage and Low Earth Orbit (LEO) satellite coverage options.

While the majority of the 5,500+ Starlink constellation operates in the Ku-band (ca. 13 GHz), at the beginning of 2024, SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., a smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, is texting capability in areas with no or poor existing cellular coverage across the USA. This is fairly similar to services presently offered in similar coverage situations by, for example, AST SpaceMobile, OmniSpace, and Lynk Global LEO satellite services, with reported maximum speeds approaching 20 Mbps. The so-called Direct-2-Device service, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years, continuing to increase the supported user speeds (i.e., utilized terrestrial cellular spectrum) and the system capacity in terms of smaller coverage areas and a higher number of satellite beams.

Table 1 below provides an overview of the top 10 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or direct-to-cell) services. The data has been compiled from the NewSpace Index website, with data as of the 31st of December 2023. The top-10 satellite constellation rank is based on the number of satellites launched by the end of 2023. Two additional Direct-2-Cell (D2C, or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024–2025. One is SpaceX’s Starlink 2nd generation, launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks) targets 5G services, and two 5G constellations have already launched: Galaxy Space (Yinhe) has launched 8 LEO satellites with 1,000 planned, using Q- and V-bands (i.e., not a D2D cellular 5G service), and OmniSpace has launched two satellites with 200 planned in total. Moreover, there is currently one planned constellation targeting 6G, by the South Korean Hanwha Group (a bit premature, but interesting nevertheless), with 2,000 6G LEO satellites planned. Most currently launched and planned satellite constellations offering (or planning to provide) Direct-2-Cell services, including IoT and M2M, are designed for low-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good (or better) coverage exists.

In the table below, we then show five different services, with the key input variables being cell radius, spectral efficiency, and downlink spectrum. From this, we can derive the “average” capacity per square kilometer of rural coverage.

We focus on this metric as the best measure of the capacity available once multiple users are on the service and the available spectrum is shared. This is different from “peak” speeds, which are only relevant with very few users per cell.

  • We start with terrestrial cellular today for bands up to 2.1GHz and show that assuming a 2.5km cell radius, the average capacity is equivalent to 11Mbps per sq.km.
  • For a LEO service using Ku-band, i.e., with 250MHz to an FWA dish, the capacity could be ca. 2Mbps per sq.km.
  • For a LEO-based D2D device, what is unknown is what the ultimate spectrum allowance could be for satellite services with cellular spectrum bands, and spectral efficiency. Giving the benefit of the doubt on both, but assuming the beam radius is always going to be larger, we can get to an “optimistic” future target of 2Mbps per sq. km, i.e., 1/5th of a rural terrestrial network.
  • Finally, we show for a stratospheric drone, that given similar cell radius to a rural cell today, but with higher downlink available and greater spectral efficiency, we can reach ca. 55Mbps per sq. km, i.e. 5x what a current rural network can offer.
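The per-square-kilometer logic in the bullets above is simply shared downlink cell capacity divided by cell area. The exact spectrum and spectral-efficiency inputs from the table are not reproduced here, so the values below are illustrative assumptions chosen to reproduce the quoted ~11 and ~55 Mbps per sq. km:

```python
import math

def capacity_per_km2(radius_km: float, spectrum_mhz: float, eff_bps_hz: float) -> float:
    """Average capacity per sq. km = (downlink spectrum * spectral efficiency) / cell area."""
    cell_capacity_mbps = spectrum_mhz * eff_bps_hz        # MHz * bps/Hz -> Mbps
    return cell_capacity_mbps / (math.pi * radius_km**2)  # shared over the cell footprint

# Illustrative, assumed inputs (not the table's exact figures):
terrestrial = capacity_per_km2(2.5, 120, 1.8)   # sub-2.1GHz bands, 2.5 km cell radius
drone       = capacity_per_km2(2.5, 270, 4.0)   # more downlink spectrum, higher efficiency
print(f"terrestrial ≈ {terrestrial:.0f}, drone ≈ {drone:.0f} Mbps per sq. km")
```

The same function also shows why LEO D2D struggles: holding spectrum and efficiency fixed, a beam radius several times larger than a rural cell divides the same capacity over a much larger footprint.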

INTEGRATING WITH 5G AND BEYOND.

The advent of 5G, and eventually 6G, technology brings another dimension to the utility of stratospheric drones delivering mobile broadband services. The high-altitude platforms’ ability to seamlessly integrate with existing 5G networks makes them an attractive option for expanding coverage and enhancing network capacity at superior economics, particularly in rural areas where the economics of terrestrial-based cellular coverage tend to be poor. Unlike terrestrial networks, which require extensive groundwork for a 5G rollout, the non-terrestrial network operator (NTNO) can rapidly deploy stratospheric drones to provide immediate 5G coverage over large areas. The high-altitude platform is also incredibly flexible compared to both LEO satellite constellations and conventional rural cellular networks. The platform can easily be upgraded during its ground maintenance window and enhanced as the technology evolves. For example, upgrading to and operationalizing 6G would be far more economical on a stratospheric platform than having to visit thousands of rural sites to modernize or upgrade the installed active infrastructure.

SUMMARY.

Stratospheric drones represent a significant advancement in the realm of wireless communication. Their strategic positioning in the stratosphere offers superior coverage and connectivity compared to terrestrial networks and low-earth satellite solutions. At the same time, their economic efficiency makes them an attractive alternative to ground-based infrastructures and LEO satellite systems. As technology continues to evolve, these high-altitude platforms (HAPs) are poised to play a crucial role in shaping the future of global broadband connectivity and ultra-high-availability connectivity solutions, complementing the burgeoning 5G networks and paving the way for next-generation three-dimensional communication solutions, moving away from today’s flat-earth, terrestrial-locked communication platforms.

The strategic as well as the disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article. It has the potential to make most of the rural (at least) cellular infrastructure redundant, resulting in substantial operational and economic benefits to existing mobile operators. At the same time, the HAPs could, in rural areas, provide a much better service overall in terms of availability, improved coverage, and near-ideal speeds compared to today’s cellular networks. At scale, it might also become a serious competitive and economic threat to LEO satellite constellations, such as Starlink and Kuiper, which would struggle to compete on service quality and capacity with a stratospheric coverage platform.

Although the strategic, economic, and disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article, the flight platform and advanced antenna technology are still in a relatively early development phase. Substantial regulatory work remains in terms of permitting the terrestrial cellular spectrum to be re-used above terra firma at the “Antenna-in-the-Sky”. The latest developments out of WRC-23 for Asia Pacific appear very promising, showing that we are moving in the right direction of re-using terrestrial cellular spectrum in high-altitude coverage platforms. Last but not least, operating an unmanned (autonomous) stratospheric platform involves obtaining certifications and permissions and complying with various flight regulations at both national and international levels.

Terrestrial Mobile Broadband Network – takeaway:

  • It is the de facto practice for mobile cellular networks to cover nearly 100% geographically. The mobile consumer expects a high-quality, high-availability service everywhere.
  • A terrestrial mobile network has a relatively low area coverage per unit antenna with relatively high capacity and quality.
  • Mobile operators incur high and sustained infrastructure costs, especially in rural areas with low or no return on that investment.
  • Physical obstructions and terrain limit performance (i.e., non-free space characteristics).
  • Well-established technology with high reliability.
  • High-demand, dense urban areas are where terrestrial networks’ high bandwidth and low latency excel, and matching this may become a limiting factor for LEO satellite constellations and stratospheric drone-based platforms. These are thus less likely to provide operational and economic benefits in high-demand, dense urban, and urban areas.

LEO Satellite Network – takeaway:

  • The technology is operational and improving. There is currently some competition (e.g., Starlink, Kuiper, OneWeb, etc.) in this space, primarily targeting fixed broadband and satellite backhaul services. Increasingly, new LEO satellite-based business models are being launched, providing lower-bandwidth, cellular-spectrum-based direct-to-device (D2D) text, 4G, and 5G services to regular consumer and IoT devices (e.g., Starlink, Lynk Global, AST SpaceMobile, OmniSpace, …).
  • Broader coverage, suitable for global reach. It may only make sense when the business model is viewed from a worldwide reach perspective (e.g., Starlink, OneWeb,…), resulting in much-increased satellite network utilization.
  • A LEO satellite broadband network can cover a vast area per satellite due to its high altitude. However, such systems are by nature capacity-limited, although beam-forming antenna technologies (e.g., phased-array antennas) allow better capacity utilization.
  • The LEO satellite solutions are best suited for low-population areas with limited demand, such as rural and largely unpopulated areas (e.g., sea areas, deserts, coastlines, Greenland, polar areas, etc.).
  • Much higher latency compared to terrestrial and drone-based networks. 
  • Less flexible once in orbit. Upgrades and modernization only via replacement.
  • The LEO satellite has a limited useful operational lifetime due to its lower orbital altitude (e.g., 5 to 7 years).
  • Lower infrastructure cost for rural coverage compared to terrestrial networks, but substantially higher than drones when targeting regional areas (e.g., Germany or individual countries in general).
  • Complementary to the existing mobile business model of communications service providers (CSPs), but with a substantial business risk to CSPs in low-population areas, where the satellite service may face little to no capacity limitation.
  • Requires regulatory permission (authorization) to operate terrestrial frequencies on the satellite platform over any given country. This process is overseen by national regulatory bodies in coordination with the International Telecommunication Union (ITU) as well as national regulators (e.g., FCC in the USA). Satellite operators must apply for frequency bands for uplink and downlink communications and coordinate with the ITU to avoid interference with other satellites and terrestrial systems. In recent years, however, there has been a trend towards more flexible spectrum regulations, allowing for innovative uses of the spectrum like integrating terrestrial and satellite services. This flexibility is crucial in accommodating new technologies and service models.
  • Operating a LEO satellite constellation requires a comprehensive set of permissions and certifications that encompass international and national space regulations, frequency allocation, launch authorization, adherence to space debris mitigation guidelines, and various liability and insurance requirements.
  • Both LEO and MEO satellites are likely to be complementary or supplementary to stratospheric drone-based broadband cellular networks, offering high-performing transport solutions and possibly even acting as standalone or integrated (with terrestrial networks) 5G core networks or “clouds-in-the-sky”.

Stratospheric Drone-Based Network – takeaway:

  • It is an emerging technology with ongoing research, trials, and proof of concept.
  • A stratospheric drone-based broadband network will have lower deployment costs than terrestrial and LEO satellite broadband networks.
  • In rural areas, the stratospheric drone-based broadband network offers better economics than terrestrial mobile networks and near-ideal quality. In terms of cell size and capacity, it can easily match a rural mobile network.
  • The solution offers flexibility and versatility and can be geographically repositioned as needed. The versatility provides a much broader business model than “just” an alternative rural coverage solution (e.g., aerial imaging, surveillance, defense scenarios, disaster area support, etc.).
  • Reduced latency compared to LEO satellites.
  • Also ideal for targeted or temporary coverage needs.
  • Complementary to the existing mobile business model of communications service providers (CSPs) with additional B2B and public services business potential from its application versatility.
  • Potential substantial negative impact on the telecom tower business as the stratospheric drone-based broadband network would make (at least) rural terrestrial towers redundant.
  • May disrupt a substantial part of the LEO satellite business model due to better service quality and capacity, confining the LEO satellite constellations’ revenue pool to remote areas and specialized use cases.
  • Requires regulatory permission to operate terrestrial frequencies (i.e., frequency authorization) on the stratospheric drone platform (similar to LEO satellites). Big steps have already been made at the latest WRC-23, where the frequency bands 698 to 960 MHz, 1710 to 2170 MHz, and 2500 to 2690 MHz were opened for use by HAPS operating at 20 to 50 km altitude (i.e., the stratosphere).
  • Operating a stratospheric platform in European airspace involves obtaining certifications as well as permissions and (of course) complying with various regulations at both national and international levels. This includes the European Union Aviation Safety Agency (EASA) type certification and the national civil aviation authorities in Europe.

FURTHER READING.

  1. New Street Research “Stratospheric drones: A game changer for rural networks?” (January 2024).
  2. https://hapsalliance.org/
  3. https://www.stratosphericplatforms.com/, see also “Beaming 5G from the stratosphere” (June 2023) and “Cambridge Consultants building the world’s largest commercial airborne antenna” (2021).
  4. Iain Morris, “Deutsche Telekom bets on giant flying antenna”, Light Reading (October 2020).
  5. “Deutsche Telekom and Stratospheric Platforms Limited (SPL) show Cellular communications service from the Stratosphere” (November 2020).
  6. “High Altitude Platform Systems: Towers in the Skies” (June 2021).
  7. “Stratospheric Platforms successfully trials 5G network coverage from HAPS vehicle” (March 2022).
  8. Leichtwerk AG, “High Altitude Platform Stations (HAPS) – A Future Key Element of Broadband Infrastructure” (2023). I recommend closely following Leichtwerk AG, a world champion in making advanced gliding planes. The hydrogen-powered StratoStreamer HAP is near production-ready, and they are currently working on a solar-powered platform. Germany is renowned for producing some of the best gliding planes in the world (after WWII, Germany was banned from developing and producing aircraft, military as well as civil; these restrictions were only relaxed in the 1960s). Germany has a long and distinguished history in glider development, dating back to the early 20th century. German manufacturers like Schleicher, Schempp-Hirth, and DG Flugzeugbau are among the world’s leading producers of high-quality gliders, known for their innovative designs, advanced materials, and precision engineering, contributing to Germany’s reputation in this field.
  9. Jerzy Lewandowski, “Airbus Aims to Revolutionize Global Internet Access with Stratospheric Drones” (December 2023).
  10. Utilities One, “An Elevated Approach High Altitude Platforms in Communication Strategies”, (October 2023).
  11. Rajesh Uppal, “Stratospheric drones to provide 5g wireless communications global internet border security and military surveillance”  (May 2023).
  12. Softbank, “SoftBank Corp.-led Proposal to Expand Spectrum Use for HAPS Base Stations Agreed at World Radiocommunication Conference 2023 (WRC-23)”, press release (December 2023).
  13. ITU Publication, World Radiocommunications Conference 2023 (WRC-23), Provisional Final Acts, (December 2023). Note 1: The International Telecommunication Union (ITU) divides the world into three regions for the management of radio frequency spectrum and satellite orbits: Region 1: includes Europe, Africa, the Middle East west of the Persian Gulf including Iraq, the former Soviet Union, and Mongolia, Region 2: covers the Americas, Greenland, and some of the eastern Pacific Islands, and Region 3: encompasses Asia (excl. the former Soviet Union), Australia, the southwest Pacific, and the Indian Ocean’s islands.
  14. Geoff Huston, “Starlink Protocol Performance” (November 2023). Note 2: The recommendations, such as those designated with “ADD” (additional), are typically firm in the sense that they have been agreed upon by the conference participants. However, they are subject to ratification processes in individual countries. The national regulatory authorities in each member state need to implement these recommendations in accordance with their own legal and regulatory frameworks.
  15. Curtis Arnold, “An overview of how Starlink’s Phased Array Antenna “Dishy McFlatface” works.”, LinkedIn (August 2023).
  16. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023).
  17. The Clarus Network Group, “Starlink v OneWeb – A Comprehensive Comparison” (October 2023).
  18. Brian Wang, “SpaceX Launches Starlink Direct to Phone Satellites”, (January 2024).
  19. Sergei Pekhterev, “The Bandwidth Of The StarLink Constellation…and the assessment of its potential subscriber base in the USA.”, SatMagazine, (November 2021).
  20. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  21. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  22. Shkelzen Cakaj, “The Parameters Comparison of the “Starlink” LEO Satellites Constellation for Different Orbital Shells” (May 2021).
  23. Mike Puchol, “Modeling Starlink capacity” (October 2022).
  24. Mike Dano, “T-Mobile and SpaceX want to connect regular phones to satellites”, Light Reading (August 2022).
  25. Starlink, “SpaceX sends first text message via its newly launched direct to cell satellites” (January 2024).
  26. GSMA.com, “New Speedtest Data Shows Starlink Performance is Mixed — But That’s a Good Thing” (2023).
  27. Starlink, “Starlink specifications” (Starlink.com page).
  28. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  29. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  30. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. World’s first global 5G non-terrestrial network. Initial support for the 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far, only 2 satellites launched.
  31. NewSpace Index: https://www.newspace.im/ I find this resource having excellent and up-to date information of commercial satellite constellations.
  32. Wikipedia, “Satellite constellation”.
  33. LEOLABS Space visualization – SpaceX Starlink mapping. (deselect “Debris”, “Beams”, and “Instruments”, and select “Follow Earth”). An alternative visualization service for Starlink & OneWeb satellites is the website Satellitemap.space (you might go to settings and turn on signal Intensity which will give you the satellite coverage hexagons).
  34. European Union Aviation Safety Agency (EASA). Note that an EASA Type Certificate is a critical document in the world of aviation. This certificate is a seal of approval, indicating that a particular type of aircraft, engine, or aviation component meets all the established safety and environmental standards per EASA’s stringent regulations. When an aircraft, engine, or component is awarded an EASA Type Certificate, it signifies the thorough and rigorous evaluation process it has undergone, assessing everything from design and manufacturing to performance and safety. The issuance of the certificate confirms that the product is safe for use in civil aviation and complies with the necessary airworthiness requirements, which are essential to ensure the safety and reliability of aircraft operating in civil airspace. Beyond the borders of the European Union, an EASA Type Certificate is also highly regarded globally. Many countries recognize or accept these certificates, which facilitates international trade in aviation products and contributes to the global standardization of aviation safety.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

I also owe a lot of gratitude to James Ratzer, Partner at New Street Research, for editorial suggestions, great discussions, and challenges that made the paper far better than it otherwise would have been. I would also like to thank Russel Waller, Pan-European Telecoms and ESG Equity Analyst at New Street Research, for being supportive and insistent that something be written for NSR.

I also greatly appreciate my past collaboration and the many discussions on the topic of Stratospheric Drones in particular and advanced antenna designs and properties in general that I have had with Dr. Jaroslav Holis, Senior R&D Manager (Group Technology, Deutsche Telekom AG) over the last couple of years. When it comes to my early involvement in Stratospheric Drones activities with Group Technology Deutsche Telekom AG, I have to recognize my friend, mentor, and former boss, Dr. Bruno Jacobfeuerborn, former CTO Deutsche Telekom AG and Telekom Deutschland, for his passion and strong support for this activity since 2015. My friend and former colleague Rachid El Hattachi deserves the credit for “discovering” and believing in the opportunities that a cellular broadband-based stratospheric drone brings to the telecom industry.

Many thanks to CEO Dr. Reiner Kickert of Leichtwerk AG for providing some high resolution pictures of his beautiful StratoStreamer.

Thanks to my friend Amit Keren for suggesting a great quote that starts this article.

Any errors or unclear points are solely mine and not those of the collaborators and colleagues who have done their best to support this piece.

Telco energy consumption – a path to a greener future?

To my friend Rudolf van der Berg: this story is not about how volumetric demand (bytes or bits) results in increased energy consumption (W·h). That notion is silly, as we both “violently” agree ;-). I recommend that readers also check out Rudolf’s wonderful presentation, “Energy Consumption of the Internet” (May 2023), which he delivered at the RIPE86 student event.

Recently, I had the privilege of watching a presentation by a seasoned executive about what his telco company is doing for the environment in terms of sustainability and CO2 reduction. I think the company is doing something innovative beyond compensating shortfalls by buying certificates and (mis)using green energy resources.

They are (reasonably) aggressively replacing their copper infrastructure (country stat for 2022: ~90% of HH / ~16% subscriptions) with green, sustainable fiber (country stat for 2022: ~78% / ~60%). This is an obvious strategy that results in a quantum leap in customer-experience potential and helps reduce the overall energy consumption of operating the ancient copper network.

What was missing a bit, imo, was consideration of the opportunity to phase out the HFC network (country stat for 2022: ~70%/~60%), reduce the current HFC+fibre overbuild of 1.45, and, of course, reduce the energy consumption and operational costs (and complexity) of operating two fixed broadband technologies (three if we include the copper). However, maybe understandably enough, substantial investments have been made in upgrading to DOCSIS 3.1, an investment that is possibly still somewhat removed from having been written off.

The “wtf-moment” (in an otherwise very pleasant and agreeable session) came when the speaker alluded that, as part of their sustainability and CO2-reduction strategy, the telco was busy migrating from 4G LTE to 5G, with the reasoning that 5G is 90% more energy efficient compared to 4G.

Firstly, it is correct that 5G is (in an apples-for-apples comparison!) ca. 90% more efficient in delivering a single bit than 4G. The metric we use is Joules-per-bit, or Watt-seconds-per-bit. It is also not at all uncommon to hear telco executives hinting at the relative greenness of 5G (it is, in my opinion, decidedly not a green broadband communications technology …).

Secondly, so what! Should we really care about relative energy consumption? After all, we pay for absolute energy consumption, not for some relativized measure of consumed energy.

I think I know the answer from the CFO and the in-the-know investors.

If the absolute energy consumption of 5G is higher than that of 4G, I will (most likely) have higher operational costs attributable to that increased power consumption. And the situation is rarely apples-for-apples: in practice, the 5G technology requires substantially more power to meet its new requirements and specifications, leaving me worse off in absolute monetary terms. Unless I also have higher revenue associated with 5G, I am economically worse off than I was with the older technology.

Higher information-related energy efficiency in cellular communications systems is a by-product of the essential requirement of ever-better spectral efficiency, all else being equal. It does not guarantee that, in absolute monetary terms, a telco will be better off … far from it!

THE ENERGY OF DELIVERING A BIT.

Energy, which I choose to represent in Joules, is equal to the power (in Watt, W) consumed times the time (in seconds, s) it takes to deliver a given output unit (e.g., a bit).

Take a 4G LTE base station that consumes ca. 5.0 kW to deliver a maximum throughput of 160 Mbps per sector (@ 80 MHz per sector). Assuming a standard three-sector site, the information energy efficiency of this 4G LTE base station (in W·s per bit) would be ca. 10 µJ/bit. That is, the 4G LTE base station requires 10 micro-Joules (millionths of a Joule) to deliver 1 bit (in 1 second).

In the 5G world, we would have a 5G SA base station using the same frequency bands as 4G, with an additional 10 MHz @ 700 MHz and 100 MHz @ 3.5 GHz included. The 3.5 GHz band is supported by an advanced antenna system (AAS) rather than the classical passive antenna system used for the other frequency bands. This configuration consumes 10 kW, with ~40% attributed to the 3.5 GHz AAS, supporting ~1 Gbps per sector (@ 190 MHz per sector). This example’s 5G information energy efficiency would be ca. 3.3 µJ/bit.

In this non-apples-for-apples comparison, 5G is about 3 times more efficient in delivering a bit than 4G LTE (in the example above). Regarding what an operator actually pays for, however, 5G is twice as costly in energy consumption compared to 4G.
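The 4G figure can be sanity-checked in a few lines. Note that the three-sectors-per-site assumption is mine; it is the standard site configuration and the one that reproduces the ca. 10 µJ/bit:

```python
site_power_w = 5_000                # 4G LTE site power consumption (W)
per_sector_throughput_bps = 160e6   # 160 Mbps per sector
sectors = 3                         # assumed standard 3-sector site

# Energy per bit (J/bit) = power (W) / total site throughput (bit/s)
energy_per_bit_j = site_power_w / (per_sector_throughput_bps * sectors)
print(f"{energy_per_bit_j * 1e6:.1f} µJ/bit")  # ≈ 10.4 µJ/bit
```

The same two-line calculation applied to the 5G configuration (10 kW over three ~1 Gbps sectors) is how the 5G figure above follows.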

It should be noted that power consumption is driven not by the volumetric demand as such but by the time that demand exists and the load per unit of time. Also, base stations consume power even when idle, to a degree depending on the intelligence of the energy management system applied.
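The point that consumption is driven by time and load, not by bytes, is often captured with a simple linear base-station power model. This is a generic sketch in the spirit of widely used linear models, not a figure from this article; the idle-floor and maximum power values are assumed for illustration:

```python
def site_power_w(load: float, p_idle_w: float = 2_000, p_max_w: float = 5_000) -> float:
    """Linear power model: an idle floor plus a load-proportional part (load in [0, 1])."""
    if not 0.0 <= load <= 1.0:
        raise ValueError("load must be between 0 and 1")
    return p_idle_w + load * (p_max_w - p_idle_w)

# Even a completely idle site draws the floor power:
print(site_power_w(0.0), site_power_w(0.5), site_power_w(1.0))  # 2000.0 3500.0 5000.0
```

With this model, energy over a day is the integral of power over time, regardless of how many bits were actually delivered, which is exactly why "efficiency per bit" and "absolute consumption" can move in opposite directions.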

So, more formally, we have

E per bit = P (in W) · time (in sec) per bit, or in the basic units

J / bit = W·s / bit = W / (bit/s) = W / bps = W / [ MHz · Mbps/MHz/unit · unit-quantity ]

E per bit = P (in W) / [ Bandwidth (in MHz) · Spectral Efficiency (in Mbps/MHz/unit) · unit-quantity ]

It is important to remember that this is the system-spec information efficiency; there is no direct relationship between the power your system consumes and the amount of information it will ultimately deliver bit-wise.

\frac{E_{4G}}{bit} \; = \; \frac {\; P_{4G} \;} {\; B_{4G} \; \cdot \; \eta_{4G,eff} \; \cdot N \;\;\;} and \;\;\; \frac{E_{5G}}{bit} \; = \; \frac {\; P_{5G} \;} {\; B_{5G} \; \cdot \; \eta_{5G,eff} \; \cdot N \;}

Thus, the relative efficiency between 4G and 5G is

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}}

Currently (i.e., 2023), the various components of the above are approximately within the following ranges.

\frac{P_{4G}}{P_{5G}} \; \lesssim \; 1

\frac{B_{5G}}{B_{4G}} \; > \;2

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

The power consumption of a 5G RAT is higher than that of a 4G RAT. As we add higher frequency spectrum (e.g., C-band, 6GHz, 23GHz,…) to the 5G RAT, increasingly more spectral bandwidth (B) will be available compared to what was deployed for 4G. This will increase the bit-wise energy efficiency of 5G compared to 4G, although the power consumption is also expected to increase as higher frequencies are supported.

If the bandwidth and system power consumption were the same for both radio access technologies (RATs), then the relative information energy efficiency would be

\frac{E_{4G}/bit}{E_{5G}/bit} \; \approx \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \gtrsim \; 10

depending on the relative difference in spectral efficiency. 5G is specified and designed to have at least ten times (10×) the spectral efficiency of 4G. If you do the math (assuming apples-to-apples applies), it is no surprise that 5G is specified to require ~90% less energy to deliver a bit (in a given unit of time) compared to 4G LTE.

And just to emphasize the obvious,

E_{RAT} \; = \; P_{RAT} \; \cdot \; t \; \approx \; E_{idle} \; + \; P_{BB, RAT} \; \cdot \; t \; +\sum_{freq}P_{freq,\; antenna\; type}\; \cdot \; t_{freq} \;

RAT refers to the radio access technology, BB is the baseband, freq the cellular frequencies, and idle to the situation where the system is not being utilized.

Volume in Bytes (or bits) does not directly relate to energy consumption. As frequency bands are added to a sector (of a base station), the overall power consumption will increase. Moreover, the more computing is required in the antenna, such as for advanced antenna systems, including massive MIMO antennas, the more power will be consumed in the base station. And the longer the frequency bands are utilized, the higher the power consumption will be.
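
The energy model above can be sketched in code. All power figures below are made-up placeholders (not measurements), purely to illustrate that energy follows the active time per band and the configuration, rather than the transported bytes:

```python
# Illustrative sketch of the site energy model:
#   E ≈ E_idle + P_baseband·t + Σ_freq P_freq·t_freq
# All kW values are assumed placeholders for illustration only.

def site_energy_kwh(hours: float, band_active_hours: dict[str, float]) -> float:
    """Energy (kWh) over a period, driven by time-in-use, not by bytes."""
    p_idle_kw = 1.0      # idle floor of the site (assumed)
    p_baseband_kw = 0.5  # baseband processing while the site is on (assumed)
    # Incremental per-band power while the band carries load (assumed):
    p_band_kw = {"700MHz": 0.3, "1800MHz": 0.6, "3500MHz_AAS": 4.0}
    energy = (p_idle_kw + p_baseband_kw) * hours
    for band, t in band_active_hours.items():
        energy += p_band_kw[band] * t
    return energy

# Two days could carry similar data volumes, yet energy differs because
# it tracks the hours each band (notably the AAS band) is active.
busy = site_energy_kwh(24, {"700MHz": 24, "1800MHz": 20, "3500MHz_AAS": 12})
quiet = site_energy_kwh(24, {"700MHz": 24, "1800MHz": 6, "3500MHz_AAS": 2})
print(busy, quiet)   # ≈ 103.2 kWh vs ≈ 54.8 kWh
```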

Indirectly, as the cellular system is being used, customers consume bits and bytes (= 8 bits) at a rate that depends on the effective spectral efficiency (in bps/Hz), the amount of effective bandwidth (in Hz) experienced by the customers (e.g., many customers will be in a coverage situation where they may not benefit from higher frequency bands), and the effective time they make use of the cellular network resources. The observant reader will notice that I like the term “effective.” The reason is that customers rarely enjoy the maximum possible spectral efficiency, and not all the frequency spectrum covering customers is necessarily applied to an individual customer, depending on their coverage situation.

In the report “A Comparison of the Energy Consumption of Broadband Data Transfer Technologies (November 2021),” the authors show the energy and volumetric consumption of mobile networks in Finland over the period from 2010 to 2020. To be clear, I do not support the authors’ assertion of causation between volumetric demand and energy consumption. As I have shown above, volumetric usage does not directly cause a given power consumption level. Over the 10-year period covered by the report, they observe a 70% increase in absolute energy consumption (from 404 to 686 GWh, CAGR ~5.5%) and a factor of ~70 increase in traffic volume (~60 PB to ~4,000 PB, CAGR ~52%). One should resist the temptation to attribute the increase in energy over the period directly to the data volume increase, however tempting (note that the authors did not resist that temptation). Rudolf van der Berg has raised several issues with the approach of the above paper (as well as with much other related work) and indicated that the data and approach of the authors may not be reliable. Unfortunately, in this respect, it appears that systematic, reliable, and consistent data in the Telco industry are hard to come by (even though such data should be available to the individual telcos).
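
The growth rates quoted can be sanity-checked with a two-line calculation, using the report’s start and end values:

```python
# Sanity check of the CAGRs quoted for the Finnish data (2010 -> 2020).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

energy_cagr = cagr(404, 686, 10)    # GWh
traffic_cagr = cagr(60, 4000, 10)   # traffic volume
print(f"energy: {energy_cagr:.1%}, traffic: {traffic_cagr:.1%}")
# energy ≈ 5.4%, traffic ≈ 52%, consistent with the figures quoted above
```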

Technology change from 2G/3G to 4G, site densification, and more frequency bands can easily explain the increase in energy consumption (and each is a far better explanation than data volume). It should be noted that there are also factors that decrease power consumption over time, such as more efficient electronics (e.g., via modernization), intelligent power management applications, and, last but not least, the switching off of older radio access technologies.

The factors that drive a cell site’s absolute energy consumption are:

  • Radio access technology with new technologies generally consumes more energy than older ones (even if the newer technologies have become increasingly more spectrally efficient).
  • The antenna type and configuration, including computing requirements for advanced signal processing and beamforming algorithms (that will improve the spectral efficiency at the expense of increased absolute energy consumption).
  • Equipment efficiency. In general, new generations of electronics and systems designs tend to be more energy-efficient for the same level of performance.
  • Intelligent energy management systems that allow for effective power management strategies will reduce energy consumption compared to what it would have been without such systems.
  • The network optimization goal policy. Is the cellular network planned and optimized for meeting the demands and needs of the customers (i.e., the economic design framework) or for providing the peak performance to as many customers as possible (i.e., the Umlaut/Ookla performance-driven framework)? The Umlaut/Ookla-optimized network, maxing out on base station configuration, will observe substantially higher energy consumption and associated costs.

The absolute cellular energy consumption has continued to rise as new radio access technologies (RATs) have been introduced, irrespective of the leapfrogs in those RATs’ spectral (bits per Hz) and information-related (Joules per bit) efficiencies.

WHY 5G IS NOT A GREEN TECHNOLOGY.

Let’s first re-acquaint ourselves with the 2015 vision of the 5G NGMN whitepaper:

“5G should support a 1,000 times traffic increase in the next ten years timeframe, with energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency increase of x2000 in the next ten years timeframe.” (Section 4.2.2 Energy Efficiency, 5G White Paper by NGMN Alliance, February 2015).

The bold emphasis is my own and not in the paper itself. There is no doubt that the authors of the 5G vision paper had the ambition of making 5G a more sustainable and greener cellular alternative than had historically been the case.

So, from the above statement, we have two performance figures that illustrate the ambition of 5G relative to 4G. Firstly, we have a requirement that the 5G energy efficiency should be 2000x higher than 4G (as it was back in the beginning of 2015).

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \geq \; 2,000

or

\frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \geq \; 200

if

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

Getting more spectrum bandwidth is relatively trivial as you go up in frequency and into, for example, the millimeter-wave range (and beyond). However, getting 20+ GHz of additional, practically usable spectrum bandwidth (i.e., 200+ times the ca. 100 MHz deployed for 4G) would be rather (= understatement) ambitious.
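
The back-of-the-envelope behind that bandwidth figure can be sketched as follows, assuming a power ratio of ~1 and the 10× spectral-efficiency gain; the 100 MHz of deployed 4G bandwidth is an illustrative assumption:

```python
# NGMN target: (P4G/P5G) · (B5G/B4G) · (η5G/η4G) ≥ 2000.
target = 2000
eta_ratio = 10      # 5G vs 4G spectral efficiency (design goal)
power_ratio = 1.0   # P4G/P5G, optimistically assumed ~1

required_bw_ratio = target / (eta_ratio * power_ratio)   # ≥ 200
b4g_mhz = 100       # assumed deployed 4G bandwidth
required_b5g_ghz = required_bw_ratio * b4g_mhz / 1000
print(required_b5g_ghz, "GHz of practically usable 5G spectrum")   # 20.0
```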

And that the absolute energy consumption of the whole 5G network should be half of what it was with 4G

\frac{E_{5G}}{E_{4G}} \; = \; \frac{\; P_{5G} \; \cdot \; t\;}{\; P_{4G} \; \cdot \; t}\; \approx \; \frac{\; P_{5G} \;}{\; P_{4G} \; } \; \leq \; \frac{1}{2}

If you think about this for a moment: halving the absolute energy consumption is an enormous challenge, even within the same RAT. It requires innovation leapfrogs across the RAT’s electronic architecture, design, and the material science underlying all of it. In other words, fundamental changes are required in the RF frontend (e.g., power amplifiers, transceivers), baseband processing, DSP, DAC, ADC, cooling, control and management systems, algorithms, compute, etc.

But reality eats vision for breakfast … There is really no sign that the super-ambitious goal set by the NGMN Alliance in early 2015 is even remotely achievable, even if we gave it another ten years (i.e., to 2035). We are more than two orders of magnitude away from the visionary NGMN target, and we are almost at the 10-year anniversary of the vision paper. We more or less get the benefit of the relative difference in spectral efficiency (×10), but innovation beyond that has contributed very little to a quantum leap in bit-wise cellular energy efficiency.

I know many operators who will say that from a sustainability perspective, at least before the energy prices went through the roof, it really does not matter that 5G, in absolute terms, leads to substantial increases in energy consumption. They use green energy to supply the energy demand from 5G and pay off $CO_2$ deficits with certificates.

First of all, unless the increased cost can be recovered from customers (e.g., price plan increases), it is a doubtful economic avenue to pursue (and has a bit of a Titanic feel to it … going down together while the orchestra is playing).

Second, we should ask ourselves whether it is really okay for any industry to greedily consume sustainable, and still relatively scarce, green resources without being incentivized (or encouraged) to pursue alternatives and optimize across mobile and fixed broadband technologies. Particularly when fixed broadband technologies, such as fiber, are available that would lead to a very sizable and substantial reduction in energy consumption as customers increasingly adopt fiber broadband.

Fiber is the greenest and most sustainable access technology we can deploy compared to cellular broadband technologies.

SO WHAT?

5G is a reality. Telcos are and will continue to invest substantially in 5G as they migrate their customers from 4G LTE to what will ultimately be 5G Standalone. The increases in customer experience and new capabilities or enablers are significant. By now (i.e., 2023), most Telcos will have a very good idea of the operational expense associated with 5G (if not … you had better do the math). Some will have been exploring investing in their own green power plants (e.g., solar, wind, hydrogen, etc.) to mitigate part of the energy surge arising from transitioning to 5G.

I suspect that as Telcos start reflecting on Open RAN as they pivot towards 6G (→ 2030+), above and beyond whatever additional operational expense pain 6G, as a RAT, may bring, there will be new energy consumption and sustainability surprises for the cellular part of Telcos’ P&L. In general, breaking an electronic system up into individual (non-integrated) parts, as opposed to integrating it into a single unit, is likely to result in increased power consumption. Some of the operational inefficiencies that occur when breaking up a tightly integrated design can be mitigated by power management strategies. Though getting such power management strategies to work optimally may force a higher degree of supplier uniformity than the original intent of breaking up the tightly integrated system.

However, only Telcos that consider their mobile and fixed broadband assets together, rather than as two separate silos, will gain in value for customers and shareholders. Fixed-mobile (network) convergence should be taken seriously and may lead to very different considerations and strategies than 10+ years ago.

With increasing fiber coverage, and with Telcos stimulating aggressive uptake, it will be possible to redesign mobile networks for what they were initially supposed to do … provide convenience and service where no fixed network is present, such as when on the move, or in areas where the economics of a fixed broadband network make it least likely to be available (e.g., rural areas). LEO satellites (i.e., here today), and maybe stratospheric drones (i.e., 2030+), may offer solid economic alternatives for those places, interestingly simplifying further the cellular networks supporting those areas today.

TAKE AWAY.

Volume in Bytes (or bits) does not directly relate to the energy consumption of the underlying communications networks that enable the usage.

The duration, the time scale, of the customer’s usage (i.e., the use of the network resources) does cause power consumption.

The bit-wise energy efficiency of 5G is superior to that of 4G LTE; it is designed that way via its spectral efficiency. Despite this, a 5G site configuration is likely to consume more energy than a 4G LTE site in the field, as the two are rarely like-for-like in the number of bands and the type of antennas deployed.

The absolute power consumption of a 5G configuration is a function of the number of bands deployed, the type of antennas deployed, intelligent energy management features, and the effective time customers demand 5G resources.

Due to its optical foundation, fiber is far more energy-efficient, in both bit-wise relative terms and absolute terms, than any legacy fixed (e.g., xDSL, HFC) or cellular broadband technology (e.g., 4G, 5G).

Looking forward and with the increasing challenges of remaining sustainable and contributing to CO2 reduction, it is paramount to consider an energy-optimized fixed and mobile converged network architecture as opposed to today’s approach of optimizing the fixed network separately from the cellular network. As a society, we should expect that the industry works hard to achieve an overall reduction in energy consumption, relaxing the demand on existing green energy infrastructures.

With 5G as of today, we are orders of magnitude away from the original NGMN vision of an energy consumption of only half of what cellular networks consumed ten years ago (i.e., 2014), which required an overall energy efficiency increase of ×2000.

Be aware that many Telcos and Infrastructure providers will use bit-wise energy efficiency when they report on energy consumption. They will generally report impressive gains over time in the energy that networks consume to deliver bits to their customers. This is the least one should expect.

Last but not least, the telco world is not static and is RAT-wise not very clean, as mobile networks will have several RATs deployed simultaneously (e.g., 2G, 4G, and 5G). As such, we rarely (if ever) have apples-to-apples comparisons on cellular energy consumption.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I also greatly appreciate the discussion on this topic that I have had with Rudolf van der Berg over the last couple of years. I thank him for pointing out and reminding me (when I forget) of the shortfalls and poor quality of most of the academic work and lobbying activities done in this area.

PS

If you are aiming at a leapfrog in the absolute energy reduction of your cellular network, above and beyond what you get from your infrastructure suppliers (e.g., Nokia, Ericsson, Huawei, …), I really recommend you take a look at Opanga’s machine-learning-based Joules ML solution. Joules ML has been proven to reduce RAN energy costs by 20% – 40% on top of what the RAT suppliers’ (e.g., Ericsson, Nokia, Huawei, etc.) own energy management solutions may bring.

Disclosure: I am associated with Opanga and on their Industry Advisory Board.

The Nature of Telecom Capex – a 2023 Update.

CAPEX … IT’S PERSONAL

I built my first Telco technology Capex model back in 1999. I had just become responsible for what was then called Fixed Network Engineering, with a portfolio covering all technology engineering design & planning except for the radio access network, but including all transport aspects from access up to Core and out to the external world. I got a bit frustrated that every time an assumption changed (e.g., business/marketing/sales), I needed to involve many people in my organization to revise their Capex demand. People who were supposed to get our greenfield network rolled out to our customers. Thus, I built my first Capex model that would take the critical business assumptions, size my network (including the radio access network), and consistently assign the right Capex amounts to each category. The model allowed for rapid turnaround on revised business assumptions and a highly auditable track of changes, planning drivers, and unit prices. Since then, I have built best-practice Capex (and technology Opex) models for many Deutsche Telekom AG and Ooredoo Group entities. Moreover, I have created numerous network and business assessment and valuation models (with an eye on M&A), focusing on the technology drivers behind Capex and Opex for many different types of telco companies (30+) operating in an extensive range of market environments around the world (20+). For creating and auditing techno-economic models, and making them operational and of high quality, it has (for me) been essential to be extensively involved operationally in the telecom sector.

PRELUDE TO CAPEX.

Capital investments, or Capital Expenditures, or just Capex for short, make Telcos go around. Capex is the monetary means your Telco uses to acquire, develop, upgrade, modernize, and maintain tangible, and in some instances intangible, assets and infrastructure. We find Capex back under “Property, Plant, and Equipment” (PP&E) on a company’s balance sheet and in the investing activities of its cash flow statement. Typically, for an investment to be characterized as a capital expense, it needs to have a useful lifetime of at least 2 years and to be a physical or tangible asset.

What about software? A software development asset is, by definition, intangible or non-physical. However, it can, and often is, assigned Capex status, although such an assignment requires a bit more judgment (and auditorial approvals) than for a real physical asset.

The “Modern History of Telecom” (in Europe) is well represented by Figure 1, showing the fixed-mobile total telecom Capex-to-Revenue ratio from 1996 to 2025.

From 1996 to 2012, most of the European Telco Capex-to-Revenue ratio was driven by investment in mobile technology introductions such as 2G (GSM) in 1996 and 3G (UMTS) in 2000 to 2002, as well as initial 4G (LTE) investments. It is clear that investments in fixed infrastructure, particularly its modernization and enhancement, were down-prioritized until relatively recently (i.e., up to around 2010), when incumbents felt obliged to commence investing in fiber infrastructure and urgently modernize their fixed infrastructures in general. For a long time, the investment focus in the telecom industry was mobile networks and sweating the fixed infrastructure assets with attractive margins.

Figure 1 illustrates the “Modern History of Telecom” in Europe. It shows the historical development of the Western European Telecom Capex-to-Revenue ratio from 1996 to 2025. The ratio peaked at about 28% around the launch of 2G (GSM) and bottomed out in the cash crunch that followed the ultra-expensive 3G licenses and the dot-com crash of 2000. In recent years, since 2008, Capex to Revenue has been steadily increasing as 4G was introduced and fiber deployment picked up after 2010. It should be emphasized that the Capex-to-Revenue trend covers both Mobile and Fixed. It does not include frequency spectrum investments.

Across this short modern history of telecom, possibly one of the worst industry (and technology) investments has been the investment we made in 3G. In Europe alone, we invested 100+ billion euros (i.e., not included in the Figure) in 2100 MHz spectrum licenses that were supposed to give mobile customers “internet in their pockets”. Something that was only really enabled with the introduction of 4G from 2010 onwards.

Also, from 2010 onwards, telecom companies (in Europe) started to invest increasingly in fiber deployment as well as upgrading their ailing fixed transport and switching networks, focusing on enabling competitive fixed broadband services. Fiber investments have since picked up significantly within the overall telecom Capex, and I suspect it will remain so for the foreseeable future.

Figure 2 When we take the European Telco revenue (mobile & fixed) over the period 1996 to 2025, it is clear that the mobile business model quantum-leaped revenue from its inception until around 2008. After this, revenue has been in steady decline, even if improvements have been observed in the fixed part of the telco business due to the transition from voice-dominated to broadband. Source: https://stats.oecd.org/

As can be observed from Figure 1, since the telecom credit crunch between 2000 and 2003, the Capex share of revenue has steadily increased from just around 12% in 2004, right after the credit crunch, to almost 20% in 2021. Over the period from 2008 to 2021, the industry’s total revenue has steadily declined, as can be seen in Figure 2. Over the last 10 years of that period (2011-2021), mobile and fixed revenue has, on average, fallen by 4+ billion euros a year. The compound annual growth rate (CAGR) was a great +6% from the inception of 2G services in 1996 to 2008, the year of the “great recession.” From 2008 until 2021, the CAGR has been almost -2%, i.e., an annual revenue loss, for Western Europe.

What does that mean for the absolute total Capex spend over the same period? Figure 3 provides the trend of mobile and fixed Capex spending over the period. Since the “happy days” of 2G and 3G Capex spending, Capex declined rapidly after the industry had spent 100+ billion euros on 3G spectrum alone (i.e., 800+ million euros per MHz, or 4+ euros per MHz-pop) before the required multi-billion euros in 3G infrastructure. Though, after 2009, the year of the lowest Capex spend following the 3G license acquisitions, the telecom industry has steadily grown its annual total Capex spend by ca. +1 billion euros per year (up to 2021), financing new technology introductions (4G and 5G), substantial mobile radio and core modernizations (a big refresh ca. every 6-7 years), increasing capacity to continuously cope with consumer demand for broadband, fixed transport and core infrastructure modernization, and, last but not least (over the last ~8 years), an increasing focus on fiber deployment. Over the same period, from 2009 to 2021, the total revenue declined by ca. 5 billion euros per year in Western Europe.

Figure 3 Using the above “Total Capex to Revenue” (Figure 1) and “Total Revenue” (Figure 2) allows us to estimate the absolute “Total Capex” over the same period. Apart from the big Capex swings around the introduction of 2G and 3G and the sharp drop during the “credit crunch” (2000-2003), Capex has grown steadily while industry revenue has declined.

It will be very interesting to see how the next 10 years develop for the telecom industry and its capital investment. There is still a lot to be done on 5G deployment. In fact, many Telcos are just getting started with what they would characterize as “real 5G”, which is 5G Standalone at mid-band frequencies (e.g., > 3 GHz for Europe, 2.5 GHz for the USA), modernizing antenna structures from standard passive (low-order) to active antenna systems with higher-order MIMO antennas, possible mmWave deployments, and, of course, quantum-leap fiber deployment in laggard countries in Europe (e.g., Germany, UK, Greece, Netherlands, …). Around 2028 to 2030, it would be surprising if the telecom industry did not commence aggressively selling the consumer the next G. That is 6G.

At this moment, the next 3 to 5 years of capital spending are being planned out, with the aim of having the 2024 budgets approved by November or December. In principle, the long-term plans, that is, until 2027/2028, have been agreed upon in general terms. Though, with the current financial recession brewing, such plans will likely be scrutinized as well.

Over the last year, since I published this article, I have been asked whether I had any data on EBITDA over the period for Western Europe. I have spent considerable time researching this, and the below chart provides my best shot at such a view for the telecom industry in Western Europe, from the early days of mobile until today. This, however, should be taken with much more caution than the above Capex and Revenues, as individual Telcos have changed substantially over the period, both in their organizational structure and in how results have been represented in their annual reports.

Figure 4 illustrates the historical development of the EBITDA margin over the period from 1995 to 2022 and a projection of the possible trend from 2023 onwards. Caution: telcos’ corporate and financial structures (including reporting and the associated transparency into details) have changed substantially over the period. The first 10+ years are more uncertain concerning margin than the later years. Directionally, it is representative of the European Telco industry. Take Deutsche Telekom AG: it “lost” 25% of its revenue between 2005 and 2015 (considering only the German & European segments). Over the same period, it shed almost 27% of its Opex.

CAVEATS

Of course, Capex to Revenue ratios, any techno-economical ratio you may define, or cost distributions of any sort are in no way the whole story of a Telco life-and-budget cycle. Over time, due to possible structural changes in how Telcos operate, the past may not reflect the present and may even be less telling in the future.

Telcos may have merged with other Telcos (e.g., Mobile with Fixed), they may have non-Telco subsidiaries (i.e., IT consultancies, management consultancies, …), they may have integrated their fixed and mobile business units, or they may have spun off their infrastructure, making use of towercos for their cell site needs (e.g., GD Towers, Vantage, Cellnex, American Towers, …), open fibercos (e.g., Fiberhost Poland, Open Dutch Fiber, …) for their fiber needs, and hyperscale cloud providers (e.g., Amazon AWS, Microsoft Azure, …) for their platform requirements. Capex and Opex will go left and right, up and down, depending on each of the above operational elements. All that may make comparing one Telco’s Capex with another Telco’s investment level and operational state of affairs somewhat uncertain.

I have dear colleagues who may be much more brutal. In general, they are not wrong but not as brutally right as their often high grounds could indicate. But then again, I am not a black-and-white guy … I like colors.

So, I believe that investment levels, or more generally cost levels, can be meaningfully compared between Telcos. Cost, be it Opex or Capex, can be estimated or modeled with relatively high accuracy, assuming you are in the know, and it can then be compared with other comparables or non-comparables. Though not by your average financial controller lacking technology knowledge and in-depth understanding.

Alas, with so many things in this world, you must understand what you are doing, including the limitations.

IT’S THAT TIME OF THE YEAR … CAPEX IS IN THE AIR.

It is the time of the year when many telcos are busy updating their business and financial planning for the following years. It is not uncommon to plan for 3 to 5 years ahead. It involves scenario planning and stress tests of those scenarios. Scenarios would include expectations of how the relevant market will evolve as well as the impact of the political and economic environment (e.g., covid lockdowns, the war in Ukraine, inflationary pressures, supply-chain challenges, … ) and possible changes to their asset ownership (e.g., infrastructure spin-offs).

Typically, between the end of the third or beginning of the fourth quarter, telecommunications businesses would have converged upon a plan for the coming years, and work will focus on in-depth budget planning for the year to come, thus 2024. This is important for the operational part of the business, as work orders and purchase orders for the first quarter of the following year would need to be issued within the current year.

The planning process can be sophisticated, involving many parts of the organization, considering many scenarios, and being almost mathematical in its planning nature. It can also be relatively simple, with top-down financial targets for the business to adhere to. In most instances, it is likely a combination of both. Of course, if you are a publicly traded company, or part of one, your past planning will generally limit how much your new planning can change from the old. That is, unless you improve upon your old plans or have no choice but to disappoint investors and shareholders (typically, though, one can always work on a good story). In general, businesses tend to be cautiously optimistic about uncertain business drivers (e.g., customer growth, churn, revenue, EBITDA) and conservatively pessimistic about business drivers of a more certain character (e.g., Capex, fixed cost, G&A expenses, people cost, etc.). All that without substantially and negatively changing plans too much from one planning horizon to the next.

Capital expense, Capex, is one of the foundations, or enablers, of the telco business. It finances the building, expansion, operation, and maintenance of the telco network, allowing customers to enjoy mobile services, fixed broadband services, TV services, etc., of ever-increasing quality and diversity. I like to look at Capex as the investments I need to incur in order to sustain my existing revenues, grow my revenues (preferably beating inflationary pressures), and finance any efficiency activities that will reduce my operational expenses in the future.

If we want to make the value of Capex to the corporation a little firmer, we need a little bit of financial calculus. We can write a company’s value (CV) as

CV \; = \; \frac{FCFF_0 \; (1 \; + \; g)}{\; WACC \; - \; g \; }

With g being the expected growth rate in free cash flow in perpetuity, WACC the Weighted Average Cost of Capital, and FCFF the Free Cash Flow to the Firm (i.e., company), which we can write as follows:

FCFF = NOPLAT + Depreciation & Amortization (DA) – ∆ Working Capital – Capex,

with NOPLAT being the Net Operating Profit Less Adjusted Taxes (i.e., EBIT – Cash Taxes). So if I have two different Capex budgets, with everything else staying the same despite the difference in Capex (if only real life were so easy, right?):

CV_X \; - \; CV_Y \; = \; \Delta Capex \; \left[ \frac{1 \; + \; g}{\; WACC \; - \; g \;} \right]

assuming that everything except the proposed Capex remains the same. With a difference of, for example, 10 million euros, a future growth rate g = 0% (maybe conservative), and a WACC of 5%, the above formula tells us that the investment plan with 10 million euros less Capex would be 200 million euros more valuable (20× the Capex not spent). (Note: you can find the latest average WACC data for the industry here, updated regularly by New York University’s Leonard N. Stern School of Business. The 5% chosen here serves as an illustration only; it was approximately representative of Telco Europe back in 2022, while as of July 2023 it was slightly above 6%. You should always choose the weighted average cost of capital applicable to your context.)

Anyone with a bit of (hands-on!) experience in budget business planning will know that the above valuation logic should be taken with a mountain of salt. If you have two Capex plans with no positive difference in business or financial value, you should choose the plan with less Capex (and don’t count yourself rich on what you did not do).

Of course, some topics may require Capex without obvious benefits to the top or bottom line. Such examples are easy to find, e.g., regulatory requirements or geo-political risks may force investments that appear valueless or even value-destructive. Those require meticulous consideration, and timing may often play a role in optimizing your investment strategy around such topics. In some cases, management will create a narrative around a corporate investment decision that fits an optimized valuation, typically hedging on one-sided, inflated risks to the business if it is not done. Whatever decision is made, it is good to remember that Capex, and the resulting Opex, is in most cases a certainty. The business benefits in terms of more revenue or more customers are uncertain, as is assuming your business will be worth more in a number of years if your antennas are yellow and not green.
One may call this the “Faith-based case of more Capex.”
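The perpetuity arithmetic above is easy to sanity-check in a few lines. A minimal sketch, using the example values from the text (the function name is my own):

```python
# Continuing-value difference between two Capex plans, per the formula above:
# CV_X - CV_Y = dCapex * (1 - g) / (WACC - g).
# Inputs below are the article's illustrative example: 10M euro Capex
# difference, g = 0% growth, WACC = 5%.

def continuing_value_delta(d_capex: float, g: float, wacc: float) -> float:
    """Perpetuity-style value difference attributed to a Capex difference."""
    if wacc <= g:
        raise ValueError("WACC must exceed the growth rate g")
    return d_capex * (1.0 - g) / (wacc - g)

delta = continuing_value_delta(d_capex=10e6, g=0.0, wacc=0.05)
print(f"Value difference: {delta / 1e6:.0f}M euro")  # 200M euro, i.e., 20x the Capex not spent
```

Which is exactly why the result deserves the mountain of salt: a perpetuity multiplier of 1/(WACC − g) mechanically turns any small Capex delta into a large valuation delta.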

Figure 5 provides an overview of Western European annual Fixed & Mobile Capex, Total and Service Revenues, and the Capex-to-Revenue ratio (in %). Source: New Street Research Western Europe data.

Figure 5 provides an overview of Western European telcos' revenue, Capex, and Capex-to-Revenue ratio. Over the last five years, Western European telcos have been spending at increasingly higher Capex levels. In 2021, telecom Capex was 6 billion euros (about 13%) higher than what was spent in 2017. Fixed and mobile service revenue increased by 14 billion euros, yielding a Capex-to-Service-Revenue ratio of 23% in 2021, compared to 20.6% in 2017. In most cases, only the total revenue would be reported, and if luck has its way (or you are a subscriber to New Street Research), the total Capex, thus capturing both the mobile and the fixed business, including any non-service-related revenues of the company. As defined in this article, non-service-related revenues comprise revenues from wholesale, sales of equipment (e.g., mobile devices, STBs, and CPEs), and other non-service-specific revenues. As a rule of thumb, the ratio of total to service-related revenues is usually between 1.1 and 1.3 (e.g., the last 5-year average for WEU was 1.17).

The first main driver of Western European Capex has been aggressive fiber-to-the-premise (FTTP) deployment and household fiber connectivity, typically measured in homes passed, across most of the European metropolitan footprint as well as urban areas in general. As fiber covers more and more residential households, fiber subscriptions increase as well, which requires substantial additional Capex for a fixed broadband business. Figure 6 illustrates the annual FTTP (homes passed) deployment volume in Western Europe as well as the total household fiber coverage.

Figure 6 above shows the fiber-to-the-premise (FTTP) homes passed deployed per annum from 2018 to 2021, actuals (source: the European Commission's "Broadband Coverage in Europe 2021," authored by Omdia et al.), and 2021 to 2025 projected numbers (i.e., this author's own assessment). During the period from 2018 to 2021, household fiber coverage grew from 27% to 43% and is expected to grow to at least 71% by 2026 (not including overbuild, thus counting unique households covered). The overbuild data are based on a work-in-progress model and should be seen as directional (it is difficult to get reliable data on overbuild).
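A quick back-of-envelope on the coverage figures quoted above (27% in 2018, 43% in 2021, at least 71% expected by 2026) shows that the projection implies roughly the same annual homes-passed pace as the historical one. A sketch, with all values taken from the text:

```python
# Annual percentage-point pace of household fiber coverage implied by the
# figures in the text. Illustrative back-of-envelope only.

hist_pace = (43 - 27) / (2021 - 2018)    # pp per year achieved, 2018-2021
needed_pace = (71 - 43) / (2026 - 2021)  # pp per year required to hit 71% by 2026

print(f"Historical pace: {hist_pace:.1f} pp/year")
print(f"Required pace:   {needed_pace:.1f} pp/year")
```

The two paces come out within a few tenths of a percentage point of each other, which is why the 71% projection is plausible even as deployment shifts toward harder (suburban, rural, no-aerial) territory.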

A large part of the initial deployment has been in relatively dense urban areas, often relying on aerial fiber deployment outside bigger metropolitan centers. For example, in Portugal, with close to 90% of households covered with fiber as of 2021, the existing HFC infrastructure (ducts, underground passageways, …) was a key enabler of the very fast, economical, and extensive household fiber coverage there. Although many Western European markets will be reaching or exceeding 80% fiber coverage in their urban areas, I would expect to continue to see a substantial amount of Capex attributed to fiber deployment. In fact, what is often overlooked in assessing the Capex volume committed to fiber deployment is that the unit-Capex is likely to increase substantially as countries with no aerial-deployment option pick up their fiber rollout pace (e.g., Germany, the UK, the Netherlands) and countries with already relatively high fiber coverage go increasingly suburban and rural.

Figure 7 above shows the total fiber-to-the-premise (FTTP) households remaining per annum from 2018 to 2021, actuals (source: the European Commission's "Broadband Coverage in Europe 2021," authored by Omdia et al.). The 2022 to 2030 projected remaining households are based on the author's own assessment and do not consider overbuild.

The second main driver is in the domain of mobile network investment. 5G radio access deployment was a major driver in 2020 and 2021 and is expected to continue to contribute significantly to mobile operators' Capex in the coming five years. For most Western European operators, the initial 5G deployment was at 700 MHz, which provides very good 5G coverage but, due to the limited spectral bandwidth available there, not very impressive speeds, unless combined with a solid pre-existing 4G network. The deployment of 5G at 700 MHz has had a fairly modest effect on mobile Capex (apart from what operators had to pay out in the 5G spectrum auctions to acquire the spectrum in the first place). Some mobile networks would already have been prepared to accommodate the 700 MHz spectrum, supported by existing lower-order or classical antenna infrastructure. In 2021 and going forward, we will see an increasing part of the mobile Capex being allocated to 3.X GHz deployment. Far more sophisticated antenna systems, which coincidentally also are far more costly in unit-Capex terms, will be taken into use, such as higher-order MiMo antennas, from 8×8 passive MiMo to 32×32 and 64×64 active antenna systems. These advanced antenna systems will be deployed widely in metropolitan and urban areas. Some operators may even deploy these costly but very high-performing antenna systems in suburban and rural clutter with the intention of providing fixed-wireless access services to areas that today, and for the next 5 – 7 years, will continue to be under-served with respect to fixed broadband fiber services.
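Why 700 MHz gives coverage but not speed follows directly from information theory: achievable throughput scales roughly linearly with carrier bandwidth. A hedged sketch using the Shannon bound; the carrier bandwidths and SNR below are my own illustrative assumptions, not figures from the text:

```python
# Shannon upper bound C = B * log2(1 + SNR), comparing a narrow low-band
# carrier with a wide mid-band carrier. Assumed (not from the article):
# 10 MHz per operator at 700 MHz, 100 MHz at 3.5 GHz, same 20 dB SNR.
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon limit in Mbps for bandwidth in MHz (upper bound, not a forecast)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr)

low_band = shannon_capacity_mbps(10, 20)    # 5G at 700 MHz
mid_band = shannon_capacity_mbps(100, 20)   # 5G at 3.5 GHz
print(f"700 MHz (10 MHz carrier):  ~{low_band:.0f} Mbps upper bound")
print(f"3.5 GHz (100 MHz carrier): ~{mid_band:.0f} Mbps upper bound")
```

The 10× bandwidth difference dominates everything else, which is why the 3.X GHz build (and its costly antenna systems) is where the mobile Capex goes.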

Overall, I would also expect mobile Capex to continue to increase above and beyond the pre-2020 level.

As an external investor with little detailed insight into individual telco operations, it can be difficult to assess whether individual businesses or the industry are investing sufficiently in their technical landscape to allow for growth and the increased demand for quality. Most publicly available financial reporting does not provide sufficient insight (if any at all) into how capital expenses are deployed or prioritized across the many facets of a telco's technical infrastructure, platforms, and services. As many telcos provide mobile and fixed services based on owned or wholesaled mobile and fixed networks (or combinations thereof), it has become even more challenging to ascertain the quality of individual telecom operations' capital investments.

Figure 8 illustrates why analysts like to plot Total Revenue against Total Capex (for fixed and mobile): the two correlate very well. Though great care should be taken not to assume causation is at work here, i.e., "if I invest X euros more, I will have Y euros more in revenue." It may tell you that you need to invest a certain level of Capex to sustain a certain level of revenue in your market context (i.e., country geo-socio-economic context). Source: New Street Research Western Europe data covering the following countries: AT, BE, DK, FI, FR, DE, GR, IT, NL, NO, PT, ES, SE, CH, and UK.

Why bother with revenues from telco services? These would typically drive and dominate the capital investments and, as such, should relate strongly to the Capex plans of telcos. It is customary to benchmark capital spending by the Capex-to-Revenue ratio (see Figure 8), indicating how much a business needs to invest into infrastructure and services to obtain a certain income level. If nothing is stated, the revenue used for the Capex-to-Revenue ratio would be total revenue. For telcos with both fixed and mobile businesses, it is a very high-level KPI that does not allow for too many insights (in my opinion). It requires some de-averaging to become more meaningful.
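One simple de-averaging step is converting a Capex-to-total-revenue figure into a Capex-to-service-revenue figure, using the total-to-service factor of ~1.17 mentioned earlier. A sketch with illustrative input numbers (not actuals from the figures):

```python
# Convert a Capex-to-total-revenue view into Capex-to-service-revenue, using
# the rule-of-thumb total/service revenue factor (~1.17, 5-year WEU average
# cited in the text). Capex and total revenue below are illustrative.

def capex_to_service_revenue(capex: float, total_revenue: float,
                             total_to_service: float = 1.17) -> float:
    """Capex ratio against service revenue implied by reported total revenue."""
    service_revenue = total_revenue / total_to_service
    return capex / service_revenue

ratio = capex_to_service_revenue(capex=52e9, total_revenue=265e9)
print(f"Capex to service revenue: {ratio:.1%}")
```

The service-revenue-based ratio is always higher than the headline total-revenue ratio by the same ~1.1 to 1.3 factor, which matters when comparing operators that report on different bases.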

THE TELCO TECHNOLOGY FACTORY

Figure 9 (below) illustrates the main capital investment areas and cost drivers for telecommunications operations with either a fixed broadband network, a mobile network, or both. Typically, around 90% of the capital expenditures will be invested into the technology factory comprising network infrastructure, products, services, and everything associated with information technology. The remaining ca. 10% will be spent on non-technical infrastructure, such as shops, office space, and other non-tech tangible assets.

Figure 9 Telco Capex is spent across physical (or tangible) infrastructure assets, such as communications equipment, the brick & mortar that hosts the equipment, and staff. Furthermore, a considerable amount of a telco's Capex will also go to human development work, e.g., for IT and products & services, either carried out directly by own staff or by third parties (i.e., capitalized labor). The above illustrates the macro-levels that make up a mobile or fixed telecommunications network and the most important areas Capex will be allocated to.

If we take the helicopter view of a telco's network, we have the customer's devices, either mobile devices (e.g., smartphone, Internet of Things, tablet, …) or fixed devices, such as the customer premise equipment (CPE) and set-top box. Typically, the broadband network connection to the customer's premise would require a media converter or optical network terminator (ONT). For a mobile network, we have a wireless connection between the customer device and the radio access network (RAN), the cellular network's southernmost point (or edge). The radio access technology (e.g., 3G, 4G, or 5G) is a very important determinant of the customer experience. For a fixed network connection, we have fiber, coax (cable), or copper connecting the customer's premise and the fixed network (e.g., street cabinet). Access (in general) follows the distribution and concentration of the customers' locations, and the traffic they generate is aggregated increasingly as we move north, up towards and into the core network. In today's modern networks, big-fat-data broadband connections interconnect with the internet and big public data centers hosting both 3rd-party and operator-provided content, services, and applications that the customer base demands. In many existing networks, data centers inside the operator's own "walls" will likewise have service and application platforms that provide customers with more of the operator's services. Such private data centers, including what are called micro data centers (μDCs) or edge DCs, may also host 3rd-party content delivery networks that enable higher-quality content services for a telco's customer base due to a higher degree of proximity to where the customers are located, compared to internet-based data centers (which could be located anywhere in the world).

Figure 10 breaks out the details of a mobile as well as a fixed (fiber-based) network's infrastructure elements, including the customers' various types of devices.

Figure 10 illustrates that, on a helicopter level, a fixed and a classical mobile network structure are reasonably similar, with the main difference being that one network carries the mobile traffic and the other the fixed traffic. The traffic in the fixed network tends to be at least ten times larger than in the mobile network. They mainly differ in the access node and how it connects to the customer. For fixed broadband, the physical connection is established between, for example, the OLT (Optical Line Terminal) in the optical distribution network and the ONT (Optical Network Terminal) at the customer's home via a fiber line (i.e., wired). The wireless connection for mobile is between the Radio Node's antenna and the end-user device. Note: AAS: Advanced Antenna System (e.g., MiMo, massive-MiMo), BBU: Base-band unit, CPE: Customer Premise Equipment, IOT: Internet of Things, IX: Internet Exchange, OLT: Optical Line Termination, and ONT: Optical Network Termination (same as ONU: Optical Network Unit).

From Figure 10 above, it should be clear that there are a lot of similarities between the mobile and fixed networks, with the biggest difference being that the mobile access network establishes a wireless connection to the customer's devices, versus the fixed access network's physically wired connection to the device situated at the customer's premises.

This is good news for fixed-mobile telecommunications operators, as these will have considerable architectural and, thus, investment synergies due to those similarities. Although the sad truth is that even today, many fixed-mobile telco companies, particularly incumbents, remain far away from having achieved fixed-mobile network harmonization and convergence.

Moreover, there are many questions and concerns when it comes to our industry's Capex plans: What Capex is required to accommodate data growth? Do existing budgets allow for sufficient network densification (to accommodate growth and quality)? What is the Capex trade-off between frequency spectrum acquisition, antenna technology, and site densification? How much Capex is justified to pursue the best network in a given market? What is the suitable trade-off between investing in fiber to the home and aggressive 5G deployment? Should (incumbent) telcos pursue fixed wireless access (FWA), and how would that impact their capital plans? What is the right antenna strategy? And so on.

On a high level, I will provide guidance on many of the above questions in this article and in forthcoming ones.

THE CAPEX STRUCTURE OF A TELECOM COMPANY

When taking a macro look at Capex without yet having a good idea about the breakdown between mobile and fixed investment levels, we are helped by the fact that, on a macro level, the Capex categories are similar for a fixed and a mobile network. Apart from the last mile (access), which in a fixed network is a fixed line (e.g., fiber, coax, or copper) and in a mobile network a wireless connection, the rest is comparable in nature and function. This is not surprising, as a business with a fixed-mobile infrastructure would (should!) leverage the commonalities in transport and part of the access architecture.

In the fixed business, devices required to enable services on the fixed-line network at the fixed customers' homes (e.g., CPE, STB, …) are a capital expense, driven by new customers and device replacement. This is not the case for mobile devices (i.e., those are an operational expense).

Figure 11 above illustrates the major Capex elements and their distribution, defined by the median, the lower and upper quartiles (the box), and the lower and upper extremes (the whiskers), of what one should expect of various elements' contribution to telco Capex. Note: CPE: Customer Premise Equipment, STB: Set-Top Box.

Customer premise equipment (CPE) & set-top box (STB) investments are between 10% to 20% of the Telecom Capex.

The capital investment level in customer premise equipment (CPE) depends on the expected growth in the fixed customer base and the replacement of old or defective CPEs already in the fixed customer base. We would generally expect this to make out between 10% to 20% of the total Capex of a fixed-mobile telco (and 0% in a mobile-only business). When migrating from one access technology (e.g., copper/xDSL or coaxial cable being phased out) to another (e.g., fiber or hybrid coaxial cable), more Capex may be required. Similar considerations apply to set-top box (STB) replacement due to, for example, a new TV platform, non-compliance with new requirements, etc. Many Western European incumbents are phasing out their extensive and aging copper networks and replacing those with fiber-based networks. While incumbents may have substantial capital requirements from phasing out their legacy copper-based access networks, the capital burden may also fall on competitor telcos in markets where this is happening, if they have a significant copper-based wholesale relationship with the incumbent.

In summary, over the next five years, we should expect an increase in CPE-based Capex due to the legacy copper phase-out of incumbent fixed telcos. This will also increase the capital pressure in the transport and access categories.

CPE & STB Capex KPIs: Capex share of Total and Capex per Gross Added Customer.

Capex modeling comment: Use your customer forecast model as the driver for new CPEs. Your research should give you an idea of the price range of CPEs used by your target fixed broadband business. Always include both CPE replacement in the existing base and the gross adds for the new CPEs. Many fixed broadband retail businesses have been conservative in the capabilities of the CPEs they have offered to their customer base (e.g., low-end cheaper CPEs, poor WiFi quality, ≤1Gbps), and it should be considered that these may not be sufficient for customer demand in the following years. An incumbent with a large installed base of xDSL customers may also have a substantial migration (to fiber) cost, as CPEs are required to be replaced with fiber-capable CPEs. Due to the current supply chain and delivery issues, I would assume that operators would be willing to pay a premium for getting critical stock as well as for having priority delivery as stock becomes available (e.g., by more expensive shipping means).
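The CPE driver model described above can be sketched in a few lines; all parameter values (base size, gross adds, replacement rate, unit price, migration volume) are illustrative assumptions, not benchmarks:

```python
# Minimal CPE Capex driver model: new CPEs for gross-added customers, plus a
# replacement share of the installed base (failures, obsolescence), plus any
# copper-to-fiber migration swaps. All inputs are illustrative assumptions.

def cpe_capex(installed_base: int, gross_adds: int,
              replacement_rate: float, unit_price_eur: float,
              migration_units: int = 0) -> float:
    """Annual CPE Capex in euros."""
    replacements = installed_base * replacement_rate
    return (gross_adds + replacements + migration_units) * unit_price_eur

capex = cpe_capex(installed_base=1_000_000, gross_adds=80_000,
                  replacement_rate=0.10, unit_price_eur=60,
                  migration_units=50_000)
print(f"Annual CPE Capex: {capex / 1e6:.1f}M euro")
```

Note how the migration term can rival the gross-adds term for an incumbent mid-way through a copper phase-out, which is the point made above.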

Core network & service platform investments, including data centers, are between 8% to 12% of the Telecom Capex.

Core network and service platforms should not take up more than 10% of the total Capex. We would regard anything less than 5% or more than 15% as an anomaly in capital prioritization. This said, over the next couple of years, many telcos with mobile operations will launch 5G standalone core networks, which is a substantial change to the existing core network architecture. This also raises the opportunity for lifting and shifting workloads from monolithic systems or older cloud frameworks to cloud-native architectures, and possibly migrating certain functions onto public cloud domains from one or more hyperscalers (e.g., AWS, Azure, Google). As workloads are moved away from telco-owned data centers and own monolithic core systems, the telco technology cost structure may change: what was previously a substantial capital expense becomes an operational expense. This is particularly true for software-related development and licensing.

Another core network & service platform Capex pressure point may come from political or investor pressure to replace Chinese network elements, often far removed from obsolescence and performance issues, with non-Chinese alternatives. This may raise the Core network Capex level for the next 3 to 5 years, possibly beyond 12%. Alas, this would be temporary.

In summary, the following topics would likely be on the Capex priority list:

1. Life-cycle management investments (I like to call this Business-as-Usual demand) into software and hardware maintenance, end-of-life replacements, growth (software licenses, HW expansions), and miscellaneous topics. This area tends to dominate the Capex demand unless larger transformational projects exist. It is also the first area to be de-prioritized if required. Working with Priority 1, 2, and 3 categorizations is a good capital planning methodology, where Priority 1 is required within the following budget year, Prio 2 is important but can wait until year two without building up too much technical debt, and Prio 3 is nice to have and not expected to be required within the two subsequent budget years.

2. 5G (Standalone, SA) Core Network deployment (timeline: 18 – 24 months).

3. Network cloudification, initially lift-and-shift with a subsequent cloud-native transformation. The trigger point will be enabling the deployment of the 5G standalone (SA) core. Operators will also take the opportunity to clean up their data centers and network core locations (timeline: 24 – 36 months).

4. Although edge computing data centers (DC) typically are supposed to support the radio access network (e.g., for Open-RAN), the capital assignment would sit with the core network, as the expertise resides there. The intensity of this Capex (if built by the operator; otherwise, it would be Opex) will depend on the country's size and fronthaul/backhaul design. The investment trigger point would generally be Open-RAN deployment (e.g., 1&1 & Telefonica Germany). The edge DC (or μDC) would most likely be standard container-sized (or half that size) and could easily be provided by an independent towerco or specialized edge-DC 3rd-party providers, lessening the Capex required from the telco. For smaller geographies (e.g., Netherlands, Denmark, Austria, …), I would not expect this item to be a substantial topic in Capex plans, especially if Open-RAN is not pursued by mainstream incumbent telcos over the next 5 – 10 years.

5. Chinese supplier replacement. The urgency would depend on regulatory pressure, whether compensation is provided (unlikely) or not, and the obsolescence timeline of the infrastructure in question. Given the high quality at very affordable economics, I expect this not to have the biggest priority and will be executed within timelines dictated more by economics and obsolescence timelines. In any case, I expect that before 2025 most European telcos will have phased out Chinese suppliers from their Core Networks, incl. any Service platforms in use today (timeline: max. 36 months).

6. Cybersecurity investments strengthen infrastructure, processes, and vital data residing in data centers, service platforms, and core network elements. I expect a substantial increase in Capex (and Opex) arising from the telco’s focus on increasing the cyber protection of their critical telecom infrastructure (timeline: max 18 months with urgency).
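The Priority 1/2/3 methodology from item 1 above amounts to a simple greedy allocation: fund Prio 1 fully, then fill the remaining budget with Prio 2 before Prio 3. A sketch with hypothetical demands and amounts (all names and figures are illustrative):

```python
# Greedy sketch of Prio 1/2/3 capital planning: sort demands by priority and
# fund them in order while budget remains. Demands and budget (in millions of
# euros) are hypothetical examples, not figures from the article.

def allocate_budget(demands: list[tuple[str, int, float]], budget: float):
    """demands: (name, priority 1-3, amount). Returns (funded names, leftover)."""
    funded = []
    remaining = budget
    for name, prio, amount in sorted(demands, key=lambda d: d[1]):
        if amount <= remaining:
            funded.append(name)
            remaining -= amount
    return funded, remaining

demands = [("5G SA core", 1, 40.0), ("cloudification", 2, 25.0),
           ("edge DCs", 3, 15.0), ("cybersecurity", 1, 10.0)]
funded, leftover = allocate_budget(demands, budget=80.0)
print(funded, leftover)
```

In practice, the de-prioritization mentioned above works the same way in reverse: when the budget shrinks, Prio 3 and then Prio 2 items fall out first.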

Core Capex KPIs: Capex share of Total (knowing the share, it is straightforward to get the Capex per Revenue related to the Core), Capex per incremental demanded data traffic (in Gigabytes and in Gigabits per second), Capex per Total traffic, and Capex per customer.

Capex modeling comment: In case I have little specific information about an operator's core network and service platforms, I would tend to model it as Euro per customer, Euro per incremental customer, and Euro per incremental traffic, checking that I am not violating the Capex range this category would typically fall within (e.g., 8% to 12%). I would also have to consider obsolescence investments, taking, for example, a percentage of previously cumulated core investments. As mobile operators are in the process, or soon will be, of implementing a 5G standalone core, having an idea of the number of 5G customers and their traffic would be useful, to factor that in separately in this Capex category.
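The per-customer / per-incremental-traffic modeling approach described above might look like this; every unit cost and input is an illustrative assumption, with the result sanity-checked against the expected share range:

```python
# Core Capex model in the absence of detail: Euro per customer, Euro per
# incremental customer, Euro per incremental traffic, then a sanity check
# against the 8-12% share-of-total range (5-15% = anomaly bounds). All unit
# costs, volumes, and the total Capex below are illustrative assumptions.

def core_capex(customers: int, incr_customers: int, incr_traffic_pb: float,
               eur_per_customer: float, eur_per_incr_customer: float,
               eur_per_incr_pb: float) -> float:
    return (customers * eur_per_customer
            + incr_customers * eur_per_incr_customer
            + incr_traffic_pb * eur_per_incr_pb)

capex = core_capex(customers=5_000_000, incr_customers=200_000,
                   incr_traffic_pb=50, eur_per_customer=2.0,
                   eur_per_incr_customer=15.0, eur_per_incr_pb=30_000)
total_capex = 160e6  # assumed total telco Capex for the sanity check
share = capex / total_capex
print(f"Core Capex: {capex / 1e6:.1f}M euro ({share:.1%} of total)")
assert 0.05 <= share <= 0.15, "outside the anomaly bounds discussed above"
```

The final assert is the point: whatever unit costs you assume, the modeled result should land back inside the benchmark range, or you should be able to explain why not.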

Estimating the possible Capex spend on Edge-RAN locations, I would consider that ca. 1 μDC is needed per 450 to 700 km² of O-RAN coverage (i.e., corresponding to a fronthaul distance between the remote radio and the baseband unit of 12 to 15 km). There may be synergies between fixed broadband access locations and the need for μ-datacenters in an O-RAN deployment for an integrated fixed-mobile telco. I suspect that 3rd-party towercos, or the like, may eventually also offer this kind of site solution, possibly sharing the cost with other mobile O-RAN operators.
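The 450 to 700 km² rule of thumb above translates directly into a μDC count estimate; the coverage area below is an assumed input, not a figure from the article:

```python
# Edge micro-datacenter (uDC) count from the coverage-density rule of thumb
# in the text: one uDC per 450-700 km2 of O-RAN coverage. The planned
# coverage area is an illustrative assumption.
import math

def udc_count(coverage_km2: float, km2_per_udc: float) -> int:
    """Number of uDCs needed, rounding up (partial areas still need a site)."""
    return math.ceil(coverage_km2 / km2_per_udc)

coverage = 40_000  # km2 of planned O-RAN coverage (assumed)
low, high = udc_count(coverage, 700), udc_count(coverage, 450)
print(f"Estimated uDCs needed: {low} to {high}")
```

Multiplying the count range by an assumed per-μDC build cost then gives the Capex bracket for this line item, or the Opex bracket if a towerco builds it instead.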

Transport – core, metro & aggregation investments are between 5% to 15% of Telecom Capex.

The transport network consists of an optical transport network (OTN) connecting all infrastructure nodes via optical fiber. The optical transport network extends down to the access layer from the Core through the Metro and Aggregation layers. On top, the IP network ensures the logical connection and control flow of all data transported up- and downstream between the infrastructure nodes. As data traffic is carried from the edge of the network upstream, it is aggregated at one or several places in the network (and, of course, disaggregated in the downstream direction). Thus, the higher up in the transport network, the more bandwidth must be supported on the optical and IP layers. Most of the Capex investment goes to ensuring that sufficient optical and IP capacity is available to support the growth projections and new service requirements of the business, and that no bottlenecks occur that could have disastrous consequences for customer experience. This mainly comes down to adding cards and ports to already installed equipment, and upgrading & replacing equipment as it reaches capacity or quality limitations or eventually becomes obsolete. There may also be software license fees associated with growth or the introduction of new services that need to be considered.

Figure 12 above illustrates (high-level) the transport network topology with the optical transport network and IP networking on top. Apart from optical and IP network equipment, this area often includes investments into IP application functions and related hardware (e.g., BNG, DHCP, DNS, AAA RADIUS servers, …), which are not shown in the above. In most cases, the underlying optical fiber network would already be present and sufficiently scalable, not requiring substantial Capex apart from some repair and minor extensions. Note DWDM: Dense Wavelength-Division Multiplexing, an optical fiber multiplexing technology that increases the bandwidth utilization of a fiber-optical network. BNG: Border Network Gateway, connecting subscribers to a network or an internet service provider's (ISP) network, important in wholesale arrangements where a 3rd party provides aggregation and access. DHCP: Dynamic Host Configuration Protocol, providing IP address allocation and client configuration. AAA: Authentication, Authorization, and Accounting of the subscriber/user. RADIUS: Remote Authentication Dial-In User Service (Server), providing the AAA functionalities.
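The dimensioning logic described above (traffic aggregated upstream, with headroom so no bottlenecks occur) can be sketched as follows; the node counts, per-subscriber busy-hour rate, and headroom factor are all my own illustrative assumptions:

```python
# Sketch of upstream capacity dimensioning: each transport layer must carry
# the aggregated busy-hour traffic of everything below it, plus headroom.
# Tree shape (assumed): 40 access nodes of 2,000 subscribers feed one metro
# node; 10 metro nodes feed one core node. Rates are illustrative.

def required_capacity_gbps(subscribers: int, busy_hour_mbps_per_sub: float,
                           headroom: float = 0.30) -> float:
    """Gbps a layer must support for its aggregated subscriber base."""
    return subscribers * busy_hour_mbps_per_sub * (1 + headroom) / 1000

access = required_capacity_gbps(2_000, 1.5)            # one access node
metro = required_capacity_gbps(40 * 2_000, 1.5)        # one metro node
core = required_capacity_gbps(10 * 40 * 2_000, 1.5)    # one core node
print(f"Access: {access:.0f} Gbps, Metro: {metro:.0f} Gbps, Core: {core:.0f} Gbps")
```

Hence the text's observation that bandwidth requirements, and the cards and ports that provide them, grow the further up the transport hierarchy you go.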

Although many telcos operate fixed-mobile networks and might even offer fixed-mobile converged services, they may still operate largely separate fixed and mobile networks. It is not uncommon to find very different transport design principles as well as supplier landscapes between fixed and mobile. The maturity, the time when each was initially built, and the technology roadmaps have historically been very different. The fixed traffic dynamics and data volumes are several times higher than those of mobile traffic. The geographical presence of fixed and mobile tends to be very different (unless the telco of interest is the incumbent with a considerable copper or HFC network). However, the biggest reason for this state of affairs has been people and technology organizations within the telcos resisting change and the much more aggressive transport consolidation that would have been possible.

The mobile traffic could (should!) be accommodated at least from the metro/aggregation layers and upstream through the core transport. There may even be some potential for consolidation of front- and backhaul that is worth considering. This would lead to supplier consolidation and organizational synergies as the technology organizations converge into one fixed-mobile engineering organization rather than two separate ones.

I would expect the share of Capex to be on the higher end of the likely range, towards 10+%, at least for the next couple of years, mainly if fixed and mobile networks are being harmonized on the transport level, which may also create an opportunity to reduce and harmonize the supplier landscape.

In summary, the following topics would likely be on the Capex priority list:

  1. Life-cycle management (business-as-usual) investments, accommodating growth, including new service and quality requirements (annual business-as-usual). There are no indications that the fixed or mobile traffic growth rates over the next five years will be very different from the past. If anything, the 5-year CAGR is slightly decreasing.
  2. Consolidating fixed and mobile transport networks (timelines: 36 to 60 months, depending on network size and geography). Some companies are already in the process of getting this done.
  3. Chinese supplier replacement. To my knowledge, there are fewer regulatory discussions and less political pressure for telcos to phase out transport infrastructure. Nevertheless, with the current geopolitical climate (and the upcoming US election in 2024), telcos need to consider this topic very carefully, despite the economic (less competition, higher cost), quality, and possibly innovation consequences that may result from a departure from such suppliers. It would be a natural consideration in case of modernization needs. An accelerated phase-out may be justified to remove future risks arising from geopolitical pressures.

While I have chosen not to include Access transport under this category, it is not uncommon to see its budget demand assigned here, as the transport side of access (fronthaul and backhaul) is technically very synergetic with the transport considerations in aggregation, metro, and core.

Transport Capex KPIs: Capex share of Total, the amount of Capex allocated to mobile-only and fixed-only (and, of course, to a harmonized/converged transport network), the utilization level (if data is available or modeled to this level), and the amount of Capex spent on fiber deployment, active and passive optical transport, and IP.

Capex modeling comment: I would see whether any information is available on the number of core data centers and aggregation and metro locations. If this information is available, it is possible to get an impression of the core, aggregation, and metro transport networks. If not, I would assume a sensible transport topology given the particularities of the country where the operator resides, considering whether the operator is an incumbent fixed operator with mobile, a mobile-only operation, or a mobile operator that later added fixed broadband to its product portfolio. If we are not talking about a greenfield operation, most, if not all, of the transport network will already be in place, and mainly obsolescence, incremental traffic, and possible transport network extensions would incur Capex. It is important to understand whether fixed-mobile operations have harmonized and integrated their transport infrastructure or largely run those independently of each other. There is substantial Capex synergy in operating an integrated transport network, although it will take time and Capex to get to that integration point.
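When node-level detail is unavailable, the modeling approach above reduces to obsolescence plus growth drivers. A minimal sketch; every parameter below is an illustrative assumption, not a benchmark:

```python
# Transport Capex without node-level detail: obsolescence modeled as a share
# of cumulated historical transport investment, plus a cost per unit of
# incremental traffic carried. All parameter values are illustrative.

def transport_capex(cumulated_invest: float, obsolescence_rate: float,
                    incr_traffic_tbps: float, eur_per_incr_tbps: float) -> float:
    """Annual transport Capex in euros."""
    return (cumulated_invest * obsolescence_rate
            + incr_traffic_tbps * eur_per_incr_tbps)

capex = transport_capex(cumulated_invest=400e6, obsolescence_rate=0.07,
                        incr_traffic_tbps=2.0, eur_per_incr_tbps=5e6)
print(f"Annual transport Capex: {capex / 1e6:.0f}M euro")
```

As with the core model, the output should be cross-checked against the 5% to 15% share-of-total range given for this category, and adjusted if the operator runs separate fixed and mobile transport (less synergy, more Capex).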

Access investments are typically between 35% to 50% of the Telecom Capex.

Figure 13 (above) is similar to Figure 8 (above), emphasizing the access part of fixed and mobile networks. I have extended the mobile access topology to capture the newer developments of Open-RAN and fronthaul requirements, with pooling ("centralizing") of the baseband (BBU) resources in an edge cloud (e.g., a container-sized computing center). Fronthaul & Open-RAN pose requirements on the access transport network. It can be relatively costly to transform a legacy backhaul-only RAN topology into an Open-RAN fronthaul-based topology. For greenfield deployments, Open-RAN and fronthaul topologies are more flexible and require less Capex and Opex.

Mobile Access Capex.

I will define mobile access (or radio access network, RAN) as everything from the antenna on the site location that supports the customers' usage (or traffic demand), via the active radio equipment (on-site or residing in an edge-cloud datacenter), through the fronthaul and backhaul transport, up to the point before aggregation (i.e., pre-aggregation). It includes passive and active infrastructure on-site, steel & mortar or storage container, front- and backhaul transport, data center software & equipment (as may be required in an edge data center), and any other hardware or software required to have a functional mobile service on whatever G is being sold by the mobile operator.

Figure 14 above illustrates a radio access network architecture that is typically deployed by an incumbent telco supporting up to 4G and 5G. A greenfield operation on 5G (and maybe 4G) could (maybe should?) choose to disaggregate the radio access node using an open interface, allowing for a supplier mix between the remote radio head (RRH and digital frontend) at the site location and the centralized (or distributed) baseband unit (BBU). Fronthaul connects the antenna and RRH with a remote BBU that is situated at an edge-cloud data center (e.g., storage container datacenter unit = micro-data center, μDC). Due to latency constraints, the distance between the remote site and the BBU should not be much more than 10 km. It is customary to name the 5G new radio node a gNB (g-Node-B) like the 4G radio node is named eNB (evolved-Node-B).

When considering the mobile access network, it is good to keep in mind that, at the moment, there are at least two main flavors (that can be mixed, of course) to consider.

(1) A classical architecture with the site’s radio access hardware and software from a single supplier, with a remote radio head (RRH) as well as digital frontend processing at or near the antenna. These radio nodes do not allow for mixing suppliers between the remote RF and the baseband. Radio nodes are connected to backhaul transmission that may be enabled by fiber or microwave radios. This option is simple and very well-proven. However, it comes with supplier lock-in and possibly less efficient use of baseband resources, as these are fixed to the radio node in which the baseband unit is installed.

(2) An open or disaggregated radio access network (O-RAN), with the antenna and RRH at the site location (the RU, radio unit, in O-RAN), connected via fronthaul (≤ 10 – 20 km distance) to a μDC that contains the baseband unit (the DU, distributed unit, in O-RAN). The μDC would then be connected to the backhaul that connects northbound to the Central Unit (CU), aggregation, and core. The open interface between the RRH (and digital frontend) and the BBU allows for mixing suppliers and for hosting the RAN-specific software on common off-the-shelf (COTS) computing equipment. It allows (in theory) for better scaling and efficiency of the baseband resources. However, the framework has not been standardized by the usual bodies of standardization (e.g., 3GPP), is not universally accepted as a common standard that all telco suppliers would adhere to, has not reached maturity yet (sort of obvious), and is currently (as of July 2022) seen to be associated with substantial cyber-security risks (re: maturity). It may be an interesting deployment model for greenfield operations (e.g., Rakuten Mobile Japan, Jio India, 1&1 Germany, Dish Mobile USA). The O-RAN options are depicted in Figure 15 below.

Figure 15 The above illustrates a generic Open RAN architecture, starting with the Advanced Antenna System (AAS) and the Radio Unit (RU). The RU contains the functionality associated with (OSI model) layer 1, partitioned such that the lower layer 1 functions stay in the RU while the upper layer 1 functions may be moved out of the RU and into the Distributed Unit (DU), connected via the fronthaul transport. The DU, which typically will be connected to several RUs, must ensure proper data link management, traffic control, addressing, and reliable communication with the RU (i.e., layer 2 functionalities). The DU connects via the mid-haul transport link to the so-called Central Unit (CU), which typically will be connected to several DUs. The CU plays an important role in the overall O-RAN architecture, acting as a central control and management vehicle that coordinates the operations of DUs and RUs, ensuring an efficient and effective operation of the O-RAN network. As may be obvious from this summary of its functionality, layer 3 functionalities reside in the CU. The Central Unit connects via backhaul, aggregation, and core transport to the core network.

For established incumbent mobile operators, I do not see Option (2) as very attractive, at least for the next 5 – 7 years, while many legacy technologies (i.e., non-5G) remain to be supported. The main concerns should be the maturity, the lack of industry-wide standardization, as well as the cost of transforming existing access transport networks into compliance with a fronthaul framework. Most likely, some incumbents, the “brave” ones, will deploy O-RAN for one or a few 5G bands and keep their legacy networks as is. Most incumbent mobile operators will choose (actually, have chosen already) conventional suppliers and the classical topology option for their 5G radio access network, as it has the highest synergy with the access infrastructure already deployed. Thus, if my assertion is correct, O-RAN will only start becoming mainstream in 5 to 7 years, when existing deployments become obsolete, and may ultimately become mass-market viable with the introduction of 6G towards the end of the twenties. The verdict is very much still out, in my opinion.

Planning the mobile radio access network’s Capex requirements is not (that) difficult. Most of it can be derived mathematically and easily assessed against growth expectations, expected (or targeted) network utilization (or efficiency), and quality. The growth expectations should come from the consumer and retail businesses’ forecast of mobile customers over the next 3 to 5 years, their expected usage or data-plan distribution (maybe including technology distributions; if the business does not care, technology should), as well as the desired level of quality (usually the best).

Figure 16 above illustrates a typical cellular planning structural hierarchy from the sector perspective. One site typically has 3 sectors. One sector can have multiple cells, depending on the frequency bands installed in the (multi-band) antennas. Massive MiMo antenna systems provide targeted cellular beams toward the user’s device that extend the range of coverage (via the beam). Very fast scheduling enables beams to be switched/cycled to other users in the covered sector (a bit oversimplified). Typically, the sector is planned according to cell utilization, thus on a frequency-by-frequency basis.

Figure 17 illustrates that most investment drivers can be approached as statistical distributions. Those distributions tell us how much investment is required to ensure that a critical parameter X remains below a pre-defined critical limit Xc with a given probability (i.e., the proportion of the distribution exceeding Xc). The planning approach will typically establish a reference distribution based on actual data. Then, based on marketing forecasts, the planners will evolve the reference according to the expected future usage that drives the planning parameter. Example: Let X be the customer’s average speed in a radio cell (e.g., in a given sector of an antenna site) in the busy hour. The business (including technology) has decided to target that 98% of its cells should provide better than 10 Mbps for more than 50% of the active time a customer uses a given cell. Typically, we will have several quality-based KPIs, and the more they are breached, the more likely it is that a Capex action is initiated to improve the customer experience.
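
The breach-probability logic of Figure 17 can be sketched numerically. Below is a minimal Python sketch that assumes, purely for illustration, that busy-hour cell speeds are lognormally distributed across cells; the median speeds and the spread are invented numbers, not benchmarks:

```python
from statistics import NormalDist
import math

def share_of_cells_breaching(median_speed_mbps: float,
                             sigma_ln: float,
                             x_critical_mbps: float) -> float:
    """Share of cells whose busy-hour user speed falls below the critical
    limit Xc, assuming speeds are lognormally distributed across cells."""
    mu = math.log(median_speed_mbps)
    # For a lognormal X: P(X < Xc) = Phi((ln Xc - mu) / sigma)
    return NormalDist(mu, sigma_ln).cdf(math.log(x_critical_mbps))

# Reference year: median cell speed 40 Mbps, log-scale spread 0.8
p_now = share_of_cells_breaching(40, 0.8, 10)
# Two years of demand growth halves the median speed to 20 Mbps
p_later = share_of_cells_breaching(20, 0.8, 10)
print(f"cells below 10 Mbps: now {p_now:.1%}, later {p_later:.1%}")
```

In practice, planners would fit the reference distribution from actual per-cell data and evolve it with the marketing forecast; once the breaching share exceeds the allowed few percent, a Capex action is triggered.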

Network planners will have access to much information down to the cell level (i.e., the active frequency band in a given sector). This helps them develop solid planning and statistical models that provide confidence in the extrapolation of the critical planning parameters as demand changes (typically increases), which subsequently drives the need for expansions, parameter adjustments, and other optimization requirements. As shown in Figure 17 above, it is customary to allow some cells to breach a defined critical limit Xc, though the share is usually kept low to ensure a given customer experience level. Examples of planning parameters could be cell (and sector) utilization in the busy hour, active concurrent users in a cell (or sector), the duration users spend at or below a speed level deemed poor in a given cell, physical resource block utilization (the famous PRB, try to ask what it stands for & what it means😉), etc.

The following topics would likely be on the Capex priority list:

  1. New radio access deployment Capex. This may be for building new sites for coverage, typically in newly built residential areas, and due to capacity requirements where existing sites can no longer support the demand in a given area. Furthermore, this Capex also covers new technology deployments such as 5G or deploying a new frequency band requiring a new antenna solution, as 3.X GHz would. As independent tower infrastructure companies (towercos) are increasingly used to provide the required passive site infrastructure solution (e.g., location, concrete, or steel masts/towers/poles), this part will not be a Capex item but will be charged back as Opex to the mobile operator. From a European mobile radio access network Capex perspective, the average cost of a total site solution, with active as well as passive infrastructure, should thereby have been reduced by ca. 100 thousand plus Euro, which may translate into a monthly Opex charge of 800 to 1,300 Euro per site solution. It should be noted that while many operators have spun off their passive site solutions to third parties and thus effectively reduced their site-related Capex, the cost of antennas has increased dramatically as operators have moved away from classical simple SiSo (Single-in Single-out) passive antennas to much more advanced antenna systems supporting multiple frequency bands, higher-order antennas (e.g., MiMo), and, recently, active antennas (i.e., with integrated amplifiers). This is largely also driven by mobile operators commissioning more and more frequency bands on their radio-access sites. The planning horizon needs to be at least 2 years and preferably 3 to 5 years.
  2. Capex investments that accommodate anticipated radio access growth and increased quality requirements. It is normal to be 18 – 24 months ahead of the present capacity demand overall, accepting no more than 2% to 5% of cells (in the busy hour) breaching a critical specification limit. Several such critical limits would be used for longer-term planning and operational day-to-day monitoring.
  3. Life-cycle management (business-as-usual) investments, such as annual software fees, including licenses that are typically structured around the technologies deployed (e.g., 2G, 3G, 4G, and 5G), and active infrastructure modernization replacing radio access equipment (e.g., baseband units, radio units, antennas, …) that has become obsolete. Site reworks or construction optimization would typically be executed (on request from the operator) by the towerco entity from which the mobile operator leases the passive site infrastructure. Thus, in such instances, it may not be a Capex item but charged back as an operational expense to the telco.
  4. Even though there has been less regulatory discussion and political pressure for telcos to phase out Chinese suppliers in the radio access network, such a replacement should be considered. With the current geopolitical climate (and the upcoming US election), telcos need to consider this topic very carefully, despite the economic (less competition, higher cost), quality, and innovation consequences that may result from a departure from such suppliers. It would be a natural consideration in case of modernization needs. An accelerated phase-out may be justified to remove future risks arising from geopolitical pressures, although it would result in above-and-beyond capital commitment over a shorter period than would otherwise be the case. Telco valuation may suffer more in the short to medium term than it would have with a more natural phase-out due to obsolescence.
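
The 18 – 24 months of capacity headroom mentioned in item 2 above can be turned into a simple runway calculation. A sketch, assuming constant compound traffic growth; the utilization figures and growth rate are illustrative assumptions:

```python
import math

def runway_years(current_util: float, critical_util: float,
                 annual_growth: float) -> float:
    """Years until busy-hour utilization breaches the critical limit,
    assuming demand compounds at a constant annual rate:
    critical = current * (1 + g)^t  =>  t = ln(critical/current) / ln(1 + g)."""
    return math.log(critical_util / current_util) / math.log(1 + annual_growth)

# A cell at 35% busy-hour utilization, a 70% critical limit,
# and 30% annual traffic growth (all invented numbers).
years = runway_years(0.35, 0.70, 0.30)
print(f"runway: {years:.1f} years")  # → runway: 2.6 years
```

A runway shorter than the 18 – 24 month planning window would put the cell on the expansion list.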

Mobile Access Capex KPIs: Capex share of Total, Access Utilization (reported/planned data traffic demand relative to the data traffic that could be supplied if all or part of the spectrum was activated), Capex per Site location, Capex per Incremental data traffic demand (in Gigabyte and Gigabit per second, which is the real investment driver), Capex per Total Traffic (in Gigabyte and Gigabit per second), Capex per Mobile Customer, and Capex to Mobile Revenue (preferably service revenue, but total revenue is fine if the other is not available). As a rule of thumb, 50% of a mobile network typically covers rural areas, which may carry less than 20% of the total data traffic.
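
Most of these KPIs are simple ratios. A small sketch with hypothetical annual inputs (all numbers are invented for illustration):

```python
def mobile_access_kpis(capex_eur: float, sites: int,
                       incremental_traffic_gb: float, total_traffic_gb: float,
                       customers: int, service_revenue_eur: float) -> dict:
    """Compute a subset of the mobile access Capex KPIs listed above."""
    return {
        "capex_per_site_eur": capex_eur / sites,
        "capex_per_incremental_gb_eur": capex_eur / incremental_traffic_gb,
        "capex_per_total_gb_eur": capex_eur / total_traffic_gb,
        "capex_per_customer_eur": capex_eur / customers,
        "capex_to_revenue": capex_eur / service_revenue_eur,
    }

# Hypothetical telco: 100M EUR access Capex, 10k sites, 200M GB of
# incremental traffic, 800M GB total, 5M customers, 1.2B EUR revenue.
kpis = mobile_access_kpis(100e6, 10_000, 200e6, 800e6, 5_000_000, 1.2e9)
for name, value in kpis.items():
    print(f"{name}: {value:,.3f}")
```

Tracked over several years, ratios like these reveal trends that a single year cannot.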

Whether actual and planned Capex is available or an analyst is modeling it, the above KPIs should be followed over an extended period. A single year does not tell much of a story.

Capex modeling comment: When modeling the Capex required for the radio access network, you need to have an idea about how many sites your target telco has. There are many ways to get to that number. In most European countries, it is a matter of public record. Most telcos nowadays rarely build their own passive site infrastructure but get it from independent third-party tower companies (e.g., CellNex w. ca. 75k locations, Vantage Towers w. ca. 82k locations, …) or site-share on another operator’s site locations if available. So, modeling the RAN Capex is a matter of having a benchmark of the active equipment and knowing what active equipment is most likely to be deployed and how much. I see this as an iterative modeling process. Given the number of sites and the historical Capex, it is possible to arrive at a reasonable estimate of both the volume of sites being changed and the range of unit Capex (given good guesstimates of the active equipment pricing range). Of course, in case you are doing a Capex review, the data should be available to you, and the exercise should be straightforward. The mobile Capex KPIs above will allow for consistency checks of a modeling exercise or guide a Capex review process.
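
The iterative modeling logic above can be reduced to a back-of-envelope sketch. The site count, the share of sites touched per year, and the unit-Capex range below are hypothetical inputs, not benchmarks:

```python
def ran_capex_range_eur(total_sites: int, share_touched_per_year: float,
                        unit_capex_low_eur: float,
                        unit_capex_high_eur: float) -> tuple:
    """Annual radio-access Capex range: sites touched per year times a
    benchmarked unit-Capex range for the active equipment."""
    sites_touched = total_sites * share_touched_per_year
    return (sites_touched * unit_capex_low_eur,
            sites_touched * unit_capex_high_eur)

# Hypothetical telco: 10,000 site locations, 15% touched per year,
# 50k-80k EUR of active equipment per site touched.
low, high = ran_capex_range_eur(10_000, 0.15, 50_000, 80_000)
print(f"annual RAN Capex: {low / 1e6:.0f}-{high / 1e6:.0f} M EUR")
# → annual RAN Capex: 75-120 M EUR
```

Dividing the result back by the number of sites, customers, or traffic yields the consistency-check KPIs listed earlier.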

I recommend using the classical topology described above when building a radio access model, unless you have information that the telco under analysis is transforming to a disaggregated topology with both fronthaul and backhaul. Remember that you are required to capture not only the Capex associated with the site location but also what is spent on the access transport. Otherwise, there is a chance that you over-estimate the unit Capex for the site-related investments.

It is also worth keeping in mind that, typically, the first place a telecom company would cut (or down-prioritize) Capex when pressured during the planning process is the radio access network category. The reason is that the site-related unitary Capex tends to be incredibly well-defined. If you reduce your rollout by 100 site-related units, you have a very well-defined quantum of Capex that can be allocated to another category. Also, the operational impact of cutting in this category tends to be very well-defined. Depending on how well the overall Capex planning has been done, there typically would be a slack of 5% to 10% overall that could be re-assigned or ultimately cut if financial results warrant such a move.

Fixed Access Capex.

As with mobile access, fixed access is about getting your service out to your customers. Or, if you are a wholesale provider, it is about providing the means for your wholesale customers to reach their customers via your fixed access transport infrastructure. Fixed access is about connecting the home, the office, the public institution (e.g., a school), or whatever type of dwelling in general.

Figure 18 illustrates a fixed access network and its position in the overall telco architecture. The following make up the ODN (Optical Distribution Network): OLT (Optical Line Termination), ODF (Optical Distribution Frame), POS (Passive Optical Splitter), and ONT (Optical Network Termination). At the customer premise, besides the ONT, we have the CPE (Customer Premise Equipment) and the STB (Set-Top Box). Suppose you are an operator that bought wholesale fixed access from another telco (incl. Open Access Providers, OAPs). In that case, you may require a BNG (Broadband Network Gateway) to establish the connection with your customer’s CPE and STB through the wholesale access network.

As fiber optical access networks are being deployed across Europe, this tends to be a substantial Capex item on the budgets of telcos. Here we have two main Capex drivers. First is the Capex for deploying fibers across urban areas, which provides coverage for households (or dwellings) and is measured as Capex per home passed. Second is the Capex required for establishing the connection to households (or dwellings). The method of fiber deployment is either buried, possibly using existing ducts or underground passageways, or aerial, using established poles (e.g., power poles or street furniture poles) or new poles deployed together with the fiber. Aerial deployment tends to incur lower Capex than buried fiber solutions as it requires less civil work. The OLT, ODF, POS, and optical fiber planning, design, and build that provide home coverage depend on the homes-passed deployment ambition. The fiber to connect a home (i.e., civil work and materials), the ONT, CPE, and STBs are driven by homes connected (or FTTH connected). Typically, CPEs and STBs are not included in the Access Capex but should be accounted for as a separate business-driven Capex item.

The network solutions (BNG, OLT, routers, switches, …) outside the customer’s dwelling come in the form of a cabinet and appropriate cards to populate the cabinet. The cards provide the capacity and serviced speed (e.g., 100 Mbps, 300 Mbps, 1 Gbps, 10 Gbps, …) sold to the fixed broadband customer. Moreover, for some of the deployed solutions, there is likely a mandatory software (incl. features) fee and possibly both optional and customer-specific features (although rare to see that in mainstream deployments). It should be clear (but you would be surprised) that the ONT and CPE should support the provisioned speed of the fixed access network. The customer cannot get more quality than the minimum level of either the ONT, the CPE, or what the ODN has been built to deliver. In other words, if the networking cards deployed only support up to 1 Gbps while your ONT and CPE support 3 Gbps or more, your customer will not be able to have a service beyond 1 Gbps; and, of course, the same applies the other way around. I cannot stress enough the importance of longer-term planning in this respect. Your network should be as flexible as possible in providing customer services. It may seem that Capex savings can be made by only deploying the capacity sold today or required by the business over the next 12 months. However, taking a 3 to 5-year view on the deployed network capacity and the ONT/CPEs provided to customers avoids having to rip out relatively new equipment or finance the significant replacement of obsolete customer premise equipment that can no longer support the services required.
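
The weakest-link point about provisioned speed can be stated in one line of code; the speeds below are purely illustrative:

```python
def deliverable_speed_mbps(line_card_mbps: int, ont_mbps: int,
                           cpe_mbps: int) -> int:
    """The customer never gets more than the weakest element in the chain:
    the networking (line) card, the ONT, or the CPE."""
    return min(line_card_mbps, ont_mbps, cpe_mbps)

# A 3 Gbps-capable ONT and CPE behind a 1 Gbps line card still yield 1 Gbps.
print(deliverable_speed_mbps(1_000, 3_000, 3_000))  # → 1000
```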

When we look at the economic drivers for fixed access, we can look at the capital cost of deploying a kilometer of fiber. This is particularly interesting if we are only interested in the fiber deployment itself and nothing else; the deployment and labor costs incurred depend on the type of clutter. It may be more interesting, though, to bundle the investment into what is required to pass a household and what is required to connect a household (after it has been passed). Thus, we look at the Capex per home (or dwelling) passed and, separately, the Capex to connect an individual customer’s premise. It is important to realize that these Capex drivers are not just a single value but depend on the household density, which in turn depends on the type of area where the deployment happens. We generally expect dense urban clutters to have a high dwelling density; thus, more households are covered (or passed) per km of fiber deployed. Dense urban areas, however, may not necessarily hold the highest density of potential residential customers and may thus be of less interest to the retail business. Generally, urban areas have higher household densities (including residential households) than suburban clutter. Rural areas are expected to have the lowest density and are thus the most costly (on a household basis) to deploy.

Figure 19, just below, illustrates the basic economics of buried (as opposed to aerial) fiber for FTTH homes passed and FTTH homes connected. Apart from showing the intuitive economic logic, the cost per home passed or connected is driven by the household density (note: it is one driver, and a fairly important one, but it does not capture all the factors). This may serve as a base for rough assessments of the cost of fiber deployment, in homes passed and homes connected, as a function of household density. I have used data from the Fiber-to-the-Home Council Europe report of July 2012 (10 years old), “The Cost of Meeting Europe’s Network Needs”, corrected for the European inflationary price increase since 2012 of ca. 14%, and raised that to 20% to account for increased demand for FTTH-related work by third parties. I then checked this against some data points known to me (which do not coincide with the cities quoted in the chart). These data points relate to buried fiber, including the homes-connected cost chart. Aerial fiber deployment (including homes connected) would cost less than depicted here. Of course, some care should be taken in generalizing this to actual projects, where proper knowledge of the local circumstances is preferred to the above.

Figure 19 The “chicken and egg” of connecting customers’ premises with fiber and providing them with 100s of Mbps up to Gbps broadband quality is that the fiber needs to pass the home before the home can be connected. The cost of passing a premise (i.e., the home passed) and connecting a premise (the home connected) should, for planning purposes, be split up. The cost of rolling out fiber to get homes-passed coverage is, not surprisingly, particularly sensitive to household density. We have more households per unit area in urban areas than in rural areas. Connecting a home is more sensitive to household density in deep rural areas, where the distance from the main fiber line connection point to the household may be longer. The above cost curves are for buried fiber lines and are in 2021 prices.

Aerial fiber deployment would generally be less capital-intensive due to faster and easier deployment (less civil work, including permitting) using pre-existing (or newly built) poles. Not every country allows aerial deployment or even has the infrastructure (i.e., poles) available, which may be medium- and low-voltage poles (e.g., for last-mile access). Some countries have a policy allowing only buried fibers in city or metropolitan areas while supporting pole infrastructure for aerial deployment in suburban and rural clutters. I have tried to illustrate this with Figure 20 below, where the pie charts show the aerial potential and the share that may have to be assigned to buried fiber deployment.

Figure 20 above illustrates the amount of fiber coverage (i.e., in terms of homes passed) in Western European markets. The numbers for 2015 and 2021 are based on the European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.). The 2025 & 2031 coverage numbers are my extrapolation of the 5-year trend leading up to 2021, considering the potential for aerial versus buried deployment. Accelerated deployment gains are more likely in markets with aerial potential than in markets that only have buried fiber as a possibility, either because of regulation or a lack of appropriate infrastructure for aerials. The only country that may be below 50% FTTH coverage in 2025 is Germany (i.e., DE), with a projected 39% of homes passed by 2025. Should Germany aim for 50% instead, it would have to pass ca. 15 million households, or on average 3 million a year from 2021 to 2025. The maximum Germany achieved in one year was in 2020, with ca. 1.4 million homes passed (i.e., Covid was good for getting “things done”). In 2021, this number dropped to ca. 700 thousand, or half of the 2020 number. The maximum any country in Europe has done in one year was France, with 2.9 million homes passed in 2018. However, France does allow aerial fiber deployment outside major metropolitan areas.

Figure 21 above provides an overview across Western Europe over the last 5 years (2016 – 2021) of the average annual household fiber deployment, the maximum done in one year within those 5 years, and the average required to achieve the household coverage in 2026 shown above in Figure 20. For Germany (DE), the average deployment pace of 3.23 homes passed per year (orange bar) would then result in a coverage estimate of 25%. I don’t see any practical reasons why the UK, France, and Italy should not make the estimated household coverage by 2026; they may even exceed my estimates.

From a deployment pace and Capex perspective, it is good to keep in mind that as time goes by, the deployment cost per household is likely to increase as household density reduces when the deployment moves from metropolitan areas toward suburban and rural. Thus, even if the deployment pace may reduce naturally for many countries in Figure 20 towards 2025, absolute Capex may not necessarily reduce accordingly.

In summary, the following topics would likely be on the Capex priority list:

  1. Continued fiber deployment to achieve household coverage. Based on Figure 19, at household (HH) densities above 500 per km2, the unit Capex for buried fiber should be below 900 Euro per HH passed, with an average of 600 Euro per HH passed. Below 500 HH per km2, the cost increases rapidly towards 3,000 Euro per HH passed. Aerial deployment will result in substantially lower Capex, possibly with as much as 50% lower unit Capex.
  2. As customers subscribe, the fiber access cost associated with connecting homes (last-mile connectivity) will need to be considered. Figure 19 provides some guidance regarding the Euro range expected for buried fiber. Aerial-based connections may be somewhat cheaper.
  3. Life-cycle management (business-as-usual) investments, modernization investments, and accommodating growth, including new service and quality requirements (annual business as usual). Typically, this would be upgrading OLTs, ONTs, routers, and switches to support higher bandwidth requirements, upgrading line cards (or interface cards), and moving from ≤100 Mbps to 1 Gbps and 10 Gbps. Many telcos will be considering upgrading their GPON (Gigabit Passive Optical Network, 2.5 Gbps↓ / 1.2 Gbps↑) to provide XGPON (10 Gbps↓ / 2.5 Gbps↑) or even XGSPON services (10 Gbps↓ / 10 Gbps↑).
  4. Chinese supplier exposure and risks (i.e., political and regulatory enforcement) may be an issue in some Western European markets and may require accelerated phase-out capital. In general, I don’t see fixed access infrastructure being a priority in this respect, given the strong focus on increasing household fiber coverage, which already takes up a lot of human and financial resources. However, the topic needs to be considered in case of obsolescence, and it would thus be a business-case and performance-driven decision, with a risk adjustment for dealing with Chinese suppliers at that point in time.

Fixed Access Capex KPIs: Capex share of Total, Capex per km, Number of HH passed and connected, Capex per HH passed, Capex per HH connected, Capex to Incremental Traffic, GPON, XGPON and XGSPON share of Capex and Households connected.

Whether actual and planned Capex is available or an analyst is modeling it, the above KPIs should be followed over an extended period. A single year does not tell much of a story.

Capex modeling comment: In a modeling exercise, I would use estimates for the telco’s household coverage plans as well as the expected household-connected sales projections. Hopefully, historical numbers are available to the analyst and can be used to estimate the unit Capex for a household passed and a household connected. You need to have an idea of where the telco is in terms of household density; thus, as time goes by, you may assume that the cost of deployment per household increases somewhat. For example, use Figure 19 to guide the scaling curve you need. The above fixed access Capex KPIs should allow checking for inconsistencies in your model or, if you are reviewing a Capex plan, whether that Capex plan is self-consistent with the data provided.
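
A sketch of such a modeling exercise, using the density bands quoted earlier as rough anchor points. The band boundaries, the unit costs, the 50% aerial discount, and the deployment plan are all assumptions for illustration, not a substitute for local knowledge:

```python
def unit_capex_per_home_passed_eur(hh_per_km2: float,
                                   aerial_share: float = 0.0) -> float:
    """Stylized buried-fiber unit Capex per home passed, stepped by
    household density; aerial deployment assumed ~50% cheaper."""
    if hh_per_km2 >= 2_000:        # dense urban
        buried = 600.0
    elif hh_per_km2 >= 500:        # urban / suburban
        buried = 900.0
    else:                          # rural
        buried = 3_000.0
    # Blend buried and (cheaper) aerial deployment by their shares.
    return aerial_share * 0.5 * buried + (1.0 - aerial_share) * buried

def rollout_capex_meur(plan: list) -> float:
    """plan: list of (homes passed, HH per km2, aerial share) tuples."""
    total_eur = sum(homes * unit_capex_per_home_passed_eur(density, aerial)
                    for homes, density, aerial in plan)
    return total_eur / 1e6

# Hypothetical plan: 200k dense-urban homes, 300k suburban homes
# (30% aerial), and 100k rural homes (50% aerial).
plan = [(200_000, 3_000, 0.0), (300_000, 800, 0.3), (100_000, 100, 0.5)]
print(f"rollout Capex: {rollout_capex_meur(plan):.1f} M EUR")
```

As the deployment plan shifts toward rural clutter over time, the same sketch reproduces the rising cost per home passed discussed above.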

If anyone doubted it, there is still much to do with fiber optical deployment in Western Europe. We still have around 100+ million homes to pass and a likely capital investment need of 100+ billion euros. Fiber deployment will remain a tremendously important investment area for the foreseeable future.

Figure 22 shows the remaining fiber coverage in homes passed based on 2021 actuals for urban and rural areas. In general, it is expected that once urban areas’ coverage has reached 80% to 90%, the further coverage-based rollout will slow down. Though, for attractive urban areas, overbuild, that is, deploying fiber where fibers have already been deployed, is likely to continue.

Figure 23 The top illustrates the weekly rollout over the next 5 years required to reach an 80% to 90% household coverage range by 2025. The bottom shows an estimate of the remaining capital investment required to reach that 80% to 90% coverage range. This assessment is based on 2021 actuals from the European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.); the weekly activity and Capex levels are thus from 2022 onwards.

In many Western European countries, the pace is expected to increase considerably compared to the previous 5 years (i.e., 2016 – 2021). Even if the above figure may be over-optimistic with respect to the 2026 goal, the European ambition of fiberizing its markets will impose a lot of pressure for speedy deployment.

IT investment levels are typically between 15% and 25% of Telecom Capex.

IT may be the most complex area in which to reach a consensus concerning Capex. In my experience, it is also the area within a telco with the highest and most emotional discussion overhead, both within operations and at Board level. Just like everyone is far better at driving a car than the average driver, everyone is far better at IT than the IT experts and knows exactly what is wrong with IT and how to make IT much better, much faster, and much cheaper (if there ever was an area in telco-land with too many cooks, this is it).

Why is that the case? I tend to say that IT is much more “touchy-feely” than networks where most of the Capex can be estimated almost mathematically (and sufficiently complicated for non-technology folks to not bother with it too much … btw I tend to disagree with this from a system or architecture perspective). Of course, that is also not the whole truth.

IT designs, plans, develops (or builds), and operates all the business support systems that enable the business to sell to its customers, support its customers, and in general, keep the relationship with the customer throughout the customer life-cycle across all the products and services offered by the business irrespective of it being fixed or mobile or converged. IT has much more intense interactions with the business than any other technology department, whose purpose is to support the business in enabling its requirements.

Most of the IT Capex is related to people’s work, such as development, maintenance, and operations. Thus capitalized labor of external and internal labor is the main driver for IT Capex. The work relates to maintaining and improving existing services and products and developing new ones on the IT system landscape or IT stacks. In 2021, Western European telco Capex spending was about 20% of their total revenue. Out of that, 4±1 % or in the order of 10±3 billion Euro is spent on IT. With ca. 714 million fixed and mobile subscribers, this corresponds to an IT average spend of 14 Euros per telco customer in 2021. Best investment practices should aim at an IT Capex spend at or below 3% of revenue on average over 5 years (to avoid penalizing IT transformation programs). As a rule of thumb, if you do not have any details of internal cost structure (I bet you usually would not have that information), assume that the IT-related Opex has a similar quantum as Capex (you may compensate for GDP differences between markets). Thus, the total IT spend (Capex and Opex) would be in the order of 2×Capex, so the IT Spend to Revenue double the IT-related Capex to Revenue. While these considerations would give you an idea of the IT investment level and drill down a bit further into cost structure details, it is wise to keep in mind that it’s all a macro average, and the spread can be pretty significant. For example, two telcos with roughly the same number of customers, IT landscape, and complexity and have pretty different revenue levels (e.g., due to differences in ARPU that can be achieved in the particular market) may have comparable absolute IT spending levels but very different relative levels compared to the revenue. I also know of telcos with very low total IT spend to Revenue ITR (shareholder imposed), which had (and have) a horrid IT infrastructure performance with very extended outages (days) on billing and frequent instabilities all over its IT systems. 
Whatever might have been saved by imposing a dramatic reduction in the IT Capex (e.g., remember 10 million euros Capex reduction equivalent to 200 million euros value enhancement) was more than lost on inferior customer service and experience (including the inability to bill the customers).
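The per-customer figure quoted above follows from simple arithmetic. A quick sketch using the article's round 2021 numbers (the ±-ranges are ignored for simplicity):

```python
# Back-of-the-envelope check of the IT Capex figures quoted above.
# All inputs are the article's round 2021 estimates; illustrative only.

revenue_share_capex = 0.20   # total Capex ~20% of revenue (WEU telcos, 2021)
revenue_share_it = 0.04      # ~4% (+/-1%) of revenue goes to IT Capex
it_capex_eur = 10e9          # ~10 (+/-3) billion EUR IT Capex in WEU, 2021
subscribers = 714e6          # fixed + mobile subscribers

it_capex_per_customer = it_capex_eur / subscribers
print(f"IT Capex per customer: {it_capex_per_customer:.1f} EUR")  # ~14 EUR

# Rule of thumb: IT Opex ~ IT Capex, so total IT spend ~ 2x IT Capex,
# i.e. IT-spend-to-revenue is roughly double IT-Capex-to-revenue.
total_it_spend = 2 * it_capex_eur
print(f"Total IT spend per customer: {total_it_spend / subscribers:.0f} EUR")
```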

You will find industry experts and pundits who expertly insist that your IT development spend is way too high or too low (although the latter is rare!). I recommend respectfully taking such banter seriously, although try to understand what they are comparing with, what KPIs they are using, and whether it is apples to apples and not to pineapples. In my experience, a mobile-only business should have a lower IT spend level than a fixed-mobile telco, as a mobile IT landscape tends to be more modern and relatively simple compared to a fixed one. We often find more legacy (and I mean with a capital L) in the fixed IT landscape, with much older services and products still kept operational. The fixed IT landscape is highly customized, making transformation and modernization complex and costly, at least as long as old and older legacy products must remain operational. Another false friend when comparing one company's IT spending with another's is that the cost structures may differ. For example, it is worth understanding where OSS (Operational Support System) development is accounted for: is it in the IT spend, or on the Network side of things? Service platforms and data centers may be another area where such spending sits with either IT or Networks.

Figure 24 shows the helicopter view of a traditional telco IT architectural stack. Unless the telco is a true greenfield, it is entirely normal to have multiple co-existing stacks, which may have some degree of integration at various levels (sub-layers). Most fixed-mobile telcos retain a high degree of IT architecture separation between their mobile and fixed businesses at both retail and B2B levels. When assessing IT investments, never consider just one year: understand the IT investment strategy of the immediate past (2-3 years prior) as well as how it fits with known and immediate future investments (2-3 years out).

Above, Figure 24 illustrates the typical layers and sub-layers in an IT stack. Every sub-layer may contain different applications, functionalities, and systems, all with an over-arching property of the sub-layer description. It is not uncommon for a telco to have multiple IT stacks serving different brands (e.g., value, premium, …) and products (e.g., mobile, fixed, converged) and business lines (e.g., consumer/retail, business-to-business, wholesale, …). Some layers may be consolidated across stacks, and others may be more fragmented. The most common division is between fixed and mobile product categories, as historically, the IT business support systems (BSS) as well as the operational support systems (OSS) were segregated and might even have been managed by two different IT departments (that kind of silliness is more historical albeit recent).

Figure 25 shows a typical fixed-mobile incumbent (i.e., anything not greenfield) multi-stack IT architecture and the most likely aspiration: an aggressively integrated stack supporting a fixed-mobile convergence business. From experience, I am not a big fan of retail & B2B IT stack integration. It creates a lot of operational complexity and muddies the investment transparency and economics, typically favoring B2B at the expense of the retail business.

A typical IT landscape supporting fixed and mobile services may have quite a few IT stacks and a wide range of solutions for various products and services. It is not uncommon for a fixed-mobile telco to have several mobile brands (e.g., premium, value, …) and a separate (at least from an IT architecture perspective) fixed brand. In addition, there may be differences between the retail (business-to-consumer, B2C) and the business-to-business (B2B) sides of the telco, also supported by separate stacks or different partitions of a stack. This is illustrated in Figure 25 above. For the telco business to become more efficient with respect to its IT landscape, including the development, maintenance, and operational aspects of managing a complex IT infrastructure, it should strive to consolidate stacks where it makes sense and, not unimportantly, along the business's wish for convergence, at least between fixed and mobile.

Figure 25 above illustrates an example of an IT stack harmonization activity along retail brands and fixed and mobile products, as well as a separation of stacks into a retail and a business-to-business stack. It is, of course, possible to leverage some of the business logic and product synergies between B2C and B2B by harmonizing IT stacks across both business domains. However, in my experience, nothing great comes out of that, and more likely than not, you will penalize B2C by spending disproportionate investment attention on B2B. The B2B requirements tend to be significantly more complex to implement, their specifications change frequently (in line with business customers' demands), and the unit cost of development returns less unit revenue than on the consumer side. Economically, and from a value perspective, the telco needs an IT stack solution that is more in line with what B2B contributes to the valuation and that fits its requirements. That may be a big challenge, particularly for minor players, as their B2B business rarely justifies a standalone IT stack or development, at least not a stack developed and maintained at the same high quality as a consumer stack. There is simply a mismatch between the B2B requirements, which often demand much higher quality and functionality than the consumer side, and what B2B contributes to the business compared to, for example, B2C.

When I judge IT Capex, I care less about the absolute level of spend (within reason, of course) than about what is practical to support within the IT landscape the organization has been dealt and, of course, within the organization itself, including 3rd-party support. Most systems have development constraints and a natural order in which development can be executed. It does not matter how much money or how many resources you throw at some problems; there is an optimum amount of resources and time required to complete a task. This naturally leads to prioritization, which may disappoint stakeholders whose projects are not prioritized to the degree they feel entitled to.

When looking at IT capital spending and comparing one telco with another, it is worthwhile to take a 3- to 5-year time horizon, as telcos may be in different business and transformation cycles. A one-year comparison or benchmark may not be appropriate for understanding a given IT-spend journey and its operational and strategic rationale. Search for incidents (frequency and severity) that may indicate inappropriate spend prioritization or an overall too-small IT budget.

The IT Capex budget is typically split into (a) a consumer or retail part (i.e., B2C), (b) a business-to-business and wholesale part, (c) an IT technical part (optimization, modernization, cloudification, and transformations in general), and (d) a General and Administrative (G&A) part (e.g., Finance, HR, …). Many IT-related projects, particularly of a transformative nature, run over multiple years (although beyond ca. 24 months, the risk of failure and monetary waste increases rapidly) and should be planned accordingly. For the business-driven demand (consumer, business, and wholesale), it makes sense to assign Capex proportional to the segments' revenue and the customers those segments support, leveraging any synergies in the development work required by the business units. For IT, capital spending should be assigned to ensure that technical debt remains manageable across the IT infrastructure and landscape and that the efficiency gains from transformative projects (including landscape modernization) are delivered in a timely manner. In general, such IT projects promise efficiency in terms of more agile development (faster time to market), lower development and operational costs, and, last but not least, improved quality in terms of stability and fewer incidents. The G&A part prioritizes finance projects, then HR and other corporate projects.

In summary, the following topics would likely be on the Capex priority list:

  1. Provide IT development support for business demand in the next business-plan cycle (3-5 years, with a strong emphasis on the year ahead). The allocation key should be close to the revenue (or EBITDA) and customer contribution expected within the budget planning period. The development focus is on maintenance, (incremental) improvements to existing products/services, and the new products/services required to meet the business plans. In my experience, the initial demand tends to be 2 to 3 times higher than what a reasonable financial envelope would dictate (i.e., even considering what is possible within the natural limitations of the given IT landscape and organization) and what is ultimately agreed upon.
  2. Cloudification: the transformation journey away from the traditional monolithic IT platform into a public, hybrid, or private cloud environment. In my opinion, the safest approach is "lift-and-shift," where existing functionality is re-established in the cloud environment. After a successful migration from the traditional monolithic platform, the next phase of the cloudification journey, moving to a cloud-native framework, should be embarked upon. This provides a very solid automation framework delivering additional efficiencies and improved stability and quality (e.g., a reduction in incidents). Analysts should be aware that migrating to a (public) cloud environment may reduce the capitalization possibilities, with the consequence that Capex may fall in forward budget planning, but at the expense of increased Opex for the IT organization.
  3. Stack consolidation. Reducing the number of IT stacks generally lowers the IT Capex demand and improves development efficiency, stability, and quality. The trend is to focus harmonization efforts on the frontend (the Portals and Outlets layer in Figure 24) and the CRM layer (retiring legacy or older CRM solutions), then move down the layers of the IT stack (see Figure 24), often touching the complex backend systems as they become obsolete, which provides an opportunity to migrate to a modern cloud-based solution (e.g., cloud billing).
  4. Modernization activities not already covered by cloudification investments or business requirements.
  5. Development support for Finance (e.g., ERP/SAP requirements), HR requirements, and other miscellaneous activities not captured above.
  6. Chinese suppliers are rarely an issue in Western European telcos' IT landscapes. However, where present in a telco's IT environment, I would expect Capex to have been allocated to phasing out that supplier urgently over the next 24 months (depending on the complexity of such a transformation/migration program), due to strong political and regulatory pressure. Such an initiative may have a value-destroying impact, as business-driven IT development (related to the specific system) might not be prioritized highly during such a program, leaving the telco less able to compete while the phase-out runs.

IT Capex KPIs: IT share of Total Capex (if available, broken down into a Fixed and Mobile part), IT Capex to Revenue, ITR (IT total spend to Revenue), IT Capex per Customer, IT Capex per Employee, IT FTEs to Total FTEs.

Moreover, if available or being modeled, I would like to have an idea about how much of the IT Capex goes to investment categories such as (i) Maintain, (ii) Growth, and (iii) Transform. I will get worried if the majority of IT Capex over an extended period goes to the Growth category and little to Maintain and Transform. This indicates a telco that has deprioritized quality and ignores efficiency, resulting in the risk of value destruction over time (if such a trend were sustained). A telco with little Transform spend (again over an extended period) is a business that does not modernize (another word for sweating assets).

Capex modeling comment: when I am modeling IT and have little information available, I first assume an IT-Capex-to-revenue ratio of around 4% (mobile-only) to 6% (fixed-mobile operation) and check, as I develop the other telco Capex components, whether the IT Capex stays within 15% to 25% of the total Capex. Of course, keep an eye on all the above IT Capex KPIs, as they provide a more holistic picture of how much confidence you can have in the Capex model.
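The modeling heuristic described above can be sketched as a small sanity check. The function name and the example inputs below are illustrative, not from the article; only the 4-6% and 15-25% bands come from the text:

```python
# Minimal sketch of the IT Capex modeling heuristic: assume an IT-Capex-to-
# revenue ratio (4% mobile-only, 6% fixed-mobile) and check it lands within
# 15-25% of the total Capex.

def it_capex_check(revenue, total_capex, mobile_only=False):
    """Return the assumed IT Capex and whether it falls in the 15-25% band."""
    ratio = 0.04 if mobile_only else 0.06
    it_capex = ratio * revenue
    share_of_total = it_capex / total_capex
    return it_capex, 0.15 <= share_of_total <= 0.25

# Example: 5 bn EUR revenue, 1 bn EUR total Capex (20% Capex-to-revenue).
it_capex, ok = it_capex_check(5e9, 1e9)
print(f"IT Capex: {it_capex / 1e6:.0f} m EUR, within 15-25% band: {ok}")
# 300 m EUR is 30% of total Capex -> outside the band, revisit assumptions.
```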

Figure 26 illustrates the anticipated IT-Capex-to-revenue ranges for 2024: using New Street Research (total) Capex data for Western Europe, the author's own Capex projection modeling, and the heuristic that IT spend is typically 15% to 25% of total Capex, we can estimate the most likely ranges of IT Capex to revenue for the telecommunications businesses covered by NSR for 2024. For individual operations, we may also want to look at the time series of IT spending to revenue and compare that with any available intelligence (e.g., transformation-intensive, M&A integration, business-as-usual, etc.).

Using the heuristic that IT Capex falls between 15% (1st quartile) and 25% (3rd quartile) of total Capex, we can get an impression of how much individual telcos invest in IT annually. The above chart shows such an estimate for 2024. I have historical IT spending levels for several Western European telcos, which agree well with the above and would typically sit a bit below the median unless a telco is in the middle of a major IT transformation (e.g., after a merger, structural separation, a forced Huawei replacement, etc.). One would also expect, and should check, that the total IT spend (Capex and Opex) decreases over time once the transformational IT spend has been removed. If this is observed, it indicates that the telco is becoming increasingly efficient in its IT operation. Usually, the biggest effect should be an IT Opex reduction over time.

Figure 27 illustrates the anticipated IT-Capex-per-customer ranges for 2024: having estimated the likely IT spend ranges (in Figure 26) for various Western European telcos allows us to estimate the expected 2024 IT spend per customer (using New Street Research data, the author's own Capex projection model, and the IT heuristics described in this section). In general, and in the absence of structural IT transformation programs, I would expect the IT spend per customer to be below the median. Some notes on the above results: TDC (Nuuday & TDC Net) has major IT transformation programs ongoing after the structural separation. KPN is in the process of replacing its Huawei BSS, and I would expect it to be at the upper end of IT spending. Telenor Norway seems higher than I would expect, but it is an incumbent that traditionally spends substantially more than its competitors, so this may be fine, although caution should be taken here. Switzerland in general, and Swisscom in particular, is higher than I would have expected; that said, it is a sophisticated telco services market likely to spend above the European average. Irrespective, I would treat the above representation for Switzerland & Swisscom with some caution.

Similar to the IT Capex to revenue, we can get an impression of what telcos spend on IT Capex relative to their total mobile and fixed customer base. Again, for telcos in Western Europe (as well as outside), the ranges shown above seem a reasonable estimate of where one would expect the IT spend to fall. The analyst is always encouraged to look at this over a 3- to 5-year period to better appreciate the trend, and should keep in mind that not all telcos are in sync with their IT investments (as hopefully is obvious, since transformation strategies and business cycles may be very different even within the same market).

Other, or miscellaneous, investments tend to be between 3% and 8% of the Telecom Capex.

When modeling a telco's Capex, I find it very helpful to keep an "Other" or "Miscellaneous" Capex category for anything non-technology related. Modeling-wise, it is convenient to have a placeholder for items you don't know about or may have forgotten. I typically start my models with 15% of all Capex in this category. As the model matures, I should be able to reduce it to below 10% and preferably down to 5% (though I will accept 8% as a good-enough limit). I have had Capex review assignments where the Capex for future years had close to 20% in "Miscellaneous." If this "unspecified" Capex were not included, the Capex to revenue in the later years would drop substantially, to a level that might not be deemed credible. In my experience, every planned Capex category includes a bit of "Other"-ness, as many smaller things require Capex but are difficult to derive a mathematical measure for. I tend to leave it if it is below 5% of a given Capex category. However, if it is substantial (>5%), it may reveal "sandbagging" or simply a less mature Capex planning and budget process.

Apart from a placeholder for stuff we don’t know, you will typically find Capex for shop refurbishment or modernization here, including office improvements and IT investments.

DE-AVERAGING THE TELECOM CAPEX TO FIXED AND MOBILE CONTRIBUTIONS.

There are similar heuristics to go deeper down into where the Capex should be spent, but that is a detail for another time.

Our first step is decomposing the total Capex into a fixed and a mobile component. We find that a multi-linear model including Total Capex, Mobile Customers, Mobile Service Revenue, Fixed Customers, and Fixed Service Revenues can account for 93% of the Capex trend. The multi-linear regression formula looks like the following:

C_{total} \; = \; C_{mobile} \; + \; C_{fixed}

\; = \; \alpha_{customers}^{mobile} \; N_{customers}^{mobile} \; + \; \alpha_{revenue}^{mobile} \; R_{revenue}^{mobile}

\; +  \;  \beta_{customers}^{fixed} \; N_{customers}^{fixed} \; + \; \beta_{revenue}^{fixed} \; R_{revenue}^{fixed}

with C = Capex, N = total customer count, R = service revenue, and α and β the regression coefficient estimates from the multi-linear regression. The Capex model was trained on 80% of the data (1,008 data points) chosen randomly and validated on the remainder (252 data points). All four regression coefficients are statistically significant at the 95% confidence level (p-values well below 0.05).
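The fitting procedure described above can be sketched as follows. Since the New Street Research panel is not public, the data below are synthetic and the coefficients are made up; only the no-intercept multi-linear fit with an 80/20 train/validation split mirrors the text (the article's hold-out fit is R² ≈ 0.93):

```python
import numpy as np

# Synthetic sketch of the fixed-mobile Capex regression: four drivers
# [mobile subs, mobile revenue, fixed subs, fixed revenue] -> total Capex.
rng = np.random.default_rng(0)
n = 1260                                    # 1,008 train + 252 test points
X = rng.uniform(1, 10, size=(n, 4))         # made-up driver values
true_coef = np.array([0.5, 0.1, 0.8, 0.2])  # placeholder "alpha"/"beta"
y = X @ true_coef + rng.normal(0, 0.1, n)   # total Capex with noise

idx = rng.permutation(n)
train, test = idx[:1008], idx[1008:]
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)  # no intercept

# Validate on the 20% hold-out, as in the text.
resid = y[test] - X[test] @ coef
r2 = 1 - resid.var() / y[test].var()
print(f"coefficients: {np.round(coef, 2)}, hold-out R^2: {r2:.2f}")
```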

Figure 28 above shows the Predicted Capex versus the Actual Capex. It illustrates that the predicted model agreed reasonably well with the actual Capex, which would also be expected based on the statistical KPIs resulting from the fit.

The Total is (obviously) available to us and therefore allows us to estimate both fixed and mobile Capex levels, by

C_{fixed} \; = \;  \beta_{customers}^{fixed} \; N_{customers}^{fixed} \; + \; \beta_{revenue}^{fixed} \; R_{revenue}^{fixed}

C_{mobile} \; = \; C_{total} \; - \; C_{fixed}
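The two-line decomposition above can be sketched directly: once the β coefficients are fitted, fixed Capex follows from the fixed-side drivers, and mobile Capex is the remainder. Variable names and coefficient values below are illustrative placeholders, not the fitted estimates:

```python
# Decompose total Capex into fixed and mobile, given fitted beta coefficients.

def decompose_capex(c_total, n_fixed, r_fixed, beta_cust, beta_rev):
    c_fixed = beta_cust * n_fixed + beta_rev * r_fixed
    return c_fixed, c_total - c_fixed   # C_mobile = C_total - C_fixed

c_fixed, c_mobile = decompose_capex(
    c_total=10.0,    # total Capex, bn EUR (placeholder)
    n_fixed=20.0,    # fixed customers, millions (placeholder)
    r_fixed=15.0,    # fixed service revenue, bn EUR (placeholder)
    beta_cust=0.15, beta_rev=0.20,
)
print(f"fixed: {c_fixed:.1f} bn, mobile: {c_mobile:.1f} bn")  # fixed: 6.0 bn, mobile: 4.0 bn
```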

The result of the fixed-mobile Capex decomposition is shown in Figure 29 below. Apart from being (reasonably) statistically sound, it is comforting that the trends in fixed and mobile Capex agree with intuition. The increase in mobile Capex (for Western Europe) over the last 5 years appears reasonable, given that 5G deployment commenced in early 2019. During the Covid lockdown from early 2020, fixed revenue was boosted by a massive shift of fixed broadband traffic (and voice) from the office to individuals' homes. Meanwhile, mobile service revenues have been in slow decline for years. Thus, the Capex increase due to 5G, combined with declining mobile service revenues, ultimately leads to a relatively larger increase in the mobile Capex-to-revenue ratio.

Figure 29 illustrates the statistical modeling (by multi-linear regression), or decomposition, of the Total Capex as a function of Mobile Customers, Mobile Service Revenues, Fixed Customers, and Fixed Service Revenues, allowing the total Capex to be broken into fixed and mobile components. The absolute Capex level is higher for fixed than for mobile, by about a factor of 2 until 2021, when mobile Capex increases due to 5G investments in the mobile industry. Mobile Capex has increased the most over the last 5 years (e.g., 5G deployment) while mobile service revenues have declined somewhat over the same period, increasing the mobile Capex-to-service-revenue ratio (note: based on total revenue, the ratio would be somewhat smaller, by ca. 17%). Source: Total Capex and fixed and mobile service revenues from New Street Research data for Western Europe. Note: the decomposition of the total Capex into fixed and mobile Capex is based on the author's own statistical analysis and modeling. It is not a delivery of the New Street Research report.

CAN MOBILE-TRAFFIC GROWTH CONTINUE TO BE ACCOMMODATED CAPEX-WISE?

In my opinion, there has been much panic in our industry in the past about exhausting the cellular capacity of mobile networks and the imminent doom of our industry: a fear fueled by the exponential growth of user demand, a perceived inadequate amount of spectrum, and the low spectral efficiency of the deployed cellular technologies (e.g., 3G-HSPA with classical passive single-in single-out antennas). Going back to the heydays of 3G-HSPA, there was a fear that if cellular demand kept its growth rate, supply requirements would go towards infinity, and the required Capex likewise; clearly an unsustainable business model for the mobile industry. Today, there is (in my opinion) no basis for such fears in the short or medium term. With the increasing fiberization of our society, where most homes will be connected to fiber within the next 5-10 years, cellular doomsday, in the sense of running out of capacity or needing infinite levels of Capex to sustain cellular demand, may be a day that never comes.

In Western Europe, the total mobile subscriber penetration was ca. 130% of the total population in 2021, with approximately 2.1+ mobile devices per subscriber. Mobile internet penetration was 76% of the total population in 2021 and is expected to reach 83% by 2025. In 2021, Europe's average smartphone penetration rate was 77.6%, and it is projected to be around 84% by 2025. Also, by 2024±1, 50% of all connections in Western Europe are projected to be 5G connections. There are some expectations that around 2030, 6G might start being introduced in Western European markets. 2G and 3G will be increasingly phased out of the Western European mobile networks, and the spectrum will be repurposed for 4G and eventually 5G.

The above Figure 30 shows forecasted mobile users by their main mobile access technology. Source: based on the author’s forecast model relying on past technology diffusion trends for Western Europe and benchmarked against some WEU markets and other telco projections. See also 5G Standalone – European Demand & Expectations by Kim Larsen.

We may not see a complete phase-out of the older Gs, as observed in Figure 30. Due to a relatively large base of non-VoLTE (Voice-over-LTE) devices, mobile networks will have to support circuit-switched voice fallback to 2G or 3G. Furthermore, for the foreseeable future, it is unlikely that all visiting roaming customers will have VoLTE-capable devices, and there may be legacy machine-to-machine businesses that would be prohibitively costly and complex to migrate from existing 2G or 3G networks to either LTE or 5G. All in all, expect that 2G and 3G may remain with us for a reasonably long time.

Figure 31 above shows that mobile and fixed data traffic consumption is growing both in total and at the per-user level. On average, mobile traffic grew faster than fixed from 2015 to 2021, a trend expected to continue with the introduction of 5G. Although the total traffic growth rate slowed somewhat over the period, on a per-user basis (mobile as well as fixed) the consumption growth rate has remained stable.

Since the early days of 3G-HSPA (High-Speed Packet Access) radio access, investors and telco businesses have been worried that there would be an end to how much demand could be supported in our cellular networks. The “fear” is often triggered by seeing the exponential growth trend of total traffic or of the usage per customer (to be honest, that fear has not been made smaller by technology folks “panicking” as well).

Let us look at the numbers for 2021 as reported in the Cisco VNI report. The total mobile data traffic was on the order of 4 Exabytes (4 billion gigabytes, GB), more than 5.5× the level of 2016, and more than 600 million times the average mobile data consumption of 6.5 GB per month per customer (in 2021). Compare this with the Western European population of ca. 200 million. While these are big numbers, the 6.5 GB per month per customer is modest. Assuming most of this volume comes from video streaming at 3-5 Mbps (good enough for an HD video stream), the 6.5 GB translates into approx. 3-5 hours of video streaming over a month.
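The arithmetic of the paragraph above checks out with the article's own numbers:

```python
# Sanity-checking the 2021 figures quoted above (article's round numbers).

total_traffic_gb = 4e9    # ~4 Exabytes of WEU mobile data, expressed in GB
avg_monthly_gb = 6.5      # average consumption per customer per month

# Total traffic as a multiple of the average monthly consumption:
print(f"{total_traffic_gb / avg_monthly_gb / 1e6:.0f} million")  # ~615 million

# 6.5 GB at an HD streaming rate of 3-5 Mbps:
for mbps in (3, 5):
    hours = avg_monthly_gb * 8e9 / (mbps * 1e6) / 3600
    print(f"at {mbps} Mbps: {hours:.1f} hours of video per month")
# -> roughly 3-5 hours, matching the text.
```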

The above Figure 32 illustrates a 24-hour workday total data demand on the mobile network infrastructure. A weekend profile would be flatter. We spend at least 12 hours in our home, ca. 7 hours at work (including school), and a maximum of 5 hours (~20%) commuting, shopping, and otherwise being away from home or the workplace. Previous studies of mobile traffic load have shown that 80% of a consumer's mobile demand falls on the 3 main radio node sites around the home and workplace. The remaining 20% tends to be much more mobile-like, in the sense of being spread out over many different radio-node sites.

We consume an average of ca. 215 Megabytes per day (if spread equally over the month), corresponding to 6-10 minutes of video streaming. The average length of a YouTube video is ca. 4.4 minutes. In Western Europe, consumers spend an average of 2.4 hours per day on the internet with their smartphones (having younger children, I am surprised it is not more). However, these 2.4 hours are not necessarily network-active in the sense of continuously demanding network resources. In fact, most consumers are active between ca. 8:00 and 22:00, after which network demand reduces sharply. Thus, we have 14 hours of user busy time, within which a Western European consumer spends 2.4 hours cumulated over the day (ca. 17% of the active window).

Figure 33 above illustrates (based on actual observed trends) how 5 million mobile users distribute across a mobile network of 5,000 sites (or radio nodes) and 15,000 sectors (typically 3 sectors = 1 site). Typically, user and traffic distributions tend to be log-normal-like with long tails. In the example above, we have in the busy hour a median value of ca. 80 users attached to a sector, with 15 being active (i.e., loading the network) in the busy hour, demanding a maximum of ca. 5 GB (per sector), or an average of ca. 330 MB per active user in the radio sector over that sector's relevant busy hour.
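The long-tailed sector-load distribution described above can be illustrated with a quick simulation. The parameters below are assumptions chosen to roughly reproduce the example's median of ~80 attached users per sector, not measured values:

```python
import numpy as np

# Illustrative log-normal busy-hour user distribution across 15,000 sectors.
rng = np.random.default_rng(1)
n_sectors = 15_000
attached = rng.lognormal(mean=np.log(80), sigma=0.8, size=n_sectors)

print(f"median attached users per sector: {np.median(attached):.0f}")  # ~80
print(f"mean: {attached.mean():.0f} (mean > median: the long tail)")
```

The mean exceeding the median is the signature of the long tail: a minority of heavily loaded sectors drives a disproportionate share of network demand (and thus of capacity investment).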

Typically, two limits, with a high degree of interdependency, would allegedly hit cellular businesses and render profitable growth difficult at some point in the future. The first is a practical technology limit on how much capacity a radio access system can supply. As we will see a bit later, this depends on the operator's frequency spectrum position (deployed, not what sits on the shelf), the number of sites (site density), the installed antenna technology, and its effective spectral efficiency. The second (interdependent) limit is economic: the incremental Capex that telcos would need to commit to sustain demand at a given quality level would become highly unprofitable, rendering further cellular business uneconomical.

From a Capex perspective, the cellular access part drives a considerable amount of the mobile investment demand. Together with the supporting transport, such as fronthaul, backhaul, aggregation, and core transport, the capital investment share is typically 50% or higher. This is without including the spectrum frequencies required to offer the cellular service. Such are usually acquired by local frequency spectrum auctions and amount to substantial investment levels.

In the following, the focus will be on cellular access.

The Cellular Demand.

Before discussing the cellular supply side of things, let us first explore the demand side from a helicopter view. Demand is created by users (N) of the cellular services offered by telcos. Users can be human or non-human, such as things in general or machines more specifically. Each user has a particular demand that, in aggregate, can be represented by the average demand in Bytes per user (d). We can then identify two growth drivers: one from adding new users (ΔN) to our cellular network, and another from the incremental change in demand per user (Δd) as time goes by.

It should be noted that the incremental change in demand or users might not per se be a net increase; it could also be a net decrease, either because the cellular network has reached the maximum level of capacity (or quality) it can offer, leading users to reduce their demand or "churn" away, or because an alternative to today's commercial cellular network triggers abandonment as high-demand users migrate to that alternative, reducing both the number of cellular users and the average demand per user. For example, near-100% Fiber-to-the-Home coverage with supporting WiFi could be a reason for users to abandon cellular networks, at least indoors, which would remove between 60 and 80% of present-day cellular data demand. This last (hypothetical) scenario is not an issue for today's cellular networks and telco businesses.

N_{t+1} \; = \; N_t \; + \; \Delta N_{t+1}

d_{t+1} \; = \; d_t \; + \; \Delta d_{t+1}

D_{t+1}^{total} \; = \; N_{t+1} \times d_{t+1}
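The growth recursion above can be sketched with illustrative numbers (the growth rates and starting values below are assumptions, not the article's forecasts): total demand compounds from both new users and rising per-user consumption.

```python
# Sketch of the demand recursion: N and d each grow incrementally,
# and total demand is their product, D_total = N x d.

def project_demand(n0, d0, user_growth, usage_growth, years):
    n, d = n0, d0
    for _ in range(years):
        n += user_growth * n     # Delta N
        d += usage_growth * d    # Delta d
    return n * d                 # D_total = N x d

# Illustrative: 200 m users at 6.5 GB/month, 2% user growth and
# 25% per-user usage growth per year (assumed rates).
d_total = project_demand(200e6, 6.5, 0.02, 0.25, years=5)
print(f"total demand after 5 years: {d_total / 1e9:.1f} EB/month")  # ~4.4 EB
```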

Of course, this can easily be broken down into many more drivers and details, e.g., technology diffusion or adoption, the rate of users moving from one access technology to another (e.g., 3G→4G, 4G→5G, 5G→FTTH+WiFi), improved network & user device capabilities (better coverage, higher speeds, lower latency, bigger display sizes, newer device chip generations), adoption of new cellular services (e.g., TV streaming, VR, AR, …), etc.

However, it is often forgotten that the volumetric demand (in Bytes) is not the main direct driver of network demand and, thus, not of the required investment level. A given gross data volume can arise from various gross throughput patterns (bits per second). The throughput demanded in the busiest hour (T_{demand} or T_{BH}) is the direct driver of network load, and thus of network investments; the volumetric demand is merely a manifestation of that throughput demand.

T_{demand} \; = \; T_{BH} \; = \; \max_t \sum_{cell} \; n_t^{cell} \; \times \; 8 \; \delta_t^{cell} \; = \; \max_t \sum_{cell} \; \tau_t^{cell} \quad \text{(in bits per second)}

With n_t^{cell} being the number of active users in a given radio cell at time instant t within a day, and \delta_t^{cell} the Bytes consumed in a time instant (typically a second), 8 \delta_t^{cell} gives us the bits per time unit (bits/sec), which is the throughput consumed. Summing the instantaneous cell throughputs (\tau_t^{cell} bits/sec) across all cells and taking the maximum over, for example, a day provides the busy-hour throughput for the whole network. Each radio cell drives its own capacity provision and supply (in bits/sec) and the investments required to deliver that demanded capacity on the air interface and in the front- and backhaul.

For example, if n = 6 active (concurrent) users, each consuming on average \delta = 0.625 MegaBytes per second (5 Megabits per second, Mbps), the typical requirement for a YouTube stream at HD 1080p resolution, our radio access network in that cell would experience a demanded load of 30 Mbps (i.e., 6×5 Mbps), provided, of course, that the given cell has sufficient capacity to deliver what is demanded. A 4G cellular system without any special antenna technology, e.g., a classical Single-in-Single-out (SiSo) antenna rather than the more modern Multiple-in-Multiple-out (MiMo) antenna, can be expected to deliver ca. 1.5 Mbps/MHz per cell. Thus, we would need at least 20 MHz of spectrum to provide for 6 concurrent users, each demanding 5 Mbps. With a simple 2T2R MiMo antenna system, we could support about 8 simultaneous users under the same conditions, a 33% increase over what our system can handle without such an antenna. As mobile operators implement increasingly sophisticated antenna systems (i.e., higher-order MiMo systems) and move to 5G, a leapfrog in handling capacity and quality will occur.
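The worked example above can be expressed as a minimal sizing calculation (the function names are my own; the 6-user, 5 Mbps, and 1.5 Mbps/MHz figures come from the text):

```python
# Busiest-hour cell sizing sketch: 6 concurrent users at 5 Mbps each,
# served by a classical SiSo antenna at ~1.5 Mbps/MHz/cell.

def demanded_load_mbps(n_users: int, mbps_per_user: float) -> float:
    """Aggregate throughput demanded in a cell during the busiest hour."""
    return n_users * mbps_per_user

def required_spectrum_mhz(load_mbps: float, eff_mbps_per_mhz: float) -> float:
    """Spectrum needed to supply the demanded load at a given spectral efficiency."""
    return load_mbps / eff_mbps_per_mhz

load = demanded_load_mbps(6, 5.0)        # 30 Mbps demanded in the cell
mhz = required_spectrum_mhz(load, 1.5)   # 20 MHz needed with a classical SiSo antenna
```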

Figure 34 Is the sky the limit to demand? Ultimately, the limit will come from the practical and economic limits to how much can be supplied at the cellular level (e.g., spectral bandwidth, antenna technology, and software features …). Quality will degrade as the supply limit is reached, resulting in demand adaptation, hopefully settling at a demand-supply (metastable) equilibrium.

Cellular planners have many heuristics to work with that together trigger when a given radio cell is required to be expanded to provide more capacity, which can be achieved by software (licenses), hardware (expansion/replacement), civil works (sectorization), and geographical (cell split) means. Going northbound, up from the edge of the radio network through the transmission chain, such as fronthaul, backhaul, aggregation, and core transport networks, additional investments may be required to expand the supplied capacity at a given load level.

As discussed, mobile access and transport together can easily make up more than half of a mobile operator’s planned and budgeted Capex.

So, to know whether demand triggers new expansions, and thus capital demand as well as the resulting operational expenses (Opex), we really need to look at the supply side, that is, what our current mobile network can offer. When it cannot provide a targeted level of quality, how much capacity do we have to add to the network to reach a given level of service quality?

The Cellular Supply.

Cellular capacity in units of throughput (T_{supply}), given in bits per second, the basic building block of quality, is relatively easy to estimate. The cellular throughput (per unit cell) is given by the amount of frequency spectrum committed to the air interface, as supported by your radio access network and antennas, multiplied by the so-called spectral efficiency in bits per Hz per cell. The spectral efficiency depends on the antenna technology and the underlying software implementation of the signal-processing schemes governing the details of receiving and sending signals over the air interface.

T_{supply} can be written as follows;

T_{supply} \; = \; B \; \times \; \eta_{eff} \; = \; B \; \times \; n_{eff} \; \times \; \eta_{SISO}

With B being the spectral bandwidth committed to the cell in MHz (Megahertz), \eta_{eff} the effective spectral efficiency in Mbps/MHz per cell, and Mbps being megabits (a million bits) per second.

For example, if we have a site covering 3 cells (or sectors) with 100 MHz deployed @ 3.6 GHz (B) on a 32T32R advanced antenna system (AAS) with an effective downlink (i.e., from the antenna to the user) spectral efficiency \eta_{eff} of ca. 20 Mbps/MHz/cell (i.e., \eta_{eff} = n_{eff} \times \eta_{SISO}), we should expect an average cell throughput in the order of 1,000 Mbps (1 Gbps), allowing for the downlink share of the TDD frame in the 3.6 GHz band.
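A minimal sketch of the supply relation T_supply = B × η_eff = B × n_eff × η_SISO. The n_eff values below are illustrative assumptions; note that for a TDD band such as 3.6 GHz, only the downlink share of the frame contributes to downlink supply, which is one way to reconcile the ca. 1 Gbps of the example above:

```python
# Cell throughput supply sketch. η_SISO ≈ 1.5 Mbps/MHz/cell is the classical
# antenna baseline from the text; n_eff is the effective gain factor of the
# antenna system (the 13.3× value below is an assumed figure implying
# η_eff ≈ 20 Mbps/MHz/cell, as in the text's 32T32R example).

ETA_SISO = 1.5  # Mbps per MHz per cell, classical SiSo baseline

def cell_supply_mbps(bandwidth_mhz: float, n_eff: float,
                     eta_siso: float = ETA_SISO) -> float:
    """Average throughput a single cell can supply, in Mbps."""
    return bandwidth_mhz * n_eff * eta_siso

siso = cell_supply_mbps(20, n_eff=1.0)    # 30 Mbps: classical SiSo on 20 MHz
aas = cell_supply_mbps(100, n_eff=13.3)   # ≈ 2,000 Mbps before the TDD downlink share
```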

The capacity supply formula applies at the cell level, providing sizing and thus investment guidance, and also as we move northbound up the mobile network, where traffic aggregates and concentrates towards the core and the connection points to the external internet.

From the demand planning (e.g., number of customers, types of services sold, etc.) that would typically come from the Marketing and Sales department within the telco company, the technical team can translate those plans into network demand and then calculate what they would need to do to cope with the customer demand at an agreed level of quality.

As Figure 35 above illustrates, operators provide cellular capacity by deploying their spectral assets on an appropriate antenna type and system-level radio access network hardware and software. A competitive edge can arise from a superior spectrum position (balanced across low, medium, and high-frequency bands), better or more aggressive antenna technology, and from utilizing the radio access supplier(s)’ features (e.g., signal-processing schemes). Usually, the least economical option is densifying the operator’s site grid where needed (on a macro or micro level).

Figure 36 above shows the various options available to the operator to create more capacity and quality. In terms of competitive edge, more spectrum than competitors, provided it is being used and is balanced across low, medium, and high bands, provides the surest path to becoming the best network in a given market, and it is difficult to copy economically for operators with substantially less spectrum. Their options would be to compensate for the spectrum deficit by building more sites and deploying more aggressive antenna technologies. The latter is relatively easy for anyone to follow and may only provide temporary respite.

An average mobile network in Western Europe holds ca. 270 MHz of spectrum (60 MHz low-band below 1800 MHz and 210 MHz medium-band below 5 GHz) distributed over an average of 7 cellular frequency bands. It is rare to see all bands deployed in practice, and rarely uniformly across a complete network. The amount of spectrum deployed should match demand density; thus, more spectrum is typically deployed in urban areas than in rural ones. In demand-first-driven strategies, frequency bands are deployed based on actual demand, which would typically not require all bands to be deployed. This is opposed to MNOs that focus on high quality, where demand matters less and where, typically, most bands would be deployed extensively across the network. The demand-first-driven strategy tends to be the most economically efficient as long as the resulting cellular quality is market-competitive and customers are sufficiently satisfied.

In terms of downlink spectral capacity, we have an average of 155 MHz, or 63 MHz excluding the C-band contribution. Overall, this allows for a downlink supply of a minimum of 40 GB per hour (assuming a low effective spectral efficiency, little advanced antenna technology deployed, and not all medium-band being utilized, e.g., C-band and 2.5 GHz). Out of the 210 MHz mid-band spectrum, 92 MHz falls in the 3.x GHz (C-band) range and is thus still very much in the process of being deployed for 5G (as of June 2022). The C-band has, on average, increased the spectral capacity of Western European telcos by 50+% and, with its very high suitability for deployment together with massive MiMo and advanced antenna systems, has effectively more than doubled the total cellular capacity and quality compared to pre-C-band deployments (using a 64T64R massive MiMo as a reference with today’s effective spectral efficiency; it will only get better as time goes by).

Figure 37 (above) shows the latest Ookla and OpenSignal DL speed benchmarks for Western European MNOs (light blue circles); comparing these with their spectrum holdings below 3.x GHz indicates that there may be a lot of unexploited cellular capacity and quality to be unleashed in the future, although this would not be for free and would likely require substantial additional Capex if deemed necessary. The ‘Expected DL Mbps’ (orange solid line, *) assumes the simplest antenna setup (e.g., classical SiSo antennas) and that all bands are fully used. On average, MNOs above the benchmark line have more advanced antenna setups (higher-order antennas) and full (or close to full) spectrum deployment. MNOs below the benchmark line likely have spectrum assets that have not been fully deployed yet and/or have “under-prioritized” their antenna technology infrastructure. The DL spectrum holding excludes C-band and mmWave spectrum. Note: there was a mistake in the original chart published on LinkedIn, as the data was depicted against the total spectrum holding (DL+UL) and not only DL. Data: 54 Western European telcos.

Figure 37 illustrates Western European cellular performance across MNOs, as measured by DL speed in Mbps, and compares this with a theoretical estimate of the performance they could have if all the DL spectrum in their portfolio (not considering C-band, 3.x GHz) had been deployed on a fairly simple antenna setup (mainly SiSo and some 2T2R MiMo) with an effective spectral efficiency of 0.85 Mbps per MHz. It is worth pointing out that this is the performance expected of 3G HSPA without MiMo. We observe that 21 telcos are above the solid (orange) line, and 33 have an actual average measured performance below the line, in many cases substantially so. Being above the line indicates that most spectrum has been deployed consistently across the network and that more advanced antennas, e.g., higher-order MiMo, are in use. Being below the line does (of course) not mean that networks are badly planned or not appropriately optimized. Not at all. Choices are always made in designing a cellular network, often dictated by the economic reality of a given operator, the geographical demand distribution, clutter particularities, or the modernization cycle an operator may be in. The most obvious reasons why some networks operate well under the solid line are: (1) not all spectrum is used everywhere (less in rural and more in urban clutter); (2) rural configurations are simpler and thus provide less performance than urban sites. We have (in general) more traffic demand in urban areas than in rural ones, unless a rural area turns seasonally touristic, e.g., Lake Balaton in Hungary in the summer. It is simply good technology-planning methodology to prioritize demand in Capex planning, and it makes very good economic sense; and (3) many incumbent mobile networks have a fundamental grid based on (GSM) 900 MHz, later in-filled for (UMTS) 2100 MHz, which typically has a lower site density than networks based on (DCS) 1800 MHz. However, site density differences between competing networks have increasingly been leveled out and are no longer a big issue in Western Europe (at least).
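The benchmark line of Figure 37 can be sketched as follows (the 0.85 Mbps/MHz figure comes from the text; the classification helper is my own illustrative addition):

```python
# Benchmark-line sketch: expected DL speed if all DL spectrum (excluding
# C-band) were deployed with a simple antenna setup at an effective
# spectral efficiency of 0.85 Mbps/MHz (≈ 3G HSPA without MiMo).

ETA_SIMPLE = 0.85  # Mbps per MHz, simple SiSo/2T2R setup

def expected_dl_mbps(dl_spectrum_mhz: float) -> float:
    """Expected average DL speed for a fully deployed DL spectrum holding."""
    return dl_spectrum_mhz * ETA_SIMPLE

def position_vs_benchmark(measured_mbps: float, dl_spectrum_mhz: float) -> str:
    """Classify an MNO relative to the benchmark line."""
    return "above" if measured_mbps > expected_dl_mbps(dl_spectrum_mhz) else "below"

# With the text's Western European average of 63 MHz DL (excl. C-band):
benchmark = expected_dl_mbps(63)  # ≈ 53.6 Mbps
```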

Overall, I see this as excellent news. For most mobile operators, the spectrum portfolio and the available spectrum bandwidth are not limiting factors in coping with demanded capacity and quality. Operators have many network & technology levers to work with to increase both quality and capacity for their customers. Of course, subject to a willingness to prioritize their Capex accordingly.

A mobile operator has several options for supplying the cellular capacity and quality demanded by its customer base.

  • Acquire more spectrum bandwidth by buying it in an auction, buying from a 3rd party (including via M&A), asymmetric sharing, leasing, or trading (if regulatorily permissible).
  • Deploy a better (more spectrally efficient) radio access technology, e.g., (2G, 3G) → (4G, 5G) or/and 4G → 5G, etc. Benefits will only be seen once a critical mass of customer terminal equipment supporting the new technology has been reached on the network (e.g., ≥20%).
  • Upgrade the antenna infrastructure from lower-order passive antennas to higher-order active antenna systems. In the same category would be ensuring that smart, efficient signal-processing schemes are used on the air interface.
  • Build a denser cellular network where capacity demand dictates it or where coverage does not support the optimum use of higher frequency bands (e.g., 3.x GHz or higher).
  • Deploy small cells in areas where macro-cellular build-out is no longer possible or prohibitively costly. Though small cells scale poorly economically and may really be the last resort.

Sectorization with higher-frequency massive MiMo may be an alternative to small-cell and macro-cellular additions. However, sectorization requires that it is possible civil-engineering-wise (e.g., construction and structural stability), permissible by the landlord/towerco, and, finally, economical compared to a new site build. Adding more than the usual 3 sectors to a site further boosts site spectral efficiency as more antennas are added.

Acquiring more spectrum requires that such spectrum is available, either via a regulatory offering (public auction, public beauty contest) or via alternative means such as 3rd-party trading, leasing, asymmetric sharing, or acquiring an MNO (in the market) with spectrum. In Western Europe, the average cost of spectrum is in the ballpark of 100 million Euro per 10 million population for 20 MHz of low-band or 100 MHz of medium-band spectrum. Within the European Union, recent auctions provide a 20-year usage-rights period before the spectrum has to be re-auctioned. This policy is very different from, for example, that of the USA, where spectrum rights are bought and ownership secured in perpetuity (sometimes subject to certain conditions being met). For Western Europe, apart from the mmWave spectrum, there will not be many new spectrum acquisition opportunities in the public domain in the foreseeable future.
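The spectrum-cost rule of thumb translates into the industry's usual EUR/MHz/pop metric as follows (a sketch; the function name is my own):

```python
# Spectrum-cost rule of thumb from the text: ~100 million EUR per 10 million
# population for 20 MHz low-band, or for 100 MHz medium-band (Western Europe),
# expressed as EUR per MHz per head of population.

def eur_per_mhz_pop(price_eur: float, mhz: float, population: float) -> float:
    """Normalized spectrum price in EUR/MHz/pop."""
    return price_eur / (mhz * population)

low_band = eur_per_mhz_pop(100e6, 20, 10e6)   # 0.5 EUR/MHz/pop for low-band
mid_band = eur_per_mhz_pop(100e6, 100, 10e6)  # 0.1 EUR/MHz/pop for medium-band
```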

This leaves mobile operators with the other options listed above. Re-farming spectrum away from a legacy technology (e.g., 2G or 3G) in support of a more spectrally efficient access technology (e.g., 4G and 5G) is possibly the most straightforward choice. In general, it is the least costly choice, provided that the more modern options can support the very few customers left behind. Before retiring either 2G or 3G, operators need to be aware that as long as not all terminal equipment supports Voice-over-LTE (VoLTE), they need to keep either 2G or 3G (but not both) for 4G circuit-switched fallback for legacy voice services. The technologist should be prepared for substantial pushback from the retail and wholesale business, as closing down a legacy technology may lead to significant churn in the legacy customer base, although, in absolute terms, the churn exposure should be much smaller than the overall customer base; otherwise, it would not make sense to retire the legacy technology in the first place. Suppose the spectral re-farming is towards a new technology (e.g., 5G). In that case, immediate benefits may not occur before a critical mass of capable devices makes use of the re-farmed spectrum. The Capex impact of spectral re-farming tends to be minor, with possibly some licensing costs offset by net savings from retiring the legacy technology. Most radio departments within mobile operators, supplier experts, and managed service providers have gained much experience in this area over the last 5 – 7 years.

Another avenue that should be pursued is upgrading or modernizing the radio access network with more capable antenna infrastructure, such as higher-order massive-MiMo antenna systems. As Prof. Emil Björnson has also pointed out, the available signal-processing schemes (e.g., for channel estimation, pre-coding, and combining) will be essential for the ultimate gain that can be achieved. This will result in a leapfrog increase in spectral efficiency, directly boosting air-interface capacity and the quality that the mobile customer can enjoy. Over a 20-year period, this activity is likely to result in a capital demand in the order of 100 million euros for every 1,000 sites being modernized, assuming a modernization (or obsolescence) cycle of 7 years. In other words, within the next 20 years, a mobile operator will have undergone at least 3 antenna-system modernization cycles. It is important to emphasize that this does not (entirely) cover the likely introduction of 6G within those 20 years. Operators face two main risks in their investment strategy. The first is that they take a short-term view of their capital investments and customer-demand projections and, as a result, invest in infrastructure solutions insufficient to meet future demand, forcing accelerated write-offs and re-investments. The second significant risk is that an operator invests too aggressively upfront in what appears to be the best solution today, only to find substantially better and more efficient solutions in the near future, which more cautious competitors could then deploy to achieve substantially higher quality and investment efficiency. Given the lack of technology maturity and the very high pace of innovation in advanced antenna systems, the right timing is crucial but not straightforward.
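The modernization arithmetic can be sketched as follows (the 7-year cycle and the ~100 million EUR per 1,000 sites over 20 years are the text's assumptions; the helper functions are my own):

```python
# Antenna-modernization Capex sketch: ~100 MEUR per 1,000 sites over a
# 20-year horizon, with a 7-year modernization (obsolescence) cycle,
# i.e., at least 3 modernization cycles within the horizon.

import math

def modernization_cycles(horizon_years: int, cycle_years: int) -> int:
    """Number of modernization cycles within the planning horizon."""
    return math.ceil(horizon_years / cycle_years)

def modernization_capex_eur(sites: int,
                            eur_per_1000_sites: float = 100e6) -> float:
    """Total antenna-modernization capital demand over the horizon."""
    return sites / 1000 * eur_per_1000_sites

cycles = modernization_cycles(20, 7)   # 3 cycles in 20 years
capex = modernization_capex_eur(5000)  # 500 million EUR for a 5,000-site network
```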

Last, and maybe least, the operator can choose to densify its cellular grid by adding one or more macro-cellular sites or by adding small cells across existing macro-cellular coverage. Before a new site (or sites) can be built, the operator or the serving towerco needs to identify suitable locations and subsequently obtain a permit to establish the new site. In urban areas, which typically have the highest macro-site densities, getting a new permit may be very time-consuming, with a relatively high likelihood of not being granted by the municipality. Small cells may be easier to deploy in urban environments than macro sites. For operators using a towerco to provide the passive site infrastructure, the cost of permitting and building the site, including materials (e.g., steel and concrete), is a recurring operational expense rather than a Capex charge. Of course, active equipment remains a Capex item for the mobile operator.

The conclusion I reach above is largely consistent with the conclusions of New Street Research in their piece “European 5G deep-dive” (July 2021). There is plenty of unexploited spectrum with the European operators and even more opportunity in migrating to more capable antenna systems, such as massive MiMo and active advanced antenna systems. Above 3 GHz, there are also other spectrum opportunities without having to consider millimeter-wave spectrum and 5G deployment in the high-frequency spectrum range.

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. There should be no doubt that without the support of Russell Waller (New Street Research), this blog would not have been possible. Thank you so much for providing much of the data that lays the ground for much of the Capex analysis in this article. Of course, a lot of thanks go out to my former Technology and Network Economics colleagues, who have been a source of inspiration and knowledge. I cannot get away without acknowledging Maurice Ketel (who for many years led my Technology Economics Unit in Deutsche Telekom; I respect him above and beyond), Paul Borker, David Haszeldine, Remek Prokopiak, Michael Dueser, Gudrun Bobzin, as well as many, many other industry colleagues who have contributed valuable insights, discussions & comments throughout the years. Many thanks to Paul Zwaan for a lot of inspiration, insights, and discussions around IT architecture.

Without executive leadership’s belief in the importance of high-quality techno-financial models, I have no doubt that I would not have been able to build up the experience I have in this field. I am forever thankful for the trust, and for making my professional life super interesting and not just a little fun, to Mads Rasmussen, Bruno Jacobfeuerborn, Hamid Akhavan, Jim Burke, Joachim Horn, and last but certainly not least, Thorsten Langheim.

FURTHER READING.

  1. Kim Kyllesbech Larsen, “The Nature of Telecom Capex.” (July, 2022). My first article laying the ground for Capex in the Telecom industry. The data presented in this article is largely outdated and remains for comparative reasons.
  2. Kim Kyllesbech Larsen, “5G Standalone European Demand Expectations (Part I).”, (January, 2022).
  3. Kim Kyllesbech Larsen, “RAN Unleashed … Strategies for being the best (or the worst) cellular network (Part III).”, (January, 2022).
  4. Tom Copeland, Tim Koller, and Jack Murrin, “Valuation”, John Wiley & Sons, (2000). I regard this as my “bible” when it comes to understanding enterprise valuation. There are obviously many finance books on valuation (I have 10 on my bookshelf). Copeland’s book is the best imo.
  5. Stefan Rommer, Peter Hedman, Magnus Olsson, Lars Frid, Shabnam Sultana, and Catherine Mulligan, “5G Core Networks”, Academic Press, (2020, 1st edition). Good account for what a 5G Core Network entails.
  6. Jia Shen, Zhongda Du, Zhi Zhang, Ning Yang and Hai Tang, “5G NR and enhancements”, Elsevier (2022, 1st edition). Very good and solid account of what 5G New Radio (NR) is about and the considerations around it.
  7. Wim Rouwet, “Open Radio Access Network (O-RAN) Systems Architecture and Design”, Academic Press, (2022). One of the best books on Open Radio Access Network architecture and design (honestly, there are not that many books on this topic yet). I like that the author, at least as an introduction, makes the material reasonably accessible even to non-experts (which, tbh, is also badly needed).
  8. Strand Consult, “OpenRAN and Security: A Literature Review”, (June, 2022). Excellent insights into the O-RAN maturity challenges. This report focuses on the many issues around the open-source software-based development that is a major part of O-RAN, and some deep concerns around what that may mean for the security of what should be regarded as critical infrastructure. I warmly recommend their “Debunking 25 Myths of OpenRAN”.
  9. Ian Morris, “Open RAN’s 5G course correction takes it into choppy waters”, Light Reading, (July, 2023).
  10. Hwaiyu Geng P.E., “Data Center Handbook”, Wiley (2021, 2nd edition). I have several older books on the topic that I have used for my models. This one brings the topic of data center design up to date. Also includes the topic of Cloud and Edge computing. Good part on Data Center financial analysis. 
  11. James Farmer, Brian Lane, Kevin Bourg, Weyl Wang, “FTTx Networks, Technology Implementation, and Operations”, Elsevier, (2017, 1st edition). There are some books covering FTTx deployment, GPON, and other alternative fiber technologies. I like this one in particular as it covers hands-on topics as well as basic technology foundations.
  12. Tower companies overview, “Top-12 Global 5G Cell Tower Companies 2021”, (Nov. 2021). A good overview of international tower companies with a meaningful footprint in Europe.
  13. New Street Research, “European 5G deep-dive”, (July, 2021).
  14. Prof. Emil Björnson, https://ebjornson.com/research/ and references therein. Please take a look at many of Prof. Björnson’s video presentations (e.g., many brilliant YouTube presentations that are fairly accessible).

Fixed Wireless Access in a Modern 5G Setting – What Does it Bring That We Don’t Already Have?

Back in 2014, working at Deutsche Telekom AG and responsible for Technology Economics, we looked at alternatives to fiber deployment in Germany (and other markets). It was clear that deploying fiber in Germany would be massively costly and take a very long time. As an incumbent relying solely on xDSL, there was unease in general, and in particular with observing that HFC (hybrid fiber-coaxial) providers were gaining a lot of traction in key markets around Germany. There was an understanding that fiber would be necessary to secure the longer-term survivability of the business; even as far back as 2011, this was clear to some visionaries within Deutsche Telekom. My interest at the time was whether fixed wireless access (FWA) solutions could be deployed faster (yes, it could and can, at least in Germany) and bridge the time until fiber was sufficiently deployed and had an economically attractive uptake, allowing an operator to retire the FWA solution or re-purpose it for normal mobile access. It did not make economic sense to deploy FWA everywhere … by far not. Though we found that in certain suburban and rural areas, it could make sense to deploy FWA solutions. … So why did it not happen? At the time, the executives responsible for fixed broadband deployment (no, there was no converged organization at the time) were nervous that “their” fiber Capex would be re-prioritized to FWA and thus taken away from their fiber deployment, resulting in even further delays in fiber coverage in Germany. Also, they argued that the write-off period of fiber investments (e.g., 15 – 20+ years) is much longer than that of FWA (e.g., 5 – 7 years), and that when factoring in the useful lifetime of fiber versus FWA, it made no sense to deploy it (of course ignoring that we could deploy FWA within 6 months, while fiber in that area might not be present within the next 10+ years ;-).

I learned three main lessons (a lot more, actually … but that’s for my memoirs if I remember;-)

  • FWA can be made economically favorable but not universally so everywhere.
  • FWA can be a great instrument to bridge the time until fiber deployment has arrived and a given demand (uptake) in an area exists (you just need to make sure your FWA design accounts for the temporary nature of the purpose of your solutions).
  • FWA at high frequencies (e.g., >20 GHz) is not “just” an overlay on an MNO’s existing mobile network. The design should be considered a standalone network, with maximum re-use of any existing infrastructure, with line-of-sight (LoS) to customers and LoS redundancy built in (i.e., multiple redundant paths to a customer).

We are now 10+ years further on (and Germany is still Europe’s laggard in terms of fiber deployment and will remain so for many years to come), and the technology landscape that supports both fiber and fixed wireless access is much further along as well…

In the following, it is always good to keep in mind that

“Even if your solution appears less economically attractive than something else, if that something else is not available or present, your solution may be an interesting opportunity to capture growth for your business. At least within a given window of opportunity.”

and, so it begins …

FIXED WIRELESS ACCESS (FWA).

In this blog, I will define Fixed Wireless Access (FWA) as a service that provides a fixed-like, wireless-based internet broadband connection to a household. FWA bypasses the need for a last-mile fixed wired connection from a nearby access point (e.g., a street cabinet) to a customer’s household, thus substituting the need for a fixed copper, coax, or fiber last-mile connection. I will, in general, position FWA in the modern context of 5G, which may enable existing MNOs to bridge the time until they have fiber coverage in, for example, rural and sub-urban areas. Or, as the thinking goes (for some), to completely avoid the need for costly and (allegedly) less profitable deployment of fiber in less household-dense areas, where more kilometers of fiber need to be deployed to reach the same number of households compared to an urban or dense-urban area. Of course, companies may also be tempted to build FWA-dedicated ISP networks operating in the mmWave range (i.e., >20 GHz) or in the so-called mid-band range (e.g., ≥ 2.5 GHz, C-band, …) to provide higher-quality internet services to sub-urban and rural customers, where fiber coverage and connectivity may be comparably challenged in terms of economics and time to availability.

Figure 1 below provides an overview and comparison of the various ways we connect our customers’ homes, with the exception of LEO satellite and stratospheric drone-based connectivity solutions (that is another very interesting story). It illustrates terrestrial network-based connectivity to the household, either via a fixed line (buried or aerial) or wirelessly.

Figure 1 illustrates 3 different ways to connect a household. The first (Household A) is the “normal” fixed connection, where the last mile from the street cabinet is a physical connection entering the customer’s household, either buried or via a street pole (an aerial connection). In the second situation (Household B), the service provider has no fixed assets readily available in a given area but has mobile radio access network infrastructure in the proximity of the household. The provider may choose to offer Fixed Mobile Substitution (FMS), using its existing mobile infrastructure and spectrum capacity to offer households a fixed-like service via an indoor modem capable of receiving the radio frequencies on which the FMS service is offered. Alternatively, and better for mobile capacity in general (as well as providing a better customer experience), the service can be offered via an outdoor customer premise antenna (CPA) connecting to an indoor CPE. If the FMS service is provided via a CPA, it may be called, or identified as, a fixed wireless access (FWA) service. In this connection scenario, cellular spectrum resources are shared between the household FMS customers and the mobile customer base. The third connectivity scenario (Household C) is where a dedicated high-speed wireless link is established between a service provider’s remote advanced antenna system (and its associated radio access network equipment) and the household’s (typically outdoor) customer premise antenna. Both infrastructure and spectral resources are dedicated to providing fixed-like services that are competitive with fixed broadband alternatives. This is fixed wireless access, or FWA. In a modern setting, service providers would offer fiber-like speeds (e.g., >100 Mbps) with dedicated mmWave 5G (SA) infrastructure. However, it is also possible to provide better-than-average mobile broadband services over a CPA and an operator’s mobile network (as is often done with 4G and/or cellular 5G NSA).

For the wireless connection between the service provider’s access network and the household, we have several options:

(1) The Fixed Wireless Access (FWA) network provides a dedicated wireless link between the service provider’s network and the customer’s home. To maximize the customer experience, an outdoor customer premise antenna (CPA) would typically have to be installed on the exterior of the household, offering line-of-sight to the provider’s own advanced antenna residing on its access network infrastructure. The provider will likely dedicate a sufficient amount of wireless spectrum bandwidth (in MHz) to provide a broadband service competitive with fixed alternatives. In a 5G SA (standalone) setting, this could be cellular spectrum in the mid-band range (≥ 2.5 – 10 GHz) and/or mmWave spectrum above 20 GHz. An access network providing fixed wireless services in the mid-band spectrum would typically overlay an existing mobile network (if the provider is also an MNO), possibly with site additions allowing for higher-availability services to households as well as increasing the scale and potential of connecting households due to an increased LoS likelihood. If the service relies on mmWave frequency bands, I would, in general, expect that a dedicated network infrastructure would have to be built to provide sufficient household scale, reliability, and availability in the covered broadband service area. This may (also) rely on existing mobile network infrastructure if the provider is an established MNO, or it may be completely standalone. My rule of thumb is that for every household subscribing to the FWA service, I need at least 2, preferably 3, individual line-of-sight paths to the household CPA. Most conventional cellular network designs (99+% of all there are out in the wild) cannot offer that kind of coverage solution.

The customer premise antenna (CPA) connects to the household’s customer premise equipment (CPE). The CPE provides WiFi coverage within the household either as a single unit or as part of a meshed WiFi household network.

(2) A service that is based on Fixed Mobile Substitution (FMS) utilizes existing cellular resources, such as infrastructure and spectrum bandwidth, to provide a service to a cellular-based (e.g., 4G/5G) customer premise equipment (CPE) residing inside a customer’s household. The CPE connects to the mobile network (via 4G and/or 5G) and enjoys the quality of the provider’s mobile network. Inside the household, the CPE offers WiFi coverage that is utilized by the household’s occupants. As existing mobile resources are shared with regular mobile customers, who may even be in the same household as the FMS solution itself, the service provider needs to carefully balance capacity and quality between the two customer segments. The household segment is typically the greedy one (with respect to network resources and service plans), impacting network resources substantially more than the regular mobile user (e.g., usually 20+ to 1).

Figure 2 summarizes the various possibilities for connecting a household to the internet as well as to media content such as linear and streaming TV.

FWA has been in the telco and ISP toolbox for many years in one form or another. The older (or, let’s put it nicer, the experienced) reader will remember that a decade ago, many of us believed that WiMax (Worldwide Interoperability for Microwave Access) was the big thing that would solve all the ailments (& failings) of 3G, maybe even becoming our industry’s de facto 4G standard. WiMax promised up to 1 Gbps for a fixed (wireless) access base station and up to around 100 Mbps at low mobility (i.e., <50 km per hour). As we know today, it was not to be.

FAST FORWARD TO TODAY & TOMORROW WITH 5G AND FIBER SERVICES.

GSMA (GSM Association, the mobile industry interest group) has been fairly bullish on the advantages and opportunities of 5G-based Fixed Wireless Access (5G FWA), alleging significant momentum behind FWA: (1) 74+ broadband service providers have launched FWA services globally, (2) 40 million 5G FWA subscribers are expected by 2025 (note that globally, as of October 2022, there were 5.5 billion unique mobile subscribers, so 5G FWA would amount to <1% of unique subscribers), and, last but not least, (3) they expect up to 80% cost savings versus fiber to the home (FTTH) at 100 Mbps downlink. GSMA lists more advantages, but the three here are maybe the most important.

According to GSMA, roughly 275+ million people in Western Europe are expected to subscribe to 5G by 2025, representing ca. 140 million unique 5G households. Applying the household scaling between Western Europe and the global total of 40 million 5G FWA households, one should expect Western Europe to capture between 4 and 5 million 5G FWA households, or ca. 2.5% FWA household penetration, by 2025 (see below for the details of this estimate). This FWA number also corresponds to ca. 4% of all unique 5G households, ca. 2% of all unique 5G subscribers, or ca. 1% of all unique mobile subscribers (in 2025). While 40 million globally (5 million in Western Europe) sounds like a large number, it is, to all intents and purposes, rather minuscule and underwhelming compared to the total mobile and fixed broadband market.
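The ballpark scaling above can be reproduced in a few lines. The Western European figures (275 M 5G subscribers, 140 M unique 5G households, 40 M global FWA households) are the GSMA projections quoted in the text; the global 5G subscriber base (~2.2 billion by 2025) and the total number of Western European households (~196 million) are my own closing assumptions, so treat the output as an order-of-magnitude check rather than a forecast.

```python
global_fwa_hh = 40e6     # GSMA: global 5G FWA households by 2025
we_5g_subs = 275e6       # GSMA: Western Europe 5G subscribers by 2025
we_5g_hh = 140e6         # GSMA: Western Europe unique 5G households
global_5g_subs = 2.2e9   # assumption: global 5G subscribers by 2025
we_total_hh = 196e6      # assumption: total Western European households

we_share = we_5g_subs / global_5g_subs   # WE share of global 5G, ~12.5%
we_fwa_hh = global_fwa_hh * we_share     # ~5 million 5G FWA households in WE

print(f"WE 5G FWA households:     {we_fwa_hh/1e6:.1f} M")
print(f"... of all WE households: {we_fwa_hh/we_total_hh:.1%}")  # ~2.5%
print(f"... of WE 5G households:  {we_fwa_hh/we_5g_hh:.1%}")     # ~3.6%, i.e., ca. 4%
print(f"... of WE 5G subscribers: {we_fwa_hh/we_5g_subs:.1%}")   # ~1.8%, i.e., ca. 2%
```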

The GSMA report, “The 5G FWA opportunity: series highlights” (from July 2022), also provides a 2025 projection for 5G FWA connections as a percentage of households across various countries. In Figure 3 below, find the GSMA projections with, as a comparison, the estimated fiber-to-the-home (FTTH) connections in 2025 and, for reference, the actual FTTH connections in 2021. It seems compelling to assume that 5G FWA would be an alternative to fiber at home or an HFC DOCSIS 3.1 connection. Of course, it is only possible to get a service if the technology of choice covers the household. A fiber connection to your household requires that a fiber passes in the proximity of your household; thus, the degree of fiber coverage is important in order to assess the possible fiber subscription uptake. Likewise, a 5G FWA connection requires that the household is within very good, high-quality 5G coverage of the FWA provider (or the underlying network operator). Figure 4 below provides an overview of 2021 actual and 2026 (projected) fiber-based household coverage (i.e., homes passed) percentages in Western Europe.

Figure 3 above shows GSMA 2025 projections of 5G FWA household (HH) connections vs. actual FTTH connections in 2021 and the author’s forecast of FTTH connections by 2025. Countries with no 5G FWA data shown are, according to GSMA, expected to have below 1% of households connected. The total Western Europe 5G FWA connection figure is in excess of 10 million households, versus the 4 – 5 million assessed above based on the global number of 5G FWA and unique mobile households. In most Western European markets, 5G FWA, as defined in the GSMA study, will be a niche service. Note: the FTTH connected percentages are based on total households in each country rather than homes-passed figures. Markets that have reached 80% of households are capped at that level; in all cases, it would be possible to go beyond. Sources: GSMA for 5G FWA and the OECD statistics database.
Figure 4 fiber coverage measured as a percentage of households passed across Western Europe. 2016 and 2021 are actual data based on the European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.). The 2026 & 2031 figures are the author’s own forecast based on the maximum FTTP/B deployment speed of the last 5 years. I have imposed a 95% household coverage ceiling in my deployment model. The pie charts illustrate the degree to which the fiber deployment can make use of aerial infrastructure vis-à-vis buried requirements.

If we take a look at 5G coverage, which may be an enabler for FWA services that can compete with fiber quality, it would be fairly okay to assume that most mobile operators in Western Europe would have close to full 5G population (and household) coverage. However, assessing the quality of that 5G coverage would be problematic. 5G coverage may be based on 700 MHz piggybacking on LTE (i.e., non-standalone, NSA 5G), providing nearly 100% household coverage; it may involve considerable mid-band (i.e., >2.1 GHz) 5G coverage in urban and suburban areas with a varying degree of rural coverage; or it may involve the deployment of mmWave (i.e., >20 GHz) as an overlay to the normal macro-cellular network, as a dedicated standalone fixed wireless access network, or a combination of both.

Actually, one might also think that in geographical areas where fiber coverage, or D3.1-based HFC, is relatively limited or completely lacking, 5G FWA opportunities would be more compelling due to the lack of competing broadband alternatives. If the premise is that the 5G FWA service should be fiber-like, it would require good quality 5G coverage with speeds exceeding 100 Mbps at high availability and consistency. However, if the fixed broadband service that FWA would compete with is legacy xDSL, then some of the requirements for fiber-like quality may be relaxed (e.g., 100+ Mbps, very high availability, …).

What are the opportunities, and where? Focusing on fiber deployment in Western Europe, Figure 5 illustrates homes covered by fiber and those with no fiber coverage in urban and rural areas as of 2021 (actual). The figure below also provides a forecast of home coverage and homes missing by 2026.

Figure 5 illustrates the percentage of homes fiber covered (i.e., passed) as well as the homes where fiber coverage remains to be built. The 2021 numbers are actual and based on data in the latest European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.). The 2026 data is the author’s forecast model based on the fastest fiber rollout speed of the last 5 years. 2021 household numbers (in millions of households) are added to the 2021 charts. In general, it is expected that the number of rural households will continue to decline over the period.

As Figure 5 above shows, urban fiber deployment in Europe is happening at a fast pace in most markets, and the opportunities for alternatives (at scale) may, at the same time, be seen as diminishing, apart from a few laggard markets (e.g., Austria, Belgium, Germany, UK, …). Rural opportunities for broadband alternatives (to fiber) may be viewed more optimistically, with many more households only having access to aging copper lines or relatively poor HFC.

A 5G FWA provider may need to think about the window of opportunity to return on the required investment. To address this question, Figure 6 below provides a projection of when at least 80% of households will be connected in urban and rural areas, showing that in some markets, rural areas may remain attractive for longer than the corresponding urban areas. Further, if one views 5G FWA as a bridge to fiber availability, there may be many more opportunities for FWA than Figures 5 and 6 allude to.

Figure 6 shows the projected years until 80% of households have been covered, using the maximum deployment pace of the last 5 years. The left side (a) illustrates the urban fiber deployment and (b) the rural fiber deployment. The 80% limit is somewhat arbitrary and, particularly in urban areas, is likely to be exceeded once reached (assuming further deployment is economical). Most commercial (unsubsidized) deployment focus has been in urban areas, while rural areas are often only deployed if subsidies are made available by the European Union or local government.
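The projection logic behind Figure 6 can be sketched in a few lines. This is my reading of the stated method (linear extrapolation at the fastest observed pace); the coverage level and pace used below are illustrative inputs, not actual country data.

```python
def years_to_target(coverage: float, max_pace: float, target: float = 0.80) -> float:
    """Years until `target` household fiber coverage is reached, assuming
    deployment continues at the fastest pace observed over the last 5 years
    (max_pace, in percentage points of households per year)."""
    if coverage >= target:
        return 0.0  # target already reached
    return (target - coverage) / max_pace

# Illustrative: a rural area at 35% coverage, best observed pace of 5 pp/year.
print(f"{years_to_target(0.35, 0.05):.1f} years to 80% coverage")
```

Since the extrapolation uses the historical *maximum* pace, these figures are best read as a lower bound on the remaining window of opportunity for fiber alternatives.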

Looking at the opportunity for fiber alternatives going forward, Figure 7 below provides the quantum of households that remain to be covered by fiber. This lack of fiber also creates opportunities for broadband alternatives, such as 5G FWA and maybe non-terrestrial broadband solutions (e.g., Starlink, OneWeb, …). Cellular operators with a good depth of site coverage should be able to provide competitive alternatives to existing legacy fixed copper services, at least as long as LoS is not required, particularly in some rural areas around villages and towns, depending on the site density and spectrum commitment. Cellular networks may not have much capacity and quality to spare in urban areas for fixed mobile substitution (FMS), at least if designed economically. This said, and depending on the cellular and fixed broadband competitive environment, FMS-based services (4G and 5G) may be able to bridge the short time until fiber becomes available in an area. This can be used by an incumbent telco that is in the process of migrating its aging copper infrastructure to fiber, or as a measure by competing cellular operators to tease copper customers away from that incumbent. Hopefully, those cellular telcos have also thought about migrating FMS customers off their cellular networks to a permanent fixed broadband solution, such as fiber (or a dedicated mmWave-based FWA service).

Figure 7 estimates the remaining households in (a) urban and (b) rural areas in 2023 and 2026. It may be regarded as a measure of the remaining potential for alternative (to fiber) broadband services. Note: the scale of urban and rural households remaining is different.

As pointed out previously, GSMA projects ca. 5 million 5G FWA households in Western Europe by 2025. This is less than 3 out of every 100 households, compared with fiber coverage estimated to reach around 60 out of 100 households by 2025. Given that some countries in Western Europe are lagging behind on fiber deployment (e.g., Germany, UK, Italy, … see the charts above), leaving a large part of their population without modern fixed broadband, one could have expected the number to be bigger than just a few percent. However, 5G FWA at 3.x GHz and at mmWave frequencies requires line-of-sight connections to a customer’s household to provide fiber-like quality and stability. Cellular networks were (obviously) never designed to have LoS to their customers, as the cellular frequencies (≤ 3 GHz) were sufficiently low not to be “bothered” (too much) by penetration losses. At and above 3 GHz, LoS is increasingly required if a fiber-like service is desired.

Another aspect that is often under-appreciated or flat-out ignored (particularly by cellular-minded marketing & sales professionals) is the need for an exterior household customer premise antenna (CPA) that allows a household to pick up the FWA signal at a higher quality (compared to an indoor gateway antenna, due to penetration loss) and with minimum network interference, which would otherwise reduce overall quality and capacity in the cellular network (and coincidentally hurt the normal cellular user as well as other FWA customers). The reason for this neglect is, in my opinion, that it is (allegedly) more difficult to sell such a product to cellular-minded customers, and to cellular-minded salespeople as well. It may also increase the cost of technical support due to more complex installation procedures (compared to just turning on a cellular-WiFi modem box inside the home), and it may result in higher ongoing customer service costs due to more components compared to either a cellular phone or a cellular modem.

THE ECONOMICS.

GSMA Intelligence compared the total cost of ownership (TCO) of a dedicated 5G FWA mmWave-based connection with that of fiber-to-the-home (FTTH) for an MNO with an existing 5G network in Europe. It appears that GSMA’s TCO model(s) are rich in detail regarding the underlying traffic models and cost drivers. Moreover, it would also appear that their TCO analysis is (at least at some level) based on an assumed kilometer-based TCO benchmark. It is unclear to me whether Opex has been considered, though given that the analysis is a TCO, I assume it was.

GSMA (for Europe) found that compared to an FTTH deployment, 5G FWA is 80% cheaper in rural areas, 60% cheaper in suburban areas, and 35% cheaper in urban areas.

My initial thought, without doing any math on the GSMA results, was that I could (easily) imagine 5G FWA requiring less absolute Capex than deploying fiber to the home, at least for buried fiber deployment. I would be less confident in this result when it comes to aerial fiber deployment, but maybe it still holds. However, when considering the Opex that 5G FWA incrementally contributes, I would be much less sure that 5G FWA would outperform FTTH, at least in rural and suburban areas where the household customer density per 5G FWA site would be very low (even before considering the opportunity based on LoS likelihood). Thus, the 5G FWA Opex scaled with the number of household subscribers may be a lot less favorable than FTTH, considering the access energy consumption and technical support costs alone. This is even before considering whether a normal rural or suburban cellular network is at all suitable (designed) for providing high-availability, high-quality fixed-like broadband services delivered at 3.x GHz or mmWave frequencies (which in rural and suburban areas may be even more problematic on existing cellular networks).

I would generally not expect the existing rural/suburban cellular network to be remotely adequate to permanently replace the need for fiber-connected homes. We would most likely need to densify (add new sites) to ensure high-quality, high-availability connectivity to customers’ premises. This typically translates into line-of-sight (LoS) requirements between the 5G FWA antenna and the customers’ households. Also, to ensure high availability, similar to a fiber connection, we should expect the need for redundant LoS connectivity to the customers’ households (note: experience has shown that having only one LoS connection compromises availability and consistency/reliability substantially). Such redundant connectivity solutions would be even more difficult to find in existing cellular networks. These considerations would, if included, add both substantial Capex and additional Opex to the 5G FWA TCO, reducing its economic (and maybe commercial) attractiveness compared to FTTH.

HOW TO MAKE APPLES AND ORANGES MORE LIKE BANANAS.

As mentioned above, GSMA appears to base (some of) its economic conclusions on a per-kilometer (km) unit driver, that is, Euro per km. While I don’t have anything in particular against this driver, apart from it being rather one-dimensional, I believe it provides fewer insights than other, more direct drivers of income, capital, and operational cost, as well as, in the end, of a given solution’s commercial success.

I prefer to use the number of households (HH) per square kilometer, thus HH per km². For fiber deployment and household coverage, I use fiber households passed (HHP). For fiber connecting the household, providing the actual connection (“the last mile”) to the customer’s home, I use fiber households connected (HHC). The intention behind fiber coverage, what is called households passed, is to be able to connect households by extending the fiber over the “last mile” (or the last 1.61 kilometers) and start generating revenues that return on the capital investment made in the first place. Fiber coverage can be thought of as a real option to connect a home; it is obviously a necessity for connecting one. Similarly, the point of building dedicated fixed wireless access infrastructure, incrementally on existing cellular infra or from scratch, is to provide a fixed-like, high-quality wireless connection to a household.

Figure 8 The above is an illustration of fiber deployment (i.e., coverage and connection) in comparison with fixed wireless access (FWA) coverage and fixed-like wireless services rendered to households (as opposed to individual mobile devices). It also provides a bit of the rationale for why a km-based metric may capture less of the “action” than what happens within a km² and with the households within it. The most important metric in my analysis is the number of connected homes within a km², as they tend to pay for the party.

Thus, household density is a very important driver of the commercial potential, as well as of how much of the deployment capital and operational cost can be assigned to a given household in a given geographical area. Urban areas, in general, have more households than suburban and rural areas. The Capex and Opex deployed per household will therefore be lower in urban areas than in suburban and more rural urbanized areas.

Every household that is fiber covered, implying that the dwelling is within short reach of the main fiber passing by and can ultimately be connected, requires an investment, carries an associated operational cost, and generates revenue from the service supported by the connection. The fiber total cost of ownership (TCO) will depend on the number of households covered and the number of households directly connected to paying customers. For the fiber deployment economics, I am using data from my “Nature of Telecom Capex” (see Figure 16, and please note that the data is for buried fiber), which provides the capital cost of fiber coverage (households passed) and of homes fiber connected, both as a function of household density. For the fiber homes passed (HHP) economics, I am renormalizing to fiber homes connected (HHC). Thus, if 90% of homes in an area are covered (i.e., passed) and 60% of the homes passed are connected, those connected homes pay for the remaining unconnected homes (the other 40% of homes passed) under the fiber coverage. This somewhat inflates the cost of connecting a home, but it is similar to the economic logic of cellular coverage, where the cost is paid by the customers having access to a cellular site, even though the site usually covers a lot more people than customers.
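The renormalization from homes passed to homes connected can be made explicit. The unit costs below are illustrative placeholders, not the actual figures from “The Nature of Telecom Capex”; the point is the mechanics of the take-rate division.

```python
def capex_per_connected_home(capex_per_hhp: float, capex_connect: float,
                             take_rate: float) -> float:
    """Total Capex carried by each connected (paying) household.
    Connected homes absorb the coverage cost of the unconnected homes
    passed, hence the division by the take rate (the share of homes
    passed that actually connect)."""
    return capex_per_hhp / take_rate + capex_connect

# Illustrative: 600 € per home passed, 500 € to connect, 60% take rate.
print(f"{capex_per_connected_home(600, 500, 0.60):.0f} € per connected home")
```

At a 60% take rate, each connected home carries its own 600 € of coverage cost plus two-thirds of another home's, which is exactly the inflation effect described above.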

In general, fiber deployment becomes increasingly costly as the deployment moves from denser urbanized areas out to suburban and finally rural areas, as the household density decreases and more area (and more kilometers) needs to be covered to capture the same number of households as in urban areas. It is also worth keeping in mind that in countries with the possibility of substantial aerial fiber deployment (e.g., Spain, France, Portugal, Poland, etc.), this leads to a significant unit cost reduction in comparison to buried fiber deployment as we know it from Germany, the Netherlands, and Denmark. Figure 4 above provides an overview of Western European countries with aerial fiber deployment possibilities and those where buried fiber is required.

For an incremental FWA solution, an existing cellular site will be used. The site location will offer a coverage area where normal broadband cellular services can be provided. Households can, of course, be connected either via a normal mobile device or via a dedicated in-house gateway connecting to the cellular network (possibly via an exterior CPA) and offering indoor WiFi coverage. For scalable fiber-like wireless quality (e.g., stability and speed) with effective speeds exceeding 100 Mbps per household connection to be offered from a normal cellular site, we typically need line-of-sight (LoS) to a customer’s home as well as a substantial amount of dedicated spectrum bandwidth (100+ MHz) provisioned on an advanced antenna system (AAS, e.g., massive MiMo 64×64). The 5G FWA solution I am assuming requires the receiving customer to have an outdoor antenna installed on the customer’s home with LoS to the cellular site hosting the FWA solution. The solution is assumed to cover 1 km² (a range of ca. 560 meters) with an effective speed of 300 Mbps per connection. That throughput should hold up to a given connection load limit, after which the speed is expected to decrease as additional household connections are added to the cellular site.

One of the biggest assumptions (or neglects), in my opinion, of the fiber-like 5G FWA service to households at scale (honestly, a couple of % of HH is not worth discussing;-) is the ability to achieve line-of-sight between the provider’s cellular site antenna and a household’s own customer premise antenna (CPA). For 3.x GHz services, one may assume that everything will still work nicely without LoS and with an in-house gateway without a supporting exterior CPA. I agree … with that premise … if what is required is to beat an xDSL or poor HFC service. There are certainly still many places in Western Europe where that may even make good business sense to attempt (that is, competing with inferior fixed legacy “broadband” services). The way cellular networks have been designed (which obviously also has to do with the relatively low cellular frequency ranges of the past) does not support LoS at scale in urbanized environments. Some great work by Professor Akram Al-Hourani, summarised in Figure 9 below, clearly illustrates the difficulty of achieving LoS in urban areas. While I am of the opinion that the basic logic of urban LoS is straightforward, it seems that cellular folks tend to be so used to having (good) cellular coverage pretty much anywhere that this is forgotten when considering higher frequencies, which work much better at (or only with) line-of-sight.

The lack of LoS in areas targeted for 5G FWA services needs to be considered in the economic analysis, at least if you are up against fiber-like quality and your intention is to compete at scale (the same household opportunity as for fiber). For your FWA cellular-based network, this would often require some degree of densification compared to the as-is cellular network that may be in place. In my work below, I have assumed that my default 5G FWA configuration and targeted service require 6 sectors covering 1 km² of a given urbanized household density. The consequence may be that a new (greenfield) site is required in order to provide 5G FWA at scale (>10% of HH).

Figure 9 above illustrates the probability, in an urban environment, of achieving line-of-sight (LoS) between two points separated by a horizontal distance d12 and at heights h1 and h2. It is worth keeping in mind that typical urban (and rural) antenna heights will be in the range of 30 meters. To give context to the above LoS probability curves, a typical one- or two-storey dwelling will have a height of less than 10 meters, and 30 meters would probably cover 80+% of urbanized areas. The above illustration is inspired by the wonderful work of Dr. Akram Al-Hourani, Associate Professor and Telecommunication Program Manager at the School of Engineering, Royal Melbourne Institute of Technology (RMIT) (see his paper “On the Probability of Line-of-Sight in Urban Environments”). There is some relatively simple Monte Carlo simulation work that can be done to verify the above LoS probability trends, which I recommend doing.
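Such a Monte Carlo check can be sketched as follows, using the ITU-R-style built-up-area statistics that Al-Hourani’s analysis builds on. The α, β, γ parameters below are illustrative urban values of my own choosing, and the building-crossing model is a deliberate simplification; the sketch reproduces the trend of the curves, not any specific curve in Figure 9.

```python
import numpy as np

def los_probability(h1, h2, d12, alpha=0.3, beta=300, gamma=15,
                    trials=20_000, seed=42):
    """Monte Carlo estimate of the LoS probability between two points at
    heights h1, h2 (m), separated by ground distance d12 (m), in an
    ITU-R-style urban model: alpha = built-up land fraction, beta =
    buildings per km^2, gamma = Rayleigh scale of building heights (m)."""
    rng = np.random.default_rng(seed)
    # Expected number of buildings crossed by the ground projection of the ray.
    n_buildings = int(d12 * np.sqrt(alpha * beta) / 1000)
    if n_buildings == 0:
        return 1.0  # path too short to cross a building in this model
    # Height of the ray above each crossed building (evenly spaced along path).
    fractions = (np.arange(n_buildings) + 0.5) / n_buildings
    ray_heights = h1 + (h2 - h1) * fractions
    # Sample Rayleigh building heights; LoS iff every building stays below the ray.
    heights = rng.rayleigh(scale=gamma, size=(trials, n_buildings))
    return float(np.mean(np.all(heights < ray_heights, axis=1)))

# A 30 m antenna to a 10 m rooftop CPA: P(LoS) falls quickly with distance.
for d in (100, 300, 600, 1000):
    print(f"d12 = {d:4d} m: P(LoS) ≈ {los_probability(30, 10, d):.3f}")
```

Even with a 30 m antenna, the simulated LoS probability to a rooftop CPA drops steeply within a few hundred meters, which is the core of the densification argument made in the text.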

The economics of this solution are straightforward. I have an upfront investment in enabling the FWA solution with a targeted quality level. In a first approximation, and up to a predefined (and pre-agreed as sellable with Marketing) load, this investment is independent of the number of household customers I get. Of course, at some given load & quality conditions, the FWA capacity may have to be expanded by, for example, adding more capable antennas, more FWA-relevant spectrum, additional sectors, or building a new site. It should be noted that I have not considered the capacity expansion part in the analysis presented in this article. Thus, as the number of connected FWA households increases, the quality in general, and speed in particular, will decrease (typically by a non-linear process).
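A toy sketch of that non-linear degradation, under my own simplifying assumptions: a shared sector capacity (here 750 Mbps per sector from the 100 MHz design, an assumed effective spectral efficiency), a busy-hour activity factor, and fair sharing among active households, with no scheduler or interference detail.

```python
def effective_speed_mbps(n_households: int, sectors: int = 6,
                         sector_capacity_mbps: float = 750.0,
                         activity: float = 0.30,
                         target_mbps: float = 300.0) -> float:
    """Toy model: each household gets the target speed until the busy-hour
    load (concurrently active households) exhausts the shared sector
    capacity, after which capacity is split fairly among active households."""
    active = max(1.0, n_households * activity)  # expected concurrently active HH
    fair_share = sectors * sector_capacity_mbps / active
    return min(target_mbps, fair_share)

for n in (10, 50, 100, 200):
    print(f"{n:3d} connected HH: ~{effective_speed_mbps(n):.0f} Mbps")
```

In this sketch, the 300 Mbps target holds up to roughly 50 connected households per km² and then decays as 1/n, which is the kind of load limit the capacity-expansion triggers above would have to address.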

Most cellular networks have a substantial part of their infrastructure that does not generate any substantial amount of traffic. In other words, their resources are substantially under-utilized. Part of building a cellular network is to ensure that coverage is guaranteed to almost all of the population (98+%) and geography (>90%), irrespective of the expected demand. Some telcos’ obsession with public speed & performance tests & benchmarks (e.g., Umlaut, Ookla, etc.) has resulted in many networks having an “insane” (un-demanded and highly uneconomical) amount of capacity and quality in coverage areas without any particular customer demand. This typically leads to industry consultants proposing to use all that excess quality for what they may call FWA. I would call it FMS (but what’s in a name). Though, even if there may be a lot of excess cellular capacity and quality in rural and suburban areas, it is hardly fiber-like. And it is also highly unlikely to offer the same scale of opportunity in terms of households as a fiber deployment would (hint: LoS likelihood). The opportunity that is exploitable is to compete with xDSL and poor-quality HFC (if available at all). If an area has no fiber and no good-quality coax, that excess cellular capacity can be used as an alternative to xDSL.

To provide competitive fiber-like FWA services on top of an existing cellular network, we need to design it “right”. Our aim should be a speed well above 100 Mbps (e.g., 300 Mbps), with a stability and availability that require a different design principle than current legacy cellular networks. To provide a 300 Mbps wireless household connection, we could start out with a bandwidth of 100 MHz at 3.5 GHz (i.e., 5G mid-band, as an example). Later, it is possible to upgrade to, or add, a mmWave solution with even more bandwidth (e.g., the 20 to 300 GHz frequency range, with bandwidths of multiples of GHz). In order to get both stability and availability, I will assume that I need a minimum of two, but preferably three, different LoS solutions for an individual household. If no fiber or other high-quality fixed broadband competitors are around, this requirement may be relaxed (although I think a minimum of two LoS connections is required to provide a real fixed broadband alternative at frequencies above 3 GHz).

SOME COMPARATIVE RESULTS.

In my economic analysis of fiber deployment and 5G-based fixed wireless access, the total cost of ownership (TCO) is presented relative to the number of households connected. This way of presenting the economics has the advantage of relating costs directly to the customer that will pay for the service.

The Capex for fiber deployment can be broken up into two parts. The first part is the fiber coverage, also called fiber households passed (HHP). The second part is the household connection (HHC), connecting customer households to the passing main fiber, which is also what we like to call Fiber to the Home (FTTH).

The capital expense of fiber coverage is mainly driven by the civil work (ca. 70%, with the remainder being ca. 20% for the passive and ca. 10% for the active part) and relates to the distance over which the fiber is laid out (yes, there is a km driver there;-). The cost can be directly related to household density. We have an economic relationship between deployment cost and the actual household density, reflecting the difference in unit deployment cost between urban (i.e., high household density, lowest unit Capex), suburban, and rural (i.e., low household density, highest unit Capex) urbanized areas. You need fewer kilometers to cover a given number of households in dense urban areas than in a rural village with spread-out dwellings and a substantially lower household density. In my economic analysis, I re-scale the fiber coverage cost to the number of households connected (i.e., the customers). Similar to the household coverage cost, the household connection cost can likewise be related to the household density, which thus serves as a proxy for the connection cost. The details have been described in my earlier article, “The Nature of Telecom Capex.”

The capital expenses related to fixed wireless access will, by their very nature, show fairly large variation across the various components making up the total investment required to provide fixed-like services to customer households. They will depend critically on the design criteria of the service we would like to offer (e.g., max & min speed, availability, …) as well as on the cellular network’s starting point (e.g., greenfield vs brownfield, site density, the likelihood of customer household LoS, etc.). Furthermore, supplier choice, including existing supplier lock-in and corporate purchasing power, can influence the unit Capex substantially as well. Civil works and passive infrastructure costs are reasonably stable across Western Europe, with a minor dependency on a given country’s income level for the civil-work-related cost. In my experience, the largest capital expense variation will be in the active telecom equipment, depending heavily on procurement scale and supplier leverage. As I have worked in the past for a telco that is, in my opinion and experience, one of the strongest in the industry in terms of purchasing power and supplier leverage, there is a danger that my unitary Capex assessment may be biased towards the lower end of a reasonable industry-average estimate for the active equipment required. Another Capex factor associated with substantial variation is the spectrum expense used in my estimate. My 5G FWA design example requires me to deploy 100 MHz at 3.x GHz (e.g., 3.4 – 3.7 GHz). I have chosen the spectrum cost to be the median of the 3.x GHz European spectrum auctions from 2017 to 2023 (a total of 22 in my dataset). The auction median cost is found to be ca. 0.08 € per MHz-pop, and the interquartile range (as a measure of variation) is 0.08 € per MHz-pop. Using an average of 2.2 people per Western European household, assuming a telco market share of 30%, and a 100 MHz bandwidth, the spectrum cost per connected household would be ca. 60 € (per HHC).
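The ~60 € spectrum cost per connected household follows directly from those inputs; the small calculation below just makes the allocation logic explicit.

```python
price_per_mhz_pop = 0.08   # € per MHz-pop, median of 2017-2023 3.x GHz auctions
bandwidth_mhz = 100        # MHz deployed for the 5G FWA design
pop_per_hh = 2.2           # average people per Western European household
market_share = 0.30        # telco share of households

# Spectrum is paid for the whole covered population, but only the operator's
# connected households carry the cost, hence the division by market share.
cost_per_pop = price_per_mhz_pop * bandwidth_mhz          # € per person covered
cost_per_connected_hh = cost_per_pop * pop_per_hh / market_share
print(f"≈ {cost_per_connected_hh:.0f} € spectrum cost per connected household")
```

Note that with an interquartile range equal to the median (0.08 € per MHz-pop), the same arithmetic spans roughly 30 to 120 € per connected household across the auction dataset.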

In general, the cost of connecting households to fiber scales directly (strongly) with the household density, while the cost of connecting a household with fixed wireless access scales only very weakly with the household density (e.g., via CPA, CPE, technical support). However, if the criterion is that FWA has to keep delivering a very high target speed and availability as the household density increases, there will be substantial step-function increases in the required Capex and the resulting Opex. The FWA TCO per connected home becomes prohibitively costly as the household density decreases, as is the case for normal cellular services as well.

The total cost of ownership (TCO) includes both the capital as well as the operational expenses relating to the technical implementation of the fixed (FTTH) and fiber-like broadband (5G FWA) service. The various components included in the TCO analysis are summarised in Figure 10.

Figure 10 illustrates the critical parameters used in this analysis and their drivers. As explained, all drivers are re-scaled to be consistent with the household connection, rather than, for example, the number of households passed for fiber deployment or the population coverage for cellular infrastructure deployment. Note 1: for a new 5G FWA site, “Active Equipment” should include a fiber connection and the associated backhaul and possibly fronthaul transport equipment. For an existing site, this transport solution is assumed to be present and is not included in its economics.

In my analysis, I have compared the cost of implementing different FWA designs with that of connecting a household with fiber. I define a competitive 5G FWA service as one that can provide similar quality, in terms of speed and stability, to a GPON-based fiber connection. The fiber-to-the-home service is designed to bring up to 1 Gbps line speed to a household and could, with the right design, be extended to 10 Gbps with XGPON at a relatively low upgrade capital cost. The FWA service targets an effective speed of 300 Mbps. As household connections are added to the 5G FWA site, at some point it becomes unlikely that the targeted service level can be maintained unless substantial expansions are made to the 5G site, e.g., adding a mmWave solution with a jump in additional frequency spectrum (>100 MHz). This would likely lead to additional unit Capex and increases in operational expenses (e.g., energy cost, possibly technical support costs, etc.).

Figure 11 compares the TCO, Capex, and Opex of buried fiber to the home (FTTH) with that of fixed wireless access (FWA). For FTTH, it is assumed that homes connected amount to 60% of homes passed, which in turn is 90% of the actual household density. The designed FTTH network supports up to 1 Gbps. The FWA design is based on LoS to connected homes, assuming I need a total of 6 sectors: three from an existing mobile site and three from a new 5G site configured for 5G FWA only. The LoS link is closed by beamforming from a 64×64 massive MiMo antenna configuration (per sector), with a provisioned 100 MHz bandwidth at 3.x GHz, to the customer premise antenna (CPA) installed optimally on the customer household. It is assumed that 30% of covered households will subscribe to the service and that the network covers 98% of all households (with 3 LoS sectors per connected home). The FWA service targets an effective speed of up to 300 Mbps per household. As the number of connected homes increases, there will be a point where the actual serviced speed to the home falls below 300 Mbps due to the load. The € 30(±8) per month is the Western European average cost of a minimum 250 Mbps fixed broadband package. The cities indicate the equivalent household densities. Note: the FWA Opex and, consequently, its TCO differ from what I presented in a recent LinkedIn post. The reason is that I spent more time improving my FWA energy consumption model and added more energy management and steering to my economic model. Energy is one of the most important cost drivers (also for 5G in general), and I suspect much more will have to be done in this area to bring the overall power consumption substantially down compared to the existing solutions we have today.

Assuming 6 cellular sectors for my chosen 5G FWA solution, with 3 of those sectors being greenfield (abbreviated 3Si + 3Sn), Figure 11 shows that 5G FWA at scale, targeting competitive services in terms of quality and stability, is rarely a more economical solution (based on TCO) than fiber. Only at high household densities does 5G FWA become economically as attractive as fiber-to-the-home. The problem with 5G FWA at high household densities, though, is that the connection load may be too high to maintain the service design specifications (e.g., speed and availability) without substantial additional upgrades (e.g., mmWave, additional spectrum & sector densification). Even if 5G FWA is (much) more Capex efficient per connected home, the economics of fiber deployment and household fiber connections scale better with connected homes than a fixed-like wireless service does in low- and medium-density urbanized areas.

Relaxing the 5G FWA configuration will not help much, as Figure 12 below illustrates. Only in cases where a single existing site (with 3 sectors) can offer LoS at reasonable scale to customers’ households may the TCO be brought down to a range comparable to fiber to the home (for a given household density, that is). Using Professor Al-Hourani’s results, one can show that if no receiving household point (e.g., height of building + antenna) is higher than 15 meters (max. three-story buildings), the maximum share of households with LoS should be no more than 20%. Given that in more rural and suburban environments buildings are likely to be a lot lower than 15 meters in exterior height (e.g., 5 – 10 meters), the share of households with LoS (from a single point) could be substantially lower than 20%. In addition to having LoS to a household, it, of course, also needs to be your customer’s premise. Say you have a market share of 30%; one should then not expect, within a given coverage area, an addressable potential of more than maybe 6% of households (and likely a lot lower than that). This, of course, makes any dedicated 5G FWA investment prohibitively costly due to the lack of scale.
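The addressable-household arithmetic above, step by step (the 20% LoS ceiling and 30% market share are the figures quoted in the text):

```python
# Upper bound on addressable households from a single serving point.
los_fraction = 0.20    # max share of households with LoS from one point (<= 15 m roofs)
market_share = 0.30    # share of covered households that are actually your customers

addressable = los_fraction * market_share  # both conditions must hold
print(f"{addressable:.0%}")  # -> 6%
```

And since real LoS shares in low-rise suburban and rural clutter sit well below the 20% ceiling, the realistic addressable share is lower still.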

Figure 12 above illustrates a coverage area of 500 connected households and, thus, a relatively dense urban coverage area. FTTH has an uptake of 60% of homes passed, and 5G FWA has a market share of 30% within the covered area. The fiber deployment is relatively straightforward and can be based on either buried or aerial fiber; the depicted figure is based on buried fiber homes connected (FTTH). For FWA, we have several options to cover households:

  • (3Si) is based on having 3 existing sectors with LoS to all household customers, all three upgraded to support 5G FWA. Based on existing mobile networks and FWA at scale, this would be an unlikely situation.
  • (1Si) is based on one existing sector covering all connected households (in principle with LoS), upgraded to support 5G FWA. Unless the operator only picks households with good coverage (e.g., LoS to a given sector), this scenario appears even more unlikely than (3Si) at this scale of connected homes.
  • (3Si+3Sn) is based on an existing site with 3 sectors plus a new 3-sector site to provide LoS household coverage within the service area. This is also the basis for the FWA cost curves in Figure 11.
  • (3Si+6Sn) is based on an existing site with 3 sectors plus two new 3-sector sites (i.e., 6 additional sectors) to provide LoS household coverage within the service area.
  • Finally, the TCO is compared with (M), a normal mobile 3-sectored 4G & 5G site TCO, normalized to mobile customers assuming a market share of 30%.

Note (*): The TCO for the FTTH and all FWA comparisons is based on TCO relative to households connected (HHC).

All in all, using dedicated 5G FWA (or 4G FWA, for that matter) is unlikely to be as economical as a household fiber connection. In rural and suburban areas, where the load may be less of an issue, the existing cellular network’s inter-site distances tend to be too large to be directly usable for fiber-like services, thus requiring site densification. In denser urban areas, the connection load may require additional investment to support the demand and maintain quality (e.g., mmWave solutions). However, these are also the areas most likely to be covered by fiber or high-quality HFC already.

Irrespective of FWA’s possibly poorer economics compared with fiber deployment, many countries in Western Europe (and a lot of other places) lack comprehensive fiber coverage in urban, suburban, and rural areas alike. Such areas may only be covered by mediocre xDSL services and whatever mobile broadband coverage is available. These are geographical areas where fiber may only be deployed years from now, if ever at all (unless encouraged by EU or other non-commercial subsidies). Such under-served fiber areas may still be commercially interesting for cellular infrastructure telcos leveraging existing infrastructure, or for dedicated FWA ISPs that may have gotten their hands on lower-cost mmWave spectrum.

I should also point out that there is plenty of opportunity for operational expense improvements, for example by deploying more intelligent power management systems and/or simply switching antenna elements (in the deployed AAS/massive-MiMo antennas) off and on in off-peak traffic hours. The service level offered to FWA customers may also be optimized by modern care solutions (e.g., AI chatbots, apps, IVR, WiFi optimizer solutions, …), reducing the need for human-to-human technical support interactions. However, given that an FWA customer requires a customer premise antenna, connectivity to an indoor gateway, and high-quality WiFi coverage in the household, an Opex increase in customer care is likely.

IN THE NOW THOUGHTS

I don’t see FWA, 5G or not, as a credible alternative to fiber to the home. It is doubtful that, on a household-connection basis, it is economically the better choice. The argument that there is an incredible amount of underutilized resource in our cellular networks, so why not use it to provide fixed-like, and maybe even fiber-like, services to rural and suburban households, reads to me like an attempt to avoid being held responsible for possibly having wasted shareholders’ money and value, having focused on being the best irrespective of whether value-generating demand was present or not.

FWA and FMS are technology options that may bridge the time until fiber becomes available in a given geographical footprint. They may act as a precursor for broadband demand that can trigger an accelerated uptake of fiber broadband services once the households have been covered by fiber. But their nature as fiber-like services is likely temporary, albeit they may be around for several technology refreshment cycles.

The cellular industry will, however, have to address the relatively high operational costs associated with a cellular solution targeting fixed- and fiber-like broadband (and, to be honest, mobile broadband as well) in comparison with fiber-to-the-home Opex. The projected energy cost of the 5G (and, for that matter, 6G) ecosystem is simply not sustainable, nor should it be acceptable to the industry. While suppliers are quick to point to the massive improvement in energy consumption per bit rate with each new technology generation, what really matters for the network economics is the absolute consumption.

Finally, in a time and day where sustainability and the reduction of wasteful demand on critical resources are of incredible importance to our industry, not only for our children’s children but also for achieving favorable financing, shareholders’ & investors’ money, consumer trust (and their money month upon month), and possibly the executives’ self-image, it is difficult to understand why any telco would not prioritize its fiber deployment or fiber service uptake over an incredibly resource-demanding 5G FWA that competes with, or substitutes for, much greener and substantially more sustainable fiber-based services.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. Of course, a lot of thanks go out to my former Technology and Network Economics colleagues, who have been a source of inspiration and knowledge. A special thank you to Maurice Ketel (who for many years led my Technology Economics Unit in Deutsche Telekom; I respect him above and beyond), Paul Borker, Remek Prokopik, Michael Dueser, Gudrun Bobzin, as well as many, many other industry colleagues who have contributed valuable discussions and important insights. Of course, I also cannot get away with (not that I ever would) not thanking Petr Ledl (leading DTAG’s Research & Trials) and Jaroslav Holis (R&T DTAG) for their willingness and a great deal of patience with my many questions into the nature of advanced antenna systems and massive MiMo, what the performance is today, and what to expect in terms of performance in the near future. Any mistakes or misrepresentations of these technologies in this article are solely due to me.

FURTHER READING.

FWA EXPECTATIONS – GLOBAL & WESTERN EUROPE

Based on GSMA projections.

5G Economics – The Numbers (Appendix X).

5G essence

100% COVERAGE.

100% 5G coverage is not going to happen with 30 – 300 GHz millimeter-wave frequencies alone.

The “NGMN 5G White Paper”, which I will refer to in the subsequent parts as the 5G vision paper, requires the 5G coverage to be 100%.

At 100% cellular coverage, it becomes somewhat academic whether we talk about population coverage or geographical (area) coverage. The best way to make sure you cover 100% of the population is to cover 100% of the geography; if you cover 100% of the geography, you are “reasonably” ensured to cover 100% of the population.

While it is theoretically possible to cover 100% (or very nearly 100%) of the population without covering 100% of the geography, it might be instructive to think about why 100% geographical coverage could be a useful target in 5G:

  1. Network-augmented driving and support for various degrees of autonomous driving would require all roads (however small) to be covered.
  2. Internet of Things (IoT) sensors and actuators are likely to be of use also in rural areas (e.g., agriculture, forestation, security, waterways, railways, traffic lights, speed detectors, villages, …) and would require a network to connect to.
  3. Given many users’ personal-area IoT networks (e.g., fitness & health monitors, location detection, smart devices in general), ubiquitous coverage becomes essential.
  4. The Internet of flying things (e.g., drones) is also likely to benefit from 100% area and aerial coverage.

However, many countries remain lacking in comprehensive geographical coverage. Here is an overview of the situation in EU28 (as of 2015):

[Figure: Broadband coverage in EU28 (2015)]

For EU28 countries, 14% of all households in 2015 still had no LTE coverage. This was approx. 30+ million households, equivalent to 70+ million citizens without LTE coverage. The 14% might seem benign. However, it hides a rural neglect of 64% of rural households not having LTE coverage. One of the core reasons for the lack of rural (population and household) coverage is mainly economic: due to the relatively low number of people covered per rural site, compounded by affordability issues for the rural population, rural sites overall tend to have low or no profitability. Network sharing can, however, improve rural site profitability, as site-related costs are shared.
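As a rough consistency check on these numbers (the ~220 million total EU28 households is my own assumption, as is reusing the 2.2 persons-per-household figure from earlier in this article):

```python
# Rough consistency check of the EU28 figures quoted above.
total_households_m = 220       # assumed EU28 households, in millions
no_lte_share = 0.14            # 14% without LTE coverage (2015, per the text)
persons_per_household = 2.2    # assumed average household size

hh_without = total_households_m * no_lte_share           # ~30.8 M households
citizens_without = hh_without * persons_per_household    # ~68 M citizens

print(round(hh_without, 1), round(citizens_without))  # -> 30.8 68
```

Which lands close to the “30+ million households / 70+ million citizens” quoted above.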

From an area coverage perspective, the 64% of rural households in EU28 without LTE coverage is likely to amount to a sizable uncovered LTE area. This rural proportion of areas and households is also very likely by far the least profitable to cover for any operator, possibly even with very progressive network-sharing arrangements.

Fixed broadband, Fiber to the Premises (FTTP) and DOCSIS3.0, lags further behind mobile LTE-based broadband. Maybe not surprisingly from a business-economics perspective, fixed broadband is largely unavailable in rural areas across EU28.

The chart below illustrates the variation in lack of broadband coverage across LTE, Fiber to the Premises (FTTP) and DOCSIS3.0 (i.e., Cable) from a total country perspective (i.e., rural areas included in average).

[Figure: Delta to 100% household coverage, by technology (LTE, FTTP, DOCSIS3.0)]

We observe that most countries have very far to go on fixed broadband provisioning (i.e., FTTP and DOCSIS3.0), and that even LTE lacks complete coverage. The rural view (not shown here) would be substantially worse than the Total view above.

The 5G ambition is to cover 100% of all population and households. Due to the demographics of how rural households (and populations) are spread, it is also likely that fairly large geographical areas would need to be covered in order to deliver on the 100% ambition.

It would appear that bridging this lack of broadband coverage would be best served by a cellular-based technology. Given the fairly low population density in such areas, a relatively high average service quality (i.e., broadband) could be delivered as long as the cell range is optimized and sufficient spectrum at a relatively low carrier frequency (< 1 GHz) is available. It should be remembered that the super-high 5G 1 – 10 Gbps performance cannot be expected in rural areas: at the lower carrier frequencies needed to provide economical rural coverage, neither advanced antenna systems nor very large bandwidths (such as those found in the mm-wave frequency range) would be available, thus limiting the capacity and peak performance possible even with 5G.

I would suspect that, irrespective of the 100% ambition, telecom providers will be challenged by the economics of cellular deployment and traffic distribution. Rural areas really suck in terms of profitability, even in fairly aggressive sharing scenarios, although multi-party (more than 2) sharing might be a way to minimize the profitability burden of deep rural coverage.

[Figure: The “ugly tail” — traffic and revenue distribution across sites]

The above chart shows the relationship between traffic distribution and sites. As a rule of thumb, 50% of revenue is typically generated by 10% of all sites (i.e., in a normal legacy mobile network), and approx. 50% of (rural) sites share roughly 10% of the revenue. Note: in emerging markets, the distribution is somewhat steeper, as less comprehensive rural coverage typically exists. (Source: The ABC of Network Sharing – The Fundamentals.)

Irrespective of my relative pessimism about the wider coverage utility and economics of millimeter-wave (mm-wave) based coverage, there shall be no doubt that mm-wave coverage will be essential for small and smallest-cell coverage where the density of users or applications requires extreme (in comparison to today’s demand) data speeds and capacities. Millimeter-wave coverage-based architectures offer very attractive advanced antenna solutions that will further allow for increased spectral efficiency and throughput. The possibility of using mm-wave point-to-multipoint connectivity as a last-mile replacement for fiber also appears very attractive in rural and suburban clutters (and possibly beyond, if the cost of the electronics drops in line with the expected huge increase in demand). This last point, however, is in my opinion independent of 5G, as Facebook has shown with their Terragraph development (i.e., a 60 GHz WiGig-based system). A great account of mm-wave wireless communications systems can be found in T.S. Rappaport et al.’s book “Millimeter Wave Wireless Communications”, which not only covers the benefits of mm-wave systems but also provides an account of the challenges. It should be noted that this topic is still a very active (and interesting) research area that is relatively far from maturity.

In order to provide 100% 5G coverage for the mass market of people & things, we need to engage the traditional cellular frequency bands from 600 MHz to 3 GHz.

1 – 10 Gbps PEAK DATA RATE PER USER.

Getting a gigabit-per-second speed is going to require a lot of frequency bandwidth, highly advanced antenna systems, and lots of additional cells. And that is likely going to lead to a (very) costly 5G deployment, irrespective of the anticipated reduction in unit cost, or relative cost per Byte or bit-per-second.

At 1 Gbps it would take approx. 16 seconds to download a 2 GB SD movie, and less than a minute for the HD version (at 10 Gbps it just gets better;-). Say you have a 16 GB smartphone; you lose maybe up to 20+% to the OS, leaving around 13 GB for things to download. With 1 Gbps it would take less than 2 minutes to fill up your smartphone’s storage (assuming you haven’t run out of credit on your data plan or reached your data ceiling before then … unless, of course, you happen to be a customer of T-Mobile US, in which case you can binge on = you have no problems!).
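The download-time arithmetic above, with the bits-versus-bytes factor of 8 made explicit:

```python
# Download time: size is in gigaBYTES, link speed in gigaBITS per second.
def download_seconds(size_gigabytes, link_gbps):
    return size_gigabytes * 8 / link_gbps

print(download_seconds(2, 1))   # 2 GB SD movie at 1 Gbps      -> 16.0 s
print(download_seconds(13, 1))  # ~13 GB of free storage       -> 104.0 s (< 2 min)
print(download_seconds(2, 10))  # same movie at 10 Gbps        -> 1.6 s
```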

The biggest share of broadband usage comes from video streaming, which takes up 60% to 80% of all volumetric traffic depending on country (i.e., LTE terminal penetration dependent). Providing a higher speed to your customer than is required by the applied video streaming technology and the smartphone or tablet display being used seems somewhat futile to aim for. The table below provides an overview of streaming standards, their optimal speeds, and typical viewing distances for optimal experience:

[Table: Video resolution vs bandwidth requirements]

Source: 5G Economics – An Introduction (Chapter 1).

So … 1 Gbps could be cool … if we deliver 32K video to our customers’ end devices, i.e., 750 – 1600 Mbps optimal data rate. Though it is hard to see customers benefiting from this performance boost given current smartphone or tablet display sizes; the screen really has to be ridiculously large to truly benefit from this kind of resolution. Of course, Star Trek-like full immersion (i.e., holodeck) scenarios would arguably require a lot (= understatement) of bandwidth and even more (= beyond understatement) computing power … though such scenarios appear unlikely to be coming out of cellular devices (even in Star Trek).

1 Gbps fixed broadband plans have started to sell across Europe, typically on fiber networks, although in a few places also on DOCSIS3.1 (10 Gbps DS / 1 Gbps US) networks. It will only be a matter of time before we see 10 Gbps fixed broadband plans being offered to consumers. Even if compelling use cases are lacking, it might at least give you the bragging rights of having the biggest.

From the European Commission’s “Europe’s Digital Progress Report 2016”, 22% of European homes subscribe to fast broadband access of at least 30 Mbps. An estimated 8% of European households subscribe to broadband plans of at least 100 Mbps. It is worth noticing that this is not a coverage problem, as according to the EC’s “Digital Progress Report” around 70% of all homes are covered with at least 30 Mbps and ca. 50% are covered with speeds exceeding 100 Mbps.

The chart below illustrates the broadband speed coverage in EU28:

[Figure: Broadband speed household coverage in EU28]

Even if 1 Gbps fixed broadband plans are being offered, the majority of European homes still subscribe to speeds below 100 Mbps. This possibly suggests that affordability and household economics play a role, and that the basic perceived need for speed might not (yet?) be much beyond 30 Mbps.

Most aggregation and core transport networks are designed, planned, built, and operated on the assumption that customer demand is dominated by packages of less than 100 Mbps. As 1 Gbps and 10 Gbps plans gain commercial traction, substantial upgrades are required in aggregation and core transport, and last but not least possibly also at the access level (to design shorter paths). It is highly likely that distances between access, aggregation, and core transport elements are too long to support these much higher data rates, leading to very substantial redesigns and physical work to support this push to substantially higher throughputs.

Most telecommunications companies will require very substantial investments in their existing transport networks, all the way from access through aggregation and the optical core switching networks, out into the world wide web, to support 1 Gbps to 10 Gbps. Optical switching cards need to be substantially upgraded, and legacy IP/MPLS architectures might no longer work very well (i.e., a scale & complexity issue).

Most analysts today believe that incumbent fixed & mobile broadband telecommunications companies with reasonably modernized transport networks are best positioned for 5G, compared to mobile-only operators or fixed-mobile incumbents with aging transport infrastructure.

What about the state of LTE speeds across Europe? OpenSignal recurrently reports on the State of LTE; the following summarizes LTE speeds in Mbps as of June 2017 for EU28 (with the exception of a few countries not included in the OpenSignal dataset):

[Figure: OpenSignal “State of LTE”, June 2017]

The OpenSignal measurements are based on more than half a million devices and almost 20 billion measurements over the first three months of 2017.

The 5G speed ambition is, by today’s standards, 10 to 30+ times beyond present (2016/2017) household fixed broadband demand or the reality of provided LTE speeds.

Let us look at the cellular spectral efficiency to be expected from 5G, using the well-known framework:

C (throughput in bps) = N (number of cells) × B (bandwidth in Hz per cell) × η (spectral efficiency in bps per Hz per cell)

In essence, I can provide very high data rates in bits per second by providing a lot of frequency bandwidth B, use the most spectrally efficient technologies maximizing η, and/or add as many cells N that my economics allow for.

In the following, I rely largely on Jonathan Rodriguez’s great book “Fundamentals of 5G Mobile Networks” as a source of inspiration.

The average spectral efficiency is expected to come out in the order of 10 Mbps/MHz/cell, using advanced receiver architectures, multi-antenna and multi-cell transmission, and cooperation. So pretty much all the high-tech goodies we have in the toolbox are being put to use in squeezing out as many bits per spectral Hz as sustainably possible. Under very ideal signal-to-noise-ratio conditions, massive antenna arrays of up to 64 antenna elements (i.e., an optimum) seem to indicate that 50+ Mbps/MHz/cell might be feasible in peak.

So for a spectral efficiency of 10 Mbps/MHz/cell and a demanded 1 Gbps data rate, we would need 100 MHz of frequency bandwidth per cell (i.e., using the above formula). Under very ideal conditions and with relatively large antenna arrays, this might drop to a spectrum requirement of only 20 MHz at 50 Mbps/MHz/cell. Obviously, for a 10 Gbps data rate we would require 1,000 MHz of frequency bandwidth (1 GHz!) per cell at an average spectral efficiency of 10 Mbps/MHz/cell.
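The bandwidth requirements above follow directly from the capacity framework (per cell, rate = spectral efficiency × bandwidth):

```python
# Required bandwidth per cell for a target data rate, from C = B * eta (per cell).
def required_bandwidth_mhz(target_rate_mbps, efficiency_mbps_per_mhz):
    return target_rate_mbps / efficiency_mbps_per_mhz

print(required_bandwidth_mhz(1_000, 10))   # 1 Gbps at 10 Mbps/MHz/cell   -> 100.0 MHz
print(required_bandwidth_mhz(1_000, 50))   # 1 Gbps, ideal massive MiMo   -> 20.0 MHz
print(required_bandwidth_mhz(10_000, 10))  # 10 Gbps at 10 Mbps/MHz/cell  -> 1000.0 MHz
```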

The spectral efficiency assumed for 5G depends heavily on the successful deployment of many-antenna-element arrays (e.g., massive MiMo, beam-forming antennas, …). Such fairly complex antenna deployment scenarios work best at higher frequencies, typically above 2 GHz, and work better with TDD than FDD, with some margin in spectral efficiency. These advanced antenna solutions work perfectly in the millimeter-wave range (i.e., ca. 30 – 300 GHz), where the antenna elements are much smaller and antennas can be made fairly (very) compact (note: the resonance length of an antenna element is proportional to half the wavelength, which is inversely proportional to the carrier frequency; thus higher frequencies need smaller material dimensions to operate).
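The half-wavelength resonance argument can be made concrete: a λ/2 element shrinks from roughly two decimeters at legacy cellular frequencies to about half a centimeter at 28 GHz, which is why large arrays only become compact at high carrier frequencies.

```python
# Half-wavelength element dimension vs carrier frequency: lambda/2 = c / (2 f).
C = 299_792_458  # speed of light in vacuum, m/s

def half_wavelength_cm(freq_ghz):
    return C / (2 * freq_ghz * 1e9) * 100

print(round(half_wavelength_cm(0.7), 1))  # 700 MHz  -> 21.4 cm
print(round(half_wavelength_cm(3.5), 1))  # 3.5 GHz  -> 4.3 cm
print(round(half_wavelength_cm(28), 2))   # 28 GHz   -> 0.54 cm
```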

Below 2 GHz, higher-order MiMo becomes increasingly impractical, and the spectral efficiency regresses to the limitation of a simple single-path antenna, substantially lower than what can be achieved at much higher frequencies with, for example, massive MiMo.

So for the 1 Gbps to 10 Gbps data rates to work out, we have the following relatively simple rationale:

  • High data rates require a lot of frequency bandwidth (>100 MHz to several GHz per channel).
  • Lots of frequency bandwidth is increasingly easier to find at high and very high carrier frequencies (i.e., why the millimeter-wave band between 30 – 300 GHz is so appealing).
  • High and very high carrier frequencies result in small, smaller, and smallest cells with very high bits per second per unit area (i.e., the area is very small!).
  • High and very high carrier frequencies allow me to get the most out of higher-order MiMo antennas (i.e., with lots of antenna elements).
  • Due to the fairly limited cell range, I boost my overall capacity by adding many smallest cells (i.e., at the highest frequencies).

We need to watch out for small-cell densification, which tends not to scale very well economically. The scaling becomes a particular problem when we need hundreds of thousands of such small cells, as is expected in most 5G deployment scenarios (i.e., particularly driven by the x1000 traffic increase). The advanced antenna systems required to max out on spectral efficiency (including the computation resources needed) are likely going to be one of the major causes of breaking the economic scaling, although there are many other Capex and Opex scaling factors to be concerned about for small-cell deployment at scale.

Further, for mass-market 5G coverage, as opposed to hot traffic zones or indoor solutions, lower carrier frequencies are needed. These will tend to be in the usual cellular range we know from our legacy cellular communications systems today (e.g., 600 MHz – 2.1 GHz). It should not be expected that 5G spectral efficiency will gain much above what is already possible with LTE and LTE-Advanced in this legacy cellular frequency range. Sheer bandwidth accumulation (multi-frequency carrier aggregation) and increased site density are, for the lower frequency range, the more likely 5G path. Of course, mass-market 5G customers will benefit from faster reaction times (i.e., lower latencies), higher availability, and more advanced & higher-performing services arising from the very substantial changes expected in transport networks and data centers with the introduction of 5G.

Last but not least to this story … 80% and above of all mobile broadband customers’ usage, data as well as voice, happens in very few cells (e.g., 3!) … representing their home and work.

[Figure: Most traffic is carried by very few cells]

Source: Slideshare presentation by Dr. Kim “Capacity planning in mobile data networks experiencing exponential growth in demand.”

As most mobile cellular traffic happens at home and at work (i.e., in most cases indoors), there are many ways to support such traffic without being concerned about the limitation of cell ranges.

The gigabit-per-second cellular service is NOT a service for the mass market, at least not in its macro-cellular form.

≤ 1 ms IN ROUND-TRIP DELAY.

A total round-trip delay of 1 millisecond or less is very much attuned to a niche service. But a niche service that nevertheless could be very costly for all to implement.

I am not going to address this topic too much here; it has to a great extent been addressed almost ad nauseam in 5G Economics – An Introduction (Chapter 1) and 5G Economics – The Tactile Internet (Chapter 2). I think this particular aspect of 5G is being over-hyped in comparison to how important it ultimately will turn out to be from a return-on-investment perspective.

The speed of light travels ca. 300 km per millisecond (ms) in vacuum and approx. 210 km per ms in fiber (with some material dependency). Lately, engineers have gotten really excited that the speed of light is not fast enough, and have done a lot of heavy thinking about edge this and that (e.g., computing, cloud, cloudlets, CDNs, etc.). This said, it is certainly true that most modern data centers have not been built taking much into account that the speed of light might become insufficient. And should there really be a great business case for sub-millisecond total (i.e., including the application layer) round-trip time scales, edge computing resources would be required a lot closer to customers than is the case today.
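A quick propagation-only budget using the ~210 km-per-ms in-fiber figure above; real budgets are much tighter, since switching, queuing, and application processing all eat into the target:

```python
# Max one-way fiber distance if propagation alone consumed the whole RTT budget.
FIBER_KM_PER_MS = 210  # approx. in-fiber propagation, per the text

def max_one_way_km(rtt_ms):
    return FIBER_KM_PER_MS * rtt_ms / 2  # round trip = there and back

print(max_one_way_km(1))   # 1 ms RTT target  -> 105.0 km, propagation alone
print(max_one_way_km(10))  # 10 ms RTT target -> 1050.0 km
```

Which is why a strict 1 ms E2E target forces compute and content to sit within roughly a hundred kilometers of the user, before any processing delay is even counted.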

It is common to use delay, round-trip time (or round-trip delay), and latency as meaning the same thing. Though it is always cool to make sure people really talk about the same thing, by confirming that it is indeed a round trip rather than a single path, and that everyone around the table talks about delay at the same place in the OSI stack, network path, or whatever reference point is agreed to be used.

In the context of the 5G vision paper it is emphasized that the specified round-trip time uses the application layer (i.e., of the OSI model) as the reference point. It is certainly the most meaningful measure of user experience. This is defined as the End-2-End (E2E) Latency metric and measures the complete delay traversing the OSI stack from the physical layer all the way up through the network layer to the top application layer, and down again, between source and destination, including acknowledgement of a successful data packet delivery.

The 5G system shall provide 10 ms E2E latency in general and 1 ms E2E latency for use cases requiring extremely low latency.

The 5G vision paper states “Note these latency targets assume the application layer processing time is negligible to the delay introduced by transport and switching.” (Section 4.1.3 page 26 in “NGMN 5G White paper”).

In my opinion it is a very substantial mouthful to assume that the Application Layer (actually everything above the Network Layer) will not contribute significantly to the overall latency. Certainly for many applications residing outside the operator's network borders, out on the world wide web, we can expect a very substantial delay (i.e., even in comparison with 10 ms). Again, this aspect was also addressed in my first two chapters.

Very substantial investments are likely needed to meet the E2E delays envisioned in 5G. In fact, the cost of improving latencies gets prohibitively more expensive as the target is lowered. Designing the overall network for 10 ms would be a lot less costly than designing for 1 ms or lower. The network design challenge, if 1 millisecond or below is required, is that it might not matter that this is only a “service” needed in very special situations; overall, the network would have to be designed for the strictest denominator.

Moreover, if remedies need to be found to mitigate likely delays above the Network Layer, distance and the insufficient speed of light might be the least of the worries in getting this ambition nailed (even at the 10 ms target). Of course, if all applications are moved inside the operator's network premises with simpler transport paths (and yes, shorter effective distances) and distributed across a hierarchical cloud (edge, frontend, backend, etc.), the assumption of negligible delay in the layers above the Network Layer might become much more likely. However, it does sound a lot like an America Online walled-garden, fast-forward-to-the-past kind of paradigm.

So with 1 ms E2E delay … yeah yeah … “play it again Sam” … relevant applications clearly need to be inside the network boundary and be optimized for processing speed or be silly & simple (i.e., negligible delay above the Network Layer), with no queuing delay (to the extent of being inefficient?), near-instantaneous transmission (i.e., negligible transmission delay) and distances likely below tens of km (i.e., very short propagation delay).

When the speed of light is too slow there are few economic options to solve that challenge.

≥ 10,000 Gbps / Km2 DATA DENSITY.

The data density is maybe not the most sensible measure around. Taken too seriously, it could lead to hyper-ultra-dense smallest-cell network deployments.

This has always been a fun one in my opinion. It can be a meaningful design metric or completely meaningless.

There is of course nothing particularly challenging in getting a very high throughput density if the area is small enough. If I have a cellular range of a few tens of meters, say 20 meters, then my cell area is on the order of 1/1000 of a km2. If I have 620 MHz of bandwidth aggregated between 28 GHz and 39 GHz (i.e., both in the millimeter-wave band) with 10 Mbps/MHz/Cell, I could support 6,200 Gbps/km2. That's almost 3 Petabytes in an hour, or 10 years of 24/7 binge-watching of HD videos. Note that given my spectral efficiency is based on an average value, it is likely that I could achieve substantially more bandwidth density, with peaks closer to the 10,000 Gbps/km2 … easily.
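The arithmetic above can be checked in a few lines (a sketch, assuming, as in the text, a cell area of roughly 1/1000 km2):

```python
# The worked example above: 620 MHz aggregated mm-wave bandwidth,
# 10 Mbps/MHz/cell average spectral efficiency, cell area ~1/1000 km2.
bandwidth_mhz = 620
efficiency_mbps_per_mhz = 10
cell_area_km2 = 1e-3

cell_throughput_gbps = bandwidth_mhz * efficiency_mbps_per_mhz / 1000  # 6.2 Gbps
density_gbps_per_km2 = cell_throughput_gbps / cell_area_km2
print(round(density_gbps_per_km2))    # ~6,200 Gbps per km2

# volume per hour per km2, in Petabytes
petabytes_per_hour = density_gbps_per_km2 * 3600 / 8 / 1e6
print(round(petabytes_per_hour, 2))   # ~2.79 PB, i.e., "almost 3 Petabytes"
```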

Pretty Awesome Wow!

The basics: a Terabit equals 1,024 Gigabits (but I tend to ignore that last 24 … sorry, I am not).

With a traffic density of ca. 10,000 Gbps per km2, one would expect to have between 1,000 (@ 10 Gbps peak) to 10,000 (@ 1 Gbps peak) concurrent users per square km.

At 10 Mbps/MHz/Cell one would expect to have 1,000 Cell-GHz/km2. Assuming 1 GHz of bandwidth per cell (i.e., somewhere in the 30 – 300 GHz mm-wave range), one would need 1,000 cells per km2, on average with a cell range of about 20 meters (smaller to smallest … I guess what Nokia would call a Hyper-Ultra-Dense Network;-). Thus each cell would have a minimum of between 1 and 10 concurrent users.
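The same cell-count arithmetic in a few lines (a sketch; the 1 GHz per cell is the bandwidth assumption from the text):

```python
target_gbps_per_km2 = 10_000
efficiency_mbps_per_mhz_cell = 10          # as assumed in the text
bandwidth_ghz_per_cell = 1.0               # 1 GHz somewhere in the mm-wave range

# 10 Mbps/MHz/cell is the same as 10 Gbps per GHz per cell
gbps_per_cell = efficiency_mbps_per_mhz_cell * bandwidth_ghz_per_cell  # 10 Gbps
cells_per_km2 = target_gbps_per_km2 / gbps_per_cell
print(cells_per_km2)            # 1000.0 cells per km2

# 1,000 - 10,000 concurrent users per km2 (at 10 and 1 Gbps peak respectively)
users_per_cell = (1_000 / cells_per_km2, 10_000 / cells_per_km2)
print(users_per_cell)           # (1.0, 10.0) concurrent users per cell
```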

Just as a reminder! 1 minute at 1 Gbps corresponds to 7.5 GB. A bit more than what you need for an 80-minute HD (i.e., 720p) full movie stream … in 1 minute. So with your (almost) personal smallest cell, what about the remaining 59 minutes? Seems somewhat wasteful, at least until kingdom come (alas, maybe sooner than that).

It would appear that the very high 5G data density target could result in very inefficient networks from a utilization perspective.

≥ 1 MN / Km2 DEVICE DENSITY.

One million 5G devices per square kilometer appears to be far, far out in a future where one would expect us to be talking about 7G or even higher Gs.

1 Million devices seems like a lot, and certainly per km2. It is 1 device per square meter on average. A smallest cell with a 20-meter range would contain ca. 1,200 devices.

To give this number perspective, let's compare it with one of my favorite South-East Asian cities, and one of the highest population densities around: Manila (Philippines). Manila has more than 40 thousand people per square km. In Manila this would thus mean about 24 devices per person, or 100+ per household. Overall, in Manila we would then expect approx. 40 million devices spread across the city (i.e., Manila has ca. 1.8 Million inhabitants over an area of 43 km2; the Philippines has a population of approx. 100 Million).

Just for the curious, it is possible to find even more densely populated areas in the world. However, these tend to cover relatively small surface areas, often much smaller than a square kilometer, and with relatively few people. For example, Fadiouth Island in Senegal has a surface area of 0.15 km2 and 9,000 inhabitants, making it one of the most densely populated areas in the world (i.e., 60,000 pop per km2).

I hope I made my case! A million devices per km2 is a big number.

Let us look at it from a forecasting perspective, just to see whether we are possibly getting close to this 5G ambition number.

IHS forecasts 30.5 Billion installed devices by 2020; IDC also believes it to be around 30 Billion by 2020. Machina Research is less bullish and projects 27 Billion by 2025 (IHS expects that number to be 75.4 Billion), but this forecast is from 2013. Irrespective, we are obviously in the league of very big numbers. By the way, 5G IoT, if at all considered, is only a tiny fraction of the overall projected IoT numbers (e.g., Machina Research expects 10 Million 5G IoT connections by 2024 … an extremely small number in comparison to the overall IoT projections).

A consensus number for 2020 appears to be 30±5 Billion IoT devices with lower numbers based on 2015 forecasts and higher numbers typically from 2016.

To break this number down into something more meaningful than just being Big and impressive, let's establish a couple of world-ish numbers that can help us;

  • 2020 population expected to be around 7.8 Billion compared to 2016 7.4 Billion.
  • Global pop per HH is ~3.5 (an average number!) which might be marginally lower in 2020. Urban populations tend to have fewer pop per household, ca. 3.0. Urban populations in so-called developed countries have a pop per HH of ca. 2.4.
  • ca. 55% of world population lives in Urban areas. This will be higher by 2020.
  • Less than 20% of world population lives in developed countries (based on HDI). This is a 2016 estimate and will be higher by 2020.
  • World surface area is 510 Million km2 (including water).
  • of which ca. 150 million km2 is land area
  • of which ca. 75 million km2 is habitable.
  • of which 3% is an upper limit estimate of earth surface area covered by urban development, i.e., 15.3 Million km2.
  • of which approx. 1.7 Million km2 comprises developed regions urban areas.
  • ca. 37% of all land-based area is agricultural land.

Using 30 Billion IoT devices by 2020 is equivalent to;

  • ca. 4 IoT per world population.
  • ca. 14 IoT per world households.
  • ca. 200 IoT per km2 of all land-based surface area.
  • ca. 2,000 IoT per km2 of all urban developed surface area.

If we limit IoT in 2020 to developed countries, which rightly or wrongly excludes China, India and larger parts of Latin America, we get the following by 2020;

  • ca. 20 IoT per developed country population.
  • ca. 50 IoT per developed country households.
  • ca. 18,000 IoT per km2 developed country urbanized areas.

Given that it would make sense to include larger areas and populations of both China, India and Latin America, the above developed-country numbers are bound to be (a lot) lower per Pop, HH and km2. If we include agricultural land, the number of IoTs per km2 goes down further.
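The per-population and per-area breakdowns above can be reproduced from the world-ish numbers just listed (a sketch; the variable names and rounding are mine):

```python
# World-ish numbers from the bullet list above (all approximate)
iot_2020 = 30e9                  # consensus 2020 forecast, +/- 5 Billion
world_pop = 7.8e9
land_km2 = 150e6
urban_km2 = 15.3e6
dev_pop = 0.20 * world_pop       # <20% of world pop lives in developed countries
dev_hh = dev_pop / 2.4           # ~2.4 pop per developed-country household
dev_urban_km2 = 1.7e6

print(round(iot_2020 / world_pop))       # ~4 IoT per world population
print(round(iot_2020 / land_km2))        # 200 per km2 of land area
print(round(iot_2020 / urban_km2))       # ~1,960 per km2 of urban area (~2,000)

# ...and if every one of the 30 Billion sat in developed countries only:
print(round(iot_2020 / dev_pop))         # ~19 per person (~20 in the text)
print(round(iot_2020 / dev_hh))          # ~46 per household (~50 in the text)
print(round(iot_2020 / dev_urban_km2))   # ~17,600 per km2 of urban area (~18,000)
```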

So we are far, far away from a Million IoT per km2.

What about parking spaces? Surely IoT will add up when we consider parking spaces!? … Right? Well, in Europe you will find that most big cities have between 50 to 200 (public) parking spaces per square kilometer (e.g., ca. 67 per km2 for Berlin and 160 per km2 in Greater Copenhagen). Aha, not really adding up to the Million IoT per km2 … what about cars?

In EU28 there are approx. 256 Million passenger cars (2015 data) over a population of ca. 510 Million pops (or ca. 213 million households). So a bit more than 1 passenger car per household on EU28 average. In EU28, approx. 75+% live in urban areas, which comprise ca. 150 thousand square kilometers (i.e., 3.8% of EU28's 4 Million km2). So one would expect a little more (if not a little less) than 1,300 passenger cars per km2. You may say … aha, but it is not fair … you don't include motor vehicles used for work … well, that is an exercise for you (to convince yourself why it doesn't really matter too much, and with my royal rounding-up of numbers it is maybe already accounted for). Also consider that many major EU28 cities with good public transportation have significantly fewer cars per household or population than the average would allude to.

Surely, public street lights will make it through? Nope! A typical bigger, modern, developed-country city will have on average approx. 85 street lights per km2, although it varies from 0 to 1,000+. Light bulbs per residential household (from a 2012 study of the US) range from 50 to 80+. In developed countries we have roughly 1,000 households per km2 and thus we would expect between 50 thousand and 80+ thousand light bulbs per km2. Shops and businesses would add to this number.

With a compound annual growth rate of ca. 22% it would take 20 years (from 2020) to reach a Million IoT devices per km2, if we have 20 thousand per km2 by 2020. With a 30% CAGR it would still take 15 years (from 2020) to reach a Million IoT per km2.
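The CAGR arithmetic can be verified with the standard compound-growth formula (using the ~20 thousand per km2 2020 starting point from above):

```python
import math

def years_to_target(start_density, target_density, cagr):
    """Years of compound growth at rate `cagr` to get from start to target."""
    return math.log(target_density / start_density) / math.log(1.0 + cagr)

start = 20_000        # rough urban IoT density per km2 by 2020, from the above
target = 1_000_000    # the 5G device density ambition

print(round(years_to_target(start, target, 0.22)))  # ~20 years from 2020
print(round(years_to_target(start, target, 0.30)))  # ~15 years from 2020
```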

The current IoT projections of 30 Billion IoT devices in operation by 2020 do not appear unrealistic when broken down to a household or population level in developed areas (and are even less ambitious on a worldwide level). The 18,000 IoT per km2 of developed urban surface area by 2020 does appear somewhat ambitious. However, if we were to include agricultural land, the number would possibly become more reasonable.

If you include street crossings, traffic radars, city-based video monitoring (e.g., London has approx. 300 per km2, Hong Kong ca. 200 per km2), city-based traffic sensors, environmental sensors, etc., you are going to get to sizable numbers.

However, 18,000 per km2 in urban areas appears somewhat of a challenge. Getting to 1 Million per km2 … hmmm … we will see, around 2035 to 2040 (I have added an internet reminder for a check-in by 2035).

Maybe the 1 Million Devices per km2 ambition is not one of the most important 5G design criteria for the short term (i.e., the next 10 – 20 years).

Oh, and most IoT forecasts from the period 2015 – 2016 do not really include 5G IoT devices in particular. The chart below illustrates Machina Research's IoT forecast for 2024 (from August 2015). In a more recent forecast from 2016, Machina Research predicts that by 2024 there will be ca. 10 million 5G IoT connections, or 0.04% of the total number of forecasted connections;

(Figure: Machina Research IoT connections forecast for 2024.)

The winner is … IoTs using WiFi or other short-range communications protocols. Obviously, the cynic in me (mea culpa) would say that a mm-wave-based 5G connection can also be characterized as short range … so there might be a very interesting replacement market there for 5G IoT … maybe? 😉

Expectations for 5G-based IoT do not appear to be very impressive, at least over the next 10 years and possibly beyond.

The unimportance of 5G IoT should not be a great surprise, given that most 5G deployment scenarios focus on millimeter-wave smallest-cell coverage, which is not good for comprehensive coverage of IoT devices that are not limited to the very special 5G coverage situations being thought about today.

Only operators focusing on comprehensive 5G coverage, re-purposing lower carrier frequency bands (i.e., 1 GHz and lower), can possibly expect to gain a reasonable (as opposed to niche) 5G IoT business. T-Mobile US, with their 600 MHz 5G strategy, might very well be uniquely positioned to take a large share of the future-proof IoT business across the USA. Though they are also pretty uniquely positioned for NB-IoT with their comprehensive 700 MHz LTE coverage.

For 5G IoT to be meaningful (at scale) the conventional macro-cellular networks need to be in play for 5G coverage … certainly, 100% 5G coverage will be a requirement. Although, even with 5G, there may be 100s of Billions of non-5G IoT devices that require coverage and management.

≤ 500 km/h SERVICE SUPPORT.

Sure, why not? But why not faster than that? At hyperloop or commercial passenger airplane speeds, for example?

Before we get all excited about Gbps speeds at 500 km/h, it should be clear that the 5G vision paper only proposes speeds between 10 Mbps and 50 Mbps (actually, it is allowed to regress down to 50 kilobits per second), with 200 Mbps for broadcast-like services.

So in general, this is a pretty reasonable requirement. Maybe the 200 Mbps for broadcasting services is somewhat head-scratching, unless the vehicle is one big 16K screen. Although the user's proximity to such a screen would not guarantee an ideal 16K viewing experience, to say the least.

What moves so fast?

The fastest train today tracks at ca. 435 km/h (Shanghai Maglev, China).

The typical cruising airspeed for a long-distance commercial passenger aircraft is approx. 900 km/h. So we might not be able to provide the best 5G experience in commercial passenger aircraft … unless we solve that with an in-plane communications system rather than trying to provide Gbps speeds by external coverage means.

Why take a plane when you can jump on the local Hyperloop? The proposed Hyperloop should track at an average speed of around 970 km/h (faster than or similar to commercial passenger aircraft), with a top speed of 1,200 km/h. So if you happen to be between LA and San Francisco in 2020+ you might not be able to get the best 5G service possible … what a bummer! This is clearly an area where the vision did not look far enough.

Providing services to things moving at relatively high speed does require reasonably good coverage. Whether it is a train track, a hyperloop tunnel or ground-to-air coverage of commercial passenger aircraft, new coverage solutions would need to be deployed. Alternatively, in-vehicle coverage solutions providing a perception of the 5G experience might turn out to be more economical.

The speed requirement is a very reasonable one, particularly for train coverage.

50% TOTAL NETWORK ENERGY REDUCTION.

If 5G development could deliver on this ambition, we are talking about 10 Billion US Dollars (for the cellular industry), equivalent to a percentage point on the margin.

There are two aspects of energy efficiency in a cellular based communication system.

  • User equipment will benefit from longer intervals without charging, improving the customer experience and overall saving energy from less frequent charging.
  • Network infrastructure energy-consumption savings will directly and positively impact a telecom operator's EBITDA.

Energy efficient Smartphones

The first aspect, user equipment, is addressed by the 5G vision paper under “4.3 Device Requirements”, sub-section “4.3.3 Device Power Efficiency”: “Battery life shall be significantly increased: at least 3 days for a smartphone, and up to 15 years for a low-cost MTC device.” (note: MTC = Machine Type Communications).

Apple’s iPhone 7 battery life (on a full charge) is around 6 hours of constant use, with the 7 Plus beating that by ca. 3 hours (i.e., 9 hours in total). So 3 days would go a long way.

From a recent 2016 survey by Ask Your Target Market on smartphone consumers' requirements for battery lifetime and charging times;

  • 64% of smartphone owners said they are at least somewhat satisfied with their phone’s battery life.
  • 92% of smartphone owners said they consider battery life to be an important factor when considering a new smartphone purchase.
  • 66% said they would even pay a bit more for a cell phone that has a longer battery life.

Looking at mobile smartphone & tablet non-voice consumption, it is also clear why battery lifetime, and not unimportantly the charging time, matters;

(Figure: average daily smartphone & tablet non-voice usage time, US adults.)

Source: eMarketer, April 2016. While the 2016 and 2017 figures are eMarketer forecasts (hence the dotted line and red circle!), they do appear well in line with other, more recent measurements.

Non-voice smartphone & tablet based usage is expected by now to exceed 4 hours (240 minutes) per day on average for US Adults.

That longer battery lifetimes are needed among smartphone consumers is clear from the sales figures and anticipated sales growth of smartphone power banks (or battery chargers) boosting the lifetime by several more hours.

It is however unclear whether the 3 days of 5G smartphone battery lifetime is supposed to be under active usage conditions or just in idle mode. Obviously, in order to matter materially to the consumer, one would expect this vision to apply to active usage (i.e., 4+ hours a day at 100s of Mbps to 1 Gbps operation).

Energy efficient network infrastructure.

The 5G vision paper defines energy efficiency as number of bits that can be transmitted over the telecom infrastructure per Joule of Energy.

The total energy cost, i.e., operational expense (OpEx), of a telecommunications network can be considerable. Despite our mobile access technologies having become more energy-efficient with each generation, the total OpEx of energy attributed to the network infrastructure has in general increased over the last 10 years. The growth in telco-infrastructure-related energy consumption has been driven by consumer demand for broadband services, mobile and fixed, including an incredible increase in data-center computing and storage requirements.

In general, the power consumption OpEx share of total technology cost amounts to 8% to 15% (i.e., for telcos without heavy reliance on diesel). The general assumption is that with regular modernization, the energy-efficiency gains of newer electronics can keep growth in energy consumption to a minimum, compensating for increased broadband and computing demand.

Note: Technology OpEx (including NT & IT) on average lies between 18% to 25% of total corporate telco OpEx. Of the Technology OpEx, between 8% to 15% (max) can typically be attributed to telco infrastructure energy consumption. The access & aggregation contribution to the energy cost would typically be towards 80%+. Data centers are expected to increasingly contribute to the power consumption and cost as well. Deep-diving into access equipment power consumption, ca. 60% can be attributed to rectifiers and amplifiers, 15% to the DC power system & miscellaneous, and another 25% to cooling.

The 5G vision paper is very bullish in its requirement to reduce the total energy and its associated cost; it is stated: “5G should support a 1,000 times traffic increase in the next 10 years timeframe, with an energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency of x2,000 in the next 10 years timeframe.” (sub-section “4.6.2 Energy Efficiency”, NGMN 5G White Paper).

This requirement would mean that in a pure 5G world (i.e., all traffic on 5G), the power consumption arising from the cellular network would be 50% of what is consumed today. In 2016 terms, the mobile-based OpEx saving would be in the order of 5 Billion US$ to 10+ Billion US$ annually. This would be equivalent to 0.5% to 1.1% margin improvement globally (note: using GSMA 2016 Revenue & Growth data and the Pyramid Research forecast). If energy prices were to increase over the next 10 years, the savings / benefits would of course be proportionally larger.
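The x2,000 efficiency figure follows directly from the two NGMN assumptions quoted above (a one-liner, really):

```python
# The two NGMN assumptions quoted above
traffic_growth = 1_000    # x1,000 traffic increase over ~10 years
energy_ratio = 0.5        # whole-network energy at half of today's level

# bits delivered per Joule must improve by traffic growth / energy ratio
required_efficiency_gain = traffic_growth / energy_ratio
print(required_efficiency_gain)   # 2000.0, i.e. the x2,000 target
```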

As we have seen in the above, it is reasonable to expect a very considerable increase in cell density as the broadband traffic demand increases from peak bandwidth (i.e., 1 – 10 Gbps) and traffic density (i.e., 1 Tbps per km2) expectations.

Depending on the demanded traffic density, spectrum and carrier frequency available for 5G between 100 to 1,000 small cell sites per km2 could be required over the next 10 years. This cell site increase will be required in addition to existing macro-cellular network infrastructure.

Today (in 2017) an operator in an EU28-sized country may have between ca. 3,500 to 35,000 cell sites, with approx. 50% covering rural areas. Many analysts expect that for medium-sized countries (e.g., with 3,500 – 10,000 macro-cellular sites), operators would eventually have up to 100,000 small cells under management in addition to their existing macro-cellular sites. Most of those 5G small cells, and many of the 5G macro-sites we will have over the next 10 years, are also going to have advanced massive-MiMo antenna systems with many active antenna elements per installed base antenna, requiring substantial computing to gain maximum performance.

It appears with today’s knowledge extremely challenging (to put it mildly) to envision a 5G network consuming 50% of today’s total energy consumption.

It is highly likely that the 5G radio node electronics in a small-cell environment (and maybe also in a macro-cellular environment?) will consume fewer Joules per delivered bit (per second), due to technology advances and less transmitted power being required (i.e., it's a small or smallest cell). However, this power-efficiency gain from technology and network cellular architecture can very easily be destroyed by the massive additional demand of small, smaller and smallest cells, combined with highly sophisticated antenna systems consuming additional energy for the compute operations that make such systems work. Furthermore, we will see operators increasingly providing sophisticated data-center resources for network operations as well as for the customers they serve. If the speed of light is insufficient for some services or country geographies, additional edge data centers will be introduced, also leading to an increased energy consumption not present in today's telecom networks. Increased computing and storage demand will also make the absolute efficiency requirement highly challenging.

Will 5G be able to deliver bits (per second) more efficiently … Yes!

Will 5G be able to reduce the overall power consumption of today's telecom networks by 50% … highly unlikely.

In my opinion the industry will have done a pretty good technology job if we can keep the energy cost at today's level (even allowing for unit price increases over the next 10 years).

The total power reduction of our telecommunications networks will be one of the most important 5G development tasks, as the industry cannot afford a new technology that results in vast amounts of incremental absolute cost. Great relative cost doesn't matter if it results in above-and-beyond total cost.

≥ 99.999% NETWORK AVAILABILITY & DATA CONNECTION RELIABILITY.

A network availability of 5Ns across all individual network elements and over time corresponds to less than a second per day of downtime anywhere in the network. Few telecom networks are designed for that today.

5 Nines (5N) is a great aspiration for services and network infrastructures. It also tends to be fairly costly and likely to raise the level of network complexity. Although in the 5G world of heterogeneous networks … well, it is already complicated.

5N Network Availability.

From a network and/or service availability perspective it means that over the course of a day, your service should not experience more than 0.86 seconds of downtime. Across a year, the total downtime should be no more than 5 minutes and 16 seconds.
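The five-nines downtime numbers follow from a one-line calculation:

```python
availability = 0.99999            # "five nines"

def downtime_seconds(period_seconds: float) -> float:
    """Unavailable time within a period for a given availability level."""
    return (1.0 - availability) * period_seconds

print(round(downtime_seconds(24 * 3600), 2))      # ~0.86 s per day
print(round(downtime_seconds(365 * 24 * 3600)))   # ~315 s per year, ~5 min 15 s
```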

The way 5N Network Availability is defined is: “The network is available for the targeted communications in 99.999% of the locations where the network is deployed and 99.999% of the time.” (from “4.4.4 Resilience and High Availability”, NGMN 5G White Paper).

Thus, in a 100,000-cell network, only 1 cell is allowed to experience downtime, and for no longer than a second a day.

It should be noted that there are not many networks today that come even close to this kind of requirement. Certainly in countries with frequent long power outages and limited ancillary backup (i.e., battery and/or diesel) this could be a very costly design requirement. Networks relying on weather-sensitive microwave radios for backhaul, or on mm-wave frequencies for 5G coverage, would be required to design in a very substantial amount of redundancy to keep such high geographical & time availability requirements.

In general, designing a cellular access network for this kind of 5N availability could be fairly to very costly (i.e., CapEx could easily run up to several percentage points of revenue).

One way out, from a design perspective, is to rely on hierarchical coverage. For example, if a small-cell environment is unavailable (= down!), the macro-cellular network (or overlay network) continues the service, although at a lower service level (i.e., lower or much lower speed compared to the primary service). As also suggested in the vision paper, making use of self-healing network features and other real-time measures is expected to further increase the network infrastructure availability. This is also what one may define as Network Resilience.

Nevertheless, the “NGMN 5G White Paper” allows for operators to define the level of network availability appropriate from their own perspective (and budgets I assume).

5N Data Packet Transmission Reliability.

The 5G vision paper, defines Reliability as “… amount of sent data packets successfully delivered to a given destination, within the time constraint required by the targeted service, divided by the total number of sent data packets.”. (“4.4.5 Reliability” in “NGMN 5G White Paper”).

It should be noted that the 5N specification addresses in particular specific use cases or services for which such reliability is required, e.g., mission-critical communications and ultra-low latency services. 5G allows for a very wide range of reliable data connections. Whether the 5N Reliability requirement will lead to substantial investments, or can be managed within the overall 5G design and architectural framework, might depend on the amount of traffic requiring 5Ns.

The 5N data packet transmission reliability target would impose a stricter network design. Whether this requirement would result in substantial incremental investment and cost is likely dependent on the current state of the existing network infrastructure and its fundamental design.

 

5G Economics – The Tactile Internet (Chapter 2)

If you have read Michael Lewis's book “Flash Boys”, I will have absolutely no problem convincing you that a few milliseconds' improvement in the transport time (i.e., already below 20 ms) of a valuable signal (e.g., containing financial information) can be of tremendous value. It is all about optimizing transport distances, super-efficient & extremely fast computing and of course ultra-high availability. Ultra-low transport and processing latencies are the backbone (together with the algorithms, obviously) of the high-frequency trading industry, which takes a market share of between 30% (EU) and 50% (US) of the total equity trading volume.

In a recent study by The Boston Consulting Group (BCG), “Uncovering Real Mobile Data Usage and Drivers of Customer Satisfaction” (Nov. 2015), it was found that latency had a significant impact on customer video-viewing satisfaction. For latencies between 75 – 100 milliseconds, 72% of users reported being satisfied. The user-experience satisfaction level jumped to 83% when latency was below 50 milliseconds. We have most likely all experienced, and been aggravated by, long call setup times (> a couple of seconds) forcing us to look at the screen to confirm that a call setup (dialing) is actually in progress.

Latency and reactiveness or responsiveness matter tremendously to the customer's experience and to whether it is a bad, good or excellent one.

The Tactile Internet idea is an integral part of the “NGMN 5G Vision” and part of what is characterized as Extreme Real-Time Communications. It has been further worked out in detail in the ITU-T Technology Watch Report “The Tactile Internet” from August 2014.

The word “Tactile” means perceptible by touch. It closely relates to the ambition of creating a haptic experience, where haptic means a sense of touch. Although we will learn that the Tactile Internet vision is more than a “touchy-feely” network vision, the idea of haptic feedback in real-time (~ sub-millisecond to low-millisecond regime) is very important to the idea of a Tactile Network experience (e.g., remote surgery).

The Tactile Internet is characterized by

  • Ultra-low latency; 1 ms and below latency (as in round-trip-time / round-trip delay).
  • Ultra-high availability; 99.999% availability.
  • Ultra-secure end-2-end communications.
  • Persistent very high bandwidths capability; 1 Gbps and above.

The Tactile Internet is one of the cornerstones of 5G. It promises ultra-low end-2-end latencies in the order of 1 millisecond at Gigabit-per-second speeds and with five 9's of availability (translating into ca. 0.86 seconds of unavailability per day).

Interestingly, network predictability and variation in latency have not received much focus within the Tactile Internet work. Clearly, a high degree of predictability as well as low jitter (or latency variation) could be a very desirable property of a tactile network, possibly even more so than absolute latency in its own right. A right-sized round-trip-time with managed latency, meaning a controlled variation of latency, is essential to the 5G Tactile Internet experience.

It’s 5G on speed and steroids at the same time.


Let us talk about the elephant in the room.

We can understand Tactile latency requirements in the following way;

An Action, including (possibly) local Processing, followed by some Transport and Remote Processing of the data representing the Action, results in a Re-action, again including (possibly) local Processing. According to the Tactile Internet Vision, this whole event from Action to Re-action has to run its course within 1 millisecond, or one thousandth of a second. In many use cases this process is looped, as the Re-action feeds back, resulting in another Action. Note that in the illustration below, Action and Re-action could take place on the same device (or locality) or could be physically separated. The processes might represent cloud-based computations or manipulations of data, or data manipulations local to the device of the user as well as on remote devices. It needs to be considered that the latency time scale for one direction is not at all given to be the same as in the other direction (even for transport).

tactile internet 1

The simplest example is a mouse click on an internet link or URL (i.e., the Action), resulting in a translation of the URL to an IP address and the loading of the resulting content (i.e., part of the process), with the final page presented on your device's display (i.e., the Re-action). From the moment the URL is clicked until the content is fully presented should take no longer than 1 ms.

tactile internet 2

A more complex use case might be remote surgery, in which a surgical robot is in one location and the surgeon operator is at another, manipulating the robot through an operation. This is illustrated in the picture above. Clearly, for a remote surgical procedure to be safe (i.e., within the margins of risk of not having any medically assisted surgery at all), we would require a very reliable connection (99.999% availability), sufficient bandwidth to ensure the video resolution required by the remote surgeon controlling the robot, as little latency as possible, allowing the feel of an instantaneous (or predictable) reaction to the actions of the controller (i.e., the surgeon), and of course as little variation in the latency (i.e., jitter) as possible, allowing system or human compensation for the latency (i.e., a high degree of network predictability).

The first complete trans-Atlantic robotic surgery happened in 2001. Surgeons in New York (USA) remotely operated on a patient in Strasbourg, France, some 7,000 km away, equivalent to a 70 ms round-trip-time (i.e., 14,000 km in total) for light in fiber. The total procedural delay from hand motion (Action) until the remote surgical response (Re-action) showed up on the video screen was 155 milliseconds. From trials on pigs, any delay longer than 330 ms was thought to be associated with an unacceptable degree of risk for the patient. This system did not offer any haptic feedback to the remote surgeon, and this remains the case for most (if not all) remote robotic surgical systems in operation today, as the latency in most remote surgical scenarios renders haptic feedback less than useful. An excellent account of robotic surgery systems (including the economics) can be found at the web site "All About Robotic Surgery". According to experienced surgeons, at 175 ms (and below) the latency of a remote robotic operation is perceived (by the surgeon) as imperceptible.
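The trans-Atlantic numbers are easy to reproduce: light in fiber covers roughly 200,000 km per second (refractive index ~1.5). A small sketch (constant and function names are mine):

```python
SPEED_IN_FIBER_KM_S = 200_000.0  # light in fiber, refractive index ~1.5

def fiber_rtt_ms(one_way_km: float) -> float:
    """Round-trip propagation delay in milliseconds (transport only,
    ignoring switching, routing and processing)."""
    return 2.0 * one_way_km / SPEED_IN_FIBER_KM_S * 1000.0

print(fiber_rtt_ms(7_000))  # New York <-> Strasbourg: 70.0 ms
print(fiber_rtt_ms(100))    # 1.0 ms -- the whole Tactile budget gone on transport alone
```

Note that the 155 ms observed procedural delay was more than twice the pure propagation delay; equipment and processing consumed the rest.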

It should be clear that, apart from offering long-distance surgical possibilities, robotic surgical systems offer many other benefits (less invasive, higher precision, faster patient recovery, lower overall operational risk, …). In fact, most robotic surgeries are done with the surgeon and robot in close proximity.

Another example of coping with lag or latency is the Predator drone pilot. The plane is a so-called unmanned combat aerial vehicle and comes at a price of ca. 4 million US$ (in 2010) per piece. Although this aerial platform can perform missions autonomously, it will typically have two pilots on the ground monitoring and possibly controlling it. The typical operational latency for the Predator can be as much as 2,000 milliseconds. For takeoff and landing, where this latency is most critical, control is typically handed to a local crew (either in Nevada or in the country of its mission). The Predator's cruise speed is between 130 and 165 km per hour; thus, within the 2-second lag the plane will have moved approximately 100 meters (obviously critical in landing and takeoff scenarios). Nevertheless, a very high degree of autonomy has been built into the Predator platform, which also compensates for the very large latency between plane and mission control.

Back to the Tactile Internet latency requirements:

In LTE today, the minimum latency (internal to the network) is around 12 ms without re-transmission and with pre-allocated resources. However, the normally experienced latency (again internal to the network) would be more in the order of 20 ms, including a 10% likelihood of retransmission and assuming scheduling (which would be normal). This excludes any content fetching, processing, presentation on the end-user device, and the transport path beyond the operator's network (i.e., somewhere in the www). Transmission outside the operator's network typically adds between 10 and 20 ms on top of the internal latency. The fetching, processing and presentation of content can easily add hundreds of milliseconds to the experience. The illustration below provides a high-level view of the various latency components to be considered in LTE, with the transport-related latencies providing the floor level to be expected:
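Stacking these components gives a feel for where the milliseconds go. The per-component values below are illustrative assumptions picked from the ranges just quoted, not measurements:

```python
# Illustrative LTE end-to-end latency stack (values assumed from the
# ranges discussed above; content handling can be far worse than 100 ms).
budget_ms = {
    "operator network (scheduling, ~10% retransmission)": 20,
    "transport beyond the operator network (10-20 ms)":   15,
    "content fetching, processing, device presentation":  100,
}
total = sum(budget_ms.values())
print(total)  # ~135 ms experienced, versus ~20 ms inside the network alone
```

The point: the radio network is only a minor part of what the user actually experiences.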

latency in networks

In 5G the vision is to achieve a factor-20 better end-2-end (within the operator's own network) round-trip-time compared to LTE; thus, 1 millisecond.

 

So … what happens in 1 millisecond?

Light will have travelled ca. 200 km in fiber or 300 km in free space. A car driving (or the fastest baseball flying) at 160 km per hour will have moved about 4 cm. A steel ball falling to the ground (on Earth) would have moved 5 micrometers (that's 5 millionths of a meter). In a 1 Gbps data stream, 1 ms corresponds to ca. 125 kilobytes worth of data. A human nerve impulse lasts just 1 ms (i.e., a 100-millivolt pulse).
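These one-millisecond numbers are all simple kinematics and arithmetic, recomputed here as a sketch:

```python
# "What happens in 1 millisecond?" -- the figures above, recomputed.
t = 1e-3  # seconds

light_fiber_km = 200_000 * t              # light in fiber (~200,000 km/s) -> 200 km
light_space_km = 300_000 * t              # light in free space -> 300 km
car_cm         = 160 / 3.6 * t * 100      # 160 km/h -> ~4.4 cm
fall_um        = 0.5 * 9.81 * t**2 * 1e6  # steel ball from rest -> ~4.9 micrometers
stream_kB      = 1e9 * t / 8 / 1000       # 1 Gbps stream -> 125 kB

print(light_fiber_km, light_space_km, car_cm, fall_um, stream_kB)
```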

 

It should be clear that the 1 ms target poses some very dramatic limitations:

  • The useful distance over which a tactile application would work (if 1 ms really is the requirement, that is!) will be short (likely a lot less than 100 km for fiber-based transport).
  • The air-interface latency (& the number of control-plane messages required) needs to drop dramatically from milliseconds down to microseconds (i.e., a factor-20 improvement would require no more than 100 microseconds, limiting the useful cell range).
  • Compute & processing requirements, in terms of latency, for the UE (incl. screen, drivers, local modem, …), base station and core would require a substantial overhaul (likely limiting the level of tactile sophistication).
  • It requires own, controlled network infrastructure (within which latency is at least a lot easier to manage), avoiding any communication path that leaves the own network (the walled garden is back with a vengeance?).
  • The network then becomes solely responsible for the latency, which can be made arbitrarily small (by distance and access design).

Very small cells, very close to compute & processing resources, would be most likely candidates for fulfilling the tactile internet requirements. 

Thus, instead of moving functionality and compute up towards the cloud data center, we (might) have an opposing force that requires close proximity to the end user's application. The great promise of cloud-based economic efficiency is likely to be dented in this scenario by the need for many more, smaller data centers, and maybe even micro-data centers, moving closer to the access edge (i.e., cell site, aggregation site, …). Not surprisingly, Edge Cloud, Edge Data Center, Edge X is really the new black … the curse of the edge!?

Looking at several network and compute design considerations, a tactile application would require no more than 50 km (i.e., 100 km round-trip) effective round-trip distance, or 0.5 ms of fiber-transport (including switching & routing) round-trip-time, leaving another 0.5 ms for the air interface (in a cellular/wireless scenario), computing & processing. Furthermore, the very high degree of imposed availability (i.e., 99.999%) might likewise favor proximity between the Tactile application and any remote processing-computing.

So in all likelihood we need processing-computing as near as possible to the tactile application (at least if one believes in the 1 ms, or thereabouts, target).
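The distance argument can be made explicit: once part of the round-trip budget is reserved for the air interface, computing and processing, whatever remains buys fiber distance. A sketch (the budget split is an assumption for illustration):

```python
FIBER_KM_PER_MS = 200.0  # km light covers per millisecond in fiber

def max_one_way_km(rtt_budget_ms: float, non_transport_ms: float) -> float:
    """Maximum one-way fiber distance once non-transport time (air interface,
    compute, processing) is subtracted from the round-trip budget."""
    transport_rtt_ms = rtt_budget_ms - non_transport_ms
    return transport_rtt_ms * FIBER_KM_PER_MS / 2.0

print(max_one_way_km(1.0, 0.5))    # Tactile 1 ms target: 50 km
print(max_one_way_km(20.0, 10.0))  # human-centric 20 ms budget: 1000 km
```

Under the 1 ms target the remote compute must sit within tens of kilometers; a 20 ms budget relaxes that to the scale of a country.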

One of the most epic ("in the Dutch coffee shop after a couple of hours" category) promises in "The Tactile Internet" vision paper is the following:

“Tomorrow, using advanced tele-diagnostic tools, it could be available anywhere, anytime; allowing remote physical examination even by palpation (examination by touch). The physician will be able to command the motion of a tele-robot at the patient’s location and receive not only audio-visual information but also critical haptic feedback.” (page 6, section 3.5).

All true, if you limit the tele-robot and patient to a distance of no more than 50 km (and likely less!) from the remote medical doctor. Under this setup and definition of the Tactile Internet, a top eye surgeon placed in Delhi would not be able to operate on a child (facing near blindness) in a remote village in Madhya Pradesh (India), approx. 800+ km away. Note that India has the largest blind population in the world (also by proportion), with 75% of cases avoidable by medical intervention. At best, these specifications allow the doctor not to be in the same room as the patient.

Markus Rank et al. did systematic research on the perception of delay in haptic telepresence systems (Presence, October 2010, MIT Press) and found haptic delay-detection thresholds between 30 and 55 ms. Thus haptic feedback did not appear to be sensitive to delays below 30 ms, fairly close to the lowest reported threshold of 20 ms. Combined with experienced tele-robotic surgeons assessing that below 175 ms a remote procedure starts to be perceived as imperceptible, this might indicate that the 1 ms target, at least for this particular use case, is extremely limiting.

The extreme case would be to have the tactile-related computing done at the radio base station, assuming that the tactile use case could be restricted to the covered cell and the users supported by that cell. I name this the micro-DC (or micro-cloud, or more like what some might call the cloudlet concept) idea. This would be totally back to the older days, with lots of compute done at the cell site (and would likely kill any traditional legacy cloud-based efficiency thinking … love using legacy and cloud in the same sentence). This would limit the round-trip-time to the air-interface latency plus compute/processing at the base station and on the device supporting the tactile application.

It is normal to talk about the round-trip-time between an action and the subsequent reaction. It is also the time it takes data or a signal to travel from a specific source to a specific destination and back again (i.e., a round trip). In the case of light in fiber, a 1 millisecond limit on the round-trip-time implies that the maximum distance that can be travelled (in the fiber) from source to destination and back to the source is 200 km, limiting the destination to no more than 100 km away from the source. In the case of substantial processing overhead (e.g., computation), the distance between source and destination must be even less than 100 km to allow for the 1 ms target.

THE HUMAN SENSES AND THE TACTILE INTERNET.

The “touchy-feely” aspect, or human sensing in general, is clearly an inspiration to the authors of “The Tactile Internet” vision, as can be seen from the following quote:

“We experience interaction with a technical system as intuitive and natural only if the feedback of the system is adapted to our human reaction time. Consequently, the requirements for technical systems enabling real-time interactions depend on the participating human senses.” (page 2, Section 1).

The human-reaction-times illustration shown below is included in “The Tactile Internet” vision paper, although it originates from Fettweis and Alamouti’s paper titled “5G: Personal Mobile Internet beyond What Cellular Did to Telephony“. It should be noted that the Table describes orders of magnitude of human reaction times; thus, 10 ms might also be 100 ms or 1 ms, and therefore, as we shall see, it would be difficult to get a given reaction time wrong within such a range.

human senses

The important point here is that the human perception or senses impact very significantly the user’s experience with a given application or use case.

The responsiveness of a given system or design is incredibly important for how well a service or product will be perceived by the user. Responsiveness can be defined as a relative measure against our own sense or perception of time. The measure of responsiveness is clearly not unique but depends on which senses are being used as well as on the user engaged. The human mind is not fond of waiting, and waiting too long causes distraction, irritation and ultimately anger, after which the customer is in all likelihood lost. A very good account of considering the human mind and its senses in design specifications (and of course development) can be found in Jeff Johnson’s 2010 book “Designing with the Mind in Mind”.

The understanding of human senses and the neurophysiological reactions to those senses is important for assessing a given design criterion’s impact on the user experience. For example, designing for 1 ms or lower system reaction times when the relevant neurophysiological timescale is measured in tens or hundreds of milliseconds is unlikely to result in any noticeable (and monetizable) improvement in customer experience. Of course, there can be many very good non-human reasons for wanting low or very low latencies.

While you might get the impression, from the table above from Fettweis et al. and the countless Tactile Internet and 5G publications referring back to this data, that those neurophysiological reactions are natural constants, that is unfortunately not the case. Modality matters hugely. There are fairly great variations in reaction times within the same neurophysiological response category, depending on the individual human under test but often also on the underlying experimental setup. In some instances the deduced reaction time would be fairly useless as a design criterion for anything, as the detection happens unconsciously and still requires the relevant part of the brain to make sense of the event.

Based on vision, we have the surgeon controlling a remote surgical robot stating that any latency below 175 ms is imperceptible. And there is research showing that haptic feedback delays below 30 ms appear to be undetectable.

John Carmack, CTO of Oculus VR Inc., observed, based in particular on vision (in a fairly dynamic environment), that “… when absolute delays are below approximately 20 milliseconds they are generally imperceptible”, particularly as it relates to 3D systems and the VR/AR user experience, which is a lot more dynamic than watching content load. Moreover, recent user-experience research specific to website response times indicates that anything below 100 ms will be perceived as instantaneous. At 1 second users will sense the delay, but the experience will still be perceived as seamless. If a web page takes more than 2 seconds to load, user satisfaction drops dramatically and a user will typically bounce.

Based on IAAF (International Association of Athletics Federations) rules, an athlete is deemed to have false-started if that athlete moves sooner than 100 milliseconds after the start signal. The neurophysiological process relevant here is the neuromuscular reaction to the sound (i.e., the bang of the pistol) heard by the athlete. Research carried out by Paavo V. Komi et al. has shown that the reaction time of a prepared (i.e., waiting for the bang!) athlete can be as low as 80 ms. This particular use case relates to auditory reaction times and the subsequent physiological reaction. P.V. Komi et al. also found great variation in the neuromuscular reaction time to the sound (even times far below the 80 ms!).

Neuromuscular reactions to unprepared events typically measure several hundreds of milliseconds (up to 700 ms), being somewhat faster if driven by auditory senses rather than vision. Note that reflex time scales are approximately 10 times faster, in the order of 80 – 100 ms.

The International Telecommunication Union (ITU) Recommendation G.114 defines, for voice applications, an upper acceptable one-way delay of 150 ms (one-way: it is you talking; you don’t want to be talked back to by yourself). Delays below this limit provide an acceptable degree of voice user experience, in the sense that most users would not hear the delay. It should be understood that great variation in voice-delay sensitivity exists across humans. Voice conversations would be perceived as instantaneous by most below 100 ms (though the auditory perception also depends on the intensity/volume of the voice being listened to).

Finally, let’s discuss human vision. Fettweis et al., in my opinion, mix up several psychophysical concepts of vision and TV specifications, alluding to 10 milliseconds as the visual “reaction” time (whatever that really means). More accurately, they describe the phenomenon of the flicker fusion threshold, the point at which an intermittent light stimulus (or flicker) is perceived as completely steady by an average viewer. This phenomenon relates to persistence of vision, where the visual system perceives multiple discrete images as a single image (both flicker and persistence of vision are well described by Wikipedia and in detail by Zhong-Lin Lu et al. in “Visual Psychophysics”). There are other reasons why defining flicker fusion and persistence of vision as human reaction mechanisms is unfortunate.

The 10 ms for visual reaction time, shown in the table above, is at the lowest limit of what researchers (see references 14, 15, 16, …) find the early stages of vision can possibly detect (i.e., as opposed to pure guessing). The seminal work of Mary C. Potter of M.I.T.’s Dept. of Brain & Cognitive Sciences on human perception in general, and visual perception in particular, shows that human vision is capable of very rapidly making sense of pictures, and objects therein, on the timescale of 10 milliseconds (13 ms is actually the lowest reported by Potter). From these studies it is also found that preparedness (i.e., knowing what to look for) helps the detection process, although the overall detection results did not differ substantially from learning the object of interest only after the pictures were shown. Note that these visual-reaction-time experiments all happen in a controlled laboratory setting, with the subject primed to be attentive (e.g., focus on a screen with a fixation cross for a given period, followed by a blank screen for another, shorter period, then a sequence of pictures each presented for a (very) short time, followed again by a blank screen and finally an object name and the yes-no question of whether the object was observed in the sequence of pictures). Often these experiments also included a certain degree of training before the actual experiment took place. In any case, and unless re-enforced, the relevant memory of the target object will rapidly dissipate; in fact, the shorter the viewing time, the quicker it will disappear … which might be a very healthy coping mechanism.

To call this visual reaction time of 10+ ms typical is, in my opinion, a bit of a stretch. It is typical for that particular experimental setup, which very nicely provides important insights into the visual system’s capabilities.

One of the sillier demonstrations of the importance of ultra-low latencies has been to time-delay the video signal sent to a wearer’s goggles and then throw a ball at him in the physical world … obviously, the subject will not catch the ball (one might as well have thrown it at the back of his head). In the Tactile Internet vision paper the following is stated: “But if a human is expecting speed, such as when manually controlling a visual scene and issuing commands that anticipate rapid response, 1-millisecond reaction time is required” (on page 3). And for the record, spinning a basketball on your finger has more to do with physics than with neurophysiology and human reaction times.

In more realistic settings it would appear that the (prepared) average reaction time of vision is around or below 40 ms. With this in mind, a baseball moving (when thrown by a power pitcher) at 160 km per hour (ca. 4+ cm per ms) would take approx. 415 ms to reach the batter (using an effective distance of 18.44 meters). Thus the batter has around 415 ms to visually process the ball coming and hit it at the right time. Given the latency involved in processing vision, the ball would be at least 40 cm (@ 10 ms) closer to the batter than his latent visual impression would imply. Assuming that the neuromuscular reaction time is around 100±20 ms, the batter would need to compensate not only for that but also for his visual processing time in order to hit the ball. Based on batting statistics, the brain clearly compensates for its internal latencies pretty well. In the paper “Human time perception and its illusions”, D.M. Eagleman shows that the visual system and the brain (note: the visual system is an integral part of the brain) are highly adaptable in recalibrating time perception below the sub-second level.
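The fastball arithmetic above can be worked through step by step:

```python
# A 160 km/h fastball over the ~18.44 m effective pitching distance,
# and how far it moves during the batter's visual processing latency.
speed_m_s  = 160 / 3.6   # 160 km/h ~ 44.4 m/s
distance_m = 18.44

flight_ms = distance_m / speed_m_s * 1000

def lag_cm(latency_ms: float) -> float:
    """Distance (cm) the ball covers while the visual system is still processing."""
    return speed_m_s * latency_ms / 1000 * 100

print(round(flight_ms))    # ~415 ms from pitcher to batter
print(round(lag_cm(10)))   # ~44 cm covered during a 10 ms visual latency
print(round(lag_cm(100)))  # ~444 cm during a ~100 ms neuromuscular reaction
```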

It is important to realize that in the literature on human reaction times there is a very wide range of numbers for supposedly similar reaction use cases, and certainly a great deal of apparent contradictions (though the experimental frameworks often easily account for this).

reaction times

The supporting data for the numbers shown in the above figure can be found via the hyperlink in the above text or in the references below.

Thus, in my opinion, and largely supported by empirical data, a good E2E latency design target for a Tactile network serving human needs would be between 10 and 20 milliseconds, with the latency budget covering the end-user device (e.g., tablet, VR/AR goggles, IoT, …), air interface, transport and processing (i.e., any computing, retrieval/storage, protocol handling, …). It would be unlikely to cover any connectivity outside the operator’s network unless such a connection is manageable from a latency and jitter perspective, though distance would count against such a strategy.

This would actually be quite agreeable from a network perspective, as the distance to data centers would be far more reasonable, likely reducing the aggressive need for many edge data centers implied by the below-10-ms target promoted in the Tactile Internet vision paper.

latency budget

There is, however, one thing we are assuming in all of the above: that the user’s local latency can be managed as well and made almost arbitrarily small (i.e., much below 1 ms). Hardly very reasonable, even in the short run, for human-relevant communications ecosystems (displays, goggles, drivers, etc.), as we shall see below.

For a gaming environment, we would be looking at something like the illustration below:

local latency should be considered

Let’s ignore the use case of local games (i.e., where the player relies only on his local computing environment) and focus on games that rely on a remote gaming architecture. This could be either a client-server-based architecture or a cloud-gaming architecture (e.g., a typical SaaS setup). In general, the client-server-based setup demands more of the user’s local environment (e.g., equipment) but also allows for more advanced latency-compensating strategies, enhancing the user’s perception of instantaneous game reactions. In the cloud-gaming architecture, all game-related computing, including rendering/encoding (i.e., image synthesis) and video-output generation, happens in the cloud. The requirements on the end user’s infrastructure are modest in the cloud-gaming setup. However, applying latency-reduction strategies becomes much more challenging, as that would require much more of the local computing environment that the cloud-gaming architecture tries to get away from. In general, the network-transport-related latency would be the same, provided the dedicated game servers and the cloud-gaming infrastructure reside on the same premises. In Choy et al.’s 2012 paper “The Brewing Storm in Cloud Gaming: A Measurement Study on Cloud to End-User Latency”, it is shown, through large-scale measurements, that current commercial cloud infrastructure is unable to deliver the latency performance required for an acceptable (massive) multi-user experience, partly simply because such cloud data centers are too far away from the end user. Moreover, traditional commercial cloud-computing infrastructure is simply not optimized for online gaming, which requires augmentation with stronger computing resources, including GPUs and fast memory designs. Choy et al. propose to distribute the current cloud infrastructure, targeting a shorter distance between the end user and the relevant cloud-gaming infrastructure, similar to what is already happening today with content distribution networks (CDNs) being deployed more aggressively in metropolitan areas, and thus closer to the end user.

A comprehensive treatment of latencies, or response-time scales, in games, and how these relate to user experience, can be found in Kjetil Raaen’s Ph.D. thesis “Response time in games: Requirements and improvements”, as well as in the comprehensive literature list found in that thesis.

The many studies of gaming experience (as found in Raaen’s work, the work of Mark Claypool, and the much-cited 2002 study by Pantel et al.), including massive multi-user online gaming, show that players start to notice delay at about 100 ms, of which ca. 20 ms comes from play-out and processing delay. Thus, quite a far cry from the 1 millisecond. From this work, and not that surprisingly, sensitivity to gaming latency depends on the type of game played (see the work of Claypool) and on how experienced a gamer is with the particular game (e.g., Pantel et al.). It should also be noted that in a VR environment, you want the image that arrives at your visual system to be in sync with your head movement and the direction of your vision. If there is a timing difference (or lag) between the direction of your vision and the image presented to your visual system, the user experience rapidly becomes poor, causing discomfort through disorientation and confusion (possibly leading to a physical reaction such as throwing up). It is also worth noting that in VR there is a substantial latency component simply from the image rendering (e.g., a 60 Hz frame rate provides a new frame on average every 16.7 milliseconds). Obviously, cranking up the display frame rate will reduce the rendering-related latency. Furthermore, several latency-compensation strategies (to compensate for head and eye movements) have been developed to cope with VR latency (e.g., time warping and prediction schemes).
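The display refresh rate alone sets a floor on the rendering part of the latency budget, which is easy to see:

```python
# Average per-frame latency contributed by the display refresh rate alone.
def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

print(frame_time_ms(60))   # ~16.7 ms per frame at 60 Hz
print(frame_time_ms(90))   # ~11.1 ms at 90 Hz
print(frame_time_ms(120))  # ~8.3 ms at 120 Hz
```

Even at 120 Hz, rendering alone consumes several times the entire 1 ms Tactile budget.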

Anyway, if you are of the impression that VR is just about showing moving images on the inside of some awesome goggles … hmmm, do think again, and keep dreaming of 1 millisecond end-2-end network-centric VR delivery solutions (at least with the networks we have today). The 1 ms target is possibly really a Proxima-Centauri shot, as opposed to just a moonshot.

With a target of no more than 20 milliseconds of lag or latency, and taking into account the likely reaction time of the user’s VR system (a future system!), that likely leaves no more (and likely less) than 10 milliseconds for transport and any remote server processing. Still, this could allow for a data center to be 500 km away (5 ms round-trip time in fiber), leaving another 5 ms for data-center processing and possible routing delay along the way.

One might very well be concerned about the present Tactile Internet vision and its focus on network-centric solutions to reach the very low latency target of 1 millisecond. The current vision and approach would force (fixed and mobile) network operators to add a considerable number of data centers in order to get the physical transport time down below 1 millisecond. This in turn drives the latest trend in telecommunications, the so-called edge data center or edge cloud. In the ultimate limit, such edge data centers (however small) might be placed at cell-site locations or at fixed-network local exchanges or distribution cabinets.

Furthermore, the 1 millisecond goal might very well have very little return in user experience (UX) and a substantial cost impact for telecom operators. A diligent search through the academic literature and a wealth of practical UX experiments indicates that this might indeed be the case.

As severe and restrictive a target as the 1 millisecond is, it narrows the Tactile Internet to scenarios where sensing, acting, communication and processing happen in very close proximity to each other. In addition, the restrictions it imposes on system design further limit its relevance, in my opinion. The danger with the expressed Tactile vision is that too little academic and industrial thinking goes into latency-compensating strategies using the latest advances in machine learning, virtual-reality development and computational neuroscience (to name a few areas of obvious relevance). Furthermore, network reliability and managed latency, in the sense of controlling the variation of the latency, might be of far greater importance than latency itself below a certain limit.

So if 1 ms is of no use to most men and beasts … why bother with it?

While very-low-latency system architectures might be of little relevance to human senses, it is of course very likely (as is also pointed out in the Tactile Internet vision paper) that industrial use cases could benefit from such specifications of latency, reliability and security.

For example, in machine-to-machine or things-to-things communications between sensors, actuators, databases and applications, very short reaction times in the order of sub-milliseconds to low milliseconds could be relevant.

We will look at this next.

THE TACTILE INTERNET USE CASES & BUSINESS MODELS.

An open mind would hope that most of what we do strives to outperform human senses and improve how we deal with our environment and with situations that are far beyond mere mortal capabilities. Alas, I might have read too many Isaac Asimov novels as a kid and young adult.

In particular, 5G’s present emphasis on ultra-high frequencies (i.e., ultra-small cells) and ultra-wide spectral bandwidth (i.e., lots of Gbps), together with the current vision of the Tactile Internet (ultra-low latency, ultra-high reliability and ultra-high security), seems to be screaming to be applied to industrial facilities, logistics warehouses, campus solutions, stadiums, shopping malls, tele- and edge-clouds, networked robotics, etc. In other words, wherever we have a happy mix of sensors, actuators, processors, storage, databases and software-based solutions across a relatively confined area, 5G and the Tactile Internet vision appear to be a possible fit and opportunity.

In the following it is important to remember:

  • 1 ms round-trip time ~ 100 km (in fiber) to 150 km (in free space) in 1-way distance from the relevant action if only transport distance mattered to the latency budget.
  • Considering the total latency budget for a 1 ms Tactile application the transport distance is likely to be no more than 20 – 50 km or less (i.e., right at the RAN edge).

One of my absolute current favorite robotics use cases that comes somewhat close to the 5G Tactile Internet vision, done with 4G technology, is Ocado’s warehouse automation in the UK. Ocado is the world’s largest online-only grocery retailer, with ca. 50 thousand lines of goods, delivering more than 200,000 orders a week to customers around the United Kingdom. The 4G network built (by Cambridge Consultants) to support Ocado’s automation is based on LTE in the unlicensed 5 GHz band, allowing Ocado to control 1,000 robots per base station. Each robot communicates with the base station and backend control systems every 100 ms on average as it traverses a ca. 30 km journey across the warehouse’s 1,250 square meters. A total of 20 LTE base stations, each with an effective range of 4 – 6 meters, cover the warehouse area. The LTE technology was essential in order to bring latency down to an acceptable level, by fine-tuning LTE to perform at its lowest possible latency (<10 ms).
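The robot count and reporting interval above imply a substantial control-message load per base station. A back-of-envelope sketch (the figures come from the description above; the arithmetic is my own):

```python
# Rough control-plane load in the Ocado-style warehouse setup.
robots_per_base_station = 1_000
report_interval_s = 0.100  # each robot reports every ~100 ms on average

messages_per_s = robots_per_base_station / report_interval_s
print(messages_per_s)  # 10,000 robot messages per second per base station
```

This message rate, combined with the <10 ms latency requirement, is what made a fine-tuned LTE deployment (rather than, say, Wi-Fi) the pragmatic choice.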

5G will bring lower latency compared to even an optimized LTE system, which in a setup similar to the one described above for Ocado could further increase performance. Obviously, the network reliability of such a logistics system needs to be very high, as promised by 5G, to reduce the risk of disruption and the subsequent customer dissatisfaction from late (or no) deliveries, as well as the exposure to grocery stock turning bad.

This is all done within the confines of a warehouse building.

ROBOTICS AND TACTILE CONDITIONS

First of all, let's limit the robotics discussion to use cases related to networked robots. After all, if a robot doesn't need a network (pretty cool), it is pretty much a singleton and not so relevant to the Tactile Internet discussion. In the following I am using the word Cloud in a fairly loose way, meaning any form of computing-center resources, either dedicated or virtualized. The cloud could reside near the networked robotic systems as well as far away, depending on the overall system requirements on timing and delay (which might also depend, e.g., on the level of robotic autonomy).

To get networked robots to work well, we need to solve a host of technical challenges, such as:

  • Latency.
  • Jitter (i.e., variation of latency).
  • Connection reliability.
  • Network congestion.
  • Robot-2-Robot communications.
  • Robot-2-ROS communications (i.e., ROS: Robot Operating System).
  • Computing architecture: distributed, centralized, elastic computing, etc…
  • System stability.
  • Range.
  • Power budget (e.g., power limitations, re-charging).
  • Redundancy.
  • Sensor & actuator fusion (e.g., consolidate & align data from distributed sources for example sensor-actuator network).
  • Context.
  • Autonomy vs human control.
  • Machine learning / machine intelligence.
  • Safety (e.g., human and non-human).
  • Security (e.g., against cyber threats).
  • User Interface.
  • System Architecture.
  • etc…
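Two of the items at the top of that list, latency and jitter, are straightforward to quantify from measured delay samples. A minimal sketch, using the smoothed inter-arrival jitter estimator in the style of RFC 3550 (the delay values are hypothetical):

```python
# Sketch: quantifying latency and jitter (variation of latency) from a
# series of measured one-way delays. Sample values are illustrative.
from statistics import mean

def jitter_rfc3550(delays_ms, gain=1 / 16):
    """Smoothed inter-arrival jitter in the style of RFC 3550:
    an exponentially weighted mean of |D(i) - D(i-1)|."""
    j = 0.0
    for prev, cur in zip(delays_ms, delays_ms[1:]):
        j += (abs(cur - prev) - j) * gain
    return j

samples = [4.8, 5.1, 4.9, 6.3, 5.0, 5.2, 4.7]  # ms, hypothetical
print(f"mean latency: {mean(samples):.2f} ms")
print(f"jitter:       {jitter_rfc3550(samples):.2f} ms")
```

For a tactile control loop, the jitter figure often matters as much as the mean: a controller can compensate for a constant delay far more easily than for an unpredictable one.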

The network-connection part of a networked robotics system can be wireless, wired, or a combination of the two. Connectivity could be to a local computing cloud or data center, to an external cloud (on the internet), or a combination: internal computing for control and management of applications requiring very low-latency, very low-jitter communications, and an external cloud for backup and for latency- and jitter-uncritical applications and use cases.

For connection types we have wired (e.g., LAN), wireless (e.g., WLAN) and cellular (e.g., LTE, 5G). There are (at least) three levels of connectivity to consider: inter-robot communications; robot-to-cloud communications (i.e., to operations and control systems residing in a frontend cloud or computing center); and possibly frontend-cloud to backend-cloud communications (e.g., for backup, storage and latency-insensitive operations and control systems). Obviously, there might not be a need for a split into frontend and backend clouds; depending on the use-case requirements, they could be one and the same. Robots can be either stationary or mobile, with a need for inter-robot communications or simply robot-to-cloud communications.
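The frontend/backend split described above can be sketched as a simple routing rule: latency-critical tasks stay at the edge, everything else can travel to a remote data center. The tier names, task names and the 10 ms cutoff are all illustrative assumptions, not from the text:

```python
# Sketch of the frontend/backend cloud split: latency-critical traffic is
# handled by a frontend (edge) cloud near the robots, while
# latency-insensitive traffic (backup, storage, analytics) goes to a
# backend cloud. Names and the 10 ms threshold are assumptions.

LATENCY_CRITICAL_MS = 10.0  # assumed cutoff for edge handling

def route(task_name, max_latency_ms):
    """Pick a cloud tier for a task given its latency requirement."""
    if max_latency_ms <= LATENCY_CRITICAL_MS:
        return "frontend-cloud"   # near the robots, at the RAN edge
    return "backend-cloud"        # remote data center on the internet

tasks = {
    "motion-control": 2.0,        # ms, hypothetical requirements
    "fleet-coordination": 8.0,
    "telemetry-backup": 500.0,
    "route-analytics": 5000.0,
}
for name, req in tasks.items():
    print(f"{name:>18} -> {route(name, req)}")
```

If the use case has no latency-critical tasks at all, every task lands in the backend tier and the two clouds can indeed collapse into one, as noted above.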

Various networked-robot connectivity architectures are illustrated below:

[Figure: networked robot connectivity architectures]

ACKNOWLEDGEMENT

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog.

.WORTHY 5G & RELATED READS.

  1. “NGMN 5G White Paper” by R.El Hattachi & J. Erfanian (NGMN Alliance, February 2015).
  2. “The Tactile Internet” by ITU-T (August 2014). Note: in this Blog this paper is also referred to as the Tactile Internet Vision.
  3. “5G: Personal Mobile Internet beyond What Cellular Did to Telephony” by G. Fettweis & S. Alamouti, (Communications Magazine, IEEE , vol. 52, no. 2, pp. 140-145, February 2014).
  4. “The Tactile Internet: Vision, Recent Progress, and Open Challenges” by Martin Maier, Mahfuzulhoq Chowdhury, Bhaskar Prasad Rimal, and Dung Pham Van (IEEE Communications Magazine, May 2016).
  5. “John Carmack’s delivers some home truths on latency” by John Carmack, CTO Oculus VR.
  6. “All About Robotic Surgery” by The Official Medical Robotics News Center.
  7. “The surgeon who operates from 400km away” by BBC Future (2014).
  8. “The Case for VM-Based Cloudlets in Mobile Computing” by Mahadev Satyanarayanan et al. (Pervasive Computing 2009).
  9. “Perception of Delay in Haptic Telepresence Systems” by Markus Rank et al. (pp 389, Presence: Vol. 19, Number 5).
  10. “Neuroscience Exploring the Brain” by Mark F. Bear et al. (Fourth Edition, 2016 Wolters Kluwer).
  11. “Neurophysiology: A Conceptual Approach” by Roger Carpenter & Benjamin Reddi (Fifth Edition, 2013 CRC Press). Definitely a very worthy read for anyone who wants to understand the underlying principles of sensory functions and basic neural mechanisms.
  12. “Designing with the Mind in Mind” by Jeff Johnson (2010, Morgan Kaufmann). Lots of cool information on how to design a meaningful user interface and on basic user-experience principles worth thinking about.
  13. “Vision How it works and what can go wrong” by John E. Dowling et al. (2016, The MIT Press).
  14. “Visual Psychophysics: From Laboratory to Theory” by Zhong-Lin Lu and Barbara Dosher (2014, MIT Press).
  15. “The Time Delay in Human Vision” by D.A. Wardle (The Physics Teacher, Vol. 36, Oct. 1998).
  16. “What do we perceive in a glance of a real-world scene?” by Li Fei-Fei et al. (Journal of Vision (2007) 7(1); 10, 1-29).
  17. “Detecting meaning in RSVP at 13 ms per picture” by Mary C. Potter et al. (Attention, Perception, & Psychophysics, 76(2): 270–279).
  18. “Banana or fruit? Detection and recognition across categorical levels in RSVP” by Mary C. Potter & Carl Erick Hagmann (Psychonomic Bulletin & Review, 22(2), 578-585).
  19. “Human time perception and its illusions” by David M. Eagleman (Current Opinion in Neurobiology, Volume 18, Issue 2, Pages 131-136).
  20. “How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch” by J. Deber, R. Jota, C. Forlines and D. Wigdor (CHI 2015, April 18 – 23, 2015, Seoul, Republic of Korea).
  21. “Response time in games: Requirements and improvements” by Kjetil Raaen (Ph.D., 2016, Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo).
  22. “Latency and player actions in online games” by Mark Claypool & Kajal Claypool (Nov. 2006, Vol. 49, No. 11 Communications of the ACM).
  23. “The Brewing Storm in Cloud Gaming: A Measurement Study on Cloud to End-User Latency” by Sharon Choy et al. (2012, 11th Annual Workshop on Network and Systems Support for Games (NetGames), 1–6).
  24. “On the impact of delay on real-time multiplayer games” by Lothar Pantel and Lars C. Wolf (Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV ’02, New York, NY, USA, pp. 23–29. ACM.).
  25. “Oculus Rift’s time warping feature will make VR easier on your stomach” from ExtremeTech Grant Brunner on Oculus Rift Timewarping. Pretty good video included on the subject.
  26. “World first in radio design” by Cambridge Consultants. Describing the work Cambridge Consultants did with Ocado (UK-based) to design the worlds most automated technologically advanced warehouse based on 4G connected robotics. Please do see the video enclosed in page.
  27. “Ocado: next-generation warehouse automation” by Cambridge Consultants.
  28. “Ocado has a plan to replace humans with robots” by Business Insider UK (May 2015). Note that Ocado has filed more than 73 different patent applications across 32 distinct innovations.
  29. “The Robotic Grocery Store of the Future Is Here” by MIT Technology Review (December 201
  30. “Cloud Robotics: Architecture, Challenges and Applications.” by Guoqiang Hu et al (IEEE Network, May/June 2012).