AI in RAN – Evolution, Opportunities, and Risks

INTRO.

On September 10, at the Berlin Open RAN Working Week (BOWW), a public event arranged by Deutsche Telekom AG's T-Labs, I will give a talk about AI in Open RAN and in RAN in general. The focus of the talk will be on how AI in RAN can boost spectral efficiency. I have about 20 minutes, which is far too short to convey what is happening in this field at the moment, so why not write a small piece on the field as I see it today? Enjoy, and feel free to comment or contact me directly for one-on-one discussions. If you are at the event, feel free to connect there as well.

LOOKING BACK.

Machine learning and artificial intelligence in the Radio Access Network did not arrive suddenly with the recent wave of AI-RAN initiatives. Long before the term "AI-native RAN" (and even the term AI) became fashionable, vendors were experimenting with data-driven methods to optimize radio performance, automate operations, and manage complexity that traditional engineering rules could no longer handle well, or at all. One of the first widely recognized examples came from Ericsson, which worked with SoftBank in Japan on advanced coordination features that would later be branded as Elastic RAN. By dynamically orchestrating users and cell sites, these early deployments delivered substantial throughput gains in dense environments such as Tokyo Station (with more than half a million passengers daily). Although they were not presented as "AI solutions," they relied on principles of adaptive optimization that anticipated later machine learning–based control loops.

Nokia, and previously Nokia-Siemens Networks, pursued a similar direction through Self-Organizing Networks. SON functions, such as neighbor list management, handover optimization, and load balancing, increasingly incorporated statistical learning and pattern recognition techniques. These capabilities were rolled out across 3G and 4G networks during the 2010s and can be seen as some of the earliest mainstream applications of machine learning inside the RAN. Samsung, Huawei, and ZTE also invested in intelligent automation at this stage, often describing their approaches in terms of network analytics and energy efficiency rather than artificial intelligence, but drawing on many of the same methods. Around the same time, startups began pushing the frontier further: Uhana, founded in 2016 (acquired by VMware in 2019), pioneered the use of deep learning for real-time network optimization and user-experience prediction, going beyond rule-based SON to deliver predictive, closed-loop control. Building on that trajectory, today's Opanga represents a (much) more advanced, AI-native and vendor-agnostic RAN platform, addressing long-standing industry challenges such as congestion management, energy efficiency, and intelligent spectrum activation at scale. In my opinion, both Uhana and Opanga can be seen as early exemplars of the types of applications that later inspired the formalization of rApps and xApps in the O-RAN framework.

What began as incremental enhancements in SON and coordination functions gradually evolved into more explicit uses of AI. Ericsson extended its portfolio with machine-learning-based downlink link adaptation and parameter optimization; Nokia launched programs to embed AI into both planning and live operations; and other vendors followed suit. By the early 2020s, the industry had begun to coalesce around the idea of an AI-RAN, where RAN functions and AI workloads are tightly interwoven. This vision took concrete form in 2024 with the launch of the AI-RAN Alliance, led by NVIDIA and comprising Ericsson, Nokia, Samsung, SoftBank, T-Mobile, and other partners.

The trajectory from SON and early adaptive coordination toward today’s GPU-accelerated AI-RAN systems underscores that artificial intelligence in the RAN has been less a revolution than an evolution. The seeds were sown in the earliest machine-learning-driven automation of 3G and 4G networks, and they have grown into the integrated AI-native architectures now being tested for 5G Advanced and beyond.

Figure: Evolution of Open RAN architectures — from early X-RAN disaggregation (2016–2018) to O-RAN standardization (2018–2020), and today’s dual paths of full disaggregated O-RAN and vRAN with O-RAN interfaces.

AI IN OPEN RAN – THE EARLIER DAYS.

Open RAN as a movement has its roots in the xRAN Forum (founded in 2016) and the O-RAN Alliance (created in early 2018 when xRAN merged with the C-RAN Alliance). The architectural thinking and evolution around what has today become the O-RAN Architecture (with its two major options) is interesting and is very briefly summarized in the figure above. The late 2010s were a time when architectural choices were made in a climate of enormous enthusiasm for cloud-native design and edge cloud computing. At that time, "disaggregation for openness" was considered an essential condition for competition, innovation, and efficiency. I also believe that when xRAN was initiated around 2016, the leading academic and industrial players came predominantly from Germany, South Korea, and Japan. Each of these R&D cultures has a deep tradition of best-in-breed engineering, that is, the idea that the most specialized team or vendor should optimize every single subsystem, and that overall performance emerges from integrating these world-class components. Looking back today, with the benefit of hindsight, one can see how this cultural disposition amplified the push for the maximum-disaggregation paradigm, even where integration and operational realities would later prove more challenging. It also explains why early O-RAN documents are so ambitious in scope, embedding intelligence into every layer and opening almost every interface imaginable. What appeared to be a purely technical roadmap was, in my opinion, also heavily shaped by the R&D traditions and innovation philosophies of the national groups leading the effort.

However, although this is a super interesting topic (i.e., how culture and background influence innovation, architectural ideas, and choices), it is not the focus of this paper. AI in RAN is the focus. From its very first architectural documents, O-RAN included the idea that AI and ML would be central to automating and optimizing the RAN.

The key moment was 2018, when the O-RAN Alliance released its initial O-RAN architecture white paper ("O-RAN: Towards an Open and Smart RAN"). That document explicitly introduced the concept of the Non-Real-Time (NRT) RIC (rApps) and the Near-Real-Time RIC (xApps) as platforms designed to host AI/ML-based applications. The NRT RIC was envisioned to run in the operator's cloud, providing policy guidance, training, and coordination of AI models at timescales well above a second. In contrast, the Near-RT RIC (which I will refer to as the RT RIC for brevity, an admittedly unfortunate abbreviation given how close it sits to NRT RIC) would host faster-acting control applications within the 10-ms to 1-s regime. These were framed not just as generic automation nodes but explicitly as AI/ML hosting environments. The idea of a dual RIC structure, breaking the architecture into layers of relevant timescales, was not conceived in a vacuum. It is, in many ways, an explicit continuation of the ideas introduced in the 3GPP LTE Self-Organizing Network (SON) specifications, where optimization functions were divided between centralized, long-horizon processes running in the network management system and distributed, faster-acting functions embedded at the eNodeB. In the LTE context, the offline or centralized SON dealt with tasks such as PCI assignment, ANR management, and energy-saving strategies at timescales of minutes to days, while the online or distributed SON reacted locally to interference, handover failures, or outages at timescales of hundreds of milliseconds to a few seconds. O-RAN borrowed this logic but codified it in a much more rigid fashion: the Non-RT RIC inherited the role of centralized SON, and the RT RIC inherited the role of distributed SON, with the addition of standardized interfaces and an explicit role as AI application platforms.

Figure: Comparison between the SON functions defined by 3GPP for LTE (right) and the O-RAN RIC architecture (left). The LTE model divides SON into centralized offline (C-SON, in OSS/NMS, working on minutes and beyond) and distributed online (D-SON, at the edge, operating at 100 ms to seconds) functions. In contrast, O-RAN formalized this split into the Non-RT RIC (≥1 s) and Near-RT RIC (10 ms–1 s), embedded within the SMO hierarchy. The figure highlights how O-RAN codified and extended SON’s functional separation into distinct AI/ML application platforms.

The choice to formalize this split also had political dimensions. Vendors were reluctant to expose their most latency-critical baseband algorithms to external control, and the introduction of an RT RIC created a sandbox where third-party innovation could be encouraged without undermining vendor control of the physical layer. At the same time, operators sought assurances that policy, assurance, and compliance would not be bypassed by low-latency applications; therefore, the Non-RT RIC was positioned as a control tower layer situated safely above the millisecond domain. In this sense, the breakup of the time domain was as much a governance and trust compromise as a purely technical necessity. By drawing a clear line between “safe and slow” and “fast but bounded,” O-RAN created a model that felt familiar to operators accustomed to OSS hierarchies, while signaling to regulators and ecosystem players that AI could be introduced in a controlled and explainable manner.

Figure: Functional and temporal layering of the O-RAN architecture — showing the SMO with embedded NRT-RIC for long-horizon and slow control loops, the RT-RIC for fast loops, and the CU, DU, and RU for real-time through instant reflex actions, interconnected via standardized O-, A-, E-, F-, and eCPRI interfaces.

The figure above shows the O-RAN reference architecture with functional layers and interfaces. The Service Management and Orchestration (SMO) framework hosts the Non-Real-Time RIC (NRT-RIC), which operates on long-horizon loops (greater than 1 second) and is connected via the O1 interface to network elements and via O2 to cloud infrastructure (e.g., NFVI and MANO). Policies, enrichment information, and trained AI/ML models are delivered from the NRT-RIC to the Real-Time RIC (RT-RIC) over the A1 interface. The RT-RIC executes closed-loop control in the 10-ms to 1-s domain through xApps, interfacing with the CU/DU over E2. The 3GPP F1 split separates the CU and DU, while the DU connects to the RU through the open fronthaul (eCPRI/7-2x split). The RU drives active antenna systems (AAS) over largely proprietary interfaces (AISG for RET, vendor-specific for massive MIMO). The vertical time-scale axis highlights the progression from long-horizon orchestration at the SMO down to instant reflex functions in the RU/AAS domain. Both RU and DU operate on a transmission time interval (TTI) between 1 ms and 625 microseconds.

The O-RAN vision for AI and ML is built directly into its architecture from the very first white paper in 2018. The alliance described two guiding themes: openness and intelligence. Openness was about enabling multi-vendor, cloud-native deployments with open interfaces, which was expected to make RAN solutions much more economical, while intelligence was about embedding machine learning and artificial intelligence into every layer of the RAN to deal with growing complexity (i.e., some of it self-inflicted by architecture and system design).

The architectural realization of this vision is the hierarchical RAN Intelligent Controller (RIC), which separates the control into different time domains and couples each to appropriate AI/ML functions:

  • Service Management and Orchestration (SMO, timescale > 1 second) – The Control Tower: The SMO provides the overarching management and orchestration framework for the RAN. Its functions extend beyond the Non-RT RIC, encompassing lifecycle management, configuration, assurance, and resource orchestration across both network functions and the underlying cloud infrastructure. Through the O1 interface (see above figure), the SMO collects performance data, alarms, and configuration information from the CU, DU, and RU, enabling comprehensive FCAPS (Fault, Configuration, Accounting, Performance, Security) management. Through the O2 interface (see above), it orchestrates cloud resources (compute, storage, accelerators) required to host virtualized RAN functions and AI/ML workloads. In addition, the SMO hosts the Non-RT RIC, meaning it not only provides operational oversight but also integrates AI/ML governance, ensuring that trained models and policy guidance align with operator intent and regulatory requirements.
  • Non-Real-Time RIC (NRT RIC, timescale > 1 second) – The Policy Brain: Directly beneath, embedded in the SMO, lies the NRT-RIC, described here as the "policy brain." This is where policy management, analytics, and AI/ML model training take place. The NRT-RIC collects large volumes of data from the network (spatial-temporal traffic patterns, mobility traces, QoS (Quality of Service) statistics, massive MIMO settings, etc.) and uses them for offline training and long-term optimization. Trained models and optimization policies are then passed down to the RT RIC via the A1 interface (see above). A central functionality of the NRT-RIC is the hosting of rApps (e.g., Python or Java code), which implement policy-driven use cases such as energy savings, traffic steering, and mobility optimization. These applications leverage the broader analytic scope and longer timescales of the NRT-RIC to shape intent and guide the near-real-time actions of the RT-RIC. The NRT-RIC is traditionally viewed as an embedded entity within the SMO (although, in theory, it could be a standalone entity).
  • Real-Time RIC (RT RIC, 10 ms – 1 second timescale) – The Decision Engine: This is where AI-driven control is executed in closed loops. The RT-RIC hosts xApps (e.g., Go or C++ code) that run inference on trained models and perform tasks such as load balancing, interference management, mobility prediction, QoS management, slicing, and per-user (UE) scheduling policies. It maintains a Radio Network Information Base (R-NIB) fed via the E2 interface (see above) from the DU/CU, and uses this data to make fast control decisions in near real-time.
  • Centralized Unit (CU): Below the RT-RIC sits the Centralized Unit, which takes on the role of the “shaper” in the O-RAN architecture. The CU is responsible for higher-layer protocol processing, including PDCP (Packet Data Convergence Protocol) and SDAP (Service Data Adaptation Protocol), and is therefore the natural point in the stack where packet shaping and QoS enforcement occur. At this level, AI-driven policies provided by the RT-RIC can directly influence how data streams are prioritized and treated, ensuring that application- or slice-specific requirements for latency, throughput, and reliability are respected. By interfacing with the RT-RIC over the E2 interface, the CU can dynamically adapt QoS profiles and flow control rules based on real-time network conditions, balancing efficiency with service differentiation. In this way, the CU acts as the bridge between AI-guided orchestration and the deterministic scheduling that occurs deeper in the DU/RU layers. The CU operates on a real-time but not ultra-tight timescale, typically in the range of tens of milliseconds up to around one second (similar to the RT-RIC), depending on the function.
  • DU/RU layer (sub-1 ms down to hundreds of microseconds) – The Executor & Muscles: The Distributed Unit (DU), located below the CU, is referred to as the "executor." It handles scheduling and precoding at near-instant timescales, measured in sub-millisecond intervals. Here, AI functions take the form of compute agents that apply pre-trained or lightweight models to optimize resource block allocation and reduce latency. At the bottom, the Radio Unit (RU) represents the "muscles" of the system. Its reflex actions happen at the fastest time scales, down to hundreds of microseconds. While it executes deterministic signal processing, beamforming, and precoding, it also feeds measurements upward to fuel AI learning higher in the chain. Here reside the tightest loops, on a Transmission Time Interval (TTI) time scale (i.e., 1 ms – 625 µs), such as baseband PHY processing, HARQ feedback, symbol scheduling, and beamforming weights. These functions require deterministic latencies and cannot rely on higher-layer AI/ML loops. Instead, the DU/RU executes control at the L1/L2 level, while still feeding measurement data upward for AI/ML training and adaptation.
Figure: AI’s hierarchical chain of command in O-RAN — from the SMO as the control tower and NRT-RIC as the policy brain, through the RT-RIC as the decision engine and CU as shaper, down to DU as executor and RU as muscles. Each layer aligns with guiding timescales, agentic AI roles, and contributions to spectral efficiency, balancing perceived SE gains, overhead reductions, and SINR improvements.

The figure above portrays the Open RAN as a "chain of command" where intelligence flows across time scales, from long-horizon orchestration in the cloud down to sub-millisecond reflexes in the radio hardware. To make it more tangible, I have annotated an example spectral-efficiency optimization use case on the right side of the figure. The cascading structure, shown above, highlights how AI and ML roles evolve across the architecture. For instance, the SMO and NRT-RIC increase perceived spectral efficiency through strategic optimization, while the RT-RIC reduces inefficiencies by orchestrating fast loops. Additionally, the DU/RU contribute directly to signal quality improvements, such as SINR gains. The figure thus illustrates Open RAN not as a flat architecture, but as a hierarchy of brains, decisions, and muscles, each with its own guiding time scale and AI function. Taken together, the vision is that AI/ML operates across all time domains, with the non-RT RIC providing strategic intelligence and model training, the RT RIC performing agile, policy-driven adaptation, and the DU/RU executing deterministic microsecond-level tasks, while exposing data to feed higher-layer intelligence. With open interfaces (A1, E2, open fronthaul), this layered AI approach allows multi-vendor participation, third-party innovation, and closed-loop automation across the RAN.

From 2019 onward, O-RAN working groups such as WG2 (Non-RT RIC & A1 interface) and WG3 (RT RIC & E2 interface) began publishing technical specifications that defined how AI/ML models could be trained, distributed, and executed across the RIC layers. By 2020–2021, proofs of concept and plugfests showcased concrete AI/ML use cases, such as energy savings, traffic steering, and anomaly detection, running as xApps (residing in the RT-RIC) and rApps (residing in the NRT-RIC). Following the first O-RAN specifications and proofs of concept, it becomes helpful to visualize how the different architectural layers relate to AI and ML. You will find many of the standardization documents in the reference list at the end of the document.

rAPPS AND xAPPS – AN ILLUSTRATION.

In the Open RAN architecture, the system's intelligence is derived from the applications that run on top of the RIC platforms. The rApps exist in the Non-Real-Time RIC and xApps in the Real-Time RIC. While the RICs provide the structural framework and interfaces, it is the apps that carry the logic, algorithms, and decision-making capacity that ultimately shape network behavior. rApps operate at longer timescales, often drawing on large datasets and statistical analysis to identify trends, learn patterns, and refine policies. They are well-suited to classical machine learning processes such as regression, clustering, and reinforcement learning, where training cycles and retraining benefit from aggregated telemetry and contextual information. In practice, rApps are commonly developed in high-level languages such as Python or Java, leveraging established AI/ML libraries and data processing pipelines. In contrast, xApps must execute decisions in near-real time, directly influencing scheduling, beamforming, interference management, and resource allocation. Here, the role of AI and ML is to translate abstract policy into fast, context-sensitive actions, with an increasing reliance on intelligent control strategies, adaptive optimization, and eventually even agent-like autonomy (more on that later in this article). To meet these latency and efficiency requirements, xApps are typically implemented in performance-oriented languages like C++ or Go. However, Python is often used in prototyping stages before critical components are optimized. Together, rApps and xApps represent the realization of intelligence in Open RAN: one set grounded in long-horizon learning and policy shaping (i.e., Non-RT RIC and rApps), the other in short-horizon execution and reflexive adaptation (RT-RIC and xApps). Their interplay is not only central to energy efficiency, interference management, and spectral optimization but also points toward a future where classical ML techniques merge with more advanced AI-driven orchestration to deliver networks that are both adaptive and self-optimizing. Let us have a quick look at examples that illustrate how these applications work in the overall O-RAN architectural stack.

Figure: Energy efficiency loop in Open RAN, showing how long-horizon rApps set policies in the NRT-RIC, xApps in the RT-RIC execute them, and DU/RU translate these into scheduler and hardware actions with continuous telemetry feedback.

One way to understand the rApp–xApp interaction is to follow a simple energy efficiency use case, shown in the figure above. At the top, an energy rApp in the Non-RT RIC learns long-term traffic cycles and defines policies such as 'allow cell muting below 10% load.' These policies are then passed to the RT-RIC, where an xApp monitors traffic every second and decides when to shut down carriers or reduce power. The DU translates these decisions into scheduling and resource allocations, while the RU executes the physical actions such as switching off RF chains, entering sleep modes, or muting antenna elements. The figure above illustrates how policy flows downward while telemetry and KPIs flow back up, forming a continuous energy optimization loop. Another similarly layered logic applies to interference coordination, as shown in the figure below. Here, an interference rApp in the Non-RT RIC analyzes long-term patterns of inter-cell interference and sets coordination policies — for example, defining thresholds for ICIC, CoMP, or power capping at the cell edge. The RT-RIC executes these policies through xApps that estimate SINR in real time, apply muting patterns, adjust transmit power, and coordinate beam directions across neighboring cells. The DU handles PRB scheduling and resource allocation, while the RU enacts physical layer actions, such as adjusting beam weights or muting carriers. This second loop shows how rApps and xApps complement each other when interference is the dominant concern.

Figure: Interference coordination loop in Open RAN, where rApps define long-term coordination policies and xApps execute real-time actions on PRBs, power, and beams through DU/RU with continuous telemetry feedback.
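
To make the energy efficiency use case described above a bit more concrete, here is a minimal Python sketch of the rApp–xApp interaction. The policy schema, function names, and thresholds are my own illustration and not O-RAN-specified constructs: the rApp derives a per-cell muting policy from traffic history, and the xApp evaluates live load against that policy on every control cycle.

```python
# Minimal sketch of the energy-saving loop described above (illustrative only).
# The policy schema, function names, and thresholds are hypothetical, not O-RAN-specified.

from dataclasses import dataclass

@dataclass
class A1Policy:
    """Long-horizon intent produced by the rApp and passed down over A1 (simplified)."""
    cell_id: str
    mute_load_threshold: float   # e.g. 0.10 -> allow muting below 10% PRB load
    min_active_carriers: int     # guardrail so coverage is never fully dropped

def energy_rapp(traffic_history: dict[str, list[float]]) -> list[A1Policy]:
    """rApp (Non-RT RIC): learn long-term traffic cycles and emit per-cell policies."""
    policies = []
    for cell_id, loads in traffic_history.items():
        night_load = min(loads)                      # crude stand-in for a learned daily cycle
        threshold = max(0.05, min(0.10, night_load * 2))
        policies.append(A1Policy(cell_id, threshold, min_active_carriers=1))
    return policies

def energy_xapp(policy: A1Policy, current_load: float, active_carriers: int) -> str:
    """xApp (RT RIC): evaluate live telemetry every second against the rApp policy."""
    if current_load < policy.mute_load_threshold and active_carriers > policy.min_active_carriers:
        return "MUTE_CARRIER"     # DU translates this into scheduler and RU sleep actions
    if current_load > 2 * policy.mute_load_threshold:
        return "ACTIVATE_CARRIER"
    return "NO_ACTION"

history = {"cell-17": [0.04, 0.03, 0.35, 0.60, 0.55, 0.08]}
policy = energy_rapp(history)[0]
print(policy, energy_xapp(policy, current_load=0.04, active_carriers=2))
```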

Yet these loops do not always reinforce each other. If left uncoordinated, they can collide. An energy rApp may push the system toward contraction, reducing Tx power, muting carriers, and blanking PRBs, while an interference xApp simultaneously pushes for expansion, raising Tx power, activating carriers, and dynamically allocating PRBs. Both act on the same levers inside the CU/DU/RU, but in opposite directions. The result can be oscillatory behaviour, with power and scheduling thrashing back and forth, degrading QoS, and wasting energy. The figure below illustrates this risk and underscores why conflict management and intent arbitration are critical for a stable Open RAN.

Figure: Example of conflict between an energy-saving rApp and an interference-mitigation xApp, where opposing control intents on the same CU/DU/RU parameters can cause oscillatory behaviour.

Beyond the foundational description of how rApps and xApps operate, it is equally important to address the conflicts and issues that can arise when multiple applications are deployed simultaneously in the Non-RT and RT-RICs. Because each app is designed with a specific optimization objective in mind, it is almost inevitable that two or more apps will occasionally attempt to act on the same parameters in contradictory ways. While the energy efficiency versus interference management example is already well understood, there are broader categories of conflict that extend across both timescales.

Conflicts between rApps occur when long-term policy objectives are not aligned. For instance, a spectral efficiency rApp may continuously push the network toward maximizing bits per Hertz by advocating for higher transmit power, more active carriers, or denser pilot signaling. At the same time, an energy-saving rApp may be trying to mute those very carriers, reduce pilot density, and cap transmit power to conserve energy. Both policies can be valid in isolation, but when issued without coordination, they create conflicting intents that leave the RT-RIC and lower layers struggling to reconcile them. Even worse, the oscillatory behavior that results can propagate into the DU and RU, creating instability at the level of scheduling and RF execution. The xApps, too, can easily find themselves in conflict when they react to short-term KPI fluctuations with divergent strategies. An interference management xApp might impose aggressive PRB blanking patterns or reduce power at the cell edge, while a mobility optimization xApp simultaneously widens cell range expansion parameters to offload traffic. The first action is designed to protect edge users, while the second may flood them with more load, undoing the intended benefit. Similarly, an xApp pushing for higher spectral efficiency may keep activating carriers and pushing toward higher modulation and coding schemes, while another xApp dedicated to energy conservation is attempting to put those carriers to sleep. The result is rapid toggling of resource states, which wastes signaling overhead and disrupts user experience.

The O-RAN Alliance has recognized these risks and proposed mechanisms to address them. Architecturally, conflict management is designed to reside in the RT-RIC, where a Conflict Mitigation and Arbitration framework evaluates competing intents from different xApps before they reach the CU/DU. Policies from the Non-RT RIC can also be tagged with priorities or guardrails, which the RT-RIC uses to arbitrate real-time conflicts. In practice, this means that when two xApps attempt to control the same parameter, the RT-RIC applies priority rules, resolves contradictions, or, in some cases, rejects conflicting commands entirely. On the rApp side, conflict resolution is handled at a higher abstraction level by the Non-RT RIC, which can consolidate or harmonize policies before they are passed down through the A1 interface.

The layered conflict mitigation approach in O-RAN provides mechanisms to arbitrate competing intents between apps. It can reduce the risk of oscillatory behavior, but it cannot guarantee stability completely. Since rApps and xApps may originate from different sources and vary in design quality, careful testing, certification, and continuous monitoring will remain essential to ensure that application diversity does not undermine network coherence. Equally important are policies that impose guardbands, buffers, and safety margins in how parameters can be tuned, which serve as a hedge against instabilities when apps are misaligned, whether the conflict arises between rApps, between xApps, or across the rApp–xApp boundary. These guardbands provide the architectural equivalent of shock absorbers, limiting the amplitude of conflicting actions and ensuring that, even if multiple apps pull in different directions, the network avoids catastrophic oscillations.
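
To illustrate the arbitration-plus-guardband idea in the two paragraphs above, here is a small, hypothetical Python sketch. The request format, priority scheme, and clamping logic are my own illustration rather than the O-RAN-specified Conflict Mitigation function: competing xApp requests on the same parameter are resolved by priority, and the winning value is clamped to a guardband so conflicting intents cannot swing the parameter to extremes.

```python
# Illustrative sketch of RT-RIC conflict arbitration with guardbands (hypothetical API,
# not the O-RAN-specified Conflict Mitigation framework).

def arbitrate(requests: list[dict], priorities: dict[str, int],
              guardband: tuple[float, float]) -> dict[str, float]:
    """Resolve competing per-parameter requests from xApps.

    requests  : [{"app": "energy_xapp", "param": "tx_power_dbm", "value": 37.0}, ...]
    priorities: higher number wins when two apps touch the same parameter
    guardband : (min, max) clamp applied to the winning value, acting as a shock absorber
    """
    winners: dict[str, dict] = {}
    for req in requests:
        param = req["param"]
        best = winners.get(param)
        if best is None or priorities[req["app"]] > priorities[best["app"]]:
            winners[param] = req
    lo, hi = guardband
    return {p: min(max(r["value"], lo), hi) for p, r in winners.items()}

requests = [
    {"app": "energy_xapp", "param": "tx_power_dbm", "value": 30.0},        # contraction
    {"app": "interference_xapp", "param": "tx_power_dbm", "value": 46.0},  # expansion
]
print(arbitrate(requests, {"energy_xapp": 1, "interference_xapp": 2}, guardband=(32.0, 43.0)))
# -> {'tx_power_dbm': 43.0}: the higher-priority request wins, but the guardband caps the swing.
```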

Last but not least, the risks may increase as rApps and xApps evolve beyond narrowly scoped optimizers into more agentic forms. An agentic app does not merely execute a set of policies or inference models. It can plan, explore alternatives, and adapt its strategies with a degree of autonomy (and agency). While this is likely to unlock powerful new capabilities, it also expands the possibility of emergent and unforeseen interactions. Two agentic apps, even if aligned at deployment, may drift toward conflicting behaviors as they continuously learn and adapt in real time. Without strict guardrails and robust conflict resolution, such autonomy could magnify instabilities rather than contain them, leading to system behavior that is difficult to predict or control. In this sense, the transition from classical rApps and xApps to agentic forms is not only an opportunity but also a new frontier of risk that must be carefully managed within the O-RAN architecture.

IS AI IN RAN ALL ABOUT “ChatGPT”?

I want to emphasize that when I address AI in the RAN, I generally do not refer to generative language models, such as ChatGPT, or other large-scale conversational systems built upon a human language context. Those technologies are based on Large Language Models (LLMs), which belong to the family of deep learning architectures built on transformer networks. A transformer network is a type of neural network architecture built around the attention mechanism, which allows the model to weigh the importance of different parts of an input sequence simultaneously rather than processing it step by step. They are typically trained on enormous human-based text datasets, utilizing billions of parameters, which requires immense computational resources and lengthy training cycles. Their most visible purpose today is to generate and interpret human language, operating effectively at the scale of seconds or longer in user interactions. In the context of network operations, I suspect that GPT-like LLMs will find their place at the frontend, where humans need to interact with the communications network using human language. That said, the notion of "generative AI" is not inherently limited to natural language. The same underlying transformer-based methods can be adapted to other modalities (information sources), including machine-oriented languages or even telemetry sequences. For example, a generative model trained on RAN logs, KPIs, and signaling traces could be used to create synthetic telemetry or predict unusual event patterns. In this sense, generative AI could provide value to the RAN domain by augmenting datasets, compressing semantic information, or even assisting in anomaly detection. The caveat, however, is that these benefits still rely on heavy models with large memory footprints and significant inference latency. While they may serve well in the Non-RT RIC or SMO domain, where time scales are relaxed and compute resources are more abundant, they are unlikely to be terribly practical for the RT RIC or the DU/RU, where deterministic deadlines in the millisecond or microsecond range must be met.

By contrast, the application of AI/ML in the RAN is fundamentally about real-time signal processing, optimization, and control. RAN intelligence focuses on tasks such as load balancing, interference mitigation, mobility prediction, traffic steering, energy optimization, and resource scheduling. These are not problems of natural human language understanding but of strict scheduling and radio optimization. The time scales at which these functions operate are orders of magnitude shorter than those typical of generative AI: from long-horizon analytics in the Non-RT RIC (greater than one second), to near-real-time inference in the RT-RIC (10 ms–1 s), and finally to deterministic microsecond loops in the DU/RU. This stark difference in time scales and problem domains explains why it appears unlikely that the RAN can be controlled end-to-end by "ChatGPT-like" AI. LLMs, whether trained on human language or telemetry sequences, are (today at least) too computationally heavy, too slow in inference, and optimized for open-ended reasoning rather than deterministic control. Instead, the RAN requires a mix of lightweight supervised and reinforcement learning models, online inference engines, and, in some cases, ultra-compact TinyML implementations that can run directly in hardware-constrained environments.

In general, AI in the RAN is about embedding intelligence into control loops at the right time scale and with the right efficiency. Generative AI may have a role in enriching data and informing higher-level orchestration. It is difficult to see how it can efficiently replace the tailored, lightweight models that drive the RAN’s real-time and near-real-time control.

As O-RAN (and RAN in general) evolves from a vision of open interfaces and modular disaggregation into a true intelligence-driven network, one of the clearest frontiers is the use of Large Language Models (LLMs) at the top of the stack (i.e., frontend/human-facing). The SMO, with its embedded Non-RT RIC, already serves as the strategic brain of the architecture, responsible for lifecycle management, long-horizon policy, and the training of AI/ML models. This is also the one domain where time scales are relaxed, measured in seconds or longer, and where sufficient compute resources exist to host heavier models. In this environment, LLMs can be utilized in two key ways. First, they can serve as intent interpreters for intent-driven network operations, bridging the gap between operator directives and machine-executable policies. Instead of crafting detailed rules or static configuration scripts, operators could express high-level goals, such as prioritizing emergency service traffic in a given region or minimizing energy consumption during off-peak hours. An LLM, tuned with telecom-specific knowledge, can translate those intents into precise policy actions distributed through the A1 interface to the RT RIC. Second, LLMs can act as semantic compressors, consuming the vast streams of logs, KPIs, and alarms that flow upward through O1, and distilling them into structured insights or natural language summaries that humans can easily grasp. This reduces cognitive load for operators while ensuring (at least we should hope so!) that the decision logic remains transparent, possibly explainable, and auditable. In both roles, LLMs do not replace the specialized ML models running lower in the architecture. Instead, they enhance the orchestration layer by embedding reasoning and language understanding where time and resources permit.
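
As a rough illustration of the intent-interpreter role, the sketch below shows how operator intent expressed in natural language might be turned into a structured, A1-style policy object. The llm_complete() call is a placeholder for whatever telecom-tuned LLM the SMO would actually host, and the JSON schema is invented for this example rather than taken from any O-RAN specification.

```python
# Sketch of the "intent interpreter" role: operator intent in, A1-style policy out.
# llm_complete() is a placeholder for a telecom-tuned LLM service hosted at the SMO;
# the JSON schema below is illustrative, not an O-RAN-defined A1 policy type.

import json

POLICY_SCHEMA_HINT = """Return JSON with fields:
  scope:       {"region": str, "service": str}
  objective:   one of ["prioritize", "minimize_energy", "maximize_throughput"]
  constraints: {"min_qos_class": int, "time_window": str}"""

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a telecom-tuned LLM; returns canned JSON here."""
    return json.dumps({
        "scope": {"region": "coastal-north", "service": "emergency"},
        "objective": "prioritize",
        "constraints": {"min_qos_class": 1, "time_window": "storm-warning"},
    })

def intent_to_a1_policy(intent_text: str) -> dict:
    raw = llm_complete(f"{POLICY_SCHEMA_HINT}\n\nOperator intent: {intent_text}")
    policy = json.loads(raw)
    # Guardrail: never let a generated policy skip validation of required fields.
    assert policy["objective"] in {"prioritize", "minimize_energy", "maximize_throughput"}
    return policy

print(intent_to_a1_policy("Prioritize emergency service traffic in the coastal north region"))
```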

WHAT AI & ML ARE LIKELY TO WORK IN RAN?

This piece assumes a working familiarity with core machine-learning concepts, models, training and evaluation processes, and the main families you will encounter in practice. If you want a compact, authoritative refresher, most of what I reference is covered, clearly and rigorously, in Goodfellow, Bengio, and Courville’s Deep Learning (Adaptive Computation and Machine Learning series, MIT Press). For hands-on practice, many excellent Coursera courses walk through these ideas with code, labs, and real datasets. They are a fast way to build the intuition you will need for the examples discussed in this section. Feel free to browse through my certification list, which includes over 60 certifications, with the earliest ML and AI courses dating back to 2015 (should have been updated by now), and possibly find some inspiration.

Throughout the article, I use "AI" and "ML" interchangeably for readability, but formally, they should be regarded as distinct. Artificial Intelligence (AI) is the broader field concerned with building systems that perceive their environment, reason about it, and act to achieve goals, encompassing planning, search, knowledge representation, learning, and decision-making. Machine Learning (ML) is a subset of AI that focuses specifically on data-driven methods that learn patterns or policies from examples, improving performance on a task through experience rather than explicit, hand-crafted rules; it is in this data-driven subset that most of the interesting work in the RAN is happening.

Figure: Mapping of AI roles, data flows, and model families across the O-RAN stack — from SMO and NRT-RIC handling long-horizon policy, orchestration, and training, to RT-RIC managing fast-loop inference and optimization, down to CU and DU/RU executing near-real-time and hardware-domain actions with lightweight, embedded AI models.

Artificial intelligence in the O-RAN stack exhibits distinct characteristics depending on its deployment location. Still, it is helpful to see it as one continuous flow from intent at the very top to deterministic execution at the very bottom. So, let’s go with the flow.

At the level of the Service Management and Orchestration, AI acts as the control tower for the entire system. This is where business or human intent must be translated into structured goals, and where guardrails, audit mechanisms, and reversibility are established to ensure compliance with regulatory oversight. Statistical models and rules remain essential at this layer because they provide the necessary constraint checking and explainability for governance. Yet the role of large language models is increasing rapidly, as they provide a bridge from human language into structured policies, intent templates, and root-cause narratives. Generative approaches are also beginning to play a role by producing synthetic extreme events to stress-test policies before they are deployed. While synthetic data for rare events offers a powerful tool for training and stress-testing AI systems, it may carry significant statistical risks. Generative models can fail to represent the very distributions they aim to capture, bias inference, or even introduce entirely artificial patterns into the data. Their use therefore requires careful anchoring in extremes-aware statistical methods, rigorous validation against real-world holdout data, and safeguards against recursive contamination. When these conditions are met, synthetic data can meaningfully expand the space of scenarios available for training and testing. Without the appropriate control mechanisms, decisions or policies based on synthetic data risk becoming a source of misplaced confidence rather than resilience. With all that considered, the SMO should be the steward of safety and interpretability, ensuring that only validated and reversible actions flow down into the operational fabric. If agentic AI is introduced here, it could reshape how intent is operationalized. Instead of merely validating human inputs, agentic systems might proactively (and autonomously) propose actions, refine intents into strategies, or initiate self-healing workflows on their own. While this promises greater autonomy and resilience, it also raises new challenges for oversight, since the SMO would become not just a filter but a creative actor in its own right.
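
As one concrete example of the validation safeguards mentioned above, the sketch below compares a synthetic KPI sample against a real-world holdout using a two-sample Kolmogorov–Smirnov test plus an explicit check on the upper tail. The distributions and acceptance thresholds are illustrative only.

```python
# Minimal sketch of one safeguard mentioned above: validating synthetic KPI samples
# against a real-world holdout before they are used for training or stress tests.
# Distributions and the acceptance thresholds are illustrative.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
real_holdout = rng.lognormal(mean=2.0, sigma=0.4, size=5000)   # e.g. real cell-throughput KPI
synthetic    = rng.lognormal(mean=2.1, sigma=0.7, size=5000)   # generator output under test

stat, p_value = ks_2samp(real_holdout, synthetic)

# Check the tail specifically: rare-event stress tests live or die on the extremes.
tail_real = np.quantile(real_holdout, 0.99)
tail_synth = np.quantile(synthetic, 0.99)

if p_value < 0.01 or abs(tail_synth - tail_real) / tail_real > 0.25:
    print(f"Reject synthetic set (KS p={p_value:.3g}, 99th-percentile drift "
          f"{abs(tail_synth - tail_real) / tail_real:.0%})")
else:
    print("Synthetic set accepted for stress testing")
```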

At the top level, rApps (which reside in the NRT-RIC) are indirectly shaped by SMO policies, as they inherit intent, guardrails, and reversibility constraints. For example, when the SMO utilizes LLMs to translate business goals into structured intents, it essentially sets the design space within which rApps can train or re-optimize their models. The SMO also provides observability hooks, allowing rApp outputs to be audited before being pushed downstream.

The Non-Real-Time RIC can be understood as the long-horizon brain of the RAN. Its function is to train, retrain, and refine models, conduct long-term analysis, and transform historical and simulated experience into reusable policies. Reinforcement learning in its many flavors is the cornerstone here, particularly offline or constrained forms that can safely explore large data archives or digital twin scenarios. Autoencoders, clustering, and other representation learning methods uncover hidden structures in traffic and mobility, while supervised deep networks and boosted trees provide accurate forecasting of demand and performance. Generative simulators extend the scope by fabricating rare but instructive scenarios, allowing policies to be trained for resilience against the unexpected. Increasingly, language-based systems are also being applied to policy generation, bridging between strategic descriptions and machine-enforceable templates. The NRT-RIC strengthens AI’s applicability by moving risk away from live networks, producing validated artifacts that can later be executed at speed. If an agentic paradigm is introduced here, it would mean that the NRT-RIC is not merely a training ground but an active planner, continuously setting objectives for the rest of the system and negotiating trade-offs between coverage, energy, and user experience. This shift would make the Non-RT RIC a more autonomous planning organ, but it would also demand stronger mechanisms for bounding and auditing its explorations.

Here, at the NRT-RIC, rApps that are native to this RIC level are the central vehicle for model training, policy generation, and scenario exploration. They consume SMO intent and turn it into reusable policies or models for the RT-RIC. For example, a mobility rApp could use clustering and reinforcement learning to generate policies for user handover optimization, which the RT-RIC then executes in near real time. Another rApp might simulate mMIMO pairing scenarios offline, distill them into simplified lookup tables or quantized policies, and hand these artifacts down for execution at the DU/RU. Thus, rApps act as the policy factories. Their outputs cascade into xApps at the RT-RIC, into CU parameter sets, and into lightweight silicon-bound models deeper down.
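
A toy sketch of this policy-factory pattern is shown below: an expensive offline policy (here just a stand-in function in place of a trained RL or simulation model) is tabulated once into a small quantized lookup table, which is the artifact actually handed down for fast execution. All names and grid choices are hypothetical.

```python
# Sketch of the "policy factory" idea: an rApp distills an expensive offline policy
# into a small quantized lookup table that lower layers can index in constant time.
# offline_policy() stands in for whatever RL/simulation model was actually trained.

import numpy as np

def offline_policy(sinr_db: float, load: float) -> int:
    """Stand-in for a trained model: pick one of 4 mMIMO pairing/precoding profiles."""
    return int(np.clip((sinr_db / 10) + 2 * load, 0, 3))

# Quantize the two input dimensions into a coarse grid and tabulate the policy once, offline.
sinr_bins = np.linspace(-5, 30, 8)      # dB
load_bins = np.linspace(0.0, 1.0, 5)
lookup = np.array([[offline_policy(s, l) for l in load_bins] for s in sinr_bins],
                  dtype=np.uint8)        # 8x5 table, one byte per entry

def lookup_action(sinr_db: float, load: float) -> int:
    """What the DU/RU-side artifact actually executes: two index operations, no model."""
    i = int(np.clip(np.searchsorted(sinr_bins, sinr_db), 0, len(sinr_bins) - 1))
    j = int(np.clip(np.searchsorted(load_bins, load), 0, len(load_bins) - 1))
    return int(lookup[i, j])

print(lookup.nbytes, "bytes;", "profile", lookup_action(sinr_db=12.0, load=0.4))
```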

The Real-Time RIC is where planning gives way to fast, local action. At timescales between ten milliseconds and one second, the RT-RIC is tasked with run-time inference, traffic steering, slicing enforcement, and short-term interference management. Because the latency budget is tight, the model families that thrive here are compact, efficient, and able to balance accuracy with predictable execution. Shallow neural networks (simple feedforward models capturing non-linear patterns), recurrent models (RNNs that retain memory of past inputs), and hybrid convolutional–recurrent (CNN–RNN) models (combining spatial feature extraction with temporal sequencing) are well-suited for processing fast-evolving time series, such as traffic load or interference, delivering near-future predictions with low latency. Decision trees (rule-based classifiers that split data hierarchically) and ensemble methods (collections of weak learners, such as random forests or boosting) remain attractive because of their lightweight, deterministic execution and interpretability, making them reliable for regulatory oversight and stable actuation. Online reinforcement learning, in which an agent interacts with its environment in real time and updates its policy based on rewards or penalties, together with contextual bandits, a simplified variant that optimizes single-step decisions from observed contexts, enables adaptation in small, incremental steps while minimizing the risk of destabilization. In more complex contexts, lightweight graph neural networks (GNNs), streamlined to model relationships between entities at low computational cost, can capture the topological relationships between neighboring cells, supporting coordination in handovers or interference management while remaining efficient enough for real-time use. The RT-RIC thus embodies the point where AI policies become immediate operational decisions, measurable in KPIs within seconds. When viewed through the lens of agency, this layer becomes even more dynamic. An agentic RT-RIC could weigh competing goals, prioritize among multiple applications, and negotiate real-time conflicts without waiting for external intervention. Such agency might significantly improve efficiency and responsiveness but would also blur the boundary between optimization and autonomous control, requiring new arbitration frameworks and assurance layers.

At this level, xApps, native to the RT-RIC, execute policies derived from rApps and adapt them to live network telemetry. An xApp for traffic steering might combine a policy from the Non-RT RIC with local contextual bandits to adjust routing in the moment. Another xApp could, for example, use lightweight GNNs to coordinate interference management across adjacent cells, directly influencing DU scheduling and RU beamforming. This makes xApps the translators of long-term rApp insights into second-by-second action, bridging the predictive foresight of rApps with the deterministic constraints of the DU/RU.
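
For illustration, here is a minimal epsilon-greedy contextual-bandit sketch of the traffic-steering idea. The contexts, actions, and reward signal are invented for this example; in a real xApp they would come from E2 measurements and the rApp-provided policy.

```python
# Minimal epsilon-greedy contextual-bandit sketch for the traffic-steering xApp idea above.
# Contexts, actions, and the reward signal are hypothetical stand-ins for E2-derived inputs.

import random
from collections import defaultdict

ACTIONS = ["steer_to_macro", "steer_to_small_cell", "no_change"]

class SteeringBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(lambda: defaultdict(int))
        self.values = defaultdict(lambda: defaultdict(float))   # running mean reward per action

    def choose(self, context: str) -> str:
        if random.random() < self.epsilon:                  # explore occasionally
            return random.choice(ACTIONS)
        vals = self.values[context]
        return max(ACTIONS, key=lambda a: vals[a])          # exploit the best-known action

    def update(self, context: str, action: str, reward: float) -> None:
        n = self.counts[context][action] + 1
        self.counts[context][action] = n
        self.values[context][action] += (reward - self.values[context][action]) / n

bandit = SteeringBandit()
for _ in range(200):                                        # one decision per RT-RIC cycle
    ctx = random.choice(["edge_high_load", "center_low_load"])
    action = bandit.choose(ctx)
    # Mock reward: offloading helps loaded edge users, adds little for already-good center users.
    reward = 1.0 if (ctx == "edge_high_load" and action == "steer_to_small_cell") else 0.2
    bandit.update(ctx, action, reward)
print(dict(bandit.values["edge_high_load"]))
```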

The Centralized Unit occupies an intermediate position between near-real-time responsiveness and higher-layer mobility and bearer management. Here, the most useful models are those that can both predict and pre-position resources before bottlenecks occur. Long Short-Term Memory networks (LSTMs, recurrent models designed to capture long-range dependencies), Gated Recurrent Units (GRUs, simplified RNNs with fewer parameters), and temporal Convolutional Neural Networks (CNNs, convolution-based models adapted for sequential data) are natural fits for forecasting user trajectories, mobility patterns, and session demand, thereby enabling proactive preparation of handovers and early allocation of network slices. Constrained reinforcement learning (RL, trial-and-error learning optimized under explicit safety or policy limits) methods play an important role at the bearer level, where they must carefully balance Quality of Service (QoS) guarantees against overall resource utilization, ensuring efficiency without violating service-level requirements. At the same time, rule-based optimizers remain well-suited for more deterministic processes, such as configuring Packet Data Convergence Protocol (PDCP) and Radio Link Control (RLC) parameters, where fixed logic can deliver predictable and stable outcomes in real-time. The CU strengthens applicability by anticipating issues before they materialize and by converting intent into per-flow adjustments. If agency is introduced at this layer, it might manifest as CU-level agents negotiating mobility anchors or bearer priorities directly, without relying entirely on upstream instructions. This could increase resilience in scenarios where connectivity to higher layers is impaired. Still, it also adds complexity, as the CU would need a framework for coordinating its autonomous decisions with the broader policy environment.
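
To make the predict-and-pre-position idea tangible, the sketch below uses simple double exponential smoothing as a lightweight stand-in for an LSTM/GRU forecaster and triggers a hypothetical pre-positioning action when the projected load crosses a threshold.

```python
# Illustrative sketch of "predict and pre-position" at the CU: a lightweight forecaster
# (double exponential smoothing standing in for an LSTM/GRU) flags cells whose load
# will likely cross a threshold within the next few intervals. Values are illustrative.

def forecast(series: list[float], horizon: int, alpha: float = 0.5, beta: float = 0.3) -> float:
    """Double exponential smoothing: track level + trend, project `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

cell_load = [0.42, 0.45, 0.50, 0.54, 0.60, 0.66]    # per-interval PRB utilisation
predicted = forecast(cell_load, horizon=3)

if predicted > 0.85:                                 # hypothetical pre-positioning trigger
    print(f"Predicted load {predicted:.2f}: pre-allocate slice resources / prepare handovers")
else:
    print(f"Predicted load {predicted:.2f}: no proactive action needed")
```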

Both xApps and rApps can influence CU functions as they relate to bearer management and PDCP/RLC configuration. For example, a QoS balancing rApp might propose long-term thresholds for bearer prioritization, while a short-horizon xApp enforces them by pre-positioning slice allocations or adjusting bearer anchors in anticipation of predicted mobility. The CU thus becomes a convergence point, where rApp strategies and xApp tactics jointly shape mobility management and session stability before decisions cascade into DU scheduling.

At the very bottom of the stack, the Distributed Unit and Radio Unit function under the most stringent timing constraints, often in the realm of microseconds. Their role is to execute deterministic PHY and MAC functions, including HARQ, link adaptation, beamforming, and channel state processing. Only models that can be compiled into silicon, quantized, or otherwise guaranteed to run within strict latency budgets are viable in this layer of the Radio Access Network. Tiny Machine Learning (TinyML), Quantized Neural Networks (QNN), and lookup-table distilled models enable inference speeds compatible with microsecond-level scheduling constraints. As RU and DU components typically operate under strict latency and computational constraints, TinyML and low-bit QNNs are ideal for deploying functions such as beam selection, RF monitoring, anomaly detection, or lightweight PHY inference tasks. Deep-unfolded networks and physics-informed neural models are particularly valuable because they can replace traditional iterative solvers in equalization and channel estimation, achieving high accuracy while ensuring fixed execution times. In advanced antenna systems, neural digital predistortion and amplifier linearization enhance power efficiency and spectral containment. At the same time, sequence-based predictors can cut down channel state information (CSI) overhead and help stabilize multi-user multiple-input multiple-output (MU-MIMO) pairing. At this level, the integration of agentic AI must, in my opinion, be approached with caution. The DU and RU domains are all about execution rather than deliberation. Introducing agency here could compromise determinism. However, carefully bounded micro-agents that autonomously tune beams or adjust precoders within strict envelopes might prove valuable. The broader challenge is to reconcile the demand for predictability with the appeal of adaptive intelligence baked into hardware.

At this layer, most intelligence is “baked in” and must respect microsecond determinism timescales. Yet, rApps and xApps may still indirectly shape the DU/RU environment. The DU/RU do not run complex agentic loops themselves, but they inherit distilled intelligence from the higher layers. Micro-agents, if used, must be tightly bound. For example, an RU micro-agent may autonomously choose among two or three safe precoding matrices supplied by an xApp, but never generate them on its own.
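
The following sketch illustrates what such a tightly bounded micro-agent could look like: the RU-side logic only selects among a handful of precoding matrices that were pre-validated and supplied from above, and never synthesizes new ones. The matrices and channel snapshot are random placeholders.

```python
# Sketch of a tightly bounded micro-agent as described above: the RU-side logic only
# selects among a few precoding matrices pre-validated and supplied by an xApp,
# never synthesising new ones. Matrices and the channel snapshot are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Safe envelope handed down from the xApp: three pre-validated 4x2 precoders.
SAFE_PRECODERS = [rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
                  for _ in range(3)]
SAFE_PRECODERS = [w / np.linalg.norm(w) for w in SAFE_PRECODERS]    # normalise Tx power

def pick_precoder(channel: np.ndarray) -> int:
    """Choose the safe precoder with the highest effective channel gain (fixed, tiny loop)."""
    gains = [np.linalg.norm(channel @ w) for w in SAFE_PRECODERS]
    return int(np.argmax(gains))

h = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))  # 2 Rx x 4 Tx channel snapshot
print("selected precoder index:", pick_precoder(h))
```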

Taking all the above together, the O-RAN stack can be seen as a continuum of intelligence, moving from the policy-heavy, interpretative functions at the SMO to the deterministic, silicon-bound execution at the RU. Agentic AI has the potential to change this continuum by shifting layers from passive executors to active participants. An agentic SMO might not only validate intents but generate them. An agentic Non-RT RIC might become an autonomous planner. An agentic RT-RIC could arbitrate between conflicting goals independently. And even the CU or DU might gain micro-agents that adjust parameters locally without instruction. This greater autonomy promises efficiency and adaptability but raises profound questions about accountability, oversight, and control. If agency is allowed to propagate too deeply into the stack, the risk is that millions of daily inferences are made without transparent justification or the possibility of reversal. This is unlikely to be considered acceptable by regulators and would violate core provisions of the European Artificial Intelligence Act (EU AI Act). The main risks are a lack of adequate human oversight (Article 14), inadequate record-keeping and traceability (Article 12), failures of transparency (Article 13), and the inability to provide meaningful explanations to affected users (Article 86). Together, these gaps would undermine the broader lifecycle obligations on risk management and accountability set out in Articles 8–17. To mitigate that, openness becomes indispensable: open policies, open data schemas, model lineage, and transparent observability hooks allow agency to be exercised without undermining trust. In this way, the RAN of the future may become not only intelligent but agentic, provided that its newfound autonomy is balanced by openness, auditability, and human authority at the points that matter most. However, I suspect that reaching that point may be a much bigger challenge than developing the agentic AI framework and autonomous processes.

While the promise of AI in O-RAN is compelling, it is equally important to recognize where existing functions already perform so effectively that AI has little to add. At higher layers, such as the SMO and the Non-RT RIC, the complexity of orchestration, policy translation, and long-horizon planning naturally creates a demand for AI. These are domains where deterministic rules quickly become brittle, and where the adaptive and generative capacities of modern models unlock new value. Similarly, the RT-RIC benefits from lightweight ML approaches because traffic dynamics and interference conditions shift on timescales that rule-based heuristics often struggle to capture. As one descends closer to execution, however, the incremental value of AI begins to diminish. In the CU domain, many bearer management and PDCP/RLC functions can be enhanced by predictive models, but much of the optimization is already well supported by deterministic algorithms that operate within known bounds. The same is even more pronounced at the DU and RU levels. Here, fundamental PHY and MAC procedures such as HARQ timing, CRC checks, coding and decoding, and link-layer retransmissions are highly optimized, deterministic, and hardware-accelerated. These functions have been refined over decades of wireless research, and their performance approaches the physical and information-theoretic limits. Beamforming and precoding illustrate this well. Linear algebraic methods such as zero-forcing and MMSE are deeply entrenched, efficient, and predictable. AI and ML can sometimes enhance them at the margins by improving CSI compression, reducing feedback overhead, or stabilizing non-stationary channels, but they are unlikely to displace the core mathematical solvers that already deliver excellent performance. Link adaptation is similar. While machine learning may offer marginal gains in dynamic or noisy conditions, conventional SINR-based thresholding remains highly effective and, crucially, deterministic. It is worth remembering that simply and arbitrarily applying AI or ML functionality to an architectural element does not necessarily mean it will make a difference or even turn out to be beneficial.

This distinction becomes especially relevant when considering the implications of agentic AI. In my opinion, agency is most useful at the top of the stack, where strategy, trade-offs, and ambiguity dominate. In the SMO or Non-RT RIC, agentic systems can propose strategies, negotiate policies, or adapt scenarios in ways that humans or static systems could never match. At the RT-RIC, carefully bounded agency may improve arbitration among competing applications. But deeper in the stack, particularly at the DU and RU, agency adds little value and risks undermining determinism. At microsecond timescales, where physics rules and deadlines are absolute, autonomy may be less of an advantage and more of a liability. The most practical role of AI here is supplementary, enabling anomaly detection, parameter fine-tuning, or assisting advanced antenna systems in ways that respect strict timing constraints. This balance of promise and limitation underscores a central point: AI is not a panacea for O-RAN, nor should it be applied indiscriminately.

Figure: Comparative view of how AI transforms RAN operations — contrasting classical vendor-proprietary SON approaches, Opanga’s vendor-agnostic RAIN platform, and O-RAN implementations using xApps and rApps for energy efficiency, spectral optimization, congestion control, anomaly detection, QoE, interference management, coverage, and security.

The Table above highlights how RAN intelligence has evolved from classical vendor-specific SON functions toward open O-RAN frameworks and Opanga’s RAIN platform. While Classical RAN relied heavily on embedded algorithms and static rules, O-RAN introduces rApps and xApps to distribute intelligence across near-real-time and non-real-time control loops. Opanga’s RAIN, however, stands out as a truly AI-native and vendor-agnostic platform that is already commercially deployed at scale today. By tackling congestion, energy reduction, and intelligent spectrum on/off management without reliance on DPI (which is, anyway, a losing strategy as QUIC becomes increasingly used) or proprietary stacks, RAIN directly addresses some of the most pressing efficiency and sustainability challenges in today’s networks. It also appears straightforward for Opanga to adapt its AI engines into rApps or xApps should the Open RAN market scale substantially in the future, reinforcing its potential as one of the strongest and most practical AI platforms in the RAN domain today.

A NATIVE-AI RAN TEASER.

Native-AI in the RAN context means that artificial intelligence is not just an add-on to existing processes, but is embedded directly into the system’s architecture, protocols, and control loops. Instead of having xApps and rApps bolted on top of traditional deterministic scheduling and optimization functions, a native-AI design treats learning, inference, and adaptation as first-class primitives in the way the RAN is built and operated. This is fundamentally different from today’s RAN system designs, where AI is mostly externalized, invoked at slower timescales, and constrained by legacy interfaces. In a native-AI architecture, intent, prediction, and actuation are tightly coupled at millisecond or even microsecond resolution, creating new possibilities for spectral efficiency, user experience optimization, and autonomous orchestration. A native-AI RAN would likely require heavier hardware at the edge of the network than today’s Open (or “classical”) RAN deployments. In the current architecture, the DU and RU rely on highly optimized deterministic hardware such as FPGAs, SmartNICs, and custom ASICs to execute PHY/MAC functions at predictable latencies and with tight power budgets. AI workloads are typically concentrated higher up in the stack, in the NRT-RIC or RT-RIC, where they can run on centralized GPU or CPU clusters without overwhelming the radio units. By contrast, a native-AI design pushes inference directly into the DU and even the RU, where microsecond-scale decisions on beamforming, HARQ, and link adaptation must be made. This implies the integration of embedded accelerators, such as AI-optimized ASICs, NPUs, or small-form-factor GPUs, into radio hardware, along with larger memory footprints for real-time model execution and storage. The resulting compute demand and cooling requirements could increase power consumption substantially beyond today’s SmartNIC-based O-RAN nodes, an effect that would be multiplied across millions of cell sites worldwide should such a design be chosen. This may (should!) raise concerns regarding both CapEx and OpEx due to higher costs for silicon and more demanding site engineering for power and heat management.

Figure: A comparison of the possible differences between today’s Open RAN and the AI-Native RAN Architecture. I should point out that the AI-Native RAN architecture is my own depiction and may not reflect how it will eventually look.

A native-AI RAN promises several advantages over existing architectures. By embedding intelligence directly into the control loops, the system can achieve higher spectral efficiency through ultra-fast adaptation of beamforming, interference management, and resource allocation, going beyond the limits of deterministic algorithms. It also allows for far more fine-grained optimization of the user experience, with decisions made per device, per flow, and in real-time, enabling predictive buffering and even semantic compression without noticeable delay. Operations themselves become more autonomous, with the RAN continuously tuning and healing itself in ways that reduce the need for manual intervention. Importantly, intent expressed at the management layer can be mapped directly into execution at the radio layer, creating continuity from policy to action that is missing in today’s O-RAN framework. Native-AI designs are also better able to anticipate and respond to extreme conditions, making the system more resilient under stress. Finally, they open the door to 6G concepts such as cell-less architectures, distributed massive MIMO, and AI-native PHY functions that cannot be realized under today’s layered, deterministic designs.

At the same time, the drawbacks of the Native-AI RAN approach may also be quite substantial. Embedding AI at microsecond control loops makes it almost impossible to trace reasoning steps or provide post-hoc explainability, creating tension with regulatory requirements such as the EU AI Act and NIS2. Because AI becomes the core operating fabric, mistakes, adversarial inputs, or misaligned objectives can cascade across the system much faster than in current architectures, amplifying the scale of failures. Continuous inference close to the radio layer also risks driving up compute demand and energy consumption far beyond what today’s SmartNIC- or FPGA-based solutions can handle. There is a danger of re-introducing vendor lock-in, as AI-native stacks may not interoperate cleanly with legacy xApps and rApps, undermining the very rationale of open interfaces. Training and refining these models requires sensitive operational and user data, raising privacy and data sovereignty concerns. Finally, the speed at which native-AI RANs operate makes meaningful human oversight nearly impossible, challenging the principle of human-in-the-loop control that regulators increasingly require for critical infrastructure operation.

Perhaps not too surprisingly, NVIDIA, a founding member of the AI-RAN Alliance, is a leading advocate for AI-native RAN, with strong leadership across infrastructure innovation, collaborative development, tooling, standard-setting, and future network frameworks. Their AI-Aerial platform and broad ecosystem partnerships illustrate their pivotal role in transitioning network architectures toward deeply integrated intelligence, especially in the 6G era. The AI-Native RAN concept and the gap it opens compared to existing O-RAN and classical RAN approaches will be the subject of a follow-up article I am preparing based on my current research into this field.

WHY REGULATORY AGENCIES MAY END THE AI PARTY (BEFORE IT REALLY STARTS).

Figure: Regulatory challenges for applying AI in critical telecom infrastructure, highlighting transparency, explainability, and auditability as key oversight requirements under European Commission mandates, posing constraints on AI-driven RAN systems.

We are about to “let loose” advanced AI/ML applications and processes across all aspects of our telecommunication networks, from the core all the way through to access and out to the consumers and businesses that rely on what is today regarded as highly critical infrastructure. Used at the orchestration layers, LLMs reduce cognitive load for operators while aiming to keep decision logic transparent, explainable, and auditable. In these roles, LLMs do not replace the specialized ML models running lower in the architecture; instead, they enhance the orchestration layer by embedding reasoning and language understanding where time and resources permit. Yet it is here that one of the sharpest challenges emerges: the regulatory and policy scrutiny that inevitably follows when AI is introduced into critical infrastructure.

In the EU, the legal baseline now treats many network-embedded AI systems as high-risk by default whenever they are used as safety or operational components in the management and operation of critical digital infrastructure. This category squarely encompasses modern telecom networks. Under the EU AI Act, such systems must satisfy stringent requirements for risk management, technical documentation, transparency, logging, human oversight, robustness, and cybersecurity, and they must be prepared for conformity assessment and market surveillance. If the AI used in RAN control or orchestration cannot meet these duties, deployment can be curtailed or prohibited until compliance is demonstrated. The same regulation now also imposes obligations on general-purpose AI (foundation/LLM) providers, including additional duties when models are deemed to pose systemic risk, to enhance transparency and safety across the supply chain that may support telecom use cases. This AI-specific layer builds upon the EU’s broader critical infrastructure and cybersecurity regime. The NIS2 Directive strengthens security and incident-reporting obligations for essential entities, explicitly including digital and communications infrastructure, while promoting supply-chain due diligence. This means that operators must demonstrate how they assess and manage risks from AI components and vendors embedded in their networks. The EU’s 5G Cybersecurity Toolbox adds a risk-based, vendor-agnostic lens to supplier decisions (applied to “high-risk” vendors). Still, the logic is general: provenance alone, whether from China, the US, Israel, or any “friendly” jurisdiction, does not exempt AI/ML components from rigorous technical and governance assurances. The Cyber Resilience Act extends horizontal cybersecurity duties to “products with digital elements,” which can capture network software and AI-enabled components, linking market access to secure-by-design engineering, vulnerability handling, and update practices.

Data-protection law also bites. GDPR Article 22 places boundaries on decisions based solely on automated processing that produce legal or similarly significant effects on individuals, a genuine concern as networks increasingly mediate critical services and safety-of-life communications. Recent case law from the Court of Justice of the EU underscores a right of access to meaningful information about automated decision-making “procedures and principles,” raising the bar for explainability and auditability in any network AI that profiles or affects individuals. In short, operators must be able to show their work, not just that an AI policy improved a KPI, but how it made the call. These European guardrails are mirrored (though not identically) elsewhere. The UK Telecoms Security Act and its Code of Practice impose enforceable security measures on providers. In the US, the voluntary NIST AI Risk Management Framework has become the de facto blueprint for AI governance, emphasizing transparency, accountability, and human oversight, principles that regulators can (and do) import into sectoral supervision. None of these frameworks cares only about “who made it”. They also care about how it performs, how it fails, how it is governed, and how it can be inspected.

The AI Act’s human-oversight requirement (i.e., Article 14 in the EU Artificial Intelligence Act) exists precisely to bound such risks, ensuring operators can intervene, override, or disable when behavior diverges from safety or fundamental rights expectations. Its technical documentation and transparency obligations require traceable design choices and lifecycle records. Where these assurances cannot be demonstrated, regulators may limit or ban such deployments in critical infrastructure.

Against this backdrop, proposals to deploy autonomous AI agents deeply embedded in the RAN stack face a (much) higher bar. Autonomy risks eroding the very properties that European law demands.

  • Transparency – Reasoning steps are difficult to reconstruct: Traditional RAN algorithms are rule-based and auditable, making their logic transparent and reproducible. By contrast, modern AI models, especially deep learning and generative approaches, embed decision logic in complex weight matrices, where the precise reasoning steps cannot be reconstructed. Post-hoc explainability methods provide only approximations, not complete causal transparency. This creates tension with regulatory frameworks such as the EU AI Act, which requires technical documentation, traceability, and user-understandable logic for high-risk AI in critical infrastructure. The NIS2 Directive and GDPR Article 22 add further obligations for traceability and meaningful explanation of automated decisions. If operators cannot show why an AI system in the RAN made a given decision, compliance risks arise. The challenge is amplified with autonomous agents (i.e., Agentic AI), where decisions emerge from adaptive policies and interactions that are inherently non-deterministic. For critical infrastructure, such as telecom networks, transparency is therefore not optional but a regulatory necessity. Opaque models may face restrictions or outright bans.
  • Explainability – Decisions must be understandable: Explainability means that operators and regulators can not only observe what a model decided, but also understand why. In RAN AI, this is challenging because deep models may optimize across multiple features simultaneously, making their outputs hard to interpret. The EU AI Act requires high-risk systems to provide explanations that are “appropriate to the intended audience,” meaning engineers must be able to trace technical logic. In contrast, regulators and end-users require more accessible reasoning. Without explainability, trust in AI-driven traffic steering, slicing, or energy optimization cannot be established. A lack of clarity risks regulatory rejection and reduces operator confidence in deploying advanced AI at scale.
  • Auditability – Decisions must be verifiable: Auditability ensures that every AI-driven decision in the RAN can be logged, traced, and checked after the fact. Traditional rule-based schedulers are inherently auditable, but ML models, especially adaptive ones, require extensive logging frameworks to capture states, inputs, and outputs. The NIS2 Directive and the Cyber Resilience Act require such traceability for digital infrastructure, while the AI Act imposes additional obligations for record-keeping and post-market monitoring. Without audit trails, it becomes impossible to verify compliance or to investigate failures, outages, or discriminatory behaviors. In critical infrastructure, a lack of auditability is not just a technical gap but a regulatory showstopper, potentially leading to deployment bans.
  • Human Oversight – The challenge of real-time intervention: Both the EU AI Act and the NIS2 Directive require that high-risk AI systems remain under meaningful human oversight, with the possibility to override or disable AI-initiated actions. In the context of O-RAN, this creates a unique tension. Many RIC-driven optimizations and DU/RU control loops operate at millisecond or even microsecond timescales, where thousands or millions of inferences occur daily. Expecting a human operator to monitor, let alone intervene in real time, is technically infeasible. Instead, oversight must be implemented through policy guardrails, monitoring dashboards, fallback modes, and automated escalation procedures. The challenge is to satisfy the regulatory demand for human control without undermining the efficiency gains that AI brings. If this balance cannot be struck, regulators may judge certain autonomous functions non-compliant, slowing or blocking their deployment in critical telecom infrastructure. A minimal sketch of such a guardrail pattern follows this list.
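To make the oversight point concrete, below is a minimal, purely illustrative Python sketch (my own construction, not an O-RAN-specified or vendor API) of the guardrail pattern described above: every AI-proposed action is logged for auditability, checked against a policy envelope, and replaced by a deterministic fallback, with escalation for human review, when it falls outside that envelope.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An AI-proposed RAN action, e.g., a scheduling or energy-saving policy change."""
    action: str
    confidence: float
    kpi_impact_estimate: float  # predicted KPI change, e.g., delta throughput

# Illustrative guardrail: every AI decision is logged, bounded by policy,
# and falls back to a deterministic default when outside the envelope.
AUDIT_LOG: list[dict] = []

def apply_with_guardrails(decision: Decision, deterministic_default: str,
                          min_confidence: float = 0.9) -> str:
    record = {"proposed": decision.action, "confidence": decision.confidence}
    if decision.confidence < min_confidence or decision.kpi_impact_estimate < 0:
        record["outcome"] = f"fallback:{deterministic_default}"  # flag for human review
    else:
        record["outcome"] = f"applied:{decision.action}"
    AUDIT_LOG.append(record)  # audit trail for post-hoc inspection
    return record["outcome"]

print(apply_with_guardrails(Decision("cell_sleep", 0.65, 0.1), "keep_cell_active"))
```

In a real deployment, the envelope, the fallback policy, and the escalation path would themselves be subject to the AI Act’s documentation and logging duties.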

The upshot for telecom is clear. Even as generative and agentic AI move into SMO/Non-RT orchestration for intent translation or semantic compression, the time-scale fundamentals do not change. RT and sub-ms loops must remain deterministic, inspectable, and controllable, with human-governed, well-documented interfaces mediating any AI influence. The regulatory risk is therefore not hypothetical. It is structural. As generative AI and LLMs move closer to the orchestration and policy layers of O-RAN, their opacity and non-deterministic reasoning raise questions about compliance. While such models may provide valuable tools for intent interpretation or telemetry summarization, their integration into live networks will only be viable if accompanied by robust frameworks for explainability, monitoring, and assurance. This places a dual burden on operators and vendors: to innovate in AI-driven automation, but also to invest in governance structures that can withstand regulatory scrutiny.

In a European context, it is unlikely that any AI model will be permitted in the RAN unless it can pass the tests of explainability, auditability, and human oversight that regulators will, and indeed should, demand of functionality residing in critical infrastructure.

WRAPPING UP.

The article charts an evolution from SON-era automation to today’s AI-RAN vision, showing how O-RAN institutionalized “openness + intelligence” through a layered control stack, SMO/NRT-RIC for policy and learning, RT-RIC for fast decisions, and CU/DU/RU for deterministic execution at millisecond to microsecond timescales. It argues that LLMs belong at the top (SMO/NRT-RIC) for intent translation and semantic compression, while lightweight supervised/RL/TinyML models run the real-time loops below. “ChatGPT-like” systems (i.e., founded on human-generated context) are ill-suited to near-RT and sub-ms control. Synthetic data can stress-test rare events, but it demands extreme-value-aware statistics and validation against real holdouts to avoid misleading inference. Many low-level PHY/MAC primitives (HARQ, coding/decoding, CRC, MMSE precoding, and SINR-based link adaptation) are generally close to optimal, so AI/ML’s gains in these areas may be marginal and, at least initially, not the place to focus.

Most importantly, pushing agentic autonomy too deep into the stack is likely to collide with both physics and law. Without reversibility, logging, and explainability, deployments risk breaching the EU AI Act’s requirements for human oversight, transparency, and lifecycle accountability. The practical stance is clear. Keep RT-RIC and DU/RU loops deterministic and inspectable, confine agency to SMO/NRT-RIC under strong policy guardrails and observability, and pair innovation with governance that can withstand regulatory scrutiny.

  • AI in RAN is evolutionary, not revolutionary, from SON and Elastic RAN-style coordination to GPU-accelerated AI-RAN and the 2024 AI-RAN Alliance.
  • O-RAN’s design incorporates AI via a hierarchical approach: SMO (governance/intent), NRT-RIC (training/policy), RT-RIC (near-real-time decisions), CU (shaping/QoS/UX, etc.), and DU/RU (deterministic PHY/MAC).
  • LLMs are well-suited for SMO/NRT-RIC for intent translation and semantic compression; however, they are ill-suited for RT-RIC or DU/RU, where millisecond–to–microsecond determinism is mandatory.
  • Lightweight supervised/RL/TinyML models, not “ChatGPT-like” systems, are the practical engines for near-real-time and real-time control loops.
  • Synthetic data for rare events, generated in the NRT-RIC and SMO, is valid but carries some risk. Approaches must be validated against real holdouts and statistics that account for extremes to avoid misleading inference.
  • Many low-level PHY/MAC primitives (HARQ, coding/decoding, CRC, classical precoding/MMSE, SINR-based link adaptation) are already near-optimal. AI may only add marginal gains at the edge.
  • Regulatory risk: Deep agentic autonomy without reversibility threatens EU AI Act Article 14 (human oversight). Operators must be able to intervene/override, which, to an extent, may defeat the more aggressive pursuits of autonomous network operations.
  • Regulatory risk: Opaque/unanalyzable models undermine transparency and record-keeping duties (Articles 12–13), especially if millions of inferences lack traceable logs and rationale.
  • Regulatory risk: For systems affecting individuals or critical services, explainability obligations (including GDPR Article 22 context) and AI Act lifecycle controls (Articles 8–17) require audit trails, documentation, and post-market monitoring, as well as curtailment of non-compliant agentic behavior risks.
  • Practical compliance stance: It may make sense to keep RT-RIC and DU/RU loops deterministic and inspectable, and constrain agency to SMO/NRT-RIC with strong policy guardrails, observability, and fallback modes.

ABBREVIATION LIST.

  • 3GPP – 3rd Generation Partnership Project.
  • A1 – O-RAN Interface between Non-RT RIC and RT-RIC.
  • AAS – Active Antenna Systems.
  • AISG – Antenna Interface Standards Group.
  • AI – Artificial Intelligence.
  • AI-RAN – Artificial Intelligence for Radio Access Networks.
  • AI-Native RAN – Radio Access Network with AI embedded into architecture, protocols, and control loops.
  • ASIC – Application-Specific Integrated Circuit.
  • CapEx – Capital Expenditure.
  • CPU – Central Processing Unit.
  • C-RAN – Cloud Radio Access Network.
  • CRC – Cyclic Redundancy Check.
  • CU – Centralized Unit.
  • DU – Distributed Unit.
  • E2 – O-RAN Interface between RT-RIC and CU/DU.
  • eCPRI – Enhanced Common Public Radio Interface.
  • EU – European Union.
  • FCAPS – Fault, Configuration, Accounting, Performance, Security.
  • FPGA – Field-Programmable Gate Array.
  • F1 – 3GPP-defined interface split between CU and DU.
  • GDPR – General Data Protection Regulation.
  • GPU – Graphics Processing Unit.
  • GRU – Gated Recurrent Unit.
  • HARQ – Hybrid Automatic Repeat Request.
  • KPI – Key Performance Indicator.
  • L1/L2 – Layer 1 / Layer 2 (in the OSI stack, PHY and MAC).
  • LLM – Large Language Model.
  • LSTM – Long Short-Term Memory.
  • MAC – Medium Access Control.
  • MANO – Management and Orchestration.
  • MIMO – Multiple Input, Multiple Output.
  • ML – Machine Learning.
  • MMSE – Minimum Mean Square Error.
  • NFVI – Network Functions Virtualization Infrastructure.
  • NIS2 – EU Directive on measures for a high common level of cybersecurity across the Union.
  • NPU – Neural Processing Unit.
  • NRT-RIC – Non-Real-Time RAN Intelligent Controller.
  • O1 – O-RAN Operations and Management Interface to network elements.
  • O2 – O-RAN Interface to cloud infrastructure (NFVI and MANO).
  • O-RAN – Open Radio Access Network.
  • OpEx – Operating Expenditure.
  • PDCP – Packet Data Convergence Protocol.
  • PHY – Physical Layer.
  • QoS – Quality of Service.
  • RAN – Radio Access Network.
  • rApp – Non-Real-Time RIC Application.
  • RET – Remote Electrical Tilt.
  • RIC – RAN Intelligent Controller.
  • RLC – Radio Link Control.
  • R-NIB – Radio Network Information Base.
  • RT-RIC – Real-Time RAN Intelligent Controller.
  • RU – Radio Unit.
  • SDAP – Service Data Adaptation Protocol.
  • SINR – Signal-to-Interference-plus-Noise Ratio.
  • SmartNIC – Smart Network Interface Card.
  • SMO – Service Management and Orchestration.
  • SON – Self-Organizing Network.
  • T-Labs – Deutsche Telekom Laboratories.
  • TTI – Transmission Time Interval.
  • UE – User Equipment.
  • US – United States.
  • WG2 – O-RAN Working Group 2 (Non-RT RIC & A1 interface).
  • WG3 – O-RAN Working Group 3 (RT-RIC & E2 Interface).
  • xApp – Real-Time RIC Application.

ACKNOWLEDGEMENT.

I want to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

FOLLOW-UP READING.

  1. Kim Kyllesbech Larsen (May 2023), “Conversing with the Future: An interview with an AI … Thoughts on our reliance on and trust in generative AI.” An introduction to generative models and large language models.
  2. Goodfellow, I., Bengio, Y., Courville, A. (2016), Deep Learning (Adaptive Computation and Machine Learning series). The MIT Press. Kindle Edition.
  3. Collins, S. T., & Callahan, C. W. (2009). Cultural differences in systems engineering: What they are, what they aren’t, and how to measure them. 19th Annual International Symposium of the International Council on Systems Engineering, INCOSE 2009, 2.
  4. Herzog, J. (2015). Software Architecture in Practice, Third Edition, Written by Len Bass, Paul Clements, and Rick Kazman. ACM SIGSOFT Software Engineering Notes, 40(1).
  5. O-RAN Alliance (October 2018). “O-RAN: Towards an Open and Smart RAN”.
  6. TS 103 982 – V8.0.0. (2024) – Publicly Available Specification (PAS); O-RAN Architecture Description (O-RAN.WG1.OAD-R003-v08.00).
  7. Lee, H., Cha, J., Kwon, D., Jeong, M., & Park, I. (2020, December 1). “Hosting AI/ML Workflows on O-RAN RIC Platform”. 2020 IEEE Globecom Workshops, GC Wkshps 2020 – Proceedings.
  8. TS 103 983 – V3.1.0. (2024) – Publicly Available Specification (PAS); A1 interface: General Aspects and Principles (O-RAN.WG2.A1GAP-R003-v03.01).
  9. TS 104 038 – V4.1.0. (2024) – Publicly Available Specification (PAS); E2 interface: General Aspects and Principles (O-RAN.WG3.E2GAP-R003-v04.01).
  10. TS 104 039 – V4.0.0. (2024) – Publicly Available Specification (PAS); E2 interface: Application Protocol (O-RAN.WG3.E2AP-R003-v04.00).
  11. TS 104 040 – V4.0.0. (2024) – Publicly Available Specification (PAS); E2 interface: Service Model (O-RAN.WG3.E2SM-R003-v04.00).
  12. O-RAN Work Group 3. (2025). Near-Real-time RAN Intelligent Controller E2 Service Model (E2SM) KPM Technical Specification.
  13. Bao, L., Yun, S., Lee, J., & Quek, T. Q. S. (2025). LLM-hRIC: LLM-empowered Hierarchical RAN Intelligent Control for O-RAN.
  14. Tang, Y., Srinivasan, U. C., Scott, B. J., Umealor, O., Kevogo, D., & Guo, W. (2025). End-to-End Edge AI Service Provisioning Framework in 6G ORAN.
  15. Gajjar, P., & Shah, V. K. (n.d.). ORANSight-2.0: Foundational LLMs for O-RAN.
  16. Elkael, M., D’Oro, S., Bonati, L., Polese, M., Lee, Y., Furueda, K., & Melodia, T. (2025). AgentRAN: An Agentic AI Architecture for Autonomous Control of Open 6G Networks.
  17. Gu, J., Zhang, X., & Wang, G. (2025). Beyond the Norm: A Survey of Synthetic Data Generation for Rare Events.
  18. Michael Peel (July 2024), The problem of ‘model collapse’: how a lack of human data limits AI progress, Financial Times.
  19. Decruyenaere, A., Dehaene, H., Rabaey, P., Polet, C., Decruyenaere, J., Demeester, T., & Vansteelandt, S. (2025). Debiasing Synthetic Data Generated by Deep Generative Models.
  20. Decruyenaere, A., Dehaene, H., Rabaey, P., Polet, C., Decruyenaere, J., Vansteelandt, S., & Demeester, T. (2024). The Real Deal Behind the Artificial Appeal: Inferential Utility of Tabular Synthetic Data.
  21. Vishwakarma, R., Modi, S. D., & Seshagiri, V. (2025). Statistical Guarantees in Synthetic Data through Conformal Adversarial Generation.
  22. Banbury, C. R., Reddi, V. J., Lam, M., Fu, W., Fazel, A., Holleman, J., Huang, X., Hurtado, R., Kanter, D., Lokhmotov, A., Patterson, D., Pau, D., Seo, J., Sieracki, J., Thakker, U., Verhelst, M., & Yadav, P. (2021). Benchmarking TinyML Systems: Challenges and Direction.
  23. Capogrosso, L., Cunico, F., Cheng, D. S., Fummi, F., & Cristani, M. (2023). A Machine Learning-oriented Survey on Tiny Machine Learning.
  24. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., & Bengio, Y. (2016). Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations.
  25. AI Act. The AI Act is the first-ever comprehensive legal framework on AI, addressing the risks associated with AI, and is alleged to position Europe to play a leading role globally (as claimed by the European Commission).
  26. The EU Artificial Intelligence Act. For matters related explicitly to Critical Infrastructure, see in particular Annex III: High-Risk AI Systems Referred to in Article 6(2), Recital 55 and Article 6: Classification Rules for High-Risk AI Systems. I also recommend taking a look at “Article 14: Human Oversight”.
  27. European Commission (January 2020), “Cybersecurity of 5G networks – EU Toolbox of risk mitigating measures”.
  28. European Commission (June 2023), “Commission announces next steps on cybersecurity of 5G networks in complement to latest progress report by Member States”.
  29. European Commission, “NIS2 Directive: securing network and information systems”.
  30. Council of Europe (October 2024), “Cyber resilience act: Council adopts new law on security requirements for digital products.”.
  31. GDPR Article 22, “Automated individual decision-making, including profiling”. See also the following article from Crowell & Moring LLP: “Europe’s Highest Court Compels Disclosure of Automated Decision-Making “Procedures and Principles” In Data Access Request Case”.

Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?

THE POST-TOWER ERA – A FAIRYTALE.

From the bustling streets of New York to the remote highlands of Mongolia, the skyline had visibly changed. Where steel towers and antennas once dominated now stood open spaces and restored natural ecosystems. Forests reclaimed their natural habitats, and birds nested in trees undisturbed by tall rural cellular towers. This transformation was not sudden but resulted from decades of progress in satellite technology, growing demand for ubiquitous connectivity, an increasingly urgent need to address the environmental footprint of traditional telecom infrastructures, and the economic need to dramatically reduce operational expenses tied up in tower infrastructure. By the time the last cell site was decommissioned, society stood at the cusp of a new age of connectivity by LEO satellites covering all of Earth.

The total annual savings worldwide from making terrestrial cellular towers obsolete are estimated to amount to at least 300 billion euros, and it is expected that moving cellular access to “heaven” will avoid more than 150 million metric tons of CO2 emissions annually. The retirement of all terrestrial cellular networks worldwide has been like eliminating the entire carbon footprint of The Netherlands or Malaysia, and it has led to a dramatic reduction in demand for the sustainable green energy sources that previously were used to power the global cellular infrastructure.

INTRODUCTION.

Recent postings and a substantial part of the commentary give the impression that we are heading towards a post-tower era where Elon Musk’s Low Earth Orbit (LEO) satellite Starlink network (together with competing options, e.g., AST SpaceMobile and Lynk, and no, I do not see Amazon’s Project Kuiper in this space) will make terrestrially-based tower infrastructure and earth-bound cellular services obsolete.

T-Mobile USA is launching its Direct-to-Cell (D2C) service via SpaceX’s Starlink LEO satellite network. The T-Mobile service is designed to work with existing LTE-compatible smartphones, allowing users to connect to Starlink satellites without needing specialized hardware or smartphone applications.

Since the announcement, posts and media coverage have declared the imminent death of the terrestrial cellular network. When it is pointed out that this may be a premature death sentence for an industry, its telecom operators, and their existing cellular mobile networks, it is also not uncommon to be told off as being too pessimistic and an unbeliever in Musk’s genius vision. Musk has on occasion made it clear that the Starlink D2C service is aimed at texts and voice calls in remote and rural areas, and to be honest, the D2C service currently hinges on 2×5 MHz in T-Mobile’s PCS band, adding constraints to the “broadbandedness” of the service. The fact that the service doesn’t match the best of T-Mobile US’s 5G network quality (e.g., 205+ Mbps downlink) or even come near its 4G speeds should really not bother anyone, as the value of the D2C service is that it is available in remote and rural areas with little to no terrestrial cellular coverage and that you can use your regular cellular device with no need for a costly satellite service and satphone (e.g., Iridium, Thuraya, Globalstar).

While I don’t expect to (or even want to) change people’s beliefs, I do think it would be great to contribute to more knowledge and insights based on facts about what is possible with low-earth orbiting satellites as a terrestrial substitute and what is uninformed or misguided opinion.

The rise of LEO satellites has sparked discussions about the potential obsolescence of terrestrial cellular networks. With advancements in satellite technology and increasing partnerships, such as T-Mobile’s collaboration with SpaceX’s Starlink, proponents envision a future where towers are replaced by ubiquitous connectivity from the heavens. However, the feasibility of LEO satellites achieving service parity with terrestrial networks raises significant technical, economic, and regulatory questions. This article explores the challenges and possibilities of LEO Direct-to-Cell (D2C) networks, shedding light on whether they can genuinely replace ground-based cellular infrastructure or will remain a complementary technology for specific use cases.

WHY DISTANCE MATTERS.

The distance between you (your cellular device) and the base station’s antenna determines your expected service experience in cellular and wireless networks. In general, the farther you are from the base station that serves you, the poorer your connection quality and performance will be, all else being equal. As the distance increases, signal weakening (i.e., path loss) grows rapidly, with the square of the distance in free space, reducing signal quality and making it harder for devices to maintain reliable communication. Closer proximity allows for stronger, faster, and more stable connections, while longer distances require more power and advanced technologies like beamforming or repeaters to compensate.

Physics tells us that a signal loses strength (power) with the square of the distance from its source (either the base station transmitter or the consumer device). This applies universally to all electromagnetic waves traveling in free space. Free space means that there are no obstacles, reflections, or scattering. No terrain features, buildings, or atmospheric conditions interfere with the propagating signal.

So, what matters to the Free Space Path Loss (FSPL), i.e., the signal loss over a given distance in free space?

  • The signal strength reduces (the path loss increases) with the square of the distance (d) from its source.
  • Path loss increases (i.e., signal strength decreases) with the (square of the) frequency (f). The higher the frequency, the higher the path loss at a given distance from the signal source.
  • A larger transmit antenna aperture reduces the path loss by focusing the transmitted signal (energy) more efficiently. An antenna aperture is an antenna’s “effective area” that captures or transmits electromagnetic waves. It depends directly on antenna gain and inversely on the square of the signal frequency (i.e., higher frequency → smaller aperture).
  • Higher receiver gain will also reduce the path loss.

$PL_{FS} \; = \; \left( \frac{4 \pi}{c} \right)^2 (d \; f)^2 \; \propto d^2 \; f^2$

$$FSPL_{dB} \; = 10 \; Log_{10} (PL_{FS}) \; = \; 20 \; Log_{10}(d) \; + \; 20 \; Log_{10}(f) \; + \; constant$$

The above equations show a strong dependency on distance; the farther away, the larger the signal loss, and the higher the frequency, the larger the signal loss. Relaxing some of the assumptions behind the above relationship gives us the following:

$FSPL_{dB}^{rs} \; = \; 20 \; Log_{10}(d) \; - \; 10 \; Log_{10}(A_t^{eff}) \; - \; 10 \; Log_{10}(G_{r}) \; + \; constant$

The last of the above equations introduces the transmitter’s effective antenna aperture (\(A_t^{eff}\)) and the receiver’s gain (\(G_r\)), telling us that larger apertures reduce path loss as they focus the transmitted energy more efficiently and that higher receiver gain likewise reduces the path loss (i.e., “they hear better”).

It is worth remembering that the transmitter antenna aperture is directly tied to the transmitter gain ($G_t$) when the frequency (f) has been fixed. We have

$A_t^{eff} \; = \; \frac{c^2}{4\pi} \; \frac{1}{f^2} \; G_t \; = \; 0.000585 \; m^2 \; G_t \;$ @ f = 3.5 GHz.

From the above, as an example, it is straightforward to see that the relative path loss difference between the two distances of 550 km (e.g., the typical altitude of an LEO satellite) and 2.5 km (a typical terrestrial cellular coverage range) is

$\frac{PL_{FS}(550 km)}{PL_{FS}(2.5 km)} \; = \; \left( \frac {550}{2.5}\right)^2 \; = \; 220^2 \; \approx \; 50$ thousand. So if all else was equal (it isn’t, btw!), we would expect that the signal loss at a distance of 550 km would be 50 thousand times higher than at 2.5 km. Or, in the electrical engineer’s language, at a distance of 550 km, the loss would be 47 dB higher than at 2.5 km.
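For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the FSPL formula above (my own illustration; note that the frequency cancels out of the distance comparison and is only needed for the absolute FSPL values):

```python
import math

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light (m/s)
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

f = 3.5e9                                  # example carrier frequency, 3.5 GHz
leo_db = fspl_db(550e3, f)                 # LEO satellite at 550 km altitude
cell_db = fspl_db(2.5e3, f)                # terrestrial cell site at 2.5 km

print(f"FSPL @ 550 km: {leo_db:.1f} dB")   # ≈ 158 dB at 3.5 GHz
print(f"FSPL @ 2.5 km: {cell_db:.1f} dB")  # ≈ 111 dB at 3.5 GHz
print(f"Distance penalty: {leo_db - cell_db:.1f} dB")  # ≈ 46.8 dB, i.e., ~50,000x
```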

The figure illustrates the difference between (a) terrestrial cellular and (b) satellite coverage. A terrestrial cellular signal typically covers a radius of 0.5 to 5 km. In contrast, a LEO satellite signal travels a substantial distance to reach Earth (e.g., a Starlink satellite orbits at an altitude of ca. 550 km). While the terrestrial signal propagates through the many obstacles it meets on its earthly path, the satellite signal’s propagation path would typically be free-space-like (i.e., no obstacles) until it penetrates buildings or other objects to reach consumer devices. Historically, most satellite-to-Earth communication has relied on outdoor ground stations or dishes, where the outdoor antenna on Earth provides LoS to the satellite and will also compensate somewhat for the signal loss due to the distance to the satellite.

Let’s compare a terrestrial 5G 3.5 GHz advanced antenna system (AAS) 2.5 km from a receiver with a LEO satellite system at an altitude of 550 km. Note that I could have chosen a lower frequency, e.g., 800 MHz or the PCS 1900 band. While it would give me some advantages regarding path loss (i.e., $FSPL \; \propto \; f^2$), the available bandwidth is rather smallish and insufficient for state-of-the-art 5G services (imo!). From a free-space path loss perspective, independently of frequency, we need to overcome an almost 50 thousand times relative difference in distance squared (ca. 47 dB difference) in favor of the terrestrial system. In this comparison, it should be understood that the terrestrial and the satellite systems use the same carrier frequency (otherwise, one should account for the difference in frequency), and the only difference that matters (for the FSPL) is the difference in distance to the receiver.

Suppose I require that my satellite system has the same signal loss in terms of FSPL as my terrestrial system to aim at a comparable quality of service level. In that case, I have several options in terms of satellite enhancements. I could increase transmit power, although it would imply that I need a transmit power 47 dB higher than the terrestrial system, or approximately 48 kW, which is likely impractical for the satellite due to power limitations. Compare this with the current Starlink transmit power of approximately 32 W (45 dBm), ca. 1,500 times lower. Alternatively, I could (in theory!) increase my satellite antenna aperture, leading to a satellite antenna with a diameter of ca. 250 meters, which is enormous compared to current satellite antennas (e.g., Starlink’s ca. 0.05 m² aperture for a single antenna and a total area in the order of 1.6 m² for the Ku/Ka bands). Finally, I could (super theoretically) also massively improve my consumer device’s (e.g., smartphone’s) receive gain, by 47 dB from today’s range of -2 dBi to +5 dBi. Achieving 46 dBi gain in a smartphone receiver seems unrealistic due to size, power, and integration constraints. As the target of LEO satellite direct-to-cell services is to support commercially available cellular devices used in terrestrial networks, only the satellite specifications can be optimized.
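To see where a number like the roughly 48 kW comes from, the dB-to-linear conversion is a one-liner; in the sketch below the ~1 W terrestrial reference power is my own assumption, chosen only to illustrate how a figure of that magnitude arises:

```python
deficit_db = 47          # distance penalty derived from the FSPL comparison above
terrestrial_ref_w = 1.0  # assumed terrestrial reference transmit power (illustrative)
starlink_tx_w = 32.0     # current Starlink transmit power quoted in the text

required_w = terrestrial_ref_w * 10 ** (deficit_db / 10)
print(f"Satellite Tx power needed for parity: {required_w / 1e3:.0f} kW")  # ≈ 50 kW
print(f"Relative to Starlink's 32 W: {required_w / starlink_tx_w:.0f}x")   # ≈ 1,566x, on the order of 1,500x
```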

Based on a simple free-space approach, it appears unreasonable to expect that an LEO satellite communication system can provide 5G services at parity with a terrestrial cellular network to normal, unmodified 5G consumer devices. The satellite system’s requirements for parity with a terrestrial communications system are impractical (but not impossible) and, if pursued, would significantly drive up design complexity and cost, likely making such a system highly uneconomical.

At this point, you should ask yourself whether it is reasonable to treat a terrestrial cellular system’s signal as propagating in a “free-space”-like environment, where obstacles, reflections, and scattering are ignored. Is it really okay to presume that terrain features, buildings, or atmospheric conditions do not interfere with the propagation of the terrestrial cellular signal? Of course, the answer should be that it is not okay to assume that. With this in mind, let’s see whether it matters much compared to the LEO satellite path loss.

TERRESTRIAL CELLULAR PROPAGATION IS NOT HAPPENING IN FREE SPACE, AND NEITHER IS A SATELLITE’S.

The Free-Space Path Loss (FSPL) formula assumes ideal conditions where signals propagate in free space without interference, blockage, or degradation, beyond what naturally results from traveling a given distance. However, as we all experience daily, real-world environments introduce additional factors such as obstructions, multipath effects, clutter loss, and environmental conditions, necessitating corrections to the FSPL approach. Moving from one room of our house to another can easily change the cellular quality and our experience (e.g., dropped calls, poorer voice quality, lower speed, changing from using 5G to 4G or even to 2G, no coverage at all). Driving through a city may also result in ups and downs with respect to the cellular quality we experience. Some of these effects are summarized below.

Urban environments typically introduce the highest additional losses due to dense buildings, narrow streets, and urban canyons, which significantly obstruct and scatter signals. For example, the Okumura-Hata Urban Model accounts for such obstructions and adds substantial losses to the FSPL, averaging around 30–50 dB, depending on the density and height of buildings.

Suburban environments, on the other hand, are less obstructed than urban areas but still experience moderate clutter losses from trees, houses, and other features. In these areas, corrections based on the Okumura-Hata Suburban Model add approximately 10–20 dB to the FSPL, reflecting the moderate level of signal attenuation caused by vegetation and scattered structures.

Rural environments have the least obstructions, resulting in the lowest additional loss. Corrections based on the Okumura-Hata Rural Model typically add around 5–10 dB to the FSPL. These areas benefit from open landscapes with minimal obstructions, making them ideal for long-range signal propagation.

Non-line-of-sight (NLOS) conditions further increase the path loss, as signals must diffract or scatter to reach the receiver. This effect adds 10–20 dB in suburban and rural areas and 20–40 dB in urban environments, where obstacles are more frequent and severe. Similarly, weather conditions such as rain and foliage contribute to signal attenuation, with rain adding up to 1–5 dB/km at higher frequencies (above 10 GHz) and dense foliage introducing an extra 5–15 dB of loss.

The corrections for these factors can be incorporated into the FSPL formula to provide a more realistic estimation of signal attenuation. By applying these corrections, the FSPL formula can reflect the conditions encountered in terrestrial communication systems across different environments.
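A simple way to apply such corrections is to add environment-dependent terms on top of the free-space value, as in the Python sketch below; the dB numbers are rough mid-points of the indicative ranges quoted above and are illustrative placeholders, not calibrated Okumura-Hata coefficients:

```python
# Indicative mid-range corrections (dB) from the ranges discussed above;
# illustrative placeholders only, not calibrated Okumura-Hata terms.
CLUTTER_DB = {"urban": 40, "suburban": 15, "rural": 8}
NLOS_DB    = {"urban": 30, "suburban": 15, "rural": 15}

def corrected_path_loss_db(free_space_db: float, environment: str,
                           nlos: bool = False, foliage_db: float = 0.0) -> float:
    """Free-space path loss plus indicative clutter, NLOS, and foliage corrections."""
    loss = free_space_db + CLUTTER_DB[environment]
    if nlos:
        loss += NLOS_DB[environment]
    return loss + foliage_db

# Example: a 2.5 km urban NLOS link at 3.5 GHz (FSPL ≈ 111 dB at that distance)
print(corrected_path_loss_db(111.3, "urban", nlos=True))  # ≈ 181 dB vs ~111 dB in free space
```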

The figure above illustrates the differences and similarities concerning the coverage environment for (a) terrestrial and (b) satellite communication systems. The terrestrial signal environment, in most instances, results in loss of the signal as it propagates through the terrestrial environment due to vegetation, terrain variations, urban topology or infrastructure, and weather; ultimately, as the signal moves from the outdoor to the indoor environment, it is reduced further as it penetrates, for example, coated windows and outer and inner walls. The combination of distance, obstacles, and material penetration leads to a cumulative reduction in signal strength as the signal propagates through the terrestrial environment. For the satellite, as illustrated in (b), a substantial amount of signal is lost due to the vast distance it has to travel before reaching the consumer. If no outdoor antenna connects with the satellite signal, then the satellite signal will be further reduced as it penetrates roofs, multiple ceilings, multiple floors, and walls.

It is often assumed that a satellite system has a line of sight (LoS) without environmental obstructions in its signal propagation (besides atmospheric ones). The reasoning is not unreasonable, as the satellite is directly above the consumers of its services, and it is of course a correct approach when the consumer has an outdoor satellite receiver (e.g., a dish) in direct LoS with the satellite. Moreover, historically, most satellite-to-Earth communication has relied on outdoor ground stations or outdoor dishes (e.g., placed on roofs or another suitable location), where the outdoor antenna on Earth provides LoS to the satellite’s antenna, also compensating somewhat for the signal loss due to the distance to the satellite.

When considering a satellite direct-to-cell device, we no longer have the luxury of a satellite-optimized advanced Earth-based outdoor antenna to facilitate the communications between the satellite and the consumer device. The satellite signal has to close the connection with a standard cellular device (e.g., smartphone, tablet, …), just like the terrestrial cellular network would have to do.

However, 80% or more of our mobile cellular traffic happens indoors, in our homes, workplaces, and public places. If a satellite system had to replace existing mobile network services, it would also have to provide a service quality similar to what consumers get from the terrestrial cellular network. As shown in the above figure, this involves urban areas where the satellite signal will likely pass through a roof and multiple floors before reaching a consumer. Depending on housing density, buildings (shadowing) may block the satellite signal, resulting in substantial service degradation for consumers suffering from such effects. Even if the satellite signal did not face the same challenges as a terrestrial cellular signal, such as vegetation, terrain variations, and the horizontal dimension of urban topology (e.g., outer & inner walls, coated windows, …), the satellite signal would still have to overcome the vertical dimension of urban topologies (e.g., roofs, ceilings, floors, etc.) to connect to consumers’ cellular devices.

For terrestrial cellular services, the cellular network’s signal integrity will (always) have a considerable advantage over the satellite signal because of the proximity to the consumer’s cellular device. With respect to distance alone, an LEO satellite at an altitude of 550 km will have to overcome a 50-thousand-fold (about 47 dB) higher path loss compared to a cellular base station antenna 2.5 km away. Overcoming that path loss penalty adds considerable challenges to the antenna design, which would seem highly challenging to meet and far from what is possible with today’s technology (and economy).

CHALLENGES SUMMARIZED.

Achieving parity between a Low Earth Orbit (LEO) satellite providing Direct-to-Cell (D2C) services and a terrestrial 5G network involves overcoming significant technical challenges. The disparity arises from fundamental differences in these systems’ environments, particularly in free-space path loss, penetration loss, and power delivery. Terrestrial networks benefit from closer proximity to the consumer, higher antenna density, and lower propagation losses. In contrast, LEO satellites must address far more significant free-space path losses due to the large distances involved and the additional challenges of transmitting signals through the atmosphere and into buildings.

The D2C challenges for LEO satellites are increasingly severe at higher frequencies, such as 3.5 GHz and above. As we have seen above, the free-space path loss increases with the square of the frequency, and penetration losses through common building materials, such as walls and floors, are significantly higher. For an LEO satellite system to achieve indoor parity with terrestrial 5G services at this frequency, it would need to achieve extraordinary levels of effective isotropic radiated power (EIRP), around 65 dBW, and narrow beamwidths of approximately 0.5° to concentrate power on specific service areas. This would require very high onboard power outputs, exceeding 1 kW, and large antenna apertures, around 2 m in diameter, to achieve gains near 55 dBi. These requirements place considerable demands on satellite design, increasing mass, complexity, and cost. Despite these optimizations, indoor service parity at 3.5 GHz remains challenging due to persistent penetration losses of around 20 dB, making this frequency better suited for outdoor or line-of-sight applications.

Achieving a stable beam with the small widths required for a LEO satellite to provide high-performance Direct-to-Cell (D2C) services presents significant challenges. Narrow beam widths, on the order of 0.5° to 1°, are essential to effectively focus the satellite’s power and overcome the high free-space path loss. However, maintaining such precise beams demands advanced satellite antenna technologies, such as high-gain phased arrays or large deployable apertures, which introduce design, manufacturing, and deployment complexities. Moreover, the satellite must continuously track rapidly moving targets on Earth as it orbits at around 7.8 km/s. This requires highly accurate and fast beam-steering systems, often using phased arrays with electronic beamforming, to compensate for the relative motion between the satellite and the consumer. Any misalignment in the beam can result in significant signal degradation or complete loss of service. Additionally, ensuring stable beams under variable conditions, such as atmospheric distortion, satellite vibrations, and thermal expansion in space, adds further layers of technical complexity. These requirements increase the system’s power consumption and cost and impose stringent constraints on satellite design, making it a critical challenge to achieve reliable and efficient D2C connectivity.

As the operating frequency decreases, the specifications for achieving parity become less stringent. At 1.8 GHz, the free-space path loss and penetration losses are lower, reducing the signal deficit. For a LEO satellite operating at this frequency, a 2.5 m² aperture (1.8 m diameter) antenna and an onboard power output of around 800 W would suffice to deliver EIRP near 60 dBW, bringing outdoor performance close to terrestrial equivalency. Indoor parity, while more achievable than at 3.5 GHz, would still face challenges due to penetration losses of approximately 15 dB. However, the balance between the reduced propagation losses and achievable satellite optimizations makes 1.8 GHz a more practical compromise for mixed indoor and outdoor coverage.

At 800 MHz, the frequency-dependent losses are significantly reduced, making it the most feasible option for LEO satellite systems to achieve parity with terrestrial 5G networks. The free-space path loss decreases further, and penetration losses into buildings are reduced to approximately 10 dB, comparable to what terrestrial systems experience. These characteristics mean that the required specifications for the satellite system are notably relaxed. A 1.5 m² aperture (1.4 m diameter) antenna, combined with a power output of 400 W, would achieve sufficient gain and EIRP (~55 dBW) to deliver robust outdoor coverage and acceptable indoor service quality. Lower frequencies also mitigate the need for extreme beamwidth narrowing, allowing for more flexible service deployment.

Most consumers’ cellular consumption happens indoors. Compared to an LEO satellite solution, these consumers are typically better served by existing 5G cellular broadband networks. When considering direct-to-normal-cellular-device services, it would not be practical for an LEO satellite network, even an extensive one, to replace existing 5G terrestrial cellular networks and the services they support today.

This does not mean that LEO satellites cannot be of great utility when connecting to an outdoor Earth-based consumer dish, as is already evident in many remote, rural, and suburban places. The summary table above also shows that LEO satellite D2C services are feasible, without overly challenging modifications, at the lower cellular frequency ranges between 600 MHz and 1800 MHz, at service levels close to the terrestrial systems, at least in rural areas and for outdoor services in general. In indoor situations, the LEO satellite D2C signal is more likely to be compromised due to roof and multiple-floor penetration scenarios to which a terrestrial signal may be less exposed.

WHAT GOES DOWN MUST COME UP.

LEO satellite services that connect directly to unmodified mobile cellular devices have us all too focused on the downlink path from the satellite to the device. It seems easy to forget that unless you deliver a broadcast service, we also need the unmodified cellular device to communicate meaningfully with the LEO satellite in the other direction. The challenge for an unmodified cellular device (e.g., smartphone, tablet, etc.) to receive the satellite D2C signal has been explained extensively in the previous section. In the satellite downlink-to-device scenario, we can optimize the design specifications of the LEO satellite to overcome some (or most, depending on the frequency) of the challenges posed by the satellite’s high altitude (compared to a terrestrial base station’s distance to the consumer device). In the device-to-satellite uplink direction, we have little to no flexibility unless we start changing the specifications of the terrestrial device portfolio. Suppose we change the specifications for consumer devices to communicate better with satellites. In that case, we also change the premise and economics of the (wrong) idea that LEO satellites should be able to completely replace terrestrial cellular networks at service parity with those terrestrial cellular networks.

Achieving uplink communication from a standard cellular device to an LEO satellite poses significant challenges, especially when attempting to match the performance of a terrestrial 5G network. Cellular devices are designed with limited transmission power, typically in the range of 23–30 dBm (0.2–1 watt), sufficient for short-range communication with terrestrial base stations. However, when the receiving station is a satellite orbiting between 550 and 1,200 kilometers, the transmitted signal encounters substantial free-space path loss. The satellite must, therefore, be capable of detecting and processing extremely weak signals, often below -120 dBm, to maintain a reliable connection.

The free-space path loss in the uplink direction is comparable to that in the downlink, but the challenges are compounded by the cellular device’s limitations. At higher frequencies, such as 3.5 GHz, path loss can exceed 155 dB, while at 1.8 GHz and 800 MHz, it reduces to approximately 149.6 dB and 143.6 dB, respectively. Lower frequencies favor uplink communication because they experience less path loss, enabling better signal propagation over large distances. However, cellular devices typically use omnidirectional antennas with very low gain (0–2 dBi), poorly suited for long-distance communication, placing even greater demands on the satellite’s receiving capabilities.

The satellite must compensate for these limitations with highly sensitive receivers and high-gain antennas. Achieving sufficient antenna gain requires large apertures, often exceeding 4 meters in diameter for 800 MHz or 2 meters for 3.5 GHz, increasing the satellite’s size, weight, and complexity. Phased-array antennas or deployable reflectors are often used to achieve the required gain. Still, their implementation is constrained by the physical limitations and costs of launching such systems into orbit. Additionally, the satellite’s receiver must have an exceptionally low noise figure, typically in the range of 1–3 dB, to minimize internal noise and allow the detection of weak uplink signals.
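A back-of-the-envelope uplink budget illustrates why the satellite’s receive chain carries the burden; the values below are illustrative assumptions drawn from the ranges discussed in this section, not a validated design:

```python
# Back-of-the-envelope uplink budget; illustrative assumptions only.
ue_tx_dbm      = 23      # typical handset transmit power (~0.2 W)
ue_antenna_dbi = 0       # omnidirectional handset antenna, ~0 dBi
fspl_db        = 149.6   # ~1.8 GHz over the LEO distance, as quoted above

isotropic_rx_dbm = ue_tx_dbm + ue_antenna_dbi - fspl_db
print(f"At the satellite, before any receive gain: {isotropic_rx_dbm:.1f} dBm")  # ≈ -126.6 dBm

# Tens of dB of satellite-side antenna gain (plus a low receiver noise figure)
# are needed to lift this weak signal to something the receiver can decode.
for sat_rx_gain_dbi in (20, 30, 40):  # assumed gains, for illustration
    print(f"{sat_rx_gain_dbi} dBi of satellite gain -> {isotropic_rx_dbm + sat_rx_gain_dbi:.1f} dBm")
```

Whether the link closes then depends on the receiver’s sensitivity floor, which is set by the bandwidth, the noise figure, and the required SNR.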

Interference is another critical challenge in the uplink path. Unlike terrestrial networks, where signals from individual devices are isolated into small sectors, satellites receive signals over larger geographic areas. This broad coverage makes it difficult to separate and process individual transmissions, particularly in densely populated areas where numerous devices transmit simultaneously. Managing this interference requires sophisticated signal processing capabilities on the satellite, increasing its complexity and power demands.

The motion of LEO satellites introduces additional complications due to the Doppler effect, which causes a shift in the uplink signal frequency. At higher frequencies like 3.5 GHz, these shifts are more pronounced, requiring real-time adjustments to the receiver to compensate. This dynamic frequency management adds another layer of complexity to the satellite’s design and operation.
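As a rough upper bound (assuming the full orbital velocity of about 7.8 km/s is projected along the line of sight, which the actual geometry reduces), the maximum Doppler shift scales linearly with the carrier frequency:

$$f_{D,max} \; \approx \; \frac{v}{c} \; f \; = \; \frac{7.8 \; km/s}{3 \times 10^5 \; km/s} \; \times \; 3.5 \; GHz \; \approx \; 91 \; kHz \; (\approx \; 21 \; kHz \; at \; 800 \; MHz)$$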

Among the frequencies considered, 3.5 GHz is the most challenging for uplink communication due to high path loss, pronounced Doppler effects, and poor building penetration. Satellites operating at this frequency must achieve extraordinary sensitivity and gain, which is difficult to implement at scale. At 1.8 GHz, the challenges are somewhat reduced as the path loss and Doppler effects are less severe. However, the uplink requires advanced receiver sensitivity and high-gain antennas to approach terrestrial network performance. The most favorable scenario is at 800 MHz, where the lower path loss and better penetration characteristics make uplink communication significantly more feasible. Satellites operating at this frequency require less extreme sensitivity and gain, making it a practical choice for achieving parity with terrestrial 5G networks, especially for outdoor and light indoor coverage.

The uplink, i.e., the signal direction from the consumer device to the satellite, imposes additional limitations on the usable frequency range. Such systems are realistically limited to frequencies from 600 MHz up to a maximum of 1.8 GHz, a range that is already challenging for both uplink and downlink in indoor usage. Service in the lower cellular frequency range is feasible for outdoor usage scenarios in rural and remote areas and for non-challenging indoor environments (e.g., “simple” building topologies).

The premise that LEO satellite D2C services would make terrestrial cellular networks redundant everywhere by offering service parity appears very unlikely, and certainly not with the current generation of LEO satellites being launched. The altitude range of the LEO satellites (300 – 1200 km) and frequency ranges used for most terrestrial cellular services (600 MHz to 5 GHz) make it very challenging and even impractical (for higher cellular frequency ranges) to achieve quality and capacity parity with existing terrestrial cellular networks.

LEO SATELLITE D2C ARCHITECTURE.

A subscriber would realize they have LEO satellite Direct-to-Cell coverage through network signaling and notifications provided by their mobile device and network operator. Using this coverage depends on the integration between the LEO satellite system and the terrestrial cellular network, as well as the subscriber’s device and network settings. Here’s how this process typically works:

When a subscriber moves into an area where traditional terrestrial coverage is unavailable or weak, their mobile device will periodically search for available networks, as it does when trying to maintain connectivity. If the device detects a signal from a LEO satellite providing D2C services, it may indicate “Satellite Coverage” or a similar notification on the device’s screen.

This recognition is possible because the LEO satellite extends the subscriber’s mobile network. The satellite broadcasts system information on the same frequency bands licensed to the subscriber’s terrestrial network operator. The device identifies the network using the Public Land Mobile Network (PLMN) ID, which matches the subscriber’s home network or a partner network in a roaming scenario. The PLMN ID is a fundamental component of both terrestrial and LEO satellite D2C networks: it is the identifier that links a mobile consumer to a specific mobile network operator. It enables communication, access rights management, and network interoperability, and it supports services such as voice, text, and data.

The PLMN is also directly connected to the frequency bands used by an operator and any satellite service provider, acting as an extension of the operator’s network. It ensures that devices access the appropriately licensed bands through terrestrial or satellite systems and governs spectrum usage to maintain compliance with regulatory frameworks. Thus, the PLMN links the network identification and frequency allocation, ensuring seamless and lawful operation in terrestrial and satellite contexts.

In an LEO satellite D2C network, the PLMN plays a similar but more complex role, as it must bridge the satellite system with terrestrial mobile networks. The satellite effectively operates as an extension of the terrestrial PLMN, using the same Mobile Country Code (MCC) and Mobile Network Code (MNC) as the consumer’s home network or a roaming partner. This ensures that consumer devices perceive the satellite network as part of their existing subscription, avoiding the need for additional configuration or specialized hardware. When the satellite provides coverage, the PLMN enables the device to authenticate and access services through the operator’s core network, so that consumer authentication, billing, and service provisioning remain consistent across the terrestrial and satellite domains. In cases where multiple terrestrial operators share access to a satellite system, the PLMN facilitates the correct routing of consumer sessions to their respective home networks. This coordination is particularly important in roaming scenarios, where a consumer connected to a satellite in one region may need to access services through their home network located in another region.
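To illustrate the selection logic, here is a deliberately simplified sketch of how a device might match a broadcast PLMN ID (MCC + MNC) against its home PLMN and a list of roaming partners. The identifiers and the decision logic are hypothetical and for illustration only; real network selection involves many more criteria (forbidden-PLMN lists, priorities, signal quality, etc.).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Plmn:
    mcc: str  # Mobile Country Code, e.g. "262"
    mnc: str  # Mobile Network Code, e.g. "01"

def select_network(broadcast: Plmn, home: Plmn, roaming_partners: set[Plmn]) -> str:
    """Simplified decision based only on the PLMN ID broadcast by the (satellite) cell."""
    if broadcast == home:
        return "camp on cell as home network"
    if broadcast in roaming_partners:
        return "camp on cell as roaming partner"
    return "ignore cell (PLMN not allowed)"

# Hypothetical identifiers for illustration only.
home_plmn = Plmn("262", "01")
print(select_network(Plmn("262", "01"), home_plmn, {Plmn("310", "260")}))
```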

For a subscriber to make use of LEO satellite coverage, the following conditions must be met:

  • Device Compatibility: The subscriber’s mobile device must support satellite connectivity. While many standard devices are compatible with satellite D2C services using terrestrial frequencies, certain features may be required, such as enhanced signal processing or firmware updates. Modern smartphones are increasingly being designed to support these capabilities.
  • Network Integration: The LEO satellite must be integrated with the subscriber’s mobile operator’s core network. This ensures the satellite extends the terrestrial network, maintaining seamless authentication, billing, and service delivery. Consumers can make and receive calls, send texts, or access data services through the satellite link without changing their settings or SIM card.
  • Service Availability: The type of services available over the satellite link depends on the network and satellite capabilities. Initially, services may be limited to text messaging and voice calls, as these require less bandwidth and are easier to support in shared satellite coverage zones. High-speed data services, while possible, may require further advancements in satellite capacity and network integration.
  • Subscription or Permissions: Subscribers must have access to satellite services through their mobile plan. This could be included in their existing plan or offered as an add-on service. In some cases, roaming agreements between the subscriber’s home network and the satellite operator may apply.
  • Emergency Use: In specific scenarios, satellite connectivity may be automatically enabled for emergencies, such as SOS messages, even if the subscriber does not actively use the service for regular communication. This is particularly useful in remote or disaster-affected areas with unavailable terrestrial networks.

Once connected to the satellite, the consumer experience is designed to be seamless. The subscriber can initiate calls, send messages, or access other supported services just as they would under terrestrial coverage. The main differences may include longer latency due to the satellite link and, potentially, lower data speeds or limitations on high-bandwidth activities, depending on the satellite network’s capacity and the number of consumers sharing the satellite beam.

Managing a call on a Direct-to-Cell (D2C) satellite network requires specific mobile network elements in the core network, alongside seamless integration between the satellite provider and the subscriber’s terrestrial network provider. The service’s success depends on how well the satellite system integrates into the terrestrial operator’s architecture, ensuring that standard cellular functions like authentication, session management, and billing are preserved.

In a 5G network, the core network plays a central role in managing calls and data sessions. For a D2C satellite service, key components of the operator’s core network include the Access and Mobility Management Function (AMF), which handles consumer authentication and signaling. The AMF establishes and maintains connectivity for subscribers connecting via the satellite. Additionally, the Session Management Function (SMF) oversees the session context for data services and interworks with the IP Multimedia Subsystem (IMS), which manages call control, routing, and handoffs for voice-over-IP communications. The Unified Data Management (UDM) function, another critical core component, stores subscriber profiles, detailing permissions for satellite use, roaming policies, and Quality of Service (QoS) settings.

To enforce network policies and billing, the Policy Control Function (PCF) applies service-level agreements and ensures appropriate charges for satellite usage. For data routing, elements such as the User Plane Function (UPF) direct traffic between the satellite ground stations and the operator’s core network. Additionally, interconnect gateways manage traffic beyond the operator’s network, such as the Internet or another carrier’s network.

The role of the satellite provider in this architecture depends on the integration model. If the satellite system is fully integrated with the terrestrial operator, the satellite primarily acts as an extension of the operator’s radio access network (RAN). In this case, the satellite provider requires ground stations to downlink traffic from the satellites and forward it to the operator’s core network via secure, high-speed connections. The satellite provider handles radio gateway functionality, translating satellite-specific protocols into formats compatible with terrestrial systems. In this scenario, the satellite provider does not need its own core network because the operator’s core handles all call processing, authentication, billing, and session management.

In a standalone model, where the LEO satellite provider operates independently, the satellite system must include its own complete core network. This requires implementing AMF, SMF, UDM, IMS, and UPF, allowing the satellite provider to directly manage subscriber sessions and calls. In this case, interconnect agreements with terrestrial operators would be needed to enable roaming and off-network communication.

Most current D2C solutions, including those proposed by Starlink with T-Mobile or AST SpaceMobile, follow the integrated model. In these cases, the satellite provider relies on the terrestrial operator’s core network, reducing complexity and leveraging existing subscriber management systems. The LEO satellites are primarily responsible for providing RAN functionality and ensuring reliable connectivity to the terrestrial core.
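As a compact way of summarizing the two deployment options, the sketch below maps which functions sit with which party in each model. This is my own simplified reading of the architecture described above, not a 3GPP-defined split.

```python
# Simplified ownership of functions in the two D2C deployment models
# (my own summary of the description above, not a standardized mapping).
INTEGRATED_MODEL = {
    "satellite provider": ["space-based RAN", "ground stations / radio gateway"],
    "terrestrial operator": ["AMF", "SMF", "UPF", "UDM", "PCF", "IMS", "billing"],
}

STANDALONE_MODEL = {
    "satellite provider": ["space-based RAN", "ground stations",
                           "AMF", "SMF", "UPF", "UDM", "PCF", "IMS", "billing"],
    "terrestrial operator": ["interconnect and roaming agreements"],
}

for name, model in (("Integrated", INTEGRATED_MODEL), ("Standalone", STANDALONE_MODEL)):
    print(name)
    for party, functions in model.items():
        print(f"  {party}: {', '.join(functions)}")
```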

REGULATORY CHALLENGES.

LEO satellite networks offering Direct-to-Cell (D2C) services face substantial regulatory challenges in their efforts to operate within frequency bands already allocated to terrestrial cellular services. These challenges are particularly significant in regions like Europe and the United States, where cellular frequency ranges are tightly regulated and managed by national and regional authorities to ensure interference-free operations and equitable access among service providers.

The cellular frequency spectrum in Europe and the USA is allocated through licensing frameworks that grant exclusive usage rights to mobile network operators (MNOs) for specific frequency bands, often through competitive auctions. For example, in the United States, the Federal Communications Commission (FCC) regulates spectrum usage, while in Europe, national regulatory authorities manage spectrum allocations under the guidelines set by the European Union and CEPT (European Conference of Postal and Telecommunications Administrations). The spectrum currently allocated for cellular services, including low-band (e.g., 600 MHz, 800 MHz), mid-band (e.g., 1.8 GHz, 2.1 GHz), and high-band (e.g., 3.5 GHz), is heavily utilized by terrestrial operators for 4G LTE and 5G networks.

In March 2024, the Federal Communications Commission (FCC) adopted a groundbreaking regulatory framework to facilitate collaborations between satellite operators and terrestrial mobile service providers. This initiative, termed “Supplemental Coverage from Space,” allows satellite operators to use the terrestrial mobile spectrum to offer connectivity directly to consumer handsets and is an essential component of the FCC’s “Single Network Future.” The framework aims to enhance coverage, especially in remote and underserved areas, by integrating satellite and terrestrial networks. In November 2024, the FCC granted SpaceX approval to provide direct-to-cell services via its Starlink satellites. This authorization enables SpaceX to partner with mobile carriers, such as T-Mobile, to extend mobile coverage using satellite technology. The approval includes specific conditions to prevent interference with existing services and to ensure compliance with established regulations. Notably, the FCC also granted SpaceX’s request to provide service to cell phones outside the United States. For non-US operations, Starlink must obtain authorization from the relevant governments. Non-US operations are authorized in various sub-bands between 1429 MHz and 2690 MHz.

In Europe, the regulatory framework for D2C services is under active development. The European Conference of Postal and Telecommunications Administrations (CEPT) is exploring the regulatory and technical aspects of satellite-based D2C communications. This includes understanding connectivity requirements and addressing national licensing issues to facilitate the integration of satellite services with existing mobile networks. Additionally, the European Space Agency (ESA) has initiated feasibility studies on Direct-to-Cell connectivity, collaborating with industry partners to assess the potential and challenges of implementing such services across Europe. These studies aim to inform future regulatory decisions and promote innovation in satellite communications.

For LEO satellite operators to offer D2C services in these regulated bands, they would need to reach agreements with the licensed MNOs that hold the rights to these frequencies. This could take the form of spectrum-sharing agreements or leasing arrangements, wherein the satellite operator obtains permission to use the spectrum for specific purposes, often under strict conditions to avoid interference with terrestrial networks. For example, SpaceX’s collaboration with T-Mobile in the USA involves utilizing T-Mobile’s existing mid-band spectrum (i.e., PCS1900) under a partnership model, enabling satellite-based connectivity without requiring additional spectrum licensing.

In Europe, the situation is more complex due to the fragmented nature of the regulatory environment. Each country manages its spectrum independently, meaning LEO operators must negotiate agreements with individual national MNOs and regulators. This creates significant administrative and logistical hurdles, as the operator must align with diverse licensing conditions, technical requirements, and interference mitigation measures across multiple jurisdictions. Furthermore, any satellite use of the terrestrial spectrum in Europe must comply with European Union directives and ITU (International Telecommunication Union) regulations, prioritizing terrestrial services in these bands.

Interference management is a critical regulatory concern. LEO satellites operating in the same frequency bands as terrestrial networks must implement sophisticated coordination mechanisms to ensure their signals do not disrupt terrestrial operations. This includes dynamic spectrum management, geographic beam shaping, and power control techniques to minimize interference in densely populated areas where terrestrial networks are most active. Regulators in the USA and Europe will likely require detailed technical demonstrations and compliance testing before approving such operations.

Another significant challenge is ensuring equitable access to spectrum resources. MNOs have invested heavily in acquiring and deploying their licensed spectrum, and many may view satellite D2C services as a competitive threat. Regulators would need to establish clear frameworks to balance the rights of terrestrial operators with the potential societal benefits of extending connectivity through satellites, particularly in underserved rural or remote areas.

Beyond regulatory hurdles, LEO satellite operators must collaborate extensively with MNOs to integrate their services effectively. This includes interoperability agreements to ensure seamless handoffs between terrestrial and satellite networks and the development of business models that align incentives for both parties.

TAKEAWAYS.

Direct-to-Cell LEO satellite networks face considerable technological hurdles in providing services comparable to terrestrial cellular networks.

  • Overcoming free-space path loss and ensuring uplink connectivity from low-power mobile devices with omnidirectional antennas.
  • Cellular devices transmit at low power (typically 23–30 dBm), making it difficult for uplink signals to reach satellites in LEO at 500–1,200 km altitudes.
  • Uplink signals from multiple devices within a satellite beam area can overlap, creating interference that challenges the satellite’s ability to separate and process individual uplink signals.
  • Developing advanced phased-array antennas for satellites, dynamic beam management, and low-latency signal processing to maintain service quality.
  • Managing mobility challenges, including seamless handovers between satellites and beams and mitigating Doppler effects due to the high relative velocity of LEO satellites.
  • The high relative velocity of LEO satellites introduces frequency shifts (i.e., Doppler Effect) that the satellite must compensate for dynamically to maintain signal integrity.
  • Addressing bandwidth limitations and efficiently reusing spectrum while minimizing interference with terrestrial and other satellite networks.
  • Scaling globally may require satellites to carry varied payload configurations to accommodate regional spectrum requirements, increasing technical complexity and deployment expenses.
  • Operating on terrestrial frequencies necessitates dynamic spectrum sharing and interference mitigation strategies, especially in densely populated areas, limiting coverage efficiency and capacity.
  • The need for frequent replacement of LEO satellites, due to their shorter lifespans, increases operational complexity and cost.

On the regulatory front, integrating D2C satellite services into existing mobile ecosystems is complex. Spectrum licensing is a key issue, as satellite operators must either share frequencies already allocated to terrestrial mobile operators or secure dedicated satellite spectrum.

  • Securing access to shared or dedicated spectrum, particularly negotiating with terrestrial operators to use licensed frequencies.
  • Avoiding interference between satellite and terrestrial networks requires detailed agreements and advanced spectrum management techniques.
  • Navigating fragmented regulatory frameworks in Europe, where national licensing requirements vary significantly.
  • Spectrum Fragmentation: With frequency allocations varying significantly across countries and regions, scaling globally requires navigating diverse and complex spectrum licensing agreements, slowing deployment and increasing administrative costs.
  • Complying with evolving international regulations, including those to be defined at the ITU’s WRC-27 conference.
  • Developing clear standards and agreements for roaming and service integration between satellite operators and terrestrial mobile network providers.
  • The high administrative and operational burden of scaling globally diminishes economic benefits, particularly in regions where terrestrial networks already dominate.
  • While satellites excel in rural or remote areas, they might not meet high traffic demands in urban areas, restricting their ability to scale as a comprehensive alternative to terrestrial networks.

The idea of D2C satellite networks making terrestrial cellular networks obsolete is ambitious but fraught with practical limitations. While LEO satellites offer unparalleled reach in remote and underserved areas, they struggle to match terrestrial networks’ capacity, reliability, and low latency in urban and suburban environments. The high density of base stations in terrestrial networks enables them to handle far greater traffic volumes, especially for data-intensive applications.

  • Coverage advantage: Satellites provide global reach, particularly in remote or underserved regions, where terrestrial networks are cost-prohibitive and often of poor quality or altogether lacking.
  • Capacity limitations: Satellites struggle to match the high-density traffic capacity of terrestrial networks, especially in urban areas.
  • Latency challenges: Satellite latency, though improving, cannot yet compete with the ultra-low latency of terrestrial 5G for time-critical applications.
  • Cost concerns: Deploying and maintaining satellite constellations is expensive, and they still depend on terrestrial core infrastructure (although the savings would also be very substantial if all terrestrial RAN infrastructure could be avoided).
  • Complementary role: D2C networks are better suited as an extension to terrestrial networks, filling coverage gaps rather than replacing them entirely.

The regulatory and operational constraints surrounding the use of terrestrial mobile frequencies for D2C services severely limit scalability. This fragmentation makes it difficult to achieve global coverage seamlessly and increases operational and economic inefficiencies. While D2C services hold promise for addressing connectivity gaps in remote areas, their ability to scale as a comprehensive alternative to terrestrial networks is hampered by these challenges. Unless global regulatory harmonization or innovative technical solutions emerge, D2C networks will likely remain a complementary, sub-scale solution rather than a standalone replacement for terrestrial mobile networks.

FURTHER READING.

  1. Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog, (March 2024).
  2. Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog, (January 2024).
  3. Kim K. Larsen, “A Single Network Future“, Techneconomyblog, (March 2024).
  4. T.S. Rappaport, “Wireless Communications – Principles & Practice,” Prentice Hall (1996). In my opinion, it is one of the best graduate textbooks on communications systems. I bought it back in 1999 as a regular hardcover. I have not found it as a Kindle version, but I believe there are sites where a PDF version may be available (e.g., Scribd).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

Greenland: Navigating Security and Critical Infrastructure in the Arctic – A Technology Introduction.

The securitization of the Arctic involves key players such as Greenland (The Polar Bear), Denmark, the USA (The Eagle), Russia (The Brown Bear), and China (The Red Dragon), each with strategic interests in the region. Greenland’s location and resources make it central to geopolitical competition, with Denmark ensuring its sovereignty and security. Greenland’s primary allies are Denmark, the USA, and NATO member countries, which support its security and sovereignty. Unfriendly actors assessed to be potential threats include Russia, due to its military expansion in the Arctic, and China, due to its strategic economic ambitions and influence in the region. The primary threats to Greenland include military tensions, sovereignty challenges, environmental risks, resource exploitation, and economic dependence. Addressing these threats requires a balanced, cooperative approach to ensure regional stability and sustainability.

Cold winds cut like knives, Mountains rise in solitude, Life persists in ice. (Aqqaluk Lynge, “Harsh Embrace”).

I have been designing, planning, building, and operating telecommunications networks across diverse environmental conditions, ranging from varied geographies to extreme climates. I sort of told myself that I most likely had seen it all. However (and luckily), the more I consider the complexities involved in establishing robust and highly reliable communication networks in Greenland, the more I realize the uniqueness and often extreme challenges involved with building & maintaining communications infrastructures there. The Greenlandic telecommunications incumbent Tusass has successfully built a resilient and dependable transport network that connects nearly every settlement in Greenland, no matter how small. They manage and maintain this network amidst some of the most severe environmental conditions on the planet. The staff of Tusass is fully committed to ensuring connectivity for these remote communities, recognizing that any service disruption can have severe repercussions for those living there.

As an independent board member of Tusass Greenland since 2022, I have witnessed Tusass’s dedication, passion, and understanding of the importance of improving and maintaining their network and connections for the well-being of all Greenlandic communities. To be clear, the opinions I express in this post are solely my own and do not necessarily reflect the views or opinions of Tusass. I believe that my opinions have been shaped by my Tusass and Greenlandic experience, by working closely with Tusass as an independent board member, and by a deep respect for Tusass and its employees. All information that I am using in this post is publicly available through annual reports (of Tusass) or, in general, publicly available on the internet.

Figure 1 Illustrating a coastal telecommunications site supporting the microwave long-haul transport network of Tusass up along the Greenlandic west coast. Courtesy: Tusass A/S (Greenland).

Greenland’s strategic location, its natural resources, environmental significance, and broader geopolitical context make it geopolitically a critical country. Thus, protecting and investing in Greenland’s critical infrastructure is obviously important, not only from a national and geopolitical security perspective but also with respect to the economic development and stability of Greenland and the Arctic region. If a butterfly’s movements can cause a hurricane, imagine what an angry “polar bear” will do to the global weather and climate. The melting ice caps are enabling new shipping routes and making natural resources much more accessible, and they may also raise the stakes for regional security. One example is China’s Polar Silk Road initiative, through which China seeks to establish (or at least claim) a foothold in the Arctic in order to increase its trade routes and access to resources. This is also reflected in China’s 2018 declaration, in which it describes itself as a “Near-Arctic State” and one of the continental states closest to the Arctic Circle. Russia, which is an actual neighbor to the Arctic region and the Arctic Circle, has also increased its military presence and economic activities in the Arctic. Recently, Russia has made claims in the Arctic to areas that overlap with what Denmark and Canada see as their natural territories, aiming to secure its northern borders and exploit the region’s resources. Russia has also added new military bases and has conducted large-scale maneuvers along its own Arctic coastline. The potential threats from increased Russian and Chinese Arctic activities pose significant security concerns. Identifying and articulating possible threat scenarios to the Arctic region involving potentially hostile actors may indeed justify extraordinary measures and also highlight the need for urgent and substantial investments in and attention to Greenland’s critical infrastructure.

In this article, I focus very much on what key technologies should be considered, why specific technologies should be considered, and how those technologies could be implemented in a larger overarching security and defense architecture driving towards enhancing the safety and security of Greenland:

  • Leapfrog Quality of Critical Infrastructure: Strengthening the existing critical communications infrastructure should be a priority. With Tusass, this is the case in terms of increasing the existing transport network’s reliability and availability by adding new submarine cables and satellite backbone services and the associated satellite infrastructure. However, the backbone of the Tusass economy is a population of 57 thousand. The investments required to quantum leap the robustness of the existing critical infrastructure, as well as deploying many of the technologies discussed in this post, will not have a positive business case or a reasonable return on investment within a short period (e.g., a couple of years) if approached in the way that is standard practice for most private corporations around the world. External subsidies will be required. The benefit evaluation would need to be considered over the long term, more in line with big public infrastructure projects. Most of the critical infrastructure and technology investments discussed here are based on particular geopolitical assumptions and serve as risk-mitigating measures with substantial civil upside if we maintain a dual-use philosophy as a boundary condition for those investments. Overall, I believe that a positive case might be made from the perspective of the possible loss of not making these investments, rather than the typical gain or growth case expected when an investment is made.
  • Smart Infrastructure Development: Focus on building smart infrastructure, integrating sensor networks (e.g., DAS on submarine cables), and AI-driven automation for critical systems like communication networks, transportation, and energy management to improve resilience and operational efficiency. As discussed in this post, Tusass already has a strong communications network that should underpin any work on enhancing the Greenlandic defense architecture. Moreover, Tusass has deep expertise in building and operating critical communications infrastructure in the Arctic. This is critical know-how that should be heavily relied upon in what is to come.
  • Automated Surveillance and Monitoring Systems: Invest in advanced automated surveillance technologies, such as aquatic and aerial drones, satellite-based monitoring (SIGINT and IMINT), and IoT sensors, to enhance real-time monitoring and protection of Greenland.
  • Autonomous Defense Systems: Deploy autonomous systems, including unmanned aerial vehicles (UAVs) and unmanned underwater vehicles (UUVs), to strengthen defense capabilities and ensure rapid response to potential threats in the Arctic region. These systems should be the backbone of ad-hoc private network deployments serving both defense and civilian use cases.
  • Cybersecurity and AI Integration: Implement robust cybersecurity measures and integrate artificial intelligence to protect critical infrastructure and ensure secure, reliable communication networks supporting both military and civilian applications in Greenland.
  • Dual-Use Infrastructure: Prioritize investments in infrastructure solutions that can serve both military and civilian purposes, such as communication networks and transportation facilities, to maximize benefits and resilience.
  • Local Economic and Social Benefits: Ensure that defense investments support local economic development by creating new job opportunities and improving essential services in Greenland.

I believe that Greenland needs to build solid, Greenlandic-centered know-how on a foundational level around autonomous and automated systems. In order to get there, Greenland will need close and strong alliances aligned with the aim of achieving a greater degree of independence through clever use of the latest technologies available. Such local expertise will be essential in order to reduce the dependency on external support (e.g., from Denmark and Allies) and to ensure that Greenland can maintain operational capabilities independently, particularly during a security crisis. Automation, enabled by digitization and AI-enabled system architectures, would be key to managing and monitoring Greenland’s remote and inaccessible geography and resources efficiently and securely, minimizing the need for extensive human intervention. Leveraging autonomous defense and surveillance technologies and stepping up in digital maturity is an important path to compensating for Greenland’s small population. Additionally, implementing automated systems that are robust with respect to both hardware AND software will allow Greenland to protect and maintain its critical infrastructure and services, mitigating the risks associated with (too much) reliance on Denmark or allies during a time of crisis, when such resources may be scarce or impractical to move to Greenland in a timely manner.

Figure 2 A view from Tusass HQ over Nuuk, Greenland. Courtesy: Tusass A/S (Greenland).

GREENLAND – A CONCISE INTRODUCTION.

Greenland, or Kalaallit Nunaat as it is called in Greenlandic, is the world’s largest island, with a surface area of about 2.2 million square kilometers, of which ca. 80% is covered by ice. It is an autonomous territory of Denmark with a population of approximately 57 thousand. Its surface area is comparable to that of Alaska (1.7 million km2) or Saudi Arabia (2.2 million km2). The population is scattered across smaller settlements along the western coastlines, where the climate is milder and more hospitable. Greenland’s extensive coastline measures ca. 44 thousand kilometers and is one of the most remote and sparsely populated coastlines in the world. This remoteness contrasts with more densely populated and developed coastlines, such as those of the United States, and is further emphasized by a lack of civil infrastructure: there are no connecting roads between settlements, and most (if not all) travel between communities relies on maritime or air transport.

Greenland’s coastline presents several unique security challenges due to its particularities, such as its vast length, rugged terrain, harsh climate, and limited population. These factors make Greenland challenging to monitor and protect effectively, which is critical for several reasons:

  • The vast and inaccessible terrain.
  • Harsh climate and weather conditions.
  • Sparse population and limited infrastructure.
  • Maritime and resource security challenges.
  • Communications technology challenges.
  • Geopolitical significance.

The capital and largest city is Nuuk, located on the southwestern coast. With a population of approximately 18 thousand, or just over 30% of the total, Nuuk is Greenland’s administrative and economic center, offering modern amenities and serving as the hub for the island’s limited transportation network. Sisimiut, north of Nuuk on the western coast, is the second-largest town in Greenland, with a population of around 5,500. Sisimiut is known for its fishing industry and serves as a base for much of Greenland’s tourism and outdoor activities.

On the remote and inhospitable eastern coast, Tasiilaq is the largest town in the Ammassalik area, with a population of a little less than 2,000. It is relatively isolated compared to the western settlements and is known for its breathtaking natural scenery and opportunities for adventure tourism (check out https://visitgreenland.com/ for much more information). In the far north, on the west coast, we have Qaanaaq (also known as Thule), one of the world’s northernmost towns, with a population of ca. 600. Located near Qaanaaq is the Pituffik Space Base, the United States’ northernmost military base, established in 1951 and a key component of NATO’s early warning and missile defense systems. The USA has had a military presence in Greenland since the early days of World War II, a presence that was strengthened during the Cold War. The base also plays an important role in monitoring Arctic airspace and supporting air operations in the region.

As of 2023, Greenland has approximately 56 inhabited settlements. I am using the word “settlement” as an all-inclusive term covering communities ranging from populations in the tens of thousands (Nuuk) down to the hundreds or fewer. With few exceptions, there are no settlements with connecting roads or any other overland transportation connections to other settlements. All passenger and goods transportation between the different settlements is by plane or helicopter (provided by Air Greenland) or by sea (e.g., Royal Arctic Line, RAL).

Greenland is rich in natural resources. Apart from water (for hydropower), this includes significant mining, oil, and gas reserves. These natural resources are largely untapped and present substantial opportunities for economic development (and temptation for friendly as well as unfriendly actors). Greenland is believed to have one of the world’s largest deposits of rare earth elements (although by far not comparable to China), extremely valuable as an alternative to reliance on China and critical for various high-tech applications, including electronics (e.g., your smartphone), renewable energy technologies (e.g., wind turbines and EVs), and defense systems. Graphite and platinum are also present in Greenland and are important in many industrial processes. Some estimates indicate that northeast Greenland’s waters could hold large reserves of (yet) undiscovered oil and gas. Other areas are likewise believed to contain substantial hydrocarbon reserves. However, Greenland’s Arctic environment presents severe exploration and extraction challenges, such as extreme cold, ice cover, and remoteness, which have so far made it very costly and complicated to extract its natural resources. With global warming, the economic and practical barriers to exploitation are continuously falling.

FROM STRATEGIC OUTPOST TO ARCTIC STRONGHOLD: THE EVOLVING SECURITY SIGNIFICANCE OF GREENLAND.

Figure 3 illustrates Greenland’s reliance on and the importance of critical communications infrastructure connecting local communities as well as bridging the rest of the world and the internet. Courtesy: DALL-E.

From a security perspective, Greenland’s significance has evolved considerably since the Second World War. During World War II, its importance was primarily based on its location as a midway point between North America and Europe, serving as a refueling and weather station for allied aircraft crossing the Atlantic to and from Europe. Additionally, its remote geographical location, combined with its harsh climate, provided a “safe haven” for monitoring and early warning installations.

During the Cold War era, Greenland’s importance grew (again) due to its proximity to the Soviet Union (and Russia today). Greenland became a key site for early warning radar systems and an integral part of the North American Aerospace Defense Command (NORAD) network designed to detect Soviet bombers and missiles heading toward North America. In 1951, the US-controlled Thule Air Base (today called Pituffik Space Base) in northwest Greenland was constructed to host long-range bombers and provide an advanced position (from a USA perspective) for early warning and missile defense systems.

As global tensions eased in the post-Cold War period, Greenland’s strategic status diminished somewhat. However, its status is now changing again due to Russia’s increased aggression in Europe (and geopolitically) and a more assertive China with expressed interest in the Arctic. The Arctic ice is melting due to climate change, making new maritime routes possible, such as the Northern Sea Route, and making Arctic resources more accessible. Thus, we now observe an increased interest from global powers in the Arctic region. And as was the case during the Cold War period (maybe with much higher stakes), Greenland has become strategically critical for monitoring and controlling these emerging routes, and the Arctic in general, particularly given the observed increase in activity and interest from Russia and China.

Greenland’s position in the North Atlantic, bridging the gap between North America and Europe, has become a crucial spot for monitoring and controlling the transatlantic routes. Greenland is part of the so-called Greenland-Iceland-UK (GIUK) Gap. This gap is a critical “chokepoint” for controlling naval and submarine operations, as was evident during the Second World War (e.g., read up on the Battle of the Atlantic). Controlling the Gap increases the security of maritime and air traffic between the continents. Thus, Greenland has again become a key component in defense strategies and threat scenarios envisioned and studied by NATO (and the USA).

GREENLAND’S GEOPOLITICAL ROLE.

Greenland’s recent significance in the Arctic should not be underestimated. It arises, in particular, from climate change and the resulting melting ice caps, which have enabled, and will continue to enable, new shipping routes and potentially (easier) access to Greenland’s untapped natural resources.

Greenland hosts critical military and surveillance assets, including early warning radar installations as well as air & naval bases. These defense assets actively contribute to global security and are integral to NATO’s missile defense and early warning systems. They provide data for monitoring potential missile threats and other aerial activities in the North Atlantic and Arctic regions. Greenland’s air and naval bases also support specialized military operations, providing logistical hubs for allied forces operating in the Arctic and North Atlantic.

From a security perspective, control of Greenland is not only about monitoring and defense; it is also about deterring threats from potentially hostile actors. It allows for effective monitoring and defense of the Arctic and North Atlantic regions, enabling the detection and tracking of submarines, ships, and aircraft. Such capabilities enhance situational awareness and operational readiness, but more importantly, they send a message to potential adversaries (who may, however unlikely, be unaware of the deficiencies of Danish Arctic patrol ships). The ability to project power and maintain a military presence in this area is necessary for deterring potential adversaries and protecting the critical communications infrastructure (e.g., submarine cables), maritime routes, and airspace.

Greenland’s strategic location is key to global security dynamics. Ensuring Greenland’s security and stability is also essential for maintaining control over critical transatlantic routes, monitoring Arctic activities, and protecting against potential threats from hostile actors. This makes Greenland a cornerstone of the defense infrastructure and an essential area for geopolitical strategy in the North Atlantic and Arctic regions.

INFRASTRUCTURE RECOMMENDATIONS.

Recent research has focused on Greenland in the context of Arctic security (see “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze” by M. Jacobsen et al.). The work emphasizes the importance of maintaining and enhancing surveillance and early warning systems. Greenland is advised to invest in advanced radar systems and satellite monitoring capabilities. These systems are relevant for detecting potential threats and providing timely information, ensuring national and regional security. I should point out the traditional academic use of the word “securitization,” particularly from the Copenhagen School, which refers to framing an issue as an existential threat requiring extraordinary measures. Thus, securitization is the process by which topics are framed as matters of security that should be addressed with urgency and exceptional measures.

The research work furthermore underscores the Greenlandic need for additional strategic infrastructure development, such as enhancing or building new airport facilities and the associated infrastructure. This would, for example, include expanding and upgrading existing airports to improve connectivity within Greenland and with external partners (e.g., as is happening with the new airport in Nuuk). Such developments would also support economic activities, emergency response, and defense operations. Thus, it combines civil and military applications in what could be defined as dual-purpose infrastructure programs.

The above-mentioned research argues for the need to develop advanced communication systems, Signals Intelligence (SIGINT), and Imagery Intelligence (IMINT) gathering technologies based on satellite- and aerial-based platforms. These wide-area coverage platforms are critical to Greenland due to its vast and remote areas, where traditional communication networks may be insufficient or impractical. Satellite communication systems such as GEO, MEO, and LEO (and combinations thereof), and stratospheric high-altitude platform systems (HAPS), are relevant for maintaining robust surveillance, facilitating rapid emergency response, and ensuring effective coordination of security as well as search & rescue operations.

Expanding broadband internet access across Greenland is also a key recommendation (and is already in progress today). This involves improving the availability and reliability of connectivity through additional submarine cables and new satellite internet services, ensuring that even the most remote communities have reliable broadband internet connectivity. All communities need access to broadband internet to stay connected, enable economic development, improve quality of life in general, and integrate remote areas into national and global networks. These communication infrastructure improvements are important for civilian and military purposes, ensuring that Greenland can effectively manage its security challenges and leverage new economic opportunities for its communities. It is my personal opinion that, since most communities or settlements are already connected to the wider internet, the priority should be to improve the redundancy, availability, and reliability of the existing critical communications infrastructure. With that also comes more quality in the form of higher internet speeds.

The applicability of at least some of the specific securitization recommendations for Greenland, as outlined in Marc Jacobsen’s “Greenland in Arctic Security: (De)securitization Dynamics Under Climatic Thaw and Geopolitical Freeze,” may be somewhat impractical given the unique characteristics of Greenland, with its vast area and very small population. Quite a few recommendations (in my opinion), even if in place “today or tomorrow,” would require a critical scale of expertise and human and industrial capital that Greenland does not have available on its own (and is also unlikely to have in the future). Thus, some of the recommendations depend on such resources being delivered from outside Greenland, posing inherent availability risks in a crisis (assuming that such capacity would even be available under normal circumstances). This dependency on external actors, particularly Danish and international investors, complicates Greenland’s ability to independently implement policies recommended by the securitization framework. It could lead to conflicts between local priorities and the interests of external stakeholders, particularly in a time of a clear and present security crisis (e.g., Russia attempting to expand west above and beyond Ukraine).

Also, as a result of Greenland’s small population, there will be a limited pool of local personnel with the needed skills to draw upon for implementing and maintaining many of the recommendations in “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze”. Training and deploying enough high-tech skilled individuals to cover Greenland’s vast territory and technology needs is a very complex challenge given the limited human resources and the difficulties in getting external high-tech resources to Greenland.

I believe Greenland should focus on establishing a comprehensive security strategy that minimizes its dependency on its natural allies and external actors in general. The dual-use approach should be integral to such a security strategy, where technology investments serve civil and defense purposes whenever possible. This approach ensures that Greenlandic society benefits directly from investments in building a robust security framework. I will come back to the various technologies that may be relevant in achieving more independence and less reliance on the external actors that are so prevalent in Greenland today.

HOW CRITICAL IS CRITICAL INFRASTRUCTURE TO GREENLAND?

Communications infrastructure is seen as critical in Greenland. It has to provide a reliable and good-quality service despite Greenland having some of the most unfavorable environmental conditions in which to build and operate communications networks. Greenland is characterized by vast distances between relatively small, isolated communities. This makes effective communication essential for bridging those gaps, allowing people to stay connected with each other as well as with the outside world, irrespective of weather or geography. The lack of a comprehensive road network and the reliance on sea and air travel further emphasize the importance of reliable and available telecommunications services, ensuring timely communication and coordination across the country.

Telecommunications infrastructure is a cornerstone of economic development in Greenland (as it has been elsewhere). It is about efficient internet and telephony services and their role in business operations, e-commerce activities, and international market connections. These aspects are important for the economic growth, education, and diversification of the many Greenlandic communities. The burgeoning tourism industry will also depend on (maybe even demand) robust communication networks to serve tourists, ensure their safety in remote areas, and promote tourism activities in general. This illustrates very firmly that the communications infrastructure is critical (should there be any doubts).

Telecommunications infrastructure also enables distance learning in education and health services, providing people in remote areas with access to high-quality education that otherwise would not be possible (e.g., Coursera, Udemy Academy, …). Telemedicine has obvious benefits for healthcare services that are often limited in remote regions. It allows residents to receive remote medical consultations and services (e.g., by video conferencing) without the need for long-distance and time-consuming travel that may often aggravate a patient’s condition. Emergency response and public safety are other critical areas in which the communications infrastructure plays a crucial role. Greenland’s harsh and unpredictable weather can lead to severe storms, avalanches, and ice-related incidents. It is therefore important to have a reliable communication network that allows for timely warnings, supports rescue operations & coordination, and underpins public safety. Moreover, maritime safety also depends on a robust communication infrastructure, enabling reliable communication between ships and coastal stations.

A strong communication network can significantly enhance social connectivity and help maintain social ties among families and communities across Greenland, thus reducing the feeling of isolation and supporting social cohesion within communities as well as between settlements. Telecommunications can also facilitate sharing and preserving the Greenlandic culture and language through digital media (e.g., Tusass Music), online platforms, and social networks (e.g., Facebook, used by ca. 85% of the eligible population; that number is ca. 67% in Denmark).

For a government and its administration, maintaining effective and reliable communication is essential for well-functioning public services. It facilitates coordination between different levels of government and enables remote administration. Additionally, environmental monitoring and research benefit greatly from a reliable and available communication infrastructure. Greenland’s unique environment attracts scientific research, and robust communication networks are essential for supporting data transmission (in general), coordination of research activities, and environmental monitoring. Greenland’s role in global climate change studies should also be supported by communication networks that provide the means of sharing essential climate data collected from remote research stations.

Last but not least, a well-protected (i.e., redundant) and highly available communications infrastructure is a cornerstone of any national defense or emergency situation. If it is well functioning, the critical communications infrastructure will support the seamless operation of military and civilian coordination, protect against cyber threats, and ensure public confidence during a crisis (natural or man-made). The importance of investing in and maintaining such critical infrastructure cannot be overstated. It plays a central role in a nation’s overall security and resilience.

TUSASS: THE BACKBONE OF GREENLAND’S CRITICAL COMMUNICATIONS INFRASTRUCTURE.

Tusass is the primary telecommunications provider in Greenland. It operates a comprehensive telecom network that includes submarine cables with 5 landing stations in Greenland, very long microwave (MW) radio chains (i.e., long-haul backbone transmission links) with MW backhaul branches to settlements along the way, and broadband satellite connections to deliver telephony, internet, and other communication services across the country. The company is wholly owned by the Government of Greenland (Naalakkersuisut), positioning Tusass as the critical company responsible for the nation’s communications infrastructure. Tusass faces unique challenges due to the vast, remote, and rugged terrain. Extreme weather conditions make it difficult, often impossible, to work outside for at least 3 – 4 months a year. This complicates the deployment and maintenance of any infrastructure in general and a communications network in particular. The regulatory framework mandates that Tusass fulfill a so-called Public Service Obligation, or PSO, requiring Tusass to provide essential telecommunications services to all of Greenland, even the most isolated communities. This, in turn, requires Tusass to continue to invest heavily in expanding and enhancing its critical infrastructure, providing reliable and high-quality services to all residents throughout Greenland.

Tusass is the main and, in most areas, the only telecommunications provider in Greenland. The company holds a dominant market position, providing essential services such as fixed-line telephony, mobile networks, and internet services. The Greenlandic market for internet and data connections was liberalized in 2015. The liberalization allowed private Internet Service Providers (ISPs) to purchase wholesale connections from Tusass and resell them. Despite liberalization, Tusass remains the dominant force in Greenland’s telecommunications sector. Tusass’s market position can be attributed to its extensive communications infrastructure and its government ownership. With a population of 57 thousand and Greenland’s vast geographical size, it would be highly uneconomical, and very challenging in terms of human resources, to have duplicate competing physical communications infrastructures and support organizations in Greenland. Not to mention that it would take many years before an alternative telco infrastructure could be up and running, matching what is already in place. Thus, while there are smaller niche service providers, Tusass effectively operates as Greenland’s sole telecom provider.

Figure 4 Illustrates one of Tusass’s many long-haul microwave sites along Greenland’s west coast. Accessible only by helicopter. Courtesy: Tusass A/S (Greenland).

CURRENT STATE OF CRITICAL COMMUNICATIONS INFRASTRUCTURE.

The illustration below provides an overview of some of the major and critical infrastructures available in Greenland, with a focus on the communications infrastructure provided by Tusass, such as submarine cables, microwave (MW) radio chains, and satellite ground stations, which connect Greenland and give all of Greenland access to the Internet.

Figure 5 illustrates the Greenlandic telecommunications provider Tusass infrastructure. Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above, location only indicative) provide more than 80% of Greenland’s electricity demand. A new international airport is expected to be operational in Nuuk from November 2024. Source: from Tusass Annual Report 2023 with some additions and minor edits.

From south of Nanortalik up to above Upernavik on the west coast, Tusass has a 1,700+ km long microwave radio chain connecting all settlements along Greenland’s west coast from south to north, supported by 67 microwave (MW) radio sites. Thus, there is microwave radio equipment located roughly every 25 km, ensuring very high performance and availability of connectivity to the many settlements along the west coast. This setup is called a long-haul microwave chain and uses a series of MW radio relay stations to transmit data over long distances (e.g., up to thousands of kilometers). The harsh climate, with heavy rain, snow, and icing, makes it very challenging to operate high-frequency, high-bandwidth microwave links (hence the short distances between the radio chain sites). The MW radio sites are mainly located on remote peaks in the harsh and unforgiving coastal landscape (ensuring line-of-sight), making helicopters the only means of accessing these locations for maintenance and fueling. The field engineers here are pretty much superheroes, maintaining the critical communications infrastructure of Greenland and understanding its life-and-death implications for all the remote communities if it breaks down (with the additional danger of meeting a very hungry polar bear, or being stuck on location for several days because poor weather prevents the helicopter from picking the engineers up again).
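As a rough sanity check on the numbers above, and to illustrate why per-hop availability matters so much in a long relay chain, here is a small sketch. The per-hop availability figure is purely an assumption for illustration and not a Tusass figure.

```python
chain_length_km = 1_700
sites = 67
print(f"Average site spacing: ~{chain_length_km / sites:.0f} km")  # ~25 km

def chain_availability(per_hop_availability: float, hops: int) -> float:
    """End-to-end availability of a serial relay chain (no redundancy assumed)."""
    return per_hop_availability ** hops

# Assumed per-hop availability of 99.999% ("five nines"), for illustration only.
a = chain_availability(0.99999, hops=66)
print(f"End-to-end availability over 66 hops: {a * 100:.3f}%")
# ~99.93%, i.e. roughly 5-6 hours of expected outage per year without redundancy
```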

Figure 6 illustrates typical housing for field service staff during site visits. As the weather can change very rapidly in Greenland, it is not uncommon that field service staff have to wait for many days before they can be picked up again by the helicopter. Courtesy: Tusass A/S (Greenland).

Greenland relies on the “Greenland Connect” submarine cable to connect to the rest of the world and the wider internet with modern-day throughput. The submarine cable connecting Greenland to Canada and Iceland runs from Newfoundland and Labrador in Canada to Nuuk, and continues from Qaqortoq in Greenland to land in Iceland (which connects further to Copenhagen and the wider internet). Tusass has, furthermore, deployed submarine cables between five of the major Greenlandic settlements, including Nuuk, up the west coast and down to the south (i.e., Qaqortoq). The submarine cables provide some level of redundancy, increased availability, and substantial capacity & quality augmentation to the long-haul MW chain that carries the traffic from surrounding settlements. The submarine cables are critical and essential for the modernization and digitalization of Greenland. However, there are only two main submarine broadband connection points to and from Greenland: the Canada – Nuuk and the Qaqortoq – Iceland submarine connections. From a security perspective, this poses substantial and unique risks to Greenland, and its role and impact need to be considered in any work on a critical infrastructure strategy. If both international submarine cables were compromised, intentionally or otherwise, it would become challenging, if at all possible, to sustain today’s communications demand. Most traffic would have to be carried by existing satellite capacity, which is substantially lower than what the existing submarine cables can support, leaving the capacity mainly for mission-critical communications and little spare capacity for consumer and non-critical business communication needs. This said, as long as the Greenlandic submarine cables, terrestrial transport, and switching infrastructure are functional, it would be possible, internally within Greenland, to maintain a semblance of internet services and communication between connected settlements using modern-day network design thinking.

Moreover, while the submarine cables along the west coast offer redundancy to the land-based long-haul transport solution, there are substantial risks to settlements and their populations where the long-haul MW solution is the only means of supporting remote Greenlandic communities. Given Greenland’s unique geographic and climate challenges, it is not only very costly but also time-consuming to reduce the risk of disruption to the existing, less redundant, critical infrastructure already in place (e.g., above Aasiaat, north of the Arctic Circle).

Using satellites is an additional dimension, and part of the connectivity toolkit, that can be used to improve the redundancy and availability of the land- and sea-based critical communications infrastructures. However, the drawback of satellite systems is that they are generally bandwidth/throughput limited and have longer signal delays (latency and round-trip time) than terrestrial communications systems. These issues could limit how well some services can be supported, and they would require a versatile traffic management & prioritization system in case the satellite solution becomes the only means of connecting a relatively high-traffic area (e.g., Tasiilaq) that is otherwise used to ground-based broadband transport with substantially more available bandwidth than the satellite solution offers. Particularly for geostationary (GEO) satellite services, with the satellite located at roughly 36 thousand kilometers altitude, the data traffic flow needs to be carefully optimized to function well despite the substantial latency experienced on such connections, which at the very best is about 239 milliseconds and in practice may be closer to twice that or more. This poses significant challenges, particularly to TCP/IP data flows on such response-time-challenged connections and to applications sensitive to short round-trip times.
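
To put the latency figures above into perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes a satellite directly overhead, the standard GEO orbital altitude of roughly 35,786 km, and pure speed-of-light propagation; real connections add slant-range, processing, and ground-segment delays, which is why a practical GEO round trip easily ends up well above the geometric minimum, and why the ~50 ms LEO round-trip figure quoted below is larger than propagation alone.

```python
# Back-of-the-envelope propagation delays for satellite hops. Assumes the satellite
# is directly overhead and speed-of-light propagation in vacuum; real connections add
# slant-range, processing, queuing, and ground-segment delays on top of these minima.

C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_hop_ms(altitude_km: float) -> float:
    """Ground -> satellite -> ground: one hop through the satellite, in ms."""
    return 2 * altitude_km / C_KM_PER_S * 1_000

def min_rtt_ms(altitude_km: float) -> float:
    """Minimum round-trip time over the satellite hop (the hop traversed twice)."""
    return 2 * one_way_hop_ms(altitude_km)

for name, altitude in [("GEO", 35_786), ("LEO", 500)]:
    print(f"{name} @ {altitude:>6} km: one-way hop ≈ {one_way_hop_ms(altitude):5.1f} ms, "
          f"minimum RTT ≈ {min_rtt_ms(altitude):5.1f} ms")
```

The GEO case lands at roughly 239 ms for a single ground–satellite–ground hop, consistent with the best-case figure quoted above.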

Optimizing and stabilizing TCP/IP data flows over GEO satellite connections requires a multi-faceted approach involving enhancements to the TCP protocol (e.g., window scaling, SACK, TCP Hybla, …), the use of hybrid and proxy solutions, application-layer adjustments, error correction mechanisms, Quality of Service (QoS) and traffic shaping, DNS optimizations, and continuous network monitoring. Combining these strategies makes it possible to mitigate some of the inherent challenges of high-latency satellite links and to ensure more effective and efficient IP flows and better utilization of the available satellite link bandwidth. Routing control signals and latency-sensitive data flows over LEO rather than GEO connections may also substantially reduce the sensitivity to the prohibitively long delays experienced on GEO links, using the lower-latency LEO connection (RTT < ~50 ms @ 500 km altitude) or, if available as a better alternative, a long-haul microwave link or submarine connection.
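
As a hedged illustration of why window scaling (and generous buffers) matter so much on such links, the small sketch below computes the bandwidth-delay product and the throughput ceiling a single TCP flow hits when its window is too small. The 550 ms RTT and 100 Mbit/s link capacity are illustrative assumptions, not Tusass figures.

```python
# Why TCP window scaling matters on GEO links: a single flow's throughput is capped
# by receive-window / RTT, while the bandwidth-delay product (BDP) tells us how large
# the window must be to fill the pipe. Illustrative numbers only.

GEO_RTT_S = 0.550        # assumed end-to-end RTT over a GEO hop, in seconds
LINK_BPS = 100e6         # assumed satellite link capacity: 100 Mbit/s

bdp_bytes = LINK_BPS / 8 * GEO_RTT_S
print(f"BDP needed to fill the link: {bdp_bytes / 1e6:.1f} MB in flight")

def window_limited_throughput_mbps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by the TCP window, independent of link speed."""
    return window_bytes * 8 / rtt_s / 1e6

print(f"Classic 64 KB window : {window_limited_throughput_mbps(64 * 1024, GEO_RTT_S):.2f} Mbit/s")
print(f"Scaled   8 MB window : {window_limited_throughput_mbps(8 * 2**20, GEO_RTT_S):.0f} Mbit/s (link-limited at 100)")
```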

Tusass, in collaboration with the Spanish satellite company Hispasat, makes use of the Greenland geostationary satellite, Greensat. Tusass signed an agreement with Hispasat to lease capacity (800 MHz in the Ku-band) on the Amazonas Nexus satellite until the end of its lifetime (i.e., 2038+/-). Greensat was taken into operation in the last quarter of 2023 (note: the satellite was launched in February 2023), providing services to the satellite-only settlement areas around Qaanaaq, the northernmost settlement on the west coast of Greenland, and Tasiilaq and Ittoqqortoormiit (north of Tasiilaq) on the remote east coast. All mobile and fixed traffic from a satellite-only area is routed to a satellite ground station that is connected to the geostationary satellite (see the illustration below). The satellite’s primary mission is to provide broadband services to areas that, due to geography & climate and cost, are impractical to connect by submarine cable or long-haul microwave links. The Greensat satellite closes the connection to the rest of the world and the internet via a ground station on Gran Canaria. It also connects to Greenland via the submarine cables in Nuuk (via Canada and Qaqortoq).

Figure 7 The image shows a large geostationary satellite ground-station antenna located in a cold and remote area of Greenland. The antenna’s primary purpose is to facilitate communication with geostationary satellites 36 thousand kilometers away, transmitting and receiving data. It may support various services such as internet, television broadcasting, weather monitoring, and emergency communications. The components are (1) a parabolic reflector (dish), (2) a feed horn and receiver, (3) a mount and support structure, (4) control and monitoring systems, and (5) a radome (not shown in the picture), which is a structural, weatherproof enclosure that protects the antenna from environmental elements without interfering with the electromagnetic signals it transmits and receives. LEO satellite ground stations are much smaller, as the distance between the ground and a low-earth-orbit satellite is much shorter, i.e., ca. 350 – 650 km, resulting in less challenging receive and transmit conditions (compared to the connection to a geostationary satellite).

In addition, Tusass also makes use of UK-based OneWeb (Eutelsat) LEO satellite backhaul services at several locations, where an area’s fixed and mobile traffic is routed to a point of presence connected to a satellite ground station, which connects via a OneWeb satellite to the central switching center in Nuuk (connected to another ground station).

CRITICAL PROPERTIES FOR RELIABLE AND SECURE TRANSPORT NETWORKS.

A physical transport network comprises many tangible components, such as cables, routers, and switches, which form an interconnected system capable of transmitting data. The network is designed and planned according to a given expected coverage, use, and level of targeted quality (e.g., speed, latency, priority, and security). Moreover, we are also concerned about such a network’s availability as well as its reliability. We design the physical and logical (i.e., relating to higher levels of the OSI stack than the physical) network according to a given target availability, that is, the minimum number of hours in a year the network should be operational and available to our customers. You will see availability given as a percentage of the total hours in a year (e.g., 8,760 hours in a normal year and 8,784 hours in a leap year). So an availability of 99.9% means that we target a minimum operational time of our network of 8,751 hours or, alternatively, accept a maximum of about 9 hours of downtime. The reliability of a network refers to the probability that the network will continue to function without failure for a given period. For example, say you have a mean time between failures (MTBF) of 8,750 hours and want to know the likelihood of operating without failure for 4,380 hours (half a year); you find that there is a ca. 60% chance of operating without a failure (or a ca. 40% chance that a failure may occur within the next 6 months). For critical infrastructure, the availability and reliability metrics are very important to consider in any design and planning process.
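
A minimal sketch of the two calculations above, assuming the usual exponential failure model R(t) = exp(-t/MTBF) for the reliability part:

```python
import math

# Availability: share of the year the network is operational.
HOURS_PER_YEAR = 8_760                      # normal year (8,784 in a leap year)
availability = 0.999
uptime_hours = availability * HOURS_PER_YEAR
downtime_hours = HOURS_PER_YEAR - uptime_hours
print(f"99.9% availability -> ~{uptime_hours:,.0f} h uptime, ~{downtime_hours:.1f} h permitted downtime/year")

# Reliability: probability of surviving t hours without failure, assuming the usual
# exponential failure model with a constant failure rate, R(t) = exp(-t / MTBF).
mtbf_hours = 8_750
t_hours = 4_380                             # roughly half a year
reliability = math.exp(-t_hours / mtbf_hours)
print(f"MTBF {mtbf_hours:,} h -> P(no failure within {t_hours:,} h) ≈ {reliability:.0%}")  # ca. 60%
```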

In contrast to the physical network depiction, a network graph representation abstracts the physical transport network into a mathematical model where graph nodes (or vertices) represent the network’s many components and edges (or links) represent the physical and logical connections between those components. Modeling the physical (and logical) network this way allows designers and planners to study in detail a network’s robustness against many types of disruptions, as well as its general functioning and performance.

Suppose we are using a graph approach in our design of a critical communications network. We then need to carefully consider various graph properties critical for the network’s robustness, security, reliability, and efficiency. To achieve this, one must strive for resilience and fault tolerance by designing for increased redundancy and availability, involving multiple paths, edges, or connections between nodes and preventing single points of failure (SPoF). This involves creating a network where the number of independent paths between any two nodes is maximized (often subject to economic and feasibility boundary conditions). An optimal average node degree should also be a design criterion: a higher node degree enhances the graph’s, and thus the underlying network’s, resilience and reduces its vulnerability.
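
As a sketch of how such properties can already be checked at the design stage, the toy example below (assuming the Python networkx library and a purely illustrative spine-plus-one-ring topology, not the real Tusass graph) counts the edge-disjoint paths between node pairs and lists the node-level single points of failure:

```python
import networkx as nx

# A toy transport graph: a linear "spine" (like a long-haul MW chain) plus one
# redundant submarine link. Node names and link types are purely illustrative.
G = nx.Graph()
G.add_edges_from([
    ("Nuuk", "Maniitsoq", {"kind": "microwave"}),
    ("Maniitsoq", "Sisimiut", {"kind": "microwave"}),
    ("Sisimiut", "Aasiaat", {"kind": "microwave"}),
    ("Nuuk", "Sisimiut", {"kind": "submarine"}),   # redundant path closing a ring
])

# Number of edge-disjoint (independent) paths between two nodes; 1 means a single
# link cut disconnects the pair.
print(nx.edge_connectivity(G, "Nuuk", "Sisimiut"))   # 2: spine + submarine cable
print(nx.edge_connectivity(G, "Nuuk", "Aasiaat"))    # 1: Sisimiut-Aasiaat is a single point of failure

# Nodes whose removal would split the network (node-level SPoFs).
print(list(nx.articulation_points(G)))               # ['Sisimiut']
```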

Scalability is a crucial network property. This is best achieved through a hierarchical structure (or topology) that allows for efficient network management as the network expands. Modularity, another graph KPI, ensures that the network can integrate new nodes and edges without major reconfigurations, supporting civilian expansion, military operations, or dual-purpose operations. To meet low-latency and high-throughput requirements, shortest-path routing algorithms should be applied to minimize the latency or round-trip time (and thus increase throughput). Moreover, bandwidth management should be implemented, allowing the network to handle large data volumes in a prioritized manner (if required). This also ensures that the network can accommodate peak loads and prioritize critical communication when parts of the network are compromised.

Security is a paramount property of any communications network. In today’s environment, with many real and dangerous cyber threats, it may be one of the most important topics to consider. Each node and link (or edge) in a network requires robust defenses against cyber threats. In our design, we need to think about encryption, authentication, and intrusion and anomaly detection systems. Network segmentation will help isolate critical defense communications from civilian traffic, preventing breaches from compromising the entire network. Survivability is enhanced by minimizing the Network Diameter, a graph property. A low (or lower) network diameter ensures that a network can quickly reroute traffic in case of failures and is an important design element for robustness against targeted attacks and random failures.

Likewise, interoperability is essential for seamless integration between civilian and military communication systems. Flexible protocols and specifications (e.g., Open API) enable different types of traffic and varying security requirements. These frameworks provide the structure, tools, and best practices needed to build and maintain secure communication systems, thereby protecting against the various cyber threats we have today and expect in the future. Efficiency is achieved through effective load balancing (e.g., on a logical as well as a physical level) to distribute traffic evenly across the network, prevent bottlenecks, optimize performance, and design for energy-efficient operations, particularly in remote or harsh environments or in case part of the network has been compromised.

In order to support both civilian services and defense operations, accessibility and high availability are very important design requirements for a network with extensive large-scale coverage, including in very remote areas. Incorporating redundant communication links, such as satellite, fiber optic, and wireless, is a design choice that allows for high availability even under adverse and disruptive conditions. It makes good sense in an environment such as Greenland to ensure that long-haul microwave links have a given level of redundancy, either by satellite backhaul, submarine cable, or additional MW redundancy. While we always strive for our designs to be cost-effective, this may be a challenge if the circumstances dictate that the best redundancy (availability) solution is only achievable by satellite or submarine means. However, efficiency should be addressed by optimizing resource allocation to balance cost with performance, ensuring civil and defense needs are met without excessive expenditure, and sharing infrastructure where feasible to reduce costs while maintaining security through logical separation.

Ultra-secure transport networks are designed to meet stringent reliability, resilience, and security requirements. These types of networks are critical for civil and defense applications, ensuring continuous operation and protection against various threats. The important graph properties that such networks should exhibit include high connectivity, redundancy, low diameter, high node degree, network segmentation, robustness to attacks, scalability, efficient load balancing, geographical diversity, and adaptive routing.

High connectivity ensures multiple independent paths between any pair of nodes in the network, which is crucial for a communication network’s resilience and fault tolerance. This allows the network to maintain functionality even if several nodes or links fail, making it capable of withstanding targeted attacks or random failures without significant performance degradation. Redundancy, which involves having multiple backup paths and nodes, enhances fault tolerance and high availability by providing alternative routes for data transmission if primary paths fail. Redundancy also applies to critical network components such as switches, routers, and communication links, ensuring there are no, or only non-critical, single points of failure.

A low diameter (the diameter being the longest shortest path between any two nodes) ensures data can travel quickly across the network, minimizing latency. This is especially important in time-sensitive applications. A high node degree, meaning nodes are connected to many other nodes, increases the network’s robustness and allows multiple paths for data to traverse, contributing to security and availability. However, it is essential to manage the trade-off between a high node degree and the resulting complexity of the network.

Network segmentation and compartmentalization will enhance security by limiting the impact of breaches or failures on a small part of the network. This is of particular importance when having a dual-use network design. Network segmentation divides the network into multiple smaller subnetworks. Each segment may have its own security and access control policies. Network compartmentalization involves designing isolated environments where, for example, data and functionalities are separated based on their criticality and sensitivity (this is, in general, a logical separation). Both strategies help contain cyber threats as well as prevent them from spreading across an entire network. Moreover, it also allows for a more granular control over network traffic and access. With this consideration, we should have a network that is robust against various types of attacks, including both physical and cyber attacks, by using secure protocols, encryption, authentication mechanisms, and intrusion detection systems. The aim of the network topology should be to minimize the impact of potential attacks on critical network nodes and links.

In a country such as Greenland, with settlements spread out over a very long distance and supported by very long and exposed transmission links (e.g., long-haul microwave links), geographical diversity is an essential design consideration that allows us to protect the functioning of services against localized disasters or failures. Typically, this involves distributing switching and management nodes, including data centers, across different geographic locations, ensuring that a failure in one area or with a main transport link does not disrupt the major parts of a network. This is particularly important for disaster recovery and business continuity. Finally, the network should support adaptive and dynamic routing protocols that can quickly respond to changes in the network topology, such as node failures or changes in traffic patterns. Such protocols will enhance the network’s resilience by automatically finding the best real-time data transmission paths.

TUSASS NETWORK AS A GRAPH.

Real maps, such as the Greenland map shown below on the left side of Figure 8, provide valuable geographical context and are essential for understanding the physical layout and extent of, for example, a transport network. A graph representation, as shown on the right side of Figure 8, on the other hand, offers a powerful and complementary perspective on the real-world network topology. It can emphasize the structural properties (and qualities) without them disappearing into geographical details that often are not relevant to the network’s functioning (if designed appropriately). A graph can contain many layers of network information and can describe pretty much the full network stack if required (e.g., from physical transport up through IP, TCP/IP, and to the application layers). It also supports many types of advanced analysis, design scenarios, and different types of simulations. A graph representation of a communications network is an invaluable tool for network design, planning, troubleshooting, analysis, and management.

Thus, the network graph approach offers several benefits for planning and operations. Firstly, the approach can often visualize the network’s topology better than a geographical map. It facilitates the understanding of the various network (and graph) relationships and interconnections between the network components. Secondly, graph algorithms can be applied to the network graph to support the analysis of its characteristics, such as availability and redundancy scores, connectivity in general, shortest paths, and so forth. This kind of analysis helps us identify critical nodes or links that may be sensitive to network and service disruption. It can also help significantly in maintaining and optimizing a network’s operation.

So, analyzing our communication network’s graph representation makes it possible to identify potential weaknesses in the physical transport network, such as single points of failure (SPoF), bottlenecks, or areas with limited or weak redundancy. These identified weaknesses can then be addressed to enhance the network’s resilience, e.g., improving our network’s redundancy and availability and thus its overall reliability.

Figure 8 The chart above shows, on the left side, the topology of the (real) transport network of Tusass with reference to the Greenlandic settlements it connects. It should be noted that the actual transport network is slightly different, as there are more hops between settlements than shown here. On the right side is a graph representation of the Tusass transport network shown on the left. The network graph represents the transport network on the west coast, north- and southbound. There are three main connection categories: (black dashed line) Microwave (MW), (orange dashed line) Submarine Cable, and (blue solid line) Satellite, of which there are a GEO and a LEO arrangement. The size of a node, or settlement, represents the size of its population, which is also why Nuuk has the largest circle. The graph has been drawn using the Kamada-Kawai layout, which is particularly useful for small to medium graphs, providing a reasonable, intuitive visualization of the structural relationships between nodes.

In the following, it is important to understand that, due to Greenland’s specific conditions, such as weather and geography, building a robust transport network in terms of reliability and redundancy will always be challenging, particularly when relying on the standard toolbox for designing, planning, and creating such networks. Geographical challenges should here also be understood to include the resulting lack of civil infrastructure connecting settlements, such as the absence of a road network.

The Table below provides key performance indicators (KPIs) for the Greenlandic (Tusass) transport network graph, as illustrated in Figure 8 above. It represents various aspects of the transport network’s structure and connectivity. This graph consists of 93 vertices (e.g., settlements and other connection points, such as long-haul MW radio sites) and 101 edges (transport connections), and it is fully connected, meaning all nodes are reachable within the network. There is only one subgraph, indicating no isolated segments as expected.

The Average Path Length suggests that it takes on average 39 steps to travel between any two nodes. This is a relatively high number, which may indicate a less efficient network. The Diameter of a network is defined as the longest shortest path between any two nodes. It can be shown that the value of the diameter lies between the value of the radius and twice that value (and not higher ;-). The diameter is found to be 32, indicating a quite high maximum distance between the most distant nodes. This suggests that the network has a quite extensive reach, as is also obvious from the various illustrations of the transport network above (Figure 8) and below (Figures 11 & 12). Apart from the fact that such a high diameter may indicate potential inefficiencies, a large diameter can also mean that, in worst-case scenarios, such as a compromised link or connectivity issues in general, communication between some nodes involves many steps (or hops), potentially leading to higher latency and slower data transmission. Related to the Diameter, the network Radius is the minimum eccentricity of any node, which is the shortest path from the most central node to the farthest node. Here, we find the radius to be 16, which means that even the most centrally located node is relatively far from some other nodes in the network, something that is also very obvious from the various illustrations of the transport network. This emphasizes that the network has nodes that are significantly far apart. Without sufficient redundancy in place, such a transport network may be more sensitive to disruption of connectivity.
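
These path-based KPIs can be computed directly with standard graph tooling. The sketch below, assuming the networkx library and a small toy chain rather than the actual 93-node Tusass graph, shows the relevant calls:

```python
import networkx as nx

# A 12-node path graph as a toy stand-in for a long microwave chain.
chain = nx.path_graph(12)

print("average path length:", round(nx.average_shortest_path_length(chain), 2))  # ~4.33
print("diameter           :", nx.diameter(chain))   # longest shortest path: 11
print("radius             :", nx.radius(chain))     # minimum eccentricity: 6
# For any connected graph: radius <= diameter <= 2 * radius.
```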

From the perspective of redundancy, a large diameter and radius may imply that the network has fewer alternative paths between distant nodes (i.e., a lower redundancy score). This is, for example, the case between Kullorsuaq in the north and Aasiaat. Aasiaat is the first settlement (coming from the north) to be connected both by microwave and submarine cable and thus has an alternative connectivity solution to the long-haul microwave chain. If a critical node or link fails, the latency over the alternative path might be considerably higher than over the compromised connection, as would be the case if the alternative connectivity were satellite-based, leading to inefficiencies and possibly reduced performance. This can also point to potential capacity bottlenecks where specific paths are heavily relied upon without having enough capacity to act as the sole connectivity for a given transmission path. Thus, the vulnerability of the network to failures increases, resulting in reduced performance for customers in the affected area.

We find a Graph Density of 0.024. This value indicates a sparse network with relatively few connections compared to the number of possible connections. The Clustering Coefficient is 0.014, indicating that there are very few tightly-knit groups of nodes (again easily confirmed by visual inspection of the graph itself; see the various figures). The Average Betweenness (ca. 423) measures how often nodes act as bridges along the shortest paths between other nodes, and here it points to a significant central node (i.e., Nuuk).

The Average Closeness of 0.0003 and the Average Eigenvector Centrality of 0.105 provide insights into settlements’ influence and accessibility within the transport network. The Average Closeness measures how close, on average, nodes are to each other. A high value indicates that nodes (or settlements) are close to each other, meaning that the information (e.g., user data, signaling) transported over the network spreads quickly and efficiently; not surprisingly, the opposite is the case for a low average value. For our Tusass network, the average closeness is very low and suggests that the network may face challenges in accessibility and efficiency, with nodes (settlements) being relatively far from one another. This will typically have an impact on the speed and effectiveness of communication across the network. The Average Eigenvector Centrality measures the overall importance (or influence) of nodes within a network. The term eigenvector is a mathematical concept from linear algebra that here represents the stable state of the network and provides insights into the structure of the graph and thus the network. For our Tusass network, the average eigenvector centrality is (very) low and indicates a distribution of influence across several nodes, which may actually prevent reliance on a single point of failure; in general, such structures are thought to enhance a network’s resilience and redundancy. An Average Degree of ca. 2 means that each node has about 2 connections on average, indicating a hierarchical network structure with fewer direct connections and a somewhat low level of redundancy, consistent with what can be observed from the various illustrations shown in this post. This does indicate that our network may be more vulnerable to disruptions and failures and may have a relatively high latency (and thus a high round-trip time).
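
The remaining KPIs discussed above can be derived the same way; again a hedged sketch on a toy graph, assuming networkx:

```python
import networkx as nx

# Density, clustering, and centrality KPIs on a toy graph (not the Tusass graph).
G = nx.path_graph(12)
n, m = G.number_of_nodes(), G.number_of_edges()

print("density          :", round(nx.density(G), 3))
print("avg clustering   :", round(nx.average_clustering(G), 3))
print("avg betweenness  :", round(sum(nx.betweenness_centrality(G, normalized=False).values()) / n, 1))
print("avg closeness    :", round(sum(nx.closeness_centrality(G).values()) / n, 4))
print("avg eigenvector  :", round(sum(nx.eigenvector_centrality(G, max_iter=1000).values()) / n, 3))
print("avg degree       :", round(2 * m / n, 2))   # total degree / number of nodes
```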

Say that, for some reason, the connection to Ilulissat, a settlement north of Aasiaat on the west coast with a little under 5 thousand people, is disrupted due to a connectivity issue between Ilulissat and Qasigiannguit, a neighboring settlement with ca. a thousand people. Today, this would disconnect ca. 11 thousand people, or ca. 20% of Tusass’s customer base, from communications services, as all settlements north of Ilulissat would likewise be disconnected because they rely on the broken connection to carry their data towards Nuuk and onwards to the internet via the submarine cables out of Greenland. In the terminology of the network graph, a broken connection (or edge, as it is called in graph theory) that breaks the network into two (or more) disconnected parts is called a Bridge. Thus, the connection between Ilulissat and Qasigiannguit is a bridge, as breaking it disconnects the northern part of the long-haul microwave network above Ilulissat. Similarly, if Ilulissat were a central switching hub and it were disrupted, it would disconnect the upper northern network from the network south of Ilulissat; in that case, we would call Ilulissat an Articulation Point. A submarine cable between Aasiaat and Ilulissat, for example, would provide redundancy for this particular event, mitigating a disruption of the microwave long-haul network between Ilulissat and Aasiaat that would otherwise disconnect at least 20% of the population from communications services.
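
Both concepts are directly available in standard graph libraries. A hedged sketch, assuming networkx and an illustrative spine-plus-ring toy graph (not the real Tusass topology):

```python
import networkx as nx

# Finding articulation points and bridges with networkx built-ins.
G = nx.Graph()
G.add_edges_from([
    ("Qaqortoq", "Nuuk"), ("Nuuk", "Maniitsoq"), ("Maniitsoq", "Sisimiut"),
    ("Sisimiut", "Aasiaat"), ("Aasiaat", "Ilulissat"), ("Ilulissat", "Upernavik"),
    ("Nuuk", "Sisimiut"),   # redundant (e.g., submarine) link closing a ring
])

print("articulation points:", sorted(nx.articulation_points(G)))
# ['Aasiaat', 'Ilulissat', 'Nuuk', 'Sisimiut']
print("bridges            :", sorted(nx.bridges(G)))
# Every edge outside the Nuuk-Maniitsoq-Sisimiut ring is a bridge.
```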

The transport network has 44 Articulation Points and 57 Bridges, highlighting vulnerabilities where node or link failures could significantly disrupt connectivity between parts of the network, disconnecting major parts of the network and thus disrupting services. A Modularity of 0.65 suggests a moderately high presence of distinct communities, with the network divided into 8 such communities (see Figure 9 below).

Figure 9 In network analysis, a “natural” community (or cluster) is a group of nodes that are more densely connected to each other than to nodes outside the group. Natural communities are denser subgraphs within a larger network. Identifying such communities helps in understanding the structure and function of the network. The above analysis of how Tusass’s transport network connects the various settlements illustrates quite well the various categories of connectivity (e.g., long-haul microwave only, submarine cable redundancy, satellite redundancy, etc.) in Tusass’s communications network.

A Throughput (or total Degree) of 202 indicates the network’s overall capacity for connections. The degree of a node is the number of direct connections it has to other nodes; in a transport network, this indicates how many direct connections a settlement has to other settlements, and a higher degree implies better connectivity and potentially higher resilience and redundancy. The total degree of a graph is the sum of all node degrees, which equals twice the number of edges (here 2 × 101 = 202). In a fully connected network with 93 nodes, the total degree would be 93 multiplied by 92, which equals 8,556. A value of 202 is therefore quite low in comparison, indicating that the network is far from fully connected, which would anyway be unusual for a transport network of this size. Our transport network is relatively sparse, resulting in a lower total degree and suggesting that fewer direct paths exist between nodes. This may also mean less overall network redundancy. In the case of a node or link failure, there might be fewer alternative routes, which, as a consequence, can impact network reliability and resilience. Lower degree values can also indicate limited capacity for data transmission between nodes, potentially leading to congestion or bottlenecks if certain paths become over-utilized. This can, of course, affect the efficiency and speed of data transfer within the network as traffic congestion levels increase.

The KPIs, shown in Table 1 below, collectively indicate that our Greenlandic transport network has several critical points and connections that could affect redundancy and availability, particularly if they become compromised or experience outages. The high number of articulation points and bridges indicates possible design weaknesses, with the low density and average degree suggesting a limited level of redundancy. In fact, Tusass has, over several years, improved its transport network resilience, focusing on increasing the level of redundancy and reducing critical single points of failure. However, such changes and additions are costly and, due to the environmental conditions of Greenland, also time-consuming, given the fewer working days available for outdoor civil work projects.

Table 1 illustrates the most important graph KPIs, also described in the text above and below, that are associated with the graph representation of the Tusass transport network represented by the settlement connectivity (approximating but not one-to-one with the actual transport network).

In graph theory, an articulation point (see Figure 10 below) is a node that, if it is removed from the network, would split the network into disconnected parts. In our story, an articulation point would be one of our Greenlandic settlements. These types of points are thus important in maintaining network connectivity and serve as points in the network where alternative redundancy schemes might serve well. Therefore, creating additional redundancy in the network’s routing paths and implementing alternative connections will mitigate the impact of a failure of an articulation point, ensuring continued operations in case of a disruption. Basically, the more redundancy that a network has, the fewer articulation points the network will have; see also the illustration below.

Figure 10 The figure above illustrates the redundancy and availability of 3 simple undirected graphs with 4 nodes. The first graph is fully connected, with no articulation points or bridges, resulting in a redundancy and availability score of 100%; thus, I can remove a node or a connection from the graph and the remainder will remain fully connected. The second graph, which is partly connected, has one articulation point and one bridge, leading to a redundancy and availability score of 75%. If I remove the third node or the connection between Node 3 and Node 4, I end up with a disconnected Node 4 and a graph that has been broken into two parts (e.g., if Node 3 is removed, we have the two sub-graphs {1,2} and {4}). The third graph, also partly connected, contains two articulation points and three bridges, resulting in a redundancy score of 0% and an availability score of 50%. Articulation points and bridges are highlighted in red to emphasize their critical roles in graph connectivity. Note: An articulation point is a node whose removal disconnects the graph, and a bridge is an edge whose removal disconnects the graph.
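
The figure’s counts are easy to verify programmatically; a small sketch, assuming networkx and the node labels used in the figure:

```python
import networkx as nx

# Rebuilding the three 4-node graphs of Figure 10 and counting their articulation
# points and bridges (illustrative sketch).
full    = nx.complete_graph(4)                          # fully connected
partial = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])    # triangle 1-2-3 plus pendant node 4
chain   = nx.path_graph(4)                              # simple chain

for name, G in [("full", full), ("partial", partial), ("chain", chain)]:
    aps = list(nx.articulation_points(G))
    brs = list(nx.bridges(G))
    print(f"{name:7s}: {len(aps)} articulation point(s), {len(brs)} bridge(s)")
# Expected: full 0/0, partial 1/1, chain 2/3 -- consistent with the figure.
```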

Careful consideration of articulation points is crucial in preventing network partitioning, where removing a single node can disconnect the overall network into multiple sub-segments. The connectivity between different segments is obviously critical for continuous data flow and service availability. Often, design and planning requirements dictate that if a network is broken into parts due to various disruption scenarios, those parts should remain functional and continue to provide service, possibly with reduced performance. Network designers make use of different strategies, such as increasing the physical redundancy of the transmission network as well as making use of routing algorithms at a higher level, such as multipath routing and diverse routing paths. Moreover, optimizing the placement of articulation points and routing paths (i.e., how traffic flows through the communications network) also maximizes resource utilization and may ensure optimal network performance and service availability for an operator’s customers.

Figure 11 illustrates the many articulation points of our Greenlandic settlements, represented as red stars in the graph of the Greenlandic transport network. Removing an articulation point (a critical node) would partition the graph into multiple disconnected components and may lead to severe service interruption.

In graph theory, a bridge is a network connection (or edge) whose removal would split the graph into multiple disconnected components. This type of connection is obviously critical for maintaining connectivity and facilitating communication between different network parts. In real life, with real networks, network designers would, in general, spend considerable time ensuring that such critical connections (i.e., so-called bridges) do not have a disproportionate impact on network availability, for example by building alternative (i.e., redundant) connections or by ensuring that the impact of a compromised bridge is minimized in terms of the number of customers affected.

For our transport network in Greenland, the long-haul microwave transport network is overall less sensitive to disruption on a settlement level, as the underlying topology is like a long spine with high capacity and reasonable built-in redundancy, with branches of MW radios that connect the spine to a particular settlement. Thus, in most cases in this analysis, the long-haul MW radio site in proximity to a given settlement is the actual articulation point (not the settlement itself). The Nuuk data center, a central switching hub, is, by definition, an articulation point of very high criticality.

As discussed above and shown below (Figure 12), in the context of our transport network, bridges may play a crucial role in network resilience and fault tolerance. In our story, bridges represent the transport connections connecting the Greenlandic settlements and the core network back in Nuuk (i.e., the master network node). In our representation, a bridge can, for example, be (1) a microwave connection, (2) a submarine cable connection, or (3) a satellite connection provided by Tusass’s geostationary satellite (e.g., Greensat) or by the low-earth-orbiting OneWeb satellites. By identifying and managing bridges, network designers can mitigate the impact of link failures and disruptions, ensuring continuous operation and availability of services. Moreover, keeping network bridges in mind and minimizing them when planning a transport network will significantly reduce the risk of customer-affecting outages and keep the impact of transport disruption and the subsequent network partitioning to a minimum.

Figure 12 illustrates the many (edge) bridges and transport connections present in the graph of the Greenlandic transport network. Removing a bridge would split the network (graph) into multiple disconnected components, leading to network fragmentation and parts that may no longer sustain services. The above picture is common for long microwave chains with many hops (the connections themselves). The remedy is to make shorter hops, as Tusass is doing, and ensure that the connection itself is redundant equipment-wise (e.g., if one radio fails, there is another to take over). However, such a network would remain sensitive to any disruption of the MW site location and the large MW dish antenna.

Network designers should deploy redundancy mechanisms that minimize the risk and disruptive impact of compromised articulation points and bridges. They have several options to choose from, such as multipath routing (e.g., ring topologies), link aggregation, and diverse routing paths to enhance redundancy and availability. These mechanisms help minimize the impact of bridge failures and improve overall network availability by increasing the level of network redundancy on a physical and logical level. Moreover, optimizing the placement of bridges and routing paths in a transport network will maximize resource utilization and ensure optimal network performance and service availability.

Knowing a given network’s Articulation Points and Bridges allows us to define an Availability and a Redundancy Score that we can use to evaluate and optimize a network’s robustness and reliability. Some examples of these concepts for simpler graphs (i.e., 4 nodes) are also shown in Figure 10 above. In the context of the Greenland transport network used here, these metrics can help us understand how resilient the network is to failures.

The Availability Score measures the proportion of nodes that are not articulation points; articulation points are the nodes that, if compromised, would jeopardize our network’s overall availability. This score measures the risk of exposure to service disruption in case of a disconnection. As a reminder, an articulation point, or cut-vertex, is a node that, when removed, increases the number of components of the network and thus, potentially, the number of disconnected parts. The availability score is calculated as the total number of settlements (e.g., 93) minus the number of articulation points (e.g., 44), divided by the total number of settlements (e.g., 93). In this context, a higher availability score indicates a more robust network where fewer nodes are critical points of failure. Suppose we get a score that is close to one; this indicates that most nodes are not articulation points, suggesting that the network can sustain multiple node failures without significant loss of connectivity (see Figure 10 for a relatively simple illustration of this).

The Redundancy Score measures the proportion of connections that are not bridges; bridges are the connections that, if compromised, could result in severe service disruptions to our customers. When a bridge is compromised or removed, it increases the number of disconnected network parts. The redundancy score is calculated as the total number of transport connections (edges, e.g., 101) minus the number of bridges (e.g., 57), divided by the total number of transport connections (edges, e.g., 101). Thus, in this context of redundancy, a higher redundancy score indicates a more resilient network where fewer edges are critical points of failure. If we get a redundancy score that is close to 100%, it indicates that most of our (transport) connections cannot be categorized as bridges. This also suggests that our network can sustain multiple connectivity failures without a significant loss of overall connectivity and a severe service interruption.
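
Both scores are straightforward to compute once the articulation points and bridges are known; a minimal sketch, assuming networkx, together with the arithmetic for the counts quoted above:

```python
import networkx as nx

def availability_score(G: nx.Graph) -> float:
    """Share of nodes that are NOT articulation points."""
    n_aps = len(set(nx.articulation_points(G)))
    return (G.number_of_nodes() - n_aps) / G.number_of_nodes()

def redundancy_score(G: nx.Graph) -> float:
    """Share of edges that are NOT bridges."""
    n_bridges = len(list(nx.bridges(G)))
    return (G.number_of_edges() - n_bridges) / G.number_of_edges()

# Plugging in the counts quoted for the Tusass graph (93 nodes / 44 articulation
# points, 101 edges / 57 bridges) reproduces the scores used in the text:
print(f"availability score ≈ {(93 - 44) / 93:.0%}")    # ≈ 53%
print(f"redundancy score   ≈ {(101 - 57) / 101:.0%}")  # ≈ 44%
```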

Having more switching centers, or central hubs, can significantly enhance a communications network’s resilience, availability, and redundancy. It also reduces the consequences and impact of disruption to critical bridges in the network. Moreover, by distributing traffic, isolating failures, and providing multiple paths for data transmission, these central hubs may ensure continuous service to our customers and improve the overall network performance. In my opinion, implementing strategies to support multiple switching centers is essential for maintaining a robust and reliable communications infrastructure capable of withstanding various disruptions and enabling scaling to meet any future demands.

For our Greenlandic transport network shown above, we find an Availability Score of 53% and a Redundancy Score of 44%. While the scores may appear on the low side, we need to keep in mind that we are in Greenland, with a population of 57 thousand mainly distributed along the west coast (from south to north) in about 50+ settlements, with 30%+ living in Nuuk. Tusass’s communications network connects pretty much all settlements in Greenland, covering approximately 3,500+ km along the west coast (e.g., comparable to the distance from the top of Norway all the way down to the southernmost point of Sicily), irrespective of the number of people living in them. This is also a very clear desire, expectation, and direction given by the Greenlandic administration (i.e., via the universal service obligation imposed on Tusass). The Tusass transport network is not designed with strict financial KPIs in mind, nor with the financial requirement that a given connection to a settlement would need to have a positive return on investment within a few years (as is the prevalent norm in our industry). The transport network of Tusass has been designed to connect all communities of Greenland at an adequate level of quality and availability, prioritizing the coverage of the Greenlandic population (and the settlements they live in) rather than whether or not it makes hard financial sense. Tusass’s network is continuously upgraded and expanded as the demand for more advanced broadband services increases (as it does anywhere else in the world).

CRITICAL TECHNOLOGIES RELEVANT TO GREENLAND AND THE WIDER ARCTIC.

Greenland’s strategic location in the Arctic and its untapped natural resources, such as rare earth elements, oil, and gas, have increasingly drawn the attention of major global powers like the United States, Russia, and China. The melting Arctic ice due to climate change is opening new shipping routes and making these resources more accessible, escalating the geopolitical competition in the region.

Greenland must establish a defense and security strategy that minimizes its dependency on its natural allies and on external actors, mitigating a situation where such partners may not be available or may not have the resources to commit to Greenland. An integral part of such a security strategy should be a dual-use, civil-and-defense requirement whenever possible, ensuring that Greenlandic society gets an immediate and sustainable return on investments made in establishing a solid security framework.

5G technology offers significant advancements over previous generations of wireless networks, particularly in terms of private networking, speed, reliability, and latency across a variety of coverage platforms, e.g., (normal fixed) terrestrial antennas, vehicle-based (i.e., Cell on Wheels), balloon-based, drone-based, LEO-satellite based. This makes 5G ideal for setting up ad-hoc mobile coverage areas for military and critical civil applications. One of the key capabilities of 5G that supports these use cases is network slicing, which allows for the creation of dedicated virtual networks optimized for specific requirements.

Telia Norway has conducted trials together with the Norwegian Armed Forces to demonstrate the use of 5G for military applications (note: I think this is one of the best examples of an operator-defense collaboration on deployment innovation, and it applies directly to Arctic conditions). These trials included setting up ad-hoc 5G networks to support various military scenarios (including in an Arctic-like climate). The key findings demonstrated the ability to provide high-speed, low-latency communications in challenging environments, supporting real-time situational awareness and secure communications for military personnel. Ericsson has also partnered with the UK Ministry of Defence to trial 5G applications for military use. These trials focused on using 5G to support secure communications, enhance situational awareness, and enable the use of autonomous systems in military operations. NATO has conducted exercises incorporating 5G technology to evaluate its potential for improving command and control, situational awareness, and logistics in multi-national military operations. These exercises have shown the potential of 5G to enhance interoperability and coordination among allied forces. It is a very meaningful dual-use technology.

5G private networks offer a dedicated and secure network environment for specific organizations or use cases, which can be particularly beneficial in the Arctic and Greenland. These private networks can provide reliable communication and data transfer in remote and harsh environments, supporting military and civil applications. For instance, in Greenland, 5G private networks can enhance communication for scientific research stations, ensuring that data from environmental monitoring and climate research is transmitted securely and efficiently. They can also support critical infrastructure, such as power grids and transportation networks, by providing a reliable communication backbone. Moreover, in Greenland, the existing public telecommunications network could be designed in such a way that it essentially operates as a “private” network in case the transmission lines connecting settlements were compromised (e.g., due to natural or unnatural causes), possibly with only a “thin” LEO satellite connection out of the settlement.

5G provides ultra-fast data speeds and low latency, enabling (near) real-time communication and data processing. This is crucial for military operations and emergency response scenarios where timely information is vital. Network slicing allows a single physical 5G network to be divided into multiple virtual networks, each tailored to specific applications or user groups. This ensures that critical communications are prioritized and reliable even during network congestion. It should be considered that for Greenland, the transport network (e.g., long-haul microwave network, routing choices, and satellite connections) might be limiting how fast the ultra-fast data speeds can become and may, at least along some transport routes, limit the round trip time performance (e.g., GEO satellite connections).

5G Enhanced Mobile Broadband (eMBB) provides high-speed internet access to support applications such as video streaming, augmented reality (AR), and virtual reality (VR) for situational awareness and training. Massive Machine-Type Communications (mMTC) supports a large number of IoT devices for monitoring and controlling equipment, sensors, and vehicles in both military and civil scenarios. Ultra-Reliable (Low-Latency) Communications (URLLC) ensures dependable and timely communication for critical applications such as command and control systems as well as unmanned and autonomous communication platforms (e.g., terrestrial, aerial, and underwater drones). I should note that designing defense and secure systems for ultra-low latency (< 10 ms) requirements would be a mistake as such cannot be guaranteed under all scenarios. The ultra-reliability (and availability) of transport connectivity is a critical challenge as it ensures that a given system has sufficient autonomy. Ultra-low latency of a given connectivity is much less critical.

For military (defense) applications, 5G can be rapidly deployed in the field using portable base stations to create a mobile (private) network. This is particularly useful in remote or hostile environments where traditional infrastructure is unavailable or has been compromised. Network slicing can create a secure, dedicated network for military operations. This ensures that sensitive data and communications are protected from interception and jamming. The low latency of 5G supports (near) real-time video feeds from drones, body cameras, and other surveillance equipment, enhancing situational awareness and decision-making in combat or reconnaissance missions.

Figure 13 The hierarchical coverage architecture shown above is relevant for military or, for example, search and rescue operations in remote areas like Greenland (or the Arctic in general), integrating multiple technological layers to ensure robust communication and surveillance. LEO satellites provide extensive broadband and SIGINT & IMINT coverage, supported by GEO satellites for stable links and data processing through ground stations. High Altitude Platforms (HAPs) offer 5G, IMINT, and SIGINT coverage at mid-altitudes, enhancing communication reach and resolution. The HAP system offers an extremely mobile and versatile platform for civil and defense scenarios. An ad-hoc private 5G network on the ground ensures secure, real-time communication for tactical operations. This multi-layered architecture is crucial for maintaining connectivity and operational efficiency in Greenland’s harsh and remote environments. The multi-layered communications network integrates IOT networks that may have been deployed in the past or in a specific mission context.

In critical civil applications, 5G can provide reliable communication networks for first responders during natural disasters or large-scale emergencies. Network slicing ensures that emergency services have priority access to the network, enabling efficient coordination and response. 5G can support the rapid deployment of communication networks in disaster-stricken areas, ensuring that affected populations can access critical services and information. Network slicing can allocate dedicated resources for smart city applications, such as traffic management, public safety, and environmental monitoring, ensuring that these services remain operational even during peak usage times. Thus, for Greenland, ensuring 5G availability would mean covering the coastal settlements, and possibly coastal areas outside the settlements, with 5G in a lower frequency range (e.g., 600 – 900 MHz), prioritizing 5G coverage over 5G enhanced mobile broadband (i.e., any coverage at a high coverage probability is better than no coverage with certainty).

Besides 5G, what other technologies would be of importance in a Greenland technology strategy as it relates to its security, while ensuring that its investments and efforts also return benefits to its society (e.g., a dual-use priority):

  • It would be advisable to increase the number of community networks within the overall network that can continue functioning if cut off from the main communications network. Thus, communications services in smaller and remote settlements would depend less on a main, or very few, central communications control and management hubs. This requires, on a local settlement level or for a grouping of settlements, self-healing, remote (as opposed to central-hub) management, distributed databases, a regional data center (typically a few racks), edge computing, local DNS, CDNs and content hosting, a satellite connection, … Most telecom infrastructure vendors today have network-in-a-box solutions that allow for such designs. Such solutions enable private 5G networks to function in isolation from a public PLMN and fixed transport network.
  • It is essential to develop a (very) highly available and redundant digital transport infrastructure leveraging the existing topology strengthened by additional submarine cables (less critical than some of the other means of connectivity), increased transport ring- & higher-redundancy topologies, multi-level satellite connections (GEO, MEO & LEO, supplier redundancy) with more satellite ground gateways on Greenland (e.g., avoiding “off-Greenland” traffic routing). In addition, a remotely controlled stratospheric drone platform could provide additional connectivity redundancy at very high broadband speeds and low latencies.
  • Satellite backhaul solutions, operating, for example, from Low Earth Orbit (LEO), such as shown in Figure 14 below, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly long-haul transport networks for carrying traffic away from remote populated areas. Satellite backhaul does not only offer a substantially better financial solution for enhancing internet connectivity to remote areas, but it is often the only viable solution for connectivity. The satellite backhaul solution is an important part of the toolkit to improve the redundancy and availability of, in particular, very long and extensive long-haul microwave transport networks through remote areas (e.g., Greenland’s rugged and frequently hostile coastal areas), where increasing the level of availability and redundancy with terrestrial means may be impractical (due to environmental factors) or incredibly costly.
    – LEO satellites provide several security advantages over GEO satellites when considering resistance to hostile actions to disrupt satellite communications. One significant factor is the altitude at which LEO satellites operate, which is between 500 and 2,000 kilometers, compared to GEO satellites, which are positioned approximately 36,000 kilometers above the equator. The lower altitude makes LEO satellites less vulnerable to long-range anti-satellite (ASAT) missiles.
    – LEO satellite networks are usually composed of large constellations with many satellites, often numbering in the dozens to hundreds. This extensive LEO network constellation provides some redundancy, meaning the network can still function effectively if some satellites are “taken out.” In contrast, GEO satellites are typically much fewer in number, and each satellite covers a much larger area, so losing even one GEO satellite will have a significant impact.
    – Another advantage of LEO satellites is their rapid movement across the sky relative to the Earth’s surface, completing an orbit in about 90 to 120 minutes. This constant movement makes it more challenging for hostile actors to track and target individual satellites for extended periods. In comparison, GEO satellites remain stationary relative to a fixed point on Earth, making them easier to locate and target.
    – LEO satellites’ lower altitude also results in lower latency than GEO satellites. This benefits secure, time-sensitive communications and makes links less susceptible to interception and jamming due to the reduced time delay (a back-of-the-envelope latency comparison is sketched after Figure 15 below). However, a security architecture for critical transport infrastructure should not rely on only one type of satellite configuration.
    – Both GEO and LEO satellites have their purpose and benefits. Moreover, a hierarchical multi-dimensional topology, including stratospheric drones and even autonomous underwater vehicles, is worth considering when designing critical communications architecture. It is also worth remembering that public satellite networks may offer a much higher degree of communications redundancy and availability than defense-specific constellations. However, for SIGINT & IMINT collection, the defense-specific satellite constellations are likely much more advanced (unfortunately, they are also not as numerous as their civilian “cousins”). This said, a stratospheric aerial platform (e.g., HAP) would be substantially more powerful for IMINT, and possibly also for some SIGINT tasks, than a defense-specific satellite solution (and/or less costly and more versatile).
Figure 14 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers such as OneWeb and Starlink with its so-called “Community Gateway” (i.e., using the Ka-band). It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: the indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds are illustrative of the network’s capabilities.
Figure 15 illustrates LEO satellite direct-to-device communication in remote areas without terrestrial communications infrastructure, where satellites are the only means of communication for a normal mobile device or a classical satellite phone. Courtesy: DALL-E.
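To put the LEO-versus-GEO latency argument above into numbers, here is a minimal back-of-the-envelope sketch (in Python) of the one-way propagation delay as a function of satellite altitude. It assumes the satellite is directly overhead and ignores processing, queuing, inter-satellite hops, and gateway backhaul; the altitudes are illustrative.

```python
# Back-of-the-envelope propagation delay from satellite altitude (illustrative only).
# Real end-to-end latency adds processing, queuing, inter-satellite hops, and backhaul.
C = 299_792.458  # speed of light in km/s

def one_way_delay_ms(altitude_km: float) -> float:
    """One-way propagation delay (ms) for a satellite directly overhead."""
    return altitude_km / C * 1_000

for label, altitude in [("LEO (550 km)", 550), ("LEO (1,200 km)", 1_200), ("GEO (35,786 km)", 35_786)]:
    print(f"{label}: ~{one_way_delay_ms(altitude):.1f} ms one-way, "
          f"~{4 * one_way_delay_ms(altitude):.0f} ms ground-to-ground round trip (up/down, both directions)")
```

The four-leg round trip (user to satellite to gateway and back) lands at a few milliseconds for LEO versus roughly half a second for a GEO bent-pipe link, which is the latency gap referred to above.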
  • Establish an unmanned (remotely operated) stratospheric High Altitude Platform System (HAPS) (i.e., an advanced drone-based platform) or Unmanned Aerial Vehicles (UAV) over Greenland (or the Arctic region) with payload-agnostic capabilities. This could easily be run out of existing Greenlandic ground-based aviation infrastructure (e.g., Kangerlussuaq, Nuuk, or many other community airports across Greenland). The platform could eventually become autonomous or require little human intervention. It could support mission-critical ad-hoc networking for civil and defense applications (over Greenland). Such a multi-purpose platform can be used for IMINT and SIGINT (i.e., for both civil & defense) and for civil communication, including establishing connectivity to the ground-based transport network in case of disruptions. Lastly, a HAPS may also permanently offer high-quality, high-capacity 5G mobile services or act as a private ultra-secure 5G network in an ad-hoc, mission-specific scenario. For a detailed account of stratospheric drones and how they compare with low-earth-orbit satellites, see my recent article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?”.
    Stratospheric drones, which operate in the stratosphere at altitudes of around 20 to 50 kilometers, offer several security advantages over traditional satellite communications and submarine communication cables, especially from a Greenlandic perspective. These drones are less accessible and harder to target due to their altitude, which places them out of reach of most ground-based anti-aircraft systems and well above the range of most manned aircraft. This makes them less vulnerable to hostile actions than satellites, which can be targeted by anti-satellite (ASAT) missiles, or submarine cables, which can be physically cut or damaged by underwater operations. The drones would stay over Greenlandic, or NATO, territory, whereas submarine communications cables and satellites, by nature, design, and purpose, extend far beyond the territory of Greenland.
    – The mobility and flexibility of stratospheric drones allow them to be quickly repositioned as needed, making it difficult for adversaries to consistently target them. Unlike satellites that follow predictable orbits or submarine cables with fixed routes, these drones can change their location dynamically to respond to threats or optimize their coverage. This is particularly advantageous for Greenland, whose vast and harsh environment makes maintaining and protecting fixed communication infrastructure challenging.
    – Deploying a fleet of stratospheric drones provides redundancy and scalability. If one drone is compromised or taken out of service, others can fill the gap, ensuring continuous communication coverage. This distributed approach reduces the risk of a single point of failure, which is more pronounced with individual satellites or single submarine cables. For Greenland, this means a more reliable and resilient communication network that can adapt to disruptions.
    – Stratospheric drones can be rapidly deployed and recovered, making them easier to maintain and upgrade as needed than, for example, satellite-based platforms and even terrestrially deployed networks. This quick deployment capability is crucial for Greenland, where harsh weather conditions can complicate the maintenance and repair of fixed infrastructure. Unlike satellites, which require expensive and complex launches, or submarine cables, which involve extensive underwater laying and maintenance efforts, drones offer a more flexible and manageable solution.
    – Drones can also establish secure, line-of-sight communication links that are less susceptible to interception and jamming. Operating closer to the ground than satellites allows the use of higher frequencies and narrower beams that are more difficult to jam. Additionally, drones can employ advanced encryption and frequency-hopping techniques to further secure their communications, ensuring that sensitive data remains protected. Stratospheric drones can also be equipped with advanced surveillance and countermeasure technologies to detect and respond to threats. For instance, they can carry sensors to monitor the electromagnetic spectrum for jamming attempts and deploy countermeasures to mitigate these threats. This proactive defense capability enhances their security profile compared to passive communication infrastructure like satellites or cables.
    – From a Greenlandic perspective, stratospheric drones offer significant advantages. They can be deployed over specific areas of interest, providing targeted communication coverage for remote or strategically important regions. This is particularly useful for covering Greenland’s vast and sparsely populated areas. Modern stratospheric drones are designed to be payload agnostic, supporting multi-dimensional payloads (e.g., SIGINT & IMINT equipment, a 5G base station with advanced antennas, laser communication systems, …), and to stay operational for extended periods, ranging from weeks to months, ensuring sustained communication coverage without the need for frequent replacements or maintenance.
    – Last but not least, Greenland may be an ideal safe testing ground due to its vast, remote and thinly populated regions.
Figure 16 illustrates a Non-Terrestrial Network consisting of a stratospheric High Altitude Platform (HAP) drone-based constellation providing cellular broadband services to terrestrial mobile users on their normal 5G terminal equipment, ranging from smartphones and tablets to civil and military IoT networks and devices. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. One could assign three HAPs to a given rural area to deliver very high-availability services. The operating altitude of a HAP constellation is between 10 and 50 km, with an optimum around 20 km (a simple line-of-sight coverage estimate is sketched below). It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to contemplate having the gNB (full 5G radio node) entirely in the stratospheric drone, allowing easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.
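As a rough illustration of the coverage geometry mentioned in the figure caption above, the sketch below estimates how far a HAP at an assumed 20 km altitude can be “seen” from the ground. The geometric horizon is an upper bound; the practical service radius depends on the minimum elevation angle the terminals and link budget allow. All numbers are illustrative assumptions, not a specific HAP design.

```python
import math

R_EARTH_KM = 6371.0

def horizon_range_km(h_km: float) -> float:
    """Geometric line-of-sight horizon from altitude h (upper bound, ignoring refraction)."""
    return math.sqrt(2 * R_EARTH_KM * h_km + h_km ** 2)

def service_radius_km(h_km: float, min_elev_deg: float) -> float:
    """Approximate ground radius where the HAP is seen above a minimum elevation angle
    (flat-earth approximation, reasonable for ranges much smaller than Earth's radius)."""
    return h_km / math.tan(math.radians(min_elev_deg))

h = 20.0  # assumed operating altitude in km
print(f"Horizon at {h:.0f} km altitude: ~{horizon_range_km(h):.0f} km")
print(f"Service radius at 10 deg minimum elevation: ~{service_radius_km(h, 10):.0f} km")
print(f"Service radius at 30 deg minimum elevation: ~{service_radius_km(h, 30):.0f} km")
```

The gap between the roughly 500 km horizon and the much smaller practical service radius is one reason several HAPs, each with many beams, would be assigned to a given area rather than relying on a single platform.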
  • Unmanned Underwater Vehicles (UUV), also known as Autonomous Underwater Vehicles (AUV), are obvious systems to deploy for underwater surveillance & monitoring and may also have obvious dual-use purposes (e.g., fisheries & resource management, iceberg tracking and navigation, coastal defense and infrastructure protection such as for submarine cables). Depending on the mission parameters and type of AUV, the range varies from up to 100 kilometers (e.g., REMUS 100) to thousands of kilometers (e.g., SeaBed2030), and the operational time (endurance) from a maximum of 24 hours (e.g., REMUS 100, Bluefin-21), to multiple days (e.g., Boeing Echo Voyager), to several months (SeaBed2030). A subset of this kind of underwater solution would be swarm-like AUV constellations. See Figure 17 below for an illustration.
  • Increase RD&T (Research, Development & Trials) on the Arctic Internet of Things (A-IoT) (note: this requires some level of coverage, at minimum satellite) for civil, defense/military (e.g., Military IoT or M-IoT) and dual-use applications, such as surveillance & reconnaissance, environmental monitoring, infrastructure security, etc. (note: IoT is not only for terrestrial use cases but also highly interesting for aquatic applications in combination with AUVs/UUVs). Military IoT refers to integrating IoT technologies tailored explicitly for military applications. These devices enhance operational efficiency, improve situational awareness, and support decision-making processes in various military contexts. Military IoT encompasses connected devices, sensors, and systems that collect, transmit, and analyze data to support defense and security operations. In the vast and remote regions of Greenland and the Arctic, military IoT devices can be deployed for continuous surveillance and reconnaissance. This includes using drones, such as advanced HAPS, equipped with cameras and sensors to monitor borders, track the movements of ships and aircraft, and detect any unauthorized activities. Military IoT sensors can also monitor Arctic environmental conditions, tracking ice thickness changes, weather patterns, and sea levels. Such data is crucial for planning and executing military operations in the challenging Arctic environment but is also of tremendous value to Greenlandic society. The importance of dual-use cases, civil and defense, cannot be overstated; here are some examples:
    Infrastructure Monitoring and Maintenance: (Military Use Case) IoT sensors can be deployed to monitor the structural integrity of military installations, such as bases and airstrips, ensuring they remain operational and safe for use. These sensors can detect stress, wear, and potential damage due to extreme weather conditions. These IoT devices and networks can also be deployed for perimeter defense and monitoring. (Civil Use Case) The same technology can be applied to civilian infrastructure, including roads, bridges, and public buildings. Continuous monitoring can help maintain these civil infrastructures by providing early warnings about potential failures, thus preventing accidents and ensuring public safety.
    Secure Communication Networks: (Military Use Case) Military IoT devices can establish secure communication networks in remote areas, ensuring that military units can maintain reliable and secure communications even in the Arctic’s harsh conditions. This is critical for coordinating operations and responding to threats. (Civil Use Case) In civilian contexts, these communication networks can enhance connectivity in remote Greenlandic communities, providing essential services such as emergency communications, internet access, and telemedicine. This helps bridge the digital divide and improve residents’ quality of life.
    Environmental Monitoring and Maritime Safety: (Military Use Case) Military IoT devices, such as underwater sensor networks and buoys, can be deployed to monitor sea conditions, ice movements, and potential maritime threats. These devices can provide real-time data critical for naval operations, ensuring safe navigation and strategic planning. (Civil Use Case) The same sensors and buoys can be used for civilian purposes, such as ensuring the safety of commercial shipping lanes, fishing operations, and maritime travel. Real-time monitoring of sea conditions and icebergs can prevent maritime accidents and enhance the safety of maritime activities.
    Fisheries Management and Surveillance: (Military Use Case) IoT devices can monitor and patrol Greenlandic waters for illegal fishing activities and unauthorized maritime incursions. Drones and underwater sensors can track vessel movements, ensuring that military forces can respond to potential security threats. (Civil Use Case) These monitoring systems can support fisheries management by tracking fish populations and movements, helping to enforce sustainable fishing practices and prevent overfishing. This data is important for the local economy, which heavily relies on fishing.
  • Implement Distributed Acoustic Sensing (DAS) on submarine cables. DAS utilizes existing fiber-optic cables, such as those used for telecommunications, to detect and monitor acoustic signals in the underwater environment. This innovative technology leverages the sensitivity of fiber-optic cables to vibrations and sound waves, allowing for the detection of various underwater activities. This could also be integrated with the AUV and A-IOTs-based sensor systems. It should be noted that jamming a DAS system is considerably more complex than jamming traditional radio-frequency (RF) or wireless communication systems. DAS’s significant security and defense advantages might justify deploying more submarine cables around Greenland. This investment is compelling because of enhanced surveillance and security, improved connectivity, and strategic and economic benefits. By leveraging DAS technology, Greenland could strengthen its national security, support economic development, and maintain its strategic importance in the Arctic region.
  • Greenland should widely embrace the deployment of autonomous systems and technologies based on artificial intelligence (AI). AI could compensate for the challenges of a vast geography, a hostile climate, and a small population. This may, by far, be one of the most critical components of a practical security strategy for Greenland. Getting experience with autonomous systems in a Greenlandic and Arctic setting should be prioritized. Collaboration & knowledge exchange with Canadian and American universities should be structurally explored, as well as with other larger (friendly) countries with Arctic interests (e.g., Norway, Iceland, …).
  • Last but not least, cybersecurity is an essential, even foundational, component of the securitization of Greenland and the wider Arctic, addressing the protection of critical infrastructure, the integrity of surveillance and monitoring systems, and the defense against geopolitical cyber threats. The present state and level of maturity of cybersecurity and defense (against cyber threats) related to Greenland’s critical infrastructure has to improve substantially. Prioritizing cybersecurity may have to come at the expense of other critical activities due to the limited resources with relevant expertise available to businesses in Greenland. Today, international collaboration is essential for Greenland to develop strong cyber defense capabilities, ensure secure communication networks, and implement effective incident response plans. However, it is essential for Greenland’s security that a cybersecurity architecture is tailor-made to the particularities of Greenland and allows Greenland to operate independently should friendly actors and allies not be in a position to provide assistance.
Figure 17 Above illustrates an Unmanned Underwater Vehicle (UUV) near the coast of Greenland inspecting a submarine cable. The UUV is a robotic device that operates underwater without a human onboard, controlled either autonomously or remotely. In and around Greenland’s coastline, UUVs may serve both defense and civilian purposes. For defense, they can patrol for submarines, monitor underwater traffic, and detect potential threats, enhancing maritime security. Civilian applications include search & rescue missions and scientific research, where UUVs map the seabed, study marine life, and monitor environmental changes, crucial for understanding climate change impacts. Additionally, they inspect underwater infrastructure like submarine cables, ensuring their integrity and functionality. UUVs’ versatility makes them invaluable for comprehensive underwater exploration and security along Greenland’s long coastline. Integrated defense architectures may combine UUVs, Distributed Acoustic Sensing (DAS) networks deployed on submarine cables, and cognitive AI-based closed-loop security solutions (e.g., autonomous operation). Courtesy: DALL-E.

How do we frame some of the above recommendations into a context of securitization in the academic sense of the word aligned with the Copenhagen School (as I understand it)? I will structure this as the “Securitizing Actor(s),” “Extraordinary Measures Required,” and the “Geopolitical Implications”:

Example 1: Improving Communications networks as a security priority.

Securitizing Actor(s): Greenland’s government, possibly supported by Denmark and international allies (e.g., The USA’s Pituffik Space Base on Greenland), frames the lack of higher availability and reliable communication networks as an existential threat to national security, economic development, and stability, including the ability to defend Greenland effectively during a global threat or crisis.

Extraordinary Measures Required: Greenland can invest in advanced digital communication technologies to address the threat. This includes upgrading infrastructure such as fiber-optic cables, satellite communication systems, stratospheric high-altitude platform (HAP) with IMINT, SIGINT, and broadband communications payload, and 5G wireless networks to ensure they are reliable and can handle increased data traffic. Implementing advanced cybersecurity measures to protect these networks from cyber threats is also crucial. Additionally, investments in broadband expansion to remote areas ensure comprehensive coverage and connectivity.

Geopolitical Implications: By framing the reliability and availability of digital communications networks as a national security issue, Greenland ensures that significant resources are allocated to upgrade and maintain these critical infrastructures. Greenland may also attract European Union investments to leapfrog its critical communications infrastructure. This improves Greenland’s day-to-day communication and economic activities and enhances its strategic importance by ensuring secure and efficient information flow. Reliable digital networks are essential for attracting international investments, supporting digital economies, and maintaining social cohesion.

Example 2: Geopolitical Competition in the Arctic

Securitizing Actor(s): The Greenland government, aligned with Danish and international allies’ interests, views the increasing presence of Russian and Chinese activities in the Arctic as a direct threat to Greenland’s sovereignty and security.

Extraordinary Measures Required: In response, Greenland can adopt advanced surveillance and defense technologies, such as Distributed Acoustic Sensing (DAS) systems to monitor underwater activities and Unmanned Aerial & Underwater Vehicles (UAVs & UUVs) for continuous aerial and underwater surveillance. Additionally, deploying advanced communication networks, including satellite-based systems, ensures secure and reliable information flow.

Geopolitical Implications: By framing foreign powers’ increased activities as a security threat (e.g., Russia and China), Greenland can attract NATO and European Union investments and support for deploying cutting-edge surveillance and defense technologies. This enhances Greenland’s security infrastructure, deters potential adversaries, and solidifies its strategic importance within the alliance.

Example 3: Cybersecurity as a National Security Priority.

Securitizing Actor(s): Greenland, aligned with its allies, frames the potential for cyber-attacks on critical infrastructure (such as power grids, communication networks, and military installations) as an existential threat to national security.

Extraordinary Measures Required: To address this threat, Greenland can invest in state-of-the-art cybersecurity technologies, including artificial intelligence-driven threat detection systems, encrypted communication channels, and comprehensive incident response frameworks. Establishing partnerships with global cybersecurity firms and participating in international cybersecurity exercises can also be part of the strategy.

Geopolitical Implications: By securitizing cybersecurity, Greenland ensures that significant resources are allocated to protect its digital infrastructure. This safeguards its critical systems and enhances its attractiveness as a secure location for international investments, reinforcing its geopolitical stability and economic growth.

Example 4: Arctic IoT and Dual-Use Military IoT Networks as a Security Priority.

Securitizing Actor(s): Greenland’s government, supported by Denmark and international allies, frames the lack of Arctic IoT and dual-use military IoT networks as an existential threat to national security, economic development, and environmental monitoring.

Extraordinary Measures Required: Greenland can invest in deploying Arctic IoT and dual-use military IoT networks to address the threat. These networks involve a comprehensive system of interconnected sensors, devices, and communication technologies designed to operate in the harsh Arctic environment. This includes deploying sensors for environmental monitoring, enhancing surveillance capabilities, and improving communication and data-sharing across military and civilian applications.

Geopolitical Implications: By framing the lack of Arctic IoT and dual-use military IoT networks as a national security issue, Greenland ensures that significant resources are allocated to develop and maintain these advanced technological infrastructures. This improves situational awareness and operational efficiency and enhances Greenland’s strategic importance by providing real-time data and robust monitoring capabilities. Reliable IoT networks are essential for protecting critical infrastructure, supporting economic activities, and maintaining environmental and national security.

THE DANISH DEFENSE & SECURITY AGREEMENT COVERING THE PERIOD 2024 TO 2033.

Recently, Denmark approved its new defense and security agreement for the period 2024-2033. It strongly emphasizes Denmark’s strategic reorientation in response to the new geopolitical realities. A key element in the Danish commitment to NATO’s goals is a spending level approaching, and possibly exceeding, 2% of GDP on defense by 2030. It is not 2% for the sake of 2%. There really is a lot to be done, and as soon as possible. The agreement entails significant financial investments totaling approximately 190 billion DKK (ca. 25+ billion euros) over the next ten years to quantum-leap defense capabilities and critical infrastructure.

The defense agreement emphasizes the importance of enhancing security in the Arctic region, including, of course, Greenland. Thus, Greenland’s strategic significance in the current geopolitical landscape is recognized, particularly in light of Russian activities and China’s expressed intentions (e.g., the “Polar Silk Road”). The agreement aims to strengthen surveillance, sovereignty enforcement, and collaboration with NATO in the Arctic. As such, we should expect investments to improve surveillance capabilities that would strengthen the enforcement of Greenland’s sovereignty, ensuring that Greenland and Denmark, together with their allies, can effectively monitor and protect their Arctic territories. The defense agreement stresses the importance of supporting NATO’s mission in the Arctic region, contributing to collective defense and deterrence efforts.

What I very much like in the new defense agreement is the expressed focus on dual-use infrastructure investments that benefit both Greenland’s defense (& military) and civilian sectors. This includes upgrading existing facilities and enhancing operational capabilities in the Arctic that allow a rapid response to security threats. The agreement ensures that defense investments also bring economic and social benefits to Greenlandic society, consistent with a dual-use philosophy. For this to become a reality, it will require close collaboration with local authorities, businesses, and research institutions to support the local economy and create new job opportunities (as well as a local emphasis on relevant education, so that such investments are locally sustainable and do not rely on an “army” of Danes and others of non-Greenlandic origin).

The defense agreement unsurprisingly expresses a strong commitment to enhancing cybersecurity measures and addressing hybrid threats in Greenland. This reflects the broader security challenges arising from the new technologies to be introduced, the present cyber-maturity level, and, of course, the current (and expected future) geopolitical tensions. The architects behind the agreement have also realized that there is a significant need to improve recruitment, retention, and appropriate training within the defense forces, ensuring that personnel are well-prepared to operate in the Arctic environment in general and in Greenland in particular.

It is great to see that the Danish “Defense and Security Agreement” for 2024-2033 reflects the principles of securitization by framing Greenland’s security as an existential matter and justifying substantial investments and strategic initiatives in response. The agreement focuses on enhancing critical infrastructure, surveillance platforms, and international cooperation while ensuring benefits to the local economy, in line with the concept of securitization. That is, it aims to ensure that Greenland is well-prepared to address current and future security challenges and anticipated threats in the Arctic region.

The agreement underscores the importance of advanced surveillance systems, such as satellite-based monitoring and sophisticated radar systems, as mentioned in the agreement. These technologies are deemed important for maintaining situational awareness and ensuring the security of Denmark’s territories, including Greenland and the Arctic region in general. Enhanced surveillance capabilities are essential for detecting and tracking potential threats, improving both response times and effectiveness. Moreover, such capabilities are also important for search and rescue and many other civilian use cases, consistent with the intention that technologies applied for defense purposes have dual-use capabilities and can also serve civilian purposes.

There are more cyber threats than ever before. These threats are getting increasingly sophisticated with the advance of AI and digitization in general. So, it is not surprising that cybersecurity technologies are also an important topic in the agreement. The increasing threat of cyber attacks, particularly against critical infrastructure and often initiated by hostile state actors, necessitates a robust cybersecurity defense in order to protect our critical infrastructure and the sensitive information it typically contains. This includes implementing advanced encryption, intrusion detection systems, and secure communication networks to safeguard against cyber threats.

The defense agreement also highlights the importance of having access to unmanned systems, or drones. There are quite a few examples of such systems, as discussed in some detail above; more can be found in my more extensive article “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?“. Two categories of drones may be interesting. One is the unmanned version, typically remotely controlled from an operations center at a distance from the actual platform. The other is the autonomous (or semi-autonomous) version, enabled by AI and many integrated sensors to operate independently of direct human control, or at least largely without real-time human intervention. Examples such as Unmanned Vehicles (UVs) and Autonomous Vehicles (AVs) are typically associated with underwater (UUV/AUV) or aerial (UAV/AAV) platforms. This kind of technology provides versatile, very flexible surveillance & reconnaissance and defense platforms that do not rely on a large staff of experts to operate. They are particularly valuable in the Arctic region, where harsh environmental conditions can limit the effectiveness of manned missions.

The development and deployment of dual-use technologies are also emphasized in the agreement. These technologies, which have both civilian and military applications, are necessary for maximizing the return on investment in defense infrastructure. It may also, at the moment, be easier to find funding if it is defense-related. Technology examples include advancements in satellite communications and broadband networks that enhance both military capabilities and civilian connectivity; in particular, how those various communications technologies can seamlessly integrate with one another is very important.

Furthermore, artificial intelligence (AI) has been identified as a transformative technology for defense and security. While AI is often referred to as a singular technology, it is actually an umbrella term that encompasses a broad spectrum of frameworks, tools, and techniques that have a common basis in models trained on large (or very large) sets of data in order to offer predictive capabilities of increasing sophistication. This leads to the expectation that, for example, AI-driven analytics and decision-making applications will enhance operational efficiency and, not unimportantly, the quality of real-time decision-making in the field (an expectation that may or may not be correct and is, for sure, somewhat optimistic, at least at a basic level). AI-enabled defense platforms or applications are likely to result in improved threat detection as well as support for strategic planning. As long as the risk of false outcomes is acceptable, such systems will enrich defense systems and provide significant advantages in managing complex and highly dynamic security environments and time-critical threat scenarios.

Lastly, the agreement stresses the need for advanced logistics and supply chain technologies. Efficient logistics are critical for sustaining military operations and ensuring the timely delivery of equipment and supplies. Automation, real-time tracking, and predictive analytics in logistics management can significantly improve the resilience and responsiveness of defense operations.

AT THIS POINT IN MY GREENLANDIC JOURNEY.

In my career, I have designed, planned, built, and operated telecommunications networks in many places under vastly different environmental conditions (e.g., geography and climate). The more I think about building robust and highly reliable communication networks in Greenland, including all the IT & compute enablers required, the more I appreciate how challenging and different it is to do so in Greenland. Tusass has built a robust and reliable transport network connecting nearly all settlements in Greenland down to the smallest size. Tusass operates and maintains this network under some of the harshest environmental conditions in the world, with an incredible dedication to all those settlements that depend on being connected to the outside world and where a compromised connection may have dire consequences for the unconnected community.

Figure 18 Shows a coastal radio site in Greenland. It illustrates one of the frequent issues of the critical infrastructure being covered by ice as well as snow. Courtesy: Tusass A/S (Greenland).

Comparing the capital spending level of Tusass in Greenland with the averages of other Western European countries, we find that Tusass does not invest significantly more of its revenue than the telco industry’s country averages across Western Europe. In fact, its 5-year average Capex-to-Revenue ratio is close to the Western European country average (19% over the period 2019 to 2023). In terms of capital investment per revenue-generating unit (RGU), however, Tusass has the highest level, at 18.7 euros per RGU per month (5-year average, 2019 to 2023), compared with an average of 6.6 euros per RGU per month across several Western European markets, as shown in the chart below (a simple sketch of how these unit metrics are derived follows below). This difference is not surprising when considering Greenland’s population compared to the populations of the countries in the comparison. Tusass’s capital investments also vary much more from year to year than those of other countries, because a substantially smaller population has to bear the burden of financing large capital-intensive projects, such as the deployment of new submarine cables (typically coming out at 30 to 50 thousand euros per km), new satellite connections (normally 10+ million euros depending on the asset arrangement), RAN modernization (e.g., 5G), and so forth. For example, the average absolute capital spend was 14.0±1.5 million euros per year between 2019 and 2022, while 2023 came in at almost 40 million euros (a little less than 4% of Denmark’s annual defense and security budget) due to, according to Tusass’s annual report, RAN modernization (e.g., 5G), satellite (e.g., Greensat), and submarine cable investments (initial seabed investigation). All these investments bring better quality through higher reliability, integrity, and availability of Greenland’s critical communications infrastructure, although there is not a large population (e.g., millions) over which to spread such substantial investments.
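For readers who want to reproduce the unit metrics used above, here is a minimal sketch of how the Capex-to-Revenue ratio and the Capex per RGU per month are computed. The input figures are illustrative assumptions of roughly the right order of magnitude, not Tusass actuals taken from the annual reports.

```python
# Minimal sketch of the two unit metrics discussed above. Input figures are
# illustrative assumptions, not actual Tusass reported numbers.
def capex_to_revenue(capex_eur: float, revenue_eur: float) -> float:
    """Capex-to-Revenue ratio (fraction of revenue reinvested as capex)."""
    return capex_eur / revenue_eur

def capex_per_rgu_per_month(capex_eur: float, rgus: float) -> float:
    """Annual capex spread over revenue-generating units, per month."""
    return capex_eur / rgus / 12

annual_capex = 15_000_000      # assumed annual capex in EUR
annual_revenue = 80_000_000    # assumed annual revenue in EUR
rgus = 70_000                  # assumed number of revenue-generating units

print(f"Capex/Revenue: {capex_to_revenue(annual_capex, annual_revenue):.0%}")
print(f"Capex per RGU per month: {capex_per_rgu_per_month(annual_capex, rgus):.1f} EUR")
```

With a small subscriber base, even a modest absolute capex figure translates into a high per-RGU number, which is exactly the pattern seen in the Greenlandic data.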

Figure 19 In a Western European context, Greenland does not, on average, invest substantially more in telecom infrastructure relative to its revenues and revenue-generating units (i.e., its customer service subscriptions), despite having a very low population of about 57 thousand and an area of 2.2 million square kilometers, larger than Alaska and only about a third smaller than India. The chart shows the average Capex-to-Revenue ratio and the Capex in euros per RGU per month per country over the last 5 years (2019 to 2023) for Greenland (e.g., Tusass annual reports) and Western Europe (using data from New Street Research).

The capital investments required to leapfrog Greenland’s communications network availability and redundancy scores beyond 70% (versus 53% and 44%, respectively, in 2023) would be very substantial, requiring additional microwave connections (including redesigns), submarine cables, new satellite arrangements, and new ground stations (e.g., in or near settlements with more than 1,000 inhabitants). The basic reliability arithmetic behind adding independent transport paths is sketched below.
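The benefit of adding independent transport paths (e.g., a satellite link or a submarine cable alongside a long-haul microwave chain) follows from standard reliability arithmetic: the connection is down only if all paths are down at the same time. The sketch below assumes independent failures and uses illustrative per-path availabilities, not measured Greenlandic figures.

```python
# Standard reliability arithmetic for redundant (parallel), independently failing
# transport paths. Per-path availabilities are illustrative assumptions.
def parallel_availability(path_availabilities):
    """Availability of a connection served by any one of several independent paths."""
    unavailability = 1.0
    for a in path_availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

single_microwave = [0.995]               # one long-haul microwave chain
with_satellite   = [0.995, 0.99]         # adding an independent satellite backhaul
with_cable_too   = [0.995, 0.99, 0.998]  # adding a submarine cable path as well

for label, paths in [("Single path", single_microwave),
                     ("Two paths", with_satellite),
                     ("Three paths", with_cable_too)]:
    a = parallel_availability(paths)
    print(f"{label}: availability ~{a:.5f} (~{(1 - a) * 365 * 24:.1f} hours of downtime/year)")
```

The point of the sketch is qualitative: each additional independent path multiplies down the residual unavailability, which is why a multi-dimensional mix of microwave, submarine cable, and satellite connectivity moves the availability score far more than strengthening any single path.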

Those investments would serve the interests of Greenlandic society as well as those of Denmark and NATO in terms of boosting the defense and security of Greenland, which is also consistent with all the relevant parties’ expressed intent of securitization of Greenland. The capital investments required to further leapfrog the safety, availability, and reliability of the critical communications infrastructure, above and beyond the current plans, would be far higher than Tusass’s (and Greenland’s) previous capital spending levels and are unlikely to be economically viable using conventional business financial metrics (e.g., net present value NPV > 0 and internal rate of return IRR > a given hurdle rate). The investment needs to be seen as geopolitically relevant for the security & safety of Greenland and, with a strong focus on dual-use technologies, also as beneficial to Greenlandic society.

Even with unlimited funding and financing to enhance Greenland’s safety and security, the challenging weather conditions and limited availability of skilled resources mean that it will take considerable time to successfully complete such an extensive program. Designing, planning, and building a solid defense and security architecture meaningful to Greenlandic conditions will take time. That said, I am also convinced that pieces of the puzzle already operational today will be important for any future work.

Figure 20 An aerial view of one of Tusass’s west coast sites supporting coastal radio as well as hosting one of the many long-haul microwave sites along the west coast of Greenland. Courtesy: Tusass A/S (Greenland).

RECOMMENDATIONS.

A multifaceted approach is essential to ensure that Greenland’s strategic and infrastructure development aligns with its unique geographical and geopolitical context.

Firstly, Greenland should prioritize the development of dual-use critical infrastructure, and the supporting architectures, that can serve both civilian and defense (& military) purposes. Examples include expanding and upgrading airport facilities (e.g., as is happening with the new airport in Nuuk), enhancing broadband internet access (e.g., as Tusass is very much focused on with additional submarine cables and satellite coverage), and developing advanced integrated communication platforms such as satellite-based and unmanned aerial systems (UAS), including payload-agnostic stratospheric high-altitude platforms (HAPs). Such dual-use infrastructure platforms could bolster national security. Moreover, they could support economic activities, improve community connectivity, and enhance the quality of life for Greenland’s residents irrespective of where they live. There is little doubt that securing funding from international allies (e.g., the European Union, NATO, …) and public-private partnerships will be crucial in financing these projects, while also ensuring that civil and defense needs are met efficiently and with the right balance.

Additionally, it is important to invest in critical enablers such as advanced monitoring and surveillance technologies for security & safety. Greenland should in particular focus on satellite monitoring, Distributed Acoustic Sensing (DAS) on its submarine cables, and unmanned vehicles for underwater and aerial applications (e.g., UUVs & UAVs). Such systems will enable a more comprehensive monitoring of activities around and over Greenland, allowing Greenland to secure its maritime routes and protect its natural resources (among other things). Enhanced surveillance capabilities will also provide multi-dimensional real-time data for national security, environmental monitoring, and disaster response scenarios. Collaboration with NATO and other international partners should focus on sharing technology know-how, expertise in general, and intelligence, ensuring that Greenland’s surveillance capabilities are on par with global standards.

Tusass’s transport network connecting (almost) all of Greenland’s settlements is an essential and critical asset for Greenland. It should be the backbone for any dual-use enhancement serving civil as well as defense scenarios. Adding additional submarine cables and more satellite connections are important (ongoing) parts of those enhancements and will substantially increase network availability and resilience, and harden it against disruptions of both natural and man-made kinds. However, increasing the communications network’s ability to fully, or even partly, function when parts of it are cut off from the few main switching centers is also worth considering. With today’s technologies this might be affordable to do, and it would fit well with Tusass’s multi-dimensional connectivity strategy using terrestrial means (e.g., microwave connections), submarine cables, and satellites.

Last but not least, considering Greenland’s limited human resources, the technologies and advanced platforms implemented must have a large degree of autonomy and self-reliance. This will likely only be achieved with solid partnerships and strong alliances with Denmark and other natural allies, including the Nordic countries in and near the Arctic Circle (e.g., Iceland, the Faroe Islands, Norway, Sweden, Finland) as well as the USA and Canada. In particular, Norway has recent experience with the dual use of ad-hoc and private 5G networking for defense applications. Joint operation of UUVs and UAVs integrated with DAS and satellite constellations could be run within the Arctic Circle. Developing and implementing advanced AI-based technologies should be a priority. Such collaborations could also make these advanced technologies much more affordable than if they served only one country. These technologies can compensate for the sparse population and vast geographical challenges that Greenland and the larger Arctic Circle pose, providing efficient and effective solutions for infrastructure management, surveillance, and economic development. Achieving a very high degree of autonomous operation of the multi-dimensional technology landscape required for leapfrogging the security of Greenland, Greenlandic society, and its critical infrastructure would be essential for Greenland to be self-reliant and less dependent on substantial external resources that may be difficult to guarantee in times of crisis.

By focusing on these recommendations, Greenland can enhance its strategic importance, improve its critical infrastructure resilience, and ensure sustainable economic growth while maintaining its unique environmental heritage.

Being a field technician in Greenland poses occupational hazards that are unknown in most other places. Apart from the harsh weather and the remoteness of many of the infrastructure locations, field engineers have on many occasions encountered hungry polar bears in the field. The polar bear is a very dangerous predator that is always on the lookout for its next protein-rich meal.

FURTHER READING.

  1. Tusass Annual Reports 2023 (more reports can be found here).
  2. Naalakkersuisut / Government of Greenland Ministry for Statehood and Foreign Affairs, “Greenland in the World — Nothing about us without us: Greenland’s Foreign, Security, and Defense Policy 2024-2033 – an Arctic Strategy.” (February 2024). The Danish title of this Document (also published in Greenlandic as the first language): “Grønland i Verden — Intet om os, uden os: Grønlands udenrigs-, sikkerheds- og forsvarspolitiske strategi for 2024-2033 — en Arktisk Strategi”.
  3. Martin Breum, “Greenland’s first security strategy looks west as the Arctic heats up.” Arctic Business Journal (February 2024).
  4. Marc Jacobsen, Ole Wæver, and Ulrik Pram Gad, “Greenland in Arctic Security: (De)securitization Dynamics under Climatic Thaw and Geopolitical Freeze.” (2024), University of Michigan Press. See also the video associated with the book launch. It’s not the best quality (sound/video), but if you just listen and follow the slides offline, it is actually really interesting.
  5. Michael Paul and Göran Swistek, “Russia in the Arctic: Development Plans, Military Potential, and Conflict Prevention,” SWP (Stiftung Wissenschaft und Politik) Research Paper, (February 2022). Some great maps are provided that clearly visualize the Arctic – Russia relationships.
  6. Marc Lanteigne, “The Rise (and Fall?) of the Polar Silk Road.” The Diplomat, (August 2022).
  7. Trym Eiterjord, “What the 14th Five-Year Plan says about China’s Arctic Interests”, The Arctic Institute, (November 2023). The link also includes references to several other articles related to the China-Arctic relationship from the Arctic Institute China Series 2023.
  8. Barry Buzan, Ole Wæver, and Jaap de Wilde, “Security: A New Framework for Analysis”, (1998), Lynne Rienner Publishers Inc.
  9. Kim Kyllesbech Larsen, The Next Frontier: LEO Satellites for Internet Services. | techneconomyblog, (March 2024).
  10. Kim Kyllesbech Larsen, Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies? | techneconomyblog, (January 2024).
  11. Narsingh Deo, “Graph Theory with Applications to Engineering and Computer Science,” Dover Publications. This book is a reasonably accessible starting point for learning more about graphs. If this is new to you, I recommend the Geeks for Geeks “Introduction to Graph Data Structure” (April 2024), which provides a quick intro to the world of graphs.
  12. Mike Dano, “Pentagon puts 5G at center of US military’s communications future”, Light Reading (December 2020).
  13. Juan Pedro Tomas, “Telia to develop private 5G for Norway’s Armed Forces”, RCR Wireless (June 2022).
  14. Iain Morris, “Telia is building 5G cell towers for the battlefield”, Light Reading (June 2023).
  15. Saleem Khawaja, “How military uses of the IoT for defense applications are expanding”, Army Technology (March 2023).
  16. Mary Lee, James Dimarogonas, Edward Geist, Shane Manuel, Ryan A. Schwankhart, Bryce Downing, “Opportunities and Risks of 5G Military Use in Europe”, RAND (March 2023).
  17. Mike Dano, “NATO soldiers test new 5G tech“, Light Reading (October 2023).
  18. NATO publication, “5G Technology: Nokia Meets with NATO Allied Command Transformation to Discuss Military Applications”, (May 2024).
  19. Michael Hill, “NATO tests AI’s ability to protect critical infrastructure against cyberattacks” (January 2023).
  20. Forsvarsministeriet, Danmark, “Dansk forsvar og sikkerhed 2024-2033.” (June 2023): Danish Defense & Security Agreement (Part I).
  21. Forsvarsministeriet, Denmark, “Anden delaftale under forsvarsforliget 2024-2033“, (April 2024): Danish Defense & Security Agreement (Part II).
  22. The State Council Information Office of the People’s Republic of China, “China’s Arctic Policy”, (January 2018).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am incredibly thankful to Tusass for providing the many great pictures used in this post, which illustrate the (good weather!) conditions that Tusass field technicians face while working tirelessly on the critical communications infrastructure throughout Greenland. While the pictures shown in this post are really beautiful and breathtaking, the weather is unforgiving, frequently stranding field workers for days at some of those remote site locations. Add to this picture the additional danger of a hungry polar bear that will go to great lengths to get its weekly protein intake.

A Single Network Future.

How to think about a single network future? What does it entail, and what is it good for?

Well, imagine a world where your mobile device, unchanged and unmodified, connects to the nearest cell tower and to satellites orbiting Earth, ensuring customers are always best connected and getting the best service, irrespective of where they are. Satellite-based supplementary coverage (from space) seeks to deliver on this vision by leveraging superior economic coverage, in terms of a larger footprint (than feasible with terrestrial networks) and better latency (compared to geostationary satellite solutions), to bring connectivity directly to unmodified consumer handsets (e.g., smartphones, tablets, IoT devices), enhance emergency communication, and foster advancements in space-based technologies. The single network future does not only require certain technological developments, such as the 3GPP Non-Terrestrial Network standardization efforts (e.g., Release 17 and onwards). We also need regulatory spectrum policy to change, allowing today’s terrestrially and regulatorily bounded cellular frequency spectrum to be re-used by satellite operators, providing, under satellite coverage in areas without terrestrial communications infrastructure, the same mobile service that mobile customers enjoy within the normal terrestrial cellular network.

It is estimated that roughly 2.9 billion people, or a bit less than 40% of the world’s population, have never used the internet (as of 2023). This split, with about 60% of the world’s population having access to the internet and about 40% not, is the digital divide: a massive gap most pronounced in developing countries, rural & remote areas, and among older populations and economically disadvantaged groups. Most of the 2.9 billion on the wrong side of the divide live in areas lacking the terrestrial technology infrastructure that would readily facilitate access to the internet. These areas lack communications infrastructure because it may be impractical or (and) uneconomical to deploy, including the difficulty of monetizing it and yielding a positive return on investment over a relatively short period. Satellites that are allowed by regulatory means to re-use terrestrial cellular spectrum for supplementary (to terrestrial) coverage can go a long way toward closing the digital divide (as long as affordable mobile devices and services are available to the unconnected).

This blog explores some of the details of the, in my opinion, forward-thinking FCC Supplementary Coverage from Space (SCS) framework and the vision of a Single Network, in which mobile cellular communication is not limited to terra firma but is supplemented and enhanced by satellites, ensuring connectivity everywhere.

SUPPLEMENTARY COVERAGE FROM SPACE.

The Federal Communications Commission (FCC) recently published a new regulatory framework (“Report & Order and further notice of proposed rulemaking“) designed to facilitate the integration of satellite and terrestrial networks to provide Supplemental Coverage from Space (SCS), marking a significant development toward achieving ubiquitous connectivity. In the following, I will use the terms “SCS framework” and “SCS initiative” to refer to the FCC’s regulatory framework. The SCS initiative, which, to my knowledge, is the first of its kind globally, aims to allow satellite operators and terrestrial service providers to collaborate, leveraging spectrum previously allocated exclusively for terrestrial services to extend connectivity directly to consumer handsets, what is called satellite direct-to-device (D2D), especially in remote, unserved, and underserved areas. The proposal is expected to enhance emergency communication availability, foster advancements in space-based technologies, and promote the innovative and efficient use of spectrum resources.

The “Report and Order” formalizes a spectrum-use framework, adopting a secondary mobile-satellite service (MSS) allocation in specific frequency bands devoid of primary non-flexible-use legacy incumbents, both federal and non-federal. Let us break this down in a bit more informal language. The FCC proposes to designate certain parts of the radio frequency spectrum (see below) for mobile-satellite services on a “secondary” basis. In spectrum management, an allocation is deemed “secondary” when it allows for the operation of a service without causing interference to the “primary” services in the same band. This means that the supplementary satellite service, deemed secondary, must accept interference from primary services without claiming protection. Moreover, this only applies to locations that are devoid of use of the given frequency band by existing “primary” spectrum users (i.e., incumbents), non-federal as well as federal.

The setup encourages collaboration and permits supplemental coverage from space (SCS) in designated bands where terrestrial licensees, holding all licenses for a channel throughout a geographically independent area (GIA), lease access to their terrestrial spectrum rights to a satellite operator. Furthermore, the framework establishes entry criteria for satellite operators to apply for, or modify, an existing “part 25” space station license for SCS operations; part 25 is the set of regulatory requirements established by the FCC governing the licensing and operation of satellite communications in the United States. The framework also outlines a licensing-by-rule approach for terrestrial devices acting as SCS earth stations, referring to a regulatory and technological framework where conventional consumer devices, such as smartphones or tablets, are equipped to communicate directly with satellites (after all, we are talking about direct-to-device).

The above picture showcases a moment in the remote Arizona desert where an individual receives a direct-to-device signal from a Low-Earth Orbit (LEO) satellite on his or her smartphone. The remote area has no terrestrial cellular coverage, and supplementary coverage from space is the only way for individuals with a subscription to access their cellular services or make a distress call, apart from using a costly satellite phone service. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and the possibly limited available SCS spectrum bandwidth.

Additionally, the Further Notice of Proposed Rulemaking seeks further commentary on aspects such as 911 service provision and the protection of radio astronomy, indicating the FCC’s consistent commitment to refining and expanding the SCS framework responsibly. This commitment ensures that the framework will continue to evolve, adapting to new challenges and opportunities and providing a solid foundation for future developments.

BALANCING THE AIRWAVES IN THE USA.

Two agencies in the US manage the frequency spectrum: the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA). They collaboratively manage and coordinate frequency spectrum use and reuse for satellites, among other applications, within the United States. This partnership is important for maintaining a balanced approach to spectrum management that supports federal and non-federal needs, ensuring that satellite communications and other services can operate effectively without causing harmful interference to each other.

The Federal Communications Commission, the FCC for short, is an independent agency that exclusively regulates all non-Federal spectrum use across the United States. The FCC allocates spectrum licenses for commercial use, typically through spectrum auctions. New or re-purposed commercial spectrum has typically been reclaimed from other uses, both federal and existing commercial ones. Spectrum can be re-purposed either because newer, more spectrally efficient technologies become available (e.g., the transition from analog to digital broadcasting) or because it becomes viable to shift operation to other spectrum bands with less commercial value (and, of course, without jeopardizing existing operational excellence). It is also possible that spectrum previously reserved for exclusive federal use (e.g., military applications, fixed satellite uses, etc.) can be shared, as is the case with the Citizens Broadband Radio Service (CBRS), which allows non-federal parties access to 150 MHz in the 3.5 GHz band (i.e., band 48). However, it has recently been concluded that (centralized) dynamic spectrum sharing only works in certain use cases and is associated with considerable implementation complexities. Having multiple parties with possibly vastly different requirements co-exist within a given band is still a work in progress and may not be consistent with the commercial spectrum operation required for high-quality broadband cellular service.

Alongside the FCC, the National Telecommunications and Information Administration (NTIA) plays a crucial role in US spectrum management. The NTIA is the sole authority responsible for authorizing Federal spectrum use. It also serves as the principal adviser on telecommunications policies to the President of the United States, coordinating the views of the Executive Branch. The NTIA manages a significant portion of the spectrum, approximately 2,398 MHz (69%), within the range of 225 MHz to 3.7 GHz, known as the ‘beachfront spectrum’. Of the total 3,475 MHz, 591 MHz (17%) is exclusively for Federal use, and 1,807 MHz (52%) is shared or coordinated between Federal and non-Federal entities. This leaves 1,077 MHz (31%) for exclusive commercial use, which falls under the management of the FCC.
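As a quick sanity check of the “beachfront spectrum” breakdown quoted above, the arithmetic can be reproduced directly from the band edges and the two quoted allocations:

```python
# Reproducing the 'beachfront spectrum' breakdown (225 MHz to 3.7 GHz) quoted above.
total_mhz = 3_700 - 225                 # 3,475 MHz in total
exclusive_federal = 591                 # MHz, exclusive Federal use
shared_or_coordinated = 1_807           # MHz, shared/coordinated Federal and non-Federal
exclusive_commercial = total_mhz - exclusive_federal - shared_or_coordinated

for label, mhz in [("Exclusive Federal", exclusive_federal),
                   ("Shared / coordinated", shared_or_coordinated),
                   ("Exclusive commercial (FCC)", exclusive_commercial)]:
    print(f"{label}: {mhz} MHz ({mhz / total_mhz:.0%})")
ntia_managed = exclusive_federal + shared_or_coordinated
print(f"NTIA-managed (Federal + shared): {ntia_managed} MHz ({ntia_managed / total_mhz:.0%})")
```

Running this reproduces the 17%, 52%, 31%, and 69% shares quoted above, confirming that the figures are internally consistent.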

NTIA, in collaboration with the FCC, has been instrumental in freeing up substantial C-band spectrum, 480 MHz in total, of which 100 MHz is conditioned on prioritized sharing (i.e., Auction 105), for commercial and shared use. This spectrum has subsequently been auctioned off over the last three years, raising USD 109 billion. In US Dollar (USD) per MHz per population count (pop), this works out to, on average, ca. USD 0.68 per MHz-pop for the US C-band auctions, compared to USD 0.13 per MHz-pop in European C-band auctions and USD 0.23 per MHz-pop in APAC auctions. It should be remembered that United States exclusive-use spectrum licenses can be regarded as an indefinite-lived intangible asset, while European spectrum rights expire after 10 to 20 years. This may explain a big part of the pricing difference between US spectrum and that of Europe and Asia.
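As a quick back-of-the-envelope check of the figures in the two paragraphs above, the following sketch recomputes the beachfront spectrum split and the C-band price per MHz-pop. Note that the US population figure (~331 million) is my own assumption; the other inputs come from the text.

```python
# Back-of-envelope check of the US "beachfront" spectrum split (225 MHz - 3.7 GHz)
# and the C-band auction price per MHz-pop. The ~331 million US population is an
# assumption on my part; all other inputs come from the text above.

total_mhz    = 3_475          # 225 MHz to 3.7 GHz
federal_only = 591            # exclusive Federal use
shared       = 1_807          # shared / coordinated Federal and non-Federal
commercial   = total_mhz - federal_only - shared   # exclusive commercial use

for label, mhz in [("Federal only", federal_only), ("Shared", shared), ("Commercial", commercial)]:
    print(f"{label:13s}: {mhz:5d} MHz ({mhz / total_mhz:5.1%})")
# -> Federal only: 591 MHz (17.0%), Shared: 1807 MHz (52.0%), Commercial: 1077 MHz (31.0%)

# C-band auctions: ~USD 109 billion raised for 480 MHz of spectrum.
auction_usd     = 109e9
auction_mhz     = 480
us_population   = 331e6       # assumed, roughly the 2020 census level
usd_per_mhz_pop = auction_usd / (auction_mhz * us_population)
print(f"US C-band price: ~USD {usd_per_mhz_pop:.2f} per MHz-pop")
# ~0.68-0.69 depending on the population figure used
```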

The FCC and the NTIA jointly manage all the radio spectrum in the United States, licensed (e.g., cellular mobile frequencies, TV signals) and unlicensed (e.g., WiFi, microwave ovens). The NTIA oversees spectrum use for Federal purposes, while the FCC is responsible for non-Federal use. In addition to its role in auctioning spectrum licenses, the FCC is also authorized to redistribute licenses. This authority allows the FCC to play a vital role in ensuring efficient spectrum use and adapting to changing needs.

THE SINGLE NETWORK.

The Supplemental Coverage from Space (SCS) framework creates an enabling regulatory environment for satellite operators to provide mobile broadband services to unmodified mobile devices (i.e., D2D services), such as smartphones and other terrestrial cellular devices, in rural and remote areas where no or only scarce terrestrial infrastructure exists. By leveraging SCS, terrestrial cellular broadband services will be enhanced, and the combination may result in a unified network that ensures continuous and ubiquitous access to communication services, overcoming geographical and environmental challenges. This is the inception of the Single Network, which can provide seamless connectivity across diverse environments, including remote, unserved, and underserved areas.

The above picture illustrates the idea behind the FCC’s SCS framework and “Single Network” at a high level. In this example, an LEO satellite provides direct-to-device (D2D) supplementary coverage in rural and remote areas, using an advanced phased-array antenna, to unmodified user equipment (e.g., smartphone, tablet, cellular-IoT, …) in the same frequency band (i.e., f1,sat) that a terrestrial operator owns and uses in its cellular network (f1). The LEO satellite operator must partner with the terrestrial spectrum owner to manage and coordinate the frequency re-use in areas where the frequency owner (i.e., the mobile/cellular operator) has no terrestrial infrastructure to deliver a service to its customers (typically remote, rural areas where terrestrial infrastructure is impractical and uneconomic to deploy). The satellite operator has to avoid geographical regions where the frequency (e.g., f1) is used by the spectrum owner, typically urban, suburban, and rural areas where terrestrial cellular infrastructure has already been deployed and service is offered.

How does the FCC’s “Single Network” differ from the 3GPP Non-Terrestrial Network (NTN) standardization? Simply put, the “Single Network” is a regulatory framework that paves the way for satellite operators to re-use the terrestrial cellular spectrum on their non-terrestrial (satellite-based) networks. The 3GPP NTN standardization initiatives, e.g., Releases 16, 17 and 18+, are a technical effort to incorporate satellite communication systems within the 5G network architecture. In short, the 3GPP releases relate to how NTN should function with terrestrial 5G networks as follows:

  • Release 15 laid the groundwork for 5G New Radio (NR) and started to consider the broader picture of integrating non-terrestrial networks with terrestrial 5G networks. It marks the beginning of discussions on how to accommodate NTNs within the 5G framework, focusing on study items rather than specific NTN standards.
  • Release 16 took significant steps toward defining NTN by including study items and work items specifically aimed at understanding and specifying the adjustments needed for NR to support communication with devices served by NTNs. Release 16 focuses on identifying modifications to the NR protocol and architecture to accommodate the unique characteristics of satellite communication, such as higher latency and different mobility characteristics compared to terrestrial networks.
  • Release 17 brought further advancements in NTN specifications, aiming to integrate specific technical solutions and standards for NTNs within the 5G architecture. This effort includes detailed specifications for supporting direct connectivity between 5G devices and satellites, covering aspects like signal timing, frequency bands, and protocol adaptations to handle the distinct challenges posed by satellite communication, such as the Doppler effect and signal delay.
  • Release 18 and beyond will continue to evolve the standards to enhance NTN support, addressing emerging requirements and incorporating feedback from early implementations. These efforts include refining and expanding NTN capabilities to support a broader range of applications and services, improving integration with terrestrial networks, and enhancing performance and reliability.

The NTN architecture ensures (should ensure) that satellite communications systems can seamlessly integrate into 5G networks, supporting direct communication between satellites and standard mobile devices. This integration idea includes adapting 5G protocols and technologies to accommodate the unique characteristics of satellite communication, such as higher latency and different signal propagation conditions. The NTN standardization aims to expand the reach of 5G services to global scales, including maritime, aerial, and sparsely populated land areas, thereby aligning with the broader goal of universal service coverage.

The FCC’s vision of a “single network” and the 3GPP NTN standardization both aim to integrate satellite and terrestrial networks to extend connectivity, albeit from slightly different angles. The FCC’s concept provides a regulatory and policy framework that enables such integration across different network types and service providers, focusing on the broad goal of universal connectivity and laying the regulatory foundation for offering satellite and terrestrial cellular network services to the same unmodified device portfolio. In contrast, 3GPP’s NTN standardization provides the technical specifications and protocols, particularly within next-generation (5G) networks, required to realize that vision in practice. Together, they are highly synergistic, addressing the regulatory and technical challenges of creating a seamlessly connected world.

Depicting a moment in the Colorado mountains, a hiker receives supplementary coverage directly from a Low Earth Orbit (LEO) satellite on their (unmodified) smartphone. The remote area has no terrestrial cellular coverage. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and the possibly limited SCS spectrum bandwidth available.

SINGLE NETWORK VS SATELLITE ATC.

The FCC’s Single Network vision and the Supplemental Coverage from Space (SCS) concept, akin to the Satellite Ancillary Terrestrial Component (ATC) architectural concept (an area in which I spent a significant portion of my career, first operationalizing it and then defending it … a different story though), share a common goal of merging satellite and terrestrial networks to fortify connectivity. These strategies, driven by the desire to enhance the reach and reliability of communication services, particularly in underserved regions, hold the promise of expanded service coverage.

The Single Network and SCS initiatives broadly focus on comprehensively integrating satellite services with terrestrial infrastructures, aiming to directly connect satellite systems with standard consumer devices across various services and frequency bands. This expansive approach seeks to ensure ubiquitous connectivity, significantly closing the coverage gaps in current network deployments. Conversely, the Satellite ATC concept is more narrowly tailored, concentrating on using terrestrial base stations to complement and enhance satellite mobile services. This method explicitly addresses the need for improved signal availability and service reliability in urban or obstructed areas by integrating terrestrial components within the satellite network framework.

Although the Single Network and Satellite ATC share goals, the paths to achieving them diverge significantly in application, regulatory considerations, and technical execution. The SCS concept, for instance, involves navigating regulatory challenges associated with direct-to-device satellite communications, including the complexities of spectrum sharing and ensuring the harmonious coexistence of satellite and terrestrial services. This highlights the intricate nature of network integration and the regulatory and technical hurdles in this field.

The distinction between the two concepts lies in their technological and implementation specifics, regulatory backdrop, and focus areas. While both aim to weave together the strengths of satellite and terrestrial technologies, the Single Network and SCS framework envisions a more holistic integration of connectivity solutions, contrasting with the ATC’s targeted approach to augmenting satellite services with terrestrial network support. This illustrates the evolving landscape of communication networks, where the convergence of diverse technologies opens new avenues for achieving seamless and widespread connectivity.

THE RELATED SCS FREQUENCIES & SPECTRUM.

The FCC has designated the following frequency bands, and the total bandwidth associated with each, for Supplemental Coverage from Space (SCS):

  • 70 MHz @ 600 MHz Band
  • 96 MHz @ 700 MHz Band
  • 50 MHz @ 800 MHz Band
  • 130 MHz @ Broadband PCS
  • 10 MHz @ AWS-H Block

The above comprises a total frequency bandwidth of 350+ MHz, currently used for terrestrial cellular services across the USA. According to the FCC, the above frequency bands and spectrum can also be used for satellite direct-to-device SCS services to normal mobile devices without built-in satellite transceiver functionality. Of course, this is subject to spectrum owners’ approval and contractual and commercial arrangements.

Moreover, the 758-769/788-799 MHz band, licensed to the First Responder Network Authority (FirstNet), is also eligible for SCS under the established framework. This frequency band has been selected to enhance connectivity in remote, unserved, and underserved areas by facilitating collaborations between satellite and terrestrial networks within these specific frequency ranges.

SpaceX recently reported a peak download speed of 17 Mbps from a satellite directly to an unmodified Samsung Android phone using 2×5 MHz of T-Mobile USA’s PCS spectrum (i.e., the G-block). The speed corresponds to a downlink spectral efficiency of ~3.4 Mbps/MHz/beam, which is pretty impressive. Using this as rough guidance for the ~350 MHz, we should expect an approximate download speed of ca. 600 Mbps (@ 175 MHz downlink) per satellite beam. As satellite antenna technology improves, we should expect spectral efficiency to increase as well, resulting in higher downlink throughput.
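The arithmetic behind that projection is simple enough to sketch. The only assumption is that per-beam throughput scales roughly linearly with downlink bandwidth, which ignores scheduling, interference, and link-budget effects.

```python
# Rough spectral-efficiency arithmetic behind the SpaceX/T-Mobile SCS example above.
# Assumes throughput scales roughly linearly with downlink bandwidth - a deliberate
# simplification that ignores scheduling, interference, and link-budget effects.

peak_dl_mbps = 17.0   # reported peak download speed to an unmodified smartphone
dl_bandwidth = 5.0    # MHz downlink (2x5 MHz FDD in T-Mobile's PCS G-block)

spectral_eff = peak_dl_mbps / dl_bandwidth         # Mbps per MHz per beam
print(f"Spectral efficiency: ~{spectral_eff:.1f} Mbps/MHz/beam")   # ~3.4

# Projecting to the ~350 MHz of SCS-eligible spectrum (roughly 175 MHz downlink):
dl_scs_bandwidth = 175.0   # MHz, assuming ~half of the ~350 MHz is downlink
projected_dl = spectral_eff * dl_scs_bandwidth
print(f"Projected per-beam downlink: ~{projected_dl:.0f} Mbps")     # ~595, i.e., ca. 600 Mbps
```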

SCS INFANCY, BUT ALIVE AND KICKING.

In the FCC’s framework on the Supplemental Coverage from Space (SCS), the partnership between SpaceX and T-Mobile is described as a collaborative effort where SpaceX would utilize a block of T-Mobile’s mid-band Personal Communications Services (PCS G-Block) spectrum across a nationwide footprint. This initiative aims to provide service to T-Mobile’s subscribers in rural and remote locations, thereby addressing coverage gaps in T-Mobile’s terrestrial network. The FCC has facilitated this collaboration by allowing SpaceX and T-Mobile to deploy and test their proposed SCS system while their pending applications and the FCC’s proceedings continue.

Specifically, SpaceX has been authorized (by the FCC’s Space Bureau) to deploy a modified version of its second-generation (Gen2) Starlink satellites with SCS-capable antennas that can operate in specific frequencies. The FCC authorized experimental testing at terrestrial locations for SpaceX and T-Mobile to progress with their SCS system, although SpaceX’s requests for broader authority remain under consideration by the FCC.

Lynk Global has partnered with mobile network operators (MNOs) outside the United States to allow the MNOs’ customers to send texts using Lynk’s satellite network. In 2022, the FCC authorized Lynk’s request to operate a non-geostationary satellite orbit (NGSO) satellite system (e.g., Low-Earth Orbit, Medium Earth Orbit, or Highly-Elliptical Orbit) intended for text message communications in locations outside the United States and in countries where Lynk has obtained agreements with MNOs and the required local regulatory approval. Lynk aims to deploy ten mobile-satellite service (MSS) satellites as part of a “cellular-based satellite communications network” operating on cellular frequencies globally in the 617-960 MHz band (i.e., within the UHF band), targeting international markets only.

Lynk has announced contracts with more than 30 MNOs (the full list is not published) covering over 50 countries for Lynk’s “satellite-direct-to-standard-mobile-phone-system,” which provides emergency alerts and two-way Short Message Service (SMS) messaging. As of March 2023, Lynk has three LEO satellites in orbit, with 50 additional satellites planned by the end of 2024 and a constellation of up to 5,000 satellites planned in the longer term, substantially broadening its geographic coverage and service capabilities. Lynk recently claimed that, in Hawaii, it achieved repeated successful downlink speeds above 10 Mbps with several mass-market unmodified smartphones (10+ Mbps indicates a spectral efficiency of 2+ Mbps/MHz/beam). Lynk has also recently (July 2023) demonstrated, as a proof of concept, phone calls via its LEO satellite between two unmodified smartphones (see the YouTube link).

AST SpaceMobile is also noted in the FCC framework for its partnerships with several MNOs, including AT&T and Vodafone, to develop its direct-to-device, or satellite-to-smartphone, service. Overall, AST SpaceMobile has announced that it has entered into “more than 40 agreements and understandings with mobile network operators globally” (e.g., AT&T, Vodafone, Rakuten, Orange, Telefonica, TIM, MTN, Ooredoo, …). In 2020, AST filed applications with the FCC seeking U.S. market access for gateway links in the V-band for its SpaceMobile satellite system, which is planned to consist of 243 LEO satellites. AST clarified that its operation in the United States would collaborate with terrestrial licensee partners without seeking to operate independently on terrestrial frequencies.

AST SpaceMobile BlueWalker 3 (BW3) LEO satellite 64 square-meter phased array. Source: AST SpaceMobile.

AST SpaceMobile’s satellite antenna design marks a pioneering step in satellite communications: AST recently deployed the largest commercial phased array antenna into Low Earth Orbit (LEO). On September 10, 2022, AST SpaceMobile launched its prototype direct-to-device testbed BlueWalker 3 (BW3) satellite. This mission marked a significant step forward in the company’s efforts to test and validate its technology for providing direct-to-cellphone communication via a Low Earth Orbit (LEO) satellite network. The launch of BW3 aimed to demonstrate the capabilities of its large phased array antenna, a critical component of AST’s targeted global broadband service.

The BW3’s phased array antenna, with a surface area of 64 square meters, is technologically quite advanced (actually, I find it very beautiful and can’t wait to see the real thing for their commercial constellation) and designed for dynamic beamforming, as one would expect for a state-of-the-art direct-to-device satellite. The BlueWalker 3, a proof-of-concept design, supports a frequency range of 100 MHz in the UHF band, with 5 MHz channels and an expected spectral efficiency of 3 Mbps/MHz/channel. This capability is crucial for establishing direct-to-device communications, as it allows the satellite to concentrate its signals on specific geographic areas or directly on mobile devices, enhancing the quality of coverage and minimizing potential interference with terrestrial networks. AST SpaceMobile is expected to launch the first 5 of 243 LEO satellites, BlueBirds, on SpaceX’s Falcon 9 in the 2nd quarter of 2024. The first 5 will be similar to the BW3 design, including the phased array antenna. Subsequent AST satellites are expected to be larger, with a substantially up-scaled phased array antenna supporting an even larger frequency span covering most of the UHF band and supporting 40 MHz channels with peak download speeds of 120 Mbps (using their estimated 3 Mbps/MHz/channel).

The above examples underscore the ongoing efforts and potential of satellite service providers like Starlink/SpaceX, Lynk Global, and AST SpaceMobile within the evolving SCS framework. The examples highlight the collaborative approach between satellite operators and terrestrial service providers to achieve ubiquitous connectivity directly to unmodified cellular consumer handsets.

PRACTICAL PREREQUISITES.

In general, the satellite operator would need a terrestrial frequency license owner willing to lease out its spectrum for services in areas where that spectrum has not been deployed on its network infrastructure or where the license holder has no infrastructure deployed. And, of course, a terrestrial communication service provider owning spectrum and interested in extending services to remote areas would need a satellite operator to provide direct-to-device services to its customers. Eventually, terrestrial operators might see an economic benefit in decommissioning uneconomical rural terrestrial infrastructure and providing satellite broadband cellular services instead. This may be particularly interesting in low-density rural and remote areas supported today by a terrestrial communications infrastructure.

Under the SCS framework, terrestrial spectrum owners can make leasing arrangements with satellite operators. These agreements would allow satellite services to utilize the terrestrial cellular spectrum for direct satellite communication with devices, effectively filling coverage gaps with satellite signals. This kind of arrangement could be similar to the one between T-Mobile USA and StarLink to offer cellular services in the absence of T-Mobile cellular infrastructure, e.g., mainly remote and rural areas.

As the regulatory body for non-federal frequencies, the FCC delineates a regulatory environment that specifies the conditions under which the spectrum can be shared or used by terrestrial and satellite services, minimizing the risk of harmful interference (which both parties should be interested in anyway). This includes setting technical standards and identifying suitable frequency bands supporting dual use. The overarching goal is to bolster the reach and reliability of cellular networks in remote areas, enhancing service availability.

For terrestrial cellular networks and spectrum owners, this means adhering to FCC regulations that govern these new leasing arrangements and the technical criteria designed to protect incumbent services from interference. The process involves meticulous planning and, if necessary, implementing measures to mitigate interference, ensuring that the integration of satellite and terrestrial networks proceeds smoothly.

Moreover, the SCS framework should leapfrog innovation and allow network operators to broaden their service offerings into areas where they are not present today. This could include new applications, from emergency communications facilitated by satellite connectivity to IoT deployments and broadband access in underserved locations.

Depicting a moment somewhere in the Arctic (e.g., Greenland), an eco-tourist receives supplementary coverage directly from a Low Earth Orbit (LEO) satellite on their (unmodified) smartphone. The remote area has no terrestrial cellular coverage. It should be remembered that the SCS service is likely to be capacity-limited due to the typically large satellite coverage area and the possibly limited SCS spectrum bandwidth available. Several regulatory, business, and operational details must be in place for the above service to work.

TECHNICAL PREREQUISITES FOR DELIVERING SATELLITE SCS SERVICES.

Satellite constellations providing D2D services are naturally targeting supplementary coverage of geographical areas where no terrestrial cellular services are present at the target frequency bands used by the satellite operator.

Once the satellite operator has gained access to the terrestrial cellular spectrum for its supplementary-coverage direct-to-device service, it faces a range of satellite technical requirements that either need to be in place for an existing constellation (though that might require some degree of foresight) or that a new satellite would need to be designed for, consistent with the frequency band and range, the targeted radio access technology such as LTE or 5G (assuming the ambition eventually goes beyond messaging), and the device portfolio that the service aims to support (e.g., smartphone, tablet, IoTs, …). In general, I would assume that existing satellite constellations would not automatically support SCS services they have not been designed for upfront. Designing for SCS upfront would make sense (economically) if a spectrum arrangement already exists between the satellite operator and the terrestrial cellular spectrum owner.

Direct-to-device LEO satellites connect directly to unmodified mobile devices such as smartphones, tablets, or other personal devices. This necessitates a design that can accommodate the low-power signals and small antennas typically found on consumer devices. Therefore, these satellites often incorporate advanced beamforming capabilities through phased array antennas to focus signals precisely on specific geographic locations, enhancing signal strength and reliability for individual users. Moreover, the transceiver electronics must be highly sensitive and capable of handling simultaneous connections, each potentially requiring different levels of service quality. As the satellite provides services over remote and sparsely populated areas, at least initially, there is no need for the high-capacity designs that terrestrial cellular-like coverage areas and large frequency bandwidths would typically require. The satellites are designed to operate in frequency bands compatible with terrestrial consumer devices, necessitating coordination and compliance with regulatory standards beyond those of traditional satellite services.

Implementing satellite-based SCS successfully hinges on meeting many fairly sophisticated technical requirements, such as phased array antenna design and transceiver electronics, enabling direct communication with consumer devices on the ground. The phased array antenna, a cornerstone of this architecture, must possess advanced beamforming capabilities, allowing it to dynamically focus and steer its signal beams towards specific geographic areas or even moving targets on the Earth’s surface. This flexibility is super important for maximizing the coverage and quality of the communication link with individual devices, which might be spread across diverse and often challenging terrains. The antenna design needs to be wideband and highly efficient to handle the broad spectrum of frequencies designated for SCS operations, ensuring compatibility with the communication standards used by consumer devices (e.g., 4G LTE, 5G).

An illustration of a LEO satellite with a phased array antenna providing direct-to-smartphone connectivity at an 850 MHz carrier frequency. For all practical purposes, the antenna beamforming at LEO altitude can be considered far-field. Thus, the electromagnetic fields behave as planar waves, and the antenna array becomes more straightforward to design and to manage in terms of performance (e.g., beam steering at very high accuracy).

Designing phased array antennas for satellite-based direct-to-device services, envisioned by the SCS framework, requires considering various technical design parameters to ensure the system’s optimal performance and efficiency. These antennas are crucial for effective direct-to-device communication, encompassing multiple technical and practical considerations.

The SCS frequency band not only determines the operational range of the antenna but also its ability to communicate effectively with ground-based devices through the Earth’s atmosphere; in this respect, lower frequencies are better than higher frequencies. The frequency, or frequencies, significantly influences the overall design of the antenna, affecting everything from its physical dimensions to the materials used in its construction. The spacing and configuration of the antenna elements are carefully planned to prevent interference while maximizing coverage and connectivity efficiency. Typically, element spacing is kept around half the wavelength of the operating frequency, and the configuration involves choosing between linear, planar, or circular arrays.
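To give a feel for what the half-wavelength rule of thumb implies physically, the small sketch below computes the approximate element spacing for a few cellular carrier frequencies. The 850 MHz value is taken from the illustration above; the other two frequencies are illustrative assumptions of my own.

```python
# Half-wavelength element spacing rule of thumb for a phased array.
# 850 MHz is the carrier used in the illustration above; 700 and 1900 MHz are
# added for comparison as typical low- and mid-band cellular frequencies.

C = 299_792_458.0   # speed of light, m/s

def half_wavelength_spacing_cm(freq_hz: float) -> float:
    """Return the ~lambda/2 element spacing in centimeters for a given carrier frequency."""
    wavelength_m = C / freq_hz
    return 100.0 * wavelength_m / 2.0

for f_mhz in (700, 850, 1900):
    print(f"{f_mhz:5d} MHz -> element spacing ~{half_wavelength_spacing_cm(f_mhz * 1e6):.1f} cm")
# 700 MHz -> ~21.4 cm, 850 MHz -> ~17.6 cm, 1900 MHz -> ~7.9 cm
```

The lower the frequency, the wider the element spacing, which is one reason direct-to-device arrays operating in the UHF range end up physically large (such as the 64 square-meter BW3 array discussed earlier).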

Beamforming capabilities are at the heart of the phased array design, allowing for the precise direction of communication beams toward targeted areas on the ground. This necessitates advanced signal processing to dynamically adjust signal phases and amplitudes, enabling the system to focus its beams, compensate for the satellite’s movement, and handle numerous connections.

The antenna’s polarization strategy is chosen to enhance signal reception and minimize interference. Dual (e.g., horizontal & vertical) or circular (e.g., right- or left-hand) polarization ensures compatibility with a wide range of devices as well as more efficient spectrum use. Polarization refers to the orientation of the electromagnetic waves transmitted or received by an antenna. In satellite communications, polarization is used to differentiate between signals and increase the capacity of the communication link without requiring additional frequency bandwidth.

Physical constraints of size, weight, and form factor are also critical, dictated by the satellite’s design and launch parameters, including the launch cost. The antenna must be compact and lightweight to fit within the satellite’s structure and comply with launch weight limitations, impacting the satellite’s overall design and deployment mechanisms.

Beyond the antenna, the transceiver electronics within the satellite play an important role. These must be capable of handling high-throughput data to accommodate simultaneous connections, each demanding reliable and quality service. Sensitivity is another critical factor, as the electronics need to detect and process the relatively weak signals sent by consumer-grade devices, which possess much less power than traditional ground stations. Moreover, given the energy constraints inherent in satellite platforms, these transceiver systems must efficiently manage the power to maintain optimal operation over long durations as it directly relates to the satellite’s life span.

Operational success also depends on the satellite’s compliance with regulatory standards, particularly frequency use and signal interference. Achieving this requires a deep integration of technology and regulatory strategy, ensuring that the satellite’s operations do not disrupt existing services and align with global communication protocols.

CONCERNS.

The FCC’s Supplemental Coverage from Space (SCS) framework has been met with both anticipation and critique, reflecting diverse stakeholder interests and concerns. While the framework aims to enhance connectivity by integrating satellite and terrestrial networks, several critiques and concerns have been raised:

Interference concerns: A primary critique revolves around potential interference with existing terrestrial services. Stakeholders worry that SCS operations might disrupt the current users, including terrestrial mobile networks and other satellite services. A significant challenge is ensuring that SCS services coexist harmoniously with these incumbent services without causing harmful interference.

Certification of terrestrial mobile devices: The FCC requires that terrestrial mobile devices be certified for SCS. The concerns expressed have been multifaceted, reflecting the complexities of integrating satellite communication capabilities into standard consumer mobile devices. These concerns, as highlighted in particular in the FCC’s SCS framework, revolve around technical, regulatory, and practical aspects. As 3GPP NTN standardization is considering changes to mobile devices that would enhance direct connectivity between device and satellite, it may make sense to certify at least those devices developed for NTN communication.

Spectrum allocation and management: Spectrum allocation for SCS poses another concern, particularly the repurposing of spectrum bands previously dedicated to other uses. Critics argue that spectrum reallocation must be carefully managed to avoid disadvantaging existing services or limiting future innovation in those bands.

Regulatory and licensing framework: The complexity of the regulatory and licensing framework for SCS services has also been a point of contention. Critics suggest that the framework could be burdensome for new entrants or more minor players, potentially stifling innovation and competition in the satellite and telecommunications industries.

Technical and operational challenges: The technical requirements for SCS, including the need for advanced phased array antennas and the integration of satellite systems with terrestrial networks, pose significant challenges. Concerns about the feasibility and cost of developing and deploying the necessary technology at scale have been raised.

Market and economic impacts: There are concerns about the SCS framework’s economic implications, particularly its impact on existing market dynamics. Critics worry that the framework might favor certain players or technologies, potentially leading to market consolidation or barriers to entry for innovative solutions.

Environmental and space traffic management: The increased deployment of satellites for SCS services raises concerns about space debris and the sustainability of space activities. Critics emphasize the need for robust space traffic management and debris mitigation strategies to ensure the long-term viability of space operations.

Global coordination and equity: The global nature of satellite communications underscores the need for international coordination and equitable access to SCS services. Critics point out the importance of ensuring that the benefits of SCS extend to all regions, particularly those currently underserved by telecommunications infrastructure.

FURTHER READING.

  1. FCC-CIRC2403-03, Report and Order and further notice of proposed rulemaking, related to the following context: “Single Network Future: Supplemental Coverage from Space” (February 2024).
  2. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks.”, Wiley (2024). A recommended reading for deep diving into NTN networks of satellites, typically the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  3. Kim Kyllesbech Larsen, The Next Frontier: LEO Satellites for Internet Services. | techneconomyblog, (March 2024).
  4. Kim Kyllesbech Larsen, Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies? | techneconomyblog, (January 2024).
  5. Kim Kyllesbech Larsen, Spectrum in the USA – An overview of Today and a new Tomorrow. | techneconomyblog, (May 2023).
  6. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is also quite good: Starlink.
  7. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  8. Professor Emil Björnson, “Basics of Antennas and Beamforming”, (2019). Provides a high-level understanding of what beamforming is in relatively simple terms.
  9. Professor Emil Björnson, “Physically Large Antenna Arrays: When the Near-Field Becomes Far-Reaching”, (2022). Provides a high-level understanding of phased arrays and how they work, in relatively simple terms with lots of simple illustrations. I also recommend checking Prof. Björnson’s “Reconfigurable intelligent surfaces: Myths and realities” (2020).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Jon Brodkin, “Google and AT&T invest in Starlink rival for satellite-to-smartphone service”, Ars Technica (January 2024). There is a very nice picture of AST’s 64 square meter large BlueWalker 3 phased array antenna (i.e., with a total supported bandwidth of 100 MHz with channels of 5 MHz and a theoretical spectral efficiency of 3 Mbps/MHz/channel).
  12. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. Up-to-date rocket launch schedule and launch details can be found here: https://www.rocketlaunch.live/

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

The Next Frontier: LEO Satellites for Internet Services.

THE SPACE RACE IS ON.

If all current commercial satellite plans were to be realized within the next decade, we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half that number realized by StarLink’s Low Earth Orbit (LEO) constellation over the last couple of years (i.e., since 2018).

While the “Arms Race” during the Cold War was “a thing” mainly between The USA and the former Soviet Union, the Space Race will, in my opinion, be “battled out” between the commercial interests of the West against the political interest of China (as illustrated in Figure 1 below). The current numbers strongly indicate that Europe, Canada, the Middle East, Africa, and APAC (minus China) will likely and largely be left on the sideline to watch the US and China impose, in theory, a “duopoly” in LEO satellite-based services. However, in practice, it will be a near-monopoly when considering security concerns between the West and the (re-defined) East block.

Figure 1 Illustrates my thesis that we will see a Space Race over the next 10 years between one (or very few) commercial LEO constellations, represented by a Falcon-9-like design (for maybe too obvious reasons), and a Chinese state-owned satellite constellation. (Courtesy: DALL-E).

As of end of 2023, more than 50% of launched and planned commercial LEO satellites are USA-based. Of those, the largest fraction is accounted for by the US-based StarLink constellation (~75%). More than 30% are launched or planned by Chinese companies, headed by the state-owned Guo Wang constellation rivaling Elon Musk’s Starlink in ambition and scale. Europe comes in at a distant number 3 with about 8% of the total of fixed internet satellites. Apart from being disappointed, alas, not surprised, by the European track record, it is somewhat more baffling that there are so few Indian and African satellite constellations (in fact, there are none), given the obvious benefits such satellites could bring to India and the African continent.

India is a leading satellite nation with a proud tradition of innovative satellite designs and manufacturing and a solid track record of satellite launches. However, regarding commercial LEO constellations, India has yet to seize the opportunity. Having previously worked on the economics and operationalization of a satellite ATC (i.e., a satellite service with an ancillary terrestrial component) internet service across India, I find it mind-blowing how much economic opportunity there is in replacing the vast terrestrial cellular infrastructure in rural India with satellite, not to mention the quantum leap in resilience and availability of broadband communications services that could be provided. According to the StarLink coverage map, the regulatory approval in India for allowing StarLink (US) services is still pending. In the meantime, Eutelsat’s OneWeb (EU) received regulatory approval in late 2023 for its satellite internet service over India in collaboration with Bharti Enterprises (India), which is also the largest shareholder in the recently formed Eutelsat Group with 21.2%. Moreover, Jio’s JioSpaceFiber satellite internet services were launched in several Indian states at the end of 2023, using the SES (EU) MEO O3b mPOWER satellite constellation. Despite the clear satellite know-how and capital available, there appears to be little activity on Indian-based LEO satellite development to take up the competition with international operators.

The African continent is attracting all the major LEO satellite constellations such as StarLink (US), OneWeb (EU), Amazon Kuipers (US), and Telesat Lightspeed (CAN). However, getting regulatory approval for their satellite-based internet services is a complex, time-consuming, and challenging process with Africa’s 54 recognized sovereign countries. I would expect that we will see the Chinese-based satellite constellations (e.g., Guo Wang) taking up here as well due to the strong ties between China and several of the African nations.

This article is not about SpaceX’s StarLink satellite constellation, although StarLink is mentioned a lot and used as an example. Recently, at the Mobile World Congress 2024 in Barcelona, talking to satellite operators (but not StarLink) providing fixed broadband satellite services, we joked about how long into a meeting we could go before SpaceX and StarLink would be mentioned (~5 minutes was the record, I think).

This article is about the key enablers (frequencies, frequency bandwidth, antenna design, …) that make up an LEO satellite service, the LEO satellite itself, the kind of services one should expect from it, and its limitations.

There is no doubt that LEO satellites of today have an essential mission: delivering broadband internet to rural and remote areas with little or no terrestrial cellular or fixed infrastructure to provide internet services. Satellites can offer broadband internet to remote areas with little population density and a population spread out reasonably uniformly over a large area. A LEO satellite constellation is not (in general) a substitute for an existing terrestrial communications infrastructure. Still, it can enhance it by increasing service availability and being an important remedy for business continuity in remote rural areas. Satellite systems are capacity-limited as they serve vast areas, typically with limited spectral resources and capacity per unit area.

In comparison, we have much smaller coverage areas with demand-matched spectral resources in a terrestrial cellular network. It is also easier to increase capacity in a terrestrial cellular system by adding more sectors or increasing the number of sites in an area that requires such investments. Adding more cells, and thus increasing the system capacity, to satellite coverage requires a new generation of satellites with more advanced antenna designs, typically by increasing the number of phased-array beams and more complex modulation and coding mechanisms that boost the spectral efficiency, leading to increased capacity and quality for the services rendered to the ground. Increasing the system capacity of a cellular communications system by increasing the number of cells (i.e., cell splitting) works the same in satellite systems as it does for a terrestrial cellular system.

So, on average, LEO satellite internet services to individual customers (or households), such as those offered by StarLink, are excellent for remote, lowly populated areas with a nicely spread-out population. If we de-average this statement, then clearly, within the satellite coverage area, we may have towns and settlements where, locally, the population density can be fairly large despite being very small over the larger footprint covered by the satellite. As the capacity and quality of the satellite is a shared resource, serving towns and settlements of a certain size may not be the best approach to providing a sustainable and good customer experience, as the satellite resources exhaust rapidly in such scenarios. Here, a hybrid architecture is a much better fit: provide all customers in a town or settlement with the best service possible over the existing terrestrial communications infrastructure, cellular as well as fixed, and use a satellite broadband connection between a satellite ground gateway and the internet satellite as backhaul. This is offered by several satellite broadband providers (from GEO, MEO, and LEO orbits) and has the beauty of not being limited to one provider. Unfortunately, this particular finesse is often overlooked in the awe of the massive scale of the StarLink constellation.

AND SO IT STARTS.

When I compared the economics of stratospheric drone-based cellular coverage with that of LEO satellites and terrestrial-based cellular networks in my previous article, “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, it was clear that even if LEO satellites are costly to establish, they provide a substantial cost advantage over cellular coverage in rural and remote areas that are either scarcely covered or not covered at all. Although the existing LEO satellite constellations have limited capacity compared to a terrestrial cellular network and would perform rather poorly over densely populated areas (e.g., urban and suburban areas), they can offer very decent fixed-wireless-access-like broadband services in rural and remote areas at speeds exceeding even 100 Mbps, as shown by the Starlink constellation. Even if the provided speed and capacity are likely to be substantially lower than what a terrestrial cellular network could offer, it often provides the missing (internet) link. Anything larger than nothing remains infinitely better.

Low Earth Orbit (LEO) satellites represent the next frontier in (novel) communication network architectures, what we in modern lingo would call non-terrestrial networks (NTN), with the ability to combine both mobile and fixed broadband services, enhancing and substituting terrestrial networks. Because LEO satellites orbit significantly closer to Earth than their Geostationary Orbit (GEO) counterparts at 36 thousand kilometers, typically at altitudes between 300 and 2,000 kilometers, they offer substantially reduced latency, higher bandwidth capabilities, and a more direct line of sight to receivers on the ground. This makes LEO satellites an obvious and integral component of non-terrestrial networks, which aim to extend the reach of existing fixed and mobile broadband services, particularly in rural, un- and under-served, or inaccessible regions, and to serve as a high-availability element of terrestrial communications networks in the event of natural disasters (flooding, earthquake, …) or military conflict, in which the terrestrial networks are taken out of operation.

Another key advantage of LEO satellites is that the likelihood of a line-of-sight (LoS) to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the signal propagation from a LEO satellite closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial-based cellular mobile network do not apply to our satellite, with signal propagation largely being determined by the distance between the satellite and the ground (see Figure 2).

Figure 2 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a Low Earth Orbit (LEO) satellite. The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which are primarily determined by distance, as the link approximates free-space propagation with signal attenuation mainly set by the Line-of-Sight (LoS) distance from antenna to Earth. The situation is very different for a terrestrial-based cellular tower, whose radiated signal is substantially compromised by environmental factors.
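To put some rough numbers on the "distance dominates" point, the sketch below uses the standard free-space path loss relation, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.45. The 850 MHz carrier and the distances are illustrative assumptions; real link budgets would also account for minimum elevation angles, atmospheric effects, and antenna gains.

```python
# Free-space path loss (FSPL) sketch, illustrating that a LEO link budget is mainly
# a function of distance. FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.45.
# The altitudes/distances and the 850 MHz carrier are illustrative assumptions.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

for d_km, label in [(550, "LEO nadir (550 km)"),
                    (2_000, "LEO slant path at low elevation (~2,000 km)"),
                    (36_000, "GEO (36,000 km)")]:
    print(f"{label:45s}: ~{fspl_db(d_km, 850):.0f} dB at 850 MHz")
# LEO nadir ~146 dB, LEO slant ~157 dB, GEO ~182 dB
```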

Low Earth Orbit (LEO) satellites, compared to GEO and MEO-based higher-altitude satellite systems, in general, have simpler designs and smaller sizes, weights, and volumes. Their design and architecture are not just a function of technological trends but also a manifestation of their operational environment. The (relative) simplicity of LEO satellites also allows for more standardized production, allowing for off-the-shelf components and modular designs that can be manufactured in larger quantities, such as the case with CubeSats standard and SmallSats in general. The lower altitude of LEO satellites translates to a reduced distance from the launch site to the operational orbit, which inherently affects the economics of satellite launches. This proximity to Earth means that the energy required to propel a satellite into LEO is significantly less than needed to reach Geostationary Earth Orbit (GEO), resulting in lower launch costs.

The advent of LEO satellite constellations marks an important shift in how we approach global connectivity. With the potential to provide ubiquitous internet coverage in rural and remote places with little or no terrestrial communications infrastructure, satellites are increasingly being positioned as vital elements in global communication. The LEO satellites, as well as stratospheric drones, have the ability to provide economical internet access, as addressed in my previous article, in remote areas and play a significant role in disaster relief efforts. For example, when terrestrial communication networks are disrupted after a natural disaster, LEO satellites can quickly re-establish communication links to normal cellular devices or ad-hoc earth-based satellite systems, enabling efficient coordination of rescue and relief operations. Furthermore, they offer a resilient network backbone that complements terrestrial infrastructure.

The Internet of Things (IoT) benefits from the capabilities of LEO satellites, particularly in areas where there is little or no existing terrestrial communications network. IoT devices often operate in remote or mobile environments, from sensors in agricultural fields to trackers across shipping routes. LEO satellites provide reliable connectivity to IoT networks, facilitating many applications, such as non- and near-real-time monitoring of environmental data, seamless asset tracking over transcontinental journeys, and rapid deployment of smart devices in smart city infrastructures. As an example, let us look at the minimum requirements for establishing a LEO satellite constellation that can gather IoT measurements. At an altitude of 550 km, the satellite takes ca. 1.5 hours to return to a given point on its orbit (see the sketch below). Earth rotates (see also below), which requires us to deploy several orbital planes to ensure continuous coverage throughout the 24 hours of a day (assuming this is required). Depending on the satellite antenna design, the target coverage area, and how often a measurement is required, a satellite constellation to support an IoT business may not require much more than 20 (lower measurement frequency) to 60 (higher measurement frequency, but far from real-time data collection) LEO satellites (@ 550 km).
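The ~1.5-hour figure follows directly from Kepler's third law for a circular orbit. A minimal sketch, assuming a spherical Earth and standard values for Earth's radius and gravitational parameter:

```python
# Orbital period of a circular LEO orbit from Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
# Earth radius and gravitational parameter are standard values; 550 km is the altitude
# used in the IoT example above, the other altitudes are added for comparison.

import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH  = 6_371.0        # km, mean Earth radius

def orbital_period_minutes(altitude_km: float) -> float:
    a = R_EARTH + altitude_km                     # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

for h in (350, 550, 1_200):
    print(f"{h:5d} km altitude -> orbital period ~{orbital_period_minutes(h):.0f} minutes")
# 350 km -> ~91 min, 550 km -> ~96 min (ca. 1.5 hours), 1200 km -> ~109 min
```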

For defense purposes, LEO satellite systems present unique advantages. Their lower orbits allow for high-resolution imagery and rapid data collection, which are crucial for surveillance, reconnaissance, and operational awareness. As typically more LEO satellites will be required, compared to a GEO satellite, such systems also offer a higher degree of redundancy in case of anti-satellite (ASAT) warfare scenarios. When integrated with civilian applications, military use cases can leverage the robust commercial infrastructure for communication and geolocation services, enhancing capabilities while distributing the system’s visibility and potential targets.

Standalone military LEO satellites are engineered for specific defense needs. These may include hardened systems for secure communication and resistance to jamming and interception. For instance, they can be equipped with advanced encryption algorithms to ensure secure transmission of sensitive military data. They also carry tailored payloads for electronic warfare, signal intelligence, and tactical communications. For example, they can host sensors for detecting and locating enemy radar and communication systems, providing a significant advantage in electronic warfare. As the line between civilian and military space applications blurs, dual-use LEO satellite systems are emerging, capable of serving civilian broadband and specialized military requirements. It should be pointed out that there are also military applications, such as signal gathering, that may not be compatible with civil communications use cases.

In a military conflict, the distributed architecture and lower altitude of LEO constellations may offer some advantages regarding resilience and targetability compared to GEO and MEO-based satellites. Their more significant numbers (i.e., 10s to 1000s) compared to GEO, and the potential for quicker orbital resupply can make them less susceptible to complete system takedown. However, their lower altitudes could make them accessible to various ASAT technologies, including ground-based missiles or space-based kinetic interceptors.

It is not uncommon to encounter academic researchers and commentators who give the impression that LEO satellites could replace existing terrestrial-based infrastructures and solve all terrestrial communications issues known to man. That is (of course) not the case. Often, such statements appear to be based on an incomplete understanding of the capacity limitations of satellite systems. Due to satellites’ excellent coverage with very large terrestrial footprints, the satellite capacity is shared over very large areas. For example, consider an LEO satellite at 550 km altitude. The satellite footprint, or coverage area (aka ground swath), is the area on the Earth’s surface over which the satellite can establish a direct line of sight. In our example, the footprint diameter would be ca. 5,500 kilometers, equivalent to an area of ca. 23 million square kilometers, which is more than twice that of the USA (or China or Canada). Before you get too excited, the satellite antenna will typically restrict the surface area the satellite will cover. The extent of the observable world that is seen at any given moment by the satellite antenna is defined as the Field of View (FoV) and can vary from a few degrees (narrow beams, small coverage area) to 40 degrees or higher (wide beams, large coverage areas). At a FoV of 20 degrees, the antenna footprint would be ca. 2,400 kilometers, equivalent to a coverage area of ca. 5 million square kilometers.

In comparison, at a FoV of 0.8 degrees, the antenna footprint would only be 100 kilometers. If our satellite has a 16-beam capability, that would translate into a coverage diameter of 24 km per beam. For the StarLink system, based on the Ku-band (13 GHz) and a cell downlink (satellite-to-Earth) capacity of ca. 680 Mbps (in 250 MHz), we would have ca. 2 Mbps per km² of coverage area. In comparison, a terrestrial rural cellular site with 85 MHz (downlink, base station antenna to customer terminal) would deliver 10+ Mbps per km² of coverage area.
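The sketch below roughly reproduces the back-of-envelope figures in the two paragraphs above. It assumes a spherical Earth, takes the slant range to the horizon as an effective footprint radius (which appears to be how the ~5,500 km and ~23 million km² figures were derived), and treats a beam as a circular disk; the resulting per-beam capacity density comes out around 1.5 Mbps/km², i.e., the same order of magnitude as the rounded ~2 Mbps/km² quoted above.

```python
# Back-of-envelope reproduction of the footprint and capacity-density figures above.
# Assumptions: spherical Earth; footprint radius taken as the slant range to the
# horizon (roughly matching the ~5,500 km / ~23 million km^2 figures quoted); a
# circular 24 km diameter beam for the per-beam capacity density.

import math

R_EARTH = 6_371.0    # km
h       = 550.0      # km, LEO altitude used in the example

# Maximum line-of-sight (horizon-limited) footprint.
slant_to_horizon   = math.sqrt((R_EARTH + h) ** 2 - R_EARTH ** 2)   # km
footprint_diameter = 2 * slant_to_horizon                           # ~5,400 km
footprint_area     = math.pi * slant_to_horizon ** 2                # ~23 million km^2
print(f"Horizon-limited footprint: ~{footprint_diameter:,.0f} km across, "
      f"~{footprint_area / 1e6:.0f} million km^2")

# Per-beam capacity density using the Starlink Ku-band example above.
beam_diameter_km   = 24.0
beam_capacity_mbps = 680.0
beam_area          = math.pi * (beam_diameter_km / 2) ** 2
print(f"Beam area: ~{beam_area:.0f} km^2, capacity density: "
      f"~{beam_capacity_mbps / beam_area:.1f} Mbps/km^2")   # ~1.5, i.e., order of ~2 Mbps/km^2
```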

It is always good to keep in mind that “the satellites’ mission is not to replace terrestrial communications infrastructures but to supplement and enhance them” and, furthermore, that “satellites offer the missing (internet) link in areas where no terrestrial communications infrastructure is present.” Satellites offer superior coverage to any terrestrial communications infrastructure. Satellites’ limitations lie in providing capacity, and quality, at population scale, as well as in supporting applications and access technologies requiring very short latencies (e.g., smaller than 10 ms).

In the following, I will focus on terrestrial cellular coverage and services that LEO satellites can provide. At the end of my blog, I hope I have given you (the reader) a reasonable understanding of how terrestrial coverage, capacity, and quality work in a (LEO) satellite system and have given you an impression of key parameters we can add to the satellite to improve those.

EARTH ROTATES, AND SO DO SATELLITES.

Before getting into the details of low earth orbit satellites, let us briefly get a couple of basic topics off the table. Skipping this part may be a good option if you are already into, and in the know about, satellites. Or maybe carry on and get a good laugh at those terra firma cellular folks who forgot about the rotation of Earth 😉

From an altitude and orbit (around Earth) perspective, you may have heard of two types of satellites: The GEO and the LEO satellites. Geostationary (GEO) satellites are positioned in a geostationary orbit at ~36 thousand kilometers above Earth. That the satellite is geostationary means it rotates with the Earth and appears stationary from the ground, requiring only one satellite to maintain constant coverage over an area that can be up to one-third of Earth’s surface. Low Earth Orbit (LEO) satellites are positioned at an altitude between 300 to 2000 kilometers above Earth and move relative to the Earth’s surface at high speeds, requiring a network or constellation to ensure continuous coverage of a particular area.

I have experienced that terrestrial cellular folks (like myself), when first thinking about satellite coverage, have some intuitive issues with it. We are not used to our antennas moving away from the targeted coverage area, nor to the targeted coverage area moving away from our antenna. The geometry and dynamics of terrestrial cellular coverage are simpler than they are for satellite-based coverage. For LEO satellite network planners, it is not rocket science (pun intended) that the satellites move around in their designated orbit over Earth at orbital speeds of ca. 7.5 to 8 km per second. Thus, at an altitude of 500 km, a LEO satellite orbits Earth approximately every 1.5 hours. Earth, thankfully, rotates. Compared to its GEO satellite “cousin,” the LEO satellite is not “stationary” from the perspective of the ground. Thus, as Earth rotates, the targeted coverage area moves away from the coverage provided by the orbital satellite.

We need several satellites in the same orbit and several orbits (i.e., orbital planes) to provide continuous satellite coverage of a target area. This is very different from terrestrial cellular coverage of a given area (needless to say).

WHAT LEO SATELLITES BRING TO THE GROUND.

Anything is infinitely more than nothing. The Low Earth Orbit satellite brings the possibility of internet connectivity where there previously was nothing, either because too few potential customers spread out over a large area made terrestrial-based services hugely uneconomical or the environment is too hostile to build normal terrestrial networks within reasonable economics.

Figure 3 illustrates a low Earth satellite constellation providing internet to rural and remote areas as a way to solve part of the digital divide challenge in terms of availability. Obviously, the affordability is likely to remain a challenge unless subsidized by customers who can afford satellite services in other places where availability is more of a convenience question. (Courtesy: DALL-E)

The LEO satellites represent a transformative shift in internet connectivity, providing advantages over traditional cellular and fixed broadband networks, particularly for global access, speed, and deployment capabilities. As described in “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, LEO satellite constellations, or networks, may also be significantly more economical than equivalent cellular networks in rural and remote areas where the economics of coverage by satellite, as depicted in the above Figure 3, is by far better than by traditional terrestrial cellular means.

One of the foremost benefits of LEO satellites is their ability to offer global coverage as well as reasonable broadband and latency performance that is difficult to match with GEO and MEO satellites. The geostationary satellite obviously also offers global broadband coverage, with each satellite covering a much more extensive area than a LEO satellite, but it cannot offer very low latency services, and it is more difficult to provide high data rates (in comparison to a LEO satellite). LEO satellites can reach the most remote and rural areas of the world, places where laying cables or setting up cell towers is impractical. This is a crucial step in delivering communications services where none exist today, ensuring that underserved populations and regions gain access to internet connectivity.

Another significant advantage is the reduction in latency that LEO satellites provide. Since they orbit much closer to Earth, typically at an altitude between 350 and 700 km, compared to their geostationary counterparts at 36 thousand kilometers altitude, the time it takes for a communications signal to travel between the user and the satellite is significantly reduced. This lower latency is crucial for enhancing the user experience in real-time applications such as video calls and online gaming, making these activities more enjoyable and responsive.
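A quick sanity check of the latency argument: the sketch below computes the minimum up-and-down propagation delay from the altitude alone, ignoring processing, queuing, and ground-segment routing, so real-world round trip times will be higher; the altitudes are illustrative.

```python
# Minimum propagation delay between the ground and a satellite directly overhead.
# RTT here is simply up-and-down; real systems add processing and routing delays.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def min_rtt_ms(altitude_km: float) -> float:
    one_way_ms = altitude_km / C_KM_PER_S * 1e3
    return 2 * one_way_ms

for name, h in (("LEO @ 550 km", 550), ("LEO @ 1,200 km", 1_200), ("GEO @ 35,786 km", 35_786)):
    print(f"{name}: minimum RTT ~{min_rtt_ms(h):.1f} ms")
```

This reproduces the familiar orders of magnitude: a few milliseconds for LEO versus roughly 240 ms for GEO.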

An inherent benefit of satellite constellations is their ability for quick deployment. They can be deployed rapidly in space, offering a quicker solution to achieving widespread internet coverage than the time-consuming and often challenging process of laying cables or erecting terrestrial infrastructure. Moreover, the network can easily be expanded by adding more satellites, allowing it to dynamically meet changing demand without extensive modifications on the ground.

LEO satellite networks are inherently scalable. By launching additional satellites, they can accommodate growing internet usage demands, ensuring that the network remains efficient and capable of serving more users over time without significant changes to ground infrastructure.

Furthermore, these satellite networks offer resilience and reliability. With multiple satellites in orbit, the network can maintain connectivity even if one satellite fails or is obstructed, providing a level of redundancy that makes the network less susceptible to outages. This ensures consistent performance across different geographical areas, unlike terrestrial networks that may suffer from physical damage or maintenance issues.

Another critical advantage is (relative) cost-effectiveness compared to a terrestrial-based cellular network. In remote or hard-to-reach areas, deploying satellites can be more economical than the high expenses associated with extending terrestrial broadband infrastructure. As satellite production and launch costs continue to decrease, the economics of LEO satellite internet become increasingly competitive, potentially reducing the cost for end-users.

LEO satellites offer a promising solution to some of the limitations of traditional connectivity methods. By overcoming geographical, infrastructural, and economic barriers, LEO satellite technology has the potential to not just complement but effectively substitute terrestrial-based cellular and fixed broadband services, especially in areas where such services are inadequate or non-existent.

Figure 4 below provides an overview of LEO satellite coverage with fixed broadband services offered to customers in the Ku band with a Ka backhaul link to ground station GWs that connect to, for example, the internet. Having inter-satellite communications (e.g., via laser links such as those used by Starlink satellites as per satellite version 1.5) allows for substantially fewer ground-station gateways. Inter-satellite laser links between intra-plane satellites are a distinct advantage in ensuring coverage for rural and remote areas where it might be difficult, very costly, and impractical to have a satellite ground station GW to connect to due to the lack of global internet infrastructure.

Figure 4 In general, a satellite is required to have LoS to its ground station gateway (GW); in other words, the GW needs to be within the coverage footprint of the satellite. For LEO satellites, which are at low altitudes, between 300 and 2000 km, and thus have a much smaller footprint than MEO and GEO satellites, this would result in a need for a substantial number of ground stations. This is depicted in (a) above. With inter-satellite laser links (SLL), e.g., those implemented by Starlink, it is possible to reduce the number of ground station gateways significantly, which is particularly helpful in rural and very remote areas. These laser links enable direct communication between satellites in orbit, which enhances the network’s performance, reliability, and global reach.
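To get a feel for why a LEO constellation without inter-satellite links needs so many gateways, the following sketch estimates the radius of a satellite’s footprint from its altitude and the minimum elevation angle the ground antenna requires; a gateway must sit inside this footprint for the satellite to reach the ground segment. The elevation-angle thresholds are illustrative assumptions.

```python
import math

# Sketch: how large is the ground footprint a LEO satellite can serve, given its
# altitude and the minimum elevation angle required at the ground antenna?
# Assumptions: spherical Earth; elevation thresholds chosen for illustration only.
R_EARTH_KM = 6_371.0

def footprint_radius_km(altitude_km: float, min_elevation_deg: float) -> float:
    """Great-circle radius of the coverage footprint for a given minimum elevation angle."""
    eps = math.radians(min_elevation_deg)
    # Nadir angle at the satellite: sin(eta) = Re*cos(eps) / (Re + h)
    eta = math.asin(R_EARTH_KM * math.cos(eps) / (R_EARTH_KM + altitude_km))
    # Earth-central angle spanned from sub-satellite point to the footprint edge
    lam = math.pi / 2 - eps - eta
    return R_EARTH_KM * lam

for h, eps in ((550, 25), (550, 40), (1_200, 25)):
    print(f"h={h} km, min elevation {eps} deg -> footprint radius ~{footprint_radius_km(h, eps):,.0f} km")
```

At 550 km and a 25-degree minimum elevation the footprint radius comes out at roughly 900 km, which hints at how many gateways a purely bent-pipe constellation would need for contiguous ground connectivity.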

Inter-satellite laser links (ISLLs), also called Optical Inter-Satellite Links (OISLs), are an advanced communication technology utilized by satellite constellations, such as, for example, Starlink, to facilitate high-speed, secure data transmission directly between satellites. Inter-satellite laser links are today (primarily) designed for intra-plane communication within satellite constellations, enabling data transfer between satellites that share the same orbital plane. This is due to the relatively stable geometries and predictable distances between satellites in the same orbit, which facilitate maintaining the line-of-sight connections necessary for laser communications. ISLLs mark a significant departure from the traditional reliance on ground stations for inter-satellite communication, and as such they offer many benefits, including the ability to transmit data at speeds comparable to fiber-optic cables. Additionally, ISLLs enable satellite constellations to deliver seamless coverage across the entire planet, including over oceans and polar regions where ground station infrastructure is limited or non-existent. The technology also inherently enhances the security of data transmissions, thanks to the focused nature of laser beams, which are difficult to intercept.

However, the deployment of ISLLs is not without challenges. The technology requires a clear line of sight between satellites, which can be affected by their orbital positions, necessitating precise control mechanisms. Moreover, the theoretical limit to the number of satellites linked in a daisy chain is influenced by several factors, including the satellite’s power capabilities, the network architecture, and the need to maintain clear lines of sight. High-power laser systems also demand considerable energy, impacting the satellite’s power budget and requiring efficient management to balance operational needs. The complexity and cost of developing such sophisticated laser communication systems, combined with very precise pointing mechanisms and sensitive detectors, can be quite challenging and need to be carefully weighed against building satellite ground stations.

Cross-plane ISLL transmission, or the ability to communicate between satellites in different orbital planes, presents additional technical challenges, as it is technically highly challenging to maintain a stable line of sight between satellites moving in different orbital planes. However, the potential for ISLLs to support cross-plane links is recognized as a valuable capability for creating a fully interconnected satellite constellation. The development and incorporation of cross-plane ISLL capabilities into satellites is an area of active research and development. Such capabilities would reduce the reliance on ground stations and significantly increase the resilience of satellite constellations. I see the development as a next-generation topic, together with many other important developments described at the end of this blog. However, the power consumption of the ISLL is a point of concern that needs careful attention, as it will impact many other aspects of the satellite operation.

THE DIGITAL DIVIDE.

The digital divide refers to the “internet haves and have-nots,” or “the gap between individuals who have access to modern information and communication technology (ICT),” such as the internet, computers, and smartphones, and those who do not have access. This divide can be due to various factors, including economic, geographic, age, and educational barriers. Essentially, as illustrated in Figure 5, it’s the difference between the “digitally connected” and the “digitally disconnected.”

The significance of the digital divide is considerable, impacting billions of people worldwide. It is estimated that a little less than 40% of the world’s population, or roughly 2.9 billion people, had never used the internet (as of 2023). This gap is most pronounced in developing countries, rural areas, and among older populations and economically disadvantaged groups.

The digital divide affects individuals’ ability to access information, education, and job opportunities and impacts their ability to participate in digital economies and the modern social life that the rest of us (i.e., the other side of the divide or the privileged 60%) have become used to. Bridging this divide is crucial for ensuring equitable access to technology and its benefits, fostering social and economic inclusion, and supporting global development goals.

Figure 5 illustrates the digital divide, that is, the gap between individuals with access to modern information and communication technology (ICT), such as the internet, computers, and smartphones, and those who do not have access. (Courtesy: DALL-E)

CHALLENGES WITH LEO SATELLITE SOLUTIONS.

Low-Earth-orbit satellites offer compelling advantages for global internet connectivity, yet they are not without challenges and disadvantages when considered as substitutes for cellular and fixed broadband services. These drawbacks underscore the complexities and limitations of deploying LEO satellite technology globally.

The capital investment required and the ongoing costs associated with designing, manufacturing, launching, and maintaining a constellation of LEO satellites are substantial. Despite technological advancements and increased competition driving costs down, the financial barrier to entry remains high. Compared to their geostationary counterparts, the relatively short lifespan of LEO satellites necessitates frequent replacements, further adding to operational expenses.

While LEO satellites offer significantly reduced latency (round trip times, RTT ~ 4 ms) compared to geostationary satellites (RTT ~ 240 ms), they may still face latency and bandwidth limitations, especially as the number of users on the satellite network increases. This can lead to reduced service quality during peak usage times, highlighting the potential for congestion and bandwidth constraints. This is also the reason why the main business model of LEO satellite constellations is primarily to address coverage and needs in rural and remote locations. Alternatively, the LEO satellite business model focuses on low-bandwidth needs such as texting, voice messaging, and low-bandwidth Internet of Things (IoT) services.

Navigating the regulatory and spectrum management landscape presents another challenge for LEO satellite operators. Securing spectrum rights and preventing signal interference requires coordination across multiple jurisdictions, which can complicate deployment efforts and increase the complexity of operations.

The environmental and space traffic concerns associated with deploying large numbers of satellites are significant. The potential for space debris and the sustainability of low Earth orbits are critical issues, with collisions posing risks to other satellites and space missions. Additionally, the environmental impact of frequent rocket launches raises further concerns.

FIXED-WIRELESS ACCESS (FWA) BASED LEO SATELLITE SOLUTIONS.

Using the NewSpace Index database, updated December 2023, there are currently more than 6,463 internet satellites launched, of which 5,650 (~87%) are from StarLink, and 40,000+ satellites planned for launch, with SpaceX’s Starlink satellites accounting for 11,908 of those planned (~30%). More than 45% of the satellites launched and planned support multi-application use cases, i.e., internet together with, for example, IoT (~4%) and/or Direct-2-Device (D2D, ~39%). The D2D share is due to StarLink’s plans to provide services to mobile terminals with their latest satellite constellation. The first six StarLink v2 satellites with direct-to-cellular capability were successfully launched on January 2nd, 2024. Some care should be taken with the share of D2D satellites in the StarLink number, as it does not consider the different form factors of the version 2 satellite, not all of which include D2D capabilities.

Most LEO satellites, operational and planned, support satellite fixed broadband internet services, helped by the sheer quantity of StarLink satellites. It is worth noting that the Chinese Guo Wang constellation ranks second in terms of planned LEO satellites, with almost 13,000 planned, rivaling the StarLink constellation. Once StarLink and Guo Wang are counted, only about 34%, or ca. 16,000, internet satellites are left in the planning pool across 30+ satellite companies. While StarLink is privately owned (by Elon Musk), the Guo Wang (國網 ~ “The state network”) constellation is led by China SatNet and created by the SASAC (China’s State-Owned Assets Supervision and Administration Commission). SASAC oversees China’s biggest state-owned enterprises. I expect that such a constellation, which as planned by Guo Wang would be the second biggest LEO constellation and would be controlled by the Chinese state, would be of considerable concern to the West due to the possibility of dual use (i.e., civil & military).

StarLink coverage as of March 2024 (see StarLink’s availability map) does not include Russia, China, Iran, Iraq, Afghanistan, Venezuela, and Cuba (about 20% of Earth’s total land surface area). There are still quite a few countries in Africa and South-East Asia, including India, where regulatory approval remains pending.

Figure 6 NewSpace Index data of commercial satellite constellations in terms of total number of launched and planned (top) per company (or constellation name) and (bottom) per country.

While the term FWA, fixed wireless access, is not traditionally used to describe satellite internet services, the broadband services offered by LEO satellites can be considered a form of “wireless access” since they also provide connectivity without cables or fiber. In essence, LEO satellite broadband is a complementary service to traditional FWA, extending wireless broadband access to locations beyond the reach of terrestrial networks. In the following, I will continue to use the term FWA for the fixed broadband LEO satellite services provided to individual customers, including SMEs. As some of the LEO satellite businesses eventually also might provide direct-to-device (D2D) services to normal terrestrial mobile devices, either on their own acquired cellular spectrum or in partnership with terrestrial cellular operators, the LEO satellite operation (or business architecture) becomes much closer to terrestrial cellular operations.

Figure 7 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services, such as Fixed Wireless Access, to individual terrestrial users (e.g., Starlink, Kuiper, OneWeb,…). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of an LEO satellite constellation is between 300 and 2,000 km, with most aiming for 450 to 550 km altitude. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal antenna (UT) dynamically orients itself toward the best line-of-sight (in terms of signal quality) to a satellite within the UT’s field-of-view (FoV). The FoV has not been shown in the picture above so as not to overcomplicate the illustration.

Low Earth Orbit (LEO) satellite services like Starlink have emerged to provide fixed broadband internet to individual consumers and small to medium-sized enterprises (SMEs), often targeting rural and remote areas where no other broadband solutions are available or where only poor legacy copper- or coax-based infrastructure exists. These services deploy constellations of satellites orbiting close to Earth to offer high-speed internet with the significant advantage of reaching rural and remote areas where traditional ground-based infrastructure is absent or economically unfeasible.

One of the most significant benefits of LEO satellite broadband is the ability to deliver connectivity with lower latency compared to traditional satellite internet delivered by geosynchronous satellites, enhancing the user experience for real-time applications. The rapid deployment capability of these services also means that areas in dire need of internet access can be connected much quicker than waiting for ground infrastructure development. Additionally, satellite broadband’s reliability is less affected by terrestrial challenges, such as natural disasters that can disrupt other forms of connectivity.

The satellite service comes with its challenges. The cost of user equipment, such as satellite dishes, can be a barrier for some users, as can the installation process for the terrestrial satellite dish required to establish the connection to the satellite. Moreover, services might be limited by data caps or experience slower speeds after reaching certain usage thresholds, which can be a drawback for users with high data demands. Weather conditions can also impact the signal quality, particularly at the higher frequencies used by the satellite, albeit to a lesser extent than for geostationary satellite services. However, the target areas where the fixed broadband satellite service is most suited are rural and remote areas that either have no terrestrial broadband infrastructure (terrestrial cellular broadband or wired broadband such as coax or fiber) or have only poor legacy infrastructure.

Beyond Starlink, other providers are venturing into the LEO satellite broadband market. OneWeb is actively developing a constellation to offer internet services worldwide, focusing on communities that are currently underserved by broadband. Telesat Lightspeed is also gearing up to provide broadband services, emphasizing the delivery of high-quality internet to the enterprise and government sectors.

Other LEO satellite businesses, such as AST SpaceMobile and Lynk Mobile, are taking a unique approach by aiming to connect standard mobile phones directly to their satellite network, extending cellular coverage beyond the reach of traditional cell towers. More about that in the section below (see “New Kids on the Block – Direct-to-Devices LEO satellites”).

I have been asked why I appear somewhat dismissive of Amazon’s Project Kuiper in a previous version of this article, particularly compared to StarLink (I guess). The expressed mission is to “provide broadband services to unserved and underserved consumers, businesses in the United States, …” (FCC 20-102). Project Kuiper plans a broadband constellation of 3,236 microsatellites at 3 altitudes (i.e., orbital shells) around 600 km, providing fixed broadband services in the Ka-band (i.e., ~17–30 GHz). From its US-based FCC (Federal Communications Commission) filing and the subsequent FCC authorization, it is clear that the Kuiper constellation primarily targets contiguous coverage of the USA (but mentions that services cannot be provided in the majority of Alaska … funny, I thought that was a good definition of an underserved, remote, and scarcely populated area?). Amazon has committed to launching 50% (1,618 satellites) of its committed satellite constellation before July 2026 (until now, 2+ have been launched) and the remaining 50% before July 2029. There are, however, far fewer details on the Kuiper satellite design than are available, for example, for the various versions of the StarLink satellites. Given that Kuiper will operate in the Ka-band, there may be more frequency bandwidth allocated per beam than is possible in the StarLink satellites using the Ku-band for customer device connectivity. However, the Ka-band is at a higher frequency, which may result in more compromised signal propagation. In my opinion, based on the information from the FCC submissions and correspondence, the Kuiper constellation appears less ambitious compared to StarLink’s vision, mission, and tangible commitment in terms of aggressive launches, a very high level of innovation, and iterative development of their platform and capabilities in general. This may, of course, change over time and as more information becomes available on Amazon’s Project Kuiper.

FWA-based LEO satellite solutions – takeaway:

  • LoS-based and free-space-like signal propagation allows high-frequency signals (i.e., high throughput, capacity, and quality) to provide near-ideal performance, impacted only by the distance between the satellite antenna and the ground terminal, something that is, in general, not possible for terrestrial-based cellular infrastructure.
  • Provides satellite fixed broadband internet connectivity typically using the Ku-band in geographically isolated locations where terrestrial broadband infrastructure is limited or non-existent.
  • Lower latency (and round trip time) compared to MEO and GEO satellite internet solutions.
  • Current systems are designed to provide broadband internet services in scarcely populated areas and underserved (or unserved) regions where traditional terrestrial-based communications infrastructures are highly uneconomical and/or impractical to deploy.
  • As shown in my previous article (i.e., “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”), LEO satellite networks may be an economically interesting alternative to terrestrial rural cellular networks in countries with large, scarcely populated rural areas requiring tens of thousands of cellular sites to cover. Hybrid models with LEO satellite FWA-like coverage to individuals in rural areas and with satellite backhaul to major settlements and towns should be considered in large geographies.
  • Resilience to terrestrial disruptions is a key advantage. It ensures functionality even when ground-based infrastructure is disrupted, which is an essential element for maintaining the Business Continuity of an operator’s telecommunications services. In particular, hierarchical architectures with, for example, GEO satellite, LEO satellite, and Earth-based transport infrastructure will result in very highly reliable network operations (possibly approaching ultra-high availability, although not with service parity).
  • Current systems are inherently capacity-limited due to their vast coverage areas (i.e., lower performance per unit coverage area); see the illustrative sketch after this list. In the peak demand period, they will typically perform worse than terrestrial-based cellular networks (e.g., LTE or 5G).
  • In regions where modern terrestrial cellular and fixed broadband services are already established, satellite broadband may face challenges competing with these potentially cheaper, faster, and more reliable services, which are underpinned by the terrestrial communications infrastructure.
  • It is susceptible to weather conditions, such as heavy rain or snow, which can degrade signal quality. This may impact system capacity and quality, resulting in inconsistent customer experience throughout the year.
  • Must navigate complex regulatory environments in each country, which can affect service availability and lead to delays in service rollout.
  • Depending on the altitude, LEO satellites are typically replaced on a 5- to 7-year cycle due to atmospheric drag (which increases as altitude decreases; thus, the lower the altitude, the shorter a satellite’s life). This ultimately means that any improvements in system capacity and quality will take time to be thoroughly enjoyed by all customers.
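As a rough illustration of the capacity-per-area point above, the sketch below compares the capacity density of a single satellite with that of a single terrestrial cell. The throughput and coverage-radius figures are assumptions chosen for the comparison, not vendor specifications.

```python
import math

# Illustrative only: why a satellite is capacity-limited per unit area compared with
# a terrestrial cell. The capacity and radius figures below are assumptions.
def capacity_density_mbps_per_km2(total_capacity_gbps: float, coverage_radius_km: float) -> float:
    area_km2 = math.pi * coverage_radius_km ** 2
    return total_capacity_gbps * 1_000 / area_km2

sat = capacity_density_mbps_per_km2(total_capacity_gbps=20, coverage_radius_km=500)   # assumed LEO satellite
cell = capacity_density_mbps_per_km2(total_capacity_gbps=1, coverage_radius_km=3)     # assumed rural cell site
print(f"Satellite: ~{sat:.3f} Mbps/km^2, terrestrial cell: ~{cell:.1f} Mbps/km^2, "
      f"ratio ~{cell/sat:,.0f}x in favor of the terrestrial cell per unit area.")
```

Even with generous assumptions for the satellite, the per-unit-area capacity of a terrestrial cell is orders of magnitude higher, which is exactly why LEO FWA targets scarcely populated areas.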

SATELLITE BACKHAUL SOLUTIONS.

Figure 8 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb as well as StarLink with their so-called “Community Gateway”. It showcases the connectivity between terrestrial internet infrastructure (i.e., Satellite Gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) using the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: The indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds illustrate the network’s capabilities.

LEO satellites providing backhaul connectivity, such as shown in Figure 8 above, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly the long-haul transport networks needed for carrying traffic away from remote populated areas. Satellite backhauls not only offer a substantially better financial solution for enhancing internet connectivity to remote areas but are often the only viable solution for connectivity.

Take, for example, Greenland. The world’s largest non-continental island, the size of Western Europe, is characterized by its sparse population and a distinct settlement pattern, unconnected by road and mainly along the West Coast (as well as a couple of settlements on the East Coast), influenced mainly by its vast ice sheets and rugged terrain. With a population of around 56+ thousand, primarily concentrated on the west coast, Greenland’s demographic distribution is spread out over ca. 50+ settlements and about 20 towns. Nuuk, the capital, is the island’s most populous city, housing over 18+ thousand residents and serving as the administrative, economic, and cultural hub. Terrestrial cellular networks serve the settlements’ and towns’ communication and internet service needs, with the traffic carried back to the central switching centers by long-haul microwave links, sea cables, and satellite broadband connectivity. Several settlements’ connectivity needs can only be served by satellite backhaul, e.g., settlements on the East Coast (e.g., Tasiilaq with ca. 2,000 inhabitants and Ittoqqortoormiit (an awesome name!) with around 400+ inhabitants). LEO satellite backhaul solutions serving satellite-only communities, such as those operated and offered by OneWeb (Eutelsat), could provide a backhaul transport solution matching FWA latency specifications due to better (round trip time) performance than that of a GEO satellite backhaul solution.

It should also be clear that remote satellite-only settlements and towns may have communications service needs and demand that a localized 4G (or 5G) terrestrial cellular network with a satellite backhaul can serve much better than, for example, relying on individual ad-hoc connectivity solutions from, for example, Starlink. When the area’s total bandwidth demand exceeds the capacity of an FWA satellite service, a localized terrestrial network solution with a satellite backhaul is, in general, better.

The LEO satellites should offer significantly reduced latency compared to their geostationary counterparts due to their closer proximity to the Earth. This reduction in delay is essential for a wide range of real-time applications and services, from adhering to modern radio access (e.g., 4G and 5G) requirements, VoIP, and online gaming to critical financial transactions, enhancing the user experience and broadening the scope of possible services and business.

Among the leading LEO satellite constellations providing backhaul solutions today are SpaceX’s Starlink (via their community gateway), aiming to deliver high-speed internet globally with a preference for direct-to-consumer connectivity; OneWeb, focusing on internet services for businesses and communities in remote areas; Telesat’s Lightspeed, designed to offer secure and reliable connectivity; and Amazon’s Project Kuiper, which plans to deploy thousands of satellites to provide broadband to unserved and underserved communities worldwide.

Satellite backhaul solutions – takeaway:

  • Satellite-backhaul solutions are an excellent, cost-effective solution for providing an existing isolated cellular (and fixed access) network with high-bandwidth connectivity to the internet (such as in remote and deep rural areas).
  • LEO satellites can reduce the need for extensive and very costly ground-based infrastructure by serving as a backhaul solution. For some areas, such as Greenland, the Sahara, or the Brazilian rainforest, it may not be practical or economical to connect by terrestrial-based transmission (e.g., long-haul microwave links or backbone & backhaul fiber) to remote settlements or towns.
  • An LEO-based backhaul solution supports applications and radio access technologies requiring a much lower round trip time (RTT < 50 ms) than is possible with a GEO-based satellite backhaul. However, the achievable RTT will also depend on where the LEO satellite ground gateway connects to the internet service provider.
  • The collaborative nature of a satellite-backhaul solution allows the terrestrial operator to focus on and have full control of all its customers’ network experiences, as well as optimize the traffic within its own network infrastructure.
  • LEO satellite backhaul solutions can significantly boost network resilience and availability, providing a secure and reliable connectivity solution.
  • Satellite-backhaul solutions require local ground-based satellite transmission capabilities (e.g., a satellite ground station).
  • The operator should consider that at a certain threshold of low population density, direct-to-consumer satellite services like Starlink might be more economical than constructing a local telecom network that relies on satellite backhaul (see above section on “Fixed Wireless Access (FWA) based LEO satellite solutions”).
  • Satellite backhaul providers require regulatory permits to offer backhaul services. These permits are necessary for several reasons, including the use of radio frequency spectrum, operation of satellite ground stations, and provision of telecommunications services within various jurisdictions.
  • The satellite lifetime in orbit is between 5 and 7 years, depending on the LEO altitude. A MEO satellite (2,000 to 36,000 km altitude, the upper end being GEO) lasts between 10 and 20 years. This also dictates the modernization and upgrade cycle as well as the timing of your ROI investment case and refinancing needs.

NEW KIDS ON THE BLOCK – DIRECT-TO-DEVICE LEO SATELLITES.

A recent X-exchange (from March 2nd):

Elon Musk: “SpaceX just achieved peak download speed of 17 Mb/s from a satellite direct to unmodified Samsung Android Phone.” (note: the speed corresponds to a spectral efficiency of ~3.4 Mbps/MHz/beam).

Reply from user: “That’s incredible … Fixed wireless networks need to be looking over their shoulders?”

Elon Musk: “No, because this is the current peak speed per beam and the beams are large, so this system is only effective where there is no existing cellular service. This services works in partnership with wireless providers, like what @SpaceX and @TMobile announced.”

Figure 9 illustrates LEO satellite direct-to-device communication in a remote area without any terrestrially based communications infrastructure, the satellite being the only means of communication, whether via a normal mobile device or a classical satphone. (Courtesy: DALL-E).

Low Earth Orbit (LEO) Satellite Direct-to-Device technology enables direct communication between satellites in orbit and standard mobile devices, such as smartphones and tablets, without requiring additional specialized hardware. This technology promises to extend connectivity to remote, rural, and underserved areas globally, where traditional cellular network infrastructure is absent or economically unfeasible to deploy. The system can offer lower latency communication by leveraging LEO satellites, which orbit closer to Earth than geostationary satellites, making it more practical for everyday use. The round trip time (RTT), the time it takes for the signal to travel from the satellite to the mobile device and back, is ca. 4 milliseconds for a LEO satellite at 550 km, compared to ca. 240 milliseconds for a geosynchronous satellite (at 36 thousand kilometers altitude).

The key advantage of a satellite in low Earth orbit is that the likelihood of a line-of-sight to a point on the ground is very high, whereas the likelihood of a line-of-sight in terrestrial cellular coverage is, in general, very low. In other words, the cellular signal propagation from a LEO satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite. In other, more simplistic words, the signal propagation directly from the satellite to the mobile device is less compromised than it typically would be from a terrestrial cellular tower to the same mobile device. The difference between free-space propagation, which considers only distance and frequency, and terrestrial signal propagation models, which quantify all the gains and losses experienced by a terrestrial cellular signal, is very substantial and in favor of free-space propagation. As our Earth-bound cellular intuition of signal propagation often gets in the way of understanding the signal propagation from a satellite (or an antenna in the sky in general), I recommend writing down the math using the formula for free-space propagation loss and comparing this with terrestrial cellular link budget models, such as, for example, the COST 231-Hata model (relatively simple) or the more recent 3GPP TR 38.901 model (complex). In rural and sub-urban areas, depending on the environment, indoor coverage may be marginally worse, fairly similar, or even better than from a terrestrial cell tower at a distance. This applies to both the uplink and downlink communications channel between the mobile device and the LEO satellite, and it is also the reason why using higher frequencies (with more frequency bandwidth available) on LEO satellites can work better than in a terrestrial cellular network.
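As a starting point for that exercise, here is a minimal sketch comparing free-space path loss from an overhead LEO satellite with a terrestrial macro-cell estimate using the COST 231-Hata model (suburban/medium-city variant). The carrier frequency, distances, and antenna heights are illustrative assumptions.

```python
import math

# Sketch: free-space path loss from a LEO satellite vs. a terrestrial macro-cell
# path loss estimated with COST 231-Hata (suburban / medium-city variant).
# Validity of COST 231-Hata: roughly 1.5-2 GHz carrier, 1-20 km distance.

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def cost231_hata_db(distance_km: float, freq_mhz: float,
                    h_base_m: float = 30.0, h_mobile_m: float = 1.5) -> float:
    """COST 231-Hata path loss in dB (suburban / medium city)."""
    a_hm = (1.1 * math.log10(freq_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(freq_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(freq_mhz) - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(distance_km))

f_mhz = 1900.0  # illustrative carrier, e.g., a PCS-band frequency
print(f"LEO satellite overhead at 550 km: FSPL ~{fspl_db(550, f_mhz):.0f} dB")
print(f"Terrestrial tower at 10 km (COST 231-Hata): ~{cost231_hata_db(10, f_mhz):.0f} dB")
```

With these assumptions the satellite, despite being 550 km away, shows roughly 20 dB less path loss than a suburban macro site 10 km away, which is exactly the counter-intuitive point made above.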

However, despite its potential to dramatically expand coverage, which after all is what satellites do, LEO Satellite Direct-to-Device technology is not a replacement for terrestrial cellular services and terrestrial communications infrastructures for several reasons: (a) Although the spectral efficiency can be excellent, the frequency bandwidth (in MHz) and data speeds (in Mbps) available through satellite connections are typically lower than those provided by ground-based cellular networks, limiting its use for high-bandwidth applications. (b) The satellite-based D2D services are, in general, capacity-limited and might not be able to handle the higher user density typical of urban areas as efficiently as terrestrial networks, which are designed to accommodate large numbers of users through dense deployment of cell towers. (c) Environmental factors like buildings or bad weather can more significantly impact satellite communications’ reliability and quality than terrestrial services. (d) A satellite D2D service requires regulatory approval (per country), as the D2D frequency will typically be allocated to terrestrial cellular services and will have to be coordinated and managed with any terrestrial use to avoid service degradation (or disruption) for customers using terrestrial cellular services on the same frequency. The satellites will have to be able to switch off their D2D service when covering jurisdictions that have not provided approval or where the relevant frequency or frequencies are in use terrestrially.

Using the NewSpace Index database, updated December 2023, there are currently more than 8,000 Direct-to-Device (D2D), or Direct-2-Cell (D2C), satellites planned for launch, with SpaceX’s Starlink v2 having 7,500 planned. The rest, 795 satellites, are distributed across 6 other satellite operators (e.g., AST SpaceMobile, Sateliot (Spain), Inmarsat (HEO orbit), Lynk, …). If we look at satellites designed for IoT connectivity, we get 5,302 in total, with 4,739 (not including StarLink) still planned, distributed over 50+ satellite operators. The average IoT satellite constellation, including what is currently planned, is ~95 satellites, with the majority targeted for LEO. The satellite operators included in the 50+ count have confirmed funding of at least US$2 billion in total (half of the operators have only confirmed funding without disclosing an amount). About 2,937 satellites (of which 435 have been launched) are planned to serve IoT markets only (note: I think this seems a bit excessive). Swarm Technologies, a SpaceX subsidiary, ranks number 1 in terms of both launched and planned satellites, having launched at least 189 CubeSats (e.g., both 0.25U and 1U types) and planned an additional 150. The second-ranked IoT-only operator is Orbcomm, with 51 satellites launched and an additional 52 planned. The remaining IoT-specific satellite operators have launched on average 5 satellites each and plan on average 55 (across 42 constellations).

There are also 3 satellite operators (i.e., Chinese-based Galaxy Space: 1,000 LEO sats; US-based Mangata Networks: 791 MEO/HEO sats; and US-based Omnispace: 200 (LEO?) sats) that have planned a total of ~2,000 satellites to support 5G applications with their satellite solutions, and one operator (i.e., Hanwha Systems) has planned 2,000 LEO satellites for 6G.

The emergence of LEO satellite direct-to-device (D2D) services, as depicted in Figure 10 below, is at the forefront of satellite communication innovations, offering direct connectivity between satellites and standard devices that bypasses the need for traditional ground-based cellular network infrastructure (e.g., cell towers). This approach benefits from the relatively short distance of hundreds of kilometers between LEO satellites and the Earth, reducing communication latency and broadening bandwidth capabilities compared to their geostationary counterparts. One of the key advantages of LEO D2D services is their ability to provide global coverage with an extensive number of satellites, i.e., in their 100s to 1,000s depending on the targeted quality of service, ensuring that even the most remote and underserved areas have access to reliable communication channels. They are also critical in disaster resilience, maintaining communications when terrestrial networks fail due to emergencies or natural disasters.

Figure 10 This schematic presents the network architecture for satellite-based direct-to-device (D2D) communication facilitated by Low Earth Orbit (LEO) satellites, exemplified by collaborations like Starlink and T-Mobile US, Lynk Mobile, and AST Space Mobile. It illustrates how satellites in LEO enable direct connectivity between user equipment (UE), such as standard mobile devices and IoT (Internet of Things) devices, using terrestrial cellular frequencies and VHF/UHF bands. The system also shows inter-satellite links operating in the Ka-band for seamless network integration, with satellite gateways (GW) linking the space-based network to ground infrastructure, including Points of Presence (PoP) and Internet Exchange Points (IXP), which connect to the wider internet (WWW). This architecture supports innovative services like Omnispace and Astrocast, offering LEO satellite IoT connectivity. The network could be particularly crucial for defense and special operations in remote and challenging environments, such as the deserts or the Arctic regions of Greenland, where terrestrial networks are unavailable. As an example shown here, using regular terrestrial cellular frequencies in both downlink (~300 MHz to 7 GHz) and uplinks (900 MHz or lower to 2.1 GHz) ensures robust and versatile communication capabilities in diverse operational contexts.

While the majority of the 5,000+ Starlink constellation operates in the Ku-band (~13 GHz), at the beginning of 2024, SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., a smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, provides texting capabilities across the USA in areas with no or poor existing cellular coverage. This is fairly similar to services presently offered in comparable coverage areas by, for example, AST SpaceMobile, OmniSpace, and Lynk Global LEO satellite services, with reported maximum downlink speeds approaching 20 Mbps. The so-called Direct-2-Device service, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years and continue to increase the supported user speeds (i.e., utilized terrestrial cellular spectrum) and system capacity in terms of smaller coverage areas and a higher number of satellite beams.
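A simple way to sanity-check such D2D figures is to multiply the allocated bandwidth by an assumed spectral efficiency. The sketch below uses the 5 MHz figure quoted above and two illustrative spectral-efficiency values; the result is beam-level capacity shared by all users in the beam, not a guaranteed user speed.

```python
# Quick check: beam throughput ~ bandwidth x spectral efficiency.
# 5 MHz is the figure quoted above; the spectral-efficiency values are illustrative,
# the higher one being roughly what the quoted 17 Mbps peak over 5 MHz implies.
def beam_throughput_mbps(bandwidth_mhz: float, spectral_eff_mbps_per_mhz: float) -> float:
    return bandwidth_mhz * spectral_eff_mbps_per_mhz

for se in (1.0, 3.4):
    print(f"5 MHz x {se} Mbps/MHz -> ~{beam_throughput_mbps(5, se):.0f} Mbps per beam (shared)")
```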

Table 1 below provides an overview of the top 13 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell, D2C) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023. The satellite constellation rank is based on the number of launched satellites until the end of 2023. Two additional Direct-2-Cell (D2C or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024–2025. One is SpaceX’s Starlink 2nd generation, which launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other D2D (D2C) service is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks, see also the NewSpace constellation information) targets 5G services, with two 5G constellations already launched: Galaxy Space (Yinhe) has launched 8 LEO satellites, with 1,000 planned using the Q- and V-bands (i.e., not a D2D cellular 5G service), and OmniSpace has launched two satellites and appears to have planned a total of 200. Moreover, there is currently one planned constellation targeting 6G by the South Korean Hanwha Group (a bit premature, but interesting to follow nevertheless), with 2,000 6G (LEO) satellites planned.

Most currently launched and planned satellite constellations offering (or planning to provide) Direct-2-Cell services, including IoT and M2M, are designed for low-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

Table 1 An overview of the Top-14 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023.

The deployment of LEO D2D services also navigates a complicated regulatory landscape, with the need for harmonized spectrum allocation across different regions. Managing interference with terrestrial cellular networks and other satellite operations is another interesting, albeit complex, challenge, requiring sophisticated solutions to ensure signal integrity. Moreover, despite the cost-effectiveness of LEO satellites in terms of launch and operation, establishing a full-fledged network for D2D services demands substantial initial investment, covering satellite development, launch, and the setup of supporting ground infrastructure.

LEO satellites with D2D-based capabilities – takeaway:

  • Provides lower-bandwidth services (e.g., GPRS/EDGE/HSDPA-like) where no existing terrestrial cellular service is present.
  • (Re-)use of the terrestrial cellular spectrum on the satellite.
  • D2D-based satellite services may become crucial in business continuity scenarios, providing redundancy and increased service availability to existing terrestrial cellular networks. This is particularly essential as a remedy for emergency response personnel in case terrestrial networks are not functional. Note, however, the limited capacity (due to the small assigned frequency bandwidth) over a large coverage area serving rural and remote areas with little or no cellular infrastructure.
  • Securing regulatory approval for satellite services over independent jurisdictions is a complex and critical task for any operator looking to provide global or regional satellite-based communications. The satellite operator may have to switch off transmission over jurisdictions where no permission has been granted.
  • If the spectrum is also deployed on the ground, satellite use of it must be managed and coordinated (due to interference) with the terrestrial cellular networks.
  • Requires low-utilized or non-utilized cellular spectrum in the terrestrial operator’s spectrum portfolio.
  • D2D-based communications require a more complex and sophisticated satellite design, including the satellite antenna, resulting in higher manufacturing and launch costs.
  • The IoT-only commercial satellite constellation “space” is crowded, with a total of 44 constellations (note: a few operators have several constellations). I assume that many of those plans will eventually not be realized. Note that SpaceX’s Swarm Technologies is leading in terms of total numbers (available in the NewSpace Index database) and will remain a leader from the sheer number of satellites once its plan has been realized. I expect we will see a Chinese constellation in this space as well, unless the capability is built into the Guo Wang constellation.
  • The satellite lifetime in orbit is between 5 and 7 years, depending on the altitude. This timeline also dictates the modernization and upgrade cycle as well as the timing of your ROI investment and refinancing needs.
  • Today’s D2D satellite systems are frequency-bandwidth limited. However, if so designed, satellites could provide a frequency asymmetric satellite-to-device connection. For instance, the downlink from the satellite to the device could utilize a high frequency (not used in the targeted rural or remote area) and a larger bandwidth, while the uplink communication between the terrestrial device and the LEO satellite could use a sufficiently lower frequency and smaller frequency bandwidth.

MAKERS OF SATELLITES.

In the rapidly evolving space industry, a diverse array of companies specializes in manufacturing satellites for Low Earth Orbit (LEO), ranging from small CubeSats to larger satellites for constellations similar to those used by OneWeb (UK) and Starlink (USA). Among these, smaller companies like NanoAvionics (Lithuania) and Tyvak Nano-Satellite Systems (USA) have carved out niches by focusing on modular and cost-efficient small satellite platforms typically below 25 kg. NanoAvionics is renowned for its flexible mission support, offering everything from design to operation services for CubeSats (e.g., 1U, 3U, 6U) and larger small satellites (100+ kg). Similarly, Tyvak excels in providing custom-made solutions for nano-satellites and CubeSats, catering to specific mission needs with a comprehensive suite of services, including design, manufacturing, and testing.

UK-based Surrey Satellite Technology Limited (SSTL) stands out for its innovative approach to small, cost-effective satellites for various applications, with cost-effectiveness meaning achieving the desired system performance, reliability, and mission objectives at a lower cost than traditional satellite projects that easily run into USD 100s of millions. SSTL’s commitment to delivering satellites that balance performance and budget has made it a popular satellite manufacturer globally.

On the larger end of the spectrum, companies like SpaceX (USA) and Thales Alenia Space (France-Italy) are making significant strides in satellite manufacturing at scale. SpaceX has ventured beyond its foundational launch services to produce thousands of small satellites (250+ kg) for its Starlink broadband constellation, which comprises 5,700+ LEO satellites, showcasing mass satellite production. Thales Alenia Space offers reliable satellite platforms and payload integration services for LEO constellation projects.

With their extensive expertise in aerospace and defense, Lockheed Martin Space (USA) and Northrop Grumman (USA) produce various satellite systems suitable for commercial, military, and scientific missions. Their ability to support large-scale satellite constellation projects from design to launch demonstrates high expertise and reliability. Similarly, aerospace giants Airbus Defense and Space (EU) and Boeing Defense, Space & Security (USA) offer comprehensive satellite solutions, including designing and manufacturing small satellites for LEO. Their involvement in high-profile projects highlights their capacity to deliver advanced satellite systems for a wide range of use cases.

Together, these companies, from smaller specialized firms to global aerospace leaders, play crucial roles in the satellite manufacturing industry. They enable a wide array of LEO missions, catering to the burgeoning demand for satellite services across telecommunications, Earth observation, and beyond, thus facilitating access to space for diverse clients and applications.

ECONOMICS.

Before going into details, let’s spend some time on an example illustrating the basic components required for building a satellite and getting it to launch. Here, I point at a super cool alternative to the above-mentioned companies, the USA-based startup Apex, co-founded by CTO Max Benassi (ex-SpaceX and Astra) and CEO Ian Cinnamon. To get an impression of the macro-components of a satellite system, I recommend checking out the Apex webpage and “playing” with their satellite configurator. The basic package comes at a price tag of USD 3.2 million and a 9-month delivery window. It includes a 100 kg satellite bus platform, a power system, a communication system based on the X-band (8–12 GHz), and a guidance, navigation, and control package. The basic package does not include a solar array drive assembly (SADA), which plays a critical role in the operation of satellites by ensuring that the satellite’s solar panels are optimally oriented toward the Sun. Adding the SADA brings with it an additional USD 500 thousand. Also, the propulsion mechanism (e.g., chemical or electric; in general, there are more possibilities) is not provided (+ USD 450 thousand), nor are any services included (e.g., payload & launch vehicle integration and testing, USD 575 thousand). Including SADA, propulsion, and services, Apex will have a satellite launch-ready for close to USD 4.8 million.
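For completeness, the small sketch below simply adds up the configurator items quoted above; the figures are those quoted at the time of writing and should be treated as indicative rather than a current price list.

```python
# Summing the example Apex configuration quoted above (figures in USD millions,
# indicative at the time of writing, not a current price list).
apex_config_musd = {
    "satellite bus, power, X-band comms, GNC (base package)": 3.200,
    "solar array drive assembly (SADA)":                      0.500,
    "propulsion module":                                      0.450,
    "integration, testing and launch-vehicle services":       0.575,
}
total = sum(apex_config_musd.values())
print(f"Launch-ready satellite (excl. payload and launch): ~USD {total:.2f} million")
# -> roughly USD 4.7 million, i.e., the "close to USD 4.8 million" quoted above.
```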

However, we are not done. The above solution still needs to include the so-called payload, which relates to the equipment or instruments required to perform the LEO satellite mission (e.g., broadband communications services), the actual satellite launch itself, and the operational aspects of a successful post-launch (i.e., ground infrastructure and operation center(s)).

Let’s take SpaceX’s Starlink satellite as an example, illustrating mission and payload more clearly. The Starlink satellite’s primary mission is to provide fixed-wireless access broadband internet to an Earth-based fixed antenna (the user terminal). The Starlink payload primarily consists of advanced broadband internet transmission equipment designed to provide high-speed internet access across the globe. This includes phased-array antennas for communication with user terminals on the ground, high-frequency radio transceivers to facilitate data transmission, and inter-satellite links allowing satellites to communicate in orbit, enhancing network coverage and data throughput.

The economic aspects of launching a Low Earth Orbit (LEO) satellite project span a broad spectrum of costs, from the initial concept phase to deployment and operational management. These projects commence with research and development, where significant investments are made in design, engineering, and the iterative process of prototyping and testing to ensure the satellite meets its intended performance and reliability standards in harsh space conditions (e.g., vacuum, extreme temperature variations, radiation, solar flares, high-velocity impacts with micrometeoroids and man-made space debris, erosion, …).

Manufacturing the satellite involves additional expenses, including procuring high-quality components that can withstand space conditions and assembling and integrating the satellite bus with its mission-specific payload. Ensuring the highest quality standards throughout this process is crucial to minimizing the risk of in-orbit failure, which can substantially increase project costs. The payload should be seen as the heart of the satellite’s mission. It could be a set of scientific instruments for measuring atmospheric data, optical sensors for imaging, transponders for communication, or any other equipment designed to fulfill the satellite’s specific objectives. The payload will vary greatly depending on the mission, whether for Earth observation, scientific research, navigation, or telecommunications.

Of course, there are many other types and more affordable options for LEO satellites than a Starlink-like one (although we should not ignore the achievements of SpaceX and should learn from them as much as possible). As seen from Table 1, we have a range of substantially smaller satellite types or form factors. The 1U (i.e., one unit) CubeSat is a satellite with a form factor of 10 x 10 x 11.35 cm that weighs no more than 1.33 kilograms. A rough cost range for manufacturing a 1U CubeSat could be from USD 50 to 100+ thousand, depending on mission complexity and payload components (e.g., commercial-off-the-shelf or application- or mission-specific design). The range covers the costs associated with the satellite’s design, components, assembly, testing, and initial integration efforts. It does not, however, include other significant costs associated with satellite missions, such as launch services, ground station operations, mission control, and insurance, which are likely to (significantly) increase the total project cost. Furthermore, we have additional form factors, such as the 3U CubeSat (10 x 10 x 34.05 cm, <4 kg), with a manufacturing cost in the range of USD 100 to 500+ thousand; the 6U CubeSat (20 x 10 x 34 cm, <12 kg), which can carry more complex payload solutions than the smaller 1U and 3U, with a manufacturing cost in the range of USD 200 thousand to USD 1+ million; and 12U satellites (20 x 20 x 34 cm, <24 kg), which again support complex payload solutions and will, in general, be significantly more expensive to manufacture.

Securing a launch vehicle is one of the most significant expenditures in a satellite project. This cost not only includes the price of the rocket and launch itself but also encompasses integration, pre-launch services, and satellite transportation to the launch site. Beyond the launch, establishing and maintaining the ground segment infrastructure, such as ground stations and a mission control center, is essential for successful satellite communication and operation. These facilities enable ongoing tracking, telemetry, and command operations, as well as the processing and management of the data collected by the satellite.

The SpaceX Falcon 9 rocket is used extensively by other satellite businesses (see Table 1 above) as well as by SpaceX for their own Starlink constellation network. The rocket has a payload capability of ca. 23 thousand kg and a volume handling capacity of approximately 300 cubic meters. SpaceX has launched around 60 Starlink satellites per Falcon 9 mission for the first-generation satellites. The launch cost per first-generation satellite would then be around USD 1 million, using the previously quoted USD 62 million (2018 figure) for a Falcon 9 launch. The second-generation Starlink satellites are substantially more advanced than the first generation. They are also heavier, weighing around a thousand kilograms. A Falcon 9 would only be able to launch around 20 generation-2 satellites (considering only the weight limitation), while a Falcon Heavy could lift ca. 60 second-generation satellites, although at a higher price point of USD 90 million (2018 figure). Thus, the launch cost per satellite would be between USD 1.5 million using Falcon Heavy and USD 3.1 million using Falcon 9. Although the launch cost is based on price figures from 2018, the efficiency gained from re-use may have either kept the cost at that level or reduced it further, particularly with Falcon Heavy.
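
As a back-of-the-envelope check on the per-satellite launch economics above, the arithmetic is simply the launch list price divided by the number of satellites per launch (a sketch using the 2018 figures quoted in the text):

```python
def launch_cost_per_satellite(launch_price_usd: float, satellites_per_launch: int) -> float:
    """Per-satellite launch cost: total launch price divided by manifest size."""
    return launch_price_usd / satellites_per_launch

# Falcon 9, ~60 first-generation Starlink satellites (USD 62 million, 2018 figure)
print(launch_cost_per_satellite(62e6, 60))   # ~USD 1.0 million per gen-1 satellite
# Falcon 9, ~20 heavier second-generation satellites (weight-limited)
print(launch_cost_per_satellite(62e6, 20))   # ~USD 3.1 million per gen-2 satellite
# Falcon Heavy, ~60 second-generation satellites (USD 90 million, 2018 figure)
print(launch_cost_per_satellite(90e6, 60))   # ~USD 1.5 million per gen-2 satellite
```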

Satellite businesses looking to launch small volumes of satellites, such as CubeSats, have a variety of strategies at their disposal to manage launch costs effectively. One widely adopted approach is participating in rideshare missions, where the expenses of a single launch vehicle are shared among multiple payloads, substantially reducing the cost for each operator. This method is particularly attractive due to its cost efficiency and the regularity of missions offered by, for example, SpaceX. Prices for rideshare missions can start from as low as a few thousand dollars for very small payloads (like CubeSats) and go up to several hundred thousand dollars for larger small satellites. For example, SpaceX advertises rideshare prices starting at USD 1 million for payloads up to 200 kg. Alternatively, dedicated small-launcher services cater specifically to the needs of small satellite operators, offering more tailored launch options in terms of timing and desired orbit. Companies such as Rocket Lab (USA) and Astra (USA) have emerged with launch services providing flexibility that rideshare missions might not, although at a somewhat higher cost. However, these costs remain significantly lower than arranging a dedicated launch on a larger vehicle. For example, Rocket Lab’s Electron rocket, specializing in launching small satellites, offers dedicated launches with prices starting around USD 7 million for the entire launch vehicle carrying up to 300 kg. Astra has reported prices in the range of USD 2.5 million for a dedicated LEO launch with their (discontinued) Rocket 3 carrying payloads of up to 150 kg. The cost for individual small satellites will depend on their share of the payload mass and the specific mission requirements.
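
A simple way to compare these options is on a cost-per-kilogram basis, using the indicative list prices and maximum payload masses quoted above (a rough sketch; actual per-satellite pricing depends on the share of payload mass and mission specifics):

```python
# Rough cost-per-kilogram comparison of the launch options mentioned above.
# Prices are the indicative list prices quoted in the text; payload masses are the
# advertised maxima, so real-world figures will differ.
launch_options = {
    "SpaceX rideshare (up to 200 kg)":    (1.0e6, 200),
    "Rocket Lab Electron (up to 300 kg)": (7.0e6, 300),
    "Astra Rocket 3 (up to 150 kg)":      (2.5e6, 150),
}

for name, (price_usd, max_kg) in launch_options.items():
    print(f"{name}: ~USD {price_usd / max_kg:,.0f} per kg")
```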

Satellite ground stations, which consist of arrays of phased-array antennas, are critical for managing the satellite constellation, routing internet traffic, and providing users with access to the satellite network. These stations are strategically located to maximize coverage and minimize latency, ensuring that at least one ground station is within the line of sight of satellites as they orbit the Earth. As of mid-2023, Starlink operated around 150 ground stations worldwide (also called Starlink Gateways), with 64 live and an additional 33 planned in the USA. The cost of constructing a ground station would be between USD 300 thousand and half a million, not including the physical access point, also called the point-of-presence (PoP), and the transport infrastructure connecting the PoP (and gateway) to the internet exchange where we find the internet service providers (ISPs) and the content delivery networks (CDNs). The PoP may add another USD 100 to 200 thousand to the ground infrastructure unit cost. The transport cost from the gateway to the internet exchange can vary a lot depending on the gateway’s location.
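
Using these unit-cost ranges, a rough ground-segment capex roll-up for a Starlink-like gateway footprint might look as follows (a sketch; the gateway count and cost ranges are those quoted above, and transport costs are deliberately excluded):

```python
# Rough ground-segment capex roll-up: gateway construction at USD 300-500k plus a
# point-of-presence (PoP) at USD 100-200k per site, excluding transport from the
# gateway to the internet exchange.
def ground_segment_capex(n_gateways: int,
                         gateway_cost=(300_000, 500_000),
                         pop_cost=(100_000, 200_000)) -> tuple[float, float]:
    low = n_gateways * (gateway_cost[0] + pop_cost[0])
    high = n_gateways * (gateway_cost[1] + pop_cost[1])
    return low, high

low, high = ground_segment_capex(150)   # ~150 Starlink gateways as of mid-2023
print(f"USD {low / 1e6:.0f}M to {high / 1e6:.0f}M")   # ~60M to 105M, transport excluded
```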

Insurance is a critical component of the financial planning for a satellite project, covering risks associated with both the launch phase and the satellite’s operational period in orbit. Insurance generally runs at between 5% and 20% of the total project cost, depending on the satellite value, the track record of the launch vehicle, mission complexity, mission duration (typically 5 to 7 years for a LEO satellite at 500 km), and so forth. Insurance can be broken up into launch insurance and insurance covering the satellite once it is in orbit.

Operational costs, the Opex, include the day-to-day expenses of running the satellite, from staffing and technical support to ground station usage fees.

Regulatory and licensing fees, including frequency allocation and orbital slot registration, ensure the satellite operates without interfering with other space assets. Finally, at the end of the satellite’s operational life, costs associated with safely deorbiting the satellite are incurred to comply with space debris mitigation guidelines and ensure a responsible conclusion to the mission.

The total cost of an LEO satellite project can vary widely, influenced by the satellite’s complexity, mission goals, and lifespan. Effective project management and strategic decision-making are crucial to navigating these expenses, optimizing the project’s budget, and achieving mission success.

Figure 11 illustrates an LEO CubeSat orbiting above the Earth, capturing the satellite’s compact design and its role in modern space exploration and technology demonstration. Note that the CubeSat design comes in several standardized dimensions, with the reference design, also called 1U, being almost 1 thousandth of a cubic meter and weighing less than 1.33 kg. More advanced CubeSat satellites would typically be 6U or higher.

CubeSats (e.g., 1U, 3U, 6U, 12U):

  • Manufacturing Cost: Ranges from USD 50,000 for a simple 1U CubeSat to over USD 1 million for more complex missions supported by a 6U (or higher) CubeSat with advanced payloads (a 12U may amount to several million US dollars).
  • Launch Cost: This can vary significantly depending on the launch provider and the rideshare opportunities, ranging from a few thousand dollars for a 1U CubeSat on a rideshare mission to several million dollars for a dedicated launch of larger CubeSats or small satellites.
  • Operational Costs: Ground station services, mission control, and data handling can add tens to hundreds of thousands of dollars annually, depending on the mission’s complexity and duration.

Small Satellites (25 kg up to 500 kg):

  • Manufacturing Cost: Ranges from USD 500,000 to over USD 10 million, depending on the satellite’s size, complexity, and payload requirements.
  • Launch Cost: While rideshare missions can reduce costs, dedicated launches for small satellites can range from USD 10 million to 62 million (e.g., Falcon 9) and beyond (e.g., USD 90 million for Falcon Heavy).
  • Operational Costs: These are similar to CubeSats, but potentially higher due to the satellite’s larger size and more complex mission requirements, reaching several hundred thousand to over a million dollars annually.

The range for the total project cost of a LEO satellite:

Given these considerations, the total cost range for a LEO satellite project can vary from as low as a few hundred thousand dollars for a simple CubeSat project utilizing rideshare opportunities and minimal operational requirements to hundreds of millions of dollars for more complex small satellite missions requiring dedicated launches and extensive operational support.

It is important to note that these are rough estimates, and the actual cost can vary based on specific mission requirements, technological advancements, and market conditions.
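
Pulling the cost elements above together, a deliberately coarse roll-up of a LEO satellite project might look like the sketch below; all inputs are illustrative placeholders to be replaced with mission-specific estimates, and insurance is modelled as a share of the capital items, in line with the 5% to 20% range discussed earlier:

```python
def leo_project_cost(manufacturing: float, launch: float,
                     annual_opex: float, mission_years: float,
                     insurance_rate: float = 0.10,
                     regulatory_and_deorbit: float = 0.0) -> float:
    """Coarse LEO project cost roll-up (USD). Insurance is approximated as a
    percentage of the capital items (manufacturing + launch)."""
    capital = manufacturing + launch
    return (capital
            + insurance_rate * capital
            + annual_opex * mission_years
            + regulatory_and_deorbit)

# Simple 1U CubeSat on a rideshare vs. a more complex small-satellite mission
print(leo_project_cost(75_000, 10_000, 50_000, 3))          # a few hundred thousand USD
print(leo_project_cost(5e6, 10e6, 500_000, 7, 0.15, 1e6))   # tens of millions USD
```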

CAPACITY AND QUALITY

Figure 12 Satellite-based cellular capacity, or quality, measured by the unit or total throughput in Mbps, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of satellite beams, resulting in cells on the ground.

The overall capacity and quality of a satellite communication system, given in Mbps, is, at a high level, the product of three key factors: (i) the amount of frequency bandwidth in MHz allocated to the satellite operations, multiplied by (ii) the effective spectral efficiency in Mbps per MHz over a unit satellite-beam coverage area, multiplied by (iii) the number of satellite beams that provide the resulting terrestrial cell coverage. Thus, in other words:

Satellite Capacity (in Mbps) =
Frequency Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Beam ×
Number of Beams (or Cells)

Consider a satellite system supporting 8 beams (and thus the equivalent of 8 terrestrial coverage cells), each with 250 MHz allocated within the same spectral frequency range. Each beam can then efficiently support ca. 680 Mbps. This is achieved with an antenna setup that effectively provides a spectral efficiency of ~2.7 Mbps/MHz/cell (or beam) in the downlink (i.e., from the satellite to the ground). Moreover, the satellite will typically have another frequency and antenna configuration that establishes a robust connection to the ground station, which connects to the internet via, for example, third-party internet service providers. The 680 Mbps is then shared among the users within the satellite beam coverage; e.g., if you have 100 customers demanding a service, the speed each would experience on average would be around 7 Mbps. This may not seem very impressive compared to the cellular speeds we are used to getting with an LTE or 5G terrestrial cellular service. However, such speeds are, of course, much better than having no means of connecting to the internet.
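
The worked example can be reproduced directly from the capacity formula above (a minimal sketch; the 250 MHz, 2.7 Mbps/MHz/beam, 8 beams, and 100 active users are the example’s assumptions):

```python
def satellite_capacity_mbps(bandwidth_mhz: float,
                            spectral_efficiency_mbps_per_mhz: float,
                            n_beams: int) -> float:
    """Satellite capacity = frequency bandwidth x effective spectral efficiency x number of beams."""
    return bandwidth_mhz * spectral_efficiency_mbps_per_mhz * n_beams

per_beam = satellite_capacity_mbps(250, 2.7, 1)   # ~675 Mbps per beam (ca. 680 in the text)
total    = satellite_capacity_mbps(250, 2.7, 8)   # ~5.4 Gbps across all 8 beams
per_user = per_beam / 100                         # ~7 Mbps with 100 active users per beam
print(per_beam, total, per_user)
```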

Higher frequencies (i.e., in the GHz range) used to provide terrestrial cellular broadband services are, in general, quite sensitive to the terrestrial environment and non-LoS propagation. It is a basic principle of physics that signal propagation characteristics, including the range and penetration capabilities of electromagnetic waves, are inversely related to frequency. Vegetation and terrain become increasingly critical factors in higher-frequency propagation and the resulting quality of coverage. For example, trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength. Terrains often include varied topographies such as housing, hills, valleys, and flat plains, each affecting signal reach differently. For instance, built-up, hilly, or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further. Cellular mobile operators tend to like high frequencies (in the GHz range) for cellular broadband services because they make substantially more system throughput, in bits per second, available to deliver to our demanding customers than frequencies in the MHz range. As can be observed in Figure 12 above, the frequency bandwidth is a multiplier for the satellite capacity and quality. At the same time, cellular mobile operators tend to “dislike” higher frequencies because of their poorer propagation in terrestrial cellular networks, which results in the need for increased site densification at a significant incremental capital expense.

The key advantage of a LEO satellite is that the likelihood of a line-of-sight to a point on the ground is very high, whereas the likelihood of establishing a line-of-sight in terrestrial cellular coverage is, in general, very low. In other words, the cellular signal propagation from a satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite, which only has to overcome the distance from the satellite antenna to the ground.
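
Because the LEO downlink approximates free-space propagation, the dominant loss term is distance. The standard free-space path loss formula gives a feel for the numbers (a sketch; the 550 km altitude and 12 GHz downlink frequency are illustrative assumptions, not Starlink specifications):

```python
import math

def free_space_path_loss_db(distance_km: float, frequency_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

# Example: a LEO satellite at ~550 km, Ku-band downlink around 12 GHz (illustrative values)
print(round(free_space_path_loss_db(550, 12_000), 1))   # ~168.8 dB
# Terrestrial comparison: a 2.5 km rural macro cell at 1.8 GHz, before any clutter losses
print(round(free_space_path_loss_db(2.5, 1_800), 1))    # ~105.5 dB
```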

Let us first look at the frequency component of the satellite capacity and quality formula above:

FREQUENCY SPECTRUM FOR SATELLITES.

The satellite frequency spectrum encompasses a range of electromagnetic frequencies allocated specifically for satellite communication. These frequencies are divided into bands, commonly known as L-band (e.g., mobile broadband), S-band (e.g., mobile broadband), C-band, X-band (e.g., mainly used by military), Ku-band (e.g., fixed broadband), Ka-band (e.g., fixed broadband), and V-band. Each serves different satellite applications due to its distinct propagation characteristics and capabilities. The spectrum bandwidth used by satellites refers to the width of the frequency range that a satellite system is licensed to use for transmitting and receiving signals.

Careful management of satellite spectrum bandwidth is critical to prevent interference with terrestrial communications systems. Since both satellite and terrestrial systems can operate on similar frequency ranges, there is a potential for crossover interference, which can degrade the performance of both systems. This is particularly important for bands like C-band and Ku-band, which are also used for terrestrial cellular networks and other applications like broadcasting.

Using the same spectrum for both satellite and terrestrial cellular coverage within the same geographical area is challenging due to the risk of interference. Satellites transmit signals over vast areas, and if those signals are on the same frequency as terrestrial cellular systems, they can overpower the local ground-based signals, causing reception issues for users on the ground. Conversely, the uplink signals from terrestrial sources can interfere with the satellite’s ability to receive communications from its service area.

Regulatory bodies such as the International Telecommunication Union (ITU) are crucial in mitigating these interference issues. They coordinate the allocation of frequency bands and establish regulations that govern their use. This includes defining geographical zones where certain frequencies may be used exclusively for either terrestrial or satellite services, as well as setting limits on signal power levels to minimize the chance of interference. Additionally, technology solutions like advanced filtering, beam shaping, and polarization techniques are employed to further isolate satellite communications from terrestrial systems, ensuring that both may coexist and operate effectively without mutual disruption.

The International Telecommunication Union (ITU) has designated several frequency bands for Fixed Satellite Services (FSS) and Mobile Satellite Services (MSS) that can be used by satellites operating in Low Earth Orbit (LEO). The specific bands allocated for satellite services, FSS and MSS, are determined by the ITU’s Radio Regulations, which are periodically updated to reflect the evolving needs of global telecommunications and to address emerging technologies. Here are some of the key frequency bands commonly considered for FSS and MSS with LEO satellites:

V-Band 40 GHz to 75 GHz (microwave frequency range).
The V-band is appealing for Low Earth Orbit (LEO) satellite constellations designed to provide global broadband internet access. LEO satellites can benefit from the V-band’s capacity to support high data rates, which is essential for serving densely populated areas and delivering competitive internet speeds. The reduced path loss at lower altitudes, compared to GEO, also makes the V-band a viable option for LEO satellites. Due to its higher frequencies, the V-band is also significantly more sensitive to atmospheric attenuation (e.g., oxygen absorption around 60 GHz), including rain fade, which is likely to affect signal integrity. This necessitates the development of advanced technologies for adaptive coding and modulation, power amplification, and beamforming to ensure reliable communication under various weather conditions. Several LEO satellite operators have expressed an interest in operationalizing the V-band in their satellite constellations (e.g., Starlink, OneWeb, Kuiper, Lightspeed). This band should be regarded as an emergent LEO frequency band.

Ka-Band 17.7 GHz to 20.2 GHz (Downlink) & 27.5 GHz to 30.0 GHz (Uplink).
The Ka-band offers higher bandwidths, enabling greater data throughput than lower bands. Not surprisingly, this band is favored by high-throughput satellite solutions and is widely used by fixed satellite services (FSS). This makes it ideal for high-speed internet services. However, it is more susceptible to absorption and scattering by atmospheric particles, including raindrops and snowflakes. This absorption and scattering weakens the signal strength by the time it reaches the receiver. To mitigate rain fade effects in the Ka-band, satellite and ground equipment must be designed with higher link margins, incorporating more powerful transmitters and more sensitive receivers. Additionally, adaptive modulation and coding techniques can be employed to adjust the signal dynamically in response to changing weather conditions. Overall, the system is more costly and, therefore, primarily used for satellite-to-earth ground station communications and high-performance satellite backhaul solutions.

For example, Starlink and OneWeb use the Ka-band to connect to satellite Earth gateways and points-of-presence, which connect to internet exchanges and the wider internet. It is worth noting that the terrestrial 5G band n257 (26.5 to 29.5 GHz) falls within the Ka-band’s uplink frequency range. Furthermore, SES’s mPower satellites, operating in Medium Earth Orbit (MEO), operate exclusively in this band, providing internet backhaul services.

Ku-Band 12.75 GHz to 13.25 GHz (Downlink) & 14.0 GHz to 14.5 GHz (Uplink).
The Ku-band is widely used for fixed satellite services (FSS) due to its balance between bandwidth availability and susceptibility to rain fade. It is suitable for broadband services, TV broadcasting, and backhaul connections. For example, Starlink and OneWeb satellites use this band to provide broadband services to Earth-based customer terminals.

X-Band 7.25 GHz to 7.75 GHz (Downlink) & 7.9 GHz to 8.4 GHz (Uplink).
The use of the X-band in satellite applications is governed by international agreements and national regulations to prevent interference between different services and to ensure the efficient use of the spectrum. The X-band is extensively used for secure military satellite communications, offering advantages like high data rates and relative resilience to jamming and eavesdropping. It supports a wide range of military applications, including command, control, communications, computers, intelligence, surveillance, and reconnaissance (i.e., C4ISR) operations. Most defense-oriented satellites operate in geostationary orbit, ensuring constant coverage of specific geographic areas (e.g., Airbus Skynet constellations, Spain’s XTAR-EUR, and France’s Syracuse satellites). Most European LEO defense satellites, used primarily for reconnaissance, are fairly old, with more than 15 years since the first launch, and limited in number (i.e., <10). The most recent European LEO satellite system is the French-based Multinational Space-based Imaging System (MUSIS) and Composante Spatiale Optique (CSO), where the first CSO components were launched in 2018. Few commercial satellites utilize the X-band.

C-Band 3.7 GHz to 4.2 GHz (Downlink) & 5.925 GHz to 6.425 GHz (Uplink)
C-band is less susceptible to rain fade and is traditionally used for satellite TV broadcasting, maritime, and aviation communications. However, parts of the C-band are also being repurposed for terrestrial 5G networks in some regions, leading to potential conflicts and the need for careful coordination. The C-band is primarily used in geostationary orbit (GEO) rather than Low Earth Orbit (LEO), due to the historical allocation of C-band for fixed satellite services (FSS) and its favorable propagation characteristics. I haven’t really come across any LEO constellation using the C-band. GEO FSS satellite operators using this band extensively include SES (Luxembourg), Intelsat (Luxembourg/USA), Eutelsat (France), and Inmarsat (UK).

S-Band 2.0 GHz to 4.0 GHz
S-band is used for various applications, including mobile communications, weather radar, and some types of broadband services. It offers a good compromise between bandwidth and resistance to atmospheric absorption. Both Omnispace (USA) and Globalstar (USA) LEO satellites operate in this band. Omnispace is also interesting as they have expressed an intent to have LEO satellites supporting 5G services in the 3GPP non-terrestrial network band n256, which falls within the S-band.

L-Band 1.0 GHz to 2.0 GHz
L-band is less commonly used for fixed satellite services but is notable for its use in mobile satellite services (MSS), satellite phone communications, and GPS. It provides good coverage and penetration characteristics. Both Lynk Mobile (USA), offering Direct-2-Device, IoT, and M2M services, and Astrocast (Switzerland), with their IoT/M2M services, are examples of LEO satellite businesses operating in this band.

UHF 300 MHz to 3.0 GHz
The UHF band is widely used for satellite communications, including mobile satellite services (MSS), satellite radio, and some types of broadband data services. It is favored for its relatively good propagation characteristics, including the ability to penetrate buildings and foliage. For example, Fossa Systems’ LEO pico-satellites (i.e., 1p form factor) use this frequency band for their IoT and M2M communications services.

VHF 30 MHz to 300 MHz

The VHF band is less commonly used in satellite communications for commercial broadband services. Still, it is important for applications such as satellite telemetry, tracking, and control (TT&C) operations and amateur satellite communications. Its use is often limited due to the lower bandwidth available and the higher susceptibility to interference from terrestrial sources. Swarm Technologies (USA, a SpaceX subsidiary) uses 137-138 MHz (downlink) and 148-150 MHz (uplink); however, it appears that they have stopped taking new devices onto their network. Orbcomm (USA) is another example of a satellite service provider using the VHF band for IoT and M2M communications. There is very limited capacity in this band due to the many other existing use cases, and LEO satellite companies appear to plan to upgrade to the UHF band or to piggyback on direct-2-cell (or direct-2-device) satellite solutions, enabling LEO satellite communications with 3GPP-compatible IoT and M2M devices.

SATELLITE ANTENNAS.

Satellites operating in Geostationary Earth Orbit (GEO), Medium Earth Orbit (MEO), and Low Earth Orbit (LEO) utilize a variety of antenna types tailored to their specific missions, which range from communication and navigation to observation (e.g., signal intelligence). The selection of an antenna is influenced by the satellite’s applications, the characteristics of its orbit, and the coverage area required.

Antenna technology is intrinsically linked to spectral efficiency in satellite communications systems and, of course, in any other wireless system. Antenna designs influence how effectively a communication system can transmit and receive signals within a given frequency band, which is the essence of spectral efficiency (i.e., how much information per unit time, in bits per second, can I squeeze through my communications channel).

Thus, advancements in antenna technology are fundamental to improving spectral efficiency, making it a key area of research and development in the quest for more capable and efficient communication systems.

Parabolic dish antennas are prevalent for GEO satellites due to their high gain and narrow beam width, making them ideal for broadcasting and fixed satellite services. These antennas focus a tight beam on specific areas on Earth, enabling strong and direct signals essential for television, internet, and communication services. Horn antennas, while simpler, are sometimes used as feeds for larger parabolic antennas or for telemetry, tracking, and command functions due to their reliability. Additionally, phased array antennas are becoming more common in GEO satellites for their ability to steer beams electronically, offering flexibility in coverage and the capability to handle multiple beams and frequencies simultaneously.

Phased-array antennas are indispensable for MEO satellites, such as those used in navigation systems like GPS (USA), BeiDou (China), Galileo (Europe), or GLONASS (Russia). These satellite constellations cover large areas of the Earth’s surface and can adjust beam directions dynamically, a critical feature given the satellites’ movement relative to the Earth. Patch antennas are also widely used in MEO satellites, especially for mobile communication constellations, due to their compact and low-profile design, making them suitable for mobile voice and data communications.

Phased-array antennas are very important for LEO satellite use cases as well, which include broadband communication constellations like Starlink and OneWeb. Their (fast) beam-steering capabilities are essential for maintaining continuous communication with ground stations and user terminals as the satellites quickly traverse the sky. Phased-array antennas also allow coverage to be optimized with both narrow and wider fields of view (from the perspective of the satellite antenna), allowing the satellite operator to trade off cell capacity against cell coverage.

Simpler dipole antennas are employed for more straightforward data relay and telemetry purposes in smaller satellites and CubeSats, where space and power constraints are significant factors. Reflectarray antennas, which offer a mix of high gain and beam-steering capabilities, are used in specific LEO satellites for communication and observation applications (e.g., for signal intelligence gathering), combining features of both parabolic and phased-array antennas.

Mission-specific requirements drive the choice of antenna for a satellite. For example, GEO satellites often use high-gain, narrowly focused antennas due to their fixed position relative to the Earth, while MEO and LEO satellites, which move relatively closer to the Earth’s surface, require antennas capable of maintaining stable connections with moving ground terminals or covering large geographical areas.

Advanced antenna technologies such as beamforming, phased arrays, and Multiple Input Multiple Output (MIMO) antenna configurations are crucial in managing and utilizing the spectrum more efficiently. They enable precise targeting of radio waves, minimizing interference and optimizing bandwidth usage. This direct control over the transmission path and signal shape allows more data (bits) to be sent and received within the same spectral space, effectively increasing the communication channel’s capacity. In particular, MIMO antenna configurations and advanced antenna beamforming have enabled terrestrial mobile cellular access technologies (e.g., LTE and 5G) to leapfrog the effective spectral efficiency, broadband speed, and capacity orders of magnitude above and beyond the older 2G and 3G technologies. Similar principles are being deployed today in modern, advanced communications satellite antennas, providing increased capacity and quality within the cellular coverage area provided by the satellite beam.

Moreover, antenna technology developments like polarization and frequency reuse directly impact a satellite system’s ability to maximize spectral resources. Allowing simultaneous transmissions on the same frequency through different polarizations or spatial separations effectively doubles the capacity without needing additional spectrum.

WHERE DO WE END UP.

If all current commercial satellite plans were realized within the next decade, we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half realized by Starlink’s LEO constellation. Imagine the increase in, and the amount of, space debris circling Earth within the next 10 years. This will likely pose a substantial increase in operational risk for new space missions and will have to be addressed urgently.

Over the next decade, we may have at least two major LEO satellite constellations: one from Starlink, with in excess of 12 thousand satellites, and one from China, the Guo Wang (the state network), likewise with around 12 thousand LEO satellites. One global satellite constellation is from an American-based commercial company; the other is a worldwide satellite constellation representing the Chinese state. It would not be too surprising to see that, by 2034, the two satellite constellations have divided the Earth, with one part being serviced by a commercial satellite constellation (e.g., North America, Europe, parts of the Middle East, some of APAC including India, possibly some parts of Africa) and another part served by a Chinese-controlled LEO constellation providing satellite broadband service to China, Russia, significant parts of Africa, and parts of APAC.

Over the next decade, satellite services will undergo transformative advancements, reshaping the architecture of global communication infrastructures and significantly impacting various sectors, including broadband internet, global navigation, Earth observation, and beyond. As these services evolve, we should anticipate major leaps in satellite technologies, driven by innovation in propulsion systems, miniaturization of technology, advancements in onboard processing capabilities, increasing use of AI and machine learning leapfrogging satellites’ operational efficiency and performance, breakthroughs in material science reducing weight and increasing packing density, leapfrogs in antenna technology, and, last but not least, much more efficient use of the radio frequency spectrum. Moreover, we will see breakthrough innovations that allow better coexistence and autonomous collaboration in frequency spectrum utilization between non-terrestrial and terrestrial networks, reducing the need for much regulatory bureaucracy, which might anyway be replaced by decentralized autonomous organizations (DAOs) and smart contracts. This development will be essential as satellite constellations are being integrated into 5G and 6G network architectures as the non-terrestrial cellular access component. This particular topic, like many in this article, is worth a whole new article on its own.

I expect that over the next 10 years we will see electronically steerable phased-array antennas emerge as a notable advancement. These offer increased agility and efficiency in beamforming and signal direction. Their ability to swiftly adjust beams for optimal coverage and connectivity without physical movement makes them perfect for the dynamic nature of Low Earth Orbit (LEO) satellite constellations. This technology will become increasingly cost-effective and energy-efficient, enabling widespread deployment across various satellite platforms (not only LEO designs). The advance in phased-array antenna technology will facilitate a substantial increase in satellite system capacity by increasing the number of beams, allowing variation in beam size (possibly down to the level of an individual customer ground station), and supporting multi-band operations within the same antenna.

Another promising development is the integration of metamaterials in antenna design, which will lead to more compact, flexible, and lightweight antennas. The science of metamaterials is super interesting and relates to manufacturing artificial materials with properties not found in naturally occurring materials, with unique electromagnetic behaviors arising from their internal structure. Metamaterial antennas are going to offer superior performance, including better signal control and reduced interference, which is crucial for maintaining high-quality broadband connections. These materials are also important for substantially reducing the weight of the satellite antenna while boosting its performance. Thus, the technology will also support bringing the satellite launch cost down dramatically.

Although massive MIMO antennas are primarily associated with terrestrial networks, I would also expect massive MIMO technology to find applications in satellite broadband systems. Satellite systems, just like ground-based cellular networks, can significantly increase their capacity and efficiency by utilizing many antenna elements to simultaneously communicate with multiple ground terminals. This could be particularly transformative for next-generation satellite networks, supporting higher data rates and accommodating more users. The technology will increase the capacity and quality of the satellite system dramatically, as it has done in terrestrial cellular networks.

Furthermore, advancements in onboard processing capabilities will allow satellites to perform more complex signal processing tasks directly in space, reducing latency and improving the efficiency of data transmission. Coupled with AI and machine learning algorithms, future satellite antennas could dynamically optimize their operational parameters in real-time, adapting to changes in the network environment and user demand.

Additionally, research into quantum antenna technology may offer breakthroughs in satellite communication, providing unprecedented levels of sensitivity and bandwidth efficiency. Although still early, quantum antennas could revolutionize signal reception and transmission in satellite broadband systems. In the context of LEO satellite systems, I am particularly excited about utilizing the Rydberg effect to enhance system sensitivity, which could lead to massive improvements. The heightened sensitivity of Rydberg atoms to electromagnetic fields could be harnessed to develop ultra-sensitive detectors for radio frequency (RF) signals. Such detectors could surpass the performance of traditional semiconductor-based devices in terms of sensitivity and selectivity, enabling satellite systems to detect weaker signals, improve signal-to-noise ratios, and even operate effectively over greater distances or with less power. Furthermore, space could potentially be the near-ideal environment for operationalizing Rydberg antennas and communications systems, as space offers a near-perfect vacuum, very low temperatures (at least in the Earth’s shadow, or with proper thermal management), relatively little electromagnetic radiation (compared to Earth), and a micro-gravity environment that may facilitate long-range “communications” between Rydberg atoms. This particular topic may be further out in the future than “just” a decade from now, although it may also be with satellites that we see the first promising results of this technology.

One key area of development will be the integration of LEO satellite networks with terrestrial 5G and emerging 6G networks, marking a significant step in the evolution of Non-Terrestrial Network (NTN) architectures. This integration promises to deliver seamless, high-speed connectivity across the globe, including in remote and rural areas previously underserved by traditional broadband infrastructure. By complementing terrestrial networks, LEO satellites will help achieve ubiquitous wireless coverage, facilitating a wide range of applications and use cases from high-definition video streaming to real-time IoT data collection.

The convergence of LEO satellite services with 5G and 6G will also spur network management and orchestration innovation. Advanced techniques for managing interference, optimizing handovers between terrestrial and non-terrestrial networks, and efficiently allocating spectral resources will be crucial. It would be odd not to mention it here, so artificial intelligence and machine learning algorithms will, of course, support these efforts, enabling dynamic network adaptation to changing conditions and demands.

Moreover, the next decade will likely see significant improvements in the environmental sustainability of LEO satellite operations. Innovations in satellite design and materials, along with more efficient launch vehicles and end-of-life deorbiting strategies, will help mitigate the challenges of space debris and ensure the long-term viability of LEO satellite constellations.

In the realm of global connectivity, LEO satellites will have bridged the digital divide, offering affordable and accessible internet services to the billions of people worldwide who are unconnected today. In 2023, the estimate is that about 3 billion people, almost 40% of the world’s population, have never used the internet. In the next decade, it must be our ambition that, with LEO satellite networks, this number is brought down to very near zero. This will have profound implications for education, healthcare, economic development, and global collaboration.

FURTHER READING.

  1. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks.”, Wiley (2024). A recommended reading for deep diving into NTN networks of satellites, typically the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  2. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  3. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  4. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is quite good as well: Starlink.
  5. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023). This link includes a post from Elon Musk commenting on the cost involved in manufacturing the Starlink satellite and the cost of launching SpaceX’s Falcon 9 rocket.
  6. Michael Baylor, “With Block 5, SpaceX to increase launch cadence and lower prices.”, nasaspaceflight.com (May, 2018).
  7. Gwynne Shotwell, TED Talk from May 2018. She quotes here a total of USD 10 billion as a target for the 12,000 satellite network. This is just an amazing visionary talk/discussion about what may happen by 2028 (in 4-5 years ;-).
  8. Juliana Suess, “Guo Wang: China’s Answer to Starlink?”, (May 2023).
  9. Makena Young & Akhil Thadani, “Low Orbit, High Stakes, All-In on the LEO Broadband Competition.”, Center for Strategic & International Studies CSIS, (Dec. 2022).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  12. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. Ambition to have the world’s first global 5G non-terrestrial network. Initial support 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far, only 2 satellites have been launched.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  15. A.K. Singh, M.P. Abegaonkar, and S.K. Koul, “Metamaterials for Antenna Applications”, CRC Press (September 2021).
  16. T.L. Marzetta, E.G. Larsson, H. Yang, and H.Q. Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (November 2016).
  17. G.Y. Slepyan, S. Vlasenko, and D. Mogilevtsev, “Quantum Antennas”, arXiv:2206.14065v2, (June 2022).
  18. R. Huntley, “Quantum Rydberg Receiver Shakes Up RF Fundamentals”, EE Times, (January 2022).
  19. Y. Du, N. Cong, X. Wei, X. Zhang, W. Lou, J. He, and R. Yang, “Realization of multiband communications using different Rydberg final states”, AIP Advances, (June 2022). Demonstrating the applicability of the Rydberg effect in digital transceivers in the Ku and Ka bands.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?

“From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost effective than establishing extra terrestrial infrastructures”.

This article, in a different and somewhat shorter format, has also been published by New Street Research under the title “Stratospheric drones: A game changer for rural networks?”. You will need to register with New Street Research to get access.

As a mobile cellular industry expert and a techno-economist, the first time I was presented with the concept of stratospheric drones, I felt butterflies in my belly. That tingling feeling that I was seeing something that could be a huge disruptor of how mobile cellular networks are designed and built. Imagine getting rid of the profitability-challenged rural cellular networks (i.e., the towers, the energy consumption, the capital infrastructure investments) and, at the same time, offering much better quality to customers in rural areas than is possible with the existing cellular networks we have deployed there. A technology that could fundamentally change the industry’s mobile cellular cost structure for the better, with a quantum leap in quality, and, in general, provide economical broadband services to the unconnected at a fraction of the cost of our traditional ways of building terrestrial cellular coverage.

Back in 2015, I got involved with Deutsche Telekom AG Group Technology, under the leadership of Bruno Jacobfeuerborn, in working out the detailed operational plans, deployment strategies, and, of course, the business case as well as general economics of building a stratospheric cellular coverage platform from scratch with the UK-based Stratospheric Platform Ltd [2] in which Deutsche Telekom is an investor. The investment thesis was really in the way we expected the stratospheric high-altitude platform to make a large part of mobile operators’ terrestrial rural cellular networks obsolete and how it might strengthen mobile operator footprints in countries where rural and remote coverage was either very weak or non-existing (e.g., The USA, an important market for Deutsche Telekom AG).

At the time, our thoughts were to have a stratospheric coverage platform operational by 2025, 10 years after kicking off the program, with more than 100 high-altitude platforms covering a major Western European country, serving rural areas. Reality, as it so often is with genuinely disruptive ideas, is unforgiving. Getting to a stage of deployment and operation at scale of a high-altitude platform is still some years out due to the lack of maturity of the flight platform, including regulatory approvals for operating a HAP network at scale, increasing the operating window of the flight platform, fueling, technology challenges with the advanced antenna system, being allowed to deploy terrestrially licensed cellular spectrum above terra firma, etc. Many of these challenges are progressing well, although slowly.

Globally, various companies are actively working on developing stratospheric drones to enhance cellular coverage. These include aerospace and defense giants like Airbus, advancing its Zephyr drone, and BAE Systems, collaborating with Prismatic on their PHASA-35 UAV. One of the most exciting HAPS companies focusing on developing world-leading high-altitude aircraft that I have come across during my planning work on how to operationalize a stratospheric cellular coverage platform is the German company Leichtwerk AG, which has its hydrogen-fueled StratoStreamer as well as a solar-powered platform under development, with the StratoStreamer being close to production-ready. Telecom companies like Deutsche Telekom AG and BT Group are experimenting with hydrogen-powered drones in partnership with Stratospheric Platforms Limited. Through its subsidiary HAPSMobile, SoftBank is also a significant player with its Sunglider project. Additionally, entities like China Aerospace Science and Technology Corporation and Cambridge Consultants contribute to this field by co-developing enabling technologies (e.g., advanced phased-array antennas, fuel technologies, material science, …) critical for the success and deployability of high-altitude platforms at scale, aiming to improve connectivity in rural, remote, and underserved areas.

The work on integrating High Altitude Platform (HAP) networks with terrestrial cellular systems involves significant coordination with international regulatory bodies like the International Telecommunication Union Radiocommunication Sector (ITU-R) and the World Radiocommunication Conference (WRC). This process is crucial for securing permission to reuse terrestrial cellular spectrum in the stratosphere. Key focus areas include negotiating the allocation and management of frequency bands for HAP systems, ensuring they don’t interfere with terrestrial networks. These efforts are vital for successfully deploying and operating HAP systems, enabling them to provide enhanced connectivity globally, especially in rural areas where terrestrial cellular frequencies are already in use, and in remote and underserved regions. At the latest conference, WRC-23, SoftBank successfully gained approval within the Asia-Pacific region to use mobile spectrum bands for stratospheric drone-based mobile broadband cellular services.

Most mobile operators have at least 50% of their cellular network infrastructure assets in rural areas. While necessary for providing the coverage that mobile customers have come to expect everywhere, these sites carry only a fraction of the total mobile traffic. Individually, rural sites have poor financial returns due to their proportional operational and capital expenses.

In general, the Opex of the cellular network takes up between 50% and 60% of the Technology Opex, and at least 50% of that can be attributed to maintaining and operating the rural part of the radio access network. Capex is more cyclical than Opex due to, for example, the modernization of radio access technology. Nevertheless, over a typical modernization cycle (5 to 7 years), the rural network demands a slightly smaller, but broadly similar, share of Capex as it does of Opex. Typically, the Opex share of the rural cellular network may be around 10% of the corporate Opex, and its associated total cost is between 12% and 15% of total expenses.
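
The rural share quoted above follows from a simple chain of shares, which can be made explicit (a sketch using mid-range values from the text; the Technology-Opex share of corporate Opex is my own assumption, chosen so the result lands near the ~10% mentioned):

```python
# Rough chain: radio network Opex share of Technology Opex, rural share of that,
# and Technology Opex share of corporate Opex (the last is an assumption and
# varies by operator).
ran_share_of_tech_opex  = 0.55   # 50-60% of Technology Opex
rural_share_of_ran_opex = 0.50   # at least 50% attributed to rural sites
tech_share_of_corp_opex = 0.35   # assumed, for illustration only

rural_share_of_corp_opex = (ran_share_of_tech_opex
                            * rural_share_of_ran_opex
                            * tech_share_of_corp_opex)
print(f"Rural RAN ~{rural_share_of_corp_opex:.0%} of corporate Opex")   # ~10%
```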

The global telecom towers market size in 2023 is estimated at ca. 26+ billion euros, ca. 2.5% of total telecom turnover, with a projected CAGR of 3.3% from now to 2030. The top 10 tower management companies manage close to 1 million towers worldwide for mobile CSPs. Although many mobile operators have chosen to spin off their passive site infrastructure, there are still some that may yet spin off their cellular infrastructure to one of the many tower management companies, captive or independent, such as American Tower (224,019+ towers), Cellnex Telecom (112,737+ towers), Vantage Towers (46,100+ towers), GD Towers (41,600+ towers), etc.

IMAGINE.

Focusing on low-profitability or unprofitable rural cellular coverage.

Imagine an alternative coverage technology to the normal cellular one all mobile operators are using, one that would allow them to do without the costly, low-profitability rural cellular network they have today to satisfy their customers’ expectations of high-quality, ubiquitous cellular coverage.

For the alternative technology to be attractive, it would need to deliver at least the same quality and capacity as the existing terrestrial-based cellular coverage for substantially better economics.

If a mobile operator with a 40% EBITDA margin did not need its rural cellular network, it could improve its margin by a sustainable 5 percentage points and increase its cash generation in relative terms by 50% (i.e., from 0.2×Revenue to 0.3×Revenue), assuming a capex-to-revenue ratio of 20% before implementing the technology, reduced to 15% after, due to avoiding modernization and capacity investments in the rural areas.
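
The arithmetic behind this claim is straightforward and can be laid out explicitly (a sketch of the illustrative operator described above, using EBITDA margin minus the capex-to-revenue ratio as a simple cash-generation proxy):

```python
def cash_conversion(ebitda_margin: float, capex_to_revenue: float) -> float:
    """Operating cash generation proxy as a share of revenue: EBITDA margin minus capex ratio."""
    return ebitda_margin - capex_to_revenue

before = cash_conversion(0.40, 0.20)          # 0.20 x Revenue
after  = cash_conversion(0.40 + 0.05, 0.15)   # margin +5pp, capex ratio down to 15% -> 0.30 x Revenue
print(before, after, (after - before) / before)   # 0.2, 0.3, +50%
```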

Imagine that the alternative technology would provide a better cellular quality to the consumer for a quantum leap reduction of the associated cost structure compared to today’s cellular networks.

Such an alternative coverage technology might also impact the global tower companies’ absolute level of sustainable tower revenues, with a substantial proportion of revenue related to rural site infrastructure being at risk.

Figure 1 An example of an unmanned autonomous stratospheric coverage platform. Source: Cambridge Consultants presentation (see reference [2]) based on their work with Stratospheric Platforms Ltd (SPL) and SPL’s innovative high-altitude coverage platform.

TERRESTRIAL CELLULAR RURAL COVERAGE – A MATTER OF POOR ECONOMICS.

When considering the quality we experience in a terrestrial cellular network, a comprehensive understanding of various environmental and physical factors is crucial to predicting the signal quality accurately. All these factors generally work against cellular signal propagation regarding how far the signal can reach from the transmitting cellular tower and the achievable quality (e.g., signal strength) that a customer can experience from a cellular service.

Firstly, the terrain plays a significant role. Rural landscapes often include varied topographies such as hills, valleys, and flat plains, each affecting signal reach differently. For instance, hilly or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further.

At higher frequencies (i.e., above 1 GHz), vegetation becomes an increasingly critical factor to consider. Trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength.

The height and placement of transmitting and receiving antennas are also vital considerations. In rural areas, where there are fewer tall buildings, the height of the antenna can have a pronounced effect on the line of sight and, consequently, on the signal coverage and quality. Elevated antennas mitigate the impact of terrain and vegetation to some extent.

Furthermore, the lower density of buildings in rural areas means fewer reflections and less multipath interference than in urban environments. However, larger structures, such as farm buildings or industrial facilities, must be factored in, as they can obstruct or reflect signals.

Finally, the distance between the transmitter and receiver is fundamental to signal propagation. With typically fewer cell towers spread over larger distances, understanding how signal strength diminishes with distance is critical to ensuring reliable coverage at a high quality, such as high cellular throughput, as the mobile customer expects.

The typical way for a cellular operator to mitigate the environmental and physical factors that inevitably result in loss of signal strength and reduced cellular quality (i.e., sub-standard cellular speed) is to build more sites and thus incur increasing Capex and Opex in areas that, in general, have poor economic payback on any cellular assets. Thus, such investments make an already poor economic situation even worse, as the rural cellular network generally has very low utilization.

Figure 2 Cellular capacity, or quality, measured by the unit or total throughput, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of cells or capacity units deployed. When considering the effective spectral efficiency, one needs to consider the possible “boost” that a higher-order MIMO or Advanced Antenna System will bring over and above a Single Input Single Output (SISO) antenna.

As our alternative technology would also need to provide at least the same quality and capacity, it is worth exploring what can be expected in terms of rural terrestrial capacity. In general, the cellular capacity (and quality) can be written as (also shown in Figure 2 above):

Throughput (in Mbps) =
Spectral Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Cell ×
Number of Cells

We need to keep in mind that an additional important factor when considering quality and capacity is that the higher the operational frequency, the smaller the coverage radius (all else being equal). Typically, we can improve the radius at higher frequencies by utilizing advanced antenna beamforming, that is, concentrating the radiated power per unit coverage area, which is why you will often hear that the 3.6 GHz downlink coverage radius is similar to that of 1800 MHz (or PCS). This 3.6 GHz vs. 1.8 GHz coverage radius comparison is made when not all else is equal: it compares a situation where the 1800 MHz (or PCS) radiated power is spread out over the whole coverage area with one where the 3.6 GHz (or C-band in general) solution makes use of beamforming, where the transmitted energy density is high, allowing the signal to reach the customer at a range that would not be possible if the 3.6 GHz radiated power were spread out over the cell like in the 1800 MHz example.

As an example, take an average Western European rural 5G site with all cellular bands between 700 and 2100 MHz activated. The site will have a total of 85 MHz DL and 75 MHz UL, with the 10 MHz difference between DL and UL due to a Supplementary Downlink (SDL) band being operational on the site. In our example, we will be optimistic and assume that the effective spectral efficiency is 2 Mbps per MHz per cell (averaged over all bands and antenna configurations), which would indicate a fair amount of 4×4 and 8×8 MIMO antenna systems deployed. Thus, the unit throughput we would expect to be supplied by the terrestrial rural cell would be 170 Mbps (i.e., 85 MHz × 2.0 Mbps/MHz/Cell). With a rural cell coverage radius between 2 and 3 km, we then have an average throughput per square kilometer of ca. 9 Mbps/km². Due to the low demand and the large amount of frequency bandwidth available per active customer, DL speeds exceeding 100+ Mbps should be relatively easy to sustain with 5G standalone, with uplink speeds being more compromised due to the larger coverage areas. Obviously, the rural quality can be improved further by deploying advanced antenna systems and increasing the share of higher-order MIMO antennas in general, as well as by increasing the rural site density. However, as already pointed out, this would not be an economically reasonable approach.
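
For completeness, here is the rural example expressed in terms of the Figure 2 formula (a sketch; the 85 MHz, 2 Mbps/MHz/cell, and 2.5 km radius are the assumptions used above, and the cell is approximated as a circle):

```python
import math

def cell_throughput_mbps(bandwidth_mhz: float, spectral_eff_mbps_per_mhz: float) -> float:
    """Per-cell throughput = spectral bandwidth x effective spectral efficiency."""
    return bandwidth_mhz * spectral_eff_mbps_per_mhz

dl_bandwidth_mhz = 85    # all bands between 700 and 2100 MHz on the example rural site
spectral_eff     = 2.0   # Mbps/MHz/cell, optimistic average across bands and antenna setups
radius_km        = 2.5   # mid-point of the 2-3 km rural cell radius

throughput = cell_throughput_mbps(dl_bandwidth_mhz, spectral_eff)   # 170 Mbps per cell
area_km2   = math.pi * radius_km ** 2                               # ~19.6 km2 per cell
print(throughput, round(throughput / area_km2, 1))                  # ~8.7 Mbps/km2 (ca. 9 in the text)
```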

THE ADVANTAGE OF SEEING FROM ABOVE.

Figure 3 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a stratospheric drone or high-altitude platform (“Antenna-in-the-Sky”). The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which is primarily determined by distance, as the path approximates free-space propagation. The situation is very different for a terrestrial cellular tower, whose radiated signal is substantially impacted by the environment as well as by physical factors.

It may sound silly to talk about an alternative coverage technology that could replace the need for the cellular tower infrastructure that today is critical for providing mobile broadband coverage to, for example, rural areas. What alternative coverage technologies should we consider?

If, instead of relying on terrestrial tower infrastructure, we could move the cellular antenna, and possibly the radio node itself, to the sky, we would have a situation where most points on the ground would be in line of sight of the “antenna-in-the-sky.” The antenna-in-the-sky idea is a game changer in terms of coverage compared to conventional terrestrial cellular coverage, where environmental and physical factors dramatically reduce signal propagation and signal quality.

The key advantage of an antenna in the sky (AIS) is that the likelihood of a line-of-sight to a point on the ground is very high compared to establishing a line-of-sight for terrestrial cellular coverage that, in general, would be very low. In other words, the cellular signal propagation from an AIS closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our antenna in the sky.

Over the last ten years, several technology candidates for our antenna-in-the-sky solution have emerged, aiming to provide terrestrial broadband services as a substitute for, or enhancement of, terrestrial mobile and fixed broadband services. In the following, I will describe two distinct types of antenna-in-the-sky solutions: (a) Low Earth Orbit (LEO) satellites, operating between 500 and 2,000 km above Earth, that provide terrestrial broadband services such as we know from Starlink (SpaceX), OneWeb (Eutelsat Group), and Kuiper (Amazon), and (b) so-called High Altitude Platforms (HAPS), operating at altitudes between 15 and 30 km (i.e., in the stratosphere). Such platforms are still in the research and trial stages but are very promising technologies to substitute or enhance rural broadband services. The HAP is intended to be unmanned, highly autonomous, and ultimately operational in the stratosphere for extended periods (weeks to months), fueled by green hydrogen and possibly solar power. The high-altitude platform is thus also an unmanned aerial vehicle (UAV), although I will use the terms stratospheric drone and HAP interchangeably in the following.

Low Earth Orbit (LEO) satellites and High Altitude Platforms (HAPs) represent two distinct approaches to providing high-altitude communication and observation services. LEO satellites, operating between 500 km and 2,000 km above the Earth, orbit the planet, offering broad global coverage. The LEO satellite platform is ideal for applications like satellite broadband internet, Earth observation, and global positioning systems. However, deploying and maintaining these satellites involves complex, costly space missions and sophisticated ground control, although, as SpaceX has demonstrated with the Starlink LEO satellite fixed broadband platform, the unit economics of the satellites improve significantly with scale (i.e., the number of satellites) once the launch cost is included.

Figure 4 illustrates a non-terrestrial network architecture consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users. Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service, and that the satellites are interconnected. The user terminal (UT) dynamically aligns itself towards the best-quality connection provided by the satellites within the UT’s field of vision.

Figure 4 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users (e.g., Starlink, Kuiper, OneWeb, …). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of a LEO satellite constellation is between 300 and 2,000 km. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal antenna (UT) dynamically orients itself towards the best line-of-sight (in terms of signal quality) to a satellite within the UT’s field-of-view (FoV). The FoV has not been shown in the picture above so as not to overcomplicate the illustration. It should be noted that, just like with the drone, it is possible to integrate the complete gNB on the LEO satellite. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

On the other hand, HAPs, such as unmanned (autonomous) stratospheric drones, operate at altitudes of approximately 15 km to 30 km in the stratosphere. Unlike LEO satellites, the stratospheric drone can hover or move slowly over specific areas, remaining effectively stationary relative to the Earth’s surface. This characteristic makes them more suitable for localized coverage tasks like regional broadband, surveillance, and environmental monitoring. The deployment and maintenance of the stratospheric drones are managed from the Earth’s surface and do not require space launch capabilities. Furthermore, enhancing and upgrading the HAPs is straightforward, as they will regularly be on the ground for fueling and maintenance. Such upgrades are not possible with an operational LEO satellite, where any enhancement has to wait for a subsequent satellite generation and a new launch.

Figure 5 illustrates the high-level network architecture of an unmanned autonomous stratospheric drone-based constellation providing terrestrial cellular broadband services to terrestrial mobile users delivered to their normal 5G terminal equipment. Each hexagon represents a beam arising from the phased-array antenna integrated into the drone’s wingspan. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The drone-based non-terrestrial network is drawn consistent with the architectural radio access network (RAN) elements from Open RAN, e.g., Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU). It should be noted that the whole 5G gNB (the 5G NodeB), including the CU, could be integrated into the stratospheric drone, and in fact, so could the 5G standalone (SA) packet core, enabling full private mobile 5G networks for defense and disaster scenarios or providing coverage in very remote areas with little possibility of ground-based infrastructure (e.g., the arctic region, or desert and mountainous areas).

Figure 5 illustrates a Non-Terrestrial Network consisting of a stratospheric High Altitude Platform (HAP) drone-based constellation providing terrestrial Cellular broadband services to terrestrial mobile users delivered to their normal 5G terminal equipment. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The operating altitude of a HAP constellation is between 10 to 50 km with an optimum of around 20 km. It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to contemplate having the gNB (full 5G radio node) in the stratospheric drone entirely, which would allow easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

The unique advantages of the HAP operating in the stratosphere are: (1) the altitude is advantageous for providing wide-area cellular coverage with near-ideal quality, above and beyond what is possible with conventional terrestrial cellular coverage, because the likelihood of line-of-sight is very high and the environmental and physical factors that substantially reduce signal propagation and quality in a terrestrial coverage solution largely do not apply; and (2) the stratosphere is characterized by more stable atmospheric conditions than the troposphere below it. This stability allows the stratospheric drone to maintain a consistent position and altitude with less energy expenditure. The stratosphere also offers more consistent and direct sunlight exposure for a solar-powered HAP, with less atmospheric attenuation. Moreover, due to the thinner atmosphere at stratospheric altitudes, the stratospheric drone experiences lower air resistance (drag), increasing energy efficiency and, therefore, operational airtime.

Figure 6 illustrates Leichtwerk AG’s StratoStreamer HAP design, which is near production-ready. Leichtwerk AG works closely with EASA towards the type certificate that would make it possible to operationalize a drone constellation in Europe. The StratoStreamer has a wingspan of 65 meters and can carry a payload of 100+ kg. Courtesy: Leichtwerk AG.

Each of these solutions has its unique advantages and limitations. LEO satellites provide extensive coverage but come with higher operational complexities and costs. HAPs offer more focused coverage and are easier to manage, but they lack the global reach of LEO satellites. The choice between the two depends on the specific requirements of the intended application, including coverage area, budget, and infrastructure capabilities.

In an era where digital connectivity is indispensable, stratospheric drones could emerge as a game-changing technology. These unmanned (autonomous) drones, operating in the stratosphere, offer unique operational and economic advantages over terrestrial networks and are even seen as competitive alternatives to low earth orbit (LEO) satellite networks like Starlink or OneWeb.

STRATOSPHERIC DRONES VS TERRESTRIAL NETWORKS.

Stratospheric drones, positioned much closer to the Earth’s surface than satellites, provide distinct signal strength and latency benefits. The HAP’s vantage point in the stratosphere (around 20 km above the Earth) ensures a high probability of line-of-sight with terrestrial user devices, mitigating the adverse effects of terrain obstacles that frequently challenge ground-based networks. This capability is particularly beneficial in rural areas in general, and in mountainous or densely forested areas, where conventional cellular towers struggle to provide consistent coverage.

Why the stratosphere? The stratosphere is the layer of Earth’s atmosphere located above the troposphere, which is the layer where weather occurs. The stratosphere is generally characterized by stable, dry conditions with very little water vapor and minimal horizontal winds. It is also home to the ozone layer, which absorbs and filters out most of the Sun’s harmful ultraviolet radiation. It is also above the altitude of commercial air traffic, which typically flies at altitudes ranging from approximately 9 to 12 kilometers (30,000 to 40,000 feet). These conditions (in addition to those mentioned above) make operating a stratospheric platform very advantageous.

Figure 6 illustrates the coverage fundamentals of (a) a terrestrial cellular radio network, with the signal strength and quality degrading increasingly as one moves away from the antenna, and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High-Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal and quality from a terrestrial cellular site, which is influenced by its environment and physical factors and by the fact that LoS is much less likely in a conventional terrestrial cellular network. It is worth keeping in mind that the coverage scenarios where a stratospheric drone or a low earth orbit satellite may excel in particular are rural areas and outdoor coverage in denser urban areas. In urban areas, the clutter, i.e., environmental features and objects, makes line-of-sight more challenging, impacting the strength and quality of the radio signals.

Figure 6 The chart above illustrates the coverage fundamentals of (a) a terrestrial cellular radio network with the signal strength and quality degrading increasingly as one moves away from the antenna and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal & quality from a terrestrial cellular site that is influenced by its environment and physical factors and the fact that LoS is much less likely in a conventional terrestrial cellular network.

From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost-effective than establishing extensive terrestrial infrastructure, especially in remote or rural areas. The setup and operational costs of cellular towers, including land acquisition, construction, and maintenance, are substantially higher compared to the deployment of stratospheric drones. These aerial platforms, once airborne, can cover vast geographical areas, potentially rendering numerous terrestrial towers redundant. At an operating height of 20 km, one would expect a coverage radius ranging from 20 km up to 500 km, depending on the antenna system, application, and business model (e.g., terrestrial broadband services, surveillance, environmental monitoring, …).

The stratospheric drone-based coverage platform (and by platform, I mean the complete infrastructure that will replace the terrestrial cellular network) will consist of unmanned autonomous drones with a considerable wingspan (e.g., 747-like, ca. 69 meters). For example, the German Leichtwerk StratoStreamer has a wingspan of 65 meters, a wing area of 197 square meters, and a payload of 120+ kg (note: in comparison, a Boeing 747 has ca. 500+ m² of wing area, but its payload is obviously much, much higher, in the range of 50 to 60 metric tons). Leichtwerk AG works closely with the European Union Aviation Safety Agency (EASA) in order to achieve the type certificate that would allow the HAPS to integrate into civil airspace (see ref. [34] for what that means).

An advanced antenna system is positioned under the wings (or the belly) of the drone. I will assume that the coverage radius provided by a single drone is 50 km, but it can dynamically be made smaller or larger depending on the coverage scenario and use case. The drone-based advanced antenna system breaks the coverage area (ca. six and a half thousand plus square kilometers) into 400 patches (a number that can be increased substantially), averaging approx. 16 km² per patch with a radius of ca. 2.5 km. Due to its near-ideal cellular link budget, the effective spectral efficiency is expected to be initially around 6 Mbps per MHz per cell. Additionally, the drone does not have the same spectrum limitations as a rural terrestrial site and would be able to support frequency bands in the downlink from ~900 MHz up to 3.9 GHz (and possibly higher, although likely with different antenna designs). Due to the HAP altitude, the Earth-to-HAP uplink will be limited to lower-frequency spectrum to ensure that a good signal quality is received at the stratospheric antenna; it is prudent to assume a limit of 2.1 GHz to possibly 2.6 GHz. All this is under the assumption that the stratospheric drone operator has achieved regulatory approval for operating the terrestrial cellular spectrum from their coverage platform. It should be noted that today, cellular frequency spectrum approved for terrestrial use cannot be used at altitude unless regulatory permission has been given (more on this later).

Let’s look at an example. We would need ca. 46 drones to cover the whole of Germany with the above-assumed specifications. Furthermore, if we take the average spectrum portfolio of the three main German operators, the stratospheric drone could operate with up to 145 MHz in the downlink and at least 55 MHz in the uplink (i.e., limiting the UL to bands up to and including 2.1 GHz). Using the HAP DL spectral efficiency and coverage area, we get a throughput density of 70+ Mbps/km² and an effective rural cell throughput of 870 Mbps. In terrestrial cellular coverage, the contribution to quality from the higher frequency bands degrades rapidly with the distance to the antenna. This is not the case for HAP-based coverage, due to its near-ideal signal propagation.
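
For the curious, a minimal sketch of the arithmetic behind these figures, using the assumptions above (50 km drone radius, 400 beams of ~16 km², 145 MHz DL at 6 Mbps/MHz/cell, Germany ≈ 357,600 km²):

import math

germany_km2 = 357_600
drone_area_km2 = math.pi * 50.0 ** 2                 # ~7,850 km2 per drone
print("drones needed:", math.ceil(germany_km2 / drone_area_km2))   # -> 46

beam_throughput_mbps = 145.0 * 6.0                    # 870 Mbps per beam
print("per beam:", beam_throughput_mbps, "Mbps")
# Normalised per ~16 km2 beam this is ~55 Mbps/km2; normalised per the ~12 km2
# average terrestrial rural cell area quoted below it is ~70 Mbps/km2.
print("per 16 km2 beam:", round(beam_throughput_mbps / 16, 1), "Mbps/km2")
print("per 12 km2 cell:", round(beam_throughput_mbps / 12, 1), "Mbps/km2")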

In comparison, the three incumbent German operators have on average ca. 30±4 thousand sites per operator, with an average terrestrial coverage area of 12 km² and a coverage radius of ca. 2.0 km (i.e., smaller in cities, ~1.3 km, larger in rural areas, ~2.7 km). Assume that the average annual cost of ownership related only to the passive part of the site is 20+ thousand euros, and that 50% of the 30 thousand sites (expect a higher number) would be redundant as the rural coverage would be replaced by stratospheric drones. Such a site reduction would conservatively lead to a minimum gross cost reduction of 300 million euros annually (not considering the cost of the alternative coverage technology).

In our example, the question is whether we can operate a stratospheric drone-based platform covering rural Germany for less than 300 million euros yearly. Let’s examine this question. Say the stratospheric drone price is 1 million euros per piece (similar to the current Starlink satellite price, excluding the launch cost, which would add another 1.1 million euros to the satellite cost). For redundancy and availability purposes, we assume we need 100 stratospheric drones to cover rural Germany, allowing us to decommission on the order of 15 thousand rural terrestrial sites. The decommissioning cost and the right economic timing of tower contract terminations need to be considered. Due to the standard long-term contracts, it may be 5 (optimistic) to 10+ (realistic) years before the rural network termination could be completed. Many telecom businesses that have spun out their passive site infrastructure have done so in mutual captivity with the tower management company and may have committed to very “sticky” contracts with very little flexibility in terms of site termination at scale (e.g., only 2% of the total portfolio allowed annually).

We have a capital expense of 100 million euros for the stratospheric drones. We also have to establish the supporting infrastructure (e.g., ground stations, airfield suitability rework, development, …) and consider operational expenses. The ballpark figure would be around 100 million euros of Capex for establishing the supporting infrastructure and another 30 million euros in annual operational expenses. In terms of steady-state Capex, it should be at most 20 million euros per year. In our example, the terrestrial rural network would have cost 3 billion euros, mainly Opex, over ten years, compared to roughly 700 million euros, a little less than half of it Opex, for the stratospheric drone-based platform (not considering inflation).
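
The ten-year arithmetic, using only the assumptions stated above (a rough sketch; inflation, financing, and decommissioning costs ignored):

YEARS = 10

# Terrestrial rural network: ~15,000 redundant sites at ~20k EUR per site per year.
terrestrial_meur = 15_000 * 0.02 * YEARS              # ~3,000 MEUR, essentially all opex

# Stratospheric drone platform (figures in millions of euros).
drone_capex  = 100 * 1.0        # 100 drones at ~1 MEUR each
infra_capex  = 100              # ground stations, airfield rework, development
steady_capex = 20 * YEARS       # <=20 MEUR/year steady-state capex
opex         = 30 * YEARS       # ~30 MEUR/year operations
drone_meur   = drone_capex + infra_capex + steady_capex + opex

print(f"terrestrial rural, 10 years: ~{terrestrial_meur:,.0f} MEUR")
print(f"drone platform, 10 years:    ~{drone_meur:,.0f} MEUR "
      f"({opex / drone_meur:.0%} of it opex)")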

The economics of a stratospheric unmanned and autonomous drone-based coverage platform should thus be superior to those of the current terrestrial cellular coverage platform. As the stratospheric coverage platform scales and increasingly more stratospheric drones are deployed, the unit price is also likely to fall accordingly.

SPECTRUM USAGE RIGHTS – YET ANOTHER CRITICAL PIECE.

It should be emphasized that the deployment of cellular frequency spectrum in stratospheric and LEO satellite contexts is governed by a combination of technical feasibility, regulatory frameworks, coordination to prevent interference, and operational needs. The ITU, along with national regulatory bodies, plays a central role in deciding the operational possibilities and balancing the needs and concerns of various stakeholders, including satellite operators, terrestrial network providers, and other spectrum users. Today, there are many restrictions and direct regulatory prohibitions on repurposing terrestrially assigned cellular frequencies for non-terrestrial purposes.

The role of the World Radiocommunication Conference (WRC) is pivotal in managing the global radio-frequency spectrum and satellite orbits. Its decisions directly impact the development and deployment of various radiocommunication services worldwide, ensuring their efficient operation and preventing interference across borders. The WRC’s work is fundamental to the smooth functioning of global communication networks, from television and radio broadcasting to cellular networks and satellite-based services. The WRC is typically held every three to four years, with the latest one, WRC-23, held in Dubai at the end of 2023; reference [13] provides the provisional final acts of WRC-23 (December 2023). In a landmark recommendation, WRC-23 relaxed the terrestrial-only conditions for the 698–960 MHz, 1,710–2,170 MHz, and 2,500–2,690 MHz frequency bands to also apply to high-altitude platform station (HAPS) base stations (“Antennas-in-the-Sky”). It should be noted that the exact frequency ranges and conditions differ slightly depending on which of the three ITU-R regions (with exceptions for particular countries within a region) the system will be deployed in. Also, HAPS systems do not enjoy protection or priority over the existing terrestrial use of those frequency bands. It is important to note that the WRC-23 recommendation only applies to coverage platforms (i.e., HAPS) operating at 20 to 50 km altitude. This WRC-23 frequency-band relaxation does not apply to satellite operation. With the recognized importance of non-terrestrial networks and the current standardization efforts (e.g., towards 6G), it is expected that the fairly restrictive regime on terrestrial cellular spectrum may be relaxed further to also allow mobile terrestrial spectrum to be used on “Antenna-in-the-Sky” coverage platforms. Nevertheless, HAPS and terrestrial use of cellular frequency spectrum will have to be coordinated to avoid interference and the resulting capacity and quality degradation.

SoftBank announced recently (i.e., on 28 December 2023 [11]), after deliberations at WRC-23, that they had successfully gained approval within the Asia-Pacific region (i.e., ITU-R Region 3) to use mobile spectrum bands, namely 700–900 MHz, 1.7 GHz, and 2.5 GHz, for stratospheric drone-based mobile broadband cellular services (see also ref. [13]). As a result of this decision, operators in different countries and regions will be able to choose spectrum with greater flexibility when they introduce HAPS-based mobile broadband communication services, thereby enabling seamless usage with existing smartphones and other devices.

Another example of re-using terrestrially licensed cellular spectrum above ground is SpaceX’s direct-to-cell-capable 2nd-generation Starlink satellites.

On January 2nd, 2024, SpaceX launched its new generation of Starlink satellites with direct-to-cell capabilities, able to close a connection to a regular mobile cellular phone (e.g., a smartphone). The new direct-to-cell Starlink satellites use T-Mobile US’s terrestrially licensed cellular frequency band (i.e., 2×5 MHz of Band 25, the PCS G-block) and will work, according to T-Mobile US, with most of their existing mobile phones. The initial direct-to-cell commercial plans will only support low-bandwidth text messaging and no voice or more bandwidth-heavy applications (e.g., streaming). Expectations are that the direct-to-cell system would deliver up to 18.3 Mbps (3.66 Mbps/MHz/cell) downlink and up to 7.2 Mbps (1.44 Mbps/MHz/cell) uplink over a channel bandwidth of 5 MHz (maximum).

Given that terrestrial 4G LTE systems struggle with such performance, it will be super interesting to see what the actual performance of the direct-to-cell satellite constellation will be.

COMPARISON WITH LEO SATELLITE BROADBAND NETWORKS.

When juxtaposed with LEO satellite networks such as Starlink (SpaceX), OneWeb (Eutelsat Group), or Kuiper (Amazon), stratospheric drones offer several advantages. Firstly, the drone’s proximity to the Earth’s surface (ca. 20 km versus 300–2,000 km for LEO) results in lower latency, a critical factor for real-time applications. While LEO satellites, like those used by Starlink, have a much-reduced propagation delay (ca. 3 ms round-trip) compared to traditional geostationary satellites (ca. 240 ms round-trip), stratospheric drones can provide even quicker response times (roughly one-tenth of a millisecond round-trip), making the stratospheric drone substantially more beneficial for applications such as emergency services, telemedicine, and high-speed internet services.
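
These latency figures are straightforward to verify as pure speed-of-light propagation delays over the nadir path (the sketch below ignores processing, queuing, and gateway hops, which dominate real-world latency):

C_KM_PER_MS = 299_792.458 / 1000            # speed of light, ~300 km per millisecond

for name, altitude_km in [("GEO satellite", 35_786),
                          ("LEO satellite (Starlink-like)", 550),
                          ("Stratospheric HAP", 20)]:
    rtt_ms = 2 * altitude_km / C_KM_PER_MS
    print(f"{name:30s} ~{rtt_ms:7.2f} ms round-trip (propagation only)")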

A stratospheric platform operating at 20 km altitude and targeting surveillance would, all else being equal, be 25 times better at resolving objects on the ground than a LEO satellite operating at 500 km altitude. The global aerial imaging market is expected to exceed 7 billion euros by 2030, with a CAGR of 14.2% from 2021. The flexibility of the stratospheric drone platform allows for combining cellular broadband services with a wide range of advanced aerial imaging services. Again, it is advantageous that the stratospheric drone regularly returns to Earth for fueling, maintenance, and technology upgrades and enhancements. This is not possible with a LEO satellite platform.

Moreover, the deployment and maintenance of stratospheric drones are, in theory, less complex and costly than launching and maintaining a constellation of satellites. While Starlink and similar projects require significant upfront investment for satellite manufacturing and rocket launches, stratospheric drones can be deployed at a fraction of the cost, making them a more economically viable option for many applications.

The Starlink LEO satellite constellation is currently the most comprehensive satellite (fixed) broadband coverage service. As of November 2023, Starlink had more than 5,000 satellites in low orbit (i.e., ca. 550 km altitude), and an additional 7,000+ are planned to be deployed, with a total target of 12+ thousand satellites. The current generation of Starlink satellites has three downlink phased-array antennas and one uplink phased-array antenna. This specification translates into 48 downlink beams (satellite to ground) and 16 uplink beams (ground to satellite). Each Starlink beam covers approx. 2,800 km² with a coverage range of ca. 30 km, over which a 250 MHz downlink channel (in the Ku band) has been assigned. According to Portillo et al. [14], the spectral efficiency is estimated to be 2.7 Mbps per MHz, providing a maximum total throughput of 675 Mbps in the coverage area, or a throughput density of ca. 0.24 Mbps per km².
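
A quick sketch reproducing the per-beam figures quoted above (30 km beam range, 250 MHz Ku-band downlink channel, 2.7 Mbps/MHz):

import math

beam_area_km2 = math.pi * 30.0 ** 2          # ~2,800 km2 per beam
beam_capacity_mbps = 250.0 * 2.7             # ~675 Mbps per beam
print(f"beam area: {beam_area_km2:.0f} km2")
print(f"beam capacity: {beam_capacity_mbps:.0f} Mbps "
      f"-> {beam_capacity_mbps / beam_area_km2:.2f} Mbps/km2")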

According to the latest Q2-2023 Ookla speed tests, “among the 27 European countries that were surveyed, Starlink had median download speeds greater than 100 Mbps in 14 countries, greater than 90 Mbps in 20 countries, and greater than 80 in 24 countries, with only three countries failing to reach 70 Mbps” (see reference [18]). Of course, the actual customer experience will depend on the number of concurrent users demanding resources from the LEO satellite, as well as weather conditions, the proximity of other users, etc. Starlink itself seems to have set an upper limit of 220 Mbps download speed for its so-called priority service plan, and otherwise 100 Mbps (see [19] below). Quite impressive performance where no other broadband alternatives are available.

According to Elon Musk, SpaceX aims to reduce each Starlink satellite’s cost to less than one million euros, although the unit price will depend on design, capabilities, and production volume. The launch cost using the SpaceX Falcon 9 launch vehicle starts at around 57 million euros; with ca. 50 satellites per launch, this adds a launch cost of ca. 1.1 million euros per satellite. As of September 2023, SpaceX operates 150 ground stations (“Starlink Gateways”) globally that connect the satellite network with the internet and ground operations. At Starlink’s operational altitude, the estimated satellite lifetime is between 5 and 7 years due to orbital decay, fuel and propulsion system exhaustion, and component durability. Thus, a LEO satellite business must plan for satellite replacement cycles. This situation differs greatly from the stratospheric drone-based operation, where the vehicles can be continuously maintained and upgraded. They are thus significantly more durable, with an expected useful lifetime exceeding ten years and possibly even 20 years of operational use.

Let’s consider our example of Germany and what it would take to provide a LEO satellite coverage service targeting rural areas. It is important to understand that a LEO satellite travels at very high speed (ca. 27 thousand km per hour) and thus completes an orbit around Earth in 90 to 120 minutes (depending on the satellite’s altitude). It is even more important to remember that Earth rotates on its axis (i.e., 24 hours for a full rotation), so the targeted coverage area will have moved relative to a given satellite orbit (easily by several hundred to thousands of kilometers). Thus, to ensure continuous satellite broadband coverage of the same area on Earth, we need a certain number of satellites in a particular orbit and several orbital planes. We would need at least 210 satellites to provide continuous coverage of Germany. Most of the time, most of these satellites would not cover Germany, and the operational satellite utilization will be very low unless areas outside Germany are also being serviced.

Economically, using the Starlink numbers above as a guide, we incur a capital expense of upwards of 450 million euros to realize a satellite constellation that could cover Germany. Let’s also assume that the LEO satellite broadband operator (e.g., Starlink) must build and launch 20 satellites annually to maintain its constellation, and thus incurs an additional Capex of ca. 40+ million euros annually. This amount does not account for the Capex required to build the ground network and the operations center; let’s say all the rest requires an additional 10 million euros of Capex to realize and for miscellaneous items going forward. The technology-related operational expenses should be low, at most 30 million euros annually (this is a guesstimate!) and likely less. So, covering Germany with a LEO broadband satellite platform over ten years would cost ca. 1.3 billion euros. Although substantially more costly than our stratospheric drone platform, it is still less costly than running a rural terrestrial mobile broadband network.
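
A rough sketch of how the ten-year figure comes together, using the assumptions above (the ~2.1 million euros per satellite combines the ~1 million euro build cost and the ~1.1 million euro launch share; reading the ground-segment and miscellaneous Capex as roughly 10 million euros per year is my own assumption):

YEARS = 10
sat_cost_meur = 1.0 + 1.1                        # build + launch share, per satellite

initial_capex     = 210 * sat_cost_meur          # ~210 satellites for continuous coverage
replacement_capex = 20 * sat_cost_meur * YEARS   # ~20 replacement satellites per year
ground_misc_capex = 10 * YEARS                   # ground segment + misc (assumed per year)
opex              = 30 * YEARS                   # <=30 MEUR/year operations (guesstimate)

total_meur = initial_capex + replacement_capex + ground_misc_capex + opex
print(f"10-year total: ~{total_meur:,.0f} MEUR (~1.3 bn EUR)")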

Despite comparing favorably, economically, to the terrestrial cellular network, it is highly unlikely to make operational and economic sense for a single operator to finance such a network; it would probably only make sense if shared between the telecom operators in a country, and even more so across multiple countries or states (e.g., the European Union, the United States, the PRC, …).

Despite the implied silliness of a single mobile operator deploying a satellite constellation for a single Western European country (irrespective of it being fairly large), the above example serves two purposes: (1) it illustrates how economically inefficient rural mobile networks are, such that even a fairly expensive satellite constellation could compare favorably (keep in mind that most countries have 3 or 4 such rural networks), and (2) it shows that sharing the economics of a LEO satellite constellation over a larger areal footprint could make such a strategy very attractive economically for operators.

Because the path loss at 550 km (LEO) is substantially higher than at 20 km (stratosphere), all else being equal, the signal quality of the stratospheric broadband drone would be significantly better than that of the LEO satellite. However, designing the LEO satellite with more powerful transmitters and more sensitive receivers can, to a certain extent, compensate for the factor of almost 30 in altitude difference. Clearly, the latency performance of the LEO satellite constellation would also be inferior to that of the stratospheric drone-based platform due to the significantly higher operating altitude.
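
The altitude difference translates into roughly 29 dB of additional free-space path loss, as the short sketch below shows (distance term only, same frequency, nadir path):

import math

extra_loss_db = 20 * math.log10(550 / 20)     # distance term of free-space path loss
print(f"~{extra_loss_db:.1f} dB higher path loss at 550 km vs 20 km")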

It is, however, capacity rather than shared cost that could be the stumbling block for LEOs. For a rural cellular network or a stratospheric drone platform, the MNOs effectively have control over the capex of the network, whether it is the RAN element of a terrestrial network or the cost of the whole drone network (even if the latter might, in the future, become a shared cost).

For a LEO constellation, however, the economics of a single MNO building a constellation even for its own market is almost entirely out of the question (i.e., a multi-billion-euro capex outlay). Hence, in this situation, the MNOs will rely on a global LEO provider (e.g., Starlink or AST SpaceMobile) and will “lend” their spectrum to that provider in their respective geography in order to provide service. Like the HAPs, this will also require further regulatory approvals in order to free up terrestrial spectrum for satellites in rural areas.

We do not yet have visibility of the payments the LEO providers will require, so there is the potential that this could again be a lower-cost alternative to rural networks. But, as we show below, we think the real limitation for LEOs might not be the shared capacity rental cost, but that there simply won’t be enough capacity available to replicate what a terrestrial network can offer today.

The stratospheric drone-based platform, in contrast, provides near-ideal cellular performance to the consumer, close to the theoretical peak performance of a terrestrial cellular network. It should be emphasized that the theoretical peak cellular performance is typically only experienced, if at all, by consumers who are very near the terrestrial cellular antenna and in a near free-space propagation environment, which is a very rare occurrence for the vast majority of mobile consumers.

Figure 7 summarizes the above comparison between a rural terrestrial cellular network and non-terrestrial cellular networks such as LEO satellites and stratospheric drones.

Figure 7 Illustrating a comparison of terrestrial cellular coverage with stratospheric drone-based (“Antenna-in-the-Sky”) cellular coverage and Low Earth Orbit (LEO) satellite coverage options.

While the majority of the 5,500+ Starlink satellites operate in the Ku-band (~13 GHz), at the beginning of 2024 SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., a smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, is texting capability over areas with no or poor existing cellular coverage across the USA. This is fairly similar to the services offered over comparable coverage areas by, for example, the AST SpaceMobile, Omnispace, and Lynk Global LEO satellite services, with reported maximum speeds approaching 20 Mbps. So-called Direct-to-Device (D2D), where the device is a normal smartphone without dedicated satellite connectivity functionality, is expected to develop rapidly over the next 10 years, with increasing supported user speeds (i.e., more terrestrial cellular spectrum utilized) and system capacity in terms of smaller coverage areas and a higher number of satellite beams.

Table 1 below provides an overview of the top 10 LEO satellite constellations targeting (fixed) internet services (e.g., Ku band), IoT and M2M services, and Direct-to-Device (or direct-to-cell) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023. The top-10 ranking is based on the number of satellites launched until the end of 2023. Two additional Direct-to-Cell (D2C, or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024–2025. One is SpaceX’s 2nd-generation Starlink, launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks) targets 5G services, and two 5G constellations have already launched: Galaxy Space (Yinhe) has launched 8 LEO satellites with 1,000 planned, using Q- and V-bands (i.e., not a D2D cellular 5G service), and Omnispace has launched two satellites with 200 planned in total. Moreover, there is currently one planned constellation targeting 6G by the South Korean Hanwha Group (a bit premature, but interesting nevertheless), with 2,000 6G LEO satellites planned. Most currently launched and planned satellite constellations offering (or planning to provide) Direct-to-Cell services, including IoT and M2M, are designed for low-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

In the table below, we also show five different services with the key input variables being cell radius, spectral efficiency, and downlink spectrum. From this, we can derive what the “average” capacity could be per square kilometer of rural coverage.

We focus on this metric as the best measure of available capacity once multiple users are on the service and the spectrum has to be shared. This is different from “peak” speeds, which are only relevant when there are very few users per cell. (A short back-of-the-envelope sketch following the list below reproduces the terrestrial and drone figures.)

  • We start with terrestrial cellular today for bands up to 2.1 GHz and show that, assuming a 2.5 km cell radius, the average capacity is equivalent to 11 Mbps per km².
  • For a LEO service using the Ku-band, i.e., with 250 MHz to an FWA dish, the capacity could be ca. 2 Mbps per km².
  • For a LEO-based D2D service, the unknowns are the ultimate spectrum allowance for satellite services in cellular spectrum bands and the achievable spectral efficiency. Giving the benefit of the doubt on both, but assuming the beam radius is always going to be larger, we can get to an “optimistic” future target of 2 Mbps per km², i.e., about 1/5th of a rural terrestrial network.
  • Finally, we show for a stratospheric drone that, given a similar cell radius to a rural cell today, but with more downlink spectrum available and greater spectral efficiency, we can reach ca. 55 Mbps per km², i.e., 5× what a current rural network can offer.
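
A minimal sketch of how these per-km² figures are formed, namely shared downlink capacity (spectrum × spectral efficiency) divided by the cell area (hexagonal approximation ≈ 2.6·r²). Only the terrestrial and drone rows are fully specified above; the LEO rows depend on beam-size and spectrum assumptions not restated here:

def density_mbps_per_km2(radius_km, spectrum_mhz, se_mbps_per_mhz):
    cell_area_km2 = 2.6 * radius_km ** 2        # hexagonal cell approximation
    return spectrum_mhz * se_mbps_per_mhz / cell_area_km2

# Terrestrial rural (85 MHz DL, 2.0 Mbps/MHz, 2.5 km radius): ~10.5, i.e. the ~11 quoted above
print(round(density_mbps_per_km2(2.5, 85.0, 2.0), 1), "Mbps/km2")
# Stratospheric drone (145 MHz DL, 6.0 Mbps/MHz, 2.5 km radius): ~54, i.e. the ~55 quoted above
print(round(density_mbps_per_km2(2.5, 145.0, 6.0), 1), "Mbps/km2")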

INTEGRATING WITH 5G AND BEYOND.

The advent of 5G, and eventually 6G, technology brings another dimension to the utility of stratospheric drones delivering mobile broadband services. The high-altitude platform’s ability to seamlessly integrate with existing 5G networks makes it an attractive option for expanding coverage and enhancing network capacity at superior economics, particularly in rural areas where the economics of terrestrial cellular coverage tend to be poor. Unlike terrestrial networks that require extensive groundwork for a 5G rollout, a non-terrestrial network operator (NTNO) can rapidly deploy stratospheric drones to provide immediate 5G coverage over large areas. The high-altitude platform is also incredibly flexible compared to both LEO satellite constellations and a conventional rural cellular network. The platform can easily be upgraded during its ground maintenance windows and enhanced as the technology evolves. For example, upgrading to and operationalizing 6G would be far more economical with a stratospheric platform than having to visit thousands of rural sites to modernize or upgrade the installed active infrastructure.

SUMMARY.

Stratospheric drones represent a significant advancement in the realm of wireless communication. Their strategic positioning in the stratosphere offers superior coverage and connectivity compared to terrestrial networks and low-earth-orbit satellite solutions. At the same time, their economic efficiency makes them an attractive alternative to ground-based infrastructures and LEO satellite systems. As technology continues to evolve, these high-altitude platforms (HAPs) are poised to play a crucial role in shaping the future of global broadband connectivity and ultra-high-availability connectivity solutions, complementing the burgeoning 5G networks and paving the way for next-generation three-dimensional communication solutions, moving away from today’s flat-earth, terrestrially locked communication platforms.

The strategic as well as disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article. It has the potential to make most of the rural (at least) cellular infrastructure redundant, resulting in substantial operational and economic benefits for existing mobile operators. At the same time, the HAPs could, in rural areas, provide a much better service overall in terms of availability, improved coverage, and near-ideal speeds compared to today’s cellular networks. It might also, at scale, become a serious competitive and economic threat to LEO satellite constellations, such as Starlink and Kuiper, which would struggle to compete on service quality and capacity with a stratospheric coverage platform.

Although the strategic, economic, and disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article, the flight platform and advanced antenna technology are still in a relatively early development phase. Substantial regulatory work remains in terms of permitting the terrestrial cellular spectrum to be re-used above terra firma at the “Antenna-in-the-Sky”. The latest developments out of WRC-23 for Asia-Pacific appear very promising, showing that we are moving in the right direction of re-using terrestrial cellular spectrum in high-altitude coverage platforms. Last but not least, operating an unmanned (autonomous) stratospheric platform involves obtaining certifications and permissions and complying with various flight regulations at both national and international levels.

Terrestrial Mobile Broadband Network – takeaway:

  • It is the de facto practice for mobile cellular networks to cover nearly 100% geographically. The mobile consumer expects a high-quality, high-availability service everywhere.
  • A terrestrial mobile network has a relatively low area coverage per unit antenna with relatively high capacity and quality.
  • Mobile operators incur high and sustained infrastructure costs, especially in rural areas, with low or no return on that cost.
  • Physical obstructions and terrain limit performance (i.e., non-free space characteristics).
  • Well-established technology with high reliability.
  • Terrestrial networks offer high bandwidth and low latency in high-demand urban areas; matching this may be a limiting factor for LEO satellite constellations and stratospheric drone-based platforms, which are thus less likely to provide operational and economic benefits in dense urban and urban areas.

LEO Satellite Network – takeaway:

  • The technology is operational and improving. There is currently some competition (e.g., Starlink, Kuiper, OneWeb, etc.) in this space, primarily targeting fixed broadband and satellite backhaul services. Increasingly, new LEO satellite-based business models are being launched, providing lower-bandwidth, cellular-spectrum-based direct-to-device (D2D) text, 4G, and 5G services to regular consumer and IoT devices (e.g., Starlink, Lynk Global, AST SpaceMobile, Omnispace, …).
  • Broader coverage, suitable for global reach. It may only make sense when the business model is viewed from a worldwide reach perspective (e.g., Starlink, OneWeb,…), resulting in much-increased satellite network utilization.
  • A LEO satellite broadband network can cover a vast area per satellite due to its high altitude. However, such systems are by nature capacity-limited, although beamforming antenna technologies (e.g., phased-array antennas) allow better capacity utilization.
  • The LEO satellite solutions are best suited for low-population areas with limited demand, such as rural and largely unpopulated areas (e.g., sea areas, deserts, coastlines, Greenland, polar areas, etc.).
  • Much higher latency compared to terrestrial and drone-based networks. 
  • Less flexible once in orbit. Upgrades and modernization only via replacement.
  • The LEO satellite has a limited useful operational lifetime due to its lower orbital altitude (e.g., 5 to 7 years).
  • Lower infrastructure cost for rural coverage compared to terrestrial networks, but substantially higher than drones when targeting regional areas (e.g., Germany or individual countries in general).
  • Complementary to the existing mobile business model of communications service providers (CSPs), but with a substantial business risk to CSPs in low-population areas where few or no capacity limitations occur.
  • Requires regulatory permission (authorization) to operate terrestrial frequencies on the satellite platform over any given country. This process is overseen by national regulatory bodies (e.g., the FCC in the USA) in coordination with the International Telecommunication Union (ITU). Satellite operators must apply for frequency bands for uplink and downlink communications and coordinate with the ITU to avoid interference with other satellites and terrestrial systems. In recent years, however, there has been a trend towards more flexible spectrum regulations, allowing for innovative uses of the spectrum, such as integrating terrestrial and satellite services. This flexibility is crucial in accommodating new technologies and service models.
  • Operating a LEO satellite constellation requires a comprehensive set of permissions and certifications that encompass international and national space regulations, frequency allocation, launch authorization, adherence to space debris mitigation guidelines, and various liability and insurance requirements.
  • Both LEO and MEO satellites are likely to be complementary or supplementary to stratospheric drone-based broadband cellular networks, offering high-performing transport solutions and possibly even acting as standalone or integrated (with terrestrial networks) 5G core networks or “clouds-in-the-sky”.

Stratospheric Drone-Based Network – takeaway:

  • It is an emerging technology with ongoing research, trials, and proof of concept.
  • A stratospheric drone-based broadband network will have lower deployment costs than terrestrial and LEO satellite broadband networks.
  • In rural areas, the stratospheric drone-based broadband network offers better economics and near-ideal quality compared to terrestrial mobile networks. In terms of cell size and capacity, it can easily match a rural mobile network.
  • The solution offers flexibility and versatility and can be geographically repositioned as needed. The versatility provides a much broader business model than “just” an alternative rural coverage solution (e.g., aerial imaging, surveillance, defense scenarios, disaster area support, etc.).
  • Reduced latency compared to LEO satellites.
  • Also ideal for targeted or temporary coverage needs.
  • Complementary to the existing mobile business model of communications service providers (CSPs) with additional B2B and public services business potential from its application versatility.
  • Potential substantial negative impact on the telecom tower business as the stratospheric drone-based broadband network would make (at least) rural terrestrial towers redundant.
  • May disrupt a substantial part of the LEO satellite business model due to better service quality and capacity, leaving the LEO satellite constellations’ revenue pool to remote areas and specialized use cases.
  • Requires regulatory permission to operate terrestrial frequencies (i.e., frequency authorization) on the stratospheric drone platform (similar to LEO satellites). Big steps have already been made at the latest WRC-23, where the conditions on the frequency bands 698 to 960 MHz, 1710 to 2170 MHz, and 2500 to 2690 MHz were relaxed to allow for use in HAPS operating at 20 to 50 km altitude (i.e., in the stratosphere).
  • Operating a stratospheric platform in European airspace involves obtaining certifications as well as permissions and (of course) complying with various regulations at both national and international levels. This includes the European Union Aviation Safety Agency (EASA) type certification and the national civil aviation authorities in Europe.

FURTHER READING.

  1. New Street Research “Stratospheric drones: A game changer for rural networks?” (January 2024).
  2. https://hapsalliance.org/
  3. https://www.stratosphericplatforms.com/, see also “Beaming 5G from the stratosphere” (June, 2023) and “Cambridge Consultants building the world’s largest  commercial airborne antenna” (2021).
  4. Iain Morris, “Deutsche Telekom bets on giant flying antenna”, Light Reading (October 2020).
  5. “Deutsche Telekom and Stratospheric Platforms Limited (SPL) show Cellular communications service from the Stratosphere” (November 2020).
  6. “High Altitude Platform Systems: Towers in the Skies” (June 2021).
  7. “Stratospheric Platforms successfully trials 5G network coverage from HAPS vehicle” (March 2022).
  8. Leichtwerk AG, “High Altitude Platform Stations (HAPS) – A Future Key Element of Broadband Infrastructure” (2023). I recommend closely following Leichtwerk AG, which is a world champion in making advanced gliding planes. The hydrogen-powered StratoStreamer HAP is near production-ready, and they are currently working on a solar-powered platform. Germany is renowned for producing some of the best gliding planes in the world (after WWII, Germany was banned from developing and producing aircraft, military as well as civil; these restrictions were only relaxed in the 1960s). Germany has a long and distinguished history in glider development, dating back to the early 20th century. German manufacturers like Schleicher, Schempp-Hirth, and DG Flugzeugbau are among the world’s leading producers of high-quality gliders. These companies are known for their innovative designs, advanced materials, and precision engineering, contributing to Germany’s reputation in this field.
  9. Jerzy Lewandowski, “Airbus Aims to Revolutionize Global Internet Access with Stratospheric Drones” (December 2023).
  10. Utilities One, “An Elevated Approach High Altitude Platforms in Communication Strategies”, (October 2023).
  11. Rajesh Uppal, “Stratospheric drones to provide 5g wireless communications global internet border security and military surveillance”  (May 2023).
  12. Softbank, “SoftBank Corp.-led Proposal to Expand Spectrum Use for HAPS Base Stations Agreed at World Radiocommunication Conference 2023 (WRC-23)”, press release (December 2023).
  13. ITU Publication, World Radiocommunications Conference 2023 (WRC-23), Provisional Final Acts, (December 2023). Note 1: The International Telecommunication Union (ITU) divides the world into three regions for the management of radio frequency spectrum and satellite orbits: Region 1: includes Europe, Africa, the Middle East west of the Persian Gulf including Iraq, the former Soviet Union, and Mongolia, Region 2: covers the Americas, Greenland, and some of the eastern Pacific Islands, and Region 3: encompasses Asia (excl. the former Soviet Union), Australia, the southwest Pacific, and the Indian Ocean’s islands.
  14. Geoff Huston, “Starlink Protocol Performance” (November 2023). Note 2: The recommendations, such as those designated with “ADD” (additional), are typically firm in the sense that they have been agreed upon by the conference participants. However, they are subject to ratification processes in individual countries. The national regulatory authorities in each member state need to implement these recommendations in accordance with their own legal and regulatory frameworks.
  15. Curtis Arnold, “An overview of how Starlink’s Phased Array Antenna “Dishy McFlatface” works.”, LinkedIn (August 2023).
  16. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023).
  17. The Clarus Network Group, “Starlink v OneWeb – A Comprehensive Comparison” (October 2023).
  18. Brian Wang, “SpaceX Launches Starlink Direct to Phone Satellites”, (January 2024).
  19. Sergei Pekhterev, “The Bandwidth Of The StarLink Constellation…and the assessment of its potential subscriber base in the USA.”, SatMagazine, (November 2021).
  20. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  21. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  22. Shkelzen Cakaj, “The Parameters Comparison of the “Starlink” LEO Satellites Constellation for Different Orbital Shells” (May 2021).
  23. Mike Puchol, “Modeling Starlink capacity” (October 2022).
  24. Mike Dano, “T-Mobile and SpaceX want to connect regular phones to satellites”, Light Reading (August 2022).
  25. Starlink, “SpaceX sends first text message via its newly launched direct to cell satellites” (January 2024).
  26. GSMA.com, “New Speedtest Data Shows Starlink Performance is Mixed — But That’s a Good Thing” (2023).
  27. Starlink, “Starlink specifications” (Starlink.com page).
  28. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  29. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  30. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. World’s first global 5G non terrestrial network. Initial support 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far only 2 satellites launched.
  31. NewSpace Index: https://www.newspace.im/ I find this resource having excellent and up-to date information of commercial satellite constellations.
  32. Wikipedia, “Satellite constellation”.
  33. LEOLABS Space visualization – SpaceX Starlink mapping. (deselect “Debris”, “Beams”, and “Instruments”, and select “Follow Earth”). An alternative visualization service for Starlink & OneWeb satellites is the website Satellitemap.space (you might go to settings and turn on signal Intensity which will give you the satellite coverage hexagons).
  34. European Union Aviation Safety Agency (EASA). Note that an EASA-type Type Certificate is a critical document in the world of aviation. This certificate is a seal of approval, indicating that a particular type of aircraft, engine, or aviation component meets all the established safety and environmental standards per EASA’s stringent regulations. When an aircraft, engine, or component is awarded an EASA Type Certificate, it signifies a thorough and rigorous evaluation process that it has undergone. This process assesses everything from design and manufacturing to performance and safety aspects. The issuance of the certificate confirms that the product is safe for use in civil aviation and complies with the necessary airworthiness requirements. These requirements are essential to ensure aircraft operating in civil airspace safety and reliability. Beyond the borders of the European Union, an EASA Type Certificate is also highly regarded globally. Many countries recognize or accept these certificates, which facilitate international trade in aviation products and contribute to the global standardization of aviation safety.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

I also owe a lot of gratitude to James Ratzer, Partner at New Street Research, for editorial suggestions and great discussions and challenges, making the paper far better than it otherwise would have been. I would also like to thank Russel Waller, Pan-European Telecoms and ESG Equity Analyst at New Street Research, for being supportive and insistent that I get something written for NSR.

I also greatly appreciate my past collaboration and the many discussions on the topic of Stratospheric Drones in particular and advanced antenna designs and properties in general that I have had with Dr. Jaroslav Holis, Senior R&D Manager (Group Technology, Deutsche Telekom AG) over the last couple of years. When it comes to my early involvement in Stratospheric Drones activities with Group Technology Deutsche Telekom AG, I have to recognize my friend, mentor, and former boss, Dr. Bruno Jacobfeuerborn, former CTO Deutsche Telekom AG and Telekom Deutschland, for his passion and strong support for this activity since 2015. My friend and former colleague Rachid El Hattachi deserves the credit for “discovering” and believing in the opportunities that a cellular broadband-based stratospheric drone brings to the telecom industry.

Many thanks to CEO Dr. Reiner Kickert of Leichtwerk AG for providing some high resolution pictures of his beautiful StratoStreamer.

Thanks to my friend Amit Keren for suggesting a great quote that starts this article.

Any errors or unclarities are solely my own and not those of the collaborators and colleagues who have done their best to support this piece.

Telco energy consumption – a path to a greener future?

To my friend Rudolf van der Berg: this story is not about how volumetric demand (bytes or bits) results in increased energy consumption (W·h). That notion is silly, as we both “violently” agree ;-). I recommend that readers also check out Rudolf’s wonderful presentation, “Energy Consumption of the Internet (May 2023),” which he delivered at the RIPE86 student event in 2023.

Recently, I had the privilege of watching a seasoned executive present what his telco company is doing for the environment regarding sustainability and CO2 reduction in general. I think the company is doing something innovative beyond compensating shortfalls by buying certificates and (mis)using green energy resources.

They are (reasonably) aggressively replacing their copper infrastructure (country stat for 2022: ~90% of HH / ~16% subscriptions) with green, sustainable fiber (country stat for 2022: ~78% / ~60%). This is an obvious strategy that results in a quantum leap in customer-experience potential and helps reduce the overall energy consumption that comes from operating the ancient copper network.

What was missing a bit, imo, was consideration of the opportunity to also phase out the HFC network (country stat for 2022: ~70% / ~60%), reduce the current HFC+fiber overbuild of 1.45, and, of course, reduce the energy consumption, operational costs, and complexity of operating two fixed broadband technologies (three if we include the copper). However, maybe understandably enough, substantial investments have been made in upgrading to DOCSIS 3.1, an investment that is possibly still some way from being written off.

The “wtf moment” (in an otherwise very pleasant and agreeable session) came when the speaker suggested that, as part of their sustainability and CO2-reduction strategy, the telco was busy migrating from 4G LTE to 5G, with the reasoning that 5G is 90% more energy efficient than 4G.

Firstly, it is correct that 5G is (in apples-for-apples comparisons!) ca. 90% more efficient in delivering a single bit compared to 4G. The metric we use is Joules per bit, or Watt-seconds per bit. It is also not at all uncommon to hear telco executives hint at the relative greenness of 5G (which is, in my opinion, decidedly not a green broadband communications technology … ).

Secondly, so what! Should we really care about relative energy consumption? After all, we pay for absolute energy consumption, not for whatever relativized measure of consumed energy.

I think I know the answer from the CFO and the in-the-know investors.

If the absolute energy consumption of 5G is higher than that of 4G, I will (most likely) have higher operational costs attributed to that increased power consumption. And since I am rarely, if ever, in an apples-for-apples situation, 5G requires substantially more power to provide for its new requirements and specifications, and I will be worse off regarding the associated cost in absolute monetary terms. Unless I also have higher revenue associated with 5G, I am economically worse off than I was with the older technology.

Higher information-related energy efficiency in cellular communications systems follows from the essential requirement of increasingly better spectral efficiency, all else being equal. It does not guarantee that, in absolute monetary terms, a Telco will be better off … far from it!

THE ENERGY OF DELIVERING A BIT.

Energy, which I choose to express in Joules, equals the power (in Watts, W) consumed to provide a given output unit (e.g., a bit) multiplied by the time (e.g., a second) it took to provide that unit.

Take a 4G LTE base station that consumes ca. 5.0 kW to deliver a maximum throughput of 160 Mbps per sector (@ 80 MHz per sector). The information energy efficiency of this specific 4G LTE base station (e.g., in W·s per bit) would be ca. 10 µJ/bit, i.e., the 4G LTE base station requires 10 microjoules (millionths of a Joule) to deliver 1 bit (in 1 second).

In the 5G world, we would have a 5G SA base station, using the same frequency bands as 4G and with an additional 10 MHz @ 700MHz and 100 MHz @ 3.5 GHz included. The 3.5 GHz band is supported by an advanced antenna system (AAS) rather than a classical passive antenna system used for the other frequency bands. This configuration consumes 10 kW with ~40% attributed to the 3.5 GHz AAS, supporting ~1 Gbps per sector (@ 190 MHz per sector). This example’s 5G information energy efficiency would be ca. 0.3 µJ/bit.

In this non-apples-for-apples comparison, 5G is about 30 times more efficient in delivering a bit than 4G LTE (in the example above). Regarding what an operator actually pays for, 5G is twice as costly in energy consumption compared to 4G.

It should be noted that the power consumption is not driven by the volumetric demand but by the time that demand exists and the load per unit of time. Also, base stations will consume power even when idle, with the degree depending on the intelligence of the energy-management system applied.

So, more formally, we have

E per bit = P (in W) · time (in sec) per bit, or in the basic units

J / bit = W·s / bit = W / (bit/s) = W / bps = W / [ MHz · Mbps/MHz/unit · unit-quantity ]

E per bit = P (in W) / [ Bandwidth (in MHz) · Spectral Efficiency (in Mbps/MHz/unit) · unit-quantity ]
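
For those who like to play with the numbers, the relationship can be captured in a few lines of Python. The figures below (a 5 kW site, 80 MHz per sector, ~2 Mbps/MHz, and a three-sector unit-quantity) are illustrative assumptions roughly in line with the 4G example above, not measured values.

def energy_per_bit(power_w, bandwidth_mhz, spectral_eff_mbps_per_mhz, units=3):
    """Energy per bit (J) = Power (W) / Throughput (bit/s), with throughput
    expressed as bandwidth (MHz) x spectral efficiency (Mbps/MHz/unit) x unit-quantity."""
    throughput_bps = bandwidth_mhz * spectral_eff_mbps_per_mhz * units * 1e6
    return power_w / throughput_bps

# Assumed example: a 5 kW site, 80 MHz per sector, ~2 Mbps/MHz, three sectors.
e_bit = energy_per_bit(power_w=5_000, bandwidth_mhz=80, spectral_eff_mbps_per_mhz=2.0)
print(f"{e_bit * 1e6:.1f} uJ/bit")  # ~10 uJ/bit for this assumed configuration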

It is important to remember that this is about the system-spec information efficiency and that there is no direct relationship between the power that you need and the information your system will ultimately output bit-wise.

\frac{E_{4G}}{bit} \; = \; \frac{\; P_{4G} \;}{\; B_{4G} \; \cdot \; \eta_{4G,eff} \; \cdot \; N \;} \;\;\; \text{and} \;\;\; \frac{E_{5G}}{bit} \; = \; \frac{\; P_{5G} \;}{\; B_{5G} \; \cdot \; \eta_{5G,eff} \; \cdot \; N \;}

Thus, the relative efficiency between 4G and 5G is

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}}

Currently (i.e., 2023), the various components of the above are approximately within the following ranges.

\frac{P_{4G}}{P_{5G}} \; \lesssim \; 1

\frac{B_{5G}}{B_{4G}} \; > \;2

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

The power consumption of a 5G RAT is higher than that of a 4G RAT. As we add higher frequency spectrum (e.g., C-band, 6GHz, 23GHz,…) to the 5G RAT, increasingly more spectral bandwidth (B) will be available compared to what was deployed for 4G. This will increase the bit-wise energy efficiency of 5G compared to 4G, although the power consumption is also expected to increase as higher frequencies are supported.

If the bandwidth and system power consumption are the same for both radio access technologies (RATs), then the relative information energy efficiency is

\frac{E_{4G}/bit}{E_{5G}/bit} \; \approx \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \gtrsim \; 10

The exact ratio depends on the relative difference in spectral efficiency. 5G is specified and designed to have at least ten times (10x) the spectral efficiency of 4G. If you do the math (assuming apples-to-apples applies), the energy per bit of 5G is roughly one-tenth that of 4G, so it is no surprise that 5G is specified to be 90% more efficient in delivering a bit (in a given unit of time) compared to 4G LTE.

And just to emphasize the obvious,

E_{RAT} \; = \; P_{RAT} \; \cdot \; t \; \approx \; E_{idle} \; + \; P_{BB, RAT} \; \cdot \; t \; +\sum_{freq}P_{freq,\; antenna\; type}\; \cdot \; t_{freq} \;

RAT refers to the radio access technology, BB is the baseband, freq the cellular frequencies, and idle to the situation where the system is not being utilized.
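
A minimal sketch of that decomposition could look as follows; all power figures and duty cycles are assumed purely for illustration, and the point is simply that the energy follows time and configuration, not the bytes carried.

def site_energy_kwh(hours, idle_kw, baseband_kw, bands):
    """Crude additive approximation of E_RAT ~ E_idle + P_BB*t + sum(P_freq * t_freq),
    with the idle/baseline draw applied across the whole period.

    `bands` maps a band label to (power_kw, active_share), where active_share is
    the fraction of the period the band is actually carrying load.
    """
    energy_kwh = idle_kw * hours + baseband_kw * hours
    for power_kw, active_share in bands.values():
        energy_kwh += power_kw * active_share * hours
    return energy_kwh

# Hypothetical multi-band site over one day (all figures assumed for illustration).
daily = site_energy_kwh(
    hours=24, idle_kw=0.5, baseband_kw=1.0,
    bands={"700 MHz": (1.0, 0.9), "1800 MHz": (1.5, 0.7), "3.5 GHz AAS": (3.0, 0.5)},
)
print(f"{daily:.0f} kWh/day")  # independent of how many bytes were carried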

Volume in Bytes (or bits) does not directly relate to energy consumption. As frequency bands are added to a sector (of a base station), the overall power consumption will increase. Moreover, the more computing is required in the antenna, such as for advanced antenna systems, including massive MiMo antennas, the more power will be consumed in the base station. And the more the frequency bands are utilized in terms of time, the higher the power consumption will be.

Indirectly, as the cellular system is being used, customers consume bits and bytes (= 8·bit) at a rate that depends on the effective spectral efficiency (in bps/Hz), the amount of effective bandwidth (in Hz) experienced by the customers (e.g., many customers will be in a coverage situation where they may not benefit from the higher frequency bands), and the effective time they make use of the cellular network resources. The observant reader will see that I like the term “effective.” The reason is that customers rarely enjoy the maximum possible spectral efficiency, and not all the frequency spectrum covering customers is necessarily being applied to individual customers, depending on their coverage situation.

In the report “A Comparison of the Energy Consumption of Broadband Data Transfer Technologies (November 2021),” the authors show the energy and volumetric consumption of mobile networks in Finland over the period from 2010 to 2020. To be clear, I do not support the authors’ assertion of causation between volumetric demand and energy consumption. As I have shown above, volumetric usage does not directly cause a given power consumption level. Over the 10-year period covered by the report, they observe a 70% increase in absolute power consumption (from 404 to 686 GWh, CAGR ~5.5%) and a factor of ~70 increase in traffic volume (~60 TB to ~4,000 TB, CAGR ~52%). Caution should be exercised to resist the temptation to attribute the increase in energy over the period directly to the data volume increase, however weak that relationship is (note that the authors did not resist that temptation). Rudolf van der Berg has raised several issues with the approach of the above paper (as well as with much other related work) and indicated that the data and approach of the authors may not be reliable. Unfortunately, in this respect, it appears that systematic, reliable, and consistent data in the Telco industry is hard to come by (even if that data should be available to the individual telcos).
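
For transparency, the quoted growth rates can be reproduced with a couple of lines of Python (start and end figures as cited above):

def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

# Figures as quoted from the report (2010 -> 2020).
print(f"Energy:  {cagr(404, 686, 10):.1%}")    # ~5.4% p.a. (404 -> 686 GWh)
print(f"Traffic: {cagr(60, 4_000, 10):.1%}")   # ~52% p.a. (~60 -> ~4,000 TB)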

Technology change from 2G/3G to 4G, site densification, and more frequency bands can more than easily explain the increase in energy consumption (and all are far better explanations than data volume). It should be noted that there will also be reasons that decrease power consumption over time, such as more efficient electronics (e.g., via modernization), intelligent power management applications, and, last but not least, switching off of older radio access technologies.

The factors that drive a cell site’s absolute energy consumption are:

  • Radio access technology, with new technologies generally consuming more energy than older ones (even if the newer technologies have become increasingly more spectrally efficient).
  • The antenna type and configuration, including computing requirements for advanced signal processing and beamforming algorithms (that will improve the spectral efficiency at the expense of increased absolute energy consumption).
  • Equipment efficiency. In general, new generations of electronics and systems designs tend to be more energy-efficient for the same level of performance.
  • Intelligent energy management systems that allow for effective power management strategies will reduce energy consumption compared to what it would have been without such systems.
  • The network optimization policy: is the cellular network planned and optimized to meet the demands and needs of the customers (i.e., the economic design framework) or to provide peak performance to as many customers as possible (i.e., the Umlaut/Ookla performance-driven framework)? The Umlaut/Ookla-optimized network, maxing out on base station configuration, will see substantially higher energy consumption and associated costs.
The absolute cellular energy consumption has continued to rise as new radio access technologies (RATs) have been introduced, irrespective of the leapfrog in those RATs’ spectral (bits per Hz) and information-related (Joules per bit) efficiencies.

WHY 5G IS NOT A GREEN TECHNOLOGY.

Let’s first re-acquaint ourselves with the 2015 vision of the 5G NGMN whitepaper;

“5G should support a 1,000 times traffic increase in the next ten years timeframe, with energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency increase of x2000 in the next ten years timeframe.” (Section 4.2.2 Energy Efficiency, 5G White Paper by NGMN Alliance, February 2015).

The bold emphasis is my own and not in the paper itself. There is no doubt that the authors of the 5G vision paper had the ambition of making 5G a more sustainable and greener cellular alternative than had historically been the case.

So, from the above statement, we have two performance figures that illustrate the ambition of 5G relative to 4G. Firstly, we have a requirement that the 5G energy efficiency should be 2000x higher than 4G (as it was back in the beginning of 2015).

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \geq \; 2,000

or

\frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \geq \; 200

if

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

Getting more spectrum bandwidth is relatively trivial as you go up in frequency and into, for example, the millimeter-wave range (and beyond). However, getting 20+ GHz (e.g., 200+ × 100 MHz @ 4G) of additional practically usable spectrum bandwidth would be rather (an understatement) ambitious.
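
A back-of-envelope sketch of why the x2000 target is so demanding; the 4G spectrum baseline and the power ratio below are assumptions, not reported figures.

# Back-of-envelope check of the NGMN x2000 target (all inputs are assumptions).
target_gain = 2000        # required (E_4G/bit) / (E_5G/bit)
spectral_eff_ratio = 10   # eta_5G / eta_4G, per the 5G design target
power_ratio = 1.0         # P_4G / P_5G; optimistic, as 5G sites tend to draw more power
baseline_4g_mhz = 100     # assumed 4G spectrum holding per operator

required_bw_ratio = target_gain / (spectral_eff_ratio * power_ratio)
extra_spectrum_ghz = required_bw_ratio * baseline_4g_mhz / 1000
print(f"B_5G/B_4G >= {required_bw_ratio:.0f} -> ~{extra_spectrum_ghz:.0f} GHz of usable 5G spectrum")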

Secondly, we have the requirement that the absolute energy consumption of the whole 5G network should be half of what it was with 4G

\frac{E_{5G}}{E_{4G}} \; = \; \frac{\; P_{5G} \; \cdot \; t\;}{\; P_{4G} \; \cdot \; t}\; \approx \; \frac{\; P_{5G} \;}{\; P_{4G} \; } \; \leq \; \frac{1}{2}

If you think about this for a moment: halving the absolute energy consumption is an enormous challenge, even if it were within the same RAT. It requires innovation leapfrogs across the RAT’s electronic architecture, design, and the material science underlying all of it. In other words, fundamental changes are required in the RF front-end (e.g., power amplifiers, transceivers), baseband processing, DSP, DAC, ADC, cooling, control and management systems, algorithms, compute, etc.

But reality eats vision for breakfast … There really is no sign that the super-ambitious goal set by the NGMN Alliance in early 2015 is even remotely achievable, even if we gave it another ten years (i.e., until 2035). We are more than two orders of magnitude away from the visionary target of NGMN, and we are almost at the 10-year anniversary of the vision paper. We more or less get the benefit of the relative difference in spectral efficiency (x10), but no innovation beyond that has contributed very much to a quantum leap in bit-wise cellular energy efficiency.

I know many operators who will say that from a sustainability perspective, at least before the energy prices went through the roof, it really does not matter that 5G, in absolute terms, leads to substantial increases in energy consumption. They use green energy to supply the energy demand from 5G and pay off CO2 deficits with certificates.

First of all, unless the increased cost can be recovered from the customers (e.g., through price plan increases), it is a doubtful economic avenue to pursue (and has a bit of a Titanic feel to it … going down together while the orchestra is playing).

Second, we should ask ourselves whether it is really okay for any industry to greedily consume sustainable and still relatively scarce green resources without being incentivized (or encouraged) to pursue alternatives and optimize across mobile and fixed broadband technologies. This is particularly true when fixed broadband technologies, such as fiber, are available that would lead to a very sizable and substantial reduction in energy consumption … as customers increasingly adopt fiber broadband.

Fiber is the greenest and most sustainable access technology we can deploy compared to cellular broadband technologies.

SO WHAT?

5G is a reality. Telcos are and will continue to invest substantially into 5G as they migrate their customers from 4G LTE to what ultimately will be 5G Standalone. The increase in customer experience and new capabilities or enablers is significant. By now (i.e., 2023), most Telcos will have a very good idea of the operational expense associated with 5G (if not … you better do the math). Some will have been exploring investing in their own green power plants (e.g., solar, wind, hydrogen, etc.) to mitigate part of the energy surge arising from transitioning to 5G.

I suspect that as Telcos start reflecting on Open RAN as they pivot towards 6G (-> 2030+), above and beyond the additional operational expense pain that 6G, as a RAT, may bring, there will be new energy consumption and sustainability surprises in the cellular part of Telcos’ P&L. In general, breaking up an electronic system into individual (non-integrated) parts, as opposed to integrating it into a single unit, is likely to result in increased power consumption. Some of the operational inefficiencies that occur in breaking up a tightly integrated design can be mitigated by power management strategies, though getting such power management strategies to work optimally may force a higher degree of supplier uniformity than the original intent of breaking up the tightly integrated system.

However, only Telcos that consider both their mobile and fixed broadband assets together, rather than as two separate silos, will gain in value for customers and shareholders. Fixed-mobile (network) convergence should be taken seriously and may lead to very different considerations and strategies than 10+ years ago.

With increasing fiber coverage and with Telcos stimulating aggressive uptake, they will be able to redesign their mobile networks for what they were initially supposed to do … provide convenience and service where there is no fixed network present, such as when on the move and in areas where the economics of a fixed broadband network make it least likely to be available (e.g., rural areas), although LEO satellites (i.e., here today) and maybe stratospheric drones (i.e., 2030+) may offer solid economic alternatives for those places, interestingly further simplifying the cellular networks supporting those areas today.

TAKE AWAY.

Volume in Bytes (or bits) does not directly relate to the energy consumption of the underlying communications networks that enable the usage.

The duration, the time scale, of the customer’s usage (i.e., the use of the network resources) does cause power consumption.

The bit-wise energy efficiency of 5G is superior to that of 4G LTE. It is designed that way via its spectral efficiency. Despite this, a 5G site configuration is likely to consume more energy than a 4G LTE site in the field, as the two are not like-for-like in terms of the number of bands and the type of antennas deployed.

The absolute power consumption of a 5G configuration is a function of the number of bands deployed, the type of antennas deployed, intelligent energy management features, and the effective time over which customers demand 5G resources.

Due to its optical foundation, Fiber is far more energy efficient in both bit-wise relative terms and absolute terms than any other legacy fixed (e.g., xDSL, HFC) or cellular broadband technology (e.g., 4G, 5G).

Looking forward and with the increasing challenges of remaining sustainable and contributing to CO2 reduction, it is paramount to consider an energy-optimized fixed and mobile converged network architecture as opposed to today’s approach of optimizing the fixed network separately from the cellular network. As a society, we should expect that the industry works hard to achieve an overall reduction in energy consumption, relaxing the demand on existing green energy infrastructures.

With 5G as of today, we are orders of magnitude from the original NGMN vision of energy consumption of only half of what was consumed by cellular networks ten years ago (i.e., 2014), requiring an overall energy efficiency increase of x2000.

Be aware that many Telcos and Infrastructure providers will use bit-wise energy efficiency when they report on energy consumption. They will generally report impressive gains over time in the energy that networks consume to deliver bits to their customers. This is the least one should expect.

Last but not least, the telco world is not static and is RAT-wise not very clean, as mobile networks will have several RATs deployed simultaneously (e.g., 2G, 4G, and 5G). As such, we rarely (if ever) have apples-to-apples comparisons on cellular energy consumption.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I also greatly appreciate the discussion on this topic that I have had with Rudolf van der Berg over the last couple of years. I thank him for pointing out and reminding me (when I forget) of the shortfalls and poor quality of most of the academic work and lobbying activities done in this area.

PS

If you are aiming at a leapfrog in absolute energy reduction of your cellular network, above and beyond what you get with your infrastructure suppliers (e.g., Nokia, Ericsson, Huawei…), I really recommend you take a look at Opanga’s machine learning-based Joule ML solution. The Joule ML solution has been proven to reduce RAN energy costs by 20% – 40% on top of what the RAT supplier’s (e.g., Ericsson, Nokia, Huawei, etc.) own energy management solutions may bring.

Disclosure: I am associated with Opanga and on their Industry Advisory Board.

The Nature of Telecom Capex – a 2023 Update.

CAPEX … IT’S PERSONAL

I built my first Telco technology Capex model back in 1999. I had just become responsible for what then was called Fixed Network Engineering, with a portfolio of all technology engineering design & planning except for the radio access network but including all transport aspects from access up to Core and out to the external world. I got a bit frustrated that every time an assumption changed (e.g., business/marketing/sales), I needed to involve many people in my organization to revise their Capex demand — people who were supposed to be getting our greenfield network rolled out to our customers. Thus, I built my first Capex model that would take the critical business assumptions, size my network (including the radio access network), and consistently assign the right Capex amounts to each category. The model allowed for rapid turnaround on revised business assumptions and a highly auditable track of changes, planning drivers, and unit prices. Since then, I have built best-practice Capex (and technology Opex) models for many Deutsche Telekom AG and Ooredoo Group entities. Moreover, I have been creating numerous network and business assessment and valuation models (with an eye on M&A), focusing on technology drivers behind Capex and Opex for many different types of telco companies (30+) operating in an extensive range of market environments around the world (20+). To create and audit techno-economic models, and to make them operational and of high quality, it has (for me) been essential to be extensively involved operationally in the telecom sector.

PRELUDE TO CAPEX.

Capital investments, or Capital Expenditures, or just Capex for short, make Telcos go around. Capex is the monetary means used by your Telco to acquire, develop, upgrade, modernize, and maintain tangible, as well as, in some instances, intangible, assets and infrastructure. We can find Capex reflected under “Property, Plant, and Equipment” (PP&E) on a company’s balance sheet and, over time, as depreciation & amortization in the profit & loss (or income) statement. Typically, for an investment to be characterized as a capital expense, it needs to have a useful lifetime of at least 2 years and be a physical or tangible asset.

What about software? A software development asset is, by definition, intangible or non-physical. However, it can be, and often is, assigned Capex status, although such an assignment requires a bit more judgment (and auditor approval) than for a real physical asset.

The “Modern History of Telecom” (in Europe) is well represented by Figure 1, showing the fixed-mobile total telecom Capex-to-Revenue ratio from 1996 to 2025.

From 1996 to 2012, most of the European Telco Capex-to-Revenue ratio was driven by investment into mobile technology introductions such as 2G (GSM) in 1996 and 3G (UMTS) in 2000 to 2002, as well as initial 4G (LTE) investments. It is clear that investments into fixed infrastructure, particularly modernization and enhancement, were down-prioritized until relatively recently (i.e., up to 2010+), when incumbents felt obliged to commence investing in fiber infrastructure and in urgently modernizing their fixed infrastructures in general. For a long time, the investment focus in the telecom industry was mobile networks and sweating the fixed infrastructure assets with attractive margins.

Figure 1 illustrates the “Modern History of Telecom” in Europe. It shows the historical development of the Western European Telecom Capex to Revenue ratio trend from 1996 to 2025. The maximum was about 28% at the time 2G (GSM) was launched, and the minimum came with the cash crunch that followed the ultra-expensive 3G licenses and the dot.com crash of 2000. In recent years, since 2008, Capex to Revenue has been steadily increasing as 4G was introduced and fiber deployment started picking up after 2010. It should be emphasized that the Capex to Revenue trend is for both Mobile and Fixed. It does not include frequency spectrum investments.

Across this short modern history of telecom, possibly one of the worst industry (and technology) investments has been the investment we made into 3G. In Europe alone, we invested 100+ billion Euro (i.e., not included in the Figure) into 2100 MHz spectrum licenses that were supposed to provide mobile customers with “internet-in-their-pockets”, something that was really only enabled with the introduction of 4G from 2010 onwards.

Also, from 2010 onwards, telecom companies (in Europe) started to invest increasingly in fiber deployment, as well as in upgrading their ailing fixed transport and switching networks, focusing on enabling competitive fixed broadband services. Fiber investments have since picked up in a significant way in the overall telecom Capex, and I suspect that will remain the case for the foreseeable future.

Figure 2 When we take the European Telco revenue (mobile & fixed) over the period 1996 to 2025, it is clear that the mobile business model quantum leaped revenue from its inception to around 2008. After this, it has been in steady decline, even if improvement has been observed in the fixed part of the telco business due to the transition from voice-dominated to broadband. Source: https://stats.oecd.org/

As can be observed from Figure 1, since the telecom credit crunch between 2000 and 2003, the Capex share of revenue has steadily increased from just around 12% in 2004, right after the credit crunch, to almost 20% in 2021. Over the period from 2008 to 2021, the industry’s total revenue has steadily declined, as can be seen in Figure 2. Taking the last 10 years (2011–2021) of data, mobile and fixed revenue has, on average, declined by 4+ billion euros a year. The compound annual growth rate (CAGR) was at a great +6% from the inception of 2G services in 1996 to 2008, the year of the “great recession.” From 2008 until 2021, the CAGR has been almost -2%, i.e., an annual revenue loss for Western Europe.

What does that mean for the absolute total Capex spend over the same period? Figure 3 provides the trend of mobile and fixed Capex spending over the period. After the “happy days” of 2G and 3G Capex spending, Capex rapidly declined once the industry had spent 100+ billion Euro on 3G spectrum alone (i.e., 800+ million euros per MHz or 4+ euros per MHz-pop), before the required multi-billion Euro investments in 3G infrastructure. After 2009, the year of the lowest Capex spend following the 3G license acquisitions, the telecom industry has steadily grown its annual total Capex spend by ca. +1 billion Euro per year (up to 2021), financing new technology introductions (4G and 5G), substantial mobile radio and core modernizations (a big refresh ca. every 6 to 7 years), capacity increases to continuously cope with consumer demand for broadband, fixed transport and core infrastructure modernization, and, last but not least (over the last ~8 years), an increasing focus on fiber deployment. Over the same period from 2009 to 2021, the total revenue has declined by ca. 5 billion euros per year in Western Europe.

Figure 3 Using the above “Total Capex to Revenue” (Figure 1) and “Total Revenue” (Figure 2) allows us to estimate the absolute “Total Capex” over the same period. Apart from the big Capex swing around the introduction of 2G and 3G and the sharp drop during the “credit crunch” (2000 – 2003), Capex has grown steadily whilst the industry revenue has declined.

It will be very interesting to see how the next 10 years will develop for the telecom industry and its capital investment. There is still a lot to be done on 5G deployment. In fact, many Telcos are just getting started with what they would characterize as “real 5G”, which is 5G standalone at mid-band frequencies (e.g., > 3 GHz for Europe, 2.5 GHz for the USA), modernizing antenna structures from standard passive (low-order) to active antenna systems with higher-order MiMo antennas, possible mmWave deployments, and, of course, a quantum leap in fiber deployment in the laggard countries in Europe (e.g., Germany, the UK, Greece, the Netherlands, … ). Around 2028 to 2030, it would be surprising if the telecom industry did not commence aggressively selling consumers the next G, that is, 6G.

At this moment, the next 3 to 5 years of capital spending are being planned out with the aim of having the 2024 budgets approved by November or December. In principle, the longer-term plans, that is, until 2027/2028, have been agreed upon in terms of general principles, though, with a financial recession currently brewing, such plans would likely be scrutinized as well.

I have, over the last year since I published this article, been asked whether I had any data on EBITDA over the period for Western Europe. I have spent considerable time researching this, and the below chart provides my best shot at such a view for the telecom industry in Western Europe from the early days of mobile until today. This, however, should be taken with much more caution than the above Capex and Revenues, as individual Telcos have changed substantially over the period, both in their organizational structure and in how results have been represented in their annual reports.

Figure 4 illustrates the historical development of the EBITDA margin over the period from 1995 to 2022 and a projection of the possible trends from 2023 onwards. Caution: telcos’ corporate and financial structures (including reporting and associated transparency into details) have substantially changed over the period. The early first 10+ years are more uncertain concerning margin than the later years. Directionally, it is representative of the European Telco industry. Take Deutsche Telekom AG: it “lost” 25% of its revenue between 2005 and 2015 (considering only the German & European segments). Over the same period, it shed almost 27% of its Opex.

CAVEATS

Of course, Capex to Revenue ratios, any techno-economic ratio you may define, or cost distributions of any sort are in no way the whole story of a Telco’s life-and-budget cycle. Over time, due to possible structural changes in how Telcos operate, the past may not reflect the present and may be even less telling about the future.

Telcos may have merged with other Telcos (e.g., Mobile with Fixed), they may have non-Telco subsidiaries (i.e., IT consultancies, management consultancies, …), they may have integrated their fixed and mobile business units, or they may have spun off their infrastructure, making use of towercos for their cell site needs (e.g., GD Towers, Vantage, Cellnex, American Towers …), open fibercos (e.g., Fiberhost Poland, Open Dutch Fiber, …) for their fiber needs, and hyperscale cloud providers (e.g., Amazon AWS, Microsoft Azure, …) for their platform requirements. Capex and Opex will go left and right, up and down, depending on each of the above operational elements. All of that may make comparing one Telco’s Capex with another Telco’s investment level and operational state of affairs somewhat uncertain.

I have dear colleagues who may be much more brutal. In general, they are not wrong but not as brutally right as their often high grounds could indicate. But then again, I am not a black-and-white guy … I like colors.

So, I believe that investment levels, or more generally, cost levels, can be meaningfully compared between Telcos. Cost, be it Opex or Capex, can be estimated or modeled with relatively high accuracy, assuming you are in the know. It can be compared with other comparables or non-comparables. Though not by your average financial controller with no technology knowledge and in-depth understanding.

Alas, with so many things in this world, you must understand what you are doing, including the limitations.

IT’S THAT TIME OF THE YEAR … CAPEX IS IN THE AIR.

It is the time of the year when many telcos are busy updating their business and financial planning for the following years. It is not uncommon to plan for 3 to 5 years ahead. It involves scenario planning and stress tests of those scenarios. Scenarios would include expectations of how the relevant market will evolve as well as the impact of the political and economic environment (e.g., covid lockdowns, the war in Ukraine, inflationary pressures, supply-chain challenges, … ) and possible changes to their asset ownership (e.g., infrastructure spin-offs).

Typically, between the end of the third or beginning of the fourth quarter, telecommunications businesses would have converged upon a plan for the coming years, and work will focus on in-depth budget planning for the year to come, thus 2024. This is important for the operational part of the business, as work orders and purchase orders for the first quarter of the following year would need to be issued within the current year.

The planning process can be sophisticated, involving many parts of the organization considering many scenarios, and being almost mathematical in its planning nature. It can be relatively simple with the business’s top-down financial targets to adhere to. In most instances, it’s likely a combination of both. Of course, if you are a publicly-traded company or part of one, your past planning will generally limit how much your new planning can change from the old. That is unless you improve upon your old plans or have no choice but to disappoint investors and shareholders (typically, though, one can always work on a good story). In general, businesses tend to be cautiously optimistic about uncertain business drivers (e.g., customer growth, churn, revenue, EBITDA) and conservatively pessimistic on business drivers of a more certain character (e.g., Capex, fixed cost, G&A expenses, people cost, etc..). All that without substantially and negatively changing plans too much between one planning horizon to the next.

Capital expense, Capex, is one of the foundations, or enablers, of the telco business. It finances the building, expansion, operation, and maintenance of the telco network, allowing customers to enjoy mobile services, fixed broadband services, TV services, etc., of ever-increasing quality and diversity. I like to look at Capex as the investments I need to incur in order to sustain my existing revenues, grow my revenues (preferably beating inflationary pressures), and finance any efficiency activities that will reduce my operational expenses in the future.

If we want to make the value of Capex to the corporation a little firmer, we need a little bit of financial calculus. We can write a company’s value (CV) as

CV \; = \; \frac{FCFF_0 \; (1 \; + \; g)}{\; WACC \; - \; g \; }

With g being the expected growth rate in free cash flow in perpetuity, WACC is the Weighted Average Cost of Capital, and FCFF is the Free Cash Flow to the Firm (i.e., company) that we can write as follows;

FCFF = NOPLAT + Depreciation & Amortization (DA) – ∆ Working Capital – Capex,

with NOPLAT being the Net Operating Profit Less Adjusted Taxes (i.e., EBIT – Cash Taxes). So if I have two different Capex budgets with everything else staying the same despite the difference in Capex (if only real life were so easy, right?);

CV_X \; - \; CV_Y \; = \; \Delta Capex \; \left[ \frac{1 \; + \; g}{\; WACC \; - \; g \;} \right]

assuming that everything except the proposed Capex remains the same. With a difference of, for example, 10 Million euros, a future growth rate g = 0% (maybe conservative), and a WACC of 5% (note: you can find the latest average WACC data for the industry here, which is updated regularly by New York University Leonard N. Stern School of Business. The 5% chosen here serves as an illustration only (e.g., this was approximately representative of Telco Europe back in 2022, as of July 2023, it was slightly above 6%). You should always choose the weighted average cost of capital that is applicable to your context). The above formula would tell us that the investment plan having 10 Million euros less would be 200 Million euros more valuable (20× the Capex not spent). Anyone with a bit of (hands-on!) experience in budget business planning would know that the above valuation logic should be taken with a mountain of salt. If you have two Capex plans with no positive difference in business or financial value, you should choose the plan with less Capex (and don’t count yourself rich on what you did not do). Of course, some topics may require Capex without obvious benefits to the top or bottom line. Such examples are easy to find, e.g., regulatory requirements or geo-political risks force investments that may appear valueless or even value destructive. Those require meticulous considerations, and timing may often play a role in optimizing your investment strategy around such topics. In some cases, management will create a narrative around a corporate investment decision that fits an optimized valuation, typically hedging on one-sided inflated risks to the business if not done. Whatever decision is made, it is good to remember that Capex, and resulting Opex, is in most cases a certainty. The business benefits in terms of more revenue or more customers are uncertain as is assuming your business will be worth more in a number of years if your antennas are yellow and not green. One may call this the “Faith-based case of more Capex.”
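
For the record, here is the arithmetic of the example above as a small Python sketch (same perpetuity assumptions, and the same mountain of salt applies):

def capex_value_delta(delta_capex, wacc, growth=0.0):
    """Change in company value from spending delta_capex less, assuming FCFF
    improves one-for-one and everything else stays equal:
    dCV = delta_capex * (1 + g) / (WACC - g)."""
    return delta_capex * (1 + growth) / (wacc - growth)

# Example from the text: 10 million euros less Capex, g = 0%, WACC = 5%.
print(f"{capex_value_delta(10e6, wacc=0.05) / 1e6:.0f} M EUR")  # -> 200 M EUR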

Figure 5 provides an overview for Western Europe of annual Fixed & Mobile Capex, Total and Service Revenues, and the Capex to Revenue ratio (in %). Source: New Street Research Western Europe data.

Figure 5 provides an overview of Western European telcos’ revenue, Capex, and Capex to Revenue ratio. Over the last five years, Western European telcos have been spending at increasingly higher Capex levels. In 2021, the telecom Capex was 6 billion euros higher than what was spent in 2017, about 13% higher. Fixed and mobile service revenue increased by 14 billion euros, yielding a Capex to Service revenue ratio of 23% in 2021 compared to 20.6% in 2017. In most cases, the total revenue would be reported, and if luck has its way (or you are a subscriber to New Street Research), the total Capex, thus capturing both the mobile and the fixed business, including any non-service-related revenues of the company. As defined in this article, non-service-related revenues comprise revenues from wholesale, sales of equipment (e.g., mobile devices, STBs, and CPEs), and other non-service-specific revenues. As a rule of thumb, the ratio between total and service-related revenues is usually between 1.1 and 1.3 (e.g., the last 5-year average for WEU was 1.17).

One of the main drivers for the Western European Capex has firstly been aggressive fiber-to-the-premise (FTTP) deployment and household fiber connectivity, typically measured in homes passed across most of the European metropolitan footprint as well as urban areas in general. As fiber covers more and more residential households, increased subscription to fiber occurs as well. This also requires substantial additional Capex for a fixed broadband business. Figure 6 illustrates the annual FTTP (homes passed) deployment volume in Western Europe as well as the total household fiber coverage.

Figure 6 above shows the fiber-to-the-premise (FTTP) homes-passed deployment per annum from 2018 to 2021 actuals (source: European Commission’s “Broadband Coverage in Europe 2021,” authored by Omdia et al.) and 2021 to 2025 projected numbers (i.e., this author’s own assessment). During the period from 2018 to 2021, household fiber coverage grew from 27% to 43% and is expected to grow to at least 71% by 2026 (not including overbuild, thus unique households covered). The overbuild data are based on a work-in-progress model and really should be seen as directional (it is difficult to get data with respect to overbuild).

A large part of the initial deployment has been in relatively dense urban areas, as well as relying on aerial fiber deployment outside bigger metropolitan centers. For example, in Portugal, with close to 90% of households covered with fiber as of 2021, the existing HFC infrastructure (ducts, underground passageways, …) was a key enabler for the very fast, economical, and extensive household fiber coverage there. Although many Western European markets will be reaching or exceeding 80% fiber coverage in their urban areas, I would expect to continue to see a substantial amount of Capex being attributed to fiber deployment. In fact, what is often overlooked in the assessment of the Capex volume being committed to fiber deployment is that the unit-Capex is likely to increase substantially as countries with no aerial deployment option pick up their fiber rollout pace (e.g., Germany, the UK, the Netherlands) and countries with an already relatively high fiber coverage go increasingly suburban and rural.

Figure 7 above shows the total fiber-to-the-premise (FTTP) homes remaining per annum from 2018 to 2021 actuals (source: European Commission’s “Broadband Coverage in Europe 2021,” authored by Omdia et al.). The 2022 to 2030 projected remaining households are based on the author’s own assessment and do not consider overbuild numbers.

The second main driver is in the domain of mobile network investment. The 5G radio access deployment has been a major driver in 2020 and 2021 and is expected to continue to contribute significantly to mobile operators’ Capex in the coming 5 years. For most Western European operators, the initial 5G deployment was at 700 MHz, which provides very good 5G coverage but, due to the limited spectral bandwidth, not very impressive speeds unless combined with a solid pre-existing 4G network. The deployment of 5G at 700 MHz has had a fairly modest effect on Mobile Capex (apart from what operators had to pay out in the 5G spectrum auctions to acquire the spectrum in the first place). Some mobile networks would have been prepared to accommodate the 700 MHz spectrum on existing lower-order or classical antenna infrastructure. In 2021 and going forward, we will see an increasing part of the mobile Capex being allocated to 3.X GHz deployment. Far more sophisticated antenna systems, which coincidentally are also far more costly in unit-Capex terms, will be taken into use, such as higher-order MiMo antennas, from 8×8 passive MiMo to 32×32 and 64×64 active antenna systems. These advanced antenna systems will be deployed widely in metropolitan and urban areas. Some operators may even deploy these costly but very high-performing antenna systems in suburban and rural clutter with the intention of providing fixed wireless access services to areas that today, and for the next 5 – 7 years, continue to be under-served with respect to fixed broadband fiber services.

Overall, I would also expect mobile Capex to continue to increase above and beyond the pre-2020 level.

As an external investor with little detailed insight into individual telco operations, it can be difficult to assess whether individual businesses or the industry are investing sufficiently into their technical landscape to allow for growth and increased demand for quality. Most publicly available financial reporting does not provide (if at all) sufficient insights into how capital expenses are deployed or prioritized across the many facets of a telco’s technical infrastructure, platforms, and services. As many telcos provide mobile and fixed services based on owned or wholesaled mobile and fixed networks (or combinations thereof), it has become even more challenging to ascertain the quality of individual telecom operations’ capital investments.

Figure 8 illustrates why analysts like to plot Total Revenue against Total Capex (for fixed and mobile). It provides an excellent correlation. Though great care should be taken not to assume causation is at work here, i.e., “if I invest X Euro more, I will have Y Euro more in revenues.” It may tell you that you need to invest a certain level of Capex in sustaining a certain level of Revenue in your market context (i.e., country geo-socio-economic context). Source: New Street Research Western Europe data covering the following countries: AT, BE, DK, FI, FR, DE, GR, IT, NL, NO, PT, ES, SE, CH, and UK.

Why bother with revenues from the telco services? These would typically drive and dominate the capital investments and, as such, should relate strongly to the Capex plans of telcos. It is customary to benchmark capital spending by comparing the Capex to Revenue (see Figure 8), indicating how much a business needs to invest into infrastructure and services to obtain a certain income level. If nothing is stated, the revenue used for the Capex-to-Revenue ratio would be total revenue. For telcos with fixed and mobile businesses, it’s a very high-level KPI that does not allow for too many insights (in my opinion). It requires some de-averaging to become more meaningful.

THE TELCO TECHNOLOGY FACTORY

Figure 9 (below) illustrates the main capital investment areas and cost drivers for telecommunications operations with either a fixed broadband network, a mobile network, or both. Typically, around 90% of the capital expenditures will be invested into the technology factory comprising network infrastructure, products, services, and everything associated with information technology. The remaining ca. 10% will be spent on non-technical infrastructure, such as shops, office space, and other non-tech tangible assets.

Figure 9 Telco Capex is spent across physical (or tangible) infrastructure assets, such as communications equipment and the brick & mortar that hosts the equipment and staff. Furthermore, a considerable amount of a telco’s Capex will also go to human development work, e.g., for IT, products & services, either carried out directly by own staff or by third parties (i.e., capitalized labor). The above illustrates the macro-levels that make up a mobile or fixed telecommunications network and the most important areas Capex will be allocated to.

If we take the helicopter view on a telco’s network, we have the customer’s devices, either mobile devices (e.g., smartphone, Internet of Things, tablet, … ) or fixed devices, such as the customer premise equipment (CPE) and set-top box. Typically, the broadband network connection to the customer’s premise would require a media converter or optical network terminator (ONT). For a mobile network, we have a wireless connection between the customer device and the radio access network (RAN), the cellular network’s most southern point (or edge). The radio access technology (e.g., 3G, 4G, or 5G) is a very important determinant of the customer experience. For a fixed network connection, we have fiber or coax (cable) or copper connecting the customer’s premise and the fixed network (e.g., street cabinet). Access (in general) follows the distribution of the customers’ locations and concentration, and their generated traffic is aggregated increasingly as we move north, up towards and into the core network. In today’s modern networks, big-fat-data broadband connections interconnect with the internet and big public data centers hosting both 3rd party and operator-provided content, services, and applications that the customer base demands. In many existing networks, data centers inside the operator’s own “walls” likewise will have service and application platforms that provide customers with more of the operator’s services. Such private data centers, including what is called micro data centers (μDCs) or edge DCs, may also host 3rd party content delivery networks that enable higher quality content services to a telco’s customer base due to a higher degree of proximity to where the customers are located compared to internet-based data centers (that could be located anywhere in the world).

Figure 10 breaks out the details of a mobile as well as a fixed (fiber-based) network’s infrastructure elements, including the customers’ various types of devices.

Figure 10 illustrates that, on a helicopter level, a fixed and a classical mobile network structure are reasonably similar, with the main difference being that one network carries the mobile traffic and the other the fixed traffic. The traffic in the fixed network tends to be at least ten times larger than in the mobile network. They mainly differ in the access node and how it connects to the customer. For fixed broadband, the physical connection is established between, for example, the OLT (Optical Line Terminal) in the optical distribution network and the ONT (Optical Network Terminal) at the customer’s home via a fiber line (i.e., wired). The wireless connection for mobile is between the Radio Node’s antenna and the end-user device. Note: AAS: Advanced Antenna System (e.g., MiMo, massive-MiMo), BBU: Base-band unit, CPE: Customer Premise Equipment, IOT: Internet of Things, IX: Internet Exchange, OLT: Optical Line Termination, and ONT: Optical Network Termination (same as ONU: Optical Network Unit).

From Figure 10 above, it should be clear that there are a lot of similarities between the mobile and fixed networks, with the biggest difference being that the mobile access network establishes a wireless connection to the customer’s devices versus the fixed access network’s physically wired connection to the device situated at the customer’s premises.

This is good news for fixed-mobile telecommunications operators, as these will have considerable architectural and, thus, investment synergies due to those similarities. Although the sad truth is that, even today, many fixed-mobile telco companies, particularly incumbents, remain far away from having achieved fixed-mobile network harmonization and convergence.

Moreover, there are many questions to be asked, as well as concerns, when it comes to our industry’s Capex plans: what is the Capex required to accommodate data growth; do existing budgets allow for sufficient network densification (to accommodate growth and quality); what is the Capex trade-off between frequency spectrum acquisition, antenna technology, and site densification; how much Capex is justified in pursuing the best network in a given market; what is the suitable trade-off between investing in fiber to the home and aggressive 5G deployment; should (incumbent) telcos pursue fixed wireless access (FWA), and how would that impact their capital plans; what is the right antenna strategy; etc…

On a high level, I will provide guidance on many of the above questions, in this article and in forthcoming ones.

THE CAPEX STRUCTURE OF A TELECOM COMPANY.

When taking a macro look at Capex and not yet having a good idea about the breakdown between mobile and fixed investment levels, we are helped by the fact that, on a macro level, the Capex categories are similar for a fixed and a mobile network. Apart from the last mile (access) being a fixed line (e.g., fiber, coax, or copper) in a fixed network and a wireless connection in a mobile network, the rest is comparable in nature and function. This is not surprising, as a business with a fixed-mobile infrastructure would (should!) leverage the commonalities in transport and part of the access architecture.

In the fixed business, devices required to enable services on the fixed-line network at the fixed customers’ home (e.g., CPE, STB, …) are a capital expense driven by new customers and device replacement. This is not the case for mobile devices (i.e., an operational expense).

Figure 11 above illustrates the major Capex elements and their distribution defined by the median, lower and upper quartiles (the box), and lower and upper extremes (the whiskers) of what one should expect of various elements’ contribution to telco Capex. Note: CPE: Customer Premise Equipment, STB: Set-Top Box.

CUSTOMER PREMISE EQUIPMENT (CPE) & SET-TOP BOX (STB) INVESTMENTS ARE BETWEEN 10% AND 20% OF THE TELECOM CAPEX.

The capital investment level into customer premise equipment (CPE) depends on the expected growth in the fixed customer base and the replacement of old or defective CPEs already in the fixed customer base. We would generally expect this to make up between 10% and 20% of the total Capex of a fixed-mobile telco (and 0% in a mobile-only business). When migrating from one access technology (e.g., copper/xDSL phase-out, coaxial cable) to another (e.g., fiber or hybrid coaxial cable), more Capex may be required. Similar considerations apply to set-top box (STB) replacement due to, for example, a new TV platform, non-compliance with new requirements, etc. Many Western European incumbents are phasing out their extensive and aging copper networks and replacing those with fiber-based networks. As incumbents incur substantial capital requirements phasing out their legacy copper-based access networks, a capital burden may also fall on competitor telcos in markets where this is happening, if those competitors have a significant copper-based wholesale relationship with the incumbent.

In summary, over the next five years, we should expect an increase in CPE-based Capex due to the legacy copper phase-out of incumbent fixed telcos. This will also increase the capital pressure in transport and access categories.

CPE & STB Capex KPIs: Capex share of Total and Capex per Gross Added Customer.

Capex modeling comment: Use your customer forecast model as the driver for new CPEs. Your research should give you an idea of the price range of CPEs used by your target fixed broadband business. Always include CPE replacement in the existing base as well as the gross adds for the new CPEs. Many fixed broadband retail businesses have been conservative in the capabilities of the CPEs they have offered to their customer base (e.g., low-end cheaper CPEs, poor WiFi quality, ≤1 Gbps), and it should be considered that these may not be sufficient for customer demand in the following years. An incumbent with a large installed base of xDSL customers may also have a substantial migration (to fiber) cost, as CPEs are required to be replaced with fiber CPEs. Due to the current supply chain and delivery issues, I would assume that operators would be willing to pay a premium for getting critical stock as well as having priority delivery as stock becomes available (e.g., by more expensive shipping means).
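
As a minimal sketch of the above modeling comment; the unit price, replacement rate, and migration volume below are hypothetical placeholders, not benchmarks.

def cpe_capex_eur(gross_adds, installed_base, unit_price_eur,
                  replacement_rate=0.10, migration_units=0):
    """Annual CPE Capex = (gross adds + replacements in the existing base
    + technology-migration swaps, e.g., xDSL -> fiber) x unit price."""
    units = gross_adds + replacement_rate * installed_base + migration_units
    return units * unit_price_eur

# Hypothetical example: 100k gross adds, 1M installed base, 10% annual replacement,
# 150k copper-to-fiber migrations, 60 EUR per CPE (all figures assumed).
print(f"{cpe_capex_eur(100_000, 1_000_000, 60, migration_units=150_000) / 1e6:.0f} M EUR")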

CORE NETWORK & SERVICE PLATFORM INVESTMENTS, INCLUDING DATA CENTERS, ARE BETWEEN 8% AND 12% OF THE TELECOM CAPEX.

Core network and service platforms should not take up more than 10% of the total Capex. We would regard anything less than 5% or more than 15% as an anomaly in capital prioritization. This said, over the next couple of years, many telcos with mobile operations will launch 5G standalone core networks, which is a substantial change to the existing core network architecture. This also raises the opportunity for lifting and shifting from monolithic systems or older cloud frameworks to cloud-native, and possibly migrating certain functions onto public cloud domains from one or more hyperscalers (e.g., AWS, Azure, Google). As workloads are moved from telco-owned data centers and the telco’s own monolithic core systems, the telco technology cost structure may change from what was previously a substantial capital expense to an operational expense. This is particularly true for software-related developments and licensing.

Another core network & service platform Capex pressure point may come from political or investor pressure to replace Chinese network elements, often far removed from obsolescence and performance issues, with non-Chinese alternatives. This may raise the core network Capex level for the next 3 to 5 years, possibly beyond 12%. This should, however, be temporary.

In summary, the following topics would likely be on the Capex priority list:

1. Life-cycle management investments (which I like to call Business-as-Usual demand) into software and hardware maintenance, end-of-life replacements, growth (software licenses, HW expansions), and miscellaneous topics. This area tends to dominate the Capex demand unless larger transformational projects exist. It is also the first area to be de-prioritized if required. Working with Priority 1, 2, and 3 categorizations is a good capital planning methodology: Priority 1 is required within the coming budget year, Priority 2 is important but can wait until year two without building up too much technical debt, and Priority 3 is nice to have and not expected to be required within the two subsequent budget years.

2. 5G (Standalone, SA) Core Network deployment (timeline: 18 – 24 months).

3. Network cloudification, initially lift-and-shift with subsequent cloud-native transformation. The trigger point will be enabling the deployment of the 5G standalone (SA) core. Operators will also take the opportunity to clean up their data centers and core network locations (timeline: 24 – 36 months).

4. Although edge computing data centers (DC) typically are supposed to support the radio access network (e.g., for Open-RAN), the capital assignment would be with the core network, as the expertise for this resides there. The intensity of this Capex (if built by the operator, otherwise it would be Opex) will depend on the country’s size and fronthaul/backhaul design. The investment trigger point would generally commence on Open-RAN deployment (e.g., 1&1 & Telefonica Germany). The edge DC (or μDC) would most likely be standard container-sized (or half that size) and could easily be provided by an independent towerco or specialized edge-DC 3rd-party providers, lessening the Capex required of the telco. For smaller geographies (e.g., Netherlands, Denmark, Austria, …), I would not expect this item to be a substantial topic in the Capex plans, particularly if Open-RAN is not being pursued by mainstream incumbent telcos over the next 5 – 10 years.

5. Chinese supplier replacement. The urgency would depend on regulatory pressure, whether compensation is provided (unlikely) or not, and the obsolescence timeline of the infrastructure in question. Given the high quality at very affordable economics, I expect this not to have the highest priority and to be executed within timelines dictated more by economics and obsolescence. In any case, I expect that before 2025 most European telcos will have phased out Chinese suppliers from their Core Networks, incl. any Service platforms in use today (timeline: max. 36 months).

6. Cybersecurity investments strengthening infrastructure, processes, and vital data residing in data centers, service platforms, and core network elements. I expect a substantial increase in Capex (and Opex) arising from telcos’ focus on increasing the cyber protection of their critical telecom infrastructure (timeline: max. 18 months, with urgency).

Core Capex KPIs: Capex share of Total (knowing the share, it is straightforward to get the Capex per Revenue related to the Core), Capex per Incremental demanded data traffic (in Gigabytes and Gigabits per second), Capex per Total traffic, Capex per customer.

Capex modeling comment: In case I have little specific information about an operator’s core network and service platforms, I would tend to model it as Euro per customer, Euro per incremental customer, and Euro per incremental traffic, checking that I am not violating the Capex range this category would typically fall within (e.g., 8% to 12%). I would also have to consider obsolescence investments, taking, for example, a percentage of previously cumulated core investments. As mobile operators are in the process, or soon will be, of implementing a 5G standalone core, having an idea of the number of 5G customers and their traffic would be useful in order to factor that in separately in this Capex category.
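As a minimal sketch of how those drivers (Euro per customer, per incremental customer, per incremental traffic, plus an obsolescence term) could be wired up and checked against the 8% to 12% band; all coefficients and inputs below are illustrative placeholders:

```python
# Illustrative core-network & service-platform Capex model. Amounts in EUR million (MEUR);
# all coefficients are placeholders showing the mechanics, not benchmarks.

def core_capex_meur(customers_m: float, incr_customers_m: float,
                    incr_traffic_growth: float, cum_core_capex_meur: float,
                    per_customer_eur: float = 1.0,
                    per_incr_customer_eur: float = 8.0,
                    traffic_coeff_meur: float = 30.0,
                    obsolescence_rate: float = 0.08) -> float:
    # customers in millions * EUR per customer -> MEUR, plus growth, traffic and obsolescence terms
    return (customers_m * per_customer_eur
            + incr_customers_m * per_incr_customer_eur
            + incr_traffic_growth * traffic_coeff_meur
            + cum_core_capex_meur * obsolescence_rate)

total_capex_meur = 600.0   # assumed total telco Capex for the sanity check
core = core_capex_meur(customers_m=10, incr_customers_m=0.5,
                       incr_traffic_growth=0.25, cum_core_capex_meur=400)
print(f"Core Capex ≈ {core:.0f} MEUR, {core / total_capex_meur:.0%} of total (band: 8%–12%)")
```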

When estimating the possible Capex spend on Edge-RAN locations, I would consider that I need ca. 1 μDC per 450 to 700 km² of O-RAN coverage (i.e., corresponding to a fronthaul distance between the remote radio and the baseband unit of 12 to 15 km). For an integrated fixed-mobile telco, there may be synergies between fixed broadband access locations and the need for μ-datacenters in an O-RAN deployment. I suspect that 3rd-party towercos, or the like, may eventually also offer this kind of site solution, possibly sharing the cost with other mobile O-RAN operators.
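A quick back-of-the-envelope for the μDC count, using the 450 to 700 km² per μDC rule of thumb above; the coverage area in the example is an assumption for illustration only:

```python
# Rough μDC count estimate for an Open-RAN edge deployment.
import math

def edge_dc_count(coverage_km2: float, km2_per_udc: float = 575.0) -> int:
    # 575 km² is simply the midpoint of the 450–700 km² range quoted above.
    return math.ceil(coverage_km2 / km2_per_udc)

# Example: ~43,000 km² of O-RAN coverage (roughly the land area of Denmark or the Netherlands).
for density in (450, 575, 700):
    print(f"{density} km² per μDC -> {edge_dc_count(43_000, density)} edge data centers")
```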

Transport – core, metro & aggregation investments are between 5% and 15% of Telecom Capex.

The transport network consists of an optical transport network (OTN) connecting all infrastructure nodes via optical fiber. The optical transport network extends from the Core through the Metro and Aggregation layers down to the access layer. On top, the IP network ensures logical connection and control flow of all data transported up and downstream between the infrastructure nodes. As data traffic is carried from the edge of the network upstream, it is aggregated at one or several places in the network (and, of course, disaggregated in the downstream direction). Thus, the higher up in the transport network, the more bandwidth must be supported on both the optical and IP layers. Most of the Capex goes to ensuring that sufficient optical and IP capacity is available to support the growth projections and new service requirements from the business, and that no bottlenecks occur that may have disastrous consequences for customer experience. This mainly comes down to adding cards and ports to already installed equipment and upgrading & replacing equipment as it reaches capacity or quality limitations or eventually becomes obsolete. There may be software license fees associated with growth or the introduction of new services that also need to be considered.

Figure 12 above illustrates (high-level) the transport network topology with the optical transport network and IP networking on top. Apart from optical and IP network equipment, this area often includes investments into IP application functions and related hardware (e.g., BNG, DHCP, DNS, AAA RADIUS Servers, …), which have not been shown above. In most cases, the underlying optical fiber network would be present and sufficiently scalable, not requiring substantial Capex apart from some repair and minor extensions. Note: DWDM: Dense Wavelength-Division Multiplexing, an optical fiber multiplexing technology that increases the bandwidth utilization of a fiber optical network (FON). BNG: Border Network Gateway, connecting subscribers to a network or an internet service provider’s (ISP) network, important in wholesale arrangements where a 3rd party provides aggregation and access. DHCP: Dynamic Host Configuration Protocol, providing IP address allocation and client configurations. AAA: Authentication, Authorization, and Accounting of the subscriber/user. RADIUS: Remote Authentication Dial-In User Service (Server), providing the AAA functionalities.

Although many telcos operate fixed-mobile networks and might even offer fixed-mobile converged services, they may still operate largely separate fixed and mobile networks. It is not uncommon to find very different transport design principles as well as supplier landscapes between fixed and mobile. The maturity, the time when each was initially built, and the technology roadmaps have historically been very different. The fixed traffic dynamics and data volumes are several times higher than those of mobile traffic. The geographical presence of fixed and mobile tends to be very different (unless the telco of interest is the incumbent with a considerable copper or HFC network). However, the biggest reason for this state of affairs has been people and technology organizations within the telcos resisting change and the much more aggressive transport consolidation that would have been possible.

The mobile traffic could (should!) be accommodated at least from the metro/aggregation layers and upstream through the core transport. There may even be some potential for consolidation of fronthaul and backhaul that is worth considering. This would lead to supplier consolidation and organizational synergies as the technology organizations converge into a single fixed-mobile engineering organization rather than two separate ones.

I would expect the share of Capex to be at the higher end of the likely range, towards 10+%, at least for the next couple of years, particularly if fixed and mobile networks are being harmonized at the transport level, which may also create an opportunity to reduce and harmonize the supplier landscape.

In summary, the following topics would likely be on the Capex priority list:

  1. Life-cycle management (business-as-usual) investments, accommodating growth, including new service and quality requirements. There are no indications that traffic growth rates over the next five years will be very different from the past. If anything, the 5-year CAGR is slightly decreasing.
  2. Consolidating fixed and mobile transport networks (timelines: 36 to 60 months, depending on network size and geography). Some companies are already in the process of getting this done.
  3. Chinese supplier replacement. To my knowledge, there is less regulatory discussion and political pressure for telcos to phase out Chinese transport infrastructure. Nevertheless, with the current geopolitical climate (and the upcoming US election in 2024), telcos need to consider this topic very carefully, despite the economic (less competition, higher cost), quality, and possibly innovation consequences that may result from a departure from such suppliers. It would be a natural consideration in case of modernization needs. An accelerated phase-out may be justified to remove future risks arising from geopolitical pressures.

While I have chosen not to include the access transport under this category, it is not uncommon to see its budget demand assigned here, as the transport side of access (fronthaul and backhaul) is technically very synergistic with the transport considerations in aggregation, metro, and core.

Transport Capex KPIs: Capex share of Total, the amount of Capex allocated to Mobile-only and Fixed-only (and, of course, to a harmonized/converged evolved transport network), the utilization level (if data is available or modeled to this level), and the amount of Capex spent on fiber deployment, active and passive optical transport, and IP.

Capex modeling comment: I would see whether any information is available on the number of core data center, aggregation, and metro locations. If this information is available, it is possible to get an impression of the core, aggregation, and metro transport networks. If this information is not available, I would assume a sensible transport topology given the particularities of the country where the operator resides, considering whether the operator is an incumbent fixed operator with mobile, a mobile-only operation, or a mobile operator that later has added fixed broadband to its product portfolio. If we are not talking about a greenfield operation, most, if not all, will already be in place, and mainly obsolescence, incremental traffic, and possible transport network extensions would incur Capex. It is important to understand whether fixed-mobile operations have harmonized and integrated their transport infrastructure or largely run those independently of each other. There is substantial Capex synergy in operating an integrated transport network, although it will take time and Capex to get to that integration point.
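A rough sketch of how such a transport Capex model could be wired up, combining a refresh term over the node base with a traffic-growth term; the node counts and unit costs are purely illustrative placeholders, and the result should be sanity-checked against the 5% to 15% band:

```python
# Illustrative transport (core/metro/aggregation) Capex sketch. Amounts in EUR million;
# node counts and per-node unit costs are placeholders, not benchmarks.

def transport_capex_meur(core_sites: int, metro_sites: int, aggregation_sites: int,
                         traffic_growth: float,               # e.g., 0.3 for +30% YoY
                         core_unit_meur: float = 1.5, metro_unit_meur: float = 0.4,
                         agg_unit_meur: float = 0.1, refresh_share: float = 0.2,
                         growth_coeff_meur: float = 60.0) -> float:
    # Refresh/obsolescence: a share of the installed node base is touched each year.
    refresh = refresh_share * (core_sites * core_unit_meur
                               + metro_sites * metro_unit_meur
                               + aggregation_sites * agg_unit_meur)
    # Growth: cards, ports, and licenses scaled with the annual traffic growth.
    growth = traffic_growth * growth_coeff_meur
    return refresh + growth

capex = transport_capex_meur(core_sites=4, metro_sites=40, aggregation_sites=400,
                             traffic_growth=0.3)
print(f"Transport Capex ≈ {capex:.0f} MEUR (sanity check: 5%–15% of total Capex)")
```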

Access investments are typically between 35% and 50% of the Telecom Capex.

Figure 13 (above) is similar to Figure 8 (above), emphasizing the access part of fixed and mobile networks. I have extended the mobile access topology to capture the newer developments of Open-RAN and fronthaul requirements, with pooling (“centralizing”) of the baseband (BBU) resources in an edge cloud (e.g., a container-sized computing center). Fronthaul & Open-RAN pose requirements on the access transport network. It can be relatively costly to transform a legacy RAN backhaul-only based topology into an Open-RAN fronthaul-based topology. For greenfield deployments, Open-RAN and fronthaul topologies are more flexible and at least require less Capex and Opex.

Mobile Access Capex.

I will define mobile access (or radio access network, RAN) as everything from the antenna on the site location that supports the customers’ usage (or traffic demand) via the active radio equipment (on-site or residing in an edge-cloud datacenter), through the fronthaul and backhaul transport, up to the point before aggregation (i.e., pre-aggregation). It includes passive and active infrastructure on-site, steel & mortar or storage container, front- and backhaul transport, data center software & equipment (that may be required in an edge data center), and any other hardware or software required to have a functional mobile service on whatever G is being sold by the mobile operator.

Figure 14 above illustrates a radio access network architecture that is typically deployed by an incumbent telco supporting up to 4G and 5G. A greenfield operation on 5G (and maybe 4G) could (maybe should?) choose to disaggregate the radio access node using an open interface, allowing for a supplier mix between the remote radio head (RRH and digital frontend) at the site location and the centralized (or distributed) baseband unit (BBU). Fronthaul connects the antenna and RRH with a remote BBU that is situated at an edge-cloud data center (e.g., storage container datacenter unit = micro-data center, μDC). Due to latency constraints, the distance between the remote site and the BBU should not be much more than 10 km. It is customary to name the 5G new radio node a gNB (g-Node-B) like the 4G radio node is named eNB (evolved-Node-B).

When considering the mobile access network, it is good to keep in mind that, at the moment, there are at least two main flavors (that can be mixed, of course) to consider. (1) A classical architecture with the site’s radio access hardware and software from a single supplier, with a remote radio head (RRH) as well as digital frontend processing at or near the antenna. The radio nodes do not allow for mixing suppliers between the remote RF and the baseband. Radio nodes are connected to backhaul transmission that may be enabled by fiber or microwave radios. This option is simple and very well-proven. However, it comes with supplier lock-in and possibly less efficient use of baseband resources, as these are likewise fixed to the radio node in which the baseband unit is installed. (2) A new Open- or disaggregated radio access network (O-RAN), with the antenna and RRH at the site location (the RU, radio unit in O-RAN), then connected via fronthaul (≤ 10 – 20 km distance) to a μDC that contains the baseband unit (the DU, distributed unit in O-RAN). The μDC would then be connected to the backhaul that connects northbound to the Central Unit (CU), aggregation, and core. The open interface between the RRH (and digital frontend) and the BBU allows for different suppliers and for hosting the RAN-specific software on common off-the-shelf (COTS) computing equipment. It allows (in theory) for better scaling and efficiency with the baseband resources. However, the framework has not been standardized by the usual bodies of standardization (e.g., 3GPP) and is not universally accepted as a common standard that all telco suppliers would adhere to. It also has not reached maturity yet (sort of obvious) and is currently (as of July 2022) seen to be associated with substantial cyber-security risks (re: maturity). It may be an interesting deployment model for greenfield operations (e.g., Rakuten Mobile Japan, Jio India, 1&1 Germany, Dish Mobile USA). The O-RAN options are depicted in Figure 15 below.

Figure 15 The above illustrates a generic Open RAN architecture starting with the Advanced Antenna System (AAS) and the Radio Unit (RU). The RU contains the functionality associated with the (OSI model) layer 1, partitioned into the lower layer 1 functions with the upper layer 1 functions possibly moved out of the RU and into the Distributed Unit (DU) connected via the fronthaul transport. The DU, which typically will be connected to several RUs, must ensure proper data link management, traffic control, addressing, and reliable communication with the RU (i.e., layer 2 functionalities). The distributed unit connects via the mid-haul transport link to the so-called Central Unit (CU), which typically will be connected to several DUs. The CU plays an important role in the overall ORAN architecture, acting as a central control and management vehicle that coordinates the operations of DUs and RUs, ensuring an efficient and effective operation of the ORAN network. As may be obvious, from the summary of its functionality, layer 3 functionalities reside in the CU. The Central Unit connects via backhaul, aggregation, and core transport to the core network.

For established incumbent mobile operators, I do not see Option (2) as very attractive, at least for the next 5 – 7 years when many legacy technologies (i.e., non-5G) remain to be supported. The main concerns should be the maturity, the lack of industry-wide standardization, as well as the cost of transforming existing access transport networks into compliance with a fronthaul framework. Most likely, some incumbents, the “brave” ones, will deploy O-RAN for 1 or a few 5G bands and keep their legacy networks as is. Most incumbent mobile operators will choose (actually have chosen already) conventional suppliers and the classical topology option to provide their 5G radio access network, as it has the highest synergy with the access infrastructure already deployed. Thus, if my assertion is correct, O-RAN will only start becoming mass-market mainstream in 5 to 7 years, when existing deployments become obsolete, and may ultimately become mass-market viable with the introduction of 6G towards the end of the twenties. The verdict is very much still out there, in my opinion.

Planning the mobile radio access network’s Capex requirements is not (that) difficult. Most of it can be mathematically derived and easily assessed against growth expectations, expected (or targeted) network utilization (or efficiency), and quality. The growth expectations (should) come from the consumer and retail businesses’ forecast of mobile customers over the next 3 to 5 years, their expected usage or data-plan distribution (maybe including technology distributions; if the business does not care, technology should), as well as the desired level of quality (usually the best).

Figure 16 above illustrates a typical cellular planning structural hierarchy from the sector perspective. One site typically has 3 sectors. One sector can have multiple cells depending on the frequency bands installed in the (multi-band) antennas. Massive MiMo antenna systems provide targeted cellular beams toward the user’s device, extending the range of coverage (via the beam). Very fast scheduling enables beams to be switched/cycled to other users in the covered sector (a bit oversimplified). Typically, the sector is planned according to cell utilization, thus on a frequency-by-frequency basis.

Figure 17 illustrates that most investment drivers can be approached as statistical distributions. Those distributions tell us how much investment is required to ensure that a critical parameter X remains below a pre-defined critical limit Xc with a given probability (i.e., the proportion of the distribution exceeding Xc). The planning approach will typically establish a reference distribution based on actual data. Then, based on marketing forecasts, the planners will evolve the reference according to the expected future usage that drives the planning parameter. Example: Let X be the customer’s average speed in a radio cell (e.g., in a given sector of an antenna site) in the busy hour. The business (including technology) has decided to target that 98% of its cells should provide better than 10 Mbps for more than 50% of the active time a customer uses a given cell. Typically, we will have several quality-based KPIs, and the more they are breached, the more likely it is that a Capex action is initiated to improve the customer experience.
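To make the statistical planning logic concrete, below is a small Python sketch. It assumes busy-hour cell speeds follow a lognormal distribution (a common, but here purely illustrative, assumption; the parameters and speed figures are not from actual network data) and computes the share of cells falling below the 10 Mbps critical limit as the median speed erodes with growing demand.

```python
# Share of cells breaching a critical speed limit Xc, assuming lognormal busy-hour speeds.
import math

def breach_share(median_mbps: float, sigma: float, xc_mbps: float = 10.0) -> float:
    # P(X < Xc) for a lognormal distribution with the given median and log-scale sigma.
    z = (math.log(xc_mbps) - math.log(median_mbps)) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

sigma = 0.8                      # assumed spread of cell speeds across the network
for median in (60, 45, 30, 20):  # median busy-hour speed eroding as demand grows
    print(f"median {median:>2} Mbps -> {breach_share(median, sigma):.1%} of cells below 10 Mbps")
```

Once the breached share exceeds the accepted tolerance, a Capex action (expansion, new band, densification) would be triggered.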

Network planners will have access to much information down to the cell level (i.e., the active frequency band in a given sector). This helps them develop solid planning and statistical models that provide confidence in the extrapolation of the critical planning parameters as demand changes (typically increases), which subsequently drives the need for expansions, parameter adjustments, and other optimization requirements. As shown in Figure 17 above, it is customary to allow some cells to breach a defined critical limit Xc, though the share is usually kept low to ensure a given customer experience level. Examples of planning parameters could be cell (and sector) utilization in the busy hour, active concurrent users in a cell (or sector), the time users spend at or below a speed level deemed poor in a given cell, physical resource block (the famous PRB, try to ask what it stands for & what it means😉) utilization, etc.

The following topics would likely be on the Capex priority list:

  1. New radio access deployment Capex. This may be for building new sites for coverage, typically in newly built residential areas, and due to capacity requirements where existing sites can no longer support the demand in a given area. Furthermore, this Capex also covers a new technology deployment such as 5G or deploying a new frequency band requiring a new antenna solution, like 3.X GHz would do. As independent tower infrastructure companies (towercos) are increasingly used to provide the required passive site infrastructure solution (e.g., location, concrete, or steel masts/towers/poles), this part will not be a Capex item but will be charged as Opex back to the mobile operator. From a European mobile radio access network Capex perspective, the average cost of a total site solution, with active as well as passive infrastructure, should have been reduced by ca. 100+ thousand Euro, which may translate into a monthly Opex charge of 800 to 1300 Euro per site solution. It should be noted that while many operators have spun off their passive site solutions to third parties and thus effectively reduced their site-related Capex, the cost of antennas has increased dramatically as operators have moved away from classical simple SiSo (Single-in Single-out) passive antennas to much more advanced antenna systems supporting multiple frequency bands, higher-order antennas (e.g., MiMo), and recently also started deploying active antennas (i.e., integrated amplifiers). This is largely also driven by mobile operators commissioning more and more frequency bands on their radio-access sites. The planning horizon needs to be at least 2 years and preferably 3 to 5 years.
  2. Capex investments that accommodate anticipated radio access growth and increased quality requirements. It is normal to be between 18 – 24 months ahead of the present capacity demand overall, accepting that no more than 2% to 5% of cells (in the busy hour, BH) breach a critical specification limit. Several such critical limits would be used for longer-term planning and operational day-to-day monitoring.
  3. Life-cycle management (business-as-usual) investments such as annual software fees, including licenses that are typically structured around the technologies deployed (e.g., 2G, 3G, 4G, and 5G), and active infrastructure modernization replacing radio access equipment (e.g., baseband units, radio units, antennas, …) that has become obsolete. Site rework or construction optimization would typically be executed (on request from the operator) by the towerco entity where the mobile operator leases the passive site infrastructure. Thus, in such instances, it may not be a Capex item but charged back as an operational expense to the telco.
  4. Even if there has been less regulatory discussion and political pressure for telcos to phase out Chinese radio access equipment, supplier replacement should be considered. Nevertheless, with the current geopolitical climate (and the upcoming US election), telcos need to consider this topic very carefully, despite the economic (less competition, higher cost), quality, and possibly innovation consequences that may result from a departure from such suppliers. It would be a natural consideration in case of modernization needs. An accelerated phase-out may be justified to remove future risks arising from geopolitical pressures, although it would result in an above-and-beyond capital commitment over a shorter period than otherwise would be the case. Telco valuation may suffer more in the short to medium term than it would have with a more natural phase-out due to obsolescence.

Mobile Access Capex KPIs: Capex share of Total, Access Utilization (the ratio of reported/planned data traffic demand to the data traffic that could be supplied if all or part of the spectrum were activated), Capex per Site location, Capex per Incremental data traffic demand (in Gigabytes and Gigabits per second; this is the real investment driver), Capex per Total Traffic (in Gigabytes and Gigabits per second), Capex per Mobile Customer, and Capex to Mobile Revenue (preferably service revenue, but the total is fine if the other is not available). As a rule of thumb, 50% of a mobile network typically covers rural areas, which also may carry less than 20% of the total data traffic.

Whether actual and planned Capex is available or an analyst is modeling it, the above KPIs should be followed over an extended period. A single year does not tell much of a story.

Capex modeling comment: When modeling the Capex required for the radio access network, you need to have an idea about how many sites your target telco has. There are many ways to get to that number. In most European countries, it is a matter of public record. Nowadays, most telcos rarely build their own passive site infrastructure but get that from independent third-party tower companies (e.g., CellNex w. ca. 75k locations, Vantage Towers w. ca. 82k locations, …) or site-share on another operator’s site locations if available. So, modeling the RAN Capex is a matter of having a benchmark of the active equipment, knowing what active equipment is most likely to be deployed and in what quantity. I see this as an iterative modeling process. Given the number of sites and historical Capex, it is possible to come to a reasonable estimate of both the volume of sites being changed and the range of unit Capex (given good guesstimates of the active equipment pricing range). Of course, in case you are doing a Capex review, the data should be available to you, and the exercise should be straightforward. The mobile Capex KPIs above will allow for consistency checks of a modeling exercise or guide a Capex review process.
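As a minimal sketch of that iterative logic, with the KPI checks from the list above; the site counts, unit Capex levels, customer and traffic figures are all illustrative placeholders (active equipment only, with passive infrastructure assumed to be leased from a towerco and hence Opex):

```python
# Illustrative RAN Capex model: sites touched per driver times an assumed unit Capex,
# then expressed as the mobile access KPIs. All figures are placeholders, not benchmarks.

sites_total = 5_000
drivers = {                        # (sites touched per year, assumed unit Capex in kEUR)
    "new coverage/capacity sites":  (150, 120),
    "5G / new band overlays":       (900,  60),
    "modernization (obsolescence)": (400,  45),
}

ran_capex_meur = sum(n * unit for n, unit in drivers.values()) / 1_000
mobile_customers_m = 12.0
incremental_traffic_pb = 250.0     # assumed incremental annual data demand, petabytes

print(f"RAN Capex ≈ {ran_capex_meur:.0f} MEUR")
print(f"  per site location  : {ran_capex_meur * 1e6 / sites_total:,.0f} EUR")
print(f"  per mobile customer: {ran_capex_meur / mobile_customers_m:.1f} EUR")
print(f"  per incremental PB : {ran_capex_meur * 1e3 / incremental_traffic_pb:,.0f} kEUR")
```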

I recommend using the classical topology described above when building a radio access model. That is unless you have information that the telco under analysis is transforming to a disaggregated topology with both fronthaul and backhaul. Remember you are not only required to capture the Capex for what is associated with the site location but also what is spent on the access transport. Otherwise, there is a chance that you over-estimate the unit-Capex for the site-related investments.

It is also worth keeping in mind that, typically, the first place a telecom company would cut Capex (or down-prioritize) if pressured during the planning process is the radio access network category. The reason is that the site-related unitary Capex tends to be incredibly well-defined. If you reduce your rollout by 100 site-related units, you have a very well-defined quantum of Capex that can be allocated to another category. Also, the operational impact of cutting in this category tends to be very well-defined. Depending on how well the overall Capex planning has been done, there would typically be a slack of 5% to 10% overall that could be re-assigned or ultimately cut if financial results warrant such a move.

Fixed Access Capex.

Like mobile access, fixed access is about getting your service out to your customers. Or, if you are a wholesale provider, you provide the means for your wholesale customers to reach their customers through your own fixed access transport infrastructure. Fixed access is about connecting the home, the office, the public institution (e.g., school), or whatever type of dwelling in general.

Figure 18 illustrates a fixed access network and its position in the overall telco architecture. The following make up the ODN (Optical Distribution Network): OLT: Optical Line Termination, ODF: Optical Distribution Frame, POS: Passive Optical Splitter, ONT: Optical Network Termination. At the customer premise, besides the ONT, we have the CPE: Customer Premise Equipment and the STB: Set-Top Box. Suppose you are an operator that bought wholesale fixed access from another telco (incl. Open Access Providers, OAPs). In that case, you may require a BNG to establish the connection with your customer’s CPE and STB through the wholesale access network.

As fiber optical access networks are being deployed across Europe, this tends to be a substantial Capex item on the budgets of telcos. Here we have two main Capex drivers. First is the Capex for deploying fibers across urban areas, which provides coverage for households (or dwellings) and is measured as Capex-per-homes passed. Second is the Capex required for establishing the connection to households (or dwellings). The method of fiber deployment is either buried, possibly using existing ducts or underground passageways, or via aerial deployment using established poles (e.g., power poles or street furniture poles) or new poles deployed with the fiber deployment. Aerial deployment tends to incur lower Capex than buried fiber solutions due to requiring less civil work. The OLT, ODF, POS, and optical fiber planning, design, and build to provide home coverage depends on the home-passed deployment ambition. The fiber to connect a home (i.e., civil work and materials), ONT, CPE, and STBs are driven by homes connected (or FTTH connected). Typically, CPE and STBs are not included in the Access Capex but should be accounted for as a separate business-driven Capex item.

The network solutions (BNG, OLT, routers, switches, …) outside the customer’s dwelling come in the form of a cabinet and appropriate cards to populate the cabinet. The cards provide the capacity and serviced speed (e.g., 100 Mbps, 300 Mbps, 1 Gbps, 10 Gbps, …) sold to the fixed broadband customer. Moreover, for some of the deployed solutions, there is likely a mandatory software (incl. features) fee and possibly both optional and custom-specific features (although it is rare to see that in mainstream deployments). It should be clear (but you would be surprised) that the ONT and CPE should support the provisioned speed of the fixed access network. The customer cannot get more quality than the minimum level of either the ONT, the CPE, or what the ODN has been built to deliver. In other words, if the networking cards deployed only support up to 1 Gbps while your ONT and CPE support 3 Gbps or more, your customer will not be able to get a service beyond 1 Gbps; the same applies the other way around, of course. I cannot stress enough the importance of longer-term planning in this respect. Your network should be as flexible as possible in providing customer services. It may seem that Capex savings can be made by only deploying the capacity sold today or required by the business over the next 12 months, while taking a 3 to 5-year view on the deployed network capacity and on the ONT/CPEs provided to customers avoids having to rip out relatively new equipment or finance the significant replacement of obsolete customer premise equipment that can no longer support the services required.

When we look at the economic drivers for fixed access, we can look at the capital cost of deploying a kilometer of fiber. This is particularly interesting if we are only interested in the fiber deployment itself and nothing else. Depending on the type of clutter, different deployment and labor costs apply. It may be more interesting to bundle your investment into what is required to pass a household and what is required to connect a household (after it has been passed). Thus, we look at the Capex per home (or dwelling) passed and, separately, the Capex to connect an individual customer’s premise. It is important to realize that these Capex drivers are not just a single value but will depend on the household density, which in turn depends on the type of area where the deployment happens. We generally expect dense urban clutters to have a high dwelling density; thus, more households are covered (or passed) per km of fiber deployed. Dense urban areas, however, may not necessarily hold the highest density of potential residential customers and may hold less interest for the retail business. Generally, urban areas have higher household densities (including residential households) than sub-urban clutter. Rural areas are expected to have the lowest density and thus be the most costly (on a household basis) to deploy.

Figure 19, just below, illustrates the basic economics of buried (as opposed to aerial) fiber for FTTH homes passed and FTTH homes connected. Apart from showing the intuitive economic logic, the cost per home passed or connected is driven by the household density (note: it’s one driver and fairly important but does not capture all the factors). This may serve as a base for rough assessments of the cost of fiber deployment in homes passed and homes connected as a function of household density. I have used data in the Fiber-to-the-Home Council Europe report of July 2012 (10 years old), “The Cost of Meeting Europe’s Network Needs”, and have corrected for the European inflationary price increase since 2012 of ca. 14% and raised that to 20% to account for increased demand for FTTH related work by third parties. Then I checked this against some data points known to me (which do not coincide with the cities quoted in the chart). These data points relate to buried fiber, including the homes connected cost chart. Aerial fiber deployment (including home connected) would cost less than depicted here. Of course, some care should be taken in generalizing this to actual projects where proper knowledge of the local circumstances is preferred to the above.

Figure 19 The “chicken and egg” of connecting customers’ premises with fiber and providing them with 100s of Mbps up to Gbps broadband quality is that the fibers need to pass the home first before the home can be connected. The cost of passing a premise (i.e., the home passed) and connecting a premise (home connected) should, for planning purposes, be split up. The cost of rolling out fiber to get homes-passed coverage is not surprisingly particularly sensitive to household density. We will have more households per unit area in urban areas compared to rural areas. Connecting a home is more sensitive to household density in deep rural areas where the distance from the main fiber line connection point to the household may be longer. The above cost curves are for buried fiber lines and are in 2021 prices.
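For rough modeling, the cost curves can be approximated with a simple density-dependent function. The sketch below is my own illustrative interpolation, loosely anchored to the buried-fiber figures quoted in the priority list further below (roughly 600 to 900 Euro per home passed above 500 households per km², rising toward 3,000 Euro in sparse rural areas); it is not the underlying FTTH Council data.

```python
# Illustrative cost-per-home-passed curve for buried fiber vs. household density.
# Purely a sketch for rough modeling; local circumstances always take precedence.

def capex_per_home_passed(hh_per_km2: float) -> float:
    if hh_per_km2 >= 2_000:       # dense urban
        return 600.0
    if hh_per_km2 >= 500:         # urban / suburban: 600 -> 900 EUR as density falls
        return 600 + (2_000 - hh_per_km2) / 1_500 * 300
    # rural: rises toward ~3,000 EUR as density approaches ~50 HH/km²
    return min(3_000.0, 900 + (500 - hh_per_km2) / 450 * 2_100)

for d in (3_000, 1_000, 500, 200, 100, 50):
    print(f"{d:>5} HH/km² -> ≈{capex_per_home_passed(d):,.0f} EUR per home passed")
```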

Aerial fiber deployment would generally be less capital-intensive due to faster and easier deployment (less civil work, including permitting) using pre-existing (or newly built) poles. Not every country allows aerial deployment or even has the infrastructure (i.e., poles) available, which may be medium and low-voltage poles (e.g., for last-mile access). Some countries will have a policy allowing only buried fibers in city or metropolitan areas and supporting pole infrastructure for aerial deployment in sub-urban and rural clutters. I have tried to illustrate this in the figure below, where the pie charts show the aerial potential and the share that may have to be assigned to buried fiber deployment.

Figure 20 above illustrates the amount of fiber coverage (i.e., in terms of homes passed) in Western European markets. The numbers for 2015 and 2021 are based on the European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.). The 2025 & 2031 coverage numbers are my extrapolation of the 5-year trend leading up to 2021, considering the potential for aerial versus buried deployment. Markets where aerial deployment is possible are more likely to make accelerated deployment gains than markets where buried fiber is the only possibility, either because of regulation or a lack of appropriate infrastructure for aerials. The only country that may be below 50% FTTH coverage in 2025 is Germany (i.e., DE), with a projected 39% of homes passed by 2025. Should Germany aim for 50% instead, it would have to pass ca. 15 million households, or on average 3 million a year, from 2021 to 2025. The maximum Germany achieved in one year was in 2020, with ca. 1.4 million homes passed (i.e., Covid was good for getting “things done”). In 2021 this number dropped to ca. 700 thousand, or half of the 2020 number. The maximum any country in Europe has done in one year was France, with 2.9 million homes passed in 2018. However, France does allow for aerial fiber deployment outside major metropolitan areas.
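The deployment-pace arithmetic above is straightforward to check:

```python
# Quick check of the German homes-passed arithmetic, using the figures quoted above.
remaining_homes_m = 15.0   # homes Germany would need to pass to reach ~50% (from the text)
period_years = 5           # 2021–2025, as used in the text
best_year_m = 1.4          # Germany's best year so far (2020), from the text

required_per_year = remaining_homes_m / period_years
print(f"required ≈{required_per_year:.1f} M homes passed per year, "
      f"≈{required_per_year / best_year_m:.1f}× Germany's best year to date")
```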

Figure 21 above provides an overview across Western Europe for the last 5 years (2016 – 2021) of average annual household fiber deployment, the maximum done in one year in the previous 5 years, and the average required to achieve household coverage in 2026 shown above in Figure 20. For Germany (DE), the average deployment pace of 3.23 homes passed per year (orange bar) would then result in a coverage estimate of 25%. I don’t see any practical reasons for the UK, France, and Italy not to make the estimated household coverage by 2026, which may exceed my estimates.

From a deployment pace and Capex perspective, it is good to keep in mind that as time goes by, the deployment cost per household is likely to increase as household density reduces when the deployment moves from metropolitan areas toward suburban and rural. Thus, even if the deployment pace may reduce naturally for many countries in Figure 20 towards 2025, absolute Capex may not necessarily reduce accordingly.

In summary, the following topics would likely be on the Capex priority list:

  1. Continued fiber deployment to achieve household coverage. Based on Figure 19, at household (HH) densities above 500 per km2, the unit Capex for buried fiber should be below 900 Euro per HH passed with an average of 600 Euro per HH passed. Below 500 HH per km2, the cost increases rapidly towards 3,000 Euro per HH passed. Aerial deployment will result in substantially lower Capex, maybe with as much as 50% lower unit Capex.
  2. As customers subscribe, the fiber access cost associated with connecting homes (last-mile connectivity) will need to be considered. Figure 19 provides some guidance regarding the quantum-Euro range expected for buried fiber. Aerial-based connections may be somewhat cheaper.
  3. Life-cycle management (business-as-usual) investments, modernization investments, and accommodating growth, including new service and quality requirements. Typically, it would be upgrading OLTs, ONTs, routers, and switches to support higher bandwidth requirements, upgrading line cards (or interface cards), and moving from ≤100 Mbps to 1 Gbps and 10 Gbps. Many telcos will be considering upgrading their GPON (Gigabit Passive Optical Networks, 2.5 Gbps↓ / 1.2 Gbps↑) to provide XGPON (10 Gbps↓ / 2.5 Gbps↑) or even XGSPON services (10 Gbps↓ / 10 Gbps↑).
  4. Chinese supplier exposure and risks (i.e., political and regulatory enforcement) may be an issue in some Western European markets and require accelerated phase-out capital needs. In general, I don’t see fixed access infrastructure being a priority in this respect, given the strong focus on increasing household fiber coverage, which already takes up a lot of human and financial resources. However, this topic needs to be considered in case of obsolescence and thus would be a business case and performance-driven with a risk adjustment in dealing with Chinese suppliers at that point in time.

Fixed Access Capex KPIs: Capex share of Total, Capex per km, Number of HH passed and connected, Capex per HH passed, Capex per HH connected, Capex to Incremental Traffic, GPON, XGPON and XGSPON share of Capex and Households connected.

Whether actual and planned Capex is available or an analyst is modeling it, the above KPIs should be followed over an extended period. A single year does not tell much of a story.

Capex modeling comment: In a modeling exercise, I would use estimates for the telco’s household coverage plans as well as the expected household-connected sales projections. Hopefully, historical numbers would be available to the analyst that can be used to estimate the unit-Capex for a household passed and a household connected. You need to have an idea of where the telco is in terms of household density; thus, as time goes by, you may assume that the cost of deployment per household increases somewhat. For example, use Figure 19 to guide the scaling curve you need. The fixed access Capex KPIs above should allow checking for inconsistencies in your model or, if you are reviewing a Capex plan, whether that Capex plan is self-consistent with the data provided.
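A minimal sketch of that modeling logic: the homes-passed plan is priced with a density-dependent unit cost (see the cost-curve sketch earlier), and homes connected come from the sales forecast. The volumes and unit costs below are illustrative placeholders only.

```python
# Illustrative fixed-access Capex model: homes passed plus homes connected.
# Volumes in thousands, unit costs in EUR, result in EUR million (MEUR).

def fixed_access_capex_meur(homes_passed_plan_k: float, avg_cost_passed_eur: float,
                            homes_connected_k: float, avg_cost_connected_eur: float) -> float:
    return (homes_passed_plan_k * avg_cost_passed_eur
            + homes_connected_k * avg_cost_connected_eur) / 1_000

# Example year: 400k homes passed at ~750 EUR (suburban mix), 150k connected at ~550 EUR.
capex = fixed_access_capex_meur(400, 750, 150, 550)
print(f"Fixed access Capex ≈ {capex:.0f} MEUR")
```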

If anyone had doubted it, there is still much to do in fiber optical deployment in Western Europe. We still have around 100+ million homes to pass and a likely capital investment need of 100+ billion euros. Fiber deployment will remain a tremendously important investment area for the foreseeable future.

Figure 22 shows the remaining fiber coverage in homes passed based on 2021 actuals for urban and rural areas. In general, it is expected that once urban areas’ coverage has reached 80% to 90%, the further coverage-based rollout will slow down. Though, for attractive urban areas, overbuild, that is, deploying fiber where fiber is already deployed, is likely to continue.

Figure 23 The top chart illustrates the weekly rollout over the next 5 years required to reach an 80% to 90% household coverage range by 2025. The bottom chart shows an estimate of the remaining capital investment required to reach that 80% to 90% coverage range. This assessment is based on 2021 actuals from the European Commission’s “Broadband Coverage in Europe 2021” (authored by Omdia et al.); the weekly activity and Capex levels are thus from 2022 onwards.

In many Western European countries, the pace is expected to increase considerably compared to the previous 5 years (i.e., 2016 – 2021). Even if the above figure may be over-optimistic with respect to the 2026 goal, the European ambition of fiberizing its markets will impose a lot of pressure for speedy deployment.

IT investment levels are typically between 15% and 25% of Telecom Capex.

IT may be the most complex area to reach a consensus on concerning Capex. In my experience, it is also the area within a telco with the highest and most emotional discussion overhead within the operations and at a Board level. Just like everyone is far better at driving a car than the average driver, everyone is far better at IT than the IT experts and knows exactly what is wrong with IT and how to make IT much better and much faster, and much cheaper (if there ever was an area in telco-land where there are too many cooks).

Why is that the case? I tend to say that IT is much more “touchy-feely” than networks where most of the Capex can be estimated almost mathematically (and sufficiently complicated for non-technology folks to not bother with it too much … btw I tend to disagree with this from a system or architecture perspective). Of course, that is also not the whole truth.

IT designs, plans, develops (or builds), and operates all the business support systems that enable the business to sell to its customers, support its customers, and in general, keep the relationship with the customer throughout the customer life-cycle across all the products and services offered by the business irrespective of it being fixed or mobile or converged. IT has much more intense interactions with the business than any other technology department, whose purpose is to support the business in enabling its requirements.

Most of the IT Capex is related to people’s work, such as development, maintenance, and operations. Thus, capitalized external and internal labor is the main driver of IT Capex. The work relates to maintaining and improving existing services and products and developing new ones on the IT system landscape or IT stacks. In 2021, Western European telco Capex spending was about 20% of total revenue. Of that, the IT part amounts to 4±1% of revenue, or in the order of 10±3 billion Euro. With ca. 714 million fixed and mobile subscribers, this corresponds to an average IT spend of 14 Euros per telco customer in 2021. Best investment practices should aim at an IT Capex spend at or below 3% of revenue on average over 5 years (to avoid penalizing IT transformation programs). As a rule of thumb, if you do not have any details of the internal cost structure (I bet you usually would not have that information), assume that the IT-related Opex has a similar quantum as Capex (you may compensate for GDP differences between markets). Thus, the total IT spend (Capex and Opex) would be in the order of 2×Capex, so the IT spend to revenue ratio is roughly double the IT-related Capex to revenue ratio. While these considerations give you an idea of the IT investment level and allow you to drill down a bit further into cost-structure details, it is wise to keep in mind that it is all a macro average, and the spread can be pretty significant. For example, two telcos with roughly the same number of customers, IT landscape, and complexity but pretty different revenue levels (e.g., due to differences in the ARPU that can be achieved in the particular market) may have comparable absolute IT spending levels but very different relative levels compared to revenue. I also know of telcos with a very low total IT spend to revenue (ITR, shareholder imposed), which had (and have) horrid IT infrastructure performance with very extended outages (days) on billing and frequent instabilities all over their IT systems. Whatever might have been saved by imposing a dramatic reduction in the IT Capex (e.g., remember: a 10 million euro Capex reduction is equivalent to a 200 million euro value enhancement) was more than lost on inferior customer service and experience (including the inability to bill the customers).
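The quick arithmetic behind the 14 Euro per customer figure, and the rule of thumb that total IT spend is roughly twice the IT Capex, looks like this:

```python
# Back-of-the-envelope check of the Western European IT spend figures above (2021).
it_capex_beur = 10.0          # ~10 ± 3 billion EUR IT Capex (from the text)
subscribers_m = 714.0         # fixed + mobile subscriptions in millions (from the text)

it_capex_per_customer = it_capex_beur * 1e9 / (subscribers_m * 1e6)
total_it_spend_beur = 2 * it_capex_beur   # rule of thumb above: IT Opex ≈ IT Capex

print(f"IT Capex per customer ≈ {it_capex_per_customer:.0f} EUR")
print(f"Total IT spend (Capex + Opex) ≈ {total_it_spend_beur:.0f} BEUR, "
      f"i.e., roughly double the IT-Capex-to-revenue ratio")
```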

You will find industry experts and pundits who expertly insist that your IT development spend is way too high or too low (although the latter is rare!). I recommend respectfully taking such banter seriously. However, try to understand what they are comparing with, what KPIs they are using, and whether it’s apples to apples and not apples to pineapples. In my experience, I would expect a mobile-only business to have a better IT spend level than a fixed-mobile telco, as a mobile IT landscape tends to be more modern and relatively simple compared to a fixed IT landscape. First, we often find more legacy (and I mean with a capital L) in the fixed IT landscape, with much older services and products still being kept operational. The fixed IT landscape is highly customized, making transformation and modernization complex and costly. At least if old and older legacy products must remain operational. Another false friend in comparing one company’s IT spending with another’s is that the cost structure may be different. For example, it is worth understanding where OSS (Operational Support System) development is accounted for. Is it in the IT spend, or is it on the Network side of things? Service platforms and data centers may be another difference, where such spending may be with IT or Networks.

Figure 24 shows the helicopter view of a traditional telco IT architectural stack. Unless the telco is a true greenfield, it is a very normal state of affairs to have multiple co-existing stacks, which may have some degree of integration at various levels (sub-layers). Most fixed-mobile telcos remain with a high degree of IT architecture separation between their mobile and fixed business on a retail and B2B level. When approaching IT investments, never consider just one year. Understand the IT investment strategy of the immediate past (2 – 3 years prior) as well as how that fits with known and immediate future investments (2 – 3 years out).

Above, Figure 24 illustrates the typical layers and sub-layers in an IT stack. Every sub-layer may contain different applications, functionalities, and systems, all with an over-arching property of the sub-layer description. It is not uncommon for a telco to have multiple IT stacks serving different brands (e.g., value, premium, …) and products (e.g., mobile, fixed, converged) and business lines (e.g., consumer/retail, business-to-business, wholesale, …). Some layers may be consolidated across stacks, and others may be more fragmented. The most common division is between fixed and mobile product categories, as historically, the IT business support systems (BSS) as well as the operational support systems (OSS) were segregated and might even have been managed by two different IT departments (that kind of silliness is more historical albeit recent).

Figure 25 shows a typical fixed-mobile incumbent (i.e., anything not greenfield) multi-stack IT architecture and the most likely aspiration of an aggressively integrated stack supporting a fixed-mobile convergence business. From experience, I am not a big fan of retail & B2B IT stack integration. It creates a lot of operational complexity and muddies the investment transparency and economics, in particular for B2B at the expense of the retail business.

A typical IT landscape supporting fixed and mobile services may have quite a few IT stacks and a wide range of solutions for various products and services. It is not uncommon that a fixed-mobile telco would have several mobile brands (e.g., premium, value, …) and a separate (from an IT architecture perspective, at least) fixed brand. In addition, there may be differences between the retail (business-to-consumer, B2C) and the business-to-business (B2B) side of the telco, also being supported by separate stacks or different partitions of a stack. This is illustrated in Figure 24 above. In order for the telco business to become more efficient with respect to its IT landscape, including the development, maintenance, and operational aspects of managing a complex IT infrastructure landscape, it should strive to consolidate stacks where it makes sense and, not unimportantly, along the business wish for convergence, at least between fixed and mobile.

Figure 25 above illustrates an example of an IT stack harmonization activity along retail brands as well as Fixed and Mobile products, as well as a separation of stacks into a retail and a business-to-business stack. It is, of course, possible to leverage some of the business logic and product synergies between B2C and B2B by harmonizing IT stacks across both business domains. However, in my experience, nothing great comes out of that, and more likely than not, you will penalize B2C by spending above and beyond value & investment attention on B2B. The B2B requirements tend to be significantly more complex to implement, their specifications change frequently (in line with their business customers’ demand), and the unit cost of development returns less unit revenue than the consumer part. Economically and from a value-consideration perspective, the telco needs to find an IT stack solution that is more in line with what B2B contributes to the valuation and fits its requirements. That may be a big challenge, particularly for minor players, as the B2B business rarely justifies a standalone IT stack or developments. At least not a stack that is developed and maintained at the same high-quality level as a consumer stack. There is simply a mismatch between the B2B requirements, which often demand much higher quality and functionality than the consumer part, and what B2B contributes to the business compared to, for example, B2C.

When I judge IT Capex, I care less about the absolute level of spend (within reason, of course) than about what is practical to support within the given IT landscape the organization has been dealt and, of course, the organization itself, including 3rd-party support. Most systems will have development constraints and a natural order in which development can be executed. It will not matter how much money you are given or how many resources you throw at some problems; there will be an optimum amount of resources and time required to complete a task. This naturally leads to prioritization, which may disappoint stakeholders whose projects are not prioritized to the degree they might feel entitled to.

When looking at IT capital spending and comparing one telco with another, it is worthwhile to take a 3- to 5-year time horizon, as telcos may be in different business and transformation cycles. A one-year comparison or benchmark may not be appropriate for understanding a given IT-spend journey and its operational and strategic rationale. Search for incidents (frequency and severity) that may indicate inappropriate spend prioritization or overall too little available IT budget.

The IT Capex budget would typically be split into (a) Consumer or retail part (i.e., B2C), (b) Business to Business and wholesale part, (c) IT technical part (optimization, modernization, cloudification, and transformations in general), and a (d) General and Administrative (G&A) part (e.g., Finance, HR, ..). Many IT-related projects, particularly of transformative nature, will run over multiple years (although if much more than 24 months, the risk of failure and monetary waste increases rapidly) and should be planned accordingly. For the business-driven demand (from the consumer, business, and wholesale), it makes sense to assign Capex proportional to the segment’s revenue and the customers those segments support and leverage any synergies in the development work required by the business units. For IT, capital spending should be assigned, ensuring that technical debt is manageable across the IT infrastructure and landscape and that efficiency gains arising from transformative projects (including landscape modernization) are delivered timely. In general, such IT projects promise efficiency in terms of more agile development possibilities (faster time to market), lower development and operational costs, and, last but not least, improved quality in terms of stability and reduced incidents. The G&A prioritizes finance projects and then HR and other corporate projects.

In summary, the following topics would likely be on the Capex priority list:

  1. Provide IT development support for business demand in the next business plan cycle (3 – 5 years with a strong emphasis on the year ahead). The allocation key should be close to the Revenue (or Ebitda) and customer contribution expected within the budget planning period. The development focus is on maintenance, (incremental) improvements to existing products/services, and new products/services required to make the business plans. In my experience, the initial demand tends to be 2 to 3 times higher than what a reasonable financial envelope would dictate (i.e., even considering what is possible to do within the natural limitations of the given IT landscape and organization) and what is ultimately agreed upon.
  2. Cloudification transformation journey, moving away from the traditional monolithic IT platform and into a public, hybrid, or private cloud environment. In my opinion, the safest approach is “lift-and-shift”, where existing functionality is re-established in the cloud environment largely as-is. After a successful migration from the traditional monolithic platform into the cloud environment, the next phase of the cloudification journey, moving to a cloud-native framework, should be embarked upon. This provides a very solid automation framework, delivering additional efficiencies and improved stability and quality (e.g., a reduction in incidents). Analysts should be aware that migrating to a (public) cloud environment may reduce the capitalization possibilities, with the consequence that Capex may reduce in the forward budget planning, but at the expense of increased Opex for the IT organization.
  3. Stack consolidation. Reducing the number of IT stacks generally lowers the IT Capex demand and improves development efficiency, stability, and quality. The trend is to focus harmonization efforts on the frontend (the Portals and Outlets layer in Figure 14) and the CRM layer (retiring legacy or older CRM solutions), and then to move down the layers of the IT stack (see Figure 14), often touching the complex backend systems as they become obsolete, which provides an opportunity to migrate to modern cloud-based solutions (e.g., cloud billing).
  4. Modernization activities not covered by cloudification investments or business requirements.
  5. Development support for Finance (e.g., ERP/SAP requirements), HR requirements, and other miscellaneous activities not captured above.
  6. Chinese suppliers are rarely an issue in Western European telecoms’ IT landscapes. However, if such a supplier is present in a telco’s IT environment, I would expect Capex to have been allocated for phasing it out urgently over the next 24 months (depending on the complexity of such a transformation/migration program) due to strong political and regulatory pressures. Such an initiative may have a value-destroying impact, as business-driven IT development (related to the specific system) is unlikely to be prioritized highly during such a program, reducing the telco’s ability to compete while the phase-out is underway.

IT Capex KPIs: IT share of Total Capex (if available, broken down into a Fixed and Mobile part), IT Capex to Revenue, ITR (IT total spend to Revenue), IT Capex per Customer, IT Capex per Employee, IT FTEs to Total FTEs.

Moreover, if available or being modeled, I would like to have an idea about how much of the IT Capex goes to investment categories such as (i) Maintain, (ii) Growth, and (iii) Transform. I will get worried if the majority of IT Capex over an extended period goes to the Growth category and little to Maintain and Transform. This indicates a telco that has deprioritized quality and ignores efficiency, resulting in the risk of value destruction over time (if such a trend were sustained). A telco with little Transform spend (again over an extended period) is a business that does not modernize (another word for sweating assets).

Capex modeling comment: when I am modeling IT and have little information available, I would first assume an IT Capex to Revenue ratio of around 4% (mobile-only) to 6% (fixed-mobile operation) and check, as I develop the other telco Capex components, whether the IT Capex stays within 15% to 25% of the total Capex. Of course, keep an eye out for all the above IT Capex KPIs, as they provide a more holistic picture of how much confidence you can have in the Capex model.
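As a minimal illustration of this sanity check, the sketch below codes up the heuristics just described. The thresholds are the rules of thumb from this section; the example figures are purely illustrative placeholders, not data from any specific telco.

```python
# Minimal sketch of the IT Capex sanity check described above.
# Heuristics: IT Capex ~4% of revenue (mobile-only) to ~6% (fixed-mobile),
# and IT Capex should land within ~15%-25% of total Capex.
def it_capex_check(revenue_meur: float, total_capex_meur: float,
                   fixed_mobile: bool = True) -> dict:
    it_capex = revenue_meur * (0.06 if fixed_mobile else 0.04)
    it_share = it_capex / total_capex_meur
    return {
        "it_capex_meur": round(it_capex, 1),
        "it_capex_to_revenue": round(it_capex / revenue_meur, 3),
        "it_share_of_total_capex": round(it_share, 3),
        "within_15_25_pct_heuristic": 0.15 <= it_share <= 0.25,
    }

# Illustrative example: a fixed-mobile telco with 4,000 MEUR revenue
# and 1,000 MEUR total Capex (i.e., a 25% Capex-to-revenue intensity).
print(it_capex_check(revenue_meur=4000, total_capex_meur=1000))
```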

Figure 26 illustrates the anticipated IT Capex to Revenue ranges for 2024: using New Street Research (total) Capex data for Western Europe, the author’s own Capex projection modeling, and the heuristic that IT spend typically would be 15% to 25% of the total Capex, we can estimate the most likely ranges of IT Capex to Revenue for the telecommunications businesses covered by NSR for 2024. For individual operations, we may also want to look at the time series of IT spending to revenue and compare that to any available intelligence (e.g., transformation-intensive, M&A integration, business-as-usual, etc.).

Using the heuristic of the IT Capex being between 15% (lower bound) and 25% (upper bound) of the total Capex, we can get an impression of how much individual telcos invest in IT annually. The above chart shows such an estimate for 2024. I have the historical IT spending levels for several Western European telcos, which agree well with the above and would typically be a bit below the median unless a telco is in the process of a major IT transformation (e.g., after a merger, structural separation, a forced Huawei replacement, etc.). One would also expect, and should check, that the total IT spend, Capex and Opex, decreases over time once transformational IT spend is stripped out. If this is observed, it indicates that the telco is becoming increasingly efficient in its IT operation. Usually, the biggest effect should be seen in IT Opex reduction over time.

Figure 27 illustrates the anticipated IT Capex per customer ranges for 2024: having estimated the likely IT spend ranges (in Figure 26) for various Western European telcos, we can estimate the expected 2024 IT spend per customer (using New Street Research data, the author’s own Capex projection model, and the IT heuristics described in this section). In general, and in the absence of structural IT transformation programs, I would expect the IT spend per customer to be below the median. Some notes on the above results: TDC (Nuuday & TDC Net) has major IT transformation programs ongoing after the structural separation; KPN is in the process of replacing its Huawei BSS, and I would expect it to be in the upper part of the IT spending range; Telenor Norway seems higher than I would expect, but it is an incumbent that traditionally spends substantially more than its competitors, so this may be okay, although caution should be taken here; Switzerland in general, and Swisscom in particular, is higher than I would have expected. This said, it is a sophisticated telco services market that would be likely to spend above the European average; irrespective, I would take some caution with the above representation for Switzerland & Swisscom.

Similar to the IT Capex to Revenue, we can get an impression of what telcos spend on IT Capex relative to their total mobile and fixed customer base. Again, for telcos in Western Europe (as well as outside), the ranges shown above do seem reasonable as an estimate of where one would expect the IT spend to be. The analyst is always encouraged to look at this over a 3- to 5-year period to better appreciate the trend and should keep in mind that not all telcos are in sync with their IT investments (as hopefully is obvious, since transformation strategies and business cycles may be very different even within the same market).

Other, or miscellaneous, investments tend to be between 3% and 8% of the Telecom Capex.

When modeling a telco’s Capex, I find it very helpful to keep an “Other” or “Miscellaneous” Capex category for anything non-technology related. Modeling-wise, having a placeholder for items you don’t know about or may have forgotten is convenient. I typically start my models with 15% of all Capex. As my model matures, I should be able to reduce this to below 10% and preferably down to 5% (but I will accept 8% as a kind of good-enough limit). I have had Capex review assignments where the Capex for future years had close to 20% in the “Miscellaneous” category. If this “unspecified” Capex were not included, the Capex to Revenue ratio in the later years would drop substantially, to a level that might not be deemed credible. In my experience, every planned Capex category will have a bit of “Other”-ness included, as many smaller things require Capex but are difficult to derive a measure for mathematically. I tend to leave it if it is below 5% of a given Capex category. However, if it is substantial (>5%), it may reveal “sandbagging” or simply less maturity in the Capex planning and budget process.

Apart from a placeholder for stuff we don’t know, you will typically find Capex for shop refurbishment or modernization here, including office improvements and IT investments.

DE-AVERAGING THE TELECOM CAPEX TO FIXED AND MOBILE CONTRIBUTIONS.

There are similar heuristics to go deeper down into where the Capex should be spent, but that is a detail for another time.

Our first step is decomposing the total Capex into a fixed and a mobile component. We find that a multi-linear model including Total Capex, Mobile Customers, Mobile Service Revenue, Fixed Customers, and Fixed Service Revenues can account for 93% of the Capex trend. The multi-linear regression formula looks like the following;

C_{total} \; = \; C_{mobile} \; + \; C_{fixed} \; = \; \alpha_{customers}^{mobile} \; N_{customers}^{mobile} \; + \; \alpha_{revenue}^{mobile} \; R_{revenue}^{mobile} \; + \; \beta_{customers}^{fixed} \; N_{customers}^{fixed} \; + \; \beta_{revenue}^{fixed} \; R_{revenue}^{fixed}

with C = Capex, N = total customer count, R = service revenue, and α and β being the regression coefficient estimates from the multi-linear regression. The Capex model has been trained on 80% of the data (1,008 data points) chosen randomly and validated on the remainder (252 data points). All four regression coefficients are statistically significant at the 95% confidence level (p-values well below 0.05).

Figure 28 above shows the Predicted Capex versus the Actual Capex. It illustrates that the model’s predictions agree reasonably well with the actual Capex, which would also be expected based on the statistical KPIs resulting from the fit.

The total Capex is (obviously) available to us and therefore allows us to estimate both the fixed and mobile Capex levels by

C_{fixed} \; = \;  \beta_{customers}^{fixed} \; N_{customers}^{fixed} \; + \; \beta_{revenue}^{fixed} \; R_{revenue}^{fixed}

C_{mobile} \; = \; C_{total} \; - \; C_{fixed}
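To make the procedure concrete, here is a minimal sketch of the regression-and-decomposition approach described above, run on synthetic data. The coefficients, units, and data are invented for illustration and are not the author’s estimates; only the structure (an 80/20 train/validation split, four regressors, and the fixed/mobile split via the fixed-side coefficients) follows the text.

```python
# Sketch of the fixed/mobile Capex decomposition by multi-linear regression.
# Synthetic data only; the coefficients below are NOT the author's estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 1260  # roughly the 1,008 training + 252 validation points mentioned above

# Regressors: mobile customers, mobile service revenue, fixed customers, fixed service revenue
X = rng.uniform(low=[1, 0.5, 0.5, 0.3], high=[50, 10, 20, 8], size=(n, 4))
true_coef = np.array([2.0, 0.08, 5.0, 0.12])   # assumed, only to generate a synthetic target
y = X @ true_coef + rng.normal(0, 5, n)        # synthetic "total Capex" with noise

# 80/20 split into training and validation sets
idx = rng.permutation(n)
train, test = idx[: int(0.8 * n)], idx[int(0.8 * n):]
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Hold-out fit quality (R^2)
resid = y[test] - X[test] @ coef
r2 = 1 - np.sum(resid**2) / np.sum((y[test] - y[test].mean())**2)
print("R^2 on hold-out:", round(float(r2), 3))

# Decomposition: C_fixed from the fixed regressors, C_mobile as the remainder
capex_fixed = X[test][:, 2:] @ coef[2:]
capex_mobile = y[test] - capex_fixed
print("Mean fixed share of total Capex:", round(float(np.mean(capex_fixed / y[test])), 2))
```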

The result of the fixed-mobile Capex decomposition is shown in Figure 29 below. Apart from being (reasonably) statistically sound, it is comforting that the trends in Capex for fixed and mobile seem to agree with what our intuition would suggest. The increase in mobile Capex (for Western Europe) over the last 5 years appears reasonable, given that 5G deployment commenced in early 2019. During the Covid lockdown from early 2020, fixed revenue was boosted by a massive shift in fixed broadband traffic (and voice) from the office to individuals’ homes. Meanwhile, mobile service revenues have been in slow decline for years. Thus, the Capex increase due to 5G, combined with declining mobile service revenues, ultimately leads to a relatively larger increase in the mobile Capex to Revenue ratio.

Figure 29 illustrates the statistical modeling (by multi-linear regression), or decomposition, of the Total Capex as a function of Mobile Customers, Mobile Service Revenues, Fixed Customers, and Fixed Service Revenues, allowing the total Capex to be broken up into Fixed and Mobile components. The absolute Capex level is higher for fixed than for mobile, by about a factor of 2 until 2021, when mobile Capex increased due to 5G investments in the mobile industry. Mobile Capex has increased the most over the last 5 years (e.g., 5G deployment), while mobile service revenues have declined somewhat over the same period. This increased the Mobile Capex to Service Revenue ratio (note: based on Total Revenue, the ratio would be somewhat smaller, by ca. 17%). Source: Total Capex, Fixed, and Mobile Service revenues from New Street Research data for Western Europe. Note: The decomposition of the total Capex into Fixed and Mobile Capex is based on the author’s own statistical analysis and modeling. It is not a deliverable of the New Street Research report.

CAN MOBILE-TRAFFIC GROWTH CONTINUE TO BE ACCOMMODATED CAPEX-WISE?

In my opinion, there has been much panic in our industry in the past about exhausting the cellular capacity of mobile networks and the imminent doom of our industry. A fear fueled by the exponential growth of user demand, a perceived inadequate amount of spectrum, and the low spectral efficiency of the deployed cellular technologies, e.g., 3G-HSPA with classical passive single-in single-out antennas. Going back to the “hey-days” of 3G-HSPA, there was a fear that if cellular demand kept its growth rate, supply requirements would go towards infinity, and the required Capex likewise. Clearly an unsustainable business model for the mobile industry. Today, there is (in my opinion) no basis for such fears in the short or medium term. With the increased fiberization of our society, where most homes will be connected to fiber within the next 5 – 10 years, cellular doomsday, in the sense of running out of capacity or needing infinite levels of Capex to sustain cellular demand, may be a day that never comes.

In Western Europe, the total mobile subscriber penetration was ca. 130% of the total population in 2021, with an excess of approximately 2.1+ mobile devices per subscriber. Mobile internet penetration was 76% of the total population in 2021 and is expected to reach 83% by 2025. In 2021, Europe’s average smartphone penetration rate was 77.6%, and it is projected to be around 84% by 2025. Also, by 2024±1, 50% of all connections in Western Europe are projected to be 5G connections. There are some expectations that around 2030, 6G might start being introduced in Western European markets. 2G and 3G will be increasingly phased out of the Western European mobile networks, and the spectrum will be repurposed for 4G and eventually 5G.

The above Figure 30 shows forecasted mobile users by their main mobile access technology. Source: based on the author’s forecast model relying on past technology diffusion trends for Western Europe and benchmarked against some WEU markets and other telco projections. See also 5G Standalone – European Demand & Expectations by Kim Larsen.

We may not see a complete phase-out of either of the older Gs, as observed in Figure 19. Due to a relatively large base of non-VoLTE (Voice-over-LTE) devices, mobile networks will have to support voice circuit-switched fallback to 2G or 3G. Furthermore, for the foreseeable future, it would be unlikely that all visiting roaming customers would have VoLTE-capable devices. There might also be legacy machine-to-machine businesses that would be prohibitively costly and complex to migrate from existing 2G or 3G networks to either LTE or 5G. All in all, this ensures that 2G and 3G may remain with us for quite some time.

Figure 31 above shows that mobile and fixed data traffic consumption is growing in totality and at the per-user level. On average, mobile traffic grew faster than fixed traffic from 2015 to 2021, a trend that is expected to continue with the introduction of 5G. Although the total traffic growth rate is slowing down somewhat over the period, on a per-user basis (mobile as well as fixed), the consumption growth rate has remained stable.

Since the early days of 3G-HSPA (High-Speed Packet Access) radio access, investors and telco businesses have been worried that there would be an end to how much demand could be supported in our cellular networks. The “fear” is often triggered by seeing the exponential growth trend of total traffic or of the usage per customer (to be honest, that fear has not been made smaller by technology folks “panicking” as well).

Let us look at the numbers for 2021 as they are reported in the Cisco VNI report. The total mobile data traffic was in the order of 4 Exabytes (4 billion gigabytes, GB), more than 5.5× the level of 2016. That is more than 600 million times the average mobile data consumption of 6.5 GB per month per customer (in 2021). Compare this with the Western European population of ca. 200 million. While these are big numbers, 6.5 GB per month per customer is not much. Assuming that most of this volume comes from video streaming at an optimum speed of 3 – 5 Mbps (good enough for an HD video stream), the 6.5 GB translates into approx. 3 – 5 hours of video streaming over a month.

The above Figure 32 illustrates the total data demand on the mobile network infrastructure over a 24-hour workday. A weekend profile would be flatter. We spend at least 12 hours in our home, ca. 7 hours at work (including school), and a maximum of 5 hours (~20%) commuting, shopping, and otherwise being away from our home or workplace. Previous studies of mobile traffic load have shown that 80% of a consumer’s mobile demand falls on the 3 main radio sites around the home and workplace. The remaining 20% tends to be much more mobile-like in the sense of being spread out over many different radio sites.

On a daily basis, we have an average of ca. 215 Megabytes (if the monthly volume is spread equally over the month), corresponding to 6 – 10 minutes of video streaming. The average length of a YouTube video was ca. 4.4 minutes. In Western Europe, consumers spend an average of 2.4 hours per day on the internet with their smartphones (having younger children, I am surprised it is not more than that). However, these 2.4 hours are not necessarily network-active in the sense of continuously demanding network resources. In fact, most consumers will be active somewhere between 8:00 and around 22:00, after which network demand reduces sharply. Thus, we have 14 hours of user busy time, and within this time, a Western European consumer would spend 2.4 hours cumulated over the day (or ca. 17% of the active time).
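A quick sanity check of these back-of-the-envelope figures (the 6.5 GB per month, the ~215 MB per day, and the corresponding streaming hours and minutes at 3 – 5 Mbps) can be sketched as follows; the streaming rates are the HD assumptions quoted above.

```python
# Worked check of the consumption figures discussed above (illustrative).
MB_PER_GB = 1_000  # decimal convention

monthly_gb = 6.5
daily_mb = monthly_gb * MB_PER_GB / 30   # ~217 MB/day, close to the ~215 MB quoted
print(round(daily_mb), "MB per day")

def streaming_hours(volume_gb: float, mbps: float) -> float:
    """Hours of video a given volume supports at a constant streaming rate."""
    seconds = volume_gb * 8_000 / mbps   # GB -> megabits, divided by Mbps
    return seconds / 3_600

# 6.5 GB per month at 3-5 Mbps (HD video) ...
print(round(streaming_hours(6.5, 5), 1), "to", round(streaming_hours(6.5, 3), 1), "hours per month")
# ... and the daily ~215 MB at the same rates, in minutes
print(round(streaming_hours(0.215, 5) * 60), "to", round(streaming_hours(0.215, 3) * 60), "minutes per day")
```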

Figure 33 above illustrates (based on actual observed trends) how 5 million mobile users distribute across a mobile network of 5,000 sites (or radio nodes) and 15,000 sectors (typically 3 sectors = 1 site). Typically, user and traffic distributions tend to be log-normal-like with long tails. In the example above, we have in the busy hour a median value of ca. 80 users attached to a sector, with 15 of them active (i.e., loading the network), demanding a maximum of ca. 5 GB (per sector), or an average of ca. 330 MB per active user in the radio sector over that sector’s relevant busy hour.
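For intuition, the short simulation below generates such a log-normal-like busy-hour distribution across sectors. The median of ~80 attached users, ~15 active users, and ~330 MB per active user are taken from the figure description above; the spread (sigma) and the fixed active share are my assumptions, so the tail behavior is purely illustrative.

```python
# Illustrative simulation of busy-hour users per sector, assuming a log-normal
# distribution as described for Figure 33. Parameters are chosen to roughly
# reproduce the quoted medians; they are not derived from the author's data.
import numpy as np

rng = np.random.default_rng(7)
sectors = 15_000

attached = rng.lognormal(mean=np.log(80), sigma=0.8, size=sectors)  # median ~80 users
active = attached * (15 / 80)            # ~15 active users at the median sector (assumed fixed share)
mb_per_active_user = 330                 # MB per active user in the sector busy hour
sector_busy_hour_gb = active * mb_per_active_user / 1_000

print("Median attached users per sector:", round(float(np.median(attached))))
print("Median active users per sector:", round(float(np.median(active))))
print("Median busy-hour volume per sector (GB):", round(float(np.median(sector_busy_hour_gb)), 1))
print("95th percentile busy-hour volume (GB):", round(float(np.percentile(sector_busy_hour_gb, 95)), 1))
```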

Typically, two limits, with a high degree of interdependency, are alleged to eventually hit cellular businesses and render profitable growth difficult at some point in the future. The first limit is a practical technology limit on how much capacity a radio access system can supply. As we will see a bit later, this will depend on the operator’s frequency spectrum position (deployed, not what might be on the shelf), the number of sites (site density), the installed antenna technology, and its effective spectral efficiency. The second (inter-dependent) limit is an economic limit: the incremental Capex that telcos would need to commit to sustaining the demand at a given quality level would become highly unprofitable, rendering further cellular business uneconomical.

From a Capex perspective, the cellular access part drives a considerable amount of the mobile investment demand. Together with the supporting transport, such as fronthaul, backhaul, aggregation, and core transport, the capital investment share is typically 50% or higher. This is without including the spectrum frequencies required to offer the cellular service. Such are usually acquired by local frequency spectrum auctions and amount to substantial investment levels.

In the following, the focus will be on cellular access.

The Cellular Demand.

Before discussing the cellular supply side of things, let us first explore the demand side from a helicopter view. Demand is created by users (N) of the cellular services offered by telcos. Users can be human or non-human, such as things in general or, more specifically, machines. Each user has a particular demand that, in an aggregated way, can be represented by the average demand in Bytes per user (d). Thus, we can identify two growth drivers: one from adding new users (ΔN) to our cellular network and another from the incremental change in demand per user (Δd) as time goes by.

It should be noted that the incremental change in demand or users might not per se be a net increase; it could also be a net decrease, either because the cellular networks have reached the maximum possible level of capacity (or quality), leading users to reduce their demand or to “churn” from those networks, or because an alternative to today’s commercial cellular network triggers abandonment as high-demand users migrate to that alternative, reducing both the number of cellular users and the average demand per user. For example, near-100% Fiber-to-the-Home coverage with supporting WiFi could be a reason for users to abandon cellular networks, at least in an indoor environment, which could remove between 60% and 80% of present-day cellular data demand. This last (hypothetical) scenario is not an issue for today’s cellular networks and telco businesses.

N_{t+1} \; = \; N_t \; + \; \Delta N_{t+1}

d_{t+1} \; = \; d_t \; + \; \Delta d_{t+1}

D_{t+1}^{total} \; = \; N_{t+1} \times d_{t+1}
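A minimal sketch of this bookkeeping, projecting total demand from user growth and per-user demand growth, could look as follows. The starting values and growth rates are illustrative assumptions, not forecasts from this article.

```python
# Minimal sketch of the demand bookkeeping above: total demand from
# user growth and per-user demand growth. All inputs are illustrative.
def project_demand(users_m: float, gb_per_user: float,
                   user_growth: float, usage_growth: float, years: int):
    """Yield (year, users in millions, GB/user/month, total PB/month)."""
    for t in range(years + 1):
        total_pb = users_m * gb_per_user / 1_000   # million users x GB = thousand PB-units -> PB
        yield t, round(users_m, 1), round(gb_per_user, 1), round(total_pb * 1_000, 1)
        users_m *= (1 + user_growth)        # N_{t+1} = N_t + ΔN_{t+1}
        gb_per_user *= (1 + usage_growth)   # d_{t+1} = d_t + Δd_{t+1}

# e.g., 250 million users at 6.5 GB/month, +1% users and +25% usage per year (assumptions)
for row in project_demand(250, 6.5, 0.01, 0.25, 5):
    print(row)
```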

Of course, this can easily be broken down into many more drivers and details, e.g., technology diffusion or adoption, the rate of users moving from one access technology to another (e.g., 3G→4G, 4G→5G, 5G→FTTH+WiFi), improved network & user device capabilities (better coverage, higher speeds, lower latency, bigger display size, device chip generation), new cellular service adoption (e.g., TV streaming, VR, AR, …), etc.

However, what is often forgotten is that the data volume of consumptive demand (in Bytes) is not the main direct driver of network demand and, thus, not of the required investment level. A given volumetric demand can be caused by various throughput demands (bits per second). The throughput demanded in the busiest hour (T_{demand} or T_{BH}) is the direct driver of network load and thus of network investments; the volumetric demand is a manifestation of that throughput demand.

T_{demand} \; = \; T_{BH} \; = \; \max_t \sum_{cell} \; n_t^{cell} \; \times \; 8 \, \delta_t^{cell} \; = \; \max_t \sum_{cell} \; \tau_t^{cell} \;\;\; \text{(in bits/sec)}

with n_t^{cell} being the number of active users in a given radio cell at time instant t within a day, and \delta_t^{cell} the Bytes consumed in a time instant (typically a second); thus, 8 \delta_t^{cell} gives us the bits per time unit (bits/sec), which is the consumed throughput. Summing the instantaneous throughput (\tau_t^{cell} in bits/sec) over all cells at the same instant and taking the maximum across, for example, a day provides the busy-hour throughput for the whole network. Each radio cell drives its own capacity provisioning and supply (in bits/sec) and the investments required to provide that demanded capacity on the air interface and in the front- and backhaul.

For example, if n = 6 active (concurrent) users, each consuming on average \delta = 0.625 Megabytes per second (5 Megabits per second, Mbps), the typical requirement for a YouTube stream with an HD 1080p resolution, our radio access network in that cell would experience a demanded load of 30 Mbps (i.e., 6 × 5 Mbps), provided, of course, that the given cell has sufficient capacity to deliver what is demanded. A 4G cellular system without any special antenna technology, e.g., a classical Single-in-Single-out (SiSo) antenna rather than the more modern Multiple-in-Multiple-out (MiMo) antenna, can be expected to deliver ca. 1.5 Mbps/MHz per cell. Thus, we would need at least 20 MHz of spectrum to provide for 6 concurrent users, each demanding 5 Mbps. With a simple 2T2R MiMo antenna system, we could support about 8 simultaneous users under the same conditions, a 33% increase over what our system can handle without such an antenna. As mobile operators implement increasingly sophisticated antenna systems (i.e., higher-order MiMo systems) and move to 5G, a leapfrog in handling capacity and quality will occur.
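The worked example above can be sketched in a few lines. The 5 Mbps per stream and the ~1.5 Mbps/MHz SiSo efficiency are the figures quoted above; the ~2.0 Mbps/MHz used for the 2T2R MiMo case is my back-calculated assumption (8 users × 5 Mbps over the same 20 MHz), not a number given in the text.

```python
# Sketch of the per-cell busy-hour load and required spectrum, following the
# worked example above (6 concurrent users streaming HD video at 5 Mbps each).
def cell_load_mbps(active_users: int, mbps_per_user: float) -> float:
    """Demanded throughput of a cell: concurrent users x per-user rate."""
    return active_users * mbps_per_user

def required_spectrum_mhz(load_mbps: float, eff_mbps_per_mhz: float) -> float:
    """Spectrum needed to carry the demanded load at a given spectral efficiency."""
    return load_mbps / eff_mbps_per_mhz

load = cell_load_mbps(active_users=6, mbps_per_user=5)          # 30 Mbps demanded
print("Demanded cell load:", load, "Mbps")
print("4G SiSo (~1.5 Mbps/MHz):", required_spectrum_mhz(load, 1.5), "MHz needed")
# With a 2T2R MiMo antenna (~2.0 Mbps/MHz assumed), the same 20 MHz carries ~8 users:
print("Users supported on 20 MHz at ~2.0 Mbps/MHz:", int(20 * 2.0 / 5))
```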

Figure 34 Is the sky the limit to demand? Ultimately, the limit will come from the practical and economic limits to how much can be supplied at the cellular level (e.g., spectral bandwidth, antenna technology, and software features …). Quality will reduce as the supply limit is reached, resulting in demand adaptation, hopefully settling at a demand-supply (metastable) equilibrium.

Cellular planners have many heuristics to work with that together trigger when a given radio cell needs to be expanded to provide more capacity, which can be provided by software (licenses), hardware (expansion/replacement), civil works (sectorization), and geographical (cell split) means. Going northbound, up from the edge of the radio network through the transmission chain, such as fronthaul, backhaul, aggregation, and core transport networks, additional investments may be required to expand the supplied capacity at a given load level.

As discussed, mobile access and transport together can easily make up more than half of a mobile capital budget’s planned and budgeted Capex.

So, to know whether demand triggers new expansions, and thus capital demand as well as the resulting operational expenses (Opex), we really need to look at the supply side: what our current mobile network can offer and, when it cannot provide a targeted level of quality, how much capacity we have to add to the network to get back to the given level of service quality.

The Cellular Supply.

Cellular capacity in units of throughput (T_{supply}), given in bits per second, the basic building block of quality, is relatively easy to estimate. The cellular throughput (per cell) is given by the amount of frequency spectrum committed to the air interface, as supported by the radio access network and antennas, multiplied by the so-called spectral efficiency in bits per second per Hz per cell. The spectral efficiency depends on the antenna technology and the underlying software implementation of signal processing schemes enabling the details of receiving and sending signals over the air interface.

T_{supply} can be written as follows;

T_{supply} \; = \; B \; \times \; \eta_{eff}

with B being the deployed spectral bandwidth in MHz (Mega Hertz) and \eta_{eff} the effective spectral efficiency in Mbps (megabits, i.e., a million bits, per second) per MHz per cell, giving T_{supply} in Mbps per cell.

For example, if we have a site that covers 3 cells (or sectors) with 100 MHz deployed @ 3.6 GHz (B) on a 32T32R advanced antenna system (AAS) with an effective downlink (i.e., from the antenna to the user) spectral efficiency \eta_{eff} of ca. 20 Mbps/MHz/cell (i.e., \eta_{eff} = n_{eff} \times \eta_{SISO}), we should expect to have a cell throughput on average of 1,000 Mbps (1 Gbps).
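A minimal sketch of the supply formula follows. Note that the downlink duty-cycle factor (dl_share, relevant for TDD bands such as 3.6 GHz) is my assumption, included as one way to arrive at the ~1 Gbps per cell quoted above; the author’s exact assumptions behind that figure are not spelled out.

```python
# Sketch of the cellular supply formula: per-cell throughput from deployed
# bandwidth and effective spectral efficiency. The dl_share factor and the
# example numbers are illustrative assumptions.
def cell_supply_mbps(bandwidth_mhz: float, eff_mbps_per_mhz: float,
                     dl_share: float = 1.0) -> float:
    """T_supply per cell = B x eta_eff, optionally scaled by the DL share of a TDD frame."""
    return bandwidth_mhz * eff_mbps_per_mhz * dl_share

def site_supply_mbps(cells: int, **kwargs) -> float:
    """Aggregate supply of a site with several cells (sectors)."""
    return cells * cell_supply_mbps(**kwargs)

# 100 MHz at 3.6 GHz on an advanced antenna system, assuming ~20 Mbps/MHz/cell
# effective DL efficiency and a 50% DL share of the TDD frame (assumption):
print(cell_supply_mbps(100, 20, dl_share=0.5), "Mbps per cell")
print(site_supply_mbps(3, bandwidth_mhz=100, eff_mbps_per_mhz=20, dl_share=0.5), "Mbps per 3-sector site")
```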

The capacity supply formula can be applied at the cell level, providing sizing and thus investment guidance as we move northbound up the mobile network, where traffic aggregates and concentrates towards the core and the connection points to the external internet.

From the demand planning (e.g., number of customers, types of services sold, etc.), which would typically come from the Marketing and Sales departments within the telco, the technical team can translate those plans into network demand and then calculate what they would need to do to cope with the customer demand within an agreed level of quality.

In Figure 35 above, operators provide cellular capacity by deploying their spectral assets on an appropriate antenna type and system-level radio access network hardware and software. Competition can arise from a superior spectrum position (balanced across low, medium, and high-frequency bands), better or more aggressive antenna technology, and utilizing their radio access supplier(s)’ features (e.g., signal processing schemes). Usually, the least economical option will be densifying the operator’s site grid where needed (on a macro or micro level).

Figure 36 above shows the various options available to the operator to create more capacity and quality. In terms of competitive edge, having more spectrum than competitors, provided it is actually used and is balanced across low, medium, and high bands, provides the surest path to becoming the best network in a given market and is difficult to copy economically by operators with substantially less spectrum. Their options would be to compensate for the spectrum deficit by building more sites and deploying more aggressive antenna technologies. The latter is relatively easy for anyone to follow and may only provide temporary respite.

An average mobile network in Western Europe has ca. 270 MHz of spectrum (60 MHz low-band below 1800 MHz and 210 MHz medium-band below 5 GHz) distributed over an average of 7 cellular frequency bands. It is rare to see all bands deployed in actual deployments, and rarely uniformly across a complete network. The amount of spectrum deployed should match demand density; thus, more spectrum is typically deployed in urban areas than in rural ones. In demand-first-driven strategies, the frequency bands will be deployed based on actual demand, which would typically not require all bands to be deployed. This is opposed to MNOs that focus on high quality, where demand is less important and where, typically, most bands would be deployed extensively across their networks. The demand-first-driven strategy tends to be the most economically efficient strategy as long as the resulting cellular quality is market-competitive and customers are sufficiently satisfied.

In terms of downlink spectral capacity, we have an average of 155 MHz, or 63 MHz excluding the C-band contribution. Overall, this allows for a downlink supply of a minimum of 40 GB per hour (assuming low effective spectral efficiency, little advanced antenna technology deployed, and not all medium-band being utilized, e.g., C-band and 2.5 GHz). Out of the 210 MHz mid-band spectrum, 92 MHz falls in the 3.X GHz (C-band) range and is thus still very much in the process of being deployed for 5G (as of June 2022). The C-band has, on average, increased the spectral capacity of Western European telcos by 50+% and, with its very high suitability for deployment together with massive MiMo and advanced antenna systems, has effectively more than doubled the total cellular capacity and quality compared to pre-C-band deployment (using a 64T64R massive MiMo as a reference with today’s effective spectral efficiency … it will be even better as time goes by).

Figure 37 (above) shows the latest Ookla and OpenSignal DL speed benchmarks for Western European MNOs (light blue circles); comparing this with their spectrum holdings below 3.x GHz indicates that there may be a lot of unexploited cellular capacity and quality to be unleashed in the future, although it would not be free and would likely require substantial additional Capex if deemed necessary. The ‘Expected DL Mbps’ (orange solid line, *) assumes the simplest antenna setup (e.g., classical SiSo antennas) and that all bands are fully used. On average, MNOs above the benchmark line have more advanced antenna setups (higher-order antennas) and full (or close to full) spectrum deployment. MNOs below the benchmark line likely have spectrum assets that have not been fully deployed yet and (or) have “under-prioritized” their antenna technology infrastructure. The DL spectrum holding excludes C-band and mmWave spectrum. Note: There was a mistake in the original chart published on LinkedIn, as the data was depicted against the total spectrum holding (DL+UL) and not only DL. Data: 54 Western European telcos.

Figure 37 illustrates the Western European cellular performance across MNOs, as measured by DL speed in Mbps, and compares this with a theoretical estimate of the performance they could have if all DL spectrum (not considering C-band, 3.x GHz) in their portfolio had been deployed on a fairly simple antenna setup (mainly SiSo and some 2T2R MiMo) with an effective spectral efficiency of 0.85 Mbps per MHz. It is worth pointing out that this is roughly what would be expected of 3G HSPA without MiMo. We observe that 21 telcos are above the solid (orange) line, and 33 have an actual average measured performance below the line, in many cases substantially so. Being above the line indicates that most spectrum has been deployed consistently across the network and that more advanced antennas, e.g., higher-order MiMo, are in use. Being below the line does (of course) not mean that networks are badly planned or not appropriately optimized. Not at all. Choices are always made in designing a cellular network, often dictated by the economic reality of a given operator, geographical demand distribution, clutter particularities, or the modernization cycle an operator may be in. The most obvious reasons why some networks operate well under the solid line are: (1) not all spectrum is being used everywhere (less in rural and more in urban clutter); (2) rural configurations are simpler and thus provide less performance than urban sites, and we have (in general) more traffic demand in urban areas than in rural ones (unless a rural area turns seasonally touristic, e.g., Lake Balaton in Hungary in the summer); it is simply good technology planning methodology to prioritize demand in Capex planning, and it makes very good economic sense; and (3) many incumbent mobile networks have a fundamental grid based on (GSM) 900 MHz and later in-filled for (UMTS) 2100 MHz, which typically would have less site density than networks based on (DCS) 1800 MHz. However, site density differences between competing networks have been increasingly leveled out and are no longer a big issue in Western Europe (at least).
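A sketch of the benchmark line in Figure 37, expected DL speed as the DL spectrum holding times the ~0.85 Mbps/MHz simple-antenna efficiency quoted above; the operator entries are hypothetical placeholders, not the actual benchmark data.

```python
# Sketch of the 'Expected DL Mbps' benchmark line in Figure 37: DL spectrum
# holding (excluding C-band and mmWave) times a simple-antenna effective
# spectral efficiency of ~0.85 Mbps/MHz. The operator data below is made up.
BASELINE_EFF = 0.85  # Mbps per MHz, roughly 3G HSPA without MiMo

def expected_dl_mbps(dl_spectrum_mhz: float) -> float:
    return dl_spectrum_mhz * BASELINE_EFF

operators = {            # hypothetical operators: (DL MHz excl. C-band, measured DL Mbps)
    "MNO A": (70, 95),
    "MNO B": (55, 38),
    "MNO C": (63, 52),
}
for name, (dl_mhz, measured) in operators.items():
    benchmark = expected_dl_mbps(dl_mhz)
    verdict = "above" if measured > benchmark else "below"
    print(f"{name}: measured {measured} Mbps vs expected {benchmark:.0f} Mbps -> {verdict} the line")
```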

Overall, I see this as excellent news. For most mobile operators, the spectrum portfolio and the available spectrum bandwidth are not limiting factors in coping with demanded capacity and quality. Operators have many network & technology levers to work with to increase both quality and capacity for their customers. Of course, subject to a willingness to prioritize their Capex accordingly.

A mobile operator has a few options for supplying the cellular capacity and quality demanded by its customer base.

  • Acquire more spectrum bandwidth by buying in an auction, buying from 3rd party (including M&A), asymmetric sharing, leasing, or trading (if regulatory permissible).
  • Deploy a better (spectral efficient) radio access technology, e.g., (2G, 3G) → (4G, 5G) or/and 4G → 5G, etc. Benefits will only be seen once a critical mass of customer terminal equipment supporting that new technology has been reached on the network (e.g., ≥20%).
  • Upgrade antenna technology infrastructure from lower-order passive antennas to higher-order active antenna systems. In the same category would be to ensure that smart, efficient signal processing schemes are being used on the air interface.
  • Building a denser cellular network where capacity demand dictates or coverage does not support the optimum use of higher frequency bands (e.g., 3.x GHz or higher).
  • Small cell deployment in areas where macro-cellular build-out is no longer possible or prohibitively costly. Small cells, though, scale poorly economically and may really be the last resort.

Sectorization with higher-frequency massive-MiMo may be an alternative to small-cell and macro-cellular additions. However, sectorization requires that it is possible civil-engineering-wise (e.g., construction and structural stability), permissible by the landlord/towerco, and, finally, economical compared to building a new site. Adding more than the usual 3 sectors to a site would further boost the site’s spectral efficiency as more antennas are added.

Acquiring more spectrum requires that such spectrum is available either through a regulatory offering (public auction, public beauty contest) or via alternative means such as 3rd-party trading, leasing, asymmetric sharing, or acquiring an MNO (in the market) with spectrum. In Western Europe, the average cost of spectrum is in the ballpark of 100 million Euros per 10 million population per 20 MHz of low-band or 100 MHz of medium-band spectrum. Within the European Union, recent auctions provide a 20-year usage-rights period before the spectrum has to be re-auctioned. This policy is very different from, for example, the USA, where spectrum rights are bought and ownership secured in perpetuity (sometimes subject to certain conditions being met). For Western Europe, apart from the mmWave spectrum, there will not be many new spectrum acquisition opportunities in the public domain in the foreseeable future.
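Expressed per MHz-pop, a conventional unit for spectrum prices, the rule of thumb above works out roughly as follows; the example budget at the end is an illustrative assumption.

```python
# The Western European spectrum cost rule of thumb above, expressed per MHz-pop:
# ~100 MEUR per 10 million population buys ~20 MHz low-band or ~100 MHz mid-band.
def eur_per_mhz_pop(price_meur: float, mhz: float, pop_m: float) -> float:
    return price_meur * 1e6 / (mhz * pop_m * 1e6)

print("Low-band:", eur_per_mhz_pop(100, 20, 10), "EUR per MHz-pop")    # ~0.5
print("Mid-band:", eur_per_mhz_pop(100, 100, 10), "EUR per MHz-pop")   # ~0.1

def spectrum_budget_meur(mhz: float, pop_m: float, eur_mhz_pop: float) -> float:
    """Rough auction budget for a given bandwidth and covered population."""
    return mhz * pop_m * eur_mhz_pop

# e.g., 2x10 MHz of low-band for a 17-million-population market at ~0.5 EUR/MHz-pop (illustrative)
print(spectrum_budget_meur(20, 17, 0.5), "MEUR")
```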

This leaves mobile operators with the other options listed above. Re-farming spectrum away from legacy technology (e.g., 2G or 3G) in support of a more spectrally efficient access technology (e.g., 4G and 5G) is possibly the most straightforward choice. In general, it is the least costly choice, provided that the more modern options can support the very few customers left behind. For retiring either 2G or 3G, operators need to be aware that as long as not all terminal equipment supports Voice-over-LTE (VoLTE), they need to keep either 2G or 3G (but not both) for 4G circuit-switched fallback for legacy voice services. The technologist should be prepared for substantial pushback from the retail and wholesale business, as closing down a legacy technology may lead to significant churn in that legacy customer base, although, in absolute terms, the churn exposure should be much smaller than the overall customer base; otherwise, it would not make sense to retire the legacy technology in the first place. Suppose the spectral re-farming is towards a new technology (e.g., 5G). In that case, immediate benefits may not occur before a critical mass of capable devices is making use of the re-farmed spectrum. The Capex impact of spectral re-farming tends to be minor, with possibly some licensing costs, offset by the net savings from retiring the legacy technology. Most radio departments within mobile operators, supplier experts, and managed service providers have gained much experience in this area over the last 5 – 7 years.

Another avenue that should be explored is upgrading or modernizing the radio access network with more capable antenna infrastructure, such as higher-order massive MiMo antenna systems. As has been pointed out by Prof. Emil Björnson, the available signal processing schemes (e.g., for channel estimation, pre-coding, and combining) will also be essential for the ultimate gain that can be achieved. This will result in a leapfrog increase in spectral efficiency, thus directly boosting air-interface capacity and the quality that the mobile customer can enjoy. If we take a 20-year period, this activity is likely to result in a capital demand in the order of 100 million euros for every 1,000 sites being modernized, assuming a modernization (or obsolescence) cycle of 7 years. In other words, within the next 20 years, a mobile operator will have undergone at least 3 antenna-system modernization cycles. It is important to emphasize that this does not (entirely) cover the likely introduction of 6G within those 20 years. Operators face two main risks in their investment strategy. One risk is that they take a short-term view of their capital investments and customer demand projections. As a result, they may invest in infrastructure solutions insufficient to meet future demands, forcing accelerated write-offs and re-investments. The second significant risk is that the operator invests too aggressively upfront in what appears to be the best solution today, only to find substantially better and more efficient solutions in the near future, which more cautious competitors could then deploy and thereby achieve substantially higher quality and investment efficiency. Given the lack of technology maturity and the very high pace of innovation in advanced antenna systems, the right timing is crucial but not straightforward.
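Taking the figures above at face value, the modernization capital demand over a 20-year horizon can be sketched as follows; the 5,000-site example network is an illustrative assumption.

```python
# Sketch of the antenna/radio modernization capital demand described above:
# ~100 MEUR per 1,000 sites per modernization cycle, with a ~7-year cycle.
import math

def modernization_capex_meur(sites: int, years: int = 20,
                             cycle_years: int = 7,
                             meur_per_1000_sites: float = 100.0) -> float:
    cycles = math.ceil(years / cycle_years)            # ~3 cycles over 20 years
    return cycles * sites / 1000 * meur_per_1000_sites

# e.g., a hypothetical 5,000-site network over a 20-year horizon
print(modernization_capex_meur(5000), "MEUR over 20 years")  # ~1,500 MEUR
```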

Last and maybe least, the operator can choose to densify its cellular grid by adding one or more macro-cellular sites or by adding small cells across the existing macro-cellular coverage. Before it is possible to build a new site or sites, the operator or the serving towerco would need to identify suitable locations and subsequently obtain a permit to establish the new site or sites. In urban areas, which typically have the highest macro-site densities, getting a new permit may be very time-consuming, with a relatively high likelihood of not being granted by the municipality. Small cells may be easier to deploy in urban environments than macro sites. For operators making use of a towerco to provide the passive site infrastructure, the cost of permitting, building the site, and materials (e.g., steel and concrete) is a recurring operational expense rather than a Capex charge. Of course, active equipment remains a Capex item for the relevant mobile operator.

The conclusion I make above is largely consistent with the conclusions made by New Street Research in their piece “European 5G deep-dive” (July 2021). There is plenty of unexploited spectrum with the European operators, and even more opportunity to migrate to more capable antenna systems, such as massive MiMo and active advanced antenna systems. There are also other spectrum opportunities above 3 GHz, without having to think about millimeter-wave spectrum and 5G deployment in the high-frequency spectrum range.

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. There should be no doubt that without the support of Russell Waller (New Street Research), this blog would not have been possible. Thank you so much for providing much of the data that lays the ground for much of the Capex analysis in this article. Of course, a lot of thanks go out to my former Technology and Network Economics colleagues, who have been a source of inspiration and knowledge. I cannot get away without acknowledging Maurice Ketel (who for many years led my Technology Economics Unit in Deutsche Telekom; I respect him above and beyond), Paul Borker, David Haszeldine, Remek Prokopiak, Michael Dueser, Gudrun Bobzin, as well as many, many other industry colleagues who have contributed with valuable insights, discussions & comments throughout the years. Many thanks to Paul Zwaan for a lot of inspiration, insights, and discussions around IT architecture.

Without executive leadership’s belief in the importance of high-quality techno-financial models, I have no doubt that I would not have been able to build up the experience I have in this field. I am forever thankful, for the trust and for making my professional life super interesting and not just a little fun, to Mads Rasmussen, Bruno Jacobfeuerborn, Hamid Akhavan, Jim Burke, Joachim Horn, and last but certainly not least, Thorsten Langheim.

FURTHER READING.

  1. Kim Kyllesbech Larsen, “The Nature of Telecom Capex.” (July, 2022). My first article laying the ground for Capex in the Telecom industry. The data presented in this article is largely outdated and remains for comparative reasons.
  2. Kim Kyllesbech Larsen, “5G Standalone European Demand Expectations (Part I).”, (January, 2022).
  3. Kim Kyllesbech Larsen, “RAN Unleashed … Strategies for being the best (or the worst) cellular network (Part III).”, (January, 2022).
  4. Tom Copeland, Tim Koller, and Jack Murrin, “Valuation”, John Wiley & Sons, (2000). I regard this as my “bible” when it comes to understanding enterprise valuation. There are obviously many finance books on valuation (I have 10 on my bookshelf). Copeland’s book is the best imo.
  5. Stefan Rommer, Peter Hedman, Magnus Olsson, Lars Frid, Shabnam Sultana, and Catherine Mulligan, “5G Core Networks”, Academic Press, (2020, 1st edition). Good account for what a 5G Core Network entails.
  6. Jia Shen, Zhongda Du, Zhi Zhang, Ning Yang and Hai Tang, “5G NR and enhancements”, Elsevier (2022, 1st edition). Very good and solid account of what 5G New Radio (NR) is about and the considerations around it.
  7. Wim Rouwet, “Open Radio Access Network (O-RAN) Systems Architecture and Design”, Academic Press, (2022). One of the best books on Open Radio Access Network architecture and design (honestly, there are not that many books on this topic yet). I like that the author, at least as an introduction makes the material reasonably accessible to even non-experts (which tbh is also badly needed).
  8. Strand Consult, “OpenRAN and Security: A Literature Review”, (June, 2022). Excellent insights into the O-RAN maturity challenges. This report focuses on the many issues around the open-source software-based development that is a major part of O-RAN and raises some deep concerns about what that may mean for the security of what should be regarded as critical infrastructure. I warmly recommend their “Debunking 25 Myths of OpenRAN”.
  9. Ian Morris, “Open RAN’s 5G course correction takes it into choppy waters”, Light Reading, (July, 2023).
  10. Hwaiyu Geng P.E., “Data Center Handbook”, Wiley (2021, 2nd edition). I have several older books on the topic that I have used for my models. This one brings the topic of data center design up to date. Also includes the topic of Cloud and Edge computing. Good part on Data Center financial analysis. 
  11. James Farmer, Brian Lane, Kevin Bourg, Weyl Wang, “FTTx Networks, Technology Implementation, and Operations”, Elsevier, (2017, 1st edition). There are several books covering FTTx deployment, GPON, and other alternative fiber technologies. I like this one in particular as it covers hands-on topics as well as basic technology foundations.
  12. Tower companies overview, “Top-12 Global 5G Cell Tower Companies 2021”, (Nov. 2021). A good overview of international tower companies with a meaningful footprint in Europe.
  13. New Street Research, “European 5G deep-dive”, (July, 2021).
  14. Prof. Emil Björnson, https://ebjornson.com/research/ and references therein. Please take a look at Prof. Björnson’s many video presentations (e.g., many brilliant YouTube presentations that are fairly accessible).

Spectrum in the USA – An overview of Today and a new Tomorrow.

This week (Week 17, 2023), I submitted my comments and advice titled “Development of a National Spectrum Strategy (NSS)” to the United States National Telecommunications & Information Administration (NTIA) related to their work on a new National Spectrum Strategy.

Of course, one might ask why, as a European, bother with the spectrum policy of the United States. So hereby, a bit of reasoning for bothering with this super interesting and challenging topic of spectrum policy on the other side of the pond.

A EUROPEAN IN AMERICA.

As a European coming to America (i.e., USA) for the first time to discuss the electromagnetic spectrum of the kind mobile operators love to have exclusive access to, you quickly realize that Europe’s spectrum policy/policies, whether you like them or not, are easier to work with and understand. Regarding spectrum policy, whatever you know from Europe is not likely to be the same in the USA (though physics is still fairly similar).

I was very fortunate to arrive back in the early years of the third millennium to discuss cellular capacity, a discussion that quickly evolved (“escalated”) into one about the available cellular frequencies, the associated spectral bandwidth, and whether they really needed that 100 million US dollars for radio access expansions.

Why fortunate?

I was one of the first (from my company) to ask all those “stupid” questions whenever I did not erroneously just assume that things surely must be the same as in Europe, and I ended up with the correct answer that in the USA, things are a “little” different and a lot more complicated in terms of the availability of frequencies and of what feeds the demand … the spectrum bandwidth. My arrival was followed by “hordes” of other well-meaning Europeans with the same questions and presumptions, using European logic to solve US challenges. And that doesn’t really work (surprised you should not be). I believe my T-Mobile US colleagues and friends over the years surely must have felt like it was Groundhog Day all over again at every new European visit.

COMPARING APPLES AND ORANGES.

Looking at US spectrum reporting, it is important to note that it is customary to provide the total amount of spectrum; thus, for FDD spectrum bands, both the downlink portion and the uplink portion of the cellular frequency band in question are included. For example, when a mobile network operator (MNO) reports that it has, e.g., 40 MHz of AWS1 spectrum in San Diego (California), it means that it has 2×20 MHz (or 20+20 MHz), thus 20 MHz for downlink (DL) services and 20 MHz for uplink (UL) services. For FDD, both the DL and the UL parts are counted. In Europe, historically, we would mainly talk about half the spectrum for FDD spectrum bands. This is one of the first hurdles to get over in meetings and discussions; if not sorted out early, it can lead to some pretty big misunderstandings (to say the least). To be honest, and in my opinion, providing the full spectrum holding, irrespective of whether a band is used as FDD or TDD, is less ambiguous than the European tradition.

The second “hurdle” is to understand that a USA-based MNO is likely to have a substantial variation in its spectrum holdings across the US geography. An MNO may have 40 MHz (i.e., 2×20 MHz) of PCS spectrum in Los Angeles (California), only 30 MHz (2×15 MHz) of the same spectrum in New York, and only 20 MHz (2×10 MHz) in Miami (Florida). For example, the FCC (i.e., the regulator managing non-federal spectrum) uses 734 so-called Cellular Market Areas, or CMAs, and there is no guarantee that a mobile operator’s spectrum position will remain the same over these 734 CMAs. Imagine Dutch (or other European) mobile operators having a varying 700 MHz (used for 5G) spectrum position across the 342 municipalities of The Netherlands (or another European country). It takes a lot of imagination … right? And maybe that is why we Europeans shake our heads at the US spectrum fragmentation, or market variation, as opposed to our nice, neat, and tidy market-wise spectrum uniformity. But is the European model so much better (apart from being neat & tidy)? …

… One may argue that the US model allows for spectrum acquisition to be more closely aligned with demand, e.g., less spectrum is needed in low-population density areas and more is required in high-density population areas (where demand will be much more intense). As evidenced by many US auctions, the economics matched the demand fairly well. While the European model is closely aligned with our good traditions of being solid on average … with our feet in the oven and our head in the freezer … and on average all is pretty much okay in Europe.

Figures 1 and 2 below illustrate the difference between a mobile operator’s spectrum bandwidth spread across the 734 US-defined CMAs in the AWS1 band and how that would look in Europe.

Figure 1 illustrates the average MNO distribution of (left chart) the USA AWS1 band (band 4) over the 734 Cellular Market Areas (CMAs) defined by the FCC, and (right chart) a typical European 3-MNO 2100-band (band 1) distribution across a country’s geographical area. As a rule of thumb for European countries, the spectrum is fairly uniformly distributed across the national MNOs, e.g., if you have 3 mobile operators, the 120 MHz available in band 1 will be divided equally among the 3, and if there are 4 MNOs, it will be divided by 4. Nevertheless, in Europe, an MNO’s spectrum position is fixed across the geography.

Figure 2 below is visually an even stronger illustration of mobile operator bandwidth variation across the 734 cellular market areas. The dashed white horizontal line is if the PCS band (a total of 120 MHz or 2×60 MHz) would be shared equally between 4 main nationwide mobile operators ending up at 30 MHz per operator across all CMAs. This would resemble what today is more or less a European situation, i.e., irrespective of regional population numbers, the mobile operator’s spectrum bandwidth at a given carrier frequency would be the same. The European model, of course, also implies that an operator can provide the same quality in peak bandwidth before load may become an issue. The high variation in the US operator’s spectrum bandwidth may result in a relatively big variation in provided quality (i.e., peak speed in Mbps) across the different CMAs.

There is an alternative approach to spectrum acquisition that may also be more spectrally efficient, and which the US model is much more suitable for: aim at a target Hz per customer (i.e., spectral overhead) and keep this constant across the various markets. Of course, there is a maximum realistic amount of bandwidth to acquire, governed by availability (e.g., for PCS, that is 120 MHz) and the strength of competing bidders. There will also be a minimum bandwidth level determined by the auction rules (e.g., 5 MHz) and a minimum acceptable quality level (e.g., 10 MHz). However, Figure 2 below reflects more opportunistic spectrum acquisition in CMAs with less than a million population, as opposed to a more intelligent design (possibly reflecting the importance, or lack thereof, of different CMAs to the individual operators).

Figure 2 illustrates the bandwidth variation (orange dots) across the 734 cellular market areas for 4 nationwide mobile network operators in the United States. The horizontal dashed white line is if the four main nationwide operators would equally share the 120 MHz of PCS spectrum (fairly similar to a European situation). MNOs would have the same spectral bandwidth across every CMA. The Minimum – Growing – Maximum dashed line illustrates a different spectrum acquisition strategy, where the operator has fixed the amount of spectrum per customer required and keeps this as a planning rule between a minimum level (e.g., a unit of minimum auctioned bandwidth) and a realistic maximum level (e.g., determined by auction competition, auction ruling, and availability).
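The Minimum – Growing – Maximum rule described above and in Figure 2 can be sketched as a simple clamped planning rule; the target of 10 MHz per million customers and the bounds below are illustrative assumptions, not values from the figure.

```python
# Sketch of the Minimum - Growing - Maximum acquisition rule from Figure 2:
# target a constant spectrum-per-customer ratio per market, clamped between a
# minimum auctionable lot and a realistic maximum. All numbers are illustrative.
def target_bandwidth_mhz(customers_m: float, mhz_per_m_customers: float,
                         min_mhz: float = 5.0, max_mhz: float = 40.0) -> float:
    """Desired bandwidth in a market, clamped to [min_mhz, max_mhz]."""
    return min(max(customers_m * mhz_per_m_customers, min_mhz), max_mhz)

# e.g., aiming for 10 MHz per million customers across three hypothetical CMAs
for cma, customers in [("small CMA", 0.3), ("mid CMA", 1.8), ("large CMA", 6.5)]:
    print(cma, "->", target_bandwidth_mhz(customers, 10.0), "MHz")
```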

Thirdly, so-called exclusive-use frequency licenses (as opposed to shared frequencies), as issued by the FCC, can be regarded accounting-wise as indefinitely-lived intangible assets. Thus, once a US-based cellular mobile operator has acquired a given exclusive-use license, that license can be considered at the operator’s disposal in perpetuity. It should be noted that FCC licenses would typically be issued for a fixed (limited) period, but renewals are routine.

This is a (really) big difference from European cellular frequency licenses that typically expire after 10 – 20 years, with the expired frequency bands being re-auctioned. A European mobile operator cannot guarantee its operation beyond the expiration date of the spectrum acquired, posing substantial existential threats to business and shareholder value. In the USA, cellular mobile operators have a substantially lower risk regarding business continuity as their spectrum, in general, can be regarded as theirs indefinitely.

The FCC also operates with a shared-spectrum license model, as envisioned by the Citizens Broadband Radio Service (CBRS) in the 3.55 to 3.7 GHz frequency range (i.e., the C-band). A shared-spectrum license model allows several types of users (e.g., Federal and non-Federal) and use cases (e.g., satellite communications, radar applications, national cellular services, local community broadband services, etc.) to co-exist within the same spectrum band. Usually, such shared licenses come with firm protection of federal (incumbent) users, allowing commercial use to co-exist with federal use, though with the federal use case taking priority over the non-federal. A really good overview of the CBRS concept can be found in “A Survey on Citizens Broadband Radio Service (CBRS)” by P. Agarwal et al. The Wireless Innovation Forum published, in 2022, a piece on “Lessons Learned from CBRS”, which provides a fairly nuanced, although somewhat negative, view of spectrum sharing as observed in the field and within the premises of the CBRS priority architecture and management system.

Recent data around FCC’s 3.5 GHz (CBRS) Auction 105 indicate that shared-licensed spectrum is valued at a lower USD-per-MHz-pop (i.e., 0.14 USD per MHz-pop) than exclusive-use license auctions at 3.7 GHz (Auction 107; 0.88 USD per MHz-pop) and 3.45 GHz (Auction 110; 0.68 USD per MHz-pop). The duration of the shared-spectrum license in the case of the Auction 105 spectrum is 10 years, after which it is renewed. Verizon and Dish Networks were the two main telecom incumbents that acquired substantial spectrum in Auction 105. AT&T did not acquire any, and T-Mobile US picked up close to nothing (i.e., 8 licenses).

THE STATE OF CELLULAR PERFORMANCE – IN THE UNITED STATES AND THE REST OF THE WORLD.

Irrespective of how one feels about the many mobile cellular benchmarks around in the industry (e.g., Ookla Speedtest, umlaut benchmarking, OpenSignal, etc.), these benchmarks do give an indication of the state of networks and of how those networks utilize the spectral resources that mobile companies have often spent hundreds of millions, if not billions, of US dollars acquiring, not to mention the cost and time that spectrum clearing or perfecting “second-hand” spectrum may incur for those operators.

So how do US-based mobile operators perform in a global context? We can get an impression, although a very 1-dimensional one, from Figure 3 below.

Figure 3 illustrates the comparative results of Ookla Speedtest data in median downlink speed (Mbps) for various countries. The selection of countries provides a reasonable representation of maximum and minimum values. To give an impression of the global ranking as of February 2023: South Korea (3), Norway (4), China (7), Canada (17), USA (19), and Japan (48). As a reminder, the statistic is based on the median of all measurements per country; thus, half of the measurements were above the median speed value, and the other half were below. Note: median values from 2017 to 2020 are estimated, as Ookla only provided average numbers.

Ookla’s Speedtest rank (see Figure 3 above) positions the United States cellular mobile networks (as an average) among the Top-20. Depending on the ambition level, that may be pretty okay or a disappointment. However, over the last 24 months, thanks to the fast 5G deployment pace at 600 MHz, 2.5 GHz, and C-band, the US has leapfrogged its network quality (on average), which for many years did not improve much due to little new spectrum availability and the huge capital investment levels required. Something the American consumer can greatly enjoy irrespective of the relative mobile network ranking of the US compared to the rest of the world. South Korea and Norway are ranked 3 and 4, respectively, regarding cellular downlink (DL) speed in Mbps. The figure also shows a significant uplift in speed at the time of introducing 5G in the cellular operators’ networks worldwide.

How should we understand the supplied cellular network quality and capacity that consumers demand and hopefully also enjoy? Let’s start with the basics:

Figure 4 illustrates one of the most important things (imo) to understand about creating capacity & quality in cellular networks. You need frequency bandwidth (in MHz), the right technology boosting your spectral efficiency (i.e., the ability to deliver bits per unit Hz), and sites (sectors, cells, …) to deploy the spectrum and your technology. That’s pretty much it.

We might be able to understand some of the dynamics of Figure 3 using Figure 4, which illustrates the fundamental cellular quality (and capacity) relationship with frequency bandwidth, spectral efficiency, and the number of cells (or sectors or sites) deployed in a given country.
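To make the relationship in Figure 4 concrete, here is a minimal R sketch of the simplified capacity heuristic; the bandwidth, spectral efficiency, and cell counts are purely illustrative numbers, not any particular operator’s data:

```r
# Simplified cellular capacity heuristic:
# capacity (bps) ~ bandwidth (Hz) x effective spectral efficiency (bps/Hz) x number of cells
network_capacity_gbps <- function(bandwidth_mhz, spectral_eff_bps_hz, n_cells) {
  bandwidth_mhz * 1e6 * spectral_eff_bps_hz * n_cells / 1e9
}

# Illustrative operator: 150 MHz deployed, 2 bps/Hz effective, 30,000 cells
base <- network_capacity_gbps(150, 2.0, 30e3)

# Doubling any one of the three levers doubles the (theoretical) capacity
network_capacity_gbps(300, 2.0, 30e3) / base   # 2x via more deployed spectrum
network_capacity_gbps(150, 4.0, 30e3) / base   # 2x via better spectral efficiency
network_capacity_gbps(150, 2.0, 60e3) / base   # 2x via more cells (sites/sectors)
```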

Thus, a mobile operator can improve its cellular quality (and capacity) by deploying more spectrum on its existing network, acquired, for example, through auctions, leasing, sharing, or other arrangements within whatever regulatory regime applies. This option is exhausted once the operator’s frequency spectrum pool has been deployed across the cellular network. It leaves an operator to wait for an upcoming new frequency auction or, if possible, attempt to purchase additional spectrum in the market (if regulation allows), which may ultimately include a merger with another spectrum-rich entity (e.g., AT&T’s attempt to take over T-Mobile US). All such spectrum initiatives may take a substantial amount of time to crystallize, while customers may experience a worsening of their quality. In Europe, licensed spectrum becomes available in cycles of 10 – 20 years. In the USA, exclusive-use licensed spectrum is typically a once-only opportunity to acquire (unless you acquire another spectrum-holding entity later, e.g., Metro PCS, Sprint, AT&T’s attempt to acquire T-Mobile, …).

Another part of the quality and capacity toolkit is for the mobile operator to choose appropriately spectrally efficient technologies that are supported by a commercially available terminal ecosystem. Firstly, migrate frequency and bandwidth away from currently deployed legacy radio-access technology (e.g., 2G, 3G, …) to newer and spectrally more efficient ones (e.g., 4G, 5G, …). This migration, also called spectral re-farming, requires a balancing act between current legacy demand and the future expected demand in the newer technology. In a modern cellular setting, the choice of antenna technology (e.g., massive MiMo, advanced antenna systems, …) and type (e.g., multi-band) is incredibly important for boosting quality and capacity within the operators’ cellular networks. Given that such choices may result in redesigning existing site infrastructure, it provides an opportunity to optimize the existing infrastructure for the best coverage of the consolidated spectrum pool. It is likely that the existing infra was designed with a single or only a few frequencies in mind (e.g., PCS, PCS+AWS, …) as well as legacy antennas, and the cellular performance is likely improved by considering the complete pool of frequencies in the operator’s spectrum holding. The mobile operator’s game should always be to achieve the best possible spectral efficiency considering demand and economics (i.e., deploying 64×64 massive MiMo all over a network may be the most spectrally efficient solution, theoretically, but both demand and economics would rarely support such an apparently “silly” non-engineering strategy). In general, this will be the most frequently used tool in the operators’ quality/capacity toolkit. I expect to see an “arms race” between operators deploying the best and most capable antennas (where it matters), as it will often be the only way to differentiate in quality and capacity (if everything else is almost equal).

Finally, the mobile operator can deploy more site locations (macro and small cells), if permitting allows, or add more sectors by sectorization (e.g., 3 → 4, 4 → 5 sectors) or cell splits if the infrastructure and landlord allow. If there remains unused spectral bandwidth in the operator’s spectrum pool, the operator may likely choose to add another cell (i.e., frequency band) to the existing site. Adding new site locations (macro or small cell), in particular, is the most complex path to take and, of course, also often the least economic one.

Thus, to get a feeling for the Ookla Speedtest results of Figure 3 (which are country averages), we need, as a starting point, the amount of spectral bandwidth available to the average cellular mobile operator. This is summarised in Table 1 below.

Table 1 provides, per country, the average amount of Low-band (≤ 1 GHz), Mid-band (1 GHz to 2.1 GHz), 2.3 & 2.5 GHz bands, Sub-total bandwidth before including the C-band, the C-band (3.45 to 4.2 GHz) and the Total bandwidth. The table also includes the Ookla Global Speedtest DL Mbps and Global Rank as of February 2023. I have also included the in-country mobile operator variation within the different categories, which may indicate what kind of performance range to expect within a given country.

It does not take too long to observe that there is only an apparently rather weak correlation between spectrum bandwidth (sub-total and total) and the observed DL speed (even after rescaling to downlink spectrum only). What is also important is, of course, how much of the spectrum is actually deployed. Typically, low and mid bands will be deployed extensively, while other high-frequency bands may only have been selectively deployed, and the C-band is only in the process of being deployed (where it is available). What also plays a role is to what degree 5G has been rolled out across the network, how much bandwidth has been dedicated to 5G (and 4G), and what type of advanced antenna system or massive MiMo capabilities have been chosen. And then, to provide a great service, a network must have a certain site density (or coverage) compared to the customer demand. Thus, it is to be expected that the number of mobile site locations, and the associated number of frequency cells and sectors, will play a role in the average speed performance of a given country.

Figure 5 illustrates how the DL speed in Mbps correlates with (a) the total amount of spectrum excluding the C-band (still not widely deployed), (b) Customers per Site, which provides a measure of the customer load at the site-location level; the more customers load a site and compete for radio resources (i.e., MHz), the poorer the experience, and (c) Sites times Bandwidth relative to the number of customers; the higher this is, the more quality can be provided (as observed with the positive correlation). The data is from Table 1.

Figure 5 shows that load (e.g., customers per site) and available capacity (e.g., sites × bandwidth) relative to customers are strongly correlated with the experienced quality (e.g., speed in Mbps). The comparison between the United States and China is interesting: both countries have a fairly similar surface area (i.e., 9.8 vs. 9.6 million sq. km), the USA has a little less than a quarter of the population, and the average US mobile operator would have about one-third of the customers compared to the average Chinese operator (note: China Mobile dominates the average). The Chinese operator, ignoring C-band, would have ca. 25 MHz or ~+20% (~50 MHz or ca. +10% if C-band is included) more than the US operator. Regarding sites, China Mobile has been reported to have millions of cell site locations (incl. lots of small cells). The US operator’s site count is in the order of hundreds of thousands (though less than 200k currently, including small cells). Thus, Chinese mobile operators have between 5x and 10x the number of site locations compared to the American ones. While the difference in spectrum bandwidth has some significance (i.e., China +10% to 20% higher), the huge relative difference in site numbers is one of the determining factors in why China (i.e., 117 Mbps) gets away with a speed test score that is better than the American one (i.e., 85 Mbps). While theoretically (and simplistically), one would expect the average Chinese mobile operator to be able to provide more than twice the speed of the American mobile operator, instead of “only” about 40% more, it goes to show that the radio environment is a “bit” more complex than the simplistic view.
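To see where the “more than twice the speed” expectation comes from, here is a back-of-the-envelope R sketch using only the rounded ratios quoted above; the inputs are illustrative approximations, not precise operator data:

```r
# Back-of-the-envelope China vs. USA comparison, per average operator (illustrative ratios)
spectrum_ratio <- 1.2       # China: ~20% more deployed bandwidth (excluding C-band)
site_ratio     <- c(5, 10)  # China: 5x to 10x the site locations
customer_ratio <- 3         # China: ~3x the customers per average operator

# Simplistic "capacity per customer" proxy: bandwidth x sites / customers
spectrum_ratio * site_ratio / customer_ratio   # ~2x to ~4x in favour of the Chinese operator

# Observed Ookla speed ratio quoted above
117 / 85                                       # ~1.4x, i.e., "only" ~40% better
```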

Of course, the US-based operator could attempt to deploy even more sites where it matters. However, I very much doubt that this would be a feasible strategy given permitting and citizen resistance to increasing site density in areas where it actually would be needed to boost the performance and customer experience.

Thus, the operator in the United States must acquire more spectrum bandwidth and deploy that where it matters to their customers. They also need to continue to innovate on leapfrogging the spectral efficiency of the radio access technologies and deploy increasingly more sophisticated antenna systems across their coverage footprint.

Sectorization (at existing locations), cell splits (adding existing spectrum to an existing site), and/or adding more sophisticated antenna systems are a matter of Capex prioritization and possibly getting permission from the landlord. Acquiring new spectrum … well, that depends on whether such new spectrum somehow becomes available.

Where to “look” for more spectrum?

WHERE COULD MORE SPECTRUM COME FROM?

Within the so-called “beachfront spectrum” covering the frequency range from 225 MHz to 4.2 GHz (according to NTIA), only about 30% (ca. 1 GHz of bandwidth within the frequency range from 600 MHz to 4.2 GHz) is exclusively non-Federal, and mainly with the mobile operators as exclusive-use licenses deployed for cellular mobile services across the United States. Federal authorities exclusively use a bit less than 20% (~800 MHz) for communications, radars, and R&D purposes. This leaves ca. 50% (~2 GHz) of the beachfront spectrum shared between Federal authorities and commercial entities (i.e., non-Federal).

For cellular mobile operators, exclusive use licenses would be preferable (note: at least at the current state of the relevant technology landscape) as it provides the greatest degree of operational control and possibility to optimize spectral efficiency, avoiding unacceptable levels of interference either from systems or towards systems that may be sharing a given frequency range.

The options for re-purposing the Federal-only spectrum (~800 MHz) could, for example, be (a) moving radar systems’ operational frequency range out of the beachfront spectrum range to the degree that innovation and technology support such a migration, (b) modernizing radar systems with a focus on making these substantially more spectrally efficient and interference-resistant, or (c) migrating federal-only communications services to commercially available systems (e.g., federal-only 5G slicing), similar to the trend of migrating federal legacy data centers to the public cloud. Within the shared portion, with its ~2 GHz of bandwidth, it may be more challenging, as considerable commercial interests (other than mobile operators) have positioned their business at and around such frequencies, e.g., within the CBRS frequency range. This said, there might also be opportunities within the Federal use cases to shift applications towards commercially available communication systems or to shift them out of the beachfront range. Of course, in my opinion, it always makes sense to impose (and possibly finance) stricter spectral efficiency conditions, triggering innovation on federal and commercial systems alike within the shared portion of the beachfront spectrum range. With such spectrum strategies, there appear to be compelling, high-likelihood opportunities for creating more spectrum for exclusive-use licensing that would safeguard future consumer and commercial demand and the continuous improvement of customer experience that comes with future demand and user expectations of the technology that serves them.

I believe that the beachfront should be extended beyond 4.2 GHz. For example, aligning with band n79, whose frequency range extends from 4.4 GHz to 5.0 GHz, would allow for a bandwidth of 600 MHz (e.g., China Mobile has 100 MHz in the range from 4.8 GHz to 4.9 GHz). Exploring additional re-purposing opportunities for exclusive-use licenses in what may be called the extended beachfront frequency range, from 4.2 GHz up to 7.2 GHz, should be conducted with priority. Such a study should also consider the possibility of moving spectrum under exclusive and shared federal use to other frequency bands and optimizing the current federal frequency and spectrum allocation.

The NTIA, that is, the National Telecommunications and Information Administration, is currently (i.e., in 2023) developing a National Spectrum Strategy (NSS) for the United States, along with the associated implementation plan. Comments and suggestions on the NSS were possible until the 18th of April, 2023. The National Spectrum Strategy should address how to create a long-term spectrum pipeline. It is clear that developing a coherent national spectrum strategy is critical to innovation, economic competition, national security, and maybe re-capturing global technology leadership.

So who is the NTIA? What do they do that FCC doesn’t already do? (you may possibly ask).

WHO MANAGES WHAT SPECTRUM?

Two main agencies in the US manage the frequency spectrum, the FCC and the NTIA. The Federal Communications Commission, the FCC for short, is an independent agency that exclusively regulates all non-Federal spectrum use across the United States. The FCC allocates spectrum licenses for commercial use, typically through spectrum auctions. New or re-purposed commercial spectrum has typically been reclaimed from other uses, both federal and existing commercial ones. Spectrum can be re-purposed either because newer, more spectrally efficient technologies become available (e.g., the transition from analog to digital broadcasting) or because it becomes viable to shift operations to other spectrum bands with less commercial value (and, of course, without jeopardizing existing operational excellence). It is also possible that spectrum previously reserved for exclusive federal use (e.g., military applications, fixed satellite uses, etc.) can be shared, such as the case with the Citizens Broadband Radio Service (CBRS), which allows non-federal parties access to 150 MHz in the 3.5 GHz band (i.e., band 48). However, it has recently been concluded that (centralized) dynamic spectrum sharing only works in certain use cases and is associated with considerable implementation complexities. Co-existence of multiple parties with possibly vastly different requirements within a given band is very much work-in-progress and may not be consistent with the commercialized spectrum operation required for high-quality broadband cellular operation.

In parallel with the FCC, we have the National Telecommunications and Information Administration, NTIA for short. NTIA is solely responsible for authorizing Federal spectrum use. It also acts as the President of the United States’ principal adviser on telecommunications policies, coordinating the views of the Executive Branch. NTIA manages about 2,398 MHz (69%) within the so-called “beachfront spectrum” range of 225 MHz to 3.7 GHz (note: I would let that beachfront go to 7 GHz, to be honest). Of the total of 3,475 MHz, 591 MHz (17%) is exclusively for Federal use, and 1,807 MHz (52%) is shared (or coordinated) between Federal and non-Federal users. This leaves 1,077 MHz (31%) for exclusive commercial use under the management of the FCC.
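A quick R sanity check of the beachfront arithmetic quoted above (225 MHz to 3.7 GHz, i.e., 3,475 MHz in total); the numbers are simply the ones from the paragraph above:

```r
# Beachfront arithmetic (225 MHz to 3.7 GHz)
total_mhz     <- 3700 - 225                            # 3,475 MHz in total
federal_only  <- 591
shared        <- 1807
fcc_exclusive <- total_mhz - federal_only - shared     # 1,077 MHz exclusively commercial

shares <- c(NTIA_managed  = federal_only + shared,
            federal_only  = federal_only,
            shared        = shared,
            FCC_exclusive = fcc_exclusive)
round(100 * shares / total_mhz)   # ~69%, 17%, 52%, 31%, matching the shares quoted above
```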

NTIA, in collaboration with the FCC, has been instrumental in the past in freeing up substantial C-band spectrum, 480 MHz in total, of which 100 MHz is conditioned on prioritized sharing (i.e., Auction 105), for commercial and shared use; this spectrum has subsequently been auctioned off over the last 3 years, raising USD 109 billion. In US Dollars (USD) per MHz per population count (pop), we have on average ca. USD 0.68 per MHz-pop from the C-band auctions in the US, compared to USD 0.13 per MHz-pop in European C-band auctions and USD 0.23 per MHz-pop in APAC auctions. It should be remembered that United States exclusive-use spectrum licenses can be regarded as indefinite-lived intangible assets, while European spectrum rights expire after between 10 and 20 years. This may explain a big part of the pricing difference between US-based spectrum pricing and that of Europe and Asia.

NTIA and the FCC jointly manage all the radio spectrum of the United States, licensed (e.g., cellular mobile frequencies, TV signals, …) and unlicensed (e.g., WiFi, microwave ovens, …), NTIA for Federal use and the FCC for non-Federal use (put simply). The FCC is responsible for auctioning spectrum licenses and is also authorized to redistribute licenses.

RESPONSE TO NTIA’S NATIONAL SPECTRUM STRATEGY REQUEST FOR COMMENTS.

Here are some of the key points to consider for developing a National Spectrum Strategy (NSS).

  • The NTIA National Spectrum Strategy (NSS) should focus on creating a long-term spectrum pipeline. Developing a coherent national spectrum strategy is critical to innovation, economic competition, national security, and global technology leadership.
  • NTIA should aim at identifying significant amounts of spectrum to study and clear in order to build a pipeline. Repurposing at least 1,500 megahertz (MHz) of spectrum, perfected for commercial operations, is a good initial target, allowing the industry to continue to meet consumer, business, and societal demand. This requires more than 1,500 MHz to be identified for study.
  • NTIA should be aware that the mobile network quality strongly correlates with the mobile operators’ spectrum available for their broadband mobile service in a global setting.
  • NTIA must remember that not all spectrum is equal. As it thinks about a pipeline, it must ensure its plans are consistent with the spectrum needs of various use cases of the wireless sectors. The NSS is a unique opportunity for NTIA to establish a more reliable process and consistent policy for making the federal spectrum available for commercial use. NTIA should reassert its role, and that of the FCC, as the primary federal and commercial regulator of spectrum policy.

A balanced spectrum policy is the right approach. Given the current spectrum dynamics, the NSS should prioritize identifying exclusive-use licensed spectrum instead of, for example, attempting co-existence between commercial and federal use.

Spectrum-band sharing between commercial communications networks and federal communications or radar systems may impact the performance of all the involved systems. Such a practice compromises the level of innovation in modern commercial communications networks (e.g., 5G or 6G) that must co-exist with older legacy systems. It also discourages the modernization of legacy federal equipment.

Only high-power licensed spectrum can provide the performance necessary to support nationwide wireless with the scale, reliability, security, resiliency, and capabilities consumers, businesses, and public sector customers expect.

Exclusive use of licensed spectrum provides unique benefits compared to unlicensed and shared spectrum. Unlicensed spectrum, while important, is only suitable for some types of applications, and licensed spectrum under shared-access frameworks such as CBRS is unsuited for serving as the foundation for nationwide mobile wireless networks.

Allocating new spectrum bands for the exclusive use of licensed spectrum positively impacts the entire wireless ecosystem, including downstream investments by equipment companies and others who support developing and deploying wireless networks. Insufficient licensed spectrum means increasingly deteriorating customer experience and lost economic growth, jobs, and innovation.

Other countries are ahead of the USA in developing plans for licensed spectrum allocations, targeting the full potential of the spectrum range from 300 MHz up to 7 GHz (i.e., the beachfront spectrum range), and those countries will lead the international conversation on licensed spectrum allocation. The NSS offers an opportunity to reassert U.S. leadership in these debates.

NTIA should also consider the substantial benefits and economic value of leading the innovation in modernizing the legacy, spectrally inefficient non-commercial communications and radar systems occupying vast spectrum resources.

Exclusive-use licensed spectrum has inherent characteristics that benefit all users in the wireless ecosystem.

Consumer demand for mobile data is at an all-time high and only continues to surge as demand grows for lightning-fast and responsive wireless products and services enabled by licensed spectrum.

With an appropriately designed and well-sized spectrum pipeline, demand will remain sustainable, as the supplied spectrum capacity relative to demand will remain at or exceed today’s levels.

Networks built on licensed spectrum are the backbone of next-generation innovative applications like precision agriculture, telehealth, advanced manufacturing, smart cities, and our climate response.

Licensed spectrum is enhancing broadband competition and bridging the digital divide by enabling 5G services like 5G Fixed Wireless Access (FWA) in areas traditionally dominated by cable and in rural areas where fiber is not cost-effective to deploy.

NTIA should identify the midband spectrum (e.g., ~2.5 GHz to ~7 GHz) and, in particular, frequencies above the C-band for licensed spectrum. That would be the sweet spot for leapfrogging broadband speed and capacity necessary to power 5G and future generations of broadband communications networks.

The National Spectrum Strategy is an opportunity to improve the U.S. Government’s spectrum management process.

The NSS allows NTIA to develop a more consistent and better process for allocating spectrum and providing dispute resolution.

The U.S. should handle mobile networks without a new top-down, government-driven industrial policy. A central planning model would harm the nation, severely limiting innovation and private sector dynamism.

Instead, we need a better collaboration between government agencies with NTIA and the FCC as the U.S. Government agencies with clear authority over the nation’s spectrum. The NSS also should explore mechanisms to get federal agencies (and their associated industry sectors) to surface their concerns about spectrum allocation decisions early in the process and accept NTIA’s role as a mediator in any dispute.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. Of course, throughout the years of being involved in T-Mobile US spectrum strategy, I have enjoyed many discussions and debates with US-based spectrum professionals, bankers, T-Mobile US colleagues, and very smart regulatory policy experts in Deutsche Telekom AG. I have the utmost respect for their work and the challenges they have faced and face. For this particular work, I cannot thank Roslyn Layton, PhD enough for nudging me into writing the comments to NTIA. By that nudge, this little article is a companion to my submission about the US spectrum as it stands today and what I would like to see in the upcoming National Spectrum Strategy. I very much recommend reading Roslyn’s far more comprehensive and worked-through comments to the NTIA NSS request for comments. A final thank you to John Strand (who keeps away from LinkedIn ;-) of Strand Consult for challenging my way of thinking and for always stimulating new ways of approaching problems in our telecom sector. I very much appreciate our discussions.

ADDITIONAL MATERIAL.

  1. Kim Kyllesbech Larsen, “NTIA-2023-0003. Development of a National Spectrum Strategy (NSS)”, National Spectrum Strategy Request for Comment Responses, April 2023. See all submissions here.
  2. Roslyn Layton, “NTIA-2023-0003. Development of a National Spectrum Strategy (NSS)”, National Spectrum Strategy Request for Comment Responses, April 2023.
  3. Ronald Harry Coase, “The Federal Communications Commission”, The Journal of Law & Economics, Vol. 2 (October 1959), pp. 1–40. In my opinion, a must-read for anyone who wants to understand US spectrum regulation and how it came about.
  4. Kenneth R. Carter, “Policy Lessons from Personal Communications Services: Licensed vs. Unlicensed Spectrum Access,” 2006, Columbus School of Law. An interesting perspective on licensed and unlicensed spectrum access.
  5. Federal Communications Commission (FCC) assigned areas based on the relevant radio licenses. See also FCC Cellular Market Areas (CMAs).
  6. FCC broadband PCS band plan, UL:1850-1910 MHz & DL:1930-1990 MHz, 120 MHz in total or 2×60 MHz.
  7. Understanding Federal Spectrum Use is a good piece from NTIA about the various federal uses of spectrum in the United States.
  8. Ookla’s Speedtest Global Index for February 2023. In order to get the historical information use the internet archive, also called “The Wayback Machine.”
  9. I make extensive use of the Spectrum Monitoring site, which I can recommend as one of the most comprehensive sources of frequency allocation data worldwide that I have come across (and is affordable to use).
  10. FCC Releases Rules for Innovative Spectrum Sharing in 3.5 GHz Band.
  11. 47 CFR Part 96—Citizens Broadband Radio Service. Explains the hierarchical spectrum-sharing regime of, and priorities given within, the CBRS.

RAN Unleashed … Strategies for being the best (or the worst) cellular network (Part III).

I have been spending my holiday break this year (December 2021) updating my dataset on Western Europe Mobile Operators, comprising 58+ mobile operators in 16 major Western European markets, focusing on spectrum positions, market dynamics, technology diffusion (i.e., customer migration to 5G), advanced antenna strategies, (modeled) investment levels and last but not least answering the question: what makes a cellular network the best in a given market or the world. What are the critical ingredients for an award-winning mobile network?

An award-winning cellular network, the best network, also provides its customers with a superior experience, the best network experience possible in a given market.

I am fascinated by the many reasons and stories we tell ourselves (and others) about why this or that cellular network is the best. The story may differ depending on whether you are an operator, a network supplier, or an analyst covering the industry. I have had the privilege of leading a mobile network (T-Mobile Netherlands) that has won the Umlaut best mobile network award in The Netherlands since 2016 (5 consecutive times) and even scored the highest number of points in the world in 2019 and 2020/2021. So, I guess that would make me a sort of “authority” on winning best network awards? (=sarcasm).

In my opinion and experience, a cellular operator has a much better than fair chance at having the best mobile network, compared to its competition, with access to the most extensive active spectrum portfolio, across all relevant cellular bands, implemented on a better (or best) antenna technology (on average) situated on a superior network footprint (e.g., more sites).

For T-Mobile Netherlands, firstly, we have the largest spectrum portfolio (260 MHz) compared to KPN (205 MHz) and Vodafone (215 MHz). The spectrum advantage of T-Mobile, as shown above, is both in the low-band (< 1800 MHz) and in the mid-band range (> 1500 MHz). Secondly, as we started out back in 1998, our cell site grid was based on 1800 MHz, requiring a denser cell site grid (thus, more sites) than the networks based on 900 MHz of the two Dutch incumbent operators, KPN and Vodafone. Therefore, T-Mobile ended up with more cell sites than our competition. We maintained the site advantage even after the industry’s cell grid densification needs of UMTS at 2100 MHz (back in the early 2000s). Our two very successful mergers have also helped our site portfolio: acquiring and merging with Orange NL back in 2007 and merging with Tele2 NL in 2019.

The number of sites (or cells) matters for coverage, capacity, and overall customer experience. Thirdly, T-Mobile was also the first to deploy advanced antenna systems in the Dutch market (e.g., aggressive use of higher-order MiMo antennas) across many of our frequency bands and cell sites. Our antenna strategy has allowed for a high effective spectral efficiency (across our network). Thus, we could (and can) handle more bits per second in our network than our competition.

Moreover, over the last 3 years, T-Mobile has undergone (passive) site modernization that has improved coverage and quality for our customers. This last point is not surprising since the original network was built based on a single 1800 MHz frequency, and since 1998 we have added 7 additional bands (from 700 MHz to 2.5 GHz) that need to be considered in the passive site optimization. Of course, as site modernization is ongoing, an operator (like T-Mobile) also should consider the impact of future bands that may be required (e.g., 3.x GHz). Optimize subject to the past as well as the future spectrum outlook. Last but not least, we at T-Mobile have been blessed with a world-class engineering team that has been instrumental in squeezing out continuous improvements of our cellular network over the last 6 years.

So, suppose you have 25% less spectrum than a competitor. In that case, you either need to compensate by building more cells (strictly ~33% more, since 1/0.75 ≈ 1.33; very costly & time-consuming), deploying better antennas with a correspondingly better effective spectral efficiency (limited, costly & relatively easy to copy/match), or a combination of both (expensive & time-consuming). The most challenging driver to copy for network superiority is the amount of spectrum. A competitor can only compensate by building more sites, deploying better antenna technology, and, over decades, trying to equalize the spectrum position in subsequent spectrum auctions (e.g., valid for Europe, not so for the USA, where acquired spectrum usually is owned in perpetuity).

T-Mobile has consistently won the best mobile network award over the last 6 years (and 5 consecutive times) due to these 3 multiplying core dimensions (i.e., spectrum × antenna technology × sites) and our world-class leading engineering team.

THE MAGIC RECIPE FOR CELLULAR PERFORMANCE.

We can formalize the above network heuristics in the following key (very beautiful IMO) formula for cellular network capacity measured in throughput (bits per second);
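The formula itself appears as an image in the original post; my reading of it, written out in conventional notation and consistent with the three dimensions discussed above, is roughly:

\[
\text{Capacity}\;[\text{bps}] \;\approx\; B\;[\text{Hz}] \;\times\; \eta_{\text{eff}}\;[\text{bps/Hz}] \;\times\; N_{\text{cells}}
\]

where B is the deployed (active) bandwidth, η_eff is the effective spectral efficiency delivered by the radio and antenna technology, and N_cells is the number of cells (sites, sectors) carrying that spectrum.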

It is actually that simple. Cellular capacity is made as simple as possible, dependent on three basic elements, but not simpler. To be super clear, though: only active spectrum counts. Any spectrum not deployed is an opportunity for a competitor to gain network leadership on you.

If an operator has a superior spectrum position and everything else is equal (i.e., antenna technology & the number of sites), that operator should be unbeatable in its market.

There are some caveats, though. In an overloaded (congested) cellular network, performance would decrease, and superior network performance would be unlikely to be ensured compared to competitors not experiencing such congestion. Furthermore, spectrum superiority must be across the depth of the market-relevant cellular frequencies (i.e., 600 MHz – 3.x GHz and higher). In other words, if a cellular operator “only” has to work with, for example, 100 MHz @ 3.5 GHz, it is unlikely that this would guarantee a superior network performance across a market (country) compared to a much better balanced spectrum portfolio.

The option space any operator has is to consider the following across the three key network quality dimensions;

Let us look at the hypothetical Western European country Mediana. Mediana, with a population of 25 million, has 3 mobile operators, each with 8 cellular frequency bands: incumbent Winky has a total cellular bandwidth of 270 MHz, Dipsy has 220 MHz, and Po has 320 MHz (having topped up their initially weaker spectrum position through acquisitions). Apart from having the most robust spectrum portfolio, Po also has more cell sites than any other in the market (10,000) and keeps winning the best network award. Winky, being the incumbent, is not happy about this situation. No new spectrum opportunities will become available in the next 10 years. Winky’s cellular network, originally based on 900 MHz but densified over time, has about 20% fewer sites than Po. Po and Winky’s deployed state of antenna technology is comparable.

What can Winky do to gain network leadership? Winky has assessed that Po has a ca. 20% stronger spectrum position than they do, the state of antenna technology is comparable, and they (Po) have ca. 20% more sites. Using the above formula, Winky estimates that Po has ca. 44% more raw cellular network quality available compared to their own capability (i.e., 1.2 × 1.2 = 1.44). Winky commenced a network modernization program that adds another 500 new sites and significantly improves their antenna technology. After this modernization program, Winky has decreased its site deficit to having 10% fewer sites than Po and has almost 60% better antenna technology capability than Po. Overall, using the above network quality formula, Winky has changed their network position to a lead over Po of ca. 18% (see the small sketch below). In theory, it should have an excellent chance to capture the best network award.
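The Mediana arithmetic can be reproduced with a few lines of R, applying the capacity formula above as a relative quality index. The ~1.57x antenna uplift is my assumption, chosen to match the “almost 60% better” and ca. 18% figures quoted above:

```r
# Relative network quality index (Po = 1.0 on each dimension unless stated otherwise):
# quality ~ spectrum x antenna technology x sites
quality_index <- function(spectrum, antenna, sites) spectrum * antenna * sites

# Before Winky's modernization: Po has +20% spectrum and +20% sites, comparable antennas
quality_index(1.2, 1.0, 1.2)       # = 1.44, i.e., Po is ~44% ahead of Winky

# After modernization (relative to Po): Winky still has ~17% less spectrum (1/1.2),
# now only 10% fewer sites, and an assumed ~57% antenna capability uplift
quality_index(1/1.2, 1.57, 0.9)    # ~1.18, i.e., Winky leads Po by ~18%
```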

Of course, Po could simply follow and deploy the same antenna technology as Winky and would easily overtake Winky’s position due to its superior spectrum position (that Winky cannot beat the next 10 to 15 years at least).

In economic terms, it may be tempting to conclude that Winky has avoided 625 Million Euro in spectrum fees by possessing 50 MHz less than Po (i.e., the median spectrum fee in Mediana of 0.50 Euro per MHz per pop times the avoided 50 MHz times the population of Mediana of 25 Million pops), and that this for sure should allow Winky to make a lot of network (and market) investments to gain network leadership by adding more sites (assuming it is possible where they are needed) and investing in better antenna technology. However, do the math with realistic prices and costs incurred over a 10 to 15 year period (i.e., until the next spectrum opportunity). You are more likely than not to find a higher total cost for Winky than the spectrum fee avoidance. Also, Winky’s strategy is easy to copy and overtake in Po’s next modernization cycle.

Is there any value for operators engaging in such a best-network equivalent of a “nuclear arms” race? That interesting question is for another article. Though the answer (spoiler alert) is (maybe) not so black and white as one may think.

An operator can compensate for a weaker spectrum position by adding more cell sites and deploying better antenna technologies.

A superior spectrum portfolio is not an entitlement. Still, it is an opportunity to become the sustainable best network in a given market (for the duration that the spectrum is available to the operator, e.g., 10 – 15 years in Europe at least).

WESTERN EUROPE SPECTRUM POSITIONS.

A cellular operator’s spectrum position is an important prerequisite for superior performance and customer experience. If an operator has the highest amount of spectrum (well balanced over low, mid, and high-frequency bands), it will have a powerful position to become the best network in that given market. Using Spectrum Monitoring’s Global Mobile Frequencies Database (last updated May 2021), I analyzed the spectrum position of a total of 58 cellular operators in 16 Western European markets. The result is shown below as (a) Total spectrum position, (b) Low-band spectrum position, covering spectrum below and including 1500 MHz (SDL band), and (c) Mid-band spectrum, covering the spectrum above 1500 MHz (SDL band). For clarity, I include the 3.X GHz (C-band) as mid-band and do not include any mmWave (n257 band) positions (which would anyway be high band, obviously).

4 operators are in a category by themselves with 400+ MHz of total cellular bandwidth in their spectrum portfolios: A1 (Austria), TDC (Denmark), Cosmote (Greece), and Swisscom (Switzerland). TDC and Swisscom have incredibly strong low-band and mid-band positions compared to their competition. Magenta in Austria has a 20 MHz advantage over A1 in low-band (very good) but trails A1 by 92 MHz in mid-band (not so good). Cosmote follows slightly behind Vodafone on low-band (+10 MHz in Vodafone’s favor) but heads the Greek race with +50 MHz (over Vodafone) in mid-band. All 4 operators should be far ahead of their competitors in network quality, at least if they use their spectrum resources wisely in combination with good (or superior) antenna technologies and a sufficient cellular network footprint. All else being equal, these 4 operators should be sustainably unbeatable based on their incredibly strong spectrum positions. Within Western Europe, I would, over the next few years, expect to see all-round best networks with very high best-network benchmark scores in Denmark (TDC), Switzerland (Swisscom), Austria (A1), and Greece (Cosmote). Western European countries with relatively small surface areas (e.g., <100,000 square km) should outperform much larger countries.

In fact, 3 of the 4 top spectrum-holding operators also have the best cellular networks in their markets. The only exception is A1 in Austria, which lost to Magenta in the most recent Umlaut best network benchmark. Magenta has the best low-band position in the Austrian market, providing for above and beyond cellular indoor-quality coverage that the low-band provides.

There are so many more interesting insights in my collected data. Alas for another article at another time (e.g., topics like the economic value of being the best and winning awards, industry investment levels vs. performance, infrastructure strategies, incumbent vs. later stages operator dynamics, 3.X GHz and mmWave positions in WEU, etc…).

The MNO rank within a country will depend on the relative spectrum position between the 1st and 2nd operator. If the difference is below 10% (i.e., dark red in the chart below), I assess that it will be relatively easy for number 2 to match or beat number 1 with improved antenna technology. As the relative strength of number 1’s spectrum position over number 2 increases, it will become increasingly difficult (assuming number 1 uses an optimal deployment strategy).
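For transparency, here is a minimal R helper for the relative spectrum strength metric as I use it here (my own formulation: the #1 operator's bandwidth relative to the #2 operator's), using the Dutch spectrum holdings quoted earlier as a worked example:

```r
# Relative spectrum strength of the #1 vs. #2 spectrum holder in a market (in %)
relative_strength <- function(mhz_rank1, mhz_rank2) 100 * (mhz_rank1 - mhz_rank2) / mhz_rank2

relative_strength(260, 215)   # e.g., T-Mobile NL (260 MHz) vs. Vodafone NL (215 MHz): ~21%
# < 10% : relatively easy for #2 to match or beat #1 with better antenna technology
# > 30% : "Star" territory; very hard for #2 to overcome without many more sites
```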

The Stars (e.g., #TDCNet / #Nuuday, #Swisscom and #EE) have more than a 30% relative spectrum strength compared to the 2nd ranked MNO in a given market. They will have to mess up severely not to take (or have!) the best cellular network position in their relevant markets. Moreover, network-economically, the Stars should have a substantially better Capex position compared to their competitors (although 1 of the Stars seems a “bit” out-of-whack in their sustainable Capex spend, which may be due to a fixed broadband focus as well?). As a “cherry on the pie”, both Nuuday/TDCNet and Swisscom have some of the strongest spectral overhead positions (i.e., MHz per pop) in Western Europe (relatively small populations against very strong spectrum portfolios), which obviously should enable a superior customer experience.

HOW AND HOW NOT TO WIN BEST NETWORK AWARDS.

Out of the 16 cellular operators having the best networks (i.e., rank 1), 12 (75%) also had the strongest (in-market) spectrum positions. 3 operators having the second-best spectrum position ended up taking the best network position, and 1 operator (WindTre, Italy) with the 3rd best spectrum position took the pole network position. The incumbent TIM (Italy) has the strongest spectrum position both in low-band (+40 MHz vs. WindTre) and mid-band (+52 MHz vs. WindTre). Clearly, it is not a given that having a superior spectrum position also leads to a superior network position. Though 12 out of 16 operators did leverage their spectrum superiority compared to their respective competitors.

For operators with the 2nd largest spectrum position, more variation is observed. 7 out of 16 operators end up with the 2nd position as best network (using Umlaut scoring). 3 ended up as the best network, and the rest either in 3rd or 4th position. The reason is that the difference between the 2nd and 3rd spectrum rank positions is often not per se considerable, and therefore other effects, such as the number of sites, better antenna technologies, and/or a better engineering team, are more likely to be the decisive factors.

Nevertheless, the total spectrum is a strong predictor for having the best cellular network and winning the best network award (by Umlaut).

As I have collected quite a rich dataset for mobile operators in Western Europe, it may also be possible to model the expected ranking of operators in a given market. Maybe even reasonably predict an Umlaut score (Hakan, don’t worry, I am not quite there … yet!). This said, while the dataset comprises 58+ operators across 16 markets, more data would be required to increase the confidence in benchmark predictions (if that is what one would like to do), in particular to predict absolute benchmark scores (e.g., voice, data, and crowd) as compiled by Umlaut. Speed benchmarks, à la what Ookla provides, are (much) easier to predict with much less sophistication (IMO).

Here I will just show my little toy model using the following rank data (using Jupyter R);

The rank dataset has 64 rows representing rank data and 5 columns containing (1) performance rank (perf_rank, the response), (2) total spectrum rank (spec_rank, predictor), (3) low-band spectrum rank (lo_spec_rank, predictor), (4) high-band spectrum rank (hi_spec_rank, predictor) and (5) Hz-per-customer rank (hz_cust_rank, predictor).

Concerning the predictor (or feature) Hz-per-customer, I am tracking all cellular operators’ so-called spectrum overhead, which indicates how much Hz can be assigned to a customer (obviously an over-simplification but nevertheless an indicator). Rank 1 means that there is a significant overhead, that is, a lot of spectral capacity per customer. Rank 4 has the opposite meaning: the spectral overhead is small, and we have less spectral capacity per customer. It is good to remember that this particular feature is usually dynamic even when the spectrum situation does not change for a given cellular operator (e.g., as traffic and customer numbers grow).

A (very) simple illustration of the “toy model” is shown below, choosing only low-band and high-band ranks as relevant predictors. Almost 60% of the network-benchmark rank can be explained by the low- and high-band ranks.
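For completeness, here is a sketch of what the toy model looks like in R; the file name is a placeholder, and the underlying rank data is not published here:

```r
# Rank dataset: 64 rows (16 markets x 4 ranked operators), 5 columns as described above.
# The file name below is a placeholder; substitute your own data source.
ranks <- read.csv("weu_operator_ranks.csv")
str(ranks)   # perf_rank, spec_rank, lo_spec_rank, hi_spec_rank, hz_cust_rank

# Simple linear model: performance rank explained by low- and high-band spectrum ranks
fit <- lm(perf_rank ~ lo_spec_rank + hi_spec_rank, data = ranks)
summary(fit)   # an R-squared of ~0.6, i.e., ~60% of the rank variation explained
```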

The model can, of course, be enriched by including more features, such as effective antenna-capability, Hz-per-Customer, Hz-per-Byte, Coverage KPI, Incident rates, Equipment Aging, Supplier, investment level (over last 2 – 3 years), etc… Given the ongoing debate of the importance of supplier to best networks (and their associated awards), I do not find a particularly strong correlation between RAN (incl. antenna) supplier, network performance, and benchmark rank. The total amount of deployed spectrum is a more important predictor. Of course, given the network performance formula above, if an antenna deployment delivers more effective spectral efficiency (or antenna “boost”) than competitors, it will increase the overall network quality for that operator. However, such an operator would still need to overcompensate the potential lack of spectrum compared to a spectrum-superior competitor.

END THOUGHTS.

Having the best cellular network in a market is something to be very proud of. Winning best network awards is obviously great for an operator and its employees. However, it should really mean that the customers of that best network operator also get the best cellular experience compared to any other operator in that market. A superior customer experience is key.

Firstly, the essential driver (enabler) for the best network or network leadership is having a superior spectrum position: in low-band, mid-band, and, longer-term, also in high-band (e.g., mmWave spectrum). The second is having a good coverage footprint across your market. An operator with a superior spectrum portfolio could even do with fewer cell sites than a competitor with an inferior spectrum position (who is forced to densify earlier due to spectral capacity limitations as traffic increases). For a spectrum laggard, building more cell sites to attempt to improve on or match a spectrum-superior competitor is costly (i.e., Capex, Opex, and time). Thirdly, having superior antenna technology deployed is essential. It is also a relatively “easy” way to catch up with a superior competitor, at least in the case of relatively minor spectrum position differences. Compared to buying additional spectrum (assuming such is available when you need it) or building out a substantial number of new cell sites to equalize a cellular performance difference, investing in the best (or better, or good-enough-to-win) antenna technology, particularly for a spectrum laggard, seems to be the best strategy: economically, relative to the other two options, and operationally, as the time-to-catch-up can be relatively short.

After all, this has been said and done, a superior cellular spectrum portfolio is one of the best predictors for having the best network and even winning the best network award.

Economically, it could imply that a spectrum-superior operator, depending on the spectrum distance to the next-best spectrum position in a given market, may not need to invest in the same level of antenna technology as an inferior operator or could delay such investments to a more opportune moment. This could be important, particularly as advanced antenna development is still at its “toddler” state, and more innovative, powerful (and economical) solutions are expected over the next few years. Though, for operators with relatively minor spectrum differences, the battle will be via the advancement of antenna technology and further cell site sectorization (as opposed to building new sites).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this Blog. Also, many of my Deutsche Telekom AG and Industry colleagues, in general, have in countless ways contributed to my thinking and ideas leading to this little Blog. Again, I would like to draw attention to Petr Ledl and his super-competent team in Deutsche Telekom’s Group Research & Trials. Thank you so much for being a constant inspiration and always being available to talk antennas and cellular tech in general.

FURTHER READINGS.

Spectrum Monitoring, “Global Mobile Frequencies Database”, the last update on the database was May 2021. You have a limited number of free inquiries before you will have to pay an affordable fee for access.

Umlaut, “Umlaut Benchmarking”, is an important resource for mobile (and fixed) network benchmarks across the world. The Umlaut benchmarking methodology is the de-facto industry standard today and is applied in more than 120 countries, measuring over 200 mobile networks worldwide. I have also made use of the associated Connect Testlab resource; www.connect-testlab.com. Most network benchmark data goes back to at least 2017. The Umlaut benchmark is based on in-country drive tests for voice and data as well as crowd-sourced data. It is, by a very big margin, the cellular network benchmark to use for ranking cellular operators (imo).

Speedtest (Ookla), “Global Index”, most recent data is Q3, 2021. There are three Western European markets for which I have not found any Umlaut (or P3, prior to 2020) benchmarks: Denmark, France and Norway. For those markets, I have (regrettably) had to use Ookla data, which is clearly not as rich as Umlaut’s (at least for public domain data).