The Telco Ascension to the Sky.

It’s 2045. Earth is green again, free from cellular towers and the terrestrial radiation of yet another G, no longer needed to justify endless telecom upgrades. Humanity’s communication needs have finally ascended to the sky, fully served by swarms of Low Earth Orbit (LEO) satellites.

Millions of mobile towers have vanished. No more steel skeletons cluttering skylines and nature in general. In their place: millions of beams from tireless LEO satellites, now whispering directly into our pockets from orbit.

More than 1,200 MHz of once terrestrially-bound cellular spectrum below the C-band had been uplifted to LEO satellites. Nearly 1,500 MHz between 3 and 6 GHz had likewise been liberated from its earthly confines, now aggressively pursued by the buzzing broadband constellations above.

It all works without a single modification to people’s beloved mobile devices. Everyone enjoys the same, or better, cellular service as in those wretched days of clinging to terrestrial-based infrastructure.

So, how did this remarkable transformation come about?

THE COVERAGE.

First, let’s talk about coverage. The chart below tells the story of orbital ambition through three very grounded curves. On the x-axis, we have the inclination angle, which is the degree to which your satellites are encouraged to tilt away from the equator to perform their job. On the y-axis: how much of the planet (and its people) they’re actually covering. The orange line gives us land area coverage. It starts low, as expected; tropical satellites don’t care much for Greenland. But as the inclination rises, so does their sense of duty to the extremes (the poles, that is). The yellow line represents population coverage, which grows faster than land, maybe because humans prefer to live near each other (or they like the scenery). By the time you reach ~53° inclination, you’re covering about 94% of humanity and 84% of land area. The dashed white line represents mobile cell coverage, the real estate of telecom towers. A constellation at 53° inclination would cover nearly 98% of all mobile site infrastructure, which makes it a good proxy for economic interest. It closely follows the population curve but adds just a bit of spice, reflecting urban density and tower sprawl.

This chart illustrates the cumulative global coverage achieved at varying orbital inclination angles for three key metrics: land area (orange), population (yellow), and estimated terrestrial mobile cell sites (dashed white). As inclination increases from equatorial (0°) to polar (90°), the percentage of global land and population coverage rises accordingly. Notably, population coverage reaches approximately 94% at ~53° inclination, a critical threshold for satellite constellations aiming to maximize global user reach without the complexity of polar orbits. The mobile cell coverage curve reflects infrastructure density and aligns closely with population distribution.
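For intuition, the latitude-band geometry behind these curves can be sketched in a few lines of Python. This is a simplification, assuming a constellation at inclination i uniformly serves all latitudes up to ±i and ignoring beam reach beyond the inclination; the article’s higher land and population percentages reflect how both cluster at low and mid latitudes rather than spreading uniformly over the sphere.

```python
import math

def surface_fraction_within_latitude(lat_deg: float) -> float:
    """Fraction of a sphere's surface between -lat and +lat.
    Band area = 4*pi*R^2*sin(lat); total area = 4*pi*R^2."""
    return math.sin(math.radians(lat_deg))

# A constellation at inclination i roughly serves |latitude| <= i.
for inc in (30, 53, 70, 90):
    frac = surface_fraction_within_latitude(inc)
    print(f"inclination {inc:2d} deg -> {frac:.0%} of Earth's surface")
# 53 deg -> ~80% of the surface, yet ~84% of land and ~94% of people
```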

The satellite constellation’s beams have replaced traditional terrestrial cells, providing a one-to-one coverage substitution. They not only replicate coverage in former legacy cellular areas but also extend service to regions that previously lacked connectivity due to low commercial priority from telecom operators. Today, over 3 million beams substitute obsolete mobile cells, delivering comparable service across densely populated areas. An additional 1 million beams have been deployed to cover previously unserved land areas, primarily rural and remote regions, using broader, lower-capacity beams with radii up to 10 kilometers. While these rural beams do not match the density or indoor penetration of urban cellular coverage, they represent a cost-effective means of achieving global service continuity, especially for basic connectivity and outdoor access in sparsely populated zones.

Conclusion? If you want to build a global satellite mobile network, you don’t need to orbit the whole planet. Just tilt your constellation enough to touch the crowded parts, and leave the tundra to the poets. However, this was also the “original sin” of LEO Direct-to-Cellular satellites.

THE DEMAND.

Although global mobile traffic growth slowed notably after the early 2020s, and the terrestrial telecom industry drifted toward its “end of history” moment, the orbital network above inherited a double burden. Not only did satellite constellations need to deliver continuous, planet-wide coverage, a milestone legacy telecoms had never reached despite millions of ground sites, but they also had to absorb globally converging traffic demands as billions of users crept steadily toward the throughput mean.

This chart shows the projected DL traffic across a full day (UTC), based on regions where the local time falls within the evening Busy Hour window (17:00–22:00) and that are within satellite coverage (minimum elevation ≥ 25°). The BH population is calculated hourly, taking into account time zone alignment and visibility, with a 20% concurrency rate applied. Each active user is assumed to consume 500 Mbps downlink in 2045. The peak reaches roughly 60,000 Tbps as the world’s most densely populated regions pass through their evening busy hours.
This chart shows the uplink traffic demand experienced across a full day (UTC), based on regions under Busy Hour conditions (17:00–22:00 local time) and visible to the satellite constellation (with a minimum elevation angle of 25°). For each UTC hour, the BH population within coverage is calculated using global time zone mapping. Assuming a 20% concurrency rate and an average uplink throughput of 50 Mbps per active user, the total UL traffic is derived. The resulting curve reflects how demand shifts in response to the Earth’s rotation beneath the orbital band. The peak reaches roughly 6,000 Tbps under the same busy-hour conditions.
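A minimal sketch of the demand model behind both charts, under the stated assumptions (17:00–22:00 busy window, 20% concurrency, 500/50 Mbps per active user). The busy-hour population input is hypothetical; the 600 million figure below is chosen only because it reproduces the ~60,000 Tbps downlink peak quoted later in the article.

```python
# Minimal sketch of the busy-hour traffic model described above.
# The busy-hour population per UTC hour is a hypothetical input: the number
# of covered people whose local time falls in the 17:00-22:00 window
# (elevation >= 25 deg). Real values come from time-zone/population grids.
CONCURRENCY = 0.20          # share of busy-hour users active at once
DL_PER_USER_BPS = 500e6     # 500 Mbps downlink per active user (2045 assumption)
UL_PER_USER_BPS = 50e6      # 50 Mbps uplink per active user

def traffic_tbps(bh_population: float, per_user_bps: float) -> float:
    """Aggregate demand in Tbps for one UTC hour."""
    return bh_population * CONCURRENCY * per_user_bps / 1e12

# Example: 600 million people in the busy-hour window at some UTC hour
print(traffic_tbps(600e6, DL_PER_USER_BPS))  # -> 60000.0 Tbps downlink
print(traffic_tbps(600e6, UL_PER_USER_BPS))  # -> 6000.0 Tbps uplink
```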

The radio access uplink architecture relies on low round-trip times for proper scheduling, timing alignment, and HARQ (Hybrid Automatic Repeat Request) feedback cycles. The propagation delay at 350 km yields a round-trip time of about 2.5 to 3 milliseconds, which falls within the bounds of what current specifications can accommodate. This is particularly important for latency-sensitive applications such as voice, video, and interactive services that require low jitter and reliable feedback mechanisms. In contrast, orbits at 550 km or above push latency closer to the edge of what NR protocols can tolerate, which could hinder performance or require non-standard adaptations.

The beam geometry also plays a central role. At lower altitudes, satellite beams projected to the ground are inherently smaller. This smaller footprint translates into tighter beam patterns with narrower 3 dB cut-offs, which significantly improves frequency reuse and spatial isolation. These attributes are important for deploying high-capacity networks in densely populated urban environments, where interference and spectrum efficiency are paramount. Narrower beams allow D2C operators to steer coverage toward demand centers while minimizing adjacent-beam interference dynamically.

Operating at 350 km is not without drawbacks. The satellite’s ground footprint at this altitude is smaller, meaning that more satellites are required to achieve full Earth coverage. Additionally, satellites at this altitude are exposed to greater atmospheric drag, resulting in shorter orbital lifespans unless they are equipped with more powerful or efficient propulsion systems to maintain altitude. The current design aims for a 5-year orbital lifespan. Despite this, the shorter lifespan has an upside, as it reduces the long-term risks of space debris. Deorbiting occurs naturally and quickly at lower altitudes, making the constellation more sustainable in the long term.
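The altitude-latency argument is easy to verify with basic geometry. The sketch below computes the slant range and two-way propagation delay for a user at a given elevation angle (processing and queueing delays excluded); the article’s 2.5–3 ms figure for 350 km sits between the zenith and mid-elevation values.

```python
import math

R_EARTH_KM = 6371.0
C_KM_S = 299_792.458

def slant_range_km(alt_km: float, elev_deg: float) -> float:
    """Distance from a ground user to the satellite at a given elevation angle."""
    e = math.radians(elev_deg)
    r = R_EARTH_KM + alt_km
    return math.sqrt(r**2 - (R_EARTH_KM * math.cos(e))**2) - R_EARTH_KM * math.sin(e)

def rtt_ms(alt_km: float, elev_deg: float) -> float:
    """Two-way propagation delay, ignoring processing and queueing."""
    return 2 * slant_range_km(alt_km, elev_deg) / C_KM_S * 1e3

for alt in (350, 550):
    print(f"{alt} km: RTT {rtt_ms(alt, 90):.1f} ms at zenith, "
          f"{rtt_ms(alt, 25):.1f} ms at 25 deg elevation")
# 350 km: RTT 2.3 ms at zenith, 5.0 ms at 25 deg elevation
# 550 km: RTT 3.7 ms at zenith, 7.5 ms at 25 deg elevation
```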

THE CONSTELLATION.

The satellite-to-cellular infrastructure has now fully matured into a global-scale system capable of delivering mobile broadband services that are not only on par with, but in many regions surpass, the performance of terrestrial cellular networks. At its core lies a constellation of low Earth orbit satellites operating at an altitude of 350 kilometers, engineered to provide seamless, high-quality indoor coverage for both uplink and downlink, even in densely urban environments.

To meet the evolving expectations of mobile users, each satellite beam delivers a minimum of 50 Mbps of uplink capacity and 500 Mbps of downlink capacity per user, ensuring full indoor quality even in highly cluttered environments. Uplink transmissions utilize the 600 MHz to 1800 MHz band, providing 1200 MHz of aggregated bandwidth. Downlink channels span 1500 MHz of spectrum, ranging from 2100 MHz to the upper edge of the C-band. At the network’s busiest hour (e.g., around 20:00 local time) across the most densely populated regions within ±53° latitude, the system supports a peak throughput of 60,000 Tbps for downlink and 6,000 Tbps for uplink. To guarantee reliability under real-world utilization, the system is engineered with a 25% capacity overhead, raising the design thresholds to 75,000 Tbps for DL and 7,500 Tbps for UL during peak demand.

Each satellite beam is optimized for high spectral efficiency, leveraging advanced beamforming, adaptive coding, and cutting-edge modulation. Under these conditions, downlink beams deliver 4.5 Gbps, while uplink beams, facing more challenging reception constraints, achieve 1.8 Gbps. Meeting the adjusted peak-hour demand requires approximately 16.7 million active DL beams and 4.2 million UL beams, amounting to over 20.8 million simultaneous beams concentrated over the peak demand region.
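The beam counts follow directly from the capacity targets; a quick arithmetic check using the figures above:

```python
# Back-of-envelope check of the beam counts quoted above.
DL_DEMAND_TBPS = 75_000     # peak DL demand incl. 25% overhead
UL_DEMAND_TBPS = 7_500      # peak UL demand incl. 25% overhead
DL_BEAM_GBPS = 4.5          # per-beam downlink capacity
UL_BEAM_GBPS = 1.8          # per-beam uplink capacity

dl_beams = DL_DEMAND_TBPS * 1e3 / DL_BEAM_GBPS   # ~16.7 million
ul_beams = UL_DEMAND_TBPS * 1e3 / UL_BEAM_GBPS   # ~4.2 million
print(f"{dl_beams/1e6:.1f}M DL beams, {ul_beams/1e6:.1f}M UL beams, "
      f"{(dl_beams + ul_beams)/1e6:.1f}M total")
# -> 16.7M DL beams, 4.2M UL beams, 20.8M total
```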

Thanks to significant advances in onboard processing and power systems, each satellite now supports up to 5,000 independent beams simultaneously. This capability reduces the number of satellites required to meet regional peak demand to approximately 4,200. These satellites are positioned over a region spanning an estimated 45 million square kilometers, covering the evening-side urban and suburban areas of the Americas, Europe, Africa, and Asia. This configuration yields a beam density of nearly 0.46 beams per square kilometer, equivalent to one active beam for every 2 square kilometers, densely overlaid to provide continuous, per-user, indoor-grade connectivity. In urban cores, beam radii are typically below 1 km, whereas in lower-density suburban and rural areas, the system adjusts by using larger beams without compromising throughput.

Because peak demand rotates longitudinally with the Earth’s rotation, only a portion of the entire constellation is positioned over this high-demand region at any given time. To ensure 4,200 satellites are always present over the region during peak usage, the total constellation comprises approximately 20,800 satellites, distributed across several hundred orbital planes. These planes are inclined and phased to optimize temporal availability, revisit frequency, and coverage uniformity while minimizing latency and handover complexity.
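The constellation size then falls out of the beam arithmetic plus one assumption, namely the share of the fleet over the demand region at any instant (roughly 20% here, implied by the 4,200-of-20,800 ratio in the text):

```python
beams_needed = 20.8e6          # simultaneous beams over the peak-demand region
beams_per_sat = 5_000
sats_over_region = beams_needed / beams_per_sat      # ~4,160, rounded to 4,200

# Assumption implied above: at any instant roughly 20% of the constellation
# sits over the rotating evening-side demand region.
duty_fraction = 0.20
total_sats = sats_over_region / duty_fraction        # ~20,800
print(round(sats_over_region), round(total_sats))
```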

The resulting Direct-to-Cellular satellite constellation and system of today is among the most ambitious communications infrastructures ever created. With more than 20 million simultaneous beams dynamically allocated across the globe, it has effectively supplanted traditional mobile towers in many regions, delivering reliable, high-speed, indoor-capable broadband connectivity precisely where and when people need it.

When Telcos Said ‘Not Worth It,’ Satellites Said ‘Hold My Beam.’ In the world of 2045, even the last village at the end of the dirt road streams at 500 Mbps. No tower in sight, just orbiting compassion and economic logic finally aligned.

THE SATELLITE.

The Cellular Device to Satellite Path.

The uplink antennas aboard the Direct-to-Cellular satellites have been specifically engineered to reliably receive indoor-quality transmissions from standard (unmodified) mobile devices operating within the 600 MHz to 1800 MHz band. Each device is expected to deliver a minimum of 50 Mbps uplink throughput, even when used indoors in heavily cluttered urban environments. This performance is made possible through a combination of wideband spectrum utilization, precise beamforming, and extremely sensitive receiving systems in orbit. The satellite uplink system operates across 1200 MHz of aggregated bandwidth (e.g., 60 channels of 20 MHz), spanning the upper UHF range and the L-band. Because uplink signals originate from indoor environments, where wall and structural penetration losses can exceed 20 dB, the satellite link budget must compensate for the combined effects of indoor attenuation and free-space propagation at a 350 km orbital altitude. At 600 MHz, the lowest frequency in the UL band, the free-space path loss alone is approximately 139 dB. When this is compounded with indoor clutter and penetration losses, the total attenuation the satellite must overcome reaches approximately 159 dB or more.
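For reference, these figures follow from the standard free-space path-loss formula; a quick check at nadir (the slant path at the 25° minimum elevation adds several dB more):

```python
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss: 32.45 + 20*log10(d_km) + 20*log10(f_mhz)."""
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

print(f"{fspl_db(350, 600):.1f} dB")   # ~138.9 dB at 600 MHz, 350 km (nadir)
print(f"{fspl_db(350, 600) + 20:.1f} dB incl. ~20 dB indoor penetration")
```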

Rather than specifying the antenna system at a mid-band frequency, such as 900 MHz, the system has been conservatively engineered for worst-case performance at 600 MHz. This design philosophy ensures that the antenna will meet or exceed performance requirements across the entire uplink band, with higher frequencies benefiting from naturally improved gain and narrower beamwidths. This choice guarantees that even the least favorable channels, those near 600 MHz, support reliable indoor-grade uplink service at 50 Mbps, with a minimum required SNR of 10 dB to sustain up to 16-QAM modulation. Achieving this level of performance at 600 MHz necessitated a large physical aperture. The uplink receive arrays on these satellites have grown to approximately 700 to 750 m² in area and are constructed using modular, lightweight phased-array tiles that unfold in orbit. This aperture size enables the satellite to achieve a receive gain of approximately 45 dBi at 600 MHz, which is essential for detecting low-power uplink transmissions with high spectral efficiency, even from users deep indoors and under cluttered conditions.
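The quoted gain is consistent with the aperture-gain relation $G = 4 \pi A / \lambda^2$, sketched here with an idealized (100%-efficient) aperture; real arrays lose a few dB to aperture efficiency:

```python
import math

C = 3e8  # speed of light, m/s

def gain_dbi(area_m2: float, f_hz: float, efficiency: float = 1.0) -> float:
    """Peak gain of an aperture antenna: G = eta * 4*pi*A / lambda^2."""
    lam = C / f_hz
    return 10 * math.log10(efficiency * 4 * math.pi * area_m2 / lam**2)

print(f"{gain_dbi(725, 600e6):.1f} dBi")  # ~45.6 dBi for ~725 m^2 at 600 MHz
```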

Unlike earlier systems, such as AST SpaceMobile’s BlueBird 1, launched in the mid-2020s with an aperture of around 900 m² and challenged by the need to acquire indoor uplink signals, today’s Direct-to-Cellular (D2C) satellites optimize the uplink and downlink arrays separately. This separation allows each aperture to be custom-designed for its frequency and link budget requirements. The uplink arrays incorporate wideband, dual-polarized elements, such as log-periodic or Vivaldi structures, backed by high-dynamic-range low-noise amplifiers and a distributed digital beamforming backend. Assisted by real-time AI beam management, each satellite can simultaneously support and track up to 2,500 uplink beams, dynamically allocating them across the active coverage region.

Despite their size, these receive arrays are designed for compact launch configurations and efficient in-orbit deployment. Technologies such as inflatable booms, rigidizable mesh structures, and ultralight composite materials allow the arrays to unfold into large apertures while maintaining structural stability and minimizing mass. Because these arrays are passive receivers, thermal loads are significantly lower than those of transmit systems. Heat generation is primarily limited to the digital backend and front-end amplification chains, which are distributed across the array surface to facilitate efficient thermal dissipation.

The Satellite to Cellular Device Path.

The downlink communication path aboard Direct-to-Cellular satellites is engineered as a fully independent system, physically and functionally separated from the uplink antenna. This separation reflects a mature architectural philosophy developed over decades of iteration. The downlink and uplink systems serve fundamentally different roles and operate across vastly different frequency bands, each with its own power, thermal, and antenna constraints. The downlink system operates in the frequency range from 2100 MHz up to the upper end of the C-band, typically around 4200 MHz. This is significantly higher than the uplink range, which extends from 600 to 1800 MHz. Due to this disparity in wavelength, a factor of seven between the lowest uplink and highest downlink frequencies, a shared aperture is neither practical nor efficient. It is widely accepted today that integrating transmit and receive functions into a single broadband aperture would compromise performance on both ends. Instead, today’s satellites utilize a dual-aperture approach, with the downlink antenna system optimized exclusively for high-frequency transmission and the uplink array designed independently for low-frequency reception.

In order to deliver 500 Mbps per user with full indoor coverage, each downlink beam must sustain approximately 4.5 Gbps, accounting for spectral reuse and beam overlap. At an orbital altitude of 350 kilometers, downlink beams must remain narrow, typically covering no more than a 1-kilometer radius in urban zones, to match uplink geometry and maintain beam-level concurrency. The antenna gain required to meet these demands is in the range of 50 to 55 dBi, which the satellites achieve using high-frequency phased arrays with a physical aperture of approximately 100 to 200 m². Because the downlink system is responsible for high-power transmission, the antenna tiles incorporate GaN-based solid-state power amplifiers (SSPAs), which deliver hundreds of watts per panel. This results in an overall effective isotropic radiated power (EIRP) of 50 to 60 dBW per beam, sufficient to reach deep indoor devices even at the upper end of the C-band. The power-intensive nature of the downlink system introduces thermal management challenges (described in a later section), which are addressed by physically isolating the transmit arrays from the receiver surfaces. The downlink and uplink arrays are positioned on opposite sides of the spacecraft bus or thermally decoupled through deployable booms and shielding layers.
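The 50–55 dBi figure can be sanity-checked with the same aperture-gain relation used for the uplink array, here for an assumed 150 m² aperture across the downlink band:

```python
import math

def gain_dbi(area_m2: float, f_hz: float) -> float:
    """Ideal aperture gain, G = 4*pi*A / lambda^2 (100% efficiency assumed)."""
    lam = 3e8 / f_hz
    return 10 * math.log10(4 * math.pi * area_m2 / lam**2)

for f in (2.1e9, 4.2e9):
    print(f"150 m^2 at {f/1e9} GHz -> {gain_dbi(150, f):.1f} dBi")
# 150 m^2 at 2.1 GHz -> ~49.7 dBi; at 4.2 GHz -> ~55.7 dBi
```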

The downlink beamforming is fully digital, allowing real-time adaptation of beam patterns, power levels, and modulation schemes. Each satellite can form and manage up to 2,500 independent downlink beams, which are coordinated with their uplink counterparts to ensure tight spatial and temporal alignment. Advanced AI algorithms help shape beams based on environmental context, usage density, and user motion, thereby further improving indoor delivery performance. The modulation schemes used on the downlink frequently reach 256-QAM and beyond, with spectral efficiencies of six to eight bits per second per Hz in favorable conditions.

The physical deployment of the downlink antenna varies by platform, but most commonly consists of front-facing phased-array panels or cylindrical surfaces fitted with azimuthally distributed tiles. Depending on the beam coverage strategy, these panels can be either fixed or mounted on articulated platforms that allow active directional steering during orbit, an arrangement also called gimballed.

No Bars? Not on This Planet. In 2045, even the polar bears will have broadband. When satellites replaced cell towers, the Arctic became just another neighborhood in the global gigabit grid.

Satellite System Architecture.

The Direct-to-Cellular satellites have evolved into high-performance, orbital base stations that far surpass the capabilities of early systems, such as AST SpaceMobile’s Bluebird 1 or SpaceX’s Starlink V2 Mini. These satellites are engineered not merely to relay signals, but to deliver full-featured indoor mobile broadband connectivity directly to standard handheld devices, anywhere on Earth, including deep urban cores and rural regions that have been historically underserved by terrestrial infrastructure.

As described earlier, today’s D2C satellite supports up to 5,000 simultaneous beams, enabling real-time uplink and downlink with mobile users across a broad frequency range. The uplink phased array, designed to capture low-power, deep-indoor signals at 600 MHz, occupies approximately 750 m². The DL array, optimized for high-frequency, high-power transmission, spans 150 to 200 m². Unlike early designs, such as Bluebird 1, which used a single, large combined antenna, today’s satellites separate the uplink and downlink arrays to optimize each for performance, thermal behavior, and mechanical deployment. These two systems are typically mounted on opposite sides of the satellite and thermally isolated from one another.

Thermal management is one of the defining challenges of this architecture. While AST’s Bluebird 1 (from the mid-2020s) boasted a large antenna aperture approaching 900 m², its internal systems generated significantly less heat. Bluebird 1 operated with a total power budget of approximately 10 to 12 kilowatts, primarily dedicated to a handful of downlink beams and limited onboard processing. In contrast, today’s D2C satellite requires a continuous power supply of 25 to 35 kilowatts, much of which must be dissipated as heat in orbit. This includes over 10 kilowatts of sustained RF power dissipation from the DL system alone, in addition to thermal loads from the digital beamforming hardware, AI-assisted compute stack, and onboard routing logic.

The key difference lies in beam concurrency and onboard intelligence. The satellite manages thousands of simultaneous, high-throughput beams, each dynamically scheduled and modulated using advanced schemes such as 256-QAM and beyond. It must also process real-time uplink signals from cluttered environments, allocate spectral and spatial resources, and make AI-driven decisions about beam shape, handovers, and interference mitigation. All of this requires a compute infrastructure capable of delivering 100 to 500 TOPS (tera-operations per second), distributed across radiation-hardened processors, neural accelerators, and programmable FPGAs. Unlike AST’s Bluebird 1, which offloaded most of its protocol stack to the ground, today’s satellites run much of the 5G core network onboard. This includes RAN scheduling, UE mobility management, and segment-level routing for backhaul and gateway links.

This computational load compounds the satellite’s already intense thermal environment. Passive cooling alone is insufficient. To manage thermal flows, the spacecraft employs large radiator panels located on its outer shell, advanced phase-change materials embedded behind the DL tiles, and liquid loop systems that transfer heat from the RF and compute zones to the radiative surfaces. These thermal systems are intricately zoned and actively managed, preventing the heat from interfering with the sensitive UL receive chains, which require low-noise operation under tightly controlled thermal conditions. The DL and UL arrays are thermally decoupled not just to prevent crosstalk, but to maintain stable performance in opposite thermal regimes: one dominated by high-power transmission, the other by low-noise reception.

To meet its power demands, the satellite utilizes a deployable solar sail array spanning 60 to 80 m². These sails are fitted with ultra-high-efficiency solar cells with conversion efficiencies of 30–35% or more, mounted on articulated booms that track the sun independently of the satellite’s Earth-facing orientation. They provide enough current to sustain continuous operation during daylight periods, while high-capacity batteries, likely based on lithium-sulfur or solid-state chemistry, handle nighttime and eclipse coverage. Compared to the Starlink V2 Mini, which generates around 2.5 to 3.0 kilowatts, and the Bluebird 1, which operates at roughly 10–12 kilowatts, today’s system requires nearly three times the generation and five times the thermal rejection capability of those mid-2020s satellites.
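A rough sizing check for the power system, assuming normal-incidence illumination at the LEO solar constant (real output is lower after pointing losses, eclipse fraction, and degradation):

```python
SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance in LEO, W/m^2
EFFICIENCY = 0.33            # assumed high-efficiency cell (30-35% range)

for area_m2 in (60, 80):
    p_kw = area_m2 * SOLAR_CONSTANT_W_M2 * EFFICIENCY / 1e3
    print(f"{area_m2} m^2 -> ~{p_kw:.0f} kW at normal incidence")
# 60 m^2 -> ~27 kW; 80 m^2 -> ~36 kW, bracketing the 25-35 kW demand
```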

Structurally, the satellite is designed to support this massive infrastructure. It uses a rigid truss core (i.e., lattice structure) with deployable wings for the DL system and a segmented, mesh-based backing for the UL aperture. Propulsion is provided by Hall-effect or ion thrusters, with 50 to 100 kilograms of inert propellant onboard to support three to five years of orbital station-keeping at an altitude of 350 kilometers. This height is chosen for its latency and spatial reuse advantages, but it also imposes continuous drag, requiring persistent thrust.

The AST Bluebird 1 may have appeared physically imposing in its time due to its large antenna, but in thermal, computational, and architectural complexity, today’s D2C satellite, 20 years later, far exceeds anything imagined two decades earlier. The heat generated by its massive beam concurrency, onboard processing, and integrated network core makes its thermal management system not only far more demanding than Bluebird 1’s but also one of the primary limiting factors in the satellite’s physical and functional design. This thermal constraint, in turn, shapes the layout of its antennas, compute stack, power system, and propulsion.

Mass and Volume Scaling.

AST’s Bluebird 1, launched in the mid-2020s, had a launch mass of approximately 1,500 kilograms. Its headline feature was a 900 m² unfoldable antenna surface, designed to support direct cellular connectivity from space. However, despite its impressive aperture, the system was constrained by limited beam concurrency, modest onboard computing power, and a reliance on terrestrial cores for most network functions. The bulk of its mass was dominated by structural elements supporting its large antenna surface and the power and thermal subsystems required to drive a relatively small number of simultaneous links. Bluebird’s propulsion was chemical, optimized for initial orbit raising and limited station-keeping, and its stowed volume fit comfortably within standard medium-lift payload fairings.

Starlink’s V2 Mini, although smaller in physical aperture, featured a more balanced and compact architecture. Weighing roughly 800 kilograms at launch, it was designed around high-throughput broadband rather than direct-to-cellular use. Its phased-array antenna surface was closer to 20–25 m², and it was optimized for efficient manufacturing and high-density orbital deployment. The V2 Mini’s volume was tightly packed, with solar panels, phased arrays, and propulsion modules folded into a relatively low-profile bus optimized for rapid deployment and low-cost launch stacking. Its onboard compute and thermal systems were scaled to match its more modest power budget, which typically hovered around 2.5 to 3.0 kilowatts.

In contrast, today’s satellites occupy an entirely new performance regime. The dry mass of the satellite ranges between 2,500 and 3,500 kilograms, depending on specific configuration, thermal shielding, and structural deployment method. This accounts for its large deployable arrays, high-density digital payload, radiator surfaces, power regulation units, and internal trusses. The wet mass, including onboard fuel reserves for at least 5 years of station-keeping at 350 km altitude, increases by up to 800 kilograms, depending on the propulsion type (e.g., Hall-effect or gridded ion thrusters) and orbital inclination. This brings the total launch mass to approximately 3,000 to 4,500 kilograms, more than double AST’s old Bluebird 1 and roughly five times that of SpaceX’s Starlink V2 Mini.

Volume-wise, the satellites require a significantly larger stowed configuration than either AST’s Bluebird 1 or SpaceX’s Starlink V2 Mini. Both of those earlier systems were designed to fit within traditional launch fairings: Bluebird 1 utilized a folded hinge-based boom structure, and Starlink V2 Mini was optimized for ultra-compact stacking. Today’s satellite demands next-generation fairing geometries, such as 5-meter-class launchers or dual-stack configurations. This is driven by the dual-antenna architecture and radiator arrays, which, although cleverly folded during launch, expand dramatically once deployed in orbit. In its operational configuration, the satellite spans tens of meters across its antenna booms and solar sails. The uplink array, built as a lightweight, mesh-backed surface supported by rigidizing frames or telescoping booms, unfolds to a diameter of approximately 30 to 35 meters, substantially larger than Bluebird 1’s ~20–25 meter maximum span and far beyond the roughly 10-meter unfolded span of Starlink V2 Mini. The downlink panels, although smaller, are arranged for precise gimballed orientation (i.e., a pivoting mechanism allowing rotation or tilt along one or more axes) and integrated thermal control, which further expands the total deployed volume envelope. The volumetric footprint of today’s D2C satellite is not only larger in surface area but also more spatially complex, as its segregated UL and DL arrays, thermal zones, and solar wings must avoid interference while maintaining structural and thermal equilibrium, compared to the simplified flat-pack layout of Starlink V2 Mini and the monolithic boom-deployed design of Bluebird 1.

The increase in dry mass, wet mass, and deployed volume is not a byproduct of inefficiency, but a direct result of the very substantial performance improvements required to replace terrestrial mobile towers with orbital systems. Today’s D2C satellites deliver an order of magnitude more beam concurrency, spectral efficiency, and per-user performance than their 2020s predecessors. This is reflected in every subsystem, from power generation and antenna design to propulsion, thermal control, and computing. As such, they represent the emergence of a new class of satellite altogether: not merely a space-based relay or broadband node, but a full-featured, cloud-integrated orbital RAN platform capable of supporting the global cellular fabric from space.

CAN THE FICTION BECOME A REALITY?

From the perspective of 2025, the vision of a global satellite-based mobile network providing seamless, unmodified indoor connectivity at terrestrial-grade uplink and downlink rates, 50 Mbps up, 500 Mbps down, appears extraordinarily ambitious. The technical description from 2045 outlines a constellation of 20,800 LEO satellites, each capable of supporting 5,000 independent full-duplex beams across massive bandwidths, while integrating onboard processing, AI-driven beam control, and a full 5G core stack. To reach such a mature architecture within two decades demands breakthrough progress across multiple fronts.

The most daunting challenge lies in achieving indoor-grade cellular uplink at frequencies as low as 600 MHz from devices never intended to communicate with satellites. Today, even powerful ground-based towers struggle to achieve sub-1 GHz uplink coverage inside urban buildings. For satellites at an altitude of 350 km, the free-space path loss alone at 600 MHz is approximately 139 dB. When combined with clutter, penetration, and polarization mismatches, the system must close a link budget approaching 159–165 dB, from a smartphone transmitting just 23 dBm (200 mW) or less. No satellite today, including AST SpaceMobile’s BlueBird 1, has demonstrated indoor uplink reception at this scale or consistency. To overcome this, the proposed system assumes deployable uplink arrays of 750 m² with gain levels exceeding 45 dBi, supported by thousands of simultaneously steerable receive beams and ultra-low-noise front-end receivers. From a 2025 lens, the mechanical deployment of such arrays, their thermal stability, calibration, and mass management pose nontrivial risks. Today’s large phased arrays are still in their infancy in space, and adaptive beam tracking from fast-moving LEO platforms remains unproven at the required scale and beam density.
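To make the difficulty concrete, here is a hypothetical single-carrier uplink budget under the assumptions above (0 dBi handset antenna, 20 MHz channel, 2 dB receiver noise figure; all values illustrative, not a demonstrated design). It lands close to, but below, the 10 dB SNR target for 16-QAM, before polarization mismatch and slant-path losses are even counted, which is exactly why the uplink is the binding constraint:

```python
import math

# Hypothetical uplink budget at 600 MHz, 350 km, deep indoor (all values assumed)
P_TX_DBM = 23.0          # handset maximum transmit power
G_TX_DBI = 0.0           # handset antenna gain (assumed)
G_RX_DBI = 45.6          # ~725 m^2 satellite aperture at 600 MHz
LOSS_DB = 139.0 + 20.0   # nadir FSPL + ~20 dB indoor penetration/clutter

p_rx_dbm = P_TX_DBM + G_TX_DBI + G_RX_DBI - LOSS_DB        # ~ -90.4 dBm

BW_HZ, NF_DB = 20e6, 2.0
noise_dbm = -174 + 10 * math.log10(BW_HZ) + NF_DB          # ~ -99.0 dBm

print(f"SNR ~ {p_rx_dbm - noise_dbm:.1f} dB")              # ~ 8.6 dB
```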

Thermal constraints are also vastly more complex than anything currently deployed. Supporting 5,000 simultaneous beams and radiating tens of kilowatts from compact platforms in LEO requires heat rejection systems that go beyond current radiator technology. Passive radiators must be supplemented with phase-change materials, active fluid loops, and zoned thermal isolation to prevent transmit arrays from degrading the performance of sensitive uplink receivers. This represents a significant leap from today’s satellites, such as Starlink V2 Mini (~3 kW) or BlueBird 1 (~10–12 kW), neither of which operates with a comparable beam count, throughput, or antenna scale.

The required onboard compute is another monumental leap. Running thousands of simultaneous digital beams, performing real-time adaptive beamforming, spectrum assignment, HARQ scheduling, and AI-driven interference mitigation, all on-orbit and without ground-side offloading, demands 100–500 TOPS of radiation-hardened compute. This is far beyond anything flying in 2025. Even state-of-the-art military systems rely heavily on ground computing and centralized control. The 2045 vision implies on-orbit autonomy, local decision-making, and embedded 5G/6G core functionality within each spacecraft, a full software-defined network node in orbit. Realizing such a capability requires not only next-gen processors but also significant progress in space-grade AI inference, thermal packaging, and fault tolerance.

On the power front, generating 25–35 kW per satellite in LEO using 60–80 m² solar sails pushes the boundary of photovoltaic technology and array mechanics. High-efficiency solar cells must achieve conversion rates of 30–35% or more, while battery systems must maintain high discharge capacity even in complete darkness. Space-based power architectures today are not yet built for this level of sustained output and thermal dissipation.

Even if the individual satellite challenges are solved, the constellation architecture presents another towering hurdle. Achieving seamless beam handover, full spatial reuse, and maintaining beam density over demand centers as the Earth rotates demands near-perfect coordination of tens of thousands of satellites across hundreds of planes. No current LEO operator (including SpaceX) manages a constellation of that complexity, beam concurrency, or spatial density. Furthermore, scaling the manufacturing, testing, launch, and in-orbit commissioning of over 20,000 high-performance satellites will require significant cost reductions, increased factory throughput, and new levels of autonomous deployment.

Regulatory and spectrum allocation are equally formidable barriers. The vision entails the massively complex undertaking of a global reallocation of terrestrial mobile spectrum, particularly in the sub-3 GHz bands, to LEO operators. As of 2025, such a reallocation is politically and commercially fraught, with entrenched mobile operators and national regulators unlikely to cede prime bands without extensive negotiation, incentives, and global coordination. The use of 600–1800 MHz from orbit for direct-to-device is not yet globally harmonized (and may never be), and existing terrestrial rights would need to be either vacated or managed via complex sharing schemes.

From a market perspective, widespread device compatibility without modification implies that standard mobile chipsets, RF chains, and antennas evolve to handle Doppler compensation, extended RTT timing budgets, and tighter synchronization tolerances. While this is not insurmountable, it requires updates to 3GPP standards, baseband silicon, and potentially network registration logic, all of which must be implemented without degrading terrestrial service. Although NTN (non-terrestrial networks) support has begun to emerge in 5G standards, the level of transparency and ubiquity envisioned in 2045 is not yet backed by practical deployments.

While the 2045 architecture described so far assumes a single unified constellation delivering seamless global cellular service from orbit, the political and commercial realities of space infrastructure in 2025 strongly suggest a fragmented outcome. It is unlikely that a single actor, public or private, will be permitted, let alone able, to monopolize the global D2C landscape. Instead, the most plausible trajectory is a competitive and geopolitically segmented orbital environment, with at least one major constellation originating from China (note: I think it is quite likely we may see two major ones), another from the United States, a possible second US-based entrant, and potentially a European-led system aimed at securing sovereign connectivity across the continent.

This fracturing of the orbital mobile landscape imposes a profound constraint on the economic and technical scalability of the system. The assumption that a single constellation could achieve massive economies of scale, producing, launching, and managing tens of thousands of high-performance satellites with uniform coverage obligations, begins to collapse under the weight of geopolitical segmentation. Each competitor must now shoulder its own development, manufacturing, and deployment costs, with limited ability to amortize those investments over a unified global user base.

Moreover, such duplication of infrastructure risks saturating orbital slots and spectrum allocations, while reducing the density advantage that a unified system would otherwise enjoy. Instead of concentrating thousands of active beams over a demand zone with a single coordinated fleet, separate constellations must compete for orbital visibility and spectral access over the same urban centers. The result is likely to be a decline in per-satellite utilization efficiency, particularly in regions of geopolitical overlap or contested regulatory coordination.

2045: One Vision, Many Launch Pads. The dream of global satellite-to-cellular service may shine bright, but it won’t rise from a single constellation. With China, the U.S., and others racing skyward, the economics of universal LEO coverage could fracture into geopolitical silos, making scale, spectrum, and sustainability more contested than ever.

Finally, the commercial viability of any one constellation diminishes when the global scale is eroded. While a monopoly or globally dominant operator could achieve lower per-unit satellite costs, higher average utilization, and broader roaming revenues, a fractured environment reduces ARPU (average revenue per user) and increases the breakeven threshold for each deployment. Satellite throughput that could have been centrally optimized now risks duplication and redundancy, increasing operational overhead and potentially slowing innovation as vendors attempt to differentiate on proprietary terms. In this light, the architecture described earlier must be seen as an idealized vision. This convergence point may never be achieved in pure form unless global policy, spectrum governance, and commercial alliances move toward more integrated outcomes. While the technological challenges of the 2045 D2C system are significant, the fragmentation of market structure and geopolitical alignment may prove an equally formidable barrier to realizing the full systemic potential.

Heavenly Coverage, Hellish Congestion. Even a single mega-constellation turns the sky into premium orbital real estate … and that’s before the neighbors show up with their own fleets. Welcome to the era of broadband traffic … in space.

Despite these barriers, incremental paths forward exist. Demonstration satellites in the late 2020s, followed by regional commercial deployments in the early 2030s, could provide real-world validation. The phased evolution of spectrum use, dual-use handsets, and AI-assisted beam management may mitigate some of the scaling concerns. Regulatory alignment may emerge as rural and unserved regions increasingly depend on space-based access. Ultimately, the achievement of the 2045 architecture relies not only on engineering but also on sustained cross-industry coordination, geopolitical alignment, and commercial viability on a planetary scale. As of 2025, the probability of realizing the complete vision by 2045, in terms of indoor-grade, direct-to-device service via a fully orbital mobile core, is perhaps 40–50%, with a higher probability (~70%) for achieving outdoor-grade or partially integrated hybrid services. The coming decade will reveal whether the industry can fully solve the unique combination of thermal, RF, computational, regulatory, and manufacturing challenges required to replace the terrestrial mobile network with orbital infrastructure.

POSTSCRIPT – THE ECONOMICS.

The Direct-to-Cellular satellite architecture described in this article would reshape not only the technical landscape of mobile communications but also its economic foundation. The very premise of delivering mobile broadband directly from space, bypassing terrestrial towers, fiber backhaul, and urban permitting, undermines one of the most entrenched capital systems of the 20th and early 21st centuries: the mobile infrastructure economy. Once considered irreplaceable, the sprawling ecosystem of rooftop leases, steel towers, field operations, base stations, and fiber rings has been gradually rendered obsolete by a network that floats above geography.

The financial implications of such a shift are enormous. Before the orbital transition described in this article, the global mobile industry invested well over 300 billion USD annually in network CapEx and OpEx, with a large share dedicated to the site infrastructure layer: construction, leasing, energy, security, and upkeep of millions of base stations and their associated land or rooftop assets. Tower companies alone have become multi-billion-dollar REITs (i.e., Real Estate Investment Trusts), profiting from site tenancy and long-term operating contracts. As of the mid-2020s, the global value tied up in the telecom industry’s physical infrastructure is estimated to exceed 2.5 to 3 trillion USD, with tower companies like Cellnex and American Tower collectively managing hundreds of billions of dollars in infrastructure assets. An estimated 300–500 billion USD invested in mobile infrastructure represents approximately 0.75% to 1.5% of total global pension assets and accounts for 15% to 30% of pension fund infrastructure investments. This real estate-based infrastructure model defined mobile economics for decades and has generally been regarded as a reasonably safe haven for investors.

In contrast, the 2045 D2C model front-loads its capital burden into satellite manufacturing, launch, and orbital operations. Rather than being geographically bound, capital is concentrated into a fleet of orbital base stations, each capable of dynamically serving users across vast and shifting geographies. This not only eliminates the need for millions of distributed cell sites, but it also breaks the historical tie between infrastructure deployment and national geography. Coverage no longer scales with trenching crews or urban permitting delays but with orbital plane density and beamforming algorithms.

Yet, such a shift does not necessarily mean lower cost, only different economics. Launching and operating tens of thousands of advanced satellites, each capable of supporting thousands of beams and running onboard compute environments, still requires massive capital outlay and ongoing expenditures in space traffic management, spectrum coordination, ground gateways, and constellation replenishment. The difference lies in utilization and marginal reach. Where terrestrial infrastructure often struggles to achieve ROI in rural or low-income markets, orbital systems serve these zones as part of the same beam budget, with no new towers or trenches required.

Importantly, the 2045 model would likely collapse the mobile value chain. Instead of a multi-layered system of operators, tower owners, fiber wholesalers, and regional contractors, a vertically integrated satellite operator can now deliver the full stack of mobile service from orbit, owning the user relationship end-to-end. This disintermediation has significant implications for revenue distribution and regulatory control, and challenges legacy operators to either adapt or exit.

The scale of economic disruption mirrors the scale of technical ambition. This transformation could rewrite the very economics of connectivity. While the promise of seamless global coverage, zero tower density, and instant-on mobility is compelling, it may also signal the end of mobile telecom as a land-based utility.

If this little science fiction story comes true, and there are many good and bad reasons to doubt it, Telcos may not Ascend to the Sky, but take the Stairway to Heaven.

Graveyard of the Tower Titans. This symbolic illustration captures the end of an era, depicting headstones for legacy telecom giants such as American Tower, Crown Castle, and SBA Communications, as well as the broader REIT (Real Estate Investment Trust) infrastructure model that once underpinned the terrestrial mobile network economy. It serves as a metaphor for the systemic shift brought on by Direct-to-Cellular (D2C) satellite networks. What’s fading is not only the mobile tower itself, but also the vast ancillary industry that has grown around it, including power systems, access rights, fiber-infrastructure, maintenance firms, and leasing intermediaries, as well as the telecom business model that relied on physical, ground-based infrastructure. As the skies take over the signal path, the economic pillars of the old telecom world may no longer stand.

FURTHER READING.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomyblog.

Kim K. Larsen, “Can LEO Satellites close the Gigabit Gap of Europe’s Unconnectables?”, Techneconomyblog (April 2025).

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future”, Techneconomyblog (March 2024).

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?

THE POST-TOWER ERA – A FAIRYTALE.

From the bustling streets of New York to the remote highlands of Mongolia, the skyline had visibly changed. Where steel towers and antennas once dominated now stood open spaces and restored natural ecosystems. Forests reclaimed their natural habitats, and birds nested in trees undisturbed by the scarring presence of tall rural cellular towers. This transformation was not sudden but resulted from decades of progress in satellite technology, growing demand for ubiquitous connectivity, an increasingly urgent need to address the environmental footprint of traditional telecom infrastructure, and the economic need to dramatically reduce the operational expenses tied up in tower infrastructure. By the time the last cell site was decommissioned, society stood at the cusp of a new age of connectivity, served by LEO satellites covering all of Earth.

The annual worldwide savings from making terrestrial cellular towers obsolete are estimated to amount to at least 300 billion euros in total cost, and moving cellular access to “heaven” is expected to avoid more than 150 million metric tons of CO2 emissions annually. The retirement of all terrestrial cellular networks worldwide has been like eliminating the entire carbon footprint of the Netherlands or Malaysia, and it has dramatically reduced the demand on the sustainable green energy sources that previously powered the global cellular infrastructure.

INTRODUCTION.

Recent postings and a substantial share of the commentary give the impression that we are heading towards a post-tower era where Elon Musk’s Low Earth Orbit (LEO) satellite Starlink network (together with competing options, e.g., AST SpaceMobile and Lynk, and no, I do not see Amazon’s Project Kuiper in this space) will make terrestrially-based tower infrastructure and earth-bound cellular services obsolete.

T-Mobile USA is launching its Direct-to-Cell (D2C) service via SpaceX’s Starlink LEO satellite network. The T-Mobile service is designed to work with existing LTE-compatible smartphones, allowing users to connect to Starlink satellites without needing specialized hardware or smartphone applications.

Since the announcement, posts and media coverage have declared the imminent death of the terrestrial cellular network. When it is pointed out that this may be a premature death sentence for an industry, its telecom operators, and their existing cellular mobile networks, it is not uncommon to be told off as too pessimistic and an unbeliever in Musk’s genius vision. Musk has on occasion made it clear that the Starlink D2C service is aimed at texts and voice calls in remote and rural areas, and, to be honest, the D2C service currently hinges on 2×5 MHz in T-Mobile’s PCS band, constraining the “broadbandedness” of the service. The fact that the service doesn’t match the best of T-Mobile US’s 5G network quality (e.g., 205+ Mbps downlink) or even get near its 4G speeds should really not bother anyone, as the value of the D2C service is that it is available in remote and rural areas with little to no terrestrial cellular coverage and that you can use your regular cellular device with no need for a costly satellite service and satphone (e.g., Iridium, Thuraya, Globalstar).

While I don’t expect to (or even want to) change people’s beliefs, I do think it would be great to contribute to more knowledge and insights based on facts about what is possible with low-earth orbiting satellites as a terrestrial substitute and what is uninformed or misguided opinion.

The rise of LEO satellites has sparked discussions about the potential obsolescence of terrestrial cellular networks. With advancements in satellite technology and increasing partnerships, such as T-Mobile’s collaboration with SpaceX’s Starlink, proponents envision a future where towers are replaced by ubiquitous connectivity from the heavens. However, the feasibility of LEO satellites achieving service parity with terrestrial networks raises significant technical, economic, and regulatory questions. This article explores the challenges and possibilities of LEO Direct-to-Cell (D2C) networks, shedding light on whether they can genuinely replace ground-based cellular infrastructure or will remain a complementary technology for specific use cases.

WHY DISTANCE MATTERS.

The distance between you (your cellular device) and the base station’s antenna determines your expected service experience in cellular and wireless networks. In general, the farther you are from the base station that serves you, the poorer your connection quality and performance will be, everything else being equal. As the distance increases, signal attenuation (i.e., path loss) grows rapidly, with the square of the distance in free space, reducing signal quality and making it harder for devices to maintain reliable communication. Closer proximity allows for stronger, faster, and more stable connections, while longer distances require more power and advanced technologies like beamforming or repeaters to compensate.

Physics tells us that a signal loses strength (or power) with the square of the distance from its source (either the base station transmitter or the consumer device). This applies universally to all electromagnetic waves traveling in free space. Free space means that there are no obstacles, reflections, or scattering: no terrain features, buildings, or atmospheric conditions interfere with the propagating signal.

So, what matters to the Free Space Path Loss (FSPL), that is, the signal loss over a given distance in free space?

  • The signal strength reduces (the path loss increases) with the square of the distance (d) from its source.
  • Path loss increases (i.e., signal strength decreases) with the (square of the) frequency (f). The higher the frequency, the higher the path loss at a given distance from the signal source.
  • A larger transmit antenna aperture reduces the path loss by focusing the transmitted signal (energy) more efficiently. An antenna aperture is an antenna’s “effective area” that captures or transmits electromagnetic waves. It is directly proportional to the antenna gain and inversely proportional to the square of the signal frequency (i.e., higher frequency → smaller aperture).
  • Higher receiver gain will also reduce the path loss.

$PL_{FS} \; = \; \left( \frac{4 \pi}{c} \right)^2 (d \; f)^2 \; \propto d^2 \; f^2$

$$FSPL_{dB} \; = 10 \; Log_{10} (PL_{FS}) \; = \; 20 \; Log_{10}(d) \; + \; 20 \; Log_{10}(f) \; + \; constant$$

The above equations show a strong dependency on distance; the farther away, the larger the signal loss, and the higher the frequency, the larger the signal loss. Relaxing some of the assumptions leading to the above relationship leads us to the following:

$FSPL_{dB}^{rs} \; = \; 20 \; Log_{10}(d) \; - \; 10 \; Log_{10}(A_t^{eff}) \; - \; 10 \; Log_{10}(G_{r}) \; + \; constant$

The last of the above equations introduces the transmitter’s effective antenna aperture (\(A_t^{eff}\)) and the receiver’s gain (\(G_r\)), telling us that larger apertures reduce path loss as they focus the transmitted energy more efficiently and that higher receiver gain likewise reduces the path loss (i.e., “they hear better”).

It is worth remembering that the transmitter antenna aperture is directly tied to the transmitter gain ($G_t$) when the frequency (f) has been fixed. We have

$A_t^{eff} \; = \; \frac{c^2}{4\pi} \; \frac{1}{f^2} \; G_t \; = \; 0.000585 \; m^2 \; G_t \;$ @ f = 3.5 GHz.

From the above, as an example, it is straightforward to see that the relative path loss difference between the two distances of 550 km (e.g., typical altitude of an LEO satellite) and 2.5 km (typical terrestrial cellular coverage range) is

$\frac{PL_{FS}(550 km)}{PL_{FS}(2.5 km)} \; = \; \left( \frac {550}{2.5}\right)^2 \; = \; 220^2 \; \approx \; 50$ thousand. So if all else was equal (it isn’t, btw!), we would expect that the signal loss at a distance of 550 km would be 50 thousand times higher than at 2.5 km. Or, in the electrical engineer’s language, at a distance of 550 km, the loss would be 47 dB higher than at 2.5 km.
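The same arithmetic in a couple of lines of Python, for anyone who wants to vary the distances:

```python
import math

d_leo_km, d_cell_km = 550, 2.5
ratio = (d_leo_km / d_cell_km) ** 2          # 220^2 = 48,400 (~50 thousand)
print(f"x{ratio:,.0f} -> {10 * math.log10(ratio):.1f} dB")   # ~46.8 dB
```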

The figure illustrates the difference between (a) terrestrial cellular and (b) satellite coverage. A terrestrial cellular signal typically covers a radius of 0.5 to 5 km. In contrast, a LEO satellite signal travels a substantial distance to reach Earth (e.g., Starlink satellite is at an altitude of 550 km). While the terrestrial signal propagates through the many obstacles it meets on its earthly path, the satellite signal’s propagation path would typically be free-space-like (i.e., no obstacles) until it penetrates buildings or other objects to reach consumer devices. Historically, most satellite-to-Earth communication has relied on outdoor ground stations or dishes where the outdoor antenna on Earth provides LoS to the satellite and will also compensate somewhat for the signal loss due to the distance to the satellite.

Let’s compare a terrestrial 5G 3.5 GHz advanced antenna system (AAS) 2.5 km from a receiver with a LEO satellite system at an altitude of 550 km. Note I could have chosen a lower frequency, e.g., 800 MHz or the PCS 1900 band. While it would give me some advantages regarding path loss (i.e., $FSPL \; \propto \; f^2$), the available bandwidth is rather smallish and insufficient for state-of-the-art 5G services (imo!). From a free-space path loss perspective, independently of frequency, we need to overcome an almost 50 thousand times relative difference in distance squared (ca. 47 dB difference) in favor of the terrestrial system. In this comparison, it should be understood that the terrestrial and the satellite systems use the same carrier frequency (otherwise, one should account for the difference in frequency), and the only difference that matters (for the FSPL) is the difference in distance to the receiver.

Suppose I require that my satellite system has the same signal loss in terms of FSPL as my terrestrial system, aiming at a comparable quality of service level. In that case, I have several options in terms of satellite enhancements. I could increase the transmit power, although this would imply a transmit power 47 dB higher than the terrestrial system’s, or approximately 48 kW, which is likely impractical for the satellite due to power limitations. Compare this with the current Starlink transmit power of approximately 32 W (45 dBm), ca. 1,500 times lower. Alternatively, I could (in theory!) increase my satellite antenna aperture, leading to a satellite antenna with a diameter of ca. 250 meters, which is enormous compared to current satellite antennas (e.g., Starlink’s ca. 0.05 m² aperture for a single antenna and a total area in the order of 1.6 m² for the Ku/Ka bands). Finally, I could (super theoretically) massively improve the receive gain of my consumer device (e.g., smartphone) by 47 dB from today’s range of -2 dBi to +5 dBi. Achieving a gain of around 46 dBi in a smartphone receiver seems unrealistic due to size, power, and integration constraints. As the target of LEO satellite direct-to-cell services is to support commercially available cellular devices used in terrestrial networks, only the satellite specifications can be optimized.
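To see where the ~48 kW and ~250 m figures come from, below is a small sketch under loudly stated assumptions: a ~1 W terrestrial reference transmit power and a ~1 m² baseline effective satellite aperture. Both baselines are my own illustrative assumptions, not satellite specifications:

```python
import math

# Assumptions for illustration only: ~1 W terrestrial reference power and a
# ~1 m^2 baseline effective satellite aperture (not actual satellite specs).
deficit_db = 47.0                    # the 550 km vs. 2.5 km FSPL penalty from above
factor = 10 ** (deficit_db / 10)     # ≈ 50,000 in linear terms

# Option 1: brute-force transmit power.
p_tx_w = 1.0 * factor                # ≈ 50 kW (≈ 48 kW with the exact 48,400 ratio)

# Option 2: grow the aperture; effective area scales linearly with gain.
area_m2 = 1.0 * factor
diameter_m = 2 * math.sqrt(area_m2 / math.pi)   # ≈ 250 m

print(f"power ≈ {p_tx_w/1e3:.0f} kW, aperture diameter ≈ {diameter_m:.0f} m")
```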

Based on a simple free-space approach, it appears unreasonable that an LEO satellite communication system could provide 5G services to normal (unmodified) 5G consumer devices at parity with a terrestrial cellular network without extraordinary satellite-side enhancements. The satellite system’s requirements for parity with a terrestrial communications system are impractical (though not impossible) and, if pursued, would significantly drive up design complexity and cost, likely making such a system highly uneconomical.

At this point, you should ask yourself whether it is reasonable to treat a terrestrial cellular signal as propagating in a “free-space”-like environment, where obstacles, reflections, and scattering are ignored. Is it really okay to presume that terrain features, buildings, or atmospheric conditions do not interfere with the propagation of the terrestrial cellular signal? Of course, the answer is that it is not okay. With that in mind, let’s see how much it matters compared to the LEO satellite path loss.

TERRESTRIAL CELLULAR PROPAGATION IS NOT HAPPENING IN FREE SPACE, AND NEITHER IS A SATELLITE’S.

The Free-Space Path Loss (FSPL) formula assumes ideal conditions where signals propagate in free space without interference, blockage, or degradation beyond the natural attenuation of traveling a given distance. However, as we all experience daily, real-world environments introduce additional factors, such as obstructions, multipath effects, clutter loss, and environmental conditions, necessitating corrections to the FSPL approach. Moving from one room of our house to another can easily change the cellular quality we experience (e.g., dropped calls, poorer voice quality, lower speed, falling back from 5G to 4G or even 2G, or no coverage at all). Driving through a city may also result in ups and downs in the cellular quality we experience. Some of these effects are tabulated below.

Urban environments typically introduce the highest additional losses due to dense buildings, narrow streets, and urban canyons, which significantly obstruct and scatter signals. For example, the Okumura-Hata Urban Model accounts for such obstructions and adds substantial losses to the FSPL, averaging around 30–50 dB, depending on the density and height of buildings.

Suburban environments, on the other hand, are less obstructed than urban areas but still experience moderate clutter losses from trees, houses, and other features. In these areas, corrections based on the Okumura-Hata Suburban Model add approximately 10–20 dB to the FSPL, reflecting the moderate level of signal attenuation caused by vegetation and scattered structures.

Rural environments have the least obstructions, resulting in the lowest additional loss. Corrections based on the Okumura-Hata Rural Model typically add around 5–10 dB to the FSPL. These areas benefit from open landscapes with minimal obstructions, making them ideal for long-range signal propagation.

Non-line-of-sight (NLOS) conditions additionally increase the path loss, as signals must diffract or scatter to reach the receiver. This effect adds 10–20 dB in suburban and rural areas and 20–40 dB in urban environments, where obstacles are more frequent and severe. Similarly, weather conditions such as rain and foliage contribute to signal attenuation, with rain adding 1–5 dB/km at higher frequencies (above 10 GHz) and dense foliage introducing an extra 5–15 dB of loss.

The corrections for these factors can be incorporated into the FSPL formula to provide a more realistic estimation of signal attenuation. By applying these corrections, the FSPL formula can reflect the conditions encountered in terrestrial communication systems across different environments.
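As a rough sketch of how such corrections enter the calculation, the snippet below bolts the mid-range values quoted above onto the free-space formula. It is a caricature of proper models like Okumura-Hata, which derive the corrections from frequency, antenna heights, and environment details, but it shows how quickly the terrestrial distance advantage can erode:

```python
import math

C = 299_792_458.0

def fspl_db(d_m: float, f_hz: float) -> float:
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

# Illustrative mid-range corrections (dB) taken from the ranges quoted above.
CLUTTER_DB = {"urban": 40, "suburban": 15, "rural": 7}
NLOS_DB = {"urban": 30, "suburban": 15, "rural": 15}

def terrestrial_path_loss_db(d_m: float, f_hz: float, env: str, nlos: bool = True) -> float:
    """FSPL plus a crude clutter and (optionally) NLOS correction for the environment."""
    return fspl_db(d_m, f_hz) + CLUTTER_DB[env] + (NLOS_DB[env] if nlos else 0)

# Urban NLOS terrestrial link at 2.5 km vs. an obstacle-free satellite path at 550 km (3.5 GHz).
print(f"terrestrial ≈ {terrestrial_path_loss_db(2.5e3, 3.5e9, 'urban'):.0f} dB")
print(f"satellite FSPL ≈ {fspl_db(550e3, 3.5e9):.0f} dB")
```

Note that the satellite link in this toy comparison is still outdoor-only; indoors, the satellite signal incurs its own roof and floor penetration losses, as discussed next.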

The figure above illustrates the differences and similarities in the coverage environment for (a) terrestrial and (b) satellite communication systems. The terrestrial signal, in most instances, loses strength as it propagates through the terrestrial environment due to vegetation, terrain variations, urban topology or infrastructure, and weather; as the signal moves from the outdoor to the indoor environment, it is reduced further as it penetrates, for example, coated windows and outer and inner walls. The combination of distance, obstacles, and material penetration leads to a cumulative reduction in signal strength as the signal propagates through the terrestrial environment. For the satellite, as illustrated in (b), a substantial amount of signal is lost due to the vast distance it has to travel before reaching the consumer. If no outdoor antenna connects with the satellite signal, then the satellite signal will be further reduced as it penetrates roofs, multiple ceilings, multiple floors, and walls.

It is often assumed that a satellite system has a line of sight (LoS) to its receiver without environmental obstructions in its signal propagation (besides atmospheric ones). The reasoning is not unreasonable, as the satellite sits above the consumers of its services, and it is, of course, a correct assumption when the consumer has an outdoor satellite receiver (e.g., a dish placed on a roof or another suitable location) in direct LoS with the satellite’s antenna, with the outdoor receiver also compensating somewhat for the signal loss due to the distance to the satellite.

When considering a satellite direct-to-cell device, we no longer have the luxury of a satellite-optimized advanced Earth-based outdoor antenna to facilitate the communications between the satellite and the consumer device. The satellite signal has to close the connection with a standard cellular device (e.g., smartphone, tablet, …), just like the terrestrial cellular network would have to do.

However, 80% or more of our mobile cellular traffic happens indoors: in our homes, workplaces, and public places. If a satellite system had to replace existing mobile network services, it would also have to provide a service quality similar to what consumers get from the terrestrial cellular network. As shown in the above figure, in urban areas this involves the satellite signal likely passing through a roof and multiple floors before reaching a consumer. Depending on housing density, buildings may block (shadow) the satellite signal, resulting in substantial service degradation for the affected consumers. Even though the satellite signal does not face all the same challenges as a terrestrial cellular signal, such as vegetation, terrain variations, and the horizontal dimension of urban topology (e.g., outer & inner walls, coated windows, …), the satellite signal still has to overcome the vertical dimension of urban topologies (e.g., roofs, ceilings, floors, etc.) to connect to consumers’ cellular devices.

For terrestrial cellular services, the cellular network’s signal integrity will (always) have a considerable advantage over the satellite signal because of the proximity to the consumer’s cellular device. With respect to distance alone, an LEO satellite at an altitude of 550 km has to overcome a 50-thousand-times (or 47 dB) path loss penalty compared to a cellular base station antenna 2.5 km away. Overcoming that penalty places demands on the satellite antenna design that seem far beyond what is possible with today’s technology (and economics).

CHALLENGES SUMMARIZED.

Achieving parity between a Low Earth Orbit (LEO) satellite providing Direct-to-Cell (D2C) services and a terrestrial 5G network involves overcoming significant technical challenges. The disparity arises from fundamental differences in these systems’ environments, particularly in free-space path loss, penetration loss, and power delivery. Terrestrial networks benefit from closer proximity to the consumer, higher antenna density, and lower propagation losses. In contrast, LEO satellites must address far more significant free-space path losses due to the large distances involved and the additional challenges of transmitting signals through the atmosphere and into buildings.

The D2C challenges for LEO satellites are increasingly severe at higher frequencies, such as 3.5 GHz and above. As we have seen above, the free-space path loss increases with the square of the frequency, and penetration losses through common building materials, such as walls and floors, are significantly higher. For an LEO satellite system to achieve indoor parity with terrestrial 5G services at this frequency, it would need to achieve extraordinary levels of effective isotropic radiated power (EIRP), around 65 dBW, and narrow beamwidths of approximately 0.5° to concentrate power on specific service areas. This would require very high onboard power outputs, exceeding 1 kW, and large antenna apertures, around 2 m in diameter, to achieve gains near 55 dBi. These requirements place considerable demands on satellite design, increasing mass, complexity, and cost. Despite these optimizations, indoor service parity at 3.5 GHz remains challenging due to persistent penetration losses of around 20 dB, making this frequency better suited for outdoor or line-of-sight applications.

Achieving a stable beam with the small widths required for a LEO satellite to provide high-performance Direct-to-Cell (D2C) services presents significant challenges. Narrow beamwidths, on the order of 0.5° to 1°, are essential to effectively focus the satellite’s power and overcome the high free-space path loss. However, maintaining such precise beams demands advanced satellite antenna technologies, such as high-gain phased arrays or large deployable apertures, which introduce design, manufacturing, and deployment complexities. Moreover, the satellite must continuously track rapidly moving targets on Earth as it orbits at around 7.8 km/s. This requires highly accurate and fast beam-steering systems, often using phased arrays with electronic beamforming, to compensate for the relative motion between the satellite and the consumer. Any misalignment in the beam can result in significant signal degradation or complete loss of service. Additionally, ensuring stable beams under variable conditions, such as atmospheric distortion, satellite vibrations, and thermal expansion in space, adds further layers of technical complexity. These requirements increase the system’s power consumption and cost and impose stringent constraints on satellite design, making it a critical challenge to achieve reliable and efficient D2C connectivity.

As the operating frequency decreases, the specifications for achieving parity become less stringent. At 1.8 GHz, the free-space path loss and penetration losses are lower, reducing the signal deficit. For a LEO satellite operating at this frequency, a 2.5 m² aperture (1.8 m diameter) antenna and an onboard power output of around 800 W would suffice to deliver an EIRP near 60 dBW, bringing outdoor performance close to terrestrial equivalency. Indoor parity, while more achievable than at 3.5 GHz, would still face challenges due to penetration losses of approximately 15 dB. However, the balance between reduced propagation losses and achievable satellite optimizations makes 1.8 GHz a more practical compromise for mixed indoor and outdoor coverage.
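As a quick sanity check of these 1.8 GHz figures, one can combine the standard aperture-gain relation $G = \eta \, 4\pi A / \lambda^2$ with the quoted power; the 90% aperture efficiency below is my own assumption:

```python
import math

C = 299_792_458.0

def aperture_gain_dbi(area_m2: float, f_hz: float, efficiency: float = 0.9) -> float:
    """Gain of an aperture antenna, G = eta * 4*pi*A / lambda^2, in dBi."""
    lam = C / f_hz
    return 10 * math.log10(efficiency * 4 * math.pi * area_m2 / lam**2)

# The 1.8 GHz case quoted above: 2.5 m^2 aperture and ~800 W of power.
# Assumes all 800 W reaches the feed and 90% aperture efficiency (my assumptions).
gain = aperture_gain_dbi(2.5, 1.8e9)        # ≈ 30 dBi
eirp_dbw = 10 * math.log10(800) + gain      # ≈ 59 dBW, i.e., "near 60 dBW"
print(f"gain ≈ {gain:.1f} dBi, EIRP ≈ {eirp_dbw:.1f} dBW")
```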

At 800 MHz, the frequency-dependent losses are significantly reduced, making it the most feasible option for LEO satellite systems to achieve parity with terrestrial 5G networks. The free-space path loss decreases further, and penetration losses into buildings are reduced to approximately 10 dB, comparable to what terrestrial systems experience. These characteristics mean that the required specifications for the satellite system are notably relaxed. A 1.5 m² aperture (1.4 m diameter) antenna, combined with a power output of 400 W, would achieve sufficient gain and EIRP (~55 dBW) to deliver robust outdoor coverage and acceptable indoor service quality. Lower frequencies also mitigate the need for extreme beamwidth narrowing, allowing for more flexible service deployment.

Most consumers’ cellular consumption happens indoors, and these consumers are typically better served by existing 5G cellular broadband networks than by an LEO satellite solution. When considering direct service to normal cellular devices, it would not be practical for an LEO satellite network, even an extensive one, to replace existing 5G terrestrial-based cellular networks and the services these support today.

This does not mean that LEO satellites cannot be of great utility when connecting to an outdoor Earth-based consumer dish, as is already evident in many remote, rural, and suburban places. The summary table above also shows that LEO satellite D2C services are feasible, without overly challenging modifications, in the lower cellular frequency ranges between 600 MHz and 1800 MHz, at service levels close to those of terrestrial systems, at least in rural areas and for outdoor services in general. In indoor situations, the LEO satellite D2C signal is more likely to be compromised due to roof and multiple-floor penetration scenarios to which a terrestrial signal may be less exposed.

WHAT GOES DOWN MUST COME UP.

LEO satellite services delivered directly to unmodified mobile cellular devices get us all too focused on the downlink path from the satellite to the device. It is easy to forget that unless you deliver a broadcast service, the unmodified cellular device also needs to communicate meaningfully with the LEO satellite. The challenge for an unmodified cellular device (e.g., smartphone, tablet, etc.) to receive the satellite D2C signal has been explained extensively in the previous section. In the satellite downlink-to-device scenario, we can optimize the design specifications of the LEO satellite to overcome some (or most, depending on the frequency) of the challenges posed by the satellite’s high altitude (compared to a terrestrial base station’s distance to the consumer device). In the device-to-satellite uplink direction, we have very little to no flexibility unless we start changing the specifications of the terrestrial device portfolio. Suppose we change the specifications for consumer devices to communicate better with satellites. In that case, we also change the premise and economics of the (wrong) idea that LEO satellites should be able to completely replace terrestrial cellular networks at service parity with those terrestrial networks.

Achieving uplink communication from a standard cellular device to an LEO satellite poses significant challenges, especially when attempting to match the performance of a terrestrial 5G network. Cellular devices are designed with limited transmission power, typically in the range of 23–30 dBm (0.2–1 watt), sufficient for short-range communication with terrestrial base stations. However, when the receiving station is a satellite orbiting between 550 and 1,200 kilometers, the transmitted signal encounters substantial free-space path loss. The satellite must, therefore, be capable of detecting and processing extremely weak signals, often below -120 dBm, to maintain a reliable connection.

The free-space path loss in the uplink direction is comparable to that in the downlink, but the challenges are compounded by the cellular device’s limitations. At higher frequencies, such as 3.5 GHz, path loss can exceed 155 dB, while at 1.8 GHz and 800 MHz, it reduces to approximately 149.6 dB and 143.6 dB, respectively. Lower frequencies favor uplink communication because they experience less path loss, enabling better signal propagation over large distances. However, cellular devices typically use omnidirectional antennas with very low gain (0–2 dBi), poorly suited for long-distance communication, placing even greater demands on the satellite’s receiving capabilities.

The satellite must compensate for these limitations with highly sensitive receivers and high-gain antennas. Achieving sufficient antenna gain requires large apertures, often exceeding 4 meters in diameter for 800 MHz or 2 meters for 3.5 GHz, increasing the satellite’s size, weight, and complexity. Phased-array antennas or deployable reflectors are often used to achieve the required gain. Still, their implementation is constrained by the physical limitations and costs of launching such systems into orbit. Additionally, the satellite’s receiver must have an exceptionally low noise figure, typically in the range of 1–3 dB, to minimize internal noise and allow the detection of weak uplink signals.
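To put rough numbers on the uplink budget, here is a toy received-power calculation. Every figure in it is an illustrative assumption (a 23 dBm handset with a 0 dBi antenna, a 30 dBi satellite receive gain, and a 550 km slant range at 1.8 GHz), not a vendor specification:

```python
import math

C = 299_792_458.0

def fspl_db(d_m: float, f_hz: float) -> float:
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

# Illustrative assumptions, not vendor specs.
tx_dbm = 23.0      # handset transmit power
g_dev_dbi = 0.0    # omnidirectional device antenna
g_sat_dbi = 30.0   # satellite receive antenna gain

rx_dbm = tx_dbm + g_dev_dbi - fspl_db(550e3, 1.8e9) + g_sat_dbi
print(f"received ≈ {rx_dbm:.0f} dBm")  # ≈ -99 dBm, before fading, polarization,
                                       # atmospheric, and interference margins push it lower
```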

Interference is another critical challenge in the uplink path. Unlike terrestrial networks, where signals from individual devices are isolated into small sectors, satellites receive signals over larger geographic areas. This broad coverage makes it difficult to separate and process individual transmissions, particularly in densely populated areas where numerous devices transmit simultaneously. Managing this interference requires sophisticated signal processing capabilities on the satellite, increasing its complexity and power demands.

The motion of LEO satellites introduces additional complications due to the Doppler effect, which causes a shift in the uplink signal frequency. At higher frequencies like 3.5 GHz, these shifts are more pronounced, requiring real-time adjustments to the receiver to compensate. This dynamic frequency management adds another layer of complexity to the satellite’s design and operation.
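The order of magnitude of the Doppler problem is easy to bound. Taking the full 7.8 km/s orbital velocity as the worst-case line-of-sight component gives the upper limits below; the actual radial velocity is lower and varies with the elevation geometry:

```python
C = 299_792_458.0   # speed of light, m/s
V_ORBIT = 7_800.0   # LEO orbital speed from above, m/s

# Worst-case bound: full orbital velocity along the line of sight.
for f_hz in (0.8e9, 1.8e9, 3.5e9):
    shift = f_hz * V_ORBIT / C
    print(f"{f_hz/1e9:.1f} GHz: up to ±{shift/1e3:.0f} kHz")
# 0.8 GHz: ±21 kHz, 1.8 GHz: ±47 kHz, 3.5 GHz: ±91 kHz
```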

Among the frequencies considered, 3.5 GHz is the most challenging for uplink communication due to high path loss, pronounced Doppler effects, and poor building penetration. Satellites operating at this frequency must achieve extraordinary sensitivity and gain, which is difficult to implement at scale. At 1.8 GHz, the challenges are somewhat reduced as the path loss and Doppler effects are less severe. However, the uplink requires advanced receiver sensitivity and high-gain antennas to approach terrestrial network performance. The most favorable scenario is at 800 MHz, where the lower path loss and better penetration characteristics make uplink communication significantly more feasible. Satellites operating at this frequency require less extreme sensitivity and gain, making it a practical choice for achieving parity with terrestrial 5G networks, especially for outdoor and light indoor coverage.

The uplink, the signal direction from the consumer device to the satellite, poses additional limitations on the usable frequency range. Such systems may be viable from 600 MHz up to a maximum of 1.8 GHz, a range that is already challenging for both uplink and downlink in indoor usage. Service in the lower cellular frequency range is feasible for outdoor usage scenarios in rural and remote areas and for non-challenging indoor environments (e.g., “simple” building topologies).

The premise that LEO satellite D2C services would make terrestrial cellular networks redundant everywhere by offering service parity appears very unlikely, and certainly not with the current generation of LEO satellites being launched. The altitude range of the LEO satellites (300 – 1200 km) and frequency ranges used for most terrestrial cellular services (600 MHz to 5 GHz) make it very challenging and even impractical (for higher cellular frequency ranges) to achieve quality and capacity parity with existing terrestrial cellular networks.

LEO SATELLITE D2C ARCHITECTURE.

A subscriber would realize they have LEO satellite Direct-to-Cell coverage through network signaling and notifications provided by their mobile device and network operator. Using this coverage depends on the integration between the LEO satellite system and the terrestrial cellular network, as well as the subscriber’s device and network settings. Here’s how this process typically works:

When a subscriber moves into an area where traditional terrestrial coverage is unavailable or weak, their mobile device will periodically search for available networks, as it does when trying to maintain connectivity. If the device detects a signal from a LEO satellite providing D2C services, it may indicate “Satellite Coverage” or a similar notification on the device’s screen.

This recognition is possible because the LEO satellite extends the subscriber’s mobile network. The satellite broadcasts system information on the same frequency bands licensed to the subscriber’s terrestrial network operator. The device identifies the network using the Public Land Mobile Network (PLMN) ID, which matches the subscriber’s home network or a partner network in a roaming scenario. The PLMN is a fundamental component of both terrestrial and LEO satellite D2C networks: it is the identifier that links a mobile consumer to a specific mobile network operator. It enables communication, access rights management, and network interoperability, and supports services such as voice, text, and data.

The PLMN is also directly connected to the frequency bands used by an operator and any satellite service provider, acting as an extension of the operator’s network. It ensures that devices access the appropriately licensed bands through terrestrial or satellite systems and governs spectrum usage to maintain compliance with regulatory frameworks. Thus, the PLMN links the network identification and frequency allocation, ensuring seamless and lawful operation in terrestrial and satellite contexts.

In an LEO satellite D2C network, the PLMN plays a similar but more complex role, as it must bridge the satellite system with terrestrial mobile networks. The satellite effectively operates as an extension of the terrestrial PLMN, using the same Mobile Country Code (MCC) and Mobile Network Code (MNC) as the consumer’s home network or a roaming partner. This ensures that consumer devices perceive the satellite network as part of their existing subscription, avoiding the need for additional configuration or specialized hardware. When the satellite provides coverage, the PLMN enables the device to authenticate and access services through the operator’s core network, ensuring consistency with terrestrial operations. Consumer authentication, billing, and service provisioning remain consistent across the terrestrial and satellite domains. In cases where multiple terrestrial operators share access to a satellite system, the PLMN facilitates the correct routing of consumer sessions to their respective home networks. This coordination is particularly important in roaming scenarios, where a consumer connected to a satellite in one region may need to access services through their home network located in another region.
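For the curious, the PLMN ID itself is simply the concatenation of the MCC (always 3 digits) and the MNC (2 or 3 digits, country-dependent). A minimal sketch of the split, using the standardized test PLMN 001/01 as an example:

```python
def split_plmn(plmn: str, mnc_digits: int = 2) -> tuple[str, str]:
    """Split a PLMN ID into MCC (always 3 digits) and MNC (2 or 3 digits).
    The MNC length is country-dependent and signaled by the network;
    here it is simply passed in as an argument."""
    if len(plmn) != 3 + mnc_digits:
        raise ValueError("PLMN length does not match the given MNC length")
    return plmn[:3], plmn[3:]

# Standardized test PLMN: MCC 001, MNC 01. A satellite beam broadcasting the
# same PLMN as the home terrestrial network looks like "home" to the device.
mcc, mnc = split_plmn("00101")
print(mcc, mnc)  # 001 01
```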

For a subscriber to make use of LEO satellite coverage, the following conditions must be met:

  • Device Compatibility: The subscriber’s mobile device must support satellite connectivity. While many standard devices are compatible with satellite D2C services using terrestrial frequencies, certain features may be required, such as enhanced signal processing or firmware updates. Modern smartphones are increasingly being designed to support these capabilities.
  • Network Integration: The LEO satellite must be integrated with the subscriber’s mobile operator’s core network. This ensures the satellite extends the terrestrial network, maintaining seamless authentication, billing, and service delivery. Consumers can make and receive calls, send texts, or access data services through the satellite link without changing their settings or SIM card.
  • Service Availability: The type of services available over the satellite link depends on the network and satellite capabilities. Initially, services may be limited to text messaging and voice calls, as these require less bandwidth and are easier to support in shared satellite coverage zones. High-speed data services, while possible, may require further advancements in satellite capacity and network integration.
  • Subscription or Permissions: Subscribers must have access to satellite services through their mobile plan. This could be included in their existing plan or offered as an add-on service. In some cases, roaming agreements between the subscriber’s home network and the satellite operator may apply.
  • Emergency Use: In specific scenarios, satellite connectivity may be automatically enabled for emergencies, such as SOS messages, even if the subscriber does not actively use the service for regular communication. This is particularly useful in remote or disaster-affected areas with unavailable terrestrial networks.

Once connected to the satellite, the consumer experience is designed to be seamless. The subscriber can initiate calls, send messages, or access other supported services just as they would under terrestrial coverage. The main differences may include longer latency due to the satellite link and, potentially, lower data speeds or limitations on high-bandwidth activities, depending on the satellite network’s capacity and the number of consumers sharing the satellite beam.

Managing a call on a Direct-to-Cell (D2C) satellite network requires specific mobile network elements in the core network, alongside seamless integration between the satellite provider and the subscriber’s terrestrial network provider. The service’s success depends on how well the satellite system integrates into the terrestrial operator’s architecture, ensuring that standard cellular functions like authentication, session management, and billing are preserved.

In a 5G network, the core network plays a central role in managing calls and data sessions. For a D2C satellite service, key components of the operator’s core network include the Access and Mobility Management Function (AMF), which handles consumer authentication and signaling. The AMF establishes and maintains connectivity for subscribers connecting via the satellite. Additionally, the Session Management Function (SMF) oversees the session context for data services. It ensures compatibility with the IP Multimedia Subsystem (IMS), which manages call control, routing, and handoffs for voice-over-IP communications. The Unified Data Management (UDM) system, another critical core component, stores subscriber profiles, detailing permissions for satellite use, roaming policies, and Quality of Service (QoS) settings.

To enforce network policies and billing, the Policy Control Function (PCF) applies service-level agreements and ensures appropriate charges for satellite usage. For data routing, elements such as the User Plane Function (UPF) direct traffic between the satellite ground stations and the operator’s core network. Additionally, interconnect gateways manage traffic beyond the operator’s network, such as the Internet or another carrier’s network.
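As a compact, purely descriptive summary of the core-network roles discussed above (a mapping for orientation, not a configuration for any real core):

```python
# Descriptive summary of the 5G core functions involved in a D2C call,
# as described above; not a configuration for any real core network.
core_functions = {
    "AMF": "consumer authentication, signaling, and connectivity management",
    "SMF": "session context for data services, works with the IMS",
    "IMS": "call control, routing, and handoffs for voice-over-IP",
    "UDM": "subscriber profiles: satellite permissions, roaming policies, QoS",
    "PCF": "policy enforcement and billing for satellite usage",
    "UPF": "user-plane traffic routing between ground stations and the core",
}

for nf, role in core_functions.items():
    print(f"{nf}: {role}")
```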

The role of the satellite provider in this architecture depends on the integration model. If the satellite system is fully integrated with the terrestrial operator, the satellite primarily acts as an extension of the operator’s radio access network (RAN). In this case, the satellite provider requires ground stations to downlink traffic from the satellites and forward it to the operator’s core network via secure, high-speed connections. The satellite provider handles radio gateway functionality, translating satellite-specific protocols into formats compatible with terrestrial systems. In this scenario, the satellite provider does not need its own core network because the operator’s core handles all call processing, authentication, billing, and session management.

In a standalone model, where the LEO satellite provider operates independently, the satellite system must include its own complete core network. This requires implementing AMF, SMF, UDM, IMS, and UPF, allowing the satellite provider to directly manage subscriber sessions and calls. In this case, interconnect agreements with terrestrial operators would be needed to enable roaming and off-network communication.

Most current D2C solutions, including those proposed by Starlink with T-Mobile or AST SpaceMobile, follow the integrated model. In these cases, the satellite provider relies on the terrestrial operator’s core network, reducing complexity and leveraging existing subscriber management systems. The LEO satellites are primarily responsible for providing RAN functionality and ensuring reliable connectivity to the terrestrial core.

REGULATORY CHALLENGES.

LEO satellite networks offering Direct-to-Cell (D2C) services face substantial regulatory challenges in their efforts to operate within frequency bands already allocated to terrestrial cellular services. These challenges are particularly significant in regions like Europe and the United States, where cellular frequency ranges are tightly regulated and managed by national and regional authorities to ensure interference-free operations and equitable access among service providers.

The cellular frequency spectrum in Europe and the USA is allocated through licensing frameworks that grant exclusive usage rights to mobile network operators (MNOs) for specific frequency bands, often through competitive auctions. For example, in the United States, the Federal Communications Commission (FCC) regulates spectrum usage, while in Europe, national regulatory authorities manage spectrum allocations under the guidelines set by the European Union and CEPT (European Conference of Postal and Telecommunications Administrations). The spectrum currently allocated for cellular services, including low-band (e.g., 600 MHz, 800 MHz), mid-band (e.g., 1.8 GHz, 2.1 GHz), and high-band (e.g., 3.5 GHz), is heavily utilized by terrestrial operators for 4G LTE and 5G networks.

In March 2024, the Federal Communications Commission (FCC) adopted a groundbreaking regulatory framework to facilitate collaborations between satellite operators and terrestrial mobile service providers. This initiative, termed “Supplemental Coverage from Space,” allows satellite operators to use the terrestrial mobile spectrum to offer connectivity directly to consumer handsets and is an essential component of the FCC’s “Single Network Future.” The framework aims to enhance coverage, especially in remote and underserved areas, by integrating satellite and terrestrial networks. The FCC granted SpaceX (November 2024) approval to provide direct-to-cell services via its Starlink satellites. This authorization enables SpaceX to partner with mobile carriers, such as T-Mobile, to extend mobile coverage using satellite technology. The approval includes specific conditions to prevent interference with existing services and to ensure compliance with established regulations. Notably, the FCC also granted SpaceX’s request to provide service to cell phones outside the United States. For non-US operations, Starlink must obtain authorization from the relevant governments. Non-US operations are authorized in various sub-bands between 1429 MHz and 2690 MHz.

In Europe, the regulatory framework for D2C services is under active development. The European Conference of Postal and Telecommunications Administrations (CEPT) is exploring the regulatory and technical aspects of satellite-based D2C communications. This includes understanding connectivity requirements and addressing national licensing issues to facilitate the integration of satellite services with existing mobile networks. Additionally, the European Space Agency (ESA) has initiated feasibility studies on Direct-to-Cell connectivity, collaborating with industry partners to assess the potential and challenges of implementing such services across Europe. These studies aim to inform future regulatory decisions and promote innovation in satellite communications.

For LEO satellite operators to offer D2C services in these regulated bands, they would need to reach agreements with the licensed MNOs with the rights to these frequencies. This could take the form of spectrum-sharing agreements or leasing arrangements, wherein the satellite operator obtains permission to use the spectrum for specific purposes, often under strict conditions to avoid interference with terrestrial networks. For example, SpaceX’s collaboration with T-Mobile in the USA involves utilizing T-Mobile’s existing mid-band spectrum (i.e., PCS1900) under a partnership model, enabling satellite-based connectivity without requiring additional spectrum licensing.

In Europe, the situation is more complex due to the fragmented nature of the regulatory environment. Each country manages its spectrum independently, meaning LEO operators must negotiate agreements with individual national MNOs and regulators. This creates significant administrative and logistical hurdles, as the operator must align with diverse licensing conditions, technical requirements, and interference mitigation measures across multiple jurisdictions. Furthermore, any satellite use of the terrestrial spectrum in Europe must comply with European Union directives and ITU (International Telecommunication Union) regulations, prioritizing terrestrial services in these bands.

Interference management is a critical regulatory concern. LEO satellites operating in the same frequency bands as terrestrial networks must implement sophisticated coordination mechanisms to ensure their signals do not disrupt terrestrial operations. This includes dynamic spectrum management, geographic beam shaping, and power control techniques to minimize interference in densely populated areas where terrestrial networks are most active. Regulators in the USA and Europe will likely require detailed technical demonstrations and compliance testing before approving such operations.

Another significant challenge is ensuring equitable access to spectrum resources. MNOs have invested heavily in acquiring and deploying their licensed spectrum, and many may view satellite D2C services as a competitive threat. Regulators would need to establish clear frameworks to balance the rights of terrestrial operators with the potential societal benefits of extending connectivity through satellites, particularly in underserved rural or remote areas.

Beyond regulatory hurdles, LEO satellite operators must collaborate extensively with MNOs to integrate their services effectively. This includes interoperability agreements to ensure seamless handoffs between terrestrial and satellite networks and the development of business models that align incentives for both parties.

TAKEAWAYS.

Direct-to-Cell LEO satellite networks face considerable technological hurdles in providing services comparable to terrestrial cellular networks.

  • Overcoming free-space path loss and ensuring uplink connectivity from low-power mobile devices with omnidirectional antennas.
  • Cellular devices transmit at low power (typically 23–30 dBm), making it difficult for uplink signals to reach satellites in LEO at 500–1,200 km altitudes.
  • Uplink signals from multiple devices within a satellite beam area can overlap, creating interference that challenges the satellite’s ability to separate and process individual uplink signals.
  • Developing advanced phased-array antennas for satellites, dynamic beam management, and low-latency signal processing to maintain service quality.
  • Managing mobility challenges, including seamless handovers between satellites and beams and mitigating Doppler effects due to the high relative velocity of LEO satellites.
  • The high relative velocity of LEO satellites introduces frequency shifts (i.e., Doppler Effect) that the satellite must compensate for dynamically to maintain signal integrity.
  • Addressing bandwidth limitations and efficiently reusing spectrum while minimizing interference with terrestrial and other satellite networks.
  • Scaling globally may require satellites to carry varied payload configurations to accommodate regional spectrum requirements, increasing technical complexity and deployment expenses.
  • Operating on terrestrial frequencies necessitates dynamic spectrum sharing and interference mitigation strategies, especially in densely populated areas, limiting coverage efficiency and capacity.
  • The frequent replacement of LEO satellites due to their shorter lifespans increases operational complexity and cost.

On the regulatory front, integrating D2C satellite services into existing mobile ecosystems is complex. Spectrum licensing is a key issue, as satellite operators must either share frequencies already allocated to terrestrial mobile operators or secure dedicated satellite spectrum.

  • Securing access to shared or dedicated spectrum, particularly negotiating with terrestrial operators to use licensed frequencies.
  • Avoiding interference between satellite and terrestrial networks requires detailed agreements and advanced spectrum management techniques.
  • Navigating fragmented regulatory frameworks in Europe, where national licensing requirements vary significantly.
  • Spectrum Fragmentation: With frequency allocations varying significantly across countries and regions, scaling globally requires navigating diverse and complex spectrum licensing agreements, slowing deployment and increasing administrative costs.
  • Complying with evolving international regulations, including those to be defined at the ITU’s WRC-27 conference.
  • Developing clear standards and agreements for roaming and service integration between satellite operators and terrestrial mobile network providers.
  • The high administrative and operational burden of scaling globally diminishes economic benefits, particularly in regions where terrestrial networks already dominate.
  • While satellites excel in rural or remote areas, they might not meet high traffic demands in urban areas, restricting their ability to scale as a comprehensive alternative to terrestrial networks.

The idea of D2C satellite networks making terrestrial cellular networks obsolete is ambitious but fraught with practical limitations. While LEO satellites offer unparalleled reach in remote and underserved areas, they struggle to match terrestrial networks’ capacity, reliability, and low latency in urban and suburban environments. The high density of base stations in terrestrial networks enables them to handle far greater traffic volumes, especially for data-intensive applications.

  • Coverage advantage: Satellites provide global reach, particularly in remote or underserved regions, where terrestrial networks are cost-prohibitive and often of poor quality or altogether lacking.
  • Capacity limitations: Satellites struggle to match the high-density traffic capacity of terrestrial networks, especially in urban areas.
  • Latency challenges: Satellite latency, though improving, cannot yet compete with the ultra-low latency of terrestrial 5G for time-critical applications.
  • Cost concerns: Deploying and maintaining satellite constellations is expensive, and they still depend on terrestrial core infrastructure (although the savings, if all terrestrial RAN infrastructure could be avoided, would also be very substantial).
  • Complementary role: D2C networks are better suited as an extension to terrestrial networks, filling coverage gaps rather than replacing them entirely.

The regulatory and operational constraints surrounding the use of terrestrial mobile frequencies for D2C services severely limit scalability. This fragmentation makes it difficult to achieve global coverage seamlessly and increases operational and economic inefficiencies. While D2C services hold promise for addressing connectivity gaps in remote areas, their ability to scale as a comprehensive alternative to terrestrial networks is hampered by these challenges. Unless global regulatory harmonization or innovative technical solutions emerge, D2C networks will likely remain a complementary, sub-scale solution rather than a standalone replacement for terrestrial mobile networks.

FURTHER READING.

  1. Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog, (March 2024).
  2. Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog, (January 2024).
  3. Kim K. Larsen, “A Single Network Future,” Techneconomyblog, (March 2024).
  4. T.S. Rappaport, “Wireless Communications – Principles & Practice,” Prentice Hall (1996). In my opinion, it is one of the best graduate textbooks on communications systems. I bought it back in 1999 as a regular hardcover. I have not found it as a Kindle version, but I believe there are sites where a PDF version may be available (e.g., Scribd).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

The Nature of Telecom Capex – a 2024 Update.

Part of this blog has also been published in Telecom Analysis, titled “Navigating the Future of Telecom Capex: Western Europe’s Telecom Investment 2024 to 2030.” and some of the material has been updated to reflect the latest available data in some areas (e.g., fiber deployment in Western Europe).

Over the last three years, I have extensively covered the details of the Western European telecom sector’s capital expense levels and the drivers behind telecom companies’ capital investments. These accounts can be found in “The Nature of Telecom Capex—a 2023 Update” from 2023 and my initial article from 2022. This new version, “The Nature of Telecom Capex – a 2024 Update,” differs from the 2022 and 2023 issues in that it focuses on the near-future Capex demands from 2024 to 2030 and what we may expect of our Industry’s capital spending over the next 7 years.

For Western Europe, Capex levels in 2023 were lower than in 2022, a relatively rare but not unique occurrence that led many industry analysts to conclude the “End of Capex” and that from now on, “Capex will surely decline.” The compelling and logical explanations were also evident, pointing out that “data traffic (growth) is in decline”, “overproduction of bandwidth”, “5G is not what it was heralded to be”, “No interest in 6G”, “Capital is too expensive” and so forth. These “End to Capex” conclusions were often made on either aggregated data or selected data, depending on the availability of data.

Having worked on Capex planning and budgeting since the early 2000s for one of the biggest telecom companies in Europe, Deutsche Telekom AG, building what has been described as best-practice Capex models, my outlook is slightly less “optimistic” about the decline and “End” of Capex spending by the Industry. Indeed, for those expecting that a Telco’s capital planning is only impacted by hyper-rational insights glued to real-world tangibles and driven by clear strategic business objectives, I beg you to modify that belief somewhat.

Figure 1 illustrates the actual telecom Capex development for Western Europe between 2017 and 2023, with projected growth from 2024 (with the first two quarters’ actual Capex levels) to 2026, represented by the orange-colored dashed lines. The light dashed line illustrates the annual baseline Capex level before 5G and fiber deployment acceleration. The light solid line shows the corresponding Telco Capex to Revenue development, including an assessment for 2024 to 2026, with an annual increase of ca. 500 million euros. Source: New Street Research European Quarterly Review, covering 15 Western European countries (see references at the end of the blog) and 56+ telcos from 2017 to 2024, with 2024 covering the year’s first two quarters.

Western Europe’s telecommunications Capex fell between 2022 and 2023 for the first time in some years, from the peak of 51 billion euros in 2022. The overall development from 2017 to 2023 is illustrated below, including a projected Capex development covering 2024 to 2026 using each Telco’s revenue projections as a simple driver for the expected Capex level (i.e., inherently assuming that the planned Capex level is correlated to the anticipated, or targeted, revenue of the subsequent year).
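The projection mechanics are deliberately simple. Below is a minimal sketch of the revenue-driven approach just described, with placeholder figures rather than actual telco data:

```python
# Placeholder figures for illustration, not actual telco data.
telcos = {
    # name: (recent Capex-to-Revenue ratio, projected revenue in billion EUR)
    "TelcoA": (0.18, 12.0),
    "TelcoB": (0.15, 8.5),
    "TelcoC": (0.20, 5.0),
}

# Projected Capex = recent Capex-to-Revenue ratio x next year's expected revenue.
projected = {name: ratio * revenue for name, (ratio, revenue) in telcos.items()}
print(projected, f"total ≈ {sum(projected.values()):.1f} bn EUR")
```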

The reduction in Capex between 2022 and 2023 comes from 29 out of 56 Telcos reducing their Capex level in 2023 compared to 2022. In 8 out of 15 countries, Telco Capex levels decreased by ca. 2.3 billion euros compared to their 2022 levels, while the remaining 7 countries together spent approximately 650 million euros more than their 2022 levels. If we compare the 1st and 2nd half of 2023 with 2022, there was an unprecedented Capex reduction in the 2nd half of 2023 compared to any other year from 2017 to 2023. It really gives the impression that many (at least 36 out of 56) Telcos put their foot on the brake in 2023; 29 of those 36 braked their spending in the last half of 2023 and ended the year with overall lower spending than in 2022. Of the 8 countries with a lower Capex spend in 2023, the UK, France, Italy, and Spain make up more than 80%. Of the countries with a higher Capex in 2023, Germany, the Netherlands, Belgium, and Austria make up more than 80%.

For a few of the countries with lower Capex levels in 2023, one could argue that they have more or less finished their 5G rollout and have such high fiber-to-the-home penetration levels that additional fiber is mainly overbuild and of a substantially smaller scale than in the past (e.g., France, Norway, Spain, Portugal, Denmark, and Sweden). For other countries with a lower investment level than the previous year, such as the UK, Italy, and Greece, 2022 and 2023 saw substantial consolidation activity in the markets (e.g., Vodafone UK & CK Hutchison’s Three, Wind Hellas folded into Nova in Greece, …). In fact, Spain (e.g., Masmovil), Norway (e.g., Ice Group), and Denmark (e.g., Telia DK) also experienced consolidation activities, which generally lower companies’ spending levels initially. One would expect, as is to some extent visible in the first half of 2024, that countries that spent less due to consolidation activities would increase their Capex levels in the following two to three years after an initial replanning period.

WESTERN EUROPE – THE BIG CAPEX OVERVIEW.

Figure 2 Shows, on a country level, the 5-year average Capex spend (over the period 2019 to 2023) and the Capex in 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

When attempting to understand Telco Capex, or any Capex with “built-in” cyclicity, one really should look at more than one or two years. Figure 2 above compares the average Capex spend over the period 2019 to 2023 with the Capex spend in 2023. The five-year Capex average captures the initial stages of 5G deployment in Europe, 5G deployment in general, COVID capacity investments (in fixed networks), the acceleration of fiber rollout in many European countries (e.g., Germany, UK, Netherlands, …), the financial (inflationary) crisis of increasingly costly capital, and so forth. In my opinion, 2023 reflects the 2021-2022 financial crisis and the fact that most of the 5G needed to cover current market needs has been deployed. As we have seen before, Telco investments are often 12 to 18 months out of sync with financial crisis years; from that perspective, it is not surprising that 2023 might be a lower Capex year than in the past. Although, as is also evident from Figure 2, only 5 countries had a lower Capex level in 2023 than their previous 5-year average.

Figure 3 Illustrates the Capex development over the last 5 years from 2019 to 2023, with green marking years where the subsequent year had a higher Capex level and red marking years where the subsequent year had a lower Capex level. From a Western European perspective, 2023 was the only year of the last 5 with a lower Capex level than the previous year. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

Capex-to-Revenue ratios in the Telco industry are prone to some uncertainty. This is particularly the case when individual Telcos are compared. In general, I recommend making comparisons over a given period of time, such as 3- or 5-year periods, as it averages out some of the natural variation between Telcos and countries (e.g., one country or Telco may have started its 5G deployment earlier than others). Even that approach has to be taken with some caution, as some Telcos may fully incur Capex for fiber deployments while others may make wholesale agreements with open Fibercos (for example) and only incur last-mile access or connection Capex. Although of smaller relative Capex scale nowadays, Telcos increasingly have Towercos managing and building the passive infrastructure for their cell site demand. Some may still fully build their own cell sites, incurring proportionally higher Capex per new site deployed, which of course may lead to structural Capex differences between such Telcos. With these cautionary remarks in mind, I believe that Capex-to-Revenue ratios do provide a means of comparing countries or Telcos and do give a picture of the capital investment intensity relative to market performance. A country comparison of the 5-year (period: 2019 to 2023) average Capex-to-Revenue ratio is illustrated in Figure 4 below for the 15 markets considered in this blog.

Figure 4 Shows on a country-level the 5-year average Capex to Revenue ratios over the period 2019 to 2023. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

Comparing Capex per capita and Capex as a percentage of GDP may offer insights into how capital investments are prioritized in relation to population size and economic output. These two metrics could highlight different aspects of investment strategies, providing a more comprehensive understanding of national economic priorities and critical infrastructure development levels. Such a comparison is shown in Figure 5 below.

Capex per capita, shown on the left-hand side of Figure 5, measures the average amount of investment allocated to each person within a country. This metric is particularly useful for understanding the intensity of investment relative to the population, indicating how much infrastructure, technology, or other capital resources are being made available on a per-person basis. A higher Capex per capita suggests significant investment in areas like public services, infrastructure, or economic development, which could improve quality of life or boost productivity. Comparing this measure across countries helps identify disparities in investment levels, revealing which nations are placing greater emphasis on infrastructure development or economic expansion. For example, a country with a high Capex per capita likely prioritizes public goods such as transportation, energy, or digital infrastructure, potentially leading to better economic outcomes and higher living standards over time. The 5-year average Capex level shows a strong positive linear relationship with the country population (R² = 0.9318, chart not shown), suggesting that ca. 93% of the variation in Capex can be explained by the variation in population. The trend implies that as the population increases, Capex also tends to increase, likely reflecting higher investment needs to accommodate larger populations. It should be noted that a country’s surface area is not a significant factor influencing Capex. While some countries with larger land areas might exhibit a higher Capex level, the overall trend is not strong.

Capex as a percentage of GDP, shown on the right-hand side of Figure 5, measures the proportion of a country’s economic output devoted to capital investment. This ratio provides context for understanding investment levels relative to the size of the economy, showing how much emphasis is placed on growth and development. A higher Capex-to-GDP ratio can indicate an aggressive investment strategy, commonly seen in developing economies or countries undergoing significant infrastructure expansion. Conversely, a lower ratio might suggest efficient capital allocation or, in some cases, underinvestment that could constrain future economic growth. This metric helps assess the sustainability of investment levels and reflects economic priorities. For instance, a high Capex-to-GDP ratio in a developed country could indicate a focus on upgrading existing infrastructure, whereas in a developing economy, it may signify efforts to close infrastructure gaps, modernization efforts (e.g., optical fiber replacing copper infrastructure in fixed broadband transformation), and accelerating growth. The 5-year average Capex level also shows a strong positive linear relationship with the country GDP (R² = 0.9389, chart not shown), suggesting that ca. 94% of the variation in Capex can be explained by the variation in the country GDP. While a few data points deviate from this trend, the overall fit is very strong, reinforcing the notion that larger economies generally allocate more resources to capital investments.
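For completeness, the R² figures quoted above are what an ordinary least-squares fit of Capex against population (or GDP) would report. A small sketch with placeholder data (not the New Street Research dataset):

```python
import numpy as np

# Placeholder data for illustration, not the source dataset.
population_m = np.array([5.8, 10.4, 17.5, 47.4, 67.0, 83.2])  # millions
capex_bn = np.array([1.1, 1.9, 3.2, 7.0, 10.5, 13.0])         # billion EUR

slope, intercept = np.polyfit(population_m, capex_bn, 1)      # ordinary least squares
residuals = capex_bn - (slope * population_m + intercept)
r2 = 1 - np.sum(residuals**2) / np.sum((capex_bn - capex_bn.mean())**2)
print(f"R^2 = {r2:.3f}")
```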

The insights gained from both Capex per capita and Capex as a percentage of GDP are complementary, providing a fuller picture of a country’s investment strategy. While Capex per capita reflects individual investment levels, Capex as a percentage of GDP reveals the scale of investment in relation to the overall economy. For example, a country with high Capex per capita but a low Capex-to-GDP ratio (e.g., Denmark, Norway, …) may have a wealthy population where individual investment levels are significant, but the size of the economy is such that these investments constitute a relatively small portion of total economic activity. Conversely, a country with a high Capex-to-GDP ratio but low Capex per capita (e.g., Greece) may be dedicating a substantial portion of its economic resources to infrastructure in an effort to drive growth, even if the per-person investment remains modest.

Figure 5 Illustrates two charts that compare the average capital expenditures over a 5-year period from 2019 to 2023. The left chart shows Capex per capita in euros, with Switzerland leading at 230 euros, while Spain has the lowest at 75 euros. The right chart depicts Capex as a percentage of GDP, where Greece tops the list at 0.47%, and Sweden is at the bottom with 0.16%. These metrics provide insights into how different countries allocate investments relative to their population size and economic output, revealing varying levels of investment intensity and economic priorities. It should be noted that Capex levels are strongly correlated with both the size of the population and the size of the economy as measured by the GDP. Source: New Street Research European Quarterly Review 2017 to 2024 (Q2).

FORWARD TO THE PAST.

Almost 15 years ago, I gave a presentation at the “4G World China” conference in Beijing titled “Economics of 4G Introduction in Growth Markets”. The idea was that a mobile operator’s capital demand would cycle between 8% (minimum) and 13% (maximum), usually with one replacement cycle before migrating to the next-generation radio access technology. This insight was backed up by best-practice capital demand models considering market strategy and growth Capex drivers, and it also drew on the insights of many expert discussions.

Figure 6 illustrates my expectations of how Capex would relate before, during, and after LTE deployment in Western Europe. Source: “Economics of 4G Introduction in Growth Markets” at “4G World China”, 2011.

The careful observer will see that I expected, back in 2011, the typical Capex maintenance cycle in Western European markets between infrastructure and technology modernization periods to be no more than 8%, and that Capex in the maintenance years would be 30% lower than required in the peak periods. I have yet to see a mobile operation with such a low capital intensity unless it effectively shares its radio access network and/or, by cost-structure “magic” (i.e., cost transformation), moves typical mobile Capex items to Opex (by sourcing or by optimizing the cost structure between the fixed and mobile business units).

I retrospectively underestimated the industry’s willingness to continue increasing capital investments in existing networks, often ignoring the obvious optimization possibilities between their fixed and mobile broadband networks (due to organizational politics) and, of course, what has been and still is a major contagious industrial affliction: “Metus Crescendi Exponentialis” (i.e., the fear of exponential growth, a.k.a. the opportunity to spend ever more Capex). From 2000 to today, the Western European Capex-to-revenue ratio has been approximately between 11% and 21%, and it has been growing since around 2012 (see details in “The Nature of Telecom Capex—a 2023 Update”).

CAPEX DEVELOPMENT FROM 2024 TO 2026.

From the above Figure 1, it should be no surprise that I do not expect Capex to continue the substantial decline we saw between 2022 and 2023 over the next couple of years. In fact, I anticipate that 2024 will be around the level of 2023, after which we will experience modest annual increases of 600 to 700 million euros. Countries with high 5G and Fiber-to-the-Home (FTTH) coverage (e.g., France, Netherlands, Norway, Spain, Portugal, Denmark, and Sweden) will likely keep their Capex levels, possibly with modest declines of single-digit percentage points. Countries such as Germany, the UK, Austria, Belgium, and Greece are still European laggards in terms of FTTH coverage, being far below the 80+% of other Western European countries such as France, Spain, Portugal, the Netherlands, Denmark, Sweden, and Norway. Such countries may be expected to continue to increase their Capex as they close the FTTH coverage gap. Here, it is worth remembering that fiber acquisition strategies, where homes are connected over acquired or wholesale fiber, result in lower Capex than if a Telco aims to build all the required fiber infrastructure itself.

Consolidation Capex.

Telecom companies tend to scale back Capex during consolidation due to uncertainty, the desire to avoid redundancy, and the need to preserve cash. However, after regulatory approval and the deal’s closing, Capex typically rises as the company embarks on network integration, system migration, and infrastructure upgrades necessary to realize the merger’s benefits. This post-merger increase in Capex is crucial for achieving operational synergies, enhancing network performance, and maintaining a competitive edge in the telecom market.

If we look at the period 2021 to 2024, we have had the following consolidation and acquisition examples:

  • UK: In May 2021, the Virgin Media and O2 (Telefonica) UK merger was approved. The intention to consolidate was announced on May 7th, 2020.
  • UK: Vodafone UK and Three UK announced their intention to merge in June 2023. The final decision is expected by the end of 2024.
  • Spain: Orange and MasMovil announced their intent to consolidate in July 2022. Merger approval was given in February 2024, with conditions imposed on the deal requiring MasMovil to divest frequency spectrum.
  • Italy: The potential merger between Telecom Italia (TIM) and Open Fiber was first discussed in 2020, when the idea emerged to create a national fiber network in Italy by merging TIM’s fixed access unit, FiberCop, with Open Fiber. A Memorandum of Understanding was signed in May 2022.
  • Greece: Wind Hellas acquisition by United Group (Nova) was announced in August 2021 and finalized in January 2022 (with EU approval in December 2021).
  • Denmark: Norlys’s acquisition of Telia Denmark was first announced on April 25, 2023, and approved by the Danish competition authority in February 2024.

Thus, we should also expect that the bigger in-market consolidations may, in the short term (the next 2+ years), lead to increased Capex spending during the consolidation phase, after which Capex (& Opex) synergies hopefully kick in; typically, a minimum of two budgetary cycles would pass before this is observed. Consolidation Capex usually amounts to a couple of percentage points of total consolidated revenue, with some bigger items postponed to the tail end of a consolidation unless they are synergetic with the required integration.

The High-risk Supplier Challenge to Western Europe’s Telcos.

When assessing whether Capex will increase or decrease over the next few years (e.g., up to 2030), we cannot ignore the substantial Capex amounts associated with replacing high-risk suppliers (e.g., Huawei, ZTE) in Western European telecom networks. Today, the impact is mainly on mobile critical infrastructure, which is “limited” to core networks and 5G radio access networks (although some EU member states may have extended the reach beyond purely 5G). This will change if (or when?) the current European Commission’s 5G Toolbox (legal) framework (i.e., “The EU Toolbox for 5G Security”) is extended to all broadband network infrastructure (e.g., optical and IP transport network infrastructure, non-mobile backend networking & IT systems) and possibly beyond, to also address Optical Network Terminals (ONTs) and Customer Premise Equipment (note: ONTs can be integrated into the CPE or separate from it, but either way are installed at the customer’s premises). To an extent, it is thought-provoking that the EU emphasis has only been on 5G-associated critical infrastructure rather than the vast and ongoing investment in fiber-optical, next-generation fixed broadband networks across all European Union member states (and beyond). This may appear particularly puzzling given that the European Union has subsidized these new fiber-optical networks by up to 50%, that fixed-broadband traffic is 8 to 10 times that of mobile traffic, and that all mobile (and wireless) traffic passes through the fixed broadband network and the associated local as well as global internet critical infrastructure.

As far back as 2013, the European Parliament raised some concerns about the degree of involvement (market share) of Chinese companies in the EU’s telecommunications sector. It should be remembered that in 2013, Europe’s sentiment was generally positive and optimistic toward collaboration with China, as evidenced by the European Commission’s report “EU-China 2020 Strategic Agenda for Cooperation” (2013). Historically, the development of the EU’s 5G Toolbox for Security was the result of a series of events from about 2008 (after the financial crisis) to 2019 (and to today), characterized by growing awareness in Europe of China’s strategic ambitions, the expansion of the BRI (Belt and Road Initiative, 2013), DSR (Digital Silk Road, an important part of BRI 2.0, 2015), and China’s National Intelligence Law (2017) requiring Chinese companies to cooperate with the Chinese Government on intelligence matters, as well as several high-profile cybersecurity incidents (e.g., APT, Operation Cloud Hopper, …), and increased scrutiny of Chinese technology providers and their influence on critical communications infrastructure across pretty much the whole of Europe. These factors collectively drove the EU to adopt a more cautious and coordinated approach to addressing security risks in the context of 5G and beyond.

Figure 7 illustrates Western society’s (including Western Europe’s) concern about the Chinese technology presence in its digital infrastructure. A substantial “hidden” capital expense (security debt) is tied to Western Telcos’ telecom infrastructures, mobile and fixed.

The European Commission’s 2023 second report on the implementation of the EU 5G cybersecurity toolbox offers an in-depth examination of the risks posed by high-risk suppliers, focusing on Chinese-origin infrastructure, such as equipment from Huawei and ZTE. The report outlines the various stages of implementation across EU Member States and provides recommendations on how to mitigate risks associated with Chinese infrastructure. It considers 5G and fixed broadband networks, including Customer Premise Equipment (CPE) devices like modems and routers placed at customer sites.

The EU Commission defines a high-risk supplier in the context of 5G cybersecurity based on several objective criteria to reduce security threats in telecom networks. A supplier may be classified as high-risk if it originates from a non-EU country with strong governmental ties or interference, particularly if its legal and political systems lack democratic safeguards, security protections, or data protection agreements with the EU. Suppliers susceptible to governmental control in such countries pose a higher risk.

A supplier’s ability to maintain a reliable and uninterrupted supply chain is also critical. A supplier may be considered high-risk if it is deemed vulnerable in delivering essential telecom components or ensuring consistent service. Corporate governance is another important aspect. Suppliers with opaque ownership structures or unclear separation from state influence are more likely to be classified as high-risk due to the increased potential for external control or lack of transparency.

A supplier’s cybersecurity practices also play a significant role. If the quality of the supplier’s products and its ability to implement security measures across operations are considered inadequate, this may raise concerns. In some cases, country-specific factors, such as intelligence assessments from national security agencies or evidence of offensive cyber capabilities, might heighten the risk associated with a particular supplier.

Furthermore, suppliers linked to criminal activities or intelligence-gathering operations undermining the EU’s security interests may also be considered high-risk.

To summarize what may make a telecom supplier a high-risk supplier:

  • Of non-EU origin.
  • Strong governmental ties.
  • The country of origin lacks democratic safeguards.
  • The country of origin lacks security protection or data protection agreements with the EU.
  • Associated supply chain risks of interruption.
  • Opaque ownership structure.
  • Unclear separation from state influence.
  • Inadequate ability to independently implement security measures shielding infrastructure from interference (e.g., sabotage, espionage, …).

These criteria are applied to ensure that telecom operators, and eventually any business with critical infrastructure, become independent of a single supplier, especially those that pose a higher risk to the security and stability of critical infrastructure.

Figure 8 above summarizes the current European legislative framework addressing high-risk suppliers in critical infrastructure, with an initial focus on 5G infrastructure and networks.

Regarding 5G infrastructure, the EU report reiterates the urgency for EU Member States to immediately implement restrictions on high-risk suppliers. The EU policy highlights the risks of state interference and cybersecurity vulnerabilities posed by the close ties between Chinese companies like Huawei and ZTE and the Chinese government. Following groundwork dating back to the 2008 EU Directive on Critical Infrastructure Protection (EPCIP), the EU’s Digital Single Market Strategy (2015), the (first) Network and Information Security (NIS) Directive (2016), and early European concern about 5G’s societal impact and cybersecurity exposure (2015 – 2017), the EU toolbox published in January 2020 is designed to address these risks by urging Member States to adopt a coordinated approach. As of 2023, a second EU report was published on the member states’ progress in implementing the EU Toolbox for 5G Cybersecurity. While many Member States have established legal frameworks that give national authorities the power to assess supplier risks, only 10 have fully imposed restrictions on high-risk suppliers in their 5G networks. The report criticizes the slow pace of action in some countries, which increases the EU’s collective exposure to security threats.

Germany, having one of the largest Chinese RAN deployments in Western Europe in absolute numbers, has been singled out for its apparent reluctance to address the high-risk supplier challenge over the last couple of years (see also the notes in “Further Readings” at the back of this blog). Germany introduced its regulation on Chinese high-risk suppliers in July 2024 through a combination of its Telekommunikationsgesetz (TKG) and IT-Sicherheitsgesetz 2.0. The German government announced that starting in 2026, it will ban critical components from Huawei and ZTE in its 5G networks due to national security concerns. This decision aligns Germany with other European countries working to limit reliance on high-risk suppliers. Germany has been slower in implementing such measures than others in the EU, but the regulation marks a significant step towards strengthening its telecom infrastructure security. Light Reading has estimated that a German Huawei ban would cost €2.5B and take years for German telcos. This estimate seems very optimistic and would certainly require very substantial discounts from the supplier chosen as the replacement: for Telekom Deutschland alone, a swap would cover ca. 50+% of its ca. 38+ thousand sites, and it is difficult for me to believe that such economics would apply to all Western European telcos with high-risk suppliers. I also believe the estimate ignores de-commissioning costs and changes to the backend O&M systems. I expect telco operators will try to push the replacement timeline until most of their high-risk supplier infrastructure is written off and ripe for modernization, which for Germany would most likely happen after 2026. One way or another, we should expect an increase in mobile Capex spending towards the end of the decade as the German operators swap out their Chinese RAN suppliers (which may only be a small part of their capital spend if the ban is extended beyond 5G).
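A simple sanity check on that €2.5B figure, using the site counts quoted above and an assumed all-in swap cost per site (my assumption, not a sourced number), suggests why it looks optimistic:

```python
# Rough swap-cost sanity check; per-site cost is a hypothetical assumption.
sites_tdg = 38_000        # ~Telekom Deutschland site count (from the text)
high_risk_share = 0.5     # ~50+% of sites with high-risk RAN (from the text)
cost_per_site = 120_000   # EUR, assumed all-in swap cost incl. labor

swap_cost = sites_tdg * high_risk_share * cost_per_site
print(f"~EUR {swap_cost / 1e9:.1f}B for one operator alone")  # ~EUR 2.3B
```

At these (assumed) unit costs, one operator alone would absorb most of the €2.5B estimate, before de-commissioning and O&M system changes are even counted.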

The European Commission recommends that restrictions cover critical and highly sensitive assets, such as the Radio Access Network (RAN) and core network functions, and urges member states to define transition periods to phase out existing equipment from high-risk suppliers. The transition periods, however, must be short enough to avoid prolonging dependency on these suppliers. Notably, the report calls for an immediate halt to installing new equipment from high-risk vendors, ensuring that ongoing deployment does not undermine EU security.

When it comes to fixed broadband services, the report extends its concerns beyond 5G. It stresses that many Member States are also taking steps to ensure that the fixed network infrastructure is not reliant on high-risk suppliers. Fourteen (14) member states have either implemented or plan to restrict Chinese-origin equipment in their fixed networks. Furthermore, nine (9) countries have adopted technology-neutral legislation, meaning the restrictions apply across all types of networks, not just 5G. This implies that Chinese-origin infrastructure, including transport network components, will eventually face the same scrutiny and restrictions as 5G networks. While the report does not explicitly call for a total ban on all Chinese-origin equipment, it stresses the need for detailed assessments of supplier risks and restrictions where necessary based on these assessments.

While the EU’s “5G Security Toolbox” focuses on 5G networks, Denmark’s approach, the “Danish Investment Screening Act,” which took effect on the 1st of July 2021, goes much further by addressing the security of fixed broadband, 4G, and transport networks. This broad regulatory focus helps Denmark ensure the security of its entire communications ecosystem, recognizing that vulnerabilities in older or supporting networks could still pose serious risks. A clear example of Denmark’s comprehensive approach to telecommunications security beyond 5G is when the Danish Center for Cybersikkerhed (CFCS, the Centre for Cyber Security) required TDC Net to remove Chinese DWDM equipment from its optical transport network. TDC Net claimed that the CFCS requirement would result in substantial costs that it had not considered in its budgets. CFCS has regulatory and legal authority within Denmark, particularly in relation to national cybersecurity; it is part of the Danish Defense Intelligence Service, which places it under the Ministry of Defense. Denmark’s regulatory framework is not only one of the sharpest implementations of the EU’s 5G Toolbox but also one of the most extensive in protecting national telecom infrastructure across multiple layers and generations of technology. The Danish approach could be a strong candidate to serve as a blueprint for expanded EU regulation beyond 5G high-risk suppliers, becoming applicable to fixed broadband and transport networks and resulting in substantial additional Capex towards the end of the decade.

While not singled out as a unique risk category, customer premises equipment (CPE) from high-risk suppliers is mentioned in the context of broader network security measures. Some Member States have indicated plans to ensure that CPE is subject to strict procurement standards, potentially using EU-wide certification schemes to vet the security of such devices. CPE may be included in future security measures if it presents a significant risk to the network. Many CPEs have been integrated with the optical network terminal (ONT), which is architecturally a part of the fixed broadband infrastructure, serving as the demarcation point between the fiber optic network and the customer’s internal network. Thus, the ONT is highly likely to be considered and included in any high-risk supplier limitations that may come soon. Any CPE replacement program would, on its own, likely involve considerable Capex and cost for operators and their customers. The CPE quantum for the European Union (including the UK, cheeky, I know) is between 200 and 250 million CPEs, including various types of CPE devices, such as routers, modems, ONTs, and other network equipment deployed for residential and commercial users. It is estimated that 30% to 40% of these CPEs may be linked to high-risk Chinese suppliers. The financial impact of a systematic CPE replacement program in the EU (including the UK) could be between 5 and 8 billion euros in capital expenses, ignoring the huge operational costs of executing such a replacement program.
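Working through the ranges above with an assumed per-unit replacement cost (a hypothetical figure, not sourced) lands in the same €5–8B band:

```python
# CPE replacement quantum; unit cost is an assumption for illustration.
cpe_total = 225e6      # midpoint of the 200-250 million CPEs (EU incl. UK)
risk_share = 0.35      # midpoint of the 30-40% high-risk share
unit_cost = 80         # EUR per replaced CPE (assumed)

units = cpe_total * risk_share
print(f"~{units / 1e6:.0f}M units, ~EUR {units * unit_cost / 1e9:.1f}B Capex")
# ~79M units, ~EUR 6.3B Capex -- within the 5 to 8 billion euro range above
```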

The Data Growth Slowdown – An Opportunity for Lower Capex?

How do we identify whether a growth dynamic, such as data growth, is exponential or self-limiting?

Exponential growth dynamics have the same (percentage) growth rate indefinitely. Self-limiting growth dynamics, or S-curve behavior, have a declining growth rate. Natural systems are generally self-limiting, although they might exhibit exponential growth over the short term, typically in the initial growth phase. So, if you are in doubt (which you should not be), calculate the growth rate of your growth dynamics from the beginning until now. If that growth rate is constant (over several time intervals), your dynamics are exponential in nature (at least over the period you looked at); if not … well, your growth process is most likely self-limiting.
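The diagnostic is easy to automate. A minimal sketch with a hypothetical traffic series: constant period-over-period growth rates point to exponential dynamics, while steadily declining rates point to S-curve (self-limiting) dynamics.

```python
# Hypothetical yearly traffic series (e.g., EB/month observed each year).
traffic = [1.0, 1.6, 2.5, 3.7, 5.0, 6.2, 7.1]

# Period-over-period growth rates.
rates = [t1 / t0 - 1 for t0, t1 in zip(traffic, traffic[1:])]
print([f"{r:.0%}" for r in rates])
# ['60%', '56%', '48%', '35%', '24%', '15%'] -> declining, i.e., self-limiting
```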

Telco Capex increases, and Telco Capex decreases. Capex is cyclic in nature, although increasing over time. Most European markets will have access to 550 to 650 MHz of downlink spectrum below 4 GHz, depending on SDL deployment levels. Assuming 4 (1) Mbps per DL (UL) MHz per sector of effective spectral efficiency, 10 traffic hours per day, and ca. 350 to 400 thousand mobile sites (3 sectors each) across Western Europe, the mobile carrying capacity in bytes is on the order of 140 exabytes (EB) per month (note: had I chosen 2 and 0.5 Mbps per MHz per sector, the carrying capacity would be ca. 70 EB/month). This carrying capacity limit will clearly continue to increase with software releases, innovation, advanced antenna deployments with higher-order MIMO, and migration from older radio access technologies to the newest (increasing the effective spectral efficiency).
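The arithmetic behind the ~140 EB/month figure can be reproduced as follows. Note that to land at ~140 EB the per-MHz efficiency must be applied at site level (my reading of the assumptions; a strict per-sector interpretation with 3 sectors would give roughly 3× more).

```python
# Back-of-envelope Western European mobile carrying capacity.
spectrum_mhz = 650       # DL spectrum below 4 GHz (upper end of 550-650 MHz)
eff_mbps_per_mhz = 4     # effective DL spectral efficiency (Mbps per MHz)
sites = 400_000          # mobile sites (upper end of 350-400 thousand)
traffic_hours = 10       # traffic hours per day
days = 30                # days per month

site_mbps = spectrum_mhz * eff_mbps_per_mhz            # 2,600 Mbps per site
total_bits = site_mbps * 1e6 * sites * traffic_hours * 3600 * days
eb_per_month = total_bits / 8 / 1e18                   # bits -> bytes -> EB
print(f"Carrying capacity ~ {eb_per_month:.0f} EB/month")  # ~140 EB/month
```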

According to the Ericsson Mobility Visualizer, Western Europe saw a mobile data demand of 11 EB per month in 2023 (see Figure below). The 2023 demand for mobile data was thus less than a tenth of the (conservatively) estimated carrying capacity of the underlying mobile networks.

Figure 9 illustrates the actual demanded data volume in EB per month. I have often observed that when planners estimate their budgetary demand for capacity expansions, they take the current YoY growth rate and apply it to the future (assuming their growth dynamics are geometrical). I call this the “Naive Expectations” assumption (fallacy), which obviously leads to overprovisioning of network capacity and less efficient use of Capex, as opposed to the “Informed Expectations” approach based on the more realistic S-curve growth dynamics. I have rarely seen the “Naive Expectations” fallacy challenged by CFOs or the non-technical leadership responsible for Telco budgets and economic health. Although not a transparent approach, it is a “great” way to add a “bit” of Capex cushion for other Capex uncertainties.

It should be noted that the Ericsson data treats traffic generated by fixed wireless access (FWA) separately (which, by the way, makes sense). Thus, the 11 EB for 2023 does not include FWA traffic. Ericsson only has a global forecast for FWA traffic starting from 2023 (note: it is not clear whether 2023 is actual or estimated FWA traffic). To get an impression of the long-term impact of FWA traffic, we can apply the same S-curve approach as the one used for mobile data traffic above, following what I call the “Informed Expectations” approach. Even with the FWA traffic, it is difficult to see a situation that, on average (at least), would pose any challenge to existing mobile networks. In particular, the carrying capacity can easily be increased by deploying more advanced antennas (e.g., higher-order MIMO) and is, in general, expected to improve with each new software release.

Figure 10 above uses Ericsson’s Mobility Visualizer data for Western Europe’s mobile and fixed wireless access (FWA) traffic. It gives us an idea of the total traffic expectations if the current usage dynamics continue. Ericsson only provides a global FWA forecast from 2023 to 2029; I have assumed WEU takes its proportional mobile share of the FWA traffic. Note: for the period up to and including 2023, the forecast seems a bit rich in its FWA expectations, imo.

So, by all means, the latest and greatest mobile networks are, without much doubt and in most places, over-dimensioned in terms of their volumetric carrying capacity relative to the data volume being demanded. They also appear likely to remain so for a very long time unless the current demand dynamics fundamentally change (which is, of course, always a possibility, as we have seen historically).

However, that our customers get their volumetric demand satisfied is generally a reflection of the quality they experience in terms of bits per second (a much more fundamental unit than volume). Thus, the throughput, or speed, should be good enough for the customer to enjoy their consumption unhindered, which, as a consequence, generates the bytes that most Telco executives have told themselves they understand and like to base their pricing on (and which, judging by my experience outside Europe, I would argue they more often than not really don’t get). It is not uncommon that operators with complex volumetric pricing become more obsessed with data volume than with optimum quality (which might, in fact, generate even more volume). The figure below is a snapshot from August 2024 of the median speeds customers enjoy in mobile as well as fixed broadband networks in Western Europe. In most cases in Europe, customers today enjoy substantially faster fixed-broadband services than they would get in mobile networks. One should expect this to change how Telcos (at least integrated Telcos) design and plan their mobile networks and, consequently, maybe dramatically reduce the amount of mobile Capex we spend. There is little evidence that this is happening yet. However, I do anticipate, most likely naively, that the Telco industry will revise how mobile networks are architected, designed, and built with 6G.

Figure 11 shows that apart from one Western European country (Greece, also a fixed broadband laggard), all other markets have superior fixed broadband downlink speeds compared to what mobile networks can deliver. Note that the speed measurement data is based on the median statistic. Source: Speedtest Global Index, August 2024.

A Crisis of Too Much of a “Good” Thing?

Analysys Mason recently (July 2024) published a report titled “A Crisis of Overproduction in Bandwidth Means that Telecoms Capex Will Inevitably Fall.” The report explores the evolving dynamics of capital expenditure (Capex) in the telecom industry, highlighting that the industry is facing a turning point. The report argues that the telecom sector has reached a phase of bandwidth overproduction, where the infrastructure built to deliver data has far exceeded demand, leading to a natural decline in Capex over the coming years.

According to the Analysys Mason report, global Capex in the telecom sector has already peaked, with two significant investment surges behind it: the rollout of 5G networks in mobile infrastructure and substantial investments in fiber-to-the-premises (FTTP) networks. Both of these infrastructure developments were seen as essential for future-proofing networks, but now that the peaks in these investments have passed, Capex is expected to fall. The report predicts that by 2030, the Capex intensity (the proportion of revenue spent on capital investments) will drop from around 20% to 12%. This reduction is due to the shift from building new infrastructure to optimizing and maintaining existing networks.

The main messages that I take away from the Analysys Mason report are the following:

  • Overproduction of bandwidth: Telecom operators have invested heavily in building their networks. However, demand for data and bandwidth is no longer growing at the exponential rates seen in previous years.
  • Shifting Capex Trends: The telecom industry has experienced two investment peaks: one in mobile spending due to the initial 5G coverage rollout and another in fixed broadband due to fiber deployments. Now that these peaks have passed, Capex is expected to decline.
  • Impact of lower data growth: The stagnation in mobile and fixed data demand, combined with the overproduction of mobile and fixed bandwidth, makes further large-scale investment in network expansion unnecessary.

My take on Analysys Mason’s conclusions is that with the cyclic nature of Telco investments, it is natural to expect that Capex will go up and down. That Capex will cycle between 20% (peak deployment phase) and 12% (maintenance phase) seems very agreeable. However, I would expect that the maintenance level would continue to increase as time goes by unless we fundamentally change how we approach mobile investments.

As network capacity is built up at the beginning of a new technology cycle (e.g., 5G NR; GPON-, XG-PON-, and XGS-PON-based FTTH), it is not surprising that the amount of available capacity appears substantial. I would not call it a bandwidth overproduction crisis (although I agree that the overhead of provisioned carrying capacity compared to demand expectations seems historically high); it is a manifestation of the technologies we have developed and deployed today. Under real-world 5G NR conditions, users could see peak DL speeds ranging from 200 Mbps to 1 Gbps, with median 5G DL speeds of 100+ Mbps. The lower end of this range applies in areas with fewer available resources (e.g., less spectrum, fewer MIMO streams), while the higher end reflects better conditions, such as a user close to the cell tower with optimal signal conditions. Fiber-connected households on current GPON and XG-PON technology can sustain 1 to 10 Gbps downstream to the in-home ONT/CPE, although the in-home quality experienced over WiFi depends a lot on how the WiFi network has been deployed and how many concurrent users there are at any given time. As backhaul and backbone transmission solutions for mobile and fixed access will be modern and fiber-based, there is no reason to believe that user demand should be limited in any way (anytime soon), given that a well-optimized, modern fiber-optic network should be able to reach up to 100 Tbps (e.g., 10+ EB per month with 10 traffic hours per day).

Germany, the UK, Belgium, and a few smaller Western European countries will continue their fiber deployment for some years to bring their fiber coverage up to the level of countries such as France, Spain, Portugal, and the Netherlands. It is difficult to believe that these countries would not continue to invest substantial money in raising their fiber coverage from the current low levels. Countries with less than 60% fiber-to-the-home coverage account for 50+% of the overall Western European Capex level.

The fact that the Telco industry would eventually experience lower growth rates should not surprise anyone; that has been in the cards since growth began. The figure below takes actual mobile data from Ericsson’s Mobility Visualizer and applies a simple S-curve growth model that does a very good job of accounting for the observed behavior. A geometric growth model (exponential growth dynamics), while possibly accounting for the early stages of technology adoption and the resulting data growth, is not a reasonable model to apply here and is not supported by the actual data.

Figure 12 provides the actual exabytes (EB) per month with a fitted S-curve extrapolated beyond 2023. The S-curve is described by the Data Demand Limit (Ls), Growth Rate (k), and the Inflection Year (T0), where growth transitions from acceleration to deceleration. Source: Ericsson Mobility Visualizer.
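For the curious, the S-curve used here is the logistic function V(t) = Ls / (1 + e^(−k·(t − T0))). A minimal fitting sketch follows; the traffic points are hypothetical stand-ins roughly in line with the ~11 EB/month quoted for 2023, not Ericsson’s actual series.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, Ls, k, T0):
    """Logistic growth: demand limit Ls, growth rate k, inflection year T0."""
    return Ls / (1 + np.exp(-k * (t - T0)))

# Hypothetical WEU mobile traffic (EB/month) -- illustrative only.
years   = np.array([2014, 2016, 2018, 2020, 2022, 2023])
traffic = np.array([0.45, 1.0,  2.1,  4.2,  7.7,  10.0])

(Ls, k, T0), _ = curve_fit(s_curve, years, traffic, p0=[20, 0.5, 2022])
print(f"Ls ~ {Ls:.0f} EB/month, k ~ {k:.2f}, inflection year T0 ~ {T0:.0f}")
# With these illustrative points: Ls ~ 25, k ~ 0.4, T0 ~ 2024
```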

The growth dynamics, applied to the data extracted for the markets shown in the figure above, indicate that in Western Europe and the CEE (Central Eastern Europe), the inflection point should be expected around 2025. This is the year when the growth rates begin to decline. In Western Europe (and CEE), we would expect the growth rate to fall below 10% by 2030, assuming no fundamental changes to the growth dynamics occur. The inflection point for the North American markets (i.e., the USA and Canada) is around 2033, while for Asia it is expected a bit earlier (2030). Based on the current growth dynamics, North America will experience growth rates below 10% by 2036; for Asia, this is expected around 2033. How could FWA traffic growth change these results? The overall behavior would not change. The inflection point, and thus the onset of slower growth rates, may happen later, and the point at which we would expect a growth rate lower than 10% would still be a couple of years after the inflection year.

Let us, just for fun (usually the best reason), construct a counterfactual situation. Let us assume that data growth continues to follow geometric (exponential) growth indefinitely without reaching a saturation point or encountering any constraints (e.g., resource limits, user behavior limitations). The premise is that user demand for mobile and fixed-line data will continue to grow at a constant percentage rate (and thus accelerate in absolute terms). For mobile data growth, we use the 27% YoY growth of 2023 as the rate in our geometric growth model. Thus, every ca. 3 years, demand would double.
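The ~3-year doubling time follows directly from the growth rate:

```python
import math

cagr = 0.27  # 27% YoY growth (2023), held constant in the counterfactual
doubling_time = math.log(2) / math.log(1 + cagr)
print(f"Doubling time ~ {doubling_time:.1f} years")  # ~2.9 years
```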

If telecom data usage continued to grow geometrically, the implications would (obviously) be profound:

  • Exponential network demand: Operators would face exponentially increasing demand on their networks, requiring constant and massive investments in capacity to handle growing traffic. Once we reach the limits of the network’s carrying capacity, we have three years (at a CAGR of 27%) until demand has doubled. Obviously, any spectrum position would quickly become insufficient, and massive investments in new infrastructure (more mobile sites and more fiber) would be needed. Capacity would become the growth-limiting factor.
  • Costs: The capital expenditures (Capex) required to keep pace with geometric growth would skyrocket. Operators would have to continually upgrade or replace network equipment, expand physical infrastructure, and acquire additional spectrum to support the growing data loads. This would lead to unsustainable business models unless prices for services rose dramatically, making such growth scenarios unaffordable for consumers, and long before that for the operators themselves.
  • Environmental and Physical Limits: The physical infrastructure necessary to support geometric growth (cell towers, fiber optic cables, data centers) would also have environmental consequences, such as increased energy consumption and carbon emissions. Additionally, telecom providers would face the law of diminishing returns as building out and maintaining these networks becomes less economically feasible over time.
  • Consumer Experience: The geometric growth model assumes that user behavior will continue to change dramatically. Consumers would need to find new ways to utilize vast amounts of bandwidth beyond streaming and current data-heavy applications. Continuous innovation in data-hungry applications would be necessary to sustain the increased data usage.

The counterfactual argument shows that geometric growth, while useful for the early stages of data expansion, becomes unrealistic as it leads to unsustainable economic, physical, and environmental demands. The observed S-curve growth is more appropriate for describing mobile data demand because it accounts for saturation, the limits of user behavior, and the constraints of telecom infrastructure investment.

Back to Analysys Mason’s expected, and quite reasonable, consequence of the (progressively) lower data growth: large-scale investment would become unnecessary.

While the assertion is reasonable, as said, mobile obsolescence hits the industry every 5 to 7 years, regardless of whether there is a new radio access technology (RAT) to take over. I don’t think this will change, although the industry may spend much more on software annually than previously and less on hardware modernization during obsolescence transformations. Since I suspect the software will impose ever harder requirements on the underlying hardware (whether on-prem or in the cloud), modernization investments in the hardware part will continue to be substantial. This is not even considering the euphoria that may surround the next-generation RAT (e.g., 6G).

The fixed broadband fiber infrastructure’s economic and useful life is much longer than that of the mobile infrastructure, and the same holds for the optical transmission equipment used for access, aggregation, and backbone (although its life is not as long as that of the optical fiber itself). Additionally, fiber-based fixed broadband networks are operationally (much) more efficient than their mobile counterparts, which alludes to the need to re-architect and redesign how they are built, as they are no longer needed inside customer dwellings. Overall, it is not unreasonable to expect that fixed broadband modernization investments will occur less frequently than for mobile networks.

Is Enough Customer Bandwidth a Thing?

Is there an optimum level of bandwidth, in bits per second, at which a customer is fully (optimally) served, beyond which it does not matter whether the network could provide far more speed or quality?

For example, for most mobile devices, phones, and tablets, much more than 10 Mbps for streaming would not make much of a viewing difference for the typical customer. Given the assumptions about eyesight and typical viewing distances, more than 90% of people would not notice an improvement in viewing experience on a mobile phone or tablet beyond 1080p resolution. Increasing the resolution beyond that point, such as to 1440p (Quad HD) or 4K, would likely not provide a noticeably better experience for most users, as their visual acuity limits their ability to discern finer details on small screens. This means the focus for improving mobile and tablet displays shifts from resolution to other factors like color accuracy, brightness, and contrast rather than chasing higher pixel counts. Such an optimization strategy should not necessarily result in higher bandwidth requirements, although moving to higher color depth or greater brightness/dynamic range (e.g., HDR vs. SDR) would lead to a moderate increase in the required data rates.

A throughput between 50 and 100 Mbps for fixed broadband TV streaming currently provides an optimum viewing experience. Of course, a fixed broadband household may have many concurrent bandwidth demands that would justify a 1 Gbps fiber to the home or maybe even 10 Gbps downstream to serve the whole household at an optimum experience at any time.

Figure 13 provides the data rate ranges per streaming format, device type, and typical screen size. The data rate required for streaming video content is determined by various factors, including video resolution, frame rate, compression, and screen size. The data rate calculation (in Mbps) for a given streaming format involves estimating the amount of data required to encode each frame and multiplying by the frame rate and the compression efficiency. The methodology can be found in many places; see also my blog “5G Economics – An Introduction (Chapter 1)” from Dec. 2016.
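A rough sketch of that process, with illustrative parameter values (the compression factor in particular is my assumption; real codecs vary widely):

```python
# Raw bits per frame x frame rate, reduced by an assumed compression factor.
def stream_mbps(width, height, bit_depth=8, channels=3, fps=30, compression=200):
    raw_bits_per_frame = width * height * bit_depth * channels
    return raw_bits_per_frame * fps / compression / 1e6

print(f"1080p @ 30fps ~ {stream_mbps(1920, 1080):.1f} Mbps")          # ~7.5
print(f"4K    @ 60fps ~ {stream_mbps(3840, 2160, fps=60):.1f} Mbps")  # ~59.7
```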

Let’s move on to high-end and fully immersive virtual reality experiences. The user bandwidth requirement may exceed 100 Mbps and could even require a sustainable gigabit-per-second bandwidth delivered to the user device for an optimum experience. However, jitter and latency performance may keep such fully immersive or high-end VR experiences from being fully optimal over mobile or fixed networks with long distances to the supporting (edge) data centers and cloud servers where the related application resides. In my opinion, this kind of ultra-high-end specialized service might be better run exclusively on location.

Size Matters.

I once had a CFO who was adamant that an organization’s size on its own would drive a certain amount of Capex. I would, at times, argue that an organization’s size should depend on the number of activities required to support customers (or, more generally, the number of revenue-generating units (RGUs) your given company has or expects to have) and the revenue those generate. In my logic, at the time, the larger a country in terms of surface area, population, and households, the more Capex-related activities would be required, thus also resulting in the need for a bigger organization. If you have more RGUs, it might also not be too surprising that the organization would be bigger.

Since then, I have scratched my head many times when looking at country characteristics, RGUs, and revenues, asking how they can justify a given size of Telco organization, knowing that there are other Telcos out there that spend the same or more Capex with a substantially smaller organization (also after considering differences in sourcing strategies). I have never been with an organization that did not feel pressured work-wise and believe it was too lightly staffed to operate, irrespective of its size and the Capex and activities under management.

Figure 14 illustrates the correlation between the Capex and the number of FTEs in a Telco organization. It should be noted that the upper right point results in a very good correlation of 0.75. Without this point, the correlation would be around 0.25. Note that sourcing does have a minor effect on the correlation.

The above figure illustrates a strong correlation between Capex and the number of people in a Telco organization; however, the correlation would be weaker without the upper right data point. In the data shown here, you will find no correlation between FTEs and a country’s size, such as population or surface area, which is also the case for Capex. There is a weak correlation between FTEs and RGUs and a stronger correlation with revenues. Capex, in general, is very strongly correlated with revenues. The best multi-linear regression model, chosen by p-value, is one where Capex relates to FTEs and RGUs. For a Telco with 1,000 employees and 1 million RGUs, approximately 50% of the Capex could be explained by the number of FTEs. Of course, in the analysis above, we must remember that correlation does not imply causation. You will find telcos that, in most Capex-driver aspects, should be reasonably similar in their investment profiles over time, except that the telco with the largest organization will consistently invest more Capex. While I think this is, in particular, an incumbent-vs-challenger issue, it is a much broader issue in our industry.
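A minimal sketch of such a multi-linear model, Capex ~ FTEs + RGUs, with hypothetical data points (not the benchmark set behind Figure 14):

```python
import numpy as np

# Hypothetical telco observations: FTEs, RGUs (millions), Capex (mEUR).
ftes  = np.array([800, 1000, 1500, 2500, 4000, 7000], dtype=float)
rgus  = np.array([0.8, 1.0,  1.6,  2.5,  5.0,  9.0])
capex = np.array([120, 160,  240,  380,  650,  1150], dtype=float)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(ftes), ftes, rgus])
coef, *_ = np.linalg.lstsq(X, capex, rcond=None)
print(f"Capex ~ {coef[0]:.0f} + {coef[1]:.3f}*FTE + {coef[2]:.0f}*RGU(m)")
```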

Having spent most of my 20+ year career in telecom involved in Capex planning and budgeting, it is clear to me that the size of an organization plays a role in the size of a Capex budget. Intuitively, this should not be too surprising. If the Capex is lower than the capacity of your organization, you may have to lay off people, with the risk of being short of resources in the future as you cycle through modernization or a new technology introduction. On the other hand, if the Capex needs are substantially larger than the organization can cope with, including any sourcing agreements in place, it may not make much sense to ask for more than what can be managed with the resources available (apart from it being sub-optimal for cash flow optimization).

Telco companies with both fixed and mobile broadband infrastructure in their portfolio, but with poorly optimized organizations and strict demarcation lines between people working on fixed and mobile broadband, will, in general, have much worse Capex efficiencies than fully fixed-mobile converged organizations (not to mention poorer operational efficiencies and work practices compared to integrated organizations). Here, the size of, for example, a mobile organization will drive behavior that would rather spend above and beyond on Radio Access Network Capex than use cleverer, proven solutions (e.g., Opanga’s RAIN) to optimize quality and capacity across the mobile network.

In general, the resistance to smarter solutions and clever ideas that may save Capex (and/or Opex) manifests in a manifold of behaviors that I have observed over my 25+ year career (and some I might even have adopted on occasion … but shhhh ;-).

Budget heuristics:

  • 𝗦𝗶𝘇𝗲 𝗱𝗼𝗲𝘀𝗻'𝘁 𝗺𝗮𝘁𝘁𝗲𝗿 𝗽𝗮𝗿𝗮𝗱𝗶𝗴𝗺: Irrespective of size, my organization will always be busy and understaffed.
  • 𝗧𝗵𝗲 𝗚𝗼𝗹𝗱𝗶𝗹𝗼𝗰𝗸𝘀 𝗙𝗮𝗹𝗹𝗮𝗰𝘆: My organization’s size and structure will determine its optimum Capex spending profile, allowing it to stay busy (and understaffed).
  • 𝗧𝗮𝗻𝗴𝗶𝗯𝗹𝗲 𝗕𝗶𝗮𝘀: A hardware (infrastructure-based) solution is better and more visible than a software solution. I feel more comfortable with my organization being busy with hardware.
  • 𝗧𝗵𝗲 𝗦𝘂𝗻𝗸 𝗖𝗼𝘀𝘁 𝗙𝗮𝗹𝗹𝗮𝗰𝘆: I don’t trust (allegedly) clever software solutions that may lower or postpone my Capex needs and, by that, reduce the need for people in my organization.
  • 𝗕𝘂𝗱𝗴𝗲𝘁 𝗠𝗮𝘅𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗧𝗲𝗻𝗱𝗲𝗻𝗰𝘆: My organization’s importance, and my self-importance, is measured by how much Capex I have in my budget. I will resist giving part of my budget away to others.
  • 𝗦𝘁𝗮𝘁𝘂𝘀 𝗤𝘂𝗼 𝗕𝗶𝗮𝘀: I will resist innovation that may reduce my Capex budget, even if it may also help reduce my Opex.
  • 𝗝𝗼𝗯 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗼𝗻𝗶𝘀𝗺: I resist innovation that may result in a more effective organization, i.e., fewer FTEs.
  • 𝗖𝗮𝗽𝗮𝗰𝗶𝘁𝘆 𝗖𝗼𝗺𝗳𝗼𝗿𝘁 𝗦𝘆𝗻𝗱𝗿𝗼𝗺𝗲: The more physical capacity I build into my network, the more we can relax. Our goal is a “Zero Worry Network.”
  • 𝗧𝗵𝗲 𝗙𝗲𝗮𝗿 𝗙𝗮𝗰𝘁𝗼𝗿: The leadership is “easy to scare” into more capacity Capex when presented with the “if-not” consequences (e.g., losing best-network awards, poorer customer experience, …).
  • 𝗧𝗵𝗲 𝗕𝘂𝗱𝗴𝗲𝘁 𝗜𝗻𝗲𝗿𝘁𝗶𝗮: Return on Investment (ROI) prioritization is rarely considered (rigorously), particularly after a budget has been released.

𝗔 𝘄𝗮𝗿𝗻𝗶𝗻𝗴: although each of these is observable in real life, the reader should be aware that there is also a fair amount of deliberate ironic provocation in the above heuristics.

We should never underestimate that within companies, two things make you important (including self-important and self-worthy): (1) the size of your organization and (2) the amount of money, your budget, that you have for your organization to be busy with.

Any innovation that may lower an organization’s size and budget will be met with resistance from that organization.

The Balancing Act of Capex to Opex Transformations.

Telco cost structures and Capex have evolved significantly due to accounting changes, valuation strategies, technological advancements, and economic pressures. While shifts like IFRS (International Financial Reporting Standards), issued by the International Accounting Standards Board (IASB), have altered how costs are reported and managed, changes in business strategies, such as cell site spin-offs, cloud migrations, and the transition to software-defined networks, have reshaped Capex allocations somewhat. At the same time, economic crises and competitive pressures have influenced Telcos to continually reassess their capital investments, balancing the need to optimize value, innovation, and growth with financial diligence.

One of the most significant drivers of change has been the shift in accounting standards, particularly the introduction of IFRS16, which replaced the older lease accounting approaches. Under IFRS16, nearly all leases are now recognized on the balance sheet as right-of-use assets and corresponding liabilities. This change has particularly impacted Telcos, which often engage in long-term leases for cell sites, network infrastructure, and equipment. Previously, under GAAP (Generally Accepted Accounting Principles) and the older IFRS rules, many leases were treated as operating leases, keeping them off the balance sheet, with their associated costs considered operational expenditures (Opex). Now, under IFRS16, these leases are capitalized, leading to an increase in reported Capex as assets and liabilities grow to reflect the leased infrastructure. This shift has redefined how Telcos manage and report their Capex, as what was previously categorized as leasing costs now appears as capital investments, altering key financial metrics: EBITDA appears stronger post-IFRS16 (lease costs move out of Opex), while reported debt ratios worsen as lease liabilities come onto the balance sheet.
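A stylized sketch of the EBITDA effect (all numbers hypothetical): moving an operating lease from Opex into a capitalized right-of-use asset lifts reported EBITDA, since the lease cost now appears below EBITDA as depreciation and interest.

```python
# Illustrative IFRS16 effect on EBITDA -- all figures hypothetical (mEUR).
revenue = 1_000
opex_excl_leases = 600
lease_payments = 50   # annual cell-site lease payments

ebitda_pre_ifrs16 = revenue - opex_excl_leases - lease_payments  # lease in Opex
ebitda_post_ifrs16 = revenue - opex_excl_leases                  # lease capitalized
print(ebitda_pre_ifrs16, ebitda_post_ifrs16)  # 350 vs 400: EBITDA looks stronger
```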

Simultaneously, valuation strategies and financial priorities have driven significant shifts in Telco Capex. Telecom companies have increasingly focused on enhancing metrics such as EBITDA and capital efficiency, leading them to adopt strategies to reduce heavy capital investments. One such strategy is the cell site spin-off, where Telcos sell their tower and infrastructure assets to specialized independent companies or create separate entities that manage these assets. These spin-offs have allowed Telcos to reduce the Capex tied to maintaining physical assets, replacing it with leasing arrangements, which shift costs towards operational expenses. As a result, Capex related to infrastructure declines, freeing up resources for investments in other areas such as technology upgrades, customer services, and digital transformation. The spun-off infrastructure often results in significant cash inflows from the sale, which telcos can then use to improve their balance sheets by reducing debt, reinvesting in new technologies, or distributing higher dividends to shareholders. However, this shift may also reduce control over critical network infrastructure and create long-term lease obligations, resulting in substantial operational expenses as telcos have to pay rent on the spun-off infrastructure, increasing Opex pressure. I regularly see analysts using tower spin-offs as an argument for why telcos’ reported Capex is no longer wholly comparable with past capital spending, as the passive part of a cell site build used to be a substantial share of mobile site Capex, up to 50% to 60% for a standard site build and beyond that for special sites. I believe that since not many new cell sites are being built any longer, certainly not as many as in the 90s and 2000s, this effect on overall Capex is very minor. Most new sites are built at a maintenance level, covering new residential or white-spot areas.

When considering mobile network evolution and the impact of higher frequencies, it is important not to default to the assumption that more cell sites will always be necessary. If all things are equal, the coverage cell range of a high carrier frequency would be shorter (often much shorter) than the coverage range at a lower frequency. However, all things are not equal. This misconception arises from a classical coverage approach, where the frequency spectrum is radiated evenly across the entire cell area. However, modern cellular networks employ advanced technologies such as beamforming, which allows for more precise and efficient distribution of radio energy. Beamforming concentrates signal power in specific directions rather than thinly spreading it across a wide area, effectively increasing reach and signal quality without additional sites. Furthermore, the support for asymmetric downlink (higher) and uplink (lower) carrier frequencies allows for high-quality service downlink and uplink in situations where the uplink might be challenged at higher frequencies.

Moreover, many mobile networks today have already been densified to accommodate coverage needs and capacity demands. This densification often occurred when spectrum resources were scarce, and the solution was to add more sites for improved performance rather than simply increasing coverage. As newer frequency bands become available, networks can leverage beamforming and existing densification efforts to meet coverage and capacity requirements without necessarily expanding the number of cell sites. Thus, the focus should be optimizing the deployment of advanced technologies like beamforming and Massive MIMO rather than increasing the site count by default. In many cases, densified networks are already equipped to handle higher frequencies, making additional sites unnecessary for coverage alone.

The migration to public cloud solutions from, for example, Amazon’s AWS or Microsoft Azure is another factor influencing Telco Capex. Historically, telecom companies relied on significant upfront Capex to build and maintain their own data centers (or switching locations, as they were once called, when they mainly housed big proprietary legacy telco switching infrastructure), network operations centers, and monolithic IT infrastructure. However, with the rise of cloud computing, Telcos are increasingly migrating to cloud-based solutions, reducing the need for large-scale physical infrastructure investments. This shift from hardware to cloud services changes the composition of Capex: the need for extensive data center investments declines, and more flexible, subscription-based cloud services are adopted. Although Capex for physical infrastructure decreases, there is a shift towards Opex as Telcos pay for cloud services on a usage basis.

Further, the transition to software-defined networks (SDNs) and software-centric telecom solutions has transformed the nature of Telco Capex. In the past, Telcos heavily depended on proprietary hardware for network management, which required substantial Capex to purchase and maintain physical equipment. However, with the advancement of virtualization and SDNs, telcos have shifted away from hardware-intensive solutions to more software-driven architectures. This transition reduces the need for continuous Capex on physical assets like routers, switches, and servers and increases investment in software development, licensing, and cloud-based platforms. The software-centric model allows, in theory, Telcos to innovate faster and reduce long-term infrastructure costs.

The Role of Capex in Financial Statements.

Capital expenditures play a critical role in shaping a telecommunications company’s financial health, influencing its income statement, balance sheet, and cash flow statements in various ways. At the same time, Telcos establish financial guardrails to manage the impact of Capex spending on dividends, liquidity, and future cash needs.

In the income statement (see Figure 15 below), Capex does not appear directly as an expense when it is incurred. Instead, it is capitalized on the balance sheet and then expensed over time through depreciation (for tangible assets) or amortization (for intangible assets). This gradual recognition of the capital expenditure leads to higher depreciation or amortization charges over future periods, reducing the company’s net income. While the immediate impact of Capex is not seen on the income statement, the long-term effects can improve revenue when investments enhance capacity and quality, as with technological upgrades like 5G infrastructure. However, these benefits are offset by the fact that depreciation lowers profitability in the short term (as the net profit is lowered). The last couple of radio access technology (RAT) generations have, in general, increased telcos’ operational expenses (i.e., Opex), as more cell sites are required, heavier site configurations are implemented (e.g., multi-band antennas, massive MIMO antennas), and energy consumption has increased in absolute terms. Although every new generation has become relatively more energy efficient in kWh/GB terms, absolute consumption has grown, and that matters for the income statement and the incurred operational expenses.

Figure 15 illustrates the typical income statement one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex has an influence, although Capex is not directly stated in the income statement. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-revenue ratio, and a 22% tax rate.
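To make the capitalization mechanics concrete, here is a minimal sketch assuming straight-line depreciation of a single (hypothetical) asset:

```python
# A 100 mEUR asset depreciated straight-line over 5 years hits net income
# by 20 mEUR/year rather than 100 mEUR upfront. Figures are illustrative.
capex, useful_life_years = 100, 5
annual_depreciation = capex / useful_life_years
for year in range(1, useful_life_years + 1):
    print(f"Year {year}: depreciation charge {annual_depreciation:.0f} mEUR")
```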

On the balance sheet (see Figure 16 below), Capex increases the value of a company’s fixed assets, typically recorded as property, plant, and equipment (PP&E). As new assets are added, the company’s overall asset base grows. However, this is balanced by the accumulation of depreciation, which gradually reduces the book value of these assets over time. How Capex is financed also affects the company’s liabilities or equity: if debt is used to finance Capex, the company’s liabilities increase; if equity financing is used, shareholders’ equity increases. The balance sheet, together with the Depreciation & Amortization (D&A) typically given in the income statement, can help us estimate the amount of Capex a Telco has spent. The capital expense, typically not directly reported in a company’s financial statements, can be estimated by adding the year-over-year changes in PP&E and intangible assets to the D&A.

Figure 16 illustrates the balance sheet one may find in a telco’s annual report or official financial statements. The purpose here is to show where Capex has an influence. Knowing the Depreciation & Amortization (D&A), typically shown in the income statement, the change in PP&E and intangible assets between two subsequent years provides an estimate of the current year’s Capex. Note: the numbers in the above financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-revenue ratio, and a 22% tax rate.
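The estimation method described in the caption reduces to one line of arithmetic; the statement figures below are hypothetical.

```python
# Capex_t ~ (PPE_t - PPE_t-1) + (Intangibles_t - Intangibles_t-1) + D&A_t
ppe    = {"2022": 10_000, "2023": 10_800}  # property, plant & equipment (mEUR)
intang = {"2022": 3_000,  "2023": 3_100}   # intangible assets (mEUR)
d_and_a_2023 = 1_900                       # depreciation & amortization (mEUR)

capex_2023 = ((ppe["2023"] - ppe["2022"])
              + (intang["2023"] - intang["2022"])
              + d_and_a_2023)
print(f"Estimated 2023 Capex ~ {capex_2023} mEUR")  # 800 + 100 + 1900 = 2800
```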

In the cash flow statement, Capex appears as an outflow under the category of cash flows from investing activities, representing the company’s spending on long-term assets. In the short term, this creates a significant reduction in cash. However, well-planned Capex to enhance infrastructure or expand capacity can lead to higher operating cash flows in the future. If Capex is funded through debt or equity issuance, the inflow of funds will be reflected under cash flows from financing activities.

Figure 17 illustrates the cash flow statement one may find in a telco’s annual report or official financial statements (with a bit more detail than would usually be provided). We would typically capture 70+% of a Telco’s Capex level by looking at the “Net Cash Flow Used in Investing Activities”, unless we are offered the purchases of tangible and intangible assets directly. Note: the numbers in the financial statement are for illustration only, representing a Telco with a 35% EBITDA margin, a 20% Capex-to-revenue ratio, and a tax rate of 22%.

To ensure Capex does not overly strain the company’s financial health or limit returns to shareholders, Telcos put financial guardrails in place. Regarding dividends, many companies set specific dividend payout ratios, ensuring that a portion of earnings or free cash flow is consistently returned to shareholders. This practice balances returning value to shareholders with retaining sufficient earnings to fund operations and investments. It is also not unusual that Telcos commit to a given dividend level, which as a consequence may place a cap on Capex spending, or force Capex rationing, within a given planning period, as management must balance cash outflows between shareholder returns and strategic investments. This may lead to prioritizing essential projects, delaying less critical investments, or seeking alternative financing to maintain both Capex and dividend commitments. Additionally, Telcos often use dividend coverage ratios to ensure they can sustain dividend payouts even during periods of heavy capital expenditure.

Some telcos have chosen not to commit to dividends in order to maximize Capex investments, aiming to reinvest profits into the business to drive long-term growth and create higher shareholder value. This strategy prioritizes network expansion, technological upgrades, and new market opportunities over immediate cash returns, allowing the company to maintain financial flexibility and pursue strategic objectives more aggressively. When a telco decides to start paying dividends, it may indicate that management believes there are fewer high-value investment opportunities that can deliver returns above the company’s cost of capital. The decision to pay dividends often reflects the view that shareholders may derive greater value from the cash than the company could generate by reinvesting it. Often it signals a shift to a higher degree of maturity (e.g., corporate- or market-wise) from having been a growth-focused company (i.e., the Telco has passed the inflection point of growth). An example of maturity, and maybe less about growth opportunities, is the case of T-Mobile USA, which in 2024 announced that it would pay a dividend for the first time in its history, targeting roughly 10 percent annual growth per share (note: Deutsche Telekom AG gained ownership in 2001; the company was founded in 1994).

Liquidity management is another consideration. Companies monitor their liquidity through current or quick ratios to ensure they can meet short-term obligations without cutting dividends or pausing important Capex projects. To provide an additional safety net, Telcos often maintain cash reserves or access to credit lines to handle immediate financial needs without disrupting long-term investment plans.

Regarding debt management, Telcos must carefully balance using debt to finance Capex. Companies often track their debt-to-equity ratio to avoid over-leveraging, which can lead to higher interest expenses and reduced financial flexibility. Another common metric is net debt to EBITDA, which ensures that debt levels remain manageable relative to the company’s earnings. To avoid breaching agreements with lenders, Telcos often operate under covenants that limit the amount they can spend on Capex without negatively affecting their ability to service debt or pay dividends.

Telcos also plan long-term cash flow to ensure Capex investments align with future financial needs. Many companies establish a capital allocation framework that prioritizes projects with the highest returns, ensuring that investments in infrastructure or technology do not jeopardize future cash flow. Free cash flow (FCF) is a particularly important metric in this context, as it represents the amount of cash available after covering operating expenses and Capex. A positive FCF ensures the company can meet future cash needs while returning value to shareholders through dividends or share buybacks.
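
A minimal sketch of these guardrails as a screening calculation follows; the threshold values are illustrative assumptions, as actual covenant and policy levels vary by telco and lender:

```python
# Sketch of the financial guardrails discussed above, with illustrative
# threshold values (actual covenant levels vary by telco and lender).

revenue = 6000.0
ebitda = 0.35 * revenue       # 35% EBITDA margin, as in the figures above
capex = 0.20 * revenue        # 20% Capex-to-revenue ratio
op_cash_flow = 0.80 * ebitda  # crude proxy after tax, interest, working capital
net_debt = 2.5 * ebitda       # assumed leverage
dividends = 400.0             # assumed committed payout

free_cash_flow = op_cash_flow - capex   # FCF = operating cash flow - Capex

checks = {
    "FCF covers dividend":       free_cash_flow >= dividends,
    "Net debt / EBITDA <= 3.0x": net_debt / ebitda <= 3.0,
    "Capex / revenue <= 25%":    capex / revenue <= 0.25,
}
for rule, ok in checks.items():
    print(f"{rule}: {'OK' if ok else 'BREACH'}")
```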

Capex budgeting and prioritization are also essential tools for managing large investments. Companies assess the expected return on investment (ROI) and the payback period for Capex projects, ensuring that capital is allocated efficiently. Projects with assumed high strategic value, such as 5G infrastructure upgrades, household fiber coverage, or strategic fiber overbuild, are often prioritized for their potential to drive long-term revenue growth. Monitoring the Capex-to-sales ratio helps ensure that capital investments are aligned with revenue growth, preventing over-investment in infrastructure that may not yield sufficient returns.
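
As a sketch of such a screen, here is a simple payback-period calculation over hypothetical project cash-flow profiles (the project names and numbers are purely illustrative):

```python
# Simple payback-period screen for ranking Capex projects.
# Cash-flow profiles are hypothetical and purely illustrative.

def payback_years(initial_capex, annual_cash_flows):
    """Return the year in which cumulative cash flow recovers the Capex,
    or None if it never does within the horizon."""
    cumulative = 0.0
    for year, cf in enumerate(annual_cash_flows, start=1):
        cumulative += cf
        if cumulative >= initial_capex:
            return year
    return None

projects = {
    "5G site upgrade": (100, [20, 30, 35, 35, 35]),
    "Fiber overbuild": (250, [20, 30, 40, 50, 60]),
}
for name, (capex, flows) in projects.items():
    print(name, "->", payback_years(capex, flows), "years")
# -> 4 years for the upgrade; no payback within 5 years for the overbuild
```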

CAPEX EXPECTATIONS 2024 to 2026.

Considering all 54 telcos in the pool of the New Street Research quarterly review (ignoring MasMovil and WindHellas, which are in the process of being integrated), each with their individual as well as country “peculiarities” (e.g., state of 5G deployment, fiber-optic coverage, fiber uptake, merger-related integration Capex, general revenue trends, …), it is possible to get a directional idea of how Capex will develop for each individual telco as well as of the overall trend. This is illustrated in the figure below on a Western European level.

I expect that we will not see a Capex reduction in 2024, supported by how Capex in the third and fourth quarters usually behaves compared to the first two quarters, and due to integration and transformation Capex carrying over from 2023 into 2024, possibly with a tail end late in 2024. I expect most telcos will cut back on new mobile investments, even if some might start ripping out radio access infrastructure from Chinese suppliers. However, I also believe that telcos will try to delay replacement to 2026-2028, when the first round of 5G modernization activities would be expected (and, for some countries, is even overdue).

While 5G networks have made significant advancements, the rollout of 5G SA remains limited. By the end of 2023, only five of 39 markets analyzed by GSMA had reached near-complete adoption of 5G SA networks; 17 markets had yet to launch 5G SA at all. One of the primary barriers is the high cost of investment required to build the necessary infrastructure. The expansion and densification of 5G networks, such as installing more base stations, are essential to support 5G SA. According to GSMA, many operators are facing financial hurdles, as returns in many markets have been flat, with any increase mainly due to inflationary price corrections rather than incremental or new usage. I suspect that telcos may also be more conservative (and even more realistic, maybe) in assessing the real economic potential of the features enabled by migrating to 5G SA, e.g., advanced network slicing, ultra-low latency, and massive IoT capabilities, in comparison with the capital investments and efforts they would need to incur. I should point out that any core network investments supporting 5G SA would not be expected to have a visible impact on telcos’ Capex budgets, as these would be expected to be less than 10% of the mobile Capex.

Figure 18 shows the 2022 status of homes covered by fiber in 16 Western European countries, as well as the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical regional/city basis). The percentages (yellow color) above the chart show each country’s share of total 2022 Western European Capex; e.g., Germany’s share of the 2022 Capex was 18%, with ca. 19% of all German households covered by fiber. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

In 2022, a bit more than 50% of all Western European households were covered by fiber (see Figure 18 above), which amounts to approximately 85 million households with fiber coverage. This also leaves approximately 80 million households without fiber reach. Almost 60% of households without fiber coverage are in Germany (38%) and the UK (21%). Both Germany and the UK contributed about 40% of the total Western European Capex spend in 2022.

Moreover, I expect there are still Western European markets where the Capex priority is increasing fiber-optic household coverage. In 2022, there was a peak in new households covered by fiber in Western Europe (see Figure 19 below), with 13+ million households covered according to the European Commission’s report “Broadband Coverage in Europe 2013-2022“. Germany (a fiber laggard) and the UK, which account for more than 35% of the Western European Capex, are expected to continue to invest substantially in fiber coverage until the end of the decade. As Figure 19 below illustrates, there is still a substantial amount of Capex required to close the fixed broadband coverage gap of some Western European countries.

Figure 19 illustrates the number of households covered by fiber (homes passed) and the number of millions of new households covered in a year. The period from 2017 to 2022 is based on actuals. For the period from 2023 to 2026, new households covered are forecasted based on the last 5-year average deployment pace or the maximum pace over the last 5 years (urban: e.g., DE, IT, NL, UK, …), with deceleration as coverage reaches 95% for urban areas and 80% for rural (note: this may be optimistic for some countries). The fiber deployment model differentiates between urban and rural areas. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

I should point out that I am not assuming telcos would be required, over the next couple of years, to swap out Chinese suppliers outside the scope of the European Commission’s “EU 5G Toolkit for Security” framework, which mainly focuses on 5G mobile networks, eventually including the radio access network. It should be kept in mind that there is a relatively big share of high-risk suppliers within Western European (actually, most European Union member states’) fixed broadband networks (e.g., core routers & switches, SBCs, OLTs/ONTs, MSAPs) which, if subjected to “5G Toolkit for Security”-like regulation, such as is in effect in Denmark (i.e., “The Danish Investment Screening Act”), would result in a substantial increase in telcos’ fixed capital spend. We may see some Western European telcos commence replacement programs as equipment becomes obsolete (or near obsolete), and I would expect fixed broadband Capex to remain relatively high for telcos in Western Europe even beyond 2026.

Thus, overall, I think it is not unrealistic to anticipate a decrease in Capex over the next 3 years. Contrary to some analysts’ expectations, however, I do not see the lower Capex level as persistent, but rather as a temporary dip, for the reasons given above in this blog.

Figure 20 illustrates the pace and financial requirements for fiber-to-the-premises (FTTP) deployment across the EU, emphasizing the significant challenges ahead. Germany needs the highest number of households passed per week and the largest investments at €32.9 billion to reach 80% household coverage by 2031. The total investment required to reach 80% household fiber coverage by 2031 is estimated at over €110 billion, with most of this funding allocated to urban areas. Despite progress, more than 57% of Western European households still lack fiber coverage as of 2022. Achieving this goal will require maintaining the current pace of deployment and overcoming historical performance limitations. Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

CAPEX EXPECTATIONS TOWARDS 2030.

Taking the above Capex forecasting approach, based on the individual 54 Western European telcos in the New Street Research quarterly review, it is relatively straightforward, though not per se very accurate, to extend the forecast to 2030, as shown in the figure below.

It is worth mentioning that predicting Capex over such a relatively long period of ten years is prone to a high degree of uncertainty and can only be done with relatively high reliability if very detailed information is available on each telco’s long-term and short-term strategy as well as its economic outlook. In my experience from working with very detailed bottom-up Capex models covering a five-year-and-beyond horizon (which is not the approach I have used here, simply for lack of the information required for such an exercise not to be futile), such forecasting is already prone to a relatively high degree of uncertainty even with all the information, a solid strategic outlook, and reasonable assumptions up front.

Figure 21 illustrates Western Europe’s projected capital expenditure (Capex) development from 2020 to 2030. The slight increase in Capex towards 2030 is primarily driven by the modernization of 5G radio access networks (RAN), which could potentially incorporate 6G capabilities, and by the further deployment of 5G Standalone (SA) networks. Additionally, there is a focus on swapping out high-risk suppliers in the mobile domain and completing heavy fiber household coverage in the remaining laggard countries. A scenario in which the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G mobile domain, is extended to fixed broadband networks has not been factored into the model represented here. The percentages on the chart represent the development of the overall Capex-to-total-revenue ratio over the period.

The figure shows the capital expenditure trends in Western Europe from 2020 to 2030, with projections indicating a steady investment curve (remember that this is the aggregation of 54 Western European telcos’ Capex development over the period).

A noticeable rise in Capex towards 2030 can be attributed to several key factors, primarily the modernization of 5G Radio Access Networks (RAN). This modernization effort will likely include upgrades to the current 5G infrastructure and the potential integration of 6G (or rebranded 5G SA) capabilities as Europe prepares for the next generation of mobile technology, which I still believe is an unavoidable direction. Additionally, deploying or expanding 5G Standalone (SA) networks, which offer more advanced features such as network slicing and ultra-low latency, will further drive investments.

Another significant factor contributing to the increased Capex is the planned replacement of high-risk suppliers in the mobile domain. Countries across Western Europe are expected to phase out network equipment from suppliers deemed risky for national security, aligning with broader EU efforts to ensure a secure telecommunications infrastructure. I expect a very strong push from some member state regulators and the European Commission to finish the replacement by 2027/2028. I also expect impacted telcos (of a certain size) to push back and attempt to time a high-risk supplier swap out with their regular mobile infrastructure obsolescence program and introduction of 6G in their networks towards and after 2030.

Figure 22 shows the projections for 2023 and 2030 for the number of homes covered by fiber in Western European countries and the number of households remaining. It should be noted that a 100% coverage level may be unlikely, and this data does not consider fiber overbuild (i.e., multiple companies covering the same households with their individual fiber deployments). Fiber overbuild becomes increasingly likely as coverage exceeds 80% (on a geographical regional/city basis). Source: based on Omdia & Point Topic’s “Broadband Coverage in Europe 2013-2022” (EU Commission Report).

Simultaneously, Western Europe is expected to complete the extensive rollout of fiber-to-the-home (FTTH) networks, as illustrated by Figure 20 above, particularly in countries lagging behind in fiber deployment, such as Germany, the UK, Belgium, Austria, and Greece. These countries will likely have finished covering the majority of households (80+%) with high-speed fiber by the end of the decade. On this topic, we should remember that telcos are using various fiber deployment models that minimize (and optimize) their capital investment levels. By 2030, I would expect almost 80% of all Western European households to be covered by fiber, and thus most consumers and businesses will have easy access to gigabit services to their homes by then (and, in most countries, long before 2030). Germany is still expected to be the Western European fiber laggard by 2030, accounting for an increased 50+% share of the Western European households not covered by fiber (note: in 2022, this share was 38%). Most other countries will have reached and exceeded 80% fiber household coverage.

It is also important to note that my Capex model does not assume an extension of the European Commission’s 5G Security Toolkit, which focuses on excluding high-risk suppliers in the 5G domain, to fixed broadband networks. If the legal framework were applied to the fixed broadband sector as well, an event I consider very likely, forcing the removal of high-risk suppliers from fiber broadband networks, Capex requirements would likely increase significantly beyond the projections represented in my assessment, with the last years of the decade focused on high-risk supplier replacement in Western European telcos’ fixed broadband transport and IP networks. I do not see a (medium-to-high) risk that all CPEs would be included in a high-risk supplier ban. However, I do believe that telcos whose CPEs have the ONT integrated may be required to replace their installed CPE base. If a high-risk supplier ban were to include the ONT, there would be several implications.

Any CPEs that use components from the banned supplier would need to be replaced or retrofitted to ensure compliance. This would require swapping the integrated CPE/ONT units for separate CPE and ONT devices from approved suppliers, which could add to installation costs and increase deployment time. Service providers would also need to reassess their network equipment supply chain, ensuring that new ONTs and CPEs meet regulatory standards for security and compliance. Moreover, replacing equipment could potentially disrupt existing service, necessitating careful planning to manage the transition without major outages for customers. This situation would likely also require updates to the network configuration, as replacing an integrated CPE/ONT device could involve reconfiguring customer devices to work seamlessly with the new setup. I believe it is very likely that telcos will eventually offer fixed broadband services, including CPEs and home gateways, that are free of high-risk suppliers end-2-end (e.g., for B2B and public institutions, such as defense and other critically sensitive areas). This may extend to requirements that employees working in or with sensitive areas will need a certificate of a high-risk-supplier-free end-2-end fixed broadband connection to be allowed to work from home or receive any job-related information (this could extend to mobile devices as well). Again, substantial Capex (and maybe a fair amount of time) would be required to reach such a high-risk supplier reduction.

AN ALTERNATE REALITY.

I am unsure whether William Webb’s idea of “The End of Telecoms History” (I really recommend you get his book) will have the same profound impact as Francis Fukuyama’s marvelously thought-provoking book “The End of History and the Last Man”, or whether it will be more “right” than Fukuyama’s book. However, I think it may be an oversimplification of Fukuyama’s ideas to say that he has been proven wrong. The world of Man may have proven more resistant to “boredom” than the book assumed (as Fukuyama conceded in subsequent writing). Nevertheless, I do not believe history can be over unless the history makers and writers are all gone (which may happen sooner rather than later). History may have long and “boring” periods where little new and disruptive happens. Still, historically, something has so far always disrupted the hiatus of history, followed by a quieter period (e.g., Pax Romana, European feudalism, the Ming Dynasty, the 19th century’s European balance of power, …). The nature of history is cyclic. Stability and disruption are not opposing forces but part of an ongoing dynamic. I don’t think telecommunications would be that different. Parts of what we define as telecom may reach a natural end and settle until disrupted again; for example, fixed telephony services on copper lines were disrupted by emerging mobile technologies driven by radio access technology innovation from the 90s until today. Or, like circuit-switched voice-centric technologies, which have been replaced by data-centric packet-switched technologies, putting an “end” to the classical voice-based business model of the incumbent telecommunications corporations.

At some point in the not-so-distant future (2030-2040), all Western European households will be covered by optical fiber and have a fiber-optic access connection, with indoor services served by ultra-WiFi coverage (remember, approx. 80% of mobile consumption happens indoors). Mobile broadband networks will by then have been redesigned to mainly provide outdoor coverage in urban and suburban areas. These are modernized at minimum 10-year cycles, as the need for innovation is relatively minor and more focused on energy efficiency and CO2 footprint reductions. Direct-to-cell (D2C) LEO satellite or stratospheric drone constellations utilizing cellular spectrum above 1800 MHz serve outdoor coverage of rural regions, as opposed to the current D2C use of low-frequency bands such as 600-800 MHz (as higher frequency bands are occupied terrestrially and difficult to coordinate with LEO satellite D2C providers). Let’s dream that the telco IT landscape, core, transport, and routing networks will be fully converged (i.e., no fixed silo, no mobile silo) and that autonomous network operations deal with most technical issues, including planning and optimization.

In this alternate reality, you pay for and get a broadband service enabled by a fully integrated broadband network. Not a mobile service served by a mobile broadband network (including own mobile backhaul, mobile aggregation, mobile backbone, and mobile core), and, not a fixed service served by a fixed broadband network different from the mobile infrastructure.

Given the Western European countries addressed in this report (i.e., see details in Further Reading #1), we would need to cover a surface area of 3.6 million square kilometers. To ensure outdoor coverage in urban areas and road networks, we may not need more than about 50,000 cell sites compared to today’s 300 – 400 thousand. If the cellular infrastructure is shared, the effective number of sites that are paid in full would be substantially lower than that.
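
As a back-of-the-envelope check of this site count, here is a minimal sketch assuming large outdoor-only rural macro cells; the 5 km cell radius is my assumption, not a figure from the text:

```python
import math

# Back-of-the-envelope check of the outdoor-only site count above.
# Assumes large rural macro cells; the radius value is an assumption.

area_km2 = 3.6e6        # Western European surface area, from the text
cell_radius_km = 5.0    # assumed outdoor macro-cell radius
cell_area_km2 = math.pi * cell_radius_km**2   # ~78.5 km2 per site

sites_needed = area_km2 / cell_area_km2
print(f"{sites_needed:,.0f} sites")   # ~45,800, in the ~50,000 ballpark
```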

The required mobile Capex ballpark estimate would be a fifth (including its share of related fixed support investment, e.g., IT, Core, Transport, Switching, Routing, Product development, etc.) of what it otherwise would be if we continue “The Mobile History” as it has been running up to today.

In this “Alternate Reality”, instead of having a mobile Capex level of about 10% of the total fixed and mobile revenue (~15+% of mobile service revenues), we would be down to between 2% and 3% of the total telecom revenues (assuming revenue remains reasonably flat at the 2023 level). The fixed investment level would be relatively low, household coverage would be finished, and most households would be connected. If we use the level of fixed broadband Capex without substantial fiber deployment, it should not be much higher than 5% of the total revenue. Thus, instead of today’s persistent level of 18%-20% of the total telecom revenues, in our “Alternate Reality” Capex would not exceed 10%. And just imagine what such a change would do to the operational cost structure.
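
The arithmetic above, worked through with total revenue indexed to 100 and the ratios as given in the text:

```python
# Worked version of the "Alternate Reality" Capex arithmetic above,
# with revenue indexed to 100 and assumed flat at the 2023 level.

revenue = 100.0

today_capex = 0.19 * revenue    # today's persistent 18-20% Capex level
alt_mobile  = 0.025 * revenue   # 2-3% of total revenue for mobile
alt_fixed   = 0.05 * revenue    # <=5% once fiber coverage is complete

alt_total = alt_mobile + alt_fixed
print(f"today: {today_capex:.1f}, alternate reality: {alt_total:.1f}")
# -> 19.0 vs 7.5, i.e., comfortably below the 10% ceiling in the text
```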

Obviously, this fictive (and speculative) reality would be “The End of Mobile History.”

It would be an “End to Big Capex” and a stop to spending mobile Capex like there is no (better fixed broadband) tomorrow.

This is an end-reflection of where current mobile network development may be heading unless the industry gets better at optimizing and prioritizing between mobile and fixed broadband. Re-architecting the fundamental paradigms of how mobile networks are designed, planned, and built is required, including an urgent reset of current 6G thinking.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this blog. There should be no doubt that without the support of Russell Waller (New Street Research), this blog would not have been possible. Thank you so much for providing the financial telco data for Western Europe that lays the groundwork for much of the Capex analysis in this article. This blog has also been published on telecomanalysis.net with some minor changes and updates.

FURTHER READING.

  1. New Street Research covers the following countries in their Quarterly report: Austria, Belgium, Denmark, Finland, France, Germany, Greece, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. Across those 15 countries, ca. 56 telcos are covered.
  2. Kim Kyllesbech Larsen, “Navigating the Future of Telecom Capex: Western Europe’s Telecom Investment 2024 to 2030,” telecomanalysis.net, (October 2024).
  3. Kim Kyllesbech Larsen, “The Nature of Telecom Capex – a 2023 Update”, techneconomyblog.com, (July 2023).
  4. Kim Kyllesbech Larsen, “The Nature of Telecom Capex,” techneconomyblog.com, (July 2022).
  5. Rupert Wood, “A crisis of overproduction in bandwidth means that telecoms capex will inevitably fall,” Analysys Mason (July 2024). A rather costly (for mortals & their budgets, at least) report called “The end of big capex: new strategic options for the telecoms industry” allegedly demonstrates the crisis.
  6. European Commission, “Cybersecurity of 5G networks – EU Toolbox of risk mitigating measures”, (January 2020).
  7. European Commission, “The EU Toolbox for 5G Security”, (2020).
  8. European Commission, “5G security: Member States report on progress on implementing the EU toolbox and strengthening safety measures”, (July 2020). It also includes a link to the actual Member States progress report on 5G Security.
  9. European Commission, “Second report on the implementation of the EU 5G cybersecurity toolbox”, (June 2023).
  10. Danish Investment Screening Act, “Particularly sensitive sectors and activities,” Danish Business Authority, (July 2021). Note that the “Danish Investment Screening Act” is closely aligned with broader European Union (EU) frameworks and initiatives to safeguard critical infrastructure from high-risk foreign suppliers. The Act reflects Denmark’s effort to implement national and EU-level policies to protect sensitive sectors from foreign investments that could pose security risks, particularly in critical infrastructure such as telecommunications, energy, and defense.
  11. Cynthia Kroet, “Eleven EU countries took 5G security measures to ban Huawei, ZTE”, Euro News, (August 2024).
  12. Michael Stenvei, “Historisk indgreb: TDC tvinges til at droppe Huawei-aftale”, Finans.dk, (May 2023).
  13. Mathieu Pollet, “Time to cut back on Huawei, German minister tells telecoms giants,” Politico (August 2023).
  14. German press on high-risk suppliers in German telecommunications networks: “Zeit für den Abschied von Huawei, sagt Innenministerin Faeser” (Handelsblatt, August 18, 2023), “Deutsche Telekom und Huawei: Warum die Abhängigkeit bleibt” (Die Welt, September 7, 2023), “Telekom-Netz: Kritik an schleppendem Rückzug von Huawei-Komponenten” (Der Spiegel, September 20, 2023), “Faeser verschiebt Huawei-Bann und stößt auf heftige Kritik” (Handelsblatt, July 18, 2024), “Huawei-Verbot in 5G-Netzen: Deutschland verschärft, aber langsam” (Tagesschau, July 15, 2024), “Langsame Fortschritte: Deutschland und das Huawei-Dilemma” (Der Spiegel, September 21, 2024), and many, many others.
  15. Iain Morris, “German Huawei ban to cost €2.5B and take years, no thanks to EU”, Light Reading (May 2023).
  16. Alexander Martin, “EU states told to restrict Huawei and ZTE from 5G networks ‘without delay’”, The Record, (June 2023).
  17. Strand Consult, “Understanding the Market for 4G RAN in Europe: Share of Chinese and Non-Chinese Vendors – in 102 Mobile Networks”, (2020).
  18. Strand Consult, “The Market for 5G RAN in Europe: Share of Chinese and Non-Chinese Vendors in 31 European Countries”, (2023).
  19. William Webb, “The End of Telecoms History,” Kindle, (June 2024).
  20. GSMA, “The State of 5G 2024 – Introducing the GSMA Intelligence 5G Connectivity Index”, (February 2024).
  21. Speedtest.com, “Speedtest Global Index”, (August 2024).
  22. Ericsson Mobility Visualizer – Mobile Data Traffic.
  23. Kim Kyllesbech Larsen, “5G Economics – An Introduction (Chapter 1)”, techneconomyblog.com, (December 2016).
  24. Kim Kyllesbech Larsen, “Capacity planning in mobile data networks experiencing exponential growth in demand” (April 2012). See slide 5, showing that 50% of all data traffic is generated in 1 cell, 80% of data traffic is carried in up to 3 cells, and only 20% of traffic can be regarded as truly mobile. The presentation has been viewed more than 19 thousand times.
  25. Tom Copeland, Tim Koller, Jack Murrin, “Valuation – Measuring and Managing the Valuation of Companies,” John Wiley & Sons, (3rd edition, 2000). There are newer editions on Amazon.com today (e.g., 7th by now).
  26. Dean Bubley, “The 6G vision needs a Reset” (October 2024).
  27. Geoff Hollingworth, “Why 6G Reset and why I support”, (October 2024).
  28. Opanga, “The RAIN AI Platform”, provides a cognitive AI-based solution that addresses (1) network optimization, lowering Capex demand and increasing customer experience, (2) energy reduction above and beyond existing supplier solutions, leading to further Opex efficiencies, and (3) network intelligence, using AI to better manage network data at a much higher resolution than is possible with classical dashboards applied to technology-driven data lakes.

The Next Frontier: LEO Satellites for Internet Services.

THE SPACE RACE IS ON.

If all current commercial satellite plans were to be realized within the next decade, we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half that number realized by StarLink’s Low Earth Orbit (LEO) constellation over the last couple of years (i.e., since 2018).

While the “Arms Race” during the Cold War was mainly “a thing” between the USA and the former Soviet Union, the Space Race will, in my opinion, be “battled out” between the commercial interests of the West and the political interests of China (as illustrated in Figure 1 below). The current numbers strongly indicate that Europe, Canada, the Middle East, Africa, and APAC (minus China) will likely and largely be left on the sidelines to watch the US and China impose, in theory, a “duopoly” in LEO satellite-based services. In practice, however, it will be a near-monopoly when considering the security concerns between the West and the (re-defined) Eastern bloc.

Figure 1 illustrates my thesis that we will see a Space Race over the next 10 years between one (or very few) commercial LEO constellations, represented by a Falcon-9-like design (for maybe too obvious reasons), and a Chinese state-owned satellite constellation. (Courtesy: DALL-E)

As of the end of 2023, more than 50% of launched and planned commercial LEO satellites are USA-based. Of those, the largest fraction is accounted for by the US-based StarLink constellation (~75%). More than 30% are launched or planned by Chinese companies, headed by the state-owned Guo Wang constellation, rivaling Elon Musk’s Starlink in ambition and scale. Europe comes in at a distant number 3 with about 8% of the total of fixed internet satellites. Apart from being disappointed, alas, not surprised, by the European track record, it is somewhat more baffling that there are so few Indian and African satellite constellations (there are none), given the obvious benefits such satellites could bring to India and the African continent.

India is a leading satellite nation with a proud tradition of innovative satellite designs and manufacturing and a solid track record of satellite launches. However, regarding commercial LEO constellations, India has yet to catch up on the opportunities here. Having previously worked on the economics and operationalization of a satellite ATC (i.e., a satellite service with an ancillary terrestrial component) internet service across India, it is mind-blowing (imo) how much economic opportunity there is in replacing the vast terrestrial cellular infrastructure in rural India by satellite. Not to mention the quantum leap in the resilience and availability of broadband communications services that could be provided. According to the StarLink coverage map, regulatory approval in India for StarLink (US) services is still pending. In the meantime, Eutelsat’s OneWeb (EU) received regulatory approval in late 2023 for its satellite internet service over India in collaboration with Bharti Enterprises (India), which is also the largest shareholder in the recently formed Eutelsat Group with 21.2%. Moreover, Jio’s JioSpaceFiber satellite internet services were launched in several Indian states at the end of 2023, using the SES (EU) MEO O3b mPOWER satellite constellation. Despite the clear satellite know-how and capital available, there appears to be little Indian-based LEO satellite development taking up the competition with international operators.

The African continent is attracting all the major LEO satellite constellations, such as StarLink (US), OneWeb (EU), Amazon Kuiper (US), and Telesat Lightspeed (CAN). However, getting regulatory approval for their satellite-based internet services is a complex, time-consuming, and challenging process across Africa’s 54 recognized sovereign countries. I would expect the Chinese-based satellite constellations (e.g., Guo Wang) to gain ground here as well, due to the strong ties between China and several African nations.

This article is not about SpaceX’s StarLink satellite constellation, although StarLink is mentioned a lot and used as an example. Recently, at the Mobile World Congress 2024 in Barcelona, talking to satellite operators (but not StarLink) providing fixed broadband satellite services, we joked about how long into a meeting we could go before SpaceX and StarLink would be mentioned (~5 minutes was the record, I think).

This article is about the key enablers (frequencies, frequency bandwidth, antenna design, …) that make up an LEO satellite service, the LEO satellite itself, the kind of services one should expect from it, and its limitations.

There is no doubt that LEO satellites of today have an essential mission: delivering broadband internet to rural and remote areas with little or no terrestrial cellular or fixed infrastructure to provide internet services. Satellites can offer broadband internet to remote areas with little population density and a population spread out reasonably uniformly over a large area. A LEO satellite constellation is not (in general) a substitute for an existing terrestrial communications infrastructure. Still, it can enhance it by increasing service availability and being an important remedy for business continuity in remote rural areas. Satellite systems are capacity-limited as they serve vast areas, typically with limited spectral resources and capacity per unit area.

In comparison, we have much smaller coverage areas with demand-matched spectral resources in a terrestrial cellular network. It is also easier to increase capacity in a terrestrial cellular system by adding more sectors or increasing the number of sites in an area that requires such investments. Adding more cells, and thus increasing the system capacity, to satellite coverage requires a new generation of satellites with more advanced antenna designs, typically by increasing the number of phased-array beams and more complex modulation and coding mechanisms that boost the spectral efficiency, leading to increased capacity and quality for the services rendered to the ground. Increasing the system capacity of a cellular communications system by increasing the number of cells (i.e., cell splitting) works the same in satellite systems as it does for a terrestrial cellular system.

So, on average, LEO satellite internet services to individual customers (or households), such as those offered by StarLink, are excellent for remote, sparsely populated areas with a nicely spread-out population. If we de-average this statement, then clearly, within the satellite coverage area, we may have towns and settlements where the local population density can be fairly high, despite being very small over the larger footprint covered by the satellite. As the capacity and quality of the satellite are shared resources, serving towns and settlements of a certain size may not be the best approach to providing a sustainable and good customer experience, as the satellite resources exhaust rapidly in such scenarios. Here, a hybrid architecture is of much better use, providing all customers in a town or settlement with the best possible service by leveraging the existing terrestrial communications infrastructure, cellular as well as fixed, combined with a satellite backhaul broadband connection between a satellite ground gateway and the broadband internet satellite. This is offered by several satellite broadband providers (from GEO, MEO, and LEO orbits) and has the beauty of not being limited to one provider. Unfortunately, this particular finesse is often overlooked amid the awe of the massive scale of the StarLink constellation.

AND SO IT STARTS.

When I compared the economics of stratospheric drone-based cellular coverage with that of LEO satellites and terrestrial-based cellular networks in my previous article, “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, it was clear that even if LEO satellites are costly to establish, they provide a substantial cost advantage over cellular coverage in rural and remote areas that are either scarcely covered or not covered at all. Although the existing LEO satellite constellations have limited capacity compared to a terrestrial cellular network and would perform rather poorly over densely populated areas (e.g., urban and suburban areas), they can offer very decent fixed-wireless-access-like broadband services in rural and remote areas at speeds exceeding even 100 Mbps, as shown by the Starlink constellation. Even if the provided speed and capacity are likely to be substantially lower than what a terrestrial cellular network could offer, it often provides the missing (internet) link. Anything larger than nothing remains infinitely better.

Low Earth Orbit (LEO) satellites represent the next frontier in (novel) communication network architectures, what we in modern lingo would call non-terrestrial networks (NTN), with the ability to combine both mobile and fixed broadband services, enhancing and substituting terrestrial networks. LEO satellites orbit significantly closer to Earth than their geostationary orbit (GEO) counterparts at 36 thousand kilometers; typically at altitudes between 300 and 2,000 kilometers, LEO satellites offer substantially reduced latency, higher bandwidth capabilities, and a more direct line of sight to receivers on the ground. This makes LEO satellites an obvious and integral component of non-terrestrial networks, which aim to extend the reach of existing fixed and mobile broadband services, particularly in rural, un- and under-served, or inaccessible regions, and to act as a high-availability element of terrestrial communications networks in the event of natural disasters (flooding, earthquakes, …) or military conflict, in which terrestrial networks are taken out of operation.

Another key advantage of LEO satellites is that the likelihood of a line-of-sight (LoS) to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the signal propagation from a LEO satellite closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial-based cellular mobile network do not apply to our satellite, with signal propagation largely being determined by the distance between the satellite and the ground (see Figure 2).

Figure 2 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a Low Earth Orbit (LEO) satellite. The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which is primarily determined by distance, as it approximates free-space propagation with signal attenuation mainly determined by the Line-of-Sight (LoS) distance from antenna to Earth. The situation is very different for a terrestrial-based cellular tower, whose radiated signal is substantially compromised by environmental factors.
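
As a rough illustration of this free-space behavior, here is a minimal free-space path loss (FSPL) sketch; the frequencies and distances below are my illustrative choices, not actual link-budget values of any constellation:

```python
import math

# Free-space path loss (FSPL), the propagation regime a LEO satellite
# link approximates: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45

def fspl_db(distance_km, freq_ghz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Nadir link from a 550 km LEO satellite in the Ku band (~13 GHz):
print(f"{fspl_db(550, 13.0):.1f} dB")   # ~169.5 dB

# For comparison, a 5 km rural terrestrial link at 0.8 GHz would see
# ~104.5 dB in free space, before any environmental losses on top:
print(f"{fspl_db(5, 0.8):.1f} dB")
```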

Low Earth Orbit (LEO) satellites, compared to GEO- and MEO-based higher-altitude satellite systems, in general have simpler designs and smaller sizes, weights, and volumes. Their design and architecture are not just a function of technological trends but also a manifestation of their operational environment. The (relative) simplicity of LEO satellites also allows for more standardized production, allowing for off-the-shelf components and modular designs that can be manufactured in larger quantities, as is the case with the CubeSat standard and SmallSats in general. The lower altitude of LEO satellites translates to a reduced distance from the launch site to the operational orbit, which inherently affects the economics of satellite launches. This proximity to Earth means that the energy required to propel a satellite into LEO is significantly less than that needed to reach geostationary Earth orbit (GEO), resulting in lower launch costs.

The advent of LEO satellite constellations marks an important shift in how we approach global connectivity. With the potential to provide ubiquitous internet coverage in rural and remote places with little or no terrestrial communications infrastructure, satellites are increasingly being positioned as vital elements in global communication. LEO satellites, as well as stratospheric drones, can provide economical internet access in remote areas, as addressed in my previous article, and play a significant role in disaster relief efforts. For example, when terrestrial communication networks are disrupted after a natural disaster, LEO satellites can quickly re-establish communication links to normal cellular devices or ad-hoc earth-based satellite systems, enabling efficient coordination of rescue and relief operations. Furthermore, they offer a resilient network backbone that complements terrestrial infrastructure.

The Internet of Things (IoT) benefits from the capabilities of LEO satellites, particularly in areas with little or no existing terrestrial communications networks. IoT devices often operate in remote or mobile environments, from sensors in agricultural fields to trackers across shipping routes. LEO satellites provide reliable connectivity to IoT networks, facilitating many applications, such as non- and near-real-time monitoring of environmental data, seamless asset tracking over transcontinental journeys, and rapid deployment of smart devices in smart city infrastructures. As an example, let us look at the minimum requirements for establishing a LEO satellite constellation that can gather IoT measurements. At an altitude of 550 km, a satellite takes ca. 1.5 hours to return to a given point on its orbit. Earth rotates (see also below), which requires us to deploy several orbital planes to ensure continuous coverage throughout the 24 hours of a day (assuming this is required). Depending on the satellite antenna design, the target coverage area, and how often a measurement is required, a satellite constellation to support an IoT business may not require much more than 20 (lower measurement frequency) to 60 (higher measurement frequency, but far from real-time data collection) LEO satellites (@ 550 km).
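
For readers who want to check the orbital-period claim, here is a minimal sketch using Kepler’s third law for a circular orbit:

```python
import math

# Orbital period of a circular LEO orbit from Kepler's third law:
# T = 2*pi*sqrt(a^3 / mu), with a = Earth radius + altitude.

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371.0         # km, mean Earth radius

def orbital_period_minutes(altitude_km):
    a = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(f"{orbital_period_minutes(550):.1f} min")  # ~95.5 min, i.e., ~1.5 hours
```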

For defense purposes, LEO satellite systems present unique advantages. Their lower orbits allow for high-resolution imagery and rapid data collection, which are crucial for surveillance, reconnaissance, and operational awareness. As typically more LEO satellites are required compared to a GEO satellite, such systems also offer a higher degree of redundancy in anti-satellite (ASAT) warfare scenarios. When integrated with civilian applications, military use cases can leverage the robust commercial infrastructure for communication and geolocation services, enhancing capabilities while distributing the system’s visibility and potential targets.

Standalone military LEO satellites are engineered for specific defense needs. These may include hardened systems for secure communication and resistance to jamming and interception. For instance, they can be equipped with advanced encryption algorithms to ensure secure transmission of sensitive military data. They also carry tailored payloads for electronic warfare, signal intelligence, and tactical communications. For example, they can host sensors for detecting and locating enemy radar and communication systems, providing a significant advantage in electronic warfare. As the line between civilian and military space applications blurs, dual-use LEO satellite systems are emerging, capable of serving civilian broadband and specialized military requirements. It should be pointed out that there are also military applications, such as signal gathering, that may not be compatible with civil communications use cases.

In a military conflict, the distributed architecture and lower altitude of LEO constellations may offer some advantages regarding resilience and targetability compared to GEO and MEO-based satellites. Their more significant numbers (i.e., 10s to 1000s) compared to GEO, and the potential for quicker orbital resupply can make them less susceptible to complete system takedown. However, their lower altitudes could make them accessible to various ASAT technologies, including ground-based missiles or space-based kinetic interceptors.

It is not uncommon to encounter academic researchers and commentators who give the impression that LEO satellites could replace existing terrestrial-based infrastructures and solve all terrestrial communications issues known to man. That is (of course) not the case. Often, such statements appear to be based on an incomplete understanding of the capacity limitations of satellite systems. Due to satellites’ excellent coverage with very large terrestrial footprints, the satellite capacity is shared over very large areas. For example, consider a LEO satellite at 550 km altitude. The satellite footprint, or coverage area (aka ground swath), is the area on the Earth’s surface over which the satellite can establish a direct line of sight. In our example, the footprint diameter would be ca. 5,500 kilometers. The equivalent area of ca. 23 million square kilometers is more than twice that of the USA (or China, or Canada). Before you get too excited, the satellite antenna will typically restrict the surface area the satellite covers. The extent of the observable world seen at any given moment by the satellite antenna is defined as the Field of View (FoV) and can vary from a few degrees (narrow beams, small coverage area) to 40 degrees or higher (wide beams, large coverage areas). At a FoV of 20 degrees, the antenna footprint would be ca. 2,400 kilometers, equivalent to a coverage area of ca. 5 million square kilometers.
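
A minimal sketch of the horizon-limited footprint geometry follows (spherical-Earth approximation; real coverage is further restricted by the antenna FoV and the minimum elevation angle at the user terminal, so the results land slightly below the rounded figures quoted above):

```python
import math

# Horizon-limited footprint of a LEO satellite: the maximum ground area
# with a direct line of sight, from basic spherical geometry (a sketch).

R_EARTH = 6371.0   # km, mean Earth radius

def horizon_footprint(altitude_km):
    theta = math.acos(R_EARTH / (R_EARTH + altitude_km))  # central angle, rad
    ground_radius = R_EARTH * theta                       # km along the surface
    cap_area = 2 * math.pi * R_EARTH**2 * (1 - math.cos(theta))  # spherical cap
    return 2 * ground_radius, cap_area

diameter, area = horizon_footprint(550)
print(f"~{diameter:,.0f} km across, ~{area / 1e6:.0f} million km^2")
# -> ~5,100 km and ~20 million km^2, the same order of magnitude as the
#    ~5,500 km / ~23 million km^2 quoted in the text above.
```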

In comparison, at a FoV of 0.8 degrees, the antenna footprint would only be 100 kilometers. If our satellite has a 16-beam capability, that translates into a coverage diameter of 24 km per beam. For the StarLink system, based on the Ku band (13 GHz) with a cell downlink (satellite-to-Earth) capacity of ca. 680 Mbps (in 250 MHz), we would have ca. 2 Mbps per km² unit coverage area. In comparison, a terrestrial rural cellular site with 85 MHz (downlink, base station antenna to customer terminal) would deliver 10+ Mbps per km² unit coverage area.
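
And a quick check of the unit-area capacity claim, using the rounded beam figures from the text (illustrative numbers, not measured performance):

```python
import math

# Unit-area capacity of a shared satellite beam, using the rounded
# Starlink-like figures from the text above.

beam_diameter_km = 24.0
beam_capacity_mbps = 680.0   # Ku-band downlink in 250 MHz
beam_area_km2 = math.pi * (beam_diameter_km / 2) ** 2   # ~452 km^2

density = beam_capacity_mbps / beam_area_km2
print(f"~{density:.1f} Mbps per km^2")
# -> ~1.5, in line with the ~2 Mbps/km^2 quoted above, versus the
#    10+ Mbps/km^2 quoted for a rural terrestrial cell site.
```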

It is always good to keep in mind that satellites’ mission is not to replace terrestrial communications infrastructures but to supplement and enhance them, and, furthermore, that satellites offer the missing (internet) link in areas where no terrestrial communications infrastructure is present. Satellites offer superior coverage compared to any terrestrial communications infrastructure. Satellites’ limitations are in providing capacity, and quality, at population scale, as well as in supporting applications and access technologies requiring very short latencies (e.g., smaller than 10 ms).

In the following, I will focus on terrestrial cellular coverage and services that LEO satellites can provide. By the end of this blog, I hope to have given you (the reader) a reasonable understanding of how terrestrial coverage, capacity, and quality work in a (LEO) satellite system, and an impression of the key satellite parameters we can tune to improve them.

EARTH ROTATES, AND SO DO SATELLITES.

Before getting into the details of low earth orbit satellites, let us briefly get a couple of basic topics off the table. Skipping this part may be a good option if you are already in the know about satellites. Or maybe carry on and get a good laugh at those terra firma cellular folks who forgot about the rotation of Earth 😉

From an altitude and orbit (around Earth) perspective, you may have heard of two types of satellites: GEO and LEO satellites. Geostationary (GEO) satellites are positioned in a geostationary orbit at ~36 thousand kilometers above Earth. That the satellite is geostationary means it rotates with the Earth and appears stationary from the ground, requiring only one satellite to maintain constant coverage over an area that can be up to one-third of Earth’s surface. Low Earth Orbit (LEO) satellites are positioned at altitudes between 300 and 2,000 kilometers above Earth and move relative to the Earth’s surface at high speeds, requiring a network, or constellation, to ensure continuous coverage of a particular area.

I have experienced that terrestrial cellular folks (like myself), when first thinking about satellite coverage, have some intuitive issues with it. We are not used to our antennas moving away from the targeted coverage area, nor to our targeted coverage area moving away from our antenna. The geometry and dynamics of terrestrial cellular coverage are simpler than they are for satellite-based coverage. For LEO satellite network planners, it is not rocket science (pun intended) that the satellites move around in their designated orbit over Earth at orbital speeds of ca. 7.5 to 8 km per second. Thus, at an altitude of 500 km, a LEO satellite orbits Earth approximately every 1.5 hours. Earth, thankfully, rotates. Compared to its GEO satellite “cousin,” the LEO satellite is not “stationary” from the perspective of the ground. Thus, as Earth rotates, the targeted coverage area moves away from the coverage provided by the orbiting satellite.

We need several satellites in the same orbit, and several orbits (i.e., orbital planes), to provide continuous satellite coverage of a target area. This is very different from terrestrial cellular coverage of a given area (needless to say).

WHAT LEO SATELLITES BRING TO THE GROUND.

Anything is infinitely more than nothing. The Low Earth Orbit satellite brings the possibility of internet connectivity where there previously was none, either because too few potential customers spread out over a large area made terrestrial-based services hugely uneconomical, or because the environment is too hostile to build normal terrestrial networks within reasonable economics.

Figure 3 illustrates a low Earth orbit satellite constellation providing internet to rural and remote areas as a way to solve part of the digital divide challenge in terms of availability. Obviously, affordability is likely to remain a challenge unless subsidized by customers who can afford satellite services in other places, where availability is more a question of convenience. (Courtesy: DALL-E)

The LEO satellites represent a transformative shift in internet connectivity, providing advantages over traditional cellular and fixed broadband networks, particularly for global access, speed, and deployment capabilities. As described in “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”, LEO satellite constellations, or networks, may also be significantly more economical than equivalent cellular networks in rural and remote areas where the economics of coverage by satellite, as depicted in the above Figure 3, is by far better than by traditional terrestrial cellular means.

One of the foremost benefits of LEO satellites is their ability to offer global coverage as well as reasonable broadband and latency performance that is difficult to match with GEO and MEO satellites. The geostationary satellite obviously also offers global broadband coverage, with a unit coverage area much more extensive than that of a LEO satellite, but it cannot offer very low latency services, and it is more difficult for it to provide high data rates (in comparison to a LEO satellite). LEO satellites can reach the most remote and rural areas of the world, places where laying cables or setting up cell towers is impractical. This is a crucial step in delivering communications services where none exist today, ensuring that underserved populations and regions gain access to internet connectivity.

Another significant advantage is the reduction in latency that LEO satellites provide. Since they orbit much closer to Earth, typically at altitudes between 350 and 700 km, compared to their geostationary counterparts at 36 thousand kilometers altitude, the time it takes for a communications signal to travel between the user and the satellite is significantly reduced. This lower latency is crucial for enhancing the user experience in real-time applications such as video calls and online gaming, making these activities more enjoyable and responsive.

An inherent benefit of satellite constellations is their ability for quick deployment. They can be deployed rapidly in space, offering a quicker solution to achieving widespread internet coverage than the time-consuming and often challenging process of laying cables or erecting terrestrial infrastructure. Moreover, the network can easily be expanded by adding more satellites, allowing it to dynamically meet changing demand without extensive modifications on the ground.

LEO satellite networks are inherently scalable. By launching additional satellites, they can accommodate growing internet usage demands, ensuring that the network remains efficient and capable of serving more users over time without significant changes to ground infrastructure.

Furthermore, these satellite networks offer resilience and reliability. With multiple satellites in orbit, the network can maintain connectivity even if one satellite fails or is obstructed, providing a level of redundancy that makes the network less susceptible to outages. This ensures consistent performance across different geographical areas, unlike terrestrial networks that may suffer from physical damage or maintenance issues.

Another critical advantage is (relative) cost-effectiveness compared to a terrestrial-based cellular network. In remote or hard-to-reach areas, deploying satellites can be more economical than the high expenses associated with extending terrestrial broadband infrastructure. As satellite production and launch costs continue to decrease, the economics of LEO satellite internet become increasingly competitive, potentially reducing the cost for end-users.

LEO satellites offer a promising solution to some of the limitations of traditional connectivity methods. By overcoming geographical, infrastructural, and economic barriers, LEO satellite technology has the potential to not just complement but effectively substitute terrestrial-based cellular and fixed broadband services, especially in areas where such services are inadequate or non-existent.

Figure 4 below provides an overview of LEO satellite coverage with fixed broadband services offered to customers in the Ku band and a Ka-band backhaul link to ground station gateways (GWs) that connect to, for example, the internet. Having inter-satellite communications (e.g., via laser links such as those used by Starlink satellites from satellite version 1.5 onwards) allows for substantially fewer ground-station gateways. Inter-satellite laser links between intra-plane satellites are a distinct advantage in ensuring coverage for rural and remote areas, where it might be difficult, very costly, or impractical to have a satellite ground station GW to connect to due to the lack of global internet infrastructure.

Figure 4 In general, a satellite is required to have line-of-sight (LoS) to its ground station gateway (GW); in other words, the GW needs to be within the coverage footprint of the satellite. For LEO satellites, which are at low altitudes, between 300 and 2,000 km, and thus have a much smaller footprint than MEO and GEO satellites, this results in a need for a substantial number of ground stations. This is depicted in (a) above. With inter-satellite laser links (ISLLs), e.g., those implemented by Starlink, it is possible to reduce the number of ground station gateways significantly, which is particularly helpful in rural and very remote areas. These laser links enable direct communication between satellites in orbit, which enhances the network’s performance, reliability, and global reach.

Inter-satellite laser links (ISLLs), also called optical inter-satellite links (OISLs), are an advanced communication technology utilized by satellite constellations, such as Starlink, to facilitate high-speed, secure data transmission directly between satellites. Inter-satellite laser links are today (primarily) designed for intra-plane communication within satellite constellations, enabling data transfer between satellites that share the same orbital plane. This is due to the relatively stable geometries and predictable distances between satellites in the same orbit, which facilitate maintaining the line-of-sight connections necessary for laser communications. ISLLs mark a significant departure from the traditional reliance on ground stations for relaying traffic between satellites, and as such they offer many benefits, including the ability to transmit data at speeds comparable to fiber-optic cables. Additionally, ISLLs enable satellite constellations to deliver seamless coverage across the entire planet, including over oceans and polar regions where ground station infrastructure is limited or non-existent. The technology also inherently enhances the security of data transmissions, thanks to the focused nature of laser beams, which are difficult to intercept.

However, the deployment of ISLLs is not without challenges. The technology requires a clear line of sight between satellites, which can be affected by their orbital positions, necessitating precise control mechanisms. Moreover, the practical limit to the number of satellites linked in a daisy chain is influenced by several factors, including the satellite’s power capabilities, the network architecture, and the need to maintain clear lines of sight. High-power laser systems also demand considerable energy, impacting the satellite’s power budget and requiring efficient management to balance operational needs. The complexity and cost of developing such sophisticated laser communication systems, combined with very precise pointing mechanisms and sensitive detectors, can be quite challenging and need to be carefully weighed against building satellite ground stations.

Cross-plane ISLL transmission, or the ability to communicate between satellites in different orbital planes, presents additional technical challenges, as it is technically highly challenging to maintain a stable line of sight between satellites moving in different orbital planes. However, the potential for ISLLs to support cross-plane links is recognized as a valuable capability for creating a fully interconnected satellite constellation. The development and incorporation of cross-plane ISLL capabilities into satellites are an area of active research and development. Such capabilities would reduce the reliance on ground stations and significantly increase the resilience of satellite constellations. I see the development as a next-generation topic together with many other important developments as described in the end of this blog. However, the power consumption of the ISLL is a point of concern that needs careful attention as it will impact many other aspects of the satellite operation.
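To get a feel for the line-of-sight constraint, consider two satellites in the same circular orbit: their laser path must clear the Earth and most of the atmosphere. A minimal geometric sketch follows (my own illustration; the 80 km grazing altitude and the 22 satellites per plane are assumptions for the example, not operator figures):

```python
import math

R_EARTH = 6_371.0  # mean Earth radius, km

def max_isll_range_km(alt_km: float, grazing_alt_km: float = 80.0) -> float:
    """Longest chord between two satellites at the same altitude whose
    line-of-sight still clears the atmosphere at the grazing altitude."""
    r_sat = R_EARTH + alt_km
    r_graze = R_EARTH + grazing_alt_km
    return 2 * math.sqrt(r_sat**2 - r_graze**2)

alt = 550.0
d_max = max_isll_range_km(alt)                # ~5,000 km horizon-limited range
spacing = 2 * math.pi * (R_EARTH + alt) / 22  # ~2,000 km between plane neighbors
print(f"Max LoS range: {d_max:,.0f} km; intra-plane spacing: {spacing:,.0f} km")
```

With neighbor spacing well inside the horizon limit, intra-plane links are geometrically comfortable; the hard part, as noted above, is the pointing, the power budget, and the constantly changing geometry of cross-plane links.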

THE DIGITAL DIVIDE.

The digital divide refers to the “internet haves and have-nots,” or “the gap between individuals who have access to modern information and communication technology (ICT),” such as the internet, computers, and smartphones, and those who do not have access. This divide can be due to various factors, including economic, geographic, age, and educational barriers. Essentially, as illustrated in Figure 5, it’s the difference between the “digitally connected” and the “digitally disconnected.”

The significance of the digital divide is considerable, impacting billions of people worldwide. It is estimated that a little less than 40% of the world’s population, or roughly 2.9 billion people, had never used the internet as of 2023. This gap is most pronounced in developing countries, rural areas, and among older populations and economically disadvantaged groups.

The digital divide affects individuals’ ability to access information, education, and job opportunities and impacts their ability to participate in digital economies and the modern social life that the rest of us (i.e., the other side of the divide or the privileged 60%) have become used to. Bridging this divide is crucial for ensuring equitable access to technology and its benefits, fostering social and economic inclusion, and supporting global development goals.

Figure 5 illustrates the digital divide, that is, the gap between individuals with access to modern information and communication technology (ICT), such as the internet, computers, and smartphones, and those who do not have access. (Courtesy: DALL-E)

CHALLENGES WITH LEO SATELLITE SOLUTIONS.

Low-Earth-orbit satellites offer compelling advantages for global internet connectivity, yet they are not without challenges and disadvantages when considered as substitutes for cellular and fixed broadband services. These drawbacks underscore the complexities and limitations of deploying LEO satellite technology globally.

The capital investment required and the ongoing costs associated with designing, manufacturing, launching, and maintaining a constellation of LEO satellites are substantial. Despite technological advancements and increased competition driving costs down, the financial barrier to entry remains high. Compared to their geostationary counterparts, the relatively short lifespan of LEO satellites necessitates frequent replacements, further adding to operational expenses.

While LEO satellites offer significantly reduced latency (round trip times, RTT ~ 4 ms) compared to geostationary satellites (RTT ~ 240 ms), they may still face latency and bandwidth limitations, especially as the number of users on the satellite network increases. This can lead to reduced service quality during peak usage times, highlighting the potential for congestion and bandwidth constraints. This is also the reason why the main business model of LEO satellite constellations is primarily to address coverage and needs in rural and remote locations. Alternatively, the LEO satellite business model focuses on low-bandwidth needs such as texting, voice messaging, and low-bandwidth Internet of Things (IoT) services.

Navigating the regulatory and spectrum management landscape presents another challenge for LEO satellite operators. Securing spectrum rights and preventing signal interference requires coordination across multiple jurisdictions, which can complicate deployment efforts and increase the complexity of operations.

The environmental and space traffic concerns associated with deploying large numbers of satellites are significant. The potential for space debris and the sustainability of low Earth orbits are critical issues, with collisions posing risks to other satellites and space missions. Additionally, the environmental impact of frequent rocket launches raises further concerns.

FIXED-WIRELESS ACCESS (FWA) BASED LEO SATELLITE SOLUTIONS.

Using the NewSpace Index database, updated December 2023, more than 6,463 internet satellites have been launched, of which 5,650 (~87%) are from Starlink, and 40,000+ satellites are planned for launch, with SpaceX’s Starlink accounting for 11,908 planned (~30%). More than 45% of the satellites launched and planned support multi-application use cases, that is, internet together with, for example, IoT (~4%) and/or Direct-2-Device (D2D, ~39%). The large D2D share is due to Starlink’s plans to provide services to mobile terminals with their latest satellite constellation. The first six Starlink v2 satellites with direct-to-cellular capability were successfully launched on January 2nd, 2024. Some care should be taken with the D2D share within the Starlink number, as it does not account for the different form factors of the version 2 satellite, not all of which include D2D capabilities.

Most LEO satellites, helped by the sheer quantity of Starlink satellites, operational and planned, support satellite fixed broadband internet services. It is worth noting that the Chinese Guo Wang constellation ranks second in terms of planned LEO satellites, with almost 13,000 planned, rivaling the Starlink constellation. After Starlink and Guo Wang are counted, only 34%, or ca. 16,000 internet satellites, are left in the planning pool across 30+ satellite companies. While Starlink is privately owned (by Elon Musk), the Guo Wang (國網 ~ “The state network”) constellation is led by China SatNet and created by SASAC (China’s State-Owned Assets Supervision and Administration Commission), which oversees China’s biggest state-owned enterprises. I expect that such a LEO satellite constellation, which would be the second biggest LEO constellation, as planned by Guo Wang and controlled by the Chinese state, would be of considerable concern to the West due to the possibility of dual use (i.e., civil & military) of such a constellation.

Starlink coverage as of March 2024 (see Starlink’s availability map) does not include Russia, China, Iran, Iraq, Afghanistan, Venezuela, and Cuba (together about 20% of Earth’s total land surface area). There are still quite a few countries in Africa and South-East Asia, including India, where regulatory approval remains pending.

Figure 6 NewSpace Index data of commercial satellite constellations in terms of total number of launched and planned (top) per company (or constellation name) and (bottom) per country.

While the term FWA, fixed wireless access, is not traditionally used to describe satellite internet services, the broadband services offered by LEO satellites can be considered a form of “wireless access” since they also provide connectivity without cables or fiber. In essence, LEO satellite broadband is a complementary service to traditional FWA, extending wireless broadband access to locations beyond the reach of terrestrial networks. In the following, I will continue to use the term FWA for the fixed broadband LEO satellite services provided to individual customers, including SMEs. As some of the LEO satellite businesses might eventually also provide direct-to-device (D2D) services to normal terrestrial mobile devices, either on their own acquired cellular spectrum or in partnership with terrestrial cellular operators, the LEO satellite operation (or business architecture) becomes much closer to terrestrial cellular operations.

Figure 7 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services, such as Fixed Wireless Access, to individual terrestrial users (e.g., Starlink, Kuiper, OneWeb, …). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring a continuous service. The operating altitude of a LEO satellite constellation is between 300 and 2,000 km, with most aiming at 450 to 550 km altitude. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal (UT) antenna dynamically orients itself toward the best line-of-sight (in terms of signal quality) to a satellite within the UT’s field-of-view (FoV). The FoV is not shown in the picture above so as not to overcomplicate the illustration.

Low Earth Orbit (LEO) satellite services like Starlink have emerged to provide fixed broadband internet to individual consumers and small to medium-sized enterprises (SMEs), targeting rural and remote areas where no other broadband solutions are available or where only poor legacy copper- or coax-based infrastructure exists. These services deploy constellations of satellites orbiting close to Earth to offer high-speed internet with the significant advantage of reaching rural and remote areas where traditional ground-based infrastructure is absent or economically unfeasible.

One of the most significant benefits of LEO satellite broadband is the ability to deliver connectivity with lower latency compared to traditional satellite internet delivered by geosynchronous satellites, enhancing the user experience for real-time applications. The rapid deployment capability of these services also means that areas in dire need of internet access can be connected much quicker than waiting for ground infrastructure development. Additionally, satellite broadband’s reliability is less affected by terrestrial challenges, such as natural disasters that can disrupt other forms of connectivity.

The satellite service comes with its challenges. The cost of user equipment, such as satellite dishes, can be a barrier for some users, as can the installation process of the terrestrial satellite dish required to establish the connection to the satellite. Moreover, services might be limited by data caps or experience slower speeds after reaching certain usage thresholds, which can be a drawback for users with high data demands. Weather conditions can also impact the signal quality, particularly at the higher frequencies used by the satellite, albeit to a lesser extent than for geostationary satellite services. However, the target areas where the fixed broadband satellite service is most suited are rural and remote areas that have no, or only poor, terrestrial broadband infrastructure (terrestrial cellular broadband or wired broadband such as coax or fiber).

Beyond Starlink, other providers are venturing into the LEO satellite broadband market. OneWeb is actively developing a constellation to offer internet services worldwide, focusing on communities that are currently underserved by broadband. Telesat Lightspeed is also gearing up to provide broadband services, emphasizing the delivery of high-quality internet to the enterprise and government sectors.

Other LEO satellite businesses, such as AST SpaceMobile and Lynk Mobile, are taking a unique approach by aiming to connect standard mobile phones directly to their satellite network, extending cellular coverage beyond the reach of traditional cell towers. More about that in the section below (see “New Kids on the Block – Direct-to-Devices LEO satellites”).

I have been asked why I appear somewhat dismissive of Amazon’s Project Kuiper in a previous version of this article, particularly compared to Starlink (I guess). The expressed mission is to “provide broadband services to unserved and underserved consumers, businesses in the United States, …” (FCC 20-102). Project Kuiper plans a broadband constellation of 3,236 microsatellites at 3 altitudes (i.e., orbital shells) around 600 km, providing fixed broadband services in the Ka-band (i.e., ~17-30 GHz). In its US-based FCC (Federal Communications Commission) filing, and in the subsequent FCC authorization, it is clear that the Kuiper constellation primarily targets contiguous coverage of the USA (but mentions that services cannot be provided in the majority of Alaska … funny, I thought that was a good definition of an underserved, remote, and scarcely populated area?). Amazon has committed to launch 50% (1,618 satellites) of its committed satellite constellation before July 2026 (until now, 2 have been launched) and the remaining 50% before July 2029. There are, however, far fewer details on the Kuiper satellite design than, for example, are available for the various versions of the Starlink satellites. Given that Kuiper will operate in the Ka-band, there may be more frequency bandwidth allocated per beam than is possible in the Starlink satellites using the Ku-band for customer device connectivity. However, the Ka-band is at a higher frequency, which may result in more compromised signal propagation. In my opinion, based on the information from the FCC submissions and correspondence, the Kuiper constellation appears less ambitious than Starlink’s vision, mission, and tangible commitment in terms of aggressive launches, a very high level of innovation, and iterative development of their platform and capabilities in general. This may of course change over time as more information becomes available on Amazon’s Project Kuiper.

FWA-based LEO satellite solutions – takeaway:

  • LoS-based and free-space-like signal propagation allows high-frequency signals (i.e., high throughput, capacity, and quality) to provide near-ideal performance, impacted mainly by the distance between the satellite antenna and the ground terminal. This is, in general, not possible for a terrestrial-based cellular infrastructure.
  • Provides satellite fixed broadband internet connectivity typically using the Ku-band in geographically isolated locations where terrestrial broadband infrastructure is limited or non-existent.
  • Lower latency (and round trip time) compared to MEO and GEO satellite internet solutions.
  • Current systems are designed to provide broadband internet services in scarcely populated areas and underserved (or unserved) regions where traditional terrestrial-based communications infrastructures are highly uneconomical and/or impractical to deploy.
  • As shown in my previous article (i.e., “Stratospheric Drones: Revolutionizing Terrestrial Rural Broadband from the Skies?”), LEO satellite networks may be an economically interesting alternative to terrestrial rural cellular networks in countries with large, scarcely populated rural areas requiring tens of thousands of cellular sites to cover. Hybrid models with LEO satellite FWA-like coverage to individuals in rural areas and satellite backhaul to major settlements and towns should be considered in large geographies.
  • Resilience to terrestrial disruptions is a key advantage. It ensures functionality even when ground-based infrastructure is disrupted, which is an essential element for maintaining the business continuity of an operator’s telecommunications services. In particular, hierarchical architectures with, for example, GEO satellite, LEO satellite, and Earth-based transport infrastructure will result in very high-reliability network operations (possibly approaching ultra-high availability, although not with service parity).
  • Current systems are inherently capacity-limited due to their vast coverage areas (i.e., lower performance per unit coverage area; see the sketch after this list). In peak demand periods, they will typically perform worse than terrestrial-based cellular networks (e.g., LTE or 5G).
  • In regions where modern terrestrial cellular and fixed broadband services are already established, satellite broadband may face challenges competing with these potentially cheaper, faster, and more reliable services, which are underpinned by the terrestrial communications infrastructure.
  • It is susceptible to weather conditions, such as heavy rain or snow, which can degrade signal quality. This may impact system capacity and quality, resulting in inconsistent customer experience throughout the year.
  • Must navigate complex regulatory environments in each country, which can affect service availability and lead to delays in service rollout.
  • Depending on the altitude, LEO satellites are typically replaced on a 5- to 7-year cycle due to atmospheric drag (which increases as altitude decreases; thus, the lower the altitude, the shorter a satellite’s life). This ultimately means that any improvements in system capacity and quality will take time to be thoroughly enjoyed by all customers.
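To illustrate the capacity-per-area point from the list above, here is a minimal sketch. All numbers are hypothetical placeholders chosen for illustration (a ~1 Gbps satellite beam over a 10 km radius footprint versus a ~300 Mbps LTE/5G sector over a 2 km radius cell), not measured figures from any operator:

```python
import math

def capacity_density_mbps_km2(capacity_mbps: float, radius_km: float) -> float:
    """Offered capacity per unit coverage area (Mbps/km^2), circular footprint."""
    return capacity_mbps / (math.pi * radius_km**2)

sat_beam = capacity_density_mbps_km2(1_000, 10)  # ~3 Mbps/km^2
cell = capacity_density_mbps_km2(300, 2)         # ~24 Mbps/km^2
print(f"Satellite beam: {sat_beam:.1f} Mbps/km² vs terrestrial cell: {cell:.1f} Mbps/km²")
```

Even with a beam capacity three times that of a single cell, the much larger footprint dilutes the capacity per square kilometer, which is why satellite FWA shines where users are sparse and struggles where they are dense.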

SATELLITE BACKHAUL SOLUTIONS.

Figure 8 illustrates the architecture of a Low Earth Orbit (LEO) satellite backhaul system used by providers like OneWeb as well as Starlink with their so-called “Community Gateway”. It showcases the connectivity between terrestrial internet infrastructure (i.e., satellite gateways) and satellites in orbit, enabling high-speed data transmission. The network consists of LEO satellites that communicate with each other (inter-satellite comms) and use the Ku and Ka frequency bands. These satellites connect to ground-based satellite gateways (GW), which interface with Points of Presence (PoP) and Internet Exchange Points (IXP), integrating the space-based network with the terrestrial internet (WWW). Note: the indicated frequency bands (e.g., Ku: 12–18 GHz, Ka: 28–40 GHz) and data speeds illustrate the network’s capabilities.

LEO satellites providing backhaul connectivity, as shown in Figure 8 above, are extending internet services to the farthest reaches of the globe. These satellites offer many benefits, as already discussed above, in connecting remote, rural, and previously un- and under-served areas with reliable internet services. Many remote regions lack foundational telecom infrastructure, particularly the long-haul transport networks needed for carrying traffic away from remote populated areas. Satellite backhauls not only offer a substantially better financial solution for enhancing internet connectivity to remote areas but are often the only viable solution for connectivity.

Take, for example, Greenland. The world’s largest non-continental island, the size of Western Europe, is characterized by its sparse population and a distinct settlement pattern, unconnected by roads and mainly along the west coast (with a couple of settlements on the east coast), influenced mainly by its vast ice sheets and rugged terrain. With a population of around 56 thousand, primarily concentrated on the west coast, Greenland’s demographic distribution is spread out over ca. 50+ settlements and about 20 towns. Nuuk, the capital, is the island’s most populous city, housing over 18 thousand residents and serving as the administrative, economic, and cultural hub. Terrestrial cellular networks serve settlements’ and towns’ communication and internet service needs, with the traffic carried back to the central switching centers by long-haul microwave links, sea cables, and satellite broadband connectivity. Several settlements’ connectivity needs can only be served by satellite backhaul, e.g., settlements on the east coast such as Tasiilaq, with ca. 2,000 inhabitants, and Ittoqqortoormiit (an awesome name!), with around 400 inhabitants. LEO satellite backhaul solutions serving satellite-only communities, such as those operated and offered by OneWeb (Eutelsat), could provide a backhaul transport solution matching FWA latency specifications thanks to better round-trip-time performance than a GEO satellite backhaul solution.

It should also be clear that remote satellite-only settlements and towns may have communications service needs and demand that a localized 4G (or 5G) terrestrial cellular network with a satellite backhaul can serve much better than relying on individual ad-hoc connectivity solutions from, for example, Starlink. When the area’s total bandwidth demand exceeds the capacity of an FWA satellite service, a localized terrestrial network solution with a satellite backhaul is, in general, better.

The LEO satellites offer significantly reduced latency compared to their geostationary counterparts due to their closer proximity to the Earth. This reduction in delay is essential for a wide range of real-time applications and services, from adhering to modern radio access (e.g., 4G and 5G) requirements, VoIP, and online gaming to critical financial transactions, enhancing the user experience and broadening the scope of possible services and businesses.

Among the leading LEO satellite constellations providing backhaul solutions today are SpaceX’s Starlink (via their Community Gateway), aiming to deliver high-speed internet globally with a preference for direct-to-consumer connectivity; OneWeb, focusing on internet services for businesses and communities in remote areas; Telesat’s Lightspeed, designed to offer secure and reliable connectivity; and Amazon’s Project Kuiper, which plans to deploy thousands of satellites to provide broadband to unserved and underserved communities worldwide.

Satellite backhaul solutions – takeaway:

  • Satellite-backhaul solutions are an excellent, cost-effective way of providing an existing isolated cellular (and fixed access) network with high-bandwidth connectivity to the internet (such as in remote and deep rural areas).
  • LEO satellites can reduce the need for extensive and very costly ground-based infrastructure by serving as a backhaul solution. For some areas, such as Greenland, the Sahara, or the Brazilian rainforest, it may not be practical or economical to connect by terrestrial-based transmission (e.g., long-haul microwave links or backbone & backhaul fiber) to remote settlements or towns.
  • An LEO-based backhaul solution supports applications and radio access technologies requiring much lower round trip times (RTT < 50 ms) than is possible with a GEO-based satellite backhaul. The achievable RTT will, however, also depend on where the LEO satellite ground gateway connects to the internet service provider.
  • The collaborative nature of a satellite-backhaul solution allows the terrestrial operator to focus on and have full control of all its customers’ network experiences, as well as optimize the traffic within its own network infrastructure.
  • LEO satellite backhaul solutions can significantly boost network resilience and availability, providing a secure and reliable connectivity solution.
  • Satellite-backhaul solutions require local ground-based satellite transmission capabilities (e.g., a satellite ground station).
  • The operator should consider that at a certain threshold of low population density, direct-to-consumer satellite services like Starlink might be more economical than constructing a local telecom network that relies on satellite backhaul (see above section on “Fixed Wireless Access (FWA) based LEO satellite solutions”).
  • Satellite backhaul providers require regulatory permits to offer backhaul services. These permits are necessary for several reasons, including the use of radio frequency spectrum, operation of satellite ground stations, and provision of telecommunications services within various jurisdictions.
  • The satellite lifetime in orbit is between 5 and 7 years, depending on the LEO altitude. Satellites at MEO altitudes (2,000 km) up to GEO (36,000 km) last between 10 and 20 years. This also dictates the modernization and upgrade cycle as well as the timing of your ROI investment case and refinancing needs.

NEW KIDS ON THE BLOCK – DIRECT-TO-DEVICE LEO SATELLITES.

A recent X-exchange (from March 2nd):

Elon Musk: “SpaceX just achieved peak download speed of 17 Mb/s from a satellite direct to unmodified Samsung Android Phone.” (note: the speed corresponds to a spectral efficiency of ~3.4 Mbps/MHz/beam).

Reply from user: “That’s incredible … Fixed wireless networks need to be looking over their shoulders?”

Elon Musk: “No, because this is the current peak speed per beam and the beams are large, so this system is only effective where there is no existing cellular service. This service works in partnership with wireless providers, like what @SpaceX and @TMobile announced.”

Figure 9 illustrating LEO satellite direct-to-device communication in a remote area without any terrestrially-based communications infrastructure. The satellite is the only means of communication, whether via a normal mobile device or a classical satphone. (Courtesy: DALL-E)

Low Earth Orbit (LEO) Satellite Direct-to-Device technology enables direct communication between satellites in orbit and standard mobile devices, such as smartphones and tablets, without requiring additional specialized hardware. This technology promises to extend connectivity to remote, rural, and underserved areas globally, where traditional cellular network infrastructure is absent or economically unfeasible to deploy. The system can offer lower-latency communication by leveraging LEO satellites, which orbit closer to Earth than geostationary satellites, making it more practical for everyday use. The round trip time (RTT), the time it takes for the signal to travel from the satellite to the mobile device and back, is ca. 4 milliseconds for a LEO satellite at 550 km, compared to ca. 240 milliseconds for a geosynchronous satellite (at 36 thousand kilometers altitude).

The key advantage of a satellite in low Earth orbit is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it is, in general, very low. In other words, the cellular signal propagation from a LEO satellite closely approximates that of free space. Thus, the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite. Put more simply, the signal propagating directly from the satellite to the mobile device is less compromised than it typically would be from a terrestrial cellular tower to the same mobile device. The difference between free-space propagation, which considers only distance and frequency, and terrestrial signal propagation models, which quantify all the gains and losses experienced by a terrestrial cellular signal, is very substantial and in favor of free-space propagation. As our Earth-bound cellular intuition of signal propagation often gets in the way of understanding the signal propagation from a satellite (or an antenna in the sky in general), I recommend writing down the math using the formula for free-space propagation loss and comparing it with terrestrial cellular link budget models, such as the COST 231-Hata model (relatively simple) or the more recent 3GPP TR 38.901 model (complex). In rural and suburban areas, depending on the environment, indoor coverage may be marginally worse, fairly similar, or even better than from a terrestrial cell tower at a distance. This applies to both the uplink and downlink communication channels between the mobile device and the LEO satellite, and it is also the reason why higher frequencies (with larger frequency bandwidths available) can work better on LEO satellites than in a terrestrial cellular network.
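Taking my own advice, here is a minimal sketch of that comparison: free-space path loss versus the COST 231-Hata urban model at 1900 MHz (the PCS band). The 550 km slant range, the 5 km terrestrial path, and the antenna heights are illustrative assumptions on my part:

```python
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss (dB): only distance and frequency matter."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def cost231_hata_db(d_km: float, f_mhz: float, h_base_m: float = 30.0,
                    h_mobile_m: float = 1.5, metropolitan: bool = False) -> float:
    """COST 231-Hata urban path loss (dB); valid roughly for 1500-2000 MHz,
    1-20 km, base antenna 30-200 m, mobile antenna 1-10 m."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    c_m = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km) + c_m)

f = 1900.0  # MHz
print(f"Free space, 550 km to a LEO satellite: {fspl_db(550, f):.1f} dB")       # ~152.8 dB
print(f"COST 231-Hata, 5 km urban path:        {cost231_hata_db(5, f):.1f} dB")  # ~161.6 dB
```

A satellite more than a hundred times farther away can thus present less path loss than a terrestrial tower a few kilometers down the road, which is exactly why the Earth-bound intuition fails.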

However, despite its potential to dramatically expand coverage, after all, that is what satellites do, LEO satellite Direct-to-Device technology is not a replacement for terrestrial cellular services and terrestrial communications infrastructures, for several reasons: (a) Although the spectral efficiency can be excellent, the frequency bandwidth (in MHz) and data speeds (in Mbps) available through satellite connections are typically lower than those provided by ground-based cellular networks, limiting its use for high-bandwidth applications. (b) Satellite-based D2D services are, in general, capacity-limited and might not be able to handle the higher user densities typical of urban areas as efficiently as terrestrial networks, which are designed to accommodate large numbers of users through dense deployment of cell towers. (c) Environmental factors like buildings or bad weather can impact satellite communications’ reliability and quality more significantly than terrestrial services. (d) A satellite D2D service requires regulatory approval (per country), as the D2D frequency typically is allocated to terrestrial cellular services and will have to be coordinated and managed with any terrestrial use to avoid service degradation (or disruption) for customers using the same frequency on terrestrial cellular networks. The satellites will have to be able to switch off their D2D service when covering jurisdictions that have not provided approval or where the relevant frequency or frequencies are in use terrestrially.

Using the NewSpace Index database, updated December 2023, there are currently more than 8,000 Direct-to-Device (D2D), or Direct-2-Cell (D2C), satellites planned for launch, with SpaceX’s Starlink v2 accounting for 7,500 of them. The rest, 795 satellites, are distributed across 6 other satellite operators (e.g., AST SpaceMobile, Sateliot (Spain), Inmarsat (HEO orbit), Lynk, …). If we look at satellites designed for IoT connectivity, we get 5,302 in total, with 4,739 (not including Starlink) still planned, distributed over 50+ satellite operators. The average IoT satellite constellation, including what is currently planned, is ~95 satellites, with the majority targeted for LEO. The satellite operators included in the 50+ count have confirmed funding of at least US$2 billion in total (half of the operators have confirmed funding without a disclosed amount). About 2,937 satellites (435 launched) are planned to serve IoT markets only (note: I think this seems a bit excessive). Swarm Technologies, a SpaceX subsidiary, ranks number 1 in terms of both launched and planned satellites, having launched at least 189 CubeSats (both 0.25U and 1U types) with an additional 150 planned. The second-ranked IoT-only operator is Orbcomm, with 51 satellites launched and an additional 52 planned. The remaining IoT-specific satellite operators have launched on average 5 satellites each and plan on average 55 (across 42 constellations).

There are also 3 satellite operators (i.e., Chinese-based Galaxy Space: 1,000 LEO sats; US-based Mangata Networks: 791 MEO/HEO sats; and US-based Omnispace: 200 LEO sats) that have planned a total of ca. 2,000 satellites to support 5G applications with their satellite solutions, and one operator (i.e., Hanwha Systems) has planned 2,000 LEO satellites for 6G.

The emergence of LEO satellite direct-to-device (D2D) services, as depicted in Figure 10 below, is at the forefront of satellite communication innovations, offering a direct line of connectivity between devices that bypasses the need for traditional ground-based cellular network infrastructure (e.g., cell towers). This approach benefits from the relatively short distance of hundreds of kilometers between LEO satellites and the Earth, reducing communication latency and broadening bandwidth capabilities compared to their geostationary counterparts. One of the key advantages of LEO D2D services is their ability to provide global coverage with an extensive number of satellites, i.e., in the 100s to 1,000s depending on the targeted quality of service, ensuring that even the most remote and underserved areas have access to reliable communication channels. They are also critical in disaster resilience, maintaining communications when terrestrial networks fail due to emergencies or natural disasters.

Figure 10 This schematic presents the network architecture for satellite-based direct-to-device (D2D) communication facilitated by Low Earth Orbit (LEO) satellites, exemplified by collaborations like Starlink and T-Mobile US, Lynk Mobile, and AST Space Mobile. It illustrates how satellites in LEO enable direct connectivity between user equipment (UE), such as standard mobile devices and IoT (Internet of Things) devices, using terrestrial cellular frequencies and VHF/UHF bands. The system also shows inter-satellite links operating in the Ka-band for seamless network integration, with satellite gateways (GW) linking the space-based network to ground infrastructure, including Points of Presence (PoP) and Internet Exchange Points (IXP), which connect to the wider internet (WWW). This architecture supports innovative services like Omnispace and Astrocast, offering LEO satellite IoT connectivity. The network could be particularly crucial for defense and special operations in remote and challenging environments, such as the deserts or the Arctic regions of Greenland, where terrestrial networks are unavailable. As an example shown here, using regular terrestrial cellular frequencies in both downlink (~300 MHz to 7 GHz) and uplink (900 MHz or lower to 2.1 GHz) ensures robust and versatile communication capabilities in diverse operational contexts.

While the majority of the 5,000+ Starlink constellation uses the Ku-band (~13 GHz), at the beginning of 2024, SpaceX launched a few 2nd generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., a smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, provides texting capabilities across the USA in areas with no or poor existing cellular coverage. This is fairly similar to services presently offered in similar coverage areas by, for example, AST SpaceMobile, OmniSpace, and Lynk Global LEO satellite services, with reported maximum downlink speeds approaching 20 Mbps. The so-called Direct-2-Device segment, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years, continuing to increase the supported user speeds (i.e., the utilized terrestrial cellular spectrum) and system capacity in terms of smaller coverage areas and a higher number of satellite beams.
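The quoted numbers make the bandwidth limitation easy to see. A minimal sketch, assuming the 5 MHz PCS allocation and the ~3.4 Mbps/MHz/beam spectral efficiency implied by the X-exchange above (the 200 active users are my illustrative assumption, not a reported figure):

```python
bandwidth_mhz = 5.0  # T-Mobile USA PCS spectrum used by the satellite
spectral_eff = 3.4   # Mbps per MHz per beam, implied by the quoted 17 Mb/s peak
beam_peak_mbps = bandwidth_mhz * spectral_eff

print(f"Peak throughput per beam: {beam_peak_mbps:.0f} Mbps")  # ~17 Mbps
active_users = 200  # hypothetical load in a large rural beam
print(f"Per-user share: {beam_peak_mbps / active_users * 1_000:.0f} kbps")  # ~85 kbps
```

At such per-user rates, texting and low-rate IoT are realistic while full mobile broadband is not, which is consistent with the “GPRS/EDGE-like” characterization in the takeaways further below.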

Table 1 below provides an overview of the top 13 LEO satellite constellations targeting (fixed) internet services (e.g., Ku band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell, D2C) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023. The constellation ranking is based on the number of launched satellites until the end of 2023. Two additional Direct-2-Cell (D2C or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024-2025. One is SpaceX’s Starlink 2nd generation, which launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks, see also the NewSpace constellation information) targets 5G services. Two 5G constellations have already launched satellites: Galaxy Space (Yinhe) has launched 8 LEO satellites, with 1,000 planned, using the Q- and V-bands (i.e., not a D2D cellular 5G service), and OmniSpace has launched two satellites and appears to have planned a total of 200. Moreover, there is currently one planned constellation targeting 6G by the South Korean Hanwha Group (a bit premature, but interesting to follow nevertheless), with 2,000 6G (LEO) satellites planned.

Most currently launched and planned satellite constellations offering (or planning to offer) Direct-2-Cell services, including IoT and M2M, are designed for low-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

Table 1 An overview of the Top-14 LEO satellite constellations targeting (fixed) internet services (e.g., Ku band), IoT and M2M services, and Direct-to-Device (or Direct-to-Cell) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023.

The deployment of LEO D2D services also navigates a complicated regulatory landscape, with the need for harmonized spectrum allocation across different regions. Managing interference with terrestrial cellular networks and other satellite operations is another interesting, albeit complex, challenge, requiring sophisticated solutions to ensure signal integrity. Moreover, despite the cost-effectiveness of LEO satellites in terms of launch and operation, establishing a full-fledged network for D2D services demands substantial initial investment, covering satellite development, launch, and the setup of supporting ground infrastructure.

LEO satellites with D2D-based capabilities – takeaway:

  • Provides lower-bandwidth services (e.g., GPRS/EDGE/HSDPA-like) where no existing terrestrial cellular service is present.
  • (Re-)use of the terrestrial cellular spectrum on the satellite.
  • D2D-based satellite services may become crucial in business continuity scenarios, providing redundancy and increased service availability to existing terrestrial cellular networks. This is particularly essential as a remedy for emergency response personnel in case terrestrial networks are not functional.
  • Limited capacity (due to the small assigned frequency bandwidth) over a large coverage area serving rural and remote areas with little or no cellular infrastructure.
  • Securing regulatory approval for satellite services over independent jurisdictions is a complex and critical task for any operator looking to provide global or regional satellite-based communications. The satellite operator may have to switch off transmission over jurisdictions where no permission has been granted.
  • If the spectrum is also deployed on the ground, satellite use of it must be managed and coordinated (due to interference) with the terrestrial cellular networks.
  • Requires low- or non-utilized cellular spectrum in the terrestrial operator’s spectrum portfolio.
  • D2D-based communications require a more complex and sophisticated satellite design, including the satellite antenna, resulting in higher manufacturing and launch costs.
  • The IoT-only commercial satellite constellation “space” is crowded, with a total of 44 constellations (note: a few operators have several constellations). I assume that many of those plans will eventually not be realized. Note that SpaceX’s Swarm Technologies is leading and, in terms of the total numbers available in the NewSpace Index database, will remain a leader from the sheer number of satellites once its plan has been realized. I expect we will see a Chinese constellation in this space as well, unless the capability is built into the Guo Wang constellation.
  • The satellite lifetime in orbit is between 5 and 7 years, depending on the altitude. This timeline also dictates the modernization and upgrade cycle as well as the timing of your ROI investment and refinancing needs.
  • Today’s D2D satellite systems are frequency-bandwidth limited. However, if so designed, satellites could provide a frequency asymmetric satellite-to-device connection. For instance, the downlink from the satellite to the device could utilize a high frequency (not used in the targeted rural or remote area) and a larger bandwidth, while the uplink communication between the terrestrial device and the LEO satellite could use a sufficiently lower frequency and smaller frequency bandwidth.

MAKERS OF SATELLITES.

In the rapidly evolving space industry, a diverse array of companies specializes in manufacturing satellites for Low Earth Orbit (LEO), ranging from small CubeSats to larger satellites for constellations similar to those used by OneWeb (UK) and Starlink (USA). Among these, smaller companies like NanoAvionics (Lithuania) and Tyvak Nano-Satellite Systems (USA) have carved out niches by focusing on modular and cost-efficient small satellite platforms typically below 25 kg. NanoAvionics is renowned for its flexible mission support, offering everything from design to operation services for CubeSats (e.g., 1U, 3U, 6U) and larger small satellites (100+ kg). Similarly, Tyvak excels in providing custom-made solutions for nano-satellites and CubeSats, catering to specific mission needs with a comprehensive suite of services, including design, manufacturing, and testing.

UK-based Surrey Satellite Technology Limited (SSTL) stands out for its innovative approach to small, cost-effective satellites for various applications, achieving the desired system performance, reliability, and mission objectives at a lower cost than traditional satellite projects, which easily run into hundreds of millions of USD. SSTL’s commitment to delivering satellites that balance performance and budget has made it a popular satellite manufacturer globally.

On the larger end of the spectrum, companies like SpaceX (USA) and Thales Alenia Space (France-Italy) are making significant strides in satellite manufacturing at scale. SpaceX has ventured beyond its foundational launch services to produce thousands of small satellites (250+ kg) for its Starlink broadband constellation, which comprises 5,700+ LEO satellites, showcasing mass satellite production. Thales Alenia Space offers reliable satellite platforms and payload integration services for LEO constellation projects.

With their extensive expertise in aerospace and defense, Lockheed Martin Space (USA) and Northrop Grumman (USA) produce various satellite systems suitable for commercial, military, and scientific missions. Their ability to support large-scale satellite constellation projects from design to launch demonstrates high expertise and reliability. Similarly, aerospace giants Airbus Defense and Space (EU) and Boeing Defense, Space & Security (USA) offer comprehensive satellite solutions, including designing and manufacturing small satellites for LEO. Their involvement in high-profile projects highlights their capacity to deliver advanced satellite systems for a wide range of use cases.

Together, these companies, from smaller specialized firms to global aerospace leaders, play crucial roles in the satellite manufacturing industry. They enable a wide array of LEO missions, catering to the burgeoning demand for satellite services across telecommunications, Earth observation, and beyond, thus facilitating access to space for diverse clients and applications.

ECONOMICS.

Before going into details, let’s spend some time on an example illustrating the basic components required for building a satellite and getting it to launch. Here, I point at a super cool alternative to the above-mentioned companies: the USA-based startup Apex, co-founded by CTO Max Benassi (ex-SpaceX and Astra) and CEO Ian Cinnamon. To get an impression of the macro-components of a satellite system, I recommend checking out the Apex webpage and “playing” with their satellite configurator. The basic package comes at a price tag of USD 3.2 million and a 9-month delivery window. It includes a 100 kg satellite bus platform, a power system, a communication system based on the X-band (8-12 GHz), and a guidance, navigation, and control package. The basic package does not include a solar array drive assembly (SADA), which plays a critical role in the operation of satellites by ensuring that the satellite’s solar panels are optimally oriented toward the Sun; adding the SADA brings an additional USD 500 thousand. Also, the propulsion mechanism (e.g., chemical or electric; in general, there are more possibilities) is not included (+ USD 450 thousand), nor are any services (e.g., payload & launch vehicle integration and testing, USD 575 thousand). Including SADA, propulsion, and services, Apex will have a satellite launch-ready for close to USD 4.8 million.
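Rolling up the configurator items quoted above (the item labels are my shorthand, not Apex’s):

```python
# Quoted Apex configurator items (USD), as listed in the text above.
items = {
    "satellite bus, power, X-band comms, GNC": 3_200_000,
    "solar array drive assembly (SADA)":         500_000,
    "propulsion module":                         450_000,
    "integration & testing services":            575_000,
}
total = sum(items.values())
print(f"Launch-ready satellite (ex-payload, ex-launch): USD {total:,}")
# -> USD 4,725,000, i.e., close to USD 4.8 million
```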

However, we are not done. The above solution still needs to include the so-called payload, which relates to the equipment or instruments required to perform the LEO satellite mission (e.g., broadband communications services), the actual satellite launch itself, and the operational aspects of a successful post-launch (i.e., ground infrastructure and operation center(s)).

Let’s take SpaceX’s Starlink satellite as an example to illustrate mission and payload more clearly. The Starlink satellite’s primary mission is to provide fixed-wireless-access broadband internet to an Earth-based fixed antenna. The Starlink payload primarily consists of advanced broadband internet transmission equipment designed to provide high-speed internet access across the globe. This includes phased-array antennas for communication with user terminals on the ground, high-frequency radio transceivers to facilitate data transmission, and inter-satellite links allowing satellites to communicate in orbit, enhancing network coverage and data throughput.

The economic aspects of launching a Low Earth Orbit (LEO) satellite project span a broad spectrum of costs, from the initial concept phase to deployment and operational management. These projects commence with research and development, where significant investments are made in design, engineering, and the iterative process of prototyping and testing to ensure the satellite meets its intended performance and reliability standards in harsh space conditions (e.g., vacuum, extreme temperature variations, radiation, solar flares, high-velocity impacts with micrometeoroids and man-made space debris, erosion, …).

Manufacturing the satellite involves additional expenses, including procuring high-quality components that can withstand space conditions and assembling and integrating the satellite bus with its mission-specific payload. Ensuring the highest quality standards throughout this process is crucial to minimizing the risk of in-orbit failure, which can substantially increase project costs. The payload should be seen as the heart of the satellite’s mission. It could be a set of scientific instruments for measuring atmospheric data, optical sensors for imaging, transponders for communication, or any other equipment designed to fulfill the satellite’s specific objectives. The payload will vary greatly depending on the mission, whether for Earth observation, scientific research, navigation, or telecommunications.

Of course, there are many other types of, and more affordable options for, LEO satellites than a Starlink-like one (although we should not ignore the achievements of SpaceX and should learn from them as much as possible). As seen from Table 1, we have a range of substantially smaller satellite types, or form factors. The 1U (i.e., one unit) CubeSat is a satellite with a form factor of 10×10×11.35 cm that weighs no more than 1.33 kilograms. A rough cost range for manufacturing a 1U CubeSat is USD 50 to 100+ thousand, depending on mission complexity and payload components (e.g., commercial off-the-shelf versus application- or mission-specific designs). This range covers the costs associated with the satellite’s design, components, assembly, testing, and initial integration efforts. It does not, however, include other significant costs associated with satellite missions, such as launch services, ground station operations, mission control, and insurance, which are likely to (significantly) increase the total project cost. Furthermore, we have additional form factors, such as the 3U CubeSat (10×10×34.05 cm, <4 kg), with manufacturing costs in the range of USD 100 to 500+ thousand; the 6U CubeSat (20×10×34 cm, <12 kg), which can carry more complex payload solutions than the smaller 1U and 3U, with manufacturing costs in the range of USD 200 thousand to USD 1+ million; and 12U satellites (20×20×34 cm, <24 kg), which again support complex payload solutions and will, in general, be significantly more expensive to manufacture.

Securing a launch vehicle is one of the most significant expenditures in a satellite project. This cost not only includes the price of the rocket and launch itself but also encompasses integration, pre-launch services, and satellite transportation to the launch site. Beyond the launch, establishing and maintaining the ground segment infrastructure, such as ground stations and a mission control center, is essential for successful satellite communication and operation. These facilities enable ongoing tracking, telemetry, and command operations, as well as the processing and management of the data collected by the satellite.

The SpaceX Falcon 9 rocket is used extensively by other satellite businesses (see Table 1 above) as well as by SpaceX for their own Starlink constellation. The rocket has a payload capability of ca. 23 thousand kg and a volume handling capacity of approximately 300 cubic meters. SpaceX has launched around 60 first-generation Starlink satellites per Falcon 9 mission. The launch cost per 1st generation satellite would then be around USD 1 million, using the previously quoted USD 62 million (2018 figure) for a Falcon 9 launch. The second-generation Starlink satellites are substantially more advanced than the 1st generation. They are also heavier, weighing around a thousand kilograms. A Falcon 9 would only be able to launch around 20 generation-2 satellites (considering only the weight limitation), while a Falcon Heavy could lift ca. 60 2nd-gen satellites, but at a higher price point of USD 90 million (2018 figure). Thus, the launch cost per satellite would be between USD 1.5 million using Falcon Heavy and USD 3.1 million using Falcon 9. Although the launch cost is based on price figures from 2018, the efficiency gained from re-use may have either kept the cost at that level or reduced it further, particularly with Falcon Heavy.
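The per-satellite arithmetic above, as a minimal sketch (2018 list prices as quoted; satellites per launch as assumed in the text):

```python
def launch_cost_per_sat(launch_cost_usd: float, sats_per_launch: int) -> float:
    """Launch cost attributed to each satellite on a shared launch."""
    return launch_cost_usd / sats_per_launch

cases = [
    ("Falcon 9, 60 Gen1 satellites",     62e6, 60),  # ~USD 1.0M per satellite
    ("Falcon 9, 20 Gen2 satellites",     62e6, 20),  # ~USD 3.1M per satellite
    ("Falcon Heavy, 60 Gen2 satellites", 90e6, 60),  # ~USD 1.5M per satellite
]
for label, cost, n in cases:
    print(f"{label}: USD {launch_cost_per_sat(cost, n) / 1e6:.1f}M per satellite")
```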

Satellite businesses looking to launch small volumes of satellites, such as CubeSats, have a variety of strategies at their disposal to manage launch costs effectively. One widely adopted approach is participating in rideshare missions, where the expenses of a single launch vehicle are shared among multiple payloads, substantially reducing the cost for each operator. This method is particularly attractive due to its cost efficiency and the regularity of missions offered by, for example, SpaceX. Prices for rideshare missions range from a few thousand dollars for very small payloads (like CubeSats) to several hundred thousand dollars for larger small satellites; SpaceX, for example, advertises rideshare prices starting at USD 1 million for payloads up to 200 kg. Alternatively, dedicated small-launcher services cater specifically to the needs of small satellite operators, offering more tailored launch options in terms of timing and desired orbit. Companies such as Rocket Lab (USA) and Astra (USA) have emerged with launch services providing flexibility that rideshare missions might not, although at a somewhat higher cost. However, these costs remain significantly lower than arranging a dedicated launch on a larger vehicle. For example, Rocket Lab’s Electron rocket, specializing in launching small satellites, offers dedicated launches with prices starting around USD 7 million for the entire launch vehicle carrying up to 300 kg. Astra has reported prices in the range of USD 2.5 million for a dedicated LEO launch with their (discontinued) Rocket 3 with payloads of up to 150 kg. The cost for individual small satellites will depend on their share of the payload mass and the specific mission requirements.

Satellite ground stations, which consist of arrays of phased-array antennas, are critical for managing the satellite constellation, routing internet traffic, and providing users with access to the satellite network. These stations are strategically located to maximize coverage and minimize latency, ensuring that at least one ground station is within the line of sight of satellites as they orbit the Earth. As of mid-2023, Starlink operated around 150 ground stations worldwide (also called Starlink Gateways), with 64 live and an additional 33 planned in the USA. The cost of constructing a ground station would be between USD 300 thousand and half a million, not including the physical access point, also called the point-of-presence (PoP), and the transport infrastructure connecting the PoP (and gateway) to the internet exchange where we find the internet service providers (ISPs) and the content delivery networks (CDNs). The PoP may add another USD 100 to 200 thousand to the ground infrastructure unit cost. The transport cost from the gateway to the internet exchange can vary considerably depending on the gateway’s location.

Insurance is a critical component of the financial planning for a satellite project, covering risks associated with both the launch phase and the satellite’s operational period in orbit. It generally runs between 5% and 20% of the total project cost, depending on the satellite’s value, the track record of the launch vehicle, mission complexity, and mission duration (typically 5 to 7 years for a LEO satellite at 500 km). Insurance can be broken up into launch insurance and in-orbit insurance covering the satellite once it is operational.

Operational costs, the Opex, include the day-to-day expenses of running the satellite, from staffing and technical support to ground station usage fees.

Regulatory and licensing fees, including frequency allocation and orbital slot registration, ensure the satellite operates without interfering with other space assets. Finally, at the end of the satellite’s operational life, costs associated with safely deorbiting the satellite are incurred to comply with space debris mitigation guidelines and ensure a responsible conclusion to the mission.

The total cost of an LEO satellite project can vary widely, influenced by the satellite’s complexity, mission goals, and lifespan. Effective project management and strategic decision-making are crucial to navigating these expenses, optimizing the project’s budget, and achieving mission success.

Figure 11 illustrates an LEO CubeSat orbiting above the Earth, capturing the satellite’s compact design and its role in modern space exploration and technology demonstration. Note that the CubeSat design comes in several standardized dimensions, with the reference design, also called 1U, being a 10 cm cube of one-thousandth of a cubic meter and weighing no more than 1.33 kg. More advanced CubeSat satellites would typically be 6U or larger.

CubeSats (e.g., 1U, 3U, 6U, 12U):

  • Manufacturing Cost: Ranges from USD 50,000 for a simple 1U CubeSat to over USD 1 million for more complex missions supported by a 6U (or larger) CubeSat with advanced payloads (a 12U may amount to several million US dollars).
  • Launch Cost: This can vary significantly depending on the launch provider and the rideshare opportunities, ranging from a few thousand dollars for a 1U CubeSat on a rideshare mission to several million dollars for a dedicated launch of larger CubeSats or small satellites.
  • Operational Costs: Ground station services, mission control, and data handling can add tens to hundreds of thousands of dollars annually, depending on the mission’s complexity and duration.

Small Satellites (25 kg up to 500 kg):

  • Manufacturing Cost: Ranges from USD 500,000 to over 10 million, depending on the satellite’s size, complexity, and payload requirements.
  • Launch Cost: While rideshare missions can reduce costs, dedicated launches for small satellites can range from USD 10 million to 62 million (e.g., Falcon 9) and beyond (e.g., USD 90 million for Falcon Heavy).
  • Operational Costs: These are similar to CubeSats, but potentially higher due to the satellite’s larger size and more complex mission requirements, reaching several hundred thousand to over a million dollars annually.

The range for the total project cost of a LEO satellite:

Given these considerations, the total cost range for a LEO satellite project can vary from as low as a few hundred thousand dollars for a simple CubeSat project utilizing rideshare opportunities and minimal operational requirements to hundreds of millions of dollars for more complex small satellite missions requiring dedicated launches and extensive operational support.

It is important to note that these are rough estimates, and the actual cost can vary based on specific mission requirements, technological advancements, and market conditions.
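To show how such a rough estimate can be assembled, here is a back-of-envelope Python sketch aggregating the cost ranges listed above; the five-year mission horizon and the exact range endpoints are illustrative assumptions only:

```python
# Back-of-envelope aggregation of the (low, high) USD cost ranges listed above.
# The ranges follow the article's estimates; real budgets vary widely.
cubesat = {
    "manufacturing": (50_000, 1_000_000),
    "launch":        (10_000, 5_000_000),   # rideshare slot .. dedicated launch
    "operations/yr": (10_000, 500_000),
}
small_sat = {
    "manufacturing": (500_000, 10_000_000),
    "launch":        (10_000_000, 90_000_000),
    "operations/yr": (100_000, 1_000_000),
}

def project_range(items: dict, mission_years: int = 5) -> tuple[float, float]:
    """Total (low, high) project cost, with operations scaled by mission length."""
    lo = sum(v[0] for k, v in items.items() if k != "operations/yr")
    hi = sum(v[1] for k, v in items.items() if k != "operations/yr")
    lo += items["operations/yr"][0] * mission_years
    hi += items["operations/yr"][1] * mission_years
    return lo, hi

print(project_range(cubesat))    # ~(110k, 8.5M) USD over a 5-year mission
print(project_range(small_sat))  # ~(11M, 105M) USD over a 5-year mission
```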

CAPACITY AND QUALITY.

Figure 12 Satellite-based cellular capacity, or quality, measured by the unit or total throughput in Mbps, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of satellite beams resulting in cells on the ground.

The overall capacity and quality of a satellite communication system, given in Mbps, is, at a high level, the product of three key factors: (i) the amount of frequency bandwidth in MHz allocated to the satellite operations, multiplied by (ii) the effective spectral efficiency in Mbps per MHz over a unit satellite-beam coverage area, multiplied by (iii) the number of satellite beams that provide the resulting terrestrial cell coverage. In other words:

Satellite Capacity (in Mbps) =
Frequency Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Beam ×
Number of Beams (or Cells)

Consider a satellite system supporting 8 beams (each the equivalent of a terrestrial coverage cell), each with 250 MHz allocated within the same spectral frequency range. Each beam can then efficiently support ca. 680 Mbps, achieved with an antenna setup that effectively provides a spectral efficiency of ~2.7 Mbps/MHz/cell (or beam) in the downlink (i.e., from the satellite to the ground). Moreover, the satellite will typically have another frequency and antenna configuration that establishes a robust connection to the ground station, which connects to the internet via, for example, third-party internet service providers. The 680 Mbps is then shared among users within the satellite beam’s coverage; e.g., with 100 customers demanding a service, the speed each would experience on average would be around 7 Mbps. This may not seem very impressive compared to the cellular speeds we are used to getting with an LTE or 5G terrestrial cellular service. However, such speeds are, of course, much better than having no means of connecting to the internet.
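The example can be reproduced with a few lines of Python; the figures (8 beams, 250 MHz per beam, ~2.7 Mbps/MHz/beam, 100 active users) are those from the paragraph above:

```python
# The satellite capacity formula above, with the article's example figures.

def beam_throughput_mbps(bandwidth_mhz: float, spectral_eff: float) -> float:
    """Downlink throughput of one beam: MHz x Mbps/MHz/beam."""
    return bandwidth_mhz * spectral_eff

def system_capacity_mbps(bandwidth_mhz: float, spectral_eff: float, beams: int) -> float:
    """Total system capacity: per-beam throughput x number of beams."""
    return beam_throughput_mbps(bandwidth_mhz, spectral_eff) * beams

per_beam = beam_throughput_mbps(250, 2.7)     # ~675 Mbps (ca. 680 in the text)
total    = system_capacity_mbps(250, 2.7, 8)  # ~5.4 Gbps across 8 beams
per_user = per_beam / 100                     # ~7 Mbps with 100 active users
print(per_beam, total, per_user)
```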

Higher frequencies (i.e., in the GHz range) used to provide terrestrial cellular broadband services are, in general, quite sensitive to the terrestrial environment and non-LoS propagation. It is a basic principle of physics that signal propagation characteristics, including the range and penetration capabilities of electromagnetic waves, are inversely related to their frequency. Vegetation and terrain become increasingly critical factors at higher frequencies and affect the resulting quality of coverage. For example, trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength. Terrains often include varied topographies such as housing, hills, valleys, and flat plains, each affecting signal reach differently. For instance, built-up, hilly, or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further. Cellular mobile operators tend to like high frequencies (GHz) for cellular broadband services, as it is possible to get substantially more system throughput in bits per second to deliver to our demanding customers than at frequencies in the MHz range. As can be observed in Figure 12 above, the frequency bandwidth is a multiplier for satellite capacity and quality. At the same time, cellular mobile operators tend to “dislike” higher frequencies because of their poorer propagation conditions in terrestrially based cellular networks, resulting in the need for increased site densification at a significant incremental capital expense.

The key advantage of a LEO satellite is that the likelihood of a line-of-sight to a point on the ground is very high, whereas for terrestrial cellular coverage it would, in general, be very low. In other words, the cellular signal propagation from a satellite closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our satellite, which only has to overcome the distance from the satellite antenna to the ground.
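To put a number on the free-space approximation, here is a small sketch computing free-space path loss (FSPL); the 550 km altitude, 12 GHz downlink, and the terrestrial comparison point are illustrative assumptions of mine:

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45.
# This idealized loss is approximately what a LEO link experiences; a terrestrial
# link adds terrain, foliage, and building losses on top of its FSPL.

def fspl_db(distance_km: float, frequency_ghz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_ghz) + 92.45

# LEO satellite overhead at 550 km, Ku-band downlink (~12 GHz):
print(f"{fspl_db(550, 12):.1f} dB")   # ~168.8 dB, and that is nearly all the loss
# Terrestrial rural macro cell at 2.5 km, 1.8 GHz:
print(f"{fspl_db(2.5, 1.8):.1f} dB")  # ~105.5 dB before environmental losses
```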

Let us first look at the frequency component of the above satellite capacity and quality formula:

FREQUENCY SPECTRUM FOR SATELLITES.

The satellite frequency spectrum encompasses a range of electromagnetic frequencies allocated specifically for satellite communication. These frequencies are divided into bands, commonly known as L-band (e.g., mobile broadband), S-band (e.g., mobile broadband), C-band, X-band (e.g., mainly used by military), Ku-band (e.g., fixed broadband), Ka-band (e.g., fixed broadband), and V-band. Each serves different satellite applications due to its distinct propagation characteristics and capabilities. The spectrum bandwidth used by satellites refers to the width of the frequency range that a satellite system is licensed to use for transmitting and receiving signals.

Careful management of satellite spectrum bandwidth is critical to prevent interference with terrestrial communications systems. Since both satellite and terrestrial systems can operate on similar frequency ranges, there is a potential for crossover interference, which can degrade the performance of both systems. This is particularly important for bands like C-band and Ku-band, which are also used for terrestrial cellular networks and other applications like broadcasting.

Using the same spectrum for both satellite and terrestrial cellular coverage within the same geographical area is challenging due to the risk of interference. Satellites transmit signals over vast areas, and if those signals are on the same frequency as terrestrial cellular systems, they can overpower the local ground-based signals, causing reception issues for users on the ground. Conversely, the uplink signals from terrestrial sources can interfere with the satellite’s ability to receive communications from its service area.

Regulatory bodies such as the International Telecommunication Union (ITU) are crucial in mitigating these interference issues. They coordinate the allocation of frequency bands and establish regulations that govern their use. This includes defining geographical zones where certain frequencies may be used exclusively for either terrestrial or satellite services, as well as setting limits on signal power levels to minimize the chance of interference. Additionally, technology solutions like advanced filtering, beam shaping, and polarization techniques are employed to further isolate satellite communications from terrestrial systems, ensuring that both may coexist and operate effectively without mutual disruption.

The International Telecommunication Union (ITU) has designated several frequency bands for Fixed Satellite Services (FSS) and Mobile Satellite Services (MSS) that can be used by satellites operating in Low Earth Orbit (LEO). The specific bands allocated for satellite services, FSS and MSS, are determined by the ITU’s Radio Regulations, which are periodically updated to reflect global telecommunication’s evolving needs and address emerging technologies. Here are some of the key frequency bands commonly considered for FSS and MSS with LEO satellites:

V-Band 40 GHz to 75 GHz (microwave frequency range).
The V-band is appealing for Low Earth Orbit (LEO) satellite constellations designed to provide global broadband internet access. LEO satellites can benefit from the V-band’s capacity to support high data rates, which is essential for serving densely populated areas and delivering competitive internet speeds. The reduced path loss at lower altitudes, compared to GEO, also makes the V-band a viable option for LEO satellites. Due to its higher frequencies, the V-band is also significantly more sensitive to atmospheric attenuation (e.g., oxygen absorption around 60 GHz), including rain fade, which is likely to affect signal integrity. This necessitates the development of advanced technologies for adaptive coding and modulation, power amplification, and beamforming to ensure reliable communication under various weather conditions. Several LEO satellite operators have expressed an interest in operationalizing the V-band in their satellite constellations (e.g., Starlink, OneWeb, Kuiper, Lightspeed). This band should be regarded as an emergent LEO frequency band.

Ka-Band 17.7 GHz to 20.2 GHz (Downlink) & 27.5 GHz to 30.0 GHz (Uplink).
The Ka-band offers higher bandwidths, enabling greater data throughput than lower bands. Not surprisingly, this band is favored by high-throughput satellite solutions and is widely used by fixed satellite services (FSS). This makes it ideal for high-speed internet services. However, it is more susceptible to absorption and scattering by atmospheric particles, including raindrops and snowflakes, which weaken the signal strength by the time it reaches the receiver. To mitigate rain fade effects in the Ka-band, satellite and ground equipment must be designed with higher link margins, incorporating more powerful transmitters and more sensitive receivers. Additionally, adaptive modulation and coding techniques can be employed to adjust the signal dynamically in response to changing weather conditions. Overall, the system is more costly and, therefore, primarily used for satellite-to-Earth ground station communications and high-performance satellite backhaul solutions.

For example, Starlink and OneWeb use the Ka-band to connect to satellite Earth gateways and points-of-presence, which connect to internet exchanges and the wider internet. It is worth noting that the terrestrial 5G band n257 (26.5 to 29.5 GHz) falls within the Ka-band’s uplink frequency range. Furthermore, SES’s mPower satellites, operating in Medium Earth Orbit (MEO), operate exclusively in this band, providing internet backhaul services.

Ku-Band 12.75 GHz to 13.25 GHz (Downlink) & 14.0 GHz to 14.5 GHz (Uplink).
The Ku-band is widely used for fixed satellite services (FSS) due to its balance between bandwidth availability and susceptibility to rain fade. It is suitable for broadband services, TV broadcasting, and backhaul connections. For example, Starlink and OneWeb satellites use this band to provide broadband services to Earth-based customer terminals.

X-Band 7.25 GHz to 7.75 GHz (Downlink) & 7.9 GHz to 8.4 GHz (Uplink).
The use of the X-band in satellite applications is governed by international agreements and national regulations to prevent interference between different services and to ensure the efficient use of the spectrum. The X-band is extensively used for secure military satellite communications, offering advantages like high data rates and relative resilience to jamming and eavesdropping. It supports a wide range of military applications, including mobile command, control, communications, computers, intelligence, surveillance, and reconnaissance (i.e., C4ISR) operations. Most defense-oriented satellites operate in geostationary orbit, ensuring constant coverage of specific geographic areas (e.g., Airbus Skynet constellations, Spain’s XTAR-EUR, and France’s Syracuse satellites). Most European LEO defense satellites, used primarily for reconnaissance, are fairly old, with more than 15 years since the first launch, and limited in number (i.e., <10). The most recent European LEO satellite system is the French-based Multinational Space-based Imaging System (MUSIS) and Composante Spatiale Optique (CSO), whose first CSO components were launched in 2018. Few commercial satellites utilize the X-band.

C-Band 3.7 GHz to 4.2 GHz (Downlink) & 5.925 GHz to 6.425 GHz (Uplink)
C-band is less susceptible to rain fade and is traditionally used for satellite TV broadcasting, maritime, and aviation communications. However, parts of the C-band are also being repurposed for terrestrial 5G networks in some regions, leading to potential conflicts and the need for careful coordination. The C-band is primarily used in geostationary orbit (GEO) rather than Low Earth Orbit (LEO), due to the historical allocation of C-band for fixed satellite services (FSS) and its favorable propagation characteristics. I haven’t really come across any LEO constellation using the C-band. GEO FSS satellite operators using this band extensively include SES (Luxembourg), Intelsat (Luxembourg/USA), Eutelsat (France), and Inmarsat (UK).

S-Band 2.0 GHz to 4.0 GHz
S-band is used for various applications, including mobile communications, weather radar, and some types of broadband services. It offers a good compromise between bandwidth and resistance to atmospheric absorption. Both Omnispace (USA) and Globalstar (USA) operate LEO satellites in this band. Omnispace is also interesting as they have expressed intent to have LEO satellites supporting 5G services in band n257 (26.5 to 29.5 GHz), which falls within the uplink of the Ka-band.

L-Band 1.0 GHz to 2.0 GHz
L-band is less commonly used for fixed satellite services but is notable for its use in mobile satellite services (MSS), satellite phone communications, and GPS. It provides good coverage and penetration characteristics. Both Lynk Mobile (USA), offering Direct-2-Device, IoT, and M2M services, and Astrocast (Switzerland), with their IoT/M2M services, are examples of LEO satellite businesses operating in this band.

UHF 300 MHz to 3.0 GHz
The UHF band is widely used for satellite communications, including mobile satellite services (MSS), satellite radio, and some types of broadband data services. It is favored for its relatively good propagation characteristics, including the ability to penetrate buildings and foliage. For example, Fossa Systems’ LEO pico-satellites (i.e., 1p form factor) use this band for their IoT and M2M communications services.

VHF 30 MHz to 300 MHz

The VHF band is less commonly used in satellite communications for commercial broadband services. Still, it is important for applications such as satellite telemetry, tracking, and control (TT&C) operations and amateur satellite communications. Its use is often limited due to the lower bandwidth available and the higher susceptibility to interference from terrestrial sources. Swarm Technologies (USA, a SpaceX subsidiary) uses 137–138 MHz (downlink) and 148–150 MHz (uplink), although it appears they have stopped taking new devices onto their network. Orbcomm (USA) is another example of a satellite service provider using the VHF band for IoT and M2M communications. There is very limited capacity in this band due to many other existing use cases, and LEO satellite companies appear to plan to upgrade to the UHF band or to piggyback on direct-2-cell (or direct-2-device) satellite solutions, enabling LEO satellite communications with 3GPP-compatible IoT and M2M devices.
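As a compact summary of the bands walked through above, here is a small Python reference table; the ranges follow the text (collapsing each band’s downlink and uplink into a single span), and exact allocations depend on ITU region and service:

```python
# Compact reference of the satellite bands discussed above (frequencies in GHz).
# Each entry collapses the band's downlink and uplink segments into one span.
BANDS = {
    "VHF": (0.03, 0.3),
    "UHF": (0.3, 3.0),
    "L":   (1.0, 2.0),
    "S":   (2.0, 4.0),
    "C":   (3.7, 6.425),   # downlink 3.7-4.2, uplink 5.925-6.425
    "X":   (7.25, 8.4),    # downlink 7.25-7.75, uplink 7.9-8.4
    "Ku":  (12.75, 14.5),  # downlink 12.75-13.25, uplink 14.0-14.5
    "Ka":  (17.7, 30.0),   # downlink 17.7-20.2, uplink 27.5-30.0
    "V":   (40.0, 75.0),
}

def bands_for(frequency_ghz: float) -> list[str]:
    """Which of the listed bands cover a given frequency (bands may overlap)."""
    return [b for b, (lo, hi) in BANDS.items() if lo <= frequency_ghz <= hi]

print(bands_for(28.0))  # ['Ka'] - e.g., 5G band n257 uplink territory
```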

SATELLITE ANTENNAS.

Satellites operating in Geostationary Earth Orbit (GEO), Medium Earth Orbit (MEO), and Low Earth Orbit (LEO) utilize a variety of antenna types tailored to their specific missions, which range from communication and navigation to observation (e.g., signal intelligence). The selection of an antenna is influenced by the satellite’s application, the characteristics of its orbit, and the coverage area required.

Antenna technology is intrinsically linked to spectral efficiency in satellite communications systems, as it is in any other wireless system. Antenna design influences how effectively a communication system can transmit and receive signals within a given frequency band, which is the essence of spectral efficiency (i.e., how much information per unit time, in bits per second, can I squeeze through my communications channel).

Thus, advancements in antenna technology are fundamental to improving spectral efficiency, making it a key area of research and development in the quest for more capable and efficient communication systems.

Parabolic dish antennas are prevalent for GEO satellites due to their high gain and narrow beam width, making them ideal for broadcasting and fixed satellite services. These antennas focus a tight beam on specific areas on Earth, enabling strong and direct signals essential for television, internet, and communication services. Horn antennas, while simpler, are sometimes used as feeds for larger parabolic antennas or for telemetry, tracking, and command functions due to their reliability. Additionally, phased array antennas are becoming more common in GEO satellites for their ability to steer beams electronically, offering flexibility in coverage and the capability to handle multiple beams and frequencies simultaneously.

Phased-array antennas are indispensable for MEO satellites, such as those used in navigation systems like GPS (USA), BeiDou (China), Galileo (Europe), or GLONASS (Russia). These satellite constellations cover large areas of the Earth’s surface and can adjust beam directions dynamically, a critical feature given the satellites’ movement relative to the Earth. Patch antennas are also widely used in MEO satellites, especially for mobile communication constellations, due to their compact and low-profile design, making them suitable for mobile voice and data communications.

Phased-array antennas are very important for LEO satellite use cases as well, which include broadband communication constellations like Starlink and OneWeb. Their (fast) beam-steering capabilities are essential for maintaining continuous communication with ground stations and user terminals as the satellites quickly traverse the sky. The phased-array antenna also allows coverage to be optimized with both narrow and wide fields of view (from the perspective of the satellite antenna), letting the satellite operator trade off cell capacity against cell coverage.

Simpler dipole antennas are employed for more straightforward data relay and telemetry purposes in smaller satellites and CubeSats, where space and power constraints are significant factors. Reflectarray antennas, which offer a mix of high gain and beam-steering capabilities, are used in specific LEO satellites for communication and observation applications (e.g., for signal intelligence gathering), combining features of both parabolic and phased-array antennas.

Mission-specific requirements drive the choice of antenna for a satellite. For example, GEO satellites often use high-gain, narrowly focused antennas due to their fixed position relative to the Earth, while MEO and LEO satellites, which move relatively closer to the Earth’s surface, require antennas capable of maintaining stable connections with moving ground terminals or covering large geographical areas.

Advanced antenna technologies such as beamforming, phased arrays, and Multiple Input Multiple Output (MIMO) antenna configurations are crucial in managing and utilizing the spectrum more efficiently. They enable precise targeting of radio waves, minimizing interference, and optimizing bandwidth usage. This direct control over the transmission path and signal shape allows more data (bits) to be sent and received within the same spectral space, effectively increasing the communication channel’s capacity. In particular, MIMO antenna configurations and advanced antenna beamforming have enabled terrestrial mobile cellular access technologies (e.g., LTE and 5G) to leapfrog the effective spectral efficiency, broadband speed, and capacity orders of magnitude beyond the older 2G and 3G technologies. Similar principles are being deployed today in modern advanced communications satellite antennas, providing increased capacity and quality within the cellular coverage area provided by the satellite beam.

Moreover, antenna technology developments like polarization and frequency reuse directly impact a satellite system’s ability to maximize spectral resources. Allowing simultaneous transmissions on the same frequency through different polarizations or spatial separations effectively doubles the capacity without needing additional spectrum.
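To illustrate how antenna size, beamwidth, and beam footprint interact for a LEO phased array, here is a hedged sketch using the common ~70·λ/D beamwidth rule of thumb; the 0.5 m aperture, 12 GHz frequency, and 550 km altitude are illustrative assumptions, not any particular operator’s design:

```python
import math

# Rule-of-thumb link between aperture size, beamwidth, and ground footprint.
# Illustrative figures only; real arrays differ in taper, scan angle, and shape.

def hpbw_deg(aperture_m: float, frequency_ghz: float) -> float:
    """Approximate half-power beamwidth of a uniform aperture: ~70 * lambda / D."""
    wavelength_m = 0.3 / frequency_ghz
    return 70 * wavelength_m / aperture_m

def footprint_diameter_km(altitude_km: float, beamwidth_deg: float) -> float:
    """Footprint of a nadir-pointing beam (flat-Earth approximation)."""
    return 2 * altitude_km * math.tan(math.radians(beamwidth_deg / 2))

bw = hpbw_deg(aperture_m=0.5, frequency_ghz=12)  # ~3.5 degrees
print(f"{footprint_diameter_km(550, bw):.0f} km")  # ~34 km wide beam from 550 km
# Doubling the aperture halves the beamwidth, shrinking the footprint and letting
# the operator pack more, smaller cells (hence more capacity) into the same area.
```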

WHERE DO WE END UP.

If all current commercial satellite plans were realized, within the next decade we would have more, possibly substantially more, than 65 thousand satellites circling Earth. Today, that number is less than 10 thousand, with more than half of it realized by Starlink’s LEO constellation. Imagine the increase in the amount of space debris circling Earth within the next 10 years. This will likely pose a substantial increase in operational risk for new space missions and will have to be addressed urgently.

Over the next decade, we may have at least two major LEO satellite constellations: one from Starlink, with in excess of 12 thousand satellites, and one from China, the Guo Wang state network, likewise with 12 thousand LEO satellites. One is a global constellation from an American commercial company; the other is a worldwide constellation representing the Chinese state. It would not be too surprising if, by 2034, the two constellations divide Earth between them, with one part serviced by the commercial satellite constellation (e.g., North America, Europe, parts of the Middle East, some of APAC including India, possibly some parts of Africa) and another part served by the Chinese-controlled LEO constellation, providing satellite broadband service to China, Russia, significant parts of Africa, and parts of APAC.

Over the next decade, satellite services will undergo transformative advancements, reshaping the architecture of global communication infrastructures and significantly impacting various sectors, including broadband internet, global navigation, Earth observation, and beyond. As these services evolve, we should anticipate major leaps in satellite technologies, driven by innovation in propulsion systems, miniaturization of technology, advancements in onboard processing capabilities, increasing use of AI and machine learning leapfrogging satellites’ operational efficiency and performance, breakthroughs in material science reducing weight and increasing packing density, leapfrogs in antenna technology, and, last but not least, much more efficient use of the radio frequency spectrum. Moreover, we will see breakthrough innovations that allow better co-existence and autonomous collaboration in frequency spectrum utilization between non-terrestrial and terrestrial networks, reducing the need for much of the regulatory bureaucracy, which might anyway be replaced by decentralized autonomous organizations (DAOs) and smart contracts. This development will be essential as satellite constellations are integrated into 5G and 6G network architectures as the non-terrestrial cellular access component. This particular topic, like many in this article, is worth a whole new article on its own.

I expect that over the next 10 years, electronically steerable phased-array antennas will be a notable advancement, offering increased agility and efficiency in beamforming and signal direction. Their ability to swiftly adjust beams for optimal coverage and connectivity without physical movement makes them perfect for the dynamic nature of Low Earth Orbit (LEO) satellite constellations. This technology will become increasingly cost-effective and energy-efficient, enabling widespread deployment across various satellite platforms (not only LEO designs). Advances in phased-array antenna technology will facilitate a substantial increase in satellite system capacity by increasing the number of beams, varying beam size (possibly down to the level of an individual customer ground station), and supporting multi-band operations within the same antenna.

Another promising development is the integration of metamaterials in antenna design, which will lead to more compact, flexible, and lightweight antennas. The science of metamaterials is super interesting; it concerns manufacturing artificial materials with properties not found in naturally occurring materials, their unique electromagnetic behaviors arising from their internal structure. Metamaterial antennas are going to offer superior performance, including better signal control and reduced interference, which is crucial for maintaining high-quality broadband connections. These materials are also important for substantially reducing the weight of the satellite antenna while boosting its performance. Thus, the technology will also help bring the satellite launch cost down dramatically.

Although MIMO antennas are primarily associated with terrestrial networks, I would also expect massive MIMO technology to find applications in satellite broadband systems. Satellite systems, just like ground-based cellular networks, can significantly increase their capacity and efficiency by utilizing many antenna elements to communicate simultaneously with multiple ground terminals. This could be particularly transformative for next-generation satellite networks, supporting higher data rates and accommodating more users. The technology will increase the capacity and quality of the satellite system dramatically, as it has done in terrestrial cellular networks.

Furthermore, advancements in onboard processing capabilities will allow satellites to perform more complex signal processing tasks directly in space, reducing latency and improving the efficiency of data transmission. Coupled with AI and machine learning algorithms, future satellite antennas could dynamically optimize their operational parameters in real-time, adapting to changes in the network environment and user demand.

Additionally, research into quantum antenna technology may offer breakthroughs in satellite communication, providing unprecedented levels of sensitivity and bandwidth efficiency. Although still early, quantum antennas could revolutionize signal reception and transmission in satellite broadband systems. In the context of LEO satellite systems, I am particularly excited about utilizing the Rydberg effect to enhance system sensitivity, which could lead to massive improvements. The heightened sensitivity of Rydberg atoms to electromagnetic fields could be harnessed to develop ultra-sensitive detectors for radio frequency (RF) signals. Such detectors could surpass the performance of traditional semiconductor-based devices in terms of sensitivity and selectivity, enabling satellite systems to detect weaker signals, improve signal-to-noise ratios, and even operate effectively over greater distances or with less power. Furthermore, space could be a near-ideal environment for operationalizing Rydberg antennas and communications systems, as space has a near-perfect vacuum, very low temperatures (in Earth’s shadow at least, or with proper thermal management), relatively little electromagnetic radiation (compared to Earth), and a micro-gravity environment that may facilitate long-range “communications” between Rydberg atoms. This particular topic may be further out in the future than “just” a decade from now, although it may also be with satellites that we see the first promising results of this technology.

One key area of development will be the integration of LEO satellite networks with terrestrial 5G and emerging 6G networks, marking a significant step in the evolution of Non-Terrestrial Network (NTN) architectures. This integration promises to deliver seamless, high-speed connectivity across the globe, including in remote and rural areas previously underserved by traditional broadband infrastructure. By complementing terrestrial networks, LEO satellites will help achieve ubiquitous wireless coverage, facilitating a wide range of applications and use cases from high-definition video streaming to real-time IoT data collection.

The convergence of LEO satellite services with 5G and 6G will also spur network management and orchestration innovation. Advanced techniques for managing interference, optimizing handovers between terrestrial and non-terrestrial networks, and efficiently allocating spectral resources will be crucial. It would be odd not to mention it here, so artificial intelligence and machine learning algorithms will, of course, support these efforts, enabling dynamic network adaptation to changing conditions and demands.

Moreover, the next decade will likely see significant improvements in the environmental sustainability of LEO satellite operations. Innovations in satellite design and materials, along with more efficient launch vehicles and end-of-life deorbiting strategies, will help mitigate the challenges of space debris and ensure the long-term viability of LEO satellite constellations.

In the realm of global connectivity, LEO satellites will have bridged the digital divide, offering affordable and accessible internet services to the billions of people worldwide who are unconnected today. As of 2023, an estimated 3 billion people, almost 40% of the world’s population, have never used the internet. In the next decade, it must be our ambition that, with LEO satellite networks, this number is brought down to very near zero. This will have profound implications for education, healthcare, economic development, and global collaboration.

FURTHER READING.

  1. A. Vanelli-Coralli, N. Chuberre, G. Masini, A. Guidotti, M. El Jaafari, “5G Non-Terrestrial Networks.”, Wiley (2024). A recommended reading for deep diving into NTN networks of satellites, typically the LEO kind, and High-Altitude Platform Systems (HAPS) such as stratospheric drones.
  2. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  3. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  4. Starlink, “Starlink specifications” (Starlink.com page). The following Wikipedia resource is quite good as well: Starlink.
  5. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023). This link includes a post from Elon Musk commenting on the cost involved in manufacturing the Starlink satellite and the cost of launching SpaceX’s Falcon 9 rocket.
  6. Michael Baylor, “With Block 5, SpaceX to increase launch cadence and lower prices.”, nasaspaceflight.com (May, 2018).
  7. Gwynne Shotwell, TED Talk from May 2018. She quotes here a total of USD 10 billion as a target for the 12,000 satellite network. This is just an amazing visionary talk/discussion about what may happen by 2028 (in 4-5 years ;-).
  8. Juliana Suess, “Guo Wang: China’s Answer to Starlink?”, (May 2023).
  9. Makena Young & Akhil Thadani, “Low Orbit, High Stakes, All-In on the LEO Broadband Competition.”, Center for Strategic & International Studies CSIS, (Dec. 2022).
  10. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  11. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  12. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. Ambition to have the world’s first global 5G non-terrestrial network. Initial support 3GPP-defined Narrow-Band IoT radio interface. Planned 200 LEO and <15 MEO satellites. So far, only 2 satellites have been launched.
  13. NewSpace Index: https://www.newspace.im/ I find this resource to have excellent and up-to-date information on commercial satellite constellations.
  14. R.K. Mailloux, “Phased Array Antenna Handbook, 3rd Edition”, Artech House, (September 2017).
  15. A.K. Singh, M.P. Abegaonkar, and S.K. Koul, “Metamaterials for Antenna Applications”, CRC Press (September 2021).
  16. T.L. Marzetta, E.G. Larsson, H. Yang, and H.Q. Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (November 2016).
  17. G.Y. Slepyan, S. Vlasenko, and D. Mogilevtsev, “Quantum Antennas”, arXiv:2206.14065v2, (June 2022).
  18. R. Huntley, “Quantum Rydberg Receiver Shakes Up RF Fundamentals”, EE Times, (January 2022).
  19. Y. Du, N. Cong, X. Wei, X. Zhang, W. Lou, J. He, and R. Yang, “Realization of multiband communications using different Rydberg final states”, AIP Advances, (June 2022). Demonstrating the applicability of the Rydberg effect in digital transceivers in the Ku and Ka bands.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?

“From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost-effective than establishing extraterrestrial infrastructures”.

This article, in a different and somewhat shorter format, has also been published by New Street Research under the title “Stratospheric drones: A game changer for rural networks?”. You will need to register with New Street Research to get access.

As a mobile cellular industry expert and a techno-economist, the first time I was presented with the concept of stratospheric drones, I felt butterflies in my belly. That tingling feeling that I was seeing something that could hugely disrupt how mobile cellular networks are designed and built. Imagine getting rid of the profitability-challenged rural cellular networks (i.e., the towers, the energy consumption, the capital infrastructure investments) and, at the same time, offering much better quality to customers in rural areas than is possible with the existing cellular networks we have deployed there. A technology that could fundamentally change the industry’s mobile cellular cost structure for the better, at a quantum leap in quality, and, in general, provide economical broadband services to the unconnected at a fraction of the cost of our traditional ways of building terrestrial cellular coverage.

Back in 2015, I got involved with Deutsche Telekom AG Group Technology, under the leadership of Bruno Jacobfeuerborn, in working out the detailed operational plans, deployment strategies, and, of course, the business case and general economics of building a stratospheric cellular coverage platform from scratch with the UK-based Stratospheric Platforms Ltd [2], in which Deutsche Telekom is an investor. The investment thesis lay in how we expected the stratospheric high-altitude platform to make a large part of mobile operators’ terrestrial rural cellular networks obsolete, and how it might strengthen mobile operator footprints in countries where rural and remote coverage was either very weak or non-existent (e.g., the USA, an important market for Deutsche Telekom AG).

At the time, our thoughts were to have a stratospheric coverage platform operational by 2025, 10 years after kicking off the program, with more than 100 high-altitude platforms covering the rural areas of a major Western European country. As is so often the case with genuinely disruptive ideas, reality is unforgiving. Getting to deployment and operation at scale of a high-altitude platform is still some years out due to the lack of maturity of the flight platform, including regulatory approvals for operating a HAP network at scale, increasing the operating window of the flight platform, fueling, technology challenges with the advanced antenna system, being allowed to deploy terrestrial cellular spectrum above terra firma, etc. Many of these challenges are progressing well, although slowly.

Globally, various companies are actively working on developing stratospheric drones to enhance cellular coverage. These include aerospace and defense giants like Airbus, advancing its Zephyr drone, and BAE Systems, collaborating with Prismatic on their PHASA-35 UAV. One of the most exciting HAPS companies focusing on developing world-leading high-altitude aircraft that I have come across, during my planning work on operationalizing a stratospheric cellular coverage platform, is the German company Leichtwerk AG, which has its hydrogen-fueled StratoStreamer as well as a solar-powered platform under development, with the StratoStreamer being close to production-ready. Telecom companies like Deutsche Telekom AG and BT Group are experimenting with hydrogen-powered drones in partnership with Stratospheric Platforms Limited. Through its subsidiary HAPSMobile, SoftBank is also a significant player with its Sunglider project. Additionally, entities like China Aerospace Science and Technology Corporation and Cambridge Consultants contribute to this field by co-developing enabling technologies (e.g., advanced phased-array antennas, fuel technologies, material science, …) critical for the success and deployability of high-altitude platforms at scale, aiming to improve connectivity in rural, remote, and underserved areas.

The work on integrating High Altitude Platform (HAP) networks with terrestrial cellular systems involves significant coordination with international regulatory bodies like the International Telecommunication Union Radiocommunication Sector (ITU-R) and the World Radiocommunication Conference (WRC). This process is crucial for securing permission to reuse terrestrial cellular spectrum in the stratosphere. Key focus areas include negotiating the allocation and management of frequency bands for HAP systems, ensuring they don’t interfere with terrestrial networks. These efforts are vital for successfully deploying and operating HAP systems, enabling them to provide enhanced connectivity globally, especially in rural areas, where terrestrial cellular frequencies are already in use, and in remote and underserved regions. At the latest WRC-2023 conference, SoftBank successfully gained approval within the Asia-Pacific region to use mobile spectrum bands for stratospheric drone-based mobile broadband cellular services.

Most mobile operators have at least 50% of their cellular network infrastructure assets in rural areas. While necessary for providing the coverage that mobile customers have come to expect everywhere, these sites carry only a fraction of the total mobile traffic. Individually, rural sites have poor financial returns due to their proportional operational and capital expenses.

In general, the Opex of the cellular network takes up between 50% and 60% of the Technology Opex, and at least 50% of that can be attributed to maintaining and operating the rural part of the radio access network. Capex is more cyclical than Opex due to, for example, the modernization of radio access technology. Nevertheless, over a typical modernization cycle (5 to 7 years), the rural network demands a slightly smaller but broadly similar share of Capex as it does of Opex. Typically, the Opex share of the rural cellular network may be around 10% of the corporate Opex, and its associated total cost is between 12% and 15% of total expenses.
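A back-of-envelope check of these shares; the technology-Opex-to-corporate-Opex ratio is my illustrative assumption, while the other fractions come from the paragraph above:

```python
# Back-of-envelope check of the rural cost shares quoted above.
corporate_opex = 1.0                      # normalized
technology_opex = 0.35 * corporate_opex   # assumed ~35% of corporate Opex
network_opex = 0.55 * technology_opex     # 50-60% of Technology Opex (midpoint)
rural_opex = 0.50 * network_opex          # at least 50% attributed to rural RAN

print(f"Rural RAN Opex ~ {rural_opex / corporate_opex:.0%} of corporate Opex")
# ~10%, consistent with the article's figure; adding the rural share of Capex
# lifts the rural network's total cost to roughly 12-15% of total expenses.
```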

The global telecom towers market size in 2023 is estimated at ca. 26+ billion euros, ca. 2.5% of total telecom turnover, with a projected CAGR of 3.3% from now to 2030. The top 10 tower management companies manage close to 1 million towers worldwide for mobile CSPs. Although many mobile operators have chosen to spin off their passive site infrastructure, some have yet to spin off their cellular infrastructure to one of the many tower management companies, captive or independent, such as American Tower (224,019+ towers), Cellnex Telecom (112,737+ towers), Vantage Towers (46,100+ towers), GD Towers (41,600+ towers), etc.

IMAGINE.

Focusing on low-profit or unprofitable rural cellular coverage.

Imagine an alternative coverage technology to the normal cellular one all mobile operators use today, one that would allow them to do without the costly, low-profit rural cellular network they operate to satisfy their customers’ expectations of high-quality, ubiquitous cellular coverage.

For the alternative technology to be attractive, it would need to deliver at least the same quality and capacity as the existing terrestrial-based cellular coverage for substantially better economics.

If a mobile operator with a 40% EBITDA margin did not need its rural cellular network, it could improve its margin by a sustainable 5 percentage points and increase its cash generation in relative terms by 50% (i.e., from 0.2×Revenue to 0.3×Revenue), assuming a capex-to-revenue ratio of 20% before implementing the technology, reduced to 15% after, due to avoided modernization and capacity investments in rural areas.
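The margin arithmetic is easy to verify; the following sketch normalizes revenue to 1.0 and uses only the figures quoted above:

```python
# The margin arithmetic above, made explicit. Revenue is normalized to 1.0;
# the 5-percentage-point margin uplift and the Capex ratios are the article's.
revenue = 1.0

ebitda_before = 0.40 * revenue
capex_before = 0.20 * revenue
cash_before = ebitda_before - capex_before   # 0.20 x Revenue

ebitda_after = 0.45 * revenue                # +5pp from shedding rural Opex
capex_after = 0.15 * revenue                 # no rural modernization Capex
cash_after = ebitda_after - capex_after      # 0.30 x Revenue

print(f"Cash generation: {cash_before:.2f} -> {cash_after:.2f} x Revenue "
      f"(+{cash_after / cash_before - 1:.0%})")  # +50%
```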

Imagine that the alternative technology would provide a better cellular quality to the consumer for a quantum leap reduction of the associated cost structure compared to today’s cellular networks.

Such an alternative coverage technology might also impact the global tower companies’ absolute level of sustainable tower revenues, with a substantial proportion of revenue related to rural site infrastructure being at risk.

Figure 1 An example of an unmanned autonomous stratospheric coverage platform. Source: Cambridge Consultants presentation (see reference [2]) based on their work with Stratospheric Platforms Ltd (SPL) and SPL’s innovative high-altitude coverage platform.

TERRESTRIAL CELLULAR RURAL COVERAGE – A MATTER OF POOR ECONOMICS.

When considering the quality we experience in a terrestrial cellular network, a comprehensive understanding of various environmental and physical factors is crucial to predicting the signal quality accurately. All these factors generally work against cellular signal propagation regarding how far the signal can reach from the transmitting cellular tower and the achievable quality (e.g., signal strength) that a customer can experience from a cellular service.

Firstly, the terrain plays a significant role. Rural landscapes often include varied topographies such as hills, valleys, and flat plains, each affecting signal reach differently. For instance, hilly or mountainous areas may cause signal shadowing and reflection, while flat terrains might offer less obstruction, enabling signals to travel further.

At higher frequencies (i.e., above 1 GHz), vegetation becomes an increasingly critical factor to consider. Trees, forests, and other dense foliage can absorb and scatter radio waves, attenuating signals. The type and density of vegetation, along with seasonal changes like foliage density in summer versus winter, can significantly impact signal strength.

The height and placement of transmitting and receiving antennas are also vital considerations. In rural areas, where there are fewer tall buildings, the height of the antenna can have a pronounced effect on the line of sight and, consequently, on the signal coverage and quality. Elevated antennas mitigate the impact of terrain and vegetation to some extent.

Furthermore, the lower density of buildings in rural areas means fewer reflections and less multipath interference than in urban environments. However, larger structures, such as farm buildings or industrial facilities, must be factored in, as they can obstruct or reflect signals.

Finally, the distance between the transmitter and receiver is fundamental to signal propagation. With typically fewer cell towers spread over larger distances, understanding how signal strength diminishes with distance is critical to ensuring reliable coverage at a high quality, such as high cellular throughput, as the mobile customer expects.

The typical way for a cellular operator to mitigate the environmental and physical factors that inevitably result in loss of signal strength and reduced cellular quality (i.e., sub-standard cellular speed) is to build more sites, thus incurring increasing Capex and Opex in areas that generally offer poor economic payback on any cellular asset. Such investments make an already poor economic situation even worse, as the rural cellular network generally has very low utilization.

Figure 2 Cellular capacity, or quality, measured by the unit or total throughput, is approximately driven by the amount of spectrum (in MHz) times the effective spectral efficiency (in Mbps/MHz/unit) times the number of cells or capacity units deployed. When considering the effective spectral efficiency, one needs to account for the possible “boost” that a higher-order MiMo or Advanced Antenna System brings over and above a Single In Single Out (SISO) antenna.

As our alternative technology would also need to provide at least the same quality and capacity, it is worth exploring what can be expected in terms of rural terrestrial capacity. In general, the cellular capacity (and quality) can be written as (also shown in Figure 2 above):

Throughput (in Mbps) =
Spectral Bandwidth in MHz ×
Effective Spectral Efficiency in Mbps/MHz/Cell ×
Number of Cells

We need to keep in mind an additional important factor when considering quality and capacity: the higher the operational frequency, the smaller the cell radius (all else being equal). Typically, we can improve the radius at higher frequencies by utilizing advanced antenna beamforming, that is, concentrating the radiated power per unit coverage area, which is why you will often hear that the 3.6 GHz downlink coverage radius is similar to that of 1800 MHz (or PCS). This 3.6 GHz vs. 1.8 GHz comparison holds precisely because not all else is equal: the 1800 MHz (or PCS) radiated power is spread out over the whole coverage area, while the 3.6 GHz (or C-band in general) solution uses beamforming, whose high transmitted energy density reaches the customer at a range that would not be possible if the 3.6 GHz radiated power were spread out over the cell like the 1800 MHz signal.

As an example, take an average Western European rural 5G site with all cellular bands between 700 and 2100 MHz activated. The site will have a total of 85 MHz DL and 75 MHz UL, with a 10 MHz difference between DL and UL due to band 38 (Supplementary Downlink, SDL) being operational on the site. In our example, we will be optimistic and assume that the effective spectral efficiency is 2 Mbps per MHz per cell (averaged over all bands and antenna configurations), which would indicate a fair amount of 4×4 and 8×8 MiMo antenna systems deployed. Thus, the unit throughput we would expect the terrestrial rural cell to supply is 170 Mbps (i.e., 85 MHz × 2.0 Mbps/MHz/Cell). With a rural cell coverage radius between 2 and 3 km, we then have an average throughput per square kilometer of ca. 9 Mbps/km². Due to the low demand and high frequency bandwidth per active customer, DL speeds exceeding 100+ Mbps should be relatively easy to sustain with 5G standalone, with uplink speeds being more compromised due to the larger coverage areas. Obviously, rural quality can be improved further by deploying advanced antenna systems, increasing the share of higher-order MiMo antennas, and increasing rural site density. However, as already pointed out, this would not be an economically reasonable approach.
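A quick sanity check of this rural site arithmetic; the 2.5 km radius is simply the midpoint of the 2 to 3 km range quoted above:

```python
import math

# Sanity check of the rural 5G site example using the article's figures.
bandwidth_dl_mhz = 85
spectral_eff = 2.0          # Mbps/MHz/cell, averaged over bands and antennas
cell_radius_km = 2.5        # midpoint of the 2-3 km rural radius range

cell_throughput = bandwidth_dl_mhz * spectral_eff        # 170 Mbps per cell
cell_area = math.pi * cell_radius_km ** 2                # ~19.6 km^2
print(f"{cell_throughput:.0f} Mbps per cell, "
      f"{cell_throughput / cell_area:.0f} Mbps/km^2")    # ~9 Mbps/km^2
```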

THE ADVANTAGE OF SEEING FROM ABOVE.

Figure 3 illustrates the difference between terrestrial cellular coverage from a cell tower and that of a stratospheric drone or high-altitude platform (“Antenna-in-the-Sky”). The benefit of seeing the world from above is that environmental and physical factors have substantially less impact on signal propagation and quality, which is primarily governed by distance, as the link approximates free-space propagation. The situation is very different for a terrestrial cellular tower, whose radiated signal is substantially impacted by the environment as well as physical obstructions.

It may sound silly to talk about an alternative coverage technology that could replace the need for the cellular tower infrastructure that today is critical for providing mobile broadband coverage to, for example, rural areas. What alternative coverage technologies should we consider?

If, instead of relying on terrestrial tower infrastructure, we could move the cellular antenna, and possibly the radio node itself, to the sky, we would have a situation where most points on the ground would be in line of sight to the “antenna-in-the-sky.” The antenna-in-the-sky idea is a game changer for coverage compared to conventional terrestrial cellular coverage, where environmental and physical factors dramatically reduce signal propagation and signal quality.

The key advantage of an antenna in the sky (AIS) is that the likelihood of a line-of-sight to a point on the ground is very high compared to establishing a line-of-sight for terrestrial cellular coverage that, in general, would be very low. In other words, the cellular signal propagation from an AIS closely approximates that of free space. Thus, all the various environmental signal loss factors we must consider for a standard terrestrial-based mobile network do not apply to our antenna in the sky.

Over the last ten years, we have gotten several technology candidates for our antenna-in-the-sky solution, aiming to provide terrestrial broadband services as a substitute for, or enhancement of, terrestrial mobile and fixed broadband services. In the following, I will describe two distinct types of antenna-in-the-sky solutions: (a) Low Earth Orbit (LEO) satellites, operating between 500 and 2,000 km above Earth, that provide terrestrial broadband services such as we know from Starlink (SpaceX), OneWeb (Eutelsat Group), and Kuiper (Amazon), and (b) so-called High Altitude Platforms (HAPS), operating at altitudes between 15 and 30 km (i.e., in the stratosphere). Such platforms are still in the research and trial stages but are very promising technologies for substituting or enhancing rural network broadband services. The HAP is supposed to be unmanned, highly autonomous, and ultimately operational in the stratosphere for extended periods (weeks to months), fueled by green hydrogen and possibly solar power. The high-altitude platform is thus also an unmanned aerial vehicle (UAV), although I will use the terms stratospheric drone and HAP interchangeably in the following.

Low Earth Orbit (LEO) satellites and High Altitude Platforms (HAPs) represent two distinct approaches to providing high-altitude communication and observation services. LEO satellites, operating between 500 km and 2,000 km above the Earth, orbit the planet, offering broad global coverage. The LEO satellite platform is ideal for applications like satellite broadband internet, Earth observation, and global positioning systems. However, deploying and maintaining these satellites involves complex, costly space missions and sophisticated ground control. That said, as SpaceX has demonstrated with the Starlink LEO satellite fixed broadband platform, the unit economics of the satellites improve significantly with scale (i.e., the number of satellites) once launch cost is also considered.

Figure 4 illustrates a non-terrestrial network architecture consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users. Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring continuous service, aided by interconnected satellites. The user terminal (UT) dynamically aligns itself, aiming at the best quality connection provided by the satellites within the UT’s field of vision.

Figure 4 Illustrating a Non-Terrestrial Network consisting of a Low Earth Orbit (LEO) satellite constellation providing fixed broadband services to terrestrial users (e.g., Starlink, Kuiper, OneWeb, …). Each hexagon represents a satellite beam inside the larger satellite coverage area. Note that, in general, there will be some coverage overlap between individual satellites, ensuring continuous service. The operating altitude of a LEO satellite constellation is between 300 and 2,000 km. It is assumed that the satellites are interconnected, e.g., via laser links. The User Terminal antenna (UT) dynamically orients itself toward the best line-of-sight (in terms of signal quality) to a satellite within the UT’s field-of-view (FoV). The FoV has not been shown in the picture above so as not to overcomplicate the illustration. It should be noted that, just like with the drone, it is possible to integrate the complete gNB on the LEO satellite. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

On the other hand, HAPs, such as unmanned (autonomous) stratospheric drones, operate at altitudes of approximately 15 km to 30 km in the stratosphere. Unlike LEO satellites, the stratospheric drone can hover or move slowly over specific areas, remaining quasi-stationary relative to the Earth’s surface. This characteristic makes them more suitable for localized coverage tasks like regional broadband, surveillance, and environmental monitoring. The deployment and maintenance of the stratospheric drones are managed from the Earth’s surface and do not require space launch capabilities. Furthermore, enhancing and upgrading the HAPs is straightforward, as they will regularly be on the ground for fueling and maintenance. Such upgrades are not possible with an operational LEO satellite solution, where any enhancement must wait for a subsequent satellite generation and a new launch.

Figure 5 illustrates the high-level network architecture of an unmanned autonomous stratospheric drone-based constellation providing terrestrial cellular broadband services to terrestrial mobile users on their normal 5G terminal equipment. Each hexagon represents a beam arising from the phased-array antenna integrated into the drone’s wingspan. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The drone-based non-terrestrial network is drawn consistently with the architectural radio access network (RAN) elements from Open RAN, e.g., Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU). It should be noted that the whole 5G gNB (the 5G NodeB), including the CU, could be integrated into the stratospheric drone, and in fact, so could the 5G standalone (SA) packet core, enabling fully private mobile 5G networks for defense and disaster scenarios or providing coverage in very remote areas with little possibility of ground-based infrastructure (e.g., the Arctic region, or desert and mountainous areas).

Figure 5 Illustrating a Non-Terrestrial Network consisting of a stratospheric High-Altitude Platform (HAP) drone-based constellation providing terrestrial cellular broadband services to terrestrial mobile users on their normal 5G terminal equipment. Each hexagon represents a beam inside the larger coverage area of the stratospheric drone. To deliver very high-availability services to a rural area, one could assign three HAPs to cover a given area. The operating altitude of a HAP constellation is between 10 and 50 km, with an optimum around 20 km. It is assumed that there is inter-HAP connectivity, e.g., via laser links. Of course, it is also possible to have the gNB (full 5G radio node) entirely in the stratospheric drone, which would allow easier integration with LEO satellite backhauls, for example. There might even be applications (e.g., defense, natural & unnatural disaster situations, …) where a standalone 5G SA core is integrated.

The unique advantages of the HAP operating in the stratosphere are: (1) the altitude is advantageous for providing wide-area cellular coverage with near-ideal quality, above and beyond what is possible with conventional terrestrial-based cellular coverage, because the line-of-sight likelihood is very high and far fewer environmental and physical factors degrade signal propagation and quality; and (2) the stratosphere is characterized by more stable atmospheric conditions than the troposphere below it. This stability allows the stratospheric drone to maintain a consistent position and altitude with less energy expenditure. The stratosphere also offers more consistent and direct sunlight exposure for a solar-powered HAP, with less atmospheric attenuation. Moreover, due to the thinner atmosphere at stratospheric altitudes, the stratospheric drone experiences lower air resistance (drag), increasing energy efficiency and, therefore, operational airtime.

Figure 6 illustrates Leichtwerk AG’s StratoStreamer HAP design, which is near production-ready. Leichtwerk AG works closely with the European Union Aviation Safety Agency (EASA) toward the type certificate that would make it possible to operationalize a drone constellation in Europe. The StratoStreamer has a wingspan of 65 meters and can carry a payload of 100+ kg. Courtesy: Leichtwerk AG.

Each of these solutions has its unique advantages and limitations. LEO satellites provide extensive coverage but come with higher operational complexities and costs. HAPs offer more focused coverage and are easier to manage, but they lack the global reach of LEO satellites. The choice between the two depends on the specific requirements of the intended application, including coverage area, budget, and infrastructure capabilities.

In an era where digital connectivity is indispensable, stratospheric drones could emerge as a game-changing technology. These unmanned (autonomous) drones, operating in the stratosphere, offer unique operational and economic advantages over terrestrial networks and are even seen as competitive alternatives to low earth orbit (LEO) satellite networks like Starlink or OneWeb.

STRATOSPHERIC DRONES VS TERRESTRIAL NETWORKS.

Stratospheric drones, positioned much closer to the Earth’s surface than satellites, provide distinct signal-strength and latency benefits. The HAP’s vantage point in the stratosphere (around 20 km above the Earth) ensures a high probability of line-of-sight with terrestrial user devices, mitigating the adverse effects of terrain obstacles that frequently challenge ground-based networks. This capability is particularly beneficial in rural areas in general and mountainous or densely forested areas in particular, where conventional cellular towers struggle to provide consistent coverage.

Why the stratosphere? The stratosphere is the layer of Earth’s atmosphere located above the troposphere, which is the layer where weather occurs. The stratosphere is generally characterized by stable, dry conditions with very little water vapor and minimal horizontal winds. It is also home to the ozone layer, which absorbs and filters out most of the Sun’s harmful ultraviolet radiation. It is also above the altitude of commercial air traffic, which typically flies at altitudes ranging from approximately 9 to 12 kilometers (30,000 to 40,000 feet). These conditions (in addition to those mentioned above) make operating a stratospheric platform very advantageous.

Figure 7 illustrates the coverage fundamentals of (a) a terrestrial cellular radio network, where the signal strength and quality degrade increasingly as one moves away from the antenna, and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High-Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal and quality from a terrestrial cellular site, which is influenced by its environment and physical factors and by the fact that LoS is much less likely in a conventional terrestrial cellular network. It is worth keeping in mind that the coverage scenarios where a stratospheric drone and a low earth orbit satellite may particularly excel are rural areas and outdoor coverage in denser urban areas. In urban areas, the clutter, or environmental features and objects, makes line-of-sight more challenging, impacting the strength and quality of the radio signals.

Figure 7 The chart above illustrates the coverage fundamentals of (a) a terrestrial cellular radio network, with the signal strength and quality degrading increasingly as one moves away from the antenna, and (b) the terrestrial coverage from a stratospheric drone (antenna in the sky) flying at an altitude of 15 to 30 km. The stratospheric drone, also called a High-Altitude Platform (HAP), provides near-ideal signal strength and quality due to direct line-of-sight (LoS) with the ground, compared to the signal & quality from a terrestrial cellular site, which is influenced by its environment and physical factors and by the fact that LoS is much less likely in a conventional terrestrial cellular network.

From an economic and customer experience standpoint, deploying stratospheric drones may be significantly more cost-effective than establishing extensive terrestrial infrastructure, especially in remote or rural areas. The setup and operational costs of cellular towers, including land acquisition, construction, and maintenance, are substantially higher than those of deploying stratospheric drones. These aerial platforms, once airborne, can cover vast geographical areas, potentially rendering numerous terrestrial towers redundant. At an operating height of 20 km, one would expect a coverage radius from 20 km up to 500 km, depending on the antenna system, application, and business model (e.g., terrestrial broadband services, surveillance, environmental monitoring, …).

The stratospheric drone-based coverage platform (and by platform, I mean the complete infrastructure that would replace the terrestrial cellular network) will consist of unmanned autonomous drones with a considerable wingspan (e.g., 747-like, ca. 69 meters). For example, the European (German) Leichtwerk StratoStreamer has a wingspan of 65 meters and a wing area of 197 square meters with a payload of 120+ kg (note: in comparison, a Boeing 747 has a ca. 500+ m2 wing area, but its payload is obviously much, much higher, in the range of 50 to 60 metric tons). Leichtwerk AG works closely with the European Union Aviation Safety Agency (EASA) to achieve the type certificate that would allow the HAPS to integrate into civil airspace (see ref. [34] for what that means).

An advanced antenna system is positioned under the wings (or the belly) of the drone. I will assume that the coverage radius provided by a single drone is 50 km, though it can dynamically be made smaller or larger depending on the coverage scenario and use case. The drone-based advanced antenna system breaks the coverage area (ca. 6,500+ square kilometers) into 400 patches (a number that can be increased substantially), averaging approx. 16 km2 per patch with a radius of ca. 2.5 km. Due to its near-ideal cellular link budget, the effective spectral efficiency is expected to be initially around 6 Mbps per MHz per cell. Additionally, the drone does not have the same spectrum limitations as a rural terrestrial site and would be able to support frequency bands in the downlink from ~900 MHz up to 3.9 GHz (and possibly higher, although likely with different antenna designs). Due to the HAP altitude, the Earth-to-HAP uplink signal will be limited to lower-frequency spectrum to ensure good signal quality at the stratospheric antenna; it is prudent to assume a limit of 2.1 GHz to possibly 2.6 GHz. All this is under the assumption that the stratospheric drone operator has achieved regulatory approval for operating the terrestrial cellular spectrum from their coverage platform. It should be noted that today, cellular frequency spectrum approved for terrestrial use cannot be used at altitude unless regulatory permission has been given (more on this later).

Let’s look at an example. We would need ca. 46 drones to cover the whole of Germany with the above-assumed specifications. Furthermore, if we take the average spectrum portfolio of the 3 main German operators, the stratospheric drone could operate with up to 145 MHz in downlink and at least 55 MHz in uplink (i.e., limiting UL to include 2.1 GHz). Using the HAP DL spectral efficiency and coverage area, we get a throughput density of ca. 55 Mbps/km2 and an effective rural cell throughput of 870 Mbps. In terrestrial-based cellular coverage, the contribution to quality at higher frequencies degrades rapidly as a function of the distance to the antenna. This is not the case for HAP-based coverage due to its near-ideal signal propagation.
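As a quick sanity check on these numbers, here is a minimal Python sketch of the arithmetic; the drone radius, patch count, spectrum, and spectral efficiency are taken from the text, while Germany’s land area (~357,000 km2) is my added assumption:

```python
import math

# HAP coverage & capacity arithmetic (figures from the text, see comments).
R = 50.0                                     # drone coverage radius in km
circle_km2  = math.pi * R**2                 # ~7,854 km2 circular footprint
hexagon_km2 = 1.5 * math.sqrt(3) * R**2      # ~6,495 km2 ("6,500+ km2" above)

patches   = 400                              # beams ("patches") per drone
patch_km2 = hexagon_km2 / patches            # ~16 km2 per patch
patch_radius_km = math.sqrt(patch_km2 / (1.5 * math.sqrt(3)))  # ~2.5 km

germany_km2   = 357_000                      # assumption: Germany's land area
drones_needed = math.ceil(germany_km2 / circle_km2)            # ~46 drones

dl_mhz, eff = 145, 6.0                       # MHz downlink, Mbps/MHz/cell
cell_mbps   = dl_mhz * eff                   # 870 Mbps per cell
density     = cell_mbps / patch_km2          # ~54 Mbps/km2

print(drones_needed, round(cell_mbps), round(patch_radius_km, 1), round(density))
```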

In comparison, the three incumbent German operators have on average ca. 30±4k sites per operator, with an average terrestrial coverage area of 12 km2 and a coverage radius of ca. 2.0 km (i.e., smaller in cities, ~1.3 km, larger in rural areas, ~2.7 km). Assume that the average annual cost of ownership related only to the passive part of the site is 20+ thousand euros and that 50% of the 30k sites (expect a higher number) would be redundant as the rural coverage is replaced by stratospheric drones. Such a site reduction would conservatively lead to a minimum gross monetary saving of 300 million euros annually (not considering the cost of the alternative coverage technology).

In our example, the question is whether we can operate a stratospheric drone-based platform covering rural Germany for less than 300 million euros yearly. Let’s examine this question. Say the stratospheric drone price is 1 million euros per piece (similar to the current Starlink satellite price, excluding the launch cost, which would add another 1.1 million euros to the satellite cost). For redundancy and availability purposes, we assume we need 100 stratospheric drones to cover rural Germany, allowing me to decommission on the order of 15 thousand rural terrestrial sites. The decommissioning cost and the right economic timing of tower contract terminations need to be considered. Due to standard long-term contracts, it may be 5 (optimistic) to 10+ (realistic) years before the rural network termination could be completed. Many telecom businesses that have spun out their passive site infrastructure have done so in mutual captivity with the tower management company and may have committed to very “sticky” contracts with very little flexibility for site termination at scale (e.g., 2% annually allowed over the total portfolio).

We have a capital expense of 100 million euros for the stratospheric drones. We also have to establish the support infrastructure (e.g., ground stations, airfield suitability rework, development, …) and consider operational expenses. The ballpark figure would be around 100 million euros of Capex for establishing the supporting infrastructure and another 30 million euros in annual operational expenses. In terms of steady-state Capex, it should be at most 20 million euros per year. In our example, the terrestrial rural network would have cost 3 billion euros, mainly Opex, over ten years, compared to 700 million euros, a little less than half of it Opex, for the stratospheric drone-based platform (not considering inflation).
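A minimal sketch of this ten-year comparison, using only the rounded figures quoted above (all amounts in millions of euros):

```python
# Ten-year cost sketch: terrestrial rural network vs. stratospheric drones.
years = 10

# Terrestrial: ~15k redundant rural sites at 20+ kEUR passive cost per year.
terrestrial_opex_per_year = 15_000 * 0.020            # ~300 MEUR/year
terrestrial_total = terrestrial_opex_per_year * years # ~3,000 MEUR

# Drones: 100 drones at ~1 MEUR each, ~100 MEUR support-infrastructure Capex,
# ~20 MEUR/year steady-state Capex, ~30 MEUR/year Opex (ballpark figures above).
drone_total = 100 * 1.0 + 100 + (20 + 30) * years     # ~700 MEUR

print(f"terrestrial: {terrestrial_total:.0f} MEUR, drones: {drone_total:.0f} MEUR")
```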

The economics of a stratospheric unmanned and autonomous drone-based coverage platform should be superior to those of the current terrestrial cellular coverage platform. As the stratospheric coverage platform scales and increasingly more stratospheric drones are deployed, the unit price is also likely to fall accordingly.

Spectrum usage rights are yet another critical piece.

It should be emphasized that the deployment of cellular frequency spectrum in stratospheric and LEO satellite contexts is governed by a combination of technical feasibility, regulatory frameworks, coordination to prevent interference, and operational needs. The ITU, along with national regulatory bodies, plays a central role in deciding the operational possibilities and balancing the needs and concerns of various stakeholders, including satellite operators, terrestrial network providers, and other spectrum users. Today, there are many restrictions and direct regulatory prohibitions on repurposing terrestrially assigned cellular frequencies for non-terrestrial purposes.

The World Radiocommunication Conference (WRC) plays a pivotal role in managing the global radio-frequency spectrum and satellite orbits. Its decisions directly impact the development and deployment of various radiocommunication services worldwide, ensuring their efficient operation and preventing interference across borders. The WRC’s work is fundamental to the smooth functioning of global communication networks, from television and radio broadcasting to cellular networks and satellite-based services. The WRC is typically held every three to four years, with the latest one, WRC-23, held in Dubai at the end of 2023; reference [13] provides the provisional final acts of WRC-23 (December 2023). In a landmark recommendation, WRC-23 relaxed the terrestrial-only conditions for the 698 to 960 MHz, 1.71 to 2.17 GHz, and 2.5 to 2.69 GHz frequency bands to also apply to high-altitude platform station (HAPS) base stations (“Antennas-in-the-Sky”). It should be noted that there are slightly different frequency band ranges and conditions depending on which of the three ITU-R regions (as well as exceptions for particular countries within a region) the system will be deployed in. Also, HAPS systems do not enjoy protection or priority over existing terrestrial use of those frequency bands. It is important to note that the WRC-23 recommendation only applies to coverage platforms (i.e., HAPS) in the range of 20 to 50 km altitude. This WRC-23 frequency-band relaxation does not apply to satellite operation. With the recognized importance of non-terrestrial networks and the current standardization efforts (e.g., towards 6G), it is expected that the fairly restrictive regime on terrestrial cellular spectrum may be relaxed further to also allow mobile terrestrial spectrum to be used on “Antenna-in-the-Sky” coverage platforms. Nevertheless, HAPS and terrestrial use of cellular frequency spectrum will have to be coordinated to avoid interference and the resulting capacity and quality degradation.

SoftBank announced recently (i.e., 28 December 2023 [12]), after deliberations at WRC-23, that it had successfully gained approval within the Asia-Pacific region (i.e., ITU-R Region 3) to use the mobile spectrum bands 700–900 MHz, 1.7 GHz, and 2.5 GHz for stratospheric drone-based mobile broadband cellular services (see also ref. [13]). As a result of this decision, operators in different countries and regions will be able to choose spectrum with greater flexibility when they introduce HAPS-based mobile broadband communication services, thereby enabling seamless usage with existing smartphones and other devices.

Another example of re-using terrestrially licensed cellular spectrum above ground is SpaceX’s direct-to-cell-capable 2nd-generation Starlink satellites.

On January 2nd, 2024, SpaceX launched its new generation of Starlink satellites with direct-to-cell capabilities, able to connect to a regular mobile phone (e.g., smartphone). The new direct-to-cell Starlink satellites use T-Mobile US’s terrestrially licensed cellular spectrum (i.e., 2×5 MHz of Band 25, the PCS G-block) and will work, according to T-Mobile US, with most of their existing mobile phones. The initial direct-to-cell commercial plans will only support low-bandwidth text messaging, with no voice or more bandwidth-heavy applications (e.g., streaming). Expectations are that the direct-to-cell system will deliver up to 18.3 Mbps (3.66 Mbps/MHz/cell) downlink and up to 7.2 Mbps (1.44 Mbps/MHz/cell) uplink over a channel bandwidth of 5 MHz (maximum).

Given that terrestrial 4G LTE systems struggle with such performance, it will be super interesting to see what the actual performance of the direct-to-cell satellite constellation will be.

COMPARISON WITH LEO SATELLITE BROADBAND NETWORKS.

When juxtaposed with LEO satellite networks such as Starlink (SpaceX), OneWeb (Eutelsat Group), or Kuiper (Amazon), stratospheric drones offer several advantages. Firstly, the drone’s much closer proximity to the Earth’s surface, compared to LEO altitudes of 300 – 2,000 km, results in lower latency, a critical factor for real-time applications. While LEO satellites, like those used by Starlink, have reduced latency (ca. 3 ms round-trip time) compared to traditional geostationary satellites (ca. 240 ms round-trip time), stratospheric drones can provide even quicker response times (on the order of one-tenth of a millisecond round-trip time), making the stratospheric drone substantially more beneficial for applications such as emergency services, telemedicine, and high-speed internet services.
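These latency figures follow directly from the propagation distances; a small sketch, assuming straight-down paths at the speed of light (real systems add processing and queuing delay):

```python
# Propagation-only round-trip times implied by platform altitude.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

for name, altitude_km in [("GEO", 35_786), ("LEO (Starlink)", 550), ("HAP", 20)]:
    rtt_ms = 2 * altitude_km / C_KM_PER_S * 1_000
    print(f"{name:>15}: {rtt_ms:7.2f} ms round-trip")
# GEO ~239 ms, LEO ~3.7 ms, HAP ~0.13 ms
```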

A stratospheric platform operating at 20 km altitude and targeting surveillance would, all else being equal, be 25 times better at resolving objects than a LEO satellite operating at 500 km altitude. The global aerial imaging market is expected to exceed 7 billion euros by 2030, with a CAGR of 14.2% from 2021. The flexibility of the stratospheric drone platform allows for combining cellular broadband services with a wide range of advanced aerial imaging services. Again, it is advantageous that the stratospheric drone regularly returns to Earth for fueling, maintenance, and technology upgrades and enhancements; this is not possible with a LEO satellite platform.

Moreover, the deployment and maintenance of stratospheric drones are, in theory, less complex and costly than launching and maintaining a constellation of satellites. While Starlink and similar projects require significant upfront investment for satellite manufacturing and rocket launches, stratospheric drones can be deployed at a fraction of the cost, making them a more economically viable option for many applications.

The Starlink LEO satellite constellation is currently the most comprehensive satellite (fixed) broadband coverage service. As of November 2023, Starlink had more than 5,000 satellites in low orbit (i.e., ca. 550 km altitude), with an additional 7,000+ planned, for a total target of 12+ thousand satellites. The current generation of Starlink satellites has three downlink phased-array antennas and one uplink phased-array antenna. This specification translates into 48 downlink beams (satellite to ground) and 16 uplink beams (ground to satellite). Each Starlink beam covers approx. 2,800 km2 with a coverage range of ca. 30 km, over which a 250 MHz downlink channel (in the Ku-band) has been assigned. According to del Portillo et al. [20], the spectral efficiency is estimated to be 2.7 Mbps per MHz, providing a maximum total throughput of 675 Mbps in the coverage area, or a throughput density of ca. 0.24 Mbps per km2.
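A short sketch reproducing the per-beam arithmetic from the figures above:

```python
import math

# Starlink beam capacity from the quoted figures (del Portillo et al. estimate).
beam_range_km = 30
beam_area_km2 = math.pi * beam_range_km**2   # ~2,800 km2 per beam
channel_mhz   = 250                          # Ku-band downlink channel
spectral_eff  = 2.7                          # Mbps per MHz (estimated)

beam_mbps    = channel_mhz * spectral_eff    # ~675 Mbps per beam
density_mbps = beam_mbps / beam_area_km2     # ~0.24 Mbps/km2

print(f"{beam_mbps:.0f} Mbps per beam, {density_mbps:.2f} Mbps/km2")
```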

According to the latest Q2-2023 Ookla speed tests, “among the 27 European countries that were surveyed, Starlink had median download speeds greater than 100 Mbps in 14 countries, greater than 90 Mbps in 20 countries, and greater than 80 in 24 countries, with only three countries failing to reach 70 Mbps” (see reference [26]). Of course, the actual customer experience will depend on the number of concurrent users demanding resources from the LEO satellite, as well as weather conditions, proximity of other users, etc. Starlink themselves seem to have set an upper limit of 220 Mbps download speed for their so-called priority service plan, and otherwise 100 Mbps (see [27] below). Quite impressive performance, particularly where no other broadband alternatives are available.

According to Elon Musk, SpaceX aims to reduce each Starlink satellite’s cost to less than one million euros, although the unit price will depend on the design, capabilities, and production volume. The launch cost using the SpaceX Falcon 9 launch vehicle starts at around 57 million euros; with ca. 50 satellites per launch, this adds a launch cost of ca. 1.1 million euros per satellite. SpaceX operates, as of September 2023, 150 ground stations (“Starlink Gateways”) globally that connect the satellite network with the internet and ground operations. At Starlink’s operational altitude, the estimated satellite lifetime is between 5 and 7 years due to orbital decay, fuel and propulsion system exhaustion, and component durability. Thus, a LEO satellite business must plan for satellite replacement cycles. This situation differs greatly from the stratospheric drone-based operation, where the vehicles can be continuously maintained and upgraded. Thus, they are significantly more durable, with an expected useful lifetime exceeding ten years and possibly even 20 years of operational use.

Let’s consider our example of Germany and what it would take to provide LEO satellite coverage targeting rural areas. It is important to understand that a LEO satellite travels at very high speed (close to 30 thousand km per hour) and thus completes an orbit around Earth in 90 to 120 minutes (depending on the satellite’s altitude). It is even more important to remember that Earth rotates on its axis (i.e., a full rotation every 24 hours), so the targeted coverage area will have moved relative to a given satellite orbit (easily by several hundred to thousands of kilometers). Thus, to ensure continuous satellite broadband coverage of the same area on Earth, we need a certain number of satellites in a particular orbit and several orbital planes. We would need at least 210 satellites to provide continuous coverage of Germany. Most of the time, most of these satellites would not cover Germany, and the operational satellite utilization will be very low unless areas outside Germany are also being serviced.
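The quoted speed and orbit time follow from basic orbital mechanics for a circular orbit; a sketch using Earth’s standard gravitational parameter:

```python
import math

# Orbital speed and period of a circular LEO orbit from first principles.
MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH  = 6_371.0        # km, mean Earth radius

altitude_km = 550.0       # Starlink-like orbit
r = R_EARTH + altitude_km

v_km_s   = math.sqrt(MU_EARTH / r)    # ~7.6 km/s (~27,000 km/h)
period_s = 2 * math.pi * r / v_km_s   # ~5,700 s (~95 minutes)

print(f"speed: {v_km_s * 3600:,.0f} km/h, period: {period_s / 60:.0f} min")
```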

Economically, using the Starlink numbers above as a guide, we incur a capital expense of upwards of 450 million euros to realize a satellite constellation that could cover Germany. Let’s also assume that the LEO satellite broadband operator (e.g., Starlink) must build and launch 20 satellites annually to maintain its constellation, incurring additional Capex of ca. 40+ million euros annually. This amount does not account for the Capex required to build the ground network and the operations center; let’s say all the rest requires an additional 10 million euros of Capex, including miscellaneous items going forward. The technology-related operational expenses should be low, at most 30 million euros annually (a guesstimate!) and likely less. So, covering Germany with a LEO broadband satellite platform over ten years would cost ca. 1.3 billion euros. Although substantially more costly than our stratospheric drone platform, it is still less costly than running a rural terrestrial mobile broadband network.
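A sketch of this ten-year LEO cost estimate using the Starlink-guided figures above (in millions of euros; my arithmetic lands at ~1.2 billion, which the text rounds to ca. 1.3 billion):

```python
# Ten-year cost sketch for a Germany-covering LEO constellation (MEUR).
years          = 10
satellites     = 210                # minimum for continuous German coverage
unit_cost      = 1.0 + 1.1          # satellite + launch share, MEUR each
initial_capex  = satellites * unit_cost       # ~440 MEUR ("upwards of 450")
replacement_pa = 20 * unit_cost               # ~40+ MEUR/year (20 sats/year)
ground_misc    = 10                 # ground network & miscellaneous Capex
opex_pa        = 30                 # guesstimated annual Opex (text)

total = initial_capex + ground_misc + (replacement_pa + opex_pa) * years
print(f"~{total:,.0f} MEUR over {years} years")   # ~1,171 MEUR, i.e. ~1.2 BEUR
```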

Despite comparing favorably economically to the terrestrial cellular network, it is highly unlikely to make operational and economic sense for a single operator to finance such a network; it would probably only make sense if shared between the telecom operators in a country, and even more so across multiple countries or states (e.g., European Union, United States, PRC, …).

Despite the implied silliness of a single mobile operator deploying a satellite constellation for a single Western European country (irrespective of it being fairly large), the above example serves two purposes: (1) to illustrate how economically inefficient rural mobile networks are, such that even a fairly expansive satellite constellation could be more favorable (keep in mind that most countries have 3 or 4 of these rural networks), and (2) to show that sharing the economics of a LEO satellite constellation over a larger areal footprint may make such a strategy very attractive economically for operators.

Due to the path loss at 550 km (LEO) being substantially higher than at 20 km (stratosphere), all else being equal, the signal quality of the stratospheric broadband drone will be significantly better than that of the LEO satellite. However, designing the LEO satellite with more powerful transmitters and more sensitive receivers can, to a certain extent, compensate for the factor of almost 30 in altitude difference. Clearly, the latency performance of the LEO satellite constellation will be inferior to that of the stratospheric drone-based platform due to the significantly higher operating altitude.
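This “all else being equal” gap can be quantified with the standard free-space path loss formula, which scales with 20·log10 of distance; a sketch:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss (dB), distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

# LEO (550 km) vs HAP (20 km), straight-down paths; the frequency term cancels.
delta = fspl_db(550, 2100) - fspl_db(20, 2100)
print(f"LEO vs HAP path-loss penalty: {delta:.1f} dB")   # ~28.8 dB
```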

It is, however, the capacity rather than the shared cost that could be the stumbling block for LEOs. For a rural cellular network or a stratospheric drone platform, the MNOs effectively have “control” over the Capex of the network, whether it be the RAN elements of a terrestrial network or the cost of a whole drone network (even if, in the future, the latter might become a shared cost).

However, for the LEO constellation, we think the economics of a single MNO building a LEO constellation, even for its own market, is almost entirely out of the question (i.e., a multi-billion-euro Capex outlay). Hence, in this situation, the MNOs will rely on a global LEO provider (e.g., Starlink or AST SpaceMobile) and will “lend” their spectrum to that provider in their respective geography in order to provide service. Like the HAPs, this will also require further regulatory approvals in order to free up terrestrial spectrum for satellites in rural areas.

We do not yet have visibility of the payments the LEOs will require, so there is the potential that this could again be a lower-cost alternative to rural networks. But, as we show below, we think the real limitation for LEOs might not be the shared capacity rental cost, but that there simply won’t be enough capacity available to replicate what a terrestrial network can offer today.

However, the stratospheric drone-based platform provides near-ideal cellular performance to the consumer, close to the theoretical peak performance of a terrestrial cellular network. It should be emphasized that the theoretical peak cellular performance is typically only experienced, if at all, by consumers who are very near the terrestrial cellular antenna and in a near free-space propagation environment. This is a very rare occurrence for the vast majority of mobile consumers.

Figure 8 summarizes the above comparison of a rural terrestrial cellular network with non-terrestrial cellular networks such as LEO satellites and stratospheric drones.

Figure 8 Illustrating a comparison of terrestrial cellular coverage with stratospheric drone-based (“Antenna-in-the-Sky”) cellular coverage and Low Earth Orbit (LEO) satellite coverage options.

While the majority of the 5,500+ Starlink constellation operates in the Ku-band (ca. 13 GHz), at the beginning of 2024 SpaceX launched a few 2nd-generation Starlink satellites that support direct connections from the satellite to a normal cellular device (e.g., smartphone), using 5 MHz of T-Mobile USA’s PCS band (1900 MHz). The targeted consumer service, as expressed by T-Mobile USA, is texting over areas with no or poor existing cellular coverage across the USA. This is fairly similar to services presently offered over similar coverage areas by, for example, AST SpaceMobile, Omnispace, and Lynk Global LEO satellite services, with reported maximum speeds approaching 20 Mbps. The so-called Direct-to-Device (D2D) segment, where the device is a normal smartphone without satellite connectivity functionality, is expected to develop rapidly over the next 10 years, continuing to increase the supported user speeds (i.e., the utilized terrestrial cellular spectrum) and system capacity in terms of smaller coverage areas and higher numbers of satellite beams.

Table 1 below provides an overview of the top 10 LEO satellite constellations targeting (fixed) internet services (e.g., Ku-band), IoT and M2M services, and Direct-to-Device (or direct-to-cell) services. The data has been compiled from the NewSpace Index website, with data as of 31 December 2023. The top-10 satellite constellation rank is based on the number of satellites launched until the end of 2023. Two additional Direct-to-Cell (D2C, or Direct-to-Device, D2D) LEO satellite constellations are planned for 2024–2025. One is SpaceX’s Starlink 2nd generation, launched at the beginning of 2024, using T-Mobile USA’s PCS band to connect (D2D) to normal terrestrial cellular handsets. The other is Inmarsat’s Orchestra satellite constellation, based on the L-band for mobile terrestrial services and the Ka-band for fixed broadband services. One new constellation (Mangata Networks) is targeting 5G services. Two 5G constellations have already launched: Galaxy Space (Yinhe) has launched 8 LEO satellites, with 1,000 planned, using the Q- and V-bands (i.e., not a D2D cellular 5G service), and Omnispace has launched two satellites, with 200 planned in total. Moreover, there is currently one planned constellation targeting 6G by the South Korean Hanwha Group (a bit premature, but interesting nevertheless), with 2,000 6G LEO satellites planned. Most currently launched and planned satellite constellations offering (or planning to provide) Direct-to-Cell services, including IoT and M2M, are designed for low-bandwidth services that are unlikely to compete with terrestrial cellular networks’ quality of service where reasonably good coverage (or better) exists.

In Table 2 below, we then show 5 different services with the key input variables of cell radius, spectral efficiency, and downlink spectrum. From this, we can derive the “average” capacity per square kilometer of rural coverage (a sketch of this arithmetic follows the list below).

We focus on this metric as the best measure of the capacity available once multiple users are on the service and the available spectrum is shared. This is different from “peak” speeds, which are only relevant when there are very few users per cell.

  • We start with terrestrial cellular today for bands up to 2.1 GHz and show that, assuming a 2.5 km cell radius, the average capacity is equivalent to 11 Mbps per km2.
  • For a LEO service using the Ku-band, i.e., with 250 MHz to an FWA dish, the capacity could be ca. 2 Mbps per km2.
  • For a LEO-based D2D service, the unknowns are the ultimate spectrum allowance for satellite services in cellular spectrum bands and the achievable spectral efficiency. Giving the benefit of the doubt on both, but assuming the beam radius will always be larger than a terrestrial cell’s, we get to an “optimistic” future target of 2 Mbps per km2, i.e., 1/5th of a rural terrestrial network.
  • Finally, we show for a stratospheric drone that, given a similar cell radius to a rural cell today but with more downlink spectrum available and greater spectral efficiency, we can reach ca. 55 Mbps per km2, i.e., 5x what a current rural network can offer.
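The text does not reproduce all of the table’s inputs, so in the sketch below the terrestrial spectrum/efficiency and the two LEO beam geometries are my illustrative assumptions, chosen to land near the quoted densities; the drone row uses the figures stated earlier:

```python
import math

def density_mbps_km2(dl_mhz: float, eff_mbps_mhz: float, radius_km: float) -> float:
    """Average downlink capacity per km2 for one cell/beam (hexagonal area)."""
    area_km2 = 1.5 * math.sqrt(3) * radius_km**2
    return dl_mhz * eff_mbps_mhz / area_km2

# (DL MHz, Mbps/MHz, cell/beam radius km) -- several values are my assumptions.
services = {
    "Terrestrial (<= 2.1 GHz)": (120, 1.5, 2.5),    # ~11 Mbps/km2
    "LEO Ku-band FWA":          (250, 2.7, 11.5),   # ~2 Mbps/km2 (smaller future beams assumed)
    "LEO direct-to-device":     (50,  2.0, 4.4),    # ~2 Mbps/km2 ("optimistic", all assumed)
    "Stratospheric drone":      (145, 6.0, 2.5),    # ~55 Mbps/km2 (figures from the text)
}

for name, (mhz, eff, r_km) in services.items():
    print(f"{name:>25}: {density_mbps_km2(mhz, eff, r_km):5.1f} Mbps/km2")
```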

INTEGRATING WITH 5G AND BEYOND.

The advent of 5G, and eventually 6G, brings another dimension to the utility of stratospheric drones delivering mobile broadband services. The high-altitude platform’s ability to seamlessly integrate with existing 5G networks makes it an attractive option for expanding coverage and enhancing network capacity at superior economics, particularly in rural areas where the economics of terrestrial-based cellular coverage tend to be poor. Unlike terrestrial networks that require extensive groundwork for a 5G rollout, the non-terrestrial network operator (NTNO) can rapidly deploy stratospheric drones to provide immediate 5G coverage over large areas. The high-altitude platform is also incredibly flexible compared to both LEO satellite constellations and conventional rural cellular networks. The platform can easily be upgraded during its ground maintenance window and enhanced as the technology evolves. For example, upgrading to and operationalizing 6G would be far more economical on a stratospheric platform than having to visit thousands of rural sites to modernize or upgrade the installed active infrastructure.

SUMMARY.

Stratospheric drones represent a significant advancement in the realm of wireless communication. Their strategic positioning in the stratosphere offers superior coverage and connectivity compared to terrestrial networks and low-earth satellite solutions. At the same time, their economic efficiency makes them an attractive alternative to ground-based infrastructures and LEO satellite systems. As technology continues to evolve, these high-altitude platforms (HAPs) are poised to play a crucial role in shaping the future of global broadband connectivity and ultra-high-availability connectivity solutions, complementing the burgeoning 5G networks and paving the way for next-generation three-dimensional communication solutions, moving away from today’s flat-earth, terrestrially locked communication platforms.

The strategic as well as disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article. It has the potential to make most of the (at least rural) cellular infrastructure redundant, resulting in substantial operational and economic benefits for existing mobile operators. At the same time, the HAPs could, in rural areas, provide a much better service overall in terms of availability, improved coverage, and near-ideal speeds compared to today’s cellular networks. At scale, it might also become a serious competitive and economic threat to LEO satellite constellations, such as Starlink and Kuiper, which would struggle to compete on service quality and capacity with a stratospheric coverage platform.

Although the strategic, economic, and disruptive potential of the unmanned autonomous stratospheric terrestrial coverage platform is enormous, as shown in this article, the flight platform and advanced antenna technology are still in a relatively early development phase. Substantial regulatory work remains in terms of permitting terrestrial cellular spectrum to be re-used above terra firma at the “Antenna-in-the-Sky”. The latest developments out of WRC-23 for Asia-Pacific appear very promising, showing that we are moving in the right direction of re-using terrestrial cellular spectrum in high-altitude coverage platforms. Last but not least, operating an unmanned (autonomous) stratospheric platform involves obtaining certifications and permissions and complying with various flight regulations at both national and international levels.

Terrestrial Mobile Broadband Network – takeaway:

  • It is the de facto practice for mobile cellular networks to cover nearly 100% geographically. The mobile consumer expects a high-quality, high-availability service everywhere.
  • A terrestrial mobile network has a relatively low area coverage per unit antenna with relatively high capacity and quality.
  • Mobile operators incur high and sustained infrastructure costs, especially in rural areas with low or no return on that cost.
  • Physical obstructions and terrain limit performance (i.e., non-free space characteristics).
  • Well-established technology with high reliability.
  • The high bandwidth and low latency that terrestrial networks provide in high-demand urban areas will be difficult for LEO satellite constellations and stratospheric drone-based platforms to match. Thus, non-terrestrial platforms are less likely to provide operational and economic benefits in high-demand, dense urban, and urban areas.

LEO Satellite Network – takeaway:

  • The technology is operational and improving. There is currently some competition (e.g., Starlink, Kuiper, OneWeb, etc.) in this space, primarily targeting fixed broadband and satellite backhaul services. Increasingly, new LEO satellite-based business models are being launched, providing lower-bandwidth, cellular-spectrum-based direct-to-device (D2D) text, 4G, and 5G services to regular consumer and IoT devices (e.g., Starlink, Lynk Global, AST SpaceMobile, Omnispace, …).
  • Broader coverage, suitable for global reach. It may only make sense when the business model is viewed from a worldwide reach perspective (e.g., Starlink, OneWeb,…), resulting in much-increased satellite network utilization.
  • A LEO satellite broadband network can cover a vast area per satellite due to its high altitude. However, such systems are inherently capacity-limited, although beam-forming antenna technologies (e.g., phased-array antennas) allow better capacity utilization.
  • The LEO satellite solutions are best suited for low-population areas with limited demand, such as rural and largely unpopulated areas (e.g., sea areas, deserts, coastlines, Greenland, polar areas, etc.).
  • Much higher latency compared to terrestrial and drone-based networks. 
  • Less flexible once in orbit. Upgrades and modernization only via replacement.
  • The LEO satellite has a limited useful operational lifetime due to its lower orbital altitude (e.g., 5 to 7 years).
  • Lower infrastructure cost for rural coverage compared to terrestrial networks, but substantially higher than drones when targeting regional areas (e.g., Germany or individual countries in general).
  • Complementary to the existing mobile business model of communications service providers (CSPs), though with a substantial business risk to CSPs in low-population areas where the satellite service faces few, if any, capacity limitations.
  • Requires regulatory permission (authorization) to operate terrestrial frequencies on the satellite platform over any given country. This process is overseen by national regulatory bodies (e.g., the FCC in the USA) in coordination with the International Telecommunication Union (ITU). Satellite operators must apply for frequency bands for uplink and downlink communications and coordinate with the ITU to avoid interference with other satellites and terrestrial systems. In recent years, however, there has been a trend towards more flexible spectrum regulations, allowing for innovative uses of the spectrum, like integrating terrestrial and satellite services. This flexibility is crucial in accommodating new technologies and service models.
  • Operating a LEO satellite constellation requires a comprehensive set of permissions and certifications that encompass international and national space regulations, frequency allocation, launch authorization, adherence to space debris mitigation guidelines, and various liability and insurance requirements.
  • Both LEO and MEO satellites are likely to be complementary or supplementary to stratospheric drone-based broadband cellular networks, offering high-performing transport solutions and possibly even acting as standalone or integrated (with terrestrial networks) 5G core networks or “clouds-in-the-sky”.

Stratospheric Drone-Based Network – takeaway:

  • It is an emerging technology with ongoing research, trials, and proof of concept.
  • A stratospheric drone-based broadband network will have lower deployment costs than terrestrial and LEO satellite broadband networks.
  • In rural areas, the stratospheric drone-based broadband network offers better economics and near-ideal quality than terrestrial mobile networks. In terms of cell size and capacity, it can easily match that of a rural mobile network.
  • The solution offers flexibility and versatility and can be geographically repositioned as needed. The versatility provides a much broader business model than “just” an alternative rural coverage solution (e.g., aerial imaging, surveillance, defense scenarios, disaster area support, etc.).
  • Reduced latency compared to LEO satellites.
  • Also ideal for targeted or temporary coverage needs.
  • Complementary to the existing mobile business model of communications service providers (CSPs) with additional B2B and public services business potential from its application versatility.
  • Potential substantial negative impact on the telecom tower business as the stratospheric drone-based broadband network would make (at least) rural terrestrial towers redundant.
  • May disrupt a substantial part of the LEO satellite business model due to better service quality and capacity, leaving the LEO satellite constellations’ revenue pool to remote areas and specialized use cases.
  • Requires regulatory permission to operate terrestrial frequencies (i.e., frequency authorization) on the stratospheric drone platform (similar to LEO satellites). Big steps have already been made at the latest WRC-23, where the conditions for the frequency bands 698 to 960 MHz, 1710 to 2170 MHz, and 2500 to 2690 MHz were relaxed to allow use in HAPS operating at 20 to 50 km altitude (i.e., the stratosphere).
  • Operating a stratospheric platform in European airspace involves obtaining certifications and permissions and (of course) complying with various regulations at both national and international levels. This includes type certification from the European Union Aviation Safety Agency (EASA) and approvals from the national civil aviation authorities in Europe.

FURTHER READING.

  1. New Street Research “Stratospheric drones: A game changer for rural networks?” (January 2024).
  2. https://hapsalliance.org/
  3. https://www.stratosphericplatforms.com/, see also “Beaming 5G from the stratosphere” (June 2023) and “Cambridge Consultants building the world’s largest commercial airborne antenna” (2021).
  4. Iain Morris, “Deutsche Telekom bets on giant flying antenna”, Light Reading (October 2020).
  5. “Deutsche Telekom and Stratospheric Platforms Limited (SPL) show Cellular communications service from the Stratosphere” (November 2020).
  6. “High Altitude Platform Systems: Towers in the Skies” (June 2021).
  7. “Stratospheric Platforms successfully trials 5G network coverage from HAPS vehicle” (March 2022).
  8. Leichtwerk AG, “High Altitude Platform Stations (HAPS) – A Future Key Element of Broadband Infrastructure” (2023). I recommend closely following Leichtwerk AG, which is a world champion in making advanced gliding planes. The hydrogen-powered StratoStreamer HAP is near production-ready, and they are currently working on a solar-powered platform. Germany is renowned for producing some of the best gliding planes in the world (after WWII, Germany was banned from developing and producing aircraft, military as well as civil; these restrictions were only relaxed in the 1960s). Germany has a long and distinguished history in glider development, dating back to the early 20th century. German manufacturers like Schleicher, Schempp-Hirth, and DG Flugzeugbau are among the world’s leading producers of high-quality gliders. These companies are known for their innovative designs, advanced materials, and precision engineering, contributing to Germany’s reputation in this field.
  9. Jerzy Lewandowski, “Airbus Aims to Revolutionize Global Internet Access with Stratospheric Drones” (December 2023).
  10. Utilities One, “An Elevated Approach High Altitude Platforms in Communication Strategies”, (October 2023).
  11. Rajesh Uppal, “Stratospheric drones to provide 5g wireless communications global internet border security and military surveillance”  (May 2023).
  12. SoftBank, “SoftBank Corp.-led Proposal to Expand Spectrum Use for HAPS Base Stations Agreed at World Radiocommunication Conference 2023 (WRC-23)”, press release (December 2023).
  13. ITU Publication, World Radiocommunication Conference 2023 (WRC-23), Provisional Final Acts (December 2023). Note 1: The International Telecommunication Union (ITU) divides the world into three regions for the management of radio frequency spectrum and satellite orbits: Region 1 includes Europe, Africa, the Middle East west of the Persian Gulf including Iraq, the former Soviet Union, and Mongolia; Region 2 covers the Americas, Greenland, and some of the eastern Pacific Islands; and Region 3 encompasses Asia (excl. the former Soviet Union), Australia, the southwest Pacific, and the Indian Ocean’s islands. Note 2: The recommendations, such as those designated with “ADD” (additional), are typically firm in the sense that they have been agreed upon by the conference participants. However, they are subject to ratification processes in individual countries; the national regulatory authorities in each member state need to implement these recommendations in accordance with their own legal and regulatory frameworks.
  14. Geoff Huston, “Starlink Protocol Performance” (November 2023).
  15. Curtis Arnold, “An overview of how Starlink’s Phased Array Antenna “Dishy McFlatface” works.”, LinkedIn (August 2023).
  16. Quora, “How much does a satellite cost for SpaceX’s Starlink project and what would be the cheapest way to launch it into space?” (June 2023).
  17. The Clarus Network Group, “Starlink v OneWeb – A Comprehensive Comparison” (October 2023).
  18. Brian Wang, “SpaceX Launches Starlink Direct to Phone Satellites”, (January 2024).
  19. Sergei Pekhterev, “The Bandwidth Of The StarLink Constellation…and the assessment of its potential subscriber base in the USA.”, SatMagazine, (November 2021).
  20. I. del Portillo et al., “A technical comparison of three low earth orbit satellite constellation systems to provide global broadband,” Acta Astronautica, (2019).
  21. Nils Pachler et al., “An Updated Comparison of Four Low Earth Orbit Satellite Constellation Systems to Provide Global Broadband” (2021).
  22. Shkelzen Cakaj, “The Parameters Comparison of the “Starlink” LEO Satellites Constellation for Different Orbital Shells” (May 2021).
  23. Mike Puchol, “Modeling Starlink capacity” (October 2022).
  24. Mike Dano, “T-Mobile and SpaceX want to connect regular phones to satellites”, Light Reading (August 2022).
  25. Starlink, “SpaceX sends first text message via its newly launched direct to cell satellites” (January 2024).
  26. GSMA.com, “New Speedtest Data Shows Starlink Performance is Mixed — But That’s a Good Thing” (2023).
  27. Starlink, “Starlink specifications” (Starlink.com page).
  28. AST SpaceMobile website: https://ast-science.com/ Constellation Areas: Internet, Direct-to-Cell, Space-Based Cellular Broadband, Satellite-to-Cellphone. 243 LEO satellites planned. 2 launched.
  29. Lynk Global website: https://lynk.world/ (see also FCC Order and Authorization). It should be noted that Lynk can operate within 617 to 960 MHz (Space-to-Earth) and 663 to 915 MHz (Earth-to-Space). However, only outside the USA. Constellation Area: IoT / M2M, Satellite-to-Cellphone, Internet, Direct-to-Cell. 8 LEO satellites out of 10 planned.
  30. Omnispace website: https://omnispace.com/ Constellation Area: IoT / M2M, 5G. World’s first global 5G non-terrestrial network. Initially supporting the 3GPP-defined Narrow-Band IoT radio interface. Planned: 200 LEO and <15 MEO satellites. So far, only 2 satellites have been launched.
  31. NewSpace Index: https://www.newspace.im/ I find this resource has excellent and up-to-date information on commercial satellite constellations.
  32. Wikipedia, “Satellite constellation”.
  33. LEOLABS Space visualization – SpaceX Starlink mapping. (deselect “Debris”, “Beams”, and “Instruments”, and select “Follow Earth”). An alternative visualization service for Starlink & OneWeb satellites is the website Satellitemap.space (you might go to settings and turn on signal Intensity which will give you the satellite coverage hexagons).
  34. European Union Aviation Safety Agency (EASA). Note that an EASA Type Certificate is a critical document in the world of aviation. This certificate is a seal of approval, indicating that a particular type of aircraft, engine, or aviation component meets all the established safety and environmental standards per EASA’s stringent regulations. When an aircraft, engine, or component is awarded an EASA Type Certificate, it signifies the thorough and rigorous evaluation process that it has undergone. This process assesses everything from design and manufacturing to performance and safety aspects. The issuance of the certificate confirms that the product is safe for use in civil aviation and complies with the necessary airworthiness requirements. These requirements are essential to ensure the safety and reliability of aircraft operating in civil airspace. Beyond the borders of the European Union, an EASA Type Certificate is also highly regarded globally. Many countries recognize or accept these certificates, which facilitates international trade in aviation products and contributes to the global standardization of aviation safety.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.

I also owe a lot of gratitude to James Ratzer, Partner at New Street Research, for editorial suggestions, great discussions, and challenges, making the paper far better than it otherwise would have been. I would also like to thank Russel Waller, Pan-European Telecoms and ESG Equity Analyst at New Street Research, for being supportive and insistent that I get something written for NSR.

I also greatly appreciate my past collaboration and the many discussions on the topic of Stratospheric Drones in particular and advanced antenna designs and properties in general that I have had with Dr. Jaroslav Holis, Senior R&D Manager (Group Technology, Deutsche Telekom AG) over the last couple of years. When it comes to my early involvement in Stratospheric Drones activities with Group Technology Deutsche Telekom AG, I have to recognize my friend, mentor, and former boss, Dr. Bruno Jacobfeuerborn, former CTO Deutsche Telekom AG and Telekom Deutschland, for his passion and strong support for this activity since 2015. My friend and former colleague Rachid El Hattachi deserves the credit for “discovering” and believing in the opportunities that a cellular broadband-based stratospheric drone brings to the telecom industry.

Many thanks to CEO Dr. Reiner Kickert of Leichtwerk AG for providing some high resolution pictures of his beautiful StratoStreamer.

Thanks to my friend Amit Keren for suggesting a great quote that starts this article.

Any errors or lack of clarity are solely mine and not due to the collaborators and colleagues who have done their best to support this piece.

Telco energy consumption – a path to a greener future?

To my friend Rudolf van der Berg: this story is not about how volumetric demand (bytes or bits) results in increased energy consumption (W·h). That notion is silly, as we both “violently” agree ;-). I recommend that readers also check out Rudolf’s wonderful presentation, “Energy Consumption of the Internet” (May 2023), which he delivered at the RIPE86 student event in 2023.

Recently, I had the privilege of watching a presentation by a seasoned executive about what his telco company is doing for the environment regarding sustainability and CO2 reduction in general. I think the company is doing something innovative beyond compensating shortfalls by buying certificates and the (mis)use of green energy resources.

They are (reasonably) aggressively replacing their copper infrastructure (country stats for 2022: ~90% of HH / ~16% subscriptions) with green, sustainable fiber (country stats for 2022: ~78% / ~60%). This is an obvious strategy that results in a quantum leap in customer experience potential and helps reduce the overall energy consumption that results from operating the ancient copper network.

What was missing, in my opinion, was consideration of the opportunity to phase out the HFC network (country stats for 2022: ~70% / ~60%), reducing the current HFC+fiber overbuild of 1.45 and, of course, the energy consumption and operational costs (and complexity) of operating two fixed broadband technologies (three if we include the copper). However, maybe understandably enough, substantial investments have been made in upgrading to DOCSIS 3.1, an investment that possibly is still somewhat removed from having been written off.

The “wtf-moment” (in an otherwise very pleasant and agreeable session) came when the speaker suggested that, as part of their sustainability and CO2-reduction strategy, the telco was busy migrating from 4G LTE to 5G, reasoning that 5G is 90% more energy efficient than 4G.

Firstly, it is correct that 5G is (in an apples-for-apples comparison!) ca. 90% more efficient at delivering a single bit than 4G. The metric we use is Joules-per-bit, or Watt-seconds-per-bit. It is also not uncommon to experience telco executives hinting at the relative greenness of 5G (which is, in my opinion, decidedly not a green broadband communications technology…).

Secondly, so what! Should we really care about relative energy consumption? After all, we pay for absolute energy consumption, not for some relativized measure of consumed energy.

I think I know the answer from the CFO and the in-the-know investors.

If the absolute energy consumption of 5G is higher than that of 4G, I will (most likely) have higher operational costs attributable to that increased power consumption. And I am rarely, if ever, in an apples-for-apples situation: 5G requires substantially more power to provide for its new requirements and specifications, so I will be worse off regarding the associated cost in absolute terms of money. Unless I also have higher revenue associated with 5G, I am economically worse off than I was with the older technology.

Higher information-related energy efficiency in cellular communications systems follows from the essential requirement of ever-better spectral efficiency, all else being equal. It does not guarantee that, in absolute monetary terms, a Telco will be better off … far from it!

THE ENERGY OF DELIVERING A BIT.

Energy, which I choose to express in Joules, equals the power (in Watts, W) consumed to deliver a given unit of output (e.g., a bit) multiplied by the time (e.g., in seconds) it took to deliver that unit.

Take a 4G LTE base station that consumes ca. 5.0 kW to deliver a maximum throughput of 160 Mbps per sector (@ 80 MHz per sector). Assuming a standard three-sector site, the information energy efficiency of this specific 4G LTE base station (in W·s per bit) would be ca. 10 µJ/bit. That is, the 4G LTE base station requires 10 micro (one millionth) Joules to deliver 1 bit (in 1 second).

In the 5G world, we would have a 5G SA base station, using the same frequency bands as 4G with an additional 10 MHz @ 700 MHz and 100 MHz @ 3.5 GHz included. The 3.5 GHz band is supported by an advanced antenna system (AAS) rather than the classical passive antenna system used for the other frequency bands. This configuration consumes 10 kW, with ~40% attributed to the 3.5 GHz AAS, and supports ~1 Gbps per sector (@ 190 MHz per sector). This example’s 5G information energy efficiency would be ca. 3.3 µJ/bit.

In this non-apples-for-apples comparison, 5G is about three times more efficient in delivering a bit than 4G LTE (in the example above). Yet, regarding what an operator actually pays for, 5G is twice as costly in energy consumption compared to 4G.
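
The arithmetic is simple enough to script. Below is a minimal sanity check of the two examples, assuming a standard three-sector site (the assumption that makes the 4G figure come out at ca. 10 µJ/bit); all other figures are the ones quoted above.

```python
# Minimal sanity check of the 4G vs. 5G examples above.
SECTORS = 3  # assumed standard three-sector site

def joules_per_bit(site_power_w: float, mbps_per_sector: float) -> float:
    # J/bit = W / (bit/s), aggregated over all sectors of the site
    return site_power_w / (mbps_per_sector * 1e6 * SECTORS)

e4 = joules_per_bit(5_000, 160)     # 4G LTE: 5 kW, 160 Mbps per sector
e5 = joules_per_bit(10_000, 1_000)  # 5G SA: 10 kW, ~1 Gbps per sector

print(f"4G: {e4 * 1e6:.1f} uJ/bit")                       # ~10.4 uJ/bit
print(f"5G: {e5 * 1e6:.1f} uJ/bit")                       # ~3.3 uJ/bit
print(f"5G per-bit advantage: ~{e4 / e5:.1f}x")           # ~3.1x
print(f"5G absolute power: {10_000 / 5_000:.0f}x of 4G")  # 2x
```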

It should be noted that power consumption is not driven by volumetric demand but by the time for which that demand exists and the load per unit of time. Base stations also consume power even when idle, to a degree that depends on the intelligence of the energy management system applied.

So, more formally, we have

E per bit = P (in W) · time (in s) per bit, or, in basic units,

J/bit = W·s/bit = W/(bit/s) = W/bps = W / [ MHz · Mbps/MHz/unit · unit-quantity ]

E per bit = P (in W) / [ Bandwidth (in MHz) · Spectral efficiency (in Mbps/MHz per unit) · unit-quantity ]

where a “unit” is, for example, a sector of a base station.

It is important to remember that this is about the system-specification information efficiency; there is no direct relationship between the power you need and the amount of information, bit-wise, that your system will ultimately deliver.

\frac{E_{4G}}{bit} \; = \; \frac {\; P_{4G} \;} {\; B_{4G} \; \cdot \; \eta_{4G,eff} \; \cdot N \;\;\;} and \;\;\; \frac{E_{5G}}{bit} \; = \; \frac {\; P_{5G} \;} {\; B_{5G} \; \cdot \; \eta_{5G,eff} \; \cdot N \;}

where N is the unit quantity (e.g., the number of sectors per base station). Thus, the relative efficiency between 4G and 5G is

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}}

Currently (i.e., 2023), the various components of the above are approximately within the following ranges.

\frac{P_{4G}}{P_{5G}} \; \lesssim \; 1

\frac{B_{5G}}{B_{4G}} \; > \;2

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

The power consumption of a 5G RAT is higher than that of a 4G RAT. As higher-frequency spectrum (e.g., C-band, 6 GHz, 23 GHz, …) is added to the 5G RAT, increasingly more spectral bandwidth (B) will be available compared to what was deployed for 4G. This will increase the bit-wise energy efficiency of 5G compared to 4G, although power consumption is also expected to increase as the higher frequencies are supported.

If the bandwidth and the system power consumption are the same for both radio access technologies (RATs), then the relative information energy efficiency is

\frac{E_{4G}/bit}{E_{5G}/bit} \; \approx \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \gtrsim \; 10

The exact value depends on the relative difference in spectral efficiency. 5G is specified and designed to have at least ten times (10×) the spectral efficiency of 4G. If you do the math (assuming apples-to-apples applies), it is no surprise that 5G is specified to be 90% more efficient in delivering a bit (in a given unit of time) than 4G LTE.
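
For completeness, here is the same arithmetic expressed through the ratio formula above. The first case is the apples-to-apples, spec-level comparison behind the 90% claim; the second reuses the field-like example site from earlier.

```python
# (E_4G/bit) / (E_5G/bit) = (P_4G/P_5G) * (B_5G/B_4G) * (eta_5G/eta_4G)
def relative_per_bit_efficiency(p_ratio: float, b_ratio: float, eta_ratio: float) -> float:
    return p_ratio * b_ratio * eta_ratio

# Apples-to-apples: same power, same bandwidth, 10x spec-level spectral efficiency.
r = relative_per_bit_efficiency(p_ratio=1.0, b_ratio=1.0, eta_ratio=10.0)
print(f"5G delivers a bit with {100 * (1 - 1 / r):.0f}% less energy")  # 90%

# The earlier example site: 5G draws 2x the power, 190 vs 80 MHz,
# and achieves ~5.3 vs 2 bps/Hz.
r = relative_per_bit_efficiency(p_ratio=0.5,
                                b_ratio=190 / 80,
                                eta_ratio=(1000 / 190) / (160 / 80))
print(f"Example site: 5G is ~{r:.1f}x more efficient per bit")  # ~3.1x
```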

And just to emphasize the obvious,

E_{RAT} \; = \; P_{RAT} \; \cdot \; t \; \approx \; E_{idle} \; + \; P_{BB, RAT} \; \cdot \; t \; +\sum_{freq}P_{freq,\; antenna\; type}\; \cdot \; t_{freq} \;

RAT refers to the radio access technology, BB to the baseband, freq to the cellular frequency bands, and idle to the state in which the system is not being utilized.

Volume in Bytes (or bits) does not directly relate to energy consumption. As frequency bands are added to a sector (of a base station), the overall power consumption will increase. Moreover, the more computing is required in the antenna, such as for advanced antenna systems, including massive MIMO antennas, the more power the base station will consume. And the longer the frequency bands are being utilized, the higher the power consumption will be.
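
To make the point concrete, here is a toy version of that site-energy model. All power figures and active hours are hypothetical, purely for illustration; the structure is what matters.

```python
# Toy version of E_RAT ~= E_idle + P_BB * t + sum_f P_f * t_f (energies in Wh).
def site_energy_wh(hours: float, p_idle_w: float, p_baseband_w: float,
                   band_loads: dict) -> float:
    """band_loads maps band name -> (active power in W, active hours)."""
    energy_wh = (p_idle_w + p_baseband_w) * hours  # always-on floor: idle + baseband
    for power_w, active_h in band_loads.values():
        energy_wh += power_w * active_h            # per-band power times time under load
    return energy_wh

# Hypothetical figures for one day at a three-band 5G site. Note that energy
# scales with *time under load*, not with the bytes shipped during that time.
daily = site_energy_wh(
    hours=24,
    p_idle_w=800,
    p_baseband_w=600,
    band_loads={
        "700 MHz (passive antenna)": (500, 18),    # low power, long busy period
        "1800 MHz (passive antenna)": (900, 14),
        "3.5 GHz (AAS)": (3_500, 10),              # massive-MIMO AAS dominates when active
    },
)
print(f"~{daily / 1000:.1f} kWh/day")  # ~90 kWh/day with these made-up inputs
```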

Indirectly, as the cellular system is being used, customers consume bits and bytes (= 8 bits), and this consumption depends on the effective spectral efficiency (in bps/Hz), the effective bandwidth (in Hz) experienced by the customers (e.g., many customers will be in a coverage situation where they may not benefit from higher frequency bands), and the effective time for which they make use of the cellular network’s resources. The observant reader will notice that I like the term “effective.” The reason is that customers rarely enjoy the maximum possible spectral efficiency, and not all of the frequency spectrum covering customers is necessarily applied to an individual customer, depending on their coverage situation.

In the report “A Comparison of the Energy Consumption of Broadband Data Transfer Technologies (November 2021),” the authors show the energy and volumetric consumption of mobile networks in Finland over the period from 2010 to 2020. To be clear, I do not support the authors’ assertion of causation between volumetric demand and energy consumption. As I have shown above, volumetric usage does not directly cause a given power consumption level. Over the 10-year period covered by the report, they observe a 70% increase in absolute energy consumption (from 404 to 686 GWh, CAGR ~5.5%) and a factor of ~70 increase in traffic volume (~60 TB to ~4,000 TB, CAGR ~52%). One should resist the temptation to attribute the increase in energy over the period directly to the data volume increase, however weak that relationship is (note that the authors did not resist that temptation). Rudolf van der Berg has raised several issues with the approach of the above paper (as well as with much other related work) and indicated that the authors’ data and approach may not be reliable. Unfortunately, in this respect, systematic, reliable, and consistent data in the Telco industry is hard to come by (even though such data should be available to the individual telcos).
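
For the record, the quoted growth rates are easy to verify; the check below addresses the arithmetic only, not the data-quality concerns raised above.

```python
# Quick check of the quoted growth figures (arithmetic only).
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"Energy:  x{686 / 404:.2f} overall, {cagr(404, 686, 10):.1%} per year")    # x1.70, ~5.4%
print(f"Traffic: x{4_000 / 60:.0f} overall, {cagr(60, 4_000, 10):.1%} per year")  # x67, ~52%
```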

Technology change from 2G/3G to 4G, site densification, and additional frequency bands can easily explain the increase in energy consumption (and all are far better explanations than data volume). It should be noted that there are also factors that decrease power consumption over time, such as more efficient electronics (e.g., via modernization), intelligent power management applications, and, last but not least, the switch-off of older radio access technologies.

The factors that drive a cell site’s absolute energy consumption are:

  • The radio access technology, with newer technologies generally consuming more energy than older ones (even as the newer technologies become increasingly more spectrally efficient).
  • The antenna type and configuration, including the computing requirements for advanced signal processing and beamforming algorithms (which improve spectral efficiency at the expense of increased absolute energy consumption).
  • Equipment efficiency. In general, new generations of electronics and system designs tend to be more energy-efficient for the same level of performance.
  • Intelligent energy management systems that allow for effective power management strategies and thereby reduce energy consumption compared to what it would have been without such systems.
  • The network optimization policy. Is the cellular network planned and optimized for meeting the demands and needs of the customers (i.e., the economic design framework) or for providing peak performance to as many customers as possible (i.e., the Umlaut/Ookla performance-driven framework)? The Umlaut/Ookla-optimized network, maxing out on base station configuration, will see substantially higher energy consumption and associated costs.

The absolute cellular energy consumption has continued to rise as new radio access technologies (RATs) have been introduced, irrespective of the leapfrogs in those RATs’ spectral (bits per Hz) and information-related (Joules per bit) efficiencies.

WHY 5G IS NOT A GREEN TECHNOLOGY.

Let’s first re-acquaint ourselves with the 2015 vision in the NGMN 5G White Paper:

“5G should support a 1,000 times traffic increase in the next ten years timeframe, with energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency increase of x2000 in the next ten years timeframe.” (Section 4.2.2 Energy Efficiency, 5G White Paper by NGMN Alliance, February 2015).

The bold emphasis is my own and not in the paper itself. There is no doubt that the authors of the 5G vision paper had the ambition of making 5G a more sustainable and greener cellular alternative than had historically been the case.

So, from the above statement, we have two performance figures that illustrate the ambition of 5G relative to 4G. Firstly, we have the requirement that 5G energy efficiency should be 2,000× higher than that of 4G (as 4G was at the beginning of 2015).

\frac{E_{4G}/bit}{E_{5G}/bit} \; = \; \frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \cdot \; \frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \geq \; 2,000

or

\frac{\; P_{4G} \;}{\; P_{5G}} \; \cdot \; \frac{\; B_{5G} \;}{\; B_{4G}} \; \geq \; 200

if

\frac{\; \eta_{5G,eff} \;}{\; \eta_{4G,eff}} \; \approx \; 10

Getting more spectrum bandwidth is relatively trivial as you go up in frequency, into, for example, the millimeter-wave range (and beyond). However, getting 20+ GHz (i.e., 200+ times the ~100 MHz deployed for 4G) of additional, practically usable spectrum bandwidth would be rather (= understatement) ambitious.
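
To see how heroic that is, invert the requirement: with a 10× spectral-efficiency gain and (optimistically) unchanged power consumption, bandwidth alone must supply the remaining factor of 200. A quick sketch, assuming the 100 MHz 4G baseline used above:

```python
# NGMN 2015: 5G should be 2,000x more energy efficient per bit than 4G.
# With a 10x spectral-efficiency gain and unchanged power, bandwidth alone
# must deliver the remaining factor: B_5G / B_4G >= 200.
TARGET_EFFICIENCY_GAIN = 2_000
ETA_RATIO = 10   # spec-level spectral-efficiency gain, 5G vs. 4G
P_RATIO = 1.0    # optimistic assumption: P_4G / P_5G = 1

bw_ratio_needed = TARGET_EFFICIENCY_GAIN / (ETA_RATIO * P_RATIO)  # 200
b4g_mhz = 100    # assumed 4G baseline bandwidth, as in the text
print(f"Needed 5G bandwidth: {bw_ratio_needed * b4g_mhz / 1_000:.0f} GHz")  # 20 GHz
```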

Secondly, the absolute energy consumption of the whole 5G network should be half of what it was with 4G:

\frac{E_{5G}}{E_{4G}} \; = \; \frac{\; P_{5G} \; \cdot \; t\;}{\; P_{4G} \; \cdot \; t}\; \approx \; \frac{\; P_{5G} \;}{\; P_{4G} \; } \; \leq \; \frac{1}{2}

If you think about this for a moment: halving the absolute energy consumption is an enormous challenge, even within the same RAT. It requires innovation leapfrogs across the RAT electronics architecture, design, and the material science underlying all of it. In other words, fundamental changes are required in the RF front-end (e.g., power amplifiers, transceivers), baseband processing, DSP, DACs, ADCs, cooling, control and management systems, algorithms, compute, etc.

But reality eats vision for breakfast … There is really no sign that the super-ambitious goal set by the NGMN Alliance in early 2015 is even remotely achievable, even if we gave it another ten years (i.e., until 2035). We are more than two orders of magnitude away from the visionary NGMN target, and we are almost at the 10-year anniversary of the vision paper. We more or less get the benefit of the relative difference in spectral efficiency (×10), but innovation beyond that has not contributed much to a quantum leap in bit-wise cellular energy efficiency.

I know many operators who will say that, from a sustainability perspective (at least before energy prices went through the roof), it really does not matter that 5G, in absolute terms, leads to substantial increases in energy consumption. They use green energy to supply the demand from 5G and pay off CO2 deficits with certificates.

First of all, unless the increased cost can be recovered from the customers (e.g., through price plan increases), it is a dubious economic avenue to pursue (and has a bit of a Titanic feel to it … going down together while the orchestra is playing).

Second, we should ask ourselves whether it is really okay for any industry to greedily consume sustainable, and still relatively scarce, green resources without being incentivized (or encouraged) to pursue alternatives and to optimize across mobile and fixed broadband technologies. Particularly when fixed broadband technologies such as fiber are available that would lead to a very sizable and substantial reduction in energy consumption as customers increasingly adopt fiber broadband.

Fiber is the greenest and most sustainable access technology we can deploy compared to cellular broadband technologies.

SO WHAT?

5G is a reality. Telcos are and will continue to invest substantially in 5G as they migrate their customers from 4G LTE to what will ultimately be 5G Standalone. The increase in customer experience and the new capabilities and enablers are significant. By now (i.e., 2023), most Telcos will have a very good idea of the operational expense associated with 5G (if not … you had better do the math). Some will have been exploring investments in their own green power plants (e.g., solar, wind, hydrogen, etc.) to mitigate part of the energy surge arising from the transition to 5G.

I suspect that as Telcos start reflecting on Open RAN as they pivot towards 6G (→ 2030+), above and beyond whatever additional operational-expense pain 6G as a RAT may bring, there will be new energy consumption and sustainability surprises in the cellular part of Telcos’ P&L. In general, breaking up an electronic system into individual (non-integrated) parts, as opposed to integrating it into a single unit, is likely to result in increased power consumption. Some of the operational inefficiencies that occur when breaking up a tightly integrated design can be mitigated by power management strategies, although getting such strategies to work optimally may force a higher degree of supplier uniformity than was the original intent of breaking up the tightly integrated system.

However, only Telcos that consider their mobile and fixed broadband assets together, rather than as two separate silos, will gain in value for customers and shareholders. Fixed-mobile (network) convergence should be taken seriously and may lead to very different considerations and strategies than 10+ years ago.

With increasing fiber coverage, and with Telcos stimulating aggressive uptake, mobile networks can be redesigned for what they were initially supposed to do: provide convenience and service where no fixed network is present, such as when on the move, or in areas where the economics of a fixed broadband network make it least likely to be available (e.g., rural areas). Although for those places, LEO satellites (i.e., here today) and maybe stratospheric drones (i.e., 2030+) may offer solid economic alternatives, interestingly further simplifying the cellular networks supporting such areas today.

TAKE AWAY.

Volume in Bytes (or bits) does not directly relate to the energy consumption of the underlying communications networks that enable the usage.

It is the duration, the time scale, of the customers’ usage (i.e., their use of the network resources) that causes power consumption.

The bit-wise energy efficiency of 5G is superior to that of 4G LTE; it is designed that way via its spectral efficiency. Despite this, a 5G site configuration is likely to consume more energy than a 4G LTE site in the field, as the two are not like-for-like in terms of the number of bands and the type of antennas deployed.

The absolute power consumption of a 5G configuration is a function of the number of bands deployed, the type of antennas deployed, the intelligent energy management features applied, and the effective time over which customers demand 5G resources.

Due to its optical foundation, fiber is far more energy efficient, both in bit-wise relative terms and in absolute terms, than any legacy fixed (e.g., xDSL, HFC) or cellular (e.g., 4G, 5G) broadband technology.

Looking forward and with the increasing challenges of remaining sustainable and contributing to CO2 reduction, it is paramount to consider an energy-optimized fixed and mobile converged network architecture as opposed to today’s approach of optimizing the fixed network separately from the cellular network. As a society, we should expect that the industry works hard to achieve an overall reduction in energy consumption, relaxing the demand on existing green energy infrastructures.

With 5G as of today, we are orders of magnitude away from the original NGMN vision of an energy consumption only half of what cellular networks consumed ten years earlier (i.e., 2014), which would have required an overall energy efficiency increase of ×2000.

Be aware that many Telcos and infrastructure providers will use bit-wise energy efficiency when they report on energy consumption. They will generally report impressive gains over time in the energy their networks consume to deliver bits to their customers. This is the least one should expect.

Last but not least, the telco world is not static, and RAT-wise it is not very clean, as mobile networks will have several RATs deployed simultaneously (e.g., 2G, 4G, and 5G). As such, we rarely (if ever) have apples-to-apples comparisons of cellular energy consumption.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I also greatly appreciate the discussion on this topic that I have had with Rudolf van der Berg over the last couple of years. I thank him for pointing out and reminding me (when I forget) of the shortfalls and poor quality of most of the academic work and lobbying activities done in this area.

PS

If you are aiming at a leapfrog in the absolute energy reduction of your cellular network, above and beyond what you get from your infrastructure suppliers (e.g., Nokia, Ericsson, Huawei, …), I really recommend you take a look at Opanga’s machine-learning-based Joules ML solution. Joules ML has been proven to reduce RAN energy costs by 20–40% on top of what the RAT suppliers’ (e.g., Ericsson, Nokia, Huawei, etc.) own energy management solutions may bring.

Disclosure: I am associated with Opanga and on their Industry Advisory Board.