Can LEO Satellites Close the Gigabit Gap of Europe’s Unconnectables?

Is LEO satellite broadband a cost-effective and capable option for rural areas of Europe? Given that most seem to agree that LEO satellites will not replace mobile broadband networks, it seems only natural to ask whether LEO satellites might help meet the EU Commission’s Digital Decade Policy Programme (DDPP) goal of having all EU households (HH) covered by gigabit connections by 2030, delivered by so-called very high-capacity networks such as gigabit-capable fiber-optic and 5G networks (i.e., focusing only on the digital-infrastructure pillar of the DDPP).

As of 2023, more than €80 billion had been allocated in national broadband strategies and through EU funding instruments, including the Connecting Europe Facility and the Recovery and Resilience Facility. However, based on current deployment trajectories and cost structures, an additional €120 billion or more is expected to be needed to close the remaining connectivity gap for the approximately 15.5 million rural homes without a gigabit option in 2023. This brings the total investment requirement to over €200 billion. The shortfall is most acute in rural and hard-to-reach regions where network deployment is significantly more expensive. In these areas, connecting a single household with high-speed broadband infrastructure, especially via FTTP, can easily exceed €10,000 in public subsidy, given the long distances and low density of premises. A non-EU-based (i.e., US) satellite constellation that could close even part of the gigabit coverage gap would be a very “cheap” alternative for Europe. However, given some of the current geopolitical factors, €200 billion could also enable Europe to establish its own large LEO satellite constellation, provided it can match (or outperform) the unit economics of SpaceX rather than those of its IRIS² satellite program.
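To make the per-household arithmetic explicit, here is a back-of-the-envelope check in Python using the round figures quoted above (the split between allocated and additional funding follows the text; the result is indicative only):

```python
# Back-of-the-envelope check of the per-household subsidy figures quoted above.
# All inputs are the round numbers from the text; the result is indicative only.

allocated_eur = 80e9            # subsidies already allocated (EC estimate, 2023)
additional_eur = 120e9          # additional funding expected to close the gap by 2030
remaining_rural_homes = 15.5e6  # rural homes without a gigabit option in 2023

total_eur = allocated_eur + additional_eur
per_home = total_eur / remaining_rural_homes

print(f"Total investment requirement: €{total_eur / 1e9:.0f} billion")
print(f"Implied average per remaining rural home: €{per_home:,.0f}")
# -> roughly €12,900 per home on average; the hardest-to-reach premises naturally
#    sit well above this average (hence "easily exceed €10,000" in public subsidy).
```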

In this article, my analysis focuses on direct-to-dish low Earth orbit (LEO) satellites with expected capabilities comparable to, or exceeding, those projected for SpaceX’s Starlink V3, which is anticipated to deliver up to 1 Terabit per second of total downlink capacity. For such satellites to represent a credible alternative to terrestrial gigabit connectivity, several thousand would need to be in operation. This would allow overlapping coverage areas, increasing effective throughput to household outdoor dishes across sparsely populated regions. Reaching such a scale may take years, even under optimistic deployment scenarios, highlighting the importance of aligning policy timelines with technological maturity.

GIGABITS IN THE EU – WHERE ARE WE, AND WHERE DO WE THINK WE WILL GO?

  • In 2023, Fibre-to-the-Premises (FTTP) rural HH coverage was ca. 52%. For the EU28, this means that approximately 16 million rural homes lack fiber coverage.
  • By 2030, projected FTTP deployment in the EU28 will result in household coverage reaching almost 85% of all rural homes (under so-called BaU conditions), leaving approximately 5.5 million households without it.
  • Due to inferior economics, it is estimated that approximately 10% to 15% of European households are “unconnectable” by FTTP (although not necessarily by FWA or broadband mobile in general).
  • The EC estimated (in 2023) that over 80 billion euros in subsidies had been allocated in national budgets, with an additional 120 billion euros required to close the gigabit ambition gap by 2030 (i.e., over 10,000 euros per remaining rural household as of 2023).

So, there is a considerable number of so-called “unconnectable” households within the European Union (i.e., EU28). These are, for example, isolated dwellings away from inhabited areas (e.g., settlements, villages, towns, and cities). They often lack the most basic fixed communications infrastructure, although some may have old copper lines or only relatively poor mobile coverage.

The figure below illustrates the actual state of FTTP deployment in rural households in 2023 (orange bars), as well as a rural deployment scenario that extends FTTP deployment to 2030 by taking, for each projected year, the maximum of the previous year’s deployment pace and the average of the last three years’ deployment pace; any coverage level above 80% grows by 1% pa (arbitrarily chosen). The data source is “Digital Decade 2024: Broadband Coverage in Europe 2023” by the European Commission. The FTTP pace has been chosen individually for suburban and rural areas to match the expectations expressed in the report for 2030.
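For transparency, the extrapolation rule can be written down in a few lines of Python. This is a minimal sketch of one reading of that rule; the historical coverage values in the example are placeholders, not the actual country-level data from the EC report:

```python
def project_fttp_coverage(history, last_year=2023, end_year=2030, saturation_growth=0.01):
    """Extend rural FTTP coverage beyond last_year using one reading of the rule in the
    text: each projected year adds the larger of last year's increment and the average
    of the last three increments; once coverage exceeds 80%, it grows by a flat 1% pa
    (arbitrarily chosen). Coverage is capped at 100%."""
    coverage = list(history)                      # coverage fractions, e.g. 0.52 = 52%
    for _ in range(end_year - last_year):
        increments = [b - a for a, b in zip(coverage, coverage[1:])]
        recent = increments[-3:]
        step = max(increments[-1], sum(recent) / len(recent))
        if coverage[-1] > 0.80:
            step = saturation_growth
        coverage.append(min(1.0, coverage[-1] + step))
    return coverage

# Placeholder history for 2020-2023 (illustrative values only, not the EC data):
print(project_fttp_coverage([0.40, 0.44, 0.48, 0.52]))
```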

ARE LEO DIRECT-TO-DISH (D2D) SATELLITES A CREDIBLE ALTERNATIVE FOR THE “UNCONNECTABLES”?

  • For Europe, a non-EU-based (i.e., US-based) satellite constellation could be a very cost-effective alternative to closing the gigabit coverage gap.
  • Megabit connectivity (e.g., up to 100+ Mbps) is already available today with SpaceX Starlink LEO satellites in rural areas with poor broadband alternatives.
  • The SpaceX Starlink V2 satellite can provide approximately 100 Gbps (V1.5 ~ 20+ Gbps), and its V3 is expected to deliver 1,000 Gbps within the satellite’s coverage area, with a maximum coverage radius of over 500 km.
  • The V3 may have 320 beams (or more), each providing approximately ~3 Gbps (i.e., 320 x 3 Gbps is ca. 1 Tbps). With a frequency re-use factor of 40, 25 Gbps can be supplied within a unique coverage area. With “adjacent” satellites (off-nadir), the capacity within a unique coverage area can be enhanced by additional beams that overlap the primary satellite (nadir).
  • With an estimated EU28 “unconnectable” household density of approximately 1.5 per square kilometer, a LEO satellite would cover more than 20,000 such households, with roughly 20 Gbps of capacity available over an area of 15,000 square kilometers.
  • At a peak-hour user concurrency of 15% and a per-user demand of 1 Gbps, the aggregate demand would reach roughly 3 terabits per second (Tbps). This corresponds to an oversubscription ratio of approximately 3:1 against a single 1 Tbps satellite, or contention-free service if three overlapping satellites serve the area (a worked example follows after this list).
  • This assumes a 100% take-up rate of the unconnectable HHs and that each would select a 1 Gbps service (assuming such would be available). In rural areas, the take-up rate may not be significantly higher than 60%, and not all households will require a 1 Gbps service.
  • This also assumes that there are no alternatives to LEO satellite direct-to-dish service, which seems unlikely for at least some of the 20,000 “unconnectable” households. Given the coverage obligations typically attached to 5G spectrum licenses, one might hope for some decent 5G coverage; alas, it is unlikely to be gigabit-class in deep rural and isolated areas.
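The demand-versus-supply arithmetic in the bullets above can be condensed into a short sketch. All inputs are the assumptions stated in the list (1.5 unconnectable households per km², a 15,000 km² area, 15% busy-hour concurrency, 1 Gbps per active user, a 1 Tbps V3-class satellite); none of them are Starlink specifications:

```python
# Demand vs. supply within one coverage area, using the assumptions from the list above.
hh_density_per_km2 = 1.5         # estimated "unconnectable" household density
area_km2 = 15_000                # coverage area considered in the text
concurrency = 0.15               # busy-hour share of households active at once
demand_per_user_gbps = 1.0       # assumed service tier
satellite_capacity_gbps = 1_000  # Starlink V3-class total downlink (per the text)

households = hh_density_per_km2 * area_km2                                # ~22,500 homes
busy_hour_demand_gbps = households * concurrency * demand_per_user_gbps   # ~3,375 Gbps
oversubscription = busy_hour_demand_gbps / satellite_capacity_gbps        # ~3.4:1

print(f"Households in area: {households:,.0f}")
print(f"Busy-hour demand:   {busy_hour_demand_gbps:,.0f} Gbps")
print(f"Oversubscription vs. one 1 Tbps satellite: {oversubscription:.1f}:1")
# Roughly a 3:1 ratio, i.e., one V3-class satellite oversubscribed about 3x, or
# roughly three overlapping satellites for (near) contention-free 1 Gbps service.
```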

For example, consider the Starlink LEO satellite V1.5, which has a total capacity of approximately 25 Gbps, comprising 32 beams that each deliver 800 Mbps (including dual polarization) to a ground-based user dish. It can provide a maximum of 6.4 Gbps over a minimum area of ca. 6,000 km² at nadir, with an Earth-based dish directly beneath the satellite. If the coverage area is situated in a UK-based rural area, for example, we would expect to find, on average, 150,000 rural households, using an average of 25 rural homes per km². If a household demands 100 Mbps at peak, only about 60 households can be online at full load concurrently per area. With 10% concurrency, this implies that we can have a total of 600 subscribing households per area out of 150,000 homes. Thus, 1 in 250 households could be allowed to subscribe to a Starlink V1.5 if the target is 100 Mbps per home and a concurrency factor of 10% within the coverage area. This is equivalent to stating that the oversubscription ratio is 250:1, and it reflects the tension between available satellite capacity and theoretical rural demand density. In rural UK areas, the household density is too high relative to the available capacity to allow universal subscription at 100 Mbps unless more satellites provide overlapping service.

For a V1.5 satellite, we can support four regions (i.e., frequency reuse groups), each with a maximum throughput of 6.4 Gbps. Thus, the satellite can support a total of 2,400 households (i.e., 4 x 600) with a peak demand of 100 Mbps and a concurrency rate of 10%. As other satellites (off-nadir) can supplement the primary satellite, some areas’ demand may be supported by two to three different satellites, providing a multiplier effect that increases the capacity on offer.

The Starlink V2 satellite is reportedly capable of supporting up to a total of 100 Gbps (approximately four times that of V1.5), while the V3 will support up to 1 Tbps, which is 40 times that of V1.5. The number of beams, the number of independent frequency reuse groups, and the spectral efficiency are all expected to improve over V1.5, which will enhance the overall capacity of the newer Starlink satellite generations.
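The same V1.5 example can be written as a small calculation. The inputs are the approximate figures quoted above, and the code keeps the exact arithmetic, whereas the text rounds conservatively (60 concurrent homes, 600 subscribers, 250:1, and 2,400 homes per satellite):

```python
# Worked Starlink V1.5 example from the paragraph above. Inputs are the approximate
# figures quoted in the text, not SpaceX specifications.
beams_total = 32
throughput_per_beam_gbps = 0.8      # ~800 Mbps per beam (incl. dual polarization)
reuse_groups = 4                    # frequency reuse groups per satellite
area_km2 = 6_000                    # minimum coverage area at nadir (approx.)
rural_hh_per_km2 = 25               # average rural household density used in the text
peak_demand_gbps = 0.1              # 100 Mbps per household at peak
concurrency = 0.10                  # 10% of subscribers active at busy hour

capacity_per_area_gbps = (beams_total / reuse_groups) * throughput_per_beam_gbps   # 6.4 Gbps
households_in_area = area_km2 * rural_hh_per_km2                                   # 150,000
concurrent_supported = capacity_per_area_gbps / peak_demand_gbps                   # ~64
subscribers_supported = concurrent_supported / concurrency                         # ~640
oversubscription = households_in_area / subscribers_supported                      # ~230:1
subscribers_per_satellite = subscribers_supported * reuse_groups                   # ~2,560

print(f"Capacity per unique coverage area: {capacity_per_area_gbps:.1f} Gbps")
print(f"Supportable subscribers per area: {subscribers_supported:,.0f} of "
      f"{households_in_area:,.0f} homes (~{oversubscription:.0f}:1 oversubscription)")
print(f"Supportable subscribers per satellite: {subscribers_per_satellite:,.0f}")
```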

By 2030, the EU28 rural areas are expected to achieve nearly 85% FTTP coverage under business-as-usual deployment scenarios. This would leave approximately 5.5 million households, referred to as “unconnectables,” without direct access to high-speed fiber. These households are typically isolated, located in sparsely populated or geographically challenging regions, where fiber deployment becomes prohibitively expensive. Although there may be alternative broadband options, such as FWA, 5G mobile coverage, or copper, it is unlikely that such “unconnectable” homes would sustainably have a gigabit connection.

This may be where LEO satellite constellations enter the picture as a possible alternative to deploying fiber optic cables in uneconomical areas, such as those that are unconnectable. The anticipated capabilities of Starlink’s third-generation (V3) satellites, offering approximately 1 Tbps of total downlink capacity with advanced beamforming and frequency reuse, would make them a viable candidate for servicing low-density rural areas, assuming reasonable traffic models similar to those of an Internet Service Provider (ISP). With modest overlapping coverage from two or three such satellites, these systems could deliver gigabit-class service to tens of thousands of dispersed households without (much) oversubscription, even assuming relatively high concurrency and usage.

Considering this, there seems little doubt that a LEO constellation only slightly more capable than SpaceX’s Starlink V3 satellite could fully support the broadband needs of the remaining unconnected European households expected by 2030. This also aligns well with the technical and economic strengths of LEO satellites: they are ideally suited for delivering high-capacity service to regions where population density is too low to justify terrestrial infrastructure, yet where digital inclusion remains equally essential.

LOW-EARTH ORBIT SATELLITES DIRECT-TO-DISRUPTION.

In my blog “Will LEO Satellite Direct-to-Cell Networks make Terrestrial Networks Obsolete?”, I provided some straightforward reasons why LEO satellites with direct-to-unmodified-smartphone capabilities (e.g., Lynk Global, AST SpaceMobile) would not make existing cellular networks obsolete. They would be of most value in remote or very rural areas with no cellular coverage (as explained very nicely by Lynk Global), offering a connection alternative to satellite phones such as Iridium, and thus being complementary to existing terrestrial cellular networks. Despite the hype, we should not expect a direct disruption to regular terrestrial cellular networks from LEO satellite D2C providers.

Of course, the question could also be asked whether LEO satellites directed to an outdoor (terrestrial) dish could threaten existing fiber optic networks, their business case, and their value proposition. After all, the SpaceX Starlink V3 satellite, not yet operational, is expected to support 1 Terabit per second (Tbps) over a coverage area of several thousand kilometers in diameter. It would no doubt be an amazing technological achievement for SpaceX to deliver a 10x leap in throughput from its present generation V2 (~100 Gbps).

However, while a V3-like satellite may offer an (impressive) total capacity of 1 Tbps, this capacity is not uniformly available across its entire footprint. It is distributed across multiple beams, potentially 256 or more, each with a capacity of approximately 4 Gbps (i.e., 1 Tbps / 256 beams). With a frequency reuse factor of, for example, 5, the effective usable capacity per unique coverage area becomes a fraction of the satellite’s total throughput. This means that within any given beam footprint, the satellite can only support a limited number of concurrent users at high bandwidth levels.

As a result, such a satellite cannot support more than roughly a thousand households with concurrent 1 Gbps demand in any single area (or, alternatively, about 10,000 households with 100 Mbps concurrent demand). This level of support would be equivalent to a small FTTP (sub)network serving no more than 20,000 households at a 50% uptake rate (i.e., 10,000 connected homes) and assuming a concurrency of 10%. A deployment of this scale would typically be confined to a localized, dense urban or peri-urban area, rather than the vast rural regions that LEO systems are expected to serve.

In contrast, a single Starlink V3-like satellite would cover a vast region, capable of supporting similar or greater numbers of users, including those in remote, low-density areas that FTTP cannot economically reach. The satellite solution described here is thus not designed to densify urban broadband, but rather to reach rural, remote, and low-density areas where laying fiber is logistically or economically impractical. Therefore, such satellites and conventional large-scale fiber networks are not in direct competition: satellites cannot match fiber’s density, scale, or cost-efficiency in high-demand areas. Instead, they complement fiber infrastructure by providing connectivity where fiber does not reach, reinforcing the case for hybrid infrastructure strategies in which fiber serves the dense core and LEO satellites extend the digital frontier.

However, terrestrial providers must closely monitor their FTTP deployment economics and refrain from extending too far into deep rural areas below a certain household density, a threshold that is likely to increase over time as satellite capabilities improve. The premise of this blog is that capable LEO satellites by 2030 could serve unconnected households that are unlikely to have any commercial viability for terrestrial fiber and have no other gigabit coverage option. Within the EU28, this represents approximately 5.5 million remote households. A Starlink V3-like 1 Tbps satellite could provide a gigabit service (occasionally) to those households and certainly hundreds of megabits per second per isolated household. Moreover, it is likely that over time, more capable satellites will be launched, with SpaceX being the most likely candidate for such an endeavor if it maintains its current pace of innovation. Such satellites will likely become increasingly interesting for household densities above 2 households per square kilometer. However, where an FTTP network has already been deployed, it seems unlikely that satellite broadband would render the terrestrial infrastructure obsolete, as long as the fiber service is priced competitively against the satellite broadband offering.

LEO satellite direct-to-dish (D2D) based broadband networks may be a credible and economical alternative to deploying fiber in low-density rural households. The density boundary of viable substitution for a fiber connection with a gigabit satellite D2D connection may shift inward (from deep rural, low-density household areas). This reinforces the case for hybrid infrastructure strategies, in which fiber serves the denser regions and LEO satellites extend the digital frontier to remote and rural areas.

THE USUAL SUSPECT – THE PUN INTENDED.

By 2030, SpaceX’s Starlink will operate one of the world’s most extensive low Earth orbit (LEO) satellite constellations. As of early 2025, the company has launched more than 6,000 satellites into orbit; however, most of these, including V1, V1.5, and V2, are expected to cease operation by 2030. Industry estimates suggest that Starlink could have between 15,000 and 20,000 operational satellites by the end of the decade, which I anticipate to be mainly V3 and possibly a share of V4. This projection depends largely on successfully scaling SpaceX’s Starship launch vehicle, which is designed to deploy on the order of 60 next-generation V3 satellites per mission at the current cadence. However, it is essential to note that while SpaceX has filed applications with the International Telecommunication Union (ITU) and obtained FCC authorization for up to 12,000 satellites, the frequently cited figure of 42,000 satellites includes additional satellites that are currently proposed but not yet fully authorized.

The figure above, based on an idea of John Strand of Strand Consult, provides an illustrative comparison of the rapid innovation and manufacturing cycles of SpaceX LEO satellites versus the slower progression of traditional satellite development and spectrum policy processes, highlighting the growing gap between technological advancement and regulatory adaptation. This is one of the biggest challenges that regulatory institutions and policy regimes face today.

Amazon’s Project Kuiper has a much smaller planned constellation. The Federal Communications Commission (FCC) has authorized Amazon to deploy 3,236 satellites under its initial phase, with a deadline requiring that at least 1,600 be launched and operational by July 2026. Amazon began launching test satellites in 2024 and aims to roll out its service in late 2025 or early 2026. On April 28, 2025, Amazon launched its first 27 operational satellites for Project Kuiper aboard a United Launch Alliance (ULA) Atlas V rocket from Cape Canaveral, Florida, marking the beginning of the deployment of its planned 3,236-satellite constellation aimed at providing global broadband internet coverage. Though Amazon has hinted at potential expansion beyond its authorized count, any Phase 2 remains speculative and unapproved. If such an expansion were pursued and granted, the constellation could eventually grow to 6,000 satellites, although no formal filings have yet been made to support the higher amount.

China is rapidly advancing its low Earth orbit (LEO) satellite capabilities, positioning itself as a formidable competitor to SpaceX’s Starlink by 2030. Two major Chinese LEO satellite programs are at the forefront of this effort: the Guowang (ca. 13,000 satellites) and Qianfan (ca. 15,000 satellites) constellations. So, by 2030, it is reasonable to expect that China will field a national LEO satellite broadband system with thousands of operational satellites, focused not just on domestic coverage but also on extending strategic connectivity to Belt and Road Initiative (BRI) countries, as well as regions in Africa, Asia, and South America. Unlike SpaceX’s commercially driven approach, China’s system is likely to be closely integrated with state objectives, combining broadband access with surveillance, positioning, and secure communication functionality. While it remains unclear whether China will match SpaceX’s pace of deployment or technological performance by 2030, its LEO ambitions are unequivocally driven by geopolitical considerations. They will likely shape European spectrum policy and infrastructure resilience planning in the years ahead.

Guowang and Qianfan are emblematic of China’s dual-use strategy, which involves developing technologies for both civilian and military applications. This approach is part of China’s broader Military-Civil Fusion policy, which seeks to integrate civilian technological advancements into military capabilities. The dual-use nature of these satellite constellations raises concerns about their potential military applications, including surveillance and communication support for the People’s Liberation Army.

AN ILLUSTRATION OF COVERAGE – UNITED KINGDOM.

It takes approximately 172 Starlink beams to cover the United Kingdom, with 8 to 10 satellites overhead simultaneously. Persistent UK coverage therefore requires a constellation on the order of 150 satellites across appropriate orbits. Starlink’s 53° inclination orbital shell is optimized for mid-latitude regions, providing frequent satellite passes and dense, overlapping beam coverage over areas like southern England and central Europe. This results in higher throughput and more consistent connectivity with fewer satellites. In contrast, regions north of 53°N, such as northern England and Scotland, lie outside this optimal zone and depend on higher-inclination shells (70° and 97.6°), which have fewer satellites and wider, less efficient beams. As a result, coverage in these northern areas is less dense, with lower signal quality and increased latency.

For this blog, I developed a Python script, with fewer than 600 lines of code (It’s a physicist’s code, so unlikely to be super efficient), to simulate and analyze Starlink’s satellite coverage and throughput over the United Kingdom using real orbital data. By integrating satellite propagation, beam modeling, and geographic visualization, it enables a detailed assessment of regional performance from current Starlink deployments across multiple orbital shells. Its primary purpose is to assess how the currently deployed Starlink constellation performs over UK territory by modeling where satellites pass, how their beams are steered, and how often any given area receives coverage. The simulation draws live TLE (Two-Line Element) data from Celestrak, a well-established source for satellite orbital elements. Using the Skyfield library, the code propagates the positions of active Starlink satellites over a 72-hour period, sampling every 5 minutes to track their subpoints across the United Kingdom. There is no limitation on the duration or sampling time. Choosing a more extended simulation period, such as 72 hours, provides a more statistically robust and temporally representative view of satellite coverage by averaging out orbital phasing artifacts and short-term gaps. It ensures that all satellites complete multiple orbits, allowing for more uniform sampling of ground tracks and beam coverage, especially from shells with lower satellite densities, such as the 70° and 97.6° inclinations. This results in smoother, more realistic estimates of average signal density and throughput across the entire region.
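The core of that propagation step can be sketched in a handful of lines. This is a simplified, excerpt-style illustration rather than my full script: it assumes the Celestrak Starlink TLE group URL, uses a crude UK bounding box, and only counts satellite subpoints per shell instead of modeling beams:

```python
import math
from collections import Counter
from skyfield.api import load, wgs84

# Pull current Starlink TLEs from Celestrak (the group URL is assumed; adjust if it moves).
TLE_URL = "https://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle"
satellites = load.tle_file(TLE_URL)

ts = load.timescale()
# 72 hours sampled every 5 minutes, matching the simulation window described above.
times = ts.utc(2025, 1, 1, 0, range(0, 72 * 60, 5))

UK = dict(lat_min=49.8, lat_max=60.9, lon_min=-8.7, lon_max=1.8)  # crude bounding box

def shell_of(sat):
    """Classify a satellite into one of the three orbital shells by inclination."""
    inclination_deg = math.degrees(sat.model.inclo)
    if inclination_deg < 60:
        return "53 deg"
    return "70 deg" if inclination_deg < 80 else "97.6 deg"

samples_over_uk = Counter()
for sat in satellites:
    subpoint = wgs84.subpoint(sat.at(times))        # geodetic subpoints for all samples
    lat = subpoint.latitude.degrees
    lon = subpoint.longitude.degrees
    over_uk = ((lat >= UK["lat_min"]) & (lat <= UK["lat_max"]) &
               (lon >= UK["lon_min"]) & (lon <= UK["lon_max"]))
    samples_over_uk[shell_of(sat)] += int(over_uk.sum())

print(samples_over_uk)   # 5-minute samples with a subpoint over the UK, per shell
```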

Each satellite is classified into one of three orbital shells based on inclination angle: 53°, 70°, and 97.6°. These shells are simulated separately and collectively to understand their individual and combined contributions to UK coverage. The 53° shell dominates service in the southern part of the UK, characterized by its tight orbital band and high satellite density (see the Table below). The 70° shell supplements coverage in northern regions, while the 97.6° polar shell offers sparse but critical high-latitude support, particularly in Scotland and surrounding waters. The simulation assumes several (critical) parameters for each satellite type, including the number of beams per satellite, the average beam radius, and the estimated throughput per beam. These assumptions reflect engineering estimates and publicly available Starlink performance information, but are deliberately simplified to produce regional-level coverage and throughput estimates, rather than user-specific predictions. The simulation does not account for actual user terminal distribution, congestion, or inter-satellite link (ISL) performance, focusing instead on geographic signal and capacity potential.

These parameters were used to infer beam footprints and assign realistic signal density and throughput values across the UK landmass. The satellite type was inferred from its shell (e.g., most 53° shell satellites are currently V1.5), and beam properties were adjusted accordingly.

The table above presents the core beam modeling parameters and satellite-specific assumptions used in the Starlink simulation over the United Kingdom. It includes general values for beam steering behavior, such as Gaussian spread, steering limits, city-targeting probabilities, and beam spacing constraints, as well as performance characteristics tied to specific satellite generations to the extent these are known (e.g., Starlink V1.5, V2 Mini, and V2 Full). These assumptions govern the placement of beams on the Earth’s surface and the capacity each beam can deliver. For instance, the City Exclusion Radius of 0.25 degrees corresponds to a ~25 km buffer around urban centers, where beam placement is probabilistically discouraged. Similarly, the beam radius and throughput per beam values align with known design specifications submitted by SpaceX to the U.S. Federal Communications Commission (FCC), particularly for Starlink’s V1.5 and V2 satellites. The table also defines overlap rules, specifying the maximum number of beams that can overlap in a region and the maximum number of satellites that can contribute beams to a given point. This helps ensure that simulations reflect realistic network constraints rather than theoretical maxima.
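For illustration, the assumptions in the table can be captured as a simple configuration structure in the simulation. The values below marked as placeholders are illustrative only and not SpaceX specifications; the V1.5 beam count and per-beam throughput follow the figures used earlier in this article:

```python
# Illustrative beam-model configuration in the spirit of the table above.
# Values marked "placeholder" are illustrative assumptions, not SpaceX specifications.
BEAM_MODEL = {
    "steering": {
        "max_off_nadir_deg": 45.0,           # placeholder steering limit from the subpoint
        "gaussian_spread_deg": 1.5,          # placeholder spread when sampling beam centers
        "city_exclusion_radius_deg": 0.25,   # ~25 km buffer where beams are discouraged
        "city_target_probability": 0.1,      # placeholder chance of targeting a city anyway
        "min_beam_spacing_deg": 0.2,         # placeholder minimum spacing between beam centers
    },
    "overlap": {
        "max_beams_per_point": 3,            # placeholder cap on overlapping beams per point
        "max_satellites_per_point": 2,       # placeholder cap on contributing satellites
    },
    "satellite_types": {
        # V2 Mini / V2 Full entries follow the same pattern, with values from the table.
        "V1.5": {"beams": 32, "gbps_per_beam": 0.8, "beam_radius_km": 25.0},  # radius: placeholder
    },
    # Shell -> assumed dominant satellite generation (used to pick beam properties).
    "shell_to_type": {"53": "V1.5", "70": "V1.5", "97.6": "V1.5"},
}
```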

Overall, the developed code offers a geographically and physically grounded simulation of how the existing Starlink network performs over the UK. It helps explain observed disparities in coverage and throughput by visualizing the contribution of each shell and satellite generation. This modeling approach enables planners and researchers to quantify satellite coverage performance at national and regional scales, providing insight into current service levels and a basis for analyzing future constellation evolution, which is not discussed here.

The figure illustrates a 72-hour time-averaged Starlink coverage density over the UK. The asymmetric signal strength pattern reflects the orbital geometry of Starlink’s 53° inclination shell, which concentrates satellite coverage over southern and central England. Northern areas receive less frequent coverage due to fewer satellite passes and reduced beam density at higher latitudes.

The image above presents the Starlink Average Coverage Density over the United Kingdom, resulting from a 72-hour simulation using real satellite orbital data from Celestrak. It illustrates the mean signal exposure across the UK, where color intensity reflects the frequency and density of satellite beam illumination at each location.

At the center of the image, a bright yellow core indicating the highest signal strength is clearly visible over the English Midlands, covering cities such as Birmingham, Leicester, and Bristol. The signal strength gradually declines outward in a concentric pattern—from orange to purple—as one moves northward into Scotland, west toward Northern Ireland, or eastward along the English coast. While southern cities, such as London, Southampton, and Plymouth, fall within high-coverage zones, northern cities, including Glasgow and Edinburgh, lie in significantly weaker regions. The decline in signal intensity is especially apparent beyond the 56°N latitude. This pattern is entirely consistent with what we know about the structure of the Starlink satellite constellation. The dominant contributor to coverage in this region is the 53° inclination shell, which contains 3,848 satellites spread across 36 orbital planes. This shell is designed to provide dense, continuous coverage to heavily populated mid-latitude regions, such as the southern United Kingdom, continental Europe, and the continental United States. However, its orbital geometry restricts it to a latitudinal range that ends near 53 to 54°N. As a result, southern and central England benefit from frequent satellite passes and tightly packed overlapping beams, while the northern parts of the UK do not. Particularly, Scotland lies at or beyond the shell’s effective coverage boundary.

The simulation may indicate how Starlink’s design prioritizes population density and market reach. Northern England receives only partial benefit, while Scotland and Northern Ireland fall almost entirely outside the core coverage of the 53° shell. Although some coverage in these areas is provided by higher-inclination shells (specifically, the 70° shell with 420 satellites and the 97.6° polar shell with 227 satellites), these are sparser in both the number of satellites and the number of orbital planes. Their beams may also be broader and thus less focused, resulting in lower average signal strength in high-latitude regions.

So, why is the coverage not made up of textbook-like hexagonal cells with uniform coverage across the UK? The simple answer is that real-world satellite constellations don’t behave like the static, idealized diagrams of hexagonal beam tiling often used in textbooks or promotional materials. What you’re seeing in the image is a time-averaged simulation of Starlink’s actual coverage over the UK, reflecting the dynamic and complex nature of low Earth orbit (LEO) systems like Starlink’s. Unlike geostationary satellites, LEO satellites orbit the Earth roughly every 90 minutes and move rapidly across the sky. Each satellite only covers a specific area for a short period before passing out of view over the horizon. This movement causes beam coverage to constantly shift, meaning that any given spot on the ground is covered by different satellites at different times. While individual satellites may emit beams arranged in a roughly hexagonal pattern, these patterns move, rotate, and deform continuously as the satellite passes overhead. The beams also vary in shape and strength depending on their angle relative to the Earth’s surface, becoming elongated and weaker when projected off-nadir, i.e., when the satellite is not directly overhead.

Another key reason lies in the structure of Starlink’s orbital configuration. Most of the UK’s coverage comes from satellites in the 53° inclination shell, which is optimized for mid-latitude regions. As a result, southern England receives significantly denser and more frequent coverage than Scotland or Northern Ireland, which are closer to or beyond the edge of this shell’s optimal zone. Satellites serving higher latitudes originate from less densely populated orbital shells at 70° and 97.6°, which result in fewer passes and wider, less efficient beams.

The above heatmap does not illustrate a snapshot of beam locations at a specific time, but rather an averaged representation of how often each part of the UK was covered over a simulation period. This type of averaging smooths out the moment-to-moment beam structure, revealing broader patterns of coverage density instead. That’s why we see a soft gradient from intense yellow in the Midlands, where overlapping beams pass more frequently, to deep purple in northern regions, where passes are less common and less centered.
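Conceptually, the time-averaging behind the heatmap is nothing more than accumulating beam “hits” on a ground grid, sample by sample, and dividing by the number of time samples. A minimal sketch of that accumulation step is shown below; the grid resolution and the toy beam list are placeholders:

```python
import numpy as np

# Grid over a UK bounding box; 0.05 deg resolution is an arbitrary illustrative choice.
lats = np.arange(49.8, 60.9, 0.05)
lons = np.arange(-8.7, 1.8, 0.05)
lon_grid, lat_grid = np.meshgrid(lons, lats)

def accumulate_beams(coverage, beams_at_t):
    """Add one time sample to the coverage grid.
    Each beam is a (lat, lon, radius_km) tuple for its footprint center."""
    for beam_lat, beam_lon, radius_km in beams_at_t:
        # Small-angle approximation: convert angular offsets to kilometers.
        dlat_km = (lat_grid - beam_lat) * 111.0
        dlon_km = (lon_grid - beam_lon) * 111.0 * np.cos(np.radians(beam_lat))
        coverage += (dlat_km ** 2 + dlon_km ** 2) <= radius_km ** 2
    return coverage

coverage = np.zeros_like(lat_grid)
# In the full simulation this runs over every 5-minute sample in the 72-hour window:
coverage = accumulate_beams(coverage, [(52.5, -1.9, 25.0), (51.5, -0.1, 25.0)])  # toy sample
average_coverage = coverage / 1  # divide by the number of time samples actually used
```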

The figure illustrates an idealized hexagonal beam coverage footprint over the UK. For visual clarity, only a subset of hexagons is shown filled with signal intensity (yellow core to purple edge), to illustrate a textbook-like uniform tiling. In reality, satellite beams from LEO constellations, such as Starlink, are dynamic, moving, and often non-uniform due to orbital motion, beam steering, and geographic coverage constraints.

The two charts below provide a visual confirmation of the spatial coverage dynamics behind the Starlink signal strength distribution over the United Kingdom. Both are based on a 72-hour simulation using real Starlink satellite data obtained from Celestrak, and they accurately reflect the operational beam footprints and orbital tracks of currently active satellites over the United Kingdom.

This figure illustrates time-averaged Starlink coverage density over the UK with beam footprints (left) and satellite ground tracks (right) by orbital shell. The high density of beams and tracks from the 53° shell over southern UK leads to stronger and more consistent coverage. At the same time, northern regions receive fewer, more widely spaced passes from higher-inclination shells (70° and 97.6°), resulting in lower aggregate signal strength.

The first chart displays the beam footprints (i.e., the left side chart above) of Starlink satellites across the UK, color-coded by orbital shell: cyan for the 53° shell, green for the 70° shell, and magenta for the 97° polar shell. The concentration of cyan beam circles in southern and central England vividly demonstrates the dominance of the 53° shell in this region. These beams are tightly packed and frequent, explaining the high signal coverage in the earlier signal strength heatmap. In contrast, northern England and Scotland are primarily served by green and magenta beams, which are more sparse and cover larger areas — a clear indication of the lower beam density from the higher-inclination shells.

The second chart illustrates the satellite ground tracks (i.e., the right side chart above) over the same period and geographic area. Again, the saturation of cyan lines in the southern UK underscores the intensive pass frequency of satellites in the 53° inclined shell. As one moves north of approximately 53°N, these tracks vanish almost entirely, and only the green (70° shell) and magenta (97° shell) paths remain. These higher inclination tracks cross through Scotland and Northern Ireland, but with less spatial and temporal density, which supports the observed decline in average signal strength in those areas.

Together, these two charts provide spatial and orbital validation of the signal strength results. They confirm that the stronger signal levels seen in southern England stem directly from the concentrated beam targeting and denser satellite presence of the 53° shell. Meanwhile, the higher-latitude regions rely on less saturated shells, resulting in lower signal availability and throughput. This outcome is not theoretical — it reflects the live state of the Starlink constellation today.

The figure illustrates the estimated average Starlink throughput across the United Kingdom over a 72-hour window. Throughput is highest over southern and central England due to dense satellite traffic from the 53° orbital shell, which provides overlapping beam coverage and short revisit times. Northern regions experience reduced throughput from sparser satellite passes and less concentrated beam coverage.

The above chart shows the estimated average throughput of Starlink Direct-2-Dish across the United Kingdom, simulated over 72 hours using real orbital data from Celestrak. The values are expressed in Megabits per second (Mbps) and are presented as a heatmap, where higher throughput regions are shown in yellow and green, and lower values fade into blue and purple. The simulation incorporates actual satellite positions and coverage behavior from the three operational inclination shells currently providing Starlink service to the UK. Consistent with the signal strength, beam footprint density, and orbital track density, the best quality and the most supplied capacity are available south of approximately 53°N latitude.

The strongest throughput is concentrated in a horizontal band stretching from Birmingham through London to the southeast, as well as westward into Bristol and south Wales. In this region, the estimated average throughput peaks at over 3,000 Mbps, which can support more than 30 concurrent customers each demanding 100 Mbps within the coverage area, or up to 600 households at an oversubscription ratio of 20:1. This aligns closely with the signal strength and beam density maps also generated in this simulation and is driven by the dense satellite traffic of the 53° inclination shell. These satellites pass frequently over southern and central England, where their beams overlap tightly and revisit times are short. The availability of multiple beams from different satellites at nearly all times drives up the aggregate throughput experienced at ground level. Throughput falls off sharply beyond approximately 54°N. In Scotland and Northern Ireland, values typically stay well below 1,000 Mbps. This reduction directly reflects the sparser presence of higher-latitude satellites from the 70° and 97.6° shells, which are fewer in number and more widely spaced, resulting in lower revisit frequencies and broader, less concentrated beams. The throughput map thus offers a performance-level confirmation of the underlying orbital dynamics and coverage limitations seen in the satellite and beam footprint charts.

While the above map estimates throughput in realistic terms, it is essential to understand why it does not reflect the theoretical maximum performance implied by Starlink’s physical layer capabilities. For example, a Starlink V1.5 satellite supports eight user downlink channels, each with 250 MHz of bandwidth, which in theory amounts to a total of 2 GHz of spectrum. Similarly, if one assumes 24 beams, each capable of delivering 800 Mbps, that would suggest a satellite capacity in the range of approximately 19–20 Gbps. However, these peak figures assume an ideal case with full spectrum reuse and optimized traffic shaping. In practice, the estimated average throughput shown here is the result of modeling real beam overlap and steering constraints, satellite pass timing, ground coverage limits, and the fact that not all beams are always active or directed toward the same location. Moreover, local beam capacity is shared among users and dynamically managed by the constellation. Therefore, the chart reflects a realistic, time-weighted throughput for a given geographic location, not a per-satellite or per-user maximum. It captures the outcome of many beams intermittently contributing to service across 72 hours, modulated by orbital density and beam placement strategy, rather than theoretical peak link rates.

A valuable next step in advancing the simulation model would be the integration of empirical user experience data across the UK footprint. If datasets such as comprehensive Ookla performance measurements (e.g., Starlink-specific download and upload speeds, latency, and jitter) were available with sufficient geographic granularity, the current Python model could be calibrated and validated against real-world conditions. Such data would enable the adjustment of beam throughput assumptions, satellite visibility estimates, and regional weighting factors to better reflect the actual service quality experienced by users. This would enhance the model’s predictive power, not only in representing average signal and throughput coverage, but also in identifying potential bottlenecks, underserved areas, or mismatches between orbital density and demand.

It is also important to note that this work relies on a set of simplified heuristics for beam steering, which are designed to make the simulation both tractable and transparent. In this model, beams are steered within a fixed angular distance from each satellite’s subpoint, with probabilistic biases against cities and simple exclusion zones (i.e., I operate with an exclusion radius of approximately 25 km or more). However, in reality, Starlink’s beam steering logic is expected to be substantially more advanced, employing dynamic optimization algorithms that account for real-time demand, user terminal locations, traffic load balancing, and satellite-satellite coordination via laser interlinks. Starlink has the crucial (and obvious) operational advantage of knowing exactly where its customers are, allowing it to direct capacity where it is needed most, avoid congestion (to an extent), and dynamically adapt coverage strategies. This level of real-time awareness and adaptive control is not replicated in this analysis, which assumes no knowledge of actual user distribution and treats all geographic areas equally.
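To make the heuristic concrete, the sketch below shows the kind of simple steering rule described above (a fixed steering range around the subpoint, a probabilistic bias away from cities, and a minimum beam spacing). The city list and numeric values are placeholders, and Starlink’s real scheduler is, as noted, far more sophisticated:

```python
import math
import random

# Illustrative city list (approximate coordinates for London, Birmingham, Manchester).
CITIES = [(51.51, -0.13), (52.48, -1.90), (53.48, -2.24)]
CITY_EXCLUSION_DEG = 0.25     # ~25 km buffer, as described in the text
CITY_TARGET_PROB = 0.1        # placeholder probability of deliberately serving a city anyway
MAX_STEER_DEG = 2.0           # placeholder steering range around the subpoint
MIN_SPACING_DEG = 0.2         # placeholder minimum spacing between beam centers

def near_city(lat, lon):
    # Isotropic degree distance is a crude simplification, fine for a sketch.
    return any(math.hypot(lat - clat, lon - clon) < CITY_EXCLUSION_DEG
               for clat, clon in CITIES)

def place_beams(sub_lat, sub_lon, n_beams, rng=random):
    """Place beam centers around a satellite subpoint using the simple heuristic."""
    beams = []
    attempts = 0
    while len(beams) < n_beams and attempts < 50 * n_beams:
        attempts += 1
        lat = sub_lat + rng.gauss(0, MAX_STEER_DEG / 2)
        lon = sub_lon + rng.gauss(0, MAX_STEER_DEG / 2)
        if near_city(lat, lon) and rng.random() > CITY_TARGET_PROB:
            continue                                  # probabilistically avoid cities
        if any(math.hypot(lat - b[0], lon - b[1]) < MIN_SPACING_DEG for b in beams):
            continue                                  # enforce minimum beam spacing
        beams.append((lat, lon))
    return beams

print(place_beams(52.5, -1.5, n_beams=8))
```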

As such, the current Python code provides a first-order geographic approximation of Starlink coverage and capacity potential, not a reflection of the full complexity and intelligence of SpaceX’s actual network management. Nonetheless, it offers a valuable structural framework that, if calibrated with empirical data, could evolve into a much more powerful tool for performance prediction and service planning.

Median Starlink download speeds in the United Kingdom, as reported by Ookla, from Q4 2022 to Q4 2024, indicate a general decline through 2023 and early 2024, followed by a slight recovery in late 2024. Source: Ookla.com.

The decline in real-world median user speeds, observed in the chart above, particularly from Q4 2023 to Q3 2024, may reflect increasing congestion and uneven coverage relative to demand, especially in areas outside the dense beam zones of the 53° inclination shell. This trend supports the simulation’s findings: while orbital geometry enables strong average coverage in the southern UK, northern regions rely on less frequent satellite passes from higher-inclination shells, which limits performance. The recovery of the median speed in Q4 2024 could be indicative of new satellite deployments (e.g., more V2 Minis or V2 Fulls) beginning to ease capacity constraints, something future simulation extensions could model by incorporating launch timelines and constellation updates.

The figure illustrates a European-based dual-use Low Earth Orbit (LEO) satellite constellation providing broadband connectivity to Europe’s millions of unconnectables by 2030 on a secure and strategic infrastructure platform covering Europe, North Africa, and the Middle East.

THE 200 BILLION EUROS QUESTION – IS THERE A PATH TO EUROPEAN SPACE INDEPENDENCE?

Let’s start with the answer! Yes!

Is €200 billion, the estimated amount required to close the EU28 gigabit gap between 2023 and 2030, enough to enable Europe to build its own LEO satellite constellation, and potentially one that is more secure, inclusive, and strategically aligned with its values and geopolitical objectives? In comparison, the European Union’s IRIS² (Infrastructure for Resilience, Interconnectivity and Security by Satellite) program has been allocated a total budget of 10+ billion euros, aimed at building 264 LEO satellites (at 1,200 km altitude) and 18 MEO satellites (at 8,000 km altitude), mainly by the European “Primes” (i.e., the usual “suspects” of legacy defense contractors), by 2030. For that amount, we should even be able to afford our own dedicated European stratospheric drone program for real-world use cases, as opposed to, for example, Airbus’s (AALTO) Zephyr fragile platform, which, in my opinion, is more an impressive showcase of an innovative, sustainable (solar-driven) aerial platform than a practical, robust, high-performance communications platform.

A significant portion of this budget should be dedicated to designing, manufacturing, and launching a European-based satellite constellation. If Europe could match the satellite cost price of SpaceX, and not that of IRIS² (which appears to be largely based on legacy satellite platform thinking, or at least its unit price tag suggests so), it could launch a very substantial number of EU-based LEO satellites within 200 billion euros (and obviously also for a lot less). Such a fleet would easily match the numbers in SpaceX’s long-term plans and would vastly surpass the satellites authorized under Kuiper’s first phase. To support such a constellation, Europe must invest heavily in launch infrastructure. While Ariane 6 remains in development, the budget could be leveraged to scale up the Ariane program or to develop a reusable European launch system, mirroring and improving upon the capabilities of SpaceX’s Starship. This would reduce long-term launch costs, boost autonomy, and ensure deployment scalability over the decade. Equally essential would be establishing a robust ground segment covering the deployment of a European-wide ground station network, edge nodes, optical interconnects, and satellite laser communication capabilities.

Unlike Starlink, which benefits from SpaceX’s vertical integration, and Kuiper, which is backed by Amazon’s capital and logistics empire, a European initiative would rely heavily on strong multinational coordination. With 200 billion euros, possibly less if the usual suspects (i.e., “Primes”) are managed accordingly, Europe could close the technology gap rapidly, secure digital sovereignty, and ensure that it is not dependent on foreign providers for critical broadband infrastructure, particularly for rural areas, government services, and defense.

Could this be done by 2030? Doubtful, unless Europe can match SpaceX’s impressive pace of innovation. That means at least matching the roughly 3 years (2015–2018) it took SpaceX to achieve a routinely reusable Falcon 9 first stage and the 4 years (2015–2019) it took to go from concept to the first operational V1 satellite launch. Elon has shown it is possible.

KEY TAKEAWAYS.

LEO satellite direct-to-dish broadband, when strategically deployed in underserved and hard-to-reach areas, should be seen not as a competitor to terrestrial networks but as a strategic complement. It provides a practical, scalable, and cost-effective means to close the final connectivity gap, one that terrestrial networks alone are unlikely to bridge economically. In sparsely populated rural zones, where fiber deployment becomes prohibitively expensive, LEO satellites may render new rollouts obsolete. In these cases, satellite broadband is not just an alternative. It may be essential. Moreover, it can also serve as a resilient backup in areas where rural fiber is already deployed, especially in regions lacking physical network redundancy. Rather than undermining terrestrial infrastructure, LEO extends its reach, reinforcing the case for hybrid connectivity models central to achieving EU-wide digital reach by 2030.

Instead of continuing to subsidize costly last-mile fiber in uneconomical areas, European policy should reallocate a portion of this funding toward the development of a sovereign European Low-Earth Orbit (LEO) satellite constellation. A mere 200 billion euros, or even less, would go a very long way in securing such a program. Such an investment would not only connect the remaining “unconnectables” more efficiently but also strengthen Europe’s digital sovereignty, infrastructure resilience, and strategic autonomy. A European LEO system should support dual-use applications, serving both civilian broadband access and the European defense architecture, thereby enhancing secure communications, redundancy, and situational awareness in remote regions. In a hybrid connectivity model, satellite broadband plays a dual role: as a primary solution in hard-to-reach zones and as a high-availability backup where terrestrial access exists, reinforcing a layered, future-proof infrastructure aligned with the EU’s 2030 Digital Decade objectives and evolving security imperatives.

Non-European dependence poses strategic trade-offs: The rise of LEO broadband providers such as SpaceX and China’s state-aligned Guowang and Qianfan underscores Europe’s limited indigenous capacity in the low Earth orbit (LEO) space. While non-EU options may offer faster and cheaper rural connectivity, reliance on foreign infrastructure raises concerns about sovereignty, data governance, and security, especially amid growing geopolitical tensions.

LEO satellites, especially those similar to or more capable than Starlink V3, can technically support the connectivity needs of Europe’s “unconnectable” (rural) households in 2030. Due to geography or economic constraints, these homes are unlikely to be reached by FTTP even under the most ambitious business-as-usual scenarios. A constellation of high-capacity satellites could serve these households with gigabit-class connections, especially when factoring in user concurrency and reasonable uptake rates.

The economics of FTTP deployment sharply deteriorate in very low-density rural regions, reinforcing the need for alternative technologies. By 2030, up to 5.5 million EU28 households are projected to remain beyond the economic viability of FTTP, down from 15.5 million rural homes in 2023. The European Commission has estimated that closing the gigabit gap from 2023 to 2030 requires around €200 billion. LEO satellite broadband may be a more cost-effective alternative, particularly with direct-to-dish architecture, at least for the share of unconnectable homes.

While LEO satellite networks offer transformative potential for deep rural coverage, they do not pose a threat to existing FTTP deployments. A Starlink V3 satellite, despite its 1 Tbps capacity, can serve the equivalent of a small fiber network, about 1,000 homes at 1 Gbps under full concurrency, or roughly 20,000 homes with 50% uptake and 10% busy-hour concurrency. FTTP remains significantly more efficient and scalable in denser areas. Satellites are not designed to compete with fiber in urban or suburban regions, but rather to complement it in places where fiber is uneconomical or otherwise unviable.

The technical attributes of LEO satellites make them ideally suited for sparse, low-density environments. Their broad coverage area and increasingly sophisticated beamforming and frequency reuse capabilities allow them to efficiently serve isolated dwellings, often spread across tens of thousands of square kilometers, where trenching fiber would be infeasible. These technologies extend the digital frontier rather than replace terrestrial infrastructure. Even with SpaceX’s innovative pace, it seems unlikely that this conclusion will change substantially within the next five years, at the very least.

A European LEO constellation could be feasible within a €200 billion budget: The €200 billion gap identified for full gigabit coverage could, in theory, fund a sovereign European LEO system capable of servicing the “unconnectables.” If Europe adopts leaner, vertically integrated innovation models like SpaceX’s (and avoids legacy procurement inefficiencies), such a constellation could deliver comparable technical performance while bolstering strategic autonomy.

The future of broadband infrastructure in Europe lies in a hybrid strategy. Fiber and mobile networks should continue to serve densely populated areas, while LEO satellites, potentially supplemented by fixed wireless and 5G, offer a viable path to universal coverage. By 2030, a satellite constellation only slightly more capable than Starlink V3 could deliver broadband to virtually all of Europe’s remaining unconnected homes, without undermining the business case for large-scale FTTP networks already in place.

CAUTIONARY NOTE.

While current assessments suggest that a LEO satellite constellation with capabilities on par with or slightly exceeding those anticipated for Starlink V3 could viably serve Europe’s remaining unconnected households by 2030, it is important to acknowledge the speculative nature of these projections. The assumptions are based on publicly available data and technical disclosures. Still, it is challenging to have complete visibility into the precise specifications, performance benchmarks, or deployment strategies of SpaceX’s Starlink satellites, particularly the V3 generation, or, for that matter, Amazon’s Project Kuiper constellation. Much of what is known comes from regulatory filings (e.g., FCC), industry reports and blogs, Reddit, and similar platforms, as well as inferred capabilities. Therefore, while the conclusions drawn here are grounded in credible estimates and modeling, they should be viewed with caution until more comprehensive and independently validated performance data become available.

THE SATELLITE’S SPECS – MOST IS KEPT A “SECRET”, BUT THERE IS SOME LIGHT.

Satellite capacity is not determined by a single metric, but instead emerges from a tightly coupled set of design parameters. Variables such as spectral efficiency, channel bandwidth, polarization, beam count, and reuse factor are interdependent. Knowing a few of them allows us to estimate, bound, or verify others. This is especially valuable when analyzing or validating constellation design, performance targets, or regulatory filings.

For example, consider a satellite that uses 250 MHz channels with 2 polarizations and a spectral efficiency of 5.0 bps/Hz. These inputs directly imply a channel capacity of 1.25 Gbps and a beam capacity of 2.5 Gbps. If the satellite is intended to deliver 100 Gbps of total throughput, as disclosed in related FCC filings, one can immediately deduce that 40 beams are required. If, instead, the satellite’s reuse architecture defines 8 x 250 MHz channels per reuse group with a reuse factor of 5, and each reuse group spans a fixed coverage area, both the theoretical and practical throughput within that area can be computed, further enabling the estimation of the total number of beams, the required spectrum, and the likely user experience. These dependencies mean that if the number of user channels, the full bandwidth, the channel bandwidth, the number of beams, or the frequency reuse factor is known, it becomes possible to estimate or cross-validate the others. This helps identify design consistency or highlight unrealistic assumptions.
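The worked example above translates directly into code: given a few of the parameters, the others follow. All values are the illustrative ones used in the text, not disclosed specifications:

```python
# Interdependence of capacity parameters (illustrative values taken from the text).
channel_bw_hz = 250e6            # 250 MHz channel
polarizations = 2
spectral_eff_bps_hz = 5.0        # assumed spectral efficiency

channel_capacity_bps = channel_bw_hz * spectral_eff_bps_hz           # 1.25 Gbps
beam_capacity_bps = channel_capacity_bps * polarizations             # 2.50 Gbps

target_total_bps = 100e9         # e.g., a ~100 Gbps V2-class satellite
beams_required = target_total_bps / beam_capacity_bps                # 40 beams

channels_per_reuse_group = 8
reuse_factor = 5
group_capacity_bps = channels_per_reuse_group * beam_capacity_bps    # 20 Gbps per reuse-group area
total_across_groups_bps = group_capacity_bps * reuse_factor          # 100 Gbps across 5 groups

print(f"Channel capacity: {channel_capacity_bps / 1e9:.2f} Gbps")
print(f"Beam capacity (2 polarizations): {beam_capacity_bps / 1e9:.2f} Gbps")
print(f"Beams required for {target_total_bps / 1e9:.0f} Gbps: {beams_required:.0f}")
print(f"Capacity per reuse-group coverage area: {group_capacity_bps / 1e9:.0f} Gbps")
print(f"Total across {reuse_factor} reuse groups: {total_across_groups_bps / 1e9:.0f} Gbps")
```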

In satellite systems like Starlink, the total available spectrum is limited. This is typically divided into discrete channels, for example, eight 250 MHz channels (as is the case for Starlink’s Ku-band downlink to the user’s terrestrial dish). A key architectural advantage of spot-beam satellites (e.g., with spots that are at least 50 to 80 km wide) is that frequency channels can be reused in multiple spatially separated beams, as long as the beams do not interfere with one another. This is not based on a fixed reuse factor, as seen in terrestrial cellular systems, but on beam isolation, achieved through careful beam shaping, angular separation, and sidelobe control (as also implemented in the above Python code for UK Starlink satellite coverage, albeit in much simpler ways). For instance, one beam covering southern England can use the same frequency channels as another beam covering northern Scotland, because their energy patterns do not overlap significantly at ground level. In a constellation like Starlink’s, where hundreds or even thousands of beams are formed across a satellite footprint, frequency reuse is achieved through simultaneous but non-overlapping spatial beam coverage. The reuse logic is handled dynamically on board or through ground-based scheduling, based on real-time traffic load and beam geometry.

This means that for a given satellite, the total instantaneous throughput is not only a function of spectral efficiency and bandwidth per beam, but also of the number of beams that can simultaneously operate on overlapping frequencies without causing harmful interference. If a satellite has access to 2 GHz of bandwidth and 250 MHz channels, then up to 8 distinct channels can be formed. These channels can be replicated across different beams, allowing many more than 8 beams to be active concurrently, each using one of those 8 channels, as long as they are separated enough in space. This approach allows operators to scale capacity dramatically through dense spatial reuse, rather than relying solely on expanding spectrum allocations. The ability to reuse channels across beams depends on antenna performance, beamwidth, power control, and orbital geometry, rather than a fixed reuse pattern. The same set of channels is reused across non-interfering coverage zones enabled by directional spot beams. Satellite beams can be “stacked on top of each other” up to the number of available channels, or they can be allocated optimally across a coverage area determined by user demand.
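As a toy illustration of reuse-by-isolation rather than a fixed reuse pattern, the greedy assignment below gives each beam the lowest-numbered of eight channels that no nearby beam is already using. The separation threshold and beam geometry are arbitrary choices; the real system optimizes this dynamically with far more inputs:

```python
import math

N_CHANNELS = 8                  # e.g., eight 250 MHz Ku-band downlink channels
MIN_SEPARATION_KM = 100.0       # arbitrary isolation distance between co-channel beams

def assign_channels(beam_centers_km):
    """Greedy channel assignment: a channel is reused only between well-separated beams."""
    assignments = []
    for x, y in beam_centers_km:
        used_nearby = {ch for (x2, y2), ch in zip(beam_centers_km, assignments)
                       if math.hypot(x - x2, y - y2) < MIN_SEPARATION_KM}
        free = [ch for ch in range(N_CHANNELS) if ch not in used_nearby]
        assignments.append(free[0] if free else None)  # None = beam stays dark (no free channel)
    return assignments

# Beams laid out on a simple 5x5 grid, 60 km apart (illustrative geometry only).
beams = [(60.0 * i, 60.0 * j) for i in range(5) for j in range(5)]
print(assign_channels(beams))
```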

Detailed specifications of commercial satellites, whether in operation or in the planning phase, are usually not publicly disclosed. However, companies are required to submit technical filings to the U.S. Federal Communications Commission (FCC). These filings include orbital parameters, frequency bands in use, EIRP, and antenna gain contours, as well as estimated capabilities of the satellite and user terminals. The FCC’s approval of SpaceX’s Gen2 constellation, for instance, outlines many of these values and provides a foundation upon which informed estimates of system behavior and performance can be made. The filings are not exhaustive and may omit sensitive performance data, but they serve as authoritative references for bounding what is technically feasible or likely in deployment.

ACKNOWLEDGEMENT.

I would like to acknowledge my wife, Eva Varadi, for her unwavering support, patience, and understanding throughout the creative process of writing this article.

FURTHER READINGS.

Kim K. Larsen, “Will LEO Satellite Direct-to-Cellular Networks Make Traditional Mobile Networks Obsolete?”, A John Strand Consult Report, (January 2025). This has also been published in full on my own Techneconomy blog.

Kim K. Larsen, “The Next Frontier: LEO Satellites for Internet Services.” Techneconomyblog (March 2024).

Kim K. Larsen, “Stratospheric Drones & Low Earth Satellites: Revolutionizing Terrestrial Rural Broadband from the Skies?” Techneconomyblog (January 2024).

Kim K. Larsen, “A Single Network Future”, Techneconomyblog (March 2024).

NOTE: My “Satellite Coverage Concept Model,” which I have applied to Starlink Direct-2-Dish coverage and Services in the United Kingdom, is not limited to the UK alone but can be straightforwardly generalized to other countries and areas.

Submarine Cable Sensing for Strategic Infrastructure Defense and Arctic Deployment.

A diver approaches a sensing fiber-optic submarine cable beneath the icy waters of the North Atlantic, as a rusting cargo ship floats above and a submarine lurks nearby. The cable’s radiant rings symbolize advanced sensing capabilities, detecting acoustic, seismic, and movement signals. Yet, its exposure also reveals the vulnerability of subsea infrastructure to tampering, espionage, and sabotage, especially in geopolitically tense regions like the Arctic.

WHY WE NEED VISIBILITY INTO SUBMARINE CABLE ACTIVITY.

We can’t protect what we can’t measure. Today, we are mostly blind when it comes to our global submarine communications networks. We cannot state with absolute certainty whether critical parts of this infrastructure have already been compromised by capable hostile state actors, ready to press the button at an opportune time. If the global submarine cable network were to break down, so would the world order as we know it. Submarine cables form the “invisible” backbone of the global digital infrastructure, yet they remain highly vulnerable. Over 95% of intercontinental internet and data traffic traverses subsea cables (corresponding to roughly 25% of total worldwide internet traffic), but these critical assets lie largely unguarded on the ocean floor, exposed to environmental events, shipping activities, and, increasingly, geopolitical interference.

In 2024 and early 2025, multiple high-profile incidents involving submarine cable damage have occurred, highlighting the fragility of undersea communication infrastructure in an increasingly unstable geopolitical environment. Several disruptions affected strategic submarine cable routes, raising concerns about sabotage, poor seamanship, and hybrid threats, particularly in sensitive maritime corridors (e.g., Baltic Sea, Taiwan Strait, Red Sea, etc.).

As also discussed in my recent article (“What lies beneath”), one of the most prominent cases of subsea cable cuts occurred in November 2024 in the Baltic Sea, where two critical submarine cables, the East-West Interlink between Lithuania and Sweden and the C-Lion1 cable between Finland and Germany, were damaged in close temporal and spatial proximity. The Chinese cargo vessel Yi Peng 3 was identified as having been in the vicinity during both incidents. During a Chinese-led probe, investigators from Sweden, Germany, Finland, and Denmark boarded the ship in early December. By March 2025, European officials expressed growing confidence that the breaks were accidental rather than acts of sabotage. In December 2024, also in the Baltic Sea, the Estlink 2 submarine power cable and two telecommunications cables operated by Elisa were ruptured. The suspected culprit was the Eagle S, an oil tanker believed to be part of Russia’s “shadow fleet”, a group of poorly maintained vessels that emerged after Russia’s invasion of Ukraine to circumvent sanctions and transport goods covertly. These vessels are frequently operated by opportunists with little maritime training or seamanship, posing a growing risk to maritime-based infrastructure.

These recent incidents further emphasize the need for proactive monitoring or sensing tools applied to the submarine cable infrastructure. Today, more than 100 subsea cable outages are logged each year globally. Most are attributed to natural or unintentional human-related causes, including poor seamanship and poorly maintained vessels. Moreover, authorities have noted that, since Russia’s full-scale invasion of Ukraine in 2022, the use of a “ghost fleet” of vessels, often in barely seaworthy condition and operated by underqualified or loosely regulated crews, has grown substantially in scope. These ships, which also appear to be used for hybrid operations or covert missions, operate under minimal oversight, raising the risk of both deliberate interference and catastrophic negligence.

As detailed in my article “What lies beneath”, several particular cable-break signatures may be “fingerprints” of hybrid or hostile interference. These may include simultaneous localized cuts, unnaturally uniform damage profiles, and activity in geostrategic cable chokepoints, traits that appear atypical of commercial maritime incidents. One notable pattern is the lack of conventional warning signals, e.g., no seismic precursors, no known trawling vessels in the area, and rapid phase discontinuities captured in the coherent signal traces of the few sensing-equipped submarine cables we have. Equally concerning is the geopolitical context. The Baltic Sea is a critical artery connecting Northern Europe’s cloud infrastructure. Taiwan’s subsea cables are vital to the global chip supply chain and financial systems. Disrupting these routes can create outsized geopolitical pressure while allowing the hostile actor to maintain plausible deniability.

Modern sensing technologies now offer a pathway to detect and characterize such disturbances. Research by Mazur et al. (OFC 2024) has demonstrated real-time anomaly detection across transatlantic submarine cable systems. Their methodology could spot small mechanical vibrations and sudden cable stresses that precede an optical cable failure. Such sensing systems can be retrofitted onto existing landing stations, enabling authorities or cable operators to issue early alerts for potential sabotage or environmental threats.

Furthermore, continuous monitoring allows real-time threat classification, differentiating between earthquake-triggered phase drift and artificial localized cuts. Combined with AI-enhanced analytics and (near) real-time AIS (Automatic Identification System) information, these sensing systems can serve as a digital tripwire along the seabed, transforming our ability to monitor and defend strategic infrastructure.

Without these capabilities, the subsea cable infrastructure landscape remains an operational blind spot, susceptible to exploitation in the next phase of global competition or geopolitical conflict. As threats evolve and hybrid tactics and actions increase, visibility into what lies beneath is not merely advantageous but essential.

Illustration of a so-called Russian “ghost” vessel (e.g., a bulk carrier) dragging its stern anchor through a subsea optical communications cable. “Ghost vessel” is an informal term for a Russian vessel operating covertly or suspiciously, often without broadcasting its identity or location via the Automatic Identification System (AIS), the global maritime safety protocol that civilian ships must use.

ISLANDS AT RISK: THE FRAGILE NETWORK BENEATH THE WAVES.

Submarine fiber-optic cables form the “invisible” backbone of global connectivity, silently transmitting over 95% of international data traffic beneath the world’s oceans (note: intercontinental data traffic represents ~25% of the worldwide data traffic). These subsea cables are essential for everyday internet access, cloud services, financial transactions (i.e., over 10 billion euros daily), critical infrastructure operations, emergency response coordination, and national security. Despite their importance, they are physically fragile, vulnerable to natural disruptions such as undersea earthquakes, volcanic activity, and ice movement, as well as to human causes like accidental trawling, ship anchor drags, and even deliberate sabotage. A single cut to a key cable can isolate entire regions or nations from the global network, disrupt trade and governance, and slow or sever international communication for days or weeks.

This fragility becomes even more acute when viewed through the lens of island nations and territories. The figure below presents a comparative snapshot of various islands across the globe, illustrating the number of international subsea cable connections each has (in blue bars), overlaid with the population size in millions (in orange). The disparity is striking: densely populated islands such as Taiwan, Sri Lanka, or Madagascar often rely on only a few cables, while smaller territories like Saint Helena or Gotland may have just a single connection to the rest of the world. These islands inherently depend on subsea infrastructure for access to digital services, economic stability, and international communication, yet many remain poorly connected or dangerously exposed to single points of failure. Some of these islands may be less important from a global security, geopolitical, or defense perspective. For their inhabitants, of course, that will matter little, while some islands are of critical importance to a safe and secure world order.

The chart below underscores a critical truth. Island connectivity is not just a matter of bandwidth or speed but a matter of resilience. For many of the world’s islands, a break in the cable doesn’t just slow the internet; it severs the lifeline. Every additional cable significantly reduces systemic risk. For example, going from two to three cables can cut expected unavailability by more than 60–80%, and moving from three to four cables supports near-continuous availability, which is now required for modern economies and national security.

The bar chart shows the number of subsea cable connections, while the orange line represents each island’s population (plotted on a log-scale), highlighting disparities between connectivity and population density.

Reducing systemic risk means lowering the chance that a single point of failure, or a small set of failures, can cause a complete system breakdown. In the context of subsea cable infrastructure, systemic risk refers to the vulnerability that arises when a country’s or island’s entire digital connectivity relies on just one or two physical links to the outside world. With only two international submarine cables connecting a given island in parallel, the implicit acceptance is up to ~13 minutes of total service loss per year (note: for a single cable, that would be ~2 days per year). This should be compared to the time it may take to get a submarine cable repaired and operational again after a cut, which may take weeks or even months, depending on the circumstances and location. Adding a third submarine cable (parallel to the other two) reduces the maximum expected total loss of service to ~4 seconds per year. The likelihood that all three would be compromised by naturally occurring incidents is very small (on the order of one in ten million). Relying on only two submarine cables for an island’s entire international connectivity, at bandwidth-critical scale, is a high-stakes gamble. While dual-cable redundancy may offer sufficient availability on paper, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access. This represents both a technical fragility and a substantial security liability for an island economy and a digitally reliant society.
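The downtime figures above follow from simple parallel-redundancy arithmetic. The short Python sketch below reproduces their order of magnitude under two explicit assumptions: cable failures are independent, and a single cable’s unavailability corresponds to roughly two days of outage per year (as stated above). Correlated failures and multi-week repair times would, of course, make the real-world picture worse.

```python
# Minimal sketch of the parallel-redundancy arithmetic behind the figures above.
# Assumptions (both illustrative): independent cable failures, and a per-cable
# unavailability equivalent to roughly two days of outage per year.

MIN_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(n_cables, per_cable_unavailability=2 / 365):
    """Expected total-service-loss time when all n parallel cables are down at once."""
    all_down_probability = per_cable_unavailability ** n_cables
    return all_down_probability * MIN_PER_YEAR

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        m = downtime_minutes_per_year(n)
        print(f"{n} cable(s): ~{m:,.4f} min/year (~{m * 60:,.1f} s/year)")
```

Running this gives roughly two days for one cable, on the order of a quarter of an hour for two, a few seconds for three, and effectively continuous availability for four, which is the intuition behind the redundancy argument above.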

Suppose one cable is accidentally or deliberately damaged, with little or no redundancy. In that case, the entire system can collapse, cutting off internet access, disrupting communication, and halting financial and governmental operations. Reducing systemic risk involves increasing resilience through redundancy, ensuring the overall system continues functioning even if one or more cables fail. This also means not relying on only one type of connectivity, e.g., subsea cables or satellite. Combinations of different kinds of connectivity are incredibly important to safeguard an island’s continuous connectivity to the outside world, even if the alternative or backup connectivity does not match the capacity of the primary means of connectivity. Moreover, islands with relatively low populations tend to rely on one central terrestrial switching hub (typically at the main population center), without much redundancy or meshed connectivity, exposing all communication on the island if such a hub becomes compromised.

Submarine cables are increasingly recognized as strategic targets in a hybrid warfare or full-scale military conflict scenario. Deliberate severance of these cables, particularly in chokepoints, near shore landing zones (i.e., landing stations), or cable branching points, can be a high-impact, low-visibility tactic to cripple communications without overt military action.

Going from two to three (or three to four) subsea cables may offer some strategic buffer. If an attacker compromises one or even two links, the third can preserve some level of connectivity, allowing essential communications, coordination, and early warning systems to remain operational. This may reduce the impact window for disruption and provide authorities time to respond or re-route traffic. However, it is unlikely to make a substantial difference in a conflict scenario, where a capable hostile actor may easily compromise a relatively low number of submarine cable connections. Moreover, if the terrestrial network is exposed to a single point of failure via a central switching hub design, having multiple subsea connections may matter very little in a crisis situation.

And, think about it, there is no absolute guarantee that the world’s critical subsea infrastructure has not already been compromised by hostile actors. In fact, given the strategic importance of submarine cables and the increasing sophistication of state and non-state actors in hybrid warfare, it appears entirely plausible that certain physical and cyber vulnerabilities have already been identified, mapped, or even covertly exploited.

In short, the absence of evidence is not evidence of absence. While major nations and alliances like NATO have increased efforts to monitor and secure subsea infrastructure, the sheer scale and opacity of the undersea environment mean that strategic surprise is still possible (maybe even likely). It is also worth remembering that, both historically and today, most submarine cables operate “in the dark”. We rely on their redundancy and robustness, but we largely lack the sensory systems that would allow us to proactively defend or observe them in real time.

This is what makes submarine cable sensing technologies such a strategic frontier today and why resilience, through redundancy, sensing technologies, and international cooperation, is critical. We may not be able to prevent every act of sabotage, but we can reduce the risk of catastrophic failure and improve our ability to detect and respond in real time.

THE LIKELY SUSPECTS – THE CAPABLE HOSTILE ACTOR SEEN FROM A WESTERN PERSPECTIVE.

As observed in the Western context, Russia and China are considered the most capable hostile actors in submarine cable sabotage. China is reportedly advancing its ability to conduct such operations at scale. These developments underscore the growing need for technological defenses and multilateral coordination to safeguard global digital infrastructure.

Several state actors possess the capability and potential intent to compromise or destroy submarine communications networks. Among them, Russia is perhaps the most openly scrutinized. Its specialized naval platforms, such as the Yantar-class intelligence ships and deep-diving submersibles like the AS-12 “Losharik”, can access cables on the ocean floor for tapping or cutting purposes. Western military officials have repeatedly raised concerns about Russia’s activities near undersea infrastructure. For example, NATO has warned of increased Russian naval activity near transatlantic cable routes, viewing this as a serious security risk impacting nearly a billion people across North America and Western Europe.

China is also widely regarded as a capable actor in this domain. The People’s Liberation Army Navy (PLAN) and a vast network of state-linked maritime engineering firms possess sophisticated underwater drones, survey vessels, and cable-laying ships. These assets allow for potential cable mapping, interception, or sabotage operations. Chinese maritime activity around strategic chokepoints such as the South China Sea has raised suspicions of dual-use missions under the guise of oceanographic research.

Furthermore, credible reports and analyses suggest that China is developing methods and technologies that could allow it to compromise subsea cable networks at scale. This includes experimental systems enabling simultaneous disruption or surveillance of multiple cables. According to Newsweek, recent Chinese patents may indicate that China has explored ways to “cut or manipulate undersea cables” as part of its broader strategy for information dominance.

Other states, such as North Korea and Iran, may not possess full deep-sea capabilities but remain threats to regional segments, particularly shallow water cables and landing stations. With its history of asymmetric tactics, North Korea could plausibly disrupt cable links to South Korea or Japan. Meanwhile, Iran may threaten Persian Gulf routes, especially during heightened conflict.

While non-state actors are not typically capable of attacking deep-sea infrastructure directly, they could be used by state proxies or engage in sabotage at cable landing sites. These actors may exploit the relative physical vulnerability of cable infrastructure near shorelines or in countries with less robust monitoring systems.

Finally, it is not unthinkable that NATO countries possess the technical means and operational experience to compromise submarine cables if required. However, their actions are typically constrained by strategic deterrence, international law, and alliance norms. In contrast, Russia and China are perceived as more likely to use these capabilities to project coercive power or achieve geopolitical disruption under a veil of plausible deniability.

WE CAN’T PROTECT WHAT WE CAN’T MEASURE – WHAT IS THE SENSE OF SENSING SUBMARINE CABLES?

In the context of submarine fiber-optic cable connections, it should be clear that we cannot protect this critical infrastructure if we are blind to the environment around it and along the cables themselves.

While traditionally designed for high-capacity telecommunications, submarine optical cables are increasingly recognized as dual-use assets, serving civil and defense purposes. When enhanced with distributed sensing technologies, these cables can act as persistent monitoring platforms, capable of detecting physical disturbances along the cable routes in (near) real time.

From a defense perspective, sensing-enabled subsea cables offer a discreet, infrastructure-integrated solution for maritime situational awareness. Technologies such as Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing can detect anomalies like trawling activity, anchor dragging, undersea vehicle movement, or cable tampering, especially in coastal zones or strategic chokepoints like the GIUK gap or Arctic straits. When paired with AI-driven classification algorithms, these systems can provide early-warning alerts for hybrid threats, such as sabotage or unregistered diver activity near sensitive installations.

For critical infrastructure protection, these technologies play an essential role in real-time monitoring of cable integrity. They can detect:

  • Gradual mechanical strain due to shifting seabed or ocean currents,
  • Seismic disturbances that may precede physical breaks,
  • Ice loading or iceberg impact events in polar regions.

These sensing systems also enable faster fault localization. While they are not likely to prevent a cable from being compromised, whether by accidental impact or deliberate sabotage, they dramatically reduce the time required to identify the problem’s location. In traditional submarine cable operations, pinpointing a break can take days, especially in deep or remote waters. With distributed sensing, operators can localize disturbances within meters along thousands of kilometers of cable, enabling faster dispatch of repair vessels, route reconfiguration, and traffic rerouting.
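For readers who want a feel for the localization arithmetic, the following minimal Python sketch shows the basic time-of-flight relation used by reflectometric techniques: a disturbance observed a round-trip delay Δt after the probe pulse is launched sits at roughly z = (c/n)·Δt/2. The group index is an assumed typical value for standard single-mode fiber, not a parameter of any specific cable system.

```python
# Minimal sketch of the time-of-flight arithmetic behind fast fault localization:
# a reflection or disturbance seen delta_t after the probe pulse is launched sits
# at z = (c/n) * delta_t / 2. Group index is an assumed typical value.

C = 299_792_458.0   # speed of light in vacuum (m/s)
N_GROUP = 1.468     # assumed group index of standard single-mode fibre

def event_distance_km(round_trip_delay_s):
    return (C / N_GROUP) * round_trip_delay_s / 2.0 / 1e3

if __name__ == "__main__":
    for delay_ms in (0.1, 0.5, 1.0):
        print(f"Round-trip delay {delay_ms:.1f} ms -> event at ~{event_distance_km(delay_ms * 1e-3):.0f} km")
```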

Moreover, sensing technologies that operate passively or without interrupting telecom traffic, such as SOP sensing or C-OFDR, are particularly well suited for retrofitting onto existing brownfield infrastructure or deployment on dual-use commercial-defense systems. They offer persistent, covert surveillance without consuming bandwidth or disrupting service, an advantage for national security stakeholders seeking scalable, non-invasive monitoring solutions. As such, they are emerging as a critical layer in the defense of underwater communications infrastructure and the broader maritime domain.

We should remember that no matter how advanced our monitoring systems are, they are unlikely to prevent submarine cables from being compromised by natural events like earthquakes and icebergs or unintentional and deliberate human activity such as trawling, anchor strikes, or sabotage. However, the sensing technologies offer the ability to detect and localize problems faster, enabling quicker response and mitigation.

TECHNOLOGY OVERVIEW: SUBMARINE CABLE SENSING.

Modern optical fiber sensing leverages the cable’s natural backscatter phenomena, such as Rayleigh, Brillouin, and Raman effects, to extract environmental data from a subsea communications cable. The physics of these effects is briefly described at the end of this article.

In the following, I will provide a comparative outline of the major sensing technologies in use today or that may be deployed in future greenfield submarine fiber deployments. Each method has trade-offs in spatial or temporal resolution, compatibility with existing infrastructure, cost, and robustness to background noise. We will focus on defense applications in general, applied to Arctic coastal environments such as those around Greenland. The relevance of each optical cable sensing technology described below to maritime defense will be summarized.

Some of the most promising sensing technologies today are based on the principles of Rayleigh scattering. For most sensing techniques, Rayleigh scattering is crucial in transforming standard optical cables into powerful sensor arrays without necessarily changing the physical cable structure. This makes it particularly valuable for submarine cable applications in the Arctic and strategic defense settings. By analyzing the light that bounces back from within the fiber, these systems can enable (near) real-time monitoring of intrusions or seismic activity over vast distances, spanning thousands of kilometers. Importantly, promising techniques leverage Rayleigh scattering to function effectively even on legacy cable infrastructure, where installing additional reflectors would be impractical or uneconomical. Since Rayleigh-based sensing can be performed passively and non-invasively, it does not interfere with active data traffic, making it ideal for dual-use cables for communication and surveillance purposes. This approach offers a uniquely scalable and resilient way to enhance situational awareness and infrastructure defense in harsh or remote environments like the Arctic.

Before we get started on the various relevant sensing technologies, let us briefly discuss what we mean by a sensing technology’s performance and its sensing capability, that is, how well it can detect, localize, and classify physical disturbances, such as vibration, strain, acoustic pressure, or changes in light polarization, along a fiber-optic cable. Performance is typically judged by parameters like spatial resolution, detection range, sensitivity, signal-to-noise ratio, and the system’s ability to operate in noisy or variable environments. In the context of submarine detection, these disturbances are often caused by acoustic signals generated by vessel propulsion, machinery noise, or pressure waves from movement through the water. While the fiber does not measure sound pressure directly, it can detect the mechanical effects of those acoustic waves, such as tiny vibrations or refractive index changes in the surrounding seabed or cable sheath. The technologies we deploy have to be able to detect these vibrations as phase shifts in backscattered light. In contrast, other technologies may track subtle polarization changes induced by environmental stress on the subsea optical cables (as a result of an event in the proximity of the cable). A sensing system is considered effective when it can capture and resolve these indirect signatures of underwater activity with enough fidelity to enable actionable interpretation, especially in complex environments like coastal Arctic zones or the deep ocean.

In underwater acoustics, sound is measured in units of decibels relative to 1 micro Pascal, expressed as “dB re 1 µPa”, which defines a standard reference pressure level. The notation “dB re 1 µPa @ 1 m” refers to the sound pressure level of an underwater source, expressed in decibels relative to 1 micro Pascal and measured at a standard distance of one meter from the source. This metric quantifies how loud an object, such as a submarine, diver, or vessel, sounds when observed at close range, and is essential for modeling how sound propagates underwater and estimating detection ranges. In contrast, noise floor measurements use “dB re 1 µPa/√Hz,” which describes the distribution of background acoustic energy across frequencies, normalized per unit bandwidth. While source level describes how powerful a sound is at its origin, noise floor values indicate how easily such a sound could be detected in a given underwater environment.

Measurements are often normalized to bandwidth to assess sound or noise frequency characteristics, using “dB re 1 µPa/√Hz”. For example, stating a noise level of 90 dB re 1 µPa/√Hz in the 10 to 1000 Hz band means that within that frequency range, the acoustic energy is distributed at an average pressure level referenced per square root of Hertz. This normalization allows fair comparison of signals or noise across different sensing bandwidths. It helps determine whether a signal, such as a submarine’s acoustic signature, can be detected above the background noise floor. The effectiveness of a sensing technology is ultimately judged by whether it can resolve these types of signals with sufficient clarity and reliability for the specific use case.

In the mid-latitude Atlantic Ocean, typical noise floor levels range between 85 and 105 dB re 1 µPa/√Hz in the 10 to 1000 Hz frequency band. This environment is shaped by intense shipping traffic, consistent wave action, wind-generated surface noise, and biological sources such as whales. The noise levels are generally higher near busy shipping lanes and during storms, which raises the acoustic background and makes it more challenging to detect subtle events such as diver activity or low-signature submersibles (e.g., ballistic missile submarine, SSBN). In such settings, sensing techniques must operate with high signal-to-noise ratio thresholds, often requiring filtering or focusing on specific narrow frequency bands and enhanced by machine learning applications.

On the other hand, the Arctic coastal environment, such as the waters surrounding Greenland, is markedly quieter than, for example, the Atlantic Ocean. Here, the noise floor typically falls between 70 and 95 dB re 1 µPa/√Hz, and in winter, when sea ice covers the surface, it can drop even lower to around 60 dB. In these conditions, noise sources are limited to occasional vessel traffic, wind-driven surface activity, and natural phenomena such as glacial calving or ice cracking. The seasonal nature of Arctic noise patterns means that the acoustic environment is especially quiet and stable during winter, creating ideal conditions for detecting faint mechanical disturbances. This quiet background significantly improves the detectability of low-amplitude events, including the movement of stealth submarines, diver-based tampering, or UUV (i.e., unmanned underwater vehicles) activity.
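Putting the preceding numbers together, the Python sketch below converts a noise spectral density (dB re 1 µPa/√Hz) into a band level and applies the standard passive sonar form SNR = SL − TL − NL with simple spherical spreading. The assumed source level (a quiet ~120 dB re 1 µPa @ 1 m target), the 1 km range, the 10–1000 Hz band, and the three example noise densities are illustrative values chosen to sit within the ranges quoted above; the negative per-sensor SNR values are realistic for quiet targets and are exactly why narrowband filtering, processing gain, and machine-learning classification are needed.

```python
# Minimal sketch of the noise-floor bookkeeping described above, using the
# standard passive sonar equation with spherical spreading. All numerical
# values are illustrative assumptions, not measured figures.

import math

def band_noise_level(spectral_density_db, bandwidth_hz):
    """Convert dB re 1 uPa/sqrt(Hz) into a band level over the given bandwidth."""
    return spectral_density_db + 10 * math.log10(bandwidth_hz)

def received_snr(source_level_db, range_m, noise_spectral_db, bandwidth_hz,
                 spreading_exponent=20):
    """SNR = SL - TL - NL, with TL modeled as spreading_exponent * log10(range)."""
    transmission_loss = spreading_exponent * math.log10(max(range_m, 1.0))
    noise_level = band_noise_level(noise_spectral_db, bandwidth_hz)
    return source_level_db - transmission_loss - noise_level

if __name__ == "__main__":
    # Assumed quiet source (~120 dB re 1 uPa @ 1 m), observed at 1 km in a 10-1000 Hz band.
    for env, nsd in (("Atlantic (busy)", 95), ("Arctic coastal", 80), ("Arctic winter ice cover", 60)):
        snr = received_snr(120, range_m=1000, noise_spectral_db=nsd, bandwidth_hz=990)
        print(f"{env:>24}: per-sensor SNR at 1 km ~ {snr:.1f} dB")
```

The ~35 dB swing between the busy Atlantic and an ice-covered Arctic winter is the quantitative reason why the quiet Arctic background is so attractive for detecting faint events.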

Distributed Acoustic Sensing (DAS) uses phase-sensitive optical time-domain reflectometry (φ-OTDR) to detect acoustic vibrations and dynamic strain in general. Dynamic strain may arise from seismic waves or mechanical impacts along an optical fiber path. DAS allows for structural monitoring at a resolution of ca. 10 meters over typical amplified distances of 10 to 100 kilometers (which can be extended with additional amplifiers). It is an active sensor technology. DAS can be installed on shorter submarine cables (e.g., less than 100 km), although installing it on a brownfield subsea cable is relatively complex. For long submarine cables (e.g., transatlantic), DAS would be deployed greenfield in conjunction with the subsea cable rollout, as retrofitting an existing fiber installation would be impractical.

Phase-sensitive optical time domain reflectometry is a sensing technique that allows an optical fiber, like those used in subsea cables, to act like a long string of virtual microphones or vibration sensors. The method works by sending short pulses of laser light into the fiber and measuring the tiny reflections that bounce back due to natural imperfections inside the glass. When there is no activity near the cable, the backscattered light has a stable pattern. But when something happens near the cable, like a ship dragging an anchor, seismic shaking, or underwater movement, those vibrations cause tiny changes in the fiber’s shape. This physically stretches or compresses the fiber, changing the phase of the light traveling through it. φ-OTDR is specially designed to be sensitive to these phase changes. What is being detected, then, is not a “sound” per se, but a tiny change in the timing (phase) of the light as it reflects back. These phase shifts happen because mechanical energy from the outside world, like movement, stress, or pressure, slightly changes the length of the fiber or its refractive properties at specific points. φ-OTDR is ideal for detecting vibrations, like footsteps (yes, the technique also works on terra firma), vehicle movement, or anchor dragging. It is best suited for acoustic sensing over relatively long distances with moderate resolution.

So, in simple terms:

  • The “event” is not inside the fiber but in sufficient vicinity to cause a reaction in the fiber.
  • That external event causes micro-bending or stretching of the fiber.
  • The fiber cable’s mechanical deformation changes the phase of light that is then detected.
  • The sensing system uses these changes to pinpoint where along the fiber the event happened, often with meter-scale precision.
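To make the mechanism tangible, here is a toy numerical illustration in Python (not a model of any real interrogator): it compares the backscatter phase pattern between two successive probe pulses and flags the fiber section where a localized perturbation has shifted the phase. The fiber length, gauge length, measurement noise, event position, and the injected 0.8 rad phase shift are all illustrative assumptions.

```python
# Toy illustration of the phi-OTDR idea: compare backscatter phase traces between
# pulses and look for a localized differential-phase change where an external
# disturbance strains the fibre. All parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

fiber_km = 80.0
gauge_m = 10.0                                   # assumed spatial resolution
n_bins = int(fiber_km * 1000 / gauge_m)

# Static backscatter phase pattern from the fibre's frozen-in imperfections.
baseline_phase = rng.uniform(-np.pi, np.pi, n_bins)

# Second interrogation pulse: small measurement noise everywhere, plus a localized
# perturbation (e.g., anchor drag) at ~52.3 km stretching the fibre slightly.
perturbed_phase = baseline_phase + rng.normal(0.0, 0.02, n_bins)
event_bin = int(52.3 * 1000 / gauge_m)
perturbed_phase[event_bin:event_bin + 3] += 0.8  # assumed strain-induced phase shift (rad)

# Differential phase between successive traces; flag bins above a simple threshold.
delta = np.angle(np.exp(1j * (perturbed_phase - baseline_phase)))
for b in np.flatnonzero(np.abs(delta) > 0.3):
    print(f"Disturbance candidate at ~{b * gauge_m / 1000:.2f} km "
          f"(delta-phase {delta[b]:+.2f} rad)")
```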

DAS has emerged as a powerful tool for transforming optical fibers into real-time acoustic sensor arrays, capable of detecting subtle mechanical disturbances such as vibrations, underwater movement, or seismic waves. While this capability is very attractive for defense and critical infrastructure monitoring, its application across existing long-haul subsea cables, particularly transoceanic systems, is severely constrained. The technology requires dark fibers or at least isolated, unused wavelengths, which are generally unavailable in (older) operational submarine systems already carrying high-capacity data traffic. Moreover, most legacy subsea cables were not designed with DAS compatibility in mind, lacking the bidirectional amplification or optical access points required to maintain sufficient signal integrity for acoustic sensing over long distances.

Retrofitting existing transatlantic or pan-Arctic submarine cables for DAS would be technically complex and, in most scenarios, likely economically unfeasible. These systems span thousands of kilometers, are deeply buried or armored along parts of their route, and incorporate in-line repeaters that do not support the backscattering reflection needed for DAS. As a result, implementing DAS across such long-haul infrastructure would entail replacing major cable components or deploying parallel sensing fibers. Both options may likely be inconsistent with the constraints of an already-deployed system. Suppose this kind of sensing capability is deemed strategically necessary. In that case, it may be operationally much less complex and more economical to deploy a greenfield cable with the embedded sensing technology, particularly for submarine cables that are 10 years old or older.

Despite these limitations, DAS offers significant potential for defense applications over shorter submarine segments, particularly near coastal landing points or within exclusive economic zones. One promising use case involves the Arctic and sub-Arctic regions surrounding Greenland. As geopolitical interest in the Arctic intensifies and ice-free seasons expand, the cables that connect Greenland to Iceland, Canada, and northern Europe will increasingly represent strategic infrastructure. DAS could be deployed along these shorter subsea spans, especially within fjords, around sensitive coastal bases, or in narrow straits, to monitor for hybrid threats such as diver incursions, submersible drones, or anchor dragging from unauthorized vessels. Greenland’s coastal cables often traverse relatively short distances without intermediate amplifiers and with accessible routes, making them more amenable to partial DAS coverage, especially if dark fiber pairs or access points exist at the landing stations.

The technology can be integrated into the infrastructure in a greenfield context, where new submarine cables are being designed and laid out. This includes reserving fiber strands exclusively for sensing, installing bidirectional optical amplifiers compatible with DAS, and incorporating coastal and Arctic-specific surveillance requirements into the architecture. For example, new Arctic subsea cables could be designed with DAS-enabled branches that extend into high-risk zones, allowing for passive real-time monitoring of marine activity without deploying sonar arrays or surface patrol assets (e.g., without actively revealing to, say, a ballistic missile submarine that it has been detected, as would be the case with active sonar).

DAS also supports geophysical and environmental sensing missions relevant to Arctic defense. When deployed along the Greenlandic shelf or near tectonic fault lines, DAS can contribute to early-warning systems for undersea earthquakes, landslides, or ice-shelf collapse events. These capabilities enhance environmental resilience and strengthen military situational awareness in a region where traditional sensing infrastructure is sparse.

DAS is best suited for detecting mid-to-high frequency acoustic energy, such as propeller cavitation or hull vibrations. However, stealth submarines may not produce strong enough vibrations to be detected unless they operate close to the fiber (e.g., <1 km) or in shallow water where coupling to the seabed is enhanced. Detection is plausible under favorable conditions but uncertain in deep-sea environments. However, in shallow Greenlandic coastal waters, DAS may detect a submarine’s acoustic wake, cavitation onset, or low-frequency hull vibrations, especially if the vessel passes within several hundred meters of the fiber.

Deploying φ-OTDR on brownfield submarine cables requires minimal infrastructure changes, as the sensing system can be installed directly at the landing station using a dedicated or wavelength-isolated fiber. However, its effective sensing range is limited to the segment between the landing station and the first in-line optical amplifier, typically around 80 to 100 kilometers. This limitation exists because standard submarine amplifiers are unidirectional and amplify the forward-traveling signal only. They do not support the return of backscattered light required by φ-OTDR, effectively cutting off sensing beyond the first repeater in brownfield systems. Even in a greenfield deployment, φ-OTDR is fundamentally constrained by weak backscatter, incoherent detection, poor long-distance SNR, and amplifier design, making it a technology mainly for coastal environments.

Coherent Optical Frequency Domain Reflectometry (C-OFDR) employs continuous-wave, frequency-chirped laser probe signals and measures how the interference pattern of the reflected light changes (i.e., coherent detection). It offers high resolution (i.e., 100–200 meters) and, for telecom-grade implementations, long-range sensing (i.e., hundreds of km), even over legacy submarine cables without Bragg gratings (i.e., periodic variation of the refractive index of the fiber). It is an active sensor technology. C-OFDR is one of the most promising techniques for high-resolution distributed sensing over long distances (e.g., transatlantic distances), and it can, in fact, be used on existing operational subsea cables without any special modifications to the cable itself, although with some practical considerations on older systems and limitations due to a reduced dynamic range. However, this sensing technology does require coherent detection systems with narrow-linewidth lasers and advanced DSP, which might make brownfield integration complex without significant upgrades. In contrast, greenfield deployments can seamlessly incorporate C-OFDR by leveraging the coherent optical infrastructure already standard in modern long-haul submarine cables. The C-OFDR technique, like φ-OTDR, also relies on sensing changes in the light’s properties as it is reflected from imperfections in the fiber-optic cable (i.e., Rayleigh backscattering). When something (an “event”) happens near the fiber, like the ground shaking from an earthquake, an anchor hitting the seabed, or temperature changes, the optical fiber experiences microscopic stretching, squeezing, or vibration. These tiny changes affect how the light reflects back. Specifically, they change the phase and frequency of the returning signal. C-OFDR uses interferometry to measure these small differences very precisely. It is important to understand that the “event” we talk about is not inside the fiber, but its effects cause changes to the fiber that can be measured by our chosen sensing technique. External forces (like pressure or motion) cause strain or stress in the glass fiber, which changes how the light moves inside. C-OFDR detects those changes and tells you where along the cable they happened, sometimes within a few centimeters.
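A minimal sketch of the frequency-to-distance mapping that underlies OFDR may help here: with a linear frequency sweep, a scattering event at distance z produces a beat tone at f_b = 2·n·z·γ/c (γ being the sweep rate), and the two-point resolution is c/(2·n·ΔF) for sweep bandwidth ΔF. The sweep parameters and group index below are assumed illustrative values, not those of any specific deployed system.

```python
# Minimal sketch of the OFDR frequency-to-distance mapping: a reflection (or
# scattering event) at distance z beats at f_b = 2*n*z*gamma/c, and the two-point
# resolution is c/(2*n*sweep_bandwidth). All values are illustrative assumptions.

C = 299_792_458.0        # speed of light in vacuum (m/s)
N_GROUP = 1.468          # assumed group index of standard single-mode fibre

def beat_frequency_hz(distance_m, sweep_rate_hz_per_s):
    return 2.0 * N_GROUP * distance_m * sweep_rate_hz_per_s / C

def range_resolution_m(sweep_bandwidth_hz):
    return C / (2.0 * N_GROUP * sweep_bandwidth_hz)

if __name__ == "__main__":
    sweep_bw = 1e6           # assumed 1 MHz effective sweep bandwidth -> ~100 m resolution
    sweep_time = 1.0         # assumed 1 s sweep duration
    print(f"Two-point resolution: ~{range_resolution_m(sweep_bw):.0f} m")
    for z_km in (10, 100, 500):
        fb = beat_frequency_hz(z_km * 1e3, sweep_bw / sweep_time)
        print(f"Reflection at {z_km:>3} km -> beat frequency ~{fb / 1e3:.1f} kHz")
```

The point of the sketch is that each position along the cable maps to its own beat frequency, so a single frequency analysis of the returned light yields a distributed picture of the whole span.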

Deploying C-OFDR on brownfield submarine cables is more challenging, as it typically requires more changes to the landing station, such as coherent transceivers with narrow-linewidth lasers and high-speed digital signal processing, which are normally not present in legacy landing stations. Even if such equipment is added at the landing station, like φ-OTDR, sensing may be limited to the segment up to the first in-line amplifier unless modified as shown in the work by Mazur et al. C-OFDR, compared to φ-OTDR, leverages coherent receivers, DSP, and telecom-grade infrastructure to overcome those barriers, making C-OFDR a very relevant long-haul subsea cable sensing technology.

An interesting paper using a modified C-OFDR technique, “Continuous Distributed Phase and Polarization Monitoring of Trans-Atlantic Submarine Fiber Optic Cable” by Mazur et al., demonstrates a powerful proof of concept for using existing long-haul submarine telecom cables, equipped with more than 70 amplifiers, for real-time environmental sensing without interrupting data transmission. The authors used a prototype system combining a fiber laser, FPGA (Field-Programmable Gate Array), and GPU (Graphics Processing Unit) to perform long-range optical frequency domain reflectometry (C-OFDR) over a 6,500 km transatlantic submarine cable. By measuring phase and polarization changes between repeaters, they successfully detected a magnitude 6.4 earthquake near Ferndale, California, showing the seismic wave propagating in real time from the West Coast of the USA, across North America, and eventually being observed in the Atlantic Ocean. Furthermore, they demonstrated deep-sea temperature measurements by analyzing round-trip time variations along the full cable spans. The system operated for over two months without service interruptions, underscoring the feasibility of repurposing submarine cables as large-scale oceanic sensing arrays for geophysical and defense applications. The system’s ability to monitor deep-sea environmental variations, such as temperature changes, contributes to situational awareness in remote oceanic regions like the Arctic or the Greenland-Iceland-UK (GIUK) Gap, areas of increasing strategic importance. It is worth noting that while the basic structure of the cable (in terms of span length and repeater placement) is standard for long-haul subsea cable systems, what sets this cable apart is the integration of a non-disruptive monitoring system that leverages existing infrastructure for advanced environmental sensing, a capability not found in most subsea systems deployed purely for telecom.

Furthermore, using C-OFDR and polarization-resolved sensing (SOP) without disrupting live telecommunications traffic provides a discreet means of monitoring infrastructure. This is particularly advantageous for covert surveillance of vital undersea routes. Finally, the system’s fine-grained phase and polarization diagnostics have the potential to detect disturbances such as anchor drags, unauthorized vessel movement, or cable tampering, activities that may indicate hybrid threats or espionage. These features position the technology as a promising enabler for real-time intelligence, surveillance, and reconnaissance (ISR) applications over existing subsea infrastructure.

C-OFDR is very sensitive over long distances and, when optimized with narrowband probing, may detect subtle refractive index changes caused by waterborne pressure variations. While more robust than DAS at long range, its ability to resolve weak, broadband submarine noise signatures remains speculative and would likely require AI-based classification. In Greenland, C-OFDR might be able to detect subtle pressure variations or cable stress caused by passing submarines, but only if the cable is close to the source.

Phase-based sensing, which φ-OTDR belongs to, is an active sensing technique that tracks the phase variation of optical signals for precise mechanical event detection. It requires narrow linewidth lasers and sensitive DSP algorithms. In phase-based sensing, we send very clean, stable light from a narrow-linewidth laser through the fiber cable. We then measure how the phase of that light changes as it travels. These phase shifts are incredibly sensitive to tiny movements, smaller than a wavelength of light. As discussed above, when the fiber is disturbed, even just a little, the light’s phase changes, which is what the system detects. This sensing technology offers a theoretical spatial resolution of 1 meter and is currently expected to be practical over distances less than 10 kilometers. In general, phase-based sensing is a broader class of fiber-optic sensing methods that detect optical phase changes caused by mechanical, thermal, or acoustic disturbances.
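The sensitivity claim can be made concrete with a back-of-the-envelope calculation: the optical phase accumulated over a fiber section is φ = 2πnL/λ, so even nanometer-scale length changes produce measurable phase shifts. The sketch below uses a 1550 nm wavelength, an assumed effective index, and the approximate silica strain-optic correction factor (~0.78); all are textbook-style values for illustration rather than system specifications.

```python
# Back-of-the-envelope sketch of why phase-based sensing is so sensitive: the
# optical phase over a fibre section is phi = 2*pi*n*L/lambda, so sub-micrometre
# length changes produce large, measurable phase shifts. All constants below are
# assumed textbook-style values.

import math

WAVELENGTH_M = 1550e-9     # typical telecom wavelength
N_EFF = 1.468              # assumed effective refractive index
XI = 0.78                  # approximate strain-optic correction factor for silica

def phase_shift_rad(delta_length_m):
    return 2.0 * math.pi * N_EFF * XI * delta_length_m / WAVELENGTH_M

if __name__ == "__main__":
    for dl_nm in (1, 10, 100, 1000):   # 1 nm ... 1 um of stretch over the sensing gauge
        print(f"{dl_nm:>5} nm elongation -> ~{phase_shift_rad(dl_nm * 1e-9):.3f} rad phase shift")
```

Even a one-micrometer elongation corresponds to several radians of phase, which is why such interferometric schemes can resolve disturbances far below the wavelength of light.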

Phase-based sensing technologies detect sub-nanometer variations in the phase of light traveling through an optical fiber, offering exceptional sensitivity to mechanical disturbances such as vibrations or pressure waves. However, its practical application over the existing installed base of submarine cable infrastructure remains extremely limited. Some of the more advanced implementations are largely confined to laboratory settings due to the need for narrow-linewidth lasers, high-coherence probe sources, and low-noise environments. These conditions are difficult to achieve across real-world subsea spans, especially those with optical amplifiers and high traffic loads. These technical demands make retrofitting phase-based sensing onto operational subsea cables impractical, particularly given the complexity of accessing in-line repeaters and the susceptibility of phase measurements to environmental noise. Still, as the technology matures and can be adapted to tolerate noisy and lossy environments, it could enable ultra-fine detection of small-scale events such as underwater cutting tools, diver-induced vibrations, or fiber tampering attempts.

In a defense context, phase-based sensing might one day be used to monitor high-risk cable landings or militarized undersea chokepoints where detecting subtle mechanical signatures could provide an early warning of sabotage or surveillance activity. Its extraordinary resolution could also contribute to low-profile detection of seabed motion near sensitive naval installations. While not yet field-deployable at scale, it represents a promising frontier for future submarine sensing systems in strategic environments, typically in proximity to coastal areas.

Coherent MIMO Distributed Fiber Sensing (DFS) is another cutting-edge active sensing technique belonging to the phase-based sensing family that uses polarization-diverse probing for spatially-resolved sensing on deployed multi-core fibers (MCF), enabling robust, high-resolution environmental mapping. This technology remains currently limited to laboratory environments and controlled testbeds, as the widespread installed base of submarine cables does not use MCF and lacks the transceiver infrastructure required to support coherent MIMO interrogation. Retrofitting existing subsea systems with this capability would require complete replacement of the fiber plant, making it infeasible for legacy infrastructure, but potentially interesting for greenfield deployments.

Despite these limitations, the future application of Coherent MIMO DFS in defense contexts is compelling. Greenfield deployments, such as new Arctic cables or secure naval corridors, could enable real-time acoustic and mechanical activity mapping across multiple parallel cores, offering spatial resolution that rivals or exceeds existing sensing platforms. This level of precision could support the detection and classification of complex underwater threats, including stealth submersibles or distributed tampering attempts. With further development, it might also support wide-area surveillance grids embedded directly into the fiber infrastructure of critical sea lanes or military installations. While not deployable on today’s global cable networks, it represents a next-generation tool for submarine situational awareness in future defense-grade fiber systems.

State of Polarization (SOP) sensing technology detects changes in light polarization that reveal environmental disturbances acting on a submarine optical cable. It can be implemented passively using existing coherent transceivers and can thus be used on existing operational submarine cables. SOP sensing does not offer spatial resolution by default. However, it has very high temporal sensitivity, on a millisecond level, allowing it to resolve temporally localized SOP anomalies that may often be precursors of a structurally compromised submarine cable. SOP sensing provides timely and actionable information even without pinpoint spatial resolution for applications like cable-break prediction, anomaly detection, and hybrid threat alerts. In some cases, the temporal information can be mapped back to the compromised physical location to within tens of kilometers. SOP sensing can cover thousands of kilometers of a submarine system.

SOP sensing provides path-integrated information about mechanical stress or vibration. While it lacks spatial resolution, it could register anomalous polarization disturbances along Arctic cable routes that coincide with suspected submarine activity. Even global SOP anomalies may be suspicious in Greenland’s sparse traffic environment, but localizing the source would remain challenging. Combined with C-OFDR, it would likely offer both a temporal and a spatial picture, a combination that could become a promising use case. SOP provides fast, passive temporal detection, while C-OFDR (or DAS) delivers spatial resolution and event classification. Together they may offer a more robust and operationally viable architecture for strategic subsea sensing, suitable for civilian and defense applications across existing and future cable systems.

Deploying SOP-based sensing on brownfield submarine cables requires no changes to the cable infrastructure, such as landing stations. It passively monitors changes in the state of polarization at the transceiver endpoints. However, this method does not provide spatial resolution and cannot localize events along the cable. It also does not rely on backscatter, and therefore its sensing capability is not limited by the presence of amplifiers, unlike φ-OTDR or C-OFDR. The limitation, instead, is that SOP sensing provides only a global, integrated signal over the entire fiber span, making it effective for detecting disturbances but not pinpointing their location.
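One simple way to frame SOP-based anomaly detection is to track the angular speed of the normalized Stokes vector on the Poincaré sphere and flag intervals where it exceeds the slow background drift. The Python sketch below does this on synthetic data; the sampling rate, drift, noise, injected transient, and threshold are all illustrative assumptions rather than parameters of any real monitoring system.

```python
# Minimal sketch of SOP anomaly detection: track the angular speed of the
# normalized Stokes vector and flag fast transients against slow background
# drift. Synthetic data; all parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Synthetic normalized Stokes samples at 1 kHz: slow drift plus measurement noise.
fs = 1000
t = np.arange(10 * fs)
drift = np.stack([np.cos(1e-4 * t), np.sin(1e-4 * t), 0.1 * np.ones_like(t, dtype=float)], axis=1)
stokes = unit(drift + 0.002 * rng.normal(size=drift.shape))

# Inject a brief, fast polarization transient (e.g., mechanical impact on the cable).
stokes[5000:5050] = unit(stokes[5000:5050] + np.array([0.0, 0.3, 0.4]))

# Angular speed between consecutive samples (rad/s) and a simple adaptive threshold.
dots = np.clip(np.sum(stokes[1:] * stokes[:-1], axis=1), -1.0, 1.0)
omega = np.arccos(dots) * fs
threshold = 10 * np.median(omega)            # assumed simple threshold rule

events = np.flatnonzero(omega > threshold)
if events.size:
    print(f"SOP transient detected at t = {events[0] / fs:.2f} s "
          f"(peak angular speed {omega.max():.1f} rad/s)")
```

Note that, as discussed above, such a detector tells you when something happened, not where; pairing it with a reflectometric technique is what restores the spatial dimension.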

Table: Performance characteristics of key optical fiber sensing technologies for subsea applications.
The table summarizes spatial resolution, operational range, minimum detectable sound levels, activation state, and compatibility with existing subsea cable infrastructure. Values reflect current best estimates and lab performance where applicable, highlighting trade-offs in detection sensitivity and deployment feasibility across sensing modalities. Range depends heavily on system design. While traditional C-OFDR typically operates over short ranges (<100 m), advanced variants using telecom-grade coherent receivers may extend reach to 100s of km at lower resolution. This table, as well as the text, considers the telecom-grade variant of C-OFDR.

Beyond the sensing technologies already discussed, such as DAS (including φ-OTDR), C-OFDR, SOP, and Coherent MIMO DFS, several additional, lesser-known sensing modalities can be deployed on or alongside submarine cables. These systems differ in physical mechanisms, deployment feasibility, and sensitivity, and while some remain experimental, others are used in niche environmental or energy-sector applications. Several of these have implications for defense-related detection scenarios, including submarine tracking, sabotage attempts, or unauthorized anchoring, particularly in strategically sensitive Arctic regions like Greenland’s West and East Coasts.

One such system is Brillouin-based distributed sensing, including Brillouin Optical Time Domain Analysis (BOTDA) and Brillouin Optical Time Domain Reflectometry (BOTDR). These methods operate by sending pulses down the fiber and analyzing the Brillouin frequency shift, which varies with temperature and strain. The spatial resolution is typically between 0.5 and 1 meter, and the sensing range can extend to 50 km under optimized conditions. The system’s strength is detecting slow-moving structural changes, such as seafloor deformation, tectonic strain, or sediment pressure buildup. However, because the Brillouin interaction is weak and slow to respond, it is poorly suited for real-time detection of fast or low-amplitude acoustic events like those produced by a stealth submarine or diver. Anchor dragging might be detected, but only if it results in significant, sustained strain in the cable. These systems could be modestly effective in shallow Arctic shelf environments, such as Greenland’s west coast, but they are not viable for real-time defense monitoring.
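For orientation, the Brillouin frequency shift scales approximately linearly with strain and temperature, Δν_B ≈ C_ε·ε + C_T·ΔT. The sketch below uses typical textbook coefficients for standard single-mode fiber at 1550 nm (roughly 0.05 MHz per microstrain and 1 MHz per °C); these are assumptions for illustration, not values from a specific cable datasheet.

```python
# Minimal sketch of the Brillouin frequency-shift relation used by BOTDA/BOTDR:
# delta_nu ~ C_eps * strain + C_T * delta_T. Coefficients are typical textbook
# values for standard single-mode fibre at 1550 nm, assumed for illustration.

C_EPS_MHZ_PER_MICROSTRAIN = 0.05   # ~0.05 MHz per microstrain (approximate)
C_T_MHZ_PER_DEG_C = 1.0            # ~1 MHz per degree Celsius (approximate)

def brillouin_shift_mhz(strain_microstrain=0.0, delta_temp_c=0.0):
    return (C_EPS_MHZ_PER_MICROSTRAIN * strain_microstrain
            + C_T_MHZ_PER_DEG_C * delta_temp_c)

if __name__ == "__main__":
    # Slow seabed deformation (e.g., 200 microstrain) vs. a modest 0.5 C warming:
    print(f"200 ue strain -> ~{brillouin_shift_mhz(strain_microstrain=200):.0f} MHz shift")
    print(f"0.5 C warming -> ~{brillouin_shift_mhz(delta_temp_c=0.5):.1f} MHz shift")
```

The example also illustrates the ambiguity the text alludes to: strain and temperature both move the same observable, so slow structural monitoring is the natural fit rather than fast acoustic detection.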

Another temperature-focused method is Raman-based distributed temperature sensing (DTS). This technique analyzes the ratio of Stokes and anti-Stokes backscatter to detect temperature changes along the fiber, with spatial resolution typically on the order of 1 meter and ranges up to 10–30 km. Raman DTS is widely used in the oil and gas industry for downhole monitoring, but is not optimized for dynamic or mechanical disturbances. It offers little utility in detecting diver activity, submarine motion, or anchor drag unless such events lead to secondary thermal effects. In other words, Raman DTS is unsuitable for detecting fast-moving threats like submarines or divers. It can detect slow thermal anomalies caused by prolonged contact, buried tampering devices, or gradual sediment buildup. Thus, it may serve as a background “health monitor” for defense-relevant subsea critical infrastructures. Because its enabling mechanism is Raman scattering, which is even weaker than Rayleigh and Brillouin scattering, this sensor technology is likely unsuitable for Arctic defense applications. Moreover, the cold and thermally stable Arctic seabed provides a limited dynamic range for temperature-induced sensing.
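The underlying Raman DTS relation can be illustrated with a few lines of Python: temperature is inferred from the anti-Stokes/Stokes backscatter power ratio, R(T) = (λ_S/λ_AS)⁴·exp(−hΔν/k_BT), which changes by well under one percent per kelvin. The ~13.2 THz Raman shift of silica and the 1550 nm pump are assumed textbook values; the cold, stable Arctic seabed temperatures in the example show how small the usable signal is there.

```python
# Minimal sketch of the Raman DTS principle: temperature follows from the
# anti-Stokes/Stokes backscatter ratio, R(T) = (lam_S/lam_AS)^4 * exp(-h*dnu/(kB*T)).
# The ~13.2 THz Raman shift of silica and the 1550 nm pump are assumed textbook values.

import math

H = 6.62607015e-34       # Planck constant (J s)
KB = 1.380649e-23        # Boltzmann constant (J/K)
C = 299_792_458.0        # speed of light (m/s)

PUMP_NM = 1550.0
RAMAN_SHIFT_HZ = 13.2e12

def antistokes_stokes_ratio(temp_k):
    nu_pump = C / (PUMP_NM * 1e-9)
    lam_stokes = C / (nu_pump - RAMAN_SHIFT_HZ)       # Stokes wavelength
    lam_antistokes = C / (nu_pump + RAMAN_SHIFT_HZ)   # anti-Stokes wavelength
    return (lam_stokes / lam_antistokes) ** 4 * math.exp(-H * RAMAN_SHIFT_HZ / (KB * temp_k))

if __name__ == "__main__":
    for t_c in (-1.0, 2.0, 4.0):                      # illustrative cold-seabed temperatures
        r = antistokes_stokes_ratio(273.15 + t_c)
        print(f"Seabed at {t_c:+.1f} C -> anti-Stokes/Stokes ratio ~{r:.4f}")
```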

A more advanced but experimental method is optical frequency comb (OFC)-based sensing, which uses an ultra-stable frequency comb to probe changes in fiber length and strain with sub-picometer resolution. This offers unparalleled spatial granularity (down to millimeters) and could, in theory, detect subtle refractive index changes induced by acoustic coupling or mechanical perturbation. However, range is limited to short spans (<10 km), and implementation is complex and not yet field-viable. This technology might detect micro-vibrations from nearby submersibles or diver-induced strain signatures in a future defense-grade network, especially greenfield deployments in Arctic coastal corridors. The physical mechanism is interferometric phase detection, amplified by comb coherence and time-of-flight mapping. Frequency comb-based techniques could be the foundation for a next-generation submarine cable monitoring system, especially in greenfield defense-focused coastal deployments requiring excellent spatial resolution under variable environmental conditions. Unlike traditional reflectometry or phase sensing, the laser frequency comb should be able to maintain calibrated performance in fluctuating Arctic environments, where salinity and temperature affect refractive index dramatically, and therefore, a key benefit for Greenlandic and Arctic deployments.

Another emerging direction is Integrated Sensing and Communication (ISAC), where linear frequency-modulated sensing signals are embedded directly into the optical communication waveform. This approach avoids dedicated dark fiber and can achieve moderate spatial resolution (~100–500 meters) with ranges of up to 80 km using coherent receivers. ISAC has been proposed for simultaneous data transmission and distributed vibration sensing. In Arctic coastal areas, where telecom capacity may be underutilized and infrastructure redundancy is limited, ISAC could enable non-invasive monitoring of anchor strikes or structural cable disturbances. It may not detect quiet submarines unless direct coupling occurs, but it could potentially flag diver-based sabotage or hybrid threats that cause physical cable contact.

Lastly, hybrid systems combining external sensor pods, such as tethered hydrophones, magnetometers, or pressure sensors, with submarine cables are deployed in specialized ocean observatories (e.g., NEPTUNE Canada). These use the cable for power and telemetry and offer excellent sensitivity for detecting underwater acoustic and geophysical events. However, they require custom cable interfaces, increased power provisioning, and are not easily retrofitted to commercial or legacy submarine systems. In Arctic settings, such systems could offer unparalleled awareness of glacier calving, seismic activity, or vessel movement in chokepoints like the Kangertittivaq (i.e., Scoresby Sund) or the southern exit of Baffin Bay (i.e., Avannaata Imaa). The main limitation of hybrid systems lies in their cost and the need for local infrastructure support. The economics relative to such systems’ benefits requires careful consideration compared to more conventional maritime sensor architectures.

DEFENSE SCENARIOS OF CRITICAL SUBSEA CABLE INFRASTRUCTURE.

Submarine cable infrastructure is increasingly recognized as a medium for data transmission and a platform for environmental and security monitoring. With the integration of advanced optical sensing technologies, these cables can detect and interpret physical disturbances across vast underwater distances. This capability opens up new opportunities for national defense, situational awareness, and infrastructure resilience, particularly in coastal and Arctic regions where traditional surveillance assets are limited. The following section outlines how different sensing modalities, such as DAS, C-OFDR, SOP, and emerging MIMO DFS, can support key operational objectives ranging from seismic early warning to hybrid threat detection. Each scenario case reflects a unique combination of acoustic signature, environmental setting, and technological suitability.

  • Intrusion Detection: Detect tampering, trawling, or vehicle movement near cables in coastal zones.
  • Seismic Early Warning: Monitor undersea earthquakes with high fidelity, enabling early warning for tsunami-prone regions.
  • Cable Integrity Monitoring: Identify precursor events to fiber breaks and trigger alerts to reroute traffic or dispatch response teams.
  • Hybrid Threat Detection: Monitor signs of hybrid warfare activities such as sabotage or unauthorized seabed operations near strategic cables. This also includes anchor-dragging sounds.
  • Maritime Domain Awareness: Track vessel movement patterns in sensitive maritime zones using vibrations induced along shore-connected cable infrastructure.

Intrusion Detection involving trawling, tampering, or underwater vehicle movement near the cable is best addressed using Distributed Acoustic Sensing (DAS), especially on coastal Arctic subsea cables where environmental noise is lower and mechanical coupling between the cable and the seafloor is stronger. DAS can detect short-range, high-frequency mechanical disturbances from human activity. However, this is more challenging in the open ocean due to poor acoustic coupling and cable burial. Coherent Optical Frequency Domain Reflectometry (C-OFDR) combined with State of Polarization (SOP) sensing offers a more passive and feasible alternative in such environments. C-OFDR can detect strain anomalies and localized pressure effects, while SOP sensing can identify anomalous polarization drift patterns caused by motion or stress, even on live traffic-carrying fibers.

For Seismic Early Warning, phase-based sensing (including both φ-OTDR and C-OFDR) is well suited across coastal and oceanic deployments. These technologies detect low-frequency ground motion with high sensitivity and temporal resolution. Phase-based methods can sense teleseismic activity or tectonic shifts along the cable route in deep ocean environments. The advantage increases in the Arctic coastal zones due to low background noise and shallow deployment, enabling the detection of smaller regional seismic events. Additionally, SOP sensing, while not a primary seismic tool, can detect long-duration cable strain or polarization shifts during large quakes, offering a redundant sensing layer.

Combining C-OFDR and SOP sensing is most effective for Cable Integrity Monitoring, particularly for early detection of fiber stress, micro-bending, or fatigue before a break occurs. SOP sensing works especially well for long-haul ocean cables with live data traffic, where passive, non-intrusive monitoring is essential. C-OFDR is more sensitive to local strain patterns and can precisely locate deteriorating sections. In Arctic coastal cables, this combination enables operators to detect damage from ice scouring, sediment movement, or thermal stress due to permafrost dynamics.

Hybrid Threat Detection benefits most from high-resolution, multi-modal sensing, such as detecting sabotage or seabed tampering by divers or unmanned vehicles. Along coastal regions, including Greenland’s fjords, Coherent MIMO Distributed Fiber Sensing (DFS), although still in its early stages, shows great promise due to its ability to spatially resolve overlapping disturbance signatures across multiple cores or polarizations. DAS may also contribute to near-shore detection if acoustic coupling is sufficient. On ocean cables, SOP sensing fused with AI-based anomaly detection provides a stealthy, always-on layer of hybrid threat monitoring, especially when other modalities (e.g., sonar, patrols) are absent or infeasible.

Finally, DAS is effective along coastal fiber segments for Maritime Domain Awareness, particularly tracking vessel movement in sensitive Arctic corridors or near military installations. It detects the acoustic and vibrational signatures of passing vessels, anchor deployment, or underwater vehicle operation. These signatures can be classified using spectrogram-based AI models to differentiate between fishing boats, cargo vessels, or small submersibles. While unable to localize the event, SOP sensing can flag cumulative disturbances or repetitive mechanical interactions along the fiber. This use case becomes less practical in oceanic settings unless vessel activity occurs near cable landing zones or shallow fiber stretches.
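
As a rough illustration of the spectrogram-based classification idea mentioned above, the sketch below converts a synthetic DAS strain-rate trace into a spectrogram and a handful of band-energy features. The sampling rate, the toy vessel signature, and the band edges are all assumptions for demonstration, not parameters of an operational classifier.

```python
# Illustrative sketch (not an operational classifier): turn a DAS strain-rate
# time series into a spectrogram and simple band-energy features, the kind of
# representation a spectrogram-based AI model could ingest for vessel
# classification. The synthetic signal and band edges are assumptions.

import numpy as np
from scipy.signal import spectrogram

fs = 1_000.0                          # assumed DAS sampling rate per channel [Hz]
t = np.arange(0, 30, 1 / fs)          # 30 s of data
rng = np.random.default_rng(0)

# Toy "vessel" signature: low-frequency propeller tonal plus broadband flow noise.
signal = 0.5 * np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(t.size)

f, seg_t, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)

def band_energy(f_lo: float, f_hi: float) -> float:
    """Mean spectral power in [f_lo, f_hi) Hz - a crude classification feature."""
    mask = (f >= f_lo) & (f < f_hi)
    return float(Sxx[mask].mean())

features = {
    "propeller_band_5_30Hz": band_energy(5, 30),
    "machinery_band_30_150Hz": band_energy(30, 150),
    "broadband_150_500Hz": band_energy(150, 500),
}
print(features)   # would be fed to a downstream classifier (e.g., CNN or boosted trees)
```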

These scenario considerations are summarized in the table below.

Table: Summary of subsea sensing use cases and corresponding detection performance.
The table outlines representative sound power levels, optimal sensing technologies, environmental suitability, and estimated detection distances for key maritime and defense-related use cases. Detection range is inferred from typical source levels, local noise floors, and sensing system capabilities in Arctic coastal and oceanic environments.
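
For readers who want to see how such detection distances can be inferred in principle, the sketch below applies the passive sonar equation with spherical spreading and a simple absorption term. All source levels, noise levels, gains, and thresholds below are illustrative assumptions, not values taken from the table.

```python
# Rough illustration of how detection distances can be inferred: the passive
# sonar equation with spherical spreading and frequency-dependent absorption.
# Every number below (source level, noise level, array gain, detection
# threshold, absorption) is an illustrative assumption, not a measured value.

import numpy as np

def transmission_loss_db(r_m: np.ndarray, alpha_db_per_km: float) -> np.ndarray:
    """Spherical spreading (20 log10 r) plus linear absorption."""
    return 20.0 * np.log10(r_m) + alpha_db_per_km * (r_m / 1000.0)

def max_detection_range_m(source_level_db, noise_level_db, array_gain_db,
                          detection_threshold_db, alpha_db_per_km) -> float:
    """Largest range where SL - TL - (NL - AG) >= DT, found by a simple scan."""
    r = np.logspace(1, 6, 5000)                     # 10 m .. 1000 km
    snr = source_level_db - transmission_loss_db(r, alpha_db_per_km) \
          - (noise_level_db - array_gain_db)
    detectable = r[snr >= detection_threshold_db]
    return float(detectable.max()) if detectable.size else 0.0

# Example: an assumed quiet submarine (~120 dB re 1 µPa @ 1 m) vs. a trawler (~160 dB).
for label, sl in [("quiet submarine", 120.0), ("trawler", 160.0)]:
    r_max = max_detection_range_m(sl, noise_level_db=70.0, array_gain_db=10.0,
                                  detection_threshold_db=10.0, alpha_db_per_km=1.0)
    print(f"{label:15s}: ~{r_max / 1000:.1f} km")
```

The point of the exercise is simply that a few tens of decibels of difference in source level translate into orders of magnitude in achievable detection range, which is why quiet submersibles and loud surface vessels end up in very different rows of the table.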

LEGACY SUBSEA SENSING NETWORKS: SONAR SYSTEMS AND THEIR EVOLVING ROLE.

The observant reader might at this point feel (rightly) that I am totally ignoring the good old sonar (i.e., sound navigation and ranging), which has been around since World War I and is thus approximately 110 years old as a technology. In the Cold War era, at its height from the 1950s to the 1980s, sonar technology advanced further into the strategic domain. The United States and its allies developed large-scale systems like SOSUS (Sound Surveillance System) and SURTASS (Surveillance Towed Array Sensor System) to detect and monitor the growing fleet of Soviet nuclear submarines. These systems enabled long-range, continuous underwater surveillance, establishing sonar as both a tactical tool and a key component of strategic deterrence and early warning architectures.

So, let us briefly look at Sonar as a defensive (and offensive) technology.

Undersea sensing has long been a cornerstone of naval strategy and maritime situational awareness; see, for example, the account “66 Years of Undersea Surveillance” by Taddiken et al. Throughout the Cold War, the world’s major powers invested heavily in long-range underwater surveillance systems, especially passive and active sonar networks. These systems remain relevant today, providing persistent monitoring for submarine detection, anti-access/area denial operations, and undersea infrastructure protection.

Passive sonar systems detect acoustic signatures emitted by ships, submarines, and underwater seismic activity. These systems rely on the natural propagation of sound through water and are often favored for their stealth since they do not emit signals. Their operation is inherently covert. In contrast, active sonar transmits acoustic pulses and measures reflected signals to detect and range objects that might not produce detectable noise, such as quiet submarines or inert objects on the seafloor.

The most iconic example of a passive sonar network is the U.S. Navy’s Sound Surveillance System (SOSUS), initially deployed in the 1950s. SOSUS comprises a series of hydrophone arrays fixed to the ocean floor and connected by undersea cables to onshore processing stations. While much of SOSUS remains classified, its operational role continues today with mobile and advanced fixed networks under the Integrated Undersea Surveillance System (IUSS). Other nations have developed analogous capabilities, including Russia’s MGK-series networks, China’s emerging Great Undersea Wall system, and France’s SLAMS network. These systems offer broad area acoustic coverage, especially in strategic chokepoints like the GIUK (Greenland-Iceland-UK) gap and the South China Sea.

Despite sonar’s historical and operational value, traditional sonar networks have significant limitations. Passive sonar is susceptible to acoustic masking by oceanic noise and may struggle to detect vessels employing acoustic stealth technologies. Active sonar, while more precise, risks disclosing its location to adversaries due to its emitted signals. Sonar performance is also constrained by water conditions, salinity, temperature gradients, and depth, all of which affect acoustic propagation. Coverage is inherently sparse and highly dependent on the geographical layout of sensor arrays and underwater topology. Finally, deployment and maintenance of sonar arrays are logistically complex and costly, often requiring naval support or undersea construction assets. These limitations suggest a decreasing standalone effectiveness of sonar systems in high-resolution detection, particularly as adversaries develop quieter and more agile underwater vehicles.

This table summarizes key sonar technologies used in naval and infrastructure surveillance, highlighting typical unit spacing, effective coverage radius, and operational notes for systems ranging from deep-ocean fixed arrays (SOSUS/IUSS) to mobile and nearshore defense systems.

Think of sonar as a radar for the sea, sensing outward into the subsea environment. Due to sound propagation characteristics (in water, sound travels more than four times faster and attenuates far more slowly than in air), sonar is an ideal technology for submarine detection and seismic monitoring. In contrast, optical sensing in subsea cables is like a tripwire or seismograph, detecting anything that physically touches, moves, or perturbs the cable along its length.

The emergence of distributed sensing over fiber optics has introduced a transformative approach to undersea and terrestrial monitoring. Distributed Acoustic Sensing (DAS), Distributed Fiber Sensing (DFS), and Coherent Optical Frequency Domain Reflectometry (C-OFDR) leverage the existing footprint of submarine telecommunications infrastructure to detect environmental disturbances, including vibrations, seismic activity, and human interaction with cables, at high spatial and temporal resolution. Unlike traditional sonar, these fiber-based systems do not rely on acoustic wave propagation in water but instead monitor the optical fiber’s phase, strain, or polarization variations. Put very simply, sonar uses acoustics to sense sound waves in water, while fiber-based sensing relies on optics and how light travels in an optical fiber. When embedded in submarine cables, such sensing techniques allow for continuous, covert, and high-resolution surveillance of the cable’s immediate environment, including detection of trawler interactions, anchor dragging, subsea landslides, and localized mechanical disturbances. They operate within the optical transmission spectrum without interrupting the core data service. While sonar systems excel at broad ocean surveillance and object tracking, their coverage is limited to specific regions and depths where arrays are installed. Conversely, fiber-based sensing offers persistent surveillance along entire transoceanic links, albeit restricted to the immediate vicinity of the cable path. Together, these systems should not be seen as competitors but as very much complementary tools: sonar covers the strategic expanse, while fiber-optic sensing provides fine-grained visibility where infrastructure resides.

This table contrasts traditional active and passive sonar networks with emerging fiber-integrated sensing systems (e.g., DAS, DFS, and C-OFDR) across key operational dimensions, including detection medium, infrastructure, spatial resolution, and security characteristics. It highlights the complementary strengths of each technology for undersea surveillance and strategic infrastructure monitoring.

The future of sonar sensing lies in hybridization and adaptive intelligence. Ongoing research explores networks that combine passive sonar arrays with intelligent edge processing using AI/ML to discriminate between ambient and threat signatures. There is also a push to integrate mobile platforms, such as Unmanned Underwater Vehicles (UUVs), into sonar meshes, expanding spatial coverage dynamically based on threat assessments. Material advances may also lead to miniaturized or modular hydrophone systems that can be deployed ad hoc or embedded into multipurpose seafloor assets. Some navies are exploring Acoustic Vector Sensors (AVS), which can detect both the pressure and the direction of incoming sound waves, offering a richer data set for tracking and identification. Coupled with improvements in real-time ocean modeling and environmental acoustics, these future sonar systems may offer higher-fidelity detection even in shallow and complex coastal waters where passive sensors are less effective. Moreover, integration with optical fiber systems is an area of active development. Some proposals suggest co-locating acoustic sensors with fiber sensing nodes or utilizing fiber backhaul for real-time sonar telemetry, thereby merging the benefits of both approaches into a coherent undersea surveillance architecture.

THE ARCTIC DEPLOYMENT CONCEPT.

As global power competition extends into the Arctic, military planners and analysts are increasingly concerned about the growing strategic role of Greenland’s coastal waters, particularly in the context of Russian nuclear submarine operations. For decades, Russia has maintained a doctrine of deploying ballistic missile submarines (SSBNs) capable of launching nuclear retaliation strikes from stealth positions in remote ocean zones. Once naturally shielded by persistent sea ice, the Arctic has become more navigable due to climate change, creating new opportunities for submerged access to maritime corridors and concealment zones.

Historically, Russian submarines seeking proximity to U.S. and NATO targets would patrol areas along the Greenland-Iceland-UK (GIUK) gap and the eastern coast of Greenland, using the remoteness and challenging acoustic environment to remain hidden. However, strategic speculation and evolving threat assessments now suggest a westward shift, toward the sparsely monitored Greenlandic West Coast. This region offers even greater stealth potential due to limited surveillance infrastructure, complex fjord geography, and weaker sensor coverage than the traditional GIUK chokepoints. Submarines could strike the U.S. East Coast from these waters in under 15 minutes, leveraging geographic proximity and acoustic ambiguity. Even if the difference in warning time is no more than about 2–4 minutes, depending on launch angle, trajectory, and detection latency, the loss of several minutes of reaction time can matter significantly in the context of strategic warning systems and nuclear command and control, especially for early-warning systems, evacuation orders, or launch-on-warning decisions.

U.S. and Canadian defense communities have increasingly voiced concern over this evolving threat. U.S. Navy leadership, including Vice Admiral Andrew Lewis, has warned that the U.S. East Coast is “no longer a sanctuary,” underscoring the return of great power maritime competition and the pressing need for situational awareness even in home waters. As Russia modernizes its submarine fleet with quieter propulsion and longer-range missiles, its ability to hide near strategic seams like Greenland becomes a direct vulnerability to North American security.

This emerging risk makes the case for integrating advanced sensing capabilities into subsea cable infrastructure across Greenland and the broader Arctic theatre. Cable-based sensing technologies, such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring, could dramatically enhance NATO’s ability to detect anomalous underwater activity, particularly in the fjords and shallow coastal regions of Greenland’s western seaboard. In a region where traditional sonar and surface surveillance are limited by ice, darkness, and remoteness, the subsea cable system could become an invisible tripwire, transforming Greenland’s digital arteries into dual-use defense assets.

Therefore, advanced sensing technologies should not be treated as optional add-ons but as foundational elements of Greenland’s Arctic defense architecture, particularly those technologies that work well and are relatively uncomplicated to operationalize on brownfield subsea cable installations. These would offer a critical layer of redundancy, early warning, and environmental insight, capabilities uniquely suited to the high north’s emerging strategic and climatic realities.

The Arctic Deployment Concept outlines a forward-looking strategy to integrate submarine cable sensing technologies into the defense and intelligence infrastructure of the Arctic region, particularly Greenland, as geopolitical tensions and environmental instability intensify. Greenland’s strategic location at the North Atlantic and Arctic Ocean intersection makes it a critical node in transatlantic communications and military situational awareness. As climate change opens new maritime passages and exposes previously ice-locked areas, the region becomes increasingly vulnerable, not only to environmental hazards like shifting ice masses and undersea seismic activity, but also to the growing risks of geopolitical friction, cyber operations, and hybrid threats targeting critical infrastructure.

In this context, sensing-enhanced submarine cables offer a dual-use advantage: they carry data traffic and serve as real-time monitoring assets, effectively transforming passive infrastructure into a distributed sensor network. These capabilities are especially vital in Greenland, where terrestrial sensing is sparse, the weather is extreme, and response times are long due to the remoteness of the terrain. By embedding Distributed Acoustic Sensing (DAS), Coherent Optical Frequency Domain Reflectometry (C-OFDR), and State of Polarization (SOP) sensing along cable routes, operators can monitor for ice scouring, tectonic activity, tampering, or submarine presence in near real time.

This chart illustrates the Greenlandic telecommunications provider Tusass’s infrastructure (among other things). Note that Tusass is the incumbent and only telecom provider in Greenland. Currently, five hydropower plants (shown above; locations are indicative only) supply more than 80% of Greenland’s electricity demand. Greenland’s new international airport in Nuuk became operational in November 2024. Source: the Tusass Annual Report 2023, with some additions and minor edits.

As emphasized in the article “Greenland: Navigating Security and Critical Infrastructure in the Arctic”, Greenland is not only a logistical hub for NATO but also home to increasingly digitalized civilian systems. This dual-use nature of Arctic subsea cables underscores the need for resilient, secure, and monitored communications infrastructure. Given the proximity of Greenland to the GIUK gap, a historic naval choke point between Greenland, Iceland, and the UK, any interruption or undetected breach in subsea connectivity here could undermine both civilian continuity and allied military posture in the region.

Moreover, the cable infrastructure along Greenland’s coastline, connecting remote settlements, research stations, and defense assets, is highly linear and often exposed to physical threats from shifting icebergs, seabed movement, or vessel anchoring. These shallow, coastal environments are ideally suited for sensing deployments, where good coupling between the fiber and the seabed enables effective detection of local activity. Integrating sensing technologies here supports ISR (i.e., Intelligence, Surveillance, and Reconnaissance) and predictive maintenance. It extends domain awareness into remote fjords and ice-prone straits where traditional radar or sonar systems may be ineffective or cost-prohibitive.

The map of Greenland’s telecommunications infrastructure provides a powerful visual framework for understanding how sensing capabilities could be integrated into the nation’s subsea cable system to enhance strategic awareness and defense. The western coastline, where the majority of Greenland’s population resides and where the main subsea cable infrastructure runs, offers an ideal geographic setting for deploying cable-integrated sensing technologies. The submarine cable routes from Nanortalik in the south to Upernavik in the north connect critical civilian hubs such as Nuuk, Ilulissat, and Qaqortoq, while simultaneously passing near U.S. military installations like Pituffik Space Base. While essential for digital connectivity, this infrastructure also represents a strategic vulnerability if left unsensed and unprotected.

Given that Russian nuclear-powered ballistic missile submarines (SSBNs) are suspected of operating closer to the Greenlandic coastline, shifting from the historical GIUK gap to potentially less monitored regions along the west, Greenland’s cable network could be transformed into an invisible perimeter sensor array. Technologies such as Distributed Acoustic Sensing (DAS) and State of Polarization (SOP) monitoring could be layered onto the existing fiber without disrupting data traffic. These technologies would allow authorities to detect minute vibrations from nearby vessel movement or unauthorized subsea activity, and to monitor for seismic shifts or environmental anomalies like iceberg scouring.

The map above shows the submarine cable backbone, microwave-chain sites, and satellite ground stations. If integrated, these components could act as hybrid communication-and-sensing relay points, particularly in remote locations like Qaanaaq or Tasiilaq, further extending domain awareness into previously unmonitored fjords and inlets. The location of the new international airport in Nuuk, combined with Nuuk’s proximity to hydropower and a local datacenter, also suggests that the capital could serve as a national hub for submarine cable-based surveillance and anomaly detection processing.

Much of this could be operationalized using existing infrastructure with minimal intrusion (at least in the proximity of Greenland’s coastline). Brownfield sensing upgrades, mainly using coherent transceiver-based SOP methods or in-line C-OFDR reflectometry, may be implemented on live cable systems, allowing Greenland’s existing communications network to become a passive tripwire for submarine activity and other hybrid threats. This way, the infrastructure shown on the map could evolve into a dual-use defense asset, vital in securing Greenland’s civilian connectivity and NATO’s northern maritime flank.

POLICY AND OPERATIONAL CONSIDERATIONS.

As discussed previously, we are today essentially blind to what happens to our submarine infrastructure, which carries over 95% of the world’s intercontinental internet traffic and supports financial transactions worth more than 10 trillion euros daily. This incredibly important global submarine communications network was long taken for granted, almost like a deploy-and-forget infrastructure. It is worth remembering that we cannot protect what we cannot measure.

Arctic submarine cable sensing is as much a policy and sourcing question as a technical one. The integration of sensing platforms should follow a modular, standards-aligned approach, supported by international cooperation, robust cybersecurity measures, and operational readiness for Arctic conditions. If implemented strategically, these systems can offer enhanced resilience and a model for dual-use infrastructure governance in the digital age.

As Arctic geostrategic relevance increases due to climate change, geopolitical power rivalry, and the expansion of digital critical infrastructure, submarine cable sensing has emerged as both a technological opportunity and a governance challenge. The deployment of sensing techniques such as State of Polarization (SOP) monitoring and Coherent Optical Frequency Domain Reflectometry (C-OFDR) offers the potential to transform traditionally passive infrastructure into active, real-time monitoring platforms. However, realizing this vision in the Arctic, particularly for Greenlandic and trans-Arctic cable systems, requires a careful approach to policy, interoperability, sourcing, and operational governance.

One of the key operational advantages of SOP-based sensing is that it allows for continuous, passive monitoring of subsea cables without consuming bandwidth or disrupting live traffic. When analyzed using AI-enhanced models, SOP fluctuations provide a low-impact way to detect seismic activity, cable tampering, or trawling events. This makes SOP a highly viable candidate for brownfield deployments in the Arctic, where live traffic-carrying cables traverse vulnerable and logistically challenging environments. Similarly, C-OFDR, while slightly more complex in deployment, has been demonstrated in real-world conditions on transatlantic cables, offering precise localization of environmental disturbances using coherent interferometry without the need for added reflectors.
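
A minimal sketch of the underlying idea is given below: track how quickly the received state of polarization moves on the Poincaré sphere and raise an alarm when that angular speed deviates strongly from its recent baseline. The synthetic data, window length, and threshold are assumptions for illustration; this is not the AI-enhanced model referenced above.

```python
# Minimal sketch (assumed data and thresholds, not a vendor model): flag SOP
# anomalies by tracking how fast the Stokes vector rotates on the Poincaré
# sphere and applying a rolling z-score to that angular speed.

import numpy as np

def sop_angular_speed(stokes: np.ndarray, fs: float) -> np.ndarray:
    """Angle [rad/s] between consecutive unit Stokes vectors; stokes has shape (N, 3)."""
    s = stokes / np.linalg.norm(stokes, axis=1, keepdims=True)
    cos_angle = np.clip(np.sum(s[1:] * s[:-1], axis=1), -1.0, 1.0)
    return np.arccos(cos_angle) * fs

def rolling_zscore_alarms(x: np.ndarray, window: int, threshold: float) -> np.ndarray:
    """Indices where x deviates from its trailing window by more than `threshold` sigmas."""
    alarms = []
    for i in range(window, len(x)):
        mu, sigma = x[i - window:i].mean(), x[i - window:i].std() + 1e-12
        if (x[i] - mu) / sigma > threshold:
            alarms.append(i)
    return np.array(alarms, dtype=int)

# Synthetic demo: slow polarization drift plus a short "tampering" burst.
fs = 100.0
n = 6000
rng = np.random.default_rng(1)
drift = np.cumsum(0.001 * rng.standard_normal((n, 3)), axis=0)
stokes = np.array([1.0, 0.0, 0.0]) + drift
stokes[3000:3050] += 0.2 * rng.standard_normal((50, 3))   # injected disturbance

speed = sop_angular_speed(stokes, fs)
print("first alarm samples:", rolling_zscore_alarms(speed, window=500, threshold=6.0)[:5])
```

A production system would of course replace the rolling z-score with trained models and fuse the alarms with other modalities, but the passive, traffic-transparent nature of the measurement is already visible in this toy version.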

From a policy standpoint, Arctic submarine sensing intersects with civil, commercial, and defense domains, making multinational coordination essential. Organizations such as NATO, NORDEFCO (Nordic Defence Cooperation), and the Arctic Council must harmonize protocols for sensor data sharing, event attribution, and incident response. While SOP and C-OFDR generate valuable geophysical and security-relevant data, questions remain about how such data can be lawfully shared across borders, especially when detected anomalies may involve classified infrastructure or foreign-flagged vessels.

Moreover, integration with software-defined networking and centralized control planes can enable rapid traffic rerouting when anomalies are detected, improving resilience against natural or intentional disruptions. This also requires technical readiness in Greenlandic and Nordic telecom systems, many of which are evolving toward open architectures but may still depend on legacy switching hubs vulnerable to single points of failure.
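
Conceptually, the control-plane logic can be as simple as the sketch below: when a sensing alarm implicates a cable segment, recompute paths that avoid it. The toy topology, link weights, and the use of networkx as a stand-in for an SDN controller are all assumptions for illustration, not a description of any operator’s network.

```python
# Conceptual sketch (assumed topology, not an operator system): when a sensing
# alarm implicates a cable segment, a centralized controller recomputes paths
# that avoid it. networkx stands in for the SDN control plane.

import networkx as nx

# Toy topology: two subsea paths plus a high-latency satellite backup.
g = nx.Graph()
g.add_edge("Nuuk", "Qaqortoq", weight=1, medium="subsea")
g.add_edge("Qaqortoq", "Iceland", weight=1, medium="subsea")
g.add_edge("Nuuk", "Canada", weight=3, medium="subsea")
g.add_edge("Nuuk", "GEO-satellite", weight=50, medium="satellite")
g.add_edge("GEO-satellite", "Iceland", weight=50, medium="satellite")

def reroute_on_alarm(graph: nx.Graph, src: str, dst: str, suspect_edge: tuple) -> list:
    """Return a path from src to dst that avoids the segment flagged by sensing."""
    pruned = graph.copy()
    if pruned.has_edge(*suspect_edge):
        pruned.remove_edge(*suspect_edge)
    return nx.shortest_path(pruned, src, dst, weight="weight")

print("normal   :", nx.shortest_path(g, "Nuuk", "Iceland", weight="weight"))
print("on alarm :", reroute_on_alarm(g, "Nuuk", "Iceland", ("Qaqortoq", "Iceland")))
```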

Sensory compatibility and strategic trust must guide the acquisition and sourcing of sensing systems. Vendors like Nokia Bell Labs, which developed AI-based SOP anomaly detection models, have demonstrated in-band sensing on submarine networks without service degradation. A sourcing team may want to ensure that due diligence is done on the foundational models and that their origin has not been compromised by high-risk countries or vendors. I would recommend that sourcing teams follow the European Union’s 5G security framework as guidance when selecting the algorithmic solution, ensuring that no high-risk vendor or country has been involved at any point in model development, training, or the operational aspects of inference and updates in the application of such models. By the way, it might be a very good and safe idea to extend this principle to the submarine cable construction and repair industry (just saying!).

When sourcing such systems, governments and operators should prioritize:

  • Proven compatibility with coherent transceiver infrastructure (i.e., brownfield submarine cable installations). Needless to say, solutions should be tested before final sourcing (e.g., via a proof of concept, PoC).
  • Supplier alignment with NATO or Nordic/Arctic security frameworks. At a minimum, guidance should be taken from the EU 5G security framework and its approach to high-risk vendors and countries.
  • Firmware and AI models need clear IP ownership and cybersecurity compliance. Needless to say, the foundational models must originate from trusted companies and markets.
  • Inclusion of post-deployment support in Arctic (and beyond Arctic) operational conditions.

It cannot be emphasized enough that not all sensing systems are equally suitable for long-haul submarine cable stretches, such as transatlantic routes. Different sensing strategies may be required for different parts or spans of the same subsea cable (e.g., the bottom of the Atlantic Ocean versus coastal areas and their approaches). A hybrid sensing approach is often more effective than a single solution. The physical length, signal attenuation, repeater spacing, and bandwidth constraints inherent to long-haul cables introduce technical limitations that influence which sensing techniques are viable and scalable.

For example, φ-OTDR (phase-sensitive OTDR) and standard DAS techniques, while powerful for acoustic sensing on terrestrial or coastal cables, face significant challenges over ultra-long distances due to signal loss and a diminishing signal-to-noise ratio. These methods typically require access to dark fiber and may struggle to operate effectively across repeatered links or when deployed mid-span across thousands of kilometers without amplification. By contrast, techniques like State of Polarization (SOP) sensing and Coherent Optical Frequency Domain Reflectometry (C-OFDR) have demonstrated strong potential for brownfield integration on transoceanic cables. SOP sensing can operate passively on live, traffic-carrying fibers and has been successfully demonstrated over 6,500 km transatlantic spans without an invasive retrofit. Similarly, C-OFDR, particularly in its in-line coherent implementation, can leverage existing coherent transceivers and loop-back paths to perform long-range distributed sensing across legacy infrastructure.

This leads to the reasonable conclusion that a mix of sensing technologies tailored to cable type, length, environment, and use case is appropriate and necessary. For example, coastal or Arctic shelf cables may benefit more from high-resolution φ-OTDR/DAS deployments. In contrast, transoceanic cables call for SOP- or C-OFDR-based systems compatible with repeatered, live-traffic environments. This modular, multi-modal approach ensures maximum coverage, resilience, and relevance, especially as sensing is extended across greenfield and brownfield deployments.

Thus, hybrid sensing architectures are emerging as a best practice, with each technique contributing unique strengths toward a comprehensive monitoring and defense capability for critical submarine infrastructure.

Last but not least, cybersecurity and signal integrity protections are critical. Sensor platforms that generate real-time alerts must include spoofing detection, data authentication, and secured telemetry channels to prevent manipulation or false alarms. SOP sensing, for instance, may be vulnerable to polarization spoofing unless validated against multi-parameter baselines, such as concurrent C-OFDR strain signatures or external ISR (i.e., Intelligence, Surveillance, and Reconnaissance) inputs.

CONCLUSION AND RECOMMENDATION.

Submarine cables are indispensable for global connectivity, transmitting over 95% of international internet traffic, yet they remain primarily unmonitored and physically vulnerable. Recent events and geopolitical tensions reveal that hostile actors could target this infrastructure with plausible deniability, especially in regions with low surveillance like the Arctic. As described in this article, enhanced sensing technologies, such as DAS, SOP, and C-OFDR, can provide real-time awareness and threat detection, transforming passive infrastructure into active security assets. This is particularly urgent for islands and Arctic regions like Greenland, where fragile cable networks (in the sense of few independent international connections) represent single points of failure.

Key Considerations:

  • Submarine cables are strategic, yet “blind & deaf” infrastructures.
    Despite carrying the majority of global internet and financial data, most cables lack embedded sensing capabilities, leaving them vulnerable to natural and hybrid threats. This is especially true in the Arctic and island regions with minimal redundancy.
  • Recent hybrid threat patterns reinforce the need for monitoring.
    Cases like the 2024–2025 Baltic and Taiwan cable incidents show patterns (e.g., clean cuts, sudden phase shifts) that may be consistent with deliberate interference. These events demonstrate how undetected tampering can have immediate national and global impacts.
  • The Arctic is both a strategic and environmental hotspot.
    Melting sea ice has made the region more accessible to submarines and sabotage, while Greenland’s cables are often shallow, unprotected, and linked to critical NATO and civilian installations. Integrating sensing capabilities here is urgent.
  • Sensing systems enable early warning and reduce repair times.
    Technologies like SOP and C-OFDR can be applied to existing (brownfield) subsea systems without disrupting live traffic. This allows for anomaly detection, seismic monitoring, and rapid localization of cable faults, cutting response times from days to minutes.
  • Hybrid sensing systems and international cooperation are essential.
    No single sensing technology fits all submarine environments. The most effective strategy for resilience and defense involves combining multiple modalities tailored to cable type, geography, and threat level while ensuring trusted procurement and governance.
  • Relying on only one or two submarine cables for an island’s entire international connectivity at a bandwidth-critical scale is a high-stakes gamble. For example, a dual-cable redundancy may offer sufficient availability on paper. However, it fails to account for real-world risks such as correlated failures, extended repair times, and the escalating strategic value of uninterrupted digital access.
  • Quantity doesn’t matter to capable hostile actors: whether a country or region has two, three, or a handful of international submarine cables is unlikely to matter when it comes to compromising those critical infrastructure assets.

In addition to the key conclusions above, there is a common belief that expanding the number of international submarine cables from two to three or three to four offers meaningful protection against deliberate sabotage by hostile state actors. While intuitively appealing, this notion underestimates a determined adversary’s intent and capability. For a capable actor, targeting an additional one or two cables is unlikely to pose a serious operational challenge. If the goal is disruption or coercion, a capable adversary will likely plan for multi-point compromise from the outset (including landing station considerations).

However, what cannot be overstated is the resilience gained through additional, physically distinct (parallel) cable systems. Moving from two to three truly diverse and independently repairable cables improves system availability by a factor of roughly 200, reducing expected downtime from several hours per year to under a minute. Expanding to four cables can reduce expected downtime to mere seconds annually. These figures reflect statistical robustness and operational continuity in the face of failure. Yet availability alone is not enough. Submarine cable repair timelines remain long, stretching from weeks to months, even under favorable conditions. And while natural disruptions are significant, they are no longer our only concern. Undersea infrastructure has become a deliberate target in hybrid and kinetic conflict scenarios in today’s geopolitical climate. The most pressing threat is not that these cables might be compromised, but that they may already be; we are simply unaware. The undersea domain is poorly monitored, poorly defended, and rich in asymmetric leverage.
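
The arithmetic behind these availability gains is straightforward, as the sketch below shows for independent, identical cables; the assumed per-cable unavailability is purely illustrative, and correlated failures or long repair times (discussed above) would erode the gains considerably.

```python
# Toy availability model behind the "two vs. three vs. four cables" argument.
# Assumes independent, identical cables; the per-cable unavailability below is
# an illustrative assumption, not an operator statistic. Correlated failures
# and long repair times would make the real numbers worse.

HOURS_PER_YEAR = 8760.0

def expected_downtime_hours(per_cable_unavailability: float, n_cables: int) -> float:
    """All-cables-down probability times hours per year (independent failures)."""
    return (per_cable_unavailability ** n_cables) * HOURS_PER_YEAR

u = 0.005   # assumed per-cable unavailability (~99.5% availability, ~44 h/yr down alone)
for n in (1, 2, 3, 4):
    d = expected_downtime_hours(u, n)
    print(f"{n} cable(s): ~{d:.4f} h/yr  (~{d * 3600:.1f} s/yr)")
# Under this idealized model, each extra independent cable cuts expected
# downtime by a factor of 1/u (here 200); the exact hours-per-year figures
# depend entirely on the assumed per-cable unavailability.
```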

Submarine cable infrastructure is not just the backbone of global digital connectivity. It is also a strategic asset with profound implications for civil society and national defense. The reliance on subsea cables for internet access, financial transactions, and governmental coordination is absolute. Satellite-based communications networks can only carry an infinitesimal amount of the traffic carried by subsea cable networks. If the global submarine cable network were to break down, so would the world order as we know it. Integrating advanced sensing technologies such as SOP, DAS, and C-OFDR into these networks transforms them from passive conduits into dynamic surveillance and monitoring systems. This dual-use capability enables faster fault detection and enhanced resilience for civilian communication systems, but also supports situational awareness, early-warning detection, and hybrid threat monitoring in contested or strategically sensitive areas like the Arctic. Ensuring submarine cable systems are robust, observable, and secured must therefore be seen as a shared priority, bridging commercial, civil, and military domains.

THE PHYSICS BEHIND SENSING – A BIT OF BACKUP.

Rayleigh Scattering: Imagine shining a flashlight through a long glass tunnel. Even though the glass tunnel looks super smooth, it has tiny bumps and little specks you cannot see. When the light hits those tiny bumps, some of it bounces back, like a ball bouncing off a wall. That bouncing light is called Rayleigh scattering.

Rayleigh scattering is a fundamental optical phenomenon in which light is scattered by small-scale variations in the refractive index of a medium, such as microscopic imperfections or density fluctuations within an optical fiber. It occurs naturally in all standard single-mode fibers and results in a portion of the transmitted light being scattered in all directions, including backward toward the transmitter. The intensity of Rayleigh backscattered light is typically very weak, but it can be detected and analyzed using highly sensitive receivers. The scattering is elastic, meaning there is no change in wavelength between the incident and scattered light.

In distributed fiber optic sensing (DFOS), Rayleigh backscatter forms the basis for several techniques:

  • Distributed Acoustic Sensing (DAS):
    The DAS sensing solution uses phase-sensitive optical time-domain reflectometry (i.e., φ-OTDR) to measure minute changes in the backscattered phase caused by vibrations. These changes indicate environmental disturbances such as seismic waves, intrusions, or cable movement (a toy numerical sketch of this phase-to-strain relation follows this list).
  • Coherent Optical Frequency Domain Reflectometry (C-OFDR):
    C-OFDR leverages Rayleigh backscatter to measure changes in the fiber over distance with high resolution. By sweeping a narrow-linewidth laser over a frequency range and detecting interference from the backscatter, C-OFDR enables continuous distributed sensing along submarine cables. Unlike earlier methods requiring Bragg gratings, recent innovations allow this technique to work even over legacy subsea cables without them.
  • Coherent Receiver Sensing:
    This technique monitors Rayleigh backscatter and polarization changes using existing telecom equipment’s DSP (digital signal processing) capabilities. This allows for passive sensing with no additional probes, and the sensing does not interfere with data traffic.
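
To make the φ-OTDR bullet above concrete, the sketch below converts a measured differential phase into strain over one gauge length using the standard double-pass relation; the wavelength, refractive index, photo-elastic factor, and gauge length are assumed typical values rather than parameters of any particular interrogator.

```python
# Toy illustration of the phi-OTDR relation used by DAS interrogators:
# a length change over one gauge length shows up as an optical phase change
# delta_phi = (4 * pi * n * xi / lambda) * delta_L (double-pass). The gauge
# length, wavelength, and photo-elastic factor below are assumed typical values.

import math

WAVELENGTH = 1550e-9     # probe wavelength [m] (assumed)
N_FIBER = 1.468          # fiber refractive index (assumed)
XI = 0.78                # photo-elastic scaling factor for silica (assumed)

def phase_to_strain(delta_phi_rad: float, gauge_length_m: float) -> float:
    """Convert a measured differential phase [rad] into strain over one gauge length."""
    delta_length = delta_phi_rad * WAVELENGTH / (4.0 * math.pi * N_FIBER * XI)
    return delta_length / gauge_length_m

if __name__ == "__main__":
    for phi in (0.01, 0.1, 1.0):     # rad, roughly the dynamic range of interest
        eps = phase_to_strain(phi, gauge_length_m=10.0)
        print(f"delta_phi = {phi:5.2f} rad  ->  strain ≈ {eps * 1e9:.1f} nano-strain")
```

The nano-strain sensitivities that fall out of this relation are what make DAS capable of picking up the faint mechanical disturbances discussed throughout this article.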

Brillouin Scattering: Imagine you are talking through a long string tied between two cups, like the string telephone most of us played with as kids (before everyone got a smartphone when they turned 3 years old). Now, picture that the string is not still. It shakes a little, shivering or wiggling from the wind or from the strain of the hands holding the cups. When your voice travels down that string, it bumps into those little wiggles. That bumping makes the sound of your voice change a tiny bit. Brillouin scattering is like that. When light travels through our string (which could be a glass fiber), the tiny wiggles inside the string make the light change direction, and the way the light and the cable’s “wiggles” work together can tell our engineers stories about what happens inside the cable.

Brillouin scattering is a nonlinear optical effect that occurs when light interacts with acoustic (sound) waves within the optical fiber. When a continuous wave or pulsed laser signal travels through the fiber, it can generate small pressure waves due to a phenomenon known as electrostriction. These pressure waves slightly change the optical fiber’s refractive index and act like a moving grating, scattering some of the light backward. This backward-scattered light experiences a frequency shift, known as the Brillouin shift, which is directly related to the temperature and strain in the fiber at the scattering point.

Commercial Brillouin-based systems are technically capable of monitoring subsea communications cables, especially for strain and temperature sensing. However, they are not yet standard in the submarine communications cable industry, and integration typically requires dedicated or dark fibers, as the sensing cannot share the same fiber with active data traffic.
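
The sketch below puts numbers on the Brillouin relations described above: the nominal shift ν_B = 2·n·v_a/λ and the conversion of a measured shift change into a temperature (or strain) change. The refractive index, acoustic velocity, and sensitivity coefficients are typical literature values for standard single-mode fiber at 1550 nm, used here for illustration only.

```python
# Hedged numerical sketch of the Brillouin relations described above. All
# constants are typical literature values for standard single-mode fiber at
# 1550 nm, used for illustration only.

N_EFF = 1.45            # effective refractive index (assumed)
V_ACOUSTIC = 5800.0     # longitudinal acoustic velocity in silica [m/s] (assumed)
WAVELENGTH = 1.55e-6    # pump wavelength [m]

C_TEMP = 1.0e6          # shift sensitivity to temperature [Hz per °C] (assumed)
C_STRAIN = 0.05e6       # shift sensitivity to strain [Hz per micro-strain] (assumed)

def brillouin_shift_hz() -> float:
    """Nominal Brillouin frequency shift nu_B = 2 * n * v_a / lambda."""
    return 2.0 * N_EFF * V_ACOUSTIC / WAVELENGTH

def delta_temperature_c(measured_shift_change_hz: float) -> float:
    """Temperature change implied by a shift change, assuming zero strain change."""
    return measured_shift_change_hz / C_TEMP

if __name__ == "__main__":
    print(f"nominal Brillouin shift ≈ {brillouin_shift_hz() / 1e9:.2f} GHz")
    # Under these assumed coefficients, a 5 MHz upward drift of the Brillouin
    # peak would suggest ~5 °C of warming (or ~100 micro-strain of tension).
    print(f"5 MHz shift change ≈ {delta_temperature_c(5e6):.1f} °C (strain held constant)")
```

The practical catch, as noted above, is that temperature and strain both move the same peak, which is one reason Brillouin systems typically need dedicated or dark fibers and careful calibration.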

Raman Scattering: Imagine you are shining a flashlight through a glass of water. Most of the light goes straight through, like cars driving down a road without turning. But sometimes, a tiny bit of light bumps into something inside the water, like a little water molecule, and bounces off differently. It’s like the car suddenly makes a tiny turn and changes its color. This little bump and color change is what we call Raman scattering. It is a special effect as it helps scientists figure out what’s inside things, like what water is made of, by looking at how the light changes when it bounces off.

Raman scattering is primarily used in submarine fiber cable sensing for Distributed Temperature Sensing (DTS). This technique exploits the temperature-dependent nature of Raman scattering to measure the temperature along the entire length of an optical fiber, which can be embedded within or run alongside a submarine cable. Raman scattering has several applications in submarine cables. It is used for environmental monitoring by detecting gradual thermal changes caused by ocean currents or geothermal activity. Regarding cable integrity, it can identify hotspots that might indicate electrical faults or compromised insulation in power cables. In Arctic environments, Raman-based Distributed Temperature Sensing (DTS) can help infer changes in surrounding ice or seawater temperatures, aiding in ice detection. Additionally, it supports early warning systems in the energy and offshore sectors by identifying overheating and other thermal anomalies before they lead to critical failures.
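
The temperature dependence that DTS exploits is the ratio of anti-Stokes to Stokes backscatter. The sketch below evaluates the ideal ratio for a few seawater-relevant temperatures; the pump wavelength and Raman shift are assumed typical values, and the differential fiber attenuation that real instruments must calibrate out is ignored.

```python
# Sketch of the physics a Raman DTS interrogator exploits: the anti-Stokes /
# Stokes backscatter ratio depends on absolute temperature. The pump wavelength
# and silica Raman shift are assumed typical values; differential attenuation,
# which real instruments calibrate out, is ignored here.

import math

H = 6.626e-34            # Planck constant [J s]
KB = 1.381e-23           # Boltzmann constant [J/K]
C = 2.998e8              # speed of light [m/s]

PUMP_WAVELENGTH = 1550e-9          # assumed probe wavelength [m]
RAMAN_SHIFT_HZ = 13.2e12           # ~440 cm^-1 silica Raman shift (assumed)

def anti_stokes_to_stokes_ratio(temperature_k: float) -> float:
    """Ideal I_AS / I_S ratio: (lambda_S / lambda_AS)^4 * exp(-h*dv / (kB*T))."""
    f_pump = C / PUMP_WAVELENGTH
    lam_stokes = C / (f_pump - RAMAN_SHIFT_HZ)
    lam_anti = C / (f_pump + RAMAN_SHIFT_HZ)
    return (lam_stokes / lam_anti) ** 4 * math.exp(-H * RAMAN_SHIFT_HZ / (KB * temperature_k))

if __name__ == "__main__":
    for t_c in (-1.8, 4.0, 20.0):        # seawater near freezing .. lab reference
        r = anti_stokes_to_stokes_ratio(t_c + 273.15)
        print(f"T = {t_c:5.1f} °C  ->  I_AS/I_S ≈ {r:.4f}")
    # Around 0-20 °C the ratio changes by on the order of 0.8 % per kelvin,
    # which is the handle DTS systems use to resolve sub-degree changes.
```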

However, Raman scattering has notable limitations. Because it is a weak optical effect, DTS systems based on Raman scattering require high-powered lasers and highly sensitive detectors. It is also unsuitable for detecting dynamic events such as vibrations or acoustic signals, which are better sensed using Rayleigh or Brillouin scattering. Furthermore, Raman-based DTS typically offers spatial resolutions of one meter or more and has a slow response time, making it less effective for identifying rapid or short-lived events like submarine activity or tampering.

Commercial Raman-DTS solutions exist and are actively deployed in subsea power cable monitoring. Their use in telecom submarine cables is less common but technically feasible, particularly for infrastructure integrity monitoring rather than data-layer diagnostics.

FURTHER READING.

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article. I am furthermore indebted to Andreas Gladisch, VP Emerging Technologies – Deutsche Telekom AG, for sharing his expertise on fiber-optical sensing technologies with me and for providing some of the foundational papers on which my article and research have been based. I always come away wiser from our conversations.