5G Economics – The Numbers (Appendix X).

5G Essence

100% COVERAGE.

100% 5G coverage is not going to happen with 30 – 300 GHz millimeter-wave frequencies alone.

The “NGMN 5G white paper”, which I will in the subsequent parts refer to as the 5G vision paper, requires the 5G coverage to be 100%.

At 100% cellular coverage it becomes somewhat academic whether we talk about population coverage or geographical (area) coverage. The surest way to cover 100% of the population is to cover 100% of the geography; conversely, covering 100% of the geography “reasonably” ensures covering 100% of the population.

While it is theoretically possible to cover 100% (or very nearly) of the population without covering 100% of the geography, it is instructive to think about why 100% geographical coverage could be a useful target in 5G;

  1. Network-augmented driving and support for various degrees of autonomous driving would require all roads (however small) to be covered.
  2. Internet of Things (IoT) sensors and actuators are likely to be of use also in rural areas (e.g., agriculture, forestation, security, waterways, railways, traffic lights, speed detectors, villages…) and would require a network to connect to.
  3. Given many users’ personal-area IoT networks (e.g., fitness & health monitors, location detection, smart devices in general), ubiquitous coverage becomes essential.
  4. The internet of flying things (e.g., drones) is also likely to benefit from 100% area and aerial coverage.

However, many countries still lack comprehensive geographical coverage. Here is an overview of the situation in EU28 (as of 2015);

For EU28 countries, 14% of all households in 2015 still had no LTE coverage. That is approx. 30+ million households, or the equivalent of 70+ million citizens, without LTE coverage. The 14% might seem benign. However, it masks a rural neglect: 64% of rural households had no LTE coverage. The core reason for the lack of rural (population and household) coverage is an economic one. Due to the relatively low number of people covered per rural site, compounded by affordability issues for the rural population, rural sites overall tend to have low or no profitability. Network sharing can however improve rural site profitability, as site-related costs are shared.

From an area coverage perspective, the 64% of rural households in EU28 without LTE coverage is likely to amount to a sizable lack of LTE coverage area. This rural proportion of areas and households is also very likely by far the least profitable to cover for any operator, possibly even with very progressive network sharing arrangements.

Fixed broadband, Fiber to the Premises (FTTP) and DOCSIS3.0, lags further behind mobile LTE-based broadband. Perhaps not surprisingly from a business economics perspective, fixed broadband is largely unavailable in rural areas across EU28.

The chart below illustrates the variation in lack of broadband coverage across LTE, Fiber to the Premises (FTTP) and DOCSIS3.0 (i.e., Cable) from a total country perspective (i.e., rural areas included in average).

We observe that most countries have very far to go on fixed broadband provisioning (i.e., FTTP and DOCSIS3.0), and even LTE coverage remains incomplete. The rural coverage view (not shown here) would be substantially worse than the above Total view.

The 5G ambition is to cover 100% of all population and households. Due to the demographics of how rural households (and populations) are spread, fairly large geographical areas would likely need to be covered to make good on the 100% ambition.

It would appear that bridging this lack of broadband coverage would be best served by a cellular-based technology. Given the fairly low population density in such areas, a relatively high average service quality (i.e., broadband) could be delivered as long as the cell range is optimized and sufficient spectrum at a relatively low carrier frequency (< 1 GHz) is available. It should be remembered that the super-high 5G 1 – 10 Gbps performance cannot be expected in rural areas. Given the lower carrier frequency range needed for economic rural coverage, both advanced antenna systems and very large bandwidths (e.g., such as found in the mm-wave frequency range) would not be available to those areas, limiting the capacity and peak performance possible even with 5G.

I would suspect that irrespective of the 100% ambition, telecom providers will be challenged by the economics of cellular deployment and traffic distribution. Rural areas really suck in terms of profitability, even in fairly aggressive sharing scenarios, although multi-party (more than 2) sharing might be a way to minimize the profitability burden of deep rural coverage.

The above chart shows the relationship between traffic distribution and sites. As a rule of thumb, 50% of revenue is typically generated by 10% of all sites (i.e., in a normal legacy mobile network), and approx. 50% of (rural) sites share roughly 10% of the revenue. Note: in emerging markets the distribution is somewhat steeper, as less comprehensive rural coverage typically exists. (Source: The ABC of Network Sharing – The Fundamentals.)

Irrespective of my relative pessimism about the wider coverage utility and economics of millimeter-wave (mm-wave) based coverage, there should be no doubt that mm-wave coverage will be essential for smaller and smallest cell coverage where the density of users or applications requires extreme (in comparison to today’s demand) data speeds and capacities. Millimeter-wave coverage-based architectures offer very attractive advanced antenna solutions that will further allow for increased spectral efficiency and throughput. Also the possibility of using mm-wave point-to-multipoint connectivity as a last-mile replacement for fiber appears very attractive in rural and sub-urban clutters (and possibly beyond, if the cost of the electronics drops in line with the expected huge increase in demand). This last point is however, in my opinion, independent of 5G, as Facebook has shown with their Terragraph development (i.e., a 60 GHz WiGig-based system). A great account of mm-wave wireless communications systems can be found in T.S. Rappaport et al.’s book “Millimeter Wave Wireless Communications”, which covers not only the benefits of mm-wave systems but also the challenges. It should be noted that this topic is still a very active (and interesting) research area that is relatively far from having reached maturity.

In order to provide 100% 5G coverage for the mass market of people & things, we need to engage the traditional cellular frequency bands from 600 MHz to 3 GHz.

1 – 10 Gbps PEAK DATA RATE PER USER.

Getting gigabit-per-second speeds is going to require a lot of frequency bandwidth, highly advanced antenna systems and lots of additional cells. And that is likely going to lead to a (very) costly 5G deployment, irrespective of the anticipated reduction in unit cost, i.e., the relative cost per byte or bit-per-second.

At 1 Gbps it would take approx. 16 seconds to download a 2 GB SD movie. It would take less than a minute for the HD version (i.e., at 10 Gbps it just gets better;-). Say you have a 16GB smartphone; you lose maybe up to 20+% to the OS, leaving around 13GB for things to download. With 1 Gbps it would take less than 2 minutes to fill up your smartphone’s storage (assuming you haven’t run out of credit on your data plan or reached your data ceiling before then … of course unless you happen to be a customer of T-Mobile US, in which case you can binge on = you have no problems!).
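As a sanity check on the arithmetic, here is a minimal sketch (note: storage sizes are in gigabytes, rates in gigabits per second; the ~6 GB HD file size is my own illustrative assumption):

```python
# Download-time arithmetic from the text (GB are bytes, Gbps are bits).
def seconds_to_download(size_gb: float, rate_gbps: float) -> float:
    return size_gb * 8 / rate_gbps

print(seconds_to_download(2, 1))   # 2 GB SD movie at 1 Gbps   -> 16.0 s
print(seconds_to_download(6, 1))   # ~6 GB HD version (assumed) -> 48.0 s
print(seconds_to_download(13, 1))  # 13 GB of free storage      -> 104.0 s (< 2 min)
```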

The biggest share of broadband usage comes from video streaming, which takes up 60% to 80% of all volumetric traffic depending on country (i.e., LTE terminal penetration dependent). Providing higher speed to your customer than is required by the applied video streaming technology and the smartphone or tablet display being used seems somewhat futile to aim for. The Table below provides an overview of streaming standards, their optimal speeds and typical viewing distance for optimal experience;

Source: 5G Economics – An Introduction (Chapter 1).

So … 1 Gbps could be cool … if we deliver 32K video to our customer’s end device, i.e., 750 – 1600 Mbps optimal data rate. Though it is hard to see customers benefiting from this performance boost given current smartphone or tablet display sizes. The screen size really has to be ridiculously large to truly benefit from this kind of resolution. Of course Star Trek-like full immersion (i.e., holodeck) scenarios would arguably require a lot (=understatement) of bandwidth and even more (=beyond understatement) computing power … though such a scenario appears unlikely to be coming out of cellular devices (even in Star Trek).

1 Gbps fixed broadband plans have started to sell across Europe, typically on fiber networks, although also on DOCSIS3.1 (10 Gbps DS / 1 Gbps US) networks in a few places. It will only be a matter of time before we see 10 Gbps fixed broadband plans being offered to consumers. Even if compelling use cases are lacking, it might at least give you the bragging rights of having the biggest.

From the European Commission’s “Europe’s Digital Progress Report 2016”, 22% of European homes subscribe to fast broadband access of at least 30 Mbps. An estimated 8% of European households subscribe to broadband plans of at least 100 Mbps. It is worth noticing that this is not a coverage problem, as according to the EC’s “Digital Progress Report” around 70% of all homes are covered with at least 30 Mbps and ca. 50% are covered with speeds exceeding 100 Mbps.

The chart below illustrates the broadband speed coverage in EU28;

Even if 1 Gbps fixed broadband plans are being offered, the majority of European homes still subscribe to speeds below 100 Mbps. This possibly suggests that affordability and household economics play a role, and that the basic perceived need for speed might not (yet?) be much beyond 30 Mbps.

Most aggregation and core transport networks are designed, planned, built and operated on the assumption that customer demand is dominated by packages below 100 Mbps. As 1 Gbps and 10 Gbps plans gain commercial traction, substantial upgrades are required in aggregation, core transport and, last but not least, possibly also at the access level (to design shorter paths). It is highly likely that distances between access, aggregation and core transport elements are too long to support these much higher data rates, leading to very substantial redesigns and physical work to support this push to substantially higher throughputs.

Most telecommunications companies will require very substantial investments in their existing transport networks, all the way from access to aggregation through the optical core switching networks and out into the world wide web, to support 1 Gbps to 10 Gbps. Optical switching cards need to be substantially upgraded, and legacy IP/MPLS architectures might no longer work very well (i.e., scale & complexity issues).

Most analysts today believe that incumbent fixed & mobile broadband telecommunications companies with a reasonably modernized transport network are best positioned for 5G, compared to mobile-only operators or fixed-mobile incumbents with an aging transport infrastructure.

What about the state of LTE speeds across Europe? OpenSignal recurrently reports on the State of LTE; the following summarizes LTE speeds in Mbps as of June 2017 for EU28 (with the exception of a few countries not included in the OpenSignal dataset);

The OpenSignal measurements are based on more than half a million devices and almost 20 billion measurements over the first 3 months of 2017.

The 5G speed ambition is, by today’s standards, 10 to 30+ times away from present 2016/2017 household fixed broadband demand and the reality of provided LTE speeds.

Let us look at the cellular spectral efficiency to be expected from 5G, using the well-known framework;
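In its simplest form (a reconstruction based on the description just below, with B the bandwidth, η the spectral efficiency, and N the number of cells):

$$C\ \text{[bits/s]} \;=\; N\ \text{[cells]} \times \eta\ \text{[bits/s/Hz/cell]} \times B\ \text{[Hz]}$$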

In essence, I can provide very high data rates in bits per second by providing a lot of frequency bandwidth B, using the most spectrally efficient technologies maximizing η, and/or adding as many cells N as my economics allow for.

In the following I rely largely on Jonathan Rodriguez’s great book “Fundamentals of 5G Mobile Networks” as a source of inspiration.

The average spectral efficiency is expected to come out on the order of 10 Mbps/MHz/cell using advanced receiver architectures, multi-antenna and multi-cell transmission and cooperation. So pretty much all the high-tech goodies we have in the toolbox are being put to use, squeezing out as many bits per Hz of available spectrum as possible in a sustainable manner. Under very ideal Signal-to-Noise-Ratio conditions, massive antenna arrays of up to 64 antenna elements (i.e., an optimum) seem to indicate that 50+ Mbps/MHz/cell might be feasible at peak.

So for a spectral efficiency of 10 Mbps/MHz/cell and a demanded 1 Gbps data rate, we would need 100 MHz frequency bandwidth per cell (i.e., using the above formula). Under very ideal conditions and relatively large antenna arrays, this might lead to a spectral requirement of only 20 MHz at 50 Mbps/MHz/cell. Obviously, for a 10 Gbps data rate we would require 1,000 MHz frequency bandwidth (1 GHz!) per cell at an average spectral efficiency of 10 Mbps/MHz/cell.
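A minimal sketch of this bandwidth arithmetic (function name and values are illustrative, taken from the figures above):

```python
# Bandwidth B = C / eta needed per cell for a target data rate, given an
# assumed average spectral efficiency (values from the text).
def required_bandwidth_mhz(rate_mbps: float, eta_mbps_per_mhz: float) -> float:
    return rate_mbps / eta_mbps_per_mhz

for rate, eta in [(1_000, 10), (1_000, 50), (10_000, 10)]:
    print(f"{rate} Mbps @ {eta} Mbps/MHz/cell -> {required_bandwidth_mhz(rate, eta):,.0f} MHz")
# 1000 Mbps @ 10 -> 100 MHz; 1000 Mbps @ 50 -> 20 MHz; 10000 Mbps @ 10 -> 1,000 MHz
```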

The spectral efficiency assumed for 5G heavily depends on the successful deployment of many-element antenna arrays (e.g., massive MiMo, beam-forming antennas, …). Such fairly complex antenna deployment scenarios work best at higher frequencies, typically above 2 GHz. Such antenna systems also work better with TDD than FDD, with some margin on spectral efficiency. These advanced antenna solutions work particularly well in the millimeter-wave range (i.e., ca. 30 – 300 GHz), where the antenna elements are much smaller and antennas can be made fairly (very) compact (note: the antenna’s resonance length is proportional to half the wavelength, which is inversely proportional to the carrier frequency; thus higher frequencies need smaller material dimensions to operate).

Below 2 GHz, higher-order MiMo becomes increasingly impractical and the spectral efficiency regresses towards the limit of a simple single-path antenna, substantially lower than what can be achieved at much higher frequencies with, for example, massive MiMo.

So for the 1 Gbps to 10 Gbps data rates to work out, we have the following relatively simple rationale;

  • High data rates require a lot of frequency bandwidth (>100 MHz to several GHz per channel).
  • Lots of frequency bandwidth is increasingly easier to find at high and very high carrier frequencies (i.e., why the millimeter-wave frequency band between 30 – 300 GHz is so appealing).
  • High and very high carrier frequencies result in small, smaller and smallest cells with very high bits per second per unit area (i.e., the area is very small!).
  • High and very high carrier frequencies allow me to get the most out of higher-order MiMo antennas (i.e., with lots of antenna elements).
  • Due to the fairly limited cell range, I boost my overall capacity by adding many smallest cells (i.e., at the highest frequencies).

We need to watch out for small cell densification, which tends not to scale very well economically. The scaling becomes a particular problem when we need hundreds of thousands of such small cells, as is expected in most 5G deployment scenarios (i.e., particularly driven by the x1000 traffic increase). The advanced antenna systems required (including the computation resources needed) to max out on spectral efficiency are likely going to be one of the major causes of breaking the economic scaling, although there are many other CapEx and OpEx scaling factors to be concerned about for small cell deployment at scale.

Further, for mass market 5G coverage, as opposed to hot traffic zones or indoor solutions, lower carrier frequencies are needed. These will tend to be in the usual cellular range we know from our legacy cellular communications systems today (e.g., 600 MHz – 2.1 GHz). It should not be expected that 5G spectral efficiency will gain much above what is already possible with LTE and LTE-Advanced in this legacy cellular frequency range. Sheer bandwidth accumulation (multi-frequency carrier aggregation) and increased site density is, for the lower frequency range, a more likely 5G path. Of course mass market 5G customers will benefit from faster reaction times (i.e., lower latencies), higher availability and more advanced & higher performing services arising from the very substantial changes expected in transport networks and data centers with the introduction of 5G.

Last but not least to this story … 80% and above of all mobile broadband customers’ usage, data as well as voice, happens in very few cells (e.g., 3!) … representing their Home and Work.

Source: Slideshare presentation by Dr. Kim “Capacity planning in mobile data networks experiencing exponential growth in demand.”

As most of the mobile cellular traffic happens at home and at work (thus in most cases indoors), there are many ways to support such traffic without being concerned about the limitation of cell ranges.

The gigabit-per-second cellular service is NOT a service for the mass market, at least not in its macro-cellular form.

≤ 1 ms IN ROUND-TRIP DELAY.

A total round-trip delay of 1 millisecond or less is very much attuned to niche services. But niche services that nevertheless could be very costly for all to implement.

I am not going to address this topic too much here. It has to a great extent been addressed almost ad nauseam in 5G Economics – An Introduction (Chapter 1) and 5G Economics – The Tactile Internet (Chapter 2). I think this particular aspect of 5G is being over-hyped in comparison to how important it ultimately will turn out to be from a return-on-investment perspective.

Light travels ca. 300 km per millisecond (ms) in vacuum and approx. 210 km per ms in fiber (with some material dependency). Lately engineers have gotten really excited about the speed of light not being fast enough, and have done a lot of heavy thinking about edge this and that (e.g., computing, cloud, cloudlets, CDNs, etc…). This said, it is certainly true that most modern data centers have not been built taking into account that the speed of light might become insufficient. And should there really be a great business case for sub-millisecond total (i.e., including the application layer) round-trip times, edge computing resources would be required a lot closer to customers than is the case today.

It is common to use delay, round-trip time (or round-trip delay), and latency as meaning the same thing. Though it is always cool to make sure people really talk about the same thing, by confirming that it is indeed a round-trip rather than a one-way path. It is also worthwhile to check that everyone around the table talks about delay at the same place in the OSI stack, network path or whatever reference point has been agreed.

In the context of the 5G vision paper it is emphasized that the specified round-trip time is based on the application layer (i.e., OSI model) as reference point. It is certainly the most meaningful measure of user experience. This is defined as the End-2-End (E2E) Latency metric and measures the complete delay traversing the OSI stack from the physical layer all the way up through the network layer to the top application layer, and down again, between source and destination, including acknowledgement of a successful data packet delivery.

The 5G system shall provide 10 ms E2E latency in general and 1 ms E2E latency for use cases requiring extremely low latency.

The 5G vision paper states “Note these latency targets assume the application layer processing time is negligible to the delay introduced by transport and switching.” (Section 4.1.3 page 26 in “NGMN 5G White paper”).

In my opinion it is a very substantial mouthful to assume that the Application Layer (actually everything above the Network Layer) will not contribute significantly to the overall latency. Certainly for many applications residing outside the operator’s network borders, in the world wide web, we can expect a very substantial delay (i.e., even in comparison with 10 ms). Again, this aspect was also addressed in my first two chapters.

Very substantial investments are likely needed to meet the E2E delays envisioned in 5G. In fact, improving latency gets prohibitively more expensive as the target is lowered. Designing for 10 ms would be a lot less costly than designing for 1 ms or lower. The network design challenge, if 1 millisecond or below is required, is that it might not matter that this is only a “service” needed in very special situations; overall, the network would have to be designed for the strictest denominator.

Moreover, if remedies need to be found to mitigate likely delays above the Network Layer, distance and the insufficient speed of light might be the least of the worries in nailing this ambition (even at the 10 ms target). Of course, if all applications are moved inside the operator’s networked premises, with simpler transport paths (and yes, shorter effective distances) and distributed across a hierarchical cloud (edge, frontend, backend, etc.), the assumption of negligible delay in layers above the Network Layer might become much more likely. However, it does sound a lot like an America Online walled-garden, fast-forward-to-the-past kind of paradigm.

So with 1 ms E2E delay … yeah yeah … “play it again Sam” … relevant applications clearly need to be inside the network boundary and be either optimized for processing speed or silly & simple (i.e., negligible delay above the Network Layer), with no queuing delay (to the extent of being inefficient?), near-instantaneous transmission (i.e., negligible transmission delay) and distances likely below tens of km (i.e., very short propagation delay).

When the speed of light is too slow there are few economic options to solve that challenge.

≥ 10,000 Gbps / Km2 DATA DENSITY.

The data density is maybe not the most sensible measure around. If taken too seriously, it could lead to hyper-ultra-dense smallest-cell network deployments.

This has always been a fun one in my opinion. It can be a meaningful design metric or completely meaningless.

There is of course nothing particularly challenging in getting a very high throughput density if the area is small enough. If I have a cellular range of a few tens of meters, say 20 meters, then my cell area is roughly 1/1000 of a km2. If I have 620 MHz of bandwidth aggregated between 28 GHz and 39 GHz (i.e., both in the millimeter-wave band) with 10 Mbps/MHz/cell, I could support 6,200 Gbps/km2. That’s almost 3 Petabytes in an hour, or 10 years of 24/7 binge watching of HD videos. Note: given my spectral efficiency is based on an average value, it is likely that I could achieve substantially more throughput density, in peaks closer to the 10,000 Gbps/km2 … easily.
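The throughput density arithmetic, as a small sketch (numbers as in the paragraph above; the 1/1000 km2 cell area corresponds to the ~20 meter cell range):

```python
cell_area_km2 = 1 / 1000   # ~20 m cell range
bandwidth_mhz = 620        # aggregated between 28 GHz and 39 GHz
eta = 10                   # Mbps/MHz/cell, average spectral efficiency

cell_gbps = bandwidth_mhz * eta / 1000            # 6.2 Gbps per cell
density_gbps_km2 = cell_gbps / cell_area_km2      # 6,200 Gbps/km2
pb_per_hour = density_gbps_km2 * 3600 / 8 / 1e6   # ~2.8 PB per hour per km2
print(round(density_gbps_km2), round(pb_per_hour, 1))  # 6200 2.8
```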

Pretty Awesome Wow!

The basics; a Terabit equals 1,024 Gigabits (but I tend to ignore those last 24 … sorry).

With a traffic density of ca. 10,000 Gbps per km2, one would expect to have between 1,000 (@ 10 Gbps peak) and 10,000 (@ 1 Gbps peak) concurrent users per square km.

At 10 Mbps/MHz/cell one would expect to need 1,000 Cell-GHz/km2. Assuming we have 1 GHz of bandwidth (i.e., somewhere in the 30 – 300 GHz mm-wave range), one would need 1,000 cells per km2, on average with a cell range of about 20 meters (smaller to smallest … I guess what Nokia would call a Hyper-Ultra-Dense Network;-). Thus each cell would at minimum have between 1 and 10 concurrent users.

Just as a reminder! 1 minute at 1 Gbps corresponds to 7.5 GB. That is a bit more than what you need for an 80-minute HD (i.e., 720p) full movie stream … in 1 minute. So with your (almost) personal smallest cell, what about the remaining 59 minutes? Seems somewhat wasteful, at least until kingdom come (alas maybe sooner than that).

It would appear that the very high 5G data density target could result in very inefficient networks from a utilization perspective.

≥ 1 MN / Km2 DEVICE DENSITY.

One million 5G devices per square kilometer appears to be far far out in a future where one would expect us to be talking about 7G or even higher Gs.

1 Million devices seems like a lot, and certainly per km2. It is 1 device per square meter on average. A smallest cell with a 20-meter range would contain ca. 1,200 devices.

To give this number perspective, let’s compare it with one of my favorite South-East Asian cities, and one of the highest population densities around: Manila (Philippines). Manila has more than 40 thousand people per square km. Thus in Manila the ambition would mean about 24 devices per person, or 100+ per household. Overall, in Manila we would then expect approx. 40 million devices spread across the city (i.e., Manila has ca. 1.8 Million inhabitants over an area of 43 km2; the Philippines has a population of approx. 100 Million).
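A quick sanity check of the Manila example (population and area figures as quoted above):

```python
manila_pop, manila_area_km2 = 1_800_000, 43
devices_per_km2 = 1_000_000                      # the 5G ambition

pop_per_km2 = manila_pop / manila_area_km2       # ~42,000 people per km2
print(round(devices_per_km2 / pop_per_km2))      # ~24 devices per person
print(devices_per_km2 * manila_area_km2 / 1e6)   # 43.0 million devices citywide
```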

Just for the curious, it is possible to find other more densely populated areas in the world. However, these highly dense areas tend to cover relatively small surfaces, often much smaller than a square kilometer and with relatively few people. For example, Fadiouth Island in Senegal has a surface area of 0.15 km2 and 9,000 inhabitants, making it one of the most densely populated areas in the world (i.e., 60,000 pop per km2).

I hope I made my case! A million devices per km2 is a big number.

Let us look at it from a forecasting perspective. Just to see whether we are possibly getting close to this 5G ambition number.

IHS forecasts 30.5 Billion installed devices by 2020; IDC also believes it to be around 30 Billion by 2020. Machina Research is less bullish and projects 27 Billion, though by 2025 (IHS expects that number to be 75.4 Billion), and this forecast is from 2013. Irrespective, we are obviously in the league of very big numbers. By the way, 5G IoT, if considered at all, is only a tiny fraction of the overall projected IoT numbers (e.g., Machina Research expects 10 Million 5G IoT connections by 2024 … an extremely small number in comparison to the overall IoT projections).

A consensus number for 2020 appears to be 30±5 Billion IoT devices with lower numbers based on 2015 forecasts and higher numbers typically from 2016.

To break this number down to something more meaningful than just Big and impressive, let us first establish a couple of worldish numbers that can help us;

  • The 2020 population is expected to be around 7.8 Billion, compared to 7.4 Billion in 2016.
  • Global pop per HH is ~3.5 (an average number!), which might be marginally lower in 2020. Urban populations tend to have fewer people per household, ca. 3.0. Urban populations in so-called developed countries have ca. 2.4 pop per HH.
  • ca. 55% of world population lives in Urban areas. This will be higher by 2020.
  • Less than 20% of world population lives in developed countries (based on HDI). This is a 2016 estimate and will be higher by 2020.
  • World surface area is 510 Million km2 (including water).
  • of which ca. 150 million km2 is land area
  • of which ca. 75 million km2 is habitable.
  • of which 3% is an upper limit estimate of earth surface area covered by urban development, i.e., 15.3 Million km2.
  • of which approx. 1.7 Million km2 comprises developed regions urban areas.
  • ca. 37% of all land-based area is agricultural land.

Using 30 Billion IoT devices by 2020 is equivalent to;

  • ca. 4 IoT devices per person (world average).
  • ca. 14 IoT devices per household (world average).
  • ca. 200 IoT devices per km2 of all land-based surface area.
  • ca. 2,000 IoT devices per km2 of all urban developed surface area.

If we limit IoT devices in 2020 to developed countries, which rightly or wrongly excludes China, India and large parts of Latin America, we get the following by 2020 (the arithmetic is sketched after the list);

  • ca. 20 IoT devices per developed-country person.
  • ca. 50 IoT devices per developed-country household.
  • ca. 18,000 IoT devices per km2 of developed-country urbanized area.
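A minimal sketch reproducing these breakdowns (inputs are the worldish numbers listed earlier; the developed-country population share is taken at its ~20% upper bound):

```python
iot = 30e9                                   # consensus 2020 IoT device count
world_pop, hh_size = 7.8e9, 3.5
land_km2, urban_km2 = 150e6, 15.3e6

print(round(iot / world_pop))                # ~4 per person (world)
print(round(iot / (world_pop / hh_size)))    # ~13-14 per household (world)
print(round(iot / land_km2))                 # 200 per km2 of land
print(round(iot / urban_km2))                # ~2,000 per km2 of urban area

dev_pop, dev_hh_size, dev_urban_km2 = 0.2 * world_pop, 2.4, 1.7e6
print(round(iot / dev_pop))                  # ~20 per developed-country person
print(round(iot / (dev_pop / dev_hh_size)))  # ~50 per developed-country HH
print(round(iot / dev_urban_km2))            # ~18,000 per km2 developed urban
```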

Given that it would make sense to include larger areas and populations of China, India and Latin America, the above developed-country numbers are bound to be (a lot) lower per pop, HH and km2. If we include agricultural land, the number of IoT devices per km2 goes down further.

So far far away from a Million IoT per km2.

What about parking spaces? For sure IoT will add up when we consider parking spaces!? … Right? Well, in Europe you will find that most big cities have between 50 and 200 (public) parking spaces per square kilometer (e.g., ca. 67 per km2 for Berlin and 160 per km2 in Greater Copenhagen). Aha, not really adding up to the Million IoT per km2 … what about cars?

In EU28 there are approx. 256 Million passenger cars (2015 data) over a population of ca. 510 Million pops (or ca. 213 million households). So a bit more than 1 passenger car per household on EU28 average. In EU28 approx. 75+% live in urban areas, which comprise ca. 150 thousand square kilometers (i.e., 3.8% of EU28’s 4 Million km2). So one would expect little more (if not a little less) than 1,300 passenger cars per km2. You may say … aha, but it is not fair … you don’t include motor vehicles used for work … well, that is an exercise for you (to convince yourself why it doesn’t really matter too much, and with my royal rounding of numbers it is maybe already accounted for). Also consider that many major EU28 cities with good public transportation have significantly fewer cars per household or person than the average would allude to.

Surely, public street lights will make it through? Nope! A typical bigger modern developed-country city will have on average approx. 85 street lights per km2, although it varies from 0 to 1,000+. Light bulbs per residential household (from a 2012 study of the US) range from 50 to 80+. In developed countries we have roughly 1,000 households per km2, and thus we would expect between 50 thousand and 80+ thousand light bulbs per km2. Shops and businesses would add to this number.

With a compound annual growth rate (CAGR) of ca. 22% it would take 20 years (from 2020) to reach a Million IoT devices per km2, if we have 20 thousand per km2 by 2020. With a 30% CAGR it would still take 15 years (from 2020) to reach a Million IoT devices per km2.
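The compounding arithmetic behind those year counts, as a sketch:

```python
import math

def years_to_target(start: float, target: float, cagr: float) -> float:
    # n such that start * (1 + cagr)^n = target
    return math.log(target / start) / math.log(1 + cagr)

print(round(years_to_target(20_000, 1_000_000, 0.22)))  # ~20 years
print(round(years_to_target(20_000, 1_000_000, 0.30)))  # ~15 years
```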

The current IoT projections of 30 Billion IoT devices in operation by 2020 do not appear unrealistic when broken down to a household or population level in developed areas (and are even less ambitious on a worldwide level). The 18,000 IoT devices per km2 of developed urban surface area by 2020 does appear somewhat ambitious. However, if we were to include agricultural land, the number would possibly become more reasonable.

If you include street crossings, traffic radars, city-based video monitoring (e.g., London has approx. 300 cameras per km2, Hong Kong ca. 200 per km2), city-based traffic sensors, environmental sensors, etc., you are going to get to sizable numbers.

However, 18,000 per km2 in urban areas appears somewhat of a challenge. Getting to 1 Million per km2 … hmmm … we will see around 2035 to 2040 (I have added an internet reminder for a check-in by 2035).

Maybe the 1 Million Devices per km2 ambition is not one of the most important 5G design criteria for the short term (i.e., the next 10 – 20 years).

Oh, and most IoT forecasts from the period 2015 – 2016 do not really include 5G IoT devices in particular. The chart below illustrates Machina Research’s IoT forecast for 2024 (from August 2015). In a more recent forecast from 2016, Machina Research predicts that by 2024 there would be ca. 10 million 5G IoT connections, or 0.04% of the total number of forecasted connections;

The winner is … IoT devices using WiFi or other short-range communications protocols. Obviously, the cynic in me (mea culpa) would say that mm-wave-based 5G connections can also be characterized as short range … so there might be a very interesting replacement market there for 5G IoT … maybe? 😉

Expectations for 5G-based IoT do not appear very impressive, at least over the next 10 years and possibly beyond.

The unimportance of 5G IoT should not be a great surprise, given that most 5G deployment scenarios focus on millimeter-wave smallest-cell coverage, which is not good for comprehensive coverage of IoT devices that are not limited to the very special 5G coverage situations being thought about today.

Only operators focusing on comprehensive 5G coverage, re-purposing lower carrier frequency bands (i.e., 1 GHz and lower), can possibly expect to gain a reasonable (as opposed to niche) 5G IoT business. T-Mobile US, with their 600 MHz 5G strategy, might very well be uniquely positioned to take a large share of the future-proof IoT business across the USA. Though they are also pretty uniquely positioned for NB-IoT with their comprehensive 700 MHz LTE coverage.

For 5G IoT to be meaningful (at scale), the conventional macro-cellular networks need to be in play for 5G coverage … certainly 100% 5G coverage will be a requirement. Although, even with 5G, there may be 100s of Billions of non-5G IoT devices that require coverage and management.

≤ 500 km/h SERVICE SUPPORT.

Sure, why not? But why not faster than that? At hyperloop or commercial passenger airplane speeds, for example?

Before we get all excited about Gbps speeds at 500 km/h, it should be clear that the 5G vision paper only proposes speeds between 10 Mbps and 50 Mbps at such velocities (actually allowed to regress down to 50 kilobits per second), with 200 Mbps for broadcast-like services.

So in general, this is a pretty reasonable requirement. Maybe with the 200 Mbps for broadcast services being somewhat head-scratching, unless the vehicle is one big 16K screen. Although the user’s proximity to such a screen does not guarantee an ideal 16K viewing experience, to say the least.

What moves so fast?

The fastest train today tracks at ca. 435 km/h (Shanghai Maglev, China).

Typical cruising airspeed for a long-distance commercial passenger aircraft is approx. 900 km/h. So we might not be able to provide the best 5G experience in commercial passenger aircraft … unless we solve that with an in-plane communications system, rather than trying to provide Gbps speeds by external coverage means.

Why take a plane when you can jump on the local Hyperloop? The proposed Hyperloop should track at an average speed of around 970 km/h (similar to or faster than commercial passenger aircraft), with a top speed of 1,200 km/h. So if you happen to be traveling between LA and San Francisco in 2020+, you might not be able to get the best 5G service possible … what a bummer! This is clearly an area where the vision did not look far enough.

Providing services to things moving at relatively high speed does require reasonably good coverage. Whether it be a train track, a hyperloop tunnel or ground-to-air coverage of commercial passenger aircraft, new coverage solutions would need to be deployed. Alternatively, in-vehicle coverage solutions providing the perception of a 5G experience might turn out to be more economical.

The speed requirement is a very reasonable one, particularly for train coverage.

50% TOTAL NETWORK ENERGY REDUCTION.

If 5G development delivers on this ambition, we are talking about savings of around 10 Billion US Dollars (for the cellular industry), equivalent to a percentage point of margin.

There are two aspects of energy efficiency in a cellular based communication system.

  • User equipment will benefit from longer intervals without charging, improving customer experience and overall saving energy from less frequent charging.
  • Network infrastructure energy consumption savings will directly and positively impact a telecom operator’s EBITDA.

Energy efficient Smartphones

The first aspect, user equipment, is addressed by the 5G vision paper under “4.3 Device Requirements”, sub-section “4.3.3 Device Power Efficiency”: “Battery life shall be significantly increased: at least 3 days for a smartphone, and up to 15 years for a low-cost MTC device.” (note: MTC = Machine Type Communications).

Apple’s iPhone 7 battery life (on a full charge) is around 6 hours of constant use, with the 7 Plus beating that by ca. 3 hours (i.e., 9 hours total). So 3 days would go a long way.

From a recent 2016 survey by Ask Your Target Market on smartphone consumers’ requirements for battery lifetime and charging times;

  • 64% of smartphone owners said they are at least somewhat satisfied with their phone’s battery life.
  • 92% of smartphone owners said they consider battery life to be an important factor when considering a new smartphone purchase.
  • 66% said they would even pay a bit more for a cell phone that has a longer battery life.

Looking at mobile smartphone & tablet non-voice consumption, it is also clear why battery lifetime and, not unimportantly, charging time matter;

Source: eMarketer, April 2016. While the 2016 and 2017 figures are eMarketer forecasts (hence the dotted line and red circle!), these do appear well in line with other more recent measurements.

Non-voice smartphone & tablet based usage is expected by now to exceed 4 hours (240 minutes) per day on average for US Adults.

That longer battery lifetimes are needed among smartphone consumers is clear from the sales figures and anticipated sales growth of smartphone power banks (or battery chargers) boosting the lifetime by several more hours.

It is however unclear whether the 3 days of 5G smartphone battery lifetime are supposed to hold under active usage conditions or just in idle mode. Obviously, in order to matter materially to the consumer, one would expect this vision to apply to active usage (i.e., 4+ hours a day at 100s of Mbps to 1 Gbps operation).

Energy efficient network infrastructure.

The 5G vision paper defines energy efficiency as number of bits that can be transmitted over the telecom infrastructure per Joule of Energy.

The total energy cost, i.e., operational expense (OpEx), of a telecommunications network can be considerable. Despite our mobile access technologies having become more energy efficient with each generation, the total OpEx of energy attributed to the network infrastructure has in general increased over the last 10 years. The growth in telco infrastructure-related energy consumption has been driven by consumer demand for broadband services, mobile and fixed, including the incredible increase in data center computing and storage requirements.

In general, the power consumption OpEx share of total technology cost amounts to 8% to 15% (i.e., for telcos without heavy reliance on diesel). The general assumption is that with regular modernization, energy efficiency gains in newer electronics can keep the growth in energy consumption to a minimum, compensating for increased broadband and computing demand.

Note: Technology OpEx (including NT & IT) on average lies between 18% and 25% of total corporate telco OpEx. Out of the technology OpEx, between 8% and 15% (max) can typically be attributed to telco infrastructure energy consumption. The access & aggregation contribution to the energy cost typically runs towards 80% plus. Data centers are expected to increasingly contribute to the power consumption and cost as well. Deep diving into the access equipment power consumption, ca. 60% can be attributed to rectifiers and amplifiers, 15% to the DC power system & miscellaneous, and another 25% to cooling.

The 5G vision paper is very bullish in its requirement to reduce the total energy and its associated cost; it states: “5G should support a 1,000 times traffic increase in the next 10 years timeframe, with an energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency of x2,000 in the next 10 years timeframe.” (sub-section “4.6.2 Energy Efficiency”, NGMN 5G White Paper).

This requirement would mean that in a pure 5G world (i.e., all traffic on 5G), the power consumption arising from the cellular network would be 50% of what is consumed today. In 2016 terms, the mobile-based OpEx saving would be in the order of 5 Billion to 10+ Billion US$ annually. This would be equivalent to 0.5% to 1.1% margin improvement globally (note: using GSMA 2016 Revenue & Growth data and Pyramid Research forecasts). If energy prices increase over the next 10 years, the savings / benefits would of course be proportionally larger.
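The x2,000 follows directly from the two stated multipliers:

```python
traffic_multiplier = 1_000   # x1,000 traffic increase over 10 years
energy_multiplier = 0.5      # at half of today's network energy consumption
print(traffic_multiplier / energy_multiplier)  # 2000.0x gain in bits per Joule
```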

As we have seen above, it is reasonable to expect a very considerable increase in cell density as broadband traffic demand increases, given the peak bandwidth (i.e., 1 – 10 Gbps) and traffic density (i.e., 10 Tbps, or 10,000 Gbps, per km2) expectations.

Depending on the demanded traffic density, spectrum and carrier frequency available for 5G between 100 to 1,000 small cell sites per km2 could be required over the next 10 years. This cell site increase will be required in addition to existing macro-cellular network infrastructure.

Today (in 2017) an operator in an EU28-sized country may have between ca. 3,500 and 35,000 cell sites, with approx. 50% covering rural areas. Many analysts expect that for medium-sized countries (e.g., with 3,500 – 10,000 macro-cellular sites), operators would eventually have up to 100,000 small cells under management in addition to their existing macro-cellular sites. Most of those 5G small cells, and many of the 5G macro-sites we will have over the next 10 years, are also going to have advanced massive-MiMo antenna systems with many active antenna elements per installed base antenna, requiring substantial computing to gain maximum performance.

It appears with today’s knowledge extremely challenging (to put it mildly) to envision a 5G network consuming 50% of today’s total energy consumption.

It is highly likely that the 5G radio node electronics in a small cell environment (and maybe also in a macro-cellular environment?) will consume fewer Joules per delivered bit (per second) due to technology advances and less transmit power required (i.e., it’s a small or smallest cell). However, this power-efficiency gain from technology and network cellular architecture can very easily be destroyed by the massive additional demand of small, smaller and smallest cells, combined with highly sophisticated antenna systems consuming additional energy for the compute operations that make such systems work. Furthermore, we will see operators increasingly providing sophisticated data center resources for network operations as well as for the customers they serve. If the speed of light is insufficient for some services or country geographies, additional edge data centers will be introduced, also leading to an increased energy consumption not present in today’s telecom networks. Increased computing and storage demand will also make the absolute efficiency requirement highly challenging.

Will 5G be able to deliver bits (per second) more efficiently … Yes!

Will 5G be able to reduce the overall power consumption of today’s telecom networks by 50% … highly unlikely.

In my opinion, the industry will have done a pretty good technology job if we can keep the existing energy cost at the level of today (even allowing for unit price increases over the next 10 years).

The total power reduction of our telecommunications networks will be one of the most important 5G development tasks, as the industry cannot afford a new technology that results in vast amounts of incremental absolute cost. Great relative cost doesn’t matter if it results in above-and-beyond total cost.

≥ 99.999% NETWORK AVAILABILITY & DATA CONNECTION RELIABILITY.

A network availability of 5Ns across all individual network elements and over time corresponds to less than a second a day of downtime anywhere in the network. Few telecom networks are designed for that today.

5 Nines (5N) is a great aspiration for services and network infrastructures. It also tends to be fairly costly and likely to raise the level of network complexity. Although in the 5G world of heterogeneous networks … well, it is already complicated.

5N Network Availability.

From a network and/or service availability perspective, it means that over the course of a day your service should not experience more than 0.86 seconds of downtime. Across a year, the total downtime should be no more than 5 minutes and 16 seconds.
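The downtime budget at five nines, sketched:

```python
availability = 0.99999
down = 1 - availability
print(round(down * 24 * 3600, 2))        # 0.86 seconds per day
print(round(down * 365.25 * 24 * 3600))  # ~316 s per year ≈ 5 min 16 s
```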

The way 5N Network Availability is defined is: “The network is available for the targeted communications in 99.999% of the locations where the network is deployed and 99.999% of the time.” (from “4.4.4 Resilience and High Availability”, NGMN 5G White Paper).

Thus, in a 100,000-cell network only 1 cell is allowed to experience downtime, and for no longer than a second a day.

It should be noted that not many networks today come even close to this kind of requirement. Certainly in countries with frequent long power outages and limited ancillary backup (i.e., battery and/or diesel), this could be a very costly design requirement. Networks relying on weather-sensitive microwave radios for backhaul, or on mm-wave frequencies for 5G coverage, would be required to design in a very substantial amount of redundancy to keep such high geographical & time availability requirements.

In general, designing a cellular access network for this kind of 5N availability could be fairly to very costly (i.e., CapEx could easily run up to several percentage points of revenue).

One way out, from a design perspective, is to rely on hierarchical coverage. Thus, for example, if a small cell environment is unavailable (=down!), the macro-cellular network (or overlay network) continues the service, although at a lower service level (i.e., lower or much lower speed compared to the primary service). As also suggested in the vision paper, making use of self-healing network features and other real-time measures is expected to further increase the network infrastructure availability. This is also what one may define as Network Resilience.

Nevertheless, the “NGMN 5G White Paper” allows for operators to define the level of network availability appropriate from their own perspective (and budgets I assume).

5N Data Packet Transmission Reliability.

The 5G vision paper defines Reliability as “… amount of sent data packets successfully delivered to a given destination, within the time constraint required by the targeted service, divided by the total number of sent data packets.” (“4.4.5 Reliability” in “NGMN 5G White Paper”).

It should be noted that the 5N specification addresses specific use cases or services for which such reliability is required, e.g., mission-critical communications and ultra-low latency services. 5G allows for a very wide range of data connection reliability levels. Whether the 5N reliability requirement will lead to substantial investments, or can be managed within the overall 5G design and architectural framework, might depend on the amount of traffic requiring 5Ns.

The 5N data packet transmission reliability target would impose stricter network design. Whether this requirement would result in substantial incremental investment and cost is likely dependent on the current state of existing network infrastructure and its fundamental design.

 

5G Economics – An Introduction (Chapter 1)

After 3G came 4G. After 4G comes 5G. After 5G comes 6G. The Shrivatsa of Technology.

This blog, “5G Economics – An Introduction” (the first in a series of blogs dedicated to 5G over the next months), has been a very long undertaking. In the making since 2014, adding and then deleting as I changed my opinion and then changed it again. The NGMN Alliance “NGMN 5G White Paper” (hereafter the NGMN whitepaper) by Rachid El Hattachi & Javan Erfanian has been both a source of great visionary inspiration as well as a source of great worry when it comes to the economic viability of their vision. Some of the 5G ideas and aspirations are truly moonshot in nature and would make the Singularity University very proud.

So what is the 5G Vision?

“5G is an end-to-end ecosystem to enable a fully mobile and connected society. It empowers value creation towards customers and partners, through existing and emerging use cases, delivered with consistent experience, and enabled by sustainable business models.” (NGMN 5G Vision, NGMN 5G whitepaper).

The NGMN 5G vision is not limited to enhancement of the radio/air interface (although that is the biggest cost & customer experience factor). 5G seeks to capture the complete end-2-end telecommunications system architecture and its performance specifications. This is an important difference from the past focus primarily on air-interface improvements (e.g., 3G, HSPA, LTE, LTE-adv) and relatively modest evolutionary changes to the core network architecture (PS CN, EPC). In particular, the 5G vision provides architectural guidance on the structural separation of hardware and software. Furthermore, it utilizes the latest developments in software-defined telecommunications functionality enabled by cloudification and virtualization concepts known from modern state-of-the-art data centers. The NGMN 5G vision has most likely accepted more innovation risk than in the past, as well as being substantially more ambitious in both its specifications and the associated benefits.

“To boldly go where no man has gone before”

In the following, I encourage the reader to always keep in the back of your mind: “It is far easier to criticize somebody’s vision than it is to come up with the vision yourself”. I have tons of respect for the hard and intense development work that so far has been channeled into making the original 5G vision into a deployable technology that will contribute meaningfully to customer experience and the telecommunications industry.

For many of the concerns expressed in this blog and in other critiques, it is not that those concerns have not been considered in the NGMN whitepaper and 5G vision, but more that those points are not getting much attention.

The cellular “singularity”, 5G that is, is supposed to hit us by 2020. In only four years. Americans, and maybe others, taking names & definitions fairly lightly, might already have “5G” (à l’Américaine) a couple of years before the real thing is around.

The 5G Vision is a source of great inspiration. The 5G vision will require (and is requiring) a lot of innovation effort and research & development to actually deliver on what for the most part are very challenging improvements over LTE.

My own main points of concern are in particular towards the following areas;

  • Obsession with very high sustainable connection throughputs (> 1 Gbps).
  • Extremely low latencies (1 ms and below).
  • Too little (to no) focus on controlling latency variation (e.g., jitter), which might be of even greater importance than very low latency (<<10 ms) in its own right. I term this network predictability.
  • Too strong focus on frequencies above 3 GHz in general and in particular the millimeter wave range of 30 GHz to 300 GHz.
  • Backhaul & backbone transport transformation needed to support the 5G quantum leap in performance has been largely ignored.
  • Relatively weak on fixed-mobile convergence.

It is not so much whether some of the above points are important or not … they are of course important. Rather, it is a question of whether the prioritization and focus are right. A question of channeling more effort into very important (IMO) key 5G success factors, e.g., transport, convergence and designing 5G for the best user experience (and infinitely faster throughput per user is not the answer), ensuring the technology is relevant for all customers and not only the ones who happen to be within coverage of a smallest cell.

Not surprisingly, the 5G vision is very mobile-system centric. There is too little attention to fixed-mobile convergence and to the transport solutions (backhaul & backbone) that will enable the very high air-interface throughputs to be carried through the telecom network. This is also not very surprising, as most mobile folks historically did not have to worry too much about transport, at least in mature advanced markets (i.e., the solutions needed were there without innovation and R&D efforts).

However, this is a problem. The transport upgrade required to support the 5G promises is likely to be very costly. The technology economics and affordability aspects of what is proposed are still very much work in progress. It is speculated that new business models and use cases will be enabled by 5G. So far little has been done to quantify those opportunities and see whether they can justify some of the incremental cost that operators will surely incur as they deploy 5G.

CELLULAR CAPACITY … IT WORKS FOR 5G TOO!

Creating more cellular capacity, measured in throughput, is easy, or can be made so with a bit of approximation. “All” we need is an amount of frequency bandwidth in Hz, an air-interface technology that allows us to efficiently carry a certain amount of information in bits per second per unit bandwidth per capacity unit (i.e., what we call spectral efficiency), and a number of capacity units or multipliers, which for a cellular network is the radio cell. The most challenging parameter in this game is the spectral efficiency, as it is governed by the laws of physics with a hard limit (actually, silly me … bandwidth and capacity units obviously are as well), while a much greater degree of freedom governs the amount of bandwidth and of course the number of cells.

 

Spectral efficiency is bounded by Shannon’s law (for the studiously inclined I recommend his 1948 paper “A Mathematical Theory of Communication”). The consensus is that we are very close to the Shannon limit in terms of the spectral efficiency (bits per second per Hz) of the cellular air-interface itself. Thus we are dealing with diminishing returns on what can be gained by further improving error correction, coding and single-input single-output (SISO) antenna technology.
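For reference, the single-link Shannon–Hartley capacity limit is

$$C = B \log_2\left(1 + \mathrm{SNR}\right)$$

with C the capacity in bits per second, B the bandwidth in Hz and SNR the signal-to-noise ratio; the spectral efficiency η = C/B in bits/s/Hz is what air-interface design tries to approach.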

I could throw more bandwidth at the capacity problem (i.e., the reason for the infatuation with the millimeter wave frequency range as there really is a lot available up there at 30+ GHz) and of course build a lot more cell sites or capacity multipliers (i.e., definitely not very economical unless it results in a net positive margin). Of course I could (and most likely will if I had a lot of money) do both.

I could also try to be smart about the spectral efficiency and Shannon’s law. If I can reduce the need for, or even avoid, building more capacity multipliers or cell sites by increasing my antenna system complexity, the result is likely very favorable economics. It turns out that multiple antennas act as a multiplier (simplistically put) for the spectral efficiency compared to a simple single (or legacy) antenna system. Thus, the way to improve spectral efficiency inevitably leads us to substantially more complex antenna technologies (e.g., higher-order MiMo, massive MiMo, etc…).

Building new cell sites or capacity multipliers should always be the last resort, as it is most likely the least economical option available to boost capacity.

Thus we should be committing increasingly more bandwidth (i.e., 100s – 1,000s of MHz and beyond), assuming it is available (i.e., if not, we are back to adding antenna complexity and more cell sites). The need for very large bandwidths, in comparison with what is deployed in today’s cellular systems, automatically forces the choice into high frequency ranges, i.e., >3 GHz and into the millimeter-wave range above 30 GHz. The higher frequency bands lead inevitably to limited coverage and a high to massive demand for small cell deployment.

Yes! It’s a catch-22 if there ever was one. A higher carrier frequency increases the likelihood of more available bandwidth. A higher carrier frequency also reduces the size of our advanced complex antenna system (which is good). Both boost capacity to no end. However, the coverage area where I have engineered the capacity boost shrinks approximately with the square of the carrier frequency.
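A sketch of that scaling, assuming the 1/f² rule of thumb quoted above and a (hypothetical) 2.1 GHz macro-cellular reference carrier:

```python
ref_ghz = 2.1  # legacy macro-cellular reference carrier (illustrative)
for f_ghz in [3.5, 28, 39, 60]:
    rel_area = (ref_ghz / f_ghz) ** 2  # coverage area ~ 1/f^2 (rule of thumb)
    print(f"{f_ghz:>4.1f} GHz -> {rel_area:.1%} of the {ref_ghz} GHz cell area")
# 3.5 GHz -> 36.0%; 28 GHz -> 0.6%; 39 GHz -> 0.3%; 60 GHz -> 0.1%
```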

Clearly, ubiquitous 5G coverage at those high frequencies (i.e., >3 GHz) would be a very silly endeavor (to put it nicely) and very uneconomical.

5G, as long as the main frequency deployed is in the high or very high frequency regime, would remain a niche technology. Irrelevant to a large proportion of customers and use cases.

5G needs to be macro cellular focused to become relevant for all customers and economically beneficial to most use cases.

THE CURIOUS CASE OF LATENCY.

The first time I heard about the 5G 1 ms latency target (communicated with a straight face and lots of passion), my reaction was to ROFL. Not a really mature reaction (mea culpa), and agreed, many might have had the same reaction when J.F. Kennedy announced plans to put a man on the moon and safely back on Earth within 10 years. So my apologies for having had a good laugh (likely not the last laugh in this matter, though).

In Europe, the average LTE latency is around 41±9 milliseconds, measured by pinging a server external to the network; it does not, for example, include the additional time it takes to load a web page or start a video stream. The (super) low latency target (1 ms and below) poses challenges that are at least relevant to the air-interface, and is a reasonable justification for working on a new air-interface (apart from studying channel models in the higher frequency regime). The best latency, internal to the mobile network itself, you can hope to get out of “normal” LTE as commercially deployed is slightly below 20 ms (without considering re-transmission). For pre-allocated LTE this can be further reduced towards 10 ms (again without considering re-transmission, which adds at least 8 ms). In 1 ms light travels ca. 200 km (in optical fiber). To support use cases requiring 1 ms End-2-End latency, all transport & processing would have to be kept inside the operator’s network. Clearly, the physical transport path to the location where processing of the transported data occurs would need to be very short to guarantee 1 ms. The relative 5G latency improvement over LTE would need to be (much) better than 10 times (LTE pre-allocated) to 20 times (scheduled “normal” LTE), ignoring re-transmission (which would only make the challenge bigger).

An example: say the 5G standardization folks get the air-interface latency down to 0.5 ms (vs. the ~10 – 20 ms of today). Then the 5G processing node (i.e., data center) cannot be more than 50 km away from the 5G radio cell (i.e., it takes light ca. 0.5 ms to travel 100 km in fiber, i.e., 50 km out and 50 km back). This latency (budget) challenge has led the Telco industry to talk about the need for so-called edge computing and edge data centers to deliver the 5G promise of very low latencies. Remember, this opposes the past Telco trend of increasing centralization of computing & data processing resources. Moreover, it is bound to lead to incremental cost. Thus, show me the revenues.
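The latency budget above is simple enough to put in a few lines of Python. This is a back-of-envelope sketch under the text’s assumptions (0.5 ms consumed by the air-interface, light at roughly 200 km per ms in fiber, processing time at the node ignored):

```python
# Back-of-envelope latency budget: how far away can the processing node sit?
# Assumes ~200 km per millisecond for light in optical fiber and ignores
# any processing time at the node itself.
C_FIBER_KM_PER_MS = 200.0

def max_one_way_km(e2e_budget_ms: float, air_interface_ms: float) -> float:
    """One-way fiber distance left after the radio leg (round trip halves it)."""
    transport_ms = e2e_budget_ms - air_interface_ms
    return transport_ms * C_FIBER_KM_PER_MS / 2

print(max_one_way_km(1.0, 0.5))   # -> 50.0 km ... the edge data center case
print(max_one_way_km(10.0, 0.5))  # -> 950.0 km ... a far more relaxed target
```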

There is no doubt that small, smaller and smallest 5G cells will be essential for providing the very lowest latencies, and the smallness comes for “free” given the very high frequencies planned for 5G. The radio environment of a small cell is also closer to ideal than the harsh macro-cellular environment, minimizing the likelihood of re-transmission events. And distances are shorter, which helps as well.

I believe that converged telecommunications operators are in a better position (particularly compared to mobile-only operations) to leverage existing fixed infrastructure for a 5G architecture relying on edge data centers to provide very low latencies. However, this will not come for free and without incremental costs.

How much faster is fast enough from a customer experience perspective? According to John Carmack, CTO of Oculus Rift, “.. when absolute delays are below approximately 20 milliseconds they are generally imperceptible.”, particularly as it relates to 3D systems and the VR/AR user experience, which is a lot more dynamic than watching content loading. Recent research specific to website response times indicates that anything below 100 ms will be perceived as instantaneous. At 1 second users will sense the delay, though the experience would still be perceived as seamless. If a web page loads in more than 2 seconds, user satisfaction levels drop dramatically and a user would typically bounce. Please do note that most of this response or download time overhead has very little to do with connection throughput, but with a host of other design and configuration issues. Cranking up the bandwidth will not per se solve poor browsing performance.

End-2-End latencies in the order of 20 ms are very important for a solid, high-quality VR user experience. However, to meet this kind of performance figure the VR content needs to be within the confines of the operator’s own network boundaries.

End-2-End (E2E) latencies of less than 100 ms would in general be perceived as instantaneous for normal internet consumption (e.g., social media, browsing, …). However, this still implies that operators will have to develop internal network latencies far below the overall 100 ms target, and that, due to the externalities of the internet, they might try to get content inside their networks (and into their own data centers).

A 10-ms latency target, while much less of a moonshot, would be a far more economical target to strive for and might avoid the substantial incremental cost of edge computing center deployments. It also resonates well with the 20 ms mentioned above, required for a great VR experience (leaving some computing and processing overhead).

The 1-ms vision could be kept for use cases involving very short distances, a highly ideal radio environment, and compute sitting pretty much on top of whatever needs this performance, e.g., industrial plants, logistics / warehousing, …

Finally, the targeted extreme 5G speeds will require very substantial bandwidths. Such large bandwidths are readily available in the high frequency ranges (i.e., >3 GHz), and the high frequency domain makes a lot of 5G technology challenges easier to cope with. However, cell ranges will be (very) limited in comparison to macro-cellular ones, e.g., Barclays Equity Research projects 10 times more cells will be required for 5G (10x!). 5G coverage will then not match that of the macro-cellular (LTE) network, in which case 5G will remain niche, with a lot less relevance to consumers. Obviously, 5G will have to jump the speed divide (a very substantial divide) to the macro-cellular network to become relevant to the mass market. Little thinking appears to be spent on this challenge currently.

THE VERY FINE ART OF DETECTING MYTH & BALONEY.

Carl Sagan, in his great article The Fine Art of Baloney Detection, states that one should “Try not to get overly attached to a hypothesis just because it’s yours.”. Although Carl Sagan starts out discussing the nature of religious belief and the expectations of an afterlife, much of his “Baloney Detection Kit” applies equally well to science & technology, in particular to our expert expectations of consumer demand. After all, isn’t Technology in some respects our new modern-day religion?

Some might have the impression that the expectations towards 5G are the equivalent of a belief in an afterlife, or maybe more accurately a resurrection of the Telco business model to its past glory. It is almost like a cosmic event, where after entropy death, the big bang gives birth to new revenue streams, supposedly unique (& exclusive) to our Telco industry, that will make all alright (again). There clearly is some hype involved in current expectations towards 5G, although the term still has to enter the Gartner hype cycle report (maybe 2017 will be the year?).

The cynic (mea culpa) might say that it is inevitable that there will be a 5G after 4G (that came after 3G (that came after 2G)). We would also expect 5G to be (a lot) better than 4G (that was better than 3G, etc..).

so …

Well … Better for whom? … Better for Telcos? Better for Suppliers? Better revenues? Their Shareholders? Better for our Consumers? Better for our Society? Better for (engineering) job security? … Better for Everyone and Everything? Wow! Right? … What does better mean?

  • Better speed … Yes! … Actually the 5G vision gives me insanely better speeds than LTE does today.
  • Better latency … Internal to the operator’s own network, Yes! … Not by default noticeable for most consumer use cases relying on the externalities of the internet.
  • Better coverage … well if operators can afford to provide 100% 5G coverage then certainly Yes! Consumers would benefit even at a persistent 50 Mbps level.
  • Better availability …I don’t really think that Network Availability is a problem for the general consumer where there is coverage (at least not in mature markets, Myanmar absolutely … but that’s an infrastructure problem rather than a cellular standard one!) … Whether 100% availability is noticeable or not will depend a lot on the starting point.
  • Better (in the sense of more) revenues … Work in Progress!
  • Better margins … Only if incremental 5G cost to incremental 5G revenue is positive.
  • etc…

Recently William Webb published a book titled “The 5G Myth: And why consistent connectivity is a better future” (reminder: a myth is a belief or set of beliefs, often unproven or false, that have accrued around a person, phenomenon, or institution). William Webb argues;

  • 5G vision is flawed and not the huge advance in global connectivity as advertised.
  • The data rates promised by 5G will not be sufficiently valued by the users.
  • The envisioned 5G capacity demand will not be needed.
  • Most operators can simply not afford the cost required to realize 5G.
  • Technology advances are insufficient to realize the 5G vision.
  • Consistent connectivity is the more important aim of a 5G technology.

I recommend all to read William Webb’s well-written and even better argued book. It is one of the first more official critiques of the 5G Vision. Some of the points certainly should have us pause and maybe even re-evaluate 5G priorities. If anything, it helps to sharpen 5G arguments.

Despite William Webb’s critique of 5G, one needs to realize that a powerful technology vision of what 5G could be, even if very moonshot, does leapfrog the innovation needed to take a given technology to a substantially higher level than might otherwise be the case. If the 5G whitepaper by Rachid El Hattachi & Javan Erfanian had “just” been about better & consistent coverage, we would not have had the same technology progress, independent of whether the ultimate 5G end game is completely reachable or not. Moreover, to be fair to the NGMN whitepaper, it is not that the whitepaper does not consider consistent connectivity; it very much does. It is more a matter of where the main attention of the industry lies at this moment. That attention is not on consistent connectivity but much more on niche use cases (i.e., ultra-high bandwidth at ultra-low latencies).

Rest assured, over the next 10 to 15 years we will see whether William Webb ends up in the same category as other very smart, in-the-know people who got their technology predictions proven wrong (e.g., IBM Chairman Thomas Watson’s famous 1943 quote that “… there is a world market for maybe five computers.”, and NO! despite claims to the contrary, Bill Gates never said “640K of memory should be enough for anybody.”).

Another very worthy 5G analysis, also from 2016, is the Barclays Equity Research paper “5G – A new Dawn” (September 2016). The Barclays 5G analysis concludes;

  • Mobile operators will need 10x more sites over the next 5 to 10 years driven by 5G demand.
  • There will be a strong demand for 5G high capacity service.
  • The upfront cost for 5G will be very substantial.
  • The cost of data capacity (i.e., Euro per GB) will fall by approx. a factor 13 between LTE and 5G (note: this is “a bit” of an economic problem when capacity is supposed to increase by a factor 50).
  • Sub-scale Telcos, including mobile-only operations, may not be able to afford 5G (note: this point, if true, should make the industry very alert towards regulatory actions).
  • Having a modernized, super-scalable fixed broadband transport network is likely to be a 5G King Maker (note: it’s going to be great to be an incumbent again).

To the casual observer, it might appear that Barclays is in strong opposition to William Webb’s 5G view. However, maybe that is not completely so.

If it is true that only very few Telcos, primarily modernized incumbent fixed-mobile Telcos, can afford to build 5G networks, one might argue that the 5G Vision is “somewhat” flawed economically. The root cause of this assumed economic flaw (according to Barclays, although they do not point out that it is a flaw!) clearly is the very high 5G speeds assumed to be demanded by the user, resulting in a massive increase in network densification and the need for radically modernized & re-engineered transport networks to cope with this kind of demand.

Barclays’ assessments are fairly consistent with the illustration shown below of the likely technology cost impact, highlighting the challenges a 5G deployment might face;

Some of the possible operational cost improvements in IT, Platforms and Core shown in the above illustration arise from the naturally evolving architectural simplifications and automation strategies expected to be in place by the time of the 5G launch. However, the expected huge increase in small cells is the root cause of most of the capital and operational cost pressures expected to arise with 5G. Depending on the original state of the telecommunications infrastructure (e.g., cloudification, virtualization, …), the degree of transport modernization (e.g., fiberization), and the business model (e.g., degree of digital transformation), the 5G economic impact can range from relatively modest (albeit momentarily painful) to brutal (i.e., little chance of financial return on investment), as discussed in the Barclays “5G – A new Dawn” paper.

Furthermore, if the relative cost of delivering a 5G Byte is 13 – 14 times lower than an LTE Byte, and the 5G capacity demand is 50 times higher than LTE, the economics don’t work out very well. If I can produce a 5G Byte at 1/14th the cost of an LTE Byte, but my 5G Byte demand is 50x higher than in LTE, I could (simplistically) end up with more than 3x more absolute cost for 5G. That’s really ugly! Although, if Barclays is correct about the factor 10 higher number of 5G sites, then a (relevant) cost increase of factor 3 doesn’t seem completely unrealistic. Of course, Barclays could be wrong! Unfortunately, an assessment of the incremental revenue potential has yet to be provided. If the price of a 5G Byte could be in excess of a factor 3 of an LTE Byte … all would be cool!
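The arithmetic is trivial but worth making explicit; here it is as a tiny Python sketch using the text’s assumed input factors (1/14th unit cost, 50x volume):

```python
# Simplistic sketch of the cost arithmetic above. Inputs are the text's
# assumptions: 5G unit cost = 1/14th of LTE, 5G volume demand = 50x LTE.
lte_cost_per_byte = 1.0  # normalized
lte_volume = 1.0         # normalized

g5_total = (lte_cost_per_byte / 14) * (50 * lte_volume)
lte_total = lte_cost_per_byte * lte_volume
print(f"5G absolute cost vs LTE: {g5_total / lte_total:.1f}x")  # -> 3.6x
```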

If there is something to be worried about, I would worry much more about the Barclays 5G analysis than the challenges of William Webb (although certainly somehow intertwined).

What is the 5G market potential in terms of connections?

At this moment very few 5G market uptake forecasts have made it out into the open. However, taking the Strategy Analytics August 2016 forecast of ca. 690 million global 5G connections by year 2025, we can get an impression of how 5G uptake might look;

Caution! The above global mobile connection forecast is likely to change many times as we approach commercial launch and get a much better impression of the 5G launch strategies of the various important players in the Telco industry. In my own opinion, if 5G is launched primarily in the mm-wave bands around and above 30 GHz, I would not expect to see a very aggressive 5G uptake. Possibly a lot less than the above (with the danger of putting myself in the category of badly wrong forecasts of the future). If 5G were deployed as an overlay to existing macro-cellular networks … hmmm, who knows! Maybe the above would be a very pessimistic view of 5G uptake?

THE 5G PROMISES (WHAT OTHERS MIGHT CALL A VISION).

Let’s start with the 5G technology vision as being presented by NGMN and GSMA.

The GSMA (Groupe Speciale Mobile Association) 2014 paper entitled ‘Understanding 5G: Perspectives on future technological advancements in mobile’ has identified the following main requirements;

1.    1 to 10 Gbps actual speed per connection at a max. of 10 millisecond E2E latency.

Note 1: This is foreseen in the NGMN whitepaper only to be supported in dense urban areas including indoor environments.

Note 2: Throughput figures are as experienced by the user in at least 95% of locations for 95% of the time.

Note 3: In 1 ms light travels ca. 200 km in optical fiber.

2.    A Minimum of 50 Mbps per connection everywhere.

Note 1: this should be consistent user experience outdoor as well as indoor across a given cell including at the cell edge.

Note 2: Another sub-target under this promise was ultra-low cost Networks where throughput might be as low as 10 Mbps.

3.    1,000 x bandwidth per unit area.

Note: notice the term per unit area & think mm-wave frequencies; very small cells, & 100s of MHz frequency bandwidth. This goal is not challenging in my opinion.

4.    1 millisecond E2E round trip delay (tactile internet).

Note: The “NGMN 5G White Paper” does have most 5G use cases at 10 ms allowing for some slack for air-interface latency and reasonable distanced transport to core and/or aggregation points.

5.    Massive device scale with 10 – 100 x number of today’s connected devices.

Note: Actually, if one believes in the 1 Million Internet of Things connections per km2 target this should be aimed close to 1,000+ x rather than the 100 x for an urban cell site comparison.

6.    Perception of 99.999% service availability.

Note: ca. 5 minutes of service unavailability per year. If counted on active usage hours this would be less than 2.5 minutes per year per customer or less than 1/2 second per day per customer.

7.    Perception of 100% coverage.

Note: The European Commission’s report “Broadband Coverage in Europe 2015” shows that, for EU28, 86% of households had access to LTE overall. However, only 36% of EU28 rural households had access to LTE in 2015.

8.    90% energy reduction of current network-related energy consumption.

Note: Approx. 1% of a European Mobile Operator’s total Opex.

9.    Up to 10 years battery life for low-power Internet of Things 5G devices.

The 5G whitepaper also discusses new business models and business opportunities for the Telco industry. However, there is little clarity on what the relevant 5G business targets would be. In other words, what would 5G as a technology bring in additional Revenues, in Churn reduction, in Capex & Opex (absolute) Efficiencies, etc…

More concrete and tangible economic requirements are badly needed in the 5G discussion. Without them, it is difficult to see how Technology can ensure that the 5G system being developed will also be relevant to the business challenges of 2020 and beyond.

Today an average European Mobile operator spends approx. 40 Euro in Total Cost of Ownership (TCO) per customer per annum on network technology (and slightly less on average per connection), assuming a capital annualization period of 5 years and that about 15% of its Opex relates to Technology (excluding personnel cost).

The 40 Euro TCO per customer per annum sustains today an average LTE EU28 customer experience of 31±9 Mbps downlink speed @ 41±9 ms (i.e., based on the OpenSignal database with data as of 23 December 2016). Of course, this also provides for 3G/HSPA network sustenance and what remains of the 2G network.

Thus, we might have a 5G TCO ceiling, at least absent additional revenue: the maximum 5G technology cost for an average (downlink) speed of 1 – 10 Gbps @ 10 ms should not be more than 40 Euro TCO per customer per annum (i.e., and preferably a lot less by the time we eventually launch 5G in 2020).
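For transparency, here is a minimal sketch of how such a technology-TCO-per-customer figure is put together, following the stated 5-year capital annualization and ~15% technology Opex share. The Capex, Opex and customer-base inputs below are purely hypothetical placeholders, not the numbers behind the 40 Euro figure:

```python
# Minimal technology-TCO-per-customer model. All numeric inputs are
# hypothetical; only the 5-year annualization and the ~15% technology
# Opex share are taken from the text.
def tech_tco_per_customer(capex_eur, total_opex_eur, customers,
                          tech_opex_share=0.15, annualization_years=5):
    annualized_capex = capex_eur / annualization_years
    tech_opex = total_opex_eur * tech_opex_share
    return (annualized_capex + tech_opex) / customers

# e.g., 1.5 bn Euro network Capex, 4 bn Euro total Opex, 30 m customers
print(round(tech_tco_per_customer(1.5e9, 4.0e9, 30e6), 1))  # -> 30.0 Euro p.a.
```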

 

Thus, our mantra when developing the 5G system should be:

5G should not add additional absolute cost burden to the Telecom P&L.

This also begs the question of proposing some economic requirements to partner up with the technology goals.

 

5G ECONOMIC REQUIREMENTS (TO BE CONSIDERED).

  • 5G should provide new revenue opportunities in excess of 20% of access-based revenue (e.g., European mobile access-based revenue streams by 2021 are expected to be in the order of 160±20 Billion Euro; thus the 5G target for Europe should be to add an opportunity of ca. 30±5 Billion in new non-access-based revenues).
  • 5G should not add to Technology TCO while delivering up to 10 Gbps @ 10 ms (with a floor level of 1 Gbps) in urban areas.
  • 5G should focus on delivering a macro-cellular customer experience of minimum 50 Mbps @ maximum 10 ms.
  • 5G should target a 20% reduction of Technology TCO while delivering up to 10 Gbps @ 10 ms (min. 1 Gbps).
  • 5G should keep pursuing better spectral efficiency (i.e., Mbps/MHz/cell), not only through antenna designs, e.g., n-order MiMo and Massive-MiMo, that are largely independent of the air-interface (i.e., they work as well with LTE).
  • Target at least 20% 5G device penetration within the first 2 years of commercial launch (note: only after 20% penetration does the technology efficiency become noticeable).

In order not to increase the total technology TCO, we would at the very least need to avoid adding physical assets or infrastructure to the existing network, unless such additions provide a net removal of other physical assets and thus their associated cost. With the current high-frequency focus, and the resulting demand for huge amounts of small cells, this is going to be very challenging, but it would be less so by focusing more on macro-cellular exploitation of 5G.

Thus, there needs to be a goal to also overlay 5G on our existing macro-cellular network, rather than primarily focusing on small, smaller and smallest cells. Similar to what has been done for LTE, and what was much more of a challenge with UMTS (i.e., due to the optimum cellular grid mismatch between the 2G voice-based network and the 3G more data-centric, higher-frequency network).

What is the cost reference that should be kept in mind?

As shown below, the pre-5G technology cost is largely driven by access cost related to the number of deployed sites in a given network and the backhaul transmission.

Adding more sites, macro-cellular or a high number of small cells, will increase Opex and not only add a higher momentary Capex demand, but also burden future cash requirements, unless equivalent cost can be removed by the 5G addition.

Obviously, if adding additional physical assets leads to verifiable incremental margin, then accepting incremental technology cost might be perfectly okay (let’s avoid being radical financial controllers).

Though it’s always wise to remember;

Cost committed is a certainty, incremental revenue is not.

NAUGHTY … IMAGINE A 5G MACRO CELLULAR NETWORK (OHH JE!).

From the NGMN whitepaper, it is clear that 5G is supposed to be served everywhere (albeit at very different quality levels) and not only in dense urban areas. Given the economic constraints (considered very lightly in the NGMN whitepaper), it is obvious that 5G would have to be available across operators’ existing macro-cellular networks and thus also in the existing macro-cellular spectrum regime. Not that this gets a lot of attention.

In the following, I am proposing a 5G macro-cellular overlay network providing a 1 Gbps persistent connection enabled by massive MiMo antenna systems. This thought experiment is somewhat at odds with the NGMN whitepaper, where their 50 Mbps promise might be more appropriate. Due to the relatively high frequency range in this example, massive MiMo might still be practical as a deployment option.

If you follow all the 5G news, particularly on 5G trials in the US and Europe, you could easily get the impression that mm-wave frequencies (e.g., 30 GHz up to 300 GHz) are the new black.

There is the notion that;

“Extremely high frequencies means extremely fast 5G speeds”

which is baloney! It is the extremely large bandwidth, readily available in the extremely high frequency bands, that make for extremely fast 5G (and LTE of course) speeds.

We can have GHz bandwidths instead of MHz (i.e., 1,000x) to play with! … How extremely cool is that? We can totally suck at fundamental spectral efficiency and still get extremely high throughputs out for the consumer’s data consumption.

While this mm-wave frequency range is very cool, from an engineering perspective and for sure academically as well, it is also an extremely poor match to our existing macro-cellular infrastructure with its 700 MHz to 2.6 GHz working frequency range. Most mobile networks in Europe have been built on a 900 or 1800 MHz fundamental grid, with fill-in from UMTS 2100 MHz coverage and capacity requirements.

Being a bit of a party pooper, I asked whether it wouldn’t be cool (maybe not to the extreme … but still) to deploy 5G as an overlay on our existing (macro) cellular network? Would it not be economically more relevant to boost the customer experience across the macro-cellular networks that actually serve our customers today, as opposed to augmenting the existing LTE network with ultra-hot zones of extreme speeds and possibly also an extreme number of small cells?

If 5G would remain an above 3 GHz technology, it would be largely irrelevant to the mass market and most use cases.

A 5G MACRO CELLULAR THOUGHT EXAMPLE.

So let’s be (a bit) naughty and assume we can free up 20 MHz @ 1800 MHz. After all, mobile operators tend to have a lot of this particular spectrum anyway. They might also re-purpose 3G/LTE 2.1 GHz spectrum (possibly easier than 1800 MHz, pending overall LTE demand).

In the following, I am ignoring that whatever benefits I get out of deploying higher-order MiMo or massive MiMo (mMiMo) antenna systems will work (almost) equally well for LTE as they will for 5G (all other things being equal).

Remember we are after

  • A lot more speed. At least 1 Gbps sustainable user throughput (in the downlink).
  • Ultra-responsiveness with latencies from 10 ms and down (E2E).
  • No worse 5G coverage than with LTE (at same frequency).

Of course, if you happen to be an NGMN whitepaper purist, you will now tell me that my ambition should only be to provide a sustainable 50 Mbps per user connection. It is nevertheless an interesting thought exercise to explore whether residential areas could be served, by the existing macro-cellular network, with a much higher consistent throughput than 50 Mbps, and whether that might ultimately be covered by LTE rather than needing to go to 5G. Anyway, both Rachid El Hattachi and Javan Erfanian knew well enough to hedge their 5G speed vision against the reality of economics and statistical fluctuation.

and I really don’t care about the 1,000x (LTE) bandwidth per unit area promise!

Why? Because the 1,000x promise is a fairly trivial one. To achieve it, I simply need a high enough frequency and a large enough bandwidth (and those two, as pointed out, go nicely hand in hand). Take a 100 meter 5G-cell range versus a 1 km LTE-cell range. The 5G-cell is 100 times smaller in coverage area, and with 10x more 5G spectral bandwidth than for LTE (e.g., 200 MHz 5G vs 20 MHz LTE), I would have the factor 1,000 in throughput bandwidth per unit area. This without having to assume mMiMo, which I could also choose to use for LTE with pretty much the same effect.
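A hedged sketch of that per-unit-area arithmetic, using the cell ranges and bandwidths from the example above (idealized circular cells, everything else held equal):

```python
# The 1,000x bandwidth-per-unit-area arithmetic: a 10x smaller cell radius
# gives ~100x smaller cell area (area ~ r^2), times 10x more spectrum.
def density_gain(lte_range_km, g5_range_km, lte_bw_mhz, g5_bw_mhz):
    area_gain = (lte_range_km / g5_range_km) ** 2
    bw_gain = g5_bw_mhz / lte_bw_mhz
    return area_gain * bw_gain

print(density_gain(1.0, 0.1, 20, 200))  # -> 1000.0
```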

Detour to the cool world of Academia: the University of Bristol recently published (March 2016) a 5G spectral efficiency of ca. 80 Mbps/MHz in a 20 MHz channel. This is about 12 times higher than state-of-the-art LTE spectral efficiency. Their base station antenna system was based on so-called massive MiMo (mMiMo) with 128 antenna elements, supporting 12 users in the cell at approx. 1.6 Gbps aggregate (i.e., 20 MHz x 80 Mbps/MHz). The proof-of-concept system operated at 3.5 GHz and in TDD mode (note: mMiMo does not scale as well for FDD and in general poses more challenges in terms of spectral efficiency). National Instruments provides a very nice overview of 5G mMiMo systems in their whitepaper “5G Massive MiMo Testbed: From Theory to Reality”.

A picture of the antenna system is shown below;

Figure above: One of the World’s First Real-Time massive MIMO Testbeds–Created at Lund University. Source: “5G Massive MiMo (mMiMo) Testbed: From Theory to Reality” (June 2016).

For a good read and background on advanced MiMo antenna systems I recommend Chockalingam & Sundar Rajan’s book on “Large MiMo Systems” (Cambridge University Press, 2014). Though there are many excellent accounts of simple MiMo, higher-order MiMo, massive MiMo, Multi-user MiMo antenna systems and the fundamentals thereof.

Back to naughty (i.e., my 5G macro cellular network);

So let’s just assume that the above mMiMo system, for our 5G macro-cellular network,

  • ignoring that such systems were originally designed for, and work best in, TDD-based systems,
  • and keeping in mind that FDD mMiMo performance tends to be lower than TDD, all else being equal,

will, in due time, be available for 5G with a channel of at least 20 MHz @ 1800 MHz, and at a form factor that can be integrated well with existing macro-cellular designs without incremental TCO.

This is a very (VERY!) big assumption. Massive MiMo systems at normal cellular frequency ranges are likely to require substantially more antenna space. The structural integrity of site designs would have to be checked and possibly reinforced to allow for the advanced antenna system, contributing to both additional capital cost and possibly incremental tower/site lease.

So we have (in theory) a 5G macro-cellular overlay network with cell speeds of at least 1+ Gbps, which is ca. 10 – 20 times today’s LTE networks’ cell performance (not utilizing massive MiMo!). If I have more 5G spectrum available, the performance would increase linearly (and a bit) accordingly.

The observant reader will know that I have largely ignored the following challenges of massive MiMo (see also Larsson et al’s “Massive MiMo for Next Generation Wireless Systems” 2014 paper);

  1. mMiMo is designed for TDD, though it works at some performance penalty for FDD.
  2. Whether mMiMo will really be deployable at low total cost of ownership (i.e., it is not enough that the antenna system itself is low cost!).
  3. mMiMo’s performance leapfrog comes at the price of high computational complexity (e.g., this should be factored into the deployment cost).
  4. mMiMo relies on distributed processing algorithms which, at this scale, are relatively unexploited territory (i.e., this should be factored into the deployment cost).

But wait a minute! I might (naively) theorize away the additional operational cost of the active electronics and antenna systems on the 5G cell site (overlaid on the legacy already present!). I might further assume that the Capex of the 5G radio & antenna system can be financed within the regular modernization budget (assuming such a budget exists). But … but surely our access and core transport networks have not been scaled for a factor 10 – 20 (and possibly a lot more than that) increase in throughput per active customer?

No, they have not! Really not!

Though some modernized converged Telcos might be a lot better positioned for the fixed broadband transformation required to sustain the 5G speed promise.

For most mobile operators, it is highly likely that substantial re-design of, and investment in, their transport networks will have to be made in order to support the 5G target performance increase above and beyond LTE.

Definitely a lot more on this topic in a subsequent Blog.

ON THE 5G PROMISES.

Let’s briefly examine the above 5G promises, or visionary statements, and how they impact the underlying economics. As this is an introductory chapter, the deeper dive and analysis will be deferred to subsequent chapters.

NEED FOR SPEED.

PROMISE 1: From 1 to 10 Gbps in actual experienced 5G speed per connected device (at a max. of 10 ms round-trip time).

PROMISE 2: Minimum of 50 Mbps per user connection everywhere (at a max. of 10 ms round-trip time).

PROMISE 3: Thousand times more bandwidth per unit area (compared to LTE).

Before anything else, it would be appropriate to ask a couple of questions;

“Do I need this speed?” (The expert answer, if you are living inside the Telecom bubble, is obvious! Yes Yes Yes …. Customers will not know they need it until they have it! …).

“That kind of sustainable speed for what?” (The Telecom bubble answer would be: Lots of useful things! … much better video experience, 4K, 8K, 32K –> fully immersive holographic VR experience … Lots!)

“Am I willing to pay extra for this vast improvement in my experience?” (The Telecom bubble answer would be … ahem … that’s really a business model question and let’s just have marketing deal with that later).

What is true however is:

My objectively measurable 5G customer experience, assuming the speed-coverage-reliability promise is delivered, will quantum leap to unimaginable levels (in terms of objectively measured performance increase).

Maybe more importantly, will the 5G customer experience from the very high speed and very low latency really be noticeable to the customer? (i.e, the subjective or perceived customer experience dimension).

Let’s ponder on this!

In Europe at the end of 2016, the urban LTE speed and latency user experience per connection would of course depend on which network the customer is on (not all being equal);

In 2016, on average in Europe, an urban LTE user experienced a DL speed of 31±9 Mbps, a UL speed of 9±2 Mbps and a latency around 41±9 milliseconds. Keep in mind that OpenSignal is likely to be closer to the real user’s smartphone OTT experience, as it pings a server external to the MNO’s network. It should also be noted that although the OpenSignal measure might be closer to the real customer experience, it still does not capture the full experience of, for example, page load or video stream initialization and start.

The 31 Mbps urban LTE user experience throughput provides for a very good video streaming experience at 1080p (e.g., full high definition video), even on a large TV screen. Even a 4K video stream (15 – 32 Mbps) might work well, provided the connection stability is good and you have the screen to appreciate the higher resolution (i.e., a lot bigger than your 5.5” iPhone 7 Plus). You are unlikely to see the slightest difference on your mobile device between 1080p (9 Mbps) and 480p (1.0 – 2.3 Mbps) unless you have high visual acuity, which is usually reserved for the healthy & young.

With 5G, the DL speed is targeted to be at least 1 Gbps and could be as high as 10 Gbps, all delivered within a round trip delay of maximum 10 milliseconds.

The 5G target at launch (in 2020) is to deliver at least 30+ times more real experienced bandwidth (in the DL) compared to what an average LTE user experienced in Europe in 2016. The end-2-end round trip delay, or responsiveness, of 5G is aimed to be at least 4 times better than the average experienced responsiveness of LTE in 2016. For comparison, the actual experience gain between 3G and LTE has been between 5 – 10 times in DL speed, approx. 3 – 5 times in UL, and between 2 to 3 times in latency (i.e., pinging the same server exterior to the mobile network operator).
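A quick sanity check of those multipliers against the OpenSignal-based EU averages quoted above (31 Mbps @ 41 ms for LTE, against the 5G floor targets of 1 Gbps @ 10 ms):

```python
# Sanity check of the LTE-to-5G experience multipliers claimed above.
lte_dl_mbps, lte_latency_ms = 31, 41   # EU 2016 urban LTE averages
g5_dl_mbps, g5_latency_ms = 1000, 10   # 5G floor targets

print(f"Speed gain:   {g5_dl_mbps / lte_dl_mbps:.0f}x")        # -> 32x (30+)
print(f"Latency gain: {lte_latency_ms / g5_latency_ms:.1f}x")  # -> 4.1x
```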

According to Sandvine’s 2015 “Global Internet Phenomena Report for APAC & Europe”, in Europe approx. 46% of the downstream fixed peak aggregate traffic comes from real-time entertainment services (e.g., video & audio streamed or buffered content such as Netflix, YouTube and IPTV in general). The same report also identifies that for Mobile (in Europe) approx. 36% of the mobile peak aggregate traffic comes from real-time entertainment. It is likely that the real share of real-time entertainment is higher, as video content embedded in social media might not be counted in this category but rather in Social Media. Particularly for mobile, this would bring the share up by between 10% and 15% (more in line with what is actually measured inside mobile networks). Real-time entertainment, and real-time services in general, is the single most important and impacting traffic category for both fixed and mobile networks.

Video viewing experience … more throughput is maybe not better, more could be useless.

Video consumption is a very important component of real-time entertainment. It amounts to more than 90% of the bandwidth consumption in the category. The table below provides an overview of video formats, number of pixels, and their network throughput requirements. The tabulated screen size is what is required (at a reasonable viewing distance) to detect the benefit of a given video format in comparison with the previous one. So in order to really appreciate 4K UHD (ultra high definition) over 1080p FHD (full high definition), you would as a rule of thumb need double the screen size (note there are also other ways to improve the perceived viewing experience). Also for comparison, the table includes data for mobile devices, which obviously have a higher screen resolution in terms of pixels per inch (PPI) or dots per inch (DPI). Apart from 4K (~8 MP) and, to some extent, 8K (~33 MP), the 16K (~132 MP) and 32K (~528 MP) formats are still very exotic standards with limited mass market appeal (at least as of now).

We should keep in mind that there are limits to human vision, with the young and healthy having substantially better visual acuity than what can be regarded as normal 20/20 vision. Most magazines are printed at 300 DPI and most modern smartphone displays seek to design for 300 DPI (or PPI) or more. Even Steve Jobs has addressed this topic;

However, it is fair to point out that this assumed human vision limitation is debatable (and has been debated a lot). There is little consensus on this, maybe with the exception that the ultimate limit (at a distance of 4 inches or 10 cm) is 876 DPI, or approx. 300 DPI at 11.5 inches (30 cm).

Anyway, what really matters is the customer’s experience and what they perceive while using their device (e.g., smartphone, tablet, laptop, TV, etc…).

So let’s do the visual acuity math for smartphone-like displays;

We see (from the above chart) that for an iPhone 6/7 Plus (5.5” display), at any viewing distance above approx. 50 cm, a normal eye (i.e., 20/20 vision) would become insensitive to video formats better than 480p (1 – 2.3 Mbps). In my case, my typical viewing distance is ca. 30+ cm and I might get some benefit from 720p (2.3 – 4.5 Mbps) as opposed to 480p. Sadly, my sight is worse than the 20/20 norm (i.e., old! and let’s just leave it at that!) and thus I remain insensitive to the resolution improvements 720p would provide. If you have a device with a display at or below 4” (e.g., iPhone 5 & 4), the viewing distance where normal eyes become insensitive is ca. 30+ cm.
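For those who want to check the chart’s numbers, here is a hedged sketch of the underlying math: 20/20 vision resolves roughly one arcminute, so beyond the printed distance the pixels of the given format can no longer be resolved on that screen. It assumes an idealized 16:9 panel and standard format widths; real perception, as noted above, is debatable:

```python
# Visual acuity sketch: 20/20 vision resolves ~1 arcminute. Beyond the
# printed distance, a finer video format brings no visible benefit on
# that screen. Assumes an idealized 16:9 panel.
import math

ARCMIN_RAD = math.radians(1 / 60)

def max_useful_distance_cm(diagonal_inch, horizontal_pixels):
    width_cm = diagonal_inch * 2.54 * 16 / math.hypot(16, 9)
    pixel_pitch_cm = width_cm / horizontal_pixels
    return pixel_pitch_cm / math.tan(ARCMIN_RAD)

for name, px in (("480p", 854), ("720p", 1280), ("1080p", 1920)):
    d = max_useful_distance_cm(5.5, px)  # an iPhone 6/7 Plus-sized display
    print(f"{name}: resolvable up to ~{d:.0f} cm")  # -> ~49, ~33, ~22 cm
```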

All in all, it would appear that unless cellular user equipment, and the way it is being used, changes very fundamentally, the 480p to 720p range might be more than sufficient.

If this is true, it also implies that a cellular 5G user on a reliable good network connection would need no more than 4 – 5 Mbps to get an optimum viewing (and streaming) experience (i.e., 720p resolution).

The 5 Mbps streaming speed, for an optimal viewing experience, is very far away from our 5G 1-Gbps promise (200 times less)!

Assuming that instead of streaming we want to download movies, and assuming we have lots of memory available on our device … hmmm … then a typical 480p movie could be downloaded in ca. 10 – 20 seconds at 1 Gbps, a 720p movie in between 30 and 40 seconds, and a 1080p movie would take 40 to 50 seconds (and likely be a waste, given the limitations of your vision).

However, with a 5G promise of super-reliable ubiquitous coverage, I really should not need to download and store content locally on storage that might be pretty limited.

Downloads to cellular devices or home storage media appear somewhat archaic, but would benefit from the promised 5G speeds.

I could share my 5G Gbps with other users in my surroundings. A typical Western European household in 2020 (i.e., about the time 5G will launch) would have 2.17 inhabitants (2.45 in Central Eastern Europe); watching individual / different real-time content would require multiples of the optimum video resolution bandwidth. I could have multiple video streams running in parallel to the many display devices likely to be present in the consumer’s home, etc… Still, even at fairly high video streaming codecs, a consumer would be far away from consuming the 1 Gbps (imagine if it was 10 Gbps!).

Okay … so video consumption, independent of mobile or fixed devices, does not seem to warrant anywhere near the 1 – 10 Gbps per connection.

Surely the EU Commission wants it!

EU Member States have their specific broadband coverage objectives, namely: ‘Universal Broadband Coverage with speeds at least 30 Mbps by 2020’ (i.e., will be met by LTE!) and ‘Broadband Coverage of 50% of households with speeds at least 100 Mbps by 2020’ (also likely to be met with LTE and fixed broadband means).

The European Commission’s “Broadband Coverage in Europe 2015” reports that 49.2% of EU28 Households (HH) have access to 100 Mbps or more (i.e., 50.8% of all HH have access to less than 100 Mbps) and 68.2% to broadband speeds above 30 Mbps (i.e., 31.8% of all HH have access to less than 30 Mbps). No more than 20.9% of HH within EU28 have FTTP (e.g., DE 6.6%, UK 1.4%, FR 15.5%, DK 57%).

The EU28 average is pretty good and in line with the target. However, on an individual member state level, there are big differences. Also within each of the EU member states great geographic variation is observed in broadband coverage.

Interestingly, the 5G promises of per-user connection speed (1 – 10 Gbps), coverage (user-perceived 100%) and reliability (user-perceived 100%) are far more ambitious than the broadband coverage objectives of the EU member states.

So maybe indeed we could make the EU Commission and Member States happy with the 5G throughput promise (this point should not be underestimated).

Web browsing experience … the “more throughput and all will be okay” myth!

So … Surely the Gbps speeds can help provide a much faster web browsing / surfing experience than what is experienced today with LTE and fixed broadband? (If there ever was a real Myth!)

In other words, the higher the bandwidth, the better the user’s web surfing experience should become.

While bandwidth (of course) is a factor in the customer’s browsing experience, it is but one factor out of several that govern the customer’s real & perceived internet experience, e.g., DNS lookups (these can really mess up user experience), TCP, SSL/TLS negotiation, HTTP(S) requests, VPN, RTT/latency, etc…

An excellent account of these various effects is given by Jim Gettys’ “Traditional AQM is not enough” (i.e., AQM: Active Queue Management). Measurements (see Jim Gettys’ blog) strongly indicate that above a relatively modest bandwidth (6+ Mbps) there is no longer any noticeable difference in page load time. In my opinion there are a lot of low-hanging fruits in network optimization that provide larger relative improvements in customer experience than network speed alone.

Thus one might carefully conclude that, above a given throughput threshold, it is unlikely that more throughput would have a significant effect on the consumer’s browsing experience.

More work needs to be done in order to better understand the experience threshold beyond which more connection bandwidth has diminishing returns on the customer’s browsing experience. However, it would appear that a 1-Gbps 5G connection speed would be far above that threshold. An average web page in 2016 was 2.2 MB, which at the average LTE speed would take 568 ms to load fully, provided connection speed was the only limitation (which is not the case). With 5G the same page would download within 18 ms, again assuming that connection speed was the only limitation.
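The page-load arithmetic, for the record (connection speed as the only limitation, which, as argued above, it is not):

```python
# Page-load time if connection speed were the only limitation (it is not).
page_mb = 2.2  # average web page size in 2016 (MegaBytes)

for label, mbps in (("LTE, 31 Mbps", 31), ("5G, 1 Gbps", 1000)):
    ms = page_mb * 8 / mbps * 1000
    print(f"{label}: {ms:.0f} ms")  # -> 568 ms and 18 ms
```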

Downloading content (e.g., FTP).

Now we surely are talking. If I wanted to download the whole Library of the US Congress (I like digital books!), I am surely in need of speed!?

The US Library of Congress has estimated that its whole print collection (i.e., 26 million books) adds up to 208 terabytes. Thus, assuming I have 208+ TB of storage, I could download the complete library of the US Congress within 20+ days (at 1 Gbps) to 2+ days (at 10 Gbps).

In fact, 1 Gbps would allow me to download 15+ books per second (assuming a book is on average 300 pages and formatted at 600 DPI TIFF, which is equivalent to ca. 8 MegaBytes).
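Both figures check out with a little Python (208 TB collection, ~8 MB per book, both as stated above):

```python
# The Library of Congress arithmetic: 208 TB of print collection and
# ~8 MB per 300-page 600 DPI TIFF book, as per the text.
library_bits = 208e12 * 8
book_bits = 8e6 * 8

for gbps in (1, 10):
    bps = gbps * 1e9
    days = library_bits / bps / 86400
    print(f"@ {gbps:>2} Gbps: library in {days:.1f} days, "
          f"{bps / book_bits:.0f} books per second")
# -> ~19.3 days & ~16 books/s at 1 Gbps; ~1.9 days & ~156 books/s at 10 Gbps
```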

So clearly, for massive file sharing (music, videos, games, books, documents, etc…), the 5G speed promise is pretty cool.

Though, it does assume that consumers would continue to see value in storing information locally on their personal devices or storage media. The idea remains archaic, but I guess there will always be renaissance folks around.

What about 50 Mbps everywhere (at a 10 ms latency level)?

Firstly, providing customers with a maximum latency of 10 ms with LTE is extremely challenging. It would be highly unlikely to be achieved within existing LTE networks, particularly if transmission re-trials are considered. From the OpenSignal December 2016 measurements shown in the chart below, for urban areas across Europe, the LTE latency is on average around 41±9 milliseconds. Considering the LTE latency variation, we are still 3 – 4 times away from the 5G promise, and the country averages would be higher than this. Clearly, this is one of the reasons why the NGMN whitepaper proposes a new air-interface, as well as some heavy optimization and redesigns in general across our Telco networks.

The urban LTE persistent experience level is very reasonable but remains lower than the 5G promise of 50 Mbps, as can be seen from the chart below;

The LTE challenge, however, is not the customer experience level in urban areas but the average across a given geography or country. Here LTE performs substantially worse (also on throughput) than the NGMN whitepaper’s ambition. Let us have a look at the current LTE experience level in terms of LTE coverage and in terms of (average) speed.

Based on the European Commission’s “Broadband Coverage in Europe 2015”, we observe that on average the total LTE household coverage is pretty good at an EU28 level. However, rural households are in general underserved by LTE. Many of the EU28 countries still lack consistent LTE coverage in rural areas. As lower frequencies (e.g., 700 – 900 MHz) become available and can be overlaid on the existing rural networks, often based on a 900 MHz grid, LTE rural coverage can be improved greatly. Economically, this should be synchronized with the normal modernization cycles. However, with the current state of LTE (and rural network deployments) it might be challenging to reach a persistent level of 50 Mbps per connection everywhere. Furthermore, the maximum 10 millisecond latency target is highly unlikely to be feasible with LTE.

In my opinion, 5G would be important in order to uplift the persistent throughput experience to at least 50 Mbps everywhere (including the cell edge). A target that would be very challenging to reach with LTE in the network topologies deployed in most countries (i.e., particularly outside urban/dense urban areas).

The customer experience value to the general consumer of a maximum 10 millisecond latency is, in my opinion, difficult to assess. At a 20 ms response time most experiences would appear instantaneous. The LTE performance of ca. 40 ms E2E external server response time should satisfy most customer experience use case requirements, besides maybe VR/AR.

Nevertheless, if the 10 ms 5G latency target can be designed into the 5G standard without negative economic consequences, then that might be very fine as well.

Another aspect that should be considered is the additional 5G market potential of providing a persistent 50 Mbps service (at a good enough & low-variance latency). Approximately 70% of EU28 households have at least 30 Mbps broadband speed coverage. If we look at EU28 households with at least 50 Mbps, that drops to around 55% household coverage. With the 100% (perceived) coverage & reliability target of 5G, as well as 50 Mbps everywhere, one might ponder the 30% to 45% of households that are likely underserved in terms of reliable, good-quality broadband. Pending the economics, 5G might be able to deliver a good enough service at a substantially lower cost compared to more fixed-centric means.

Finally, following our exposé on video streaming quality, clearly a 50 Mbps persistent 5G connectivity would be more than sufficient to deliver a good viewing experience. Latency would be less of an issue in the viewing experience as long as the variation in latency can be kept reasonably low.

 

Acknowledgement

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog.

 

WORTHY 5G & RELATED READS.

  1. “NGMN 5G White Paper” by R. El Hattachi & J. Erfanian (NGMN Alliance, February 2015).
  2. “Understanding 5G: Perspectives on future technological advancements in mobile” by D. Warran & C. Dewar (GSMA Intelligence, December 2014).
  3. “Fundamentals of 5G Mobile Networks” by J. Rodriguez (Wiley, 2015).
  4. “The 5G Myth: And why consistent connectivity is a better future” by William Webb (2016).
  5. “Software Networks: Virtualization, SDN, 5G and Security” by G. Pujolle (Wiley, 2015).
  6. “Large MiMo Systems” by A. Chockalingam & B. Sundar Rajan (Cambridge University Press, 2014).
  7. “Millimeter Wave Wireless Communications” by T.S. Rappaport, R.W. Heath Jr., R.C. Daniels & J.N. Murdock (Prentice Hall, 2015).
  8. “The Limits of Human Vision” by Michael F. Deering (Sun Microsystems).
  9. “Quad HD vs 1080p vs 720p comparison: here’s what’s the difference” by Victor H. (May 2014).
  10. “Broadband Coverage in Europe 2015: Mapping progress towards the coverage objectives of the Digital Agenda” by European Commission, DG Communications Networks, Content and Technology (2016).

Mobile Data Consumption, the Average Truth? the Average Lie?

“Figures often beguile me” leading to the statement that “There are three kinds of lies: lies, damned lies, and statistics.” (Mark Twain, 1906).

We are so used to averages … Read any blog or newspaper article trying to capture a complex issue and it is more than likely that you are being told a story of averages … Adding to Mark Twain’s quote on lies: in our data-intense world, the Average is often enough the road to an un-intentional Lie … or just “The Average Lie”.

Imagine this! Having (at the same time) your feet in the oven at 80C and your head in the freezer at -6C … You would be perfectly OK! On average! As your average temperature would equal 80C + (-6C) divided by 2, which is 37C, i.e., the normal and recommended body temperature for an adult human being. However, both your feet and your head are likely to suffer from such an experiment (and it therefore really should not be tried out … or be left to Finns used to sauna and icy water … though even the Finns seldom enjoy both simultaneously).

Try this! Add together the ages of the members of your household and divide by the number of members. This gives you the average age of your household … Does the average age you calculated have any meaning? … If you have young children or grandparents living with you, I think there is a fairly high chance that the answer to that question is NO! … The average age of my family’s household is 28 years. However, this number is a meaningless average representation of my household. It is 20 times higher than my son’s age and about 40% lower than my own age.

Most numbers, most conclusions, most stories, most (average) analyses are based on an average representation of one or another Reality … and as such can easily lead to Reality Distortion.

When we are presented with averages (or mean values, as they are also called in statistics), we tend to substitute Average with Normal and believe that the story represents most of us (i.e., statistically this means about 68% of us all). More often than not we sit back with the funny feeling that if what we just read is “normal” then maybe we are not.

On mobile data consumption (I’ll come back to Smartphone data consumption a bit later) … There is one (non-average) truth about mobile data consumption that has widely (and correctly) been communicated …

Very few mobile customers (10%) consume the very most of the mobile data traffic (90%).

(see for example: http://www.nytimes.com/2012/01/06/technology/top-1-of-mobile-users-use-half-of-worlds-wireless-bandwidth.html/).

Let’s just assume that a mobile operator claims an average 200MB monthly consumption (source: http://gigaom.com/broadband/despite-critics-cisco-stands-by-its-data-deluge/), and let’s assume that 10% of the customer base generates 90% of the traffic. It follows that the high-usage segment has an average volumetric usage of 1,800MB and the low-usage segment an average volumetric usage of only 22MB. In other words, 10% of the customer base has 80+ times higher consumption than the remaining 90%. The initial average consumption of 200MB (taken across the whole customer base) is actually 9 times higher than the average consumption of 90% of the customer base. It follows (with some use case exceptions) that the 10% high-usage segment spends a lot more Network Resources and Time. The time the high-usage segment spends actively with their devices is likely to be a lot higher than for the 90% low-usage segment.
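The segment arithmetic in a few lines of Python, using the quoted 200MB average and the 10%/90% split (everything else follows):

```python
# What the headline average hides: split the 200MB average by the
# 10%-of-users-generate-90%-of-traffic rule of thumb.
avg_mb, heavy_users, heavy_traffic = 200, 0.10, 0.90

heavy_avg = avg_mb * heavy_traffic / heavy_users              # -> 1800 MB
light_avg = avg_mb * (1 - heavy_traffic) / (1 - heavy_users)  # -> ~22 MB
print(heavy_avg, round(light_avg), round(heavy_avg / light_avg))
# -> 1800.0 22 81 ... i.e., the heavy 10% consume 80+ times more
```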

The 200MB is hardly normal! It is one of many averages that can be calculated. Obviously, 200MB is a lot more “sexy” than stating that 90% of the customer base typically consumes 22MB.

Created using PiktoChart http://app.piktochart.com.

Do Care about Measurement and Data Processing!

What further complicates consumptive values being quoted is how the underlying data have been measured, processed and calculated!

  1. Is the averaging done over the whole customer base?
  2. Is the averaging done over active customers only?
  3. Is it done over a subset of active customers (i.e., 2G vs 3G, 3G vs HSPA+ vs LTE vs WiFi, smartphone vs basic phone, iPad vs iPhone vs Laptop, prepaid vs postpaid, etc..)?
  4. Or over a smaller subset based on particular sample criteria (i.e., iOS, Android, iPad, iPhone, Galaxy, price plan, etc..) or availability (mobile Apps installed, customer approval, etc..)? or …

Without knowing the basis of a given average number, any bright analysis or cool conclusion might be little more than Conjecture or Clever Spin.

On Smartphone Usage

One of the most recently publicized studies on Smartphone usage comes from O2/Telefonica UK (Source: http://mediacentre.o2.co.uk/Press-Releases/Making-calls-has-become-fifth-most-frequent-use-for-a-Smartphone-for-newly-networked-generation-of-users-390.aspx). The O2 data provides an overview of average daily Smartphone usage across 10 use case categories.

The O2 Smartphone statistics have been broken down in detail by one of our industry’s brightest, Tomi Ahonen (a must read, http://www.communities-dominate.blogs.com/, though it is drowning in his Nokia/Mr. Elop “Howler Letters”). Tomi points out the Smartphone’s disruptive replacement potential for many legacy consumer products (e.g., think: watch, alarm clock, camera, etc..).

The O2 Smartphone data is intuitive and exactly what one would expect! Boring really! Possibly with the exception of Tomi’s storytelling (see above reference)! The data was so boring that The Telegraph (source: http://www.telegraph.co.uk/technology/mobile-phones/9365085/Smartphones-hardly-used-for-calls.html) had to conclude that “Smartphones Hardly Used for Calls”. Relative to other uses, of course, not really an untruth.

Though The Telegraph did miss (or did not care about) the fact that both Calls and SMS appeared to be what one would expect (and why would a Smartphone generate more Voice and SMS than normal? … hmmm). Obviously, the Smartphone is used for a lot of other stuff than calling and SMSing! The data tells us that an average Smartphone user (whatever that means) spends ca. 42 minutes on web browsing and social networking while “only” 22 minutes on Calls and SMS (i.e., actually 9 minutes of SMS sounds more like a teenager than a high-end smartphone user … but never mind that!). There is a lot of other stuff going on with that Smartphone. In fact, out of the total daily usage of 128 minutes, only 17% of the time (i.e., 22 minutes) is used for Plain Old Mobile Telephony Services (The POMTS). We do however find that both voice minutes and legacy messaging consumption are declining faster in the Smartphone segment than for Basic Phones (which are declining rapidly as well), as OTT Mobile App alternatives substitute POMTS (see inserted chart from http://www.slideshare.net/KimKyllesbechLarsen/de-risking-the-broadband-business-model-kkl2411201108x).

I have no doubt that the O2 data represents an averaging across a given Smartphone sample; the question is how this data helps us to understand the Real Smartphone User and his behavior.

So how did O2 measure this data?

(1) To be reliable and reasonable, data collection should be done by an App residing on the O2 customer’s smartphone. An alternative (2) would be deep packet inspection (dpi), but this would only capture network usage, which can be (and in most cases will be) very different from the time the customer actively uses his Smartphone. (3) Obviously, the data could also be collected by old-fashioned questionnaires being filled in. This would be notoriously unreliable and I cannot imagine it being the source.

Thus, I am making the reasonable guess that the Smartphone Data Collection is mobile App based.

“Thousand and 1 Questions”: Does the collected data represent a normal O2 Smartphone user, or a particular segment that doesn’t mind having a Software Sniffer (i.e., The Sniffer) on the device reporting their behavior? Is “The Sniffer” a standard, already installed (and activated?) App on all Smartphone devices? Only on a certain segment? Or is it downloadable (which would require a certain effort from the customer)? Is the collection done for both prepaid & contract customers? For both old and new Smartphones (i.e., usage patterns depend on OS version/type and on device capabilities such as air-interface speed DL & UL, CPU, memory management, etc..)? Is WiFi included or excluded? What about Apps running in the background (are these included)? Etc…

I should point out that it is always much easier to poke at somebody else’s data analysis than it is to collect, analyse and present such data. Though, depending on the answers to the above “1,000 + 1” questions, the O2 data either becomes a fair representation of an O2 Smartphone customer or “just” an interesting data point for one of their segments.

If the average Smartphone cellular (i.e., no WiFi blend) monthly consumption in the UK is ca. 450MB (+/-50MB), and if the consumer had an average cellular speed of 0.5Mbps (i.e., likely conservative, with the exception of streaming services, which could be lower), one would expect the Time spent consuming Network Resources to be no more than 120 minutes per month, or ca. 5 minutes per day (@ R99 384kbps this would be ca. 6 min per day). If I chose a more sophisticated QoS distribution, the Network Consumption Time would in any case not change by an order of magnitude or more.
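As a sanity check, the arithmetic above can be reproduced in a few lines of Python; the 450 MB monthly volume and the 0.5 Mbps (and R99 384 kbps) speeds are the assumptions stated above, while the 30-day month is mine:

def network_time_min_per_day(volume_mb_per_month, speed_mbps, days_per_month=30):
    """Minutes per day the network actually spends delivering the monthly volume."""
    megabits = volume_mb_per_month * 8       # MB -> Mbit
    seconds = megabits / speed_mbps          # Mbit / (Mbit/s) -> seconds
    return seconds / 60 / days_per_month

print(network_time_min_per_day(450, 0.5))    # ~4 min/day (rounded up to 5 above)
print(network_time_min_per_day(450, 0.384))  # ~5.2 min/day @ R99 384 kbps (ca. 6 above)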

So we have 5 minutes of daily Mobile Data Network Time Consumption versus O2’s Smartphone usage time of 106 minutes (without Calls & SMS) … a factor 22 in difference!

For every minute of mobile data network consumption the customer spends 20+ minutes actively with his device (i.e., reading, writing, playing, etc..).

So …. Can we trust the O2 Smartphone data?

Trend-wise the data certainly appears reasonable! Whether the data represents the majority of O2’s Smartphone users … I somewhat doubt it. However, without a more detailed explanation of the data collection, sampling, and analysis, it is difficult to conclude how representative the O2 Smartphone data really is of their Smartphone customers.

Alas, this is the problem with most of the mobile data user and usage statistics being presented to the public as an average (i.e., I have had my share of this challenge as well).

Clearly we spend a lot more time with our device than the device spends actively on the mobile network. This trend has been known for a long time from the fixed internet. O2 points out that the Smartphone, with its mobile applications, has become the digital equivalent of a “Swiss Army Knife” and as a consequence (as Tomi also points out in his blog) is already in the process of replacing a host of legacy consumer devices, such as the watch, alarm clock, camera (both still pictures and video), books, music players and radios, and of course, last but not least, substituting The POMTS.

I have argued and shown examples that the Average Numbers we are presented with are notorious by character. What other choices do we have? Would it be better to report the Median rather than the Average (or Mean)? The Median divides a given consumptive distribution in half (i.e., 50% of customers have a consumption below the Median and 50% above). Alternatively, we could report the Mode, which would give us the most frequent consumption across our consumer distribution.

Of course, if consumer usage were distributed normally (i.e., a symmetric bell shape), Mean, Median and Mode would be one and the same (and we would all be happy and bored). Not so much luck!

Most consumptive behaviors tend to be much more skewed and asymmetric (i.e., “the few take the most”) than the normal distribution (which most of us instinctively assume when we are presented with figures). Most people are not likely to spend much thought on how a given number is calculated. However, it might be constructive to provide the percentage of customers whose usage is below the reported average. The reader should note that if that percentage is different from 50%, the consumptive distribution is skewed and the onset of Reality Distortion has occurred.
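To illustrate (with made-up numbers, not O2’s), here is a minimal sketch assuming a lognormal-like usage distribution; it shows how far Mean, Median and Mode can drift apart, and what percentage of customers sit below the reported “average”:

import numpy as np

rng = np.random.default_rng(42)
# hypothetical monthly usage in MB per user; mu and sigma are arbitrary choices
usage_mb = rng.lognormal(mean=5.0, sigma=1.2, size=100_000)

mean = usage_mb.mean()                  # ~300 MB
median = np.median(usage_mb)            # ~150 MB
mode = np.exp(5.0 - 1.2 ** 2)           # analytic lognormal mode exp(mu - sigma^2), ~35 MB

share_below_mean = (usage_mb < mean).mean() * 100
print(f"mean={mean:.0f} MB, median={median:.0f} MB, mode~{mode:.0f} MB")
print(f"{share_below_mean:.0f}% of users consume less than the 'average'")  # ~73%, not 50%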

Wireless Broadband Access (BWA) Greenfield Ambition… (from March 2008)

In case you are contemplating starting a wireless broadband, maybe even mobile broadband, greenfield operation in Europe, there will be plenty of opportunity in the next 1 to 2 years. Will it be a great business in Western Europe’s mature markets? Probably not, but it still might be worth pursuing. The mobile incumbents will have a huge edge when it comes to spectrum and capacity for growth, which will be very difficult to compete against for a Greenfield with comparably limited spectrum.

Upcoming 2.50 GHz to 2.69 GHz spectrum (i.e., 2.6 GHz for short) auctions, often referred to as the UMTS extension band spectrum, are being initiated in several European countries (United Kingdom, The Netherlands, Sweden, etc..). Thus, we are talking about 190 MHz of bandwidth up for sale to the highest bidder(s). Compare this with the UMTS auction at the 2.1 GHz band, which was 140 MHz. The European Commission has recommended splitting the 190 MHz into 2×70 MHz for FDD operations (basically known as the UMTS extension band in some countries) and a (minimum) 1×50 MHz part for TDD operation.

In general it is expected that incumbent mobile operators (e.g., Vodafone, T-Mobile, KPN, Orange, Telefonica/O2, etc..) will bid for the 2.6 GHz FDD spectrum, supplementing their existing UMTS 2.10 GHz spectrum and mitigating possible growth limitations they might foresee in the future. The TDD spectrum is in particular expected to be contended for by new companies, greenfield operations, as well as fixed-line operators (i.e., BT) with the ambition to launch Broadband Wireless Access, BWA (i.e., WiMAX), networks. Thus, new companies which intend to compete with today’s mobile operators and their mobile broadband data propositions. Furthermore, just as mobile operators with broadband data compete with the fixed broadband business (i.e., DSL & cable), so it is expected that the new players would likewise compete with both existing fixed and mobile broadband data propositions. Obviously, new businesses might not limit their business models to broadband data but could also provide voice offerings.

Thus, the competitive climate would become stronger as more players contend for the same customers and those customers’ wallets.

Let’s analyse the Greenfield’s possible business model, i.e., the economic value of starting up a broadband data business in the mature markets of Western Europe. The analysis will be done on a fairly high level, which should give us an indication of the value of the Greenfield business model as well as what options a new business would have to optimize that value.

FDD vs TDD Spectrum

The 2.6 GHz auction is in its principles asymmetric, allocating more bandwidth to FDD-based operation than to TDD-based Broadband Wireless Access (BWA) deployment: 2×70 MHz vs 1×50 MHz. It appears fair to assume that most incumbent operators will target 2×20 MHz FDD, which coincides with the minimum bandwidth target for the Next-Generation Mobile Network (NGMN)/Long-Term Evolution (LTE) network vision (ref: 3GPP LTE).

An entrant interested in part of the 1×50 MHz TDD spectrum would in the worst case need 3x the FDD spectrum to get an equivalent per-sector capacity as an FDD player, i.e., 2×20 MHz FDD is equivalent to 1×60 MHz TDD with a frequency re-use of 3 used by the TDD operator. Thus, in a like-for-like comparison a TDD player would have difficulty matching the incumbents’ spectrum position at 2.6 GHz (ignoring that the incumbent has a significantly stronger spectrum position from the beginning).
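The worst-case arithmetic is simple enough to spell out (a sketch; the re-use factors are the assumptions stated above):

def tdd_mhz_needed(fdd_one_way_mhz, tdd_reuse=3):
    """Unpaired TDD bandwidth matching a 2 x fdd_one_way_mhz FDD allocation per
    sector, when the TDD operator is stuck at frequency re-use 3 (worst case)."""
    return fdd_one_way_mhz * tdd_reuse

print(tdd_mhz_needed(20))  # 60 MHz needed vs the 1x50 MHz TDD block actually on offer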

Of course, better antenna systems (moving to re-use 1), improved radio resource management, higher spectral efficiency (i.e., Mbps/MHz) as well as improved overall link budgets might mitigate a possible disadvantage in spectral asymmetry, benefiting the TDD player. However, those advantages are more a matter of time before competing access technologies bridge an existing performance gap (technology-equivalent tit-for-tat).

Comparing actual network performance of FDD-based UMTS/HSPA (High-Speed Packet Access) with WiMAX 802.16e-2005, the performance is roughly equivalent in terms of spectral efficiency. However, in Europe far more FDD-based than TDD-based spectrum has in general been allocated, which overall does result in considerable capacity and growth issues for TDD-based business models. The Long-Term Evolution (LTE) path is likely to be developed for both FDD- and TDD-based access, and equivalent performance might be expected in terms of bits-per-second-per-Hz.

Thus, it is likely that a TDD-based network would become capacity-limited sooner than a mobile operator having a full portfolio of FDD-based spectrum (i.e., 900 MHz (GSM), 1800 MHz (GSM), 2,100 MHz (FDD UMTS) and 2,500 MHz (FDD UMTS/LTE)) at its disposal. Therefore, a TDD-based business model could be expected to look different from an incumbent mobile operator’s existing business model.

The Greenfield BWA Business Case

Assume that Greenfield BWA intends to start up its BWA business in a market with 17 million inhabitants, 7.4 million households, and a surface area of 34,000 km2. The Greenfield’s business model is based on household coverage with focus on Urban and Sub-Urban areas, covering 80% of the population and 60% of the surface area.

It is worth mentioning that the valuation approach presented here is high-level and should not replace proper financial modelling and due diligence. This said, the following approach does provide good guidance on the attractiveness of a business proposition.

Greenfield BWA – The Technology Part

The first exercise the business modeller faces is to size the network consistently with the business requirements and vision. How many radio nodes would be required to provide coverage and support the projected demand? That is the question to ask! Given frequency and radio technology, it is relatively straightforward to provide a business-model estimate of the number of sites needed.

Using a standard radio engineering framework (e.g., the COST 231 Walfisch-Ikegami cell range model), a reasonable estimate can be made of the typical maximum cell range to be expected subject to the radio environment (i.e., dense-city, urban, sub-urban and rural). Greenfield BWA intends to deploy (mobile) WiMAX at 2.6 GHz. Using the standard radio engineering formula, an uplink-limited cell range of 1.5 km @ 2.6 GHz is estimated. Uplink-limited implies that the range from the Customer Premise Equipment (CPE) to the Base Station (BS) is shorter than in the other direction, from BS to CPE. This is a normal situation, as the CPE is often the limiting factor in network deployment considerations.

The 1.5-km cell range we have estimated above should be compared with typical cell ranges observed in actual mobile networks (e.g., GSM900, GSM1800 and UMTS2100). Typically, in dense-city (i.e., Top-3 cities) areas the cell range is between 0.5 and 0.7 km depending on load. In an urban/metropolitan radio environment we often find an average cell range between 2.0 – 2.5 km, depending on deployed frequency, cell load and radio environment. In sub-urban and rural areas one should expect an average cell range between 2.0 – 3.5 km, depending on frequency and radio environment. Typically, cell load is more important in city and urban areas (i.e., less frequency dependence), while frequency is most important in sub-urban and rural areas (i.e., low frequency => higher cell range => fewer sites; higher frequency => lower cell range => higher number of sites).

The cell range (i.e., 1.5 km) and the effective surface area targeted for network deployment (i.e., 20,000 km2) provide an estimate of the number of coverage-driven sites of ca. 3,300 BWA nodes, as sketched below. Whether more sites would be needed due to capacity limitations can be assessed once the market and user models have been defined.
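A minimal sketch of that coverage-driven site count, assuming the usual hexagonal cell approximation (site area = (3·sqrt(3)/2) × R²); the 1.5 km range and the 20,000 km2 target area are the figures given above:

import math

def coverage_sites(area_km2, cell_range_km):
    """Coverage-driven site count with hexagonal cells of radius cell_range_km."""
    site_area_km2 = (3 * math.sqrt(3) / 2) * cell_range_km ** 2  # ~5.8 km2 at R = 1.5 km
    return math.ceil(area_km2 / site_area_km2)

print(coverage_sites(20_000, 1.5))  # ~3,400 sites, in line with the ca. 3,300 quoted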

Using typical infrastructure pricing and site-build costs, the investment level for Western Europe (i.e., capital expenses, Capex) should not exceed 350 million Euro for the network deployment, all included. Assuming that the related network operational expense can be limited to 10% (excluding personnel cost) of the cumulated Capex, we have a yearly network-related Opex of 35 million Euro (after the rollout target has been reached). After the final deployment target has been reached, the Greenfield should assume a capital expense level of minimum 10% of its service revenue.

It should not take Greenfield BWA more than 4 years to reach its rollout target. This can be further accelerated if Greenfield BWA can share existing incumbent network infrastructure (i.e., site sharing) or use the services of independent tower companies. In the following, assume that the BWA site rollout can be done within 3 years of launch.

Greenfield BWA – The Market & Finance Part

Greenfield BWA will primarily target the household market with broadband wireless access services based on WiMAX (i.e., the 802.16e standard). Voice over IP will be supported and offered with the subscription.

Furthermore, Greenfield BWA intends to provide stationary as well as nomadic services to the household segment. In addition, Greenfield BWA will also provide some mobility in the areas where it provides coverage. However, this would not be its primary concern, and thus national roaming would not be offered (reducing roaming charges/cost).

Greenfield BWA reaches a steady-state (i.e., after final site rollout) customer market share of 20% of the household base: ca. 1.1 million household subscriptions, on which a blended revenue per household of €20 per month can be expected. Thus, a yearly service revenue of ca. 265 million Euro. From year 4 onwards a maintenance Capex level of 25 million Euro is kept (i.e., ca. 10% of revenue).

Greenfield BWA manages its costs strictly and achieves an EBITDA margin of 40% from year 4 onwards (i.e., a total annual operational cost of ca. 160 million Euro).

The Depreciation & Amortisation (D&A) level is kept at €40 million annually (steady-state). Furthermore, Greenfield BWA has an effective tax rate of 30%.

Now we can estimate the free cash flow (FCF) Greenfield BWA would generate from the 4th year onward:

(all in million Euro)
Revenue €265
– Opex €159
= EBITDA €106
– D&A €40 (ignoring spectrum amortization)
= EBIT €66
– Tax €20 (i.e., 30% of EBIT)
+ D&A €40
= Gross Cash Flow €86
– Capex €25
= FCF €61

Assuming a zero percent FCF growth rate and operating with a 10% Weighted Average Cost of Capital (WACC) (i.e., this could be largely optimistic for a pure Greenfield operation; 15% – 25% is not unheard of, to reflect the high risks), the perpetuity value from year 4 onwards would be €610 million. In Present Value this is €416 million; netting off €288 million for the initial 3 years’ discounted capital investment (for network deployment) and adding the first 3 years’ cumulated discounted EBITDA of €12 million provides a rather weak business case of ca. €140 million (upper) valuation prior to spectrum investment, whereof the bulk of the valuation arises from the continuation value (i.e., year 4 onwards).
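The valuation logic can be reproduced in a few lines (a sketch; the discounted Capex and early-years EBITDA are taken as the €288 million and €12 million quoted above rather than re-derived):

def perpetuity_value(fcf, wacc, growth=0.0):
    """Terminal value of a flat (growth=0) free cash flow stream."""
    return fcf / (wacc - growth)

wacc = 0.10
tv = perpetuity_value(61, wacc)          # EUR 610m from year 4 onwards
pv_tv = tv / (1 + wacc) ** 4             # ~EUR 417m in today's money
value = pv_tv - 288 + 12                 # less discounted rollout Capex, plus early EBITDA
print(round(tv), round(pv_tv), round(value))  # 610, 417, ~141 -> "ca. EUR 140 million"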

An alternative valuation would be to take a multiple of the (4th year) EBITDA as a sales-price equivalent; typically one would expect between 6x and 10x the (steady-state) EBITDA, thus €636 mio (6x) to €1,060 mio (10x).

The above valuation assumptions are optimistic and it is worthwhile to note the following;

1. €20 per month per household customer should be seen as an optimistic upper value; a lower and more realistic figure might not be much more than €15 per month.
2. A 20% market share is ambitious, particularly after only 3 years of operation.
3. A 40% margin with a 15% customer share and 3,300 radio nodes is optimistic, but might be possible if Greenfield BWA can make use of Network Sharing and other cost synergies, for example in relation to outsourcing.
4. A 10% WACC is assumed. This is rather low given the start-up scenario; I would not be surprised if it could be estimated to be as high as 15% to 20%.

If the lower boundaries of points 1 to 4 were applied to the above valuation logic, the business case would very quickly turn red (i.e., negative), leading to the conclusion of a significant business risk given the scope of the above business model. Our hypothetical Greenfield BWA should target paying a minimum license fee for the TDD spectrum; the upper boundary should not exceed €50 million, to mitigate the overly optimistic business assumptions.

The City-based Operation Model

Greenfield BWA could choose to focus its business model on the top-10 cities and their metropolitan areas. Let’s assume that this captures 50% of the population (and households) as well as 15% of the surface area. This should be compared with the above assumptions of 80% population and 60% surface-area coverage.

The key business drivers would look as follows (in parentheses the previous values are shown for reference).

Sites: 850 (3,300), rollout within 1 to 2 years (3 years).
Capex: €100 mio (€350 mio) for initial deployment; thereafter €18 mio (€25 mio) per year.
Customers: 0.74 mio (1.1 mio)
Revenue: €178 mio (€264 mio)
Opex: €108 mio (€160 mio)
EBITDA: €72 mio (€106 mio)
FCF: €38 mio (€61 mio)
Value: €210 mio (€140 mio)

The city-based network strategy is about 50% more valuable than the more extensive coverage strategy would be.
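Re-running the perpetuity sketch above with the city-case figures lands in the same ballpark as the quoted €210 mio (the timing assumptions are mine: a 2-year rollout with Capex spread evenly, perpetuity from year 3):

tv_city = perpetuity_value(38, 0.10)          # EUR 380m terminal value
pv_tv_city = tv_city / 1.10 ** 3              # ~EUR 285m in today's money
pv_capex_city = 50 / 1.10 + 50 / 1.10 ** 2    # ~EUR 87m, EUR 100m Capex over 2 years
print(round(pv_tv_city - pv_capex_city))      # ~199; early-years EBITDA lifts it toward 210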

An alternative valuation would be to take a multiple of the (3rd year) EBITDA as the sales-price equivalent; typically one would expect between 6x and 10x the (steady-state) EBITDA, thus €432 mio (6x) to €720 mio (10x).

Interestingly (but not surprisingly!), Greenfield BWA would be better off focusing on a smaller network in areas of high population density, as this is financially more attractive. Greenfield BWA should avoid the coverage-based rollout strategy known from the mobile operator business model.

The question is: how important is it for Greenfield BWA to provide coverage everywhere? If the target is primarily household-based customers with nomadic and static mobility requirements, then such a “coverage where the customer is” business model might actually work.

Source: http://harryshell.blogspot.de/2008/03/wireless-broadband-access-bwa.html

Did you know? Did you consider? (from March 2008)

In 2007 the European average mobile revenue per user (ARPU) per month was €28 +/- €6; a drop of ca. 4% compared to 2006 (the EU inflation level in 2007 was ca. 2.3%).

Of the €28 ARPU, ca. 16% could be attributed to non-voice usage (i.e., €4.5).

Of the €4.5 Non-Voice ARPU, ca. 65% could be attributed to SMS usage (i.e., €3.0).

Thus, leaving €1.5 for non-voice (mobile) data service (i.e., 5.4% of total ARPU).

The increase that most European countries have seen in their mobile Non-Voice Revenue has by far not been able to compensate for the drop in ARPU over the last 5 to 6 years.

Adding advanced data (e.g., UMTS and HSPA) capabilities to the mobile networks around Europe has not resulted in getting more money out of the mobile customer (but absolute revenue has grown due to customer intake).

Although most European UMTS/HSPA operators report a huge uptake (in relative terms) of Bytes generated by the customers, this is not reflected in the ARPU development.

Maybe it really does not matter as long as the mobile operators’ overall financial performance remains excellent (i.e., Revenues, Customers, EBITDA, Cash, ….)?

Is it possible to keep healthy financial indicators with decreasing ARPU, huge data usage growth and investments into brand-new radio access technologies targeting the €1.5 per month per user?

Source: http://harryshell.blogspot.de/2008_03_01_archive.html

Winner of the 700-MHz Auction is … Google! (from April 2008)

The United States has recently (March 2008) ended the auction of 5 blocks (see details below) of the analog TV spectrum band at 700 MHz, more specifically the band between 698 – 763 MHz (UL) and 728 – 793 MHz (DL), with a total bandwidth of 2×28 MHz. In addition, a single band of 1×6 MHz in the 722 – 728 MHz range was likewise auctioned. The analog TV band is expected to be completely vacated by Q1 2009.

The USA 700 MHz auction resulted in an impressive total of $19.12 billion spent buying the following spectrum blocks: A (2×6 MHz), B (2×6 MHz), C (2×11 MHz) and E (1×6 MHz). The D (2×5 MHz) block did not reach the minimum level. A total of 52 MHz (i.e., 2×23 + 1×6 MHz) of bandwidth was auctioned off.

Looking with European eyes at the available spectrum allocated per block, it is not very impressive (which is similar to other US frequency blocks per operator, e.g., AWS & PCS). The 700 MHz frequency is clearly very economical for radio network coverage deployment, in particular compared to the high-frequency AWS spectrum used by T-Mobile, Verizon and Sprint. However, the 6 to 11 MHz (UL/DL) is not very impressive from a capacity-sustainability perspective. It is quite likely that this spectrum would be exhausted rapidly, leading to a significant additional financial commitment to cell splits / capacity extensions.

This $19.12 billion for 52 MHz translates to $1.22 per MHz per population ($/MHz/Pop) @ 700 MHz.
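The benchmark itself is straightforward to compute (a sketch; the ~301 million US population of the time is my assumption):

def usd_per_mhz_pop(price_usd_bn, bandwidth_mhz, population_mn):
    """The standard spectrum-auction benchmark: $ per MHz per head of population."""
    return price_usd_bn * 1e9 / (bandwidth_mhz * population_mn * 1e6)

print(round(usd_per_mhz_pop(19.12, 52, 301), 2))  # ~$1.22/MHz/Pop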

This should be compared to the following historical auctions:
* $0.56/MHz/Pop @ 1,700 MHz in 2006 US AWS auction
* $0.15/MHz/Pop (USA Auction 22 @ 1999) to $4.74/MHz/Pop (NYC, Verizon).
* $1.23/MHz/Pop Canadian 2000 PCS1900 Auction of 40MHz.
* $5.94/MHz/Pop in the UK UMTS auction (2000), auctioning a total of 2×60 MHz FDD spectrum (TDD not considered).
* $7.84/MHz/Pop in the German UMTS auction of 2000 (2×60 MHz FDD, TDD not considered).

(Note: the excesses of the European UMTS auctions clearly illustrate a different time and place.)

What is particularly interesting is that Verizon “knocked out” Google by paying $4.74 billion for the nationwide C-block of 2×11 MHz, “beating” Google’s offer of $4.6 billion.

However, Google does not appear too saddened by the outcome and …. why should they! Google has to a great extent influenced the spectrum conditions allowing for open access (although it remains to be seen what this really means) to the C spectrum block; the USA Federal Communications Commission (FCC) has proposed to apply “open access” requirements for devices and applications on the nationwide spectrum block C (2×11 MHz).

Clearly, Google should be regarded as the winner of the 700 MHz auction. They have avoided committing a huge amount of cash to spectrum and, on top of that, having to deploy even more cash to build and operate a wireless network (i.e., not really their core business anyway).

Googling the Business Case
Google was willing to put down $4.6 billion for the 2×11 MHz @ 700 MHz. Let’s stop and ask how their business case could possibly have looked.

At 700 MHz, with not-too-ambitious bandwidth-per-user requirements, Google might achieve a typical cell range between 2.5 and 4 km (uplink-limited, i.e., the user equipment’s connection to the base station). Although in “broadcast/downlink” mode the cell range could be significantly larger (and downlink is all you really need for advertisement and broadcast;-).

Assuming Google’s ambition was the top-100 cities and 1 – 2% of the USA surface area, they would need at least 30 thousand radio nodes. Financially (all included) this would likely result in $3 to $5 billion of network capital expense (Capex) and a technology-driven annual operational expense (Opex) of $300 to $500 million (in steady-state). On top of the spectrum price.

Using the above rough technology indicators, Google (if driven by sound financial principles) must have had a positive business case for a cash-out of minimum $8 billion over 10 years, incl. spectrum and discounted with a WACC of 8% (all in all being very generous) and an annual Technology Opex of minimum $300 million; see the sketch below. On top of this comes customer acquisition, sales & marketing, and building a wireless business operation (obviously they might choose to outsource all that jazz).
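A rough check of that “minimum $8 billion” floor, using the ranges above (spectrum $4.6bn, Capex $3 – 5bn, Opex $0.3 – 0.5bn per year, 8% WACC); treating the Capex as upfront is my simplification:

def pv_annuity(annual, rate, years):
    """Present value of a constant annual cash flow over a number of years."""
    return annual * (1 - (1 + rate) ** -years) / rate

spectrum_bn, capex_bn, opex_bn_per_year = 4.6, 3.0, 0.3   # lower ends of the ranges
total_bn = spectrum_bn + capex_bn + pv_annuity(opex_bn_per_year, 0.08, 10)
print(round(total_bn, 1))  # ~$9.6bn discounted cash-out, above the $8bn floor even at the low end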

… and then don’t forget the customer device that needs to be developed for the 700 MHz band (note GSM 750 falls inside the C-band). It typically takes between 3 to 5 years to reach a critical customer mass, and then only if the market is stimulated.

It would appear to be a better business proposition to let somebody else pay for spectrum, infrastructure, operation, etc., and just do what Google does best … selling advertisements and delivering search results … for mobile devices … maybe even agnostic to the frequency (this seems better than waiting until critical mass has been reached at 700 MHz).

But then again … Google reported for full-year 2007 $16.4 billion in advertising revenues (up 56% compared to the previous year) (see refs Google Investor Relations). Imagine what this could be if extended to the wireless / mobile market. Still lower than Verizon’s 2007 full-year revenue of $23.8B (up 5.5% from 2006), but not that much lower considering the difference in growth rates.

The “successful” proud owners (Verizon, AT&T Mobility, etc….) of the 700 MHz spectrum might want to keep in mind that Google’s business case for entering wireless must have been far beyond their proposed $4.6 billion.

Appendix:
The former analog TV UHF spectrum was divided into 5 auction blocks:
Block A: 2×6 MHz bandwidth (698–704 and 728–734 MHz); $3.96 billion.
Block B: 2×6 MHz bandwidth (704–710 and 734–740 MHz); $9.14 billion, dominated by AT&T Mobility.
Block C: 2×11 MHz bandwidth (746–757 and 776–787 MHz); Verizon, $4.74 billion.
Block D: 2×5 MHz bandwidth (758–763 and 788–793 MHz); no bids above the minimum.
Block E: 1×6 MHz bandwidth (722–728 MHz); Frontier Wireless LLC, $1.26 billion.

Source: http://harryshell.blogspot.de/2008/04/winner-of-700-mhz-auction-is-google.html

Backhaul Pains (from April 2008)

Backhaul, which is the connection between a radio node and the core network, is providing mobile-wireless operators with possibly the biggest headache ever (apart from keeping a healthy revenue growth in mature markets 😉) … it can be difficult to come by in the right quantities and can be rather costly with conventional transmission cost-structures. Backhaul is expected to have delayed the Sprint WiMAX rollout of their Xohm-branded wireless internet service. A Sprint representative is supposed to have said: “You need a lot of backhaul capacity to do what’s required for WiMax.” (see for example the WiMax.com blog)

What’s a lot?

Well … looking at the expected WiMAX speed per Base Station (BS) of up to 50 Mbps (i.e., 12 – 24x the typical backhaul supporting voice demand), it is clear that finding suitable and low-cost backhaul solutions might be challenging. Conventional leased lines would be grossly uneconomical, at least if priced conventionally; xDSL and Fiber-to-the-Premises (FTTP) infrastructure that could (economically?) support such bandwidth demand is not widely deployed yet.

Is this a Sprint issue only? Nope! …. Sprint cannot be the only mobile-wireless operator with this problem; for UMTS/HSPA mobile operators the story should be pretty much the same (unless an operator has a good and modern microwave backhaul network supporting the BS speed).

Backhaul Pains – Scalability Issues
The backhaul connection can be either via a Leased Line (LL) or a Microwave (MW) radio link. Sometimes a MW link can be leased as well and might even be called a leased line.

With microwave (MW) links one can easily deliver multiples of 2.048 Mbps (i.e., 10 – 100 Mbps) on the same connection for relatively low capital cost (€500 – €1,000 per 2.048 Mbps) and low operational expense. However, planning and deployment experience, as well as spectrum, are required.

In many markets network operators have been using conventional (fixed) leased lines, leased from incumbent fixed-line providers. The pricing model is typically based on an upfront installation fee (which might be capitalized) and a recurring monthly lease. On a yearly basis this operational expense can be in the order of €5,000 per 2.048 Mbps, i.e., 5x to 10x the amount of a MW connection. Some price models trade off the one-off installation fee against a lower lease cost.

Voice was the Good for Backhaul. Before looking at the broadband wireless data bandwidth demand, it is worth noticing that in the good old Voice days (i.e., GSM, IS95, ..) 1x to 2x 2.048 Mbps was more than sufficient to support most demands on a radio base station (BS).

Mobile-wireless broadband data enablers are the Bad and quickly becoming the Very Ugly for Backhaul. With the deployment of High Speed Packet Access (HSPA) on top of UMTS, and with WiMAX (a la Sprint), a BS can easily provide between 7.2 and 14.4 Mbps or higher per sector, depending on the available bandwidth. With 3 sectors per BS, the total supplied data capacity could (in theory … ) be in excess of 21 Mbps per radio base station.

From the perspective of backhaul connectivity one would need at least the equivalent bandwidth of 10x 2.048 Mbps connections. Assuming such backhaul lease bandwidth is available in the first instance, with a conventional leased-line pricing structure such capacity would be very expensive, i.e., €50,000 per backhaul connection per year. Thus, for 1,000 radio nodes an operator would pay 50 million Euro on an annual basis (Opex directly hitting the EBITDA). This operational expense could be 8 times more than a voice-based leased-line operational expense.

Now that’s a lot!
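A sketch of the leased-line arithmetic above (E1 = 2.048 Mbps at the ~€5,000/year benchmark; node counts and bandwidths as quoted, with the LTE line anticipating the next paragraph):

import math

def annual_backhaul_opex_eur(nodes, mbps_per_node, eur_per_e1_year=5_000):
    """Yearly leased-line Opex if each node is fed with 2.048 Mbps E1 circuits."""
    e1_circuits = math.ceil(mbps_per_node / 2.048)
    return nodes * e1_circuits * eur_per_e1_year

print(annual_backhaul_opex_eur(1_000, 20))   # EUR 50m/yr: the 10x E1 HSPA/WiMAX case above
print(annual_backhaul_opex_eur(1_000, 173))  # EUR 425m/yr if LTE were fed the same way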

Looking a little ahead (i.e., the next couple of years), our UMTS and WiMAX based mobile networks will undergo the so-called Long-Term Evolution (LTE; FDD and TDD based), with expected radio node downlink (i.e., base station to user equipment) capacity between 173 Mbps and 326 Mbps, depending on the antenna system and available bandwidth (i.e., minimum 20 MHz spectrum per sector). Thus, over a 3-sectored BS, (theoretical) speeds in excess of 520 Mbps might be dreamed of (i.e., 253x 2.048 Mbps – and this is HUGE!:-). Alas, across a practical real-life deployed base station no more than 1/3 of the theoretical speed should be expected (on average).

“Houston, we have a problem” … should be ringing in any CFO’s / CTO’s ears: a. financially, near-future developments could significantly strain the Technology Opex budgets, and b. technically, providing cost-efficient backhaul capacity that can sustain the promised land will be hard.

A lot of the above possible cost can and should be avoided; looking at possible remedies, we have several options:

1. High-capacity microwave backhaul can prevent the severe increase in leased-line cost, provided spectrum and expertise are available. Financially, microwave deployment has the advantage of being mainly capital-investment driven, with little resulting additional operational expense per connection. It is expected that microwave solutions will be available within the next couple of years which can provide connection capacities of 100 Mbps and above.

Microwave backhaul solutions are clearly economical. However, it is doubtful whether LTE speed requirements can be met even with the most efficient microwave backhaul solutions.

2. Move to different leased-line (LL) pricing mechanisms, such as flat pricing (eat all you can for x Euro). Changing the LL pricing structure is not sufficient, though. At the same time, providers of leased-line infrastructure will be “forced” (i.e., by economics and bandwidth demand) to move to new types of leased bandwidth solutions and architectures in order to sustain the radio network capabilities; ADSL is expected to develop from 8(DL)/1(UL) Mbps to 25(DL)/3.5(UL) Mbps with ADSL2+, and VDSL (UL/DL symmetric) from ca. 100 Mbps to 250 Mbps with VDSL2 (ITU-T G.993.2 standard).

Clearly, a VDSL2-based infrastructure could support today’s HSPA/WiMAX requirements, as well as the initial bandwidth requirements of LTE. Although VDSL2-based networks are being deployed around Europe (and the world), they are not yet widely available.

Another promising means of supporting the radio-access bandwidth requirements is Fiber to the Premises (FTTP), such as offered by Verizon in certain areas of the USA (Verizon FiOS Service). With a Gigabit Passive Optical Network (GPON, ITU-T G.984 standard) maximum speeds of 2,400 Mbps (DL) and 1,200 Mbps (UL) can be expected. If available, FTTP to the base station would be ideal, provided that the connection is priced no higher than a standard 2.048 Mbps leased line today (i.e., the €5,000 benchmark). Note that for a mobile operator it could be acceptable to pay a large one-off installation fee, which could partly finance the FTTP connection to the base station.

Cost & Pricing Expectations
It is generally accepted by industry analysts that broadband wireless services are not going to add much to mobile operators’ total service revenue growth. In optimistic revenue scenarios, data revenue compensates for stagnating/falling voice revenues. EBITDA margins will be (actually are!) under pressure and operational expenses will be violently scrutinized.

Thus, mobile operators deploying UMTS/HSPA, WiMAX and eventually (in the short term) LTE cannot afford to have their absolute Opex increase. Therefore, if a mobile-wireless operator has a certain backhaul Opex, it would try to keep it at the existing level or reduce it over time (to mitigate a possible revenue decline).

For the backhaul leased-capacity providers this is sort of bad news (or good, as it forces them to become economically more efficient) …. as they would have to finance their new fixed higher-bandwidth infrastructures (i.e., VDSL or FTTP) with little additional revenue from the mobile-wireless operators.

Economically, it is not clear whether mobile-wireless cost-structure expectations will meet the leased-capacity providers’ total cost of deploying networks that support the mobile-wireless bandwidth demand.

However, for the provider of leased fixed bandwidth, providing VDSL2 and/or FTTP to the residential market should finance the deployment model.

With more than 90% of all data traffic being consumed in-house/indoors, and with VDSL2/Fiber-to-the-Home (FTTH) solutions being readily available to the homes (in urban environments at least) of business as well as residential customers, will mobile-wireless LTE base stations be loaded to the extent that very-high-capacity (i.e., beyond 50 Mbps) backhaul connections would be needed?

Source: http://harryshell.blogspot.de/2008/04/backhaul-pains.html