On Cellular Data Pricing, Revenue & Consumptive Growth Dynamics, and Elephants in the Data Pipe.

I am getting a bit sentimental, as I haven't written much about cellular data consumption for the last 10+ years. Back then, it did not take long for most folks in and out of our industry to conclude that data traffic, and thereby, so many believed, the total cost of providing cellular data, would grow far beyond the associated data revenues, e.g., remember the famous scissor chart from the early twenty-tens. Many believed (then) that cellular data growth would be the undoing of the cellular industry, and in 2011 many believed the industry only had a few more years before the total cost of providing cellular data would exceed the revenue, rendering cellular data unprofitable. Ten-plus years after, our industry remains alive and kicking (though it might not want to admit it too loudly).

Much of the past fear was due to not completely understanding the technology drivers, e.g., bits per second is a real driver, while the bytes that price plans were structured around are not so much. The huge initial growth rates of data consumption did not make the unease smaller, often forgetting that a little more can appear as a huge growth rate when you start with almost nothing. Moreover, we also did have big scaling challenges with 3G data delivery; it quickly became clear that 3G was not what the industry had hyped it to be.

And … despite the historical evidence to the contrary, there are still, to this day, many industry insiders who believe that a byte lost or gained translates directly, and linearly, into revenue lost or gained. Our brains prefer straight lines and linear thinking, happily ignoring the unpleasantries of the non-linear world around us, often created by ourselves.

Figure 1 illustrates linear or straight-line thinking (left side), preferred by our human brains, contrasted with the often non-linear reality (right side). It should be emphasized that horizontal and vertical lines, although linear, are not typically something that instinctively enters the cognitive process of assessing real-world trends.

Of course, if the non-linear price plans for cellular data were as depicted above in Figure 1, such insiders would be right even if anchored in linear thinking (i.e., even in the non-linear example to the right, an increase in consumption (GBs) leads to an increase in revenue). However, when it comes to cellular data price plans, the price vs. consumption relationship is much more "beastly," as shown below in Figure 2:

Figure 2 illustrates the two most common price plan structures in Telcoland: (a, left side) the typical step-function price logic that associates a range of data consumption with a price point, i.e., the price is a constant, independent of the consumption, over the data range. The price level is presented as price versus the maximum allowed consumption. This is by far the most common price plan logic in use. (b, right side) The "unlimited" price plan logic has one price level and allows for unlimited data consumption. T-Mobile US, Swisscom, and SK Telecom are all good examples of operators that have endorsed such unlimited pricing logic. The interesting fact is that most of those operators have several levels of unlimited tied to consumptive behavior, where above a given limit the customer may be throttled (i.e., the speed will be reduced compared to before reaching the limit), or (and!) the unlimited plan is tied to either a radio access technology (e.g., 4G, 4G+5G, 5G) or a given speed (e.g., 50 Mbps, 100 Mbps, 1 Gbps, …).

Most cellular data price plans follow a step-function-like pricing logic as shown in Figure 2 (left side), where within each level the price is constant up to the nominal data consumption value (i.e., purple dot) of the given plan, irrespective of the actual consumption. The most extreme version of this logic is the unlimited price plan, where the price level is independent of the volumetric data consumption. Although, "funny" enough, many operators have designed unlimited price plans that, in one way or another, do depend on the customer's consumption, e.g., after a certain level of unlimited consumption (e.g., 200 GB), cellular speed is throttled substantially (at least if the cell from which the customer demands resources is congested). So the "logic" is that if you want truly unlimited, you still need to pay more than if you only require "unlimited". Note that, for the mathematically inclined, the step function is regarded as (piecewise) linear, albeit with zero slope within each level … although our linear brains might not appreciate that finesse very much. Maybe the heuristic that "The brain thinks in straight lines" would be more precisely restated as "The brain thinks in continuous, non-constant, monotonic straight lines".
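In symbols (my notation), the step-function logic is a piecewise constant mapping from monthly consumption v to price:

$$ P(v) = p_k \quad \text{for } d_{k-1} < v \le d_k, \qquad k = 1, \dots, N, $$

with d_0 = 0, allowance boundaries d_1 < d_2 < … < d_N, and price points p_1 < p_2 < … < p_N; within each level the price is flat in v.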

Hence, any increase in consumption within a given pricing-consumption level will not result in any additional revenue. Most price plans allow for considerable consumption growth without generating additional associated revenue.

NETHERLANDS vs INDONESIA – BRIEFLY.

I like to stay informed and updated about the markets I have worked in and the operators I have worked for and with. Having worked across the globe in many very diverse markets, and with operators in vastly different business cycles, gives an interesting perspective on our industry. Throughout my career, I have been super interested in the difference between telco operations and strategies in so-called mature markets versus emerging markets, a label that today may be much more of a misnomer than 10+ years ago.

The average cellular (i.e., without WiFi) data consumption per customer in Indonesia was ca. 8 GB per month in 2022. That consumption would cost around 50 thousand Rp (ca. 3 euros) per month. For comparison, in the Netherlands, that consumption profile would cost a consumer around 16 euros per month. As of May 2023, the median cellular download speed in the Netherlands was 106 Mbps (i.e., helped by countrywide 5G deployment; for 4G only, the speed would be around 60 to 80 Mbps), compared with 22 Mbps in Indonesia (where 5G has only just been launched). Interestingly, although most likely coincidental, in Indonesia a cellular data customer would pay ca. 5 times less than in the Netherlands for the same volumetric consumption. Note that for 2023, the average annual income in Indonesia is about one-quarter of that in the Netherlands. However, the Indonesian cellular consumer would also get one-fifth of the quality, measured by downlink speed from the cellular base station to the consumer's smartphone.

Let's go deeper into how effectively the consumptive growth of cellular data is monetized, what may impact that growth, positively and negatively, and how it relates to the telco's topline.

CELLULAR BUSINESS DYNAMICS.

Figure 3 Between 2016 and 2021, Western European telcos lost almost 7% of their total cellular turnover (ca. 7+ billion euros over the markets I follow). This corresponds to a total revenue loss of ca. 1.4% per year over the period. To no surprise, the loss of cellular voice-based revenue has been truly horrendous, with an annual loss of ca. 30%, although the Covid years (2021 and 2022, for that matter) were good to voice revenues (as we found ourselves confined to our homes and a call away from our colleagues). On the positive side, cellular data-based revenues have "positively" contributed to the revenue in Western Europe over the period (we don't really know the counterfactual), with an annual growth of ca. 4%. Since 2016, cellular data revenues have exceeded cellular voice revenues and are in 2022 expected to be around 70% of the total cellular revenue (for Western Europe). Cellular revenues have been and remain under pressure, even with a positive contribution from cellular data. Cellular data volume (not including the contribution generated from WiFi usage) has continued to grow at a 38% annualized growth rate and is today (i.e., 2023) more than five times that of 2016. The annual growth rate of cellular data consumption per customer is somewhat lower, ranging from the mid-twenties to the high-thirties percent. Needless to say, the corresponding cellular ARPU has not experienced anywhere near similar growth. In fact, cellular ARPU has generally declined over the period.
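For reference, the annualized figures follow directly from the cumulative ones (my arithmetic, using the rounded numbers above):

$$ (1 - 0.07)^{1/5} - 1 \approx -1.4\% \text{ per year}, \qquad (1 + 0.38)^{5} \approx 5.0, $$

i.e., a ca. 7% revenue loss over 2016 to 2021 corresponds to ca. 1.4% per year, and a 38% annualized volume growth sustained over five years corresponds to a roughly five-fold increase.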

Some, in my opinion, obvious observations are worth making on cellular data (I have come to realize that although I find these obvious, I am often confronted with a lack of awareness or understanding of them):

Cellular data consumption grows much (much) faster than the corresponding data revenue (i.e., 38% vs 4% for Western Europe).

The unit growth of cellular data consumption does not lead to the same unit growth in the corresponding cellular data revenues.

Within most finite cellular data plans (thus, the non-unlimited ones), substantial data growth potential can be realized without resulting in a net increase in data-related revenues. This is, of course, trivially true for unlimited plans.

The anticipated death of the cellular industry back in the twenty-tens was an exaggeration. The industry's death by signaling, voluptuous & unconstrained volumes of demanded data, and ever-decreasing euros per byte remains a fading memory, preserved, of course, in the PowerPoints of that time (I have provided some of my own from that period below). A good scare does wonders to stimulate innovation to avoid "Armageddon." The telecom industry remains alive and well.

Figure 4 The latest data (up to 2022) from the OECD on mobile data consumption dynamics. Source data can be found at OECD Data Explorer. The data illustrates the slowdown in cellular data growth, both from a per-customer perspective and in terms of total generated mobile data. Looking over the period, the 5-year cumulative growth rate between 2016 and 2021 is higher than that between 2017 and 2022, and the year-on-year growth between 2021 and 2022 was, in general, even lower. This indicates a general slowdown in mobile data consumption as 4G consumption (in Western Europe) saturates while 5G consumption is still picking up. Although this is not an account of the observed growth dynamics over the years, given that the data for 2022 was just released, I felt it was worth including for completeness. Unfortunately, I have not yet acquired the cellular revenue structure (e.g., voice and data) for 2022; it is work in progress.

WHAT DRIVES CONSUMPTIVE DATA GROWTH … POSITIVE & NEGATIVE.

What drives the consumer's cellular data consumption? A cellular operator with data analytics capabilities can easily verify the list of positive and negative contributors to cellular data consumption below, as I have done with my team for many years.

Positive Growth Contributors:

  • Customer or adopter uptake. That is, new or existing customers that go from being non-data to data customers (i.e., adopting cellular data).
  • Increased data consumption (i.e., usage per adopter) within the cellular data customer base, driven by many of the enablers below:
  • Affordable pricing and suitable price plans.
  • More capable Radio Access Technology (RAT), e.g., HSDPA → HSPA+ → LTE → 5G, and effectively higher spectral efficiency from advanced antenna systems. This will typically drive up per-customer data consumption, to the extent that pricing is not a barrier to usage.
  • More available cellular frequency spectrum provisioned on the best RAT (in terms of spectral efficiency).
  • A good enough cellular network, consistent with customer demand.
  • Affordable and capable device ecosystem.
  • Faster mobile device CPU leads to higher consumption.
  • Faster & more capable mobile GPUs lead to higher consumption.
  • Device screen size. The larger the screen, the higher the consumption.
  • Access to popular content and social media.

Figure 5 illustrates data growth described as the uptake of Adopters, with associated growth rate α(t), multiplied by the Usage per Adopter, with associated usage growth rate μ(t). The growth of the Adopters can typically be approximated by an S-curve that reaches its maximum as few customers are left to adopt a new service, product, or RAT (i.e., α(t)→0%). As described in this section, the growth of usage per adopter, μ(t), will depend on many factors. Our intuition is that μ is positive for cellular data and historically has exceeded 30%. A negative μ would be an indication of consumptive churn. It should not be surprising that overall cellular data consumption growth can be very large while the Adopter growth rate is at its peak (i.e., around the S-curve inflection point) and Usage growth is high as well. Nor should it be too surprising that after Adopter uptake has passed the inflection point, the overall growth will slow down and eventually be driven by the Usage-per-Adopter growth rate.
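In symbols (my notation, following the caption above), the decomposition reads:

$$ V(t) = A(t)\,U(t), \qquad 1 + g(t) = \bigl(1 + \alpha(t)\bigr)\bigl(1 + \mu(t)\bigr) \;\Rightarrow\; g(t) \approx \alpha(t) + \mu(t), $$

with A(t) the number of adopters, U(t) the usage per adopter, and g(t) the overall growth rate of the total volume V(t); the approximation holds for modest growth rates. Once adopter growth saturates (α(t) → 0), total growth converges to the usage growth μ(t).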

Figure 6 Using the OECD data (OECD Data Explorer) for Western European mobile data consumption per customer from 2011 to 2022, the above illustrates the annual growth rate of per-customer mobile data consumption. Mobile data consumption is a blend of usage across the various RATs enabling packet data usage. There is a clear increase in annual growth after the introduction of LTE (4G), followed by a slowdown in annual growth, possibly due to reaching saturation in 4G adoption, i.e., α3G→4G(t) → 0%, leaving μ4G(t) to drive the cellular data growth. There is a relatively weak increase in 2021, and although the timing coincides with the 5G non-standalone (NSA) introduction (typically at 700 MHz or via dynamic spectrum sharing (DSS) with 4G, e.g., VodafoneZiggo NL using their 1800 MHz for both 4G and 5G), the increase may be better attributed to the Covid lockdowns than to a spurt in data consumption due to the 5G NSA introduction.

Anything that creates more capacity and quality (e.g., increased spectral efficiency, more spectrum, a new and more capable RAT, better antennas, …) will, in general, result in increased usage, overall as well as on a per-customer basis (remember that most price plans allow for substantial growth within the plan's data-volume limit without incurring more cost for the customer). Taking the counterfactual of the above, it should not be surprising that the result would be slower or even negative consumption growth.

Negative growth contributors:

  • Cellular congestion causes increased packet loss, retransmissions, and deteriorating latency and speed performance. All in all, congestion may have a substantial negative impact on the customer’s service experience.
  • Throttling policies will always lower consumption and usage in general, as quality is intentionally lowered by the Telco.
  • An increased share of QUIC content on the network. The QUIC protocol is used by many streaming video providers (e.g., YouTube, Facebook, TikTok, …). The protocol improves performance (e.g., speed, latency, packet delivery, handling of network changes, …) and security. Services using QUIC will "bully" other applications that use TCP/IP, encouraging TCP flows to back off from using bandwidth. In this respect, QUIC is not a fair protocol.
  • Elephant flow dynamics, where a few traffic flows cause cell congestion and service degradation for the many. In general, elephant flows, particularly QUIC-based ones, will cause an increase in TCP/IP data packet retransmissions and timing penalties.

One of the manifestations of cell congestion is packet loss and packet retransmission. Packet loss due to congestion ranges from 1% to 5%, or even several times higher at moments of peak traffic or if the user is in a poor cellular coverage area. The higher the packet loss, the worse the congestion, and the worse the customer experience. The underlying IP protocols will attempt to recover a lost packet by retransmission. The retransmission rate can easily exceed 10% to 15% in case of congestion. Generally, for a reliable and well-operated network, the packet loss should be well below 1% and can be as low as 0.1%. Likewise, one would expect a packet retransmission rate of less than 2% (I believe the target should be less than 1%).

Thus, customers that happen to be under a given congested cell (e.g., caused by an elephant flow) would incur a substantially higher rate of retransmitted data packets (i.e., 10% to 15% or higher) as the TCP/IP protocol tries to make up for lost packets. The customer may experience substantial service quality degradation and, as a final (unintended) "insult", often be charged for those additional retransmitted data volumes.

From a cellular perspective, once the congestion has been relieved, the cellular operator may observe that the volume on the previously congested cell actually drops. The reason is that packet loss and retransmissions drop to a level far below those of the congested state (e.g., typically below 1%). As the quality improves for all customers demanding service from the previously overloaded (i.e., congested) cell, sustainable volume growth will commence, both in total and in terms of average consumption per customer. As will be shown below, for normal cellular data consumption and most (if not all) price plans, a few percentage points drop in data volume will not have any meaningful effect on revenues, either because the (temporary) drop happens within the boundaries of a given price plan level and thus has no effect on revenue, or because the overall gainful consumptive growth, as opposed to data volume attributable to poor quality, far exceeds the volume lost due to the improved capacity and quality of a previously congested cell.

Well-balanced and available cellular sites will experience positive and sustainable data traffic growth.

Congested and overloaded cellular sites will experience a persistent reduction of data traffic.

Actively managing the few elephant flows and their negative impact on the many will increase customer satisfaction, reduce consumptive churn, and increase data growth, easily compensating for the loss of congestion-induced volume from packet retransmissions. And unless an operator is consistently starved of radio access investment, or has poor radio access capacity management processes, most cell congestion can be attributed to the so-called elephant flows.

CELLULAR DATA CONSUMPTION IN REAL NETWORKS – ON A SECTOR LEVEL.

And irrespective of what drives positive and negative growth, it is worth remembering that daily traffic variations, on a sector-by-sector basis as well as at the overall cellular network level, are entirely natural. An illustration of such natural sector variation over a (non-holiday) week is shown below in Figure 7 (c) for a sector in the top 20% of busiest sectors. In this example, the median variation over all sectors in the same week, as shown below, was around 10%. I often observe that even telco people (who should know better) find this natural variation quite worrisome, as it appears counterintuitive to their linear growth expectations. Proper statistical measurement & analysis methodologies must be in place if inferences and solid analysis are required on a sector (or cell) basis over a relatively short time period (e.g., a day, days, a week, weeks, …).

Figure 7 illustrates the daily variation of cellular data consumption over a (non-holiday) week. There are three examples: (a) a sector from the bottom 20% in terms of carried volume, (b) a sector with a median data volume, and (c) a sector taken from the top 20% of carried data volume. Across the three sectors (low, median, high) we observe very different variations over the weekdays, from an almost 30% variation between the weekly minimum (Tuesday) and the weekly maximum (Thursday) for the top-20% sector, to a variation in excess of 200% over the week for the bottom-20% sector. The charts above show another trend we observe in cellular networks regarding consumptive variations over time: busy sectors tend to have a lower weekly variation than less busy sectors. I should point out that I have made no effort to select particular sectors. I could easily find some (of the less busy sectors) with even wilder variations than shown above.

The day-to-day variation occurs naturally, based on the dynamic behavior of the customers served by a given sector or cell (in a sector). I am frequently confronted with technology colleagues (whom I respect for their deep technical knowledge) who appear to expect (data) traffic at all levels to increase monotonically, with a daily growth rate that amounts to the annual CAGR observed by comparing the end-of-period volume level with the beginning-of-period volume level. Most have not bothered to look at actual network data and do not understand (or, to put it more nicely, simply ignore) the natural statistical behavior of traffic that drives hourly, daily, weekly, and monthly variations. If you let statistical variations that you have no control over drive your planning & optimization decisions, you will likely fail to act on the business-critical ones you can control.

An example of a high-traffic (top-20%) sector's data consumption over a complete 365-day period is shown below in Figure 8. We observe that the average consumption (or traffic demand) increases nicely over the year, with a bit of a slowdown (in this European example) during the summer vacation season (and similarly around official holidays in general). Seasonal variation occurs naturally and will often result in a lower-than-usual daily growth rate and a change in daily variations. In the sector traffic example below, Tuesdays and Saturdays are (typically) lower than the average, and Thursdays are higher than average. The annual growth is positive despite the consumptive lows over the year, which would typically freak out my previously mentioned industry colleagues. Of course, every site, sector, and cell will have a different yearly growth rate, most likely close to a normal distribution around the gross annual growth rate.

Figure 8 illustrates a top-20% sector's data traffic growth dynamics (in GB) over a calendar year's 365 days. Tuesdays and Saturdays are likely to be below the weekly average data consumption, and Thursdays are more likely to be above. Furthermore, daily traffic growth slows around national holidays and during the summer vacation (i.e., July & August for this particular Western European country).

And to nail down the message: as shown in the example in Figure 9 below, every sector in your cellular network will have a different, positive or negative, growth rate from one time period to the next. The net effect over time (in terms of months rather than days or weeks) is positive as long as customers adopt the supplied RAT (i.e., if customers are migrating from 4G to 5G, it may very well be that 4G consumed data will decline while 5G consumed data will increase) and, of course, as long as the provided quality is consistent with the expected and demanded quality, i.e., sectors with congestion, particularly so-called elephant-flow-induced congestion, will hurt the quality of the many, who may reduce their consumptive behavior and eventually churn.

Figure 9 illustrates the variation in growth rates across 15+ thousand sectors in a cellular network, comparing the demanded data volume per sector between two consecutive Mondays. Statistical analysis of the above data shows that the overall average value is ca. 0.49%, slightly skewed towards the positive growth rates (e.g., if you compared a Monday with a Tuesday, the histogram would typically be skewed towards the negative side of the growth rates, as Tuesday is a lower-traffic day than Monday). Also, at the risk of pointing out the obvious, the daily or weekly growth rates expected from an annual growth rate of, for example, 30% are relatively minute, at ca. 0.07% and 0.49%, respectively.
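For completeness, the conversion from an annual growth rate to a daily or weekly one (my arithmetic):

$$ (1 + 0.30)^{1/365} - 1 \approx 0.07\% \text{ per day}, \qquad (1 + 0.30)^{7/365} - 1 \approx 0.5\% \text{ per week}, $$

consistent with the ca. 0.07% and 0.49% quoted above.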

The examples above (Figures 7, 8, and 9) are from a year in the past when Verstappen had yet to win his first F1 championship. That particular weekend also did not feature an F1 race (or Sunday would have looked very different, i.e., much higher) or any other big sports event.

CELLULAR DATA PRICE PLAN LOGIC.

Figure 10 above is an example of the structure of a price plan, possibly represented slightly differently from how your marketeer would do it (and I am at peace with that). The upper left chart illustrates a price plan with 8 data-volume intervals, each with its own price level. This we can also write as (following the terminology of the lower right corner):

Thus, the p_1 package, allowing the customer to consume up to 3 GB, is priced at 20 (irrespective of whether the customer consumes less). For package p_5, a consumer would pay 100 for a data consumption allowance of up to 35 GB. Of course, we assume that a consumer choosing this package would generally consume more than the 24 GB covered by the next cheaper package (i.e., p_4).

The price plan example above clearly shows that each price level offers customers room to grow before upgrading to the next level. For example, a customer consuming no more than 8 GB per month, fitting into p_3, could increase consumption by 4 GB (+50%) before having to consider the next price plan level (i.e., p_4). This is just to illustrate that even if the customer's consumption grows substantially, one should not per se expect more revenue.
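A minimal sketch of this step-function logic is given below (in R). The plan levels known from the text are 3 GB @ 20 (p_1), 5 GB @ 30 (p_2), 12 GB @ 50 (p_3), 24 GB (p_4), 35 GB @ 100 (p_5), and 200 GB @ 160 (p_8); the remaining allowances and price points are assumptions for illustration only, not the actual plan behind the figures.

```r
# Step-function price plan (upper allowance per level, in GB, and its price).
allowance_gb <- c(3, 5, 12, 24, 35, 60, 100, 200)     # p_1 .. p_8
price        <- c(20, 30, 50, 80, 100, 120, 140, 160) # p_4, p_6, p_7 prices assumed

# Price charged for a given monthly consumption (GB): the cheapest level whose
# allowance covers the consumption. Consumption above 200 GB is treated as a
# second 200 GB plan (i.e., 320), as assumed later in the text (Figure 11).
price_for <- function(gb) {
  idx <- findInterval(gb, c(0, allowance_gb), left.open = TRUE)
  ifelse(idx > length(price), 2 * price[length(price)], price[pmax(idx, 1)])
}

# Consumption can grow within a level without any revenue effect:
price_for(c(8, 12))    # both map to the 12 GB @ 50 level -> 50 50
price_for(c(12.1, 24)) # the next level up                -> 80 80
```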

Even though it should be reasonably straightforward that substantial growth in a customer base's data consumption cannot be expected to lead to an equivalent growth in revenue, many telco insiders instinctively believe this should be the case. I believe that the error may be due to many mentally linearizing the step-function price plans (see Figure 2 upper right side) and simply (but erroneously) believing that any increase (or decrease) in consumption directly results in an increase (or decrease) in revenue.

DATA PRICING LOGIC & USAGE DISTRIBUTION.

If we want to understand how consumptive behavior impacts cellular operators' toplines, we need to know how the actual consumption distributes across the pricing logic. As a high-level illustration, Figure 11 (below) shows the data price step-function logic from Figure 10 with an overall consumptive distribution superimposed (orange solid line). It should be appreciated that while this provides a fairly clear way of associating consumption with pricing, it is an oversimplification at best. It will nevertheless allow me to crudely estimate the number of customers that are likely to have chosen a particular price plan matching their demand (and affordability). In reality, we will have customers that have chosen a given price plan but consume less than the limit of the next cheaper plan (and thus, if consistently so, could save by moving to that plan). We will also have customers that consume more than their allowed limit. Usually, this would result in the operator throttling the speed and sending a message to the customer that the consumption exceeds the limit of the chosen price plan. If a customer consistently overshoots the limits (by a given margin) of the chosen plan, it is likely that, eventually, the customer will upgrade to the next more expensive plan with a higher data allowance.

Figure 11 above illustrates, on the left side, a consumptive distribution (orange line), identified by its mean and standard deviation, superimposed on our price plan step-function logic example. The right side summarizes the consumptive distribution across the eight price plan levels. Note that there is a 9th level in case the 200 GB limit is breached (0.2% in this example). I am assuming that such customers pay twice the price of the 200 GB price plan (i.e., 320).

In the example of an operator with 100 million cellular customers, the consumptive distribution and the given price plan lead to a total of 7+ billion per month. However, with a consumptive growth rate of 30% to 40% annually per active cellular data user (on average), what kind of growth should we expect from the associated cellular data revenues?

Figure 12 In the above illustration, I have mapped the consumptive distribution to the price plan levels and then developed the begin-of-period consumptive distribution (i.e., the light green curve) month by month until month 12 is reached (i.e., the yellow curve). I assume the average monthly consumptive cellular data growth is 2.5%, or ca. 35% after 12 months. Furthermore, I assume that the few customers falling outside the 200 GB limit will purchase another 200 GB plan. For completeness, the previous 12 months (the previous year) also need to be simulated to compare the total cumulative cellular data revenue between the current and previous periods.

Within the current period (shown in Figure 12 above), the monthly cellular data revenue CAGR comes out at 0.6%, or a total growth of 7.4% in monthly revenue between the beginning and the end of the period. Over the same period, the average data consumption (per user) grew by ca. 34.5%. Comparing the current year's total data revenue to the previous year's total data revenue, we get an annual growth rate of 8.3%. This illustrates that it should not be surprising that revenue growth can be far smaller than consumptive growth, given price plans such as the above.
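A minimal sketch (in R, under assumed distribution parameters) of the kind of simulation described around Figures 11 and 12: a lognormal consumption distribution is mapped onto the price plan tiers, the mean consumption grows by 2.5% per month, and the monthly revenue is summed. The plan levels re-use the example above (with the p_4, p_6, and p_7 prices assumed); the exact revenue figures depend on the assumed mean and spread of the distribution, so this reproduces the mechanism rather than the precise 7.4% result.

```r
allowance_gb <- c(3, 5, 12, 24, 35, 60, 100, 200)     # upper allowance per level (GB)
price        <- c(20, 30, 50, 80, 100, 120, 140, 160) # price per level (p_4, p_6, p_7 assumed)

monthly_revenue <- function(mean_gb, n_cust = 1e6, sdlog = 1.0) {
  meanlog <- log(mean_gb) - sdlog^2 / 2       # so that E[usage] = mean_gb
  usage   <- rlnorm(n_cust, meanlog, sdlog)   # per-customer monthly consumption (GB)
  idx     <- findInterval(usage, c(0, allowance_gb), left.open = TRUE)
  revenue <- ifelse(idx > length(price), 2 * price[length(price)], price[pmax(idx, 1)])
  sum(revenue)
}

set.seed(42)
growth   <- 1.025                                   # 2.5% consumptive growth per month
revenues <- sapply(0:12, function(m) monthly_revenue(12 * growth^m))  # assumed 12 GB mean
round(100 * (revenues[13] / revenues[1] - 1), 1)    # revenue growth over 12 months, in %
round(100 * (growth^12 - 1), 1)                     # consumption growth over 12 months, ~34.5%
```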

It should be pointed out that the above illustration of consumptive and revenue growth simplifies the growth dynamics. For example, the simulation ignores seasonal swings over the 12-month period. Also, it attributes all consumption falling within a price range 1-to-1 to that particular price level, whereas in reality there is always spillover into both the upper and lower neighbors of a price range that will not incur higher or lower revenues. Moreover, while mapping the consumptive distribution to the price-plan gigabyte intervals makes the simulation faster (and the setup certainly easier), it is also not a very accurate approach, given the coarseness of the intervals.

A LEVEL DEEPER.

While working with just one consumptive distribution, as in Figure 11 and Figure 12 above, allows for simpler considerations, it does not fully reflect the reality that every price plan level will have its own consumptive distribution. So let us go that level deeper and see whether it makes a difference.

Figure 13 above illustrates the consumptive distribution within a given price plan range, e.g., the "5 GB @ 30" price-plan level for customers with a consumption higher than 3 GB and less than or equal to 5 GB. It should come as no surprise that some customers may not even reach 3 GB, even though they pay for (up to) 5 GB, and some may occasionally exceed the 5 GB limit. In the example above, 10% of customers have a consumption below 3 GB (and could have chosen the next cheaper plan of up to 3 GB), and 3% exceed the limits of the chosen plan (an event that may result in the usage speed being throttled). As the average usage within a given price plan level approaches the ceiling (e.g., 5 GB in the above illustration), the standard deviation will in general reduce accordingly, as customers jump to the next more expensive plan to meet their consumptive needs (e.g., the "12 GB @ 50" level in the illustration above).

Figure 14 generalizes Figure 13 to the full price plan and, as illustrated in Figure 12, lets the consumption profiles develop over a 12-month period (the initial and +12-month distributions are shown in the above illustration). The difference between the initial and 12-month distributions can best be appreciated with the four smaller figures that break the price plan levels up into 0 to 40 GB and 40 to 200 GB.

The result in terms of cellular data revenue growth is comparable to that of the higher-level approach of Figure 12 (ca. 8% annual revenue growth vs. 34% overall consumptive annual growth rate). The detailed approach of Figure 14 is, however, more complicated to get working and requires much more real data (which obviously should be available to operators in this day and age). One should note that in the illustrated example price plan (used in the figures above), at a 2.5% monthly consumptive growth rate (i.e., 34% annually), it would take a customer an average of 24 months (with a spread of 14 to 35 months depending on the level) to traverse a price plan level from the beginning of the level (e.g., 5 GB) to the end of the level (12 GB). It should also be clear that once a customer enters the highest price plan levels (e.g., 100 GB and 200 GB), little additional revenue can be expected from those customers over their consumptive lifetime.
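The traverse time follows directly from the level boundaries and the monthly growth rate (my arithmetic, using the example plan):

$$ n = \frac{\ln(d_k / d_{k-1})}{\ln(1.025)}, \qquad \text{e.g., } \frac{\ln(12/5)}{\ln(1.025)} \approx 35 \text{ months}, \quad \frac{\ln(35/24)}{\ln(1.025)} \approx 15 \text{ months}, $$

consistent with the quoted spread of roughly 14 to 35 months across the levels.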

The detailed approach illustrated above is particularly useful for testing a given price plan's profitability and growth potential, given the particularities of the customers' consumptive growth dynamics.

An additional finesse that could be considered in the analysis is an affordability effect, where the growth within a given price level slows down as the average consumption approaches the limit of that price level. This could be modeled by slowing the mean growth rate and allowing the variance to narrow as the density function approaches the limit. In my simpler approach, the consumptive distributions continue to grow at a constant growth rate. In particular, one should consider more sophisticated approaches to modeling the variance, which determines the spillover into the less and more expensive levels. An operator should note that consumption that reduces, or consistently falls into the less expensive level, is an expression of consumptive churn. This should be monitored on a customer level as well as on a radio access cell level. Consumptive churn often reflects that the supplied radio access quality is out of sync with the customers' demand dynamics and expectations. On a radio access cell level, the diligent operator will observe a sharp increase in retransmitted data packets and increased latency on a flow (and active-customer) basis, the hallmarks of a congested cell.

WRAPPING UP.

To this day, 20+ years after the first packet data cellular price plans were introduced, I still have meetings with industry colleagues where they state that they cannot implement quality-enhancing technologies for fear that data consumption, and with it their revenues, may fall. Funnily enough, the fear is often that by improving the quality for the many customers being penalized by a few customers' usage patterns (e.g., the elephants in the data pipe), data packet loss and TCP/IP retransmissions will fall as the quality improves and more customers get the service they have paid for. This ignores the commonly established fact in our industry that improving the customer experience leads to sustainable growth in consumption, which consequently may also have a positive topline impact.

I am often surprised by how little understanding and feeling telco employees have for their own price plans, consumptive behavior, and the impact these have on their company's performance. This may be due to the fairly complex price plans telcos are inventing, and our brain's propensity for linear thinking certainly doesn't make it easier. It may also be because telcos rarely spend any effort educating their employees about their price plans and products (after all, employees often get all the goodies for "free", so why bother?). Do a simple test at your next town hall meeting and ask your CXOs about your company's price plans and their effectiveness in monetizing consumption.

So what to look out for?

Many in our industry have an inflated idea (to a fault) about how effectively consumptive growth is being monetized within their company's price plans.

Most of today’s cellular data plans can accommodate substantial growth without leading to equivalent associated data revenue growth.

The apparent disconnect between the growth rate of cellular data consumption (CAGR ~30+%), in its totality as well as on an average per-customer basis, and the cellular data revenue growth rate (CAGR < 10%) is simply due to the industry's price plan structures allowing for substantial growth without proportional revenue growth.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this Blog.

FURTHER READING.

Kim Kyllesbech Larsen, Mind Share: Right Pricing LTE … and Mobile Broadband in general (A Technologist’s observations) (slideshare.net), (May 2012). A cool seminal presentation on various approaches to pricing mobile data. Contains a lot of data that illustrates how far we have come over the last 10 years.

Kim Kyllesbech Larsen, Mobile Data-centric Price Plans – An illustration of the De-composed. | techneconomyblog (February 2015). Exploring UK mobile mixed-services price plans in an attempt to decipher the price of data, which at the time was (and often still is) a challenge to figure out due to (intentional?) obfuscation.

Kim Kyllesbech Larsen, The Unbearable Lightness of Mobile Voice. | techneconomyblog (January, 2015). On the demise of voice revenue and rise of data. More of a historical account today.

Tellabs “End of Profit” study executive summary (wordpress.com), (2011). This study very much echoed the increasing Industry concern back in 2010-2012 that cellular data growth would become unprofitable and the industry’s undoing. The basic premise was that the explosive growth of cellular data and, thus, the total cost of maintaining the demand would lead to a situation where the total cost per GB would exceed the revenue per GB within the next couple of years. This btw. was also a trigger point for many cellular-focused telcos to re-think their strategies towards the integrated telco having internal access to fixed and mobile broadband.

B. de Langhe et al., “Linear Thinking in a Nonlinear World”, Harvard Business Review, (May-June, 2017). It is a very nice and compelling article about how difficult it is to get around linear thinking in a non-linear world. Our brains prefer straight lines and linear patterns and dependencies. However, this may lead to rather amazing mistakes and miscalculations in our clearly nonlinear world.

OECD Data Explorer A great source of telecom data, for example, cellular data usage per customer, and the number of cellular data customers, across many countries. Recently includes 2022 data.

I have used Mobile Data – Europe | Statista Market Forecast to better understand the distribution between cellular voice and data revenues. Most Telcos do not break out their cellular voice and data revenues from their total cellular revenues. Thus, in general, such splits are based on historical information where it was reported, extrapolations, estimates, or more comprehensive models.

Kim Kyllesbech Larsen, The Smartphone Challenge (a European perspective) (slideshare.net) (April 2011). I think it is sort of a good account for the fears of the twenty-tens in terms of signaling storms, smartphones (=iPhone) and unbounded traffic growth, etc… See also “Eurasia Mobile Markets Challenges to our Mobile Networks Business Model” (September 2011).

Geoff Huston, “Comparing TCP and QUIC”, APNIC, (November 2022).

Anna Saplitski et al., “CS244 ’16: QUIC loss recovery”, Reproducing Network Research, (May 2016).

RFC9000, “QUIC: A UDP-Based Multiplexed and Secure Transport“, Internet Engineering Task Force (IETF), (February 2022).

Dave Gibbons, What Are Elephant Flows And Why Are They Driving Up Mobile Network Costs? (forbes.com) (February 2019).

K.-C. Lan and J. Heidemann, "A measurement study of correlations of Internet flow characteristics" (February 2006). This seminal paper has inspired many other research works on elephant flows. A flow should be understood as a unidirectional series of IP packets with the same source and destination addresses, port numbers, and protocol numbers. The authors define elephant flows as flows with a size larger than the mean plus three standard deviations of the sampled data, though the exact definition matters less. Such elephant flows are typically few (less than 20%) but will cause cell congestion by reducing the quality for the many requiring service in the affected cell.

Opanga Networks is a fascinating and truly innovative company. Using AI, they have developed their solution around the idea of how to manage data traffic flows, reduce congestion, and increase customer quality. Their (N2000) solution addresses particular network situations where a limited number of customers' data usage takes up a disproportionate amount of resources within the cellular network (i.e., the problem with elephant flows). Opanga's solution optimizes those congestion-impacting traffic flows, resulting in an overall increase in service quality and customer experience. Thus, the beauty of the solution is that the few traffic patterns causing the cellular congestion continue without degradation, allowing the many traffic patterns that were impacted by the few to continue at their optimum quality level. Overall, many more customers are happy with their service. The operator avoids an investment of relatively poor return and can either save the capital or channel it into a much higher IRR (internal rate of return) investment. I have seen tangible customer experience improvements exceeding 30 percent on congested cells, avoiding substantial RAN Capex and the resulting Opex. And the beauty is that it does not involve third-party network vendors and can be up and running within weeks, with an investment that is easily paid back within a few months. Opanga's product pipeline is tailor-made to alleviate telecom's biggest and thorniest challenges. Their latest product, with the appropriate name Joules, enables substantial radio access network energy savings above and beyond what features the telcos have installed from their Radio Access Network suppliers. Disclosure: I am associated with Opanga as an advisor to their industrial advisory board.

RAN Unleashed … Strategies for being the best (or the worst) cellular network (Part III).

I have been spending my holiday break this year (December 2021) updating my dataset on Western Europe Mobile Operators, comprising 58+ mobile operators in 16 major Western European markets, focusing on spectrum positions, market dynamics, technology diffusion (i.e., customer migration to 5G), advanced antenna strategies, (modeled) investment levels and last but not least answering the question: what makes a cellular network the best in a given market or the world. What are the critical ingredients for an award-winning mobile network?

An award-winning cellular network, the best network, also provides its customers with a superior experience, the best network experience possible in a given market.

I am fascinated by the many reasons and stories we tell ourselves (and others) about why this or that cellular network is the best. The story may differ depending on whether you are an operator, a network supplier, or an analyst covering the industry. I have had the privilege of leading a mobile network (T-Mobile Netherlands) that has won the Umlaut best mobile network award in the Netherlands since 2016 (5 consecutive times) and even scored the highest number of points in the world in 2019 and 2020/2021. So, I guess that would make me a sort of "authority" on winning best network awards? (=sarcasm).

In my opinion and experience, a cellular operator has a much better than fair chance of having the best mobile network, compared to its competition, with access to the most extensive active spectrum portfolio across all relevant cellular bands, implemented on better (or the best) antenna technology (on average), situated on a superior network footprint (e.g., more sites).

For T-Mobile Netherlands, firstly, we have the largest spectrum portfolio (260 MHz) compared to KPN (205 MHz) and Vodafone (215 MHz). The spectrum advantage of T-Mobile, as shown above, is both in the low-band (< 1800 MHz) and the mid-band (> 1500 MHz) range. Secondly, as we started out back in 1998, our cell site grid was based on 1800 MHz, requiring a denser grid (thus, more sites) than the 900 MHz-based networks of the two Dutch incumbent operators, KPN and Vodafone. Therefore, T-Mobile ended up with more cell sites than our competition. We maintained the site advantage even after the industry's cell grid densification for UMTS at 2100 MHz (back in the early 2000s). Our two very successful mergers have also helped our site portfolio: back in 2007 acquiring and merging with Orange NL, and in 2019 merging with Tele2 NL.

The number of sites (or cells) matters for coverage, capacity, and overall customer experience. Thirdly, T-Mobile was also first in deploying advanced antenna systems in the Dutch market (e.g., aggressive use of higher-order MIMO antennas) across many of our frequency bands and cell sites. Our antenna strategy has allowed for a high effective spectral efficiency (across our network). Thus, we could (and can) handle more bits per second in our network than our competition.

Moreover, over the last 3 years, T-Mobile has undergone (passive) site modernization that has improved coverage and quality for our customers. This last point is not surprising, since the original network was built around a single 1800 MHz frequency, and since 1998 we have added 7 additional bands (from 700 MHz to 2.5 GHz) that need to be considered in the passive site optimization. Of course, as site modernization is ongoing, an operator (like T-Mobile) should also consider the impact of future bands that may be required (e.g., 3.x GHz), optimizing subject to the past as well as the future spectrum outlook. Last but not least, we at T-Mobile have been blessed with a world-class engineering team that has been instrumental in squeezing out continuous improvements of our cellular network over the last 6 years.

So, suppose you have 25% less spectrum than a competitor. In that case, you either need to compensate by building 25% more cells (very costly & time-consuming), deploying better antennas with a 25% better effective spectral efficiency (limited, costly & relatively easy to copy/match), or a combination of both (expensive & time-consuming). The most challenging driver of network superiority to copy is the amount of spectrum. A competitor can only compensate by building more sites, deploying better antenna technology, and, over decades, trying to equalize the spectrum position in subsequent spectrum auctions (e.g., valid for Europe, not so for the USA, where acquired spectrum is usually owned in perpetuity).

T-Mobile has consistently won the best mobile network award over the last 6 years (and 5 consecutive times) due to these 3 multiplying core dimensions (i.e., spectrum × antenna technology × sites) and our world-class leading engineering team.

THE MAGIC RECIPE FOR CELLULAR PERFORMANCE.

We can formalize the above network heuristics in the following key (very beautiful, IMO) formula for cellular network capacity measured in throughput (bits per second):
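In symbols (my notation, spelling out the three multiplying dimensions discussed in this article), the formula is presumably of the form:

$$ C \; [\mathrm{bit/s}] \;\approx\; B \; [\mathrm{Hz}] \;\times\; \eta_{\mathrm{eff}} \; [\mathrm{bit/s/Hz}] \;\times\; N_{\mathrm{cells}}, $$

with B the deployed (active) spectrum bandwidth, η_eff the effective spectral efficiency delivered by the antenna technology, and N_cells the number of cells (or sites).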

It is actually that simple. Cellular capacity is made as simple as possible, dependent on three basic elements, but not simpler. And, to be super clear, only active spectrum counts. Any spectrum not deployed is an opportunity for a competitor to gain network leadership over you.

If an operator has a superior spectrum position and everything else is equal (i.e., antenna technology & the number of sites), that operator should be unbeatable in its market.

There are some caveats, though. In an overloaded (congested) cellular network, performance decreases, and superior network performance is unlikely to be ensured compared to competitors not experiencing such congestion. Furthermore, spectrum superiority must be across the depth of the market-relevant cellular frequencies (i.e., 600 MHz – 3.x GHz and higher). In other words, if a cellular operator "only" has, for example, 100 MHz @ 3.5 GHz to work with, it is unlikely that this would guarantee superior network performance across a market (country) compared to a much better-balanced spectrum portfolio.

The option space any operator has is to consider the following across the three key network quality dimensions;

Let us look at the hypothetical Western European country Mediana. Mediana, with a population of 25 million, has 3 mobile operators, each with 8 cellular frequency bands: incumbent Winky has a total cellular bandwidth of 270 MHz, Dipsy has 220 MHz, and Po has 320 MHz (having topped up an initially weaker spectrum position through acquisitions). Apart from having the most robust spectrum portfolio, Po also has more cell sites than any other operator in the market (10,000) and keeps winning the best network award. Winky, being the incumbent, is not happy about this situation. No new spectrum opportunities will become available in the next 10 years. Winky's cellular network, initially based on 900 MHz but densified over time, has about 20% fewer sites than Po's. Po's and Winky's deployed state of antenna technology is comparable.

What can Winky do to gain network leadership? Winky has assessed that Po has a ca. 20% stronger spectrum position, that the state of antenna technology is comparable, and that Po has ca. 20% more sites. Using the above formula, Winky estimates that Po has ca. 44% more raw cellular network quality available than its own capability. Winky commences a network modernization program that adds another 500 new sites and significantly improves its antenna technology. After this modernization program, Winky has decreased its site deficit to 10% fewer sites than Po and has almost 60% better antenna technology capability than Po. Overall, using the above network quality formula, Winky has changed its network position to a lead over Po of ca. 18%. In theory, it should have an excellent chance of capturing the best network award.
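A rough reconstruction of the Mediana arithmetic (in R), using the network quality formula above (quality ∝ spectrum × antenna capability × sites); the input ratios are the approximate, rounded figures quoted in the text:

```r
# Relative network quality as the product of the spectrum, antenna-capability,
# and site ratios between two operators.
quality_ratio <- function(spectrum, antenna, sites) spectrum * antenna * sites

# Before Winky's modernization: Po has ~20% more spectrum and ~20% more sites,
# with comparable antenna technology.
quality_ratio(spectrum = 1.2, antenna = 1.0, sites = 1.2)      # ~1.44 -> Po ahead by ~44%

# After modernization: Winky still has ~20% less spectrum, but ~60% better
# antenna capability and only ~10% fewer sites than Po.
quality_ratio(spectrum = 1 / 1.2, antenna = 1.6, sites = 0.9)  # ~1.20 -> Winky ahead by ~20%
# (the text quotes ca. 18%, i.e., slightly different rounding of the input ratios)
```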

Of course, Po could simply follow and deploy the same antenna technology as Winky and would easily overtake Winky's position due to its superior spectrum position (which Winky cannot beat for the next 10 to 15 years at least).

In economic terms, it may be tempting to conclude that Winky has avoided 625 million euros in spectrum fees by possessing 50 MHz less than Po (i.e., the median spectrum fee in Mediana of 0.50 euro per MHz per pop, times the avoided 50 MHz, times the population of Mediana of 25 million pops) and that this should surely allow Winky to make a lot of network (and market) investments to gain network leadership, e.g., by adding more sites (assuming it is possible to do so where they are needed) and investing in better antenna technology. However, do the math with realistic prices and costs incurred over a 10 to 15 year period (i.e., until the next spectrum opportunity), and you are more likely to find a higher total cost for Winky than the avoided spectrum fee. Also, Winky's strategy is easy to copy and overtake in Po's next modernization cycle.

Is there any value for operators in engaging in such a best-network equivalent of a "nuclear arms" race? That interesting question is for another article, though the answer (spoiler alert) is (maybe) not as black and white as one may think.

An operator can compensate for a weaker spectrum position by adding more cell sites and deploying better antenna technologies.

A superior spectrum portfolio is not an entitlement, but an opportunity to become the sustainable best network in a given market (for the duration that the spectrum is available to the operator, e.g., 10 – 15 years in Europe at least).

WESTERN EUROPE SPECTRUM POSITIONS.

A cellular operator's spectrum position is an important prerequisite for superior performance and customer experience. If an operator has the highest amount of spectrum (well balanced over low, mid, and high-frequency bands), it will have a powerful position from which to become the best network in that given market. Using Spectrum Monitor's Global Mobile Frequency database (last updated May 2021), I analyzed the spectrum positions of a total of 58 cellular operators in 16 Western European markets. The result is shown below as (a) total spectrum position, (b) low-band spectrum position, covering spectrum below and including 1500 MHz (SDL band), and (c) mid-band spectrum, covering the spectrum above 1500 MHz (SDL band). For clarity, I include the 3.X GHz (C-band) as mid-band and do not include any mmWave (n257 band) positions (which would anyway be high-band, obviously).

4 operators are in a category by themselves with 400+ MHz of total cellular bandwidth in their spectrum portfolios: A1 (Austria), TDC (Denmark), Cosmote (Greece), and Swisscom (Switzerland). TDC and Swisscom have incredibly strong low-band and mid-band positions compared to their competition. Magenta in Austria has a 20 MHz advantage over A1 in low-band (very good) but trails A1 by 92 MHz in mid-band (not so good). Cosmote follows slightly behind Vodafone on low-band (+10 MHz in Vodafone's favor) and heads the Greek race with +50 MHz (over Vodafone) in mid-band. All 4 operators should be far ahead of their competitors in network quality, at least if they use their spectrum resources wisely in combination with good (or superior) antenna technologies and a sufficient cellular network footprint. All else being equal, these 4 operators should be sustainably unbeatable based on their incredibly strong spectrum positions. Within Western Europe, I would, over the next few years, expect to see all-round best networks with very high best-network benchmark scores in Denmark (TDC), Switzerland (Swisscom), Austria (A1), and Greece (Cosmote). Western European countries with relatively smaller surface areas (e.g., <100,000 square km) should outperform much larger countries.

In fact, 3 of the 4 top spectrum-holding operators also have the best cellular networks in their markets. The only exception is A1 in Austria, which lost to Magenta in the most recent Umlaut best network benchmark. Magenta has the best low-band position in the Austrian market, providing the above-and-beyond indoor cellular coverage quality that low-band spectrum enables.

There are so many more interesting insights in my collected data. Alas, those are for another article at another time (e.g., topics like the economic value of being the best and winning awards, industry investment levels vs. performance, infrastructure strategies, incumbent vs. later-stage operator dynamics, 3.X GHz and mmWave positions in WEU, etc.).

The MNO rank within a country will depend on the relative spectrum position between the 1st and 2nd operators. If the difference is below 10% (i.e., dark red in the chart below), I assess that it will be relatively easy for number 2 to match or beat number 1 with improved antenna technology. As the relative strength of number 1's spectrum position over number 2 increases, it becomes increasingly difficult (assuming number 1 uses an optimal deployment strategy).

The Stars (e.g., #TDCNet/#Nuuday, #Swisscom, and #EE) have more than a 30% relative spectrum strength compared to the 2nd-ranked MNO in their given market. They would have to mess up severely not to take (or hold!) the best cellular network position in their relevant markets. Moreover, network-economically, the Stars should have a substantially better Capex position than their competitors (although 1 of the Stars seems a "bit" out of whack in its sustainable Capex spend, which may be due to a fixed broadband focus as well?). As a "cherry on the pie," both Nuuday/TDCNet and Swisscom have some of the strongest spectral overhead positions (i.e., MHz per pop) in Western Europe (relatively small populations against very strong spectrum portfolios), which obviously should enable superior customer experience.

HOW AND HOW NOT TO WIN BEST NETWORK AWARDS.

Out of the 16 cellular operators having the best networks (i.e., rank 1), 12 (75%) also had the strongest (in-market) spectrum positions. 3 operators with the second-best spectrum position ended up taking the best network position, and 1 operator (WindTre, Italy) with the 3rd-best spectrum position took the pole network position. The incumbent TIM (Italy) has the strongest spectrum position both in low-band (+40 MHz vs. WindTre) and mid-band (+52 MHz vs. WindTre). Clearly, it is not a given that having a superior spectrum position also leads to a superior network position, though 12 out of 16 operators do leverage their spectrum superiority over their respective competitors.

For operators with the 2nd-largest spectrum position, more variation is observed. 7 out of 16 such operators end up in the 2nd-best network position (using Umlaut scoring), 3 ended up as the best network, and the rest in either 3rd or 4th position. The reason is that the difference between the 2nd and 3rd spectrum rank positions is often not per se considerable, and therefore other effects, such as the number of sites, better antenna technologies, and/or a better engineering team, are more likely to be the decisive factors.

Nevertheless, the total spectrum is a strong predictor for having the best cellular network and winning the best network award (by Umlaut).

As I have collected quite a rich dataset for mobile operators in Western Europe, it may also be possible to model the expected ranking of operators in a given market. Maybe even reasonably predict an Umlaut score (Hakan, don't worry, I am not quite there … yet!). This said, while the dataset comprises 58+ operators across 16 markets, more data would be required to increase the confidence in benchmark predictions (if that is what one would like to do). In particular for predicting absolute benchmark scores (e.g., voice, data, and crowd) as compiled by Umlaut. Speed benchmarks, à la what Ookla provides, are (much) easier to predict with much less sophistication (IMO).

Here I will just show my little toy model using the following rank data (using Jupyter R);

The rank dataset has 64 rows representing rank data and 5 columns containing (1) performance rank (perf_rank, the response), (2) total spectrum rank (spec_rank, predictor), (3) low-band spectrum rank (lo_spec_rank, predictor), (4) high-band spectrum rank (hi_spec_rank, predictor), and (5) Hz-per-customer rank (hz_cust_rank, predictor).

Concerning the predictor (or feature) Hz-per-customer, I am tracking all cellular operators' so-called spectrum overhead, which indicates how much Hz can be assigned to a customer (obviously an over-simplification but nevertheless an indicator). Rank 1 means that there is a significant overhead, that is, a lot of spectral capacity per customer. Rank 4 has the opposite meaning: the spectral overhead is small, and we have less spectral capacity per customer. It is good to remember that this particular feature is dynamic even if the spectrum situation of a given cellular operator does not change (e.g., traffic and the customer base may grow).

A (very) simple illustration of the “toy model” is shown below, choosing only low-band and high-band ranks as relevant predictors. Almost 60% of the network-benchmark rank can be explained by the low- and high-band ranks.
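
For the curious, here is a minimal sketch of what such a toy model could look like in R. The file name is hypothetical, and the column names follow the dataset description above; treat it as an illustration rather than the actual model.

  # Load the rank data (the file name "weu_rank_data.csv" is hypothetical).
  ranks <- read.csv("weu_rank_data.csv")

  # Regress the network-benchmark rank on the low-band and high-band spectrum ranks.
  fit <- lm(perf_rank ~ lo_spec_rank + hi_spec_rank, data = ranks)

  # An R-squared in the order of 0.6 would match the "almost 60%" quoted above.
  summary(fit)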

The model can, of course, be enriched by including more features, such as effective antenna capability, Hz-per-Customer, Hz-per-Byte, coverage KPIs, incident rates, equipment aging, supplier, investment level (over the last 2 – 3 years), etc… Given the ongoing debate about the importance of the supplier to best networks (and their associated awards), I do not find a particularly strong correlation between RAN (incl. antenna) supplier, network performance, and benchmark rank. The total amount of deployed spectrum is a more important predictor. Of course, given the network performance formula above, if an antenna deployment delivers more effective spectral efficiency (or antenna “boost”) than competitors' deployments, it will increase the overall network quality for that operator. However, such an operator would still need to compensate for the potential lack of spectrum compared to a spectrum-superior competitor.

END THOUGHTS.

Having the best cellular network in a market is something to be very proud of. Winning best network awards is obviously great for an operator and its employees. However, it should really mean that the customers of that best network operator also get the best cellular experience compared to any other operator in that market. A superior customer experience is key.

Firstly, the essential driver (enabler) for best network or network leadership is a superior spectrum position, in low-band, mid-band, and, longer term, also in high-band (e.g., mmWave spectrum). The second is having a good coverage footprint across your market. With a superior spectrum portfolio this could even be achieved with fewer cell sites than a competitor with an inferior spectrum position (who is forced to densify earlier due to spectral capacity limitations as traffic increases). For a spectrum laggard, building more cell sites in an attempt to improve on or match a superior-spectrum competitor is costly (i.e., Capex, Opex, and time). Thirdly, having superior antenna technology deployed is essential. It is also a relatively “easy” way to catch up with a superior competitor, at least in the case of relatively minor spectrum position differences. Compared to buying additional spectrum (assuming such is available when you need it) or building out a substantial number of new cell sites to equalize a cellular performance difference, investing in the best (or better, or good-enough-to-win) antenna technology, particularly for a spectrum laggard, seems to be the best strategy. Economically, relative to the other two options, and operationally, as the time-to-catch-up can be relatively short.

After all this has been said and done, a superior cellular spectrum portfolio is one of the best predictors for having the best network and even winning the best network award.

Economically, it could imply that a spectrum-superior operator, depending on the spectrum distance to the next-best spectrum position in a given market, may not need to invest in the same level of antenna technology as an inferior operator, or could delay such investments to a more opportune moment. This could be important, particularly as advanced antenna development is still in its “toddler” state, and more innovative, powerful (and economical) solutions are expected over the next few years. Though, for operators with relatively minor spectrum differences, the battle will be fought via the advancement of antenna technology and further cell site sectorization (as opposed to building new sites).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this Blog. Also, many of my Deutsche Telekom AG and Industry colleagues, in general, have in countless ways contributed to my thinking and ideas leading to this little Blog. Again, I would like to draw attention to Petr Ledl and his super-competent team in Deutsche Telekom’s Group Research & Trials. Thank you so much for being a constant inspiration and always being available to talk antennas and cellular tech in general.

FURTHER READINGS.

Spectrum Monitoring, “Global Mobile Frequencies Database”, the last update on the database was May 2021. You have a limited number of free inquiries before you will have to pay an affordable fee for access.

Umlaut, “Umlaut Benchmarking”, is an important resource for mobile (and fixed) network benchmarks across the world. The Umlaut benchmarking methodology is the de-facto industry standard today and is applied in more than 120 countries, measuring over 200 mobile networks worldwide. I have also made use of the associated Connect Testlab resource; www.connect-testlab.com. Most network benchmark data goes back to at least 2017. The Umlaut benchmark is based on in-country drive tests for voice and data as well as crowd-sourced data. It is by a very big margin The cellular network benchmark to use for ranking cellular operators (imo).

Speedtest (Ookla), “Global Index”, most recent data is Q3, 2021. There are three Western European markets that I have not found any Umlaut (or P3 prior to 2020) benchmarks for; Denmark, France and Norway. For those markets I have (regrettably) had to use Ookla data which is clearly not as rich as Umlaut (at least for public domain data).

5G Standalone – European Demand & Expectations (Part I).

By the end of 2020, according to Ericsson, it was estimated that there were ca. 7.6 million 5G subscriptions in Western Europe (~1%). Compare this to North America's ca. 14 million (~4%) and North East Asia's 190 million (~11%) (e.g., China, South Korea, Japan, …).

Maybe Western Europe is not doing that great, when it comes to 5G penetration, in comparison with other big regional markets around the world. To some extent the reason may be that 4G networks across most of Western Europe are performing very well and, to an extent, more than servicing consumer demand. For example, in The Netherlands, consumers on T-Mobile's 4G get, on average, a download speed of 100+ Mbps. That is about 5× the speed you would, on average, get in the USA with 4G.

From the October 2021 statistics of the Global mobile Suppliers Association (GSA), 180 operators worldwide (across 72 countries) have already launched 5G, with 37% of those operators actively marketing 5G-based Fixed Wireless Access (FWA) to consumers and businesses. There are two main 5G deployment flavors; (a) non-standalone (NSA) deployment, piggybacking on top of 4G, which is currently the most common deployment model, and (b) standalone (SA) deployment, independent of legacy 4G. The 5G SA deployment model is expected to become the most common over the next couple of years. As of October 2021, 15 operators have launched 5G SA. It should be noted that operators with 5G SA launched are also likely to support 5G in NSA mode, to provide 5G to all customers with a 5G-capable handset (e.g., at the moment only 58% of commercial 5G devices support 5G SA). The only reason for not supporting both NSA and SA would be being a greenfield operator or not having any 4G network (none of that type comes to my mind tbh). Another 25 operators globally are expected to be near launching standalone 5G.

It should be evident, also from the illustration below, that mobile customers globally got or will get a lot of additional download speed with the introduction of 5G. As operators introduce 5G in their mobile networks, they will leapfrog their available capacity, speed, and quality for their customers. For Europe in 2021 you would, with 5G, get an average downlink (DL) speed of 154 ± 90 Mbps compared to the 2019 4G DL speed of 26 ± 8 Mbps. Thus, with 5G, in Europe, we have gained a whopping 6× in DL speed transitioning from 4G to 5G. In Asia Pacific, the quality gain is even more impressive with a 10× in DL speed, and somewhat less in North America with 4× in DL speed. In general, 5G speeds exceeding 200 Mbps on average may imply that operators have deployed 5G in the C-band (e.g., with the C-band covering 3.3 to 5.0 GHz).

The above DL speed benchmark (by Opensignal) gives a good teaser of what is to come and what to expect from 5G download speed, once a 5G network is near you. There is of course much more to 5G than downlink (and uplink) speed. Some caution should be taken in the above comparison between 4G (2019) and 5G (2021) speed measurements. There is still a fair number of networks around the world without 5G, or that have only just started upgrading to 5G. I would expect the 5G average speed to reduce a bit and the speed variance to narrow as well (i.e., performance becoming more consistent).

In a previous blog I describe what to realistically expect from 5G and criticize some of the visionary aspects of the original 5G white paper published back in February 2015. Of course, the tech world doesn't stand still, and since the original 5G visionary paper by El Hattachi and Erfanian, 5G has become a lot more tangible as operators deploy it or are near deployment. More and more operators have launched 5G on top of their 4G networks, in the configuration we define as non-standalone (i.e., 5G NSA). Within the next couple of years, coinciding with access to higher frequencies (>2.1 GHz) with substantial (unused or underutilized) spectrum bandwidths of 50+ MHz, 5G standalone (SA) will be launched. Already today many high-end handsets support 5G SA, ensuring a leapfrog in customer experience above and beyond sheer mobile broadband speeds.

The below chart illustrates what to expect from 5G SA, what we already have in the “pocket” with 5G NSA, and how that may compare to existing 4G network capabilities.

There cannot be much doubt that with the introduction of the 5G Core (5GC) enabling 5G SA, we will enrich our capability and service-enabler landscape. Whether all of this cool new-ish “stuff” we get with 5G SA will make much top-line sense for operators and bring convenience for consumers at large is a different story for a near-future blog (so stay tuned). Also, there should not be too much doubt that 5G NSA already provides most of what the majority of our consumers are looking for (more speed).

Overall, 5G SA brings benefits, above and beyond NSA, on (a) round-trip delay (latency), which will be substantially lower in SA, as 5G does not piggyback on the slower 4G, enabling the low latency of ultra-reliable low-latency communications (uRLLC), (b) a factor 250× improvement in the device density that can be handled (1 million devices per km2), supporting massive machine-type communication scenarios (mMTC), (c) support for communication services at higher vehicular speeds, (d) in theory, lower device power consumption than 5G NSA, and (e) new and possibly less costly ways to achieve higher network (and connection) availability (e.g., with uRLLC).

Compared to 4G, 5G SA brings with it a more flexible, scalable, and richer set of quality-of-service enablers. A 5G user equipment (UE) can have up to 1,024 so-called QoS flows versus a 4G UE that can support up to 8 QoS classes (tied into the evolved packet core bearer). The advantage of moving to 5G SA is a significant reduction in QoS-driven signaling load and management processing overhead, in comparison to what is the case in a 4G network. In 4G, it has been clear that the QoS enablers did not really match the requirements of many present-day applications (i.e., the brutal truth maybe is that the 4G QoS was outdated before it went live). This changes with the introduction of 5G SA.

So, when is it a good idea to implement 5G Standalone for mobile operators?

There are maybe three main events that should trigger operators to prepare for and launch 5G SA;

  1. Economical demand for what 5G SA offers.
  2. Critical mass of 5G consumers.
  3. Want to claim being the first to offer 5G SA.

with the 3rd point being the least serious but certainly not an unlikely factor in deploying 5G SA. Apart from potentially enriching the consumer experience, there are several operational advantages of transitioning to a 5GC, such as a more mature IT-like cloudification of our telecommunications networks (i.e., going telco-cloud native), leading to (if designed properly) a higher degree of automation and autonomous network operations. Further, it may also allow the braver parts of telco-land to move a larger part of its network infrastructure capabilities into the public-cloud domain operated by hyperscalers or network-cloud consortia (if such entities appear). Another element of the 5G SA cloud nativification (a new word?) that is frequently not well considered is that it will allow operators to start out (very) small and scale up as business and consumer demand increases. I would expect that, particularly with hyperscalers and of course the not-so-unusual telco-supplier suspects (e.g., Ericsson, Nokia, Huawei, Samsung, etc…), operators could launch fairly economical minimum viable products based on a minimum set of 5G SA capabilities sufficient to provide new and cost-efficient services. This will allow early entry for new types of business-to-business QoS- and (or) slice-based services based on our new 5G SA capabilities.

Western Europe mobile market expectations – 5G technology share.

By the end of 2021, it is expected that Western Europe will have in the order of 36 million 5G connections, around a 5% 5G penetration, increasing to 80 million (11%) by the end of 2022. By 2024 to 2025, it is expected that 50% of all mobile connections will be 5G based. As of October 2021, ca. 58% of commercially available mobile devices already support 5G SA. This SA share is anticipated to grow rapidly over the next couple of years, making 5G NSA increasingly unimportant.
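
For the curious, the three penetration points quoted above (ca. 5% in 2021, 11% in 2022, and 50% around 2024 to 2025) are roughly consistent with a simple logistic adoption curve. The sketch below is purely illustrative and not the underlying forecast; the 100% saturation level and the mid-2024/2025 midpoint are my own assumptions.

  # Illustrative logistic 5G adoption curve anchored on the quoted penetration figures.
  t0 <- 2024.5                                  # assumed year where penetration reaches 50%
  k  <- log((1 - 0.11) / 0.11) / (t0 - 2022)    # slope implied by the 11% point in 2022
  p  <- function(t) 1 / (1 + exp(-k * (t - t0)))

  round(100 * p(2021:2026), 1)   # gives ~5% in 2021 and ~11% in 2022, consistent with the text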

Approaching 50% of all connections being 5G appears to be a very good time for operators to aim at having 5G standalone implemented and launched. Also, this may coincide with substantial efforts to re-farm existing frequency spectrum from 4G to 5G as 5G data traffic exceeds that of 4G.

For Western Europe in 2021, ca. 18% of total mobile connections are business related. This number is expected to steadily increase to about 22% by 2030. With the introduction of the new 5G SA capabilities, as briefly summarized above, it is to be expected that the share of business connections on 5G will quickly catch up with that overall level, and that businesses will be able to directly monetize uRLLC, mMTC, and the underlying QoS and network slicing enablers. For consumers, 5G SA will bring some additional benefits but maybe less obvious new monetization possibilities, beyond the proportion of consumers caring about latency (e.g., gamers). Though it appears likely that the new capabilities could bring operators efficiency opportunities leading to improved margins earned on consumers (for another article).

Recommendation:

  • Learn as much as possible from recent IT cloudification journeys (e.g., from monolithic to cloud, understand the pros and cons of lift-and-shift strategies and the intricacies of operating cloud-native environments in public cloud domains).
  • Aim to have the 5GC available for a 5G SA launch by 2024 at the latest.
  • Run 5GC minimum-viable-product PoCs with friendly (business) users prior to a bigger launch.
  • As 5G is launched on C-band / 3.x GHz, it may likewise be a good point in time to have 5G SA available. At least for B2B customers that may benefit from uRLLC, lower latency in general, mMTC, a much richer set of QoS, network slicing, etc…
  • Have a solid 4G-to-5G spectrum re-farming strategy ready between now and 2024 (2024 is too late imo). This should map out 4G+NSA and SA supply dynamics as customers increasingly get 5G SA capabilities in their devices.

Western Europe mobile market expectations – traffic growth.

With the growth of 5G connections and the expectation that 5G will further boost mobile data consumption, it is expected that by 2023 – 2024, 50% of all mobile data traffic in Western Europe will be attributed to 5G. This is particularly driven by the increased rollout of 3.x GHz across the Western European footprint and the associated massive MiMo (mMiMo) antenna deployments, with 32×32 seeming to be telco-land's choice. In blended mobile data consumption, a CAGR of around 34% is expected between 2020 and 2030, with 2030 having about 26× more mobile data traffic than 2020. Though, I suspect that in Western Europe, aggressive fiberization of the telecommunications consumer and business markets, over the same period, may ultimately slow the growth (and demand) on mobile networks.

A typical Western European operator would have between 80 – 100+ MHz of bandwidth available for its 4G downlink services. The bandwidth variation is determined by how much is required for residual 3G and 2G services and whether the operator has acquired 1500 MHz SDL (supplementary downlink) spectrum. With an average 4G antenna configuration of 4×4 MiMo and an effective spectral efficiency of 2.25 Mbps/MHz/sector, one would expect an average 4G downlink speed of 300+ Mbps per sector (@ 90 MHz committed to 4G). For a 5G SA scenario with 100 MHz of 3.x GHz and 2×10 MHz @ 700 MHz, we should expect an average downlink speed of 500+ Mbps per sector for a 32×32 massive MiMo deployment at the same effective spectral efficiency as 4G. In this example, although naïve, quality of coverage is ignored. With 5G, we more than double the throughput and capacity available to the operator. So the question is whether we remain naïve and don't care too much about the coverage aspects of 3.x GHz, as beam-forming will save the day and all will remain peachy for our customers (if something sounds too good to be true, it rarely is true).
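
For transparency, here is the back-of-envelope behind the 300+ and 500+ Mbps sector figures above, as I read them. The antenna “boost” factors are my own assumptions (they are not stated explicitly), picked only so the rounded numbers come out; treat it as a sketch rather than the actual calculation.

  # Sector throughput ~ bandwidth x effective spectral efficiency x antenna boost.
  se <- 2.25                    # effective spectral efficiency, Mbps per MHz per sector

  bw_4g <- 90                   # MHz committed to 4G downlink
  bw_5g <- 100 + 10             # MHz downlink: 100 MHz @ 3.x GHz plus 10 MHz @ 700 MHz

  boost_4x4   <- 1.5            # assumed boost for 4x4 MiMo (hypothetical)
  boost_32x32 <- 2.2            # assumed boost for 32x32 massive MiMo (hypothetical)

  bw_4g * se * boost_4x4        # ~300 Mbps per sector
  bw_5g * se * boost_32x32      # ~545 Mbps per sector, i.e., 500+ Mbps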

In an urban environment it is anticipated that, with beam-forming available in our mMiMo antenna solutions, downlink coverage will be reasonably fine (i.e., on average) with 3.x GHz antennas overlaid on operators' existing macro-cellular footprint, with minor densification required (initially). In the situation where the 3.x GHz uplink cannot reach the on-macro-site antenna, the uplink can be closed by 5G @ 700 MHz, or other lower cellular frequencies available to the operator and assigned to 5G (if in standalone mode). Some concerns have been expressed in the literature that present advanced higher-order antennas (e.g., 16×16 and above) will on average provide a poorer coverage quality over a macro-cellular area than what consumers would be used to with lower-order antennas (e.g., 4×4 or lower), and that the only practical (at least with today's state of antennas) solution would be sectorization to make up for beam-forming shortfalls. In rural and suburban areas advanced antennas would be more suitable, although the demand would be a lot less than in a busy urban environment. Of course, closing the 3.x GHz link with the existing rural macro-cellular footprint may be a bigger challenge than in an urban clutter. Thus, massive MiMo deployments in rural areas may be much less economical and business-case friendly to deploy. As more and more operators deploy 3.x GHz higher-order mMiMo, more field experience will become available. So stay tuned to this topic. Although I would reserve a lot more CapEx in my near-future budget plans for substantially more sectorization in urban clutter than what I am sure is currently in most operators' plans. Maybe in rural and suburban areas the need for sectorization would be much smaller, but then densification may be needed in order to provide decent 3.x GHz coverage in general.

Western Europe mobile market expectations – 5G RAN Capex.

That brings us to another important aspect of 5G deployment, the Radio Access Network (RAN) capital expenditures (CapEx). I use my own high-level (EU-based) forecast model, based on a technology deployment scenario per Western European country, which in general considers 1 – 3% growth in new sites per annum until 2024; from 2025 onwards, I assume 2 – 5% growth due to the densification needs of 5G, driven by traffic growth and the before-mentioned coverage limitations of 3.x GHz. The exact timing and growth percentages depend on the initial 5G commercial launch, the timing of the 3.x GHz deployment, traffic density (per site), and site density considering a country's surface area.

According to Statista, Western Europe had a cellular site base of 421 thousand in 2018. Further, Statista expected this base to grow by 2% per annum in the years after 2018. This gives an estimated 438k cellular sites in 2020, which has been assumed as the starting point. The model estimates that by 2030, over the next 10 years, an additional 185k (+42%) sites will have been built in Western Europe to support 5G demand. 65% (120+k) of the site growth, over the next 10 years, will be in Germany, France, Italy, Spain, and the UK. These are all countries with relatively large geographical areas that are underserved with mobile broadband services today. They are also countries with incumbent mobile networks originally based on 900 MHz GSM grids (of course densified since the good old GSM days), and thus having coarser cellular grids with a higher degree of mismatch with the higher 5G cellular frequencies (i.e., ≥ 2.5 GHz). In the model, I have not accounted for an increased demand for sectorization to keep coverage quality upon higher-order mMiMO deployments. This may introduce some uncertainty into the Capex assessment. However, I anticipate that the sectorization uncertainty may be covered by the accelerated site demand in the last 5 years of the period.
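
As a rough sanity check of the site-growth numbers, a minimal sketch follows. The actual model works per country; the 2.5% and 4% rates below are simply mid-range picks from the 1 – 3% and 2 – 5% ranges mentioned above.

  # Simple Western Europe site-growth trajectory, 2021 - 2030.
  sites_2020 <- 438e3                              # estimated WEU site base in 2020
  growth     <- c(rep(0.025, 4), rep(0.04, 6))     # 2021-2024, then 2025-2030
  trajectory <- sites_2020 * cumprod(1 + growth)

  tail(trajectory, 1) - sites_2020                 # ~175k additional sites by 2030
  (tail(trajectory, 1) / sites_2020 - 1) * 100     # ~+40%, in the ballpark of the +42% above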

In the illustration above, the RAN capital investment assumes all sites will eventually be fiberized by 2025. That may however be an optimistic assumption and for some countries, even in Western Europe, unrealistic and possibly highly uneconomical. New sites, in my model, are always fiberized (again possibly too optimistic). Miscellaneous (Misc.) accounts for any investments needed to support the RAN and Fiber investments (e.g., Core, Transport, Cap. Labor, etc..).

In the economic estimation, price erosion has been taken into account. This erosion is a blended figure accounting for annual price reductions on equipment and increases in labor cost. I am assuming a 5-year replacement cycle with an associated 10% average price increase every 5 years (on the previous year's eroded unit price). This accounts for higher-capability equipment being deployed to support the increased traffic and service demand. The economic justification for the increased unit price is that otherwise even more new sites would be required than assumed in this model. In my RAN CapEx projection model, I am assuming rational, that is, demand-driven, deployment. Thus, operators' investments are primarily demand driven, e.g., only deploying the infrastructure required within a given financial recovery period (e.g., depreciation period). If an operator's demand model indicates that it will need a given antenna configuration within the financial recovery period, it deploys that. Not a smaller configuration. Not a bigger configuration. Only the one required by demand within the financial recovery period. Of course, there may be operators with deployment incentives other than pure demand. Though on average I suspect this would have a negligible effect at the scale of Western Europe (i.e., on average, Western European telco-land is assumed to be reasonably economically rational).
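
To illustrate the unit-price logic, a minimal sketch follows. The 5% annual blended erosion rate is a placeholder of my own (the blended figure used in the model is not disclosed); the 5-year replacement cycle and the 10% step-up are as described above.

  # Indexed unit price with annual blended erosion and a 10% capability step-up every 5 years.
  years    <- 2022:2030
  erosion  <- 0.05                                 # hypothetical annual blended erosion rate
  price    <- numeric(length(years))
  price[1] <- 100                                  # indexed unit price in 2022

  for (i in 2:length(years)) {
    price[i] <- price[i - 1] * (1 - erosion)       # annual price erosion
    if ((years[i] - years[1]) %% 5 == 0)           # replacement cycle: higher-capability equipment
      price[i] <- price[i] * 1.10
  }
  round(price, 1)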

All in all, demand over the next 8 years leads to an 80+ billion Euro RAN capital expenditure, required between 2022 and 2030. This is equivalent to an annual RAN investment level of a bit under 10 billion Euro. The average RAN CapEx to mobile revenue over this period would be ca. 6.3%, which is not a shockingly high level (tbh) for a period that will see an intense rollout of 5G at increasingly higher frequencies and increasingly capable antenna configurations as demand picks up. The biggest threat to capital expenditures is poor demand models (or no demand models) and planning processes investing too much too early, ultimately resulting in buyer's regret and cyclical, inefficient investment levels over the next 10 years. And for the reader still awake and sharp, please do note that I have not mentioned the huge elephant in the room … the associated incremental operational expense (OpEx) that such investments will incur.

As mobile revenues are not expected to increase over the period 2022 to 2030, this leaves the main purpose of 5G investments as maintaining the current business level, dominated by consumer demand. I hope this scenario will not materialize. Given how much extra quality and service potential 5G will deliver over the next 10 years, it seems rather pessimistic to assume that our customers would not be willing to pay more for the service enhancements that 5G brings with it. Alas, time will show.

Acknowledgement.

I greatly acknowledge my wife Eva Varadi for her support, patience, and understanding during the creative process of writing this Blog. Petr Ledl, head of DTAG's Research & Trials, and his team's work have been a continuous inspiration to me (thank you so much for always picking up on that phone call, Petr!). Also, many of my Deutsche Telekom AG, T-Mobile NL & Industry colleagues in general have in countless ways contributed to my thinking and ideas leading to this little Blog. Thank you!

Further readings.

Kim Kyllesbech Larsen, “5G Standalone Will Deliver! – But What?”, Keynote presentation at Day 2 Telecoms Europe 5G Conference, (November 2021). A YouTube voice over is given here on the presentation.

Kim Kyllesbech Larsen, “5G Economics – The Numbers (Appendix X).”, Techneconomyblog.com, (July 2017).

Kim Kyllesbech Larsen, “5G Economics – An Introduction (Chapter 1)”, Techneconomyblog.com, (December 2016).

Peter Boyland, “The State of Mobile Network Experience – Benchmarking mobile on the eve of the 5G revolution”, OpenSignal, (May 2019).

Ian Fogg, “Benchmarking the Global 5G Experience”, OpenSignal, (November 2021).

Rachid El Hattachi & Javan Erfanian , “5G White Paper”, NGMN Alliance, (February 2015). See also “5G White Paper 2” by Nick Sampson (Orange), Javan Erfanian (Bell Canada) and Nan Hu (China Mobile).

Global Mobile Frequencies Database (last update, 25 May 2021). I very much recommend subscribing to this database (€595 single-user license). It provides a wealth of information on spectrum portfolios across the world.

Thomas Alsop, “Number of telecom tower sites in Europe by country in 2018 (in 1,000s)”, Statista Telecommunications, (July 2020).

Jia Shen, Zhongda Du, & Zhi Zhang, “5G NR and enhancements, from R15 to R16”, Elsevier Science, (2021). Provides a really good overview of what to expect from 5G standalone. In particular, a very good comparison with what is provided with 4G and the differences with 5G (SA and NSA).

Ali Zaidi, Fredrik Athley, Jonas Medbo, Ulf Gustavsson, Giuseppe Durisi, & Xiaoming Chen, “5G Physical Layer Principles, Models and Technology Components”, Elsevier Science, (2018). The physical layer will always pose a performance limitation on a wireless network. Fundamentally, the amount of information that can be transferred between two locations will be limited by the availability of spectrum, the laws of electromagnetic propagation, and the principles of information theory. This book provides a good description of the 5G NR physical layer including its benefits and limitations. It provides a good foundation for modelling and simulation of 5G NR.

Thomas L. Marzetta, Erik G. Larsson, Hong Yang, Hien Quoc Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (2016). Excellent account of the workings of advanced antenna systems such as massive MiMo. 

Western Europe: Western Europe has a bit of a fluid definition (I have found). Here Western Europe includes the following countries, comprising a population of ca. 425 million people (in 2021); Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, United Kingdom, Andorra, Cyprus, Faeroe Islands, Greenland, Guernsey, Jersey, Malta, Luxembourg, Monaco, Liechtenstein, San Marino, Gibraltar.

5G Economics – The Numbers (Appendix X).

[Figure: The 5G essence.]

100% COVERAGE.

100% 5G coverage is not going to happen with 30 – 300 GHz millimeter-wave frequencies alone.

The “NGMN 5G white paper”, which I will in the subsequent parts refer to as the 5G vision paper, requires the 5G coverage to be 100%.

At 100% cellular coverage it becomes somewhat academic whether we talk about population coverage or geographical (area) coverage. The best way to make sure you cover 100% of population is covering 100% of the geography. Of course if you cover 100% of the geography, you are “reasonably” ensured to cover 100% of the population.

While it is theoretically possible to cover 100% (or very near to) of population without covering 100% of the geography, it might be instructive to think why 100% geographical coverage could be a useful target in 5G;

  1. Network-augmented driving and support for various degrees of autonomous driving would require all roads to be covered (however small).
  2. Internet of Things (IoT) sensors and actuators are likely going to be of use also in rural areas (e.g., agriculture, forestation, security, waterways, railways, traffic lights, speed-detectors, villages …) and would require a network to connect to.
  3. Given many users' personal-area IoT networks (e.g., fitness & health monitors, location detection, smart devices in general), ubiquitous coverage becomes essential.
  4. The Internet of flying things (e.g., drones) is also likely to benefit from 100% area and aerial coverage.

However, many countries remain lacking in comprehensive geographical coverage. Here is an overview of the situation in EU28 (as of 2015);

[Figure: Broadband coverage in EU28.]

For EU28 countries, 14% of all households in 2015 still had no LTE coverage. This was approx. 30+ million households, or equivalent to 70+ million citizens, without LTE coverage. The 14% might seem benign. However, it hides a rural neglect, with 64% of rural households not having LTE coverage. One of the core reasons for the lack of rural (population and household) coverage is mainly an economic one. Due to the relatively low number of people covered per rural site, compounded by affordability issues for the rural population, rural sites overall tend to have low or no profitability. Network sharing can, however, improve rural site profitability as site-related costs are shared.
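
For the curious, the quoted shares can be used to back out the implied EU28 totals; a minimal sketch, reading “30+ million” and “70+ million” as roughly 31 and 71 million.

  # Implied EU28 totals behind the 2015 coverage figures above.
  hh_without_lte <- 31e6            # "30+ million" households without LTE coverage
  share_without  <- 0.14            # 14% of all households

  hh_without_lte / share_without    # implied total EU28 households, roughly 220 million
  71e6 / hh_without_lte             # implied persons per household, roughly 2.3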

From an area coverage perspective, the 64% of rural households in EU28 not having LTE coverage is likely to amount to a sizable lack of LTE coverage area. This rural proportion of areas and households is also very likely by far the least profitable to cover for any operator, possibly even with very progressive network sharing arrangements.

Fixed broadband, Fiber to the Premises (FTTP) and DOCSIS3.0, lags further behind mobile LTE-based broadband. Maybe not surprisingly, from a business economics perspective, fixed broadband is largely unavailable in rural areas across EU28.

The chart below illustrates the variation in lack of broadband coverage across LTE, Fiber to the Premises (FTTP) and DOCSIS3.0 (i.e., Cable) from a total country perspective (i.e., rural areas included in average).

[Figure: Delta to 100% household coverage.]

We observe that most countries have very far to go on fixed broadband provisioning (i.e., FTTP and DOCSIS3.0), and even LTE lacks complete coverage. The rural coverage view (not shown here) would be substantially worse than the above Total view.

The 5G ambition is to cover 100% of all population and households. Due to the demographics of how rural households (and populations) are spread, it is also likely that fairly large geographical areas would need to be covered in order to come true on the 100% ambition.

It would appear that bridging this lack of broadband coverage would be best served by a cellular-based technology. Given the fairly low population density in such areas, a relatively higher average service quality (i.e., broadband) could be delivered as long as the cell range is optimized and sufficient spectrum at a relatively low carrier frequency (< 1 GHz) is available. It should be remembered that the super-high 5G 1 – 10 Gbps performance cannot be expected in rural areas. Due to the lower carrier frequency range needed to provide economic rural coverage, both advanced antenna systems and very large bandwidths (e.g., such as found in the mm-frequency range) would not be available to those areas, thus limiting the capacity and peak performance possible even with 5G.

I would suspect that, irrespective of the 100% ambition, telecom providers would be challenged by the economics of cellular deployment and traffic distribution. Rural areas really suck in terms of profitability, even in fairly aggressive sharing scenarios. Although multi-party (more than 2) sharing might be a way to minimize the profitability burden of deep rural coverage.

[Figure: The "ugly tail" of traffic distribution across cellular sites.]

The above chart shows the relationship between traffic distribution and sites. As a rule of thumb, 50% of revenue is typically generated by 10% of all sites (i.e., in a normal legacy mobile network), and approx. 50% of (rural) sites share roughly 10% of the revenue. Note: in emerging markets the distribution is somewhat steeper as less comprehensive rural coverage typically exists. (Source: The ABC of Network Sharing – The Fundamentals.)

Irrespective of my relative pessimism about the wider coverage utility and economics of millimeter-wave (mm-wave) based coverage, there shall be no doubt that mm-wave coverage will be essential for smaller and smallest cell coverage where, due to the density of users or applications, extreme (in comparison to today's demand) data speeds and capacities will be required. Millimeter-wave coverage-based architectures offer very attractive / advanced antenna solutions that further allow for increased spectral efficiency and throughput. Also, the possibility of using mm-wave point-to-multipoint connectivity as a last-mile replacement for fiber appears very attractive in rural and sub-urban clutters (and possibly beyond, if the cost of the electronics drops in line with the expected huge increase in demand for such). This last point, however, is in my opinion independent of 5G, as Facebook has shown with their Terragraph development (i.e., a 60 GHz WiGig-based system). A great account of mm-wave wireless communications systems can be found in T.S. Rappaport et al.'s book “Millimeter Wave Wireless Communications”, which not only covers the benefits of mm-wave systems but also provides an account of the challenges. It should be noted that this topic is still a very active (and interesting) research area that is relatively far away from having reached maturity.

In order to provide 100% 5G coverage for the mass market of people & things, we need to engage the traditional cellular frequency bands from 600 MHz to 3 GHz.

1 – 10 Gbps PEAK DATA RATE PER USER.

Getting a gigabit-per-second speed is going to require a lot of frequency bandwidth, highly advanced antenna systems, and lots of additional cells. And that is likely going to lead to a (very) costly 5G deployment, irrespective of the anticipated reduced unit cost or relative cost per Byte or bit-per-second.

At 1 Gbps it would take approx. 16 seconds to download a 2 GB SD movie. It would take less than a minute for the HD version (i.e., at 10 Gbps it just gets better;-). Say you have a 16 GB smartphone; you lose maybe up to 20+% to the OS, leaving around 13 GB for things to download. With 1 Gbps it would take less than 2 minutes to fill up your smartphone's storage (assuming you haven't run out of credit on your data plan or reached your data ceiling before then … of course, unless you happen to be a customer of T-Mobile US, in which case you can binge on and have no problems!).
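
The arithmetic behind these download times is simple enough to check; a minimal sketch (1 byte = 8 bits).

  # Seconds needed to download a given volume at a given speed.
  seconds_to_download <- function(gigabytes, gbps) gigabytes * 8 / gbps

  seconds_to_download(2, 1)     # 2 GB SD movie at 1 Gbps  -> 16 seconds
  seconds_to_download(13, 1)    # ~13 GB of free storage   -> ~104 seconds, i.e., under 2 minutes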

The biggest share of broadband usage comes from video streaming, which takes up 60% to 80% of all volumetric traffic depending on country (i.e., LTE terminal penetration dependent). Providing a higher speed to your customer than is required by the applied video streaming technology and the smartphone or tablet display being used seems a somewhat futile thing to aim for. The Table below provides an overview of streaming standards, their optimal speeds, and typical viewing distance for optimal experience;

[Table: Video resolution vs. bandwidth requirements.]

Source: 5G Economics – An Introduction (Chapter 1).

So … 1 Gbps could be cool … if we deliver 32K video to our customers' end devices, i.e., 750 – 1600 Mbps optimal data rate. Though it is hard to see customers benefiting from this performance boost given current smartphone or tablet display sizes. The screen size really has to be ridiculously large to truly benefit from this kind of resolution. Of course, Star Trek-like full immersion (i.e., holodeck) scenarios would arguably require a lot (=understatement) of bandwidth and even more (=beyond understatement) computing power … though such a scenario appears unlikely to be coming out of cellular devices (even in Star Trek).

1 Gbps fixed broadband plans have started to sell across Europe, typically on fiber networks, although also on DOCSIS3.1 (10 Gbps DS / 1 Gbps US) networks in a few places. It will only be a matter of time before we see 10 Gbps fixed broadband plans being offered to consumers. Irrespective of compelling use cases possibly lacking, it might at least give you the bragging rights of having the biggest.

From the European Commission's “Europe's Digital Progress Report 2016”, 22% of European homes subscribe to fast broadband access of at least 30 Mbps. An estimated 8% of European households subscribe to broadband plans of at least 100 Mbps. It is worth noting that this is not a problem of coverage, as according to the EC's “Digital Progress Report” around 70% of all homes are covered with at least 30 Mbps and ca. 50% are covered with speeds exceeding 100 Mbps.

The chart below illustrates the broadband speed coverage in EU28;

[Figure: Broadband speed household coverage in EU28.]

Even if 1 Gbps fixed broadband plans are being offered, the majority of European homes are still at speeds below 100 Mbps. Possibly suggesting that affordability and household economics play a role, and that the basic perceived need for speed might not (yet?) be much beyond 30 Mbps.

Most aggregation and core transport networks are designed, planned, built, and operated on the assumption that customer demand is dominated by packages of lower than 100 Mbps. As 1 Gbps and 10 Gbps get commercial traction, substantial upgrades are required in aggregation, core transport, and, last but not least, possibly also at the access level (to design shorter paths). It is highly likely that distances between access, aggregation, and core transport elements are too long to support these much higher data rates, leading to very substantial redesigns and physical work to support this push to substantially higher throughputs.

Most telecommunications companies will require very substantial investments in their existing transport networks, all the way from access to aggregation through the optical core switching networks and out into the world wide web of the internet, to support 1 Gbps to 10 Gbps. Optical switching cards need to be substantially upgraded, and legacy IP/MPLS architectures might no longer work very well (i.e., a scale & complexity issue).

Most analysts today believe that incumbent fixed & mobile broadband telecommunications companies with a reasonable modernized transport network are best positioned for 5G compared to mobile-only operators or fixed-mobile incumbents with an aging transport infrastructure.

What about the state of LTE speeds across Europe? OpenSignal recurrently reports on the State of LTE, the following summarizes LTE speeds in Mbps as of June 2017 for EU28 (with the exception of a few countries not included in the OpenSignal dataset);

[Figure: OpenSignal State of LTE, June 2017 (EU28).]

The OpenSignal measurements are based on more than half a million devices and almost 20 billion measurements over the first 3 months of 2017.

The 5G speed ambition is, by today's standards, 10 to 30+ times away from present 2016/2017 household fixed broadband demand or the reality of provided LTE speeds.

Let us look at cellular spectral efficiency to be expected from 5G. Using the well known framework;

[Figure: Cellular capacity fundamentals – system throughput C = B × η × N, i.e., bandwidth × spectral efficiency × number of cells.]

In essence, I can provide very high data rates in bits per second by providing a lot of frequency bandwidth B, use the most spectrally efficient technologies maximizing η, and/or add as many cells N that my economics allow for.

In the following I rely largely on Jonathan Rodriguez's great book “Fundamentals of 5G Mobile Networks” as a source of inspiration.

The average spectral efficiency is expected to come out in the order of 10 Mbps/MHz/cell using advanced receiver architectures, multi-antenna, multi-cell transmission, and cooperation. So pretty much all the high-tech goodies we have in the toolbox are being put to use to squeeze out as many bits per spectral Hz as possible, and in a sustainable manner. Under very ideal Signal-to-Noise-Ratio conditions, massive antenna arrays of up to 64 antenna elements (i.e., an optimum) seem to indicate that 50+ Mbps/MHz/Cell might be feasible in peak.

So for a spectral efficiency of 10 Mbps/MHz/cell and a demanded 1 Gbps data rate we would need 100 MHz frequency bandwidth per cell (i.e., using the above formula). Under very ideal conditions and relative large antenna arrays this might lead to a spectral requirement of only 20 MHz at 50 Mbps/MHz/Cell. Obviously, for 10 Gbps data rate we would require 1,000 MHz frequency bandwidth (1 GHz!) per cell at an average spectral efficiency of 10 Mbps/MHz/cell.
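
The same per-cell bandwidth arithmetic in R (per cell, i.e., N = 1 in the framework above):

  # Required bandwidth per cell: B = R / eta.
  required_mhz <- function(rate_mbps, eta_mbps_per_mhz) rate_mbps / eta_mbps_per_mhz

  required_mhz(1e3, 10)    # 1 Gbps at 10 Mbps/MHz/cell  -> 100 MHz
  required_mhz(1e4, 10)    # 10 Gbps at 10 Mbps/MHz/cell -> 1,000 MHz (1 GHz)
  required_mhz(1e3, 50)    # 1 Gbps at 50 Mbps/MHz/cell  -> 20 MHz (very ideal conditions)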

The spectral efficiency assumed for 5G heavily depends on the successful deployment of many-antenna-element arrays (e.g., Massive MiMo, beam-forming antennas, …). Such fairly complex antenna deployment scenarios work best at higher frequencies, typically above 2 GHz. Also, such antenna systems work better with TDD than FDD, with some margin on spectral efficiency. These advanced antenna solutions work perfectly in the millimeter-wave range (i.e., ca. 30 – 300 GHz), where the antenna elements are much smaller and antennas can be made fairly (very) compact (note: the resonance frequency of the antenna is proportional to half the wavelength, which is inversely proportional to the carrier frequency; thus higher frequencies need smaller material dimensions to operate).

Below 2 GHz, higher-order MiMo becomes increasingly impractical, and the spectral efficiency regresses to the limitation of a simple single-path antenna, substantially lower than what can be achieved at much higher frequencies with, for example, massive MiMo.

So for the 1 Gbps to 10 Gbps data rates to work out, we have the following relatively simple rationale;

  • High data rates require a lot of frequency bandwidth (>100 MHz to several GHz per channel).
  • Lots of frequency bandwidth is increasingly easier to find at high and very high carrier frequencies (i.e., why the millimeter-wave frequency band between 30 – 300 GHz is so appealing).
  • High and very high carrier frequencies result in small, smaller, and smallest cells with very high bits per second per unit area (i.e., the area is very small!).
  • High and very high carrier frequencies allow me to get the most out of higher-order MiMo antennas (i.e., with lots of antenna elements).
  • Due to the fairly limited cell range, I boost my overall capacity by adding many smallest cells (i.e., at the highest frequencies).

We need to watch out for the small-cell densification, which tends not to scale very well economically. The scaling becomes a particular problem when we need hundreds of thousands of such small cells, as is expected in most 5G deployment scenarios (i.e., particularly driven by the x1000 traffic increase). The advanced antenna systems required (including the computation resources needed) to max out on spectral efficiency are likely going to be one of the major causes of breaking the economic scaling. Although there are many other CapEx and OpEx scaling factors to be concerned about for small-cell deployment at scale.

Further, for mass market 5G coverage, as opposed to hot traffic zones or indoor solutions, lower carrier frequencies are needed. These will tend to be in the usual cellular range we know from our legacy cellular communications systems today (e.g., 600 MHz – 2.1 GHz). It should not be expected that 5G spectral efficiency will gain much above what is already possible with LTE and LTE-advanced in this legacy cellular frequency range. Sheer bandwidth accumulation (multi-frequency carrier aggregation) and increased site density are, for the lower frequency range, the more likely 5G path. Of course, mass market 5G customers will benefit from faster reaction times (i.e., lower latencies), higher availability, and more advanced & higher-performing services arising from the very substantial changes expected in transport networks and data centers with the introduction of 5G.

Last but not least to this story … 80% and above of all mobile broadband customers' usage, data as well as voice, happens in very few cells (e.g., 3!) … representing their Home and Work.

[Figure: Most traffic is carried by very few cells.]

Source: Slideshare presentation by Dr. Kim “Capacity planning in mobile data networks experiencing exponential growth in demand.”

As most of the mobile cellular traffic happens at home and at work (i.e., thus in most cases indoor), there are many ways to support such traffic without being concerned about the limitation of cell ranges.

The gigabit-per-second cellular service is NOT a service for the mass market, at least not in its macro-cellular form.

≤ 1 ms IN ROUND-TRIP DELAY.

A total round-trip delay of 1 millisecond or less is very much attuned to a niche service. But a niche service that nevertheless could be very costly for all to implement.

I am not going to address this topic too much here. It has to a great extent been addressed, almost to ad nauseam, in 5G Economics – An Introduction (Chapter 1) and 5G Economics – The Tactile Internet (Chapter 2). I think this particular aspect of 5G is being over-hyped in comparison to how important it ultimately will turn out to be from a return-on-investment perspective.

The speed of light travels ca. 300 km per millisecond (ms) in vacuum and approx. 210 km per ms in fiber (with some material dependency). Lately, engineers have gotten really excited about the speed of light not being fast enough and have done a lot of heavy thinking about edge this and that (e.g., computing, cloud, cloudlets, CDNs, etc…). This said, it is certainly true that most modern data centers have not been built taking too much into account that the speed of light might become insufficient. And should there really be a great business case for sub-millisecond total (i.e., including the application layer) round-trip time scales, edge computing resources would be required a lot closer to customers than is the case today.
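
A minimal sketch of the propagation budget, ignoring everything except fiber propagation (so a best case with zero processing, queuing, and transmission delay):

  # Maximum one-way fiber distance within a given round-trip budget.
  fiber_km_per_ms <- 210
  max_one_way_km  <- function(rtt_ms) rtt_ms / 2 * fiber_km_per_ms

  max_one_way_km(1)     # ~105 km one-way for a 1 ms round-trip budget
  max_one_way_km(10)    # ~1,050 km one-way for a 10 ms round-trip budget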

It is common to use delay, round-trip time, round-trip delay, or latency as meaning the same thing. Though it is always cool to make sure people really talk about the same thing, by confirming that it is indeed a round trip rather than a single path. Also, to be clear, it is worthwhile to check that all people around the table talk about delay at the same place in the OSI stack, or network path, or whatever reference point is agreed to be used.

In the context of the 5G vision paper, it is emphasized that the specified round-trip time is based on the application layer (i.e., OSI model) as the reference point. It is certainly the most meaningful measure of user experience. This is defined as the End-2-End (E2E) Latency metric and measures the complete delay traversing the OSI stack from the physical layer all the way up through the network layer to the top application layer, and down again, between source and destination, including acknowledgement of a successful data packet delivery.

The 5G system shall provide 10 ms E2E latency in general and 1 ms E2E latency for use cases requiring extremely low latency.

The 5G vision paper states “Note these latency targets assume the application layer processing time is negligible to the delay introduced by transport and switching.” (Section 4.1.3 page 26 in “NGMN 5G White paper”).

In my opinion, it is a very substantial mouthful to assume that the Application Layer (actually everything above the Network Layer) will not contribute significantly to the overall latency. Certainly, for many applications residing outside the operator's network borders, in the world wide web, we can expect a very substantial delay (i.e., even in comparison with 10 ms). Again, this aspect was also addressed in my first two chapters.

Very substantial investments are likely needed to meet the E2E delays envisioned in 5G. In fact, the cost of improving latencies gets prohibitively more expensive as the target is lowered. Designing for 10 ms would overall be a lot less costly than designing for 1 ms or lower. The network design challenge, if 1 millisecond or below is required, is that it might not matter that this is only a “service” needed in very special situations; overall, the network would have to be designed for the strictest denominator.

Moreover, if remedies need to be found to mitigate likely delays above the Network Layer, distance and the insufficient speed of light might be the least of our worries in getting this ambition nailed (even at the 10 ms target). Of course, if all applications are moved inside the operator's networked premises, with simpler transport paths (and yes, shorter effective distances) and distributed across a hierarchical cloud (edge, frontend, backend, etc.), the assumption of negligible delay in layers above the Network Layer might become much more likely. However, it does sound a lot like an America Online walled-garden, fast-forward-to-the-past kind of paradigm.

So with 1 ms E2E delay … yeah yeah … “play it again Sam” … relevant applications clearly need to be inside the network boundary and be optimized for processing speed, or be silly & simple (i.e., negligible delay above the Network Layer), with no queuing delay (to the extent of being inefficient?), near-instantaneous transmission (i.e., negligible transmission delay), and distances likely down to tens of km or less (i.e., very short propagation delay).

When the speed of light is too slow there are few economic options to solve that challenge.

≥ 10,000 Gbps / Km2 DATA DENSITY.

The data density is maybe not the most sensible measure around. If taken too seriously, it could lead to hyper-ultra-dense smallest-cell network deployments.

This has always been a fun one in my opinion. It can be a meaningful design metric or completely meaningless.

There is of course nothing particularly challenging in getting a very high throughput density if the area is small enough. If I have a cellular range of a few tens of meters, say 20 meters, then my cell area is roughly 1/1000 of a km2. If I have 620 MHz of bandwidth aggregated between 28 GHz and 39 GHz (i.e., both in the millimeter-wave band) with 10 Mbps/MHz/Cell, I could support 6,200 Gbps/km2. That's almost 3 Petabytes in an hour, or 10 years of 24/7 binge watching of HD videos. Note that given my spectral efficiency is based on an average value, it is likely that I could achieve substantially more bandwidth density, and in peaks come closer to the 10,000 Gbps/km2 … easily.
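
The throughput-density arithmetic, using the approximation above of roughly 1/1000 km2 per 20-meter cell:

  # Throughput density from aggregated mm-wave bandwidth and average spectral efficiency.
  bw_mhz        <- 620       # aggregated bandwidth between 28 GHz and 39 GHz
  se            <- 10        # Mbps per MHz per cell (average)
  cells_per_km2 <- 1000      # ~1/1000 km2 per cell, per the approximation above

  gbps_per_km2 <- bw_mhz * se / 1e3 * cells_per_km2   # ~6,200 Gbps per km2
  gbps_per_km2 * 3600 / 8 / 1e6                       # ~2.8 Petabytes per km2 per hour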

Pretty Awesome Wow!

The basics: a Terabit equals 1,024 Gigabits (though I tend to ignore that last 24 … sorry).

With a traffic density of ca. 10,000 Gbps per km2, one would expect to have between 1,000 (@ 10 Gbps peak) to 10,000 (@ 1 Gbps peak) concurrent users per square km.

At 10 Mbps/MHz/Cell, one would expect to need 1,000 Cell-GHz/km2. Assuming that we would have 1 GHz of bandwidth (i.e., somewhere in the 30 – 300 GHz mm-wave range), one would need 1,000 cells per km2, on average with a cell range of about 20 meters (smaller to smallest … I guess what Nokia would call a Hyper-Ultra-Dense Network;-). Thus each cell would have a minimum of between 1 and 10 concurrent users.
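
And the inverse view, i.e., what the ≥10,000 Gbps/km2 target implies in cells and concurrent users per cell:

  # Cells and spectrum implied by the data density target.
  target_gbps_km2 <- 1e4                               # >= 10,000 Gbps per km2
  se              <- 10                                # Mbps per MHz per cell

  cell_ghz_km2 <- target_gbps_km2 * 1e3 / se / 1e3     # 1,000 Cell-GHz per km2
  cells_km2    <- cell_ghz_km2 / 1                     # 1,000 cells per km2 at 1 GHz per cell

  c(1e3, 1e4) / cells_km2     # 1 user per cell at 10 Gbps peak, 10 users per cell at 1 Gbps peak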

Just as a reminder! 1 minute at 1 Gbps corresponds to 7.5 GB. A bit more than what you need for an 80-minute HD (i.e., 720p) full movie stream … in 1 minute. So, with your (almost) personal smallest cell, what about the remaining 59 minutes? Seems somewhat wasteful, at least until kingdom come (alas, maybe sooner than that).

It would appear that the very high 5G data density target could result in very inefficient networks from a utilization perspective.

≥ 1 MN / Km2 DEVICE DENSITY.

One million 5G devices per square kilometer appears to be far far out in a future where one would expect us to be talking about 7G or even higher Gs.

1 Million devices seems like a lot and certainly per km2. It is 1 device per square meter on average. A 20 meter cell-range smallest cell would contain ca. 1,200 devices.

To give this number perspective, let's compare it with one of my favorite South-East Asian cities, the city with one of the highest population densities around: Manila (Philippines). Manila has more than 40 thousand people per square km. Thus, in Manila this would mean that we would have about 24 devices per person, or 100+ per household. Overall, in Manila we would then expect approx. 40 million devices spread across the city (i.e., Manila has ca. 1.8 million inhabitants over an area of 43 km2; the Philippines has a population of approx. 100 million).
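
A quick check of the Manila numbers:

  # Devices per person implied by the 1 Million devices per km2 ambition.
  device_density <- 1e6                          # devices per km2
  manila_pop     <- 1.8e6
  manila_area    <- 43                           # km2

  pop_density <- manila_pop / manila_area        # ~42,000 people per km2
  device_density / pop_density                   # ~24 devices per person
  device_density * manila_area                   # ~43 million devices across the city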

Just for the curious, it is possible to find other more densely populated areas in the world. However, these highly dense areas tend to cover relatively small surface areas, often much smaller than a square kilometer and with relatively few people. For example, Fadiouth Island in Senegal has a surface area of 0.15 km2 and 9,000 inhabitants, making it one of the most densely populated areas in the world (i.e., 60,000 pop per km2).

I hope I made my case! A million devices per km2 is a big number.

Let us look at it from a forecasting perspective. Just to see whether we are possibly getting close to this 5G ambition number.

IHS forecasts 30.5 billion installed devices by 2020; IDC also believes it to be around 30 billion by 2020. Machina Research is less bullish and projects 27 billion by 2025 (IHS expects that number to be 75.4 billion), but this forecast is from 2013. Irrespective, we are obviously in the league of very big numbers. By the way, 5G IoT, if at all considered, is only a tiny fraction of the overall projected IoT numbers (e.g., Machina Research expects 10 million 5G IoT connections by 2024 … an extremely small number in comparison to the overall IoT projections).

A consensus number for 2020 appears to be 30±5 Billion IoT devices with lower numbers based on 2015 forecasts and higher numbers typically from 2016.

To break this number down to something that could be more meaningful than just being big and impressive, let us first establish a couple of world-level numbers that can help us with this;

  • The 2020 world population is expected to be around 7.8 Billion, compared to 7.4 Billion in 2016.
  • The global number of people per household (HH) is ~3.5 (an average!), which might be marginally lower in 2020. Urban populations tend to have fewer people per household, ca. 3.0. Urban populations in so-called developed countries have ca. 2.4 people per HH.
  • ca. 55% of the world population lives in urban areas. This will be higher by 2020.
  • Less than 20% of the world population lives in developed countries (based on HDI). This is a 2016 estimate and will be higher by 2020.
  • World surface area is 510 Million km2 (including water).
  • of which ca. 150 million km2 is land area
  • of which ca. 75 million km2 is habitable.
  • of which ca. 3% (of the total surface area) is an upper-limit estimate of the area covered by urban development, i.e., 15.3 Million km2.
  • of which approx. 1.7 Million km2 comprises developed regions’ urban areas.
  • ca. 37% of all land-based area is agricultural land.

Using 30 Billion IoT devices by 2020 is equivalent to;

  • ca. 4 IoT devices per capita (worldwide).
  • ca. 14 IoT devices per household (worldwide).
  • ca. 200 IoT devices per km2 of land surface area.
  • ca. 2,000 IoT devices per km2 of urban developed surface area.

If we limit IoT devices in 2020 to developed countries, which rightly or wrongly excludes China, India and large parts of Latin America, we get the following by 2020 (a quick sanity check of both breakdowns is sketched after this list);

  • ca. 20 IoT devices per capita (developed countries).
  • ca. 50 IoT devices per household (developed countries).
  • ca. 18,000 IoT devices per km2 of developed-country urbanized area.
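The ratios above follow directly from the world-level numbers established earlier; here is a minimal sketch reproducing them (the 20% developed-country population share is the upper end of the “less than 20%” estimate):

```python
iot_devices = 30e9        # consensus 2020 forecast (from the text)

# World-level denominators (from the bullet list above)
world_pop      = 7.8e9
world_hh       = world_pop / 3.5     # ~3.5 people per household
land_area_km2  = 150e6
urban_area_km2 = 15.3e6

# Developed-country denominators (from the bullet list above)
dev_pop       = 0.20 * world_pop     # "less than 20%" of world population
dev_hh        = dev_pop / 2.4        # ~2.4 people per developed-country household
dev_urban_km2 = 1.7e6

ratios = {
    "IoT per capita (world)":        iot_devices / world_pop,
    "IoT per household (world)":     iot_devices / world_hh,
    "IoT per km2 of land":           iot_devices / land_area_km2,
    "IoT per km2 of urban area":     iot_devices / urban_area_km2,
    "IoT per capita (developed)":    iot_devices / dev_pop,
    "IoT per household (developed)": iot_devices / dev_hh,
    "IoT per km2 (developed urban)": iot_devices / dev_urban_km2,
}
for label, value in ratios.items():
    print(f"{label:<32}: {value:>8,.0f}")
```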

Given that it would make sense to also include large areas and populations of China, India and Latin America, the above developed-country numbers are bound to be (a lot) lower per capita, per household and per km2. If we include agricultural land, the number of IoT devices per km2 goes down further.

So: far, far away from a million IoT devices per km2.

What about parking spaces? For sure IoT will add up when we consider parking spaces!? … Right? Well, in Europe you will find that most big cities have between 50 and 200 (public) parking spaces per square kilometer (e.g., ca. 67 per km2 for Berlin and 160 per km2 in Greater Copenhagen). Aha, not really getting us to the million IoT devices per km2 … what about cars?

In EU28 there are approx. 256 Million passenger cars (2015 data) over a population of ca. 510 Million (or ca. 213 million households). So, a bit more than 1 passenger car per household on EU28 average. In EU28, approx. 75+% live in urban areas, which comprise ca. 150 thousand square kilometers (i.e., 3.8% of EU28’s 4 Million km2). So one would expect a little more (if not a little less) than 1,300 passenger cars per km2. You may say … aha, but it is not fair … you don’t include motor vehicles that are used for work … well, that is an exercise for you (to convince yourself why it doesn’t really matter too much, and with my royal rounding the numbers may already account for it). Also consider that many major EU28 cities with good public transportation have significantly fewer cars per household or per capita than the average would suggest.

Surely, public street lights will get us there? Nope! A typical bigger, modern, developed-country city will have on average approx. 85 street lights per km2, although it varies from 0 to 1,000+. Light bulbs per residential household (from a 2012 US study) range from 50 to 80+. In developed-country urban areas we have roughly 1,000 households per km2, and thus we would expect between 50 thousand and 80+ thousand light bulbs per km2. Shops and businesses would add somewhat to this number.

With a compound annual growth rate (CAGR) of ca. 22%, it would take 20 years (from 2020) to reach a million IoT devices per km2, assuming we have 20 thousand per km2 by 2020. With a 30% CAGR it would still take 15 years (from 2020) to reach a million IoT devices per km2.
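The growth arithmetic behind those 15 to 20 years (a sketch, assuming the 20 thousand per km2 starting point used above):

```python
import math

start_density  = 20_000      # assumed IoT devices per km2 by 2020 (developed urban areas)
target_density = 1_000_000   # the 5G device density ambition

for cagr in (0.22, 0.30):
    years = math.log(target_density / start_density) / math.log(1 + cagr)
    print(f"CAGR {cagr:.0%}: ~{years:.0f} years from 2020 (i.e., around {2020 + round(years)})")
```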

The current IoT projections of 30 Billion IoT devices in operation by 2020 do not appear unrealistic when broken down to a household or population level in developed areas (and even less ambitious on a worldwide level). The 18,000 IoT devices per km2 of developed urban surface area by 2020 do appear somewhat ambitious. However, if we were to include agricultural land, the number would possibly become more reasonable.

If you include street crossings, traffic radars, city-based video monitoring (e.g., London has approx. 300 per km2, Hong Kong ca. 200 per km2), city-based traffic sensors, environmental sensors, etc., you are going to get to sizable numbers.

However, 18,000 per km2 in urban areas appears somewhat of a challenge. Getting to 1 Million per km2 … hmmm … we might see that around 2035 to 2040 (I have added an internet reminder for a check-in by 2035).

Maybe the 1 Million devices per km2 ambition is not one of the most important 5G design criteria for the short term (i.e., the next 10 – 20 years).

Oh, and most IoT forecasts from the period 2015 – 2016 do not really include 5G IoT devices in particular. The chart below illustrates Machina Research’s IoT forecast for 2024 (from August 2015). In a more recent forecast from 2016, Machina Research predicts that by 2024 there would be ca. 10 million 5G IoT connections, or 0.04% of the total number of forecasted connections;

iot connections 2024

The winner is … IoT devices using WiFi or other short-range communications protocols. Obviously, the cynic in me (mea culpa) would say that a mm-wave-based 5G connection can also be characterized as short range … so there might be a very interesting replacement market there for 5G IoT … maybe? 😉

Expectations for 5G-based IoT do not appear to be very impressive, at least over the next 10 years and possibly beyond.

The unimportance of 5G IoT should not be a great surprise, given that most 5G deployment scenarios are focused on millimeter-wave smallest-cell coverage, which is not good for comprehensive coverage of IoT devices that are not confined to the very special 5G coverage situations being thought about today.

Only operators focusing on comprehensive 5G coverage, re-purposing lower carrier frequency bands (i.e., 1 GHz and lower), can possibly expect to gain a reasonable (as opposed to niche) 5G IoT business. T-Mobile US, with their 600 MHz 5G strategy, might very well be uniquely positioned to take a large share of the future-proof IoT business across the USA. Though they are also pretty uniquely positioned for NB-IoT with their comprehensive 700 MHz LTE coverage.

For 5G IoT to be meaningful (at scale), the conventional macro-cellular networks need to be in play for 5G coverage … certainly, 100% 5G coverage will be a requirement. Although, even with 5G, there may be 100s of billions of non-5G IoT devices that require coverage and management.

≤ 500 km/h SERVICE SUPPORT.

Sure, why not? But why not faster than that? At hyperloop or commercial passenger airplane speeds, for example?

Before we get all excited about Gbps speeds at 500 km/h, it should be clear that the 5G vision paper only proposes speeds between 10 Mbps and 50 Mbps at such velocities (actually, it is allowed to regress down to 50 kilobits per second), with 200 Mbps for broadcast-like services.

So, in general, this is a pretty reasonable requirement. Maybe the 200 Mbps for broadcast services is somewhat head-scratching, unless the vehicle is one big 16K screen. Although the user’s proximity to such a screen does not guarantee an ideal 16K viewing experience, to say the least.

What moves so fast?

The fastest train today is tracking at ca. 435 km/h (Shanghai Maglev, China).

Typical cruising airspeed for a long-distance commercial passenger aircraft is approx. 900 km/h. So we might not be able to provide the best 5G experience in commercial passenger aircraft … unless we solve that with an in-plane communications system rather than trying to provide Gbps speeds by external coverage means.

Why take a plane when you can jump on the local Hyperloop? The proposed Hyperloop should track at an average speed of around 970 km/h (similar to, or faster than, commercial passenger aircraft), with a top speed of 1,200 km/h. So if you happen to be travelling between LA and San Francisco in 2020+, you might not be able to get the best 5G service possible … what a bummer! This is clearly an area where the vision did not look far enough ahead.

Providing services to things moving at relatively high speed does require reasonably good coverage. Whether for a train track, a hyperloop tunnel or ground-to-air coverage of commercial passenger aircraft, new coverage solutions would need to be deployed. Alternatively, in-vehicle coverage solutions providing the perception of a 5G experience might turn out to be more economical.

The speed requirement is a very reasonable one, particularly for train coverage.

50% TOTAL NETWORK ENERGY REDUCTION.

If 5G development could come true on this ambition, we are talking about ca. 10 Billion US Dollars (for the cellular industry), equivalent to roughly a percentage point on the margin.

There are two aspects of energy efficiency in a cellular based communication system.

  • User equipment, which will benefit from longer intervals between charging, improving the customer experience and overall saving energy from less frequent charging.
  • Network infrastructure, where energy consumption savings will directly and positively impact a telecom operator’s EBITDA.

Energy efficient Smartphones

The first aspect, user equipment, is addressed by the 5G vision paper under “4.3 Device Requirements”, sub-section “4.3.3 Device Power Efficiency”: “Battery life shall be significantly increased: at least 3 days for a smartphone, and up to 15 years for a low-cost MTC device.” (note: MTC = Machine-Type Communications).

Apple’s iPhone 7 battery life (on a full charge) is around 6 hours of constant use, with the 7 Plus beating that by ca. 3 hours (i.e., ca. 9 hours in total). So 3 days would go a long way.

From a recent 2016 survey by Ask Your Target Market on smartphone consumers’ requirements for battery lifetime and charging times;

  • 64% of smartphone owners said they are at least somewhat satisfied with their phone’s battery life.
  • 92% of smartphone owners said they consider battery life to be an important factor when considering a new smartphone purchase.
  • 66% said they would even pay a bit more for a cell phone that has a longer battery life.

Looking at mobile smartphone & tablet non-voice consumption, it is also clear why battery lifetime, and not unimportantly the charging time, matters;

smartphone usage time per day

Source: eMarketer, April 2016. While the 2016 and 2017 figures are eMarketer forecasts (hence the dotted line and red circle!), these do appear well in line with other, more recent measurements.

Non-voice smartphone & tablet based usage is expected by now to exceed 4 hours (240 minutes) per day on average for US Adults.

That longer battery lifetimes are needed among smartphone consumers is clear from the sales figures and anticipated sales growth of smartphone power banks (or battery chargers), boosting the lifetime by several more hours.

It is, however, unclear whether the 3 days of 5G smartphone battery lifetime are supposed to be under active usage conditions or just in idle mode. Obviously, in order to matter materially to the consumer, one would expect this vision to apply to active usage (i.e., 4+ hours a day at 100s of Mbps to 1 Gbps operation).

Energy efficient network infrastructure.

The 5G vision paper defines energy efficiency as the number of bits that can be transmitted over the telecom infrastructure per Joule of energy.

The total energy cost, i.e., operational expense (OpEx), of a telecommunications network can be considerable. Despite our mobile access technologies having become more energy efficient with each generation, the total energy OpEx attributed to the network infrastructure has, in general, increased over the last 10 years. The growth in telco infrastructure-related energy consumption has been driven by consumer demand for broadband services, mobile and fixed, including an incredible increase in data center computing and storage requirements.

In general, the power consumption OpEx share of total technology cost amounts to 8% to 15% (i.e., for telcos without heavy reliance on diesel). The general assumption is that, with regular modernization, the energy efficiency gains of newer electronics can keep the growth in energy consumption to a minimum, compensating for the increased broadband and computing demand.

Note: Technology OpEx (including NT & IT) on average lies between 18% and 25% of total corporate telco OpEx. Of the technology OpEx, between 8% and 15% (max) can typically be attributed to telco infrastructure energy consumption. The access & aggregation contribution to the energy cost typically runs towards 80% plus. Data centers are expected to increasingly contribute to the power consumption and cost as well. Deep-diving into the access equipment power consumption, ca. 60% can be attributed to rectifiers and amplifiers, 15% to the DC power system & miscellaneous, and another 25% to cooling.
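To see how those shares multiply out, here is a minimal sketch of the energy cost as a share of a telco’s total OpEx, using only the indicative ranges from the note above (these are generic shares, not figures from any specific operator):

```python
# Indicative shares from the note above (ranges, not operator-specific measurements)
tech_opex_share   = (0.18, 0.25)   # technology OpEx as a share of total corporate OpEx
energy_share_tech = (0.08, 0.15)   # energy as a share of technology OpEx
access_share      = 0.80           # access & aggregation share of the energy cost (~80%+)

low  = tech_opex_share[0] * energy_share_tech[0]
high = tech_opex_share[1] * energy_share_tech[1]
print(f"energy cost as share of total OpEx: {low:.1%} - {high:.1%}")
print(f"of which access & aggregation     : ~{access_share:.0%}+")

# Breakdown of access-equipment power consumption (from the note above)
for part, share in {"rectifiers & amplifiers": 0.60,
                    "DC power system & misc.": 0.15,
                    "cooling": 0.25}.items():
    print(f"  {part:<24}: {share:.0%}")
```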

The 5G vision paper is very bullish in its requirement to reduce the total energy consumption and its associated cost; it states: “5G should support a 1,000 times traffic increase in the next 10 years timeframe, with an energy consumption by the whole network of only half that typically consumed by today’s networks. This leads to the requirement of an energy efficiency of x2,000 in the next 10 years timeframe.” (sub-section “4.6.2 Energy Efficiency”, NGMN 5G White Paper).

This requirement would mean that in a pure 5G world (i.e., all traffic on 5G), the power consumption arising from the cellular network would be 50% of what is consumed today. In 2016 terms, the mobile-based OpEx saving would be in the order of 5 Billion US$ to 10+ Billion US$ annually. This would be equivalent to a 0.5% to 1.1% margin improvement globally (note: using GSMA 2016 Revenue & Growth data and the Pyramid Research forecast). If energy prices were to increase over the next 10 years, the savings / benefits would of course be proportionally larger.

As we have seen above, it is reasonable to expect a very considerable increase in cell density as broadband traffic demand increases, driven by the peak bandwidth (i.e., 1 – 10 Gbps) and traffic density (i.e., 1 Tbps per km2) expectations.

Depending on the demanded traffic density, the spectrum and the carrier frequency available for 5G, between 100 and 1,000 small cell sites per km2 could be required over the next 10 years. This cell-site increase will come in addition to the existing macro-cellular network infrastructure.

Today (in 2017), an operator in an EU28-sized country may have between ca. 3,500 and 35,000 cell sites, with approx. 50% covering rural areas. Many analysts expect that for medium-sized countries (e.g., with 3,500 – 10,000 macro-cellular sites), operators would eventually have up to 100,000 small cells under management, in addition to their existing macro-cellular sites. Most of those 5G small cells, and many of the 5G macro-sites we will have over the next 10 years, are also going to have advanced massive-MIMO antenna systems with many active antenna elements per installed base antenna, requiring substantial computing to gain maximum performance.

With today’s knowledge, it appears extremely challenging (to put it mildly) to envision a 5G network consuming only 50% of today’s total energy consumption.

It is highly likely that the 5G radio node electronics in a small cell environment (and maybe also in a macro-cellular environment?) will consume fewer Joules per delivered bit (per second), due to technology advances and the lower transmit power required (i.e., it is a small or smallest cell). However, this power-efficiency gain from technology and cellular network architecture can very easily be destroyed by the massive additional number of small, smaller and smallest cells, combined with highly sophisticated antenna systems consuming additional energy for the compute operations needed to make such systems work. Furthermore, we will see operators increasingly providing sophisticated data center resources for network operations as well as for the customers they serve. If the speed of light is insufficient for some services or country geographies, additional edge data centers will be introduced, also leading to an energy consumption not present in today’s telecom networks. Increased computing and storage demand will likewise make the absolute efficiency requirement highly challenging.

Will 5G be able to deliver bits (per second) more efficiently … Yes!

Will 5G be able to reduce the overall power consumption of today’s telecom networks by 50% … highly unlikely.

In my opinion the industry will have done a pretty good technology job if we can keep the existing energy cost at the level of today (or even allowing for unit price increases over the next 10 years).

The total power reduction of our telecommunications networks will be one of the most important 5G development tasks, as the industry cannot afford a new technology that results in vast amounts of incremental absolute cost. Great relative efficiency doesn’t matter if it results in a higher total cost.

≥ 99.999% NETWORK AVAILABILITY & DATA CONNECTION RELIABILITY.

A network availability of 5Ns, across all individual network elements and over time, corresponds to less than a second of downtime a day anywhere in the network. Few telecom networks are designed for that today.

5 Nines (5N) is a great aspiration for services and network infrastructures. It also tends to be fairly costly and is likely to raise the level of network complexity. Although in the 5G world of heterogeneous networks … well, it is already complicated.

5N Network Availability.

From a network and/or service availability perspective, it means that over the course of a day, your service should not experience more than 0.86 seconds of downtime. Across a year, the total downtime should not be more than 5 minutes and 16 seconds.
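The downtime numbers follow directly from the availability figure; a small sketch, also showing the more common 3N and 4N levels for comparison:

```python
def downtime(availability: float):
    """Allowed downtime per day and per year (in seconds) for a given availability."""
    unavailability = 1 - availability
    return unavailability * 24 * 3600, unavailability * 365 * 24 * 3600

for label, availability in (("3N", 0.999), ("4N", 0.9999), ("5N", 0.99999)):
    per_day, per_year = downtime(availability)
    print(f"{label} ({availability:.3%}): {per_day:6.2f} s/day, {per_year/60:6.1f} min/year")
```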

The way 5N network availability is defined is: “The network is available for the targeted communications in 99.999% of the locations where the network is deployed and 99.999% of the time” (from “4.4.4 Resilience and High Availability”, NGMN 5G White Paper).

Thus, in a 100,000-cell network, only 1 cell is allowed to experience downtime, and for no longer than about a second a day.

It should be noted that not many networks today come even close to this kind of requirement. Certainly, in countries with frequent, long power outages and limited ancillary backup (i.e., battery and/or diesel), this could be a very costly design requirement. Networks relying on weather-sensitive microwave radios for backhaul, or on mm-wave frequencies for 5G coverage, would be required to design in a very substantial amount of redundancy to meet such high geographical & time availability requirements.

In general, designing a cellular access network for this kind of 5N availability could be fairly to very costly (i.e., CapEx could easily run up to several percentage points of revenue).

One way out, from a design perspective, is to rely on hierarchical coverage. Thus, for example, if a small cell environment is unavailable (= down!), the macro-cellular network (or overlay network) continues the service, although at a lower service level (i.e., lower or much lower speed compared to the primary service). As also suggested in the vision paper, making use of self-healing network features and other real-time measures is expected to further increase the network infrastructure availability. This is also what one may define as network resilience.

Nevertheless, the “NGMN 5G White Paper” allows operators to define the level of network availability appropriate from their own perspective (and budgets, I assume).

5N Data Packet Transmission Reliability.

The 5G vision paper defines Reliability as “… amount of sent data packets successfully delivered to a given destination, within the time constraint required by the targeted service, divided by the total number of sent data packets.” (“4.4.5 Reliability” in “NGMN 5G White Paper”).

It should be noted that the 5N specification addresses in particular those use cases or services for which such reliability is required, e.g., mission-critical communications and ultra-low latency services. 5G allows for a very wide range of data connection reliability. Whether the 5N reliability requirement will lead to substantial investments, or can be managed within the overall 5G design and architectural framework, might depend on the amount of traffic requiring 5Ns.

The 5N data packet transmission reliability target would impose stricter network design. Whether this requirement would result in substantial incremental investment and cost is likely dependent on the current state of existing network infrastructure and its fundamental design.

 

5G Economics – The Tactile Internet (Chapter 2)

If you have read Michael Lewis’ book “Flash Boys”, I will have absolutely no problem convincing you that a few milliseconds’ improvement in transport time (i.e., already below 20 ms) of a valuable signal (e.g., containing financial information) can be of tremendous value. It is all about optimizing transport distances, super-efficient & extremely fast computing and, of course, ultra-high availability. Ultra-low transport and processing latencies are the backbone (together with the algorithms, obviously) of the high-frequency trading industry, which takes a market share of between 30% (EU) and 50% (US) of the total equity trading volume.

In a recent study by The Boston Consulting Group (BCG), “Uncovering Real Mobile Data Usage and Drivers of Customer Satisfaction” (Nov. 2015), it was found that latency has a significant impact on customer video-viewing satisfaction. For latencies between 75 and 100 milliseconds, 72% of users reported being satisfied. The satisfaction level jumped to 83% when latency was below 50 milliseconds. We have most likely all experienced, and been aggravated by, long call setup times (> a couple of seconds) forcing us to look at the screen to confirm that a call setup (dialing) is actually in progress.

Latency, and reactiveness or responsiveness, matters tremendously to the customer’s experience and whether it is a bad, good or excellent one.

The Tactile Internet idea is an integral part of the “NGMN 5G Vision” and part of what is characterized as Extreme Real-Time Communications. It has further been worked out in detail in the ITU-T Technology Watch Report “The Tactile Internet” from August 2014.

The word “Tactile” means perceptible by touch. It closely relates to the ambition of creating a haptic experience, where haptic means relating to the sense of touch. Although we will learn that the Tactile Internet vision is more than a “touchy-feely” network vision, the idea of haptic feedback in real time (~ sub-millisecond to low-millisecond regime) is very important to the idea of a Tactile network experience (e.g., remote surgery).

The Tactile Internet is characterized by

  • Ultra-low latency; 1 ms and below latency (as in round-trip-time / round-trip delay).
  • Ultra-high availability; 99.999% availability.
  • Ultra-secure end-2-end communications.
  • Persistent very high bandwidth capability; 1 Gbps and above.

The Tactile Internet is one of the cornerstones of 5G. It promises ultra-low end-2-end latencies in the order of 1 millisecond at gigabit-per-second speeds and with five 9’s of availability (translating into ca. 0.86 seconds of average unavailability per day).

Interestingly, network predictability and variation in latency have not been receiving much focus within the Tactile Internet work. Clearly, a high degree of predictability, as well as low jitter (or latency variation), could be very desirable properties of a tactile network, possibly even more so than the absolute latency in its own right. A right-sized round-trip time with managed latency, meaning a controlled variation of the latency, is essential to the 5G Tactile Internet experience.

It’s 5G on speed and steroids at the same time.

elephant in the room

Let us talk about the elephant in the room.

We can understand Tactile latency requirements in the following way;

An Action, including (possibly) local Processing, followed by some Transport and Remote Processing of the data representing the Action, results in a Re-action, again including (possibly) local Processing. According to the Tactile Internet vision, this whole event, from Action to Re-action, has to have run its course within 1 millisecond, or one thousandth of a second. In many use cases this process is looped, as the Re-action feeds back, resulting in another Action. Note that in the illustration below, Action and Re-action could take place on the same device (or locality) or could be physically separated. The Processing might represent cloud-based computations or manipulations of data, or data manipulations local to the user’s device as well as to remote devices. It needs to be considered that the latency time scale in one direction is not at all guaranteed to be the same as in the other direction (even for transport).

tactile internet 1

The simplest example is a mouse click on an internet link or URL (i.e., the Action), resulting in a translation of the URL to an IP address and the loading of the resulting content (i.e., part of the Processing), with the final page presented on your device’s display (i.e., the Re-action). From the moment the URL is mouse-clicked until the content is fully presented should take no longer than 1 ms.

tactile internet 2

A more complex use case might be remote surgery, in which a surgical robot is in one location and the surgeon operator is at another location, manipulating the robot through an operation. This is illustrated in the picture above. Clearly, for a remote surgical procedure to be safe (i.e., within the margins of risk of not having the possibility of any medically assisted surgery at all), we would require a very reliable connection (99.999% availability), sufficient bandwidth to ensure the video resolution required by the remote surgeon controlling the robot, as little latency as possible, allowing the feel of an instantaneous (or predictable) reaction to the actions of the controller (i.e., the surgeon), and of course as little variation in the latency (i.e., jitter) as possible, allowing system or human correction of the latency (i.e., a high degree of network predictability).

The first complete trans-Atlantic robotic surgery happened in 2001. Surgeons in New York (USA) remotely operated on a patient in Strasbourg, France, some 7,000 km away, equivalent to a 70 ms round-trip time (i.e., 14,000 km in total) for light in fiber. The total procedural delay, from hand motion (action) until the remote surgical response (reaction) showed up on the video screen, was 155 milliseconds. From trials on pigs, any delay longer than 330 ms was thought to be associated with an unacceptable degree of risk for the patient. This system did not offer any haptic feedback to the remote surgeon. This remains the case for most (if not all) remote robotic surgical systems in operation today, as the latency in most remote surgical scenarios renders haptic feedback less than useful. An excellent account of robotic surgery systems (including the economics) can be found at the web site “All About Robotic Surgery”. According to experienced surgeons, at 175 ms (and below) the delay in a remote robotic operation is imperceptible (to the surgeon).

It should be clear that, apart from offering long-distance surgical possibilities, robotic surgical systems offer many other benefits (less invasive, higher precision, faster patient recovery, lower overall operational risks, …). In fact, most robotic surgeries are done with the surgeon and the robot in close proximity.

Another example of coping with lag or latency is the Predator drone pilot. The plane is a so-called unmanned combat aerial vehicle and comes at a price of ca. 4 Million US$ (in 2010) per piece. Although this aerial platform can perform missions autonomously, it will typically have two pilots on the ground monitoring and possibly controlling it. The typical operational latency for the Predator can be as much as 2,000 milliseconds. For takeoff and landing, where this latency is most critical, control is typically handed over to a local crew (either in Nevada or in the country of its mission). The Predator’s cruise speed is between 130 and 165 km per hour; thus, within the 2-second lag, the plane will have moved approximately 100 meters (obviously critical in landing & takeoff scenarios). Nevertheless, a very high degree of autonomy has been built into the Predator platform, which also compensates for the very large latency between the plane and mission control.

Back to the Tactile Internet latency requirements;

In LTE today, the minimum latency (internal to the network) is around 12 ms, without retransmission and with pre-allocated resources. However, the normally experienced latency (again, internal to the network) would be more in the order of 20 ms, including a 10% likelihood of retransmission and assuming scheduling (which would be normal). This excludes any content fetching, processing and presentation on the end-user device, as well as the transport path beyond the operator’s network (i.e., somewhere in the www). Transport outside the operator’s network typically adds between 10 and 20 ms on top of the internal latency. The fetching, processing and presentation of content can easily add hundreds of milliseconds to the experience. The illustration below provides a high-level view of the various latency components to be considered in LTE, with the transport-related latencies providing the floor level to be expected;

latency in networks

In 5G, the vision is to achieve a factor-20 better end-2-end (within the operator’s own network) round-trip time compared to LTE; thus, 1 millisecond.
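To make the gap concrete, here is a small sketch stacking up the indicative LTE latency components described above against the 5G ambition. The LTE internal and external transport figures are the rough values from the text; the content fetching/processing range is my own illustrative assumption (“hundreds of milliseconds”).

```python
# Indicative latency components in ms (rough figures, not measurements)
lte_internal_typical = 20          # scheduled, incl. ~10% retransmission likelihood
external_transport   = (10, 20)    # transport beyond the operator's network
content_handling     = (100, 300)  # fetching, processing, presentation (assumed range)

e2e_low  = lte_internal_typical + external_transport[0] + content_handling[0]
e2e_high = lte_internal_typical + external_transport[1] + content_handling[1]

print(f"LTE internal, typical        : {lte_internal_typical} ms")
print(f"full experience (indicative) : {e2e_low} - {e2e_high} ms")
print(f"5G end-2-end ambition        : 1 ms (internal to the operator's network)")
```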

 

So … what happens in 1 millisecond?

Light will have travelled ca. 200 km in fiber, or 300 km in free space. A car driving (or the fastest baseball flying) at 160 km per hour will have moved about 4 cm. A steel ball dropped from rest (on Earth) would have fallen 5 micrometers (that’s 5 millionths of a meter) in its first millisecond. In a 1 Gbps data stream, 1 ms corresponds to ca. 125 kilobytes worth of data. A human nerve impulse lasts just about 1 ms (i.e., a ca. 100 millivolt pulse).
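These one-millisecond factoids are easy to reproduce; a small sketch, assuming the speed of light in fiber to be ca. 200,000 km/s (i.e., a refractive index of ~1.5):

```python
t = 1e-3                        # one millisecond, in seconds

c_vacuum = 299_792.458          # km/s, free space
c_fiber  = c_vacuum / 1.5       # km/s, assuming a refractive index of ~1.5
print(f"light in free space : {c_vacuum * t:6.1f} km")
print(f"light in fiber      : {c_fiber  * t:6.1f} km")

v_car = 160 / 3.6               # 160 km/h in m/s
print(f"car / fastball      : {v_car * t * 100:5.1f} cm")

g = 9.81                        # m/s2; steel ball dropped from rest
print(f"falling steel ball  : {0.5 * g * t**2 * 1e6:5.1f} micrometers")

print(f"data at 1 Gbps      : {1e9 * t / 8 / 1e3:5.1f} kB")
```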

 

It should be clear that the 1 ms poses some very dramatic limitations;

  • The useful distance over which a tactile application would work (if 1 ms really were the requirement, that is!) will be short (likely a lot less than 100 km for fiber-based transport).
  • The air-interface latency (& the number of control-plane messages required) needs to reduce dramatically, from milliseconds down to microseconds (i.e., a factor 20 would allow no more than ca. 100 microseconds, limiting the useful cell range).
  • The compute & processing requirements, in terms of latency, for the UE (incl. screen, drivers, local modem, …), Base Station and Core would require a substantial overhaul (likely limiting the level of tactile sophistication).
  • It requires own, controlled network infrastructure (within which latency is at least a lot easier to manage), avoiding any communication path leaving one’s own network (the walled garden is back with a vengeance?).
  • The network is then solely responsible for the latency, which can be made arbitrarily small (by distance and access design).

Very small cells, located very close to compute & processing resources, would be the most likely candidates for fulfilling the tactile internet requirements.

Thus, instead of moving functionality and compute up and towards the cloud data center, we (might) have an opposing force that requires close proximity to the end-user’s application. The great promise of cloud-based economic efficiency is likely going to be dented in this scenario by requiring many more, smaller data centers, and maybe even micro-data centers, moving closer to the access edge (i.e., cell site, aggregation site, …). Not surprisingly, Edge Cloud, Edge Data Center, Edge X is really the new black … the curse of the edge!?

Looking at several network and compute design considerations, a tactile application would allow no more than a 50 km effective distance (i.e., 100 km round trip), or 0.5 ms of fiber-transport (including switching & routing) round-trip time, leaving another 0.5 ms for the air-interface (in a cellular/wireless scenario), computing & processing. Furthermore, the very high degree of imposed availability (i.e., 99.999%) might likewise favor proximity between the tactile application and any remote processing-computing.

So, in all likelihood, we need the processing-computing as near as possible to the tactile application (at least if one believes in the 1 ms, or thereabouts, target).

One of the most epic (“in the Dutch coffee shop after a couple of hours category”) promises in “The Tactile Internet” vision paper is the following;

“Tomorrow, using advanced tele-diagnostic tools, it could be available anywhere, anytime; allowing remote physical examination even by palpation (examination by touch). The physician will be able to command the motion of a tele-robot at the patient’s location and receive not only audio-visual information but also critical haptic feedback.” (page 6, section 3.5).

All true, if you limit the tele-robot and patient to a distance of no more than 50 km (and likely less!) from the remote medical doctor. In this setup and definition of the Tactile Internet, a top eye surgeon placed in Delhi would not be able to operate on a child (with near blindness) in a remote village in Madhya Pradesh (India), approx. 800+ km away. Note that India has the largest blind population in the world (also by proportion), with 75% of cases avoidable by medical intervention. At best, these specifications allow the doctor not to be in the same room as the patient.

Markus Rank et al did systematic research on the perception of delay in haptic telepresence systems (Presence, October 2010, MIT Press) and found haptic delay detection thresholds of between 30 and 55 ms. Thus, haptic feedback did not appear to be sensitive to delays below 30 ms, fairly close to the lowest threshold of ca. 20 ms reported elsewhere. This, combined with experienced tele-robotic surgeons assessing that at 175 ms and below the delay in a remote procedure becomes imperceptible, might indicate that the 1 ms target, at least for this particular use case, is extremely restrictive.

The extreme case would be to have the tactile-related computing done at the radio base station, assuming that the tactile use case could be restricted to the covered cell and the users supported by that cell. I name this the micro-DC (or micro-cloud, or more like what some might call the cloudlet concept) idea. This would be a total return to the older days, with lots of compute done at the cell site (and it would likely kill any traditional legacy cloud-based efficiency thinking … I love using legacy and cloud in the same sentence). This would limit the round-trip time to the air-interface latency plus the compute/processing at the base station and at the device supporting the tactile application.

It is normal to talk about the round-trip time between an action and the subsequent reaction. It is also the time it takes data or a signal to travel from a specific source to a specific destination and back again (i.e., the round trip). In the case of light in fiber, a 1-millisecond limit on the round-trip time would imply that the maximum distance that can be travelled (in the fiber) from source to destination and back is 200 km, limiting the destination to be no more than 100 km away from the source. In the case of substantial processing overhead (e.g., computation), the distance between source and destination needs to be even less than 100 km to allow for the 1 ms target.
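This distance limit can be expressed as a small helper function (a sketch, assuming ca. 200 km per millisecond for light in fiber; the processing overhead simply eats into the transport budget):

```python
def max_one_way_km(rtt_budget_ms: float, processing_ms: float = 0.0,
                   fiber_km_per_ms: float = 200.0) -> float:
    """Max one-way fiber distance for a given round-trip budget and processing overhead."""
    transport_ms = max(rtt_budget_ms - processing_ms, 0.0)
    return transport_ms * fiber_km_per_ms / 2     # half of the round trip

print(max_one_way_km(1.0))         # 100 km: the 1 ms budget spent on transport alone
print(max_one_way_km(1.0, 0.5))    #  50 km: 0.5 ms left for air-interface & processing
print(max_one_way_km(10.0, 5.0))   # 500 km: the more relaxed budget discussed further below
```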

THE HUMAN SENSES AND THE TACTILE INTERNET.

The “touchy-feely” aspect, or human sensing in general, is clearly an inspiration to the authors of “The Tactile Internet” vision as can be seen from the following quote;

“We experience interaction with a technical system as intuitive and natural only if the feedback of the system is adapted to our human reaction time. Consequently, the requirements for technical systems enabling real-time interactions depend on the participating human senses.” (page 2, Section 1).

The human-reaction-times illustration shown below is included in “The Tactile Internet” vision paper, although it originates from Fettweis and Alamouti’s paper titled “5G: Personal Mobile Internet beyond What Cellular Did to Telephony“. It should be noted that the table describes orders of magnitude of human reaction times; thus, 10 ms might also be 100 ms or 1 ms and so forth, and therefore, as we shall see, it would be difficult to get a given reaction time wrong within such a range.

human senses

The important point here is that human perception, or the senses, impacts the user’s experience with a given application or use case very significantly.

The responsiveness of a given system or design is incredibly important for how well a service or product will be perceived by the user. Responsiveness can be defined as a relative measure against our own sense or perception of time. The measure of responsiveness is clearly not unique, but depends on which senses are being used as well as on the user engaged. The human mind is not fond of waiting, and waiting too long causes distraction, irritation and ultimately anger, after which the customer is in all likelihood lost. A very good account of considering the human mind and its senses in design specifications (and of course development) can be found in Jeff Johnson’s 2010 book “Designing with the Mind in Mind”.

Understanding the human senses, and the neurophysiological reactions to what they sense, is important for assessing a given design criterion’s impact on the user experience. For example, designing for 1 ms or lower system reaction times when the relevant neurophysiological timescale is measured in 10s or 100s of milliseconds will likely not result in any noticeable (and monetizable) improvement in customer experience. Of course, there can be many very good non-human reasons for wanting low or very low latencies.

While you might get the impression, from the table above from Fettweis et al and the countless Tactile Internet and 5G publications referring back to these data, that those neurophysiological reactions are natural constants, that is unfortunately not the case. Modality matters hugely. There are fairly great variations in reaction time within the same neurophysiological response category, depending on the individual human under test, but often also on the underlying experimental setup. In some instances, the deduced reaction time would be fairly useless as a design criterion for anything, as the detection happens unconsciously and still requires the relevant part of the brain to make sense of the event.

We have, based on vision, surgeons controlling remote surgical robots stating that anything below 175 ms of latency is imperceptible. There is research showing that haptic feedback delay below 30 ms appears to be undetectable.

John Carmack, CTO of Oculus VR Inc., states, based in particular on vision (in a fairly dynamic environment), that “.. when absolute delays are below approximately 20 milliseconds they are generally imperceptible”, particularly as it relates to 3D systems and the VR/AR user experience, which is a lot more dynamic than watching content load. Moreover, some recent user-experience research specific to website response times indicates that anything below 100 ms will be perceived as instantaneous. At 1 second, users will sense the delay, but the experience would still be perceived as seamless. If a web page loads in more than 2 seconds, user satisfaction levels drop dramatically and the user would typically bounce.

Based on IAAF (International Association of Athletics Federations) rules, an athlete is deemed to have made a false start if that athlete moves sooner than 100 milliseconds after the start signal. The neurophysiological process relevant here is the neuromuscular reaction to the sound heard (i.e., the bang of the pistol) by the athlete. Research carried out by Paavo V. Komi et al has shown that the reaction time of a prepared (i.e., waiting for the bang!) athlete can be as low as 80 ms. This particular use case relates to auditory reaction times and the subsequent physiological reaction. P.V. Komi et al also found a great variation in the neuromuscular reaction time to the sound (even far below the 80 ms!).

Neuromuscular reactions to unprepared events typically measure in several hundreds of milliseconds (up to 700 ms), being somewhat faster if driven by auditory senses rather than vision. Note that reflex time scales are approximately 10 times faster, in the order of 80 – 100 ms.

The International Telecommunication Union (ITU) Recommendation G.114 defines, for voice applications, an upper acceptable one-way delay (i.e., it’s you talking; you don’t want to be talked back to by yourself) of 150 ms. Delays below this limit would provide an acceptable degree of voice user experience, in the sense that most users would not hear the delay. It should be understood that a great variation in voice delay sensitivity exists across humans. Voice conversations would be perceived as instantaneous by most below 100 ms (though the auditory perception would also depend on the intensity/volume of the voice being listened to).

Finally, let’s discuss human vision. Fettweis et al, in my opinion, mix up several psychophysical concepts of vision and TV specifications, alluding to 10 milliseconds being the visual “reaction” time (whatever that really means). More accurately, they describe the phenomenon of the flicker fusion threshold, i.e., the point at which an intermittent light stimulus (or flicker) is perceived as completely steady by an average viewer. This phenomenon relates to persistence of vision, where the visual system perceives multiple discrete images as a single image (both flicker and persistence of vision are well described by Wikipedia and in detail by Zhong-Lin Lu et al in “Visual Psychophysics”). There are other reasons why defining flicker fusion and persistence of vision as a human reaction mechanism is unfortunate.

The 10 ms vision reaction time shown in the table above is at the lowest limit of what researchers (see references 14, 15, 16, …) find the early stages of vision can possibly detect (i.e., as opposed to pure guessing). Mary C. Potter of M.I.T.’s Dept. of Brain & Cognitive Sciences has done seminal work on human perception in general, and visual perception in particular, showing that human vision is capable of very rapidly making sense of pictures, and objects therein, on a timescale of 10 milliseconds (13 ms is actually the lowest reported by Potter). From these studies it is also found that preparedness (i.e., knowing what to look for) helps the detection process, although the overall detection results did not differ substantially from only learning the object of interest after the pictures were shown. Note that these visual reaction time experiments all happen in a controlled laboratory setting with the subject primed to be attentive (e.g., focus on a screen with a fixation cross for a given period, followed by a blank screen for another, shorter period, then a sequence of pictures each presented for a (very) short time, followed again by a blank screen and finally an object name and the yes-no question of whether the object was observed in the sequence of pictures). Often these experiments also include a certain degree of training before the actual experiment takes place. The relevant memory of the target object will, in any case and unless reinforced, rapidly dissipate; in fact, the shorter the viewing time, the quicker it disappears … which might be a very healthy coping mechanism.

To call this visual reaction time of 10+ ms typical is, in my opinion, a bit of a stretch. It is typical for that particular experimental setup, which very nicely provides important insights into the visual system’s capabilities.

One of the sillier things used to demonstrate the importance of ultra-low latencies has been to time-delay the video signal sent to a wearer’s goggles and then throw a ball at him in the physical world … obviously, the subject will not catch the ball (one might as well have thrown it at the back of his head instead). In the Tactile Internet vision paper, the following is stated: “But if a human is expecting speed, such as when manually controlling a visual scene and issuing commands that anticipate rapid response, 1-millisecond reaction time is required” (on page 3). And for the record, spinning a basketball on your finger has more to do with physics than with neurophysiology and human reaction times.

In more realistic settings, it would appear that the (prepared) average reaction time of vision is around or below 40 ms. With this in mind, a baseball moving (when thrown by a power pitcher) at 160 km per hour (or ca. 4+ cm per ms) would take approx. 415 ms to reach the batter (using an effective distance of 18.44 meters). Thus, the batter has around 415 ms to visually process the ball coming and hit it at the right time. Given the latency involved in processing vision, the ball would be at least 40 cm (@ 10 ms) closer to the batter than his latent visual impression would imply. Assuming that the neuromuscular reaction time is around 100±20 ms, the batter would need to compensate not only for that but also for his vision processing time in order to hit the ball. Based on batting statistics, the brain clearly compensates for its internal latencies pretty well. In the paper “Human time perception and its illusions”, D.M. Eagleman shows that the visual system and the brain (note: the visual system is an integral part of the brain) are highly adaptable in recalibrating time perception at the sub-second level.
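The baseball arithmetic above, in a small sketch (same 160 km/h and 18.44 m as in the text; the 10/40 ms visual latencies and the ~100 ms neuromuscular reaction time are the indicative values discussed above):

```python
v = 160 / 3.6      # pitch speed in m/s (~4.4 cm per ms)
d = 18.44          # effective pitching distance in meters

time_to_plate_ms = d / v * 1e3
print(f"time for the ball to reach the batter: {time_to_plate_ms:.0f} ms")

for visual_latency_ms in (10, 40):
    lag_cm = v * (visual_latency_ms / 1e3) * 100
    print(f"ball travel during {visual_latency_ms:>2} ms of visual latency: {lag_cm:.0f} cm")

neuromuscular_ms = 100   # indicative, +/- 20 ms
budget_ms = time_to_plate_ms - neuromuscular_ms - 40
print(f"remaining 'decision' budget (indicative): ~{budget_ms:.0f} ms")
```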

It is important to realize that in the literature on human reaction times, there is a very wide range of numbers for supposedly similar reaction use cases, and certainly a great deal of apparent contradiction (though the experimental frameworks often easily account for this).

reaction times

The supporting data for the numbers shown in the above figure can be found via the hyperlink in the above text or in the references below.

Thus, in my opinion, and largely supported by empirical data, a good E2E latency design target for a Tactile network serving human needs would be between 20 milliseconds and 10 milliseconds, with the latency budget covering the end-user device (e.g., tablet, VR/AR goggles, IoT, …), air-interface, transport and processing (i.e., any computing, retrieval/storage, protocol handling, …). It would be unlikely to cover any connectivity outside the operator’s network, unless such a connection is manageable from a latency and jitter perspective, though distance would count against such a strategy.

This would actually be quite agreeable from a network perspective, as the distance to data centers would be far more reasonable, and it would likely reduce the aggressive need for many edge data centers implied by the (well) below 10 ms ambition promoted in the Tactile Internet vision paper.

latency budget

There is, however, one thing that we are assuming in all of the above: that the user’s local latency can be managed as well and made almost arbitrarily small (i.e., much below 1 ms). Hardly very reasonable, even in the short run, for human-relevant communications ecosystems (displays, goggles, drivers, etc.), as we shall see below.

For a gaming environment we would look at something like the below illustration;

local latency should be considered

Let’s ignore the use case of local games (i.e., where the player only relies on his local computing environment) and focus on games that rely on a remote gaming architecture. This could be either a client-server-based architecture or a cloud gaming architecture (e.g., a typical SaaS setup). In general, the client-server-based setup requires more performance from the user’s local environment (e.g., equipment), but it also allows for more advanced latency-compensating strategies, enhancing the user’s perception of instantaneous game reactions. In the cloud gaming architecture, all game-related computing, including rendering/encoding (i.e., image synthesis) and video output generation, happens in the cloud. The requirements on the end user’s infrastructure are modest in the cloud gaming setup. However, applying latency reduction strategies becomes much more challenging, as that would require much more of the local computing environment that the cloud gaming architecture tries to get away from. In general, the network-transport-related latency would be the same, provided the dedicated game servers and the cloud gaming infrastructure reside on the same premises. In Choy et al’s 2012 paper “The Brewing Storm in Cloud Gaming: A Measurement Study on Cloud to End-User Latency”, it is shown, through large-scale measurements, that the current commercial cloud infrastructure is unable to deliver the latency performance needed for an acceptable (massive) multi-user experience, partly simply because such cloud data centers are too far away from the end user. Moreover, the traditional commercial cloud computing infrastructure is simply not optimized for online gaming, which requires augmentation with stronger computing resources, including GPUs and fast memory designs. Choy et al propose to distribute the current cloud infrastructure, targeting a shorter distance between the end user and the relevant cloud gaming infrastructure, similar to what is already happening today with content distribution networks (CDNs) being deployed more aggressively in metropolitan areas and thus closer to the end user.

A comprehensive treatment of latencies, or response time scales, in games, and how these relate to the user experience, can be found in Kjetil Raaen’s Ph.D. thesis “Response time in games: Requirements and improvements”, as well as in the comprehensive literature list found in that thesis.

The many studies (as found in Raaen’s work, the work of Mark Claypool and the much-cited 2002 study by Pantel et al) on gaming experience, including massive multi-user online game experience, show that players start to notice delays of about 100 ms, of which ca. 20 ms comes from play-out and processing delay. Thus, quite a far cry from the 1 millisecond. Not that surprisingly, the sensitivity to gaming latency depends on the type of game played (see the work of Claypool) and on how experienced a gamer is with the particular game (e.g., Pantel et al). It should also be noted that in a VR environment, you would want the image that arrives at your visual system to be in sync with your head movement and the direction of your vision. If there is a timing difference (or lag) between the direction of your vision and the image presented to your visual system, the user experience rapidly becomes poor, causing discomfort through disorientation and confusion (possibly leading to a physical reaction such as throwing up). It is also worth noting that in VR there is a substantial latency component simply from the image rendering (e.g., a 60 Hz frame rate provides a new frame on average every 16.7 milliseconds). Obviously, cranking up the display frame rate will reduce the rendering-related latency. In addition, several latency compensation strategies (to compensate for your head and eye movements) have been developed to cope with VR latency (e.g., time warping and prediction schemes).

Anyway, if you are of the impression that VR is just about showing moving images on the inside of some awesome goggles … hmmm, do think again, and keep dreaming of 1-millisecond end-2-end network-centric VR delivery solutions (at least for the networks we have today). Of course, the 1 ms target is possibly really a Proxima-Centauri shot as opposed to just a moonshot.

With a target of no more than 20 milliseconds of lag or latency, and taking into account the likely reaction time of the user’s VR system (a future system!), that leaves no more (and likely less) than 10 milliseconds for transport and any remote server processing. Still, this could allow a data center to be 500 km away from the user (5 ms round-trip time in fiber) and allow another 5 ms for data center processing and possible routing delay along the way.
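The same budget logic in a small sketch (the 20 ms end-2-end target is the one argued for above; the ~10 ms consumed by the local VR system and the 5 ms of data center processing are assumptions for illustration):

```python
e2e_budget_ms    = 20    # end-2-end target argued for above (human perception region)
local_vr_ms      = 10    # assumed: display, rendering, sensors, drivers on the user side
dc_processing_ms = 5     # assumed: remote processing plus routing delay along the way

transport_rtt_ms   = e2e_budget_ms - local_vr_ms - dc_processing_ms  # 5 ms round trip
fiber_km_per_ms    = 200
max_dc_distance_km = transport_rtt_ms * fiber_km_per_ms / 2          # ~500 km one-way

print(f"transport budget        : {transport_rtt_ms} ms round trip")
print(f"max data center distance: {max_dc_distance_km:.0f} km")
```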

One might very well be concerned about the present Tactile Internet vision and its focus on network-centric solutions to the very low latency target of 1 millisecond. The current vision and approach would force (fixed and mobile) network operators to add a considerable number of data centers in order to get the physical transport time down below 1 millisecond. This, in turn, drives the latest trend in telecommunications, the so-called edge data center or edge cloud. In the ultimate limit, such edge data centers (however small) might be placed at cell site locations or at fixed-network local exchanges or distribution cabinets.

Furthermore, the 1 millisecond goal might very well have very little return on user experience (UX) and a substantial cost impact for telecom operators. Diligent research through the academic literature and a wealth of practical UX experiments indicate that this might indeed be the case.

As severe and restrictive a target as 1 millisecond is, it narrows the Tactile Internet to scenarios where sensing, acting, communication and processing happen in very close proximity to each other. In addition, the restrictions it imposes on system design further limit its relevance, in my opinion. The danger with the expressed Tactile vision is that too little academic and industrial thinking goes into latency-compensating strategies using the latest advances in machine learning, virtual reality development and computational neuroscience (to name a few areas of obvious relevance). Furthermore, network reliability and managed latency, in the sense of controlling the variation of the latency, might be of far bigger importance than the latency itself below a certain limit.

So if 1 ms is no use to most men and beasts … why bother with this?

While very low latency system architectures might be of little relevance to human senses, it is of course very likely (as it is also pointed out in the Tactile Internet Vision paper) that industrial use cases could benefit from such specifications of latency, reliability and security.

For example, in machine-to-machine or things-to-things communications between sensors, actuators, databases and applications, very short reaction times in the order of sub-milliseconds to low milliseconds could be relevant.

We will look at this next.

THE TACTILE INTERNET USE CASES & BUSINESS MODELS.

An open mind would hope that most of what we do strives to outperform the human senses and improve how we deal with our environment and with situations that are far beyond mere mortal capabilities. Alas, I might have read too many Isaac Asimov novels as a kid and young adult.

In particular, 5G’s present emphasis on ultra-high frequencies (i.e., ultra-small cells) and ultra-wide spectral bandwidth (i.e., lots of Gbps), together with the current vision of the Tactile Internet (ultra-low latencies, ultra-high reliability and ultra-high security), seems to be screaming to be applied to industrial facilities, logistics warehouses, campus solutions, stadiums, shopping malls, tele-/edge-cloud, networked robotics, etc. In other words, wherever we have a happy mix of sensors, actuators, processors, storage, databases and software-based solutions across a relatively confined area, 5G and the Tactile Internet vision appear to be a possible fit and opportunity.

In the following it is important to remember;

  • 1 ms round-trip time ~ 100 km (in fiber) to 150 km (in free space) of one-way distance from the relevant action, if only the transport distance mattered to the latency budget.
  • Considering the total latency budget of a 1 ms Tactile application, the transport distance is likely to be no more than 20 – 50 km, or less (i.e., right at the RAN edge).

One of my absolute current favorite robotics use cases that comes somewhat close to the 5G Tactile Internet vision, done with 4G technology, is the example of Ocado’s warehouse automation in the UK. Ocado is the world’s largest online-only grocery retailer, with ca. 50 thousand lines of goods, delivering more than 200,000 orders a week to customers around the United Kingdom. The 4G network built (by Cambridge Consultants) to support Ocado’s automation is based on LTE in the unlicensed 5 GHz band, allowing Ocado to control 1,000 robots per base station. Each robot communicates with the base station and backend control systems every 100 ms on average as it traverses a ca. 30 km journey across the warehouse’s 1,250 square meters. A total of 20 LTE base stations, each with an effective range of 4 – 6 meters, cover the warehouse area. The LTE technology was essential in order to bring latency down to an acceptable level, by fine-tuning LTE to perform at its lowest possible latency (<10 ms).
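Just to get a feel for the control-plane load implied by those numbers, here is a sketch based only on the figures quoted above; the message payload size is purely my own illustrative assumption (the actual Ocado / Cambridge Consultants message sizes are not given here):

```python
robots_per_bs     = 1000   # robots controllable per LTE base station (from the text)
base_stations     = 20     # base stations covering the warehouse (from the text)
report_interval_s = 0.10   # each robot talks to the network every ~100 ms (from the text)

msgs_per_bs_per_s = robots_per_bs / report_interval_s   # 10,000 messages/s per base station
# Upper bound, if every base station were fully loaded with 1,000 robots
msgs_total_per_s  = msgs_per_bs_per_s * base_stations    # up to 200,000 messages/s

payload_bytes   = 200      # assumed small control/telemetry message size (illustrative)
throughput_mbps = msgs_per_bs_per_s * payload_bytes * 8 / 1e6

print(f"messages per base station : {msgs_per_bs_per_s:,.0f} per second")
print(f"messages warehouse-wide   : up to {msgs_total_per_s:,.0f} per second")
print(f"per-BS payload throughput : ~{throughput_mbps:.0f} Mbps (at {payload_bytes} B/msg)")
```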

5G will bring lower latency compared to even an optimized LTE system, and in a setup similar to the one described above for Ocado it could further increase performance. Obviously, the network reliability of such a logistics system needs to be very high (as promised by 5G) to reduce the risk of disruption, the subsequent customer dissatisfaction from late (or no) deliveries, and the exposure to grocery stock turning bad.

This all done within the confines of a warehouse building.

ROBOTICS AND TACTILE CONDITIONS

First of all, let's limit the robotics discussion to use cases related to networked robots. After all, if the robot doesn't need a network (pretty cool), it is pretty much a singleton and not so relevant for the Tactile Internet discussion. In the following I use the word Cloud in a fairly loose way to mean any form of computing-center resources, either dedicated or virtualized. The cloud could reside near the networked robotic systems as well as far away, depending on the overall system requirements on timing and delay (which, for example, might also depend on the level of robotic autonomy).

To get networked robots to work well, we need to solve a host of technical challenges, such as

  • Latency.
  • Jitter (i.e., variation of latency).
  • Connection reliability.
  • Network congestion.
  • Robot-2-Robot communications.
  • Robot-2-ROS (i.e., general robotics operations system).
  • Computing architecture: distributed, centralized, elastic computing, etc…
  • System stability.
  • Range.
  • Power budget (e.g., power limitations, re-charging).
  • Redundancy.
  • Sensor & actuator fusion (e.g., consolidate & align data from distributed sources for example sensor-actuator network).
  • Context.
  • Autonomy vs human control.
  • Machine learning / machine intelligence.
  • Safety (e.g., human and non-human).
  • Security (e.g., against cyber threats).
  • User Interface.
  • System Architecture.
  • etc…

The network-connection part of the networked robotics system can be wireless, wired, or a combination of the two. Connectivity could be to a local computing cloud or data center, to an external cloud (on the internet), or a combination: internal computing for control and management of applications requiring very low-latency, very-low-jitter communications, and an external cloud for backup and for latency- and jitter-uncritical applications and use cases.

For connection types we have wired (e.g., LAN), wireless (e.g., WLAN) and cellular (e.g., LTE, 5G). There are (at least) three levels of connectivity to consider: inter-robot communications, robot-to-cloud communications (i.e., to operations and control systems residing in a Frontend-Cloud or computing center), and possibly Frontend-Cloud to Backend-Cloud (e.g., for backup, storage and latency-insensitive operations and control systems). Obviously, there might not be a need for a split into Frontend and Backend Clouds; depending on the use case requirements they could be one and the same. Robots can be either stationary or mobile, with a need for inter-robot communications or simply robot-to-cloud communications.

Various networked robot connectivity architectures are illustrated below;

networked robotics

ACKNOWLEDGEMENT

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this Blog.

WORTHY 5G & RELATED READS.

  1. “NGMN 5G White Paper” by R.El Hattachi & J. Erfanian (NGMN Alliance, February 2015).
  2. “The Tactile Internet” by ITU-T (August 2014). Note: in this Blog this paper is also referred to as the Tactile Internet Vision.
  3. “5G: Personal Mobile Internet beyond What Cellular Did to Telephony” by G. Fettweis & S. Alamouti, (Communications Magazine, IEEE , vol. 52, no. 2, pp. 140-145, February 2014).
  4. “The Tactile Internet: Vision, Recent Progress, and Open Challenges” by Martin Maier, Mahfuzulhoq Chowdhury, Bhaskar Prasad Rimal, and Dung Pham Van (IEEE Communications Magazine, May 2016).
  5. “John Carmack’s delivers some home truths on latency” by John Carmack, CTO Oculus VR.
  6. “All About Robotic Surgery” by The Official Medical Robotics News Center.
  7. “The surgeon who operates from 400km away” by BBC Future (2014).
  8. “The Case for VM-Based Cloudlets in Mobile Computing” by Mahadev Satyanarayanan et al. (Pervasive Computing 2009).
  9. “Perception of Delay in Haptic Telepresence Systems” by Markus Rank et al. (pp 389, Presence: Vol. 19, Number 5).
  10. “Neuroscience Exploring the Brain” by Mark F. Bear et al. (Fourth Edition, 2016 Wolters Kluwer).
  11. “Neurophysiology: A Conceptual Approach” by Roger Carpenter & Benjamin Reddi (Fifth Edition, 2013 CRC Press). Definitely a very worthy read for anyone who wants to understand the underlying principles of sensory functions and basic neural mechanisms.
  12. “Designing with the Mind in Mind” by Jeff Johnson (2010, Morgan Kaufmann). Lots of cool information on how to design a meaningful user interface and on basic user experience principles worth thinking about.
  13. “Vision How it works and what can go wrong” by John E. Dowling et al. (2016, The MIT Press).
  14. “Visual Psychophysics: From Laboratory to Theory” by Zhong-Lin Lu and Barbara Dosher (2014, MIT Press).
  15. “The Time Delay in Human Vision” by D.A. Wardle (The Physics Teacher, Vol. 36, Oct. 1998).
  16. “What do we perceive in a glance of a real-world scene?” by Li Fei-Fei et al. (Journal of Vision (2007) 7(1); 10, 1-29).
  17. “Detecting meaning in RSVP at 13 ms per picture” by Mary C. Potter et al. (Attention, Perception, & Psychophysics, 76(2): 270–279).
  18. “Banana or fruit? Detection and recognition across categorical levels in RSVP” by Mary C. Potter & Carl Erick Hagmann (Psychonomic Bulletin & Review, 22(2), 578-585.).
  19. “Human time perception and its illusions” by David M. Eagleman (Current Opinion in Neurobiology, Volume 18, Issue 2, Pages 131-136).
  20. “How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch” by J. Deber, R. Jota, C. Forlines and D. Wigdor (CHI 2015, April 18 – 23, 2015, Seoul, Republic of Korea).
  21. “Response time in games: Requirements and improvements” by Kjetil Raaen (Ph.D., 2016, Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo).
  22. “Latency and player actions in online games” by Mark Claypool & Kajal Claypool (Nov. 2006, Vol. 49, No. 11 Communications of the ACM).
  23. “The Brewing Storm in Cloud Gaming: A Measurement Study on Cloud to End-User Latency” by Sharon Choy et al. (2012, 11th Annual Workshop on Network and Systems Support for Games (NetGames), 1–6).
  24. “On the impact of delay on real-time multiplayer games” by Lothar Pantel and Lars C. Wolf (Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV ’02, New York, NY, USA, pp. 23–29. ACM.).
  25. “Oculus Rift’s time warping feature will make VR easier on your stomach” from ExtremeTech Grant Brunner on Oculus Rift Timewarping. Pretty good video included on the subject.
  26. “World first in radio design” by Cambridge Consultants. Describing the work Cambridge Consultants did with Ocado (UK-based) to design the worlds most automated technologically advanced warehouse based on 4G connected robotics. Please do see the video enclosed in page.
  27. “Ocado: next-generation warehouse automation” by Cambridge Consultants.
  28. “Ocado has a plan to replace humans with robots” by Business Insider UK (May 2015). Note that Ocado has filed more than 73 different patent applications across 32 distinct innovations.
  29. “The Robotic Grocery Store of the Future Is Here” by MIT Technology Review (December 201
  30. “Cloud Robotics: Architecture, Challenges and Applications.” by Guoqiang Hu et al (IEEE Network, May/June 2012).

Mobile Data-centric Price Plans – An illustration of the De-composed.

How much money would it take for you to give up the internet? … for the rest of your life? … and, maybe much more importantly, how much do you want to pay for the internet? The cool video “Would you give up the Internet for 1 Million Dollars” hints at both of those questions and at an interesting paradox!

The perception of value is orders of magnitude higher than the willingness to pay, i.e.,

“I would NOT give up Internet for life for a Million+ US Dollars … oh … BUT … I don’t want to pay more than a couple of bucks for it either” (actually, for a mature postpaid-rich market the chances are that over your expected life-time you will pay between 30 and 40 thousand US$ for mobile internet & voice & some messaging).
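That life-time figure is easy to sanity-check; a minimal sketch, where the blended monthly spend and the subscription horizon are my own illustrative assumptions:

```python
# Rough life-time spend on mobile internet, voice & messaging in a mature postpaid-rich market.
monthly_spend_usd = 45        # assumed blended monthly spend (illustrative)
years_subscribed = 60         # assumed adult years of subscription (illustrative)

print(monthly_spend_usd * 12 * years_subscribed)   # -> 32,400 US$, i.e., within the 30-40k range
```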

Price plans are fascinating! … Particularly the recent data-centric price plans bundling in legacy services such as voice and SMS.

Needless to say, a consumer today often needs an advanced degree in science to really understand the price plans they are being presented with. A high degree of trust is involved in choosing a given plan. The consumer usually takes what has been recommended by the shop expert (who most likely doesn’t have an advanced science degree either). This shop expert furthermore might (or might not) get a commission (i.e., a bonus) for selling you a particular plan and in such a case is hardly the poster child of objectivity.

How do the pricing experts arrive at the prices they offer to the consumer? Are those plans internally consistent … or maybe not?

It becomes particularly interesting to study data-centric price plans that try to re-balance Mobile Voice and SMS.

How is 4G (in Europe also called LTE) being charged versus “normal” data offerings in the market? Does the mobile consumer pay more for quality? Or maybe less?

What is the real price of mobile data? … Clearly, it is not the price we pay for a data-centric price plan.

A Data-centric Tale of a Country called United & a Telecom Company called Anything Anywhere!

As an example of mobile data pricing and in particular of data-centric mobile pricing with Voice and SMS included, I looked at a Western European Market (let’s call it United) and a mobile operator called Anything Anywhere. Anything Anywhere (AA) is known for its comprehensive & leading-edge 4G network as well as several innovative product ideas around mobile broadband data.

In my chosen Western European country, United, voice revenues have rapidly declined over the last 5 years. Between 2009 and 2014 mobile voice revenues lost more than 36%, compared to an overall revenue loss of “only” 14%. This corresponds to a compounded annual growth rate of minus 6.3% over the period. For an in-depth analysis of the incredible mobile voice revenue losses the mobile industry has incurred in recent years, see my blog “The unbearable lightness of mobile voice”.

Did this market experience a massive uptake in prepaid customers? No! Not at all … The prepaid share of the customer base went from ca. 60% in 2009 to ca. 45% in 2014. In other words, the postpaid base grew by 15 percentage points over the period and in 2014 was around 55%. This should usually have been cause for great joy and an incredible boost in revenues. United is also a market that has largely managed not to capitalize economically on substantial market consolidation.

As in many other mobile markets, engaging with & embracing the mobile broadband data journey has been accompanied by a sharp decline in the overall share of voice revenue, from ca. 70% in 2009 to ca. 50% in 2014. An ugly trend when total mobile revenue declines as well.

The smartphone penetration in United as of Q1 2014 was ca. 71%, with 32% iOS-based devices. Compare this to 2009, when smartphone penetration was ca. 21% with iOS making up around 75+%.

Our mobile operator AA has the following price plan structure (note: all information is taken directly from AA’s web site and can be traced back if you guess which company it applies to);

  • Data-centric price plans with unlimited Voice and SMS.
  • Differentiated speed plans, i.e., 4G (average speed advertised at 12 – 15 Mbps) vs. Double Speed 4G (average speed advertised at 24 – 30 Mbps).
  • Offer plans that apply European Union-wide.
  • Option to pay less for the handset upfront but more per month (i.e., particularly attractive for expensive handsets such as iPhone or Samsung Galaxy top-range models).
  • Default contract period is 24 months, although a shorter period is possible as well.
  • Offer SIM-only data-centric plans with unlimited voice & SMS.
  • Offer Data-only SIM-only plans.
  • Further, you get access to an extensive “WiFi Underground”. Tethering and VoIP, including voice-calling over WiFi, are allowed.

So here is an example of AA’s data-centric pricing for various data allowances. In this illustration I have chosen to add an iPhone 6 Plus (why? well, I do love that phone as it largely replaces my iPad outside my home!) with 128GB storage. This choice has no impact on the fixed and variable parts of the respective price plans. For the SIM-only plans in the data below, I have added the (Apple) retail price of the iPhone 6 Plus (light grey bars). This is to make the comparison somewhat more like-for-like. It should of course be clear that in the SIM-only plans the consumer is not obliged to buy a new device.

tco 24 month

  • Figure above: illustrates the total consumer cost, or total price paid over the period (in local currency), of different data plans for our leading Western European mobile operator AA. The first 9 plans shown above include an iPhone 6 Plus with 128GB memory. The last 5 are SIM-only plans, with the last 2 being Data-only SIM-only plans. The abbreviations are as follows: PPM: Pay per Month (little upfront for the terminal), PUF: Pay UpFront (for the terminal) and less per month, SIMO: SIM-Only plan, SIMDO: SIM Data-Only plan, xxGB: the xx amount of Giga Bytes offered in the plan, 2x indicates double the “normal” 4G speed and 1x indicates “normal” speed, the 1st UL indicates unlimited voice in the plan, the 2nd UL indicates unlimited SMS in the plan, and EU indicates that the plan also applies across EU countries without extra charges. So PPM20GB2xULULEU defines a Pay per Month plan (i.e., the handset is paid over the contract period and thus leads to higher monthly charges) with a 20 GB allowance at double (4G) speed, with unlimited Voice and unlimited SMS, valid across the EU. In this plan you would pay 100 (in local currency) for an iPhone 6 Plus with 128 GB. Note the local Apple Shop retail price of an iPhone 6 Plus with 128 GB is around 789 in local currency (of which ca. 132 is VAT) for this particular country. Note: for the SIM-only plans (i.e., SIMO & SIMDO) I have added the Apple retail price of an iPhone 6 Plus 128GB. It should furthermore be pointed out that the fixed service fee and the data consumption price do not vary with the choice of handset.

Say I decide that I really want that iPhone 6 Plus but do not want to pay the high upfront price (even with discounts) that some price plans demand. AA offers me a 20GB 4G data plan where I pay 100 upfront for the iPhone 6 Plus (with 128 GB memory) and 63.99 per month (i.e., as this feels much cheaper than paying 64) for the next 24 months. After 24 months my total cost of the 20 GB plan would be 1,636. I could save 230 over the 24 months if I instead paid 470 for the iPhone upfront (+370 compared to the previous plan & –319 compared to the Apple retail price). In this lower-cost plan my monthly cost of the 20 GB would be 38.99, i.e., 25 (40%!) less per month.
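The arithmetic behind those two totals is easily checked; a minimal sketch using only the prices quoted above:

```python
# Total cost of ownership over the 24-month contract for the two 20 GB options above.
def total_cost(upfront, monthly, months=24):
    """Upfront handset payment plus the sum of the monthly fees."""
    return upfront + monthly * months

pay_little_upfront = total_cost(upfront=100, monthly=63.99)  # -> 1,635.76 (~1,636)
pay_device_upfront = total_cost(upfront=470, monthly=38.99)  # -> 1,405.76 (~1,406)

print(round(pay_little_upfront - pay_device_upfront))        # -> 230 saved by paying upfront
print(round((1 - 38.99 / 63.99) * 100))                      # -> ~39% lower monthly fee
```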

The analysis shows that a “pay-less-upfront-and-more-per-month” subscriber would, after the 24 months, have paid at least ca. 761 for the iPhone 6 Plus (with 128GB). We will see later that the total price paid for the iPhone 6 Plus is, however, likely to be approximately 792, or slightly above today’s retail price (based on Apple’s pricing).

The Price of a Byte and all that Jazz

So what do the above data-price plans look like in terms of price per Giga Byte?

Although in most cases not very clear to the consumer, the data-centric price plan is structured around the price of the primary data allowance (i.e., the variable part) and the non-data related bundled services included in the plan (i.e., the fixed service part representing non-data items).

There will be a variable price reflecting the data-centric price plan's data allowance, and a “fixed” service fee that captures the price of bundled services such as voice and SMS. Based on the total price of the data-centric price plan, it will often appear that the higher the allowance, the cheaper your unit-data “consumption” (or allowance) becomes, indicating that volume discounts have been factored into the price plan. In other words, the higher the data allowance, the lower the price per GB of allowance.

This is often flawed logic and simply an artefact of the bundled non-data related services being priced into the plan. However, to get to that level of understanding requires a bit of analysis that most of us certainly don’t do before a purchase.

price per giga byte

  • Figure above: Illustrates the unit price of a Giga Byte (GB) versus AA’s various data-centric price plans. Note the price plans can be decomposed into a variable, data-usage-attributable price (per GB) and a fixed service fee that accounts for non-data services blended into the price. The “Data Consumption per GB” is the variable, data-usage-dependent part of the price plan, and the “Total price per GB” is the full price normalized to the plan’s data consumption allowance.

So with the above we have argued that the total data-centric price can be written as a fixed and a variable part;

P_{Tot} = P_{Fixed} + P_{Data}(U_{GB}) = P_{Fixed} + p_{GB}\,U_{GB}^{\beta}

As will be described in more detail below, the data-centric price {P_{Tot}} is structured into what can be characterized as a “Fixed Service Fee” {P_{Fixed}} and a variable “Data Consumption Price” {P_{Data}} that depends on a given price plan’s data allowance {U_{GB}} (GB being Giga Byte). The “Data Consumption Price” {P_{Data}} is variable in nature, and while it might be a complicated function of the data allowance {U_{GB}}, it will typically be of the form {p_{GB}}U_{GB}^\beta with the exponent \beta (i.e., Beta) being 1 or close to 1. In other words, the data consumptive price is a linear (or approximately linear) function of the data allowance. In case \beta is larger than 1, data pricing gets progressively more expensive with increasing allowance (i.e., penalizing high consumption or, as I believe, right-costing high consumption). For \beta lower than 1, data gets progressively cheaper with increasing data allowances, corresponding to volume discounts, with the danger of mismatching the data pricing with the cost of delivering the data.
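As a minimal sketch of how such a decomposition can be extracted from a published plan family (the allowance/price pairs below are made-up illustrative numbers, not AA's actual tariffs):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (allowance in GB, total monthly price) pairs for one plan family.
u_gb  = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
p_tot = np.array([31.0, 32.0, 33.5, 37.0, 44.0, 62.0])

def plan_price(u, p_fixed, p_gb, beta):
    """P_Tot = P_Fixed + p_GB * U_GB^beta, the decomposition discussed above."""
    return p_fixed + p_gb * u**beta

(p_fixed, p_gb, beta), _ = curve_fit(plan_price, u_gb, p_tot, p0=[30.0, 1.0, 1.0])
print(f"fixed fee ~ {p_fixed:.1f}, price per GB ~ {p_gb:.2f}, beta ~ {beta:.2f}")
```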

The “Fixed Service Fee” depends on all the non-data related goodies that are added to the data-centric price plan, such as (a) unlimited voice, (b) unlimited SMS, (c) Price plan applies Europe-wide (i.e., EU-Option), (d) handset subsidy recovery fee, (e) maybe a customer management fee, etc..

For most data-centric price plans, if the total price divided by the allowance is plotted against the allowance {U_{GB}} in a log-log format, the result is a fairly straight line.

examples of power-law behaviour

Nothing really surprising given the pricing math involved! It is instructive to see what actually happens when we take a data-centric price and divide by the corresponding data allowance;

\frac{P_{Tot}}{U_{GB}} = \frac{P_{Fixed} + p_{GB}\,U_{GB}^{\beta}}{U_{GB}} \;\stackrel{\beta = 1}{=}\; p_{GB} + P_{Fixed}\,U_{GB}^{-1}

For very large data allowances {U_{GB}}, the price per GB asymptotically converges to {p_{GB}}, i.e., the unit price of a GB. As {p_{GB}} is usually a lot smaller than {P_{Fixed}}, there is another limit, where the allowance {U_{GB}} is relatively low and the per-GB price is dominated by the {P_{Fixed}}U_{GB}^{ - 1} term, i.e., a slope of -1 in the log-log plot. Typically, for allowances from 0.1 GB up towards 50 GB, a non-linear slope of approximately -0.7±0.1 is observed, thus in between the fixed-fee-dominated regime (slope -1) and the constant per-GB pricing regime (slope 0).
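A small numerical sketch of this limiting behaviour (the fixed fee and unit price below are assumptions chosen for illustration; the resulting slope depends on their ratio):

```python
import numpy as np

p_fixed, p_gb = 26.0, 0.65                      # assumed decomposition (beta = 1)
u = np.logspace(-1, np.log10(50), 100)          # allowances from 0.1 GB to 50 GB
price_per_gb = p_gb + p_fixed / u               # P_Tot / U_GB

slope = np.polyfit(np.log10(u), np.log10(price_per_gb), 1)[0]
print(round(slope, 2))   # ~ -0.9 here, i.e., between -1 (fixed-fee dominated) and 0 (pure per-GB)
```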

We can also observe that if the total price of a data-centric price plan associated with a given data allowance (i.e., GB) is used to derive a price per GB, one would conclude that most mobile operators provide the consumer with volume discounts as they adopt higher data allowance plans. The GB gets progressively cheaper for higher-usage plans. As most data-centric price plans are in the range where {p_{GB}} is (a lot) smaller than {P_{Fixed}}U_{GB}^{ - 1}, it will appear that the unit price of data declines as the data allowance increases. However, in most cases this is likely an artefact of the Fixed Service Fee reflecting non-data related services, which, unless it is a data-only bundle, can be a very substantial part of the data-centric price plan.

It is clear that normalizing the totality of a data-centric price plan by its data allowance, particularly when non-data services have been blended into the plan, will not reveal the real price of data. If used for assessing, for example, data profitability or other mobile-data related financial KPIs, this approach may be of very little use.

data centric price dynamics

  • Figure above: illustrates the basic characteristics of a data-centric price plan normalized by the data allowance. The data for this example reflect AA’s data-centric price plans with 2x4G speed and bundled unlimited Voice & SMS, applying EU-wide. We see that the Beta value corresponds to a volume discount (at values lower than 1) or a volume penalty (at values higher than 1).

Oh yeah! … The really “funny” part of most data-price plan analyses (including my own past ones!) is that they are more likely to reflect the fixed service part (independent of the data allowance) of the data-centric price plan than the actual unit price of mobile data.

What to expect from AA’s data-centric price plans?

So, in a rational world of data-centric pricing (assuming such a thing exists), what should we expect of Anything Anywhere’s price plans as advertised online;

  • The (embedded) price for unlimited voice would be the same irrespective of the data plan’s allowed data usage (i.e., unlimited Voice does not depend on data plan).
  • The (embedded) price for unlimited SMS would be the same irrespective of the data plan’s allowed data usage (i.e., unlimited SMS does not depend on data plan).
  • You would pay more for having your plan extended to apply across the European Union compared to not having this option.
  • You would (actually you should) expect to pay more per Mega Byte for the Double Speed option as compared to the Single Speed Option.
  • If you decide to “finance” your handset purchase (i.e., pay less upfront option) within a data plan you should expect to pay more on a monthly basis.
  • Given a data plan has a whole range of associated handsets priced From Free (i.e., included in plan without extra upfront charge) to high-end high-priced Smartphones, such as iPhone 6 Plus 128 GB, you would not expect that handset related cost would have been priced into the data plan. Or if it is, it must be the lowest common denominator for the whole range of offered handsets at a given price plan.
  • Where the discussion becomes really interesting is how your data consumption should be priced; (1) You pay more per unit of data consumption as you consume more data on a monthly basis, (2) You pay the same per unit irrespective of your consumption or (3) You should have a volume discount making your units cheaper the more you consume.

Of course, the above holds if and only if the price plans have been developed in a reasonably self-consistent manner.

data price analysis

  • Figure above: Illustrates AA’s various data-centric price plans (taken from their web site). Note that PPM represents low upfront (terminal) cost for the consumer and higher monthly cost, and PUF represents paying upfront for the handset and thus having lower monthly costs as a consequence. In the PPM plan, operator AA allows the consumer to choose an iPhone 6 Plus 128GB (priced at 100 to 160) or an iPhone 6 Plus 64GB option (at a lower price, of course).

First note that Price Plans (with more than 2 data points) tend to be linear with the Data Usage allowance.

The Fixed Service Fee – The Art of Re-Capturing Lost Legacy Value?

In the following I define the Fixed Service Fee as the part of the total data-centric price plan that is independent of a given plan’s data allowance. The logic is that this part would contain all non-data related cost such as Unlimited Voice, Unlimited SMS, EU-Option, etc..

From AA’s voice plan (for 250 Minutes @ 10 per Month & 750 Minutes @ 15 per Month) with unlimited SMS (& no data) it can be inferred that

  • The price of Unlimited SMS can be no higher than 7.5 (i.e., the fixed portion left of the 250-minute plan once the implied per-minute price of ca. 0.01 is extrapolated back to zero minutes). This, however, likely also includes general customer maintenance cost.

Monthly customer maintenance cost (cost of billing, storage, customer care & systems support, etc.) might be deduced from the SIM-Only Data-Only package and would be

  • The price of monthly customer maintenance could be in the order of 5, which would imply that the Unlimited SMS price is 2.5. Note the market average postpaid SMS ARPU in 2014 was ca. 8.40 (based on Pyramid Research data). The market average number of postpaid SMS per month was ca. 273 SMS.

From AA’s SIM-only plan we get that the fixed portion of providing service (i.e., customer maintenance, unlimited Voice & SMS usage) is 14 and thus

  • The price of Unlimited Voice should be approximately 6.5 (i.e., the fixed portion of 14 minus ca. 5 of customer maintenance and 2.5 of Unlimited SMS). Note the market average postpaid voice ARPU was ca. 12 (based on Pyramid Research data). The market average voice usage per month was ca. 337 minutes. Further, from the available limited voice price plans it can be deduced that unlimited voice must correspond to more than 1,000 minutes, or more than 3 times the national postpaid average.

The difference in the fixed part of the data-centric pricing between the data-centric SIM-only plan and a similar data-centric plan including a handset (i.e., all services the same except for the addition of the handset) can be regarded as a minimum handset financing cost, allowing the operator to recover some of the handset subsidy;

  • Equipment subsidy recovery cost of 7 (i.e., over a 24-month period this amounts to 168, which is likely to recover the average handset subsidy). Note that if the customer chooses to pay little upfront for the handset, the customer would have to pay 26 extra per month in the fixed service fee. Thus the low-upfront option results in another 624 over the 24-month contract period. Interestingly, with the initial 7 for handset subsidy recovery in the basic fixed service fee, a customer would have paid 792 in handset recovery over the 24-month contract period, a bit more than the iPhone 6 Plus 128GB retail price (see the arithmetic sketched below).
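A minimal sketch of that handset-recovery arithmetic, reusing the fee levels derived above:

```python
# Handset-recovery arithmetic for the pay-little-upfront option (fees as derived above).
months = 24
base_recovery_fee = 7        # embedded in every handset plan's fixed service fee
low_upfront_extra = 26       # additional monthly fee when paying little upfront

print(base_recovery_fee * months)                        # -> 168
print(low_upfront_extra * months)                        # -> 624
print((base_recovery_fee + low_upfront_extra) * months)  # -> 792, ~ the 128GB retail price
```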

The price for allowing the data-centric price plan to apply European Union-wide is

  • The EU-Option (i.e., plan applicable within EU) appears to be priced at ca. 5 (caution: 2x4G vis-a-vis 1x4G could have been priced into this delta as well).

For the EU-option price it should be noted that the two plans being compared differ in more than the EU-option alone. The plan without the EU option is a data plan with “normal” 4G speed, while the EU-option plan supports double 4G speed. So in theory the additional EU-option charge of 5 could also include a surcharge for the additional speed.

Why an operator would add the double speed to the fixed service fee part of the price is a “bit” strange. The 2x4G speed option is clearly a variable trigger for cost (and for value to the customer’s data usage). It should thus be introduced in the variable part (i.e., the Giga-Byte dependent part) of the data-centric price plan.

In the following it is assumed that the derived difference can indeed be attributed to the EU-option, i.e., that the double speed has not been priced into the monthly Fixed Service Fee.

In summary we get AA’s data-centric price plan’s monthly Fixed Service Fee de-composition as follows;

fixed part of data-centric pricing

  • Figure above: shows the composition of the monthly fixed service fee as part of AA’s data-centric plans. Of course in a SIM-only scenario the consumer would not have the Handset Recovery Fee inserted in the price plan.

So, irrespective of the data allowance, a (postpaid) customer would pay between 26 and 52 per month, depending on whether handset financing is chosen (i.e., a low upfront payment at the expense of higher monthly cost).

Mobile data usage still has to happen!

The price of Mobile Data Allowance.

The variable data prices in the studied data-centric price plans are summarized in the table and figure below;

Price plan                          4G Speed    Price per GB
Pay Less Upfront & More per Month   Double      0.61 ± 0.03
Pay Upfront & Less per Month        Double      0.67 ± 0.05
SIM-Only                            Single      1.47 ± 0.08
SIM-Only Data-Only                  Single      2 (only 2 data points)

variable data price analysis

The first thing that should obviously make you stop in wonder is that a single-4G-speed Giga Byte is more than twice the price of a double-4G-speed Giga Byte. If in need of speed … well, AA’s 2x4G price plans will give you a pretty good deal.

Second thing to notice is that it would appear to be a really bad deal (with respect to the price-per-byte) to be a SIM-Only Data-Only customer.

The Data-Only customer pays 2 per GB, almost 3 times more than if you choose a subscription with a device, double speed, double unlimited and an EU-wide applicable price plan.

Agreed! In absolute terms the SIM-only Data-only plan costs a lot less per month (9 less than the 20GB pay-device-upfront plan), and it is possible to run away after 12 months (versus the 24-month plans). One rationale for charging extra per Byte for a SIM-only Data-only plan could be that the SIM card might be used in tablets or data-card/dongle products that typically consume most, if not all, of a given plan’s allowance. For normal devices and high-allowance plans, average consumption can be quite a lot lower than the actual allowance, particularly over a 24-month period.

You might argue that this is all about how the data-centric price plans have been de-composed into a fixed service fee (supposedly the non-data dependent component) and a data consumptive price. However, even when considering the full price of a given price plan, Single-4G-Speed is more expensive per Byte than Double-4G-Speed.

You may also argue that I am comparing apples and oranges (or even bananas, depending on taste), as the Double-4G-Speed plans include a device and a price plan that applies EU-wide, versus the SIM-only plan that includes the customer’s own device and a price plan that only works in United. All true of course … but why it should be more expensive to opt out of those extras is a bit beyond me, and why this should have an inflationary impact on the price per Byte … well, a bit of a mystery as well.

At least there is no (statistical) difference in the variable price of a Giga Byte whether the customer chooses to pay off her device over the 24-month contract period or pay (most of) it upfront.

For AA it doesn’t seem to be of concern! … As 88% would come back for more (according to their web site).

Obviously, this whole analysis makes the big assumption that the data-centric price plans are somewhat rationally derived … this might not be the case!

and it assumes that rationally & transparently derived price plans are the best for the consumer …

and it assumes what is good for the consumer is also good for the company …

Is AA different in this respect from other operators around the world? …

No! AA is not different from any other incumbent operator coming from a mobile voice centric domain!

Acknowledgement

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this Blog.

Postscript – The way I like to look at (rational … whatever that means) data-centric pricing.

Firstly, it would appear that AA’s pricing philosophy follows the industry standard of pricing mobile services, and in particular mobile data-centric services, by the data volume allowance. Non-data services are added to the data-centric price plan and in effect make up the largest part of the plan, even at relatively higher data allowances;

standard pricing philosophy in mobile domain

  • Figure above: illustrates the typical approach to price plan design in the Telecom industry. Note that while not per se wrong, it often overweights the volume element of pricing and often results in sub-optimizing the Quality and Product aspects. Source: Dr. Kim K Larsen’s Mind Share contribution at Informa’s LTE World Summit May 2012; “Right pricing LTE and mobile broadband in general (a Technologist’s Observations)”.

Unlimited Voice and SMS in AA’s standard data-centric plans should clearly mitigate possible loss of, or migration away from, old-fashioned voice (i.e., circuit switched) and SMS. However, both the estimated allowances for unlimited voice (6.5) and SMS (2.5) appear to be a lot lower than their classical standalone ARPUs for the postpaid category. This certainly could explain why this market (as many others in Western Europe) has lost a massive amount of voice revenue over the last 5 years. In other words, re-capturing or re-balancing legacy service revenues into data-centric plans still has some way to go in order to be truly effective (if it is at all possible, which is highly questionable at this time and age).

pricing_fundamentals

As a Technologist, I am particularly interested in how the technology costs and benefits are being considered in data-centric price plans.

The big challenge for the pricing expert who focuses too much on volume is that the same volume can result from vastly different network qualities and speeds. The customer’s handset will drive the experience of quality, and certainly consumption, and thereby differences in network load and thus technology cost. A customer with an iPhone 6 Plus is likely to load the mobile data network more (and thus incur higher cost) than a customer with a normal-screen smartphone one or two generations removed from the iPhone 6 Plus. It is even conceivable that a user with an iPhone 6 Plus will load the network more than a customer with a normal iPhone 6 (independent of the iOS). This is very, very different from the voice and SMS volumetric considerations in legacy price plans, where the handset had little (or no) impact on network load relative to the usage.

For data-centric price plans to be consistent with the technology cost incurred, one should consider the following (a small consistency-check sketch follows the list);

  • Higher “guaranteed” quality, typically speed or latency, should be priced higher per Byte than lower-quality plans (or at the very least not lower).
  • Higher volumetric allowances should be priced higher per Byte than lower volumetric allowances (or at the very least not lower).
  • Offering unlimited Voice & SMS in data-centric plans (as well as other bundled goodies) should be carefully re-balanced to re-capture some of the lost legacy revenues.
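Here is a small consistency-check sketch along those lines (the plan tuples are hypothetical placeholders, not AA's actual offerings):

```python
# Flag plans whose per-GB price decreases although speed tier and allowance do not,
# i.e., violations of the consistency rules listed above. Plan data is illustrative only.
plans = [
    # (speed tier: 1 = single 4G, 2 = double 4G, allowance in GB, price per GB)
    (1, 10, 1.50),
    (1, 20, 1.45),
    (2, 10, 0.65),
    (2, 20, 0.61),
]

def consistency_issues(plans):
    issues = []
    for s1, u1, p1 in plans:
        for s2, u2, p2 in plans:
            if (s2, u2) != (s1, u1) and s2 >= s1 and u2 >= u1 and p2 < p1:
                issues.append(f"plan {(s2, u2)} is cheaper per GB ({p2}) than plan {(s1, u1)} ({p1})")
    return issues

for issue in consistency_issues(plans):
    print(issue)
```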

That AA’s data-centric plans for double speed appear to be cheaper per Byte than their plans at a lower data-delivery quality level is not consistent with costing. Of course, AA cannot really guarantee that the customer will get double 4G speed everywhere, and as such it may not be fair to charge substantially more than for single speed. However, that is of course not what appears to happen here.

AA’s lowest data unit price is around 0.6 – 0.7 per Giga Byte (or 0.06 – 0.07 cents per Mega Byte). That price is very low and in all likelihood lower than their actual production cost of a GB or MB.

However, one may argue that as long as the total service revenue gained by a data-centric price plan recovers the production cost, as well as providing a healthy margin, then whether the applied data unit price is designed to recover the data production cost is maybe less of an issue.

In other words, data profitability may not matter as much as overall profitability. That said, it remains in my opinion inexcusable for a mobile operator not to understand its main (data) cost drivers and to ensure they are recovered in its overall pricing strategy.

Surely, you may say? … “Surely mobile operators know their cost structure and respective cost drivers, and their price plans reflect this knowledge?”

It is my observation that most price plans (data-centric or not) are developed primarily in response to competition (which of course is an important pricing element as well) rather than firmly anchored in Cost, Value & Profit considerations. Do Operators really & deeply know their own cost structure and cost drivers? … Ahhh … In my opinion few really appear to do!

The Unbearable Lightness of Mobile Voice.

  • Mobile data adoption can be (and usually is) very unhealthy for mobile voice revenues.
  • A Mega Byte of Mobile Voice is 6 times more expensive than a Mega Byte of Mobile Data (i.e., global average).
  • If customers paid the Mobile Data price for Mobile Voice, 50% of Global Mobile Revenue would evaporate (based on 2013 data).
  • Classical Mobile Voice is not dead! Global mobile voice usage grew by more than 50% over the last 5 years, though global voice revenue remained largely constant (over 2009 – 2013).
  • Mobile voice revenues declined in most Western European & Central Eastern European countries.
  • Voice revenue in emerging mobile-data markets (i.e., Latin America, Africa and APAC) showed positive, although decelerating, growth.
  • Mobile applications providing high-quality (often High Definition) mobile Voice over IP should be expected to dent classical mobile voice revenues (as Apps have impacted SMS usage & revenue).
  • Most Western & Central Eastern European markets show an increasing decline in the price elasticity of mobile voice demand. Some markets (regions) even had their voice demand decline as voice prices were reduced (note: not that causality should be deduced from this trend).
  • The art of re-balancing (or re-capturing) mobile voice revenue in data-centric price plans is non-trivial and prone to trial-and-error (but likely also unavoidable).

An Unbearable Lightness.

There is something almost perverse about how lightly the mobile industry tends to treat Mobile Voice, an unbearable lightness?

How often don’t we hear Telco executives wish for All-IP and web-centric services for all? More and more mobile data-centric plans are being offered with voice as an afterthought, even though voice still constitutes more than 60% of global mobile turnover (and in many emerging mobile markets beyond that), and even though classical mobile voice is more profitable than true mobile broadband access. “Has the train left the station” for voice and run off the track? In my opinion, it might have for some Telecom operators, but surely not for all. Taking some time away from thinking about mobile data would already be an incredible improvement, if spent on strategizing around and safeguarding mobile voice revenues, which still are a very substantial part of The Mobile Business Model.

Mobile data penetration is unhealthy for voice revenue. It is almost guaranteed that voice revenue will start declining as mobile data penetration reaches 20% and beyond. There are very few exceptions (i.e., Australia, Singapore, Hong Kong and Saudi Arabia) to this rule, as observed in the figure below. Much of this can be explained by the Telecoms’ focus on mobile data and on mobile-data-centric strategies that take the mobile voice business for granted or treat it as an afterthought … focusing on a future of All-IP services where voice is “just” another data service. Given the importance of voice revenues to the mobile business model, treating voice as an afterthought is maybe not the most value-driven strategy to adopt.

I should maybe point out that this is not per se a result of the underlying cellular All-IP technology. The fact is that cellular voice over an All-IP network is very well specified within 3GPP. Voice over LTE (i.e., VoLTE), or Voice over HSPA (VoHSPA) for that matter, is enabled with the IP Multimedia Subsystem (IMS). Both VoLTE and VoHSPA, or simply Cellular Voice over IP (Cellular VoIP as specified by 3GPP), are highly spectrally efficient (compared to their circuit-switched equivalents). Further, Cellular VoIP can be delivered at a quality comparable to or better than High Definition (HD) circuit-switched voice. Ericsson has published recent Mean Opinion Score (MOS) measurements, and more recently (August 2014) Signals Research Group & Spirent together conducted very extensive VoLTE network benchmark tests, including comparisons of VoLTE with the voice quality of 2G & 3G voice as well as Skype (“Behind the VoLTE Curtain, Part 1. Quantifying the Performance of a Commercial VoLTE Deployment”). A further advantage of Cellular VoIP is that it is specified to inter-operate with legacy circuit-switched networks via the circuit-switched fallback functionality. An excellent account of Cellular VoIP, and VoLTE in particular, can be found in Miikka Poikselkä et al.’s great book “Voice over LTE” (Wiley, 2012).

It’s not the All-IP technology that is wrong; it’s the commercial & strategic thinking about Voice in an All-IP world that leaves a lot to be wished for.

Voice over LTE provides much better voice quality than a non-operator-controlled (i.e., OTT) mobile VoIP application would be able to offer. But whether that quality is worth 5 to 6 times the price of data, that is the Billion $ question.

voice growth vs mobile data penetration

  • Figure Above: illustrates the compound annual growth rates (2009 to 2013) of mobile voice revenue versus the mobile data penetration at the beginning of the period (i.e., 2009). As will be addressed later, it should be noted that the growth of mobile voice revenues does NOT only depend on mobile data penetration rates but also on a few other important factors, such as the addition of new unique subscribers, the minute price and the voice ARPU compared to the income level (to name a few). The analysis is based on Pyramid Research data. Abbreviations: WEU: Western Europe, CEE: Central Eastern Europe, APAC: Asia Pacific, MEA: Middle East & Africa, NA: North America and LA: Latin America.

In the following discussion classical mobile voice should be understood as an operator-controlled voice service charged by the minute or in equivalent economical terms (i.e., re-balanced data pricing). This is opposed to a mobile-application-based voice service (outside the direct control of the Telecom Operator) charged by the tariff structure of a mobile data package without imposed re-balancing.

If the industry charged a mobile voice minute the equivalent of what they charge for a mobile Mega Byte … almost 50% of mobile turnover would disappear … So be careful AND be prepared for what you wish for!
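A rough sanity check of that ~50% figure, using only the shares and price ratio quoted elsewhere in this post (voice at ca. 60% of turnover, a voice Mega Byte at ca. 6x the price of a data Mega Byte):

```python
# What happens to total mobile turnover if voice were re-priced at mobile data rates?
voice_share_of_turnover = 0.60     # ca. 60% of global mobile turnover (2013)
voice_to_data_price_ratio = 6.0    # a voice MB costs ~6x a data MB (global average)

repriced_voice_share = voice_share_of_turnover / voice_to_data_price_ratio   # -> 0.10
turnover_retained = (1 - voice_share_of_turnover) + repriced_voice_share     # -> 0.50

print(f"{turnover_retained:.0%} of turnover retained")   # roughly half evaporates
```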

There are at least a couple of good reasons why mobile operators should be very focused on preserving mobile voice as we know it (or approximately so) also in LTE (and any future standards). Even more so, mobile operators should try to avoid too many associations with non-operator-controlled Voice-over-IP (VoIP) smartphone applications (easier said than done … I know). It will be very important to define a future voice service on the All-IP mobile network that maintains its economics (i.e., pricing & margin) and doesn’t get “confused” with mobile-data-based economics, with their substantially lower unit prices & questionable profitability.

Back in 2011 at the Mobile Open Summit, I presented “Who pays for Mobile Broadband” (both in London & San Francisco) with the following picture, drawing attention to some of the legacy service (e.g., voice & SMS) challenges our industry would be facing in the years to come from the many mobile applications developed and in development;

voice_future

One of the questions back in 2011 was (and wow, it still is! …) how to maintain mobile ARPU & revenues at a reasonable level, as opposed to the massive loss of revenue and business-model sustainability that the mobile data business model appeared to promise (and pretty much still does), in particular the threat (& opportunities) from mobile smartphone applications. Mobile apps provide mobile customers with attractive price arbitrage compared to their legacy prices for SMS and classical voice.

“IP killed the SMS Star” … Will IP do away with Classical Mobile Voice Economics as well?

Okay … let’s just be clear about what is killing SMS (it’s hardly dead yet). The mobile smartphone Messaging-over-IP (MoIP) app does the killing. However, the tariff structure of an SMS vis-a-vis that of a mobile Mega Byte (i.e., ca. 3,000x) is the real instigator of the deed, together with the sheer convenience of the mobile application itself.

As of August 2014, the top Messaging & Voice-over-IP smartphone applications shared ca. 2.0+ Billion active users (not counting Facebook Messenger, and of course with overlap, i.e., active users having several apps on their device). WhatsApp is the number one mobile communications app with about 700 Million active users (i.e., up from 600 Million active users in August 2014). Other smartphone apps are further away from the WhatsApp adoption figures: Viber can boast 200+M active users, WeChat (predominantly popular in Asia) reportedly has 460+M active users, and good old Skype around 300+M active users. The impact of smartphone MoIP applications on classical messaging (e.g., SMS) is well evidenced. So far, mobile Voice-over-IP has not visibly dented the Telecom industry’s mobile voice revenues. However, the historical evidence is obviously no guarantee that it will not become an issue in the future (near, medium or far).

WhatsApp is rumoured to launch mobile voice calling in the first quarter of 2015 … Will this event be the undoing of operator-controlled classical mobile voice? WhatsApp has already taken the SMS scalp, with 30 Billion WhatsApp messages sent per day according to the latest data from WhatsApp (January 2015). For comparison, the number of SMS sent over mobile networks globally was a bit more than 20 Billion per day (source: Pyramid Research data). It will be very interesting (and likely scary as well) to follow how the WhatsApp Voice (over IP) service will impact Telecom operators’ mobile voice usage and of course their voice revenues. The industry appears to take the news lightly and is supposedly unconcerned about the prospect of WhatsApp launching a mobile voice service (see: “WhatsApp voice calling – nightmare for mobile operators?” from 7 January 2015) … My favourite lightness is Vodacom’s (South Africa) “if anything, this vindicates the massive investments that we’ve been making in our network….” … Talking about the unbearable lightness of mobile voice … (i.e., 68% of the mobile internet users in South Africa have WhatsApp on their smartphone).

Paying the price of a mega byte of mobile voice.

A Mega-Byte is not just a Mega-Byte … it is much more than that!

In 2013, the going global average rate for a Mobile (Data) Mega Byte was approximately 5 US-Dollar cents (or a Nickel). A Mega Byte (MB) of circuit-switched voice (i.e., ca. 11 minutes @ a 12.2 kbps codec) would cost you 30+ US$-cents, or about 6 times that of a Mobile Data MB. Should you try to send a MB of SMS (i.e., ca. 7,143 of them), that would cost you roughly 150 US$ (NOTE: US$, not US$-cents).
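The conversions behind those per-MB comparisons are easy to verify (codec rate and SMS size as quoted above; the per-minute and per-SMS prices are back-calculated, not quoted tariffs):

```python
# How much legacy service fits in one Mega Byte, and the implied unit prices.
MB_BITS, MB_BYTES = 8_000_000, 1_000_000

voice_codec_bps = 12_200                   # 12.2 kbps speech codec
minutes_per_mb = MB_BITS / voice_codec_bps / 60
print(round(minutes_per_mb, 1))            # -> ~10.9 minutes, i.e., the ~11 minutes quoted

sms_bytes = 140
sms_per_mb = MB_BYTES / sms_bytes
print(round(sms_per_mb))                   # -> ~7,143 SMS per MB

print(round(0.30 / minutes_per_mb, 3))     # -> ~0.027 US$ per voice minute (back-calculated)
print(round(150 / sms_per_mb, 3))          # -> ~0.021 US$ per SMS (back-calculated)
```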

1 Mobile MB = 5 US$-cent Data MB < 30+ US$-cent Voice MB (6x mobile data) << 150 US$ SMS MB (3000x mobile data).

A Mega Byte of voice conversation is pretty unambiguous, in the sense of being about 11 minutes of a voice conversation (typically a dialogue, but it could be a monologue as well, e.g., a voice mail or an angry better half) at a 12.2 kbps speech codec. How many Mega Bytes a given voice conversation will translate into depends on the underlying speech coding & decoding (codec) information rate, which is typically 12.2 kbps or 5.9 kbps (for 3GPP cellular-based voice). In general we are not directly conscious of the rate (e.g., 12.2 kbps) at which our conversation is being coded and decoded, although we certainly are aware of the quality of the codec itself and of its ability to correct the errors that will occur between the two terminals. For the voice conversation itself, the parties engaged in the conversation pretty much determine its duration.

An SMS is pretty straightforward and well defined as well, being 140 Bytes (or characters). Again, the underlying delivery speed is less important, as for most purposes the sending & delivery of an SMS feels almost instantaneous (though the reply might not be).

All good … but what about a Mobile Data Byte? As a concept it could be anything or nothing. A Mega Byte of data is extremely ambiguous. Certainly, we get pretty upset if we perceive a mobile data connection to be slow. But the content, represented by the Byte, will obviously impact our perception of time and of whether we are getting what we believe we are paying for. We are no longer masters of time. The technology has taken over time.

Some examples: A Mega Byte of voice is 11 minutes of conversation (@ 12.2 kbps). A Mega Byte of text might take a second to download (@ 1 Mbps) but 8 hours to process (i.e., read). A Mega Byte of SMS might be delivered (individually & hopefully, for you and your sanity, spread out over time) almost instantaneously, yet would take almost 16 hours to read through (assuming English language and an average mature reader). A Mega Byte of graphic content (e.g., a picture) might take a second to download and milliseconds to process. Is a Mega Byte (MB) of streaming music that lasts for 11 seconds (@ 96 kbps) of similar value to a MB of voice conversation that lasts for 11 minutes, or to a MB millisecond picture (that took a second to download)?

In my opinion the answer should clearly be NO … Such (somewhat silly) comparisons serve to show the problem with pricing and valuing a Mega Byte. They also illustrate the danger of the ambiguity of mobile data and why an operator should try to avoid bundling everything under the banner of mobile data (or at the very least be smart about it … whatever that means).

I am being a bit naughty in above comparisons, as I am freely mixing up the time scales of delivering a Byte and the time scales of neurological processing that Byte (mea culpa).

price of a mb 

  • Figure Above: Logarithmic representation of the cost per Mega Byte of a given mobile service. 1 MB of voice roughly corresponds to 11 minutes at a 12.2 kbps voice codec; the average global monthly MoU usage is ca. 25+ times that. 1 MB of SMS corresponds to ca. 7,143 SMS, which is a lot (actually really a lot). In the USA, 7,143 SMS would roughly correspond to a full year’s consumption. However, in WEU 7,143 SMS would be ca. 6+ years of SMS consumption (on average), and almost 12 years of SMS consumption in the MEA region. Still, SMS remains disproportionately costly and is clearly an obvious service to be rapidly replaced by mobile data as it becomes readily available. Source: Pyramid Research.

The “Black” Art of Re-balancing … Making the Lightness more Bearable?

I recently had a discussion with a very good friend (from an emerging market) about how to recover lost mobile voice revenues in mobile data plans (i.e., the art of re-balancing or re-capturing). Could we do without voice plans? Should we go all-in on the data package? Obviously, if you charge 30+ US$-cents per Mega Byte of voice while you charge 5 US$-cents for mobile data, that might not go down well with your customers (or consumer interest groups). We all know that “window-dressing” and sleight-of-hand are important principles in presenting attractive pricing. So instead of a Mega Byte of voice we might charge per Kilo Byte (a lower numeric price), i.e., 0.029 US$-cents per kilo byte (note: 1 kilo byte is ca. 0.65 seconds @ a 12.2 kbps codec). But in general consumers are smarter than that. Probably the best option is to maintain a per-time-unit charge, or to blend the voice usage & pricing into the Mega Byte data price plan (and hope you have done your math right).

Example (a very simple one): Say you have a 500 MB mobile data price plan at 5 US$-cents per MB (i.e., 25 US$). You also have a 300-minute mobile voice plan at 2.7 US$-cents a minute (or 30 US$-cents per MB). Now 300 minutes corresponds roughly to 30 MB of voice usage and would be charged ca. 9 US$. Instead of having a Data & Voice plan, one might have only the Data plan, charging (500 MB x 5 US$-cents/MB + 30 MB x 30 US$-cents/MB) / 530 MB, or 6.4 US$-cents per MB (i.e., 1.4 US$-cents more to fold mobile voice into the data plan, or a ca. 30% surcharge for voice on the mobile data Bytes). Obviously, such a pricing strategy (while simple) does pose some price-strategic challenges and certainly does not per se completely safeguard against voice revenue erosion. Keeping mobile voice separate from mobile data (i.e., minutes vs. Mega Bytes) in my opinion remains the better strategy, although such a minutes-based strategy is easily disrupted by innovative VoIP applications and data-only entrepreneurs (as well as regulatory authorities).
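The blended price in that example works out as follows (a minimal sketch, reusing the numbers from the example above):

```python
# Blending a 300-minute voice bundle into a 500 MB data plan (numbers from the example above).
data_mb, data_cent_per_mb = 500, 5.0      # 500 MB at 5 US cents per MB -> 25 US$
voice_mb, voice_cent_per_mb = 30, 30.0    # ~300 minutes ~ 30 MB at 30 US cents per MB -> 9 US$

blended = (data_mb * data_cent_per_mb + voice_mb * voice_cent_per_mb) / (data_mb + voice_mb)

print(round(blended, 1))                               # -> 6.4 US cents per MB
print(round((blended / data_cent_per_mb - 1) * 100))   # -> ~28% surcharge for folding voice in
```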

Re-balancing (or re-capturing) the voice revenue in data-centric price plans is non-trivial and prone to trial-and-error. Nevertheless, it is clearly an important pricing strategy area to focus on in order to defend existing mobile voice revenues from evaporating or being devalued by association with the mobile data price plan.

Is voice-based communication for the masses (as opposed to SME, SOHO, B2B, niche demand, …) technologically uninteresting? As a techno-economist I would say far from it. From GSM to HSPA and towards LTE, we have observed a quantum leap, a factor of 10, in voice spectral efficiency (or capacity), a substantial boost in link budget (i.e., approximately 30% more geographical area can be covered with UMTS as opposed to GSM in an apples-for-apples configuration) and of course increased quality (i.e., high-definition or crystal-clear mobile voice). The figure below illustrates the progress in voice capacity as a function of mobile technology. The relative voice spectral efficiency data in the figure has been derived from one of the best (imo) textbooks on mobile voice, “Voice over LTE” by Miikka Poikselkä et al. (Wiley, 2012);

voice spectral capacity

  • Figure Above: Abbreviation guide: EFR: Enhanced Full Rate, AMR: Adaptive Multi-Rate, DFCA: Dynamic Frequency & Channel Allocation, IC: Interference Cancellation. What might not always be appreciated is the possibility of defining Voice over HSPA, similar to Voice over LTE. Source: “Voice over LTE” by Miikka Poikselkä et al. (Wiley, 2012).

If you do a Google search on Mobile Voice you get ca. 500 Million results (note: Voice over IP only yields 100+ Million results). Try that on Mobile Data and “sham bam thank you mam” you get 2+ Billion results (and projected to increase further). Most of us working in the Telecom industry spend very little time on voice issues and an over-proportionate amount of time on broadband data. When you tell your Marketing department that a state-of-the-art 3G network can carry at least twice as much voice traffic as state-of-the-art GSM (and cover over 30% more area), they don’t really seem to get terribly excited. Voice is un-sexy!? An afterthought!? … (don’t even go brave and tell Marketing about Voice over LTE, aka VoLTE).

Is Mobile Voice Dead or at the very least Dying?

Is Voice un-interesting, something to be taken for granted?

Is Voice “just” data and should be regarded as an add-on to Mobile Data Services and Propositions?

From a mobile revenue perspective, mobile voice is certainly not something to be taken for granted or treated as just an afterthought. In 2013, mobile voice still accounted for 60+% of total global mobile turnover, with non-voice services making up the remaining ca. 40%, of which SMS accounted for ca. 10% of the total. There is a lot of evidence that SMS is dying out quickly with the emergence of smartphones and Messaging-over-IP-based mobile applications (SMS – Assimilation is inevitable, Resistance is Futile!). Not particularly surprising given the pricing of SMS and the many very attractive IP-based alternatives. So is there similar evidence of mobile voice dying?

NO! NIET! NEM! MA HO BU! NEJ! (not any time soon at least)

Let’s see what the data have to say about mobile voice.

In the following I only provide a regional view, but should there be interest, I have very detailed deep dives for most major countries in the various regions. In general there are bigger variations around the regional averages in the Middle East & Africa (i.e., MEA) and Asia Pacific (i.e., APAC) regions, as they contain a larger mix of mature and emerging markets with fairly large differences in mobile penetration rates and mobile data adoption in general. Western Europe, Central Eastern Europe, North America (i.e., USA & Canada) and Latin America are more uniform in the conclusions that can reasonably be inferred from their averages.

As shown in the Figure below, from 2009 to 2013, the total amount of mobile minutes generated globally increased by 50+%. Most of that increase came from emerging markets as a larger share of the population (in terms of individual subscribers rather than subscriptions) adopted mobile telephony. In absolute terms, the global mobile voice revenues did show evidence of stagnation and a trend towards decline.

mobile revenues & mou growth 

  • Figure Above: Illustrates the development & composition of historical Global Mobile Revenues over the period 2009 to 2013. In addition it shows the total estimated growth of mobile voice minutes (i.e., Red Solid Curve showing MoUs in units of Trillions) over the period. Sources: Pyramid Research & Statista. It should be noted that the actual numbers (over the period) from the various data sources do not completely match. I have observed differences between sources of up to 15% in actual global values. While interesting, this difference does not alter the analysis & conclusions presented here.

If all voice minutes were charged at the current Rate of Mobile Data, approximately Half-a-Billion US$ would evaporate from the Global Mobile Revenues.

So while mobile voice revenue might not be a positive growth story, it is still “sort-of” important to the mobile industry business.

Most countries in Western & Central Eastern Europe, as well as mature markets in the Middle East and Asia Pacific, show mobile voice revenue decline (in absolute terms and in their local currencies). In Latin America, Africa and the Emerging Mobile Data Markets in Asia-Pacific, almost all markets exhibit positive mobile voice revenue growth (although most have decelerating growth rates).

voice rev & mous

  • Figure Above: Illustrates the annual growth rates (compounded) of total mobile voice revenues and the corresponding growth in mobile voice traffic (i.e., associated with the revenues). Some care should be taken as for each region US$ has been used as a common currency. In general each individual country within a region has been analysed based on its own local currency in order to avoid mixing up currency exchange effects. Source: Pyramid Research.

Of course revenue growth of the voice service will depend on (1) the growth of the subscriber base, (2) the growth of the usage itself (i.e., minutes of voice use per subscriber, which is likely influenced by the unit price), and (3) the development of the average voice revenue per subscriber (or user), i.e., the unit price of the voice service. Whether Revenue growth turns out positive or negative pretty much depends on the competitive environment, the regulatory environment and how smart the business is in developing its pricing strategy and managing customer acquisition & churn dynamics.
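In formula form, the decomposition behind these three components is simply (with N the number of subscribers, MoU the minutes of use per subscriber and p the effective price per minute):

R_{voice} = N \times MoU \times p \quad \Rightarrow \quad \frac{\Delta R}{R} \approx \frac{\Delta N}{N} + \frac{\Delta MoU}{MoU} + \frac{\Delta p}{p}

so a market can keep adding subscribers and minutes and still lose voice revenue if the per-minute price falls faster than usage grows.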

Growth of (unique) mobile customers obviously depends on the level of penetration, network coverage & customer affordability. Growth in highly penetrated markets is in general (much) lower than growth in less mature markets.

subs & mou growth

  • Figure Above: Illustrates the annual growth rates (compounded) of unique subscribers added to a given market (or region). To further illustrate the possible relationship between increased subscribers and increased total generated mobile minutes, the previously shown total minutes annual growth is included as well. Source: Pyramid Research.

Interestingly, particularly for the North America Region (NA), we see an increase in unique subscribers of 11% per annum but hardly any growth in total voice minutes over the period. Firstly, note that the US Market dominates the average of the North America Region (i.e., USA and Canada), having approx. 13 times more subscribers. One of the reasons for this no-minutes-growth effect is that the US market saw a substantial increase in the prepaid ratio (i.e., from ca. 19% in 2009 to 28% in 2013). Not only were new (unique) prepaid customers being added; a fairly large postpaid-to-prepaid migration also took place over the period. In the USA the minute usage of a prepaid customer is ca. 35+% lower than that of a postpaid customer (globally, prepaid minute usage is 2.2+ times lower than postpaid usage). In the NA Region (and of course likewise in the USA Market) we observe reduced voice usage over the period for both the postpaid & prepaid segments (based on unique subscribers). Thus an increased prepaid blend in the overall mobile base, with its relatively lower voice usage, combined with a general decline in voice usage, leads to pretty much zero growth in voice usage in the NA Market. Although the NA Region is dominated by USA growth (ca. 0.1% CAGR total voice growth), Canada likewise showed only minor growth in its overall voice usage (ca. 3.8% CAGR). Both Canada & the USA reduced their minute pricing over the period.

  • Note on US Voice Usage & Revenues: note that in both the US and Canada the receiving party also pays (RPP) for receiving a voice call. Thus revenue-generating minutes arise from both outgoing and incoming minutes. This is different from most other markets, where the Calling Party Pays (CPP) and only originating minutes are counted in the revenue generation. For example, in the USA the Minutes of Use per blended customer was ca. 620 MoU in 2013. To make that number comparable with, say, Europe’s 180 MoU, one would need to halve the US figure to 310 MoU, still a lot higher than the Western European blended minutes of use. The US bundles are huge (in terms of allowed minutes) and so are the charges outside bundles (i.e., forcing the consumer into the next one), while the fixed fees tend to be high to very high (in comparison with other mobile markets). The traditional US voice plan would offer unlimited on-net usage (i.e., both calling & receiving party subscribing to the same mobile network operator) as well as unlimited off-peak usage (i.e., evening/night/weekends). It should be noted that many new US-based mobile price plans offer data bundles with unlimited voice (i.e., data-centric price plans). In 2013 approximately 60% of the US mobile industry’s turnover could be attributed to mobile voice usage. This number is likely somewhat higher, as some data tariffs have voice usage (e.g., typically unlimited) embedded. In particular, the US mobile voice business model will depend on customer migration to prepaid or lower-cost bundles as well as on how well the voice usage is being re-balanced (and re-captured) in the data-centric price plans.

The second main component of the voice revenue is the unit price of a voice minute. Apart from the NA Region, all markets show substantial reductions in the unit price of a minute.

mou & minute price growth

  • Figure Above: Illustrating the annual growth (compounded) of the per-minute price in US$-cents as well as the corresponding growth in total voice minutes. The regions most affected by declining growth are Western Europe & Central Eastern Europe, although other, more emerging markets also show decelerating voice revenue growth. Source: Pyramid Research.

Clearly, from the above it appears that voice “elasticity” has broken down in most mature markets, with diminishing (or no) returns on further minute price reductions. Another way of looking at the loss (or lack) of voice elasticity is to look at the unit-price development of a voice minute versus the growth of the total voice revenues;

elasticity

  • Figure Above: Illustrates the growth of Total Voice Revenue and the unit-price development of a mobile voice minute. Apart from the Latin America (LA) and Asia Pacific (APAC) markets, there is clearly not much further point in reducing the price of voice. Obviously, there are other sources & causes, beyond the pure gain of elasticity, affecting the price development of a mobile voice minute (i.e., regulation, competition, reduced demand/voice substitution, etc.). Note US$ has been used as the unifying currency across the various markets. Despite currency effects the trend is consistent across the markets shown above. Source: Pyramid Research.

While Western & Central-Eastern Europe (WEU & CEE) as well as the mature markets in the Middle East and Asia-Pacific show little economic gain in lowering the voice price, in the more emerging markets (LA and Africa) there are still net voice revenue gains to be made by lowering the unit price of a minute (although the gains are diminishing rapidly). Even there, most of the voice growth comes from adding new customers rather than from growth in the demand per customer itself.
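To make the elasticity argument concrete, here is a minimal sketch (in Python) of how the arc elasticity of voice demand and the resulting revenue effect could be estimated from growth rates of the kind shown in the charts above; the input growth rates are illustrative placeholders, not the Pyramid Research figures.

```python
# Minimal sketch: arc price elasticity of voice demand and the revenue effect
# of a price cut. The growth-rate inputs below are illustrative placeholders.

def arc_elasticity(volume_growth: float, price_growth: float) -> float:
    """Ratio of %-change in demanded minutes to %-change in minute price."""
    return volume_growth / price_growth

def revenue_growth(volume_growth: float, price_growth: float) -> float:
    """Revenue changes with the product of volume and unit-price changes."""
    return (1 + volume_growth) * (1 + price_growth) - 1

# Example: a mature market cutting the minute price by 10% and gaining 3% minutes,
# versus an emerging market cutting the price by 10% and gaining 15% minutes.
scenarios = {"mature": (0.03, -0.10), "emerging": (0.15, -0.10)}

for name, (dv, dp) in scenarios.items():
    print(f"{name}: elasticity {arc_elasticity(dv, dp):.2f}, "
          f"voice revenue growth {revenue_growth(dv, dp):+.1%}")
# An elasticity weaker than -1 (e.g. -0.3) means price cuts destroy voice revenue;
# stronger than -1 (e.g. -1.5) means cuts can still grow voice revenue.
```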

voice growth & uptake

  • Figure Above: Illustrating possible drivers for mobile voice growth (positive as well as negative), such as Mobile Data Penetration 2013 (expected negative growth impact), the increased number of (unique) subscribers compared to 2009 (expected positive growth impact) and changes in the prepaid-postpaid blend (a negative percentage means postpaid increased its proportion, while a positive percentage translates into a higher proportion of prepaid compared to 2009). Voice tariff changes have been observed to have elastic effects on usage as well, although the impact changes from market to market depending on maturity. Source: derived from Pyramid Research.

With all the talk about Mobile Data, it might come as a surprise that Voice Usage is actually growing across all regions with the exception of North America. The growth in Mobile Voice Minutes largely comes from

  1. Adding new unique subscribers (i.e., increasing mobile penetration rates).
  2. Transitioning existing subscribers from prepaid to postpaid subscriptions (i.e., postpaid tends to have (a lot) higher voice usage compared to prepaid).
  3. A general increase in usage per individual subscriber (i.e., observed in only a few markets, despite the general decline in the unit price of a voice minute).

To the last point (#3), it should be noted that the general trend across almost all markets is that Minutes of Use per unique customer are stagnating or even in decline, despite substantial per-unit price reductions of a consumed minute. In some markets that trend is somewhat compensated by an increase in postpaid penetration rates (i.e., postpaid subscribers tend to consume more voice minutes). The reduction of MoUs per individual subscriber is more significant than a subscription-based analysis would let on.

Clearly, Mobile Voice Usage is far from Dead

and

Mobile Voice Revenue is a very important part of the overall mobile revenue composition.

It might make very good sense to spend a bit more time on strategizing voice than appears to be the case today. If mobile voice remains just an afterthought of mobile data, the Telecom industry will lose massive amounts of Revenue and, last but not least, Profitability.

 

Post Script: What drives the voice minute growth?

An interesting exercise is to take all the data and run some statistical analysis on it to see what comes out in terms of the main drivers for voice minute growth, positive as well as negative. The data available to me comprise 77 countries from WEU (16), CEE (8), APAC (15), MEA (17), NA (Canada & USA) and LA (19). I am furthermore working with 18 different growth parameters (e.g., mobile penetration, prepaid share of base, data adoption, data penetration at the beginning of the period, minutes of use, voice ARPU, voice minute price, total minute volume, customers, total revenue growth, SMS, SMS price, pricing & ARPU relative to nominal GDP, etc.) and 7 dummy parameters (populated with noise and unrelated data).

Two specific voice minute growth models emerge out of a comprehensive analysis of the above-described data. The first model is as follows:

(1) Voice Growth correlates positively with Mobile Penetration (of unique customers), in the sense that higher penetration results in more minutes. It correlates negatively with Mobile Data Penetration at the beginning of the period (i.e., 2009 uptake of 3G, LTE and beyond), in the sense that higher mobile data uptake at the beginning of the period leads to a reduction in Voice Growth. Finally, Voice Growth correlates negatively with the Price of a Voice Minute, in the sense that higher prices lead to lower growth and lower prices lead to higher growth. This model is statistically fairly robust (e.g., p-values < 0.0001), with all parameters having statistically meaningful confidence intervals (i.e., the upper & lower 95% confidence limits having the same sign).

The Global Analysis does pinpoint very rational drivers for mobile voice usage growth, i.e., mobile penetration growth, mobile data uptake and the price of a voice minute are important drivers for total voice usage.

It should be noted that changes in the prepaid proportion do not appear to have a statistically significant impact on voice minute growth.

The second model provides a marginally better overall fit to the Global Data but yields slightly worse p-values for the individual descriptive parameters.

(2) The second model simply adds the Voice ARPU to (nominal) GDP ratio to the first model. This yields a negative correlation, in the sense that a low ratio results in higher voice usage growth and a higher ratio in lower voice usage growth.
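For the statistically inclined, a minimal sketch of how such a regression could be set up is shown below; the DataFrame, its column names and the synthetic values are hypothetical placeholders for the 77-country dataset, and the specification assumes an ordinary least squares set-up with p-values, 95% confidence intervals and a noise dummy as a sanity check, in the spirit of the analysis described above rather than the exact model.

```python
# Sketch of the regression set-up described above: voice minute growth (CAGR)
# regressed on mobile penetration, mobile data penetration at the start of the
# period and the voice minute price, plus a noise "dummy" regressor as a sanity
# check. The DataFrame `df` and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "voice_minute_cagr":     rng.normal(0.05, 0.05, 77),
    "mobile_penetration":    rng.uniform(0.5, 1.5, 77),
    "data_penetration_2009": rng.uniform(0.0, 0.6, 77),
    "minute_price_usd":      rng.uniform(0.01, 0.15, 77),
})  # replace with the real 77-country dataset

X = df[["mobile_penetration", "data_penetration_2009", "minute_price_usd"]].copy()
X["noise_dummy"] = rng.normal(size=len(df))   # unrelated regressor; should come out insignificant
X = sm.add_constant(X)

model = sm.OLS(df["voice_minute_cagr"], X).fit()
print(model.summary())               # p-values per parameter
print(model.conf_int(alpha=0.05))    # a robust driver has both 95% bounds with the same sign
```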

Both models describe the trends (or voice growth dynamics) reasonably well, although less convincingly for Western & Central Eastern Europe and other more mature markets, where the model tends to overshoot the actual data. One of the reasons for this is that the initial attempt was to describe the global voice growth behaviour across very diverse markets.

mou growth actual vs model

  • Figure Above: Illustrates the compound annual growth rate of total annual generated voice minutes (between 2009 and 2013) for 77 markets across 6 major regions (i.e., WEU, CEE, APAC, MEA, NA and LA). Model 1 is an attempt to describe the Global growth trend across all 77 markets within the same model. The Global Model is not great for Western Europe and parts of the CEE, although it tends to describe the trends between the markets reasonably well.

w&cee growth

  • Figure Western & Central Eastern Europe Region: the above illustrates the compound annual growth rate (2009 – 2013) of total generated voice minutes and the corresponding voice revenues. For Western & Central Eastern Europe, while the generated minutes have increased, the voice revenue has consistently declined. The average CAGR of new unique customers over the period was 1.2%, with the maximum being a little less than 4%.

apac growth

  • Figure Asia Pacific Region: the above illustrates the compound annual growth rate (2009 – 2013) of total generated voice minutes and the corresponding voice revenues. For the emerging markets in the region there is still positive growth in both minutes generated and voice revenue. In most of the mature markets, voice revenue growth is negative, as has been observed for mature Western & Central Eastern Europe.

mea growth

  • Figure Middle East & Africa Region: the above illustrates the compound annual growth rate (2009 – 2013) of total generated voice minutes and the corresponding voice revenues. For the emerging markets in the region there is still positive growth in both minutes generated and voice revenue. In most of the mature markets, voice revenue growth is negative, as has been observed for mature Western & Central Eastern Europe.

    na&la growth

  • Figure North & Latin America Region: the above illustrates the compound annual growth rate (2009 – 2013) of total generated voice minutes and the corresponding voice revenues. For the emerging markets in the region there is still positive growth in both minutes generated and voice revenue. In most of the mature markets, voice revenue growth is negative, as has been observed for mature Western & Central Eastern Europe.

    PS.PS. Voice Tariff Structure

  • Typically the structure of a mobile voice tariff (or how the customer is billed) is as follows

    • Fixed charge / fee

      • This fixed charge can be regarded as an access charge and usually is associated with a given usage limit (i.e., $ X for Y units of usage) or bundle structure.
    • Variable per unit usage charge

      • On-net – call originating and terminating within same network.
      • Off-net – Domestic Mobile.
      • Off-net – Domestic Fixed.
      • Off-net – International.
      • Local vs Long-distance.
      • Peak vs Off-peak rates (e.g., off-peak typically evening/night/weekend).
      • Roaming rates (i.e., when customer usage occurs in foreign network).
      • Special number tariffs (i.e., calls to paid-service numbers).

    How fixed vis-à-vis variable charges are implemented will depend on the particulars of a given market, but in general will depend on service penetration and local vs long-distance charging.
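As a toy illustration of the fixed-plus-variable structure listed above, here is a minimal sketch; all rates, the bundle size and the usage profile are made-up values, not any operator's actual tariff.

```python
# Sketch of the voice tariff structure above: a fixed access fee with a bundled
# allowance plus variable per-minute rates per traffic type. All rates, the
# bundle size and the usage profile are made-up illustrative values.
FIXED_FEE = 20.0          # monthly access charge, includes the bundled minutes
BUNDLED_MINUTES = 200
RATES = {                  # out-of-bundle price per minute by call type
    "on_net": 0.05,
    "off_net_mobile": 0.15,
    "off_net_fixed": 0.10,
    "international": 0.60,
    "roaming": 0.90,
}

def monthly_voice_bill(usage: dict[str, float]) -> float:
    """Fixed fee plus out-of-bundle usage; bundled minutes consumed in listed order."""
    bill, allowance = FIXED_FEE, BUNDLED_MINUTES
    for call_type, minutes in usage.items():
        in_bundle = min(minutes, allowance)
        allowance -= in_bundle
        bill += (minutes - in_bundle) * RATES[call_type]
    return bill

print(monthly_voice_bill({"on_net": 150, "off_net_mobile": 80, "international": 10}))
```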

  • Acknowledgement

    I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this Blog. I certainly have not always been very present during the analysis and writing. Also many thanks to Shivendra Nautiyal and others for discussing and challenging the importance of mobile voice versus mobile data and how practically to mitigate VoIP cannibalization of the Classical Mobile Voice.

  • Profitability of the Mobile Business Model … The Rise! & Inevitable Fall?

    A Mature & Emerging Market Profitability Analysis … From Past, through Present & to the Future.

    • I dedicate this Blog to David Haszeldine, who has been (and will remain) a true partner when it comes to discussing, thinking about and challenging cost structures, corporate excesses and optimizing Telco profitability.
    • Opex growth & declining revenue growth is the biggest exposure to margin decline & profitability risk for emerging growth markets as well as mature mobile markets.
    • 48 Major Mobile Markets’ Revenue & Opex Growth has been analyzed over the period 2007 to 2013 (for some countries from 2003 to 2013). The results are provided in an easy-to-compare overview chart.
    • For 23 out of the 48 Mobile Markets, Opex has grown faster than Revenue and poses a substantial risk to Telco profitability in the near & long term unless Opex is better managed and controlled.
    • Mobile Profitability Risk is a substantial Emerging Growth Market Problem where cost has grown much faster than the corresponding Revenues.
    • 11 Major Emerging Growth Markets have had an Opex compounded annual growth rate between 2007 and 2013 that was higher than the Revenue Growth, substantially squeezing margins and straining EBITDA.
    • On average the compounded annual growth rate of Opex grew 2.2% faster than the corresponding Revenue over the period 2007 to 2013. Between 2012 and 2013, Opex grew (on average) 3.7% faster than Revenue.
    • A Market Profit Sustainability Risk Index (based on Bayesian inference) is proposed as a way to provide an overview of mobile markets profitability directions based on their Revenue and Opex growth rates.
    • Statistical Analysis on the available data shows that a Mobile Market’s Opex level is driven by (1) Population, (2) Customers, (3) Penetration and (4) ARPU. GDP & Surface Area have only a minor and indirect influence on the various markets’ Opex levels.
    • A profitability framework for understanding individual operators’ profit dynamics is proposed.
    • It is shown that Profitability can be written as \Delta = \delta - \frac{o_f}{r_u}\frac{1}{\sigma}, with \Delta being the margin, \delta = 1 - \frac{o_u}{r_u}, with o_u and r_u being the user-dependent OpEx and Revenue (i.e., AOPU and ARPU), o_f the fixed OpEx divided by the Total Subscriber Market, and \sigma the subscriber market share (a quick numerical sketch of this formula is given right after this list).
    • The proposed operator profitability framework provides a high degree of descriptive power and understanding of individual operators’ margin dynamics as a function of subscriber market share as well as other important economic drivers.
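As a quick illustration of the profitability formula in the bullet list above, here is a minimal sketch; all input values are illustrative, and ARPU, AOPU and fixed OpEx are assumed to be on the same (monthly) basis.

```python
# Minimal sketch of the margin framework from the bullet list above:
# margin = delta - (o_f / r_u) * (1 / sigma), with delta = 1 - o_u / r_u.
# All input numbers are illustrative, not taken from any specific operator.

def operator_margin(arpu: float, aopu: float, fixed_opex: float,
                    total_market_subs: float, market_share: float) -> float:
    """EBITDA margin as a function of per-user economics and subscriber market share."""
    delta = 1 - aopu / arpu                  # margin before fixed cost
    o_f = fixed_opex / total_market_subs     # fixed OpEx per subscriber in the total market
    return delta - (o_f / arpu) / market_share

# Same per-user economics, different market shares: scale matters.
for share in (0.10, 0.25, 0.40):
    m = operator_margin(arpu=20.0, aopu=8.0, fixed_opex=50e6,
                        total_market_subs=50e6, market_share=share)
    print(f"market share {share:.0%}: margin {m:.0%}")
```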

    I have long & frequently been pondering over the mobile industry’s profitability. In particular, I have spent a lot of my time researching the structure & dynamics of profitability and mapping out the factors that contribute in both negative & positive ways. My interest is in the underlying cost structures and business models that drive profitability in both good and bad ways. I have met Executives who felt a similar passion for strategizing, optimizing and managing their company’s Telco cost structures, and thereby profit, and I have also met Executives who mainly cared about the Revenue.

    Obviously, both Revenue and Cost are important to optimize. This said, it is wise to keep in mind the following Cost-structure & Revenue Heuristics;

    • Cost is an almost Certainty once made & Revenues are by nature Uncertain.
    • Cost left Unmanaged will by default Increase over time.
    • Revenue is more likely to Decrease over time than increase.
    • Majority of Cost exist on a different & longer time-scale than Revenue.

    In the following I will use EBITDA, which stands for Earnings Before Interest, Taxes, Depreciation and Amortization, as a measure of profitability, and the EBITDA to Revenue Ratio as a measure of my profit margin, or just margin. It should be clear that EBITDA is a proxy for profitability and as such has shortfalls in specific Accounting and P&L scenarios. Also, according to GAAP (Generally Accepted Accounting Principles) and under IFRS (International Financial Reporting Standards), EBITDA is not a standardized accepted accounting measure. Nevertheless, both EBITDA and EBITDA Margin are widely accepted and used in the mobile industry as proxies for operational performance and profitability. I am going to assume that for most purposes & examples discussed in this Blog, EBITDA & the corresponding Margin remain sufficiently good measures of profitability.

    While I am touching upon mobile revenues as an issue for profitability, I am not going to provide many thoughts on how to boost revenues or add new incremental revenues that might compensate for the loss of mobile legacy service revenues (i.e., voice, messaging and access). My revenue focus addresses revenue growth on a more generalized level compared to the mobile cost incurred operating such services in particular and a mobile business in general. For an in-depth and beautiful treatment of mobile revenues past, present and future, I would like to refer to Chetan Sharma’s 2012 paper “Operator’s Dilemma (and Opportunity): The 4th Wave” (note: you can download the paper by following the link in the html article) on mobile revenue dynamics from (1) Voice (1st Revenue or Service Wave) and (2) Messaging (2nd Revenue or Service Wave) to today’s (3) Access (3rd Revenue Wave) and the commencement of what Chetan Sharma defines as the 4th Wave of Revenues (note: think of waves as S-curves describing an initial growth spurt, a slow-down phase, stagnation and eventually decline), which really describes a collection of revenue or service waves (i.e., S-curves) representing a portfolio of Digital Services, such as (a) Connected Home, (b) Connected Car, (c) Health, (d) Payment, (e) Commerce, (f) Advertising, (g) Cloud Services, (h) Enterprise solutions, (i) Identity, Profile & Analysis, etc. I feel confident that any Digital Service enabled by Internet-of-Things (IoT) and M2M would be an important inclusion in the Digital Services Wave. Given the competition (i.e., Facebook, Google, Amazon, Ebay, etc.) that mobile operators will face entering the 4th Wave of Digital Services, in combination with having only national or limited international scale, this area will be a tough challenge to return direct profit on. The inherent limited international or national-only scale appears to be one of the biggest barriers to turning many of the proposed Digital Services, particularly those with strong Social Media touch points, into meaningful business opportunities for mobile operators.

    This said, I do believe (strongly) that Telecom Operators have very good opportunities for winning Digital Services battles in areas where their physical infrastructure (including Spectrum & IT Architecture) is an asset and essential for delivering secure, private and reliable services. Local regulation and privacy laws may indeed turn out to be a blessing for Telecom Operators and other national-oriented businesses. The current privacy trend and general consumer suspicion of American-based Global Digital Services / Social Media Enterprises may create new revenue opportunities for national-focused mobile operators as well as for other national-oriented digital businesses. In particular, if Telco Operators work together creating Digital Services that work across operators’ networks, platforms and beyond (e.g., payment, health, private search, …) rather than walled-garden digital services, they might become very credible alternatives to multinational offerings. It is highly likely that consumers would be more willing to trust national mobile operator entities with their personal data & money (in fact they already do in many areas) than a multinational social-media corporation. In addition to the above Digital Services, I do expect that Mobile/Telecom Operators and Entertainment Networks (e.g., satellite, cable, IP-based) will increasingly firm up partnerships as well as acquire & merge their businesses & business models. In effect this is already happening.

    For emerging growth markets without extensive and reliable fixed broadband infrastructures, high-quality (& likely higher-cost compared to today’s networks!) mobile broadband infrastructure will be essential to drive additional Digital Services and the respective revenues, as well as new entertainment business models (other than existing Satellite TV). Anyway, Chetan captures these Digital Services (or 4th Wave) revenue streams very nicely and I very much recommend reading his articles in general (i.e., including “Mobile 4th Wave: The Evolution of the Next Trillion Dollars”, which is the 2nd “4th Wave” article).

    Back to mobile profitability and how to ensure that the mobile business model doesn’t break down as revenue growth starts to slow down and decline while the growth of mobile cost overtakes the revenue growth.

    A good friend of mine, who also is a great and successful CFO, stated that “Profitability is rarely a problem to achieve (in the short term); I turn down my market invest (i.e., OpEx) and my Profitability (as measured in terms of EBITDA) goes up. All I have done is make my business profitable in the short term, without having created any sustainable value or profit. I have just engineered my bonus.”

    Our aim must be to ensure sustainable and stable profitability. This can only be done by understanding, carefully managing and engineering our basic Telco cost structures.

    While most Telcos tend to plan several years ahead for Capital Expenditures (CapEx), often with a high degree of sophistication, the same Telcos mainly focus on one (1!) year ahead for OpEx. The effort channeled into OpEx is frequently highly simplistic and at times inconsistent with the planned CapEx. Obviously, in the growth phase of the business cycle one may take the easy way out on OpEx and focus more on the CapEx required to grow the business. However, as committed OpEx “lives” on a much longer time-scale than Revenue (particularly Prepaid Revenue, or even CapEx for that matter), any shortfall in Revenue and Profitability will be much more difficult to mitigate by OpEx measures, which take time to become effective. In markets with little or no market investment the penalty can be even harsher, as there is no or little OpEx cushion that can be used to soften a disappointing direction in profitability.

    How come a telecom business in Asia, or in other emerging growth markets around the world, can maintain, by European standards, such incredibly high EBITDA Margins? Margins that run into the 50s or even higher. Is this “just” a matter of different, lower-cost & low-GDP economies? Do the higher margins simply reflect a different stage in the business cycle (i.e., growth versus super-saturation)? Should Mature Markets really care too much about Emerging Growth Markets, in the sense of whether Mature Markets can learn anything from Emerging Growth Markets, and maybe even vice versa? (i.e., certainly mature markets have made many mistakes, particularly when shifting gears from growth to what should be sustainability).

    Before all those questions have much of a meaning, it might be instructive to look at the differences between a Mature Market and an Emerging Growth Market. I obviously would not have started this Blog unless I believed that there are important lessons to be had from understanding what is going on in both types of markets. I should also make it clear that I am only using the term Emerging Growth Markets because most of the markets I study are typically defined as such by economists and consultants. However, from a mobile technology perspective few of those markets we tend to call Emerging Growth Markets can really be called emerging any longer, and growth has slowed down a lot in most of them. This said, from a mobile broadband perspective most of the markets defined in this analysis as Emerging Growth Markets are pretty much dead-on that definition.

    Whether the emerging markets really should be looking forward to mobile broadband data growth might depend a lot on whether you are the consumer or the provider of services.

    For most Mature Markets the introduction of 3G and mobile broadband data heralded a massive slow-down, and in some cases even a decline, in revenue. This imposed severe strains on Mobile Margins and EBITDAs. Today most mature-market mobile operators are facing negative revenue growth rates and are “forced” to keep a continuous razor focus on OpEx, mitigating the revenue decline and keeping Margin and EBITDA reasonably in check.

    Emerging Markets should as early as possible focus on their operational expenses and Optimize with a Vengeance.

    Well, well, let’s get back to the comparison and see what we can learn!

    It doesn’t take too long to make a list of some of the key, and maybe at times obvious, differentiators (not intended to be exhaustive) between Mature and Emerging Markets;

    mature vs growth markets

    • Side Note: it should be clear that by today many of the markets we used to call emerging growth markets are, from a mobile telephony penetration & business development perspective, certainly not emerging any longer, nor growing as they were 5 or 10 years ago. This said, from a 3G/4G mobile broadband data penetration perspective it might still be fair to characterize those markets as emerging and growing. Though, as mature markets have seen, that journey is not per se a financial growth story.

    Looking at the above table we can assess the following. Firstly: the straightforward (and possibly naïve) explanation of the relative profitability differences between Mature and Emerging Markets might be that emerging market cost structures are much more favorable than what we find in mature market economies; basically the difference between Low and High GDP economies. However, we should not allow ourselves to be too naïve here, as the lesson learned from low-GDP economies is that some cost-structure elements (e.g., real estate, fuel, electricity, etc.) are as costly (sometimes more so) as what we find back in mature, higher-GDP markets. Secondly: many emerging growth market economies are substantially more populous & dense than what we find in mature markets (although it is hard to beat the Netherlands or the Ruhr Area in Germany). Maybe the higher population count & population density leads to better scale than can be achieved in mature markets. However, while this may be true for the urban population, emerging markets tend to have a substantially higher ratio of their population living in rural areas compared to what we find in mature markets. Thirdly: maybe the go-to-market approach in emerging markets is different from mature markets (e.g., subsidies, quality including network coverage, marketing, …), offering substantially lower mobile quality overall compared to what is the practice in mature markets. Providing poor mobile network quality has certainly been a recurring theme in the Philippine mobile industry, despite the Telco Industry in the Philippines enjoying Margins that most mature-market operators can only dream of. It is pretty clear that for 3G-UMTS-based mobile broadband, 900 MHz does not have sufficient bandwidth to support the anticipated mobile broadband uptake in emerging markets (particularly as 900 MHz is occupied by 2G-GSM as well). IF emerging-market mobile operators want to offer mobile data at reasonable quality levels (and the IF is intentional) and sustain the anticipated customer demand and growth, they are likely to require network densification (i.e., extra CapEx and OpEx) at 2100 MHz. Alternatively, they might choose to wait for APT 700 MHz and drive an affordable low-cost LTE device ecosystem, albeit this is some years ahead.

    More than likely, some of the answers to why emerging markets have much better margins (at the moment at least) will have to do with cost-structure differences, combined with possibly better scale and different go-to-market requirements, more than compensating for the low revenue per user.

    Let us have a look at the usual suspects behind the differences between mature & emerging markets. EBITDA can be derived as Revenue minus the Operational Expenses (i.e., OpEx), and the corresponding margin is EBITDA divided by the Revenue (ignoring special accounting effects here);

    EBITDA (E) = Revenue (R) – OpEx (O) and Margin (M) = EBITDA / Revenue.

    The EBITDA & Margin tell us in absolute and relative terms how much of our Revenue we keep after all our Operational expenses (i.e., OpEx) have been paid (i.e., besides tax, interest, depreciation & amortization charges).

    We can write Revenue as the product of ARPU (Average Revenue Per User) times the Number of Users N, and thus the EBITDA can also be written as;

    E = R - O = ARPU \times N_{users} - O. We see that even if ARPU is low (or very low), an Emerging Market with a lot of users might match the Revenue of a Mature Market with higher ARPU and worse population scale (i.e., a lower number of users). Pretty simple!

    But what about the Margin? M = \frac{R - O}{R} = 1 - \frac{O}{R}; in order for an Emerging Market to have a substantially better Margin than a corresponding Mature Market at the same revenue level, it is clear that the Emerging Market's OpEx (O) needs to be lower than that of the Mature Market. We also observe that if the Emerging Market's Revenue is lower than the Mature Market's, the corresponding OpEx needs to be even lower than if the Revenues were identical. One would expect that lower-GDP countries have lower OpEx (or Cost in general), which combined with better population scale is really what makes for great emerging market mobile Margins! … Or is it?
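Spelling out that last observation: for the Emerging Market (E) to match or beat the Mature Market (M) on margin,

M_E \ge M_M \;\Leftrightarrow\; \frac{O_E}{R_E} \le \frac{O_M}{R_M} \;\Leftrightarrow\; O_E \le O_M \times \frac{R_E}{R_M}

i.e., the OpEx has to come down at least in proportion to the revenue gap.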

    A Small but essential de-tour into Cost Structure.

    Some of the answers to the differences in margin between mature and emerging markets obviously lie in the OpEx part, or in the cost-structure differences. Let's take a look at a mature market's cost structure (i.e., as you will find in Western & Eastern Europe), which pretty much looks like this;

    mature market cost structure

    With the following OpEx or cost-structure elements;

    • Usage-related OpEx: typically takes up between 10% and 35% of the total OpEx, with an average of ca. 25%. On average this OpEx contribution is approximately 17% of the revenue in mature European markets. Trend-wise it is declining. Usage-based OpEx is dominated by interconnect & roaming voice traffic and to a lesser degree by data interconnect and peering. In a scenario where there is little circuit-switched voice left (i.e., the ultimate LTE scenario) this cost element will diminish substantially from the operator's cost structure. It should be noted that this is also to some extent influenced by regulatory forces.
    • Market Invest: can be decomposed into Subscriber Acquisition Cost (SAC), i.e., "bribing" customers to leave your competitor for you, Subscriber Retention Cost (SRC), i.e., "bribing" your existing (valuable) customers not to be "bribed" by a competitor and leave you (i.e., churn), and lastly Other Marketing spend for advertisement, promotions and so forth. This cost-structure element's contribution to OpEx can vary greatly depending on the market composition. In Europe's mature markets it varies from 10% to 31% with a mean value of ca. 23% of the total OpEx. On average it will be around 14% of the Revenue. It should be noted that as mobile penetration increases and enters heavy saturation (i.e., >100%), SAC tends to reduce and SRC to increase. Further, in markets that are very prepaid-heavy, SAC and SRC will naturally be fairly minor cost-structure elements (i.e., 10% of OpEx or lower and only a couple of % of Revenue). Profit and Margin can rapidly be influenced by changes in the market invest. SAC and SRC cost-structure elements will in general be small in emerging growth markets (compared to corresponding mature markets).
    • Terminal-equipment related OpEx: is the cost associated with procuring terminal equipment (i.e., handsets, smartphones, data cards, etc.). In the past (prior to 2008) it was fairly common that the OpEx from procuring and the revenues from selling terminals were close to a zero-sum game. In other words, the cost incurred by the operator in procuring terminals was pretty much covered by re-selling them to the customer base. This cost-structure element is another heavyweight and varies from 10% to 20% of the OpEx, with an average in mature European markets of 17%. Terminal-related cost on average amounts to ca. 11% of the Revenue (in mature markets). Most operators in emerging growth markets don't massively procure, re-sell and subsidize handsets, as is the case in many mature markets. Typically, handsets and devices in emerging markets are supplied by a substantial 2nd-hand gray and black market that is readily available.
    • Personnel Cost: amounts to between 6% and 15% of the Total OpEx, with a best-practice share of around 10%. Those who believe that this ratio is lower in emerging markets might want to re-think their impression. In my experience, emerging growth markets (including the ones in Eastern & Central Europe) have a lower unit personnel cost but also tend to have much larger organizations. This leads to many emerging growth market operators having a personnel cost share that is closer to 15% than to 10% or lower. On average, personnel cost should be below 10% of revenue, with best practice between 5% and 8% of the Revenue.
    • Technology Cost (Network & IT): includes all technology-related OpEx for both Network and Information Technology. Personnel-related technology OpEx (prior to capitalization) is accounted for in the above Personnel Cost category and would typically be around 30% of the personnel cost, depending on outsourcing level and organizational structure. Emerging markets in Central & Eastern Europe have historically had higher technology-related personnel cost than mature markets. In general this is attributed to high-quality, relatively low-cost technology staff, leading to fewer advantages in outsourcing technology functions. As Technology OpEx is the most frequent "victim" of efficiency initiatives, let's just have a look at the anatomy of the Technology Cost Structure:

    technology opex  mature markets

    • Technology Cost (Network & IT) – continued: Although the above Chart (i.e., taken from my 2012 Keynote at the Broadband MEA 2012, Dubai, "Ultra-efficient network factory: Network sharing and other means to leapfrog operator efficiencies") emphasizes a Mature Market view, the emerging market cost distribution does not differ that much from the above, with a few exceptions. In Emerging Growth Markets with poor electrification rates, diesel generators and the associated diesel fuel will strain the Energy Cost substantially. As the biggest exposure to a poor electrical grid (in emerging markets) in general tends to be in Rural and Sub-Urban areas, it is a particular OpEx concern as emerging market operators expand towards Rural Areas to capture the additional subscriber potential present there. Further, diesel fuel has on average increased by ca. 10% annually (i.e., over the last 10 years) and as such is a very substantial Margin and Profitability risk if a very large part of the cellular / mobile network requires diesel generators and the respective fuel. Obviously, "Rental & Leasing" as well as "Service & Maintenance" & "Personnel Cost" would be positively impacted (i.e., reduced) by Network Sharing initiatives. Best-practice Network Sharing can bring around 35% OpEx savings on the relevant cost structures. For more details on benefits and disadvantages (often forgotten in the heat of the moment) see my Blog "The ABC of Network Sharing – The Fundamentals". In my experience, one of the greatest opportunities in Emerging Growth Markets for increased efficiency is in the Services part covering Maintenance & Repair (which obviously also includes field maintenance and spare part services).
    • Other Cost: typically covers the rest of the OpEx not captured by the above specific items. It can also be viewed as overhead cost. It is also often used to "hide" cost that might be painful for the organization (i.e., in terms of authorization or the consequences of mistakes). In general you will find a very large number of smaller to medium cost items here rather than larger ones. Best practice should keep this below 10% of total OpEx and ca. 5% of Revenue. Much above this means either mis-categorization, ad-hoc projects, or something else that needs further clarification.

    So how does this help us compare a Mature Mobile Market with an Emerging Growth Market?

    As already mentioned in the description of the above cost-structure categories, particularly Market Invest and Terminal-equipment Cost are items that tend to be substantially lower for emerging market operators, or entirely absent from their cost structures.

    Let's assume our average mobile operator in an average mature mobile market (in Western Europe) has a Margin of 36%. In its existing (OpEx) cost structure it spends 15% of Revenue on Market Invest, of which ca. 53% goes to subscriber acquisition (i.e., the SAC cost category), 40% to subscriber retention (SRC) and another 7% to other marketing expenses. Further, this operator has been subsidizing its handset portfolio (i.e., Terminal Cost), which makes up another 10% of the Revenue.

    Our Average Operator comes up with the disruptive strategy of removing all SAC and SRC from its cost structure and stopping terminal equipment procurement. Assuming (and that is a very big assumption in a typical Western European mature market) that revenue remains at the same level, how would this average operator fare?

    Removing SAC and SRC, which were 14% of the Revenue, improves the Margin by another 14 percentage points. Removing terminal procurement from the cost structure leads to an additional Margin jump of 10 percentage points. The final result is a Margin of 60%, which is fairly close to some of the highest margins we find in emerging growth markets. Obviously, completely annihilating Market Invest might not be the most market-efficient move unless it is a market-wide initiative.
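A quick back-of-the-envelope check of the margin arithmetic above, as a sketch with revenue normalised to 100 and the same illustrative percentages:

```python
# Back-of-the-envelope check of the margin uplift described above,
# with revenue normalised to 100 (same illustrative percentages as in the text).
revenue = 100.0
opex = 64.0                               # 36% margin to start with
market_invest = 0.15 * revenue            # SAC + SRC + other marketing
sac_src = (0.53 + 0.40) * market_invest   # ca. 14% of revenue
terminals = 0.10 * revenue

for label, removed in [("baseline", 0.0),
                       ("drop SAC & SRC", sac_src),
                       ("drop SAC, SRC & terminals", sac_src + terminals)]:
    margin = (revenue - (opex - removed)) / revenue
    print(f"{label:<28} margin {margin:.0%}")
# baseline 36%, without SAC/SRC ca. 50%, without terminals as well ca. 60%
# (assuming, heroically, that revenue is unaffected).
```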

    Albeit the example might be perceived as a wee bit academic, it serves to illustrate that some of the larger margin differences we observe between mobile operators in mature and emerging growth markets can be largely explained by differences in the basic cost structure, i.e., the lack of substantial subscriber acquisition and retention costs, as well as not procuring terminals, does offer advantages to the emerging market business model.

    However, it also means that many operators in emerging markets have little OpEx flexibility, in the sense of fast OpEx reduction opportunities, once the mobile margin reduces due to, for example, slowing revenue growth. This typically becomes a challenge as mobile penetration starts reaching saturation and ARPU reduces due to diminishing returns on incremental customer acquisition.

    There is not much substantial OpEx flexibility (i.e., market invest & terminal procurement) in Emerging Growth Markets' mobile accounts. This adds to the challenge of avoiding a profitability squeeze and margin exposure by quickly scaling back OpEx.

    This is to some extent different from mature markets, which historically had quite a few low-hanging fruits to address before OpEx efficiency and reduction became a real challenge. Though ultimately it does become a challenge.

    Back to Profitability with a Vengeance.

    So it is all pretty simple! … leave out Market Invest and Terminal Procurement … then add that we are typically dealing with Lower GDP countries, which conventional wisdom would expect to also have lower OpEx (or Cost in general), combined with better population scale .. isn't that really what makes for a great emerging growth market Mobile Margin?

    Hmmm … Albeit Compelling!? … For the ones (of us) who would think that cost scales nicely with GDP, and that a Low GDP Country therefore would have a relatively Lower Cost Base, well …

    opex vs gdp

    • In the Chart above the Y-axis is depicted with logarithmic scaling in order to provide a better impression of the data points across the different economies. It should be noted that throughout the years 2007 to 2013 (note: 2013 data is shown above) there is no correlation between a country's mobile OpEx, as estimated by Revenue – EBITDA, and the GDP.

    Well … GDP really doesn’t provide the best explanation (to say the least)! … So what does then?

    I have carried out a multi-linear regression analysis on the available data from the "Bank of America Merrill Lynch (BoAML) Global Wireless Matrix Q1, 2014" datasets between the years 2007 and 2013. The multi-linear regression approach is based on year-by-year analysis of the data, with many different subsets & combinations of data chosen, including adding random data.

    I find that the best description (R-square 0.73, F-Ratio of 30 and p-value(s) < 0.0001) of the 48 countries' data on OpEx is given by the statistically significant parameters below. The number of data points used in the multi-regression is at least 48 for each parameter, and that for each of the 7 years analyzed. The result of the (preliminary) analysis is that the Mobile Market OpEx is explained by:

    1. Population – The larger the population, the proportionally less Mobile Market Opex is spent (i.e., scale advantage).
    2. Penetration – The higher the mobile penetration, the proportionally less Mobile Market Opex is spent (i.e., scale advantage, and incremental penetration at an already high penetration has less value, so less Opex should be spent).
    3. Users (i.e., as measured by subscriptions) – The more Users, the higher the Mobile Market Opex (note: the prepaid ratio has not been found to add statistical significance).
    4. ARPU (Average Revenue Per User) – The higher the ARPU, the higher the Mobile Market Opex.

    If I leave out ARPU, GDP does enter as a possible descriptive candidate, although the overall quality of the regression analysis suffers. However, it appears that GDP and ARPU cannot co-exist in the analysis: when Mobile Market ARPU data are included, GDP becomes non-significant. Furthermore, a country's Surface Area, which I previously believed would have a sizable impact on a Mobile Market's OpEx, also does not enter as a significant descriptive parameter in this analysis. In general, the Technology-related OpEx is between 15% and 25% (maximum) of the Total OpEx, and of that possibly 40% to 60% would be related to the sites needed to cover a given surface area. This might not be significant enough in comparison to the other parameters, or simply not a significant factor in the overall country-level mobile OpEx.
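For illustration, a sketch of how the ARPU-versus-GDP substitution described above could be checked; the DataFrame and its column names are hypothetical placeholders for the BoAML-derived dataset, and the synthetic values only serve to make the snippet runnable.

```python
# Sketch of the ARPU-vs-GDP check described above: fit the country OpEx model
# once with ARPU and once with GDP per capita, compare fit quality, and look at
# collinearity when both are included. Replace the synthetic `df` with the real data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 48
gdp = rng.uniform(1e3, 5e4, n)
df = pd.DataFrame({
    "population":     rng.uniform(5e6, 3e8, n),
    "penetration":    rng.uniform(0.6, 1.6, n),
    "subscriptions":  rng.uniform(5e6, 3e8, n),
    "gdp_per_capita": gdp,
    "arpu":           0.01 * gdp * rng.uniform(0.5, 1.5, n),  # synthetic: loosely tied to GDP
    "market_opex":    rng.uniform(1e8, 2e10, n),
})

base = ["population", "penetration", "subscriptions"]

def fit(columns):
    X = sm.add_constant(df[columns])
    return sm.OLS(df["market_opex"], X).fit()

print("adj. R2 with ARPU:", fit(base + ["arpu"]).rsquared_adj)
print("adj. R2 with GDP: ", fit(base + ["gdp_per_capita"]).rsquared_adj)

# If ARPU and GDP largely carry the same information, their variance inflation
# factors blow up once both enter the model together.
X_both = sm.add_constant(df[base + ["arpu", "gdp_per_capita"]])
for i, name in enumerate(X_both.columns):
    print(name, round(variance_inflation_factor(X_both.values, i), 1))
```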

    I had also expected 3G-UMTS to have had a significant contribution to the OpEx. However, this was not very clear from the analysis either, although in some of the earlier years (2005 – 2007) 3G does enter, albeit not with a lot of weight. In Western Europe most incremental OpEx related to 3G has been absorbed in the existing cost structure, and very little (if any) incremental OpEx would be visible, particularly after 2007. This might not be the case in most Emerging Markets unless they can rely on UMTS deployments at 900 MHz (i.e., the traditional GSM band). Also, the UMTS 900 solution would only last until capacity demand requires the operators to deploy UMTS 2100 (or let their customers suffer with less mobile data quality and keep the OpEx at existing levels). In rural areas (already covered by GSM at 900 MHz) the 900 MHz UMTS deployment option may mitigate the incremental OpEx of new site deployment and further encourage rural active network sharing, allowing for lower-cost deployment and providing rural populations with mobile data and internet access.

    The Population Size of a Country, the Mobile Penetration, the number of Users and their ARPU (note: the last two basically multiply up to the revenue) most clearly drive a mobile market's OpEx.

    Philippines versus Germany – Revenue, Cost & Profitability.

    The Philippines in 2013 is estimated to have a population of ca. 100 Million compared to Germany's ca. 80 Million. The urban population in Germany is 75%, taking up ca. 17% of the German surface area (ca. 61,000 km2, or a bit more than Croatia). Compare this to the Philippines' 50% urbanization, which takes up only 3% (ca. 9,000 km2, or equivalent to the surface area of Cyprus). Germany's surface area is about 20% larger than the Philippines' (although the geographies are widely .. wildly may be a better word … different, with the Philippine archipelago comprising 7,107 islands of which ca. 2,000 are inhabited, making the German geography slightly boring in comparison).

    In principle, if all I care about is to cover and offer services to the urban population (supposedly the ones with the money?), I only need to cover 9 – 10 thousand square kilometers in the Philippines to capture ca. 50 Million potential mobile users (or ca. 5,000 pop per km2), while I would need to cover about 6 times that amount of surface area to capture 60 million urban users in Germany (or ca. 1,000 pop per km2). Even when taking capacity and quality into account, my Philippine cellular network should be a lot smaller and more efficient than my German mobile network. If everything else were equal, I would basically need 6 times more sites in Germany compared to the Philippines, particularly if I don't care too much about good quality but just want to provide best-effort services (that would never work in Germany, by the way). The Philippines would win any day over Germany in terms of OpEx, and obviously also in terms of capital investment or CapEx. It does help the German network economics that the ARPU level in Germany is between 4 times (in 2003) and 6 times (in 2013) higher than in the Philippines. Do note that the two major German mobile operators cover almost 100% of the population as well as most of the German surface area, and that with a superior quality of voice as well as mobile broadband data. The same does not hold true for the Philippines.
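The back-of-the-envelope behind the "ca. 6 times more sites" statement, using the urban population and surface figures quoted above:

```python
# Back-of-the-envelope behind the "ca. 6x more sites" statement above, using the
# urban population and urban surface-area figures quoted in the text.
markets = {
    #              urban pop (millions), urban area (km2)
    "Philippines": (50, 9_000),
    "Germany":     (60, 61_000),
}
for name, (pop_m, area) in markets.items():
    print(f"{name}: ~{pop_m * 1e6 / area:,.0f} people per km2 of urban area")

area_ratio = markets["Germany"][1] / markets["Philippines"][1]
print(f"Germany has to cover ~{area_ratio:.1f}x the urban surface area for a "
      "comparable urban subscriber potential, i.e., roughly the factor 6 quoted above "
      "for a coverage-driven site count.")
```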

    In 2003 a mobile consumer in the Philippines would spend on average almost 8 US$ per month on mobile services. This was ca. 4x lower than what a German customer spent that year. The 2003 ARPU of the Philippines roughly corresponded to 10% of GDP per Capita, versus 1.2% for the German equivalent. Over the 10 years from 2003 to 2013, ARPU dropped 60% in the Philippines and by 2013 corresponded to ca. 1.5% of GDP per Capita (i.e., a much more affordable proposition). The German 2013 ARPU-to-GDP-per-Capita ratio was 0.5%, and its ARPU was ca. 40% lower than in 2003.

    The Philippine ARPU decline and OpEx increase over the 10-year period led to a Margin drop from 64% to 45% (a 19-percentage-point drop!), and the Margin is still highly likely to fall further in the near to medium term. Despite the Margin drop, the Philippines still made PHP 26 Billion more EBITDA in 2013 than in 2003 (ca. 45% more, equivalent to a compounded annual growth rate of 3.8%).

    in 2003

    • Germany had ca. 3x more mobile subscribers compared to Philippines.
    • German Mobile Revenue was 14x higher than Philippines.
    • German EBITDA was 9x higher than that of Philippines.
    • German OpEx was 23x higher than that of Philippines Mobile Industry.
    • Mobile Margin of the Philippines was 64% versus 42% of Germany.
    • Germany’s GDP per Capita (in US$) was 35 times larger than that of the Philippines.
    • Germany’s mobile ARPU was 4 times higher than that of Philippines.

    in 2013 (+ 10 Years)

    • Philippines & Germany have almost the same amount of mobile subscriptions.
    • Germany Mobile Revenue was 6x higher than Philippines.
    • German EBITDA was only 5x higher than that of Philippines.
    • German OpEx was 6x higher than Mobile OpEx in Philippines (and German OpEx was at level with 2003).
    • The Mobile Margin of the Philippines dropped 19 percentage points to 45%, compared to Germany’s 42% (essentially similar to 2003).
    • In local currency, the Philippines increased its EBITDA by ca. 45%, while Germany’s remained constant.
    • Both the Philippines and Germany lost ca. 11% in absolute EBITDA between their respective maximum over the 10-year period and 2013.
    • Germany’s GDP per Capita (in US$) was 14 times larger than that of the Philippines.
    • Germany’s ARPU was 6 times higher than that of Philippines.

    In the Philippines, mobile revenues grew by 7.4% per annum (between 2003 and 2013) while the corresponding mobile OpEx grew by 12%, thus eroding the margin massively over the period as increasingly more mobile customers were addressed. In the Philippines, the 2013 OpEx level was 3 times that of 2003 (despite one major network consolidation, and the market essentially being a duopoly after that consolidation). In the Philippines over this period the annual growth rate of mobile users was 17% (versus Germany's 6%). In absolute terms, the number of users in Germany and the Philippines was almost the same in 2013, ca. 115 Million versus 109 Million. In Germany over the same period financial growth was hardly present, although more than 50 Million subscriptions were added.

    When OpEx grows faster than Revenue, Profitability will suffer today & even more so tomorrow.
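A simple sketch of why that statement compounds over time: projecting the margin forward with the Philippine growth rates quoted above (revenue +7.4% p.a., OpEx +12% p.a.) from an illustrative starting point where OpEx is 36% of revenue roughly reproduces the 64% to 45% margin erosion described earlier.

```python
# Sketch: margin erosion when OpEx grows faster than Revenue. Growth rates echo
# the Philippine example above (revenue +7.4% p.a., OpEx +12% p.a.); the starting
# levels are illustrative, indexed so that OpEx is 36% of revenue (64% margin).
revenue, opex = 100.0, 36.0
g_revenue, g_opex = 0.074, 0.12

for year in range(0, 11):
    margin = (revenue - opex) / revenue
    if year % 5 == 0:
        print(f"year {year:2d}: margin {margin:.0%}")   # ca. 64% -> 56% -> 45%
    revenue *= 1 + g_revenue
    opex *= 1 + g_opex
```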

    Mobile capital investment (i.e., CapEx) over the period 2003 to 2013 was for Germany 5 times higher than that of the Philippines (i.e., remember that Germany also needs at least 5 – 6 times more sites to cover its urban population) and tracks at a 13% CapEx-to-Revenue ratio versus the Philippines' 20%.

    The stories of Mobile Philippines and of Mobile Germany are not unique. Similar examples can be found in Emerging Growth Markets as well as Mature Markets.

    Can Mature Markets learn from, or even match (keep on dreaming?), Emerging Markets in terms of efficiency? Assuming such markets really are efficient, of course!

    As logic (true or false) would dictate, given the relatively low ARPUs in emerging growth markets and their correspondingly high margins, one would think that such emerging markets are forced to run their business much more efficiently than Mature Markets. While compelling to believe, the economic data indicate that most emerging growth markets have been riding the subscriber & revenue growth bandwagon without too many thoughts on the OpEx part … and frankly, why should you care about OpEx when your business generates margins much in excess of 40%? Well … it is (much) easier to manage & control OpEx year by year than to abruptly "one day" have to cut cost in panic mode when growth slows down the really ugly way and OpEx keeps increasing without a care in the world. Many mature-market operators have been in this situation in the past (e.g., 2004 – 2008) and still work hard today to keep their margins stable and their profitability from declining.

    Most Companies will report both Revenue and EBITDA on a quarterly and annual basis, as both are key financial & operational indicators of growth. They tend not to report OpEx, but as seen from the above that is really not a problem to estimate when you have Revenue and EBITDA (i.e., OpEx = Revenue – EBITDA).

    philippines vs germany

    Thus, had you left the European Telco scene (assuming you were there in the first place) for the last 10 years and then come back, you might have concluded that not much had happened in your absence … at least from a profitability perspective. Germany was in 2013 almost at its EBITDA margin level of 2003. Of course, as those who did not take a long holiday know, those last 10 years were far from blissful financial & operational harmony in the mature markets, where one efficiency program after the other struggled to manage, control and reduce operators' operational expenses.

    However, over that 10-year period Germany added 50+ Million mobile subscriptions and invested more than 37 Billion US$ into the mobile networks of T-Deutschland, Vodafone, E-Plus and Telefonica-O2. The mobile country margin over the 10-year period has been ca. 43% and the CapEx-to-Revenue ratio ca. 13%. By 2013 the total number of mobile subscriptions was in the order of 115 Million out of a population of 81 Million (of which 54 Million are between 15 and 64 years of age). The observant numerologist will have realized that there are many more subscriptions than people … this is not surprising, as it reflects that many subscribers have multiple SIM cards (as opposed to cloned SIMs) or subscription types, depending on their device portfolio and a host of other reasons.

    All Wunderbar! … or? .. well, not really … Take a look at the revenue and profitability over the 10-year period and you will find that no (or very, very little) incremental revenue and profitability has been gained over the period from 2003 to 2013. AND we did add 80+% more subscriptions to the base!

    Here is the Germany Mobile development over the period;

    germany 2003-2013

    Apart from adding subscribers, having modernized the mobile networks at least twice over the period (i.e., CapEx with little OpEx impact) and having introduced LTE into the German market (i.e., with little additional revenue to show for it), not much additional value has been added. It is however no small feat what has happened in Germany (and in many other mature markets for that matter). Not only did Germany almost double its mobile customers (in terms of subscriptions); over the period 3G Node-Bs were overlaid across the existing 2G network. Many additional sites were added in Germany, as the fundamental 2G cellular grid was primarily based on 900 MHz and, to accommodate the higher UMTS frequency (i.e., 2100 MHz), more new locations were needed to provide superior 3G coverage (and capacity/quality). Still, Germany managed all this without increasing the mobile country OpEx across the period (apart from some minor swings). This has been achieved by tremendous attention to OpEx efficiency, with every part of the industry having razor-sharp attention to cost reduction and operating at increasing efficiency.

    philippines 2003-2013

    The Philippines' story is a Fabulous Story of Growth (as summarized above) … and of Profitability & Margin Decline.

    The Philippines today is in effect a duopoly, with PLDT having approx. 2/3 of the mobile market and Globe the remaining 1/3. During the period the Philippine market saw Sun Cellular being acquired by and merged into PLDT. Further, 3G was deployed and mobile data launched in major urban areas. SMS revenues remained the largest share of non-voice revenue for the two remaining mobile operators, PLDT and Globe. Over the period 2003 to 2013, the mobile subscriber base (in terms of subscriptions) grew by 16% per annum and ARPU fell accordingly by 10% per annum (all measured in local currency). All in all, this safeguarded a "healthy" revenue increase over the period, from ca. 93 Billion PHP in 2003 to 190 Billion PHP in 2013 (i.e., roughly a doubling over the period, corresponding to the ca. 7.4% annual growth rate quoted above).

    However, the Philippine market could not maintain its relative profitability & initial efficiency as the mobile market grew.

    philippines opex & arpu

    So we observe (at least) two effects: (1) reduction in ARPU as the market grows & (2) increasing OpEx to sustain the growth in the market. As more customers are added to a mobile network, the return on those customers increasingly diminishes, as the network needs to be massively extended to capture the full market potential rather than "just" the major urban potential.

    Mobile Philippines became less economically efficient as its scale increased and ARPU dropped (i.e., by almost 70%). This is not an unusual finding across Emerging Growth Markets.

    As I have described in my previous Blog "SMS – Assimilation is inevitable, Resistance is Futile!", the Philippine mobile market has an extreme exposure to SMS revenues, which amount to more than 35% of total revenues. This exposure becomes particularly acute as mobile data and smartphones penetrate the Philippine market. As described in that Blog, SMS services enjoy the highest profitability across the whole range of mobile services offered to the mobile customer, including voice. As SMS is cannibalized by IP-based messaging, its revenue will decline dramatically and mobile data revenue is not likely to catch up with this decline. Furthermore, profitability will suffer as the most profitable service (i.e., SMS) is replaced by mobile data, which by nature has a different (lower) profitability profile compared to simple SMS services.

    The Philippines does not only face a substantial Margin & EBITDA risk from un-managed OpEx, but also from SMS revenue cannibalization (à la KPN in the Netherlands, and then some).

    Exposure to SMS decline

    Let us compare the ARPU & OpEx development for the Philippines (above Chart) with that of Germany over the same period, 2003 to 2013 (please note that the OpEx scale is very narrow):

    germany opex & arpu

    Mobile Germany managed its cost structure despite a 40+% decrease in ARPU and despite another 60% of mobile penetration being added to the mobile business. Again, a similar trend will be found in most Mature Markets in Western Europe.

    One may argue (and not be too wrong) that Germany (and most mature mobile markets) in 2003 already had most of its OpEx-bearing organization, processes, logistics and infrastructure in place to continue acquiring subscribers (i.e., as measured in subscriptions). Therefore, it has been much easier for the mature market operators to maintain their OpEx as they continued to grow. It is also true that many emerging mobile markets did not apply the same (high) deployment and quality criteria in their initial network and service deployment as western mature markets did (certainly true for the Philippines, as is evident from the many regulatory warnings both PLDT and Globe received over the years), providing basic voice coverage in populated areas but little service in sub-urban and rural areas.

    Most of the initial emerging market networks have been based on coarse (by mature market standards) GSM 900 MHz (or CDMA 850 MHz) grids with relatively little available capacity and indoor coverage in comparison to population and clutter types (i.e., geographical topologies characterized by their cellular radio interference patterns). The challenge is that, as an operator wants to capture more customers, it will need to build out / extend its mobile network into the areas where those potential or prospective new customers live and work. From a cost perspective, sub-urban and rural areas in emerging markets are not per se lower-cost areas, despite such areas in general being lower-revenue areas than their urban equivalents. Thus, as more customers are added (i.e., increased mobile penetration), proportionally more cost is generated than revenue is captured, and the relative margin will decline. … and this is how the Ugly (cost or profitability) Tail is created.

    The Ugly (cost) Tail

    • I just cannot write about profitability and cost structure without throwing the Ugly-(cost)-Tail on the page. I strongly encourage all mobile operators to make their own Ugly-Tail analysis (a minimal sketch of such an analysis is given after the next paragraph). You will find more details on how to remedy this ugliness in your cost structure in "The ABC of Network Sharing – The Fundamentals".

    In Western Europe's mature mobile markets we find that more than 50% of the mobile cellular sites capture no more than 10% of the revenues (but we do tend to cover almost all surface area several times, unless the mobile operators have managed to see the logic of rural network sharing and have consolidated those rural & sub-urban networks). Given that emerging mobile markets have "gone less overboard" in terms of lowest-revenue, un-profitable network deployments in rural areas, you will find that the share of sites carrying 10% or less of the revenue is around 40%. It should be remembered that the rural populations in emerging growth markets tend to be a lot larger than those in mature markets, and as such the revenue is in principle spread out more than would be the case in mature markets.
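    For readers who want to run their own Ugly-Tail analysis (as encouraged above), here is a minimal sketch. It assumes you have per-site revenue figures available; the lognormal revenue distribution below is purely hypothetical and only there to make the snippet runnable.

```python
import numpy as np

def ugly_tail(site_revenues, revenue_share=0.10):
    """Fraction of sites that together carry no more than `revenue_share`
    (default 10%) of total revenue, counting from the lowest-revenue sites up."""
    rev = np.sort(np.asarray(site_revenues, dtype=float))   # ascending
    cum_share = np.cumsum(rev) / rev.sum()
    n_tail = np.searchsorted(cum_share, revenue_share, side="right")
    return n_tail / rev.size

# Hypothetical example: 10,000 sites with a skewed (lognormal) revenue distribution.
rng = np.random.default_rng(42)
revenues = rng.lognormal(mean=10.0, sigma=1.5, size=10_000)
print(f"{ugly_tail(revenues):.0%} of sites carry <= 10% of total revenue")
```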

    Population & Mobile Statistics and Opex Trends.

    The following provides a 2013 summary of mobile penetration, 3G penetration (measured in subscriptions), urban population and the corresponding share of surface area under urban settlement. Further, to guide the eye, the 100% line has been inserted (red solid line), together with a red dotted line that represents the share of the population between 15 and 64 years of age (i.e., those more likely to afford a mobile service) and a dashed red line providing the average across all the 43 countries analyzed in this Blog.

    population & mobile penetration stats

    • Sources: United Nations, Department of Economic & Social Affairs, Population Division. The UN data is somewhat outdated, though for most data points across emerging and mature markets the changes have been minor. Mobile penetration is based on Pyramid Research and Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Index Mundi is the source for the country age structure and the percentage of the population between 15 and 64 years of age, shown as a red dotted line which swings between 53.2% (Nigeria) and 78.2% (Singapore), with an average of 66.5% (red dashed line).

    There are a couple of points (out of many) that can be made on the above data:

    1. There are no real emerging markets any longer in the sense of providing basic mobile telephone services such as voice and messaging.
    2. For mobile broadband data via 3G-UMTS (or LTE for that matter), what we tend to characterize as emerging markets are truly emerging or in some cases nascent (e.g., Algeria, Iraq, India, Pakistan, etc.).
    3. All mature markets have mobile penetration rates way above 100%, with the exception of Canada at ca. 80% (though getting to 100% in Canada might be a real challenge due to a very dispersed remaining 20+% of the population).
    4. Most emerging markets are by now covering all urban areas and the corresponding urban population. Many have also reached 100% mobile penetration rates.
    5. Most Emerging Markets are lagging Western Mature Markets in 3G penetration. Even the provision of high-bandwidth mobile data to urban populations & urban areas lags that of mature markets.

    Size & density does matter … in all kind of ways when it comes to the economics of mobile networks and the business itself.

    In Australia I only need to cover ca. 40 thousand km2 (i.e., 0.5% of the total surface area and a bit less than the size of Denmark) to have captured almost 90% of the Australian population (for scale, Australia's total size is 180+ times that of Denmark excluding Greenland). I frequently hear my Australian friends telling me how Australia covers almost 100% of the population (and I am sure that they cover more area than the equivalent of Denmark too) … but, without being (too) disrespectful, that record is not headed for the Guinness Book of Records anytime soon. In the US (e.g., 20% more surface area than Australia) I need to cover almost 800 thousand km2 (8.2% of the surface area, or the equivalent of a bit more than Turkey) to capture more than 80% of the population. In Thailand I can only capture 35% of the population by covering ca. 5% of the surface area, or a little less than 30 thousand km2 (approx. the equivalent of Belgium). The remaining 65% of the Thai population is rural-based and spread across a much larger surface area, requiring an extensive mobile network to provide coverage and capture additional market share outside the urban population.

    So in Thailand I might need slightly fewer cell sites to cover 35% of the population (i.e., 22M) than in Australia to cover almost 90% of the population (i.e., ca. 21M). That is pretty cool economics for Australia, which is also reflected in a very low profitability risk score. For Thailand (and other countries with similar urban demographics) it is tough luck if they want to reach out and get the remaining 65% of their population. The geographical dispersion of the population outside urban areas is very wide, and an increasingly large geographical area needs to be covered in order to catch this population group. UMTS at 900 MHz will help to deploy economical mobile broadband, as will LTE in the APT 700 MHz band (be it either FDD Band 28 or TDD Band 44), as the terminal portfolio becomes affordable for rural and sub-urban populations in emerging growth markets.

    In Western Europe, on average, I can capture 77% of the population (i.e., the urban population) by covering 14.2% of the surface area (i.e., the average over the markets in this analysis). This is all very agreeable, and almost all Western European countries cover their surface areas to at least 80% and in most cases beyond that (i.e., it is simply less & easier land to cover, though not per se less costly). In most cases rural coverage is encouraged (or required) by the mature market license regime and is not always a choice of the mobile operators.

    Before we look in depth at the growth (incl. positive as well as negative growth), let's first have a peek at what has happened to mobile revenue in terms of ARPU and the number of mobile users, and the corresponding mobile penetration, over the period 2007 to 2013.

    arpu development

    • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014 and Pyramid Research data were used to calculate the growth of ARPU as the compounded annual growth rate between 2007 and 2013 and the annual growth rate between 2012 and 2013. Since 2007 mobile ARPUs have been in decline, and to make matters worse the decline has even accelerated rather than slowed down as markets' mobile penetration saturated.

    mobile penetration

    • Source: Mobile penetration taken from Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014 and Pyramid Research data. Index Mundi is the source for the country age structure and the percentage of the population between 15 and 64 years of age, shown as a red dotted line which swings between 53.2% (Nigeria) and 78.2% (Singapore), with an average of 66.5% (red dashed line). It is interesting to observe that most emerging growth markets are now where the mature markets were in 2007 in terms of mobile penetration.

    Apart from a very few markets, ARPU has been in a steady decline since 2007. Further, in many countries the ARPU decline has even accelerated rather than slowed down. From most mature markets the conclusion we can draw is that there is no evidence that mobile broadband data (via 3G-UMTS or LTE) has had any positive effect on ARPU, although some of the ARPU decline over the period in mature markets (particularly European Union countries) can be attributed to regulatory actions. In general, as soon as a country's mobile penetration reaches 100% (in effect reaching the part of the population between 15 and 64 years of age), ARPU tends to decline faster rather than more slowly. Of course one may correctly argue that this is not a big issue as long as ARPU times the number of users (i.e., total revenue) keeps growing healthily. However, as we will see, that is yet another challenge for the mobile industry, as total revenue in mature markets is also in decline on a year-by-year basis.

    Given the market, revenue & cost structures of emerging growth markets, it is not unlikely that they will face similar challenges to their mobile revenues (and thus profitability). This could have a much more dramatic effect on their overall mobile economics & business models than what has been experienced in the mature markets, which have had a lot more "cushion" in their P&Ls to defend and even grow (albeit weakly) their profitability. It is instructive to see that most emerging growth markets' mobile penetrations have reached the levels of mature markets in 2007. Combined with the introduction and uptake of mobile broadband data, this marks a more troublesome business model phase than what these markets have experienced in the past. Some of the emerging growth markets have yet to introduce 3G-UMTS, and some may leapfrog to mobile broadband by launching LTE directly. Both events, based on lessons learned from mature markets, herald a more difficult business model period of managing cost structures while defending revenues from decline and satisfying customers' appetite for mobile broadband internet that cannot be supported by such countries' fixed telecommunications infrastructures.

    For us to understand more profoundly where our mobile profitability is heading, it is obviously a good idea to understand how our Revenue and OpEx are trending. In this Section I am only concerned with the mobile market in a country, not the individual mobile operators in that country. For the latter (i.e., operator profitability) you will find a really cool and exciting analytic framework in the Section after this. I am also not interested (in this article) in modeling the mobile business bottom-up (been there & done that … but that is an entirely different story line). However, I am hunting for a higher-level understanding and a more holistic approach that will allow me, probabilistically (by way of Bayesian analysis & ultimately inference), to predict in which direction a given market is heading when it comes to Revenue, OpEx and of course the resulting EBITDA and Margin. The analysis I am presenting in this Section is preliminary and only includes compounded annual growth rates as well as the year-by-year growth rates of Revenue and OpEx. Further developments will include specific market & regulatory developments as well, to further improve on the Bayesian approach. Given the wealth of data accumulated over the years in the Bank of America Merrill Lynch (BoAML) Global Wireless Matrix datasets, it is fairly easy to construct & train statistical models as well as to test them in line with best practices.

    The Chart below comprises 48 countries' Revenue & OpEx growth rates as derived from the "Bank of America Merrill Lynch (BoAML) Global Wireless Matrix Q1, 2014" dataset (note: the BoAML data available in this analysis goes back to 2003). Out of the 48 countries, 23 have an OpEx compounded annual growth rate higher than the corresponding Revenue growth rate. Thus, it is clear that those 23 countries face a higher risk of reduced margin and strained profitability due to the over-proportionate growth of OpEx. Out of the 23 countries with high or very high profitability risk, 11 have been characterized in macro-economic terms as emerging growth markets (i.e., China, India, Indonesia, Philippines, Egypt, Morocco, Nigeria, Russia, Turkey, Chile, Mexico); the remaining 12 can be characterized as mature markets in macro-economic terms (i.e., New Zealand, Singapore, Austria, Belgium, France, Greece, Spain, Canada, South Korea, Malaysia, Taiwan, Israel). Furthermore, 26 countries had a higher OpEx growth between 2012 and 2013 than their revenue growth and are likely trending towards dangerous territory in terms of profitability risk.

    CAGR of Revenue & Opex, 2007 – 2013

    • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Revenue depicted here is Service Revenue, and OpEx has been calculated as Service Revenue minus EBITDA. The Compounded Annual Growth Rate (CAGR) is calculated as CAGR_{2007-2013}(X) = \left( \frac{X_{2013}}{X_{2007}} \right)^{\frac{1}{2013-2007}} - 1, with X being Revenue or OpEx. The Y-axis scale is from -25% to +25% (i.e., similar to the scale chosen in the year-by-year growth rate Chart further below).
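    For readers who want to reproduce these growth metrics from yearly Revenue and OpEx figures, a minimal sketch follows (the numbers are placeholders, not values from the BoAML dataset):

```python
def cagr(x_start: float, x_end: float, years: int) -> float:
    """Compounded annual growth rate between two end points."""
    return (x_end / x_start) ** (1.0 / years) - 1.0

def yoy(x_prev: float, x_curr: float) -> float:
    """Year-on-year growth rate."""
    return x_curr / x_prev - 1.0

# Placeholder example: revenue 2007 = 100, 2012 = 96, 2013 = 94 (indexed units).
print(f"Revenue CAGR 2007-2013: {cagr(100, 94, 2013 - 2007):+.1%}")
print(f"Revenue YoY 2012-2013:  {yoy(96, 94):+.1%}")
```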

    With few exceptions, one does not need to read the country names on the Chart above to immediately see where the Mature Markets with little or negative growth are, and where what we typically call emerging growth markets are located.

    As the above Chart clearly illustrates, the mobile industry across different types of markets has an increasing challenge to deliver profitable growth and, if the trend continues, to keep its profitability, period!

    OpEx grows faster than Mobile Operators can capture Revenue … That's a problem!

    In order to gauge whether the growth dynamics of the last 7 years are something to be concerned about (they are! … most definitely! but humor me!) … it is worthwhile to take a look at the year-by-year growth rate trends (i.e., as the CAGR only measures the starting point and the end point and "doesn't really care" about what happens in the in-between years).

    Annual growth of Revenue & Opex, 2012 – 2013

    • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Revenue depicted here is Service Revenue, and OpEx has been calculated as Service Revenue minus EBITDA. The year-on-year growth is depicted in the Chart above. The Y-axis scale is from -25% to +25%. Note that the Y-scales of the year-on-year growth Chart and the 7-year CAGR Chart above are the same and thus directly comparable.

    From the year-on-year growth dynamics, compared to the compounded 7-year annual growth rate, we find that the decline in Mature Markets' mobile revenues has accelerated. However, in most cases Mature Market OpEx is declining as well, and the control & management of the cost structure has improved markedly over the last 7 years. Despite this cost structure management, most Mature Markets' revenue has been declining faster than OpEx. As a result, a profitability squeeze remains a substantial risk in Mature Markets in general.

    In almost all Emerging Growth Markets the 2012-to-2013 revenue growth rate has declined in comparison with the compounded annual growth rate. This is not surprising, as most of those markets are heading towards 100% mobile penetration (as measured in subscriptions). OpEx growth remains a dire concern for most of the emerging growth markets and will continue to squeeze emerging markets' profitability and respective margins. There is no indication (in the dataset analyzed) that OpEx is really under control in Emerging Growth Markets, at least not to the same degree as is observed in Mature Markets (i.e., particularly Western Europe). What further adds to the emerging markets' profitability risk is that mobile data networks (i.e., 3G-UMTS, HSPA+, ..) and the corresponding mobile data uptake are just in their infancy in most of the Emerging Growth Markets in this analysis. The networks required to sustain demand (at a reasonable quality) are more extensive than what was required to provide okay-voice and SMS. Most of the emerging growth markets have no significant fixed (broadband data) infrastructure and, in addition, a poor media distribution infrastructure that could relieve the mobile data networks being built. Huge rural populations with little available ARPU potential but a huge appetite to get connected to the internet and media will further stress the mobile business model's cost structure and sustainable profitability.

    This argument is best illustrated by comparing the household digital ecosystem evolution (or revolution) in Western Europe with the projected evolution of Emerging Growth Markets.

    emerging markets display & demand 

    • The above Chart illustrates the likely evolution of the Home and Personal Digital Infrastructure Ecosystem of an emerging market's household (HH). Note in particular that the number of TV displays is very low and that much of the media distribution is expected to happen over cellular and wireless networks. An additional challenge is that the fixed broadband infrastructure is widely lagging in many emerging markets (in particular in sub-urban and rural areas), increasing the requirements on the mobile networks in those markets. It is compelling to believe that we will witness completely different use case scenarios for digital media consumption than experienced in the Western Mature Markets. The emerging market is not likely to have the same degree of mobile/cellular data off-load as experienced in mature markets and as such will strain mobile networks' air-interface, backhaul and backbone substantially more than is the case in mature markets. Source: Dr. Kim K Larsen Broadband MEA 2013 keynote on "Growth Pains: How networks will supply data capacity for 2020".

    Displays in homes – Western Europe

    • Same as above, but the projection for Western Europe. In comparison with Emerging Markets, a Mature Market household (HH) has many more TVs as well as a substantially higher fixed broadband penetration, offering high-bandwidth digital media distribution as well as off-load optionality for mobile devices via WiFi. Source: Dr. Kim K Larsen Broadband MEA 2013 keynote on "Growth Pains: How networks will supply data capacity for 2020".

    Mobile Market Profit Sustainability Risk Index

    The comprehensive dataset from Bank of America Merrill Lynch Global Wireless Matrix allows us to estimate what I have chosen to call a Market Profit Sustainability Risk Index. This Index provides a measure for the direction (i.e., growth rates) of Revenue & Opex and thus for the Profitability.

    The Chart below is the preliminary result of such an analysis limited to the BoAML Global Wireless Matrix Quarter 1 of 2014. I am currently extending the Bayesian Analysis to include additional data rather than relying only on growth rates of Revenue & Opex, e.g., (1) market consolidation should improve the cost structure of the mobile business, (2) introducing 3G usually introduces a negative jump in the mobile operator cost structure, (3) mobile revenue growth rate reduces as mobile penetration increases, (4) regulatory actions & forces will reduce revenues and might have both positive and negative effects on the relevant cost structure, etc.…

    So here it is! Preliminary, but nevertheless directionally reasonable based on Revenue & OpEx growth rates: the Market Profit Sustainability Risk Index for 48 Mature & Emerging Growth Markets worldwide:

    Market Profit Sustainability Risk Index

    The above Market Profit Sustainability Risk Index uses the following risk profiles (a minimal coded sketch of the classification logic follows the list):

    1. Very High Risk (index –5, i.e., for margin decline): (i) the Compounded Annual Growth Rate (CAGR) between 2007 and 2013 of OpEx was higher than the equivalent for Revenue AND (ii) the Year-on-Year (YoY) growth rate 2012 to 2013 of OpEx was higher than that of Revenue AND (iii) the OpEx YoY 2012-to-2013 growth rate was higher than the OpEx CAGR over the period 2007 to 2013.
    2. High Risk (index –3): same as Very High Risk above with condition (iii) removed, OR the YoY Revenue growth 2012 to 2013 was lower than the corresponding OpEx growth.
    3. Medium Risk (index –2): the CAGR of Revenue was lower than the CAGR of OpEx, but the last year's (i.e., 2012 to 2013) growth rate of Revenue was higher than that of OpEx.
    4. Low Risk (index 1): (i) the CAGR of Revenue was higher than the CAGR of OpEx AND (ii) the YoY Revenue growth was higher than the OpEx growth, but by less than the previous year's inflation.
    5. Very Low Risk (index 3): same as Low Risk above, with the YoY Revenue growth rate required to exceed the OpEx growth by at least the previous year's inflation rate.
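    The sketch below codes up the five rules above. The function name and the exact handling of the inflation condition in the Low / Very Low rules are my own assumptions (the text above leaves some room for interpretation), not the implementation behind the Chart.

```python
def profit_risk_index(cagr_rev, cagr_opex, yoy_rev, yoy_opex, inflation):
    """Map Revenue/OpEx growth rates onto the five risk profiles listed above.
    All inputs are decimal growth rates (e.g., 0.03 for 3%); `inflation` is the
    previous year's inflation rate. The inflation treatment is my own reading."""
    if cagr_opex > cagr_rev:
        if yoy_opex > yoy_rev and yoy_opex > cagr_opex:
            return -5              # Very High Risk
        if yoy_opex > yoy_rev:
            return -3              # High Risk
        return -2                  # Medium Risk: last year's Revenue outgrew OpEx
    # CAGR of Revenue >= CAGR of OpEx
    if yoy_rev > yoy_opex + inflation:
        return 3                   # Very Low Risk
    if yoy_rev > yoy_opex:
        return 1                   # Low Risk
    return -3                      # High Risk: last year OpEx outgrew Revenue

# Placeholder example (not actual BoAML figures):
print(profit_risk_index(cagr_rev=0.02, cagr_opex=0.05,
                        yoy_rev=0.01, yoy_opex=0.06, inflation=0.02))  # -> -5
```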

    The outlook for Mature Markets is fairly positive, as most of those markets have engaged in structural cost control and management for the last 7 to 8 years. The Emerging Growth Markets' Profit Sustainability Risk Index is cause for concern. As mobile markets saturate, the usual result is lower ARPU and higher cost to reach the remaining parts of the population (often "encouraged" by regulation). Most Emerging Growth Markets have started to introduce mobile data, which is likely to result in higher cost-structure pressure while traditional revenue streams come under pressure (if the history of the Mature Markets is to repeat itself in emerging growth markets). The Emerging Growth Markets have had little incentive (in the past) to focus on cost structure control and management, due to the exceedingly high margins that they historically could present with their legacy mobile services (i.e., Voice & SMS) and relatively light networks (as always, in comparison to Mature Markets).

    A cautionary note is appropriate. All of the above is based on mobile markets across the world. There are causes and effects that can move a market from a high risk profile to a lower one. Even if I feel that the dataset supports the categorization, it remains preliminary, as more effects should be included in the current risk model to add even more confidence in its predictive power. Furthermore, the analysis is probabilistic in nature and as such does not claim to carve the future in stone. All the Index claims to do is indicate a probable direction of the profitability (as well as of Revenue & OpEx). There are several ways in which Operators and Regulatory Authorities might influence the direction of the profitability, changing the Risk Exposure (in the Wrong as well as in the Right Direction).

    Furthermore, it would be wrong to apply the Market Profit Sustainability Risk Index to individual mobile operators in the relevant markets analyzed here. The profitability dynamics of individual mobile operators are a wee bit more complicated, albeit some guidelines and predictive trends for their profitability dynamics in terms of Revenue and Opex can be defined. This will all be revealed in the following Section.

    Operator Profitability – the Profitability Math.

    We have seen that the Margin M can be written as

    M = \frac{E}{R} = \frac{R - O}{R}, with E, R and O being EBITDA, Revenue and OpEx respectively.

    However, much more interesting is that it can also be written as a function of the subscriber market share \sigma:

    \Delta = \delta - \frac{o_f}{r_u} \frac{1}{\sigma}, valid for all \sigma \in ]0,1], with \Delta being the margin and the subscriber market share \sigma lying between 0% and 100%. The rest will follow in more detail below; suffice to say that as the subscriber market share increases, the Margin (or relative profitability) increases as well, although not linearly (if anyone would have expected that).

    Before we get down and dirty on the math, let's discuss Operator Profitability from a higher level and in terms of an operator's subscriber market share (i.e., typically measured in subscriptions rather than individual users).

    In the following I will show some individual operator examples of EBITDA margin dynamics from Mature Markets, limited to Western Europe. Obviously the analysis and approach is not limited to mature markets and can be (and has been) directly extended to Emerging Growth Markets or any mobile market for that matter. Again, the BoAML Global Wireless Matrix provides a very rich dataset for applying the approach described in this Blog.

    It has been well established (i.e., by un-accountable and/or un-countable Consultants & Advisors) that an Operator's Margin correlates reasonably well with its Subscriber Market Share, as the Chart below illustrates very well. In addition, the Chart below also includes the T-Mobile Netherlands profitability journey from 2002 to 2006, up to the point where Deutsche Telekom looked into acquiring Orange Netherlands, an event that took place in the Summer of 2007.

    margin versus subscriber share

    I do love the above Chart (must be the physicist in me?) as it shows such a richness in business dynamics all boiled down to two main drivers, i.e., Margin & Subscriber Market Share.

    So how can an Operator strategize to improve its profitability?

    Let us take an Example

    margin growth by acquisition or efficiency

    Here is how we can think about it in terms of Subscriber Market Share and EBITDA, as depicted by the above Chart. In simple terms, an Operator has a combination of two choices. (Bullet 1 in the above Chart) Improve its profitability through OpEx reductions, making its operation more efficient without much additional growth (i.e., also resulting in little subscriber acquisition cost); it can also improve its ARPU profile by increasing its revenue per subscriber (smiling a bit cynically while writing this), again without adding much in additional market share. The first part of Bullet 1 has been pretty much business as usual in Western Europe since 2004 at least (unfortunately with very few examples of the 2nd part of Bullet 1). (Bullet 2 in the above Chart) The above "Margin vs. Subscriber Market Share" Chart indicates that if you can acquire the customers of another company (i.e., via Acquisition & Merger) it should be possible to quantum-leap your market share while increasing the efficiency of the operation through scale effects. In the above Example Chart our Hero has ca. 15% customer market share and the Hero's Target ca. 10%. Thus after an acquisition our Hero would expect to get to ca. 25% (if they play it well enough). Similarly we would expect a boost in profitability and hope for at least 38% if our Hero has an 18% margin and our Target has 20%. Maybe even better, as the scale should improve this further. Obviously, this kind of "math" assumes that our Hero and Target can work in isolation from the rest of the market and that no competitive forces would be at play to disrupt the well-thought-through plan (or that nothing otherwise disruptive happens in parallel with the merger of the two businesses). Of course such a venture comes with a price tag (i.e., the acquisition price) that needs to be factored into the overall economics of acquiring customers. As said, most (Western) Operators are in a perpetual state of managing & controlling cost to maintain their Margin and to protect and/or improve their EBITDA.

    So much for the theory! Let us see how the Dutch mobile market's profitability dynamics evolved over the 10-year period from 2003 to 2013:

    mobile netherlands 10 year journey

    From both KPN's acquisition of Telfort and T-Mobile's acquisition & merger of Orange in the above Margin vs. Subscriber Market Share Chart, we see that, in general, the market share logic works; the management of the integration would have had to be fairly unlucky for that not to hold. When it comes to the EBITDA logic it looks a little less obvious. KPN clearly got unlucky (if luck has anything to do with it?), as its margin declined and only saw a small uplift, still lower than where it started pre-acquisition. KPN should have expected a margin lift to 50+%. That did not happen to KPN – Telfort. T-Mobile did fare better, and we do observe a margin uplift to around 30% that can be attributed to OpEx synergies resulting from the integration of the two businesses. However, it has taken many OpEx efficiency rounds to get the Margin up to the 38% that was the original target for the T-Mobile – Orange transaction.

    In the past it was customary to take lots of operators from many countries, plot their margin versus subscriber market share, draw a straight line through the data points and conclude that the margin potential is directly related to the Subscriber Market Share. This idea is depicted by the Left Side Chart and the straight-line "best" fit to the data.

    Let's just terminate that idea … it is wrong and does not reflect the true margin dynamics as a function of the subscriber market share. The margin is not a straight-line function of the subscriber market share but rather falls off asymptotically towards minus infinity as the market share approaches zero, i.e., when the company has no subscribers and no revenue but non-zero cost. We also observe a diminishing return on additional market share, in the sense that as more market share is gained, smaller and smaller incremental margins are gained. The magenta dashed line in the Left Chart below illustrates how one should expect the Margin to behave as a function of subscriber market share.

    the wrong & the right way to show margin vs subscriber share 

    The Right Chart above breaks the data points down country by country. It is obvious that different countries have different margin-versus-market-share behavior and that drawing one curve through all of them might be a bit naïve.

    So how can we understand this behavior? Let us start with making a very simple formula a lot more complex :–)

    We can write the Margin \Delta as the ratio of Earnings before Interest, Tax, Depreciation & Amortization (EBITDA) and Revenue R: \Delta = \frac{EBITDA}{R} = \frac{R - O}{R} = 1 - \frac{O}{R}, where EBITDA is defined as Revenue minus OpEx. Both OpEx and Revenue can be decomposed into a fixed and a variable part: O = O_f + AOPU \times U and R = R_f + ARPU \times U, with AOPU being the Average OpEx per User, ARPU the Average (blended) Revenue per User and U the number of users (in the following I use the shorthand o_u for AOPU and r_u for ARPU). For the moment I will ignore the fixed part of the revenue and write R = r_u U. Further, the number of users can be written as U = \sigma M, with \sigma being the market share and M the market size. So we can now write the margin as

    \Delta = 1 - \frac{O_f + o_u \sigma M}{r_u \sigma M} = 1 - \frac{o_u}{r_u} - \frac{o_f}{r_u} \frac{1}{\sigma} = \delta - \frac{o_f}{r_u} \frac{1}{\sigma}, with \delta = 1 - \frac{o_u}{r_u} and o_f = \frac{O_f}{M}.

    \Delta = \delta - \frac{o_f}{r_u} \frac{1}{\sigma}, valid for all \sigma \in ]0,1].

    The Margin is not a linear function of the Subscriber Market Share (if anybody would have expected that) but relates to the Inverse of Market Share.

    Still, the Margin becomes larger as the market share grows, with a maximum achievable margin of \Delta_{\max} = 1 - \frac{o_u}{r_u} - \frac{o_f}{r_u} as the market share equals 1 (i.e., Monopoly). We observe that even in a Monopoly there is a limit to how profitable such a business can be. It should be noted that this limit is not a constant but a function of how operationally efficient a given operator is, as well as of its market conditions. Furthermore, as the market share reduces towards zero, \Delta \to -\infty.
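    To make the shape of this relationship concrete, here is a minimal sketch of the margin function. The ARPU and cost levels are the illustrative values used in the worked scenarios further below (EUR 25.8 ARPU per month, EUR 15 variable OpEx per user, EUR 0.5 fixed OpEx per total-market user), not data for any particular operator.

```python
def margin(share, r_u=25.8, o_u=15.0, o_f=0.5):
    """EBITDA margin as a function of subscriber market share:
    Delta = 1 - o_u/r_u - (o_f/r_u) / share."""
    return 1.0 - o_u / r_u - (o_f / r_u) / share

for share in (0.05, 0.10, 0.20, 0.33, 0.50, 1.00):
    print(f"market share {share:>4.0%}: margin {margin(share):+.1%}")

# Note how steeply the margin collapses at low share and how little is gained
# beyond roughly a third of the market (diminishing returns); at 100% share the
# margin tops out at Delta_max = 1 - o_u/r_u - o_f/r_u (~39.9% with these values).
```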

    Fixed OpEx (o_f) per total subscriber market: this cost element relates in principle to the cost structure that is independent of the number of customers a given mobile operator has. For example, a big country with a relatively low population (or mobile penetration) will have a higher fixed cost per total number of subscribers than a smaller country with a larger population (or mobile penetration). Fixed cost is difficult to change, as it depends on the network and is country-specific in nature. For an individual Operator, the fixed cost (per total market subscribers) will be influenced by;

    • Coverage strategy, i.e., to what extend the country’s surface area will be covered, network sharing, national roaming vs. rural coverage, leased bandwidth, etc..
    • Spectrum portfolio, i.e., lower frequencies are more economical than higher frequencies for surface area coverage but will in general have less bandwidth available (i.e., driving up the number of sites in capacity-limited scenarios). The only real exception to the bandwidth limitations of low-frequency spectrum would be the APT700 band (though it would "force" an operator to deploy LTE, which might not be timed right given the specifics of the market).
    • General economical trends, lease/rental cost, inflation, salary levels, etc..

    Average Variable OpEx per User (o_u): this cost structure element captures cost that is directly related to the subscriber, such as

    • Market Invest (i.e., Subscriber Acquisition Cost SAC, Subscriber Retention Cost SRC), handset subsidies, usage-related cost, etc..
    • Any other variable cost directly associated with the customer (e.g., customer facing functions in the operator organization).

    This behavior is exactly what we observe in the presented Margin vs. Subscriber Market Share data and also explains why the data needs to be treated on a country-by-country basis. It is worthwhile to note that the higher the market share, the less incremental margin gain should be expected from additional market share.

    The above presented profitability framework can be used to test whether a given mobile operator is market & operationally efficient compared to its peers.

    margin vs share example

    The overall margin dynamics are shown in the above Chart for various settings of fixed and variable OpEx as well as a given operator's ARPU. We see that as the fixed OpEx (in relation to the total subscriber market) increases, it gets more difficult to become EBITDA positive, and increasingly more market share is required to reach reasonable profitability targets. The following maps a 3-player market according to the profitability logic derived here:

    market share dynamics

    What we first notice is that operators in the initial phase of what you might define as the "market-share capture phase" are extremely sensitive to setbacks. A small loss of subscriber market share (e.g., 2%) can tumble the operator back into the abyss (e.g., a 15% margin setback) and wreak havoc on the business model. The profitability logic also illustrates that once an operator has reached market-share maturity, adding new subscribers is less valuable than keeping the ones it has. Even a big market share addition will only result in little additional profitability (i.e., the law of diminishing returns).

    The derived profitability framework can also be used to illustrate what happens to the Margin in a market-wise steady situation (i.e., only minor changes to an operator's market share), what the market share needs to be to keep a given Margin, or how cost needs to be controlled in the event that ARPU drops and we want to keep our margin but cannot grow market share (or any other market, profitability or cost-structure exercise for that matter);

    margin versus arpu & time etc

    • The above chart illustrates the Margin as a function of ARPU & cost (fixed & variable) development at a fixed market share, here chosen to be 33%. The starting point is an ARPU r_u of EUR 25.8 per month, a variable cost per user o_u assumed to be EUR 15 and a fixed cost per total mobile user market (o_f) of EUR 0.5. The first scenario (a, Orange Solid Line), with an end-of-period margin of 32.7%, assumes that ARPU reduces by 2% per annum and that the variable cost can be controlled and will likewise reduce by 2% pa; the fixed cost is assumed to increase by 3% on an annual basis. During the 10-year period it is assumed that the Operator's market share remains at 33%. The second scenario (b, Red Dashed Line) is essentially the same as (a), with the only difference that the variable cost remains at the initial level of EUR 15 and does not change over time. This scenario ends at a 21.1% margin after 10 years. In principle it shows that Mobile Operators will not have a choice but to reduce their variable cost as ARPU declines (again the trade-off between certainty of cost and risk/uncertainty of revenue). In fact, the most successful mature mobile operators spend a lot of effort on managing & controlling their cost to keep their margin even as ARPU & Revenues decline.
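    A minimal sketch reproducing these two scenarios with the margin function from above (nine annual growth steps across the 10-year window, same illustrative EUR figures as in the caption):

```python
def margin(share, r_u, o_u, o_f):
    """EBITDA margin: Delta = 1 - o_u/r_u - (o_f/r_u) / share."""
    return 1.0 - o_u / r_u - (o_f / r_u) / share

share, years = 0.33, 9            # 9 growth steps from year 1 to year 10
r_u, o_u, o_f = 25.8, 15.0, 0.5   # EUR: ARPU, variable OpEx/user, fixed OpEx per market user

# Scenario (a): ARPU -2% pa, variable cost -2% pa, fixed cost +3% pa.
m_a = margin(share, r_u * 0.98**years, o_u * 0.98**years, o_f * 1.03**years)
# Scenario (b): as (a), but the variable cost stays frozen at EUR 15.
m_b = margin(share, r_u * 0.98**years, o_u, o_f * 1.03**years)

print(f"Scenario (a) end-of-period margin: {m_a:.1%}")  # ~32.7%
print(f"Scenario (b) end-of-period margin: {m_b:.1%}")  # ~21.1%
```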

    market share as function of arpu etc

    • The above chart illustrates what market share is required to keep the margin at 36% when ARPU reduces by 2% pa, fixed cost increases by 3% pa and the variable cost either (a, Orange Solid Line) can be reduced by 2% pa in line with the ARPU decline or (b, Red Solid Line) remains fixed at the initial level. In scenario (a) the mobile operator would need to grow its market share to 52% to maintain its margin at 36%. This will obviously be very challenging, as it would be at the expense of the other operators in this market (here assumed to be 3 in total). Scenario (b) is extremely dramatic and in my opinion mission impossible, as it requires complete 100% market dominance.
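    Solving the margin equation for the market share, \sigma = \frac{o_f / r_u}{\delta - \Delta_{target}}, gives the required share directly; the sketch below reproduces the ca. 52% figure for scenario (a), using the same illustrative assumptions and nine growth steps as before.

```python
def required_share(target_margin, r_u, o_u, o_f):
    """Market share needed for a target margin, from
    Delta = delta - (o_f/r_u)/sigma  =>  sigma = (o_f/r_u) / (delta - Delta)."""
    delta = 1.0 - o_u / r_u              # margin contribution before fixed cost
    headroom = delta - target_margin
    return (o_f / r_u) / headroom if headroom > 0 else float("inf")

years = 9
r_u = 25.8 * 0.98**years
o_f = 0.5 * 1.03**years

print(f"(a) variable cost -2% pa: {required_share(0.36, r_u, 15.0 * 0.98**years, o_f):.0%}")
print(f"(b) variable cost frozen: {required_share(0.36, r_u, 15.0, o_f)}")  # infeasible (> 100%)
```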

    variable cost development for margin

    • The above Chart illustrates how the variable cost needs to be managed & controlled, compared to the –2% pa decline, in order to keep the Margin constant at 36%, assuming that the Operator's subscriber market share remains at 33% over the period. The Orange Solid Line in the Chart shows the –2% pa variable cost decline and the Red Dashed Line the variable cost required to keep the margin at 36%.

    The following illustrates the profitability framework described above applied to a few Western European markets. As this only serves as an illustration, I have chosen to show older data (i.e., 2006). It is however very easy to apply the methodology to any country, and the BoAML Global Wireless Matrix, with its richness in data, can serve as an excellent source for such analysis. Needless to say, the methodology can be extended to assess an operator's profitability sensitivity to market share and market dynamics in general.

    The Charts below show the Equalized Market Share, which simply means the fair market share of the operators, i.e., with 3 operators the fair or equalized market share would be 1/3 (33.3%), with 4 operators it would be 25%, and so forth. I am also depicting what I call the Max Margin Potential, which is simply the margin potential at 100% market share for a given set of ARPU (r_u), AOPU (o_u) and fixed cost (o_f) levels in relation to the total market.

    netherlands

    • Netherlands Chart: Equalized Market Share assumes Orange has been consolidated with T-Mobile Netherlands. The analysis would indicate that no more than ca. 40% Margin should be expected in The Netherlands for any of the 4 Mobile Operators. Note that for T-Mobile and Orange small increases in market share should in theory lead to larger margins, while KPN’s margin would be pretty much un-affected by additional market share.

    germany

    • Germany Chart: shows Vodafone slightly higher and T-Mobile Deutschland slightly lower in margin than the idealized margin versus subscriber market share. At the time, T-Mobile relied almost exclusively on leased lines and outsourced its site infrastructure, while Vodafone relied almost exclusively on microwaves and owned its own site infrastructure. The two newcomers to the German market (E-Plus and Telefonica-O2) are trailing on the left side of the Equalized Market Share. Had Telefonica and E-Plus merged at this point in time, one would have expected them eventually (post-integration) to exceed a margin of 40%. Such a scenario would lead to an almost equilibrium market situation, with the remaining 3 operators having similar market shares and margins.

    france

     

    austria

     

    italy

     

    united kingdom

     

    denmark

     

    Acknowledgement

    I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this Blog. I certainly have not always been very present during the analysis and writing.

    The Economics of the Thousand Times Challenge: Spectrum, Efficiency and Small Cells

    By now the biggest challenge of the “1,000x challenge” is to read yet another story about the “1,000x challenge”.

    This said, Qualcomm has made many beautiful presentations on The Challenge. It leaves the reader with an impression that it is much less of a real challenge, as there is a solution for everything and then some.

    So bear with me while we take a look at the Economics, and in particular the Economic Boundaries, around the Thousand Times "Challenge" of providing (1) more spectrum, (2) better efficiency and, last but not least, (3) many more Small Cells.

    THE MISSING LINK

    While (almost) every technical challenge is solvable by clever engineering (i.e., something Qualcomm obviously has in abundance), it does not follow naturally that such solutions are also feasible within the framework imposed by real-world economics. At the very least, any technical solution should also be reasonable within the world of economics (and of course within a practical time-frame), or it becomes a clever solution that is irrelevant to a real-world business.

    A Business will (maybe "should" is more in line with reality) care about customer happiness. However, a business needs to do that within the healthy financial boundaries of margin, cash and shareholder value. Not only should the customer be happy; the happiness should extend to the investors and shareholders who have trusted the Business with their livelihood.

    While technically, and almost mathematically, it follows that massive network densification would be required in the next 10 years IF WE KEEP FEEDING CUSTOMER DEMAND, it might not be very economical to do so, or at the very least such densification only makes sense within a reasonable financial envelope.

    It is obvious that massive network densification by means of macro-cellular expansion is unrealistic, impractical as well as uneconomical. Thus Small Cell concepts, including WiFi, have been brought to the telecoms scene as an alternative and credible solution. While Small Cells are much more practical, the question of whether they sufficiently address the economic boundaries the Telecommunications Industry is facing remains pretty much unanswered.

    PRE-AMP

    The Thousand Times Challenge, as it has been PR’ed by Qualcomm, states that the cellular capacity required in 2020 will be at least 1,000 times that of “today”. Actually, the 1,000 times challenge is referenced to the cellular demand & supply in 2010, so doing the math

    the 1,000x might "only" be a 100 times challenge between now and 2020 in the world of Qualcomm and the like. Not that it matters! … We still talk about the same demand, just referenced to a later (and maybe less "sexy") year.

    In my previous Blogs, I have accounted for the dubious affair (and nonsensical discussion) of over-emphasizing cellular data growth rates (see "The Thousand Times Challenge: The answer to everything about mobile data") as well as for the much more intelligent discussion about how the Mobile Industry provides for more cellular data capacity starting with the existing mobile networks (see "The Thousand Times Challenge: How to provide cellular data capacity?").

    As it turns out, Cellular Network Capacity C can be described by 3 major components: (1) available bandwidth B, (2) (effective) spectral efficiency E and (3) the number of cells deployed N.

    The SUPPLIED NETWORK CAPACITY in Mbps (i.e., C) is equal to the AMOUNT OF SPECTRUM, i.e., available bandwidth, in MHz (i.e., B) multiplied by the SPECTRAL EFFICIENCY PER CELL in Mbps/MHz (i.e., E) multiplied by the NUMBER OF CELLS (i.e., N). In short: C = B × E × N. For more details on how and when to apply the Cellular Network Capacity Equation, read my previous Blog "How to provide Cellular Data Capacity?".

    SK Telecom (SK Telecom's presentation at the 3GPP workshop on "Future Radio in 3GPP" is worth a careful study), Mallinson (@WiseHarbor) and Qualcomm (@Qualcomm_tech, and many others as of late) have used the above capacity equation to impose a target amount of cellular network capacity that a mobile network should be able to supply by 2020. Realistic or not, this target comes to 1,000 times the supplied capacity level of 2010 (i.e., I assume that 2010 – 2020 sounds nicer than 2012 – 2022 … although the latter would have been a lot more logical to aim for if one really wanted to look at 10 years … of course that might not give 1,000 times, which might ruin the marketing message?).

    So we have the following 2020 Cellular Network Capacity Challenge:

    Thus a cellular network in 2020 should have 3 times more spectral bandwidth B available (that's fairly easy!), 6 times higher spectral efficiency E (so-so … but not impossible, particularly compared with 2010) and 56 times higher cell site density N (this one might be a "real killer challenge" in more than one way), compared to 2010!

    Personally I would not get too hung up about whether it is 3 x 6 x 56 or 6 x 3 x 56 or some other combination of "multiplicators" resulting in a 1,000 times gain (though some combinations might be a lot more feasible than others!)
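    A trivial check of the decomposition using the capacity equation C = B × E × N (the split itself is the one quoted above, the code is purely illustrative):

```python
# Supplied capacity scales multiplicatively: C = B * E * N.
# Relative gains vs. 2010 quoted above: 3x bandwidth, 6x efficiency, 56x cell density.
bandwidth_gain, efficiency_gain, density_gain = 3, 6, 56
total_gain = bandwidth_gain * efficiency_gain * density_gain
print(f"Total capacity gain: {total_gain}x")  # 1008x, i.e., roughly the "1,000x"
```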

    Obviously we do NOT need a lot of insights to see that the 1,000x challenge is a

    Rally call for Small & then Smaller Cell Deployment!

    Also, we do not need to be particularly visionary (or to have visited a Dutch Coffee Shop) to predict that by 2020 (aka The Future), compared to today (i.e., October 2012):

    Data demand from mobile devices will be a lot higher in 2020!

    Cellular Networks have to (and will!) supply a lot more data capacity in 2020!

    Footnote: the observant reader will have seen that I am not making the claim that there will be hugely more data traffic on the cellular network in comparison to today. The WiFi path might (and most likely will) take a lot of the traffic growth away from the cellular network.

    BUT

    how economical will this journey be for the Mobile Network Operator?

    THE ECONOMICS OF THE THOUSAND TIMES CHALLENGE

    Mobile Network Operators (MNOs) will not have the luxury of getting the Cellular Data Supply and Demand Equation Wrong.

    The MNO will need to balance network investments with pricing strategies, churn & customer experience management as well as overall profitability and corporate financial well being:

    Growth, if not managed, will lead to a capacity & cash crunch and destruction of shareholder value!

    So for the Thousand Times Challenge, we need to look at the Total Cost of Ownership (TCO) or Total Investment required to get to a cellular network with 1,000 times more network capacity than today. We need to look at:

    Investment I(B) in additional bandwidth B, which would include (a) the price of spectral re-farming (i.e., re-purposing legacy spectrum to a new and more efficient technology), (b) technology migration (e.g., moving customers off 2G and onto 3G or LTE or both) and (c) the possible acquisition of new spectrum (i.e., via auctions, beauty contests, or M&As).

    Improving a cellular network's spectral efficiency, I(E), is also likely to result in additional investments. In order to get an improved effective spectral efficiency, an operator would be required to (a) modernize its infrastructure, (b) invest in better antenna technologies, and (c) ensure that customer migration from older, spectrally inefficient technologies to more spectrally efficient technologies occurs at an appropriate pace.

    Last but NOT Least the investment in cell density I(N):

    Needing 56 times additional cell density is most likely NOT going to be FREE,

    even with clever small cell deployment strategies.

    Though I am pretty sure that some, out there in the Operator space, will make a very positive business case (note: the difference between Pest & Cholera might come out in favor of Cholera … though we would rather avoid both of them) comparing a macro-cellular expansion to a Small Cell deployment, avoiding massive churn in case of outrageous cell congestion, rather than focusing on managing growth before such an event occurs.

    The Real "1,000x" Challenge will be Economic in nature and will relate to the following considerations:

    tco 2020

    In other words:

    Mobile Networks required to supply a 1,000 times present day cellular capacity are also required to provide that capacity gain at substantially less ABSOLUTE Total Cost of Ownership.

    I emphasize the ABSOLUTE aspects of the Total Cost of Ownership (TCO), as I have too many times seen our Mobile Industry present financial benefits in relative terms (i.e., relative to a given quality improvement) and then fail to mention that in absolute cost the industry will incur increased OpEx (compared to the pre-improvement situation). Thus a margin decline (i.e., unless proportional revenue is gained … and how likely is that?), as well as a negative cash impact due to the increased investments needed to gain the improvements (i.e., again assuming that a proportional revenue gain remains wishful thinking).

    Never Trust relative financial improvements! Absolutes don’t Lie!

    THE ECONOMICS OF SPECTRUM.

    Spectrum economics can be captured by three major themes: (A) ACQUISITION, (B) RETENTION and (C) PERFECTION. These 3 major themes should be well considered in any credible business plan: Short, Medium and Long-term.

    It is fairly clear that there will not be a lot of new lower-frequency (defined here as <2.5GHz) spectrum available in the next 10+ years (unless we get a real breakthrough in white-space). The biggest relative increase in cellular bandwidth dedicated to mobile data services will come from re-purposing (i.e., perfecting) existing legacy spectrum (i.e., by re-farming), supplemented by acquisition of some new bandwidth in the low-frequency range (<800MHz), which by definition will not be a lot of bandwidth and will take time to become available. There are opportunities in the very high-frequency range (>3GHz), which contains a lot of bandwidth. However, this is only interesting for Small Cell and Femto Cell-like deployments (a feeding frenzy for small cells!).

    As many European countries re-auction existing legacy spectrum after the set expiration period (typically 10 – 15 years), it is paramount for a mobile operator to retain as much as possible of its existing legacy spectrum. Not only is current traffic tied up in the legacy bands, but future growth of mobile data will critically depend on their availability. Retention of the existing spectrum position should be a very important element of an Operator's business plan and strategy.

    Most real-world mobile network operators that I have looked at can expect, by acquisition & perfection, to gain between 3 and 8 times the spectral bandwidth for cellular data compared to today's situation.

    For example, a typical Western European MNO has

    1. Max. 2x10MHz @ 900MHz, primarily used for GSM, though some operators have UMTS900 in operation or plan to re-farm to UMTS pending regulatory approval.
    2. 2×20 MHz @ 1800MHz, though here the variation tends to be fairly large in the MNO spectrum landscape, i.e., between 2x30MHz down to 2x5MHz. Today this is exclusively in use for GSM. This is going to be a key LTE band in Europe and is already supported for LTE in the iPhone 5.
    3. 2×10 – 15 MHz @ 2100MHz is the main 3G band (UMTS/HSPA+) in Europe and is expected to remain so for at least the next 10 years.
    4. 2×10 MHz @ 800MHz per operator, typically distributed across 3 operators and dedicated to LTE. In countries with more than 3 operators, some MNOs will typically have no position in this band.
    5. 40 MHz @ 2.6GHz per operator, dedicated to LTE (FDD and/or TDD). This spectrum would in general be earmarked for capacity enhancements rather than coverage.

    Note that most European mobile operators did not have 800MHz and/or 2.6GHz in their spectrum portfolios prior to 2011. The above list has been visualized in the Figure below (though only for FDD and showing the single side of the frequency duplex).

    spectrum_details

    The 700MHz band will eventually become available in Europe (it is already in use for LTE in the USA via AT&T and VRZ) for LTE-Advanced, though clearing and perfecting it for cellular deployment in Europe is still expected to take maybe up to 8 years (or more).

    Today (as of 2012) a typical European MNO would have approximately (a) 60 MHz (i.e., DL+UL) for GSM, (b) 20 – 30 MHz for UMTS and (c) between 40MHz – 60MHz for LTE (note that in 2010 this would have been 0MHz for most operators!). By 2020 it would be fair to assume that the same MNO could have (d) 40 – 50 MHz for UMTS/HSPA+ and (e) 80MHz – 100MHz for LTE. Of course it is likely that mobile operators would still keep a thin GSM layer to support roaming traffic and extreme laggards (this is, however, likely to be a shared resource among several operators). If by 2020 some 10MHz to 20MHz would be required to support voice capacity, then the MNO would have at least 100MHz and up to 130MHz for data.

    Note that if we Fast-Backward to 2010 and assume that no 2.6GHz or 800MHz auction had happened, so that only 2×10 – 15 MHz @ 2.1GHz provided cellular data capacity, then we easily get a factor 3 to 5 boost in spectral capacity for data over the period. This just illustrates how meaningless it is to relativize the challenge of providing network capacity.
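
    A quick back-of-the-envelope check of that boost, in Python, using only the illustrative spectrum ranges quoted above (mid-points of the ranges, not operator-specific data):

    ```python
    # Back-of-the-envelope check of the data-spectrum boost discussed above.
    # All figures are the illustrative DL+UL ranges quoted in the text.

    data_spectrum_2010_mhz = (20, 30)     # only 2x10-15 MHz @ 2.1GHz carrying cellular data
    data_spectrum_2020_mhz = (100, 130)   # UMTS/HSPA+ plus LTE bands, net of a thin voice layer

    mid_2010 = sum(data_spectrum_2010_mhz) / 2    # 25 MHz
    mid_2020 = sum(data_spectrum_2020_mhz) / 2    # 115 MHz

    print(f"Indicative data-spectrum boost 2010 -> 2020: {mid_2020 / mid_2010:.1f}x")
    # ~4.6x, i.e., within the factor 3 to 5 range mentioned above.
    ```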

    So what are the economic aspects of spectrum? Well, show me the money!

    Spectrum:

    1. needs to be Acquired (including re-acquired = Retention) via (a) Auction, (b) Beauty contest or (c) Private transaction if allowed by the regulatory authorities (i.e., spectrum trading); usually spectrum (in Europe at least) will be a time-limited right-to-use! (e.g., 10 – 15 years) => Capital investments to (re)purchase spectrum.
    2. might need to be Perfected & Re-farmed to another, more spectrally efficient technology => new infrastructure investments & customer migration cost (incl. acquisition, retention & churn).
    3. might come with coverage & service obligations for new deployments => new capital investments and associated operational cost.
    4. demand could result in joint ventures or mergers to acquire sufficient spectrum for growth.
    5. often has a recurring usage fee associated with its deployment => Operational expense burden.

    The first 3 points can be attributed mainly to Capital expenditures, while point 5 would typically be an Operational expense. As we have seen in the US with the failed AT&T – T-Mobile US merger, point 4 can result in a very high cost of spectrum acquisition, though usually a merger brings with it many beneficial synergies, other than spectrum, that justify it.

    spectrum_cost

    The above Figure provides a historical view of spectrum pricing in US$ per MHz-pop. As we can see, not all spectrum has been born equal, and depending on the timing of acquisition, a premium might have been paid for some spectrum (e.g., the Western European UMTS hyper-pricing of 2000 – 2001).
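
    For reference, the MHz-pop normalization is simply the license price divided by the bandwidth times the covered population. A minimal helper, with a purely hypothetical license price and market size (not taken from the Figure):

    ```python
    def price_per_mhz_pop(license_price_usd: float, bandwidth_mhz: float, population: float) -> float:
        """Normalize a spectrum license price to US$ per MHz per head of population."""
        return license_price_usd / (bandwidth_mhz * population)

    # Hypothetical example (not from the Figure): a 2x10 MHz (= 20 MHz) license
    # costing 500 million USD in a market of 60 million people.
    print(f"{price_per_mhz_pop(500e6, 20, 60_000_000):.2f} USD per MHz-pop")   # ~0.42
    ```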

    Some general spectrum acquisition heuristics can be derived from the above historical overview (see my presentation “Techno-Economical Aspects of Mobile Broadband from 800MHz to 2.6GHz” on @slideshare for a more in-depth analysis).

    spectrum_heuristics

    Most of the operator cost associated with Spectrum Acquisition, Spectrum Retention and Spectrum Perfection should be more or less included in a Mobile Network Operator's Business Plan, though the demand for more spectrum can be accelerated (1) in highly competitive markets, (2) in spectrum-starved operations, and/or (3) if customer demand is poorly managed within the spectral resources available to the MNO.

    WiFi, or in general any open radio-access technology operating in the ISM bands (i.e., freely available frequency bands such as 2.4GHz and 5.8GHz), can mitigate the need for costly controlled-spectrum resources by stimulating higher usage of such open technologies and open bands.

    The cash avoidance or cash optimization from open-access technologies and frequency bands should not be underestimated or forgotten. Even if such open-access deployment models do not make standalone economic sense, they are likely to make good sense as an integral part of the Next Generation Mobile Data Network, perfecting & optimizing both open and controlled radio-access technologies.

    The Economics of Spectrum Acquisition, Spectrum Retention & Spectrum Perfection is of such tremendous benefit that it should be on any Operator's business plan: short, medium and long-term.

    THE ECONOMICS OF SPECTRAL EFFICIENCY

    The relative gain in spectral efficiency (as well as other radio performance metrics) across new 3GPP releases has been amazing between R99 and recent HSDPA releases. Lots of progress has been booked on the account of increased receiver and antenna sophistication.

    spectral_efficiency_gain_per_technology

    If we compare HSDPA 3.6Mbps (see the above Figure) with the first Release of LTE, the spectral efficiency has been improved by a factor of 4. Combined with more available bandwidth for LTE, this provides an even larger relative boost of supplied bandwidth for increased capacity and customer quality. Do note that the above relative representation of spectral efficiency gain largely takes away the usual (almost religious) discussions of what the right spectral efficiency is and at what load. The effective (whatever that may be in your network) spectral efficiency gain moving from one radio-access release or generation to the next would be represented by the above Figure.
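
    As a quick illustration of how these gains multiply: the factor 4 spectral-efficiency gain above, combined with an assumed doubling of bandwidth (illustrative only, not a claim about any particular operator), yields roughly an 8x boost in supplied capacity.

    ```python
    # The supplied capacity gain is (roughly) multiplicative in spectral efficiency and bandwidth.
    spectral_efficiency_gain = 4.0   # HSDPA 3.6 Mbps -> first LTE release, per the Figure above
    bandwidth_gain = 2.0             # assumed for illustration, e.g., 2x15 MHz -> 2x30 MHz

    print(f"Supplied capacity gain: {spectral_efficiency_gain * bandwidth_gain:.0f}x")   # 8x here
    ```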

    Theoretically this is all great! However,

    Having the radio-access infrastructure supporting the most spectral efficient technology is the easy part (i.e., thousands of radio nodes), getting your customer base migrated to the most spectral efficient technology is where the challenge starts (i.e., millions of devices).

    In other words, to get the maximum benefit of a given 3GPP Release's gains, an operator needs to migrate its customer base's terminal equipment to that more efficient Release. This will take time and might be costly, particularly if accelerated. Irrespective, migrating a customer base from radio-access A (e.g., GSM) to radio-access B (e.g., LTE) will take time and adhere to the normal market dynamics of churn, retention, replacement factors, and gross adds. The migration to a better radio-access technology can be stimulated by above-market-average acquisition & retention investments and higher-than-market-average terminal equipment subsidies. In the end, competitors' market reactions to your market actions will influence the migration time scale very substantially (this is typically underestimated, as competitive driving forces are ignored in most analyses of this problem).
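
    To make the time-scale point concrete, here is a toy migration model; the 25% annual handset replacement rate and the 80% share of replacement devices supporting the new technology are illustrative assumptions, not market data:

    ```python
    # Toy model of customer migration to a more spectrally efficient radio access,
    # assuming migration is driven purely by the normal handset replacement cycle.
    # Both input rates below are illustrative assumptions.

    replacement_rate = 0.25   # share of the customer base replacing their device each year
    new_tech_share = 0.80     # share of replacement devices that support the new technology

    migrated = 0.0
    for year in range(1, 7):
        migrated += (1.0 - migrated) * replacement_rate * new_tech_share
        print(f"year {year}: {migrated:.0%} of the base on the new technology")
    # Even after 6 years a substantial share of the base (and its traffic)
    # remains on the legacy, less efficient technology.
    ```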

    The typical radio-access network modernization cycle has so far been around 5 years. Modernization is mainly driven by hardware obsolescence and the need for more capacity per unit area than older (first & second generation) equipment could provide. The most recent and ongoing modernization cycle combines the need for LTE introduction with 2G and possibly 3G modernization. In some instances, retiring relatively modern 3G equipment at the expense of a write-off has been assessed to be worth it in order to get the latest multi-mode, so-called Single-RAN equipment deployed. This new cycle of infrastructure improvements will in relative terms far exceed past upgrades. Software-Defined Radios (SDR) with multi-mode (i.e., 2G, 3G, LTE) capabilities are being deployed on one integrated hardware platform, instead of the older generations that were separate platforms with the associated floor space penalty and operational complexity. In theory, only software maintenance & simple HW upgrades (i.e., CPU, memory, etc.) would be required to migrate from one radio-access technology to another. Have we seen the last HW modernization cycle? … I doubt it very much! (i.e., we still have Cloud and Virtualization concepts moving out to the radio node, blurring the need for an own core network).

    Multi-mode SDRs should in principle provide a more graceful, software-dominated radio evolution towards increasingly efficient radio access, as cellular networks and customers migrate from HSPA to HSPA+ to LTE and to LTE-Advanced. However, in order to enable those spectrally efficient, superior radio-access technologies, a Mobile Network Operator will have to follow through with high investments (or incur high incremental operational cost) in vastly improved backhaul solutions and new antenna capabilities, beyond what past access technologies required.

    Whilst the radio access network infrastructure has gotten a lot more efficient from a cash perspective, the peripheral supporting parts (i.e., antenna, backhaul, etc.) have gotten a lot more costly in absolute terms (even if the relative cost per Byte might be perfectly okay).

    Thus most of the economics of spectral efficiency can and will be captured within the modernization cycles and new software releases without much ado. However, backhaul and antenna technology investments and increased operational cost are likely to burden cash at the peak of new equipment (including modernization) deployment. Margin pressure is therefore likely if the Opex of supporting the increased performance is not well managed.

    To recapture the most important issues of Spectrum Efficiency Economics:

    • network infrastructure upgrades, from a hardware as well as software perspective, are required => capital investments, though these typically result in lower Operational cost.
    • optimal customer migration to better and more efficient radio-access technologies => market investments and terminal subsidies.

    Boosting spectrum much beyond 6 times today's mobile-data-dedicated spectrum position is unlikely to happen within a foreseeable time frame. It is also unlikely to happen in bands that would be very interesting for providing both excellent depth of coverage and depth of capacity at the same time (i.e., lower frequency bands with lots of bandwidth available). Spectral efficiency will improve with both next-generation HSPA+ as well as with LTE and its evolutionary path. However, depending on how we count the relative improvement, it is not going to be sufficient to boost capacity and performance to the level a “1,000 times challenge” would require.
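
    This is also where the 56 times cell density figure used earlier comes from. A minimal sketch, assuming the multiplicative supply relation used in this post (capacity gain ≈ bandwidth gain × spectral-efficiency gain × cell-density gain) and the rough upper bounds discussed above:

    ```python
    # Where the ~56x cell density figure comes from, assuming a multiplicative
    # supply relation: capacity_gain ~ bandwidth_gain * efficiency_gain * density_gain.
    # The 6x and 3x inputs are the rough upper bounds discussed in this post.

    target_gain = 1000.0
    bandwidth_gain = 6.0             # realistic upper end of the spectrum boost
    spectral_efficiency_gain = 3.0   # assumed effective gain across the customer base

    required_density_gain = target_gain / (bandwidth_gain * spectral_efficiency_gain)
    print(f"Required cell-density gain: ~{required_density_gain:.0f}x")   # ~56x
    ```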

    This brings us to the topic of vastly increased cell site density and of course Small Cell Economics.

    THE ECONOMICS OF INCREASED CELL SITE DENSITY

    It is fairly clear that there will not be a lot of new spectrum available in the next 10+ years. The relative increase in cellular bandwidth will come from re-purposing & perfecting existing legacy spectrum (i.e., by re-farming) and acquiring some new bandwidth in the low-frequency range (<800MHz), which by definition is not going to provide a lot of bandwidth. The very high-frequency range (>3GHz) contains a lot of bandwidth, but is only interesting for Small Cell and Femto-cell-like deployments (a feeding frenzy for Small Cells).

    Financially, Mobile Operators in mature markets, such as Western Europe, will be lucky to keep their earnings and margins stable over the next 8 – 10 years. Mobile revenues are likely to stagnate and possibly even decline. Opex pressure will continue to increase (e.g., simply from inflationary pressures alone). MNOs are unlikely to increase cell site density if it leads to incremental cost & cash pressure that cannot be recovered by proportional Topline increases. Therefore it should be clear that adding many more cell sites (be it Macro, Pico, Nano or Femto) to meet increasing (often unmanaged & unprofitable) cellular demand is economically unwise and unlikely to happen unless followed by Topline benefits.

    Increasing cell density dramatically (i.e., 56 times is dramatic!) to meet cellular data demand will only happen if it can be done with little incremental cost & cash pressure.

    I have no doubt that distributing mobile data traffic over more and smaller nodes (i.e., decreasing traffic per node) and utilizing open-access technologies to manage data traffic loads are likely to mitigate some of the cash and margin pressure from supporting the higher-performance radio-access technologies.

    Let me emphasize that there will always be situations and geographically localized areas where cell site density will be increased regardless of the economics, in order to address urgent capacity needs or to provide specialized coverage. If an operator has substantially less spectral overhead (e.g., AT&T) than a competitor (e.g., T-Mobile US), the spectrum-starved operator might decide to densify with Small Cells and/or Distributed Antenna Systems (DAS) to be able to continue providing a competitive level of service (e.g., AT&T's situation in many of its top markets). Such a spectrum-starved operator might even have to rely on massive WiFi deployments to continue to provide a decent level of customer service in extreme hot traffic zones (e.g., Times Square in NYC), remain competitive, and have a credible future growth story to tell shareholders.

    Spectrum-starved mobile operators will move faster and more aggressively to Small Cell Network solutions including advanced (and not-so-advanced) WiFi solutions. This fast learning-curve might in the longer term make up for a poorer spectrum position.

    In the following I will consider Small Cells in the widest sense, including solutions based both on controlled frequency spectrum (e.g., HSPA+, LTE bands) as well in the ISM frequency bands (i.e., 2.4GHz and 5.8GHz). The differences between the various Small Cell options will in general translate into more or less cells due to radio-access link-budget differences.

    As I have been involved in many projects over the last couple of years looking at WiFi & Small Cell substitution for macro-cellular coverage, I would like to make clear that in my opinion:

    A Small Cells Network is not a good technical (or economical viable) solution for substituting macro-cellular coverage for a mobile network operator.

    However, Small Cells are Great for

    • Specialized coverage solutions difficult to reach & capture with standard macro-cellular means.
    • Localized capacity addition in hot traffic zones.
    • Coverage & capacity underlay when macro-cellular cell split options have been exhausted.

    The last point in particular becomes important when mobile traffic exceeds the means for macro-cellular expansion, i.e., typically at urban & dense-urban macro-cellular ranges below 200 meters, and in some instances maybe below 500 meters, depending on the radio-access choice of the Small Cell solution.

    Interference concerns will limit the transmit power and coverage range. However, since our focus is on small, localized and tailor-made coverage-capacity solutions, not on substituting macro-cellular coverage, the range limitation is of lesser concern.

    For great accounts of Small Cell network designs, please check out Iris Barcia (@IBTwi) & Simon Chapman (@simonchapman), both from Keima Wireless. I recommend the very insightful presentation from Iris, “Radio Challenges and Opportunities for Large Scale Small Cell Deployments”, which you can find at the “3G & 4G Wireless Blog” by Zahid Ghadialy (@zahidtg, a solid telecom knowledge source for our Industry).

    When considering small cell deployment it makes good sense to understand the traffic behavior of your customer base. The Figure below illustrates a typical daily data and voice traffic profile across a (mature) cellular network:

    a_typical_traffic_day_in_europe

    • up to 80% of cellular data traffic happens either at home or at work.

    Currently there is an important trend indicating that the evening cellular-data peak is disappearing, coinciding with WiFi peak usage taking over the previous cellular peak hour.

    A great source of WiFi behavioral data, as it relates to Smartphone usage, can be found in Thomas Wehmeier's (Principal Analyst, Informa: @Twehmeier) two pivotal white papers, “Understanding Today's Smartphone User” Part I and Part II.

    The above daily cellular-traffic profile, combined with the below Figure on cellular-data usage per customer distributed across network cells,

    traffic_over_network_distribution

    shows us something important when it comes to small cells:

    • Most cellular data traffic (per user) is limited to very few cells.
    • 80% (50%) of the cellular data traffic (per user) is limited to 3 (1) main cells.
    • The higher the cellular data usage (per user) the fewer cells are being used.

    It is not only important to understand how data traffic (on a per-user basis) behaves across the cellular network. It is likewise very important to understand how cellular-data traffic multiplexes or aggregates across the cells in the mobile network.

    We find in most Western European Mature 3G networks the following trend:

    traffic_over_cell_distribution

    • 20% of the 3G Cells carry 60+% of the 3G data traffic.
    • 50% of the 3G Cells carry 95% or more of the 3G data traffic.

    Thus relatively few cells carry the bulk of the cellular data traffic. Not surprising really, as this trend was even more skewed for GSM voice.
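
    For readers who want to check this kind of concentration statistic against their own cell-level traffic counts, a minimal sketch (the per-cell traffic below is synthetic, drawn from a skewed log-normal distribution, and is not real network data):

    ```python
    import numpy as np

    # Synthetic per-cell traffic volumes; real networks are similarly skewed,
    # but these numbers are illustrative only.
    rng = np.random.default_rng(42)
    cell_traffic = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

    # Sort cells from busiest to quietest and compute the cumulative traffic share.
    cumulative_share = np.sort(cell_traffic)[::-1].cumsum() / cell_traffic.sum()

    for top_fraction in (0.20, 0.50):
        idx = int(top_fraction * len(cumulative_share)) - 1
        print(f"top {top_fraction:.0%} of cells carry {cumulative_share[idx]:.0%} of the traffic")
    ```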

    The above trends are all good news for Small Cell deployment. They provide confidence that small cells can be an effective means of taking traffic away from macro-cellular areas where there is no longer an option for conventional capacity expansion (i.e., sectorization, additional carriers or conventional cell splits).

    For the Mobile Network Operator, Small Cell Economics is a Total Cost of Ownership exercise comparing Small Cell Network Deployment  to other means of adding capacity to the existing mobile network.

    The Small Cell Network needs (at least) to be compared to the following alternatives;

    1. Greenfield Macro-cellular solutions (assuming this is feasible).
    2. Overlay (co-locate) on existing network grid.
    3. Sectorization of an existing site solution (i.e., moving from 3 sectors to 3 + n on same site).

    Obviously, in the “extreme” cellular-demand limit where none of the above conventional means of providing additional cellular capacity are feasible, Small Cell deployment is the only alternative (besides doing nothing and letting the customer suffer). Irrespective, we still need to understand how the economics will work out, as there might be instances where the most reasonable strategy is to let your customer “suffer” best-effort services. This would in particular be the case if there is no real competitive and incremental Topline incentive to adding more capacity.

    However,

    Competitive circumstances could force some spectrum-starved operators to deploy small cells irrespective of it being financially unfavorable to do so.

    Let's begin with the cost structure of a macro-cellular 3G Greenfield Rooftop Site Solution. We take the relevant cost structure of a configuration that we would be most likely to encounter in a Hot Traffic Zone / Metropolitan high-population-density area, which is also likely to be a candidate area for Small Cell deployment. The Figure below shows the Total Cost of Ownership, broken down into Annualized Capex and Annual Opex, for a Metropolitan 3G macro-cellular rooftop solution:

    tco_greenfield_rooftop_site

    Note 1: The annualized Capex has been estimated assuming 5 years for RAN Infra, Backhaul & Core, and 10 years for Build. It is further assumed that the site is supported by leased-fiber backhaul. Opex is the annual operational expense for maintaining the site solution.

    Note 2: Operations Opex category covers Maintenance, Field-Services, Staff cost for Ops, Planning & optimization. The RAN infra Capex category covers: electronics, aggregation, antenna, cabling, installation & commissioning, etc..

    Note 3: The above-illustrated cost structure reflects what one should expect from a typical European operation. North American or APAC operators will have different cost distributions, though these are not expected to change the conclusions substantially (just redo the math).
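
    To make Note 1 concrete, a small sketch of the straight-line annualization; only the depreciation periods come from the note, while the per-category capital amounts are hypothetical placeholders chosen to land near the ~15,000 EUR annualized Capex used in the example further below:

    ```python
    # Straight-line annualization of site Capex, per Note 1 above.
    # Depreciation periods (5y for RAN infra, backhaul & core, 10y for build) are from the note;
    # the per-category capital amounts are hypothetical placeholders.

    capex_eur = {"ran_infra": 35_000, "backhaul": 10_000, "core": 5_000, "build": 50_000}
    depreciation_years = {"ran_infra": 5, "backhaul": 5, "core": 5, "build": 10}

    annualized_eur = {k: capex_eur[k] / depreciation_years[k] for k in capex_eur}
    print(annualized_eur)                                                           # per category, EUR/yr
    print(f"Total annualized Capex: {sum(annualized_eur.values()):,.0f} EUR/yr")    # 15,000
    ```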

    When discussing Small Cell deployment, particularly WiFi-based small cell deployment, with Infrastructure Suppliers as well as Chip Manufacturers, you will get the impression that Small Cell deployment is Almost Free of Capex and Opex; i.e., hardly any build cost, free backhaul and extremely cheap infrastructure, supported by no site rental, little maintenance and ultra-low energy consumption.

    Obviously, if Small Cells cost almost nothing, increasing cell site density by 56 times or more becomes very interesting economics … Unfortunately, such ideas are wishful thinking.

    For Small Cells not to substantially pressure margins and cash, Small Cell Cost Scaling needs to be very aggressive. If we talk about a 56x increase in cell site density, the incremental total cost of ownership per small cell should be at least 56 times lower than that of a macro-cellular expansion. Though let's not fool ourselves!

    No mobile operator would densify their macro cellular network 56 times if absolute cost would proportionally increase!

    No Mobile operator would upsize their cellular network in any way unless it is at least margin, cost & cash neutral.

    (I have no doubt that out there some are making relative business cases for small cells, comparing an equivalent macro-cellular expansion versus deploying Small Cells and coming up with great cases … This would be silly of course, not that this has ever prevented such cases from being made and presented to Boards and CxOs).

    The most problematic cost areas from a scaling perspective (relative to a macro-cellular Greenfield Site) are (a) Site Rental (lamp posts, shopping malls, …), (b) Backhaul Cost (if relying on Cable, xDSL or Fiber connectivity), (c) Operational Cost (complexity in numbers, safety & security) and (d) Site Build Cost (legal requirements, safety & security, …).

    In most realistic cases (that I have seen) we will find a 1:12 to 1:20 Total Cost of Ownership difference between a Small Cell's unit cost and that of a Macro-Cellular Rooftop. While unit Capex can be reduced very substantially, the Operational Expense scaling is a lot harder to get down to the level required for very extensive Small Cell deployments.

    EXAMPLE:

    For a typical metropolitan rooftop (in Western Europe) we have an annualized capital expense (Capex) of ca. 15,000 Euro and operational expenses (Opex) in the order of 30,000 Euro per annum. The site-related Opex distribution would look something like this:

    • Macro-cellular Rooftop 3G Site Unit Annual Opex:
    • Site lease would be ca. 10,500EUR.
    • Backhaul would be ca. 9,000EUR.
    • Energy would be ca. 3,000EUR.
    • Operations would be ca. 7,500EUR.
    • i.e., total unit Opex of 30,000EUR (for average major metropolitan area)

    Assuming that all cost categories could be scaled back by a factor of 56 (note: it is a very big assumption that all cost elements can be scaled back by the same factor!), the targets would be:

    • Target Unit Annual Opex cost for a Small Cell:
    • Site lease should be less than 200EUR (lamp post leases substantially higher)
    • Backhaul should be  less than 150EUR (doable though not for carrier grade QoS).
    • Energy should be less than 50EUR (very challenging for todays electronics)
    • Operations should be less than 150EUR (ca. 1 hour FTE per year … challenging).
    • Annual unit Opex should be less than 550EUR (not very likely to be realizable).

    Similarly, the Small Cell unit Capital expense (annualized Capex) would need to come in at less than 270EUR to be fully scalable versus a macro-cellular rooftop (i.e., based on 56 times scaling).

    • Target Unit Annualized Capex cost for a Small Cell:
    • RAN Infra should be less than 100EUR (Simple WiFi maybe doable, Cellular challenging)
    • Backhaul would be less than 50EUR (simple router/switch/microwave maybe doable).
    • Build would be less than 100EUR (very challenging even to cover labor).
    • Core would be less than 20EUR (doable at scale).
    • Annualized Capex should be less than 270EUR (very challenging to meet this target)
    • Note: annualization factor: 5 years for all including Build.

    So we have a Total Cost of Ownership TARGET for a Small Cell of ca. 800EUR per annum.
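
    The arithmetic behind that target, for completeness (simply dividing the macro-cellular rooftop cost structure from the example by the 56x density factor):

    ```python
    # Reproducing the scaling arithmetic of the example above: dividing the
    # macro-cellular rooftop cost structure by the 56x density factor.

    macro_opex_eur = {"site_lease": 10_500, "backhaul": 9_000, "energy": 3_000, "operations": 7_500}
    macro_annualized_capex_eur = 15_000
    scaling_factor = 56

    opex_targets = {k: v / scaling_factor for k, v in macro_opex_eur.items()}
    capex_target = macro_annualized_capex_eur / scaling_factor
    tco_target = sum(opex_targets.values()) + capex_target

    print({k: round(v) for k, v in opex_targets.items()})          # ~188, 161, 54, 134 EUR
    print(f"Annualized Capex target: ~{capex_target:.0f} EUR")      # ~268 EUR
    print(f"TCO target per small cell: ~{tco_target:.0f} EUR/yr")   # ~804 EUR
    ```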

    Inspecting the various capital as well as operational expense categories illustrates the huge challenge of being TCO-comparable to a macro-cellular urban/dense-urban 3G-site configuration.

    Massive Small Cell Deployment needs to be almost without incremental cost to the Mobile Network Operator to be a reasonable scenario for the 1,000 times challenge.

    Most of the analysis I have seen, as well as carried out myself, on real cost structures and aggressive pricing & solution designs shows that if the Small Cell Network can be kept between 12 and 20 Cells (or Nodes), its TCO compares favorably to (i.e., beats) that of an equivalent macro-cellular solution. If the Mobile Operator is also a Fixed Broadband Operator (or has a favorable partnership with one), better cost scaling is in general possible than assumed above (e.g., another AT&T advantage in their DAS / Small Cell strategy).
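
    A tiny sketch of that break-even logic, using the 1:12 to 1:20 unit-TCO range from above and the ~45,000 EUR annual macro rooftop TCO from the example:

    ```python
    # Break-even logic: if a small cell's unit TCO is 1/R of a macro rooftop's,
    # then roughly R small cells cost the same as one macro-cellular expansion.
    macro_tco_eur_per_year = 45_000          # ~15k annualized Capex + ~30k Opex (example above)

    for ratio in (12, 20):                   # the 1:12 to 1:20 unit-TCO range quoted above
        small_cell_tco = macro_tco_eur_per_year / ratio
        print(f"1:{ratio} -> small cell unit TCO ~{small_cell_tco:,.0f} EUR/yr; "
              f"break-even at ~{ratio} small cells per macro expansion")
    ```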

    In realistic costing scenarios so far, the Small Cell economic boundaries are given by the Figure below:

    Let me emphasize that the above obviously assumes that an operator has a choice between deploying a Small Cell Network and a conventional Cell Split, Nodal Overlay (or co-location on an existing cellular site) or Sectorization (if spectral capacity allows). In the future, and in Hot Traffic Zones, this might not be the case, leaving Small Cell Network deployment, or letting the customers “suffer” poorer QoS, as the only options left to the mobile network operator.

    So how can we (i.e., the Mobile Operator) improve the Economics of Small Cell deployment?

    Having access to fixed broadband, such as fiber or high-quality cable infrastructure, would make the backhaul scaling a lot better. Being both a mobile and fixed broadband provider does become very advantageous for Small Cell Network Economics. However, the site lease (and maintenance) scaling remains a problem, as lampposts or other interesting Small Cell locations might not scale very aggressively (e.g., there are examples of lamppost leases being as expensive as regular rooftop locations). From a capital investment point of view, I have my doubts whether prices will scale downwards as favorably as they would need to. Much of the capacity gain comes from very sophisticated antenna configurations, which are difficult to see becoming extremely cheap:

    Small Cell Equipment Suppliers would need to provide a Carrier-grade solution priced at a maximum of 1,000EUR, all included, to have a fighting chance of making massive small cell network deployment really economical!

    We could assume that most of the “Small Cells” are in fact customers' existing private access points (or our customers' employers' access points) and simply push (almost) all cellular data traffic onto those whenever a customer is in the vicinity of one. All those existing and future private access points are (at least in Western Europe) connected to at least fairly good-quality fixed backhaul in the form of VDSL, Cable (DOCSIS3), and eventually Fiber. This would obviously improve the TCO of “Small Cells” tremendously … Right?

    Well, it would reduce the MNO's TCO (as it shifts the cost burden to the operator's customers or to the employers of those customers) … but this picture would not really be Small Cells in the sense of properly designed and integrated cells in the Cellular sense of the word, providing the operator end-2-end control of its customers' service experience. In fact, taking the above scenario to the extreme, we might not need Small Cells at all, in the Cellular sense, or at least dramatically fewer than the standard cellular capacity formula above would suggest.

    In Qualcomm's (as well as many infrastructure suppliers') ultimate vision, the 1,000x challenge is solved by moving towards a super-heterogeneous network that consists of everything from Cellular Small Cells and Public & Private WiFi access points to Femto cells thrown into the equation as well.

    Such an ultimate picture might indeed make the Small Cell challenge economically feasible. However, it very fundamentally changes the current operational MNO business model, and it is not clear that this transition comes without cost and only benefits.

    Last but not least, it is pretty clear that instead of 3 – 5 MNOs all going out plastering walls and lampposts with Small Cell Nodes & Antennas, sharing might be an incredibly clever idea. In fact, I would not be altogether surprised if we see new independent business models providing Shared Small Cell solutions for incumbent Mobile Network Operators.

    Before closing the Blog, I do find it instructive to pause and reflect on lessons from Japan's massive WiFi deployment. It might serve as a lesson for massive Small Cell Network deployment as well, and an indication that collaboration might be a lot smarter than competition when it comes to such deployments:

    softband_wifi_deployment