On Cellular Data Pricing, Revenue & Consumptive Growth Dynamics, and Elephants in the Data Pipe.

I am getting a bit sentimental, as I haven’t written much about cellular data consumption for the last 10+ years. Back then, it did not take long for most folks in and out of our industry to believe that data traffic, and thereby, so many thought, the total cost of providing cellular data, would grow far beyond the associated data revenues, e.g., remember the famous scissor chart from the early twenty-tens. Many believed (then) that cellular data growth would be the undoing of the cellular industry. In 2011, many believed that the industry only had a few more years before the total cost of providing cellular data would exceed the revenue, rendering cellular data unprofitable. Ten years later, our industry remains alive and kicking (though it might not want to admit it too loudly).

Much of the past fear was due to not completely understanding the technology cost drivers, e.g., bits per second (speed) is a driver, while the bytes that price plans were structured around are not so much. The initially huge growth rates of observed data consumption did not make the unease smaller, often forgetting that a little more can be represented as a huge growth rate when you start with almost nothing. Moreover, we also had big scaling challenges with 3G data delivery. It quickly became clear that 3G was not what it had been hyped to be by the industry.

And … despite the historical evidence to the contrary, there are still, to this day, many industry insiders who believe that a byte lost or gained translates linearly into revenue lost or gained. Our brains prefer straight lines and linear thinking, happily ignoring the unpleasantries of the non-linear world around us, often created by ourselves.

Figure 1 illustrates linear or straight-line thinking (left side), preferred by our human brains, contrasted with the often non-linear reality (right side). It should be emphasized that horizontal and vertical lines, although linear, are not typically something that instinctively enters the cognitive process of assessing real-world trends.

Of course, if the non-linear price plans for cellular data were as depicted above in Figure 1, such insiders would be right even if anchored in linear thinking (i.e., even in the non-linear example to the right, an increase in consumption (GBs) leads to an increase in revenue). However, when it comes to cellular data price plans, the price vs. consumption relationship is much more “beastly,” as shown below in Figure 2:

Figure 2 illustrates the two most common price plan structures in Telcoland; (a, left side) the typical step-function price logic that associates a range of data consumption with a price point, i.e., the price is a constant, independent of the consumption, over the data range. The price level is presented as price versus the maximum allowed consumption. This is by far the most common price plan logic in use. (b, right side) The “unlimited” price plan logic has one price level and allows for unlimited data consumption. T-Mobile US, Swisscom, and SK Telecom have all embraced unlimited plans and are good examples of such pricing logic. The interesting fact is that most of those operators have several levels of unlimited tied to consumptive behavior, where above a given limit, the customer may be throttled (i.e., the speed will be reduced compared to before reaching the limit), or (and!) the unlimited plan is tied to either a radio access technology (e.g., 4G, 4G+5G, 5G) or a given speed (e.g., 50 Mbps, 100 Mbps, 1 Gbps, ..).

Most cellular data price plans follow a step-function-like pricing logic as shown in Figure 2 (left side), where within each level, the price is constant up to the nominal data consumption value (i.e., purple dot) of the given plan, irrespective of the actual consumption. The most extreme version of this logic is the unlimited price plan, where the price level is independent of the volumetric data consumption. “Funnily” enough, though, many operators have designed unlimited price plans that, in one way or another, depend on the customers’ consumption, e.g., after a certain level of unlimited consumption (e.g., 200 GB), cellular speed is throttled substantially (at least if the cell from which the customer demands resources is congested). So the “logic” is that if you want truly unlimited, you still need to pay more than if you only require “unlimited”. Note, for the mathematically inclined, that the step function is regarded as (piece-wise) linear … although our linear brains might not appreciate that finesse very much. Maybe the heuristic “The brain thinks in straight lines” would be more precisely restated as “The brain thinks in continuous, non-constant, monotonic straight lines”.

Any increase in consumption within a given pricing-consumption level will not result in any additional revenue. Most price plans allow for considerable growth without generating additional associated revenue.
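As a minimal illustration of this step-function logic in code (the tier limits and prices below are taken from the example plan discussed later in this post, Figures 10 and 11; the levels not spelled out there are simply omitted, and the 320 overshoot price is the one assumed in Figure 11):

```python
# Minimal sketch of a step-function cellular data price plan.
# Tiers follow the illustrative plan used later in this post (Figures 10-11);
# levels whose prices are not spelled out there are omitted here.
PLAN = [(3, 20), (5, 30), (12, 50), (35, 100), (200, 160)]  # (GB limit, monthly price)

def monthly_price(consumption_gb: float) -> int:
    """Price of the cheapest level whose limit covers the month's consumption."""
    for limit_gb, price in PLAN:
        if consumption_gb <= limit_gb:
            return price
    # Beyond the largest level: assume a second 200 GB plan is bought (2 x 160 = 320),
    # as in the 9th level of Figure 11.
    return 2 * PLAN[-1][1]

# Growth within a level generates no additional revenue:
print(monthly_price(8), monthly_price(12))  # -> 50 50  (+50% consumption, +0% revenue)
```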

NETHERLANDS vs INDONESIA – BRIEFLY.

I like to keep informed and updated about markets I have worked in and operators I have worked for and with. Having worked across the globe in many very diverse markets and with operators in vastly different business cycles gives an interesting perspective on our industry. Throughout my career, I have been super interested in the difference between telco operations and strategies in so-called mature markets versus what today may be much more of a misnomer than 10+ years ago, emerging markets.

The average cellular consumption per customer in Indonesia, without WiFi, was ca. 8 GB per month in 2022. That consumption would cost around 50 thousand Rp (ca. 3 euros) per month. For comparison, in The Netherlands, that consumption profile would cost a consumer around 16 euros per month. As of May 2023, the median cellular download speed in The Netherlands was 106 Mbps (i.e., helped by countrywide 5G deployment; for 4G only, the speed would be around 60 to 80 Mbps), compared with 22 Mbps in Indonesia (where 5G has only just been launched). Interestingly, although most likely coincidental, an Indonesian cellular data customer would pay ca. 5 times less than a Dutch customer for the same volumetric consumption. Note that for 2023, the average annual income in Indonesia is about one-quarter of that in the Netherlands. However, the Indonesian cellular consumer would also get one-fifth of the quality, measured by the downlink speed from the cellular base station to the consumer’s smartphone.

Let’s go deeper into how effectively consumptive growth of cellular data is monetized … what may impact the consumptive growth, positively and negatively, and how it relates to the telco’s topline.

CELLULAR BUSINESS DYNAMICS.

Figure 3 Between 2016 and 2021, Western European telcos lost almost 7% of their total cellular turnover (ca. 7+ billion euros over the markets I follow). This corresponds to a total revenue loss of ca. 1.4% per year over the period. To no surprise, the loss of cellular voice-based revenue has been truly horrendous, with an annual loss of ca. 30%, although the Covid years (2021 and 2022, for that matter) were good to voice revenues (as we found ourselves confined to our homes and a call away from our colleagues). On the positive side, cellular data-based revenues have “positively” contributed to the revenue in Western Europe over the period (we don’t really know the counterfactual), with an annual growth of ca. 4%. Since 2016, cellular data revenues have exceeded cellular voice revenues and are in 2022 expected to be around 70% of the total cellular revenue (for Western Europe). Cellular revenues have been and remain under pressure, even with a positive contribution from cellular data. Cellular data volume (not including the contribution generated from WiFi usage) has continued to grow at a 38% annualized growth rate and is today (i.e., 2023) more than five times that of 2016. The annual growth rate of cellular data consumption per customer is somewhat lower, ranging from the mid-twenties to the high-thirties percent. Needless to say, the corresponding cellular ARPU has not experienced anywhere near similar growth. In fact, cellular ARPU has generally declined over the period.
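As a quick sanity check on the quoted rates (a back-of-the-envelope of mine, using only the figures mentioned above):

$$ (1+0.38)^{5} \approx 5.0 \qquad \text{(38% annualized volume growth over 2016-2021 gives the “more than five times” of 2016)} $$

$$ (1-0.014)^{5} \approx 0.93 \qquad \text{(a ca. 1.4% annual revenue decline compounds to the ca. 7% total loss of cellular turnover)} $$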

Some, in my opinion, obvious observations are worth making on cellular data (I have come to realize that although I find these obvious, I am often confronted with a lack of awareness or understanding of them):

Cellular data consumption grows much (much) faster than the corresponding data revenue (i.e., 38% vs 4% for Western Europe).

The unit growth of cellular data consumption does not lead to the same unit growth in the corresponding cellular data revenues.

Within most finite cellular data plans (i.e., not the unlimited ones), substantial data growth potential can be realized without resulting in a net increase in data-related revenues. This is, of course, trivially true for unlimited plans.

The anticipated death of the cellular industry back in the twenty-tens was an exaggeration. The industry’s death by signaling, voluptuous & unconstrained volumes of demanded data, and ever-decreasing euros per byte remains a fading memory and, of course, lives on in PowerPoints of that time (I have provided some of my own from that period below). A good scare does wonders to stimulate innovation to avoid “Armageddon.” The telecom industry remains alive and well.

Figure 4 The latest data (up to 2022) from the OECD on mobile data consumption dynamics. Source data can be found at OECD Data Explorer. The data illustrates the slowdown in cellular data growth, both from a per-customer perspective and in terms of total generated mobile data. Looking over the period, the 5-year cumulative growth rate between 2016 and 2021 is higher than that between 2017 and 2022, and the year-on-year growth rate from 2021 to 2022 was, in general, even lower. This indicates a general slowdown in mobile data consumption as 4G consumption (in Western Europe) saturates and 5G consumption is still picking up. Although this is not an account of the observed growth dynamics over the years, given the data for 2022 was just released, I felt it was worth including these for completeness. Unfortunately, I have not yet acquired the cellular revenue structure (e.g., voice and data) for 2022; that remains work in progress.

WHAT DRIVES CONSUMPTIVE DATA GROWTH … POSITIVE & NEGATIVE.

What drives the consumer’s cellular data consumption? As I have done with my teams for many years, a cellular operator with data analytics capabilities can easily verify the list below of positive and negative contributors driving cellular data consumption.

Positive Growth Contributors:

  • Customer or adopter uptake. That is, new or old customers that go from being non-data to data customers (i.e., adopting cellular data).
  • Increased data consumption (i.e., usage per adopter) within the cellular data customer base, driven by many of the enablers below:
  • Affordable pricing and suitable price plans.
  • More capable Radio Access Technology (RAT), e.g., HSDPA → HSPA+ → LTE → 5G, and effectively higher spectral efficiency from advanced antenna systems. This will typically drive up per-customer data consumption, to the extent that pricing is not a barrier to usage.
  • More available cellular frequency spectrum provisioned on the best RAT (in terms of spectral efficiency).
  • Good enough cellular network consistent with customer demand.
  • Affordable and capable device ecosystem.
  • Faster mobile device CPU leads to higher consumption.
  • Faster & more capable mobile GPUs lead to higher consumption.
  • Device screen size. The larger the screen, the higher the consumption.
  • Access to popular content and social media.

Figure 5 illustrates data growth as depending on the uptake of Adopters, with an associated growth rate α(t), multiplied by the Usage per Adopter, with an associated usage growth rate μ(t). The growth of Adopters can typically be approximated by an S-curve, reaching its maximum as there are few customers left to adopt a new service, product, or RAT (i.e., α(t)→0%). As described in this section, the growth of usage per adopter, μ(t), will depend on many factors. Our intuition is that μ is positive for cellular data and historically has exceeded 30%. A negative μ would be an indication of consumptive churn. It should not be surprising that overall cellular data consumption growth can be very large while the Adopter growth rate is at its peak (i.e., around the S-curve inflection point) and Usage growth is high as well. It also should not be too surprising that after Adopter uptake has reached the inflection point, the overall growth will slow down and eventually be driven by the Usage per Adopter growth rate.
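In formula form (my notation, not necessarily that of the figure), total consumption V(t) is the product of the number of Adopters A(t) and the Usage per Adopter U(t), and for modest rates the overall growth rate is approximately the sum of the two component growth rates:

$$ V(t) = A(t)\,U(t), \qquad 1+g_V = \bigl(1+\alpha(t)\bigr)\bigl(1+\mu(t)\bigr) \;\Rightarrow\; g_V = \alpha + \mu + \alpha\mu \approx \alpha + \mu . $$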

Figure 6 Using the OECD data (OECD Data Explorer) for Western European mobile data consumption per customer from 2011 to 2022, the above illustrates the annual growth rate of per-customer mobile data consumption. Mobile data consumption is a blend of usage across the various RATs enabling packet data usage. There is a clearly increased annual growth after the introduction of LTE (4G), followed by a slowdown in annual growth, possibly due to reaching saturation in 4G adoption, i.e., α3G→4G(t) → 0%, leaving μ4G(t) to drive the cellular data growth. There is a relatively weak increase in 2021, and although the timing coincides with the 5G non-standalone (NSA) introduction (typically at 700 MHz or dynamic spectrum sharing (DSS) with 4G, e.g., Vodafone-Ziggo NL using their 1800 MHz for 4G and 5G), the increase may be better attributed to the Covid lockdowns than to a spurt in data consumption due to the 5G NSA introduction.

Anything that creates more capacity and quality (e.g., increased spectral efficiency, more spectrum, a new and more capable RAT, better antennas, …) will, in general, result in increased usage, overall as well as on a per-customer basis (remember, most price plans allow for substantial growth within the plan’s data-volume limit without incurring more cost for the customer). If one takes the counterfactual of the above, it should not be surprising that this would result in slower or negative consumption growth.

Negative growth contributors:

  • Cellular congestion causes increased packet loss, retransmissions, and deteriorating latency and speed performance. All in all, congestion may have a substantial negative impact on the customer’s service experience.
  • Throttling policies will always lower consumption and usage in general, as quality is intentionally lowered by the Telco.
  • Increased share of QUIC content on the network. The QUIC protocol is used by many streaming video providers (e.g., YouTube, Facebook, TikTok, …). The protocol improves performance (e.g., speed, latency, packet delivery, handling of network changes, …) and security. Services using QUIC can “bully” applications that use TCP/IP, encouraging the TCP/IP flows to back off from using bandwidth. In this respect, QUIC is not a fair protocol.
  • Elephant flow dynamics (e.g., a few traffic flows causing cell congestion and service degradation for the many). In general, elephant flows, particularly QUIC-based ones, will cause an increase in TCP/IP data packet retransmissions and timing penalties. It is very much a situation where a few traffic flows cause significant service degradation for many customers.

One of the manifestations of cell congestion is packet loss and packet retransmission. Packet loss due to congestion ranges from 1% to 5%, or even several times higher at moments of peak traffic or if the user is in a poor cellular coverage area. The higher the packet loss, the worse the congestion, and the worse the customer experience. The underlying IP protocols will attempt to recover a lost packet by retransmission. The retransmission rate can easily exceed 10% to 15% in case of congestion. Generally, for a reliable and well-operated network, the packet loss should be well below 1% and even as low as 0.1%. Likewise, one would expect a packet retransmission rate of less than 2% (I believe the target should be less than 1%).

Thus, customers who happen to be under a given congested cell (e.g., caused by an elephant flow) would incur a substantially higher rate of retransmitted data packets (i.e., 10% to 15% or higher) as the TCP/IP protocol tries to make up for the lost data packets. The customer may experience substantial service quality degradation and, as a final (unintended) “insult”, often be charged for those additional retransmitted data volumes.

From a cellular perspective, once the congestion has been relieved, the cellular operator may observe that the volume on the previously congested cell actually drops. The reason is that the packet loss and retransmissions drop to a level far below that of the congested state (e.g., typically below 1%). As the quality improves for all customers demanding service from the previously overloaded (i.e., congested) cell, sustainable volume growth will commence, in total as well as in the average consumption per customer. As will be shown below, for normal cellular data consumption and most (if not all) price plans, a few percentage points’ drop in data volume will not have any meaningful effect on revenues. Either the (temporary) drop happens within the boundaries of a given price plan level and thus has no effect on revenue, or the overall gainful consumptive growth, as opposed to data volume attributed to poor quality, far exceeds the volume loss from improving the capacity and quality of a congested cell.

Well-balanced and available cellular sites will experience positive and sustainable data traffic growth.

Congested and over-loaded cellular sites will experience a negative and persistent reduction of data traffic.

Actively managing the few elephant flows and their negative impact on the many will increase customer satisfaction, reduce consumptive churn, and increase data growth, easily compensating for the loss of the congestion-induced volume that packet retransmissions represent. And unless an operator is consistently starved of radio access investment, or has poor radio access capacity management processes, most cell congestion can be attributed to the so-called elephant flows.

CELLULAR DATA CONSUMPTION IN REAL NETWORKS – ON A SECTOR LEVEL.

And irrespective of what drives positive and negative growth, it is worth remembering that daily traffic variations, on a sector-by-sector basis as well as at the overall cellular network level, are entirely natural. An illustration of such natural sector variation over a (non-holiday) week is shown below in Figure 7 (c) for a sector in the top 20% of busiest sectors. In this example, the median variation over all sectors in the same week, as shown below, was around 10%. I often observe that even telco people (who should know better) find this natural variation quite worrisome, as it appears counterintuitive to their linear growth expectations. Proper statistical measurement & analysis methodologies must be in place if inferences and solid analysis are required on a sector (or cell) basis over a relatively short time period (e.g., a day, days, a week, weeks, …).

Figure 7 illustrates the daily variation of cellular data consumption over a (non-holiday) week. In the above, there are three examples: (a) a sector from the bottom 20% in terms of carried volume, (b) a sector with a median data volume, and (c) a sector taken from the top 20% of carried data volume. Across the three different sectors (low, median, high), we observe very different variations over the weekdays; from the top 20%, with an almost 30% variation between the weekly minimum (Tuesday) and the weekly maximum (Thursday), to the bottom 20%, with a variation in excess of 200% over the week. The charts above show another trend we observe in cellular networks regarding consumptive variations over time: busy sectors tend to have a lower weekly variation than less busy sectors. I should point out that I have made no effort to select particular sectors. I could easily find some (of the less busy sectors) with even wilder variations than shown above.

The day-to-day variation occurs naturally, based on the dynamic behavior of the customers served by a given sector or cell (in a sector). I am frequently confronted with technology colleagues (whom I respect for their deep technical knowledge) who appear to expect (data) traffic at all levels to increase monotonically with a daily growth rate that amounts to the annual CAGR observed by comparing the end-of-period volume level with the beginning-of-period volume level. Most have not bothered to look at actual network data and do not understand (or, to put it more nicely, simply ignore) the natural statistical behavior of traffic that drives hourly, daily, weekly, and monthly variations. If you let statistical variations that you have no control over drive your planning & optimization decisions, you will likely fail to decide on the business-critical ones you can control.

An example of a high-traffic (top-20%) sector’s complete 365-day variation of data consumption is shown below in Figure 8. We observe that the average consumption (or traffic demand) increases nicely over the year, with a bit of a slowdown (in this European example) during the summer vacation season (and similarly around official holidays in general). Seasonal variation occurs naturally and will often result in a lower-than-usual daily growth rate and a change in daily variations. In the sector traffic example below, Tuesdays and Saturdays are (typically) lower than the average, and Thursdays are higher than average. The annual growth is positive despite the consumptive lows over the year, which would typically freak out my previously mentioned industry colleagues. Of course, every site, sector, and cell will have a different yearly growth rate, most likely close to a normal distribution around the gross annual growth rate.

Figure 8 illustrates a top-20% sector’s data traffic growth dynamics (in GB) over a calendar year’s 365 days. Tuesdays and Saturdays are likely to be below the weekly average data consumption, and Thursdays are more likely to be above. Furthermore, daily traffic growth slows around national holidays and during the summer vacation (i.e., July & August for this particular Western European country).

And to nail down the message: as shown in the example in Figure 9 below, every sector in your cellular network will, from one time period to the next, have a different, positive or negative, growth rate. The net effect over time (in terms of months rather than days or weeks) is positive as long as customers adopt the supplied RAT (i.e., if customers are migrating from 4G to 5G, it may very well be that the 4G consumed data declines while the 5G consumed data increases) and, of course, as long as the provided quality is consistent with the expected and demanded quality, i.e., sectors with congestion, particularly the so-called elephant-flow-induced congestion, will hurt the quality of the many, who may reduce their consumptive behavior and eventually churn.

Figure 9 illustrates the variation in growth rates across 15+ thousand sectors in a cellular network, comparing the demanded data volume between two consecutive Mondays per sector. Statistical analysis of the above data shows that the overall average value is ca. 0.49%, slightly skewed towards the positive growth rates (e.g., if you compared a Monday with a Tuesday, the histogram would typically be skewed towards the negative side of the growth rates, as Tuesday is a lower-traffic day than Monday). Also, at the risk of pointing out the obvious, the daily and weekly growth rates expected from an annual growth rate of, for example, 30% are relatively minute, at ca. 0.07% and 0.49%, respectively.
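The conversion behind those “minute” numbers is simply the compound-growth relation; with 30% as the example annual rate, the implied daily and weekly rates are

$$ g_{\text{day}} = 1.30^{1/365} - 1 \approx 0.07\%, \qquad g_{\text{week}} = 1.30^{1/52} - 1 \approx 0.5\%, $$

consistent with the ca. 0.07% and 0.49% quoted above.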

The examples above (Figures 7, 8, and 9) are from a year in the past when Verstappen had yet to win his first F1 championship. That particular weekend also did not feature an F1 race (or Sunday would have looked very different … i.e., much higher) or any other big sports event.

CELLULAR DATA PRICE PLAN LOGIC.

Figure 10 above is an example of the structure of a price plan, possibly represented slightly differently from how your marketeer would do it (and I am at peace with that). The upper left chart illustrates a price plan with 8 data-volume intervals, each with its own price level. This we can also write as (following the terminology of the lower right corner):
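In generic notation (my symbols, as the figure’s exact terminology is not reproduced here), with d_i denoting the consumption limit of level i (in GB, with d_0 = 0) and p_i its monthly price, the step-function logic reads:

$$ P(v) \;=\; p_i \qquad \text{for } d_{i-1} < v \le d_i, \quad i = 1,\ldots,8 . $$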

Thus, the p_1 package, allowing the customer to consume up to 3 GB, is priced at 20 (irrespective of whether the customer consumes less). For package p_5, a consumer would pay 100 for a data consumption allowance of up to 35 GB. Of course, we assume that a consumer choosing this package would generally consume more than 24 GB, which is the limit of the next cheaper package (i.e., p_4).

The price plan example above clearly shows that each price level offers customers room to grow before upgrading to the next level. For example, a customer consuming no more than 8 GB per month, fitting into p_3, could increase consumption by 4 GB (+50%) before having to consider the next price plan level (i.e., p_4). This is just to illustrate that even if the customer’s consumption grows substantially, one should not per se expect more revenue.

Even though it should be reasonably straightforward that substantial growth in a customer base’s data consumption cannot be expected to lead to an equivalent growth in revenue, many telco insiders instinctively believe this should be the case. I believe the error may be due to many mentally linearizing the step-function price plans (see Figure 2, upper right side) and simply (but erroneously) believing that any increase (or decrease) in consumption directly results in an increase (or decrease) in revenue.

DATA PRICING LOGIC & USAGE DISTRIBUTION.

If we want to understand how consumptive behavior impacts cellular operators’ toplines, we need to know how the actual consumption is distributed across the pricing logic. As a high-level illustration, Figure 11 (below) shows the data price step-function logic from Figure 10 with an overall consumptive distribution superimposed (orange solid line). It should be appreciated that while this provides a fairly clear way of associating consumption with pricing, it is an oversimplification at best. It will nevertheless allow me to crudely estimate the number of customers likely to have chosen a particular price plan matching their demand (and affordability). In reality, we will have customers that have chosen a given price plan but consume less than the limit of the next cheaper plan (and thus, if consistently so, could save by moving to that plan). We will also have customers that consume more than their allowed limit. Usually, this would result in the operator throttling the speed and sending a message to the customer that the consumption exceeds the limit of the chosen price plan. If a customer consistently overshoots the limit (by a given margin) of the chosen plan, it is likely that, eventually, the customer will upgrade to the next more expensive plan with a higher data allowance.

Figure 11 above illustrates, on the left side, a consumptive distribution (orange line), identified by its mean and standard deviation, superimposed on our price plan step-function logic example. The right side summarizes the consumptive distribution across the eight price plan levels. Note that there is a 9th level in case the 200 GB limit is breached (0.2% in this example). I am assuming that such customers pay twice the price of the 200 GB price plan (i.e., 320).

In the example of an operator with 100 million cellular customers, the consumptive distribution and the given price plan lead to a total of 7+ billion per month. However, with a consumptive growth rate of 30% to 40% annually per active cellular data user (on average), what kind of growth should we expect from the associated cellular data revenues?

Figure 12 In the above illustration, I have mapped the consumptive distribution to the price plan levels and then developed the beginning-of-period consumptive distribution (i.e., the light green curve) month by month until month 12 is reached (i.e., the yellow curve). I assume the average monthly consumptive cellular data growth is 2.5%, or ca. 35% after 12 months. Furthermore, I assume that the few customers falling outside the 200 GB limit will purchase another 200 GB plan. For completeness, the previous 12 months (the previous year) need to be developed as well, to compare the total cumulated cellular data revenue between the current and previous periods.

Within the current period (shown in Figure 12 above), the monthly cellular data revenue CAGR comes out at 0.6%, or a total growth of 7.4% in monthly revenue between the beginning and the end of the period. Over the same period, the average data consumption (per user) grew by ca. 34.5%. Comparing the current year’s total data revenue to the previous year’s total data revenue, we get an annual growth rate of 8.3%. This illustrates that it should not be surprising that revenue growth can be far smaller than consumptive growth, given price plans such as the above.
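For the curious, a scaled-down sketch of this kind of simulation is shown below. The plan levels and prices that the post does not spell out, the lognormal shape, and its parameters are all assumptions of mine; only the 2.5% monthly growth and the step-function logic follow the text above.

```python
import numpy as np

# Price plan: (GB limit, monthly price). Levels marked "assumed" are my own guesses;
# the post only spells out a subset of the 8 limits and prices.
PLAN = [(3, 20), (5, 30), (12, 50),
        (24, 70),            # 24 GB price assumed
        (35, 100),
        (60, 120),           # 60 GB level and price assumed
        (100, 140),          # 100 GB price assumed
        (200, 160)]
OVERSHOOT_PRICE = 2 * 160    # beyond 200 GB: a second 200 GB plan, as in Figure 11

def total_revenue(consumption_gb: np.ndarray) -> float:
    """Map each customer's monthly consumption onto the step-function plan and sum."""
    limits = np.array([limit for limit, _ in PLAN])
    prices = np.array([price for _, price in PLAN] + [OVERSHOOT_PRICE])
    level = np.searchsorted(limits, consumption_gb)   # first level covering the usage
    return float(prices[level].sum())

rng = np.random.default_rng(42)
n_customers = 100_000            # scaled-down stand-in for the post's 100 million
mean_gb, sigma = 15.0, 0.8       # assumed lognormal parameters, not from the post
usage = rng.lognormal(np.log(mean_gb) - sigma**2 / 2, sigma, n_customers)

monthly_growth = 0.025           # 2.5% per month, ca. +34.5% after 12 months (as above)
revenue, volume = [], []
for month in range(13):          # month 0 (initial) .. month 12
    grown = usage * (1 + monthly_growth) ** month
    revenue.append(total_revenue(grown))
    volume.append(grown.mean())

print(f"consumption growth: {volume[-1] / volume[0] - 1:6.1%}")   # ~ +34.5%
print(f"revenue growth    : {revenue[-1] / revenue[0] - 1:6.1%}") # far smaller, plan-dependent
```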

It should be pointed out that the above illustration of consumptive and revenue growth simplifies the growth dynamics. For example, the simulation ignores seasonal swings over the 12-month period. Also, it attributes all consumption falling within a price range 1-to-1 to that particular price level, when there is always spillover into both the upper and lower neighboring levels of a price range that will not incur higher or lower revenues. Moreover, while mapping the consumptive distribution to the price-plan gigabyte intervals makes the simulation faster (and the setup certainly easier), it is also not a very accurate approach, given the coarseness of the intervals.

A LEVEL DEEPER.

While working with just one consumptive distribution, as in Figure 11 and Figure 12 above, allows for simpler considerations, it does not fully reflect the reality that every price plan level will have its own consumptive distribution. So let us go that level deeper and see whether it makes a difference.

Figure 13 above illustrates the consumptive distribution within a given price plan range, e.g., the “5 GB @ 30” price-plan level for customers with a consumption higher than 3 GB and less than or equal to 5 GB. It should come as no surprise that some customers may not even reach the 3 GB, even though they pay for (up to) 5 GB, and some may occasionally exceed the 5 GB limit. In the example above, 10% of customers have a consumption below 3 GB (and could have chosen the next cheaper plan of up to 3 GB), and 3% exceed the limits of the chosen plan (an event that may result in the usage speed being throttled). As the average usage within a given price plan level approaches the ceiling (e.g., 5 GB in the above illustration), the standard deviation will, in general, reduce accordingly, as customers jump to the next more expensive plan to meet their consumptive needs (e.g., the “12 GB @ 50” level in the illustration above).

Figure 14 generalizes Figure 11 to the full price plan and, as illustrated in Figure 12, lets the consumption profiles develop over a 12-month period (the initial and +12-month distributions are shown in the above illustration). The difference between the initial and 12-month distributions can best be appreciated with the four smaller figures that break the price plan levels up into 0 to 40 GB and 40 to 200 GB.

The result in terms of cellular data revenue growth is comparable to that of the higher-level approach of Figure 12 (ca. 8% annual revenue growth vs. a 34% overall consumptive annual growth rate). The detailed approach is, however, more complicated to get working and requires much more real data (which obviously should be available to operators in this day and age). One should note that, in the illustrated example price plan (used in the figures above), at a 2.5% monthly consumptive growth rate (i.e., 34% annually), it would take a customer an average of 24 months (with a spread of 14 to 35 months, depending on the level) to traverse a price plan level from the beginning of the level (e.g., 5 GB) to the end of the level (12 GB). It should also be clear that once a customer enters the highest price plan levels (e.g., 100 GB and 200 GB), little additional revenue can be expected from those customers over their consumptive lifetime.
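The traversal time follows directly from compounding the monthly growth rate g across a level’s boundaries; for example, the 5 GB to 12 GB level sits at the upper end of the quoted 14 to 35 month spread:

$$ n = \frac{\ln(d_{\text{top}}/d_{\text{bottom}})}{\ln(1+g)}, \qquad n_{5 \rightarrow 12\,\text{GB}} = \frac{\ln(12/5)}{\ln(1.025)} \approx 35 \text{ months}. $$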

The illustrated detailed approach shown above is, in particular, useful to test a given price plan’s profitability and growth potential, given the particularities of the customers’ consumptive growth dynamics.

An additional finesse that could be considered in the analysis is an affordability effect, where the growth within a given price level slows down as the average consumption approaches the limit of that price level. This could be modeled by slowing the mean growth rate and allowing the variance to narrow as the density function approaches the limit. In my simpler approach, the consumptive distributions continue to grow at a constant growth rate. In particular, one should consider more sophisticated approaches to modeling the variance, which determines the spillover into the less and more expensive levels. An operator should note that consumption that declines, or consistently falls into the less expensive level, is an expression of consumptive churn. This should be monitored on a customer level as well as on a radio access cell level. Consumptive churn often reflects that the supplied radio access quality is out of sync with the customer demand dynamics and expectations. On a radio access cell level, the diligent operator will observe a sharp increase in retransmitted data packets and increased latency on a flow (and active customer) basis, the hallmarks of a congested cell.

WRAPPING UP.

To this day, 20+ odd years after the first packet data cellular price plans were introduced, I still have meetings with industry colleagues where they state that they cannot implement quality-enhancing technologies for fear that data consumption, and with it their revenues, may decline. Funnily enough, the fear is often that by improving the quality for the many customers penalized by a few customers’ usage patterns (e.g., the elephants in the data pipe), data packet loss and TCP/IP retransmissions will fall as the quality improves, even as more customers get the service they have paid for. This ignores the commonly established fact of our industry that improving the customer experience leads to sustainable growth in consumption, which consequently may also have a positive topline impact.

I am often in situations where I am surprised by how little understanding and feeling telco employees have for their own price plans, consumptive behavior, and the impact these have on their company’s performance. This may be due to the fairly complex price plans telcos are inventing, and our brain’s propensity for linear thinking certainly doesn’t make it easier. It may also be because telcos rarely spend any effort educating their employees about their price plans and products (after all, employees often get all the goodies for “free”, so why bother?). Do a simple test at your next town hall meeting and ask your CXOs about your company’s price plans and their effectiveness in monetizing consumption.

So what to look out for?

Many in our industry have an inflated idea (to a fault) about how effectively consumptive growth is monetized within their company’s price plans.

Most of today’s cellular data plans can accommodate substantial growth without leading to equivalent associated data revenue growth.

The apparent disconnect between the growth rate of cellular data consumption (CAGR ~30+%), in its totality as well as on an average per-customer basis, and the cellular data revenue growth rate (CAGR < 10%) is simply due to the industry’s price plan structures allowing for substantial growth without a proportional revenue growth.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this Blog.

FURTHER READING.

Kim Kyllesbech Larsen, Mind Share: Right Pricing LTE … and Mobile Broadband in general (A Technologist’s observations) (slideshare.net), (May 2012). A cool seminal presentation on various approaches to pricing mobile data. Contains a lot of data that illustrates how far we have come over the last 10 years.

Kim Kyllesbech Larsen, Mobile Data-centric Price Plans – An illustration of the De-composed. | techneconomyblog (February, 2015). Exploring UK mobile mixed-services price plans in an attempt to decipher the price of data, which at the time was (and often still is) a challenge to figure out due to (intentional?) obfuscation.

Kim Kyllesbech Larsen, The Unbearable Lightness of Mobile Voice. | techneconomyblog (January, 2015). On the demise of voice revenue and rise of data. More of a historical account today.

Tellabs “End of Profit” study executive summary (wordpress.com), (2011). This study very much echoed the increasing Industry concern back in 2010-2012 that cellular data growth would become unprofitable and the industry’s undoing. The basic premise was that the explosive growth of cellular data and, thus, the total cost of maintaining the demand would lead to a situation where the total cost per GB would exceed the revenue per GB within the next couple of years. This btw. was also a trigger point for many cellular-focused telcos to re-think their strategies towards the integrated telco having internal access to fixed and mobile broadband.

B. de Langhe et al., “Linear Thinking in a Nonlinear World”, Harvard Business Review, (May-June, 2017). It is a very nice and compelling article about how difficult it is to get around linear thinking in a non-linear world. Our brains prefer straight lines and linear patterns and dependencies. However, this may lead to rather amazing mistakes and miscalculations in our clearly nonlinear world.

OECD Data Explorer A great source of telecom data, for example, cellular data usage per customer, and the number of cellular data customers, across many countries. Recently includes 2022 data.

I have used Mobile Data – Europe | Statista Market Forecast to better understand the distribution between cellular voice and data revenues. Most Telcos do not break out their cellular voice and data revenues from their total cellular revenues. Thus, in general, such splits are based on historical information where it was reported, extrapolations, estimates, or more comprehensive models.

Kim Kyllesbech Larsen, The Smartphone Challenge (a European perspective) (slideshare.net) (April 2011). I think it is sort of a good account for the fears of the twenty-tens in terms of signaling storms, smartphones (=iPhone) and unbounded traffic growth, etc… See also “Eurasia Mobile Markets Challenges to our Mobile Networks Business Model” (September 2011).

Geoff Huston, “Comparing TCP and QUIC”, APNIC, (November 2022).

Anna Saplitski et al., “CS244 ’16: QUIC loss recovery”, Reproducing Network Research, (May 2016).

RFC 9000, “QUIC: A UDP-Based Multiplexed and Secure Transport”, Internet Engineering Task Force (IETF), (May 2021).

Dave Gibbons, What Are Elephant Flows And Why Are They Driving Up Mobile Network Costs? (forbes.com) (February 2019).

K.-C. Lan and J. Heidemann, “A measurement study of correlations of Internet flow characteristics” (February 2006). This seminal paper has inspired many other research works on elephant flows. A flow should be understood as a unidirectional series of IP packets with the same source and destination addresses, port numbers, and protocol numbers. The authors define elephant flows as flows with a size larger than the mean plus three standard deviations of the sampled data, though it is worth pointing out that the exact definition is less important. Such elephant flows are typically few (less than 20%) but will cause cell congestion by reducing the quality for the many requiring a service in the affected cell.

Opanga Networks is a fascinating and truly innovative company. Using AI, they have developed their solution around the idea of how to manage data traffic flows, reduce congestion, and increase customer quality. Their (N2000) solution addresses particular network situations where a limited number of customers’ data usage takes up a disproportionate amount of resources within the cellular network (i.e., the problem with elephant flows). Opanga’s solution optimizes those congestion-impacting traffic flows, resulting in an overall increase in service quality and customer experience. Thus, the beauty of the solution is that the few traffic patterns causing the cellular congestion continue without degradation, allowing the many traffic patterns that were impacted by the few to continue at their optimum quality level. Overall, many more customers are happy with their service. The operator avoids an investment of relatively poor return and can either save the capital or channel it into a much higher IRR (internal rate of return) investment. I have seen tangible customer improvements exceeding 30 percent on congested cells, avoiding substantial RAN CapEx and resulting OpEx. And the beauty is that it does not involve third-party network vendors and can be up and running within weeks, with an investment that is easily paid back within a few months. Opanga’s product pipeline is tailor-made to alleviate telecom’s biggest and thorniest challenges. Their latest product, with the appropriate name Joules, enables substantial radio access network energy savings above and beyond the features the telcos have installed from their Radio Access Network suppliers. Disclosure: I am associated with Opanga as an advisor to their industrial advisory board.

5G Standalone – European Demand & Expectations (Part I).

By the end of 2020, according to Ericsson, there were an estimated ca. 7.6 million 5G subscriptions in Western Europe (~1%). Compare this to North America’s ca. 14 million (~4%) and North East Asia’s 190 million (~11%) (e.g., China, South Korea, Japan, …).

Maybe Western Europe is not doing that great, when it comes to 5G penetration, in comparison with other big regional markets around the world. To some extent, the reason may be that 4G networks across most of Western Europe are performing very well and, to an extent, more than serving consumer demand. For example, in The Netherlands, consumers on T-Mobile’s 4G network get, on average, a download speed of 100+ Mbps, about 5× the average 4G speed in the USA.

From the October 2021 statistics of the Global mobile Suppliers Association (GSA), 180 operators worldwide (across 72 countries) have already launched 5G, with 37% of those operators actively marketing 5G-based Fixed Wireless Access (FWA) to consumers and businesses. There are two main 5G deployment flavors: (a) non-standalone (NSA) deployment, piggybacking on top of 4G, which is currently the most common deployment model, and (b) standalone (SA) deployment, independent of legacy 4G. The 5G SA deployment model is expected to become the most common over the next couple of years. As of October 2021, 15 operators have launched 5G SA. It should be noted that operators that have launched 5G SA are also likely to support 5G in NSA mode, to provide 5G to all customers with a 5G-capable handset (e.g., at the moment, only 58% of commercial 5G devices support 5G SA). The only reason for not supporting both NSA and SA would be a greenfield operator without any 4G network (none of that type comes to mind, tbh). Another 25 operators globally are expected to be near launching standalone 5G.

It should be evident, also from the illustration below, that mobile customers globally got, or will get, a lot of additional download speed with the introduction of 5G. As operators introduce 5G in their mobile networks, they will leapfrog the available capacity, speed, and quality for their customers. For Europe in 2021, you would, with 5G, get an average downlink (DL) speed of 154 ± 90 Mbps, compared to a 2019 4G DL speed of 26 ± 8 Mbps. Thus, with 5G in Europe, we have gained a whopping 6× in DL speed transitioning from 4G to 5G. In Asia Pacific, the quality gain is even more impressive, with 10× in DL speed, and somewhat less in North America, with 4× in DL speed. In general, 5G speeds exceeding 200 Mbps on average may imply that operators have deployed 5G in the C-band (e.g., with the C-band covering 3.3 to 5.0 GHz).

The above DL speed benchmark (by Opensignal) gives a good teaser of what is to come and what to expect from 5G download speed once a 5G network is near you. There is, of course, much more to 5G than downlink (and uplink) speed. Some caution should be taken in the above comparison between 4G (2019) and 5G (2021) speed measurements. There is still a fair number of networks around the world without 5G or that have only just started upgrading their networks to 5G. I would expect the 5G average speed to reduce a bit and the speed variance to narrow as well (i.e., performance becoming more consistent).

In a previous blog, I describe what to realistically expect from 5G and criticize some of the visionary aspects of the original 5G white paper published back in February 2015. Of course, the tech world doesn’t stand still, and since the original 5G visionary paper by El Hattachi and Erfanian, 5G has become a lot more tangible as operators deploy it or near deployment. More and more operators have launched 5G on top of their 4G networks, in the configuration we define as non-standalone (i.e., 5G NSA). Within the next couple of years, coinciding with the access to higher frequencies (>2.1 GHz) with substantial (unused or underutilized) spectrum bandwidths of 50+ MHz, 5G standalone (SA) will be launched. Already today, many high-end handsets support 5G SA, ensuring a leapfrog in customer experience above and beyond sheer mobile broadband speeds.

The below chart illustrates what to expect from 5G SA, what we already have in the “pocket” with 5G NSA, and how that may compare to existing 4G network capabilities.

There cannot be much doubt that with the introduction of the 5G Core (5GC) enabling 5G SA, we will enrich our capability and service-enabler landscape. Whether all of this cool new-ish “stuff” we get with 5G SA will make much top-line sense for operators and bring convenience for consumers at large is a different story for a near-future blog (so stay tuned). Also, there should not be too much doubt that 5G NSA already provides most of what the majority of our consumers are looking for (more speed).

Overall, 5G SA brings benefits, above and beyond NSA, in (a) round-trip delay (latency), which will be substantially lower in SA, as 5G does not piggyback on the slower 4G, enabling the low latency of ultra-reliable low-latency communications (uRLLC), (b) a 250× improvement in the device density that can be handled (1 million devices per km²), supporting massive machine-type communication scenarios (mMTC), (c) support for communications services at higher vehicular speeds, (d) in theory, lower device power consumption than 5G NSA, and (e) new and possibly less costly ways to achieve higher network (and connection) availability (e.g., with uRLLC).

Compared to 4G, 5G SA brings with it a more flexible, scalable, and richer set of quality-of-service enablers. A 5G user equipment (UE) can have up to 1,024 so-called QoS flows, versus a 4G UE that can support up to 8 QoS classes (tied to the evolved packet core bearer). The advantage of moving to 5G SA is a significant reduction in QoS-driven signaling load and management processing overhead, compared to what is the case in a 4G network. In 4G, it has been clear that the QoS enablers did not really match the requirements of many present-day applications (the brutal truth maybe being that 4G QoS was outdated before it went live). This changes with the introduction of 5G SA.

So, when is it a good idea to implement 5G Standalone for mobile operators?

There are maybe three main events that should trigger operators to prepare for and launch 5G SA:

  1. Economic demand for what 5G SA offers.
  2. Critical mass of 5G consumers.
  3. Wanting to claim being the first to offer 5G SA.

with the 3rd point being the least serious but certainly not an unlikely factor in deploying 5G SA. Apart from potentially enriching the consumer experience, there are several operational advantages of transitioning to a 5GC, such as a more mature, IT-like cloudification of our telecommunications networks (i.e., going telco-cloud native), leading (if designed properly) to a higher degree of automation and autonomous network operations. Further, it may also allow the braver parts of telco-land to move a larger part of their network infrastructure capabilities into the public-cloud domain operated by hyperscalers or network-cloud consortia (should such entities appear). Another element of the 5G SA cloud nativification (a new word?) that is frequently not well considered is that it will allow operators to start out (very) small and scale up as business and consumer demand increases. I would expect that, particularly with hyperscalers and of course the not-so-unusual telco-supplier suspects (e.g., Ericsson, Nokia, Huawei, Samsung, etc…), operators could launch fairly economical minimum viable products based on a minimum set of 5G SA capabilities, sufficient to provide new and cost-efficient services. This will allow early entry into new types of business-to-business QoS- and/or slice-based services based on our new 5G SA capabilities.

Western Europe mobile market expectations – 5G technology share.

By the end of 2021, Western Europe is expected to have on the order of 36 million 5G connections, around a 5% 5G penetration, increasing to 80 million (11%) by the end of 2022. By 2024 to 2025, it is expected that 50% of all mobile connections will be 5G based. As of October 2021, ca. 58% of commercially available mobile devices already support 5G SA. This SA share is anticipated to grow rapidly over the next couple of years, making 5G NSA increasingly unimportant.

Approaching 50% of all connections being 5G appears to be a very good time for operators to aim at having 5G standalone implemented and launched, also because this may coincide with substantial efforts to re-farm existing frequency spectrum from 4G to 5G as 5G data traffic exceeds that of 4G.

For Western Europe in 2021, ca. 18% of the total mobile connections are business related. This number is expected to steadily increase to about 22% by 2030. With the introduction of the new 5G SA capabilities, as briefly summarized above, it is to be expected that the 5G business connection share will quickly increase to the current overall level and that businesses will be able to directly monetize uRLLC, mMTC, and the underlying QoS and network slicing enablers. For consumers, 5G SA will bring some additional benefits but maybe less obvious new monetization possibilities, beyond the proportion of consumers caring about latency (e.g., gamers). Though it appears likely that the new capabilities could bring operators efficiency opportunities, leading to improved margins earned on consumers (for another article).

Recommendation:

  • Learn as much as possible from recent IT cloudification journeys (e.g., from monolithic to cloud, understanding the pros and cons of lift-and-shift strategies, and the intricacies of operating cloud-native environments in public cloud domains).
  • Aim to have the 5GC available for a 5G SA launch by 2024 at the latest.
  • Run 5GC minimum viable product PoCs with friendly (business) users prior to a bigger launch.
  • As 5G is launched on C-band / 3.x GHz, it may likewise be a good point in time to have 5G SA available, at least for B2B customers that may benefit from uRLLC, lower latency in general, mMTC, a much richer set of QoS, network slicing, etc…
  • Have a solid 4G-to-5G spectrum re-farming strategy ready between now and 2024 (too late, imo). This should map out 4G+NSA and SA supply dynamics as customers increasingly get 5G SA capabilities in their devices.

Western Europe mobile market expectations – traffic growth.

With the growth of 5G connections and the expectation that 5G will further boost mobile data consumption, it is expected that by 2023 – 2024, 50% of all mobile data traffic in Western Europe will be attributed to 5G. This is particularly driven by the increased rollout of 3.x GHz across the Western European footprint and the associated massive MIMO (mMIMO) antenna deployments, with 32×32 seemingly being telco-land’s configuration of choice. For blended mobile data consumption, a CAGR of around 34% is expected between 2020 and 2030, with 2030 having about 26× more mobile data traffic than 2020. Though I suspect that in Western Europe, aggressive fiberization of the consumer and business telecommunications markets over the same period may ultimately slow the growth (and demand) on mobile networks.

A typical Western European operator would have between 80 and 100+ MHz of bandwidth available for its 4G downlink services. The bandwidth variation is determined by how much is still required for residual 3G and 2G services and whether the operator has acquired 1500 MHz SDL (supplementary downlink) spectrum. With an average 4G antenna configuration of 4×4 MIMO and an effective spectral efficiency of 2.25 Mbps/MHz/sector, one would expect an average 4G downlink speed of 300+ Mbps per sector (@ 90 MHz committed to 4G). For a 5G SA scenario with 100 MHz of 3.x GHz and 2×10 MHz @ 700 MHz, we should expect an average downlink speed of 500+ Mbps per sector for a 32×32 massive MIMO deployment at the same effective spectral efficiency as 4G. In this example, although naïve, quality of coverage is ignored. With 5G, we more than double the throughput and capacity available to the operator. So the question is whether we remain naïve and don’t care too much about the coverage aspects of 3.x GHz, assuming beam-forming will save the day and all will remain peachy for our customers (if something sounds too good to be true, it rarely is true).

In an urban environment, it is anticipated that, with beam-forming available in our mMIMO antenna solutions, downlink coverage will be reasonably fine (i.e., on average) with 3.x GHz antennas overlaid on operators’ existing macro-cellular footprint, with minor densification required (initially). In situations where the 3.x GHz uplink cannot reach the on-macro-site antenna, the uplink can be closed by 5G @ 700 MHz or other lower cellular frequencies available to the operator and assigned to 5G (if in standalone mode). Some concerns have been expressed in the literature that present advanced higher-order antennas (e.g., 16×16 and above) will, on average, provide a poorer coverage quality over a macro-cellular area than what consumers are used to with lower-order antennas (e.g., 4×4 or lower), and that the only practical solution (at least with today’s state of antennas) would be sectorization to make up for the beam-forming shortfalls. In rural and suburban areas, advanced antennas would be more suitable, although the demand would be a lot less than in a busy urban environment. Of course, closing the 3.x GHz link with the existing rural macro-cellular footprint may be a bigger challenge than in urban clutter. Thus, massive MIMO deployments in rural areas may be much less economical and business-case friendly to deploy. As more and more operators deploy 3.x GHz higher-order mMIMO, more field experience will become available, so stay tuned to this topic. I would, though, reserve a lot more CapEx in my near-future budget plans for substantially more sectorization in urban clutter than what I am sure is currently in most operators’ plans. In rural and suburban areas, the need for sectorization may be much smaller, but densification may then be needed in order to provide decent 3.x GHz coverage in general.

Western Europe mobile market expectations – 5G RAN Capex.

That brings us to another important aspect of 5G deployment: the Radio Access Network (RAN) capital expenditures (CapEx). I use my own high-level (EU-based) forecast model, based on a technology deployment scenario per Western European country, that in general considers 1 – 3% growth in new sites per annum until 2024; from 2025 onwards, I assume 2 – 5% growth due to the densification needs of 5G, driven by traffic growth and the before-mentioned coverage limitations of 3.x GHz. The exact timing and growth percentages depend on the initial 5G commercial launch, the timing of 3.x GHz deployment, traffic density (per site), and site density considering a country’s surface area.

According to Statista, Western Europe had a cellular site base of 421 thousand in 2018, and Statista expected this base to grow by 2% per annum in the years after 2018. This gives an estimated 438k cellular sites in 2020, which I use as the model’s starting point. The model estimates that by 2030, over the next 10 years, an additional 185k (+42%) sites will have been built in Western Europe to support 5G demand. 65% (120+k) of that site growth will be in Germany, France, Italy, Spain and the UK: all countries with relatively large geographical areas that are underserved with mobile broadband services today, and countries with incumbent mobile networks originally based on 900 MHz GSM grids (of course densified since the good old GSM days), thus having coarser cellular grids that are poorly matched to the higher 5G cellular frequencies (i.e., ≥ 2.5 GHz). In the model, I have not accounted for an increased demand for sectorization to keep coverage quality with higher-order mMiMo deployments. This may introduce some uncertainty in the CapEx assessment. However, I anticipate that the sectorization uncertainty is covered by the accelerated site demand in the last 5 years of the period.
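To make the site-growth assumptions concrete, here is a minimal sketch of the site-count trajectory. The specific per-annum growth rates (2.5% until 2024, 4.3% from 2025 onwards) are illustrative picks of mine within the stated 1 – 3% and 2 – 5% ranges, chosen because they roughly reproduce the ~185k (+42%) site additions by 2030; they are not the model’s actual parameters.

sites = 438_000                                   # estimated Western European site base in 2020
added = 0
for year in range(2021, 2031):
    growth = 0.025 if year <= 2024 else 0.043     # assumed growth rates (illustrative only)
    new_sites = sites * growth
    added += new_sites
    sites += new_sites
print(f"2030: ~{sites/1e3:.0f}k sites, ~{added/1e3:.0f}k added (+{added/438_000:.0%})")
# -> roughly 622k sites and ~184k additions (+42%), close to the figures quoted above.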

In the illustration above, the RAN capital investment assumes all sites will eventually be fiberized by 2025. That may however be an optimistic assumption and for some countries, even in Western Europe, unrealistic and possibly highly uneconomical. New sites, in my model, are always fiberized (again possibly too optimistic). Miscellaneous (Misc.) accounts for any investments needed to support the RAN and Fiber investments (e.g., Core, Transport, Cap. Labor, etc..).

The economic estimation takes price erosion into account. This erosion is a blended figure accounting for annual price reductions on equipment and increases in labor cost. I assume a 5-year replacement cycle with an associated 10% average price increase every 5 years (applied to the previous year’s eroded unit price). This accounts for higher-capability equipment being deployed to support the increased traffic and service demand; the economic justification for the increased unit price is that otherwise even more new sites would be required than assumed in this model. In my RAN CapEx projection model, I assume rational, demand-driven deployment. Thus, operators’ investments are primarily demand driven, e.g., only deploying infrastructure required within a given financial recovery period (e.g., the depreciation period). If an operator’s demand model indicates that it will need a given antenna configuration within the financial recovery period, it deploys that configuration. Not a smaller one. Not a bigger one. Only the one required by demand within the financial recovery period. Of course, there may be operators with other deployment incentives than pure demand. Though on average I suspect this has a negligible effect at the scale of Western Europe (i.e., on average, Western European Telco-land is assumed to be reasonably economically rational).
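As a toy illustration of that unit-price mechanic, the sketch below erodes a unit-price index by an assumed 5% per year (my assumption; the text only states that a blended erosion figure is used) and applies the stated 10% step-up at each 5-year replacement cycle.

unit_price = 100.0                                # arbitrary unit-price index at year 0
annual_erosion = 0.05                             # assumed blended annual price erosion (illustrative)
for year in range(1, 11):
    unit_price *= (1 - annual_erosion)            # annual price erosion
    if year % 5 == 0:
        unit_price *= 1.10                        # 10% step-up at each 5-year replacement cycle
    print(year, round(unit_price, 1))
# By year 10 the index sits around 72, i.e., erosion still dominates the periodic step-ups.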

All in all, demand over the next 8 years leads to an 80+ Billion Euro RAN capital expenditure required between 2022 and 2030, equivalent to an annual RAN investment level of a bit under 10 Billion Euro. The average RAN CapEx to mobile revenue ratio over this period would be ca. 6.3%, which is not a shockingly high level (to be honest) for a period that will see an intense rollout of 5G at increasingly higher frequencies and increasingly capable antenna configurations as demand picks up. The biggest threat to capital expenditures is poor demand models (or no demand models) and planning processes that invest too much too early, ultimately resulting in buyer’s regret and cyclically inefficient investment levels over the next 10 years. And for the reader still awake and sharp, please do note that I have not mentioned the huge elephant in the room … the associated incremental operational expense (OpEx) that such investments will incur.

As mobile revenues are not expected to increase over the period 2022 to 2030, this leaves 5G investment’s main purpose as maintaining the current business level, dominated by consumer demand. I hope this scenario will not materialize. Given how much extra quality and service potential 5G will deliver over the next 10 years, it seems rather pessimistic to assume that our customers would not be willing to pay more for the service enhancements that 5G brings with it. Alas, time will show.

Acknowledgement.

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog. Petr Ledl, head of DTAG’s Research & Trials, and his team’s work have been a continuous inspiration to me (thank you so much for always picking up that phone call, Petr!). Also, many of my Deutsche Telekom AG, T-Mobile NL & industry colleagues in general have in countless ways contributed to my thinking and the ideas leading to this little Blog. Thank you!

Further readings.

Kim Kyllesbech Larsen, “5G Standalone Will Deliver! – But What?”, Keynote presentation at Day 2 of the Telecoms Europe 5G Conference, (November 2021). A YouTube voice-over of the presentation is given here.

Kim Kyllesbech Larsen, “5G Economics – The Numbers (Appendix X).”, Techneconomyblog.com, (July 2017).

Kim Kyllesbech Larsen, “5G Economics – An Introduction (Chapter 1)”, Techneconomyblog.com, (December 2016).

Peter Boyland, “The State of Mobile Network Experience – Benchmarking mobile on the eve of the 5G revolution”, OpenSignal, (May 2019).

Ian Fogg, “Benchmarking the Global 5G Experience”, OpenSignal, (November 2021).

Rachid El Hattachi & Javan Erfanian , “5G White Paper”, NGMN Alliance, (February 2015). See also “5G White Paper 2” by Nick Sampson (Orange), Javan Erfanian (Bell Canada) and Nan Hu (China Mobile).

Global Mobile Frequencies Database, (last update, 25 May 2021). I very much recommend subscribing to this database (€595 single-user license). It provides a wealth of information on spectrum portfolios across the world.

Thomas Alsop, “Number of telecom tower sites in Europe by country in 2018 (in 1,000s)”, Statista Telecommunications, (July 2020).

Jia Shen, Zhongda Du, & Zhi Zhang, “5G NR and enhancements, from R15 to R16”, Elsevier Science, (2021). Provides a really good overview of what to expect from 5G standalone. In particular, a very good comparison with what is provided with 4G and the differences with 5G (SA and NSA).

Ali Zaidi, Fredrik Athley, Jonas Medbo, Ulf Gustavsson, Giuseppe Durisi, & Xiaoming Chen, “5G Physical Layer Principles, Models and Technology Components”, Elsevier Science, (2018). The physical layer will always pose a performance limitation on a wireless network. Fundamentally, the amount of information that can be transferred between two locations will be limited by the availability of spectrum, the laws of electromagnetic propagation, and the principles of information theory. This book provides a good description of the 5G NR physical layer including its benefits and limitations. It provides a good foundation for modelling and simulation of 5G NR.

Thomas L. Marzetta, Erik G. Larsson, Hong Yang, Hien Quoc Ngo, “Fundamentals of Massive MIMO”, Cambridge University Press, (2016). Excellent account of the workings of advanced antenna systems such as massive MiMo. 

Western Europe: Western Europe has a bit of a fluid definition (I have found); here Western Europe includes the following countries, comprising a population of ca. 425 Million people (in 2021): Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, United Kingdom, Andorra, Cyprus, Faeroe Islands, Greenland, Guernsey, Jersey, Malta, Luxembourg, Monaco, Liechtenstein, San Marino, Gibraltar.

Is the ‘Uber’ moment for the Telecom sector coming?

As I am preparing for my keynote speech for the Annual Dinner event of the Telecom Society Netherlands (TSOC) at the end of January 2020, I thought the best way was to write down some of my thoughts on the key question “Is the ‘Uber’ moment for the telecom sector coming?”. In the end it turned out to be a lot more than some of my thoughts … apologies for that. Though it might still be worth reading, as many of the considerations in this piece will be hitting a telco near you soon (if they haven’t already).

Knowing Uber Technologies Inc.’s (Uber) business model well (and knowing at least the Danish taxi industry fairly well, as my family has a 70+ year old taxi company, Radio-Taxi Nykoebing Sjaelland Denmark, started by my granddad in 1949), it instinctively appears to be an odd question … and begs the question “why would the telecom sector want an Uber moment?” … Obviously, we would prefer not to be massively loss-making (as Uber is at this and past moments, e.g., several billion US$ of losses over the last couple of years), and we would also prefer to avoid the regulatory & political headaches (although we have our own). Not to mention some of the negative reputation issues around “their” customer experience (quite different from telco topics, and thank you for that). Also not forgetting that Uber has access to only a fraction of the value chain in the markets they operate in … Then again, Uber is also ‘infinitely’ lighter in terms of assets than a classical Telco … It’s also a bit easier to replicate an Uber (or platform businesses in general) than an asset-heavy Telco (as it requires a “bit” less cash to get started;-). But of course the question is more related to the type of business model Uber represents rather than the taxi / ride-hailing business model itself. Thinking of Uber makes such a question more practical and tangible …

And not to forget … the super cool technology aspects of being a platform business such as Uber … maybe Telco-land can and should learn from platform businesses? … Let’s roll!

Uber

Uber’s main business (ca. 81%) is facilitating peer-2-peer ride sharing and ride hailing services via its mobile application and websites. Uber taps into the sharing economy, making use of under-utilized private cars and their owners’ (producers’) willingness to give up hours of their time to drive others (consumers) around in their private vehicles. Uber had 95 million active users (consumers) in 2018 and is expected to reach 110 million in 2019 (22% CAGR between 2016 & 2019). Uber has around 3+ million drivers (producers) spread out over 85+ countries and 900+ cities around the world (although 1/3 are in the USA). In the third quarter of 2019, Uber did 1.77 billion trips. That is roughly 200 trips per Uber driver per month, with a median income of 155 US$ per month (1.27 US$ per trip) before gasoline and insurance. In December 2017, the median monthly salary for Americans was $3,714.

In addition, Uber also provides food delivery services (i.e., Uber Eats, ca. 11%), Uber Freight services (ca. 7%) and what they call Other Bets (ca. 1%). In the first 9 months of 2019, Uber spent more than 40% of its turnover on R&D. Uber has an average revenue per trip (ARPT) of ca. 2 US$ (out of 9.5 US$ per trip based on gross bookings). There has not been a lot of ARPT growth over the last 9 quarters, although active users (+30% YoY), trips (+31% YoY), gross bookings (+32%) and adjusted net revenue (+35%) all show double-digit growth.
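A quick back-of-envelope on the figures in the last two paragraphs, using only the numbers quoted above:

trips_q3_2019 = 1.77e9                       # Uber trips in Q3 2019
drivers = 3.0e6                              # approximate number of Uber drivers
print(f"~{trips_q3_2019 / drivers / 3:.0f} trips per driver per month")   # ~197, i.e., roughly 200

arpt = 2.0                                   # average revenue per trip (US$)
gross_per_trip = 9.5                         # gross bookings per trip (US$)
print(f"Effective take rate: ~{arpt / gross_per_trip:.0%}")               # ~21% of gross bookings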

Uber allegedly takes a 25% fee of each fare (note: if you compare gross bookings, the total revenue generated by their services, to net revenue which Uber receives the average is around 20%).

Uber’s market cap, roughly 10 years after the company was founded, was 76 Bn US$ at its IPO (May 10th, 2019), only exceeded by Facebook (104.2 Bn US$ @ IPO) and Alibaba Group (167.6 Bn US$ @ IPO). Seven months after the IPO, Uber’s market cap is ca. 51 Bn US$ (-33% on the IPO valuation). The leading European telco Deutsche Telekom AG (25 years old, 1995) in comparison has a market capitalization of around 70 Bn US$ and is very far from loss-making. Deutsche Telekom is one of the world’s leading integrated telecommunications companies, with some 170+ million mobile customers, 28 million fixed-network lines, and 20 million broadband lines.

Peel the Onion

“Telcos are pipe businesses, Ubers are platform businesses”

In other words, Telcos adhere to a classical business model with a fairly linear, causal value chain (see Michael Porter’s classic from 1985). It is the type of input/output business that has been around since the dawn of the industrial revolution. Such a business model can (and should) have a very high degree of end-2-end customer experience control.

Ubers (e.g., Uber, Airbnb, Booking.com, ebay, Tinder, Minecraft, …) are non-linear business models that benefit from direct and indirect network effects allowing for exponential growth dynamics. Such businesses are often piggybacking on under-utilized or unused assets owned by individuals (e.g., homes & rooms, cars, people’s time, etc.). Moreover, these businesses facilitate networked connectivity between consumers and producers via a digital platform. As such, platform businesses rarely have complete end-2-end customer experience control but instead focus on the quality and experience of the networked connectivity. While platform businesses have little control over their customers’ (i.e., consumers’ and producers’) experiences or the overall customer journey, they may have some indirect control via near real-time customer satisfaction feedback (although this is after the fact).

Clearly the internet has enabled many new ways of doing business. In particular, it allows digital businesses (infrastructure lite) to create value by facilitating networked, scaled business models where demand (i.e., customers wanting XYZ) and supply (i.e., businesses supplying XYZ) are matched.

Think of Airbnb‘s internet-based platform that connects (or networks) consumers (guests), who are looking for temporary accommodation (e.g., a hotel room), with producers (hosts, private or corporate) of temporary accommodation. Airbnb thus allows for value creation by tying into the sharing economy of private citizens. Under-utilized private property is being monetized, benefiting hosts (producers), guests (consumers) and the platform business (which charges a transactional fee). Airbnb charges hosts a 3% fee that mainly covers the payment processing cost. Moreover, Airbnb’s typical guest fee is under 13% of the booking cost. “Airbnb is a platform business built upon software and other people’s under-utilized homes & rooms.” While Airbnb facilitates private (temporary) accommodation for consumers, today there are other online platform businesses (e.g., Booking.com, Expedia.com, agoda.com, …) that facilitate connections between hotels and consumers.

Think of Uber‘s online ride hailing platform, which connects travelers (consumers) with drivers (producers, private or corporate) as an alternative to normal cab / taxi services. Uber benefits from the under-utilization of most private cars and private owners’ willingness to spend spare time and monetize this under-utilization by becoming private cab drivers. Again, the platform business exploits the sharing economy. Uber charges its drivers 25% of the fare. “Uber is a platform business built upon software and other people’s under-utilized cars and spare time.” The word platform was used 747 times in Uber’s IPO document. After Uber launched its digital online ride hailing platform, many national and regional taxi applications have likewise been launched, facilitating an easier and more convenient way of hailing a taxi, piggybacking on the penetration of smartphones in any given market. In those models, official taxi businesses and licensed taxi drivers collaborate around a classical-industry digital platform that facilitates and manages dispatches on consumer demand.
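As a toy illustration of the transaction-fee logic in the two examples above (the booking and fare amounts below are made up; the fee levels are the ones quoted in the text):

def platform_cut(amount, producer_fee, consumer_fee=0.0):
    # Fee the platform earns on a single transaction of `amount`.
    return amount * (producer_fee + consumer_fee)

# Airbnb-style: 3% host fee plus an (up to) 13% guest fee on a hypothetical 100 EUR booking
print(platform_cut(100, producer_fee=0.03, consumer_fee=0.13))   # 16.0
# Uber-style: 25% of a hypothetical 10 US$ fare
print(platform_cut(10, producer_fee=0.25))                       # 2.5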

“A platform business relies on the sharing economy, monetizing networking (i.e., connecting) consumers and producers by taking a transaction fee on the value of involved transaction flow.”

E.g., the consumer pays the producer, or the consumer gets the service for free and the producer pays the platform business. It is a highly scalable business model with exponential growth potential, assuming consumers and producers alike adopt your platform. The platform business model tends to be (physical) infrastructure and asset lite and software heavy. It typically (in the start-up phase at least) relies on commercially available cloud offerings (e.g., Lyft relies on AWS, Uber on AWS & Google) or, if the platform business is massively scaled (e.g., Facebook), the choice may be to own data center infrastructure to have better control over platform operations. Typically, successful platform businesses at scale implement a hybrid cloud model, leveraging commercially available cloud solutions alongside own data centers. Platform businesses tend to be heavily automated (which is relatively easy in a modern cloud environment) and rely very significantly on monetizing their data with underlying state-of-the-art real-time big data systems and, of course, intelligent algorithmic (i.e., machine learning based) business support systems.

Consider this

A platform business’s technology stack, residing in a cloud, will typically run on a virtual machine or within a so-called container engine. The stack really resides in the upper protocol layers and is transparent to lower-level protocols (e.g., physical, link, network, transport, …). In general, the platform stack can be understood to function on the 3 platform layers presented in the chart to the left: (top platform layer) the Networked Marketplace that connects producers and consumers with each other; this layer describes how a platform business’s customers connect (e.g., a mobile app on a smartphone). (middle platform layer) the Enabling Layer, in which microservices, software tools, business logic, rules and so forth reside. (bottom platform layer) the Big Data Layer, or Data Layer, where data-driven decision making occurs, often supported by advanced real-time machine learning applications. The remaining technology (e.g., physical infrastructure, servers, storage, LAN/WAN, switching, fixed and mobile telco networking, etc.) is typically taken care of by cloud or data center providers and telco providers. Which explains why platform businesses tend to be infrastructure or asset lite (and software heavy) compared to telco and data-center providers.
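A minimal sketch of that three-layer split, expressed as a plain data structure (the example responsibilities are taken from the description above; the layer names and groupings are my own shorthand):

# Hypothetical summary of the three platform layers described above.
PLATFORM_STACK = {
    "networked marketplace": ["consumer & producer mobile apps", "matching of producers and consumers"],
    "enabling layer": ["microservices", "software tools", "business logic & rules"],
    "data layer": ["real-time big data pipelines", "machine learning models", "data-driven decisions"],
}
for layer, responsibilities in PLATFORM_STACK.items():
    print(f"{layer}: {', '.join(responsibilities)}")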

“Many classical linear businesses are increasingly copying platform businesses’ digital strategies (achieving improved operational excellence) without giving up their fundamental value-chain control, thus allowing them to continue to provide consumers a known, and often improved, customer experience compared to a pure platform business.”

So what about the Telco model?

Well, the Telco business model adheres to a linear value chain and business logic. And unless you are thinking of a telco service provider or a virtual telco operator, Telcos are incredibly infrastructure and asset heavy, with massive capital investments required to provide competitive services to their customers. Apart from the capital-intensive underlying telco technology infrastructure, the telco business model requires (1) public licenses to operate (often auctioned or purchased, and rarely “free”), (2) (public) telephony numbers, (3) spectrum frequencies (i.e., for mobile operation) and so forth …

Furthermore, overall customer experience and the end-2-end customer journey are very important to Telcos (as they are to most linear businesses, and most would and should subscribe to being very passionate about them). In comparison to platform businesses, it is fair to say (at this moment in time at least) that most Telco businesses are lagging on cloudification/softwarization, intelligent automation (whether domain-based or end-2-end) and advanced algorithmic (i.e., machine learning enabled) decision making, as it relates to overarching business decisions as well as customer-related micro-decisions. However, from an economic perspective we are not talking about more than 10% – 20% of a Telco’s asset base (or capital expenses).

Mobile telco operators tend to be fairly advanced in their approach to customer experience management, although mainly reactive rather than pro-active (due to lower intelligent-algorithmic maturity, again in comparison to most platform businesses). In general, fixed telco businesses are relatively immature in their approach to customer experience management (compared to mobile operators), possibly due to a lack of historical competitive pressure (a “why care when consumers have no other choice” mindset). Alas, this too is changing as more competition in fixed telco-land emerges.

“Telcos have some technology catching up to do in comparison with, and where relevant to, platform businesses. However, that catching up does not force them to change the fundamentals of their business model (unless it makes sense, of course).”

Characteristics of a Platform Business

  • Often relies on the sharing economy (i.e., monetizing under-utilized resources).
  • Its (exponential) growth relies on successful networking of consumers & producers (i.e., piggybacking on network effects).
  • Software-centric: the platform business is software and focuses on / relies on the digital domain & channels.
  • Mobile-centric: mobile apps for consumers & producers.
  • Cloud-centric: platform-solution built on Public or Hybrid cloud models.
  • Cloud-native maturity level (i.e., the highest cloud maturity level).
  • Heavily end-2-end automated across cloud-native platform, processes & decision making.
  • Highly sophisticated data-driven decision making.
  • Infrastructure / asset lite (at scale may involve own data center assets).
  • Business driven & optimized by state-of-art big data real-time solutions supported by a very high level of data science & engineering maturity.
  • Little or no end-2-end customer experience control (i.e., in the sense of complete customer journey).
  • Very strong focus on connection experience including payment process.
  • The revenue source may be in the form of a transactional fee imposed on the value involved in networking producers and consumers (e.g., payment transaction, cost-per-click, impressions, etc.).

In my opinion, it is not a given that a platform business always has to disrupt an existing market (or classical business model). However, a successful platform business will often be transformative, resulting in classical businesses attempting to copy aspects of the platform business model (e.g., digitalization, automation, cloud transformation, etc.). It is too early in most platform businesses’ life-cycle to conclude whether, where they disrupt, it is a temporary disruption (until the disrupted have transformed) or a permanent destruction of an existing classical market model (i.e., leaving little or no time for transformation).

So with the above in mind (and I am sure many other defining factors), it is hard to see a classical telco transforming itself into a carbon copy of a platform business and, maybe more importantly, why this would make a lot of sense in the first place. But it is also clear that Telco-land should proudly copy what makes sense (e.g., particularly around tech and the level of digitization).

Teaser thought: if you think in terms of sharing-economy principles, consider the freedom that an eSIM (or software-based SIM equivalents) with 5 or more network profiles may bring to a platform business going beyond traditional MVNOs or Service Providers … well well … you think! (hint: you may still need an agreement with the classical telco, though … if you are not in the club already;-). Maybe a platform model could also tap into under-utilized consumer resources that the consumer has already paid for? Or what about a transactional model on Facebook (or other social media) where the consumer actually monetizes (and controls) personal information directly with third-party advertisers? (actually, in this model the social media company could also share part of the existing spoils earned on its consumer product, i.e., the consumer) etc…

However, it does not mean that telcos cannot (and should not) learn from some of the most successful platform businesses around. There are certainly enough classical beliefs in the industry that may be ripe for a bit of disruption … so untelconizing (or, as my T-Mobile US friends like to call it, uncarrier-ing) ourselves may not be such a bad idea.

Telco-land

“There is more to telco technologies than its core network and backend platforms.”

Having a great (= successful) e-commerce business platform with a cloud-native maturity level, including automation that most telcos can only dream of, and mouth-watering real-time big data platforms with the smartest data scientists and data engineers in the world … does not make for an easy, straightforward transformation into a nationally (or world, for that matter) leading (or non-leading) telco business in the classical sense of owning the value chain end-to-end.

Japan’s Rakuten is one platform business that has the ambition and expressed intention to move from being a traditional platform-based business (à la Amazon.com) to becoming a mobile operator, leveraging all the benefits and know-how of its existing platform technologies and extending those principles, such as softwarization, cloudification and cloud-native automation, all the way out to the edge of the mobile antenna.

Many of us in telco-land who thought that starting out with a classical telco, with mobile and maybe fixed assets as well, would make for an easy inclusion of platform-like technologies (as described above), have had to revise our thinking somewhat. Certainly, timelines have been revised a couple of times, as have the assumed pre-conditions or context for such a transformation. Even the economic and operational benefits that seem compelling, at least from a greenfield perspective, turn out to be a lot muddier when considering the legacy spaghetti we have in telcos with years and years in the bag. And for the ones who keep saying that 5G will change all that … no, I really doubt that it will any time soon.

While the above platform-like telco topology looks so much simpler than the incumbent one … we should not forget that it is what lies underneath the surface that matters. And what matters is software. Lots of software. The danger will always be present that we end up replacing hardware & legacy spaghetti complexity with software spaghetti complexity, with unintended consequences in terms of longer-term operational stability (e.g., when you go beyond being a greenfield business).

“Software has made a lot of the physical world redundant, but it may also have leapfrogged the underlying operational complexity to an extent that may pose an existential threat down the line.”

While many platform businesses have perfected cloud-native e-commerce stacks reaching all the way out to the end-consumer’s mobile apps, residing on the smartphone’s OS, they operate on the higher levels of whatever telco protocol stack is relevant. Platform businesses today rely on classical telcos to provide a robust data pipe connection to their end-users at high availability and stability.

What’s coming for us in Telco-land?

“Software will eat more and more of telco-land’s hardware as well as the world.”

(side note: for the ones who want to say that artificial intelligence (AI) will be eating the software, do remember that AI is software too, and imo we are then talking about autosarcophagy … no further comment;-).

Telcos, of the kind with a past, will increasingly implement software solutions replacing legacy hardware functionality. Such software will reside in a cloud environment, either in the form of public and/or private cloud models. We will be replacing legacy hardware-centric telco components, or boxes, with software copies residing on a boring but highly standardized hardware platform (i.e., a common off-the-shelf server). Yes … I am talking about software-defined networking (SDN) and network function virtualization (NFV) features and functionalities (though I suspect SDN/NFV will be renamed to something else, as we have talked about this for too many years for it to keep being exciting;-). The ultimate dream (or nightmare, depending on taste) is to have all telco functions defined in software and operating on a very small number of standardized servers (let’s call it the pizza-box model). This is very close to the innovative and, quite frankly, disruptive ideas of, for example, Drivenets in Israel (definitely worth a study if you haven’t already peeked at some of their solutions). We are of course seeing quite some progress in developing software equivalents to telco core (i.e., Telco Cloud in the above picture) functionalities, e.g., evolved packet core (EPC) functions, the policy and charging rules function (PCRF), …. These solutions are available from the usual supplier suspects (e.g., Cisco, Ericsson, Huawei, and Nokia) as well as from (relatively) new bets such as Affirmed Networks and Mavenir (side note: if you are not one of the usual supplier suspects and have developed cloud-based telco functionalities, drop me a note … particularly if they work in a public or hybrid cloud model with, for example, Azure or AWS).

We will have software eating its way out to the edge of our telco networks, assuming it proves to make economic and operational sense (and maybe even anyway;-). As computing requirements, driven by the softwarization of telco-land, go “through the roof” across all network layers, edge computing centers will be deployed (or classical 2G BSC or 3G RNC sites will be re-purposed, for the “lucky” operators with more dis-aggregated network topologies).

Telcos (should) have a very strong desire for platform-like automation as we know it from platform businesses’ cloud-native implementations. For a telco, though, the question is whether they can achieve cloud-native automation principles throughout all their network layers and thus possibly allow for end-2-end (E2E) automation principles as known in a cloud-native world (which scope-wise is more limited than the full telco stack). This assumes that an E2E automation goal makes economic and operational sense compared to domain-oriented automation (with domains not per se matching one-to-one the traditional telco network layers). While it is tempting to get all enthusiastic & wound up about the role of artificial intelligence (AI) in a telco (or any other) automation framework, it always makes sense to take an ice-cold shower and read up on non-AI based automation schemes, as we have them in a cloud-native environment, before jumping into the rabbit hole. I also think that we should be very careful, architecturally, about spreading intelligent agents all over our telco architecture and telco stack. AI will have an important mission in pro-active customer experience solutions and anomaly detection. The devil may be in how we close the loop between an intelligent agent’s output and the input to our automation framework.

To summarize what’s coming for the Telco sector;

  • Increased softwarization (or virtualization) moving from traditional platform layers out towards the edge.
  • Increased leveraging of cloud models (e.g., private, public, hybrid) following the path of softwarization.
  • Strive towards cloud-native operations including the obvious benefits from (non-AI based) automation that the cloud-native framework brings.
  • We will see a lot of focus on developing automation principles across the telco stack, to the extent these will differ from cloud-native principles (note: expect there will be some differences, at least for non-greenfield implementations, but also in general as the telco stack is not identical to a traditional platform stack). This may be hampered by a lack of architectural standardization alignment across our industry. There is a risk that we will push for AI-based automation without fully exploring what non-AI based schemes may bring.
  • Inevitably, the industry will spend much more effort on developing cognitive-based pro-active customer experience solutions, as well as expanding anomaly detection across the full telco stack. This will help in dealing with design complexities, although it might also be hampered by mis-alignment on standardization. Not to mention that AI should never become an excuse not to simplify designs and architectures.
  • Plus anything clever that I have not thought about or forgot to mention 🙂

So yes … softwarization, cloudification and aggressive (non-AI based) automation, known from platform-centric businesses, will be coming (in fact, have arrived to an extent) for Telcos … over time, and earlier for the few brave new Telco greenfields …

Artificial intelligence based solutions will have a mission in pro-active customer experience (e.g., Cellwize, Uhana, …), zero-touch predictive maintenance, self-restoration & healing, and advanced anomaly detection (e.g., see Anodot as a leading example here). All are critical requirements as the new (and obviously the old as well) telco world is being eaten by software. Self-learning “conscious” (defined in a relatively narrow technical sense) anomaly detection solutions across the telco stack are, in my opinion, a must to deal with today’s and the future’s highly complex software architectures and systems.

I am also speculating whether intelligent agents (e.g., micro-agents reacting to events) may make the telco layers less reliant on top-down control and orchestration (… I am also getting goosebumps from that idea … so maybe this is not good … hmmm … or I am cold … but then again, orchestration is for non-trusting control “freaks”). Such a reactive micro-agent (or microservice) could take away the typical challenges of stack orchestration (e.g., blocking, waiting, …) and decentralize control across the telco stack.

And no … we will not become Ubers … although there might be Ubers that will try to become us … The future will show …

Acknowledgement.

I also greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog. Also, many of my Deutsche Telekom AG, T-Mobile NL & industry colleagues in general have in countless ways contributed to my thinking and the ideas leading to this little Blog. Thank you!

Further reading

Mike Isaac, “Super Pumped – The Battle for Uber”, W.W. Norton & Company, (2019). A good read, and what starts to look like the rule of Silicon Valley startup behavior (the very worst and of course some of the best). Irrespective of the impression this book leaves with me, I am also deeply impressed (maybe even more so after reading the book;-) by what Uber’s engineers have been pulling off over the last couple of years.

Muchneeded.com, “Uber by the Numbers: Users & Drivers Statistics, Demographics, and Fun Facts”, (2018). The age of the Uber statistics presented varies a lot. It is a nice overall summary, but for the most recent stats please check against financial reports or Uber’s own website.

Graham Rapier, “Uber lost $5.2 billion in 3 months. Here’s where all that money went”, Business Insider, (2019). As is often the case with web articles, it is worth actually reading the article. Out of the $5.2 billion, $3.9 billion was due to stock-based compensation. A loss of $1.3 billion is nevertheless still impressive. In 2018 the loss was $1.8 billion, and $4.5 billion in 2017.

Chris Anderson, “Free – The Future of a Radical Price”, Hyperion eBook, (2009). This is one of the coolest books I have read on the topic of freemium, the sharing economy and platform-based business models. A real revelation, and indeed a confirmation that if you get something for free, you are likely not a customer but a product. A must-read to understand the world around us. In this setting it is also worth reading “What is a Free Customer Worth?” by Sunil Gupta & Carl F. Mela (HBR, 2008).

Sangeet Paul Choudary, “Platform Scale”, Platform Thinking Labs Pte. Ltd., (2015). A must-read for anyone thinking of developing a platform-based business. Contains very good, detailed end-2-end platform design recommendations. If you are interested in the most important aspects of platform business models and don’t have time for a more academic deep dive, this is most likely the best book to read.

Laure Claire Reillier & Benoit Reillier, “Platform Strategy”, Routledge Taylor & Francis Group, (2017). A very systematic treatment of platform economics and all strategic aspects of a platform business. It contains a fairly comprehensive overview of academic works related to platform business models and economics (that is, if you want to go deeper than, for example, Choudary’s excellent “Platform Scale” above).

European Commission Report on “Study on passenger transport by taxi, hire car with driver and ridesharing in the EU”, (2016), European Commission.

Michal Gromek, “Business Models 2.0 – Freemium & Platform based business models”, Slideshare.net, (2017).

Greg Satell, “Don’t Believe Everything You Hear About Platform Businesses”, Inc., (2018). A good critique of the hype around platform business models.

Jean-Charles Rochet & Jean Tirole, “Platform Competition in Two-sided Markets”, Journal of the European Economic Association, 1, 990, (2003). Rochet & Tirole formalize the economics of two-sided markets. The math is fairly benign but requires a mathematical background. Besides the math, their paper contains some good descriptions of platform economics.

Eitan Muller, “Delimiting disruption: Why Uber is disruptive, but Airbnb is not”, International Journal of Research in Marketing, (2019). A great account (backed up with data) of the disruptive potential of platform business models, going beyond (and rightly so) Clayton Christensen’s theory of disruption.

Todd W. Schneider, “Taxi and Ridehailing Usage in New York City”, a cool site that provides historical and up-to-date taxi and ride-hailing usage data for New York and Chicago. This gives very interesting insights into the competitive dynamics of Uber / ride-hailing platform businesses vs. the classical taxi business. It also shows that while ride-hailing businesses have disrupted the taxi business in totality, being a driver for a ride-hailing platform is not that great either (and as Uber continues to operate at impressive losses, maybe not so great for Uber either, at least in its current structure).

Uber Engineering is in general a great resource for platform / stack architecture, system design, machine learning, big data & forecasting solutions for a business model relying on real-time transactions. While I personally find the Uber architecture or system design too complex it is nevertheless an impressive solution that Uber has developed. There are many noteworthy blog posts to be found on the Uber Engineering site. Here is a couple of foundational ones (both from 2016 so please be aware that lots may have changed since then) “The Uber Engineering Tech Stack, Part I: The Foundation” (Lucie Lozinski, 2016) and “The Uber Engineering Tech Stack, Part II: The Edge and Beyond” (Lucie Lozinski, 2016) . I also found “Uber’s Big Data Platform: 100+ Petabytes with Minute Latency” post (by Reza Shiftehfar, 2018) very interesting in describing the historical development and considerations Uber went through in their big data platform as their business grew and scale became a challenge in their designs. This is really a learning resource.

Wireless One, “Rakuten: Japan’s new #4 is going all cloud”, (2019). Having had the privilege to visit Rakuten in Japan and listen to their chief visionary Tareq Amin (CTO), they clearly start from being a platform-centric business (i.e., Asia’s Amazon.com) with the ambition to become a new breed of telco, leveraging their platform technologies (and platform business model thinking) all the way out to the edge of the mobile base station antenna. While I love that Tareq Amin has actually taken his vision from PowerPoint to reality, I also think that Rakuten benefits (particularly with respect to many of the advertised economic benefits) from being more of a greenfield telco than an established telco with a long history and legacy. In this respect it is humbling that their biggest stumbling block, or challenge, for launching their services is site rollout (yes, touchy-feely infrastructure & real estate is a b*tch!). See also “Rakuten taking limited orders for services on its delayed Japan mobile network” (October 2019).

Justin Garrison & Kris Nova, “Cloud Native Infrastructure”, O’Reilly, (2018), and Kief Morris, “Infrastructure as Code”, O’Reilly, (2016). I usually use both of these books as my reference books when it comes to cloud-native topics and for refreshing my knowledge (and hopefully a bit of understanding).

Marshall W. Van Alstyne, Geoffrey G. Parker and Sangeet Paul Choudary, “Pipelines, Platforms and the New Rules of Strategy”, Harvard Business Review (April issue), (2016).

Murat Uenlue, “The Complete Guide to the Revolutionary Platform Business Model”, (2017). A good read. Provides a great overview of platform business models and attempts to systematically categorize platform businesses (e.g., communications platforms, social platforms, search platforms, open OS platforms, service platforms, asset-sharing platforms, payment platforms, etc.).

Profitability of the Mobile Business Model … The Rise! & Inevitable Fall?

A Mature & Emerging Market Profitability Analysis … From Past, through Present & to the Future.

  • I dedicate this Blog to David Haszeldine, who has been (and will remain) a true partner when it comes to discussing, thinking and challenging cost structures, corporate excesses and optimizing Telco profitability.
  • Opex growth & declining revenue growth are the biggest exposure to margin decline & profitability risk for emerging growth markets as well as mature mobile markets.
  • 48 major mobile markets’ Revenue & Opex growth have been analyzed over the period 2007 to 2013 (for some countries from 2003 to 2013). The results are provided in an easy-to-compare overview chart.
  • For 23 out of the 48 mobile markets, Opex has grown faster than Revenue and poses a substantial risk to Telco profitability in the near & long term unless Opex is better managed and controlled.
  • Mobile Profitability Risk is a substantial Emerging Growth Market Problem where cost has grown much faster than the corresponding Revenues.
  • 11 major emerging growth markets have had an Opex compounded annual growth rate between 2007 and 2013 that was higher than the Revenue growth, substantially squeezing margin and straining EBITDA.
  • On average the compounded annual growth rate of Opex grew 2.2% faster than the corresponding Revenue over the period 2007 to 2013. Between 2012 and 2013, Opex grew (on average) 3.7% faster than Revenue.
  • A Market Profit Sustainability Risk Index (based on Bayesian inference) is proposed as a way to provide an overview of mobile markets profitability directions based on their Revenue and Opex growth rates.
  • Statistical analysis of the available data shows that a mobile market’s Opex level is driven by (1) Population, (2) Customers, (3) Penetration and (4) ARPU. GDP & surface area have only minor and indirect influence on the various markets’ Opex levels.
  • A profitability framework for understanding individual operators’ profit dynamics is proposed.
  • It is shown that profitability can be written as $\Delta = \delta - \frac{o_f}{r_u}\frac{1}{\sigma}$, with $\Delta$ being the margin, $\delta = 1 - \frac{o_u}{r_u}$ with $o_u$ and $r_u$ being the user-dependent OpEx and Revenue (i.e., AOPU and ARPU), $o_f$ the fixed OpEx divided by the total subscriber market, and $\sigma$ the subscriber market share.
  • The proposed operator profitability framework provides a high degree of descriptive power and understanding of individual operators’ margin dynamics as a function of subscriber market share as well as other important economic drivers.

I have long & frequently been pondering over the mobile industry’s profitability. In particular, I have spent a lot of my time researching the structure & dynamics of profitability and mapping out the factors that contribute in both negative & positive ways. My interest is the underlying cost structures and business models that drive profitability, in both good and bad ways. I have met Executives who felt a similar passion for strategizing, optimizing and managing their company’s Telco cost structures, and thereby profit, and I have also met Executives who mainly cared for the Revenue.

Obviously, both Revenue and Cost are important to optimize. This said, it is wise to keep in mind the following cost-structure & revenue heuristics;

  • Cost is an almost Certainty once made & Revenues are by nature Uncertain.
  • Cost left Unmanaged will by default Increase over time.
  • Revenue is more likely to Decrease over time than increase.
  • Majority of Cost exist on a different & longer time-scale than Revenue.

In the following I will use EBITDA, which stands for Earnings Before Interest, Taxes, Depreciation and Amortization, as a measure of profitability, and the EBITDA to Revenue ratio as a measure of my profit margin, or just margin. It should be clear that EBITDA is a proxy for profitability and as such has shortfalls in specific accounting and P&L scenarios. Also, under GAAP (Generally Accepted Accounting Principles) and under IFRS (International Financial Reporting Standards), EBITDA is not a standardized accepted accounting measure. Nevertheless, both EBITDA and EBITDA margin are widely accepted and used in the mobile industry as proxies for operational performance and profitability. I am going to assume that for most purposes & examples discussed in this Blog, EBITDA & the corresponding margin remain sufficiently good measures of profitability.

While I am touching upon mobile revenues as an issue for profitability, I am not going to offer many thoughts on how to boost revenues or add new incremental revenues that might compensate for the loss of mobile legacy service revenues (i.e., voice, messaging and access). My revenue focus addresses revenue growth at a more generalized level, set against the mobile cost incurred operating such services in particular and a mobile business in general. For an in-depth and beautiful treatment of mobile revenues past, present and future, I would like to refer to Chetan Sharma’s 2012 paper “Operator’s Dilemma (and Opportunity): The 4th Wave” (note: you can download the paper by following the link in the html article) on mobile revenue dynamics, from (1) Voice (the 1st Revenue or Service Wave) and (2) Messaging (the 2nd Revenue or Service Wave) to today’s (3) Access (the 3rd Revenue Wave), and the commencement of what Chetan Sharma defines as the 4th Wave of Revenues (note: think of waves as S-curves describing an initial growth spurt, a slow-down phase, stagnation and eventually decline), which really describes a collection of revenue or service waves (i.e., S-curves) representing a portfolio of Digital Services, such as (a) Connected Home, (b) Connected Car, (c) Health, (d) Payment, (e) Commerce, (f) Advertising, (g) Cloud Services, (h) Enterprise solutions, (i) Identity, Profile & Analysis, etc. I feel confident that any Digital Service enabled by Internet-of-Things (IoT) and M2M would be an important inclusion in the Digital Services Wave. Given the competition (i.e., Facebook, Google, Amazon, Ebay, etc.) that mobile operators will face entering the 4th Wave Digital Services space, in combination with having only national or limited international scale, this area will be a tough challenge to return direct profit on. The inherently limited international, or national-only, scale appears to be one of the biggest barriers to turning many of the proposed Digital Services, particularly those with strong social-media touch points, into meaningful business opportunities for mobile operators.

This said, I do believe (strongly) that Telecom Operators have very good opportunities for winning Digital Services battles in areas where their physical infrastructure (including spectrum & IT architecture) is an asset and essential for delivering secure, private and reliable services. Local regulation and privacy laws may indeed turn out to be a blessing for Telecom Operators and other nationally oriented businesses. The current privacy trend and general consumer suspicion of American-based global digital services / social media enterprises may create new revenue opportunities for nationally focused mobile operators as well as for other nationally oriented digital businesses. In particular, if Telco Operators work together creating Digital Services that work across operators’ networks, platforms and beyond (e.g., payment, health, private search, …) rather than walled-garden digital services, they might become very credible alternatives to multi-national offerings. It is highly likely that consumers would be more willing to trust national mobile operator entities with their personal data & money (in fact, they already do in many areas) than a multinational social-media corporation. In addition to the above Digital Services, I do expect that Mobile/Telecom Operators and Entertainment Networks (e.g., satellite, cable, IP-based) will increasingly firm up partnerships as well as acquire & merge their businesses & business models. In effect, this is already happening.

For emerging growth markets without extensive and reliable fixed broadband infrastructure, high-quality (& likely higher-cost, compared to today’s networks!) mobile broadband infrastructure will be essential to drive additional Digital Services and respective revenues, as well as new entertainment business models (other than existing satellite TV). Anyway, Chetan captures these Digital Services (or 4th Wave) revenue streams very nicely, and I very much recommend reading his articles in general (including “Mobile 4th Wave: The Evolution of the Next Trillion Dollars”, which is the second “4th Wave” article).

Back to mobile profitability and how to ensure that the mobile business model doesn’t break down as revenue growth starts to slow down and decline while the growth of mobile cost overtakes the revenue growth.

A good friend of mine, who is also a great and successful CFO, stated that “Profitability is rarely a problem to achieve (in the short term). I turn down my market invest (i.e., OpEx) and my profitability (as measured in terms of EBITDA) goes up. All I have done is make my business profitable in the short term without having created any sustainable value or profit. I have just engineered my bonus.”

Our aim must be to ensure sustainable and stable profitability. This can only be done by understanding, carefully managing and engineering our basic Telco cost structures.

While most Telcos tend to plan several years ahead for Capital Expenditures (CapEx), often with a high degree of sophistication, the same Telcos mainly focus on one (1!) year ahead for OpEx. The effort channeled into OpEx is frequently highly simplistic and at times inconsistent with the planned CapEx. Obviously, in the growth phase of the business cycle one may take the easy way out on OpEx and focus more on the CapEx required to grow the business. However, as committed OpEx “lives” on a much longer time-scale than Revenue (particularly prepaid revenue, or even CapEx for that matter), any shortfall in Revenue and Profitability will be much more difficult to mitigate by OpEx measures, which take time to become effective. In markets with little or no market investment the penalty can be even harsher, as there is little or no OpEx cushion that can be used to soften a disappointing direction in profitability.

How come a telecom business in Asia, or in other emerging growth markets around the world, can maintain, by European standards, such incredibly high EBITDA margins, margins that run into the 50s (in percent) or even higher? Is this “just” a matter of different, lower-cost & low-GDP economies? Do the higher margins simply reflect a different stage in the business cycle (i.e., growth versus super-saturation)? Should mature markets really care much about emerging growth markets, in the sense of whether mature markets can learn anything from emerging growth markets, and maybe even vice versa? (certainly mature markets have made many mistakes, particularly when shifting gears from growth to what should be sustainability).

Before all those questions have much meaning, it might be instructive to look at the differences between a Mature Market and an Emerging Growth Market. I obviously would not have started this Blog unless I believed that there are important lessons to be had by understanding what is going on in both types of markets. I should also make it clear that I am only using the term Emerging Growth Markets because most of the markets I study are typically defined as such by economists and consultants. However, from a mobile technology perspective, few of the markets we tend to call Emerging Growth Markets can really be called emerging any longer, and growth has slowed down a lot in most of them. This said, from a mobile broadband perspective, most of the markets defined in this analysis as Emerging Growth Markets are pretty much dead on that definition.

Whether the emerging markets really should be looking forward to mobile broadband data growth might depend a lot on whether you are the consumer or the provider of services.

For most Mature Markets, the introduction of 3G and mobile broadband data heralded a massive slow-down, and in some cases even a decline, in revenue. This imposed severe strains on mobile margins and EBITDAs. Today, most mature-market mobile operators are facing a negative revenue growth rate and are “forced” to continuously keep a razor focus on OpEx, mitigating the revenue decline and keeping margin and EBITDA reasonably in check.

Emerging Markets should as early as possible focus on their operational expenses and Optimize with a Vengeance.

Well, well, let’s get back to the comparison and see what we can learn!

It doesn’t take too long to make a list of some of the key, and maybe at times obvious, differentiators (not intended to be exhaustive) between Mature and Emerging Markets;

mature vs growth markets

  • Side note: it should be clear that by today many of the markets we used to call emerging growth markets are, from a mobile telephony penetration & business development perspective, certainly not emerging any longer, nor growing as they were 5 or 10 years ago. This said, from a 3G/4G mobile broadband data penetration perspective it might still be fair to characterize those markets as emerging and growing. Though, as mature markets have seen, that journey is not per se a financial growth story.

Looking at the above table, we can assess the following. Firstly: the straightforward (and possibly naïve) explanation of the relative profitability differences between mature and emerging markets might be that emerging markets’ cost structures are much more favorable than what we find in mature market economies; basically the difference between low- and high-GDP economies. However, we should not allow ourselves to be too naïve here, as a lesson learned from low-GDP economies is that some cost-structure elements (e.g., real estate, fuel, electricity, etc.) are as costly (sometimes more so) as what we find in mature, higher-GDP markets. Secondly: many emerging growth markets’ economies are substantially more populous & dense than what we find in mature markets (although it is hard to beat the Netherlands or the Ruhr area in Germany). Maybe the higher population count & population density lead to better scale than can be achieved in mature markets. However, while this may be true for the urban population, emerging markets tend to have a substantially higher ratio of their population living in rural areas than mature markets. Thirdly: maybe the go-to-market approach in emerging markets is different from mature markets (e.g., subsidies, quality including network coverage, marketing, …), offering substantially lower mobile quality overall compared to what is the practice in mature markets. Providing poor mobile network quality has certainly been a recurring theme in the Philippine mobile industry, despite the telco industry in the Philippines enjoying margins that most mature-market operators can only dream of. It is pretty clear that, for 3G-UMTS based mobile broadband, 900 MHz does not have sufficient bandwidth to support the anticipated mobile broadband uptake in emerging markets (particularly as 900 MHz is occupied by 2G-GSM as well). IF emerging-market mobile operators want to offer mobile data at reasonable quality levels (and the IF is intentional) and sustain anticipated customer demand and growth, they are likely to require network densification (i.e., extra CapEx and OpEx) at 2100 MHz. Alternatively, they might choose to wait for APT 700 MHz and drive an affordable low-cost LTE device ecosystem, albeit this is some years ahead.

More than likely, some of the answers to why emerging markets have much better margins (at the moment at least) will have to do with cost-structure differences, combined with possibly better scale and different go-to-market requirements, more than compensating for the lower revenue per user.

Let us have a look at the usual suspects behind the differences between mature & emerging markets. The EBITDA can be derived as Revenue minus the Operational Expenses (i.e., OpEx), and the corresponding margin is EBITDA divided by Revenue (ignoring special accounting effects here);

EBITDA (E) = Revenue (R) – OpEx (O) and Margin (M) = EBITDA / Revenue.

The EBITDA & Margin tell us in absolute and relative terms how much of our Revenue we keep after all our Operational expenses (i.e., OpEx) have been paid (i.e., before tax, interest, depreciation & amortization charges).

We can write Revenue as the product of ARPU (Average Revenue Per User) times the Number of Users N, and thus the EBITDA can also be written as;

E = R - O = ARPU \times N_{users} - O. We see that even if ARPU is low (or very low), an Emerging Market with a lot of users might match the Revenue of a Mature Market with higher ARPU and worse population scale (i.e., a lower number of users). Pretty simple!

But what about the Margin? M = \frac{R - O}{R} = 1 - \frac{O}{R}. In order for an Emerging Market to have a substantially better Margin than a corresponding Mature Market at the same revenue level, it is clear that the Emerging Market's OpEx (O) needs to be lower than that of the Mature Market. We also observe that if the Emerging Market's Revenue is lower than the Mature Market's, the corresponding OpEx needs to be even lower than if the Revenues were identical. One would expect that lower-GDP countries have lower OpEx (or Cost in general); combined with better population scale, that is really what makes for great emerging-market mobile Margins! … Or is it?
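
To make the point concrete, here is a minimal sketch (Python) of the margin arithmetic above, using invented illustrative numbers rather than actual market data: a low-ARPU market with many users can match the revenue of a high-ARPU market, but it only ends up with a better margin if its OpEx is correspondingly lower.

```python
# Minimal sketch of E = ARPU x N - O and M = 1 - O/R.
# All figures are illustrative assumptions, not actual market data.

def revenue_and_ebitda(arpu, users, opex):
    """Revenue = ARPU x number of users; EBITDA = Revenue - OpEx."""
    revenue = arpu * users
    return revenue, revenue - opex

def margin(revenue, opex):
    """Margin = 1 - OpEx / Revenue."""
    return 1.0 - opex / revenue

# Hypothetical mature market: high ARPU, fewer users, higher OpEx.
mature_rev, mature_ebitda = revenue_and_ebitda(arpu=20.0, users=50e6, opex=640e6)
# Hypothetical emerging market: low ARPU, many users, lower OpEx.
emerging_rev, emerging_ebitda = revenue_and_ebitda(arpu=4.0, users=250e6, opex=450e6)

print(f"Mature:   revenue {mature_rev/1e6:,.0f}M, margin {margin(mature_rev, 640e6):.0%}")
print(f"Emerging: revenue {emerging_rev/1e6:,.0f}M, margin {margin(emerging_rev, 450e6):.0%}")
# Both markets end up with the same revenue (1,000M); the emerging market only
# shows the better margin (55% vs 36%) because its OpEx is assumed to be lower.
```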

A small but essential detour into Cost Structure.

Some of the answers to the differences in margin between mature and emerging markets obviously lie in the OpEx part, or in the cost-structure differences. Let's take a look at a mature market's cost structure (i.e., as you will find in Western & Eastern Europe), which pretty much looks like this;

mature market cost structure

With the following OpEx or cost-structure elements (a small numerical sketch follows the list);

  • Usage-related OpEx: typically takes up between 10% and 35% of the total OpEx, with an average of ca. 25%. On average this OpEx contribution is approximately 17% of the revenue in mature European markets. Trend-wise it is declining. Usage-based OpEx is dominated by interconnect & roaming voice traffic and to a lesser degree by data interconnect and peering. In a scenario where there is little circuit-switched voice left (i.e., the ultimate LTE scenario) this cost element will diminish substantially from the operator's cost structure. It should be noted that this is also to some extent influenced by regulatory forces.
  • Market Invest: can be decomposed into Subscriber Acquisition Cost (SAC), i.e., “bribing” the customers to leave your competitor for you, Subscriber Retention Cost (SRC), i.e., “bribing” your existing (valuable) customers not to be “bribed” by a competitor and leave you (i.e., churn), and lastly Other Marketing spend for advertisement, promotions and so forth. This cost-structure element's contribution to OpEx can vary greatly depending on the market composition. In Europe's mature markets it will vary from 10% to 31% with a mean value of ca. 23% of the total OpEx. On average it will be around 14% of the Revenue. It should be noted that as the mobile penetration increases and enters heavy saturation (i.e., >100%), SAC tends to reduce and SRC will increase. Further, in markets that are very prepaid-heavy, SAC and SRC will naturally be fairly minor cost-structure elements (i.e., 10% of OpEx or lower and only a couple of % of Revenue). Profit and Margin can rapidly be influenced by changes in the market invest. SAC and SRC cost-structure elements will in general be small in emerging growth markets (compared to corresponding mature markets).
  • Terminal-equipment related OpEx: is the cost associated with procuring terminal equipment (i.e., handsets, smartphones, data cards, etc.). In the past (prior to 2008) it was fairly common that the OpEx from procuring and the revenues from selling terminals were close to a zero-sum game. In other words, the operator's cost of procuring terminals was pretty much covered by re-selling them to the customer base. This cost-structure element is another heavyweight and varies from 10% to 20% of the OpEx, with an average in mature European markets of 17%. Terminal-related cost on average amounts to ca. 11% of the Revenue (in mature markets). Most operators in emerging growth markets don't massively procure, re-sell and subsidize handsets, as is the case in many mature markets. Typically, handsets and devices in emerging markets will be supplied by a substantial, readily available 2nd-hand gray and black market.
  • Personnel Cost: amounts to between 6% and 15% of the Total OpEx, with a best-practice share of around 10%. The ones who believe that this ratio is lower in emerging markets might re-think their impression. In my experience emerging growth markets (including the ones in Eastern & Central Europe) have a lower unit personnel cost but also tend to have much larger organizations. This leads to many emerging growth market operators having a personnel cost share that is closer to 15% than to 10% or lower. On average personnel cost should be below 10% of revenue, with best practice between 5% and 8% of the Revenue.
  • Technology Cost (Network & IT): includes all technology-related OpEx for both Network and Information Technology. Personnel-related technology OpEx (prior to capitalization) is accounted for in the above Personnel Cost category and would typically be around 30% of the personnel cost, depending on outsourcing level and organizational structure. Emerging markets in Central & Eastern Europe have historically had higher technology-related personnel cost than mature markets. In general this is attributed to high-quality, relatively low-cost technology staff, leading to fewer advantages in outsourcing technology functions. As Technology OpEx is the most frequent “victim” of efficiency initiatives, let's just have a look at the anatomy of the Technology Cost Structure:

technology opex  mature markets

  • Technology Cost (Network & IT) – continued: Although the above Chart (i.e., taken from my 2012 Keynote at the Broadband MEA 2012, Dubai “Ultra-efficient network factory: Network sharing and other means to leapfrog operator efficiencies”) emphasizes a Mature Market view, the emerging markets' cost distribution does not differ that much from the above, with a few exceptions. In Emerging Growth Markets with poor electrification rates, diesel generators and the associated diesel fuel will strain the Energy Cost substantially. As the biggest exposure to a poor electrical grid (in emerging markets) in general tends to be in rural and sub-urban areas, it is a particular OpEx concern as the emerging market operators expand towards rural areas to capture the additional subscriber potential present there. Further, diesel fuel has on average increased by 10% annually (i.e., over the last 10 years) and as such is a very substantial Margin and Profitability risk if a very large part of the cellular / mobile network requires diesel generators and the respective fuel. Obviously, “Rental & Leasing” as well as “Service & Maintenance” & “Personnel Cost” would be positively impacted (i.e., reduced) by Network Sharing initiatives. Best-practice Network Sharing can bring around 35% OpEx savings on the relevant cost structures. For more details on benefits and disadvantages (often forgotten in the heat of the moment) see my Blog “The ABC of Network Sharing – The Fundamentals”. In my experience one of the greatest opportunities in Emerging Growth Markets for increased efficiency is in the Services part covering Maintenance & Repair (which obviously also includes field maintenance and spare-part services).
  • Other Cost: typically covers the rest of the OpEx not captured by the above specific items. It can also be viewed as overhead cost. It is also often used to “hide” cost that might be painful for the organization (i.e., in terms of authorization or consequences of mistakes). In general you will find a very large number of smaller to medium cost items here rather than larger ones. Best practice should keep this below 10% of total OpEx and ca. 5% of Revenues. Much above this either means mis-categorization, ad-hoc projects, or something else that needs further clarification.
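
To see how these categories add up, here is a small numerical sketch (Python). The usage, market-invest, terminal and personnel shares of revenue are the rough averages quoted in the list above; the technology and "other" shares are my own assumptions, chosen so that the total is consistent with the ~36% mature-market margin used in the worked example further below.

```python
# Illustrative mature-market OpEx structure expressed as shares of revenue.
# Usage, market invest, terminal and personnel shares follow the averages quoted
# above; the technology and "other" shares are assumptions for illustration.
opex_share_of_revenue = {
    "usage_related": 0.17,       # interconnect, roaming, data peering
    "market_invest": 0.14,       # SAC + SRC + other marketing
    "terminal_equipment": 0.11,  # handset procurement net of re-sale
    "personnel": 0.08,
    "technology": 0.09,          # assumption (Network & IT, ex. personnel)
    "other": 0.05,               # assumption (overhead etc.)
}

total_opex_share = sum(opex_share_of_revenue.values())
implied_margin = 1.0 - total_opex_share
print(f"OpEx: {total_opex_share:.0%} of revenue -> EBITDA margin: {implied_margin:.0%}")
# -> OpEx ~64% of revenue, implied margin ~36%
```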

So how does this help us compare a Mature Mobile Market with an Emerging Growth Market?

As already mentioned in the description of the above cost-structure categories, in particular Market Invest and Terminal-equipment Cost are items that tend to be substantially lower for emerging market operators or entirely absent from their cost structures.

Let's assume our average mobile operator in an average mature mobile market (in Western Europe) has a Margin of 36%. In its existing (OpEx) cost structure it spends 15% of Revenue on Market Invest, of which ca. 53% goes to subscriber acquisition (i.e., the SAC cost category), 40% to subscriber retention (SRC) and another 7% to other marketing expenses. Further, this operator has been subsidizing its handset portfolio (i.e., Terminal Cost), which makes up another 10% of the Revenue.

Our Average Operator comes up with the disruptive strategy to remove all SAC and SRC from its cost structure and stop procuring terminal equipment. Assuming (and that is a very big assumption in a typical Western European mature market) that revenue remains at the same level, how would this average operator fare?

Removing SAC and SRC, which were 14% of the Revenue, will improve the Margin by another 14 percentage points. Removing terminal procurement from the cost structure leads to an additional Margin jump of 10 percentage points. The final result is a Margin of 60%, which is fairly close to some of the highest margins we find in emerging growth markets. Obviously, completely annihilating Market Invest might not be the most market-efficient move unless it is a market-wide initiative.
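
As a quick sanity check of that arithmetic (a sketch, not a business case): removing a cost category expressed as a share of revenue adds that share directly to the margin, as long as revenue is assumed to be unchanged.

```python
# Margin uplift from removing cost categories, assuming revenue stays unchanged.
# The shares are those quoted above for the "average" mature-market operator.
baseline_margin = 0.36
sac_src_share_of_revenue = 0.14   # SAC + SRC (ex. other marketing)
terminal_share_of_revenue = 0.10  # handset procurement / subsidies

new_margin = baseline_margin + sac_src_share_of_revenue + terminal_share_of_revenue
print(f"Margin after removing SAC/SRC and terminal procurement: {new_margin:.0%}")  # -> 60%
```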

Albeit the example might be perceived as a wee bit academic, it serves to illustrate that some of the larger margin differences we observe between mobile operators in mature and emerging growth markets can largely be explained by differences in the basic cost structure, i.e., the lack of substantial subscriber acquisition and retention costs, as well as not procuring terminals, does offer advantages to the emerging market business model.

However, it also means that many operators in emerging markets have little OpEx flexibility, in the sense of fast OpEx reduction opportunities once the mobile margin reduces due to, for example, slowing revenue growth. This typically becomes a challenge as mobile penetration starts reaching saturation and as ARPU reduces due to diminishing returns on incremental customer acquisition.

There is not much substantial OpEx flexibility (i.e., market invest & terminal procurement) in Emerging Growth Markets' mobile accounts. This adds to the challenge of avoiding profitability squeeze and margin exposure by quickly scaling back OpEx.

This is to some extent different from mature markets, which historically had quite a few low-hanging fruits to address before OpEx efficiency and reduction became a real challenge. Though ultimately it does become a challenge.

Back to Profitability with a Vengeance.

So it is all pretty simple! … leave out Market Invest and Terminal Procurement … Then add that we are typically dealing with lower-GDP countries, which conventional wisdom would expect to also have lower OpEx (or Cost in general), combined with better population scale .. isn't that really what makes for a great emerging growth market Mobile Margin?

Hmmm … Albeit Compelling!? … For the ones (of us) who would think that cost scales nicely with GDP, and therefore a Low-GDP Country would have a relatively Lower Cost Base, well …

opex vs gdp

  • In the Chart above the Y-axis is depicted with logarithmic scaling in order to provide a better impression of the data points across the different economies. It should be noted that throughout the years 2007 to 2013 (note: 2013 data is shown above) there is no correlation between a country's mobile OpEx, as estimated by Revenue – EBITDA, and the GDP.

Well … GDP really doesn’t provide the best explanation (to say the least)! … So what does then?

I have carried out multi-linear regression analysis on the available data from the “Bank of America Merrill Lynch (BoAML) Global Wireless Matrix Q1, 2014” datasets between the years 2007 and 2013. The multi-linear regression approach is based on year-by-year analysis of the data, with many different subsets & combinations of data chosen, including adding random data.

I find that the best description (R-square 0.73, F-ratio of 30 and p-value(s) < 0.0001) of the 48 countries' OpEx data is given by the parameters below. The number of data points used in the multi-regression is at least 48 for each parameter, and that for each of the 7 years analyzed. The result of the (preliminary) analysis is given by the following statistically significant parameters explaining the Mobile Market OpEx (a sketch of such a regression setup follows the list):

  1. Population – The larger the population, the proportionally less Mobile Market OpEx is spent (i.e., scale advantage).
  2. Penetration – The higher the mobile penetration, the proportionally less Mobile Market OpEx is spent (i.e., scale advantage, and the incremental penetration at an already high penetration would have less value, thus less OpEx should be spent).
  3. Users (i.e., as measured by subscriptions) – The more Users, the higher the Mobile Market OpEx (note: prepaid ratio has not been found to add statistical significance).
  4. ARPU (Average Revenue Per User) – The higher the ARPU, the higher the Mobile Market OpEx will be.
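
For the curious, the sketch below shows how such a year-by-year multi-linear regression could be set up in Python. The column names and the input file are placeholders standing in for the underlying BoAML / Pyramid data (which I cannot share here); it illustrates the method, not the actual analysis.

```python
# Sketch of a year-by-year multi-linear regression of country-level mobile OpEx
# on population, penetration, users and ARPU. The CSV file and column names are
# hypothetical placeholders for the (non-public) Global Wireless Matrix data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wireless_matrix.csv")  # one row per country-year

for year, yearly in df.groupby("year"):
    X = sm.add_constant(yearly[["population", "penetration", "users", "arpu"]])
    y = yearly["opex"]                   # estimated as Revenue - EBITDA
    model = sm.OLS(y, X).fit()
    print(year, f"R^2 = {model.rsquared:.2f}", f"F = {model.fvalue:.1f}")
    print(model.params.round(3))         # fitted coefficients per parameter
```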

If I leave out ARPU, GDP does enter as a possible descriptive candidate, although the overall quality of the regression analysis suffers. However, it appears that GDP and ARPU cannot co-exist in the analysis: when Mobile Market ARPU data are included, GDP becomes non-significant. Furthermore, a country's Surface Area, which I previously believed would have a sizable impact on a Mobile Market's OpEx, also does not enter as a significant descriptive parameter in this analysis. In general the Technology-related OpEx is between 15% and 25% (maximum) of the Total OpEx, and out of that possibly 40% to 60% would be related to sites that would be needed to cover a given surface area. This might not be significant enough in comparison to the other parameters, or simply not a significant factor in the overall country-level mobile OpEx.

I had also expected 3G-UMTS to have made a significant contribution to the OpEx. However, this was not very clear from the analysis either, although in some of the earlier years (2005 – 2007) 3G does enter, albeit without a lot of weight. In Western Europe most incremental OpEx related to 3G has been absorbed in the existing cost structure, and very little (if any) incremental OpEx would be visible, particularly after 2007. This might not be the case in most Emerging Markets unless they can rely on UMTS deployments at 900 MHz (i.e., the traditional GSM band). Also, the UMTS 900 solution would only last until capacity demand requires the operators to deploy UMTS 2100 (or let their customers suffer with lower mobile data quality and keep the OpEx at existing levels). In rural areas (already covered by GSM at 900 MHz) the 900 MHz UMTS deployment option may mitigate the incremental OpEx of new site deployment, and further encourage rural active network sharing to allow for lower-cost deployment, providing rural populations with mobile data and internet access.

The Population Size of a Country, the Mobile Penetration, the number of Users and their ARPU (note: the last two basically multiply up to the revenue) are what most clearly drive a mobile market's OpEx.

Philippines versus Germany – Revenue, Cost & Profitability.

The Philippines in 2013 is estimated to have a population of ca. 100 Million compared to Germany's ca. 80 Million. The urban population in Germany is 75%, taking up ca. 17% of the German surface area (ca. 61,000 km2, or a bit more than Croatia). Compare this to the Philippines' 50% urbanization, which takes up only 3% (ca. 9,000 km2, or equivalent to the surface area of Cyprus). Germany's surface area is about 20% larger than that of the Philippines (although the geographies are widely .. wildly may be a better word … different, with the Philippine archipelago comprising 7,107 islands of which ca. 2,000 are inhabited, making the German geography slightly boring in comparison).

In principle, if all I care about is to cover and offer services to the urban population (supposedly the ones with the money?), I only need to cover 9 – 10 thousand square kilometers in the Philippines to capture ca. 50 Million potential mobile users (or 5,000 pop per km2), while I would need to cover about 6 times that amount of surface area to capture 60 million urban users in Germany (or 1,000 pop per km2). Even when taking capacity and quality into account, my Philippine cellular network should be a lot smaller and more efficient than my German mobile network. If everything else were equal, I would basically need 6 times more sites in Germany compared to the Philippines, particularly if I don't care too much about good quality but just want to provide best-effort services (that would never work in Germany, by the way). The Philippines would win any day over Germany in terms of OpEx, and obviously also in terms of capital investments or CapEx. It does help the German Network Economics that the ARPU level in Germany is between 4 times (in 2003) and 6 times (in 2013) higher than in the Philippines. Do note that the two major German mobile operators cover almost 100% of the population as well as most of the German surface area, and that with a superior quality of voice as well as mobile broadband data. The same does not hold true for the Philippines.
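
A quick back-of-the-envelope check of the urban coverage economics above (Python, using the rounded figures quoted in this section):

```python
# Urban population density over the area that needs to be covered,
# using the rounded figures quoted above (illustrative, not precise geodata).
ph_urban_pop, ph_urban_area_km2 = 50e6, 9.5e3   # Philippines: ~50M urban on ~9-10k km2
de_urban_pop, de_urban_area_km2 = 60e6, 61e3    # Germany: ~60M urban on ~61k km2

print(f"Philippines: {ph_urban_pop / ph_urban_area_km2:,.0f} pop per km2")   # ~5,000
print(f"Germany:     {de_urban_pop / de_urban_area_km2:,.0f} pop per km2")   # ~1,000
print(f"Urban area to cover, Germany vs Philippines: {de_urban_area_km2 / ph_urban_area_km2:.0f}x")  # ~6x
```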

In 2003 a mobile consumer in the Philippines would spend on average almost 8 US$ per month on mobile services. This was ca. 4x lower than a German customer for that year. The 2003 ARPU of the Philippines roughly corresponded to 10% of GDP per Capita, versus 1.2% for the German equivalent. Over the 10 years from 2003 to 2013, ARPU dropped 60% in the Philippines and by 2013 corresponded to ca. 1.5% of GDP per Capita (i.e., a lot more affordable proposition). The German 2013 ARPU to GDP per Capita ratio was 0.5%, and its ARPU was ca. 40% lower than in 2003.

The Philippines' ARPU decline and OpEx increase over the 10-year period led to a Margin drop from 64% to 45% (a 19-percentage-point drop!), and their Margin is still highly likely to fall further in the near to medium term. Despite the Margin drop, the Philippines still made PHP 26 Billion more EBITDA in 2013 than in 2003 (ca. 45% more, or an equivalent compounded annual growth rate of 3.8%).
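
The EBITDA growth rate quoted above follows directly from the compounded-growth formula; a one-line check on the quoted figures (nothing more):

```python
# Compounded annual growth rate implied by ~45% EBITDA growth over 10 years.
total_growth = 1.45            # 2013 EBITDA relative to 2003
years = 10
cagr = total_growth ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # -> ~3.8%
```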

in 2003

  • Germany had ca. 3x more mobile subscribers compared to Philippines.
  • German Mobile Revenue was 14x higher than Philippines.
  • German EBITDA was 9x higher than that of Philippines.
  • German OpEx was 23x higher than that of Philippines Mobile Industry.
  • Mobile Margin of the Philippines was 64% versus 42% of Germany.
  • Germany’s GDP per Capita (in US$) was 35 times larger than that of the Philippines.
  • Germany’s mobile ARPU was 4 times higher than that of Philippines.

in 2013 (+ 10 Years)

  • Philippines & Germany have almost the same amount of mobile subscriptions.
  • Germany Mobile Revenue was 6x higher than Philippines.
  • German EBITDA was only 5x higher than that of Philippines.
  • German OpEx was 6x higher than Mobile OpEx in the Philippines (and German OpEx was at the same level as in 2003).
  • Mobile Margin of the Philippines dropped 19 percentage points to 45%, compared to Germany's 42% (essentially similar to 2003).
  • In local currencies, the Philippines increased their EBITDA by ca. 45%, while Germany's remained constant.
  • Both the Philippines and Germany have lost ca. 11% in absolute EBITDA between their respective maximum over the 10-year period and 2013.
  • Germany’s GDP per Capita (in US$) was 14 times larger than that of the Philippines.
  • Germany’s ARPU was 6 times higher than that of Philippines.

In the Philippines, mobile revenues have grown by 7.4% per annum (between 2003 and 2013) while the corresponding mobile OpEx grew by 12%, thus eroding the margin massively over the period as increasingly more mobile customers were addressed. In the Philippines, the 2013 OpEx level was 3 times that of 2003 (despite one major network consolidation and being essentially a duopoly after that consolidation). In the Philippines over this period the annual growth rate of mobile users was 17% (versus Germany's 6%). In absolute terms the number of users in Germany and the Philippines was almost the same in 2013, ca. 115 Million versus 109 Million. In Germany over the same period financial growth was hardly present, although more than 50 Million subscriptions were added.

When OpEx grows faster than Revenue, Profitability will suffer today & even more so tomorrow.

Mobile capital investments (i.e., CapEx) over the period 2003 to 2013 were for Germany 5 times higher than for the Philippines (i.e., remember that Germany also needs at least 5 – 6 times more sites to cover the urban population) and track at a 13% CapEx to Revenue ratio versus the Philippines' 20%.

The stories of Mobile Philippines and of Mobile Germany are not unique. Similar examples can be found in Emerging Growth Markets as well as Mature Markets.

Can Mature Markets learn from, or even match (keep on dreaming?), Emerging Markets in terms of efficiency? Assuming such markets really are efficient, of course!

As logic (true or false) would dictate, given the relatively low ARPUs in emerging growth markets and their correspondingly high margins, one should think that such emerging markets are forced to run their business much more efficiently than Mature Markets. While compelling to believe this, the economic data would indicate that most emerging growth markets have been riding the subscriber & revenue growth bandwagon without too much thought for the OpEx part … and frankly, why should you care about OpEx when your business generates margins much in excess of 40%? Well … it is (much) easier to manage & control OpEx year by year than to abruptly “one day” have to cut cost in panic mode when growth slows down the really ugly way and OpEx keeps increasing without a care in the world. Many mature market operators have been in this situation in the past (e.g., 2004 – 2008) and still today work hard to keep their margins stable and profitability from declining.

Most companies will report both Revenues and EBITDA on a quarterly and annual basis, as both are key financial & operational indicators of growth. They tend not to report OpEx, but as seen from above that's really not a problem to estimate when you have Revenue and EBITDA (i.e., OpEx = Revenue – EBITDA).
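
In other words, OpEx can be backed out of any pair of reported figures; a trivial helper (a sketch, with made-up reported numbers):

```python
def estimate_opex(revenue, ebitda):
    """Back out OpEx from reported figures: OpEx = Revenue - EBITDA."""
    return revenue - ebitda

# Made-up reported figures; currency units are irrelevant for the ratio.
rev, ebitda = 1000.0, 360.0
print(f"OpEx: {estimate_opex(rev, ebitda):.0f}, margin: {ebitda / rev:.0%}")  # -> OpEx 640, margin 36%
```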

philippines vs germany

Thus, had you left the European Telco scene (assuming you were there in the first place) for the last 10 years and then came back, you might have concluded that not much had happened in your absence … at least from a profitability perspective. Germany was in 2013 almost at its EBITDA margin level of 2003. Of course, as the ones who did not take a long holiday know, those last 10 years were far from blissful financial & operational harmony in the mature markets, where one efficiency program after the other struggled to manage, control and reduce Operators' Operational Expenses.

However, over that 10-year period Germany added 50+ Million mobile subscriptions and invested more than 37 Billion US$ into the mobile networks of T-Deutschland, Vodafone, E-plus and Telefonica-O2. The mobile country margin over the 10-year period has been ca. 43% and the CapEx to Revenue ratio ca. 13%. By 2013 the total number of mobile subscriptions was in the order of 115 Million out of a population of 81 Million (i.e., 54 Million of the German population is between 15 and 64 years of age). The observant numerologist will have realized that there are many more subscriptions than population … this is not surprising, as it reflects that many subscribers have multiple different SIM cards (as opposed to cloned SIMs) or subscription types based on their device portfolio and a host of other reasons.

All Wunderbar! … or? .. well not really … Take a look at the revenue and profitability over the 10 year period and you will find that no (or very very little) revenue and incremental profitability has been gained over the period from 2003 to 2013. AND we did add 80+% more subscriptions to the base!

Here is the Germany Mobile development over the period;

germany 2003-2013

Apart from adding subscribers, having modernized the mobile networks at least twice over the period (i.e., CapEx with little OpEx impact) and having introduced LTE into the German market (i.e., with little additional revenue to show for it), not much additional value has been added. It is however no small feat what has happened in Germany (and in many other mature markets for that matter). Not only did Germany almost double its mobile customers (in terms of subscriptions), over the period 3G Node-Bs were overlaid across the existing 2G network. Many additional sites were added in Germany, as the fundamental 2G cellular grid was primarily based on 900 MHz, and to accommodate the higher UMTS frequency (i.e., 2100 MHz) more new locations were added to provide superior 3G coverage (and capacity/quality). Still, Germany managed all this without increasing the Mobile Country OpEx across the period (apart from some minor swings). This has been achieved by a tremendous attention to OpEx efficiency, with every part of the Industry keeping razor-sharp attention to cost reduction and operating at increasing efficiency.

philippines 2003-2013

The Philippines' story is a Fabulous Story of Growth (as summarized above) … and of Profitability & Margin Decline.

The Philippines today is in effect a duopoly, with PLDT having approx. 2/3 of the mobile market and Globe the remaining 1/3. During the period the Philippine market saw Sun Cellular being acquired by and merged into PLDT. Further, 3G was deployed and mobile data launched in major urban areas. SMS revenues remained the largest share of non-voice revenue for the two remaining mobile operators, PLDT and Globe. Over the period 2003 to 2013, the mobile subscriber base (in terms of subscriptions) grew by 16% per annum and the ARPU fell accordingly by 10% per annum (all measured in local currency). All in all, this safeguarded a “healthy” revenue increase over the period, from ca. 93 Billion PHP in 2003 to 190 Billion PHP in 2013 (i.e., roughly a doubling over the period, corresponding to the ca. 7% annual growth rate mentioned earlier).

However, the Philippine market could not maintain their relative profitability & initial efficiency as the mobile market grew.

philippines opex & arpu

So we observe (at least) two effects: (1) Reduction in ARPU as the market is growing & (2) Increasing OpEx to sustain the growth in the market. As more customers are added to a mobile network, the return on those customers increasingly diminishes, as the network needs to be massively extended to capture the full market potential versus “just” the major urban potential.

Mobile Philippines did become less economically efficient as its scale increased and ARPU dropped (i.e., by almost 70%). This is not an unusual finding across Emerging Growth Markets.

As I have described in my previous Blog “SMS – Assimilation is inevitable, Resistance is Futile!”, the Philippine mobile market has an extreme exposure to SMS Revenues, which amount to more than 35% of Total Revenues. This exposure becomes particularly acute as mobile data and smartphones penetrate the Philippine market. As described in that Blog, SMS Services enjoy the highest profitability across the whole range of mobile services we offer the mobile customer, including voice. As SMS is being cannibalized by IP-based messaging, this revenue will decline dramatically and mobile data revenue is not likely to catch up with that decline. Furthermore, profitability will suffer as the most profitable service (i.e., SMS) is replaced by mobile data that by nature has a different profitability impact compared to simple SMS services.

The Philippines does not only have a substantial Margin & EBITDA risk from un-managed OpEx, but also from SMS revenue cannibalization (à la KPN in the Netherlands, and then some).

exposure_to_SMS_decline

Let us compare the ARPU & Opex development for Philippines (above Chart) with that of Germany over the same period 2003 to 2013 (please note that the scale of Opex is very narrow)

germany opex & arpu

Mobile Germany managed its cost structure despite a 40+% decrease in ARPU and despite another 60% of mobile penetration being added to the mobile business. Again, a similar trend will be found in most Mature Markets in Western Europe.

One may argue (and not be too wrong) that Germany (and most mature mobile markets) in 2003 already had most of its OpEx-bearing organization, processes, logistics and infrastructure in place to continue acquiring subscribers (i.e., as measured in subscriptions). Therefore it has been much easier for the mature market operators to maintain their OpEx as they continued to grow. It is also true that many emerging mobile markets did not have the same (high) deployment and quality criteria as western mature markets in their initial network and service deployment (certainly true for the Philippines, as is evident from the many regulatory warnings both PLDT and Globe received over the years), providing basic voice coverage in populated areas but little service in sub-urban and rural areas.

Most of the initial emerging market networks have been based on coarse (by mature market standards) GSM 900 MHz (or CDMA 850 MHz) grids with relatively little available capacity and indoor coverage in comparison to the population and clutter types (i.e., geographical topologies characterized by their cellular radio interference patterns). The challenge is that as an operator wants to capture more customers, it will need to build out / extend its mobile network in the areas where those potential or prospective new customers live and work. From a cost perspective, sub-urban and rural areas in emerging markets are not per se lower-cost areas, despite such areas in general being lower-revenue areas than their urban equivalents. Thus, as more customers are added (i.e., increased mobile penetration), proportionally more cost is generated than revenue is captured, and the relative margin will decline. … and this is how the Ugly (cost or profitability) tail is created.

ugly_tail

  • I just cannot write about profitability and cost structure without throwing the Ugly-(cost)-Tail on the page. I strongly encourage all mobile operators to make their own Ugly-Tail analysis. You will find more details on how to remedy this Ugliness in your cost structure in “The ABC of Network Sharing – The Fundamentals”.

In Western Europe's mature mobile markets we find that more than 50% of our mobile cellular sites capture no more than 10% of the Revenues (but we do tend to cover almost all surface area several times over, unless the mobile operators have managed to see the logic of rural network sharing and consolidated those rural & sub-urban networks). Given that emerging mobile markets have “gone less overboard” in terms of lowest-revenue, un-profitable network deployments in rural areas, you will find that the share of sites carrying 10% or less of the revenue is around 40%. It should be remembered that the rural populations in emerging growth markets tend to be a lot larger than those in mature markets, and as such revenue is in principle spread out more than would be the case in mature markets.
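
For anyone wanting to reproduce such an Ugly-Tail view on their own site-level data, the sketch below shows the idea on synthetic numbers: sort the sites by revenue and find what share of the site count carries the bottom 10% of revenue. The lognormal revenue distribution is purely an assumption for illustration; substitute actual per-site revenue.

```python
# Ugly-tail sketch: what share of cell sites carries the bottom 10% of revenue?
# The site revenues are synthetic (lognormal) purely for illustration;
# replace them with actual per-site revenue data.
import numpy as np

rng = np.random.default_rng(42)
site_revenue = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

sorted_rev = np.sort(site_revenue)                  # ascending: worst sites first
cum_share = np.cumsum(sorted_rev) / sorted_rev.sum()
sites_in_tail = np.searchsorted(cum_share, 0.10)    # sites carrying the bottom 10% of revenue

print(f"{sites_in_tail / site_revenue.size:.0%} of sites carry only 10% of the revenue")
```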

Population & Mobile Statistics and Opex Trends.

The following provides a 2013 Summary of Mobile Penetration, 3G Penetration (measured in subscriptions), Urban Population and the corresponding share of surface area under urban settlement. Further to guide the eye the 100% line has been inserted (red solid line), a red dotted line that represents the share of the population that is between 15 and 64 years of age (i.e., who are more likely to afford a mobile service) and a dashed red line providing the average across all the 43 countries analyzed in this Blog.

population & mobile penetration stats

  • Sources: United Nations, Department of Economic & Social Affairs, Population Division. The UN data is somewhat outdated, though for most data points across emerging and mature markets changes have been minor. Mobile Penetration is based on Pyramid Research and Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Index Mundi is the source for the country age structure and the data for the percentage of the population between 15 and 64 years of age, shown as a red dotted line which swings between 53.2% (Nigeria) and 78.2% (Singapore), with an average of 66.5% (red dashed line).

There are a couple of points (out of many) that can be made on the above data;

  1. There are no real emerging markets any longer in the sense of providing basic mobile telephone services such as voice and messaging.
  2. For mobile broadband data via 3G-UMTS (or LTE for that matter), what we tend to characterize as emerging markets are truly emerging or in some cases nascent (e.g., Algeria, Iraq, India, Pakistan, etc.).
  3. All mature markets have mobile penetration rates way above 100%, with the exception of Canada at ca. 80% (though getting to 100% in Canada might be a real challenge due to a very dispersed remaining 20+% of the population).
  4. Most emerging markets are by now covering all urban areas and the corresponding urban population. Many have also reached 100% mobile penetration rates.
  5. Most Emerging Markets are lagging Western Mature Markets in 3G penetration. Even providing urban population & urban areas with high bandwidth mobile data is behind that of mature markets.

Size & density does matter … in all kind of ways when it comes to the economics of mobile networks and the business itself.

In Australia I only need to cover ca. 40 thousand km2 (i.e., 0.5% of the total surface area and a bit less than the size of Denmark) to have captured almost 90% of the Australian population (e.g., Australia's total size is 180+ times that of Denmark excluding Greenland). I frequently hear my Australian friends telling me how Australia covers almost 100% of the population (and I am sure that they cover more area than the equivalent of Denmark too) … but, without being (too) disrespectful, that record is not heading for the Guinness Book of Records anytime soon. In the US (e.g., 20% more surface area than Australia) I need to cover almost 800 thousand km2 (8.2% of the surface area, or equivalent to a bit more than Turkey) to capture more than 80% of the population. In Thailand I can only capture 35% of the population by covering ca. 5% of the surface area, or a little less than 30 thousand km2 (approx. the equivalent of Belgium). The remaining 65% of the Thai population is rural-based and spread across a much larger surface area, requiring an extensive mobile network to provide coverage and capture additional market share outside the urban population.

So in Thailand I might need a bit fewer cell sites to cover 35% of my population (i.e., 22M) than in Australia to cover almost 90% of the population (i.e., ca. 21M). That's pretty cool economics for Australia, which is also reflected in a very low profitability risk score. For Thailand (and other countries with similar urban demographics) it is tough luck if they want to reach out and get the remaining 65% of their population. The geographical dispersion of the population outside urban areas is very wide, and an increasing geographical area must be covered in order to capture this population group. UMTS at 900 MHz will help to deploy economical mobile broadband, as will LTE in the APT 700 MHz band (be it either FDD Band 28 or TDD Band 44), as the terminal portfolio becomes affordable for rural and sub-urban populations in emerging growth markets.

In Western Europe, on average, I can capture 77% of my population (i.e., the urban pop) by covering 14.2% of the surface area (i.e., the average over the markets in this analysis). This is all very agreeable, and almost all Western European countries cover their surface areas to at least 80% and in most cases beyond that (i.e., it's just less & easier land to cover, though not per se less costly). In most cases rural coverage is encouraged (or required) by the mature market license regime and not always a choice of the mobile operators.

Before we look in depth at the growth (incl. positive as well as negative growth), let's first have a peek at what has happened to the mobile revenue in terms of ARPU and Number of Mobile Users, and the corresponding mobile penetration, over the period 2007 to 2013.

arpu development

  • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014 and Pyramid Research data were used to calculate the growth of ARPU as the compounded annual growth rate between 2007 and 2013 and the annual growth rate between 2012 and 2013. Since 2007 the mobile ARPUs have been in decline, and to make matters worse the decline has even accelerated rather than slowed down as markets' mobile penetration saturated.

mobile penetration

  • Source: Mobile Penetrations taken from Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014 and Pyramid Research data. Index Mundi is the source for the country age structure and the data for the percentage of the population between 15 and 64 years of age, shown as a red dotted line which swings between 53.2% (Nigeria) and 78.2% (Singapore), with an average of 66.5% (red dashed line). It is interesting to observe that most emerging growth markets are now where the mature markets were in 2007 in terms of mobile penetration.

Apart from a very few markets, ARPU has been in steady decline since 2007. Further, in many countries the ARPU decline has even accelerated rather than slowed down. From most mature markets the conclusion we can draw is that there is no evidence that mobile broadband data (via 3G-UMTS or LTE) has had any positive effect on ARPU, although some of the ARPU decline over the period in mature markets (particularly European Union countries) can be attributed to regulatory actions. In general, as soon as a country's mobile penetration reaches 100% (in effect reaching the part of the population 15-64 years of age), ARPU tends to decline faster rather than slower. Of course one may correctly argue that this is not a big issue as long as the ARPU times the Users (i.e., total revenue) remains growing healthily. However, as we will see, that is yet another challenge for the mobile industry, as the total revenue in mature markets is also in decline on a year-by-year basis. Given the market, revenue & cost structures of emerging growth markets, it is not unlikely that they will face similar challenges to their mobile revenues (and thus profitability). This could have a much more dramatic effect on their overall mobile economics & business models than what has been experienced in the mature markets, which have had a lot more “cushion” in their P&Ls to defend and even grow (albeit weakly) their profitability.

It is instructive to see that most emerging growth markets' mobile penetration has reached the levels of the Mature Markets in 2007. Combined with the introduction and uptake of mobile broadband data, this marks a more troublesome business-model phase than what these markets have experienced in the past. Some of the emerging growth markets have yet to introduce 3G-UMTS, and some may leapfrog mobile broadband by launching LTE. Both events, based on lessons learned from mature markets, herald a more difficult business-model period of managing cost structures while defending revenues from decline and satisfying customers' appetite for mobile broadband internet that cannot be supported by such countries' fixed telecommunications infrastructures.

For us to understand more profoundly where our mobile profitability is heading, it is obviously a good idea to understand how our Revenue and OpEx are trending. In this Section I am only concerned with the Mobile Market in a Country and not the individual mobile operators in the country. For the latter (i.e., Operator Profitability) you will find a really cool and exciting analytic framework in the Section after this one. I am also not interested (in this article) in modeling the mobile business bottom-up (been there & done that … but that is an entirely different story line). However, I am interested in, and I am hunting for, some higher-level understanding and a more holistic approach that will allow me probabilistically (by way of Bayesian analysis & ultimately inference) to predict in which direction a given market is heading when it comes to Revenue, OpEx and of course the resulting EBITDA and Margin. The analysis I am presenting in this Section is preliminary and only includes compounded annual growth rates as well as the year-by-year growth rates of Revenue and OpEx. Further developments will include specific market & regulatory developments as well, to further improve on the Bayesian approach. Given the wealth of data accumulated over the years from the Bank of America Merrill Lynch (BoAML) Global Wireless Matrix datasets, it is fairly easy to construct & train statistical models as well as to test them consistent with best practices.

The Chart below comprises 48 countries' Revenue & OpEx growth rates as derived from the “Bank of America Merrill Lynch (BoAML) Global Wireless Matrix Q1, 2014” dataset (note: the BoAML data available in this analysis goes back to 2003). Out of the 48 countries, 23 have an OpEx compounded annual growth rate higher than the corresponding Revenue growth rate. Thus, it is clear that those 23 countries have a higher risk of reduced margin and strained profitability due to an over-proportionate growth of OpEx. Out of the 23 countries with high or very high profitability risk, 11 have been characterized in macro-economic terms as emerging growth markets (i.e., China, India, Indonesia, Philippines, Egypt, Morocco, Nigeria, Russia, Turkey, Chile, Mexico); the remaining 12 can be characterized as mature markets in macro-economic terms (i.e., New Zealand, Singapore, Austria, Belgium, France, Greece, Spain, Canada, South Korea, Malaysia, Taiwan, Israel). Furthermore, 26 countries had higher OpEx growth between 2012 and 2013 than their revenues and are likely to be trending towards dangerous territory in terms of Profitability Risk.

cagr_rev&opex2007-2013

  • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Revenue depicted here is Service Revenue, and the OpEx has been calculated as Service Revenue minus EBITDA. The Compounded Annual Growth Rate (CAGR) is calculated as CAGR_{2007-2013}(X) = \left( \frac{X_{2013}}{X_{2007}} \right)^{\frac{1}{2013-2007}} - 1, with X being Revenue and OpEx. The Y-axis scale is from -25% to +25% (i.e., similar to the scale chosen in the Year-by-Year growth rate shown in the Chart below).

With few exceptions, one does not need to read the country names on the Chart above to immediately see where we have the Mature Markets with little or negative growth and where what we typically call emerging growth markets are located.

As the above Chart clearly illustrates, the mobile industry across different types of markets has an increasing challenge to deliver profitable growth and, if the trend continues, to keep its profitability, period!

OpEx grows faster than Mobile Operators can capture Revenue … That's a problem!

In order to gauge whether the growth dynamics of the last 7 years are something to be concerned about (it is! … it most definitely is! but humor me!) … it is worthwhile to take a look at the year-by-year growth rate trends (i.e., as CAGR only measures the starting point and the end point and “doesn't really care” about what happens in the in-between years).

annualgrowth2012-2013

  • Source: Bank of America Merrill Lynch Global Wireless Matrix Q1, 2014. Revenue depicted here is Service Revenues and the OPEX has been calculated as Service REVENUE minus EBITDA. Year on Year growth is calculated and is depicted in the Chart above. Y-axis scale is from -25% to +25%. Note that the Y-scales in the Year-on-Year Growth Chart and the above 7-Year CAGR Growth Chart are the same and thus directly comparable.

From the Year-on-Year Growth dynamics compared to the compounded 7-year annual growth rate, we find that the Mature Markets' Mobile Revenue decline has accelerated. However, in most cases the Mature Market OpEx is declining as well, and the Control & Management of the cost structure has improved markedly over the last 7 years. Despite the cost-structure management, most Mature Markets' Revenues have been declining faster than their OpEx. As a result, Profitability Squeeze remains a substantial risk in Mature Markets in general.

In almost all Emerging Growth Markets, the 2012-to-2013 revenue growth rate has declined in comparison with the compounded annual growth rate. This is not surprising, as most of those markets are heading towards 100% mobile penetration (as measured in subscriptions). OpEx growth remains a dire concern for most of the emerging growth markets and will continue to squeeze emerging markets' profitability and respective margins. There is no indication (in the dataset analyzed) that OpEx is really under control in Emerging Growth Markets, at least not to the same degree as observed in the Mature Markets (i.e., particularly Western Europe). What further adds to the emerging markets' profitability risk is that mobile data networks (i.e., 3G-UMTS, HSPA+, ..) and the corresponding mobile data uptake are just in their infancy in most of the Emerging Growth Markets in this analysis. The networks required to sustain demand (at a reasonable quality) are more extensive than what was required to provide okay-voice and SMS. Most of the emerging growth markets have no significant fixed (broadband data) infrastructure and, in addition, poor media distribution infrastructure which could otherwise relieve the mobile data networks being built. Huge rural populations with little available ARPU potential but a huge appetite to get connected to the internet and media will further stress the mobile business model's cost structure and sustainable profitability.

This argument is best illustrated by comparing the household digital ecosystem evolution (or revolution) in Western Europe with the projected evolution of Emerging Growth Markets.

emerging markets display & demand 

  • The above Chart illustrates the likely evolution in the Home and Personal Digital Infrastructure Ecosystem of an emerging market's Household (HH). Note in particular that the number of TV Displays is very low and much of the media distribution is expected to happen over cellular and wireless networks. An additional challenge is that the fixed broadband infrastructure is widely lagging in many emerging markets (in particular in sub-urban and rural areas), increasing the requirements on the mobile network in those markets. It is compelling to believe that we will witness completely different use-case scenarios of digital media consumption than experienced in the Western Mature Markets. The emerging market is not likely to have the same degree of mobile/cellular data off-load as experienced in mature markets and as such will strain mobile networks' air interface, backhaul and backbone substantially more than is the case in mature markets. Source: Dr. Kim K Larsen Broadband MEA 2013 keynote on “Growth Pains: How networks will supply data capacity for 2020”.

displays in homes _ western europe

  • Same as above, but the projection for Western Europe. In comparison with Emerging Markets, a Mature Market Household (HH) has many more TVs as well as a substantially higher fixed broadband penetration, offering high-bandwidth digital media distribution as well as off-load optionality for mobile devices via WiFi. Source: Dr. Kim K Larsen Broadband MEA 2013 keynote on “Growth Pains: How networks will supply data capacity for 2020”.

Mobile Market Profit Sustainability Risk Index

The comprehensive dataset from Bank of America Merrill Lynch Global Wireless Matrix allows us to estimate what I have chosen to call a Market Profit Sustainability Risk Index. This Index provides a measure for the direction (i.e., growth rates) of Revenue & Opex and thus for the Profitability.

The Chart below is the preliminary result of such an analysis limited to the BoAML Global Wireless Matrix Quarter 1 of 2014. I am currently extending the Bayesian Analysis to include additional data rather than relying only on growth rates of Revenue & Opex, e.g., (1) market consolidation should improve the cost structure of the mobile business, (2) introducing 3G usually introduces a negative jump in the mobile operator cost structure, (3) mobile revenue growth rate reduces as mobile penetration increases, (4) regulatory actions & forces will reduce revenues and might have both positive and negative effects on the relevant cost structure, etc.…

So here it is! Preliminary, but nevertheless directionally reasonable based on Revenue & OpEx growth rates: the Market Profit Sustainability Risk Index for 48 Mature & Emerging Growth Markets worldwide:

profitability_risk_index

The above Market Profit Sustainability Risk Index uses the following risk profiles (a sketch of this classification logic follows the list):

  1. Very High Risk (index –5): (i.e., for margin decline): (i) Compounded Annual Growth Rate (CAGR) between 2007 and 2013 of Opex was higher than equivalent for Revenue AND (ii) Year-on-Year (YoY) Growth Rate 2012 to 2013 of Opex higher than that of Revenue AND (iii) Opex Year-on-Year 2012 to 2013 Growth Rate is higher than the Opex CAGR over the period 2007 to 2013.
  2. High Risk (index –3): Same as above Very High Risk with condition (iii) removed OR YoY Revenue Growth 2012 to 2013 lower than the corresponding Opex Growth.
  3. Medium Risk (index –2): CAGR of Revenue lower than CAGR of Opex, but last year's (i.e., 2012 to 2013) growth rate of Revenue higher than that of Opex.
  4. Low Risk (index 1): (i) CAGR of Revenue higher than CAGR of Opex AND (ii) YoY Revenue Growth higher than Opex Growth but lower than the inflation of the previous year.
  5. Very Low Risk (index 3): Same as above Low Risk with YoY Revenue Growth Rate required to be higher than the Opex Growth with at least the previous year’s inflation rate.
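
As a sketch of how the above classification could be coded up (my own reading of the rules above; the inflation comparison for the two low-risk classes is simplified to "revenue must outgrow OpEx by at least the previous year's inflation" for Very Low Risk):

```python
def profit_sustainability_risk(rev_cagr, opex_cagr, rev_yoy, opex_yoy, inflation_prev_year):
    """Map Revenue & OpEx growth rates to the risk index described above.

    This is my own, simplified reading of the five risk profiles; it is meant
    as an illustration of the logic, not the exact scoring used for the Chart.
    """
    opex_outgrows_cagr = opex_cagr > rev_cagr   # condition (i) over 2007-2013
    opex_outgrows_yoy = opex_yoy > rev_yoy      # condition (ii) over 2012-2013

    if opex_outgrows_cagr and opex_outgrows_yoy and opex_yoy > opex_cagr:
        return -5   # Very High Risk
    if opex_outgrows_yoy:
        return -3   # High Risk: OpEx outgrowing Revenue year-on-year
    if opex_outgrows_cagr:
        return -2   # Medium Risk: long-term OpEx growth higher, but last year improved
    if rev_yoy - opex_yoy >= inflation_prev_year:
        return 3    # Very Low Risk
    return 1        # Low Risk

# Example with made-up growth rates: OpEx outgrowing Revenue on all measures.
print(profit_sustainability_risk(rev_cagr=0.02, opex_cagr=0.05,
                                 rev_yoy=0.01, opex_yoy=0.06,
                                 inflation_prev_year=0.02))   # -> -5
```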

The Outlook for Mature Markets is fairly positive, as most of those markets have engaged in structural cost control and management for the last 7 to 8 years. The Emerging Growth Markets' Profit Sustainability Risk Index is cause for concern. As the mobile markets saturate, this usually results in lower ARPU and higher cost to reach the remaining parts of the population (often “encouraged” by regulation). Most Emerging Growth Markets have started to introduce mobile data, which is likely to result in higher cost-structure pressure, with traditional revenue streams under pressure as well (if the history of the Mature Markets is to repeat itself in emerging growth markets). The Emerging Growth Markets have had little incentive (in the past) to focus on cost-structure control and management, due to the exceedingly high margins that they historically could present with their legacy mobile services (i.e., Voice & SMS) and relatively light networks (as always, in comparison to Mature Markets).

A cautionary note is appropriate. All of the above is based on Mobile Markets across the world. There are causes and effects that can move a market from having a high risk profile to a lower one. Even if I feel that the dataset supports the categorization, it remains preliminary, as more effects should be included in the current risk model to add even more confidence in its predictive power. Furthermore, the analysis is probabilistic in nature and as such does not claim to carve the future in stone. All the Index claims to do is to indicate a probable direction of the profitability (as well as Revenue & OpEx). There are several ways that Operators and Regulatory Authorities might influence the direction of the profitability, changing the Risk Exposure (in the Wrong as well as in the Right Direction).

Furthermore, it would be wrong to apply the Market Profit Sustainability Risk Index to individual mobile operators in the relevant markets analyzed here. The profitability dynamics of individual mobile operators are a wee bit more complicated, albeit some guidelines and predictive trends for their profitability dynamics in terms of Revenue and Opex can be defined. This will all be revealed in the following Section.

Operator Profitability – the Profitability Math.

We have seen that the Margin M can be written as

M = \frac{E}{R} = \frac{R - O}{R}, with E, R and O being EBITDA, Revenue and OpEx respectively.

However, much more interesting is that it can also be written as a function of subscriber share \sigma

\Delta = \delta - \frac{o_f}{r_u}\frac{1}{\sigma}, valid for all \sigma \in \, ]0,1], with \Delta being the margin and \sigma the subscriber market share, which can be found between 0% and 100%. The details will follow below; suffice to say that as the subscriber market share increases, the Margin (or relative profitability) increases as well, although not linearly (if anyone would have expected that).

Before we get down and dirty with the math, let's discuss Operator Profitability from a higher level and in terms of such an operator's subscriber market share (i.e., typically measured in subscriptions rather than individual users).

In the following I will show some individual Operator examples of EBITDA Margin dynamics from Mature Markets, limited to Western Europe. Obviously the analysis and approach is not limited to mature markets and can be (and has been) directly extended to Emerging Growth Markets, or any mobile market for that matter. Again, the BoAML Global Wireless Matrix provides a very rich dataset for applying the approach described in this Blog.

It has been well established (i.e., by un-accountable and/or un-countable Consultants & Advisors) that an Operator's Margin correlates reasonably well with its Subscriber Market Share, as the Chart below illustrates very well. In addition, the Chart below also includes the T-Mobile Netherlands profitability journey from 2002 to 2006, up to the point where Deutsche Telekom looked into acquiring Orange Netherlands, an event that took place in the Summer of 2007.

margin versus subscriber share

I do love the above Chart (i.e., must be the physicist in me?) as it shows that such a richness in business dynamics all boils down to two main drivers, i.e., Margin & Subscriber Market Share.

So how can an Operator strategize to improve its profitability?

Let us take an Example

margin growth by acquisition or efficiency

Here is how we can think about it in terms of Subscriber Market Share and EBITDA, as depicted by the above Chart. In simple terms, an Operator has a combination of two choices. (Bullet 1 in the above Chart) Improve its profitability through OpEx reductions, making its operation more efficient without much additional growth (i.e., also resulting in little subscriber acquisition cost); it can also improve its ARPU profile by increasing its revenue per subscriber (smiling a bit cynically here while writing this), again without adding much additional market share. The first part of Bullet 1 has been pretty much business as usual in Western Europe since 2004 at least (unfortunately with very few examples of the 2nd part of Bullet 1). (Bullet 2 in the above Chart) The above “Margin vs. Subscriber Market Share” Chart indicates that if you can acquire the customers of another company (i.e., via Acquisition & Merger), it should be possible to quantum-leap your market share while increasing the efficiency of the operation through scale effects. In the above Example Chart our Hero has ca. 15% Customer Market Share and the Hero's Target ca. 10%. Thus after an acquisition our Hero would expect to get to ca. 25% (if they play it well enough). Similarly we would expect a boost in profitability and hope for at least 38% if our Hero has an 18% margin and our Target has 20%. Maybe even better, as the scale should improve this further. Obviously, this kind of “math” assumes that our Hero and Target can work in isolation from the rest of the market and that no competitive forces would be at play to disrupt the well-thought-through plan (or that nothing otherwise disruptive happens in parallel with the merger of the two businesses). Of course such a venture comes with a price tag (i.e., the acquisition price) that needs to be factored into the overall economics of acquiring customers. As said, most (Western) Operators are in a perpetual state of managing & controlling cost to maintain their Margin and protect and/or improve their EBITDA.

So much for the theory! Let us see how the Dutch Mobile Market's profitability dynamics evolved over the 10-year period from 2003 to 2013;

mobile netherlands 10 year journey

From both KPN's acquisition of Telfort and T-Mobile's acquisition & merger of Orange in the above Margin vs. Subscriber Market Share Chart, we see that, in general, the market share logic works; the management of the integration would have had to be fairly unlucky for that not to be the case. When it comes to the EBITDA logic, it looks a little less obvious. KPN clearly got unlucky (if luck has anything to do with it?) as their margin declined, with only a small uplift later, albeit still lower than where they started pre-acquisition. KPN should have expected a margin lift to 50+%. That did not happen to KPN – Telfort. T-Mobile fared better, as we do observe a margin uplift to around 30% that can be attributed to Opex synergies resulting from the integration of the two businesses. However, it has taken many Opex efficiency rounds to get the margin up to the 38% that was the original target for the T-Mobile – Orange transaction.

In the past, it was customary to take lots of operators from many countries, plot their margin versus subscriber market share, draw a straight line through the data points, and conclude that the margin potential is directly related to the Subscriber Market Share. This idea is depicted by the Left-Side Chart below and its straight-line "Best" Fit to the data.

Let's just terminate that idea … it is wrong and does not reflect the true margin dynamics as a function of the subscriber market share. The margin is not a straight-line function of the subscriber market share; rather, it falls off asymptotically towards minus infinity as the market share approaches zero, i.e., when the company has no subscribers and no revenue but non-zero cost. We also observe diminishing returns on additional market share, in the sense that as more market share is gained, smaller and smaller incremental margin gains result. The magenta dashed line in the Left Chart below illustrates how one should expect the margin to behave as a function of subscriber market share.

the wrong & the right way to show margin vs subscriber share 

The Right Chart above breaks the data points down country by country. It is obvious that different countries have different margin versus market share behavior, and that drawing one curve through all of them might be a bit naïve.

So how can we understand this behavior? Let us start by making a very simple formula a lot more complex :-)

We can write the Margin \Delta as the ratio of Earnings before Interest, Tax, Depreciation & Amortization (EBITDA) and Revenue R: \Delta = \frac{EBITDA}{R} = \frac{R - O}{R} = 1 - \frac{O}{R}, where EBITDA is defined as Revenue minus Opex O. Both Opex and Revenue can be decomposed into a fixed and a variable part: O = O_f + AOPU \times U and R = R_f + ARPU \times U, with AOPU being the Average Opex per User, ARPU the Average (blended) Revenue per User and U the number of users. For the moment I will ignore the fixed part of the revenue and write R = ARPU \times U. Further, the number of users can be written as U = \sigma M, with \sigma being the market share and M the market size. So we can now write the margin as

\Delta = 1 - \frac{O_f + o_u \sigma M}{r_u \sigma M} = 1 - \frac{o_u}{r_u} - \frac{o_f}{r_u}\frac{1}{\sigma} = \delta - \frac{o_f}{r_u}\frac{1}{\sigma}, with \delta = 1 - \frac{o_u}{r_u}, o_f = \frac{O_f}{M} (i.e., the fixed Opex per total market subscriber), o_u = AOPU and r_u = ARPU.

\Delta = \delta - \frac{o_f}{r_u}\frac{1}{\sigma}, valid for all \sigma \in ]0,1]

The Margin is not a linear function of the Subscriber Market Share (if anybody would have expected that) but relates to the Inverse of Market Share.

Still, the Margin becomes larger as the market share grows, with a maximum achievable margin of \Delta_{\max} = 1 - \frac{o_u}{r_u} - \frac{o_f}{r_u} as the market share equals 1 (i.e., Monopoly). We observe that even in a Monopoly there is a limit to how profitable such a business can be. It should be noted that this limit is not a universal constant but a function of how operationally efficient a given operator is, as well as of its market conditions. Furthermore, as the market share reduces towards zero, \Delta \to -\infty.
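
To make the shape of this function concrete, here is a minimal Python sketch of the margin formula above. The ARPU, variable-Opex and fixed-Opex figures are the illustrative numbers used in the scenarios further below, not data from any particular operator.

```python
def margin(sigma, r_u=25.8, o_u=15.0, o_f=0.5):
    """Margin for subscriber market share sigma in ]0, 1].
    r_u: ARPU (EUR/user/month), o_u: variable Opex per user,
    o_f: fixed Opex per total market subscriber (illustrative values)."""
    if not 0 < sigma <= 1:
        raise ValueError("sigma must be in ]0, 1]")
    return (1 - o_u / r_u) - (o_f / r_u) / sigma

for share in (0.05, 0.10, 0.25, 0.33, 0.50, 1.00):
    print(f"market share {share:4.0%}: margin {margin(share):6.1%}")
# Margin rises steeply at low shares, flattens out, and tops out at
# Delta_max = 1 - o_u/r_u - o_f/r_u (~40% with these inputs) at 100% share.
```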

Fixed Opex (of) per total subscriber market: This cost element relates in principle to the cost structure that is independent of the number of customers a given mobile operator has. For example, a big country with a relatively low population (or mobile penetration) will have a higher fixed cost per total market subscriber than a smaller country with a larger population (or mobile penetration). Fixed cost is difficult to change, as it depends on the network and is country-specific in nature. For an individual Operator the fixed cost (per total market subscriber) will be influenced by;

  • Coverage strategy, i.e., to what extent the country's surface area will be covered, network sharing, national roaming vs. rural coverage, leased bandwidth, etc.
  • Spectrum portfolio, i.e., lower frequencies are more economical than higher frequencies for surface-area coverage but will in general have less bandwidth available (i.e., driving up the number of sites in capacity-limited scenarios). The only real exception to the bandwidth limitations of low-frequency spectrum would be the APT700 band (though it would "force" an operator to deploy LTE, which might not be timed right given the specifics of the market).
  • General economic trends, lease/rental cost, inflation, salary levels, etc.

Average Variable Opex per User (ou): This cost structure element captures costs that are directly related to the subscriber, such as

  • Market Invest (i.e., Subscriber Acquisition Cost SAC, Subscriber Retention Cost SRC), handset subsidies, usage-related cost, etc..
  • Any other variable cost directly associated with the customer (e.g., customer facing functions in the operator organization).

This behavior is exactly what we observe in the presented Margin vs. Subscriber Market Share data, and it also explains why the data needs to be treated on a country-by-country basis. It is worthwhile to note that the higher the market share, the less incremental margin gain should be expected from additional market share.

The above presented profitability framework can be used to test whether a given mobile operator is market & operationally efficient compared to its peers.

margin vs share example

The overall margin dynamics are shown in the above Chart for various settings of fixed and variable Opex as well as a given operator's ARPU. We see that as the fixed Opex (in relation to the total subscriber market) increases, it gets more difficult to become EBITDA positive, and increasingly more market share is required to reach reasonable profitability targets. The following maps a 3-player market according to the profitability logic derived here:

marke share dynamics

What we first notice is that operators in the initial phase of what you might call the "Market-share Capture Phase" are extremely sensitive to setbacks. A small loss of subscriber market share (e.g., 2%) can tumble the operator back into the abyss (e.g., a 15% margin setback) and wreak havoc on the business model. The profitability logic also illustrates that once an operator has reached market-share maturity, adding new subscribers is less valuable than keeping the existing ones. Even a big market share addition will only result in a little additional profitability (i.e., the law of diminishing returns).
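
This sensitivity follows directly from differentiating the margin formula (a small step not spelled out above): \frac{d\Delta}{d\sigma} = \frac{o_f}{r_u}\frac{1}{\sigma^2}, so the margin gained (or lost) per point of market share scales with 1/\sigma^2. Using the illustrative numbers from the scenarios further below (r_u = EUR 25.8, o_f = EUR 0.5), going from 5% to 7% share is worth roughly \frac{0.5}{25.8}\left(\frac{1}{0.05} - \frac{1}{0.07}\right) \approx 11 margin points, whereas the same 2 points gained from 40% to 42% are worth less than 0.3 margin points.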

The derived profitability framework can also be used to illustrate what happens to the margin in a steady market situation (i.e., only minor changes to an operator's market share), what the market share needs to be to keep a given margin, or how cost needs to be controlled in the event that ARPU drops and we want to keep our margin but cannot grow market share (or any other market, profitability or cost-structure exercise for that matter);

margin versus arpu & time etc

  • The above chart illustrates the margin as a function of ARPU & cost (fixed & variable) development at a fixed market share, here chosen to be 33%. The starting point is an ARPU ru of EUR 25.8 per month, a variable cost per user ou assumed to be EUR 15 and a fixed cost per total mobile user market (of) of EUR 0.5. The first scenario (a, Orange Solid Line), with an end-of-period margin of 32.7%, assumes that ARPU reduces by 2% per annum and that the variable cost can be controlled and likewise reduces by 2% pa. The fixed cost is here assumed to increase by 3% on an annual basis. During the 10-year period it is assumed that the Operator's market share remains at 33%. The second scenario (b, Red Dashed Line) is essentially the same as (a), with the only difference that the variable cost remains at the initial level of EUR 15 and does not change over time. This scenario ends at a 21.1% margin after 10 years. In principle it shows that Mobile Operators have no choice but to reduce their variable cost as ARPU declines (again the trade-off between certainty of cost and risk/uncertainty of revenue). In fact, the most successful mature mobile operators spend a lot of effort on managing & controlling their cost to keep their margin even as ARPU & Revenues decline. A minimal sketch reproducing these numbers follows below.
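
The trajectory in the chart can be reproduced with a few lines of Python. This is a minimal sketch using the illustrative figures above; the assumption that year 1 is the undiscounted starting point (so 9 annual compounding steps over the 10-year period) is mine, and it happens to match the 32.7% and 21.1% end points.

```python
def margin(r_u, o_u, o_f, sigma):
    # Delta = 1 - o_u/r_u - (o_f/r_u)/sigma
    return 1 - o_u / r_u - (o_f / r_u) / sigma

r_u, o_u, o_f, sigma = 25.8, 15.0, 0.5, 0.33
for year in range(1, 11):
    g = year - 1
    arpu  = r_u * 0.98 ** g   # ARPU erodes 2% per annum
    fixed = o_f * 1.03 ** g   # fixed cost grows 3% per annum
    var_a = o_u * 0.98 ** g   # scenario (a): variable cost follows ARPU down
    var_b = o_u               # scenario (b): variable cost stays flat
    print(f"year {year:2d}: (a) {margin(arpu, var_a, fixed, sigma):5.1%}"
          f"  (b) {margin(arpu, var_b, fixed, sigma):5.1%}")
# Year 10 ends around 32.7% for (a) and 21.1% for (b), as in the chart.
```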

market share as function of arpu etc

  • The above chart illustrates what market share is required to keep the margin at 36% when ARPU reduces by 2% pa, fixed cost increases by 3% pa, and the variable cost either (a, Orange Solid Line) can be reduced by 2% in line with the ARPU decline or (b, Red Solid Line) remains fixed at the initial level. In scenario (a) the mobile operator would need to grow its market share to 52% to maintain its margin at 36%. This will obviously be very challenging, as it would be at the expense of the other operators in this market (here assumed to be 3). Scenario (b) is extremely dramatic and, in my opinion, mission impossible, as it requires complete 100% market dominance (see the sketch after this bullet).
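
Rearranging the margin formula for \sigma gives the required market share directly: \sigma = \frac{o_f/r_u}{1 - o_u/r_u - \Delta}. A small sketch, again using the illustrative figures above:

```python
def required_share(r_u, o_u, o_f, target_margin):
    """Market share needed to hold target_margin; None if unattainable (>100%)."""
    headroom = 1 - o_u / r_u - target_margin
    if headroom <= 0:
        return None  # even 100% market share cannot deliver the target margin
    share = (o_f / r_u) / headroom
    return share if share <= 1 else None

# Year-10 values from the scenario above (2% pa ARPU decline, 3% pa fixed-cost growth)
arpu, fixed = 25.8 * 0.98 ** 9, 0.5 * 1.03 ** 9
print(required_share(arpu, 15.0 * 0.98 ** 9, fixed, 0.36))  # scenario (a): ~0.52
print(required_share(arpu, 15.0, fixed, 0.36))              # scenario (b): None, not even 100% share suffices
```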

variable cost development for margin

  • The above Chart illustrates how an operator needs to manage & control its variable cost, relative to the –2% pa decline, in order to keep the margin constant at 36%, assuming that the operator's subscriber market share remains at 33% over the period. The Orange Solid Line in the Chart shows the –2% pa variable cost decline and the Red Dashed Line the variable cost requirement to keep the margin at 36%.

The following illustrates the Profitability Framework described above applied to a few Western European markets. As this only serves as an illustration, I have chosen to show older data (i.e., from 2006). It is, however, very easy to apply the methodology to any country, and the BoAML Global Wireless Matrix with its richness in data can serve as an excellent source for such analysis. Needless to say, the methodology can be extended to assess an operator's profitability sensitivity to market share and market dynamics in general.

The Charts below show the Equalized Market Share, which simply means the fair market share of operators, i.e., with 3 operators the fair or equalized market share would be 1/3 (33.3%), with 4 operators 25%, and so forth. I am also depicting what I call the Max Margin Potential, which is simply the margin potential at 100% market share for a given set of ARPU (ru), AOPU (ou) and fixed cost (of) levels in relation to the total market.

netherlands

  • Netherlands Chart: The Equalized Market Share assumes Orange has been consolidated with T-Mobile Netherlands. The analysis would indicate that no more than ca. 40% margin should be expected in The Netherlands for any of the 4 mobile operators. Note that for T-Mobile and Orange, small increases in market share should in theory lead to larger margins, while KPN's margin would be pretty much unaffected by additional market share.

germany

  • Germany Chart: Shows Vodafone slightly higher and T-Mobile Deutschland slightly lower in margin than the idealized margin versus subscriber market share curve. At the time, T-Mobile relied almost exclusively on leased lines and outsourced its site infrastructure, while Vodafone relied almost exclusively on microwave backhaul and owned its own site infrastructure. The two newcomers to the German market (E-Plus and Telefonica-O2) are trailing on the left side of the Equalized Market Share. Had Telefonica and E-Plus merged at this point in time, one would have expected them eventually (post-integration) to exceed a margin of 40%. Such a scenario would lead to an almost-equilibrium market situation with the remaining 3 operators having similar market shares and margins.

france

austria

italy

united kingdom

denmark

Acknowledgement

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog. I certainly have not always been very present during the analysis and writing.

The ABC of Network Sharing – The Fundamentals (Part I).

  • Up-to 50% of Sites in Mobile Networks captures no more than 10% of Mobile Service Revenues.
  • The "Ugly" (cost) Tail of Cellular Networks can only be remedied by either removing sites (and thus low- or no-profit service) or by aggressive site sharing.
  • With Network Sharing expect up-to 35% saving on Technology Opex as well as future Opex avoidance.
  • The resulting Technology Opex savings easily translates into a Corporate Opex saving of up-to 5% as well as future Opex avoidance.
  • Active as well as Passive Network Sharing brings substantial Capex avoidance and improved sourcing economics by improved scale.
  • National Roaming can be an alternative to Network Sharing in low traffic and less attractive areas. Capex attractive but a likely Ebitda-pressure point over time.
  • "Sharing by Towerco" can be an alternative to Real Network Sharing. It is an attractive means of Capex avoidance but is not Ebitda-friendly. Long-term commitments combined with Ebitda risks make it a strategy that should be considered very carefully.
  • Network Sharing frees up cash to be spent in other areas (e.g., customer acquisition).
  • Network Sharing structured correctly can result in faster network deployment –> substantial time to market gains.
  • Network Sharing provides substantially better network quality and capacity for a lot less cash (compared to standalone).
  • Instant cell split option easy to realize by Network Sharing –> cost-efficient provision of network capacity.
  • Network Sharing offers an enhanced customer experience through improved coverage at lower cost.
  • Network Sharing can bring spectral efficiency gains of 10% or higher.

The purpose of this story is to provide decision makers, analysts and the general public with some simple rules that allow them to understand Network Sharing and assess whether it is likely to be worthwhile to implement and, of course, successful in delivering the promise of higher financial and operational efficiency.

Today's technology supports almost any network sharing scenario that can be thought of (or not). Financially, and not to forget strategically, this is far less obvious.

Network Sharing is not only about Gains, its evil twin Loss is always present.

Network Sharing is a great pre-cursor to consolidation.

Network sharing has been the new and old black for many years. It is a fashion that seems to stay and grow with and within the telecommunications industry. Not surprising, as we shall see that one of the biggest financial efficiency levers is in the Technology Cost Structure. Technology-wise there are no real stumbling blocks, even for very aggressive network sharing maximizing the amount of system resources being shared, passive as well as active. The huge quantum leap in availability of very high-quality and affordable fiber-optic connectivity in most mature markets, as well as between many countries, has pushed the sharing boundaries into the Core Network and Service Platforms, easily reaching into Billing & Policy Platforms, with regulation and law being the biggest blocking factors for Network-as-a-Service offerings. The figure below provides the anatomy of network sharing. It should of course be noted that within each category, several flavors of sharing are possible, depending on operator taste and regulatory possibilities.

anatomy of network sharing

Network Sharing comes in many different flavors. To only consider one sharing model is foolish and will likely result in a wrong benefit assessment, setting a sharing deal up for failure down the road (if it ever gets started). It is particularly important to understand that while active sharing provides the most comprehensive synergy potential, it tends to be a poor strategy in areas of high traffic potential. Passive sharing is a much more straightforward strategy in such areas. In rural areas, where traffic is less of an issue and profitability is a huge challenge, aggressive active sharing is much more interesting. One should even consider frequency sharing if permitted by the regulatory authority. The way I tend to look at the Network Sharing Flavors is as follows (as also depicted in the Figure below);

  1. Capacity Limited Areas (dense urban and urban) – Site Sharing or Passive Sharing most attractive and sustainable.
  2. Coverage Limited Areas (i.e., some urban environments, mainly sub-urban and rural) – Passive Sharing should be pursued as a minimum, with RAN (Active) Sharing providing an additional economic advantage.
  3. Rural Areas – National Roaming or Full RAN sharing including frequency sharing (if regulatory permissible).

networtksharingflavors

One of the first network sharing deals I got involved in was back in mid-2001 in The Netherlands. This was at the time of the Mobile Industry's first real cash crisis, just as we were about to launch this new exciting mobile standard (i.e., UMTS) that would bring the Internet to the pockets of the masses. After having spent billions & billions of dollars (i.e., way too much of course) on high-frequency 2100MHz UMTS spectrum, all justified by an incredibly optimistic (i.e., said in hindsight!) belief in the mobile internet business case, the industry could not afford to deploy the networks required to make our wishful thinking come true.

T-Mobile (i.e., aka Ben BV) engaged with Orange (i.e., aka Dutchtone) in The Netherlands on what should have been a textbook example of the perfect network sharing arrangement. We made a great business case for comprehensive network sharing. It made good financial and operational sense at the setup. At the time, the sharing game was about Capex avoidance and trying to get the UMTS network rolled out as quickly as possible within the very tight budgets imposed by our mother companies (i.e., Deutsche Telekom and France Telecom respectively). Two years down the road we revised our strategic thoughts on network sharing. We made another business case for why deploying standalone made more sense than sharing. At that time, the only thing we (T-Mobile NL) could really agree with Orange NL about was ancillary cabinet sharing and of course the underlying site sharing. Except for agreeing not to like the Joint Venture we had created (i.e., RANN BV), we were at odds on everything else, e.g., supplier strategy, degree of sharing, network vision, deployment pace, etc. Our respective deployment strategies had diverged so substantially from each other that sharing was no longer an option. Further, T-Mobile decided to rely on the ancillary cabinets we had in place for GSM –> so also no ancillary sharing. This was also at a time when cabinets and equipment took up a lot of space (i.e., do you still remember the first & 2nd generation 3G cabinets?). Many site locations simply could not sustain 2 GSM and 2 UMTS solutions. Our site demand went through the roof and pretty much killed the sharing case.

  • Starting point: Site Sharing, Shared Build, Active RAN and transport sharing.
  • Just before breakup I: Site Sharing, cabinet sharing if required, shared build where deployment plans overlapped.
  • Just before breakup II: Crisis over and almost out. Cash and Capex were no longer as critical as at the start.

It did not help that the Joint Venture RANN BV, created to realize T-Mobile & Orange NL's shared UMTS network plans, frequently was at odds with both founding companies. Both entities still had their full engineering & planning departments, including rollout departments (i.e., in effect we tried to coordinate across 3 rollout departments & 3 planning departments, 1 from T-Mobile, 1 from Orange and 1 from RANN BV … pretty silly! Right!). Eventually RANN BV was dissolved. The rest is history. Later, T-Mobile NL acquired Orange NL and engaged in a very successful network consolidation (on time and within budget).

The economic benefits of Sharing and Network Consolidation are pretty similar and follow pretty much the same recipe.

Luckily (if luck has anything to do with it?) there have since been more successful sharing projects, although the verdict is still out on whether these constructs are long-lived or not, and maybe also by what definition success is measured.

Judging from the more than 34 thousand views on the various public network sharing presentations I have delivered around the world since 2008, there certainly seems to be a strong and persistent interest in the topic.

  1. Fundamentals of Mobile Network Sharing (2012).
  2. Ultra-Efficient Network Factory: Network Sharing & other means to leapfrog operator efficiencies. (2012).
  3. Economics of Network Sharing. (2008).
  4. Technology Cost Optimization Strategies. (2009).
  5. Analyzing Business Models for Network Sharing Success. (2009).

I have worked on Network Sharing and Cost Structure Engineering since the early days of 2001. Initially the focus was on UMTS deployments and the need to deploy in a much more cash-efficient manner. Cash was a very scarce resource after the dot-com crash between 2000 & 2003. After 2004 the game changed to an Opex saving & avoidance game to mitigate stagnating customer growth and a revenue growth slowdown.

I have studied many Network Sharing strategies, concepts and deals in detail. A few have turned out successful (at least still alive & kicking) and many more unsuccessful (never made it beyond talk and analysis). One of the most substantial Network Sharing deals (arguably closer to network consolidation) that I worked on several years ago is still very much alive and kicking. That particular setup has been heralded as successful and a poster-boy example of the best of Network Sharing (or consolidation). However, by 2014 hardly any sites had been taken out of operation (certainly nowhere close to the numbers we assumed and based our synergy savings on).

More than 50% of all network related TCO comes from site-related operational and capital expenses.

Despite the great economic promises and operational efficiencies that can be gained by two mobile operations (fixed ones as well, for that matter) agreeing to share their networks, it is important to note that

It is NOT enough to have a great network sharing plan. A very high degree of discipline and razor-sharp focus in project execution is crucial for delivering network sharing on time and within budget.

With the introduction of UMTS & Mobile Broadband, mobile operators' margins & cash have come under increasing pressure (not helped by voice revenue decline & saturated markets).

Technology addresses up-to 25% of a Mobile Operator's Total Opex & more than 90% of the Capital Expenses.

The Radio Access Network easily accounts for more than 50% of all Network Opex and Capex.

For a reasonably efficient Telco operation, Technology Cost is the most important lever to slow the business decline, improve financial results and improve the return on investments.

P&L Optimization

The above Profit & Loss Figure serves as an illustration that Technology Cost (Opex & Capex) optimization is pivotal to achieving a more efficient operation and a lot more certain than relying on new business (and revenue) additions.

It is not by chance that RAN Sharing is such a hot topic. The Radio Access Network takes up more than half of Network Cost including Capex.

Of course there are many other general cost levers to consider that might be less complex than Network Sharing to implement. Another Black (or Dark Grey) is the outsourcing of (key) operational functions to a 3rd party. Think here about some of the main candidates:

  1. Site acquisition (SA) & landlord relations (LR) – Standard practice for SA, not recommended for landlord relations, which are usually better handled by the operator itself (at least while important during deployment).
  2. Site Build – Standard practice with sub-contractors.
  3. Network operations & maintenance – Cyclic between in-source and outsource depending on the business cycle.
  4. Field services – Standard practice, particularly in network sharing scenarios.
  5. Power management – Particularly interesting for network sharing scenarios with heavy reliance on diesel generators and fuel logistics (also synergetic with field services).
  6. Operational planning – Particularly relevant for comprehensive managed network services. A Network Sharing arrangement could outsource RAN & TX planning.
  7. Site leases – Have a site management company deal with site leases with a target to get them down by x% (they usually take a share of the reduced amount). Care should be taken not to jeopardize network sharing possibilities. Will impact landlord relations.
  8. IT operations – Cyclic between in-source and outsource depending on the business cycle.
  9. IT development – Cyclic between in-source and outsource depending on the business cycle.
  10. Tower infrastructure – Typically a cash-for-infrastructure swap with long-term Opex commitments. Care must be taken to allow for Network Sharing and infrastructure termination.

In general, many of the above potential outsourcing options (with the exception of IT, or at least in a different context than RAN Sharing) can be highly synergetic with Network Sharing and should always be considered when negotiating a deal.

Looking at the economics of managed services versus network sharing we find in general the following picture;

managedservicesvsnetwokrsharing

and remember that any managed service that is assumed to be applicable in the Network Sharing strategy column will enable the upper end of the estimated synergy potential. Having a deeper look at the original T-Mobile UK and Hutchison UK 3G RAN Sharing deal is very instructive, as it provides a view on what can be achieved when combining best practices of network sharing and shared managed services (i.e., this is the story for The ABC of Network Sharing – Part II).

Seriously consider Managed Services when it can be proven that at least 20% Opex synergies will be gained for apples-to-apples SLAs and KPIs (as compared to your insourced model).

Do your Homework! It is bad Karma to implement Managed Services on an inefficient organizational function or area that has not been optimized prior to outsourcing.

Do your Homework (Part II)! Measure, Analyze and Understand your own relevant cost structure 100% before outsourcing!

It is not by chance that Deutsche Telekom AG (DTAG) has been leading the Telco Operational Efficiency movement and has some of the most successful network sharing operations around. Since 2004, DTAG has run several (very) deep-dive programs into its cost structure, defining detailed initiatives across every single operation as well as at Group level. This has led to some of the most efficient Telco operations in Western Europe & the US, with lots to learn from when it comes to managing your cost structure when faced with stagnating revenue growth and increasing cost pressure.

In 2006, prior to another very big efficiency program being kicked off within DTAG, I was asked to take a very fundamental and extreme (but nevertheless realistic) look at all the European mobile operations' technology cost structures and come back with how much Technology Opex could be pulled out of them (without hurting the business) within 3-4 years (i.e., by 2010).

The below (historical) Figure illustrates my findings from 2006 (disguised but nevertheless the real deal);

fullnetworkpotential

This analysis (7-8 years old by now) directly resulted in a lot of Network Sharing discussions across DTAG's operations in Europe. Ultimately, this work led to a couple of successful Network Sharing engagements within the DTAG (i.e., T-Mobile) Western European footprint. It enabled some of the less efficient mobile operations to do a lot more than they could have done standalone, and at least one went from last place to number 1. So YES … Network Sharing & Cost Structure Engineering can be used to leapfrog an inefficient business and thereby transform an ugly duckling into what might be regarded as an approximation of a swan (in the particular example I have in mind, I will refrain from calling it a beautiful swan … because it really isn't … although the potential certainly remains, even more so today).

The observant reader will see that the order of things (or of cost structure engineering) matters. As already said above, the golden rule of outsourcing and managed services is to first ensure you have optimized what can be done internally and only then consider outsourcing. We found that outsourcing network operations or establishing a managed service relationship prior to a network sharing relationship was sub-optimal and might actually hinder reaching the most optimal network sharing outcome (i.e., full RAN sharing or active sharing with joint planning & operations).

REALITY CHECK!

Revenue growth will eventually slow down and might even decline due to the competitive climate, poor pricing management and regulatory pressures. A truism for all markets … it's just a matter of time. Opex growth is rarely in sync with the revenue slowdown. This will result in margin or Ebitda pressure and eventually a profitability decline.

Revenue will eventually stagnate and likely even enter decline. Cost is entropy-like and will keep increasing.

The technology refreshment cycles are not only getting shorter; they also impose additional pressure on cash, as return-on-investment periods become longer compared to the past. Paradoxical, as the lifetime of Mobile Telecom Infrastructure is shorter than it used to be. This vicious cycle requires the industry to leapfrog technology efficiency, driving demand for infrastructure sharing and business consolidation as well as new innovative business models (i.e., a topic for another Blog).

The time Telco’s have to return on new technology investments is getting increasingly shorter.

Cost saving measures are certain by nature. New Business & New (even Old) Revenue is by nature uncertain.

Back to NETWORK SHARING WITH A VENGEANCE!

I have probably learned more from the network sharing deals that failed than from the few that succeeded (in the sense of actually sharing something). I have worked on sharing deals & concepts across the world: in Western Europe, Central Eastern Europe, Asia and the USA, under very different socio-economic conditions, financial expectations, strategic incentives, and very diverse business cycles.

It is fair to say that over the time I have been engaged in Network Sharing strategies and operational realities, I have come to the conclusion that the best or most efficient sharing strategy depends very much on where an operator is in its business cycle and on the age of its network infrastructure.

The benefits that can potentially be gained from sharing will depend very much on whether your network is

  • Greenfield: Initial phase of deployment with more than 80% of sites to be deployed.
  • Young: Steady state with more than 80% of your sites already deployed.
  • Mature: Just in front of major modernization of your infrastructure.

The below Figure describes the three main cycles of network sharing.

stages_of_network_sharing

It should be noted that I have omitted the timing benefit aspects of the Rollout Phase (i.e., Greenfield) from the Figure above. The omission is on purpose. I believe (based on experience) that there is a greater likelihood of delays in deployment than of an obviously faster time-to-market. This is inherent in getting everything agreed that needs to be agreed in a Greenfield Network Sharing Scenario. If time-to-market matters more than initial cost efficiency, then network sharing might not be a very effective remedy. Once launch has been achieved and market entry secured, network sharing is an extremely good remedy for securing better economics in less attractive areas (i.e., typically rural and outer sub-urban areas). There are some obvious and very interesting games that can be played out with your competitor, particularly in the Rollout Phase … not all of them of an Altruistic Nature (to put it kindly).

There can be very good strategic arguments for not sharing economically attractive site locations, depending on the particular business cycle and competitive climate of a given market. The market potential of certain sites could justify not giving them up for sharing, particularly if a competitor's time-to-market in those highly attractive areas gets delayed as a result. This said, there is hardly any reason for not sharing rural sites, where the Ugly (Cost) Tail of low- or no-profitability sites is situated. Being able to share such low-to-no-profitability sites simply allows operators to re-focus cash on areas where it really matters. Sharing allows services to be offered in rural and under-developed areas at the lowest possible cost. Particularly in emerging markets' rural areas, where a fairly large part of the population lives, deploying and operating sites is a lot more expensive than in urban areas. Combined with rural areas' substantially lower population density, it follows that such sites are a lot harder to turn into a positive return on investment within their useful lifetime.

The Total Cost of Ownership of rural sites is in many countries substantially higher than that of their urban equivalents. Low or no site profitability follows.

In general, it can be shown that between 40% and 50% of a mature operator's sites generate less than 10% of the revenue and are substantially more expensive to deploy and operate than urban sites.

The ugly (cost) tail is a bit more “ugly” in mature western markets (i.e., 50+% of sites) than in emerging markets, as the customers in mature markets have higher coverage expectations in general.

ugly_tail

(Source: a Western European market. Similar ugly-tail curves are observed in many emerging markets as well, although the 10% breakpoint tends to be closer to 40%).

It is always recommended to analyze the most obvious strategic games that can be played out, and not only from your own perspective. More importantly, you need to have a comprehensive understanding of your competitors' (and sharing partners') games and their most efficient path (which is not always synergetic with, or matching, your own). Cost Structure Engineering should not only consider your own cost structure but also those of your competitors and partners.

Sharing is something that is very fundamental to the human nature. Sharing is on the fundamental level the common use of a given resource, tangible as well as intangible.

Sounds pretty nice! However, sharing is rarely altruistic in nature, i.e., let's be honest … why would you help a competitor get stronger financially and have him spend his savings on customer acquisition … unless of course you achieve similar or preferably better benefits. It is a given that all sharing stakeholders should stand to benefit from the act of sharing. The more asymmetric the perceived or tangible sharing benefits are, the less stable a sharing relationship will be (or will become over time if the benefit distribution changes significantly).

The recipe for a successful sharing partnership is that both sharing partners have the perception of a deal that offers reasonably symmetric benefits.

It should be noted that a perception of symmetric benefits does not mean per se that every saving or avoidance dollar of benefit is exactly the same for both partners. One stakeholder might get access to more coverage or capacity faster than in standalone. The other stakeholder might be more driven by budgetary concerns, with sharing allowing a more extensive deployment than would otherwise have been possible within the allocated budgets.

Historically, most network sharing deals have focused on RAN Sharing, comprising radio access network (RAN) site locations, related passive infrastructure (e.g., towers, cabinets, etc.) and various degrees of active sharing. Recent technology developments such as software-defined networking (SDN) and virtualization concepts (e.g., Network Function Virtualization, NFV) have made sharing of core network and value-added service platforms interesting as well (or at least more feasible). Another financially interesting industry trend is to spin off an operator's tower assets to 3rd-party Tower Management Companies (TMC). The TMC pays upfront a cash equivalent of the value of the passive tower infrastructure to the Mobile Network Operator (MNO). The MNO then leases (i.e., Opex) the tower assets back from the TMC. Such tower asset deals provide the MNO with upfront cash and the TMC with a long-term lease income from the MNO. In my opinion, such Tower deals tend to be driven by MNOs' short-term cash needs without much regard for longer-term profitability and Ebitda (i.e., Revenue minus Opex) developments.

With the ever-increasing demand for more and more bandwidth feeding our customers' mobile internet consumption, fiber-optic infrastructures have become a must-have. Legacy copper-based fixed transport networks can no longer support such bandwidth demands. Over the next 10 years, all Telcos will face massive investments in fiber-optic networks to sustain the ever-growing demand for bandwidth. Sharing such investments should be obvious and straightforward. In this area we are also faced with the choice of passive (i.e., the Dark Fiber itself) as well as active (i.e., DWDM) infrastructure sharing.

NETWORK SHARING SUCCESS FACTORS

There are many consultants out there who evangelize network sharing as the only real cost reduction / saving measure left to the telecom industry. In theory they are not wrong. The stories that will be told are almost too good to be true. Are you "desperate" for economic efficiency? You might then get very excited by the network sharing promise and forget that network sharing also has a cost side to it (i.e., here, forgetting and denial are fairly interchangeable).

In my experience Network Sharing boils down to  the following 4 points:

  • Who to share with? (your equal, your better or your worse).
  • What to share? (sites, passives, actives, frequencies, new sites, old sites, towers, rooftops, organization, …).
  • Where to share? (rural, sub-urban, urban, regional, all, etc..).
  • How to share? (“the legal stuff”).

In my more than 14 years of thinking about and working on Network Sharing, I have come to the following heuristics for the pre-requisites of successful network sharing:

  • CEOs agree with & endorse Network Sharing.
  • Sharing Partners have similar perceived benefits (win-win feel).
  • Focus on creating a better network for less and with better time-to-market.
  • Both parties share a similar end-goal and have a similar strategic outlook.

While it seems obvious, it is often forgotten that Network Sharing is a very long-term engagement ("for Life!"), and as in any other relationship (particularly the JV kind), do consider that a break-up can happen … so be prepared (i.e., "legal stuff").

Compared to 14 – 15 years ago, technology pretty much supports Network Sharing in all its flavors and is no longer a real show-stopper for engaging with another operator to share networks and (eventually) reap the financial benefits of such a relationship. References on the technical options for network sharing can be found in 3GPP TS 22.951 ("Service aspects and requirements for network sharing") and 3GPP TS 23.251 ("Network Sharing; Architecture and functional description"). Obviously, today 3GPP support for network sharing runs through most of the 3GPP technical requirements and specification documents.

Technology is not a show-stopper for Network Sharing. The Economics might be!

COST STRUCTURE CONSIDERATIONS.

Before committing manpower to a network sharing deal, there are a couple of pretty basic "litmus tests" to be done to see whether the economic savings being promised make sense.

First, understand your own cost structure (i.e., Capex, Opex, Cash and Revenues) and in particular where Network Sharing will make an impact – positive as well as negative. I am, more often than not, surprised at how few Executives and Senior Managers really understand their own company's cost structure. Thus they are not able to quickly spot unrealistic financial & operational promises.

Seek answers to the following questions:

  1. What is the Total Technology Opex (Network & IT) share out of the Total Corporate Opex?
  2. What is the Total Network Opex out of Total Technology Opex?
  3. What is the Total Radio Access Network (RAN) Opex out of the Total Network Opex?
  4. Out of the Total RAN Opex how much relates to sites including Operations & Maintenance?

expectation management

In general, I would expect the following answers to the above questions, based on many mobile operator cost structure analyses across many different markets (from mature to very emerging; from Western Europe, Central Eastern & Southern Europe, to the US and Asia-Pacific).

  1. Technology Opex is 20% to 25% of Total Corporate Opex, defined as "Revenue-minus-Ebitda" (depends a little on the degree of leased-line & diesel generator dependence).
  2. Network Opex should be between 70% and 80% of the Technology Opex.
  3. RAN-related Opex should be between 50% and 80% of the Network Opex. Of course, here it is important to understand that not all of this Opex might be impacted by Network Sharing, or at least the impact would depend on the Network Sharing model chosen (e.g., active versus passive).

Let's assume that a given RAN network sharing scenario provides a 35% saving on Total RAN Opex. That would be 35% (RAN saving) x 60% (RAN share of Network Opex) x 75% (Network share of Technology Opex) x 25% (Technology share of Corporate Opex), which yields a total network sharing saving of ca. 4% on the Corporate Opex (see the quick check below).
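
A quick sanity check of that cascade in Python, using mid-range shares from the answers above (purely illustrative):

```python
ran_saving        = 0.35   # saving on Total RAN Opex from network sharing
ran_of_network    = 0.60   # RAN share of Network Opex (range 50%-80%)
network_of_tech   = 0.75   # Network share of Technology Opex (range 70%-80%)
tech_of_corporate = 0.25   # Technology share of Corporate Opex (range 20%-25%)

corporate_saving = ran_saving * ran_of_network * network_of_tech * tech_of_corporate
print(f"Corporate Opex saving: {corporate_saving:.1%}")  # ~3.9%, i.e., ca. 4%
```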

A saving on Opex obviously translates into a corresponding gain in Ebitda (i.e., Earnings before Interest, Tax, Depreciation & Amortization). The relative Ebitda uplift is given as follows

\frac{E_2 - E_1}{E_1} = \frac{1 - m_1}{m_1}\,x (with E_1 and E_2 representing Ebitda before and after the relative Opex saving x, and m_1 the margin before the Opex saving, assuming that Revenue remains unchanged after the Opex saving has been realized).

From the above we see that when the margin is exactly 50% (i.e., a fairly unusual phenomenon in most mature markets), a relative saving in Opex corresponds directly to an identical relative gain in Ebitda. When the margin is below 50%, the relative impact on Ebitda is higher than the relative saving on Opex. If your margin was 40% prior to a realized Opex saving of 5%, one would expect the relative Ebitda gain to be 1.5x that saving, or 7.5% (the derivation is sketched below).
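
For completeness, the formula follows in two lines from the definitions (my notation: R is the unchanged revenue and O_1 the pre-saving Opex): E_1 = R - O_1 = m_1 R and O_1 = (1 - m_1)R. A relative Opex saving x gives E_2 = R - (1 - x)O_1 = E_1 + x\,O_1, hence \frac{E_2 - E_1}{E_1} = x\frac{O_1}{E_1} = \frac{1 - m_1}{m_1}x. With m_1 = 40% the multiplier is (1 - 0.4)/0.4 = 1.5, so a 5% Opex saving lifts Ebitda by 7.5%, as stated above.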

In general, I would expect up-to 35% Opex savings on the relevant technology cost structure from network sharing on established networks. If a much larger saving is claimed, we should get skeptical of the analysis and certainly not take it at face value. It is not unusual to see Network Sharing contributing as much as a 20% saving (and avoidance on run-rate) on the overall Network Opex (ignoring IT Opex here!).

Why not 50% saving (or avoidance)? You may ask! But only once please!

After all, we are taking 2 RAN networks and migrating them into 1 network … surely that should result in a 50% saving (i.e., always on the relevant cost structure).

First of all, not all cost structure relevant to cellular sites is in general relevant to network sharing. Think here about energy consumption and transport solutions as the most obvious examples. Further, landlords are not likely to allow you to directly share existing site locations, and thus site lease cost, with another operator without asking for an increased lease (i.e., 20% to 40% increases are not unheard of). Existing lease contracts might need to be opened up to allow sharing, terms & conditions will likely need to be re-negotiated, etc. In the end, site lease savings are achievable, but they will not translate into a 50% saving.

WARNING! 50% saving claims as a result of Network Sharing are not to be taken at face value!

Another interesting effect is that the shared network will eventually end up with more sites than either of the two standalone networks (and hopefully fewer than the combined number of sites prior to sharing & consolidation). The reason for this is that the two sharing parties' networks are rarely completely symmetric when it comes to coverage. Thus, the shared network will be somewhat bigger than the standalone networks, which safeguards the customer experience and hopefully the revenue in a post-merged network scenario. If the ultimate shared network has been planned & optimized properly, both parties' customers will experience increased network quality in terms of coverage and capacity (i.e., speed).

#SitesA , #SitesB < #SitesA+B < #SitesA + #SitesB

The Shared Network should always provide a better network customer experience than each standalone network.

I have experienced Executives argue (usually post-deal, obviously!) that it is not possible to remove sites, as any site removed will destroy the customer experience. Let me be clear: if the shared network is planned & optimized according to best practices, it will deliver a substantially better network experience to the combined customer base than the respective standalone networks.

Let's dive deeper into the Technology Cost Structure. As the Figure below shows (i.e., typical for mature western markets), we have the following high-level cost distribution for the Technology Opex:

  1. 10% to 15% for Core Network
  2. 20% to 40% for IT & Platforms and finally
  3. 45% to 70% for RAN.

The RAN Opex for markets without energy distribution challenges (i.e., with a mature & reliable energy delivery grid) is split into (a) ca. 40% (i.e., of the RAN Opex) for Rental & Leasing, which is clearly addressable by Network Sharing, (b) ca. 25% for Services including Maintenance & Repair, of which at least the non-Telco part is easily addressable by Network Sharing, (c) ca. 15% Personnel Cost, also addressable by Network Sharing, (d) 10% Leased Lines (typically backhaul connectivity), which is less dependent on Network Sharing, although bandwidth volume discounts might be achievable by sharing connectivity to a shared site, and finally (e) Energy & other Opex costs, which would in general not be impacted substantially by Network Sharing. Note that for markets with a high share of diesel generators and fuel logistics, the share of Energy cost within the RAN Opex cost category will be substantially larger than depicted here.
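
Summing the addressable buckets gives the rough share of RAN Opex that sharing can touch; a tiny Python sketch of that arithmetic (the 10% residual for energy & other is my completion of the shares listed above):

```python
# Illustrative RAN Opex split (mature market, reliable power grid) and the
# share that network sharing can plausibly address.
ran_opex_split = {
    "rental & leasing":        (0.40, True),   # clearly addressable by sharing
    "services incl. M&R":      (0.25, True),   # at least the non-Telco part
    "personnel":               (0.15, True),   # addressable
    "leased lines (backhaul)": (0.10, False),  # mostly volume discounts at best
    "energy & other":          (0.10, False),  # largely unaffected (grid-powered markets)
}
addressable = sum(share for share, is_addressable in ran_opex_split.values() if is_addressable)
print(f"Share of RAN Opex addressable by sharing: {addressable:.0%}")  # ~80%
```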

It is important to note here that sharing of Managed Energy Provision, similar to a Tower Company lease arrangement, might provide financial synergies. However, typically one would expect Capex avoidance (i.e., by not buying power systems) at the expense of an increased Energy Opex cost (compared to standalone energy management) for the managed services. Obviously, if such a power managed service arrangement can be shared, there might be some synergies to be gained from it. In my opinion, this is particularly interesting for markets with a high reliance on diesel generators and fuelling logistics. This said,

Power sharing in mature markets with high electrification rates can offer synergies on energy via applicable volume discounts, though it would require shared metering (which might not always be particularly well appreciated by power companies).

technology cost distribution

Maybe as much as

80% of the total RAN Opex can be positively impacted (i.e., reduced) by network sharing.

The above cost structure illustration also explains why I rarely get very excited about sharing measures in the Core Network Domain (i.e., I have spent too much time in the past explaining that while NG Core Network sharing might save 50% of the relevant cost, it really was not very impressive in absolute terms, and effort was better spent on more substantial cost structure elements). Assume you can save 50% (which is a bit on the wild side today) on Core Network Opex (even Capex is, in proportion to RAN, fairly smallish). That 50% saving on Core translates into maybe a maximum of 5% of the Network Opex, as opposed to RAN's 15% – 20%. Sharing Core Network resources with another party also requires substantially more overhead management and supervision than even fairly aggressive RAN sharing scenarios (with substantial active sharing).

This said, I believe that there are some internal efficiency measures for Telco Groups (with superior interconnection) and very interesting new business models out there that provide core network & computing infrastructure as a service to Telcos (and in principle allow multiple Telcos to share the core network platforms and resources). My 2012 presentation on the Ultra-Efficient Network Factory: Network Sharing & other means to leapfrog operator efficiencies illustrates how such business models might work out. The first of the figures below describes in largely generic terms how virtualization (e.g., NFV) and cloud-based technologies could be exploited. The LTE-as-a-Service model (it could be UMTS-as-a-Service as well, of course) is more operator-specific. The verdict is still out on whether truly new business models can provide meaningful economics for customer networks and business. In the longer run, I am fairly convinced that scale and the expected massive improvements in connectivity within and between countries will make these business models economically interesting for many tier-2, tier-3 and Generation-Z businesses.

businessmodels2

businessmodels1

BUT BUT … WHAT ABOUT CAPEX?

From a Network Sharing perspective, Capex synergies or Capex avoidance are particularly interesting at the beginning of a network rollout (i.e., the Rollout Phase) as well as at the end of the Steady State, where technology refreshment is required (i.e., the Modernization Phase).

Obviously, in a site-deployment-heavy scenario (e.g., start-ups), sharing the materials and construction cost of a greenfield tower or rooftop (insofar as it can be shared) will dramatically lower the capital cost of deployment. In particular, as you and your competitor(s) would likely want to cover pretty much the same places, sharing becomes very compelling and a rational choice. That is, unless it is more attractive to block your competitor from gaining access to interesting locations.

Irrespective of this, between 40% and 50% of an operator's sites will only generate up to 10% of the turnover. Those ugly-cost-tail sites will typically be in rural areas (including forests) and will also, on average, be more costly to deploy and operate than sites in urban areas and along major roads.

Sharing 40% – 50% of sites, also known as the ugly-cost-tail sites, should really be a no brainer!

Depending on the market, the country's particulars, and whether we look at emerging or mature markets, there might be more or fewer tower sites versus rooftops. Rooftops are less obvious passive sharing candidates, while towers obviously are almost perfect passive sharing candidates, provided the link budget for the coverage can be maintained post-sharing. Active sharing does make rooftop sharing more interesting and might reduce the tower design specifications and thus optimize Capex further in a deployment scenario.

As operators face RAN modernization pressures, it can become very interesting, Capex-wise, to discuss active as well as passive sharing with a competitor in the same situation. There are joint-procurement benefits to be gained as well as site consolidation scenarios that will offer better long-term Opex trends. In particular, T-Mobile and Hutchison in the UK (and T-Mobile and Orange as well, in the UK and beyond) have championed this approach, reporting very substantial sourcing Capex synergies from shared procurement. Note that network sharing and shared sourcing in a modernization scenario do not force operators to engage in full active network sharing. However, agreement on the infrastructure supplier(s) is a pre-requisite.

Network Sharing triggered by modernization requirements is primarily interesting (again, Capex-wise) if part of the electronics and ancillary equipment can be shared (i.e., active sharing). A supplier match is obviously a must for optimum benefits. Otherwise, the economic benefits will be weighted towards Opex, provided a sizable number of sites can be phased out as a result of site consolidation.

total_overview

The above Figure provides an overview of the most interesting components of Network Sharing. It should be noted that Capex prevention is particularly relevant to (1) the Rollout Phase and (2) the Modernization Phase. Opex prevention is applicable throughout the 3 main stages of the Network Sharing Attractiveness Cycle. In general, the regulatory complexity tends to be higher for Active Sharing Scenarios and less problematic for Passive Sharing Scenarios. In general, Regulatory Authorities would (or should) encourage & incentivize passive site sharing, ensuring that an optimum site infrastructure (i.e., number of towers & rooftops) is being built out (in greenfield markets) or consolidated (in established / mature markets). Even today it is not unusual to find several towers, each occupied by a single operator, next to each other or within a hundred meters of each other.

NETWORK SHARING DOES NOT COME FOR FREE!

One of the first things a responsible executive should ask when faced with the wonderful promises of network sharing synergies in the form of Ebitda and cash improvements is

What does it cost me to network share?

The amount of re-structuring or termination cost that will be incurred before Network Sharing benefits can be realized depends a lot on which part of the Network Sharing Cycle the operators are in.

(1) The Rollout Phase, in which case re-structuring cost is likely to be minimal as there is little or nothing to restructure. Further, in this case write-offs of existing investments and assets would likewise be very small or non-existent, depending on how far into the rollout the business is. What might complicate matters is whether sourcing contracts need to be changed or cancelled, resulting in possible penalty costs. In any event, being able to deploy the network together from the beginning does (in theory) result in the least deployment complexity and the best deployment economics. However, getting to the point of agreeing on a shared deployment (which also requires a reasonably common site grid) might be a long and bumpy road. Ultimately, launch timing will be critical to whether two operators can agree on all the bits and pieces in time not to endanger the targeted launch.

Network Sharing in the Rollout Phase is characterized by

  • Little restructuring & termination cost expected.
  • High Capex avoidance potential.
  • High  Opex avoidance potential.
  • Little to no infrastructure write-offs.
  • Little to no risk of contract termination penalties.
  • “Normal” network deployment project (though can be messed up by too many cooks syndrome).
  • Best network potential.

    (2) The Steady State Phase, where a substantial part of the network has already been rolled out, tends to be the most complex and costly phase in which to engage in Network Sharing, passive and of course active. A substantial number of site leases would need to be broken, terminated or re-structured to allow for network sharing. In all cases, either penalties or lease increases are likely to result. Infrastructure supplier contracts, typically maintenance & operations agreements, might likewise have to be terminated or changed substantially. The same holds for leased transmission. Write-offs can be very substantial in this phase, as relatively new sites might be terminated, new radio equipment might become redundant or be phased out, etc. If one or both sharing partners are in this phase of the business & network cycle, the chance of a network sharing agreement is low. However, if a substantial amount of both parties' site locations will be used to enhance the resulting network, and a substantial part of the active equipment will be re-used and contracts expanded, then sharing tends to go ahead. A good example of this is the Vodafone and O2 site sharing agreement in the UK, with the aim to leapfrog the number of sites to match that of EE (the Orange + T-Mobile UK JV) for improved customer experience and to remain competitive with the EE network.

    Network Sharing in the Steady State Phase is characterized by

  • Very high restructuring & termination cost expected.
  • None or little Capex synergies.
  • Substantial Opex savings potential.
  • Very high infrastructure write-offs.
  • Very high termination penalties incl. site lease termination.
  • Highly complex consolidation project.
  • Medium to long-term network quality & optimization issues.

    (3) Once operators approach the Modernization Phase, more aggressive network sharing scenarios can be considered, including joint sourcing and infrastructure procurement (e.g., a la T-Mobile UK and Hutchinson in the UK). At this stage, the remaining site lease terms will typically be shorter and the penalties due to lease termination lower as a result. Furthermore, at this point in time little (or at least substantially less than in the Steady State Phase) residual value should remain in the active as well as the passive infrastructure. The Modernization Phase is a very opportune moment to consider network sharing, passive as well as active, resulting in both substantial Capex avoidance and of course very attractive Opex savings, mitigating a stagnating or declining topline as well as de-risking future loss of profitability.

    Network Sharing in the Modernization Phase is characterized by

    • Relatively moderate restructuring & termination cost expected.
    • High Capex avoidance potential.
    • Substantial Opex saving potential.
    • Little infrastructure write-offs.
    • Lower risk of contract termination penalties.
    • Manageable consolidation project.
    • Instant cell splits and cost-efficient provision of network capacity.
    • More aggressive network optimization –> better network.

    As a rule of thumb, I usually recommend estimating restructuring / termination cost as follows (i.e., if you don’t have the real terms & conditions of the contracts at hand); see also the small numeric sketch after this list:

    1. 1.5 to 3+ times the estimated Opex savings – use the higher multiple in the Steady State Phase and the lower for the Modernization Phase.
    2. Consolidation Capex will often be partly synergetic with Business-as-Usual (BaU) Capex and should not be fully considered (typically between 25% to 50% of consolidation Capex can be mapped to BaU Capex).
    3. Write-offs should be considered and will be the most painful to cope with in the Steady State Phase.
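    As a very rough illustration of these rules of thumb, here is a minimal sketch; all input figures are hypothetical placeholders, not numbers from any real deal:

```python
# Minimal sketch of the rule-of-thumb restructuring estimate above.
# All figures are hypothetical placeholders; plug in your own deal numbers.

def restructuring_estimate(annual_opex_savings_m, phase, consolidation_capex_m,
                           bau_overlap=0.35):
    """Return (restructuring cost, incremental consolidation Capex) in money units.

    phase: 'steady_state' uses the higher multiple (~3x the Opex savings),
           anything else (e.g., 'modernization') uses the lower multiple (~1.5x).
    bau_overlap: share of consolidation Capex assumed to map to Business-as-Usual
                 Capex (rule of thumb: 25% to 50%).
    """
    multiple = 3.0 if phase == "steady_state" else 1.5
    restructuring_cost = multiple * annual_opex_savings_m
    incremental_capex = (1.0 - bau_overlap) * consolidation_capex_m
    return restructuring_cost, incremental_capex

# Example: 20M annual Opex savings, 30M consolidation Capex, Steady State Phase.
cost, capex = restructuring_estimate(20, "steady_state", 30)
print(f"Restructuring/termination estimate: {cost:.0f}M, "
      f"incremental consolidation Capex: {capex:.0f}M")
```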

    NATIONAL ROAMING AS AN ALTERNATIVE TO NETWORK SHARING.

    A National Roaming agreement will save network investments and the resulting technology Opex. So in terms of avoiding technology cost, that’s an easy one. Of course, from a Profit & Loss (P&L) perspective I am replacing my technology Opex and Capex with wholesale cost somewhere else in my P&L. Whether National Roaming is attractive or not will depend a lot on the anticipated traffic and of course on the wholesale rate the hosting network will charge for the national roaming service. Hutchinson in the UK (as well as in other markets) had for many years a GSM national roaming agreement with Orange UK that allowed its customers basic services outside its UMTS coverage footprint. In Austria, for example, Hutchinson (i.e., 3 Austria) provides its customers with GSM national roaming services on T-Mobile Austria’s 2G network (i.e., where 3 Austria does not cover with its own 3G), and T-Mobile Austria has a 3G national roaming arrangement with Hutchinson in areas that it does not cover with 3G.

    In my opinion, whether national roaming makes sense or not really boils down to 3 major considerations for both parties:

    national_roaming

    There are plenty of examples of National Roaming, which in principle can provide similar benefits to infrastructure sharing through the avoidance of Capex & Opex, replaced by the cost associated with the traffic on the hosting network. The Hosting MNO gets wholesale revenue from the national roaming traffic, which the Host supports in low-traffic areas or on an under-utilized network. National roaming agreements or relationships tend to be of a temporary nature.

    It should be noted that National Roaming is defined for an area where one party, The Host, has network coverage (with excess capacity) and another operator (i.e., The Roamer or The Guest) has no network coverage but has a desire to offer its customers service in that particular area. In general, only the host’s HPLMN is broadcast on the national roaming network. However, with the Multi-Operator Core Network (MOCN) feature it is possible to present the national roamer with the experience of his own network, provided the roamer’s terminal equipment supports MOCN (i.e., Release 8 & later terminal equipment will support this feature).

    In many Network Sharing scenarios both parties have existing and overlapping networks and would like to consolidate their networks into one shared network without losing service quality. The reduction in site locations provides the economic benefits of network sharing. Throughout the shared network both operators will radiate their respective HPLMNs, and the shared network will be completely transparent to their respective customer bases.

    While I have been part of several discussions about shutting down a network in geographical areas of a market and moving the customers to a host’s overlapping (or better) network via a national roaming agreement, I am not aware of mobile operators that have actually gone down this path.

    From a regulatory and spectrum-safeguarding perspective, it might be a better approach to commission both parties’ frequencies on the same network infrastructure and make use of, for example, the MOCN feature that allows full customer transparency (at least for Release 8 and later terminals).

    national_roaming _examples

    National Roaming is fully standardized and a well-proven arrangement in many markets around the world. One does need to be a bit careful with how the national roaming areas are defined/implemented and also with how customers move back and forth from a national roaming area (and technology) to the home area (and technology). I have seen national roaming arrangements not being implemented because the dynamics were too complex to manage. The “cleaner” the national roaming area is, the simpler the on-off national roaming dynamics become. By “clean” I mean: keep the number of boundaries between own and national roaming network low, go for contiguous areas rather than many islands, avoid different technology coverage overlap (i.e., in an area with GSM coverage, UMTS national roaming should be avoided), etc. Note you can of course engineer a “dirty” national roaming scenario. However, those tend to be fairly complex, and customer experience management tends to be sub-optimal.

    Network Sharing and National Roaming are, from a P&L perspective, pretty similar in their efficiency and savings potentials. The biggest difference really is in the Usage Based cost item, where National Roaming would incur higher cost compared to a Network Sharing arrangement.

    p&l_comparison

    An Example: an operator contemplates 2 scenarios;

    1. Network Sharing in rural area addressing 500 sites.
    2. Terminate 500 sites in rural area and make use of National Roaming Agreement.

    What we are really interested in, is to understand when Network Sharing provides better economics than National Roaming and of course vice versa.

    National Roaming can be attractive for relatively low traffic scenarios, or in cases where the product of traffic units and national roaming unit cost remains manageable and lower than the Shared Network Cost.
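    To make the trade-off concrete, here is a minimal break-even sketch; the monthly shared-network cost and the wholesale rate below are invented for illustration only and are not quotes from any market:

```python
# Hypothetical break-even sketch: national roaming wholesale cost vs. shared-network cost.
# Neither figure below is a real quote; they only illustrate the trade-off logic.

shared_network_cost_per_month = 300_000.0  # assumed monthly TCO of the shared rural network (USD)
wholesale_rate_per_gb = 0.50               # assumed national roaming wholesale rate (USD per GB)

breakeven_traffic_gb = shared_network_cost_per_month / wholesale_rate_per_gb
print(f"National roaming is cheaper below ~{breakeven_traffic_gb:,.0f} GB per month")

for traffic_gb in (200_000, 600_000, 1_000_000):
    roaming_cost = traffic_gb * wholesale_rate_per_gb
    winner = "national roaming" if roaming_cost < shared_network_cost_per_month else "network sharing"
    print(f"{traffic_gb:>9,} GB/month -> roaming cost {roaming_cost:>9,.0f} USD -> {winner} wins")
```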

    national roaming vs network sharing

    The above illustration ignores the write-offs and termination charges that might result from terminating a given number of sites in a region and then migrating the traffic to a national roaming network (note I have not seen any examples of such scenarios in my studies).

    The termination or restructuring cost, including write-offs of existing telecom assets (i.e., radio nodes, passive site solutions, transmission, aggregation nodes, etc.), is likely to be a substantial financial burden to the National Roaming Business Case in an area with existing telecom infrastructure. Certainly above and beyond that of a Network Sharing scenario, where assets are being re-used and restructuring cost might be partially shared between the sharing partners.

    Obviously, if National Roaming is established in an area that has no network coverage, restructuring and termination cost is not an issue and Network TCO will clearly be avoided, albeit the above economic logic and P&L trade-offs on cost still apply.

    National Roaming can be an interesting economic alternative, at least temporarily, to Network Sharing or to establishing new coverage in an area with established network operators.

    However, National Roaming agreements are usually of a temporary nature, as establishing own coverage, either standalone or via Network Sharing, will eventually be a better economic and strategic choice than continuing with the national roaming agreement.

    SHARING BY TOWER COMPANY (TOWERCO).

    There is a school of thought, within the Telecommunications Industry, that very much promotes the idea of relying on Tower Companies (Towerco) to provide and manage passive telecom site infrastructure.

    The mobile operator leases space from the Towerco on the tower (or in some instances a rooftop) for antennas, radio units and possibly microwave dishes. The lease would also include some real estate space around the tower site location for the telecom racks and ancillary equipment.

    In the last 10 years many operators have sold off their tower assets to Tower companies that then lease those back to the mobile operator.

    In most Towerco deals, Mobile Operators are trading off up-front cash for long-term lease commitments.

    At the risk of generalizing, Towerco deals made by operators have, in my opinion, a bit of the nature and philosophy of “the little boy peeing in his trousers on a cold winter day; it will warm him for a short while, but in the long run he will freeze much more after the act”. Let us also be clear that the business down the road will not care about a brilliant tower deal (done in the past) if it pressures its Ebitda and Site Lease cost.

    In general, the Tower company will try (and should be incented) to increase the tower tenancy (i.e., having more tenants per tower). Depending on the lease contract, the Towerco might (should!) provide the mobile operator a lease discount as more tenants are added to a given tower infrastructure.

    Towerco versus Network Sharing is obviously an Opex versus Capex trade-off. Anyway, let’s look at a simple total-cost-of-ownership example that allows us to better understand when one strategy could be better than the other.

    towerco vs network sharing

    From the above very simple and high-level per-tower total-cost-of-ownership model, it is clear that a Towerco would have some challenges in matching the economics of the Shared Network. A Mobile Operator would most likely (in the above example) be better off commencing on a simple tower sharing model (assuming a sharing partner is available and not engaging with another Towerco) rather than leasing towers from a Towerco. The above economics is ca. 600 US$ TCO per month (2-party sharing scenario) compared to ca. 1,100 US$ (2-tenant scenario). Actually, unless the Towerco is able to (a) increase occupancy beyond 2, (b) reduce its production cost well below what the mobile operators’ would be (without sacrificing quality too much), and (c) operate at a sufficiently low margin, it is difficult to see how a Towerco can provide a Tower solution at better economics than a conventional network-shared tower.
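    A simple sketch of the per-tower comparison follows; the component costs are my own illustrative assumptions chosen to land near the ~600 vs. ~1,100 US$ figures above, not the actual model behind the chart:

```python
# Per-tower monthly TCO sketch, loosely mirroring the ~600 vs ~1,100 USD example above.
# Component costs are illustrative assumptions, not the model behind the chart.

def shared_tower_tco(build_capex, useful_life_years, annual_opex, n_sharers):
    """Monthly per-operator cost when n_sharers split a self-built tower."""
    monthly_total = build_capex / (useful_life_years * 12) + annual_opex / 12
    return monthly_total / n_sharers

def towerco_lease_tco(anchor_lease, tenancy_discount, n_tenants):
    """Monthly per-tenant lease when a Towerco hosts n_tenants and passes on a discount."""
    return anchor_lease * (1 - tenancy_discount * (n_tenants - 1))

own_shared = shared_tower_tco(build_capex=100_000, useful_life_years=15,
                              annual_opex=7_000, n_sharers=2)
leased = towerco_lease_tco(anchor_lease=1_400, tenancy_discount=0.2, n_tenants=2)
print(f"2-party shared tower  : ~{own_shared:,.0f} USD/month per operator")
print(f"2-tenant Towerco lease: ~{leased:,.0f} USD/month per tenant")
```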

    This said it should also be clear that the devil will be in the details and there are various P&L and financial engineering options available to mobile operators and Towercos that will improve on the Towerco model. In terms of discounted cash flow and NPV analysis of the cash flows over the full useful life period the Network Sharing model (2-parties) and Towerco lease model with 2-tenants can be made fairly similar in terms of value. However, for 2-tenant versus 2-party sharing, the Ebitda tends to be in favor of network sharing.

    For the Mobile Network Operator (MNO) it is a question of committing capital upfront versus an increased lease payment over a longer period of time. Obviously, the cost of capital factors in here, as does the inherent business model risk. The inherent risk factors for the Towerco need to be considered in its WACC (weighted average cost of capital) and of course in the overall business model exposure to

    1. Operator business failure or consolidation.
    2. Future Network Sharing and subsequent lease termination.
    3. Tenant occupancy remains low.
    4. Contract penalties for Towerco non-performance, etc..

    Given the fairly large inherent risk (to Towerco business models) of operator consolidation in mature markets, with more than 3 mobile operators, there would be a “wicked” logic in trying to mitigate consolidation scenarios with costly breakaway clauses and higher margins.

    From all the above it should be evident that, for mobile operators with considerable tower portfolios and sharing ambitions, it is far better to (first) consolidate & optimize their tower portfolios, ensuring a minimum of 2 tenants on each tower, and then (second) spin off (when the cash is really needed) the optimized tower portfolio to a Towerco, ensuring that the long-term lease is tenant & Ebitda optimized (as that really is going to be any mobile operation’s biggest longer-term headache as markets start to saturate).

    SUMMARY OF PART I – THE FUNDAMENTALS.

    There should be little doubt that

    Network Sharing provides one of the biggest financial efficiency levers available to a mobile network operator.

    Maybe apart from reducing market investment … but that is obviously not really a sustainable medium- to long-term strategy.

    In aggressive network sharing scenarios Opex savings in the order of 35% are achievable, as well as future Opex avoidance in the run-rate. Depending on the Network Sharing Scenario, substantial Capex can be avoided by sharing the infrastructure build-out (i.e., The Rollout Phase) and likewise in the Modernization Phase. Both allow for very comprehensive sharing of both passive and active infrastructure and the associated capital expenses.

    Both National Roaming and Sharing via a Towerco can be interesting concepts and, if engineered well (particularly financially), can provide similar benefits to sharing (active as well as passive, respectively). Particularly in cash-constrained scenarios (or where operators see an extraordinary business risk and want to minimize cash exposure), both options can be attractive. Long-term, National Roaming is particularly attractive in areas where an operator has no coverage and which have little strategic importance. In case an area is strategically important, national roaming can act as a time-bridge until presence has been secured, possibly via Network Sharing (if the competitor is willing).

    Sharing via Towerco can also be an option when two parties are having trust issues. Having a 3rd party facilitating the sharing is then an option.

    In my opinion, National Roaming & Sharing via Towerco are rarely as Ebitda efficient as conventional Network Sharing.

    Finally! Why should you stay away from Network Sharing?

    This question is as important to answer as why you should (which always seems the easier one initially). Either to indeed NOT go down the path of network sharing or, at the very least, to ensure that points of concern and possible blocking points have been thoroughly considered and checked off.

    So here come some of my favorites … with too many of those below, you are not terribly likely to be successful in this endeavor:

    whynotsharing

    ACKNOWLEDGEMENT

    I would like to thank many colleagues for support and Network Sharing discussions over the past 13 years. However, in particular I owe a lot to David Haszeldine (Deutsche Telekom) for his insights and thoughts. David has been my true brother-in-arms throughout my Deutsche Telekom years and the many Network Sharing experiences we have had around the world. I have had many & great discussions with David on the ins and outs of Network Sharing … Not sure we cracked it all? … but pretty sure we are at the forefront of understanding what Network Sharing can be and also what it most definitely cannot do for a Mobile Operator. The same goes for all the people who have left comments on my public presentations and gotten in contact with me on this very exciting and by no means exhausted topic of how to share networks.

    The term the “Ugly Tail”, referring to the rural and low-profitability sites present in all networks, should really be attributed to Fergal Kelly (now CTO of Vodafone Ireland) from a meeting quite a few years ago. The term is too good not to borrow … Thanks Fergal!

    This story is PART I and as such it obviously indicates that more Parts are on the way Winking smile. PART II, “Network Sharing – That was then, this is now”, will be on the many projects I have worked on in my professional career and the lessons learned (all available in the public domain of course). Here, comparing the original ambition level and plans with reality is obviously going to be cool (and in some instances painful as well). PART III, “The Tools”, will describe the arsenal of tools and models that I have developed over the last 13 years and used extensively on many projects.

  • Time Value of Money, Real Options, Uncertainty & Risk in Technology Investment Decisions

    “We have met the Enemy … and he is us”

    is how the Kauffman Foundation starts their extensive report on investments in Venture Capital Funds and their abysmally poor performance over the last 20 years. Only 20 out of 200 Venture Funds generated returns that beat the public-market equivalent by more than 3%, and 10 of those were Funds created prior to 1995. Clearly there is something rotten in the state of valuation, value creation and management. Is this state of affairs limited to portfolio management (i.e., one might have hoped for a better diversified VC portfolio), or is this poor track record on investment decisions (even for diversified portfolios) generic to any investment decision made in any business? I let smarter people answer this question. Though there is little doubt in my mind that the quote “We have met the Enemy … and he is us” could apply to most corporations, and the VC results might not be that far away from any corporation’s internal investment portfolio. Most business models and business cases will be subject to wishful thinking and a whole artillery of other biases that tend to overemphasize the positives and under-estimate (or ignore) the negatives. The avoidance of scenario thinking and reference class forecasting will tend to bias investments towards the upper boundaries (and beyond) of the achievable, and to ignore more attractive propositions that could be more valuable than the idea being pursued.

    As I was going through my archive I stumbled over an old paper I wrote back in 2006 when I worked for T-Mobile International and Deutsche Telekom (a companion presentation is due on Slideshare). At the time I was heavily engaged with Finance and Strategy in transforming Technology Investment Decision Making into a more economically responsible framework than had been the case previously. My paper was a call for more sophisticated approaches to technology investment decisions in the telecom sector, as opposed to what was “standard practice” at the time and, in my opinion, pretty much still is.

    Many who are involved in techno-economic & financial analysis, as well as the decision makers acting upon recommendations from their analysts, are in danger of basing their decisions on flawed economic analysis or simply have no appreciation of the uncertainty and risk involved. A frequent mistake in deciding between investment options is ignoring one of the most central themes of finance & economics, the Time-Value-of-Money, i.e., the investment decision is taken as if it were insensitive to the timing of the money flow. Furthermore, investment decisions based on naïve TCO are good examples of such insensitivity bias and can lead to highly inefficient decision making. Naïve here implies that time and timing do not matter in the analysis and the subsequent decision.

    Time-Value-of-Money:

    “I like to get my money today rather than tomorrow, but I don’t mind paying tomorrow rather than today”.

    Time and timing matter when it comes to cash. Any investment decision that does not consider the timing of expenses and/or income has a substantially higher likelihood of being an economically inefficient decision, costing the shareholders and investors (a lot of) money. As a side note, Time-Value-of-Money assumes that you can actually do something with the cash today that is more valuable than waiting for it at a point in the future. Now, that might work well for Homo Economicus but maybe not so well for the majority of the human race (incl. Homo Financius).

    Thus, if I am insensitive to the timing of payments, it does not matter, for example, whether I have to pay €110 Million more for a system in the first year compared to deferring that increment to the 5th year.

    Clearly wrong!

    naive tco

    In the above illustration of outgoing cash flow (CF), the naïve TCO (i.e., total cost of ownership) is similar for both CFs. I use the word naïve here to represent a non-discounted valuation framework. Both the Blue and the Orange CF represent a naïve TCO value of €200 Million. So a decision maker (or an analyst) not considering time-value-of-money would be indifferent to one or the other cash flow scenario. Would the decision maker consider time-value-of-money (or, in the above very obvious case, simply see the timing of cash out), the decision would clearly be in favor of Blue. Further, front-loaded investment decisions are scary endeavors, particularly for unproven technologies or business decisions with a high degree of future unknowns, as the exposure to risks and losses is so much higher than with a more carefully designed cash-out/investment trajectory that follows the reduction of risk or increased growth. When only presented with the (naïve) TCO rather than the cash flows, it might even be that some scenarios are unfavorable in a naïve TCO framework but favorable when time-value-of-money is considered. The following illustrates this;

    naive tco vs dcf

    The Orange CF above amounts to a naïve TCO of €180 Million versus the Blue’s TCO of €200 Million. Clearly, if all the decision maker is presented with is the two (naïve) TCOs, he can only choose the Orange scenario and “save” €20 Million. However, when time-value-of-money is considered, the decision should clearly be for the Blue scenario, which in terms of discounted cash flows yields €18 Million in its favor despite the TCO being €20 Million in favor of Orange. Obviously, the Blue scenario has other advantages as well, as opposed to Orange.
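    The same point can be reproduced in a few lines; the two yearly cash-out profiles below are illustrative stand-ins (not the series behind the chart, so the exact €18 Million figure will not be reproduced), but the qualitative reversal is the same:

```python
# Naïve TCO vs. discounted cash-out, illustrating the Blue/Orange reversal above.
# The yearly cash-out profiles and the 10% rate are illustrative stand-ins.

def naive_tco(cash_out):
    return sum(cash_out)

def discounted_cost(cash_out, rate):
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_out))

blue   = [20, 30, 40, 50, 60]   # back-loaded spend, EUR million per year
orange = [90, 40, 30, 10, 10]   # front-loaded spend, EUR million per year
rate = 0.10

print(f"Naive TCO  - Blue: {naive_tco(blue)}M, Orange: {naive_tco(orange)}M")
print(f"Discounted - Blue: {discounted_cost(blue, rate):.0f}M, "
      f"Orange: {discounted_cost(orange, rate):.0f}M")
# Orange looks 20M cheaper on naive TCO, yet Blue is cheaper once discounted.
```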

     

    When does it make sense to invest in the future?

     

    Frequently we are faced with technology investment decisions that require spending incremental cash now for a feature or functionality that we might only need at some point in the future. We believe that the cash-out today is more efficient (i.e., better value) than introducing the feature/functionality at the time when we believe it might really be needed.

     

    Example of the value of optionality: Assume that you have two investment options and you need to advise management on which of the two is more favorable.

     

    Product X with investment I1: provides support for 2 functionalities you need today and 1 that might be needed in the future (i.e., 3 Functionalities in total).

    Product Y with investment I2: provides support for the 2 functionalities you need today and 3 functionalities that you might need in the future (i.e., 5 Functionalities in total).

     

    I_1 < I_2 and \Delta = I_2 - I_1 > 0

     

    If, in the future, we need more than 1 additional functionality, it clearly makes sense to ask whether it is better to invest upfront in Product Y, rather than in X and then later Y (when needed). Particularly when Product X would have to be de-commissioned when introducing Product Y, it is quite possible that investing in Product Y upfront is more favorable.

     

    From a naïve TCO perspective it is clearly better to invest in Y than in X + Y. The “naïve” analyst would claim that this saves us at least I_1 (if he is really clever, de-installation cost and write-offs might be included as well as savings or avoidance cost) by investing in Y upfront.

     

    Of course, if it should turn out that we do not need all the extra functionality that Product Y provides (within the useful life of Product X), then we have clearly made a mistake and over-invested by \Delta, and would have been better off sticking to Product X (i.e., the reference is now between investing in Product Y versus Product X upfront).

     

    Once we call upon an option, make an investment decision, other possibilities and alternatives are banished to the “land of lost opportunities”.

     

    Considering time-value-of-money (i.e., discounted cash flows), the math would still come out more favorable for Y than for X+Y, though the incremental penalty would be lower, as the future investment in Product Y would come later and that investment would be discounted back to Present Value.
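    For instance, deferring the Product Y investment by n years shrinks its present-value burden roughly as (the 10% discount rate below is purely illustrative):

    PV = \frac{I_2}{(1 + r)^n}, e.g., I_2 = 10, r = 10\% and n = 5 gives PV \approx 6.2

    so the later the extra functionality is actually needed, the smaller the penalty of deferring.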

     

    So should we always invest upfront in the future?

     

    Categorically no, we should not!

     

    Above we have identified 2 outcomes (though there are others as well);

    Outcome 1: Product Y is not needed within lifetime T of Product X.

    Outcome 2: Product Y is needed within lifetime T of Product X.

     

    In our example, for Outcome 1 the NPV difference between Product X and Product Y is -10 Million US$. If we invest into Product Y and do not need all its functionality within the lifetime of Product X we would have “wasted” 10 Million US$ (i.e., opportunity cost) that could have been avoided by sticking to Product X.

     

    The value of Outcome 2 is a bit more complicated, as it depends on when Product Y is required within the lifetime of Product X. Let’s assume that Product X’s useful lifetime is 7 years, i.e., the period after which we would need to replace Product X anyway, requiring a modernization investment. We assume that for the first 2 years (i.e., yr 2 and yr 3) there is no need for the additional functionality that Product Y offers (or it would be obvious to deploy it up-front, at least within this example’s economics). From Year 4 to Year 7 there is an increasing likelihood of the functionalities of Product Y being required.

     

    product Y npv

    In Outcome 2 the blended NPV is 3.0 Million US$ in favor of deploying Product X now and then later Product Y (i.e., the X+Y scenario) when it is required, rather than Product Y upfront. After the 7th year we would have to re-invest in a new product, and obviously looking beyond this timeline makes little sense in our simplified investment example.

     

    Finally, if we assess that there is a 40% chance that Product Y will not be required within the lifetime of Product X, the overall effective NPV of our options would be negative (i.e., 40%×(-10) + 3 = –1 Million). Thus we conclude it is better to defer the investment in Product Y than to invest in it upfront. In other words, it is economically more valuable to deploy Product X within this example’s assumptions.

     

    I could make an even stronger case for deferring the investment in Product Y: (1) if I can re-use Product X when I introduce Product Y, (2) if I believe that the price of Product Y will be much lower in the future (i.e., due to maturity and competition), or (3) if there is a relatively high likelihood that Product Y might become obsolete before the additional functionalities are required (e.g., new superior products at lower cost compared to Product Y). The last point is often encountered when investing in the very first product releases (i.e., substantial immaturity) or in highly innovative products just being introduced. Moreover, there might be lower-cost, lower-tech options that could provide the same functionality when required, which would make investing upfront in higher-tech, higher-cost options uneconomical. For example, a product that provides a single targeted functionality at the point in time it is needed might be more economical than investing, long before it is really required, in a product supporting 5 functionalities (of which 3 are not required).

     

    Many business cases focus narrowly on proving a particular point of view. Typically, at most 2 scenarios are compared directly: the old way and the proposed way. No surprise! The newly proposed way of doing things will be more favorable than the old (why else do the analysis;-). While such an analysis cannot be claimed to be wrong, it poses the danger of ignoring more valuable options available (but ignored by the analyst). The value of optionality and timing is ignored in most business cases.

     

    For many technology investment decisions time is more a friend than an enemy. Deferring investing into a promise of future functionality is frequently the better value-optimizing strategy.

     

    Rules of my thumb:

    • If a functionality is likely to be required beyond 36 months, the better decision is to defer the investment to later.
    • Innovative products with no immediate use are better introduced later rather than sooner, as improvement cycles and competition are going to make them more economical to introduce later (and we avoid obsolescence risk).
    • Right timing is better than being the first (e.g., as Apple has proven a couple of times).

    Decision makers are frequently betting (knowingly or unknowingly) that a future event will happen, and that making an incremental investment decision today is more valuable than deferring the decision to later. Basically, we deal with an Option or a Choice. When we deal with a non-financial Option, we call it a Real Option. Analyzing Real Options can be complex. Many factors need to be considered in order to form a reasonable judgment of whether investing today in a functionality that might only be required later makes sense or not;

    1. When will the functionality be required (i.e., the earliest, most-likely and the latest).
    2. Given the timing of when it is required, what is the likelihood that something cheaper and better will be available (i.e., price-erosion, product competition, product development, etc..).
    3. Solutions obsolescence risks.

    As there are various uncertain elements involved in whether or not to invest in a Real Option the analysis cannot be treated as a normal deterministic discounted cash flow. The probabilistic nature of the decision analysis needs to be correctly reflected in the analysis.

     

    Most business models & cases are deterministic despite the probabilistic (i.e., uncertain and risky) nature they aim to address.

     

    Most business models & cases are 1-dimensional in the sense of only considering what the analyst tries to prove and not per se alternative options.

     

    My 2006 paper deals with such decisions and how to analyze them systematically and provide a richer and hopefully better framework for decision making subject to uncertainty (i.e., a fairly high proportion of investment decisions within technology).

    Enjoy Winking smile!

    ABSTRACT

    The typical business case analysis, based on discounted cash flows (DCF) and net-present valuation (NPV), inherently assumes that the future is known and that, regardless of future events, the business will follow the strategy laid down in the present. It is obvious that the future is not deterministic but highly probabilistic, and that, depending on events, a company’s strategy will be adapted to achieve maximum value out of its operation. It is important for a company to manage its investment portfolio actively and understand which strategic options generate the highest return on investment. In every technology decision our industry is faced with various embedded options, which need to be considered together with the ever-prevalent uncertainty and risk of the real world. It is often overlooked that uncertainty creates a wealth of opportunities if the risk can be managed by mitigation and hedging. An important result concerning options is that the higher the uncertainty of the underlying asset, the more valuable the related option could become. This paper will provide the background for conventional project valuation, such as DCF and NPV. Moreover, it will be shown how a deterministic (i.e., conventional) business case can easily be made probabilistic, and what additional information can be gained by simulating the private as well as market-related uncertainties. Finally, real options analysis (ROA) will be presented as a natural extension of the conventional net-present value analysis. This paper will provide several examples of options in technology, such as radio access site-rollout strategies, product development options, and platform architectural choices.

    INTRODUCTION

    In technology, as well as in mainstream finance, business decisions are more often than not based on discounted cash flow (DCF) calculations using net-present value (NPV) as the decision rationale for initiating substantial investments. Irrespective of the complexity and multitude of assumptions made in business modeling, the decision is represented by one single figure, the net present value. The NPV basically takes the future cash flows and discounts these back to the present, assuming a so-called “risk-adjusted” discount rate. In most conventional analyses the “risk-adjusted” rate is chosen rather arbitrarily (e.g., 10%-25%) and is assumed to represent all project uncertainties and risks. The risk-adjusted rate should, as a good practice, always be compared with the weighted average cost of capital (WACC) and benchmarked against what the Capital Asset Pricing Model (CAPM) would yield. Though in general the base rate will be set by your finance department and is not per se something the analyst needs to worry too much about. Suffice to say that I am not a believer that all risk can be accounted for in the discount rate, and that including risks/uncertainty in the cash flow model is essential.

     

    It is naïve to believe that the applied discount rate can account for all risk a project may face.

     

    In many respects the conventional valuation can be seen as supporting a one-dimensional decision process. DCF and NPV methodologies are commonly accepted in our industry and the finance community [1]. However, there is a lack of understanding of how uncertainty and risk, which is part of our business, impacts the methodology in use. The bulk of business cases and plans are deterministic by design. It would be far more appropriate to work with probabilistic business models reflecting uncertainty and risk. A probabilistic business model, in the hands of the true practitioner, provides considerable insight useful for steering strategic investment initiatives. It is essential that a proper balance is found between model complexity and result transparency. With available tools, such as Palisade Corporation’s @RISK Microsoft Excel add-in software [2], it is very easy to convert a conventional business case into a probabilistic model. The Analyst would need to converse with subject-matter experts in order to provide a reasonable representation of relevant uncertainties, statistical distributions, and their ranges in the probabilistic business model [3].

     

    In this paper the word Uncertainty will be used to represent the stochastic (i.e., random) nature of the environment. Uncertainty as a concept represents events and external factors which cannot be directly controlled. The word Volatility will be used interchangeably with uncertainty. Risk means the exposure to uncertainty, e.g., uncertain cash-flows resulting in running out of money and catastrophic business failure. The total risk is determined by the collection of uncertain events and Management’s ability to deal with these uncertainties through mitigation and “luck”. Moreover, the words Option and Choice will also be used interchangeably throughout this paper.

     

    Luck is something that never should be underestimated.

     

    While working on the T-Mobile NL business case for the implementation of the Wireless Application Protocol (WAP) for circuit switched data (CSD), a case was presented showing a 10% chance of losing money (over a 3 year period). The business case also showed an expected NPV of €10 Million, as well as a 10% chance of making more than €20 Million over a 3 year period. The spread in the NPV, due to the identified uncertainties, was graphically visualized.

     

    Management, however, requested only to be presented with the “normal” business case NPV, as this “was what they could make a decision upon”. It is worthwhile to understand that the presenters made the mistake of making the presentation to Management too probabilistic and mathematical, which in retrospect was the wrong approach [4]. Furthermore, as WAP was seen as something strategically important for long-term business survival, moving towards mobile data, it is hardly conceivable that Management would have turned down WAP even if the business case had been negative.

    In retrospect, the WAP business case would have been more useful if it had pointed out the value of the embedded options inherent in the project;

    1. Defer/delay until market conditions became more certain.
    2. Defer/delay until GPRS became available.
    3. Outsource service with option to in-source or terminate depending on market conditions and service uptake.
    4. Defer/delay until technology becomes more mature, etc..

    Financial “wisdom” states that business decisions should be made which target the creation of value [5]. It is widely accepted that, given a positive NPV, monetary value will be created for the company; therefore, projects with positive NPV should be implemented. Most companies’ investment means are limited. Innovative companies are often in a situation with more funding demand than funding available. It is therefore reasonable that projects targeting superior NPVs should be chosen. Considering the importance and weight businesses associate with conventional analysis using DCF and NPV, it is worthwhile summarizing the key assumptions underlying decisions made using NPV:

    • As a Decision is made, future cash flow streams are assumed fixed. There is no flexibility as soon as a decision has been made, and the project will be “passively” managed.
    • Cash-flow uncertainty is not considered, other than working with a risk-adjusted discount rate. The discount rate is often arbitrarily chosen (between 9%-25%) reflecting the analyst’s subjective perception of risk (and uncertainty), with the logic being the higher the discount rate, the higher the anticipated risk (note: the applied rate should be reasonably consistent with the Weighted Average Cost of Capital and the Capital Asset Pricing Model (CAPM)).
    • All risks are completely accounted for in the discount rate (i.e., which is naïve).
    • The discount rate remains constant over the life-time of the project (i.e., which is naïve).
    • There is no consideration of the value of flexibility, choices and different options.
    • Strategic value is rarely incorporated into the analysis. It is well known that many important benefits are difficult (but not impossible) to value in a quantifiable sense, such as intangible assets or strategic positions. The implicit assumption becomes that if a strategy cannot be valued or quantified, it should not be pursued.
    • Different project outcomes and the associated expected NPVs are rarely considered.
    • Cash-flows and investments are discounted with a single discount rate assuming that market risk and private (company) risk is identical. Correct accounting should use the risk-free rate for private risk and cash-flows subject to market risks should make use of market risk-adjusted discount rate.

    In the following several valuation methodologies will be introduced, which build upon and extend the conventional discounted cash flow and net-present value analysis, providing more powerful means for decision and strategic thinking.

     

    TRADITIONAL VALUATION

    The net-present value is defined as the difference between the values assigned to a given asset, the cash-flows, and the cost and capital expenditures of operating the asset. The traditional valuation approach is based on the net-present value (NPV) formulation [6]

    NPV = \sum_{t=0}^{T} \frac{C_t}{(1 + r_{ram})^t} - \sum_{t=0}^{T} \frac{I_t}{(1 + r_{rap})^t} \approx \sum_{t=0}^{T} \frac{C_t - I_t}{(1 + r^*)^t} = \sum_{t=1}^{T} \frac{C_t^*}{(1 + r^*)^t} - I_0

    T is the period over which the valuation is considered, C_t is the future cash flow at time t, r_{ram} is the risk-adjusted discount rate applied to market-related risk, I_t is the investment cost at time t, and r_{rap} is the risk-adjusted discount rate applied to private-related risk. In most analyses it is customary to assume the same discount rate for private as well as market risk, as it simplifies the valuation analysis. The “effective” discount rate r* is often arbitrarily chosen. I_0 is the initial investment at time t=0, and C_t* = C_t – I_t (for t>0) is the difference between future cash flows and investment costs. The approximation (i.e., the ≈ sign) only holds in the limit where the rate r_{rap} is close to r_{ram}. The private risk-adjusted rate is expected to be lower than the market risk-adjusted rate. Therefore, any future investments and operating costs will weigh more than the future cash flows. Eventually value will be destroyed unless value growth can be achieved. It is therefore important to manage incurred cost, and at the same time explore growth aggressively (at minimum cost) over the project period. Assuming a single risk-adjusted or effective rate for both market- and private-risk investments, costs and cash-flows can lead to a serious over-estimation of a given project’s value. In general, the private risk-adjusted rate r_{rap} would lie between the risk-free rate and the market risk-adjusted discount rate r_{ram}.
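    A minimal sketch of the formula follows; the cash-flow and investment series below are hypothetical, purely to show the mechanics and the direction of the bias:

```python
# Minimal sketch of the NPV formula above, separating market and private discount rates.
# The cash flows and investments below are hypothetical (EUR million per year).

def npv_split(cash_flows, investments, r_market, r_private):
    """Discount market-risk cash flows and private-risk investments/costs separately."""
    pv_cash = sum(c / (1 + r_market) ** t for t, c in enumerate(cash_flows))
    pv_inv = sum(i / (1 + r_private) ** t for t, i in enumerate(investments))
    return pv_cash - pv_inv

def npv_effective(cash_flows, investments, r_eff):
    """Conventional approach: one 'effective' discount rate applied to the net flows."""
    return sum((c - i) / (1 + r_eff) ** t
               for t, (c, i) in enumerate(zip(cash_flows, investments)))

cash = [0, 10, 30, 45, 50, 50]   # revenues net of market-related cost
inv = [20, 6, 6, 6, 6, 6]        # investments and network operating cost
print(f"Split rates (r_ram=20%, r_rap=5%): {npv_split(cash, inv, 0.20, 0.05):.1f}M")
print(f"Effective rate (r*=12.5%)        : {npv_effective(cash, inv, 0.125):.1f}M")
# In this example, the single effective rate overstates the value relative to the
# split-rate treatment.
```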

     example1

    EXAMPLE 1: An initial network investment of 20 mio euro needs to be committed to provide a new service for the customer base. It is assumed that sustenance investment per year amounts to 2% of the initial investment and that operations & maintenance is 20% of the accumulated investment (50% in the initial year). Other network cost, such as transmission (assuming a centralized platform solution), increases by 10% per year due to increased traffic, with an initial cost of 150 thousand. The total network investment and cost structure should be discounted at the risk-free rate (assumed to be 5%). Market assumptions: s-curve consistent growth is assumed, with a saturation of 5 Million service users after approximately 3 years. It has been assumed that the user pays 0.8 euro per month for the service and that the service price decreases by 10% per year. Cost of acquisition is assumed to be 1 euro per customer, increasing by 5% per year. Other market-dependent cost is assumed initially to be 400 thousand, increasing by 10% per year. It is assumed that the project is terminated after 5 years and that the terminal value amounts to 0 euro. PV stands for present value and FV for future value. The PV has been discounted back to year 0. It can be seen from the table that the project breaks even after 3 years. The first analysis presents the NPV results (over a 5-year period) when differentiating between private (private risk-adjusted rate) and market (market risk-adjusted rate) risk taking; a positive NPV of 26M is found. This should be compared with the standard approach assuming an effective rate of 12.5%, which (not surprisingly) results in a positive NPV of 46M. The difference between the two approaches amounts to about 19M.


    The example above compares the approach of using an effective discount rate r* with an analysis that differentiates between private r_{rap} and market risk r_{ram} in the NPV calculation. The example illustrates a project valuation for the introduction of a new service. The introduction results in network investments and costs in order to provide and operate the service. Future cash-flows arise from the growth of the customer base (i.e., service users) and are offset by market-related costs. All network investments and costs are assumed to be subject to private risk and should be discounted with the risk-free rate. The market-related cost and revenues are subject to market risk, and the risk-adjusted rate should be used [7]. Alternatively, all investments, costs and revenues can be treated with an effective discount rate. As seen from the example, the difference between the two valuation approaches can be substantial:

    • NPV = €26M for differentiated market and private risk, and
    • NPV = €46M using an effective discount rate (e.g., a difference of €20M assuming the following discount rates: r_{ram} = 20%, r_{rap} = 5%, r* = 12.5%). Obviously, as r_{ram} → r* and r_{rap} → r*, the difference between the two valuation approaches will tend to zero.

     

    UNCERTAINTY, RISK & VALUATION

    The traditional valuation methodology presented in the previous section makes no attempt to incorporate uncertainties and risk other than through the effective discount rate r* or the risk-adjusted rates r_{ram} and r_{rap}. It is inherent in the analysis that cash-flows, as well as the future investments and cost structure, are assumed to be certain. The first level of incorporating uncertainty into the investment analysis would be to define market scenarios, each with an estimated (subjective) chance of occurring. A good introduction to uncertainty and risk modeling is provided in the well-written book by D. Vose [8], S.O. Sugiyama’s training notes [3] and S. Benninga’s “Financial Modeling” [7].

     

    The Business Analyst working on the service introduction, presented in Example 1, assesses that there are 3 main NPV outcomes for the business model; NPV1= 45, NPV2= 20 and NPV3= -30.  The outcomes have been based on 3 different market assumptions related to customer uptake: 1. Optimistic, 2. Base and 3. Pessimistic. The NPVs are associated with the following chances of occurrence: P1 = 25%, P2 = 50% and P3 = 25%.

     

    What would the expected net-present value be given the above scenarios?

     

    The expected NPV (ENPV) would be ENPV = P_1×NPV_1 + P_2×NPV_2 + P_3×NPV_3 = 25%×45 + 50%×20 + 25%×(-30) ≈ 14. Example 2 (below) illustrates the process of obtaining the expected NPV.

    example2

    Example 2: illustrates how to calculate the expected NPV (ENPV) when 3 NPV outcomes have been identified resulting from 3 different customer uptake scenarios. The expected NPV calculation assumes that we do not have any flexibility to avoid any of the 3 outcomes. The circular node represents a chance node yielding the expected outcome given the weighted NPVs.

     

    In general the expected NPV can be written as

    ENPV = \sum_{i=1}^{N} NPV_i \times P_i

    where N is the number of possible NPV outcomes, NPV_i is the net present value of the i-th outcome, and P_i is the chance that the i-th outcome will occur. By including scenarios in the valuation analysis, the uncertainty of the real world is being captured. The risk of overestimating or underestimating a project’s valuation is thereby reduced. Typically, the estimation of P, the chance or probability of a particular outcome, is based on the subjective “feeling” of the Business Analyst, who obviously still needs to build a credible story around his choices of likelihood for the scenarios in question. Clearly this is not a very satisfactory situation, as all kinds of heuristic biases are likely to influence the choice of a given scenario’s likelihood. Still, it is clearly more realistic than a purely deterministic approach with only one locked-in outcome.
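    As a quick sanity check, the Example 2 arithmetic in a few lines of Python:

```python
# Expected NPV across discrete scenarios, using the Example 2 figures.
outcomes = {"optimistic": (0.25, 45), "base": (0.50, 20), "pessimistic": (0.25, -30)}
enpv = sum(p * npv for p, npv in outcomes.values())
print(f"ENPV = {enpv} M")  # 0.25*45 + 0.5*20 + 0.25*(-30) = 13.75, i.e., ~14
```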

     example3

    Example 3 shows various market outcomes used to study the impact of the uncertainty in market conditions on the net-present value of Example 1, and the project valuation subject to these uncertainties. The curve represented by the thick solid line and open squares is the base market scenario used in Example 1, while the other curves represent variations to the base case. Various uncertainties of the customer growth have been explored. An s-curve (logistic function) approach has been used to model the customer uptake of the studied service: S(t) = \frac{S_{\max}}{1 + b\,e^{-a\,t}}\;e^{-c\,\max\{0,\,t - t_d\}}, where t is the time period, S_{max} is the maximum expected number of customers, b determines the slope in the growth phase, and (1/a) is the number of years to reach the mid-point of the s-curve. The factor e^{-c\,\max\{0,\,t - t_d\}} models the possible decline in the customer base, with c being the rate of decline in the market share, and t_d the period when the decline sets in. S_{max} has been varied between 2.5 and 6.25 Million customers, with an average of 5.0 Million, b was chosen to be 50 (arbitrarily), and (1/a) was varied between 1/3 and 2 (years), with a mean of 0.5 (years). In modeling the market decline, the rate of decline c was varied between 0% and 25% per year, with a chosen mean value of 10%, and t_d was varied between 0 and 3 years, with a mean of 2 years before the market decline starts. In all cases a so-called pert distribution was used to model the parameter variance. Instead of running a limited number of scenarios as shown in Example 2 (3 outcomes), a Monte Carlo (MC) simulation is carried out, sampling several thousands of possible outcomes.
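    The s-curve itself is easy to reproduce; the sketch below evaluates it with the base parameters quoted in the caption (S_max = 5 Million, b = 50, 1/a = 0.5 years) and, as an assumption of mine for illustration, the mean decline parameters (c = 10%, t_d = 2 years):

```python
import math

# The logistic (s-curve) uptake with a possible late decline, as used in Example 3.
# Parameters below are the base/mean values quoted in the caption.

def uptake(t, s_max=5.0e6, b=50.0, a=1 / 0.5, c=0.10, t_d=2.0):
    """Customers at time t (years): logistic growth, optionally declining after t_d."""
    return s_max / (1.0 + b * math.exp(-a * t)) * math.exp(-c * max(0.0, t - t_d))

for year in range(6):
    print(f"year {year}: {uptake(year):>12,.0f} customers")
```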

     

    As already discussed, a valuation analysis often involves many uncertain variables and assumptions. In the above Example 3, different NPV scenarios were identified, resulting from studying the customer uptake. Typically, the identified uncertain input variables in a simplified scenario-sensitivity approach would each have at least three possible values; minimum (x), base-line or most-likely (y), and maximum (z). For every uncertain input variable the Analyst has thus identified a \{x_i, y_i, z_i\} variation, i.e., 3 possible variations. For an analysis with 2 uncertain input variables, each with a \{x_i, y_i, z_i\} variation, it is not difficult to show that the outcome is 9 different scenario-combinations; for 3 uncertain input variables the result is 72 scenario-combinations, 4 uncertain input variables result in 479 different scenario permutations, and so forth. In complex models containing 10 or more uncertain input variables, the number of combinations would exceed 30 Million permutations [9]. Clearly, if 1 or 2 uncertain input variables have been identified in a model, the above-presented scenario-sensitivity approach is practical. However, the range of possibilities quickly becomes very large and the simple analysis breaks down. In these situations the Business Analyst should turn to Monte Carlo [10] simulations, where a great number of outcomes and combinations can be sampled in a probabilistic manner, enabling proper statistical analysis. Before the Analyst can perform an actual Monte Carlo simulation, a probability density function (pdf) needs to be assigned to each identified uncertain input variable, and any correlation between model variables needs to be addressed. It should be emphasized that, with the help of subject-matter experts, an experienced Analyst can in most cases identify the proper pdf to use for each uncertain input variable. A tool such as Palisade Corporation’s @RISK toolbox [2] for MS Excel visualizes, supports and greatly simplifies the process of including uncertainty in a deterministic model, and efficiently performs Monte Carlo simulations in Microsoft Excel.
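    For readers without @RISK, the same exercise can be sketched in plain Python; triangular draws stand in for the pert distributions mentioned above, and the margin, Opex and Capex figures are simplified stand-ins of my own, so the output will not reproduce the Example 4 numbers:

```python
import math
import random

# Monte Carlo sketch in the spirit of Example 4: sample the uncertain uptake
# parameters, push each draw through a (much simplified) cash-flow model and
# collect the NPV distribution. Triangular draws stand in for pert distributions;
# the margin, Opex and Capex figures are illustrative stand-ins.

def uptake(t, s_max, inv_a, c, t_d, b=50.0):
    """Logistic customer uptake with a possible late decline (Example 3 form)."""
    a = 1.0 / inv_a
    return s_max / (1.0 + b * math.exp(-a * t)) * math.exp(-c * max(0.0, t - t_d))

def npv_one_draw(rng, years=5, rate=0.125, margin_per_user=6.0,
                 fixed_opex=8e6, initial_capex=20e6):
    s_max = rng.triangular(2.5e6, 6.25e6, 5.0e6)  # max customers (low, high, mode)
    inv_a = rng.triangular(1 / 3, 2.0, 0.5)       # years to the s-curve mid-point
    c = rng.triangular(0.0, 0.25, 0.10)           # market decline rate per year
    t_d = rng.triangular(0.0, 3.0, 2.0)           # years before the decline starts
    npv = -initial_capex
    for t in range(1, years + 1):
        cash = uptake(t, s_max, inv_a, c, t_d) * margin_per_user - fixed_opex
        npv += cash / (1 + rate) ** t
    return npv

rng = random.Random(42)
samples = sorted(npv_one_draw(rng) for _ in range(10_000))
mean = sum(samples) / len(samples)
print(f"mean NPV   : {mean / 1e6:6.1f}M")
print(f"P(NPV <= 0): {sum(s <= 0 for s in samples) / len(samples):.0%}")
print(f"5% / 95%   : {samples[500] / 1e6:.1f}M / {samples[9500] / 1e6:.1f}M")
```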

     

    Rather than guessing a given scenario’s likelihood, it is preferable to transform the deterministic scenarios into one probabilistic scenario, substituting important scalars (or drivers) with best-practice probability distributions and introducing logical switches that mimic the choices or options inherent in different driver outcomes. Statistical sampling across the simulated outcomes will then provide an effective (or blended) real option value.

     

    In Example 1 a standard deterministic valuation analysis was performed for a new service and the corresponding network investments. The inherent assumption was that all future cash-flows as well as cost structures were known. The analysis yielded a 5-year NPV of 26 mio (using the market-private discount rates). This can be regarded as a purely deterministic outcome. The Business Analyst is requested by Management to study the impact on the project valuation of incorporating uncertainties into the business model. Thus, the deterministic business model should be translated into a probabilistic model. It is quickly identified that the market assumptions, i.e., the customer intake, is an area which needs more analysis. Example 3 shows various possible market outcomes. The reference market model is represented by the thick solid line and open squares. The market outcome is linked to the business model (cash-flows, cost and net-present value). The deterministic model in Example 1 has now been transformed into a probabilistic model including market uncertainty.

    example4

    Example 4: shows the impact of uncertainty in the marketing forecast of customer growth on the Net Present Value (extending Example 1). A Monte Carlo (MC) simulation was carried out subject to the variations of the market conditions (framed box with MC on the right side) described above (Example 3), and the NPV results were sampled. As can be seen in the figure above, an expected mean NPV of 22M was found, with a standard deviation of 16M. Further analysis reveals a 10% probability of loss (i.e., NPV ≤ 0 euro) and an opportunity of up to 46M. The charts below (Example 4b and 4c) show the NPV probability density function and integral (probability), respectively.

    Example 4b                                                                        Example 4c

    example4bexample4c

    Example 4 above summarizes the result of carrying out a Monte Carlo (MC) simulation, using @RISK [2], determining the risks and opportunities of the proposed service and therefore obtaining a better foundation for decision making. In the previous examples the net-present value was represented by a single number; €26M in Example 1 and an expected NPV of €14M in Example 2. In Example 4, the NPV picture is far richer (see the probability charts of the NPV at the bottom of the page) – first note that the mean NPV of €22M agrees well with Example 1. Moreover, the Monte Carlo analysis shows the project downside: there is a 10% chance of ending up with a poor investment, resulting in value destruction. The opportunity or upside is a chance (i.e., 5%) of gaining more than €46M within a 5-year time-horizon. The project risk profile is represented by the NPV standard deviation, i.e., the project volatility, of €16M. It is Management’s responsibility to weigh the risk, downside as well as upside, and ensure that proper mitigation will be considered to reduce the impact of the project downside and potential value destruction.

     

    The valuation methodologies presented so far do not consider flexibility in decision making. Once an investment decision has been taken, investment management is assumed to be passive. Thus, should a project turn out to destroy value, which is inevitable if revenue growth becomes limited compared to the operating cost, Management is assumed not to terminate or abandon the project. In reality, active Investment Management and Management Decision Making do consider options and their economic and strategic value. In the following, a detailed discussion on the valuation of options and the impact on decision making is presented. Real options analysis (ROA) will be introduced as a natural extension of probabilistic cash flow and net present value analysis. It should be emphasized that ROA is based on some advanced mathematical as well as statistical concepts, which will not be addressed in this work.

However, it is possible to get started on ROA with a proper re-arrangement of the conventional valuation analysis, as well as by incorporating uncertainty wherever appropriate. In the following, the goal is to introduce the reader to thinking about the value of options.

     

    REAL OPTIONS & VALUATION

An investment option can be seen as decision flexibility which, depending upon uncertain conditions, might be realized. It should be emphasized that, as with a financial option, it is at the investor's discretion to realize an option. Any cost or investment for the option itself can be viewed as the premium a company has to pay in order to obtain the option. For example, a company could be looking at an initial technology investment, with the option later on to expand should market conditions be favorable for value growth. Exercising the option, or making the decision to expand the capacity, results in a commitment of additional cost and capital investments, the "strike price", into realizing the plan/option. Once the option to expand has been exercised, the expected revenue stream becomes the additional value subject to private and market risks. In every technology decision a decision-maker is faced with various options and needs to consider the ever-present uncertainty and risk of real-world decisions.

     

In the following example, a multinational company is valuing a new service with the intention to launch it commercially in all its operations. The cash-flows associated with the service are regarded as highly uncertain and involve significant upfront development cost and investments in infrastructure to support the service. The company studying the service is faced with several options for the initial investment as well as for the future development of the service. Firstly, the company needs to decide whether to launch the service in all countries in which it is based, or to start up in one or a few countries to test the service idea before committing to a full international deployment, investing in transport and service capacity. The company also needs to evaluate the architectural options in terms of platform centralization versus de-centralization, and platform supplier harmonization versus committing to a more-than-one-supplier strategy. In the following, options will be discussed in relation to the service deployment as well as the platform deployment which supports the new service. In the first instance the Marketing strategy defines a base-line scenario in which the service is launched in all operations at the same time. The base-line architectural choice is represented by a centralized platform scenario placed in one country, providing the service and initial capacity to the whole group.


Platform centralization provides for efficient investment and resourcing; instead of several national platform implementation projects, only one country focuses its resources. However, the operating costs might be higher due to the need for international leased transmission connectivity to the centralized platform. Due to the uncertainty in the assumed cash-flows, arising from market uncertainties, the following strategy has been identified: the service will be launched initially in a limited number of operations (one or two) with the option to expand should the service be successful (option 1), or, should the service fail to generate revenue and growth potential, an option to abandon the service after 2 years (option 2). The valuation of the identified options should be assessed in comparison with the base-line scenario of launching the service in all operations. It is clear that the expansion option (option 1) leads to a range of options in terms of platform expansion strategies depending on the traffic volume and the cost of the leased international transmission (carrying the traffic) to the centralized platform.

     

For example, if the cost of transmission exceeds the cost of operating the service platform locally, an option to deploy the service platform locally is created. From this example it can be seen that by breaking up the investment decisions into strategic options the company has ensured that it can abandon the service should it fail to generate the expected revenue or cash-flows, reducing losses and destruction of wealth. More importantly, the company, while protecting itself from the downside, has left open the option to expand at the cost of the initial investment. It is evident that as the new service is launched and cash-flows start being generated (or fail to materialize), the company gains more certainty and better grounds for deciding which strategic options should be exercised.

     

In the previous example, an investment and its associated valuation could be related to the choices which come naturally out of the collection of uncertainties and the resulting risk. In the literature (e.g., [11], [12]) it has been shown that conventional cash-flow analysis, which omits option valuation, tends to under-estimate the project value [13]. The additional project value results from identifying inherent options and valuing these options separately as strategic choices that can be made within a given time-horizon relevant to the project. The consideration of the value of options in the physical world closely relates to financial options theory and the treatment of financial securities [14]. Financial options analysis relates to the valuation of derivatives [15] depending on financial assets, whereas the analysis described above, identifying options related to physical or real assets such as investment in tangible projects, is defined as real options analysis (ROA). Real options analysis is a fairly new development in project valuation (see [16], [17], [18], [19], [20], and [21]) and has been adopted to gain a better understanding of the value of flexibility of choice.

     

One of the most important ideas about options in general, and real options in particular, is that uncertainty widens the range of potential outcomes. By proper mitigation and contingency strategy the downside of uncertainty can be significantly reduced, leaving the upside potential. Uncertainty, often feared by Management, can be very valuable, provided the right level of mitigation is exercised. In our industry most committed investments involve a high degree of uncertainty, in particular concerning market forces and revenue expectations, but technology-related uncertainty and risk are not negligible either. The value of an option, or strategic choice, arises from the uncertainty and related risk that real-world projects will face during their life-time. The uncertain world, as well as project complexity, results in a portfolio of options, or choice-paths, a company can choose from. It has been shown that such options can add significant value to a project; however, presently options are often ignored or valued incorrectly [11], [21]. In projects which are inherently uncertain, the Analyst would look for project-valuable options such as, for example:

    1. Defer/Delay – wait and see strategy (call option)
    2. Future growth/ Expand/Extend – resource and capacity expansion (call option)
    3. Replacement – technology obsolescence/end-of-life issues (call option)
    4. Introduction of new technology, service and/or product (call option)
    5. Contraction – capacity decommissioning (put option)
    6. Terminate/abandon – poor cash-flow contribution or market obsolescence (put option)
    7. Switching options – dynamic/real-time decision flexibility (call/put option)
    8. Compound options – phased and sequential investment (call/put option)

It is instructive to consider a number of examples of options/flexibilities which are representative of the mobile telecommunications industry. Real options, or options on physical assets, can be divided into two basic types: calls and puts. A call option gives the holder of the option the right to buy an asset, and a put option provides the holder with the right to sell the underlying asset.

     

First, the call option will be illustrated with a few examples. One of the most important options open to management is the option to Defer or Delay (1) a project. This is a call option, the right to buy, on the value of the project. The defer/delay option will be addressed at length later in this paper. The choice to Expand (2) is an option to invest in additional capacity and increase the offered output if conditions are favorable. This is defined as a call option, i.e., the right to buy or invest, on the value of the additional capacity that could enable extra customers, minutes-of-use and, of course, additional revenue. The exercise price of the call option is the investment and additional cost of providing the additional capacity, discounted to the time of the option exercise. A good example is the expansion of a mobile switching infrastructure to accommodate an increase in the customer base. Another example of expansion could be moving from platform centralization to de-centralization as traffic grows and the cost of centralization becomes higher than the cost of decentralizing a platform. For example, the cost of transporting traffic to a centralized platform location could, depending on cost-structure and traffic volume, become un-economical.

Moreover, Management is often faced with the option to extend the life of an asset by re-investing in renewal; this choice is a so-called Replacement Option (3). This is a call option, the right to re-invest, on the asset's future value. An example could be the renewal of GSM base-transceiver stations (BTS), which would extend their life and add additional revenue streams in the form of options to offer new services and products not possible on the older equipment. Furthermore, there might be additional value in reducing the operational cost of old equipment, which typically has higher running costs than new equipment.

Terminate/Abandonment (6) of a project is an option to either sell or terminate a project. It is a so-called put option, i.e., it gives the holder the right to sell, on the project's value. The strike price would be the termination value of the project reduced by any closing-down costs. This option mitigates the impact of a poor investment outcome and increases the valuation of the project. A concrete example could be the option to terminate poorly revenue-generating services or products, or to abandon a technology where the operating costs result in value destruction, i.e., where the growth in cash-flows cannot compensate for the operating costs. Contraction choices (5) are options to reduce the scale of a project's operation. This is a put option, the right to "sell", on the value of the lost capacity. The exercise price is the present value of future cost and investments saved, as seen at the time of exercising the option. In reality most real investment projects can be broken up into several phases and will therefore also consist of several options, and the proper investment and decision strategy will depend on the combination of these options. Phased or sequential investment strategies often include Compound Options (8), which are a series of options arising sequentially.

     

The radio access network site-rollout investment strategy is a good example of how compound options analysis could be applied. The site rollout process can be broken into (at least) 4 phases: 1. Site identification, 2. Site acquisition, 3. Site preparation (site build/civil works), and finally 4. Equipment installation, commissioning and network integration. Phase 2 depends on Phase 1, Phase 3 depends on Phase 2, and Phase 4 depends on Phase 3: a sequence of investment decisions each depending on the previous decision, thus the anatomy of the real options is that of Compound Options (8). Assuming that a given site location has been identified and acquired (a call option on the site lease), which is typically the time-consuming and difficult part of the overall rollout process, the option to prepare the site emerges (Phase 3). This option, also a call option, could depend on the market expectations and the competition's strategy, local regulations and site-lease contract clauses. The flexibility arises from deferring/delaying the decision to commit investment to site preparation. The decision or option time-horizon for this deferral/delay option is typically set by the lease contract and its conditions. If the option expires the lease costs have been lost, but the value arises from not investing in a project that would result in negative cash-flow. As market conditions for the rollout technology become more certain, with higher confidence in revenue prospects, a decision to move to site preparation (Phase 3) can be made. In terms of investment management, after Phase 3 has been completed there is little reason not to pursue Phase 4 and install and integrate the equipment enabling service coverage around the site location. If at the point of Phase 3 the technology or supplier choice still remains uncertain, it might be a valuable option to await (deferral/delay option) a decision on the supplier and/or technology to be deployed.

In the site-rollout example described, other options can be identified, such as an abandon/terminate option on the lease contract (i.e., a put option). After Phase 4 has been completed there might come a day when an option to replace the existing equipment with new and more efficient/economical equipment arises. It might even be interesting to consider the option value of terminating the site altogether and de-installing the equipment. This could happen when operating costs exceed the cash-flow. It should be noted that the termination option is quite dramatic with respect to site-rollout, as this decision would disrupt network coverage and could antagonize existing customers. However, the option to replace the older technology, and maybe un-economical services, with a new and more economical technology-service option might prove valuable. Most options are driven by various sources of uncertainty. In the site-rollout example, uncertainty might be found with respect to site-lease cost, time-to-secure-site, inflation (impacting the site-build cost), competition, site supply and demand, market uncertainties, and so forth.

     

Going back to Example 1 and Example 4, the platform subject-matter expert (often different from the Analyst) has identified that if the customer base exceeds 4 Million customers an expansion of €10M will be needed. Thus, the previous examples underestimate the potential investments in platform expansion due to customer growth. Given that the base-line market scenario does identify that this would be the case in the 2nd year of the project, the €10M is included in the deterministic conventional business case for the new service. The result of including the €10M in the 2nd year of Example 1 is that the NPV drops from €26M to €8.7M (a ∆NPV of minus €17.6M). Obviously, the conventional Analyst would stop here and still be satisfied that this seems to be a good and solid business case. The approach of Example 4 is then applied to the new situation, subject to the same market uncertainty given in Example 3. From the Monte Carlo simulation it is found that the NPV mean-value is only €4.7M. Moreover, the downside is that the probability of loss (i.e., an NPV less than 0) is now 38%. It is important to realize that in both examples the assumption is that there is no choice or flexibility concerning the €10M investment; the investment will be committed in year two. However, the project has an option, namely the option to expand provided that the customer base exceeds 4 Million customers. Time-wise it is a flexible option in the sense that, with an expected project lifetime of 5 years, at any time within this time-horizon the customer base may exceed the critical mass for platform expansion.

    example5

    Example 5: Shows the NPV valuation outcome when an option to expand is included in the model of Example 4. The €10M  is added if and only if the customer base exceeds 4 Million.

In Example 5 above, the probabilistic model has been changed to add the €10M if and only if the customer base exceeds 4 Million. In essence, the option of expansion is being simulated. Treating the expansion as an option is clearly valuable for the business case, as the NPV mean-value has increased from €4.7M to €7.6M. In principle the option value could be taken to be €2.9M. It is worth noticing that the probability of loss has also been reduced (from 38% to 25%) by allowing for the option not to expand the platform if the customer-base target is not achieved. It should be noted that although the example does illustrate the idea of options and flexibility, it is not completely in line with a proper real options analysis.
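As a rough sketch of the Example 5 logic, the snippet below commits the €10M expansion only in the first simulated year in which the customer base exceeds 4 million, and compares this with a forced year-2 expansion. The growth rates, margin per customer, upfront investment and discount rate are assumptions chosen for illustration; they do not reproduce the €4.7M/€7.6M figures above.

```python
import numpy as np

# Assumed, illustrative inputs; the 4M trigger and EUR 10M expansion follow the text.
rng = np.random.default_rng(7)

discount_rate = 0.10
expansion_capex = 10.0              # EUR million, committed only when triggered
trigger_customers = 4.0             # million customers
margin_per_customer = 6.0           # EUR per customer per year (assumed)

def simulate_npv(with_option, n_samples=20_000):
    npvs = np.empty(n_samples)
    for i in range(n_samples):
        customers, npv, expanded = 2.0, -30.0, False     # 2M start base, 30M upfront
        for year in range(1, 6):
            customers *= rng.normal(1.25, 0.15)          # uncertain yearly growth
            cash_flow = customers * margin_per_customer
            if with_option:
                # option logic: expand only when the trigger is actually exceeded
                if not expanded and customers > trigger_customers:
                    cash_flow -= expansion_capex
                    expanded = True
            elif year == 2:
                # "no option" case: expansion is committed in year 2 regardless
                cash_flow -= expansion_capex
            npv += cash_flow / (1 + discount_rate) ** year
        npvs[i] = npv
    return npvs

for label, flag in [("No option (forced year-2 expansion)", False),
                    ("With option to expand", True)]:
    s = simulate_npv(flag)
    print(f"{label}: mean NPV {s.mean():.1f}M, P(loss) {(s <= 0).mean():.0%}")
```

The difference between the two mean NPVs plays the role of the effective option value discussed in Example 6 below.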

    example6

Example 6: Shows the different valuation outcomes depending on whether the €10M platform expansion (when the customer base exceeds 4 Million) is considered unavoidable (i.e., the "Deterministic No Option" and "Probabilistic No Option" cases) or as an option or choice ("Probabilistic With Option"). It should be noted that the additional €3M difference between "Probabilistic No Option" and "Probabilistic With Option" can be regarded as an effective option value, but it does not necessarily agree with a proper real-option valuation of the option to expand. Another difference between the two probabilistic models is that in the model with the option to expand, the expansion can happen in any year in which the customer base exceeds 4 Million, while the no-option model only considers the expansion in year 2, where according to the marketing forecast the base exceeds the 4 Million. Note that Example 6 differs in assumptions from Example 1 and Example 4, as these do not include the additional €10M.

     

Example 6 above summarizes the three different approaches to valuation analysis: deterministic (essentially one-dimensional), probabilistic without options, and probabilistic including the value of options.

The investment analysis of real options as presented in this paper is not a revolution but rather an evolution of the conventional cash-flow and NPV analysis. The approach to valuation is first to understand and properly model the base-line case. After the conventional analysis has been carried out, the analyst, together with subject-matter experts, should determine areas of uncertainty by identifying the most relevant uncertain input parameters and their variation ranges. As described in the previous section, the deterministic business model is thereby transformed into a probabilistic model. The valuation range, or NPV probability distribution, is obtained by Monte Carlo simulations and the opportunity and risk profile is analyzed. The NPV opportunity-risk profile will identify the need for mitigation strategies, which in itself leads to studying the various options inherent in the project. The next step in the valuation analysis is to value the identified project or real options. The qualitative importance of considering real options in investment decisions has been argued in this paper. It has been shown that conventional investment analysis, represented by net-present-value and discounted cash-flow analysis, gives only one side of the valuation picture. As uncertainty is the "father" of opportunity and risk, it needs to be considered in the valuation process. Are identified options always valuable? The answer to that question is no: if we are certain that conditions will not move in our favor, then the option has little or no value. Think for example of a growth option at the onset of a severe recession.

     

Real options analysis is often presented as being difficult and too mathematical, in particular due to the involvement of the partial differential equations (PDEs) that describe the underlying uncertainty (continuous-time stochastic processes, involving Markov processes, diffusion processes, and so forth). Such PDEs are the basis for the ground-breaking work of Black-Scholes-Merton [22] [23] on option pricing, which provided the financial community with an analytical expression for valuing financial options. However, "heavy" mathematical analysis is not really needed for getting started on real options.

     

Real options are a way of thinking: identifying valuable options in a project or potential investment that could create even more value by being treated as options instead of as deterministic givens.

     

Furthermore, Cox et al. [24] proposed a simplified algebraic approach which involves so-called binomial trees representing price, cash-flow, or value movements in time. The binomial approach is very easy to understand and implement, resembling standard decision-tree analysis, and is visually easy to generate as well as algebraically straightforward to solve.
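As a minimal illustration of the binomial idea, the sketch below values a simple (European-style) option on a project with a Cox-Ross-Rubinstein lattice [24]. The project value, exercise cost, volatility and rates are invented for demonstration only.

```python
import math

def crr_option_value(pv_underlying, strike, sigma, r, T, steps, option="call"):
    """Cox-Ross-Rubinstein binomial lattice for a European option on a project value."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor per step
    d = 1.0 / u                              # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # pay-offs at the end of the lattice
    values = []
    for j in range(steps + 1):
        s_t = pv_underlying * (u ** j) * (d ** (steps - j))
        payoff = max(s_t - strike, 0.0) if option == "call" else max(strike - s_t, 0.0)
        values.append(payoff)

    # roll back through the lattice, discounting the expected value one step at a time
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Assumed example: the option to invest 10 in a project currently worth 9,
# with 40% volatility, exercisable in 2 years.
print(round(crr_option_value(pv_underlying=9.0, strike=10.0, sigma=0.40,
                             r=0.05, T=2.0, steps=200), 2))
```

The same lattice, evaluated with an early-exercise check at each node, extends naturally to the defer, expand and abandon options listed earlier.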

     

    SUMMARY

Real options are everywhere where uncertainty governs investment decisions. It should be clear that uncertainty can be turned into a great advantage for value growth, provided proper contingencies are taken to reduce the downside of uncertainty, i.e., mitigating risk. Very few investment decisions are static, as conventional discounted cash-flow analysis otherwise might indicate; they are ever changing due to changes in market conditions (global as well as local), technologies, cultural trends, etc. In order to continue to create wealth and value for the company, value growth is needed, and this should force a dynamic investment management process that continuously looks at the existing as well as future valuable options available to the industry. It is compelling to say that a company's value should be related to its real-options portfolio, its track record in mitigating risk, and its record of achieving the uncertain upside of opportunities.

     

ACKNOWLEDGEMENT

    I am indebted to Sam Sugiyama (President & Founder of EC Risk USA & Europe) for taking time out from a very busy schedule and having a detailed look at the content of our paper. His insights and hard questions have greatly enriched this work. Moreover, I would also like to thank Maurice Ketel (Manager Network Economics), Jim Burke (who in 2006 was Head of T-Mobile Technology Office) and Norbert Matthes (who in 2007 was Head of Network Economics T-Mobile Deutschland) for their interest and very valuable comments and suggestions.

    ___________________________

    APPENDIX – MATHEMATICS OF VALUE.

Firstly we note that the Future Value FV (of money) can be defined as the Present Value PV (of money) times a relative increase given by an effective rate r* (i.e., the rate that represents the change of money value between time periods), reflecting value increase, or of course decrease, over a course of time t;

FV_t = (1 + r^*)^t \, PV

    So the Present Value given we know the Future Value would be

PV = \frac{FV_t}{(1 + r^*)^t}

    For a sequence (or series) of future money flow we can write the present value as 

PV = \sum\limits_{t = 1}^{N} \frac{FV_t}{(1 + r^*)^t}

If r* is positive, time-value-of-money follows naturally, i.e., money received in the future is worth less than money today. It is a fundamental assumption that you can create more value with your money today than by waiting to receive it in the future (i.e., not per se right for the majority of human beings, but maybe for Homo Economicus).
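For completeness, a few lines of code capture the discounting above; note that the summation starts at t = 1, so a t = 0 investment should be added undiscounted (the common spreadsheet mistake mentioned in footnote [1]). The cash-flow numbers are of course just an assumed example.

```python
def present_value(future_cash_flows, rate):
    """PV of future cash-flows FV_1..FV_N: sum of FV_t / (1 + r*)^t, with t starting at 1."""
    return sum(fv / (1.0 + rate) ** t for t, fv in enumerate(future_cash_flows, start=1))

# Assumed example: 100 received in each of the next 3 years, discounted at 10%.
print(round(present_value([100, 100, 100], 0.10), 2))   # ~248.69
```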

First, the sequence of future money values (discounted to the present) has the structure of a geometric series: V_n = \sum\limits_{k = 0}^{n} \frac{y_k}{(1 + r)^k}, with y_{k+1} = g^* y_k (i.e., g^* representing the change in y between the two periods k and k+1).

Define a_k = \frac{y_k}{(1 + r)^k} and note that \frac{a_{k+1}}{a_k} = \frac{g^*}{1 + r} = \frac{1 + g}{1 + r} = s, thus in this framework we have V_n = \sum\limits_{k = 0}^{n} s^k (note: I am doing all kinds of "naughty" simplifications to not get into too much trouble with the math).

    The following relation is easy to realize:

\begin{array}{l} V_n = 1 + s + s^2 + s^3 + \dots + s^n \\ s V_n = s + s^2 + s^3 + \dots + s^n + s^{n+1} \end{array}

Subtract the two equations from each other and the result is (1 - s) V_n = 1 - s^{n+1} \quad \Leftrightarrow \quad V_n = \frac{1 - s^{n+1}}{1 - s} \quad \Leftrightarrow \quad V_n = \frac{1 + r}{r - g} - \frac{1 + g}{r - g} \left( \frac{1 + g}{1 + r} \right)^n

In the limit where n goes toward infinity, providing that \left| s \right| < 1 \quad \Leftrightarrow \quad \left| \frac{1 + g}{1 + r} \right| < 1, it can be seen that V_\infty = \frac{1}{1 - s} \quad \Leftrightarrow \quad V_\infty = \frac{1 + r}{r - g}

It is often forgotten that this is only correct if and only if \left| 1 + g \right| < \left| 1 + r \right|, or in other words, if the discount rate (to present value) is higher than the future value growth rate.

You might often hear your finance folks (or M&A jockeys) talk about Terminal Value (they might also call it continuation value or horizon value … for many years I called it Termination Value … though that is of course slightly out of sync with Homo Financius, not to be mistaken for Homo Economicus :-).

PV = \sum\limits_{t = 1}^{T} \frac{FV_t}{(1 + r^*)^t} + TV_{T \to \infty} = NPV_T + \sum\limits_{t = T+1}^{\infty} \frac{FV_t}{(1 + r^*)^t}

with TV representing the Terminal Value and NPV representing the net present value as calculated over a well-defined time span T.

     

I always found the Terminal Value fascinating, as the size (matters?) or relative magnitude can be very substantial and frequently far greater than the NPV in terms of "value contribution" to the present value. Of course, we do assume that our business model will survive to "Kingdom Come". That appears to be a slightly optimistic assumption (wouldn't you agree, my friends? :-). We also assume that everything in the future is defined by the last year of cash-flow, the cash-flow growth rate and our discount rate (hmmm, don't say that Homo Financius isn't optimistic). Mathematically this is all okay (if \left| 1 + g \right| < \left| 1 + r \right|); economically, maybe not so. I have had many and intense debates with past finance colleagues about the validity of Terminal Value. However, to date it remains fairly standard practice to boost the enterprise value of a business model with a "bit" of Terminal Value.

    Using the above (i.e., including our somewhat “naughty” simplifications)

TV = \sum\limits_{t = T+1}^{\infty} \frac{y_t}{(1 + r)^t}

TV = \frac{(1 + g)\, y_T}{(1 + r)^{T+1}} \sum\limits_{j = 0}^{\infty} \frac{(1 + g)^j}{(1 + r)^j}

TV \approx \frac{(1 + g)\, y_T}{(r - g)\,(1 + r)^T} \quad \forall \; \left| 1 + g \right| < \left| 1 + r \right|

It is easy to see why TV can be a very substantial contribution to the total value of a business model. The denominator (r - g) tends to be a lot smaller than 1 (note that we always require g < r) and thus "blows up" the TV contribution to the present value (even when g is chosen to be zero).
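A quick, assumed numerical example makes the point: with a flat cash-flow of 100 over a 5-year explicit horizon, 2% perpetual growth thereafter and a 10% discount rate, the terminal value ends up contributing roughly two-thirds of the total present value.

```python
def terminal_value(last_cash_flow, growth, rate, horizon):
    """Terminal value discounted to t=0: (1+g)*y_T / ((r-g)*(1+r)^T), valid only for g < r."""
    assert growth < rate, "terminal value only converges when g < r"
    return (1 + growth) * last_cash_flow / ((rate - growth) * (1 + rate) ** horizon)

# Assumed inputs: y = 100 per year for T = 5 years, then g = 2% forever, r = 10%.
r, g, T, y = 0.10, 0.02, 5, 100.0
explicit_npv = sum(y / (1 + r) ** t for t in range(1, T + 1))
tv = terminal_value(y, g, r, T)
print(round(explicit_npv, 1), round(tv, 1), round(tv / (explicit_npv + tv), 2))
# -> roughly 379.1, 791.7 and a ~0.68 terminal-value share of the total value
```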

Let's evaluate the impact of uncertainty in the interest rate x; first re-write the NPV formula:

NPV_n = V_n = \sum\limits_{k = 0}^{n} \frac{y_k}{(1 + x)^k}, where y_k is the cash-flow at time k (for the moment it remains unspecified). From error/uncertainty propagation it is known that the standard deviation can be written as

\Delta z^2 = \left( \frac{\partial f}{\partial x} \right)^2 \Delta x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \Delta y^2 + \dots,

where z = f(x, y, \dots) is a multi-variate function. Identifying the terms in the NPV formula is easy: z = V_n and f(x, \{y_k\}; k) = \sum\limits_k \frac{y_k}{(1 + x)^k}

In the first approximation, assume that x is the uncertain parameter while y_k is certain (i.e., ∆y_k = 0); then the following holds for the NPV standard deviation:

\left( \Delta V_n \right)^2 = \left( \sum\limits_{k = 0}^{n} \frac{k\, y_k}{(1 + x)^{k+1}} \right)^2 \left( \Delta x \right)^2 \quad \Leftrightarrow \quad \Delta V_n = \left| \Delta x \right| \left| \sum\limits_{k = 0}^{n} \frac{k\, y_k}{(1 + x)^{k+1}} \right|,

in the special case where y_k = y is constant for all k. It can be shown (by a similar analysis as above) that

\Delta V_n = \left| \Delta x \right| \left| y \right| \, r \left| \frac{1 - r^{n+1}}{(1 - r)^2} - \frac{1 + n\, r^{n+1}}{1 - r} \right| \quad with \quad r = \frac{1}{1 + x}.

In the limit where n goes toward infinity, applying l'Hospital's rule to show that n\, r^{n+1} \to 0 for n \to \infty, the following holds for propagating uncertainty/errors in the NPV formula:

\Delta V_\infty = \left| \Delta x \right| \left| y \right| \, r \left| \frac{1}{(1 - r)^2} - \frac{1}{1 - r} \right| = \left| \Delta x \right| \left| y \right| \frac{r^2}{(1 - r)^2} = \left| \Delta x \right| \left| y \right| \frac{1}{x^2}

Let's take a numerical example: y = 1, the interest rate x = 10%, and the uncertainty/error is assumed to be no more than ∆x = 3% (7% ≤ x ≤ 13%); assume that n → ∞ (infinite time-horizon). Using the formulas derived above, NPV∞ = 11, and the first-order error estimate is ∆NPV∞ ≈ ±3.0; evaluating the NPV directly at the end-points of the interval (7% and 13%) gives a spread of roughly ±3.3, i.e., close to a 30% error on the estimated NPV. If the assumed cash-flows (i.e., the y_k) are also uncertain, the error will be even greater. The above analysis becomes more complex when y_k is non-constant over time k and when y_k also has to be regarded as uncertain. The use of, for example, Microsoft Excel then becomes rather useful to gain further insight (although the math is pretty fun too).
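The perpetuity example can be checked in a couple of lines; the first-order propagation formula and a direct evaluation at the end-points of the 7% to 13% interval bracket the quoted uncertainty.

```python
# Check of the example above: y = 1, x = 10%, dx = 3%, infinite horizon.
y, x, dx = 1.0, 0.10, 0.03

npv_inf = y * (1 + x) / x                 # closed form of sum_{k>=0} y/(1+x)^k
first_order = dx * y / x ** 2             # first-order error propagation estimate
# direct evaluation of the perpetuity at the interval end-points 7% and 13%
spread = (y * (1 + x - dx) / (x - dx) - y * (1 + x + dx) / (x + dx)) / 2

print(round(npv_inf, 2), round(first_order, 2), round(spread, 2))
# -> 11.0, 3.0 and ~3.3: roughly a 30% uncertainty on the estimated NPV
```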


[1] This is likely due to the widespread use of MS Excel and financial pocket calculators allowing for easy NPV calculations, without the necessity for the user to understand the underlying mathematics, treating the formula as a "black box" calculation. Note that a common mistake when using the MS Excel NPV function is to include the initial investment (t=0) in the formula; this is wrong, as the NPV formula starts at t=1. The initial investment would then be discounted, which would lead to an overestimation of value.

    [2] http://www.palisade-europe.com/. For purchases contact Palisade Sales & Training, The Blue House 30, Calvin Street, London E1 6NW, United Kingdom, Tel. +442074269955, Fax +442073751229.

    [3] Sugiyama, S.O., “Risk Assessment Training using The Decision Tools Suite 4.5 – A step-by-step Approach” and “Introduction to Advanced Applications for Decision Tools Suite – Training Notebook – A step-by-step Approach”, Palisade Corporation. The Training Course as well as the training material itself can be highly recommended.

[4] Most people are in general not schooled in probability theory, statistics and mathematical analysis. Great care should be taken to present matters in an intuitive rather than mathematical fashion.

    [5] Hill, A., “Corporate Finance”, Financial Times Pitman Publishing, London, 1998.

[6] This result comes straight from geometric series calculus. Remember that a geometric series is defined as \sum_k a\, s^k, where a is constant. For the NPV geometric series it can easily be shown that s = \frac{1}{1+r}, r being the interest rate. A very important property is that the series converges if \left| s \right| < 1, which is the case for the NPV formula when the interest rate r > 0. The convergent series sums to the finite value \frac{a\, s}{1 - s} for k starting at 1 and summed up to infinity.

    [7] Benninga, S., “Financial Modeling”, The MIT Press, Cambridge Massachusetts (2000), pp.27 – 52. Chapter 2 describes procedures for calculating cost of capital. This book is the true practitioners guide to financial modeling in MS Excel.

    [8] Vose, D., “Risk Analysis A Quantitative Guide”, (2nd edition), Wiley, New York, 2000. A very competent book on risk modeling with a lot of examples and insight into competent/correct use of probability distribution functions.

[9] The number of scenario combinations is calculated as follows: an uncertain input variable can be characterized by a possibility set of length s; in the case of k uncertain input variables, the number of combinations can be calculated from s and k, for which the COMBIN function of Microsoft Excel can be used.

[10] A Monte Carlo simulation refers to the traditional method of sampling random (stochastic) variables in modeling. Samples are chosen completely randomly across the range of the distribution. For highly skewed or long-tailed distributions a large number of samples is needed for convergence. The @RISK product from Palisade Corporation (see http://www.palisade.com) supplies the perfect tool-box (an Excel add-in) for converting a deterministic business model (or any other model) into a probabilistic one.

[11] Luehrman, T.A., "Investment Opportunities as Real Options: Getting Started with the Numbers", Harvard Business Review, (July-August 1998), pp. 3-15.

[12] Luehrman, T.A., "Strategy as a Portfolio of Real Options", Harvard Business Review, (September-October 1998), pp. 89-99.

[13] Providing that the business assumptions were not inflated to make the case positive in the first place.

    [14] Hull, J.C., “Options, Futures, and Other Derivatives”, 5th Edition, Prentice Hall, New Jersey, 2003. This is a wonderful book, which provides the basic and advanced material for understanding options.

    [15] A derivative is a financial instrument whose price depends on, or is derived from, the price of another asset.

    [16] Boer, F.P., “The Valuation of Technology Business and Financial Issues in R&D”, Wiley, New York, 1999.

    [17]  Amram, M., and Kulatilaka, N., “Real Options Managing Strategic Investment in an Uncertain World”, Harvard Business School Press, Boston, 1999. Non-mathematical, provides a lot of good insight into real options and qualitative analysis.

[18] Copeland, T., and V. Antikarov, "Real Options: A Practitioner's Guide", Texere, New York, 2001. While the book provides a lot of insight into the practical implementation of Real Options, great care should be taken with the examples in this book. Most of the examples are full of numerical mistakes. Working out the examples and correcting the mistakes provides a great means of obtaining practical experience.

    [19] Munn, J.C., “Real Options Analysis”, Wiley, New York, 2002.

    [20] Amram. M., “Value Sweep Mapping Corporate Growth Opportunities”, Harvard Business School Press, Boston, 2002.

    [21] Boer, F.P., “The Real Options Solution Finding Total Value in a High-Risk World”, Wiley, New York, 2002.

[22] Black, F., and Scholes, M., "The Pricing of Options and Corporate Liabilities", Journal of Political Economy, 81 (May/June 1973), pp. 637-659.

[23] Merton, R.C., "Theory of Rational Option Pricing", Bell Journal of Economics and Management Science, 4 (Spring 1973), pp. 141-183.

[24] Cox, J.C., Ross, S.A., and Rubinstein, M., "Option Pricing: A Simplified Approach", Journal of Financial Economics, 7 (October 1979), pp. 229-263.

    GSM – Gone So Much … or is it?

• A billion GSM subscriptions & almost $200 Billion of GSM revenue will have gone within the next 5 years.
• GSM earns a lot less than its "fair" share of the top-line, a trend that will worsen further going forward.
• GSM revenues are fading out rapidly across a majority of the mobile markets across the Globe.
• Accelerated GSM phase-out happens when the pricing level of the next technology option relative to the GDP per capita drops below 2%.
• 220 MHz of great spectrum is tied up in GSM, just waiting to be liberated.
• GSM is horrifically spectrally inefficient in comparison to today's cellular standards.
• Eventually we will have 1 GSM network across a given market, shared by all operators, supporting fringe legacy devices (e.g., M2M) while allowing operators to re-purpose remaining legacy GSM spectrum.
• The single Shared-GSM network might survive past any economic justification for its existence, merely serving legal and political interests.

Gone So Much … GSM is ancient, uncool and so 90s … why would anybody bother with that stuff any longer … it's synonymous with the Nokia handset (which btw is also ancient, uncool and so 90s … and almost no longer among us thanks to our friend Elop …). In many emerging markets GSM-only phones are hardly demanded or sold any longer in the grey markets; grey markets that make up 90% (or more) of handset sales in many of those emerging markets. Moreover, it's not only AT&T in the US talking about 2G phase-out; an emerging market such as Thailand is also believed to be void of GSM within the next couple of years.

    bananaphone

A bit of personal history: some years ago I had the privilege to work with some very smart people in the Telecom Industry on merging two very big mobile operations (ca. 140 million in combined customer base). One of our cardinal spectrum-strategic and technology arguments was the gain in spectral efficiency such a merger would bring. Anecdotally it is worth mentioning that the technology synergies and spectrum-strategic ideas largely would have financed the deal in sheer synergies.

In discussions with the country's regulator we were asked why we could not "just" switch off GSM? Then use that freed GSM spectrum for new cellular technologies, such as UMTS and even LTE, thereby gaining sufficient spectral efficiency that merging the two businesses would become unnecessary. The proposal would have effectively switched off a service that served ca. 70 Million GSM-only (incl. EDGE & GPRS) subscribers (at the time) across the country. Now that would have been expensive and most likely would have caused a couple of thousand class-action suits to boot.

Here is how one could have thought about the process of clearing out GSM for something better (though overall it is more a case of for richer or poorer). There is no "just … press the off button", as Sprint also experienced with their iDEN migration.

    customer migration

Our thoughts (and submitted Declarations) were that by merging the two operators' spectrum (and site pools) we could create sufficient spectral capacity to support both GSM (which we all granted was phasing out) and provide more capacity and a better customer experience for the Now Generation Technology (i.e., HSPA+, or 4G as they like to call it in that particular market … Heretics! ;-). A recent must-read GigaOM blog by Kevin Fitchard, "AT&T begins cannibalizing 2G and 3G networks to boost LTE capacity", describes very well the aggressive no-nonsense thinking of US carriers (or simply desperation, or both) when it comes to the quest for spectrum efficiency and enhanced customer experience (which coincidentally also yields the best ARPUs).

It is worth mentioning that more than 2×110 MHz is tied up in GSM: up to 2×35 MHz at 900MHz (if E-GSM has been invoked) and 2×75 MHz at 1800MHz (yes! I am ignoring US GSM band plans; they are messed up but pretty fun nevertheless … a different story for another time). Being able to re-purpose this amount of spectrum to more spectrally efficient cellular technologies (e.g., UMTS Voice, HSPA+ and LTE) would clearly leapfrog mobile broadband, increase voice capacity at increased quality, and serve the current billions of GSM-only users as well as the next billion un-connected or under-served customer segments with The Internet. The macro-economic benefits would be very substantial.

    220 MHz of great spectrum is tied up in GSM, just waiting to be liberated.

Back in 2003 I did my first detailed GSM phase-out techno-economic analysis (a bit premature, one might add). I was very interested in questions such as "when can we switch off GSM?", "what are the economic premises for exiting GSM?", "why do operators today still continue to encourage subscriber growth on their GSM networks?", "today, if you got your hands on GSM-usable spectrum, would you start a GSM operation?", "why?" and "why not?", etc.

So why don't we "just" switch off GSM and let go of that old, inefficient cellular technology?

How inefficient, you may ask? … Depending a little on what state the GSM network is in, we can have ca. 3 times more voice users in WCDMA (i.e., UMTS) compared to GSM with Adaptive Multi-Rate (AMR) codec support. Newer technology releases support even more dramatic leaps in voice handling capabilities.

    voice efficiency GSM vs wcdma

Data? What about cellular data? That GSM, including its data handling enhancements GPRS and EDGE, is light-bits away from the data handling capabilities of WCDMA, HSPA+, LTE and so forth is at this point a well-established fact.

Clearly GSM is horrifically spectrally inefficient in comparison to later cellular standards such as WCDMA, HSPA(+) and LTE(+), and its only light (in a very dark tunnel) is that it is supported at lower frequencies (i.e., more economical deployment in rural areas and for large surface-area countries). Though today that is no longer unique, as UMTS and LTE are available in similar or even lower frequency ranges. … Of course there are other economic issues at play as well, as we will see below.

Why do we still bother with a 27+ year old technology? A technology that has very poor spectral efficiency in comparison with later cellular technologies. GSM after all "only" provides voice, SMS and pretty low-bandwidth mobile data (while better than nothing, still very close to nothing).

Well, for one thing there is of course the money (and we know that that makes the world go around): ca. 4+ Billion GSM subscriptions worldwide (incl. GPRS & EDGE) generating a total GSM turnover of 280+ Billion US$.

In 2017 we anticipate having a little less than 3 Billion GSM subscriptions generating ca. 100+ Billion US$. So … a Billion GSM subscriptions and almost 200 Billion US$ of GSM revenue will have disappeared within the next 5 years (and for the sake of mobile operators hopefully been replaced by something better).

In this trend APAC takes the lion's share of the GSM subscription loss, with ca. 65% (ca. 800 Million) of the total loss and ca. 50% of the GSM top-line loss (ca. 100 Billion US$).

    The share of GSM revenue is rapidly declining across (almost) all markets;

    gsm revenue share

The GSM revenue as a share of the total revenue (as well as in absolute terms) rapidly diminishes as 3G and LTE are introduced and customers migrate to those more modern technologies.

If there should be any doubt, GSM does not get its fair share of revenue compared to its share of the subscriptions (or subscribers for that matter):

    2012 GSM RS vs MS

While the above data does contain two main clusters, it still illustrates pretty well (what should be no real surprise to anyone) that GSM earns back a lot less than its "fair" share (whatever that really means). And again, if anyone would be in doubt, that picture will only get grimmer as we fast-forward to the near future;

    2017 GSM RS vs MS

    Grim, Grimmer, Grimmest!

Today GSM earns a lot less than its "fair" share of the top-line, and this trend will worsen further going forward.

So we can soon phase out GSM? Right? Hmmm! Maybe not so fast!

Well, while GSM revenue has certainly declined and is expected to continue declining, in many markets the GSM-only customer base (here defined as customers that only have GSM voice, GPRS and/or EDGE available) has not declined in the proportion that the related revenue decline might fool us into believing.

    gsm market share

The above statistics illustrate the GSM-only subscription share of the total cellular business.

There is more to GSM than market and revenue share … and we do need to have a look at the actual decline of GSM subscriptions (or unique users, which is not per se the same) and revenue;

GSM_actual_decline

GSM revenues are expected to go into massive free fall over the next 5 years!

However, also observe (in the chart above) that we need to sustain the network and its associated cost, as a considerable number of customers remain on the network despite generating a lot less top line.

As we have already seen above, in the next 5 years there will be many markets where the GSM subscription and subscriber share will remain reasonably strong, albeit the technology's ability to turn over revenue will be in free fall in most markets.

Analyzing data from Pyramid Research (actuals & projections for the period 2013 to 2017), including other analyst data sets (in particular on actual data), and extrapolating the data beyond 2017 with diffusion models approximating the dynamics of technology migration in the various markets, we can get an idea about the remaining (residual) life of GSM. In other words, we can make GSM phase-out projections as well as get a feel for the terminal revenue (or residual value) left in GSM. Further, we can get an appreciation of how that terminal value compares to the total mobile turnover over the same GSM phase-out period.

The chart below provides the results of such a comprehensive analysis. The colored bars illustrate the various years of onset of GSM phase-out: (a) the earliest year, which is equal to the lower end of the light-blue bar, is typically the year where migration off GSM accelerates, (b) the upper end of the light-blue bar is the most likely year after which GSM no longer would be profitable, and (c) the upper end of the red bar illustrates the maximum expected life of GSM. It should be noted that the GSM Phase-out chart below might not be shown in its entirety (in particular the right side of the chart). Clicking on the chart itself will display it in full.

    gsm phase-out projections

Taking the above GSM phase-out years, we can get a feeling for how many useful years GSM has left in terms of economic life and customer life-time, defined as whichever event comes first: (i) fewer than 1 Million GSM subscriptions or (ii) 5% GSM market share. 2014 has been taken as the reference year;

    remaining usefull life of GSM

It should be noted that the Useful Life-span of GSM chart above might not be shown in its entirety (in particular the right side of the chart). Clicking on the chart itself will display it in full.

AREA                       #MARKETS    GSM REMAINING LIFE
Western Europe                   16      4.1 +/- 3.3 years
Asia Pacific                     13      6.4 +/- 5.0 years
Middle East & Africa             17     11.0 +/- 6.2 years
Central Eastern Europe            8      6.9 +/- 4.8 years
Latin America                    19      6.6 +/- 3.7 years

That Western Europe (and the US, which is not shown here) has the most aggressive time-lines for GSM phase-out should come as no surprise. 3G/UMTS has been deployed there the longest, and the 3G price level relative to GDP has come down to a level where there is hardly any barrier for most mobile users to switch from GSM to UMTS. Also, the WEU region has the most extensive UMTS coverage, which further removes the GSM-to-UMTS switching barrier. The Central Eastern Europe average is pulled up (i.e., towards a longer useful life) substantially by Russia and Ukraine, which show fairly extreme laggardness in GSM phase-out (in comparison with the other CEE markets). For Middle East and Africa it should be noted that there are two very strong clusters of data distinguishing the Gulf States from the African countries. Most of the Gulf States have only very few years of remaining useful life of GSM. In general, the remaining GSM life trend can be described fairly well by the amount of time UMTS has been in a given market (though smartphone introduction did kick-start the migration from GSM more than anything else), the extent of UMTS coverage (i.e., degree of pop and geo coverage), and the basic economics of UMTS.

In my analysis I have assumed 4 major triggers for GSM phase-out (a simple sketch of the first trigger heuristics follows the list below):

1. Analysis shows that once the 3G (or non-2G) ARPU is below 2% of the nominal GDP per capita, migration away from GSM accelerates. I have (somewhat arbitrarily) chosen 1% as my limit below which there is no longer any essential barrier to customers migrating off GSM.
2. A GSM penetration below 5% is taken as a decision point for converting (possibly by subsidies) GSM customers to a more modern and efficient technology. This obviously does depend on the total customer base and the local economic framework, and as such is a heuristic rather than a universal rule.
3. My 3rd criterion for phasing out GSM is when its base is below 1 million subscriptions (i.e., typically 500 to 800 thousand subscribers).
4. Last but not least, before a complete phase-out of GSM can commence, operators obviously need to provide alternative-technology (e.g., UMTS or LTE) coverage that can replace the existing GSM coverage. This is in general only economical if a comparable frequency range can be used; thus, for example, UMTS coverage replacement of GSM in many cases requires re-farming/re-purposing 900MHz from GSM to UMTS. This last point can be a very substantial bottleneck and show-stopper for migration from GSM to UMTS, particularly in rural areas or in countries with very substantial rural populations on GSM.
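The sketch below encodes the simple threshold heuristics of triggers 1 to 3 (the coverage-replacement pre-condition of trigger 4 is qualitative and not modeled). The thresholds follow the text; the sample market figures are invented for illustration only.

```python
def gsm_phaseout_signals(market):
    """Return which of the simple trigger heuristics a market satisfies.

    market: dict with keys
        'gsm_subs_m'    - GSM-only subscriptions in millions
        'gsm_share'     - GSM share of total subscriptions (0..1)
        'arpu3g_to_gdp' - 3G (non-2G) ARPU as a fraction of GDP per capita
    """
    signals = []
    if market["arpu3g_to_gdp"] < 0.02:
        signals.append("migration off GSM accelerating (3G ARPU < 2% of GDP per capita)")
    if market["arpu3g_to_gdp"] < 0.01:
        signals.append("no essential affordability barrier left (< 1%)")
    if market["gsm_share"] < 0.05:
        signals.append("GSM share below 5%: consider subsidised migration")
    if market["gsm_subs_m"] < 1.0:
        signals.append("GSM base below 1 million: plan the switch-off")
    return signals

# Invented example markets, for illustration only.
markets = {
    "Market A": {"gsm_subs_m": 25.0, "gsm_share": 0.55, "arpu3g_to_gdp": 0.035},
    "Market B": {"gsm_subs_m": 3.0,  "gsm_share": 0.08, "arpu3g_to_gdp": 0.012},
    "Market C": {"gsm_subs_m": 0.6,  "gsm_share": 0.03, "arpu3g_to_gdp": 0.007},
}
for name, m in markets.items():
    print(name, "->", gsm_phaseout_signals(m) or ["no trigger hit yet"])
```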

Interestingly enough, extensive data analysis on more than 70 markets shows that the GSM phase-out dynamics appear to have little or no dependency on (a) the 2G ARPU level, (b) the 2G ARPU level relative to the 3G ARPU, and (c) handset pricing (although I should point out that I have not had a lot of data here to be firm in this conclusion; in particular, reliable data for grey-market handset pricing across the emerging markets is a challenge).

    One of the important trigger points for onset of accelerated GSM phase-out is the pricing level of the next technology (e.g., 3G) option relative to the GDP per capita.

The migration decision appears to have less to do with the legacy price of the old technology, or with the old technology's price relative to the new technology's pricing.

    gsm market share vs 3G arpu to gdp

The above chart illustrates an analysis made on 2012 actual data for 70+ markets across WEU, CEE, APAC, EMEA and LA (i.e., coinciding with the markets covered by Pyramid Research). It is very interesting to observe the dynamics as the markets develop into the future and the data moves towards the left, indicating more affordable 3G pricing (relative to GDP per capita) and increasingly faster GSM phase-out, as is evident from the chart below, which shows the same markets as above but fast-forwarded 5 years (i.e., 2017).

    5yrs add gsm market share vs 3G arpu to gdp

Firstly, the GSM ARPU level across most markets is below 2% of a given market's GDP per capita. There is no clear evidence in the available country data that the GSM ARPU development has had any effect on slowing down or accelerating GSM phase-out. This is most likely an indication that GSM has reached (or will shortly reach) a price level to which customers have become insensitive.

    gsm market share vs 2G arpu to gdp

Conceptually we can visualize the GSM phase-out dynamics in the following way: as 3G gets increasingly affordable (which may, or should, include the device cost, depending on taste), GSM phase-out accelerates (i.e., moving from right to left in the illustrative chart below). While the chart illustration below is more attuned to the emerging-market migration dynamics of GSM phase-out, it can of course, with minor adaptations, be used for other more balanced prepaid-postpaid markets.

We should keep in mind that unless the mobile operators' new-technology coverage (e.g., UMTS, LTE, ..) at the very least overlaps the GSM coverage, the migration from GSM to UMTS (or LTE) will eventually stop. In countries with a substantial rural population this can in particular become a stumbling block for an effective 100% migration, resulting in large areas and population shares that will remain under-served (i.e., only GSM available) and thus depend on an inefficient and ancient technology, without the macro-economic benefits (i.e., boost of rural GDP) that new and far more efficient cellular technologies could bring.

    share of gsm and 3G affordability

That's all fine … what a surprise that customers want better when it gets affordable (they likely wanted it even more when it was not affordable) … and that affordability is relative is hardly surprising either.

In order for an operator to form an informed opinion about when to switch off GSM, it would need to evaluate the remaining business opportunity, or residual GSM value, against the value of re-purposing the GSM spectrum to a better technology, i.e., one with a superior customer-experience potential and a substantially higher ARPU utilization.

Counting from 2014, the remaining life-time (aka terminal, aka residual) GSM revenue will be in the order of 850 Billion US$ … admittedly an apparently dramatic number … however, the residual GSM revenue is on average no more than 5% of total cellular turnover, and for many countries lower than that. In fact, 45 of the 73 markets studied will have a terminal GSM revenue share lower than 5%.

    terminal gsm revenue share histogram

The chart below provides an overview of the Residual GSM Revenues in Billions of US$ (on a logarithmic scale) and the percentage of Residual GSM value out of the total cellular turnover (linear scale) for 75 top markets spread across Western Europe, Central Eastern Europe, Asia Pacific, Middle East & Africa, and Latin America.

    gsm terminal revenue & share

    Do note that the GSM Terminal Revenue chart above might not be shown in its entirety (right side of the chart). Clicking on the Chart itself will display it in full.

It is quite clear from the above chart that, apart from a few outliers, GSM revenues are fading out rapidly across a majority of the mobile markets across the globe. Even if the residual GSM top-line might appear tempting, it obviously needs to be compared to the operating expenses of sustaining the legacy technology, as well as considering that a more modern technology would create higher efficiency (and possibly ARPU arbitrage) and therefore mitigate margin decline while sustaining more traffic and customers.

Emerging APAC MNO Example: an emerging market in APAC has 100 Million subscriptions and a ca. 70 Million unique cellular user base. One of the Mobile Network Operators (MNOs) in this market has approx. 33% market share (revenue share slightly larger). In 2012 its EBITDA margin was 42%. The technology cost share of overall Opex is 25%, and for the sake of simplicity the corresponding GSM cost share in 2012 is assumed to be 50% of the total Technology Opex. As the business evolves it is assumed that the GSM cost base grows slower than the non-GSM technology cost elements. This particular market has a residual GSM revenue potential of approx. 4 Billion US$, and the MNO under consideration has 1.3 Billion US$ of remaining GSM revenue potential.

Our analysis shows that the GSM business would start to break down (within the assumed economic framework or template) at around 5 Million GSM subscriptions, or 3.5 Million unique users. This would happen around 2019 (+/- 2 years, with a bias towards the earlier years) and thus leave the business with another 3 to 5 years of likely profitable GSM operation. See the chart below.
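A hedged sketch of the break-even logic is shown below. Only the 42% EBITDA margin, the 25% technology share of Opex and the 50% GSM share of technology Opex come from the example; the 2012 revenue level, the GSM revenue share and the decline rates are assumptions invented for illustration, so the resulting break-even year is indicative only.

```python
# Illustrative only: find the year in which the GSM contribution turns negative.
total_revenue_2012 = 1.5e9       # US$, assumed for illustration
ebitda_margin = 0.42             # from the example
tech_share_of_opex = 0.25        # from the example
gsm_share_of_tech_opex = 0.50    # from the example (2012)

gsm_revenue = 0.40 * total_revenue_2012        # assumed 2012 GSM revenue share
gsm_opex = ((1 - ebitda_margin) * total_revenue_2012
            * tech_share_of_opex * gsm_share_of_tech_opex)

gsm_revenue_decline = 0.25       # assumed: GSM top-line shrinks ~25% per year
gsm_opex_decline = 0.05          # assumed: the GSM cost base is far stickier

for year in range(2012, 2026):
    contribution = gsm_revenue - gsm_opex
    print(year, f"revenue {gsm_revenue/1e6:6.0f}M  opex {gsm_opex/1e6:5.0f}M  "
                f"contribution {contribution/1e6:6.0f}M")
    if contribution < 0:
        print("GSM contribution turns negative in", year, "- the phase-out decision point")
        break
    gsm_revenue *= 1 - gsm_revenue_decline
    gsm_opex *= 1 - gsm_opex_decline
```

With these assumed decline rates the contribution turns negative around the end of the decade, which is in line with the qualitative conclusion of the example; with real operator inputs the same loop would give the actual decision point.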

    mno gsm phase-out example

This illustration shows (not surprisingly) that there is a point where, even if the phasing-out GSM still turns over revenue, from an economic perspective it makes no sense for a single mobile operator to keep its GSM network alive for a diminishing customer base and an even faster evaporating top-line.

In the example above it is clear that the MNO should start planning for the inevitable: the demise of GSM. Having a clear GSM phase-out strategy as soon as possible and targeting GSM termination no later than 2018 to 2019 just makes pretty good sense. Looking at the risks to the dynamics of the market development in this particular market, there is a higher likelihood of the no-profit point being reached earlier rather than later.

Would it make sense to start up a new GSM business in the market above? Given the 3 to 5 years that the existing mobile operators have left to retire GSM before it becomes unprofitable, it hardly makes much sense for a Greenfield operator to get started on the GSM idea (there seem to be better ways of spending cash).

    However, if that Greenfield operator could become The GSM Operator for all existing MNO players in the market, allowing those legacy MNOs to re-purpose their existing GSM spectrum (and possibly with a retroactive wholesale deal), then maybe in the short term it might make a little sense. However, it quite frankly would be like peeing in your trousers on a cold winter day: it will be warm for a short while but then it really gets cold (as my Grandmother used to say).

    What GSM strategies really make sense in its autumn days?

    Quite clearly, GSM Network Sharing would make a lot of sense economically and operationally, as it would allow re-purposing of legacy spectrum to more modern and substantially more efficient cellular technologies.

    The single Shared-GSM network would act as a bridge for legacy GSM M2M devices, extreme laggards and problematic coverage areas that might not be economical to replace in the short to medium term. Mobile operators could thus solve possible long-term contractual obligations to businesses and consumers with fringe devices connecting via GSM (i.e., metering, alarms, etc.). The single Shared-GSM network might very well survive for a considerable time past any economic justification for its existence, merely serving legal and political interests. Thanks to Stein Erik Paulsen, who pointed out this GSM phase-out problem.

    I am not (too) hung up on the general Capex & Opex benefits of Network Sharing in this context (yet another story for another day). The compelling logical step of having 1 (ONE) GSM network across a given market, shared by all operators, supporting the phase-out of GSM while allowing re-purposing of legacy GSM spectrum for UMTS/HSPA and eventually LTE(+), is almost screamingly obvious. This would furthermore feed a faster migration pace and phase-out, as legacy spectrum would be available for re-purposing and customer migration.

    Of course Regulatory authorities would need to endorse such a scenario, as it would de facto result in something resembling a monopolistic GSM operator, albeit one serving all players in a given market.

    The Regulatory Authority should obviously be very interested in this strategy, as it would ensure substantially better utilization of scarce spectral resources. Furthermore, the market would not only gain in spectral efficiency but would also win the macro-economic boost from connecting unconnected and under-served population groups to mobile data networks and, by that, the internet.

    ACKNOWLEDGEMENT

    I have made extensive use of historical and actual data from Pyramid Research country databases. Wherever possible this data has been cross-checked with other sources. In my opinion Pyramid Research has some of the best and most detailed mobile technology projections, which would satisfy even the most data-savvy analysts. The very extensive data analysis on Pyramid Research data sets is my own, and any shortfalls in the analysis should clearly be attributed only to me.

    SMS – Assimilation is inevitable, Resistance is Futile!

    The Short Message Service, or SMS for short, one of the cornerstones of mobile services, turned 20 years old in 2012.

    Talk about “Live Fast, Die Young” and the chances are that you are talking about SMS!

    The demise of SMS has already been heralded … Mobile operators rightfully are shedding tears over the (taken-for-granted?) decline of the most profitable 140 Bytes there ever was and possibly ever will be.

    Before we completely kill off SMS, let’s have a brief look at

    SMS2012

    The average SMS user (across the world) consumed 136 SMS (ca. 19 kByte) per month and paid 4.6 US$-cent per SMS and 2.6 US$ per month. Of course this is a worldwide average and should not be over-interpreted. For example, in the Philippines an average SMS user consumes 650+ SMS per month and pays 0.258 US$-cent per SMS, or 1.17 $ per month. At the other extreme end of the SMS usage distribution we find Cameroon, with 4.6 SMS per month at 8.19 US$-cent per SMS.

    We have all seen the headlines throughout 2012 (and better part of 2011) of SMS Dying, SMS Disaster, SMS usage dropping and revenues being annihilated by OTT applications offering messaging for free, etcetcetc… & blablabla … “Mobile Operators almost clueless and definitely blameless of the SMS challenges” … Right? … hmmmm maybe not so fast!

    All major market regions (i.e., WEU, CEE, NA, MEA, APAC, LA) have experienced a substantial slowdown of SMS revenues in 2011 and 2012. This trend is expected to continue and accelerate with mobile operators' push for mobile broadband. Last but not least, SMS volumes have slowed down as well (though less severely than the revenue slowdown) as the signalling-based short messaging service assimilates to IP-based messaging via mobile applications.

    Irrespective of all the drama! SMS phase-out is obvious (and has been for many years) … with the introduction of LTE, SMS will be retired.

    Resistance is (as the Borg would say) Futile!

    It should be clear that the phase-out of SMS does Absolutely Not mean that messaging is dead or in decline. Far, far from it!

    Messaging is Stronger than Ever and just got so many more communication channels beyond the signalling network of our legacy 2G & 3G networks.

    It is however important to understand how long the assimilation of SMS will take and what drivers impact the speed of the SMS assimilation. From an operator's strategic perspective, such considerations provide insights into how quickly it will need to replace legacy SMS Revenues with proportional Data Revenues, or suffer increasingly on both the Top and Bottom line.

    SMS2012 AND ITS GROWTH DYNAMICS

    So let's just have a look at the numbers (with the cautionary note that some care needs to be taken with exchange rate effects between the US Dollar and Local Currencies across the various markets being wrapped up in a regional and a world view. Further, due to the structure of bundling propositions, product-based revenues such as SMS Revenues can be, and often are, somewhat uncertain depending on the sophistication of a given market):

    2012 is expected worldwide to deliver more than 100 billion US Dollars in SMS revenues on more than 7 trillion revenue-generating SMS.

    The 100 Billion US Dollars is ca. 10% of total worldwide mobile turnover. This is not much different from the 3 years prior and 1+ percentage-point up compared to 2008. Data revenues excluding SMS are expected in 2012 to exceed 350 Billion US Dollars, or 3.5 times SMS Revenues, or 30+% of total worldwide mobile turnover (5 years ago this was 20% and ca. 2+ times SMS Revenues).

    SMS growth has slowed down over the last 5 years. Over the last 5 years, worldwide SMS revenue CAGR was ca. 7%. Between 2011 and 2012 SMS revenue growth is expected to be no more than 3%. Western Europe and Central Eastern Europe are both expected to generate lower SMS revenues in 2012 than in 2011. SMS Volume grew by more than 20% per annum over the last 5 years, but the SMS volume generated in 2012 is not expected to be more than 10% higher than in 2011.
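
    As a reminder of the arithmetic behind such growth figures, the compound annual growth rate (CAGR) is simply the constant yearly rate that takes you from a start value to an end value over n years. A minimal Python sketch, with an illustrative (assumed) 2007 revenue figure rather than the actual Pyramid Research number:

        def cagr(start_value, end_value, years):
            """Compound annual growth rate over a number of years."""
            return (end_value / start_value) ** (1.0 / years) - 1.0

        # Illustrative only: ~71B$ of SMS revenue 5 years ago growing to ~100B$ in 2012
        print(f"CAGR: {cagr(71e9, 100e9, 5):.1%}")   # ~7% per annum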

    For the ones who like to compare SMS to Data Consumption (and please save us from ludicrous claims of the benefits of satellites and other ideas out of too many visits to Dutch Coffee shops):

    2012 SMS Volume corresponds to 2.7 Terabytes of daily data (not a lot! Really, it is not!)

    Don’t be terribly excited about this number! It is like Nano-Dust compared to the total mobile data volume generated worldwide.

    The monthly Byte equivalent of SMS consumption is no more than 20 kiloBytes per individual mobile user in Western Europe.
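
    The Byte arithmetic behind these statements is trivial and worth making explicit. A quick sketch (the 7 trillion SMS and 136 SMS per user per month figures are the ones quoted above):

        SMS_BYTES = 140                      # payload of a single SMS

        annual_sms = 7e12                    # ~7 trillion revenue-generating SMS in 2012
        daily_bytes = annual_sms * SMS_BYTES / 365
        print(f"Daily SMS volume: {daily_bytes / 1e12:.1f} TB")              # ~2.7 TB per day

        monthly_sms_per_user = 136           # worldwide average per SMS user
        monthly_bytes = monthly_sms_per_user * SMS_BYTES
        print(f"Monthly SMS volume per user: {monthly_bytes / 1e3:.0f} kB")  # ~19 kB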

    Let us have a look at how this distributes across the world broken down in Western Europe (WEU), Central Eastern Europe (CEE), North America (NA), Asia Pacific (APAC), Latin America (LA) and Middle East & Africa (MEA):

    [Figures: SMS revenues 2012; SMS volumes 2012]

    From the above chart we see that

    Western Europe takes almost 30% of total worldwide SMS revenues but its share of total SMS generated is less than 10%.

    This to some extent also explains why Western Europe might be more exposed to SMS phase-out than some other markets. We have already seen evidence of Western Europe's sensitivity to SMS revenues back in 2011, a trend that will spread to many more markets in 2012 and lead to an overall negative SMS revenue story for Western Europe in 2012. We will see that within some of the other regions there are countries that are substantially more exposed to SMS phase-out than others in terms of SMS share of total mobile turnover.

    [Figures: SMS pricing; SMS per individual]

    In Western Europe a consumer would pay more than 7 times the price of an SMS compared to a consumer in North America (i.e., Canada or USA). It is quite clear that Western Europe has been very successful in charging for SMS compared to any other market in the World. And consumers have gladly paid the price (well, I assume so ;-).

    SMS Revenues are proportionally much more important in Western Europe than in other regions (maybe with the exception of Latin America).

    In 2012 17% of Total Western Europe Mobile Turnover is expected to come from SMS Revenues (was ca. 13% in 2008).

    WHAT DRIVES SMS GROWTH?

    It is interesting to ask what drives SMS behaviour across various markets and countries.

    Prior to reasonably good-quality 3G networks and, as importantly, prior to the emergence of the Smartphone, the SMS usage dynamics between different markets could easily be explained by relatively few drivers, such as:

    (1) Price decline year on year (the higher the decline, the faster SMS per user grows, though the rate and impact will depend on Smartphone penetration & 3G quality of coverage).

    (2) Price of an SMS relative to the price of a Minute (the lower it is, the more SMS per User; in many countries there is a clear arbitrage in sending an SMS versus making a call, which on average lasts between 60 and 120 seconds).

    (3) Prepaid to Contract ratios (higher prepaid ratios tend to result in fewer SMS, though this relationship is not per se very strong).

    (4) SMS ARPU to GDP (or average income if available) (the lower the ratio, the higher the usage tends to be).

    (5) 2G penetration/adoption, and

    (6) Literacy ratios (particularly important in emerging markets; the lower the literacy rate, the lower the amount of SMS per user tends to be).

    Finer, more detailed models can be built with many more parameters. However, the 6 given here will provide a very decent worldview of SMS dynamics (i.e., amount and growth) across countries and cultures. So for mature markets we really talk about a time before 2009 – 2010, when Smartphone penetration started to approach or exceed 20% – 30% (beyond which the model becomes a bit more complex).

    In markets where Smartphone penetration is beyond 30% and 3G networks have reached a certain coverage quality level, the models describing SMS usage and growth change to include Smartphone Penetration and, to a lesser degree, 3G Uptake (note that Smartphone penetration and 3G uptake are not independent parameters, and as such one or the other often suffices from a modelling perspective).

    Looking at SMS usage and growth dynamics after 2008, I have found high-quality statistical and descriptive models for SMS growth using the following parameters:

    (a) SMS Price Decline.

    (b) SMS price to MoU Price.

    (c) Prepaid percentage.

    (d) Smartphone penetration (Smartphone penetration has a negative impact on SMS growth and usage – unsurprisingly!)

    (e) SMS ARPU to GDP

    (f) 3G penetration/uptake (higher 3G penetration combined with very good coverage has a negative impact on SMS growth and usage, though it is less important than Smartphone penetration).

    It should be noted that each of these parameters varies with time and, therefore, when extracting them from a comprehensive dataset, the time variation should be considered in order to produce a high-quality descriptive model of SMS usage and growth.
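
    To illustrate the modelling approach (not the actual model or dataset), here is a minimal sketch of fitting SMS growth against drivers (a) to (f) with ordinary least squares on a synthetic panel; in practice each driver is time-varying and the panel covers many markets and years:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200  # synthetic market-year observations (illustrative only)

        # Hypothetical drivers, roughly scaled like the real ones
        price_decline   = rng.uniform(0.0, 0.3, n)   # (a) YoY SMS price decline
        sms_to_mou      = rng.uniform(0.2, 2.0, n)   # (b) SMS price vs minute price
        prepaid_share   = rng.uniform(0.2, 1.0, n)   # (c) prepaid percentage
        smartphone_pen  = rng.uniform(0.0, 0.6, n)   # (d) smartphone penetration
        sms_arpu_to_gdp = rng.uniform(0.0, 0.05, n)  # (e) SMS ARPU to GDP
        threeg_pen      = rng.uniform(0.0, 0.7, n)   # (f) 3G uptake

        # Synthetic "observed" SMS growth with signs as discussed above, plus noise
        sms_growth = (0.5 * price_decline - 0.10 * sms_to_mou - 0.05 * prepaid_share
                      - 0.40 * smartphone_pen - 2.0 * sms_arpu_to_gdp
                      - 0.15 * threeg_pen + rng.normal(0, 0.02, n))

        X = np.column_stack([np.ones(n), price_decline, sms_to_mou, prepaid_share,
                             smartphone_pen, sms_arpu_to_gdp, threeg_pen])
        coef, *_ = np.linalg.lstsq(X, sms_growth, rcond=None)
        print(dict(zip(["const", "(a)", "(b)", "(c)", "(d)", "(e)", "(f)"], coef.round(3))))

    The signs of the recovered coefficients (positive for price decline, negative for Smartphone and 3G penetration) are what the descriptive models above show on real data; the synthetic example only demonstrates the mechanics.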

    If a Market and its Mobile Operators would like to protect their SMS revenues or at least slow down the assimilation of SMS, the mobile operators clearly need to understand whether pushing Smartphones and Mobile Data can make up for the decline in SMS revenues that is bound to happen with the hard push of mobile broadband devices and services.

    EXPOSURE TO LOSS OF SMS REVENUE – A MARKET BY MARKET VIEW!

    As we have already seen and discussed, it is not surprising that SMS is declining or stagnating, at least in its present form and business model. Mobile Broadband, the Smartphone and its many applications have created a multi-verse of alternatives to the SMS. Where in the past SMS was a clear convenience and often a much cheaper alternative to an equivalent voice call, today SMS has become inconvenient and not per se a cost-efficient alternative to Voice, and certainly not when compared with IP-based messaging via a given data plan.

    [Figure: Exposure to SMS decline]

    74 countries (or markets) have been analysed for their exposure to SMS decline in terms of the share of SMS Revenues out of the Total Mobile Turnover. 4 categories have been identified: (1) Very high risk, >20%; (2) High risk, 10% – 20%; (3) Medium risk, 5% – 10%; and (4) Lower risk, when SMS Revenues are below 5% of total mobile turnover.
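
    A minimal sketch of that categorization logic (the market names and shares below are made up purely for illustration):

        def sms_exposure(sms_revenue_share):
            """Bucket a market by its SMS share of total mobile turnover."""
            if sms_revenue_share > 0.20:
                return "Very high risk"
            elif sms_revenue_share > 0.10:
                return "High risk"
            elif sms_revenue_share > 0.05:
                return "Medium risk"
            return "Lower risk"

        # Hypothetical markets, for illustration only
        for market, share in {"Market A": 0.32, "Market B": 0.17,
                              "Market C": 0.07, "Market D": 0.03}.items():
            print(market, f"{share:.0%}", sms_exposure(share))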

    As Mobile operators push hard for mobile broadband and inevitably drive a rapid increase in Smartphone penetration, SMS will decline. In the LTE “end-game”, SMS will have been altogether phased out.

    Based on 2012 expectations, let's look at the risk exposure that SMS phase-out brings in a market-by-market outlook:

    We see from the above analysis that 9 markets (out of a total of 74 analyzed), with the Philippines taking pole position, have what could be characterized as a very high exposure to SMS Decline. The UK market, with more than 30% of revenues tied up in SMS, has aggressively pushed for mobile broadband and LTE. It will be very interesting to follow how UK operators will mitigate the exposure to SMS decline as LTE penetrates the market. We will see whether LTE (and other mobile broadband propositions) can make up for the SMS decline.

    More than 40 markets have an SMS revenue dependency of more than 10% of total mobile turnover and thus do have a substantial exposure to SMS decline that needs to be mitigated by changes to the messaging business model.

    Mobile operators around the world still need to crack this SMS assimilation challenge … a good starting point would be to stop blaming OTT for all the evils and instead either manage their mobile broadband push and/or start changing their SMS business model to an IP-messaging business model.

    IS THERE A MARGIN EXPOSURE BEYOND LOSS OF SMS REVENUES?

    There is no doubt that SMS is a high-margin service, if not the highest, for The Mobile Industry.

    A small detour into the price of SMS and the comparison with the price of mobile data!

    The Basic: an SMS is 140 Bytes and max 160 characters.

    On average (worldwide) an SMS user pays (i.e., in 2012) ca. 4.615 US$-cent per short message.

    A Mega-Byte of data is equivalent to 7,490 SMSs which would have a “value” of ca. 345 US Dollars.

    Expensive?

    Yes! It would be, if that was the price a user would pay for mobile broadband data (particularly for average Smartphone consumption of 100 Mega Bytes per month) …

    However, remember that an average user (worldwide) consumes no more than 20 kilo Byte per Month.

    One Mega-Byte of SMS would supposedly last for more than 50 months, or more than 4 years.
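
    The arithmetic behind the comparison, spelled out with the figures quoted above:

        PRICE_PER_SMS = 0.04615        # US$ per SMS, 2012 worldwide average
        SMS_BYTES = 140

        sms_per_mb = (1024 * 1024) / SMS_BYTES          # ~7,490 SMS in one MegaByte
        value_per_mb = sms_per_mb * PRICE_PER_SMS       # ~345 US$ per "SMS MegaByte"

        monthly_bytes_per_user = 20 * 1000              # ~20 kB of SMS per user per month
        months_per_mb = (1024 * 1024) / monthly_bytes_per_user   # ~52 months, i.e., 4+ years

        print(f"{sms_per_mb:,.0f} SMS per MB, worth ca. {value_per_mb:,.0f} US$")
        print(f"One MB of SMS lasts ca. {months_per_mb:.0f} months")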

    This is just to illustrate the silliness of getting into SMS value comparison with mobile data.

    A Byte is not just a Byte; it depends on what that Byte carries!

    It is quite clear that an SMS-equivalent IP-based message does not pose much of a challenge to a mobile broadband network, be it HSPA-based or LTE-based. To some extent, IP-based messaging (as long as it is equivalent to 140 Bytes) should be deliverable at a similar or better margin than in a legacy 2G mobile network.

    Thus, in my opinion a 140 Byte message should not cost more to deliver in an LTE or HSPA based network. In fact due to better spectral efficiency and at equivalent service levels, the cost of delivering 140 Bytes in LTE or HSPA should be a lot less than in GSM (or CS-3G).

    However, if mobile operators are not able to adapt their messaging business models to recover the SMS revenues at risk of being lost to the assimilation process of pushing mobile data (which, given the margin argument above, might not need to be a dollar-for-dollar recovery), then substantial margin decline will be experienced.

    Operators in the danger zone of SMS revenue exposure, and thus with the SMS revenue share exceeding 10% of the total mobile turnover, should urgently start strategizing on how they can control the SMS assimilation process without substantial financial loss to their operations.

    ACKNOWLEDGEMENT

    I have made extensive use of historical and actual data from Pyramid Research country databases. Wherever possible this data has been cross-checked with other sources. Pyramid Research has some of the best and most detailed mobile technology projections, which would satisfy most data-savvy analysts. The very extensive data analysis on Pyramid Research data sets is my own, and any shortfalls in the analysis should clearly be attributed only to me.

    The Economics of the Thousand Times Challenge: Spectrum, Efficiency and Small Cells

    By now the biggest challenge of the “1,000x challenge” is to read yet another story about the “1,000x challenge”.

    This said, Qualcomm has made many beautiful presentations on The Challenge. It leaves the reader with an impression that it is much less of a real challenge, as there is a solution for everything and then some.

    So bear with me while we take a look at the Economics, and in particular the Economic Boundaries, around the Thousand Times “Challenge” of providing (1) More spectrum, (2) Better efficiency and, last but not least, (3) Many more Small Cells.

    THE MISSING LINK

    While (almost) every technical challenge is solvable by clever engineering (i.e., something Qualcomm obviously has in abundance), it does not follow naturally that such solutions are also feasible within the framework imposed by real-world economics. At the very least, any technical solution should also be reasonable within the world of economics (and of course within a practical time-frame), or it becomes a clever solution but irrelevant to a real-world business.

    A Business will (maybe “should” is more in line with reality) care about customer happiness. However, a business needs to do that within healthy financial boundaries of margin, cash and shareholder value. Not only should the customer be happy, but the happiness should extend to investors and shareholders that have trusted the Business with their livelihood.

    While technically, and almost mathematically, it follows that massive network densification would be required in the next 10 years IF WE KEEP FEEDING CUSTOMER DEMAND, it might not be very economical to do so, or at the very least such densification only makes sense within a reasonable financial envelope.

    It is obvious that massive network densification by means of macro-cellular expansion is unrealistic, impractical as well as uneconomical. Thus Small Cell concepts, including WiFi, have been brought to the Telecoms Scene as an alternative and credible solution. While Small Cells are much more practical, the question of whether they sufficiently address the economic boundaries the Telecommunications Industry is facing remains pretty much unanswered.

    PREAMBLE

    The Thousand Times Challenge, as it has been PR’ed by Qualcomm, states that the cellular capacity required in 2020 will be at least 1,000 times that of “today”. Actually, the 1,000 times challenge is referenced to the cellular demand & supply in 2010, so doing the math

    the 1,000x might “only” be a 100 times challenge between now and 2020 in the world of Qualcomm and the like. Not that it matters! … We still talk about the same demand, just referenced to a later (and maybe less “sexy”) year.

    In my previous Blogs, I have accounted for the dubious affair (and nonsensical discussion) of over-emphasizing cellular data growth rates (see “The Thousand Times Challenge: The answer to everything about mobile data”) as well as the much more intelligent discussion about how the Mobile Industry provides for more cellular data capacity starting with the existing mobile networks (see “The Thousand Time Challenge: How to provide cellular data capacity?”).

    As it turns out, Cellular Network Capacity C can be described by 3 major components: (1) available bandwidth B, (2) (effective) spectral efficiency E and (3) number of cells deployed N.

    The SUPPLIED NETWORK CAPACITY in Mbps (i.e., C) is equal to the AMOUNT OF SPECTRUM, i.e., available bandwidth, in MHz (i.e., B) multiplied by the SPECTRAL EFFICIENCY PER CELL in Mbps/MHz (i.e., E) multiplied by the NUMBER OF CELLS (i.e., N); in short, C = B x E x N. For more details on how and when to apply the Cellular Network Capacity Equation, read my previous Blog on “How to provide Cellular Data Capacity?”.

    SK Telekom (SK Telekom’s presentation at the 3GPP workshop on “Future Radio in 3GPP” is worth a careful study), Mallinson (@WiseHarbor) and Qualcomm (@Qualcomm_tech, and many others as of late) have used the above capacity equation to impose a Target amount of cellular network capacity a mobile network should be able to supply by 2020. Realistic or not, this target comes to 1,000 times the supplied capacity level of 2010 (i.e., I assume that 2010 – 2020 sounds nicer than 2012 – 2022 … although the latter would have been a lot more logical to aim for if one really wanted to look at 10 years … of course that might not give 1,000 times, which might ruin the marketing message?).

    So we have the following 2020 Cellular Network Capacity Challenge:

    Thus a cellular network in 2020 should have 3 times more spectral bandwidth B available (that’s fairly easy!), 6 times higher spectral efficiency E (so so … but not impossible, particularly compared with 2010) and 56 times higher cell site density N (this one might be a “real killer challenge” in more than one way), compared to 2010!

    Personally I would not get too hung up on whether it's 3 x 6 x 56 or 6 x 3 x 56 or some other set of multipliers resulting in a 1,000 times gain (though some combinations might be a lot more feasible than others!)
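
    The capacity equation and the 1,000x factorization are simple enough to write down directly. A small sketch using the 3 x 6 x 56 split quoted above; the 2010 reference figures in the example (20 MHz, 1 Mbps/MHz/cell, 10,000 cells) are purely illustrative assumptions:

        def network_capacity(bandwidth_mhz, efficiency_mbps_per_mhz, cells):
            """Supplied network capacity C = B x E x N (in Mbps)."""
            return bandwidth_mhz * efficiency_mbps_per_mhz * cells

        gain_spectrum   = 3    # ~3x more bandwidth B by 2020 vs 2010
        gain_efficiency = 6    # ~6x better spectral efficiency E
        gain_density    = 56   # ~56x higher cell site density N

        print(f"Total gain: {gain_spectrum * gain_efficiency * gain_density}x")  # 1,008x ~ "1,000x"

        # Illustrative 2010 baseline vs the 2020 target, applying the three gains
        c_2010 = network_capacity(20, 1.0, 10_000)
        c_2020 = network_capacity(20 * gain_spectrum, 1.0 * gain_efficiency, 10_000 * gain_density)
        print(f"Capacity ratio: {c_2020 / c_2010:.0f}x")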

    Obviously we do NOT need a lot of insights to see that the 1,000x challenge is a

    Rally call for Small & then Smaller Cell Deployment!

    Also, we do not need to be particularly visionary (or have visited a Dutch Coffee Shop) to predict that by 2020 (aka The Future), compared to today (i.e., October 2012):

    Data demand from mobile devices will be a lot higher in 2020!

    Cellular Networks have to (and will!) supply a lot more data capacity in 2020!

    Footnote: the observant reader will have seen that I am not making the claim that there will be hugely more data traffic on the cellular network in comparison to today. The WiFi path might (and most likely will) take a lot of the traffic growth away from the cellular network.

    BUT

    how economical will this journey be for the Mobile Network Operator?

    THE ECONOMICS OF THE THOUSAND TIMES CHALLENGE

    Mobile Network Operators (MNOs) will not have the luxury of getting the Cellular Data Supply and Demand Equation Wrong.

    The MNO will need to balance network investments with pricing strategies, churn & customer experience management as well as overall profitability and corporate financial well being:

    Growth, if not managed, will lead to a capacity & cash crunch and destruction of shareholder value!

    So for the Thousand Times Challenge, we need to look at the Total Cost of Ownership (TCO) or Total Investment required to get to a cellular network with 1,000 times more network capacity than today. We need to look at:

    Investment I(B) in additional bandwidth B, which would include (a) the price of spectral re-farming (i.e., re-purposing legacy spectrum to a new and more efficient technology), (b) technology migration (e.g., moving customers off 2G and onto 3G or LTE or both) and (c) possible acquisition of new spectrum (i.e., via auction, beauty contests, or M&As).

    Improving a cellular network's spectral efficiency, I(E), is also likely to result in additional investments. In order to get an improved effective spectral efficiency, an operator would be required to (a) modernize its infrastructure, (b) invest in better antenna technologies, and (c) ensure that customer migration from older, spectrally inefficient technologies to more spectrally efficient technologies occurs at an appropriate pace.

    Last but NOT Least the investment in cell density I(N):

    Needing 56 times additional cell density is most likely NOT going to be FREE,

    even with clever small cell deployment strategies.

    Though I am pretty sure that some out there in the Operator space will make a very positive business case (note: the choice between Pest & Cholera might come out in favor of Cholera … though we would rather avoid both of them) comparing a macro-cellular expansion to Small Cell deployment, avoiding massive churn in case of outrageous cell congestion, rather than focusing on managing growth before such an event would occur.

    The Real “1,000x” Challenge will be Economic in nature and will relate to the following considerations:

    [Figure: TCO 2020 considerations]

    In other words:

    Mobile Networks required to supply a 1,000 times present day cellular capacity are also required to provide that capacity gain at substantially less ABSOLUTE Total Cost of Ownership.

    I emphasize the ABSOLUTE aspects of the Total Cost of Ownership (TCO), as I have too many times seen our Mobile Industry present financial benefits in relative terms (i.e., relative to a given quality improvement) and then fail to mention that in absolute terms the industry will incur increased Opex (compared to the pre-improvement situation). The result is a margin decline (unless proportional revenue is gained … and how likely is that?) as well as a negative cash impact due to the increased investments needed to gain the improvements (again assuming that a proportional revenue gain remains wishful thinking).

    Never Trust relative financial improvements! Absolutes don’t Lie!

    THE ECONOMICS OF SPECTRUM.

    Spectrum economics can be captured by three major themes: (A) ACQUISITION, (B) RETENTION and (C) PERFECTION. These 3 major themes should be well considered in any credible business plan: Short, Medium and Long-term.

    It is fairly clear that there will not be a lot of new lower-frequency (defined here as <2.5GHz) spectrum available in the next 10+ years (unless we get a real breakthrough in white-space). The biggest relative increase in cellular bandwidth dedicated to mobile data services will come from re-purposing (i.e., perfecting) existing legacy spectrum (i.e., by re-farming). Some new bandwidth may be acquired in the low-frequency range (<800MHz), which by definition will not be a lot of bandwidth and will take time to become available. There are opportunities in the very high frequency range (>3GHz), which contains a lot of bandwidth; however, this is only interesting for Small Cell and Femto-Cell-like deployments (feeding frenzy for small cells!).

    As many European Countries re-auction existing legacy spectrum after the set expiration period (typically 10 – 15 years), it is paramount for a mobile operator to retain as much as possible of its existing legacy spectrum. Not only is current traffic tied up in the legacy bands, but future growth of mobile data will critically depend on their availability. Retention of the existing spectrum position should be a very important element of an Operator's business plan and strategy.

    Most real-world mobile network operators that I have looked at can expect, by acquisition & perfection, to gain between 3 and 8 times the spectral bandwidth for cellular data compared to today’s situation.

    For example, a typical Western European MNO has:

    1. Max. 2x10MHz @ 900MHz, primarily used for GSM. Though some operators have UMTS 900 in operation or plan to re-farm to UMTS pending regulatory approval.
    2. 2×20 MHz @ 1800MHz, though here the variation in the MNO spectrum landscape tends to be fairly large, i.e., between 2x30MHz down to 2x5MHz. Today this is exclusively in use for GSM. This is going to be a key LTE band in Europe and is already supported in the iPhone 5 for LTE.
    3. 2×10 – 15 MHz @ 2100MHz is the main 3G-band (UMTS/HSPA+) in Europe and is expected to remain so for at least the next 10 years.
    4. 2×10 MHz @ 800 MHz per operator, typically distributed across 3 operators and dedicated to LTE. In countries with more than 3 operators, some MNOs will typically have no position in this band.
    5. 40 MHz @ 2.6 GHz per operator and dedicated to LTE (FDD and/or TDD). From a coverage perspective this spectrum would in general be earmarked for capacity enhancements rather than coverage.

    Note that most European mobile operators did not have 800MHz and/or 2.6GHz in their spectrum portfolios prior to 2011. The above list has been visualized in the Figure below (though only for FDD and showing the single side of the frequency duplex).

    [Figure: Spectrum details]

    The 700MHz band will eventually become available in Europe for LTE-Advanced (it is already in use for LTE in the USA via AT&T and Verizon), though the time frame for 700MHz cellular deployment in Europe is still expected to take maybe up to 8 years (or more) before the band is fully cleared and perfected.

    Today (as of 2012) a typical European MNO would have approximately (a) 60 MHz (i.e., DL+UL) for GSM, (b) 20 – 30 MHz for UMTS and (c) between 40MHz and 60MHz for LTE (note that in 2010 this would have been 0MHz for most operators!). By 2020 it would be fair to assume that the same MNO could have (d) 40 – 50 MHz for UMTS/HSPA+ and (e) 80MHz – 100MHz for LTE. Of course it is likely that mobile operators would still have a thin GSM layer to support roaming traffic and extreme laggards (this is however likely to be a shared resource among several operators). If by 2020 10MHz to 20MHz were required to support voice capacity, then the MNO would have at least 100MHz and up to 130MHz for data.

    Note: if we fast-backward to 2010, assume that no 2.6GHz or 800MHz auction had happened and that only 2×10 – 15 MHz @ 2.1GHz provided for cellular data capacity, then we easily get a factor 3 to 5 boost in spectral capacity for data over the period. This is just to illustrate the meaninglessness of relativizing the challenge of providing network capacity.
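
    A quick sketch of the spectrum arithmetic, comparing the 2010 baseline (2.1GHz only) with the 2020 outlook described above; since the holdings are quoted as ranges, the result is a range rather than a single number, but it lands in the same ballpark as the factors quoted above:

        # Typical Western European MNO, total DL+UL spectrum usable for cellular data (MHz)
        data_spectrum_2010 = (20, 30)     # only 2x10-15 MHz @ 2.1GHz carried data
        data_spectrum_2020 = (100, 130)   # UMTS/HSPA+ plus LTE, after re-farming & acquisitions

        low  = data_spectrum_2020[0] / data_spectrum_2010[1]   # conservative: 100/30
        high = data_spectrum_2020[1] / data_spectrum_2010[0]   # optimistic: 130/20
        mid  = (sum(data_spectrum_2020) / 2) / (sum(data_spectrum_2010) / 2)

        print(f"Data spectrum boost 2010 -> 2020: {low:.1f}x to {high:.1f}x (mid ~{mid:.1f}x)")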

    So what are the economic aspects of spectrum? Well, show me the money!

    Spectrum:

    1. needs to be Acquired (including re-acquired = Retention) via (a) Auction, (b) Beauty contest or (c) Private transaction if allowed by the regulatory authorities (i.e., spectrum trading); usually spectrum (in Europe at least) comes as a time-limited right-to-use (e.g., 10 – 15 years) => Capital investments to (re)purchase spectrum.
    2. might need to be Perfected & Re-farmed to another, more spectrally efficient technology => new infrastructure investments & customer migration cost (incl. acquisition, retention & churn).
    3. might come with new deployment, coverage & service obligations => new capital investments and associated operational cost.
    4. demand could result in joint ventures or mergers to acquire sufficient spectrum for growth.
    5. often has a recurring usage fee associated with its deployment => Operational expense burden.

    The first 3 bullet points can be attributed mainly to Capital expenditures, and point 5 would typically be an Operational expense. As we have seen in the US with the failed AT&T – T-Mobile US merger, bullet point 4 can result in a very high cost of spectrum acquisition, though usually a merger brings with it many beneficial synergies, other than spectrum, that justify it.

    [Figure: Spectrum cost]

    The above Figure provides a historical view of spectrum pricing in US$ per MHz-pop. As we can see, not all spectrum has been born equal and, depending on the timing of acquisition, a premium might have been paid for some spectrum (e.g., the Western European UMTS hyper-pricing of 2000 – 2001).

    Some general spectrum acquisition heuristics can be derived by above historical overview (see my presentation “Techno-Economical Aspects of Mobile Broadband from 800MHz to 2.6GHz” on @slideshare for more in depth analysis).

    [Figure: Spectrum heuristics]

    Most of the operator cost associated with Spectrum Acquisition, Spectrum Retention and Spectrum Perfection should be more or less included in a Mobile Network Operator's Business Plan. Though the demand for more spectrum can be accelerated (1) in highly competitive markets, (2) in spectrum-starved operations, and/or (3) if customer demand is being poorly managed within the spectral resources available to the MNO.

    WiFi, or in general any open radio-access technology operating in ISM bands (i.e., freely available frequency bands such as 2.4GHz and 5.8GHz), can mitigate the need for costly controlled-spectrum resources by stimulating higher usage of such open technologies and open bands.

    The cash prevention or cash optimization from open-access technologies and frequency bands should not be underestimated or forgotten. Even if such open-access deployment models do not make standalone economic sense, they are likely to make good sense as an integral part of the Next Generation Mobile Data Network, perfecting & optimizing open & controlled radio-access technologies.

    The Economics of Spectrum Acquisition, Spectrum Retention & Spectrum Perfection are of such tremendous benefit that they should be on any Operator's business plan: short, medium and long-term.

    THE ECONOMICS OF SPECTRAL EFFICIENCY

    The relative gain in spectral efficiency (as well as other radio performance metrics) with new 3GPP releases has been amazing between R99 and recent HSDPA releases. Lots of progress has been booked on account of increased receiver and antenna sophistication.

    [Figure: Spectral efficiency gain per technology]

    If we compare HSDPA 3.6Mbps (see above Figure) with the first Release of LTE, the spectral efficiency has been improved by a factor of 4. Combined with more available bandwidth for LTE, this provides an even larger relative boost of supplied bandwidth for increased capacity and customer quality. Do note that the above relative representation of spectral efficiency gain largely takes away the usual (almost religious) discussions of what the right spectral efficiency is and at what load. The effective (whatever that may be in your network) spectral efficiency gain moving from one radio-access release or generation to the next would be represented by the above Figure.

    Theoretically this is all great! However,

    Having the radio-access infrastructure support the most spectrally efficient technology is the easy part (i.e., thousands of radio nodes); getting your customer base migrated to the most spectrally efficient technology is where the challenge starts (i.e., millions of devices).

    In other words, to get the maximum benefit of a given 3GPP Release's gains, an operator needs to migrate its customer base's terminal equipment to that more Efficient Release. This will take time and might be costly, particularly if accelerated. Irrespective, migrating a customer base from radio-access A (e.g., GSM) to radio-access B (e.g., LTE) will take time and adhere to normal market dynamics of churn, retention, replacement factors, and gross-adds. The migration to a better radio-access technology can be stimulated by above-market-average acquisition & retention investments and higher-than-market-average terminal equipment subsidies. In the end, competitors' market reactions to your market actions will influence the migration time scale very substantially (this is typically under-estimated, as competitive driving forces are ignored in most analyses of this problem).

    The typical radio-access network modernization cycle has so far been around 5 years. Modernization is mainly driven by hardware obsolescence and the need for more capacity per unit area than older (first & second) generation equipment could provide. The most recent and ongoing modernization cycle combines the need for LTE introduction with 2G and possibly 3G modernization. In some instances, retiring relatively modern 3G equipment at the expense of getting the latest multi-mode, so-called Single-RAN equipment deployed has been assessed to be worth the financial cost of the write-off. This new cycle of infrastructure improvements will in relative terms far exceed past upgrades. Software Defined Radios (SDR) with multi-mode (i.e., 2G, 3G, LTE) capabilities are being deployed in one integrated hardware platform, instead of the older generations that were separated, with the associated floor space penalty and operational complexity. In theory only Software Maintenance & simple HW upgrades (i.e., CPU, memory, etc.) would be required to migrate from one radio-access technology to another. Have we seen the last HW modernization cycle? … I doubt it very much! (i.e., we still have Cloud and Virtualization concepts going out to the radio node, blurring out the need for an own core network).

    Multi-mode SDRs should in principle provide a more graceful, software-dominated radio evolution towards increasingly efficient radio access as cellular networks and customers migrate from HSPA to HSPA+ to LTE and to LTE-Advanced. However, in order to enable those spectrally efficient, superior radio-access technologies, a Mobile Network Operator will have to follow through with high investments (or incur high incremental operational cost) in vastly improved backhaul solutions and new antenna capabilities beyond what past access technologies required.

    Whilst the radio access network infrastructure has gotten a lot more efficient from a cash perspective, the peripheral supporting parts (i.e., antenna, backhaul, etc.) have gotten a lot more costly in absolute terms (even if the relative cost per Byte might be perfectly okay).

    Thus most of the economics of spectral efficiency can and will be captured within the modernization cycles and new software releases without much ado. However, backhaul and antenna technology investments and increased operational cost are likely to burden cash at the peak of new equipment (including modernization) deployment. Margin pressure is therefore likely if the Opex of supporting the increased performance is not well managed.

    To recapture the most important issues of Spectrum Efficiency Economics:

    • network infrastructure upgrades, from a hardware as well as software perspective, are required => capital investments, though these typically result in better Operational cost.
    • optimal customer migration to better and more efficient radio-access technologies => market investments and terminal subsidies.

    Boosting spectrum much beyond 6 times today’s mobile-data-dedicated spectrum position is unlikely to happen within a foreseeable time frame. It is also unlikely to happen in bands that would be very interesting for providing both excellent depth of coverage and depth of capacity (i.e., lower frequency bands with lots of bandwidth available). Spectral efficiency will improve with both next-generation HSPA+ as well as with LTE and its evolutionary path. However, depending on how we count the relative improvement, it is not going to be sufficient to boost capacity and performance to the level a “1,000 times challenge” would require.

    This brings us to the topic of vastly increased cell site density and of course Small Cell Economics.

    THE ECONOMICS OF INCREASED CELL SITE DENSITY

    It is fairly clear that there will not be a lot of new spectrum available in the next 10+ years. The relative increase in cellular bandwidth will come from re-purposing & perfecting existing legacy spectrum (i.e., by re-farming) and acquiring some new bandwidth in the low-frequency range (<800MHz), which by definition is not going to provide a lot of bandwidth. The very high frequency range (>3GHz) will contain a lot of bandwidth, but is only interesting for Small Cell and Femto-cell-like deployments (feeding frenzy for Small Cells).

    Financially, Mobile Operators in mature markets, such as Western Europe, will be lucky to keep their earnings and margins stable over the next 8 – 10 years. Mobile revenues are likely to stagnate and possibly even decline. Opex pressure will continue to increase (e.g., simply from inflationary pressures alone). MNOs are unlikely to increase cell site density if it leads to incremental cost & cash pressure that cannot be recovered by proportional Topline increases. Therefore it should be clear that adding many more cell sites (be it Macro, Pico, Nano or Femto) to meet increasing (often unmanaged & unprofitable) cellular demand is economically unwise and unlikely to happen unless followed by Topline benefits.

    Increasing cell density dramatically (i.e., 56 times is dramatic!) to meet cellular data demand will only happen if it can be done with little incremental cost & cash pressure.

    I have no doubt that distributing mobile data traffic over more and smaller nodes (i.e., decreasing traffic per node) and utilizing open-access technologies to manage data traffic loads is likely to mitigate some of the cash and margin pressure from supporting the higher-performance radio-access technologies.

    So let me emphasize that there will always be situations and geographically localized areas where cell site density will be increased disregarding the economics, in order to meet urgent capacity needs or specialized coverage needs. If an operator has substantially less spectral overhead (e.g., AT&T) than a competitor (e.g., T-Mobile US), the spectrum-starved operator might decide to densify with Small Cells and/or Distributed Antenna Systems (DAS) to be able to continue providing a competitive level of service (e.g., AT&T’s situation in many of its top markets). Such a spectrum-starved operator might even have to rely on massive WiFi deployments to continue to provide a decent level of customer service in extreme hot traffic zones (e.g., Times Square in NYC) and remain competitive, as well as having a credible future growth story to tell shareholders.

    Spectrum-starved mobile operators will move faster and more aggressively to Small Cell Network solutions including advanced (and not-so-advanced) WiFi solutions. This fast learning-curve might in the longer term make up for a poorer spectrum position.

    In the following I will consider Small Cells in the widest sense, including solutions based both on controlled frequency spectrum (e.g., HSPA+, LTE bands) as well in the ISM frequency bands (i.e., 2.4GHz and 5.8GHz). The differences between the various Small Cell options will in general translate into more or less cells due to radio-access link-budget differences.

    As I have been involved in many projects over the last couple of years looking at WiFi & Small Cell substitution for macro-cellular coverage, I would like to make clear that in my opinion:

    A Small Cell Network is not a good technical (or economically viable) solution for substituting macro-cellular coverage for a mobile network operator.

    However, Small Cells are Great for:

    • Specialized coverage solutions difficult to reach & capture with standard macro-cellular means.
    • Localized capacity addition in hot traffic zones.
    • Coverage & capacity underlay when macro-cellular cell split options have been exhausted.

    The last point in particular becomes important when mobile traffic exceeds the means of macro-cellular expansion, i.e., typically at urban & dense-urban macro-cellular ranges below 200 meters, and in some instances maybe below 500 meters, depending on the radio-access choice of the Small Cell solution.

    Interference concerns will limit the transmit power and coverage range. However, as our focus is small, localized and tailor-made coverage-capacity solutions, not substituting macro-cellular coverage, range limitation is of lesser concern.

    For great accounts of Small Cell network designs please check out Iris Barcia (@IBTwi) & Simon Chapman (@simonchapman) both from Keima Wireless. I recommend the very insightful presentation from Iris “Radio Challenges and Opportunities for Large Scale Small Cell Deployments” which you can find at “3G & 4G Wireless Blog” by Zahid Ghadialy (@zahidtg, a solid telecom knowledge source for our Industry).

    When considering small cell deployment it makes good sense to understand the traffic behavior of your customer base. The Figure below illustrates a typical daily data and voice traffic profile across a (mature) cellular network:

    [Figure: A typical traffic day in Europe]

    • Up to 80% of cellular data traffic happens either at home or at work.

    Currently there is an important trend indicating that the evening cellular-data peak is disappearing, coinciding with WiFi peak usage taking over the previous cellular peak hour.

    A great source of WiFi behavioral data, as it relates to Smartphone usage, can be found in Thomas Wehmeier’s (Principal Analyst, Informa: @Twehmeier) two pivotal white papers, “Understanding Today’s Smartphone User” Part I and Part II.

    The above daily cellular-traffic profile combined with the below Figure on cellular-data usage per customer distributed across network cells

    [Figure: Traffic distribution across the network]

    shows us something important when it comes to small cells:

    • Most cellular data traffic (per user) is limited to very few cells.
    • 80% (50%) of the cellular data traffic (per user) is limited to 3 (1) main cells.
    • The higher the cellular data usage (per user) the fewer cells are being used.

    It is not only important to understand how data traffic (on a per-user basis) behaves across the cellular network. It is likewise very important to understand how the cellular-data traffic multiplexes or aggregates across the cells in the mobile network.

    We find in most Western European Mature 3G networks the following trend:

    [Figure: Traffic distribution across cells]

    • 20% of the 3G Cells carry 60+% of the 3G data traffic.
    • 50% of the 3G Cells carry 95% or more of the 3G data traffic.

    Thus relatively few cells carry the bulk of the cellular data traffic. Not surprising really, as this trend was even more skewed for GSM voice.

    The above trends are all good news for Small Cell deployment. It provides confidence that small cells can be effective means to taking traffic away from macro-cellular areas, where there is no longer an option for conventional capacity expansions (i.e., sectorization, additional carrier or conventional cell splits).

    For the Mobile Network Operator, Small Cell Economics is a Total Cost of Ownership exercise comparing Small Cell Network Deployment  to other means of adding capacity to the existing mobile network.

    The Small Cell Network needs (at least) to be compared to the following alternatives;

    1. Greenfield Macro-cellular solutions (assuming this is feasible).
    2. Overlay (co-locate) on existing network grid.
    3. Sectorization of an existing site solution (i.e., moving from 3 sectors to 3 + n on same site).

    Obviously, in the “extreme” cellular-demand limit where none of the above conventional means of providing additional cellular capacity are feasible, Small Cell deployment is the only alternative (besides doing nothing and letting the customer suffer). Irrespective, we still need to understand how the economics will work out, as there might be instances where the most reasonable strategy is to let your customer “suffer” best-effort services. This would in particular be the case if there is no real competitive and incremental Topline incentive from adding more capacity.

    However,

    Competitive circumstances could force some spectrum-starved operators to deploy small cells irrespective of it being financially unfavorable to do so.

    Let's begin with the cost structure of a macro-cellular 3G Greenfield Rooftop Site Solution. We take the relevant cost structure of a configuration that we would be most likely to encounter in a Hot Traffic Zone / Metropolitan high-population-density area, which is also likely to be a candidate area for Small Cell deployment. The Figure below shows the Total Cost of Ownership, broken down into Annualized Capex and Annual Opex, for a Metropolitan 3G macro-cellular rooftop solution:

    [Figure: TCO of a Greenfield rooftop site]

    Note 1: The annualized Capex has been estimated assuming 5 years for RAN Infra, Backhaul & Core, and 10 years for Build. It is further assumed that the site is supported by leased-fiber backhaul. Opex is the annual operational expense for maintaining the site solution.

    Note 2: Operations Opex category covers Maintenance, Field-Services, Staff cost for Ops, Planning & optimization. The RAN infra Capex category covers: electronics, aggregation, antenna, cabling, installation & commissioning, etc..

    Note 3: The above illustrated cost structure reflects what one should expect from a typical European operation. North American or APAC operators will have different cost distributions. Though it is not expected to change conclusions substantially (just redo the math).
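
    For reference, a minimal sketch of how such an annualized TCO is put together (straight-line annualization over the assumed economic lifetimes from Note 1). The per-item Capex figures below are illustrative placeholders, chosen only so that the totals land in the same ballpark as the Western European example discussed further below:

        def annualized_tco(capex_by_item, lifetime_by_item, annual_opex):
            """Annualized Capex (straight-line over each item's lifetime) plus annual Opex."""
            ann_capex = sum(capex / lifetime_by_item[item] for item, capex in capex_by_item.items())
            return ann_capex, ann_capex + annual_opex

        capex = {"ran_infra": 45_000, "backhaul": 15_000, "core": 5_000, "build": 20_000}  # EUR, illustrative
        lifetimes = {"ran_infra": 5, "backhaul": 5, "core": 5, "build": 10}                # years, as in Note 1

        ann_capex, tco = annualized_tco(capex, lifetimes, annual_opex=30_000)
        print(f"Annualized Capex ~{ann_capex:,.0f} EUR, annual TCO ~{tco:,.0f} EUR")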

    When we discuss Small Cell deployment, particularly as it relates to WiFi-based small cell deployment, with Infrastructure Suppliers as well as Chip Manufacturers, you will get the impression that Small Cell deployment is Almost Free of Capex and Opex; i.e., hardly any build cost, free backhaul and extremely cheap infrastructure supported by no site rental, little maintenance and ultra-low energy consumption.

    Obviously, if Small Cells cost almost nothing, increasing cell site density by 56 times or more becomes very interesting economics … Unfortunately such ideas are wishful thinking.

    For Small Cells not to substantially pressure margins and cash, Small Cell Cost Scaling needs to be very aggressive. If we talk about a 56x increase in cell site density, the unit total cost of ownership of a small cell should be at least 56 times lower than that of a macro-cellular expansion. Though let’s not fool ourselves!

    No mobile operator would densify their macro-cellular network 56 times if the absolute cost were to increase proportionally!

    No Mobile operator would upsize their cellular network in any way unless it is at least margin, cost & cash neutral.

    (I have no doubt that out there some are making relative business cases for small cells, comparing an equivalent macro-cellular expansion versus deploying Small Cells and coming up with great cases … This would be silly of course, not that this has ever prevented such cases from being made and presented to Boards and CxOs).

    The most problematic cost areas from a scaling perspective (relative to a macro-cellular Greenfield Site) are (a) Site Rental (lamp posts, shopping malls, etc.), (b) Backhaul Cost (if relying on Cable, xDSL or Fiber connectivity), (c) Operational Cost (complexity in numbers, safety & security) and (d) Site Build Cost (legal requirements, safety & security, etc.).

    In most realistic cases (I have seen) we will find a 1:12 to 1:20 Total Cost of Ownership difference between a Small Cell unit cost and that of a Macro-Cellular Rooftop’s unit cost. While unit Capex can be reduced very substantially, the Operational Expense scaling is a lot harder to get down to the level required for very extensive Small Cell deployments.

    EXAMPLE:

    For a typical metropolitan rooftop (in Western Europe) we have an annualized capital expense (Capex) of ca. 15,000 Euro and operational expenses (Opex) in the order of 30,000 Euro per annum. The site-related Opex distribution would look something like this:

    • Macro-cellular Rooftop 3G Site Unit Annual Opex:
    • Site lease would be ca. 10,500EUR.
    • Backhaul would be ca. 9,000EUR.
    • Energy would be ca. 3,000EUR.
    • Operations would be ca. 7,500EUR.
    • i.e., total unit Opex of 30,000EUR (for average major metropolitan area)

    Assuming that all cost categories could be scaled back by a factor of 56 (note: a very big assumption that all cost elements can be scaled back by the same factor!)

    • Target Unit Annual Opex cost for a Small Cell:
    • Site lease should be less than 200EUR (lamp post leases are often substantially higher)
    • Backhaul should be less than 150EUR (doable, though not for carrier-grade QoS).
    • Energy should be less than 50EUR (very challenging for today's electronics)
    • Operations should be less than 150EUR (ca. 1 hour FTE per year … challenging).
    • Annual unit Opex should be less than 550EUR (not very likely to be realizable).

    Similarly, the Small Cell unit annualized Capital expense (Capex) would need to come in at less than 270EUR to be fully scalable relative to a macro-cellular rooftop (i.e., based on 56 times scaling).

    • Target Unit Annualized Capex cost for a Small Cell:
    • RAN Infra should be less than 100EUR (Simple WiFi maybe doable, Cellular challenging)
    • Backhaul would be less than 50EUR (simple router/switch/microwave maybe doable).
    • Build would be less than 100EUR (very challenging even to cover labor).
    • Core would be less than 20EUR (doable at scale).
    • Annualized Capex should be less than 270EUR (very challenging to meet this target)
    • Note: annualization factor: 5 years for all including Build.

    So we have a Total Cost of Ownership TARGET for a Small Cell of ca. 800EUR per annum.
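
    The targets above follow directly from dividing the macro-cellular rooftop cost structure by the 56x density factor; a short sketch of that scaling (under the same simplifying assumption that every cost category scales by the same factor):

        DENSITY_FACTOR = 56

        macro_opex = {"site_lease": 10_500, "backhaul": 9_000, "energy": 3_000, "operations": 7_500}
        macro_annualized_capex = 15_000    # EUR per annum, metropolitan rooftop example above

        small_cell_opex_target = {k: v / DENSITY_FACTOR for k, v in macro_opex.items()}
        small_cell_capex_target = macro_annualized_capex / DENSITY_FACTOR

        total_opex = sum(small_cell_opex_target.values())
        print({k: round(v) for k, v in small_cell_opex_target.items()})
        print(f"Opex target ~{total_opex:,.0f} EUR, annualized Capex target ~{small_cell_capex_target:,.0f} EUR")
        print(f"Small Cell TCO target ~{total_opex + small_cell_capex_target:,.0f} EUR per annum")

    The exact division lands at roughly 536 EUR Opex and 268 EUR annualized Capex, i.e., the rounded targets listed above and a combined TCO target of roughly 800 EUR per small cell per annum.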

    Inspecting the various capital as well as operational expense categories illustrates the huge challenge to be TCO comparable to a macro-cellular urban/dense-urban 3G-site configuration.

    Massive Small Cell Deployment needs to be almost without incremental cost to the Mobile Network Operator to be a reasonable scenario for the 1,000 times challenge.

    Most of the analyses I have seen, as well as those I have carried out myself, on real cost structures and aggressive pricing & solution designs show that if the Small Cell Network can be kept between 12 and 20 Cells (or Nodes), its TCO compares favorably to (i.e., beats) an equivalent macro-cellular solution. If the Mobile Operator is also a Fixed Broadband Operator (or has a favorable partnership with one), better cost scaling than assumed above is in general possible (e.g., another AT&T advantage in their DAS / Small Cell strategy).
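
    Put differently, the observed 1:12 to 1:20 unit-TCO ratio directly bounds how many small cells can be deployed before the equivalent of one macro-cellular rooftop site has been spent. A tiny sketch, assuming the macro alternative is a single additional rooftop site at the ~45,000 EUR annual TCO from the example above:

        macro_site_tco = 45_000   # EUR per annum (15k annualized Capex + 30k Opex, example above)

        for unit_ratio in (12, 20):                    # realistic small-cell : macro unit-TCO ratios
            small_cell_tco = macro_site_tco / unit_ratio
            for cluster_size in (10, 15, 20, 25):
                cluster_tco = cluster_size * small_cell_tco
                verdict = "beats" if cluster_tco < macro_site_tco else "loses to"
                print(f"ratio 1:{unit_ratio}, {cluster_size} cells: "
                      f"{cluster_tco:,.0f} EUR {verdict} one macro site")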

    In realistic costing scenarios so far, the Small Cell economic boundaries are given by the Figure below:

    Let me emphasize that the above obviously assumes that an operator has a choice between deploying a Small Cell Network and conventional Cell Split, Nodal Overlay (or co-location on an existing cellular site) or Sectorization (if spectral capacity allows). In the Future, and in Hot Traffic Zones, this might not be the case, leaving Small Cell Network deployment, or letting the customers “suffer” poorer QoS, as the only options left to the mobile network operator.

    So how can we (i.e., the Mobile Operator) improve the Economics of Small Cell deployment?

    Having access to fixed broadband such as fiber or high-quality cable infrastructure would make the backhaul scaling a lot better. Being both a mobile and a fixed broadband provider does become very advantageous for Small Cell Network Economics. However, the site lease (and maintenance) scaling remains a problem, as lampposts or other interesting Small Cell locations might not scale very aggressively (e.g., there are examples of lamppost leases being as expensive as regular rooftop locations). From a capital investment point of view, I have my doubts whether prices will scale downwards as favorably as they would need to. Much of the capacity gain comes from very sophisticated antenna configurations that are difficult to see being extremely cheap:

    Small Cell Equipment Suppliers would need to provide a Carrier-grade solution priced at a maximum of 1,000EUR all included to have a fighting chance of making massive small cell network deployment really economical!

    We could assume that most of the “Small Cells” are in fact customers' existing private access points (or our customers' employers' access points) and simply push (almost) all cellular data traffic onto those whenever a customer is in the vicinity of one. All those existing and future private access points are (at least in Western Europe) connected to at least fairly good-quality fixed backhaul in the form of VDSL, Cable (DOCSIS3), and eventually Fiber. This would obviously improve the TCO of “Small Cells” tremendously … Right?

    Well, it would reduce the MNO’s TCO (as it shifts the cost burden to the operator’s customers or those customers’ employers) … Well … this picture would also not really be Small Cells in the sense of properly designed and integrated cells in the Cellular sense of the word, providing the operator end-2-end control of his customers’ service experience. In fact, taking the above scenario to the extreme, we might not need Small Cells at all, in the Cellular sense, or at least dramatically fewer than the standard cellular capacity formula above would suggest.

    In Qualcomm’s (as well as many infrastructure suppliers’) ultimate vision, the 1,000x challenge is solved by moving towards a super-heterogeneous network that consists of everything from Cellular Small Cells and Public & Private WiFi access points to Femto cells thrown into the equation as well.

    Such an ultimate picture might indeed make the Small Cell challenge economically feasible. However, it very fundamentally changes the current operational MNO business model, and it is not clear that such a transition comes without cost and only brings benefits.

    Last but not least, it is pretty clear that instead of 3 – 5 MNOs all going out plastering walls and lampposts with Small Cell Nodes & Antennas, sharing might be an incredibly clever idea. In fact, I would not be altogether surprised if we see new independent business models providing Shared Small Cell solutions to incumbent Mobile Network Operators.

    Before closing the Blog, I do find it instructive to pause and reflect on lessons from Japan’s massive WiFi deployment. It might serve as a lesson for massive Small Cell Network deployment as well, and an indication that collaboration might be a lot smarter than competition when it comes to such deployments:

    (Chart: SoftBank WiFi deployment in Japan.)

    The Thousand Times Challenge: PART 2 … How to provide cellular data capacity?

    CELLULAR DATA CAPACITY … A THOUSAND TIMES CHALLENGE?

    It should be obvious that I am somewhat skeptical about all the excitement around cellular data growth rates and whether it’s 1,000x or 250x or 42x (see my blog on “The Thousand Times Challenge … The answer to everything about mobile data?”). In this I very much share Dean Bubley’s (Disruptive Wireless) critical view of the “cellular growth rate craze”. See Dean’s account in his recent Blog “Mobile data traffic growth – a thought experiment and forecast”.

    This obsession with cellular data growth rates is Largely Irrelevant, or only serves Hysteria and Cool Blogs, Twitter and Press Headlines (which, if nothing else, is occasionally entertaining).

    What IS Important! is how to provide more (economical) cellular capacity, avoiding:

    • Massive Congestion and loss of customer service.
    • Economic devastation as the operator tries to supply network resources for an un-managed cellular growth profile.

    (Source: adapted from K.K. Larsen “Spectrum Limitations Migrating to LTE … a Growth Market Dilemma?“)

    To me, the discussion of how to Increase Network Capacity by a factor of a THOUSAND is altogether more interesting than what the cellular growth rate might or might not be in 2020 (or any other arbitrarily chosen year).

    Mallinson’s article “The 2020 Vision for LTE” in FierceWirelessEurope gives a good summary of this effort, though my favorite account of how to increase network capacity, focusing on small cell deployment, is from Iris Barcia (@ibtwi) & Simon Chapman (@simonchapman) of Keima Wireless.

    So how can we simply describe cellular network capacity?

    Well … it turns out that Cellular Network Capacity can be described by 3 major components: (1) the available bandwidth B, (2) the (effective) spectral efficiency E, and (3) the number of cells deployed N.

    The SUPPLIED NETWORK CAPACITY in Mbps (i.e., C) is equal to the AMOUNT OF SPECTRUM, i.e., available bandwidth, in MHz (i.e., B) multiplied by the SPECTRAL EFFICIENCY PER CELL in Mbps/MHz (i.e., E) multiplied by the NUMBER OF CELLS (i.e., N). In short: C = B x E x N.
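    For concreteness, here is a minimal sketch of the formula in Python, applied (as recommended below) per radio access technology, per link direction, and per geographically limited area; the example numbers are purely illustrative and not taken from any specific network.

    # C [Mbps] = B [MHz] x E [Mbps/MHz per cell] x N [cells]
    # Minimal sketch; apply per radio access technology, per link direction (DL/UL),
    # and ideally per geographically limited area rather than network-wide.

    def supplied_capacity_mbps(bandwidth_mhz: float,
                               spectral_efficiency_mbps_per_mhz_per_cell: float,
                               number_of_cells: int) -> float:
        return bandwidth_mhz * spectral_efficiency_mbps_per_mhz_per_cell * number_of_cells

    # Purely illustrative: 15 MHz of DL spectrum, 1.0 Mbps/MHz/cell effective efficiency, 100 cells.
    print(supplied_capacity_mbps(15, 1.0, 100))   # -> 1500 Mbps supplied in that area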

    It should be understood that the best approach is to apply the formula on a per-radio-access-technology basis, rather than across all access technologies. Also separate the analysis into Downlink capacity (i.e., from Base Station to Customer Device) and Uplink capacity (i.e., from Customer Device to Base Station). If you average across many access technologies, or consider the total bandwidth B including spectrum for both Uplink and Downlink, the spectral efficiency needs to be averaged accordingly. Also bear in mind that there could be some inter-dependency between the (effective) spectral efficiency and the number of cells deployed, though that depends on what approach you choose to take to Spectral Efficiency.

    It should be remembered that not all supplied capacity is equally utilized. Most operators have 95% of their cellular traffic confined to 50% or less of their Cells. So the supplied capacity in half (or more) of most cellular operators’ networks remains substantially under-utilized (i.e., 50% or more of the radio network carries 5% or less of the cellular traffic … if you thought that Network Sharing would make sense … yeah, it does … but that’s a different story;-).

    Therefore I prefer to apply the cellular capacity formula to geographically limited areas of the mobile network, rather than network-wide. This allows for more meaningful analysis and should avoid silly averaging effects.

    So we see that providing network capacity is “pretty easy”: The more bandwidth or available spectrum we have, the more cellular capacity can be provided. The better and more efficient the air-interface technology, the more cellular capacity and quality we can provide to our customers. Last (but not least), the more cells we have built into our mobile network, the more capacity can be provided (though economics does limit this one).

    The Cellular Network Capacity formula allows us to break down the important factors in solving the “1,000x Challenge”, which we should remember is based on a year 2010 reference (i.e., feels a little bit like cheating! right?;-) …

    The Cellular Capacity Gain formula:

    Basically, the Cellular Network Capacity Gain in 2020 (over 2010), or the Capacity we can supply in 2020, is the product of how much more spectrum we have available (compared to today or 2010), the relative improvement in effective spectral efficiency over today (or 2010), and the number of cells deployed in 2020 relative to today (or 2010). In other words: Gain = (B2020 / B2010) x (E2020 / E2010) x (N2020 / N2010).
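    As a trivial Python illustration of the gain formula (not part of the original articles), plugging in the 3 x 6 x 56 decomposition quoted below from Mallinson / SK Telecom recovers the headline figure:

    # Capacity gain 2020 vs. 2010 = (B_2020/B_2010) x (E_2020/E_2010) x (N_2020/N_2010)

    def capacity_gain(bandwidth_ratio: float,
                      spectral_efficiency_ratio: float,
                      cell_density_ratio: float) -> float:
        return bandwidth_ratio * spectral_efficiency_ratio * cell_density_ratio

    # The SK Telecom decomposition discussed below: 3 x 6 x 56.
    print(capacity_gain(3, 6, 56))   # -> 1008, i.e., roughly the "1,000x"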

    According to Mallinson’s article, the “1,000x Challenge” looks as follows (courtesy of SK Telecom):

    According to Mallinson (and SK Telecom, see “Efficient Spectrum Resource Usage for Next Generation NW” by H. Park, presented at the 3GPP Workshop “on Rel.-12 and onwards”, Ljubljana, Slovenia, 11-12 June 2012), one should expect to have 3 times more spectrum available in 2020 (compared to 2010, for Cellular Data), 6 times more efficient access technology (compared to what was available in 2010), and 56 times higher cell density compared to 2010. Another important thing to remember when digesting the 3 x 6 x 56 is that this is an estimate from South Korea and SK Telecom, to a large extent driven by South Korean conditions.

    Above I have emphasized the 2010 reference. It is important to remember this reference to better appreciate where the high ratios above come from. For example, in 2010 most mobile operators were using 1 to at most 2 carriers, or were in the process of upgrading to 2 carriers to credibly support HSPA+. Further, many operators had not transitioned to HSPA+, and a few had not even added HSUPA to their access layer. Furthermore, most Western European operators had on average 2 carriers for UMTS (i.e., 2×10 MHz @ 2100MHz). Some operators with a little excess 900MHz may have deployed a single carrier there and either postponed 2100MHz or only very lightly deployed the higher-frequency UMTS carrier in their top cities. In 2010, 3G population coverage (defined as having at minimum HSDPA) was in Western Europe at maximum 80%, and in Central Eastern & Southern Europe in most places at maximum 60%. 3G geographical coverage, averaged across the European Union, was in 2010 less than 60% (in Western Europe up to 80% and in CEE up to 50%).

    OPERATOR EXAMPLE:

    Take a European Operator with 4,000 site locations in 2010.

    In 2010 this operator had deployed 3 carriers supporting HSPA @ 2100MHz (i.e., a total bandwidth of 2×15MHz).

    Further in 2010 the Operator also had:

    • 2×10 MHz GSM @ 900MHz (with possible migration path to UMTS900).
    • 2×30 MHz GSM @ 1800MHz (with possible migration path to LTE1800).

    By 2020 it has retained all its spectrum and gained:

    • 2×10 MHz @ 800MHz for LTE.
    • 2×20 MHz @ 2.6GHz for LTE.

    For simplicity (and idealistic reasons) let’s assume that by 2020 2G has finally been retired. Moreover, let’s concern ourselves with cellular data at 3G-and-above service levels (i.e., ignoring GPRS & EDGE). Thus, I do not distinguish whether the air-interface is HSPA+ or LTE/LTE-Advanced.

    OPERATOR EXAMPLE: BANDWIDTH GAIN 2010 – 2020:

    The Bandwidth Gain part of the “Cellular Capacity Gain” formula is in general specific to individual operators and the particular future regulatory environment (i.e., in terms of new spectrum being released for cellular use). One should not expect a universally applicable ratio here. It will vary with a given operator’s spectrum position … Past, Present & Future.

    In 2010 our Operator had 15MHz (for either DL or UL) supporting cellular data.

    In 2020 the Operator should have 85MHz (for either DL or UL), which is almost a factor of 6 more than in 2010. Don’t be concerned about this not being 3! After all, why should it be? Every country and operator will face different constraints and opportunities, and therefore there is no reason why 3 x 6 x 56 would be a universal truth!
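    As a quick check of the operator example, here is the per-direction bandwidth tally for 2010 versus 2020, using only the spectrum holdings listed above:

    # Per-direction bandwidth for the example operator (MHz).
    b_2010 = 15                       # 3 UMTS carriers @ 2100MHz (2x15MHz)
    b_2020 = 15 + 10 + 30 + 10 + 20   # 2100 + 900 (ex GSM) + 1800 (ex GSM) + 800 (new) + 2600 (new)

    print(b_2020)              # -> 85 MHz
    print(b_2020 / b_2010)     # -> ~5.7, i.e., almost a factor of 6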

    If Regulators and Lawmakers were more friendly towards spectrum sharing, the boost in available spectrum for cellular data could be a lot bigger.

    SPECTRAL EFFICIENCY GAIN 2010 – 2020:

    The Spectral Efficiency Gain part of the “Cellular Capacity Gain” formula is more universally applicable to cellular operators at the same technology stage and with a similar customer mix. Thus, in general, for an apples-to-apples comparison, more or less the same gains should be expected.

    In my experience, Spectral Efficiency almost always gets experts’ emotions running high. More often than not there is a divide between experts (across Operators, Suppliers, etc.) on what would be an appropriate spectral efficiency to use in capacity assessments. Clearly, everybody understands that the theoretical peak spectral efficiency does not reflect the real service experience of customers or the amount of capacity an operator has in his Mobile Network. Thus, in general, an effective (or average) spectral efficiency is applied, often based on real network measurements or estimates derived from such.

    When LTE was initially specified, its performance targets were referenced to HSxPA Release 6. The LTE aim was to get 3 – 4 times the DL spectral efficiency and 2 – 3 times the UL spectral efficiency. LTE-Advanced targets doubling the peak spectral efficiency for both DL and UL.

    At maximum expect the spectral efficiency to be:

    • @Downlink to be 6 – 8 times that of Release 6.
    • @Uplink to be 4 – 6 times that of Release 6.

    Note that this comparison assumes an operator’s LTE deployment would move from 4×4 MIMO to 8×8 MIMO in the Downlink and from 64QAM SISO to 4×4 MIMO in the Uplink. Thus, a quantum leap in antenna technology and substantial antenna upgrades over the transition from LTE to LTE-Advanced would be on the to-do list of mobile operators.

    In theory, for LTE-Advanced (and depending on the 2010 starting point), one could expect a factor 6 boost in spectral efficiency by 2020 compared to 2010, as put down in the “1,000x challenge”.

    However, it is highly unlikely that all devices by 2020 would be LTE-Advanced. Most markets would still have at least 40% 3G penetration, and some laggard markets would still have a very substantial 2G base. While LTE would be growing rapidly, the share of LTE-Advanced terminals might be fairly low even in 2020.

    Using a 6x spectral efficiency factor for 2020 is likely extremely optimistic.

    A more realistic assessment would be a factor of 3 – 4 by 2020, considering the blend of technologies in play at that time.
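    One way to see why a blended factor of 3 – 4 is more plausible than 6 is to weight the per-technology efficiency gains by an assumed 2020 device/traffic mix. The mix and the per-technology gain factors in the sketch below are purely illustrative assumptions of mine, not figures from the sources cited above:

    # Illustrative blended spectral-efficiency gain relative to the 2010 (Release 6) baseline.
    # Both the traffic mix and the per-technology gain factors are assumptions for illustration only.

    traffic_share = {"HSPA+": 0.40, "LTE": 0.45, "LTE-Advanced": 0.15}   # assumed 2020 traffic mix
    gain_vs_2010  = {"HSPA+": 1.5,  "LTE": 3.0,  "LTE-Advanced": 6.0}    # assumed per-technology gains

    blended_gain = sum(traffic_share[t] * gain_vs_2010[t] for t in traffic_share)
    print(round(blended_gain, 2))   # -> 2.85, i.e., closer to a factor of 3 than to 6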

    INTERLUDE

    The critical observer sees that we have reached a capacity gain (compared to 2010) of 6 x (3-4) or 18 to 24 times. Thus to reach 1,000x we still need between 40 and 56 times the cell density.

    and that translates into a lot of additional cells!

    CELL DENSITY GAIN 2010 – 2020:

    The Cell Density Gain part of the “Cellular Capacity Gain” formula is in general specific to individual operators and the cellular traffic demand they might experience, i.e., there is no unique universal number to be expected here.

    So to get to 1,000x the capacity of 2010 we need either magic or a 50+x increase in cell density (which some may argue would amount to magic as well) …

    Obviously … this sounds like a real challenge … getting more spectrum and higher spectral efficiency is a piece of cake compared to 50+ times more cell density. Clearly, our Mobile Operator would go broke if it were required to finance 50 x 4,000 = 200,000 sites (or cells, i.e., 3 cells = 1 macro site). The Opex and Capex requirements would simply NOT BE PERMISSIBLE.

    50+ times site density on a macro scale is Economic & Practical Nonsense … The Cellular Network Capacity heuristic in such a limit works ONLY for localized areas of a Mobile Network!

    The good news is that such macro-level densification would also not be required … this is where Small Cells enter the Scene. This is where you run to experts such as Simon Chapman (@simonchapman) from Keima Wireless or similar companies specialized in providing intelligent small cell deployment. It’s clear that this is better done early on in the network design rather than when the capacity pressure becomes a real problem.

    Note that I am currently assuming that Economics and Deployment Complexity will not become challenging with a Small Cell deployment strategy … this (as we shall see) is not necessarily a reasonable assumption in all deployment scenarios.

    Traffic is not equally distributed across a mobile network, as the chart below clearly shows (see also Kim K Larsen’s “Capacity Planning in Mobile Data Networks Experiencing Exponential Growth in Demand”):

    20% of the 3G cells carry 60% of the data traffic, and 50% of the 3G cells carry as much as 95% of the 3G traffic.
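    The same concentration can be expressed as a simple cumulative-share calculation. The per-decile traffic shares in the sketch below are hypothetical, chosen only to be consistent with the 20%/60% and 50%/95% figures quoted above:

    # Hypothetical per-decile traffic shares (cells sorted from busiest to quietest),
    # chosen only to be consistent with the quoted 20%/60% and 50%/95% concentration figures.
    decile_traffic_share = [0.40, 0.20, 0.15, 0.12, 0.08, 0.02, 0.01, 0.01, 0.005, 0.005]

    assert abs(sum(decile_traffic_share) - 1.0) < 1e-9   # shares add up to 100% of the traffic

    top_20_pct = sum(decile_traffic_share[:2])   # busiest 20% of cells
    top_50_pct = sum(decile_traffic_share[:5])   # busiest 50% of cells
    print(round(top_20_pct, 2), round(top_50_pct, 2))    # -> 0.6 0.95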

    The good news is that I might not need to worry too much about the half of my cellular network that only carries 5% of my traffic.

    The bad news is that up to 50% of my cells might actually give me a substantial headache if I don’t have sufficient spectral capacity and enough customers on the most efficient access technology, leaving me little choice but to increase my cellular network density, i.e., build more cells into my existing cellular grid.

    Further, most of the data traffic is carried within the densest macro-cellular network grid (at least if an operator starts exhausting its spectral capacity with a traditional coverage grid). In a typical European City, ca. 20% of Macro Cells will have a range of 300 meters or less and 50% of the Macro Cells will have a range of 500 meters or less (see the chart below on “Cell ranges in a typical European City”).

    Finding suitable and permissible candidates for macro-cellular cell splits below 300 meters is rather unlikely. Between 300 and 500 meters there might still be macro-cellular split optionality, and if so it would make the most sense to commence there (pending anticipated future traffic growth). Above 500 meters it is usually fairly likely that suitable macro-cellular site candidates can be found (i.e., in most European Cities).

    Clearly, if the cellular data traffic increase were to require a densification ratio of 50+ times the current macro-cellular density, a macro-cellular alternative might be out of the question even for cell ranges up to 2 km.

    A new cellular network paradigm is required as the classical cellular network design breaks down!

    Small Cell implementation is often the only alternative a Mobile Operator has to provide more capacity in a dense urban or high-traffic urban environment.

    As Mobile Operators change their cellular design in dense urban and urban environments to respond to the increasing cellular data demand, what kind of economic boundaries would need to be imposed to make a factor 50x increase in cell density work out?

    No Mobile Operator can afford to see its Opex and Capex pressure rise! (i.e., unless revenue follows or exceeds it, which might not be that likely).

    For a moment … remember that this site density challenge is not limited to a single mobile operator … imagine that all operators (i.e., typically 3 – 5, except for India with 13+;-) in a given market need to increase their cellular site density by a factor of 50. Even if there is (in theory) lots of space at street level for Small Cells … one could imagine the regulatory resistance (not to mention consumer resistance) if a city were to see demand for Small Cell locations increase by a factor of 150 – 200.

    Thus, Sharing Small Cell Locations and Supporting Infrastructure will become an important trend … which should also lead to Better Economics.

    This brings us to The Economics of the “1,000x Challenge” … Stay tuned!